Lucille Chalifoux hides her face in shame after putting her children up for sale, from $2, Chicago, 1948.
(Source: Rare Historical Photos)
It baffles me to see a soul being sold, let alone sold for as little as $2, as if it were worthless. Many people are unaware of this problem, which devastated much of America. The picture above is only one sad case out of possibly thousands, or even tens of thousands…
[BAFS] AFTER
CAUSING SOME 6O MILLION DEATHS IN WWII, ALLIED WITH 2 OTHER MASS
MURDERERS - CHURCHILL AND STALIN, STARVING TO DEATH OVER 1 MILLION POWs
AND SOME 3 MILLION IN BENGAL, AND RAPING OVER 1 MILLION WHITE WOMEN !!!
The US,
UK, France, the USSR, and International Jewry leadership murdered some 60
million people after accusing Adolf Hitler as being the most evil man on earth.
Yet, Hitler saved his country and gave work to some 6 million Germans, and
stopped Jewish-led German decadence that had made Berlin the sex capital of
Europe. They starved to death over a million of prisoners of war and raped over
a million women.
They also
stole Muslim Palestine and gave it to the Jews who murdered or expelled millions of
its native inhabitants, and stole Muslim Kashmir and gave it to the Hindus who never stopped murdering Muslims there, beating them up, and raping their women,
WHILE IN
1948, WHITES WERE SELLING THOUSANDS OF THEIR CHILDREN FOR AS LITTLE AS $2 IN
CHICAGO!!!
This
is what schools will not teach us!
Jesse Ventura | 63 Documents the Government Doesn't Want You to Read | Talks at Google
It’s the 22nd November 2021 and this is the moment when the jabbing has to
stop.
A couple of hours ago Darren Smith, the editor of the excellent The Light
Paper, sent me a paper from the medical journal Circulation which proves that
the covid-19 jabbing experiment has to stop today. I believe that any doctor or
nurse who gives one of the mRNA covid jabs after today will in due course be
struck off the appropriate register and arrested.
The journal Circulation is a well-respected publication. It’s 71 years old, its articles are peer reviewed, and in one survey it was rated the world’s No. 1 journal in the cardiac and cardiovascular system category.
I’m going to quote the final sentence of the abstract which appears at the
beginning of the article. This is all I, you – or anyone else – needs to know.
“We conclude that the mRNA vacs dramatically increase inflammation on the endothelium and T cell infiltration of cardiac muscle and may account for the observations of increased thrombosis, cardiomyopathy and other vascular events following vaccination.”
That’s it. That’s the death bell for the covid-19 mRNA jabs.
The endothelium is a layer of cells lining blood vessels and lymphatic vessels.
T cells are a type of white cell.
We always knew these jabs were experimental. My video in December 2020, just
under a year ago, warned about these specific risks. I read out a list of
possible adverse events published officially by the American Government.
But now we have the proof of the link.
The mRNA jab is, remember, known not to stop people catching covid. And it is
known not to stop people spreading it. I don’t believe anyone disputes these
facts.
And yet vast numbers of deaths and serious injuries have occurred among people
who have been jabbed. Look at the item entitled “Updated: how many are the vaccines killing?” on my websites.
Now we have the evidence to stop the jabbing programmes.
In the study quoted in Circulation, a total of 566 patients aged 28 to 97 were tested, divided equally between men and women.
“At the time of this report,” says the author, “these changes persist for at least 2.5 months post second dose of vaccine.”
At the very least, the use of these jabs must stop now. Immediately, until more
long-term tests are done.
If there were any journalists left in the mainstream media, this news would be
lead item on all TV and radio programmes and be on the front pages of all
newspapers.
Thank heavens for free speech platforms such as BNT which enables me to bring
you this news.
I’ve said for a year that this jab was an experiment – certain to kill and
injure.
We’ve always known that to experiment on people without their full consent and
understanding – after disclosing all the risks and potential side effects – is
a crime.
Now the evidence exists that must stop this experiment.
If the covid jab experiment continues after today then we know for absolute
sure that this is not a medical treatment, it is a cull.
Please share this video immediately with everyone you know.
Thank you.
Copyright Vernon Coleman November 22nd 2021
There are four free books available on both my websites. All are widely banned.
Please read them and send to everyone you know.
It’s not disputable, since the information comes from official patent registries in the Netherlands and the US. And we have all the documentation.
UPDATE: Reuters took on doing
damage control for this article and published a slander and smear piece
on us disguised as “fact-checking”. We fact-checked their fact-checking phrase by phrase here.
As we’ve shown in previous exposes, the whole Covidiocracy is a
masquerade and a simulation long prepared by The World Bank / IMF / The
Rothschilds and their lemmings, with Rockefeller partnership. Our newest discoveries further these previous revelations.
A method is provided for acquiring and
transmitting biometric data (e.g., vital signs) of a user, where the
data is analyzed to determine whether the user is suffering from a viral
infection, such as COVID-19. The method includes using a pulse oximeter
to acquire at least pulse and blood oxygen saturation percentage, which
is transmitted wirelessly to a smartphone. To ensure that the data is
accurate, an accelerometer within the smartphone is used to measure
movement of the smartphone and/or the user. Once accurate data is
acquired, it is uploaded to the cloud (or host), where the data is used
(alone or together with other vital signs) to determine whether the user
is suffering from (or likely to suffer from) a viral infection, such as
COVID-19. Depending on the specific requirements, the data, changes
thereto, and/or the determination can be used to alert medical staff and
take corresponding actions.
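To make the claimed flow concrete, here is a minimal sketch in Python of the motion check and upload step the abstract describes. All names, the variance threshold, and the upload callable are my own illustrative assumptions; the patent text specifies the idea, not an implementation.

```python
import statistics

MAX_MOTION_VARIANCE = 0.05  # hypothetical stillness threshold, not from the patent

def reading_is_stable(accel_magnitudes):
    """Accept a reading only if the smartphone was effectively still,
    judged by the variance of sampled accelerometer magnitudes."""
    return statistics.pvariance(accel_magnitudes) < MAX_MOTION_VARIANCE

def acquire_and_upload(pulse_bpm, spo2_pct, accel_magnitudes, upload):
    """Validate the pulse-oximeter reading with the accelerometer, then
    send pulse and blood oxygen saturation to the cloud for screening."""
    if not reading_is_stable(accel_magnitudes):
        return None  # discard: the reading may be a motion artifact
    return upload({"pulse_bpm": pulse_bpm, "spo2_pct": spo2_pct})
```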
Second registration: US, 2017
Detailed info below.
ONE KEY DETAIL STRUCK ME ON THESE REGISTRATIONS: Both were filed and updated years ago, but they were SCHEDULED to be made public in September 2020.
Title: System and Method for Using, Processing, and Displaying Biometric Data
United States Patent Application 20170229149
Kind Code: A1
Abstract:
A method is provided for processing and displaying biometric data of a
user, either alone or together (in synchronization) with other data,
such as video data of the user during a time that the biometric data was
acquired. The method includes storing biometric data so that it is
linked to an identifier and at least one time-stamp (e.g., a start time,
a sample rate, etc.), and storing video data so that it is linked to
the identifier and at least one time-stamp (e.g., a start time). By
storing data in this fashion, biometric data can be displayed (either in
real-time or delayed) in synchronization with video data, and biometric
data can be searched to identify at least one biometric event. Video
corresponding to the biometric event can then be displayed, either alone
or together with at least one biometric of the user during the
biometric event.
Inventors: Rothschild, Richard A. (London, GB); Macklin, Dan (Stafford, GB); Slomkowski, Robin S. (Eugene, OR, US); Harnischfeger, Taska (Eugene, OR, US)
Application Number: 15/495485
Publication Date: 08/10/2017
Filing Date: 04/24/2017
Assignee: Rothschild, Richard A.; Macklin, Dan; Slomkowski, Robin S.; Harnischfeger, Taska
International Classes: G11B27/10; G06F19/00; G06K9/00; G11B27/031; H04N5/77
US Patent References:
20160035143  N/A  2016-02-04
20140316713  N/A  2014-10-23
20140214568  N/A  2014-07-31
20090051487  N/A  2009-02-26
20070189246  N/A  2007-08-16
Primary Examiner: MESA, JOSE M
Attorney, Agent or Firm: Fitzsimmons IP Law (Gardena, CA, US)
Claims: What is claimed is:
1. A method for identifying video corresponding to a biometric event
of a user, said video being displayed along with at least one biometric
of said user during said biometric event, comprising: receiving a
request to start a session; using at least one program running on a
mobile device to assign a session number and a start time to said
session; receiving video data from a camera, said video data including
video of at least one of said user and said user’s surroundings during a
period of time, said period of time starting at said start time;
receiving biometric data from a sensor, said biometric data including a
plurality of values on a biometric of said user during said period of
time; using said at least one program to link at least said session
number and said start time to said video data; using said at least one
program to link at least said session number, said start time, and a
sample rate to said biometric data, at least said session number being
used to link said biometric data to said video data, and at least said
sample rate and said start time being used to link individual ones of
said plurality of values to individual times within said period of time;
receiving said biometric event, said biometric event comprising one of a
value and a range of said biometric; using said at least one program to
identify a first one of said plurality of values corresponding to said
biometric event; using said at least one program and at least said start
time, said sample rate, and said period of time to identify a first
time within said period of time corresponding to said first one of said
plurality of values; and displaying on said mobile device at least said
video data during said first time along with said first one of said
plurality of values, wherein said first time is used to show said first
one of said plurality of values in synchronization with a portion of
said video data that shows at least one of said user and said user’s
surroundings during said biometric event.
2. The method of claim 1, wherein said step of receiving biometric
data from said sensor further comprises receiving heart rate data from a
heart rate monitor.
3. The method of claim 1, wherein said steps of linking said session
number to said video data and said biometric data further comprises
linking an activity number to both said video data and said biometric
data, wherein said activity number identifies one of a plurality of
activities, said session comprises said plurality of activities, and
both said session number and said activity number are used to link said
biometric data to said video data.
4. The method of claim 1, wherein said step of assigning a session
number to said session further comprises linking a description of said
session to said session.
5. The method of claim 1, wherein said steps of receiving video data
and biometric data further comprises receiving said video data and said
biometric data during said period of time.
6. The method of claim 1, wherein said step of receiving video data
from a camera further comprises receiving said video data from said
camera after said period of time.
7. The method of claim 6, further comprising the step of analyzing
said video data for an identifier identifying said session, said
identifier being used by said at least one program to link said session
number to said video data.
8. The method of claim 1, wherein said steps of identifying a first
one of said plurality of values corresponding to said biometric event
and identifying a first time corresponding to said first one of said
plurality of values further comprises identifying each one of said
plurality of values corresponding to said biometric event and
identifying each time corresponding to said each one of said plurality
of values.
9. The method of claim 8, wherein said step of displaying at least
said video data during said first time further comprises displaying at
least said video data during said each time corresponding to said each
one of said plurality of values, wherein said each time is used to show
said each one of said plurality of values in synchronization with
portions of said video data that show at least one of said user and said
user’s surroundings during said biometric event.
10. The method of claim 1, further comprising the steps of receiving
self-realization data from said user, and linking at least said session
number and at least one time to said self-realization data, wherein said
self-realization data indicates how said user feels during said at
least one time, and said at least one time is used to display said
self-realization data in synchronization with at least one portion of
said video data.
11. A system for identifying video corresponding to a biometric event
of a user, said video being displayed along with at least one biometric
of said user during said biometric event, comprising: at least one
server in communication with a wide area network (WAN); a mobile device
in communication with said at least one server via said WAN, said mobile
device comprising: a display; at least one processor for downloading
machine readable instructions from said at least one server; and at
least one memory device for storing said machine readable instructions,
said machine readable instructions being adapted to perform the steps
of: receiving a request to start a session; assigning a session number
and a start time to said session; receiving video data from a camera,
said video data including video of at least one of said user and said
user’s surroundings during a period of time; receiving biometric data
from a sensor, said biometric data including a plurality of values on a
biometric of said user during said period of time; linking at least said
session number and said start time to said video data; linking at least
said session number, said start time, and a sample rate to said
biometric data, at least said session number being used to link said
biometric data to said video data, and at least said sample rate and
said start time being used to link individual ones of said plurality of
values to individual times within said period of time; receiving said
biometric event, said biometric event comprising one of a value and a
range of said biometric; identifying a first one of said plurality of
values corresponding to said biometric event; identifying a first time
within said period of time corresponding to said first one of said
plurality of values; and displaying on said display at least said video
data during said first time along with said first one of said plurality
of values, wherein said first time is used to show said first one of
said plurality of values in synchronization with a portion of said video
data that shows at least one of said user and said user’s surroundings
during said biometric event.
12. The system of claim 11, wherein said step of receiving biometric
data from said sensor further comprises receiving heart rate data from a
heart rate monitor.
13. The system of claim 11, wherein said steps of linking said
session number to said video data and said biometric data further
comprises linking an activity number to both said video data and said
biometric data, wherein said activity number identifies one of a
plurality of activities, said session comprises said plurality of
activities, and both said session number and said activity number are
used to link said biometric data to said video data.
14. The system of claim 11, wherein said steps of receiving video
data and biometric data further comprises receiving said video data and
said biometric data during said period of time.
15. The system of claim 11, wherein said step of receiving video data
from a camera further comprises receiving said video data from said
camera after said period of time.
16. The system of claim 15, wherein said machine readable
instructions are further adapted to perform the step of analyzing said
video data for a barcode, said barcode identifying said session number
and being used to link said session number to said video data.
17. The system of claim 11, wherein said steps of identifying a first
one of said plurality of values corresponding to said biometric event
and identifying a first time corresponding to said first one of said
plurality of values further comprises identifying each one of said
plurality of values corresponding to said biometric event and
identifying each time corresponding to said each one of said plurality
of values.
18. The system of claim 17, wherein said step of displaying at least
said video data during said first time further comprises displaying at
least said video data during said each time corresponding to said each
one of said plurality of values, wherein said each time is used to show
said each one of said plurality of values in synchronization with
portions of said video data that show at least one of said user and said
user’s surroundings during said biometric event.
19. The system of claim 11, wherein said machine readable
instructions are further adapted to perform the steps of receiving
self-realization data from said user, and linking said session number
and at least one time to said self-realization data, wherein said
self-realization data indicates how said user feels during said at least
one time, and said at least one time is used to display said
self-realization data in synchronization with at least one portion of
said video data.
20. A method for displaying video in synchronization with at least
one biometric of a subject, comprising: using at least one program
running on a computing device to assign a session number and a start
time to said session; receiving video data from at least one camera,
said video data including video of at least one of said subject and said
subject’s surroundings during a period of time; receiving biometric
data from at least one sensor, said biometric data including a plurality
of values on at least one biometric of said subject during said period
of time; using said at least one program to link at least said session
number and said start time to said video data; using said at least one
program to link at least said session number, said start time, and at
least one sample rate to said biometric data; receiving a biometric
event, said biometric event comprising one of a value and a range of
said at least one biometric; using said at least one program to identify
individual ones of said plurality of values corresponding to said
biometric event; using said at least one program and at least said start
time, said at least one sample rate, and said period of time to
identify individual times within said period of time corresponding to
said individual ones of said plurality of values; and displaying on said
computing device at least said video data and said individual ones of
said plurality of values, wherein said individual times are used to show
said individual ones of said plurality of values in synchronization
with portions of said video data that show at least one of said subject
and said subject’s surroundings during said biometric event.
Description:
CROSS-REFERENCE TO RELATED APPLICATION
This application is a continuation of Ser. No. 15/293,211, filed Oct.
13, 2016, which claims priority pursuant to 35 U.S.C. § 119(e) to U.S.
Provisional Application No. 62/240,783, filed Oct. 13, 2015, which
applications are specifically incorporated herein, in their entirety, by
reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to the reception and use of biometric
data, and more particularly, to a system and method for displaying at
least one biometric of a user along with video of the user at a time
that the at least one biometric is being measured and/or received.
2. Description of Related Art
Recently, devices have been developed that are capable of measuring,
sensing, or estimating in a convenient form factor at least one or more
metrics related to physiological characteristics, commonly referred to as
biometric data. For example, devices that resemble watches have been
developed which are capable of measuring an individual’s heart rate or
pulse, and, using that data together with other information (e.g., the
individual’s age, weight, etc.), to calculate a resultant, such as the
total calories burned by the individual in a given day. Similar devices
have been developed for measuring, sensing, or estimating other kinds of
metrics, such as blood pressure, breathing patterns, breath
composition, sleep patterns, and blood-alcohol level, to name a few.
These devices are generically referred to as biometric devices or
biosensor metrics devices.
While the types of biometric devices continue to grow, the way in
which biometric data is used remains relatively static. For example,
heart rate data is typically used to give an individual information on
their pulse and calories burned. By way of another example,
blood-alcohol data is typically used to give an individual information
on their blood-alcohol level, and to inform the individual on whether or
not they can safely or legally operate a motor vehicle. By way of yet
another example, an individual’s breathing pattern (measurable for
example either by loudness level in decibels, or by variations in
decibel level over a time interval) may be monitored by a doctor, nurse,
or medical technician to determine whether the individual suffers from
sleep apnea.
While biometric data is useful in and of itself, such data would be
more informative or dynamic if it could be combined with other data
(e.g., video data, etc.), provided (e.g., wirelessly, over a network,
etc.) to a remote device, and/or searchable (e.g., allowing certain
conditions, such as an elevated heart rate, to be quickly identified)
and/or cross-searchable (e.g., using biometric data to identify a video
section illustrating a specific characteristic, or vice-versa). Thus, a
need exists for an efficient system and method capable of achieving at
least some, or indeed all, of the foregoing advantages, and capable also
of merging the data generated in either automatic or manual form by the
various devices, which are often using operating systems or
technologies (e.g., hardware platforms, protocols, data types, etc.)
that are incompatible with one another.
In certain embodiments of the present invention, the system and/or
method is configured to receive, manage, and filter the quantity of
information on a timely and cost-effective basis, and could also be of
further value through the accurate measurement, visualization (e.g.,
synchronized visualization, etc.), and rapid notification of data points
which are outside (or within) a defined or predefined range.
Such a system and/or method could be used by an individual (e.g.,
athlete, etc.) or their trainer, coach, etc., to visualize the
individual during the performance of an athletic event (e.g., jogging,
biking, weightlifting, playing soccer, etc.) in real-time (live) or
afterwards, together with the individual’s concurrently measured
biometric data (e.g., heart rate, etc.), and/or concurrently gathered
“self-realization data,” or subject-generated experiential data, where
the individual inputs their own subjective physical or mental states
during their exercise, fitness or sports activity/training (e.g.,
feeling the onset of an adrenaline “rush” or endorphins in the system,
feeling tired, “getting a second wind,” etc.). This would allow a person
(e.g., the individual, the individual’s trainer, a third party, etc.)
to monitor/observe physiological and/or subjective psychological
characteristics of an individual while watching or reviewing the
individual in the performance of an athletic event, or other physical
activity. Such inputting of the self-realization data, ca be achieved by
various methods, including automatically, time-stamped-in-the-system
voice notes, short-form or abbreviation key commands on a smart phone,
smart watch, enabled fitness band, or any other system-linked input
method which is convenient for the individual to utilize so as not to
impede (or as little as possible) the flow and practice by the
individual of the activity in progress.
Such a system and/or method would also facilitate, for example,
remote observation and diagnosis in telemedicine applications, where
there is a need for the medical staff, or monitoring party or parent, to
have clear and rapid confirmation of the identity of the patient or
infant, as well as their visible physical condition, together with their
concurrently generated biometric and/or self-realization data.
Furthermore, the system and/or method should also provide the
subject, or monitoring party, with a way of using video indexing to
efficiently and intuitively benchmark, map and evaluate the subject’s
data, both against the subject’s own biometric history and/or against
other subjects’ data samples, or demographic comparables, independently
of whichever operating platforms or applications have been used to
generate the biometric and video information. By being able to
filter/search for particular events (e.g., biometric events,
self-realization events, physical events, etc.), the acquired data can
be reduced down or edited (e.g., to create a “highlight reel,” etc.)
while maintaining synchronization between individual video segments and
measured and/or gathered data (e.g., biometric data, self-realization
data, GPS data, etc.). Such comprehensive indexing of the events, and
with it the ability to perform structured aggregation of the related
data (video and other) with (or without) data from other individuals or
other relevant sources, can also be utilized to provide richer levels of
information using methods of “Big Data” analysis and “Machine
Learning,” and adding artificial intelligence (“AI”) for the
implementation of recommendations and calls to action.
SUMMARY OF THE INVENTION
The present invention provides a system and method for using,
processing, indexing, benchmarking, ranking, comparing and displaying
biometric data, or a resultant thereof, either alone or together (e.g.,
in synchronization) with other data (e.g., video data, etc.). Preferred
embodiments of the present invention operate in accordance with a
computing device (e.g., a smart phone, etc.) in communication with at
least one external device (e.g., a biometric device for acquiring
biometric data, a video device for acquiring video data, etc.). In a
first embodiment of the present invention, video data, which may include
audio data, and non-video data, such as biometric data, are stored
separately on the computing device and linked to other data, which
allows searching and synchronization of the video and non-video data.
In one embodiment of the present invention, an application (e.g.,
running on the computing device, etc.) includes a plurality of modules
for performing a plurality of functions. For example, the application
may include a video capture module for receiving video data from an
internal and/or external camera, and a biometric capture module for
receiving biometric data from an internal and/or external biometric
device. The client platform may also include a user interface module,
allowing a user to interact with the platform, a video editing module
for editing video data, a file handling module for managing data, a
database and sync module for replicating data, an algorithm module for
processing received data, a sharing module for sharing and/or storing
data, and a central login and ID module for interfacing with third party
social media websites, such as Facebook™.
These modules can be used, for example, to start a new session,
receive video data for the session (i.e., via the video capture module)
and receive biometric data for the session (i.e., via the biometric
capture module). This data can be stored in local storage, in a local
database, and/or on a remote storage device (e.g., in the company cloud
or a third-party cloud service, such as Dropbox™, etc.). In a preferred
embodiment, the data is stored so that it is linked to information that
(i) identifies the session and (ii) enables synchronization.
For example, video data is preferably linked to at least a start time
(e.g., a start time of the session) and an identifier. The identifier
may be a single number uniquely identifying the session, or a plurality
of numbers (e.g., a plurality of global or universal unique identifiers
(GUIDs/UUIDs)), where a first number uniquely identifies the session
and a second number uniquely identifies an activity within the session,
allowing a session to include a plurality of activities. The identifier
may also include a session name and/or a session description. Other
information about the video data (e.g., video length, video source,
etc.) (i.e., “video metadata”) can also be stored and linked to the
video data. Biometric data is preferably linked to at least the start
time (e.g., the same start time linked to the video data), the
identifier (e.g., the same identifier linked to the video data), and a
sample rate, which identifies the rate at which biometric data is
received and/or stored.
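A minimal sketch of that linkage, assuming Python dataclasses and field names of my own choosing (the patent describes the linkage abstractly, not a schema):

```python
from dataclasses import dataclass, field
import uuid

@dataclass
class VideoRecord:
    session_id: uuid.UUID            # identifier shared with biometric data
    start_time: float                # session start, e.g. epoch seconds
    duration_s: float
    metadata: dict = field(default_factory=dict)  # video length, source, etc.

@dataclass
class BiometricRecord:
    session_id: uuid.UUID            # same identifier links it to the video
    start_time: float                # same start time as the video
    sample_rate_spm: int             # samples per minute, e.g. 30
    values: list = field(default_factory=list)
```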
Once the video and biometric data is stored and linked, algorithms
can be used to display the data together. For example, if biometric data
is stored at a sample rate of 30 samples per minute (spm), algorithms
can be used to display a first biometric value (e.g., below the video
data, superimposed over the video data, etc.) at the start of the video
clip, a second biometric value two seconds later (two seconds into the
video clip), a third biometric value two seconds later (four seconds
into the video clip), etc. In alternate embodiments of the present
invention, non-video data (e.g., biometric data, self-realization data,
etc.) can be stored with a plurality of time-stamps (e.g., individual
stamps or offsets for each stored value, or individual sample rates for
each data type), which can be used together with the start time to
synchronize non-video data to video data.
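The fixed-sample-rate case reduces to simple index arithmetic. A sketch under the same assumptions as above (illustrative names, not the patented code):

```python
def value_at_offset(values, sample_rate_spm, offset_s):
    """Biometric value to overlay 'offset_s' seconds into the clip: at
    30 spm, one stored value covers each two-second window."""
    seconds_per_sample = 60.0 / sample_rate_spm
    index = min(int(offset_s // seconds_per_sample), len(values) - 1)
    return values[index]

# value_at_offset(hr, 30, 0) is shown at the start of the clip,
# value_at_offset(hr, 30, 2) two seconds in, and so on.
```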
In one embodiment of the present invention, the biometric device may
include a sensor for sensing biometric data, a display for interfacing
with the user and displaying various information (e.g., biometric data,
set-up data, operation data, such as start, stop, and pause, etc.), a
memory for storing the sensed biometric data, a transceiver for
communicating with the exemplary computing device, and a processor for
operating and/or driving the transceiver, memory, sensor, and display.
The exemplary computing device includes a transceiver (1)
for receiving biometric data from the exemplary biometric device, a
memory for storing the biometric data, a display for interfacing with
the user and displaying various information (e.g., biometric data,
set-up data, operation data, such as start, stop, and pause, input
in-session comments or add voice notes, etc.), a keyboard (or other user
input) for receiving user input data, a transceiver (2)
for providing the biometric data to the host computing device via the
Internet, and a processor for operating and/or driving the transceiver (1), transceiver (2), keyboard, display, and memory.
The keyboard (or other input device) in the computing device, or
alternatively the keyboard (or other input device) in the biometric
device, may be used to enter self-realization data, or data on how the
user is feeling at a particular time. For example, if the user is
feeling tired, the user may enter the “T” on the keyboard. If the user
is feeling their endorphins kick in, the user may enter the “E” on the
keyboard. And if the user is getting their second wind, the user may
enter the “S” on the keyboard. Alternatively, to further facilitate
operation during the exercise, or sporting activity, short-code key
buttons such as “T,” “E,” and “S” can be preassigned, like speed-dial
telephone numbers for frequently called contacts on a smart phone, etc.,
which can be selected manually or using voice recognition. This data
(e.g., the entry or its representation) is then stored and linked to
either a sample rate (like biometric data) or time-stamp data, which may
be a time or an offset to the start time that each button was pressed.
This would allow the self-realization data to be synchronized to the
video data. It would also allow the self-realization data, like
biometric data, to be searched or filtered (e.g., in order to find video
corresponding to a particular event, such as when the user started to
feel tired, etc.).
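A sketch of that logging-and-filtering idea, using the “T”/“E”/“S” shortcuts from the text; the tuple layout and function names are my own:

```python
FEELINGS = {"T": "tired", "E": "endorphins", "S": "second wind"}

def log_feeling(events, key, now_s, start_time_s):
    """Store a self-realization entry as (offset, label), so it can be
    synchronized to video exactly like time-stamped biometric data."""
    events.append((now_s - start_time_s, FEELINGS[key]))

def offsets_for(events, label):
    """Video offsets at which the user reported 'label', e.g. 'tired',
    usable to jump playback to the matching video segment."""
    return [offset for offset, name in events if name == label]
```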
In an alternate embodiment of the present invention, the computing
device (e.g., a smart phone, etc.) is also in communication with a host
computing device via a wide area network (“WAN”), such as the Internet.
This embodiment allows the computing device to download the application
from the host computing device, offload at least some of the
above-identified functions to the host computing device, and store data
on the host computing device (e.g., allowing video data, alone or
synchronized to non-video data, such as biometric data and
self-realization data, to be viewed by another networked device). For
example, the software operating on the computing device (e.g., the
application, program, etc.) may allow the user to play the video and/or
audio data, but not to synchronize the video and/or audio data to the
biometric data. This may be because the host computing device is used to
store data critical to synchronization (time-stamp index, metadata,
biometric data, sample rate, etc.) and/or software operating on the host
computing device is necessary for synchronization. By way of another
example, the software operating on the computing device may allow the
user to play the video and/or audio data, either alone or synchronized
with the biometric data, but may not allow the computing device (or may
limit the computing device’s ability) to search or otherwise extrapolate
from, or process the biometric data to identify relevant portions
(e.g., which may be used to create a “highlight reel” of the
synchronized video/audio/biometric data) or to rank the biometric and/or
video data. This may be because the host computing device is used to
store data critical to search and/or to rank the biometric data
(biometric data, biometric metadata, etc.), and/or software necessary
for searching (or performing advanced searching of) and/or ranking (or
performing advanced ranking of) the biometric data.
In one embodiment of the present invention, the video data, which may
also include audio data, starts at a time “T” and continues for a
duration of “n.” The video data is preferably stored in memory (locally
and/or remotely) and linked to other data, such as an identifier, start
time, and duration. Such data ties the video data to at least a
particular session, a particular start time, and identifies the duration
of the video included therein. In one embodiment of the present
invention, each session can include different activities. For example, a
trip to Berlin on a particular day (session) may involve a bike ride
through the city (first activity) and a walk through a park (second
activity). Thus, the identifier may include both a session identifier,
uniquely identifying the session via a globally unique identifier
(GUID), and an activity identifier, uniquely identifying the activity
via a globally unique identifier (GUID), where the session/activity
relationship is that of a parent/child.
In one embodiment of the present invention, the biometric data is
stored in memory and linked to the identifier and a sample rate “m.”
This allows the biometric data to be linked to video data upon playback.
For example, if the identifier is one, the start time is 1:00 PM, the video
duration is one minute, and the sample rate is 30 spm, then the playing
of the video at 2:00 PM would result in the first biometric value to be
displayed (e.g., below the video, over the video, etc.) at 2:00 PM, the
second biometric value to be displayed (e.g., below the video, over the
video, etc.) two seconds later, and so on until the video ends at 2:01
PM. While self-realization data can be stored like biometric data (e.g.,
linked to a sample rate), if such data is only received periodically,
it may be more advantageous to store this data linked to the identifier
and a time-stamp, where “m” is either the time that the self-realization
data was received or an offset between this time and the start time
(e.g., ten minutes and four seconds after the start time, etc.). By
storing video and non-video data separately from one another, data can
be easily searched and synchronized.
With respect to linking data to an identifier, which may be linked to
other data (e.g., start time, sample rate, etc.), if the data is
received in real-time, the data can be linked to the identifier(s) for
the current session (and/or activity). However, when data is received
after the fact (e.g., after a session has ended), there are several ways
in which the data can be linked to a particular session and/or activity
(or identifier(s) associated therewith). The data can be manually
linked (e.g., by the user) or automatically linked via the application.
With respect to the latter, this can be accomplished, for example, by
comparing the duration of the received data (e.g., the video length)
with the duration of the session and/or activity, by assuming that the
received data is related to the most recent session and/or activity, or
by analyzing data included within the received data. For example, in one
embodiment, data included with the received data (e.g., metadata) may
identify a time and/or location associated with the data, which can then
be used to link the received data to the session and/or activity. In
another embodiment, the computing device could display data (e.g., a
barcode, such as a QR code, etc.) that identifies the session and/or
activity. An external video recorder could record the identifying data
(as displayed by the computing device) along with (e.g., before, after,
or during) the user and/or his/her surroundings. The application could
then search the video data for identifying data, and use this data to
link the video data to a session and/or activity. The identifying
portion of the video data could then be deleted by the application if
desired.
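As an illustration of that QR-code linking step, here is a sketch using OpenCV’s QR detector (my choice of library; the patent names none). It scans the opening frames of an after-the-fact recording for an encoded session identifier:

```python
import cv2  # assumed dependency; any QR decoder would do

def session_id_from_video(path, max_frames=300):
    """Scan the first frames for a QR code encoding the session and/or
    activity identifier displayed by the computing device."""
    detector = cv2.QRCodeDetector()
    cap = cv2.VideoCapture(path)
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            text, _, _ = detector.detectAndDecode(frame)
            if text:
                return text  # e.g. the session GUID
    finally:
        cap.release()
    return None  # not found: fall back to manual linking
```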
A more complete understanding of a system and method for using,
processing, and displaying biometric data, or a resultant thereof, will
be afforded to those skilled in the art, as well as a realization of
additional advantages and objects thereof, by a consideration of the
following detailed description of the preferred embodiment. Reference
will be made to the appended sheets of drawings, which will first be
described briefly.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a system for using, processing, and displaying
biometric data, and for synchronizing biometric data with other data
(e.g., video data, audio data, etc.) in accordance with one embodiment
of the present invention;
FIG. 2A illustrates a system for using, processing, and displaying
biometric data, and for synchronizing biometric data with other data
(e.g., video data, audio data, etc.) in accordance with another
embodiment of the present invention;
FIG. 2B illustrates a system for using, processing, and displaying
biometric data, and for synchronizing biometric data with other data
(e.g., video data, audio data, etc.) in accordance with yet another
embodiment of the present invention;
FIG. 3 illustrates an exemplary display of video data synchronized
with biometric data in accordance with one embodiment of the present
invention;
FIG. 4 illustrates a block diagram for using, processing, and
displaying biometric data, and for synchronizing biometric data with
other data (e.g., video data, audio data, etc.) in accordance with one
embodiment of the present invention;
FIG. 5 illustrates a block diagram for using, processing, and
displaying biometric data, and for synchronizing biometric data with
other data (e.g., video data, audio data, etc.) in accordance with
another embodiment of the present invention;
FIG. 6 illustrates a method for synchronizing video data with
biometric data, operating the video data, and searching the biometric
data, in accordance with one embodiment of the present invention;
FIG. 7 illustrates an exemplary display of video data synchronized
with biometric data in accordance with another embodiment of the present
invention;
FIG. 8 illustrates exemplary video data, which is preferably linked
to an identifier (ID), a start time (T), and a finish time or duration
(n);
FIG. 9 illustrates an exemplary identifier (ID), comprising a session identifier and an activity identifier;
FIG. 10 illustrates exemplary biometric data, which is preferably
linked to an identifier (ID), a start time (T), and a sample rate (S);
FIG. 11 illustrates exemplary self-realization data, which is preferably linked to an identifier (ID) and a time (m);
FIG. 12 illustrates how sampled biometric data points can be used to
extrapolate other biometric data points in accordance with one embodiment
of the present invention;
FIG. 13 illustrates how sampled biometric data points can be used to
extrapolate other biometric data points in accordance with another
embodiment of the present invention;
FIG. 14 illustrates an example of how a start time and data related
thereto (e.g., sample rate, etc.) can be used to synchronize biometric
data and self-realization data to video data;
FIG. 15 depicts an exemplary “sign in” screen shot for an application
that allows a user to capture at least video and biometric data of the
user performing an athletic event (e.g., bike riding, etc.) and to
display the video data together (or in synchronization) with the
biometric data;
FIG. 16 depicts an exemplary “create session” screen shot for the
application depicted in FIG. 15, allowing the user to create a new
session;
FIG. 17 depicts an exemplary “session name” screen shot for the
application depicted in FIG. 15, allowing the user to enter a name for
the session;
FIG. 18 depicts an exemplary “session description” screen shot for
the application depicted in FIG. 15, allowing the user to enter a
description for the session;
FIG. 19 depicts an exemplary “session started” screen shot for the
application depicted in FIG. 15, showing the video and biometric data
received in real-time;
FIG. 20 depicts an exemplary “review session” screen shot for the
application depicted in FIG. 15, allowing the user to playback the
session at a later time;
FIG. 21 depicts an exemplary “graph display option” screen shot for
the application depicted in FIG. 15, allowing the user to select data
(e.g., heart rate data, etc.) to be displayed along with the video data;
FIG. 22 depicts an exemplary “review session” screen shot for the
application depicted in FIG. 15, where the video data is displayed
together (or in synchronization) with the biometric data;
FIG. 23 depicts an exemplary “map” screen shot for the application
depicted in FIG. 15, showing GPS data displayed on a Google map;
FIG. 24 depicts an exemplary “summary” screen shot for the application depicted in FIG. 15, showing a summary of the session;
FIG. 25 depicts an exemplary “biometric search” screen shot for the
application depicted in FIG. 15, allowing a user to search the biometric
data for particular biometric event (e.g., a particular value, a
particular range, etc.);
FIG. 26 depicts an exemplary “first result” screen shot for the
application depicted in FIG. 15, showing a first result for the
biometric event shown in FIG. 25, together with corresponding video;
FIG. 27 depicts an exemplary “second result” screen shot for the
application depicted in FIG. 15, showing a second result for the
biometric event shown in FIG. 25, together with corresponding video;
FIG. 28 depicts an exemplary “session search” screen shot for the
application depicted in FIG. 15, allowing a user to search for sessions
that meet certain criteria; and
FIG. 29 depicts an exemplary “list” screen shot for the application
depicted in FIG. 15, showing a result for the criteria shown in FIG. 28.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The present invention provides a system and method for using,
processing, indexing, benchmarking, ranking, comparing and displaying
biometric data, or a resultant thereof, either alone or together (e.g.,
in synchronization) with other data (e.g., video data, etc.). It should
be appreciated that while the invention is described herein in terms of
certain biometric data (e.g., heart rate, breathing patterns,
blood-alcohol level, etc.), the invention is not so limited, and can be
used in conjunction with any biometric and/or physical data, including,
but not limited to oxygen levels, CO2 levels, oxygen
saturation, blood pressure, blood glucose, lung function, eye pressure,
body and ambient conditions (temperature, humidity, light levels,
altitude, and barometric pressure), speed (walking speed, running
speed), location and distance travelled, breathing rate, heart rate
variance (HRV), EKG data, perspiration levels, calories consumed and/or
burnt, ketones, waste discharge content and/or levels, hormone levels,
blood content, saliva content, audible levels (e.g., snoring, etc.),
mood levels and changes, galvanic skin response, brain waves and/or
activity or other neurological measurements, sleep patterns, physical
characteristics (e.g., height, weight, eye color, hair color, iris data,
fingerprints, etc.) or responses (e.g., facial changes, iris (or pupil)
changes, voice (or tone) changes, etc.), or any combination or
resultant thereof.
As shown in FIG. 1, a biometric device 110 may be in communication with a computing device 108, such as a smart phone, which, in turn, is in communication with at least one computing device (102, 104, 106) via a wide area network (“WAN”) 100,
such as the Internet. The computing devices can be of different types,
such as a PC, laptop, tablet, smart phone, smart watch etc., using one
or different operating systems or platforms. In one embodiment of the
present invention, the biometric device 110 is
configured to acquire (e.g., measure, sense, estimate, etc.) an
individual’s heart rate (e.g., biometric data). The biometric data is
then provided to the computing device 108, which includes a video and/or audio recorder (not shown).
In a first embodiment of the present invention, the video and/or
audio data are provided along with the heart rate data to a host
computing device 106 via the network 100. Because the concurrent video and/or audio data and the heart rate data are provided to the host computing device 106,
a host application operating thereon (not shown) can be used to
synchronize the video data, audio data, and/or heart rate data, thereby
allowing a user (e.g., via the user computing devices 102, 104)
to view the video data and/or listen to the audio data (either in
real-time or time delayed) while viewing the biometric data. For
example, as shown in FIG. 3, the host application may use a time-stamp 320, or other sequencing method using metadata, to synchronize the video data 310 with the biometric data 330, allowing a user to view, for example, an individual (e.g., patient in a hospital, baby in a crib, etc.) at a particular time 340 (e.g., 76 seconds past the start time) and biometric data associated with the individual at that particular time 340 (e.g., 76 seconds past the start time).
It should be appreciated that the host application may further be
configured to perform other functions, such as search for a particular
activity in video data, audio data, biometric data and/or metadata,
and/or ranking video data, audio data, and/or biometric data. For
example, the host application may allow the user to search for a
particular biometric event, such as a heart rate that has exceeded a
particular threshold or value, a heart rate that has dropped below a
particular threshold or value, a particular heart rate (or range) for a
minimum period of time, etc. By way of another example, the host
application may rank video data, audio data, biometric data, or a
plurality of synchronized clips (e.g., highlight reels) chronologically,
by biometric magnitude (highest to lowest, lowest to highest, etc.), by
review (best to worst, worst to best, etc.), or by views (most to
least, least to most, etc.). It should further be appreciated that such
functions as the ranking, searching, and analysis of data are not limited
to a user’s individual session, but can be performed across any number
of individual sessions of the user, as well as the session or number of
sessions of multiple users. One use of this collection of all the
various information (video, biometric and other) is to be able to
generate sufficient data points for Big Data analysis and Machine
Learning for the purposes of generating AI inferences and
recommendations.
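A sketch of the threshold search and magnitude ranking described above, under the fixed-sample-rate assumption (names and signatures are illustrative):

```python
def search_events(values, sample_rate_spm, threshold, min_duration_s=0):
    """Return (start_offset_s, end_offset_s) runs where the biometric,
    e.g. heart rate, exceeds 'threshold' for at least 'min_duration_s'."""
    step = 60.0 / sample_rate_spm
    runs, start = [], None
    for i, v in enumerate(values):
        if v > threshold and start is None:
            start = i
        elif v <= threshold and start is not None:
            if (i - start) * step >= min_duration_s:
                runs.append((start * step, i * step))
            start = None
    if start is not None and (len(values) - start) * step >= min_duration_s:
        runs.append((start * step, len(values) * step))
    return runs

def rank_by_magnitude(clips):
    """Rank (offset, peak_value) clips from highest peak to lowest."""
    return sorted(clips, key=lambda c: c[1], reverse=True)
```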
By way of example, machine learning algorithms could be used to
search through video data automatically, looking for the most compelling
content which would subsequently be stitched together into a short
“highlight reel.” The neural network could be trained using a plurality
of sports videos, along with ratings from users of their level of
interest as the videos progress. The input nodes to the network could be
a sample of change in intensity of pixels between frames along with the
median excitement rating of the current frame. The machine learning
algorithms could also be used, in conjunction with a multi-layer
convolutional neural network, to automatically classify video content
(e.g., what sport is in the video). Once the content is identified,
either automatically or manually, algorithms can be used to compare the
user’s activity to an idealized activity. For example, the system could
compare a video recording of the user’s golf swing to that of a
professional golfer. The system could then provide incremental tips to
the user on how the user could improve their swing. Algorithms could
also be used to predict fitness levels for users (e.g., if they maintain
their program, giving them an incentive to continue working out), match
users to other users or practitioners having similar fitness levels,
and/or create routines optimized for each user.
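The input feature named above, change in pixel intensity between frames, is easy to sketch; the network and its training are beyond a short example. The OpenCV usage below is my own illustration, not the described system:

```python
import cv2
import numpy as np

def intensity_changes(path, stride=5):
    """Mean absolute per-pixel change between sampled grayscale frames:
    one candidate input node for the highlight-selection network."""
    cap = cv2.VideoCapture(path)
    prev, scores, i = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % stride == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
            if prev is not None:
                scores.append(float(np.mean(np.abs(gray - prev))))
            prev = gray
        i += 1
    cap.release()
    return scores
```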
It should also be appreciated, as shown in FIG. 2A, that the biometric data may be provided to the host computing device 106 directly, without going through the computing device 108. For example, the computing device 108 and the biometric device 110 may communicate independently with the host computing device, either directly or via the network 100.
It should further be appreciated that the video data, the audio data,
and/or the biometric data need not be provided to the host computing
device 106 in real-time. For example, video data could
be provided at a later time as long as the data can be identified, or
tied to a particular session. If the video data can be identified, it
can then be synchronized to other data (e.g., biometric data) received
in real-time.
In one embodiment of the present invention, as shown in FIG. 2B, the system includes a computing device 200, such as a smart phone, in communication with a plurality of devices, including a host computing device 240 via a WAN (see, e.g., FIG. 1 at 100), third party devices 250 via the WAN (see, e.g., FIG. 1 at 100), and local devices 230 (e.g., via wireless or wired connections). In a preferred embodiment, the computing device 200 downloads a program or application (i.e., client platform) from the host computing device 240
(e.g., company cloud). The client platform includes a plurality of
modules that are configured to perform a plurality of functions.
For example, the client platform may include a video capture module 210 for receiving video data from an internal and/or external camera, and a biometric capture module 212
for receiving biometric data from an internal and/or external biometric
device. The client platform may also include a user interface module 202, allowing a user to interact with the platform, a video editing module 204 for editing video data, a file handling module 206
for managing (e.g., storing, linking, etc.) data (e.g., video data,
biometric data, identification data, start time data, duration data,
sample rate data, self-realization data, time-stamp data, etc.), a
database and sync module 214 for replicating data (e.g., copying data stored on the computing device 200 to the host computing device 240 and/or copying user data stored on the host computing device 240 to the computing device 200), an algorithm module 216
for processing received data (e.g., synchronizing data,
searching/filtering data, creating a highlight reel, etc.), a sharing
module 220 for sharing and/or storing data (e.g., video
data, highlight reel, etc.) relating either to a single session or
multiple sessions, and a central login and ID module 218 for interfacing with third party social media websites, such as Facebook™.
With respect to FIG. 2B, the computing device 200,
which may be a smart phone, a tablet, or any other computing device, may
be configured to download the client platform from the host computing
device 240. Once the client platform is running on the computing device 200, the platform can be used to start a new session, receive video data for the session (i.e., via the video capture module 210) and receive biometric data for the session (i.e., via the biometric capture module 212).
This data can be stored in local storage, in a local database, and/or
on a remote storage device (e.g., in the company cloud or a third-party
cloud, such as Dropbox™, etc.). In a preferred embodiment, the data is
stored so that it is linked to information that (i) identifies the
session and (ii) enables synchronization.
For example, video data is preferably linked to at least a start time
(e.g., a start time of the session) and an identifier. The identifier
may be a single number uniquely identifying the session, or a plurality
of numbers (e.g., a plurality of globally (or universally) unique
identifiers (GUIDs/UUIDs), where a first number uniquely identifies the
session and a second number uniquely identifies an activity within the
session, allowing a session (e.g., a trip to or an itinerary in a
destination, such as Berlin) to include a plurality of activities (e.g.,
a bike ride, a walk, etc.). By way of example only, an activity (or
session) identifier may be a 128-bit identifier that has a high
probability of uniqueness, such as
8bf25512-f17a-4e9e-b49a-7c3f59ec1e85. The identifier may also include a
session name and/or a session description. Other information about the
video data (e.g., video length, video source, etc.) (i.e., “video
metadata”) can also be stored and linked to the video data. Biometric
data is preferably linked to at least the start time (e.g., the same
start time linked to the video data), the identifier (e.g., the same
identifier linked to the video data), and a sample rate, which
identifies the rate at which biometric data is received and/or stored.
For example, heart rate data may be received and stored at a rate of
thirty samples per minute (30 spm), i.e., once every two seconds, or at
some other predetermined sampling interval.
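Identifiers of exactly the quoted shape can be produced with a standard UUID library; a sketch of the parent/child pairing from the Berlin example follows, with variable names of my own:

```python
import uuid

session_id = uuid.uuid4()        # parent: the trip to Berlin
bike_ride_id = uuid.uuid4()      # child: first activity
park_walk_id = uuid.uuid4()      # child: second activity

# Both numbers are linked to each stored record, so video and biometric
# data can be matched per session and per activity.
record_key = (session_id, bike_ride_id)
```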
In some cases, the sample rate used by the platform may be the sample
rate of the biometric device (i.e., the rate at which data is provided
by the biometric device). In other cases, the sample rate used by the
platform may be independent from the rate at which data is received
(e.g., a fixed rate, a configurable rate, etc.). For example, if the
biometric device is configured to provide biometric data at a rate of
sixty samples per minute (60 spm), the platform may still store the data
at a rate of 30 spm. In other words, with a sample rate of 30 spm, the
platform will have stored five values after ten seconds, the first value
being the second value transmitted by the biometric device, the second
value being the fourth value transmitted by the biometric device, and so
on. Alternatively, if the biometric device is configured to provide
biometric data only when the biometric data changes, the platform may
still store the data at a rate of 30 spm. In this case, the first value
stored by the platform may be the first value transmitted by the
biometric device, the second value stored may be the first value
transmitted by the biometric device if at the time of storage no new
value has been transmitted by the biometric device, the third value
stored may be the second value transmitted by the biometric device if at
the time of storage a new value is being transmitted by the biometric
device, and so on.
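The two storage policies just described can be sketched as follows (a minimal illustration with hypothetical helper names; a real platform could implement this differently):

    STORE_INTERVAL_S = 2.0   # 30 spm, i.e., one stored value every two seconds

    def downsample(received, device_rate_spm=60, store_rate_spm=30):
        """Device faster than storage: keep every Nth transmitted value,
        so at 60 spm in / 30 spm out the 2nd, 4th, 6th, ... are stored."""
        step = device_rate_spm // store_rate_spm
        return received[step - 1::step]

    def sample_and_hold(events, duration_s, interval_s=STORE_INTERVAL_S):
        """Device sends only on change: at each storage tick, store the
        most recent transmitted value, repeating it if nothing new has
        arrived. `events` is a list of (time_s, value) pairs in order."""
        stored, last = [], None
        for i in range(1, int(duration_s / interval_s) + 1):
            tick = i * interval_s
            while events and events[0][0] <= tick:
                last = events.pop(0)[1]
            stored.append(last)
        return stored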
Once the video and biometric data is stored and linked, algorithms
can be used to display the data together. For example, if biometric data
is stored at a sample rate of 30 spm, which may be fixed or
configurable, algorithms (e.g., 216) can be used to
display a first biometric value (e.g., below the video data,
superimposed over the video data, etc.) at the start of the video clip, a
second biometric value two seconds later (two seconds into the video
clip), a third biometric value two seconds later (four seconds into the
video clip), etc. In alternate embodiments of the present invention,
non-video data (e.g., biometric data, self-realization data, etc.) can
be stored with a plurality of time-stamps (e.g., individual stamps or
offsets for each stored value), which can be used together with the
start time to synchronize non-video data to video data.
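For example, the lookup at playback time could work along these lines (an illustrative sketch only; the function names are hypothetical):

    def biometric_at(values, offset_s, sample_rate_spm=30):
        """Sampled data: value 1 at the start of the clip, value 2 two
        seconds in, value 3 four seconds in, and so on."""
        interval_s = 60.0 / sample_rate_spm
        index = int(offset_s // interval_s)
        return values[min(index, len(values) - 1)]

    def stamped_at(entries, offset_s):
        """Time-stamped data: return the latest (offset, value) entry at
        or before the current position in the video, if any."""
        current = None
        for stamp, value in sorted(entries):
            if stamp > offset_s:
                break
            current = value
        return current

An overlay loop could then call biometric_at(heart_rates, player_position) on every display refresh and draw the result below or over the video.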
It should be appreciated that while the client platform can be
configured to function autonomously (i.e., independent of the host
computing device 240), in one embodiment of the present
invention, certain functions of the client platform are performed by the
host computing device 240, and can only be performed when the computing device 200 is in communication with the host computing device 240. Such an embodiment is advantageous in that it not only offloads certain functions to the host computing device 240, but also ensures that these functions can only be performed with the involvement of the host computing device 240
(e.g., requiring a user to subscribe to a cloud service in order to
perform certain functions). Functions offloaded to the cloud may include
functions that are necessary to display non-video data together with
video data (e.g., the linking of information to video data, the linking
of information to non-video data, synchronizing non-video data to video
data, etc.), or may include more advanced functions, such as generating
and/or sharing a “highlight reel.” In alternate embodiments, the
computing device 200 is configured to perform the
foregoing functions as long as certain criteria have been met. These
criteria may include the computing device 200 being in communication with the host computing device 240, or the computing device 200 previously being in communication with the host computing device 240
and the period of time since the last communication being equal to or
less than a predetermined amount of time. Technology known to those
skilled in the art (e.g., using a keyed hash-based message authentication
code (HMAC), a stored time of said last communication (allowing said
computing device to determine whether the elapsed time is less than a
predetermined amount of time), etc.) can be used to ensure that these
criteria are met before allowing the performance of certain functions.
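One plausible shape for such a check, using Python's standard hmac module (the key name, window length, and storage format below are assumptions, not part of the description above):

    import hmac, hashlib, json, time

    DEVICE_KEY = b"key-provisioned-by-the-host"   # hypothetical shared secret
    MAX_AGE_S = 7 * 24 * 3600                     # predetermined amount of time

    def record_last_communication(now=None):
        """Store the time of the last host contact, with a MAC so the
        stored time cannot be edited to extend the offline window."""
        payload = json.dumps({"t": now if now is not None else time.time()}).encode()
        tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        return payload, tag

    def functions_allowed(payload, tag):
        """Permit the offloaded functions only if the stored record is
        authentic and the elapsed time is within the predetermined window."""
        expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(tag, expected):
            return False
        return time.time() - json.loads(payload)["t"] <= MAX_AGE_S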
Block diagrams of an exemplary computing device and an exemplary
biometric device are shown in FIG. 5. In particular, the exemplary
biometric device 500 includes a sensor for sensing
biometric data, a display for interfacing with the user and displaying
various information (e.g., biometric data, set-up data, operation data,
such as start, stop, and pause, etc.), a memory for storing the sensed
biometric data, a transceiver for communicating with the exemplary
computing device 600, and a processor for operating and/or driving the transceiver, memory, sensor, and display. The exemplary computing device 600 includes a transceiver (1) for receiving biometric data from the exemplary biometric device 500
(e.g., using any of telemetry, any WiFi standard, DLNA, Apple AirPlay,
Bluetooth, near field communication (NFC), RFID, ZigBee, Z-Wave, Thread,
Cellular, a wired connection, infrared or other method of data
transmission, datacasting or streaming, etc.), a memory for storing the
biometric data, a display for interfacing with the user and displaying
various information (e.g., biometric data, set-up data, operation data,
such as start, stop, and pause, in-session comments or voice
notes, etc.), a keyboard for receiving user input data, a transceiver (2)
for providing the biometric data to the host computing device via the
Internet (e.g., using any of telemetry, any WiFi standard, DLNA, Apple
AirPlay, Bluetooth, near field communication (NFC), RFID, ZigBee,
Z-Wave, Thread, Cellular, a wired connection, infrared or other method
of data transmission, datacasting or streaming, etc.), and a processor
for operating and/or driving the transceiver (1), transceiver (2), keyboard, display, and memory.
The keyboard in the computing device 600, or alternatively a keyboard in the biometric device 500,
may be used to enter self-realization data, or data on how the user is
feeling at a particular time. For example, if the user is feeling tired,
the user may hit the “T” button on the keyboard. If the user is feeling
their endorphins kick in, the user may hit the “E” button on the
keyboard. And if the user is getting their second wind, the user may hit
the “S” button on the keyboard. This data is then stored and linked to
either a sample rate (like biometric data) or time-stamp data, which may
be the time, or an offset from the start time, at which each button was pressed.
This would allow the self-realization data, in the same way as the
biometric data, to be synchronized to the video data. It would also
allow the self-realization data, like the biometric data, to be searched
or filtered (e.g., in order to find video corresponding to a particular
event, such as when the user started to feel tired, etc.).
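A sketch of that keyboard flow (illustrative only; the key-to-feeling mapping and names are hypothetical):

    import time

    FEELINGS = {"t": "tired", "e": "endorphins kicking in", "s": "second wind"}

    def log_feeling(key, session_start, log):
        """Store self-realization data as (offset from start time, label),
        so it can be synchronized to the video like biometric data."""
        label = FEELINGS.get(key.lower())
        if label:
            log.append((time.time() - session_start, label))

    def first_event(log, label):
        """Search/filter support: offset of the first matching event,
        e.g., the video position where the user started to feel tired."""
        return next((t for t, lab in sorted(log) if lab == label), None)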
It should be appreciated that the present invention is not limited to
the block diagrams shown in FIG. 5, and a biometric device and/or a
computing device that includes fewer or more components is within the
spirit and scope of the present invention. For example, a biometric
device that does not include a display, or includes a camera and/or
microphone is within the spirit and scope of the present invention, as
are other data-entry devices or methods beyond a keyboard, such as a
touch screen, digital pen, voice/audible recognition device, gesture
recognition device, so-called “wearable,” or any other recognition
device generally known to those skilled in the art. Similarly, a
computing device that only includes one transceiver, further includes a
camera (for capturing video) and/or microphone (for capturing audio or
for performing spatial analytics through recording or measurement of
sound and how it travels), or further includes a sensor (see FIG. 4) is
within the spirit and scope of the present invention. It should also be
appreciated that self-realization data is not limited to how a user
feels, but could also include an event that the user or the application
desires to memorialize. For example, the user may want to record (or
time-stamp) the user biking past wildlife, or a particular architectural
structure, or the application may want to record (or time-stamp) a
patient pressing a “request nurse” button, or any other sensed
non-biometric activity of the user.
Referring back to FIG. 1, as discussed above in conjunction with FIG.
2B, the host application (or client platform) may operate on the
computing device 108. In this embodiment, the computing device 108 (e.g., a smart phone) may be configured to receive biometric data from the biometric device 110
(either in real-time, or at a later stage, with a time-stamp
corresponding to the occurrence of the biometric data), and to
synchronize the biometric data with the video data and/or the audio data
recorded by the computing device 108 (or a camera
and/or microphone operating thereon). It should be appreciated that in
this embodiment of the present invention, other than the host
application being run locally (e.g., on the computing device 108), the host application (or client platform) operates as previously discussed.
Again, with reference to FIG. 1, in another embodiment of the present invention, the computing device 108
further includes a sensor for sensing biometric data. In this
embodiment of the present invention, the host application (or client
platform) operates as previously discussed (locally on the computing
device 108), and functions to at least synchronize the
video, audio, and/or biometric data, and allow the synchronized data to
be played or presented to a user (e.g., via a display portion, via a
display device connected directly to the computing device, via a user
computing device connected to the computing device (e.g., directly, via
the network, etc.), etc.).
It should be appreciated that the present invention, in any
embodiment, is not limited to the computing devices (number or type)
shown in FIGS. 1 and 2, and may include any of a computing, sensing,
digital recording, GPS or otherwise location-enabled device (for
example, using WiFi Positioning Systems “WPS”, or other forms of
deriving geographical location, such as through network triangulation),
generally known to those skilled in the art, such as a personal
computer, a server, a laptop, a tablet, a smart phone, a cellular phone,
a smart watch, an activity band, a heart-rate strap, a mattress sensor,
a shoe sole sensor, a digital camera, a near field sensor or sensing
device, etc. It should also be appreciated that the present invention is
not limited to any particular biometric device, and includes biometric
devices that are configured to be worn on the wrist (e.g., like a
watch), worn on the skin (e.g., like a skin patch) or scalp, or
incorporated into computing devices (e.g., smart phones, etc.), either
integrated in, or added to items such as bedding, wearable devices such
as clothing, footwear, helmets or hats, or ear phones, or athletic
equipment such as rackets, golf clubs, or bicycles, where other kinds of
data, including physical performance metrics such as racket or club
head speed, or pedal rotation/second, or footwear recording such things
as impact zones, gait or shear, can also be measured synchronously with
biometrics, and synchronized to video. Other data can also be measured
synchronously with video data, including biometrics on animals (e.g., a
bull’s acceleration or pivot or buck in a bull riding event, a horse’s
acceleration matched to heart rate in a horse race, etc.), and physical
performance metrics of inanimate objects, such as revolutions/minute
(e.g., in a vehicle, such as an automobile, a motorcycle, etc.),
miles/hour (or the like) (e.g., in a vehicle, such as an automobile, a
motorcycle, a bicycle, etc.), or G-forces (e.g., experienced by
the user, an animal, or an inanimate object, etc.). All of this data
(collectively “non-video data,” which may include metadata, or data on
non-video data) can be synchronized to video data using a sample rate
and/or at least one time-stamp, as discussed above.
It should further be appreciated that the present invention need not
operate in conjunction with a network, such as the Internet. For
example, as shown in FIG. 2A, the biometric device 110, which may be, for example, a wireless activity band for sensing heart rate, and the computing device 108, which may be, for example, a digital video recorder, may be connected directly to the host computing device 106
running the host application (not shown), where the host application
functions as previously discussed. In this embodiment, the video, audio,
and/or biometric data can be provided to the host application either
(i) in real time, or (ii) at a later time, since the data is
synchronized with a sample rate and/or time-stamp. This would allow, for
example, at least video of an athlete, or a sportsman or woman (e.g., a
football player, a soccer player, a racing driver, etc.) to be shown in
action (e.g., playing football, playing soccer, motor racing, etc.)
along with biometric data of the athlete in action (see, e.g., FIG. 7).
By way of example only, this would allow a user to view a soccer
player’s heart rate 730 as the soccer player dribbles a ball, kicks the ball, heads the ball, etc. This can be accomplished using a time stamp 720 (e.g., start time, etc.), or other sequencing method using metadata (e.g., sample rate, etc.), to synchronize the video data 710 with the biometric data 730, allowing the user to view the soccer player at a particular time 740 (e.g., 76 seconds) and biometric data associated with the athlete at that particular time 740
(e.g., 76 seconds). Similar technology can be used to display biometric
data on other athletes, card players, actors, online gamers, etc.
Where it is desirable to monitor or watch more than one individual
from a camera view (for example, patients in a hospital ward being
observed from a remote nursing station, or multiple players on the
sports field during a televised broadcast of a sporting event such as a
football game), the system can be configured so that the subjects wear
Bluetooth, NFC, or other wearable sensors (in some cases with their
sensing capability also being location-enabled in order to identify
which specific individual to track) capable of transmitting their
biometrics over practicable distances, in conjunction with relays or
beacons if necessary. The viewer can then switch the selection of
which of one or multiple individuals’ biometric data to track alongside
the video or broadcast and, if wanted and where possible within the
limitations of the video capture field of the camera used, can also
concentrate the view of the video camera on a reduced group or on a
specific individual. In an alternate embodiment of the present
invention, selection of biometric data is automatically accomplished,
for example, based on the individual’s location in the video frame
(e.g., center of the frame), rate of movement (e.g., moving quicker than
other individuals), or proximity to a sensor (e.g., being worn by the
individual, embedded in the ball being carried by the individual, etc.),
which may be previously activated, or activated by a remote radio
frequency signal. Activation of the sensor may result in biometric data
of the individual being transmitted to a receiver, or may allow the
receiver to identify biometric data of the individual amongst other
data being transmitted (e.g., biometric data from other individuals).
In the context of fitness or sports tracking, it should be
appreciated that the capturing of an individual’s activity on video is
not dependent on the presence of a third party to do this, but various
methods of self-videoing can be envisaged, such as a video capture
device mounted on the subject’s wrist or a body harness, or on a selfie
attachment or a gimbal, or fixed to an object (e.g., sports equipment
such as bicycle handlebars, objects found in sporting environments such
as a basketball or tennis net, a football goal post, a ceiling, etc., a
drone-borne camera following the individual, a tripod, etc.). It should
be further noted that such video capture devices can include more than
one camera lens, such that not only the individual’s activity may be
videoed, but also simultaneously a different view, such as what the
individual is watching or sees in front of them (i.e., the user’s
surroundings). The video capture device could also be fitted with a
convex mirror lens, or have a convex mirror added as an attachment on
the front of the lens, or be a full 360 degree camera, or multiple 360
cameras linked together, such that either with or without the use of
specialized software known in the art, a 360 degree all-around or
surround view can be generated, or a 360 global view in all axes can be
generated.
In the context of augmented or virtual reality, where the individual
is wearing suitably equipped augmented reality (“AR”) or virtual reality
(“VR”) glasses, goggles, headset or is equipped with another type of
viewing display capable of rendering AR, VR, or other synthesized or
real 3D imagery, the biometric data such as heart rate from the sensor,
together with other data such as, for example, work-out run or speed,
from a suitably equipped sensor, such as an accelerometer capable of
measuring motion and velocity, could be viewable by the individual,
superimposed on their viewing field. Additionally, an avatar of the
individual in motion could be superimposed in front of the individual’s
viewing field, such that they could monitor or improve their exercise
performance, or otherwise enhance the experience of the activity by
viewing themselves or their own avatar, together (e.g., synchronized)
with their performance (e.g., biometric data, etc.). Optionally, the
biometric data also of their avatar, or the competing avatar, could be
simultaneously displayed in the viewing field. In addition (or
alternatively), at least one additional training or competing avatar can
be superimposed on the individual’s view, which may show the competing
avatar(s) in relation to the individual (e.g., superimposed in front of
the individual, superimposed to the side of the individual, behind the
individual (e.g., in a rear-view-mirror portion of the display), and/or
as blips on a radar-screen portion of the display, etc.). Competing
avatar(s), for example of real people such as the individual’s friends
or training acquaintances, can be used to motivate the user to improve
or correct their performance and/or to make their exercise routine more
interesting (e.g., by allowing the individual to “compete” in the AR,
VR, or Mixed Reality (“MR”) environment while exercising or training, or
by virtually “gamifying” the activity through the visualization of
virtual destinations or locations, imagined or real, such as historical
sites, scanned or synthetically created through computer modeling).
Additionally, any multimedia sources to which the user is being
exposed whilst engaging in the activity which is being tracked and
recorded, should similarly be able to be recorded with the time stamp,
for analysis and/or correlation of the individual’s biometric response.
An example of an application of this could be in the selection of
specific music tracks for when someone is carrying out a training
activity, where the correlation of the individual’s past response,
based, for example, on heart rate (and how well they achieved specific
performance levels or objectives), to music type (e.g., the specific
music track(s), track(s) similar to the specific track(s), track(s)
recommended or selected by others who have listened to or liked the
specific track(s), etc.) is used to develop a personalized algorithm,
in order to optimize automated music selection to either enhance the
physical effort, or to maximize recovery during and after exertion. The
individual could further specify that they wished for the specific track
or music type, based upon the personalized selection algorithm, to be
played based upon their geographical location; an example of this would
be someone who frequently or regularly uses a particular circuit for
training or recreational purposes. Alternatively, tracks or types of
music could be selected through recording or correlation of past
biometric response in conjunction with self-realization inputting when
particular tracks were being listened to.
It should be appreciated that biometric data does not need to be
linked to physical movement or sporting activity, but may instead be
combined with video of an individual at a fixed location (e.g., where
the individual is being monitored remotely or recorded for subsequent
review), for example, as shown in FIG. 3, for health reasons or a
medical condition, such as in their home or in hospital, or a senior
citizen in an assisted-living environment, or a sleeping infant being
monitored by parents whilst in another room or location.
Alternatively, the individual might be driving past, or be in the
proximity of, a park or a shopping mall, with their location being
recorded, typically by geo-stamping, or with additional information,
such as the altitude or weather at the specific location, being added
by geo-tagging. The information or content being viewed or interacted
with by the individual (e.g., a particular advertisement, a movie
trailer, a dating profile, etc.) on the Internet, on a smart/enabled
television, or on any other networked device incorporating a screen,
together with their interaction with that information or content, can
be viewed or recorded by video, in conjunction with their biometric
data, with all these sources of data being able to be synchronized for
review, by virtue of each of these individual sources being time-stamped
or the like (e.g., sampled, etc.). This would allow a third party
(e.g., a service provider, an advertiser, a provider of advertisements, a
movie production company/promoter, a poster of a dating profile, a
dating site, etc.) to acquire, for analysis of the viewer’s response,
the biometric data associated with the viewing of certain data, where
either the viewer or their profile could optionally be identifiable by
the third party’s system, or where only the identity of the viewer’s
interacting device is known or can be acquired from the
biometric-sending party’s GPS, or otherwise location-enabled, device.
For example, an advertiser or an advertisement provider could see how
people are responding to an advertisement, or a movie production
company/promoter could evaluate how people are responding to a movie
trailer, or a poster of a dating profile or the dating site itself,
could see how people are responding to the dating profile.
Alternatively, viewers of an online gaming or eSports
broadcast service such as twitch.tv, or of a televised or streamed
online poker game, could view the active participants’ biometric data
simultaneously with the primary video source, as well as the
participants’ visible reactions or performance. As with video/audio,
this can either be synchronized in real-time, or synchronized later
using the embedded time-stamp or the like (e.g., sample rate, etc.).
Additionally, where facial expression analysis is being generated from
the source video, for example in the context of measuring an
individual’s response to advertising messages, since the video is
already time-stamped (e.g., with a start time), the facial expression
data can be synchronized and correlated to the physical biometric data
of the individual, which has similarly been time-stamped and/or sampled.
As previously discussed, the host application may be configured to
perform a plurality of functions. For example, the host application may
be configured to synchronize video and/or audio data with biometric
data. This would allow, for example, an individual watching a sporting
event (e.g., on a TV, computer screen, etc.) to watch how each player’s
biometric data changes during play of the sporting event, or also to map
those biometric data changes to other players or other comparison
models. Similarly, a doctor, nurse, or medical technician could record a
person’s sleep habits, and watch, search or later review, the recording
(e.g., on a TV, computer screen, etc.) while monitoring the person’s
biometric data. The system could also use machine learning to build a
profile for each patient, identifying certain characteristics of the
patient (e.g., their heart rate rhythm, their breathing pattern, etc.)
and notify a doctor, a nurse, or medical technician or trigger an alarm
if the measured characteristics appear abnormal or irregular.
The host application could also be configured to provide biometric
data to a remote user via a network, such as the Internet. For example, a
biometric device (e.g., a smart phone with a blood-alcohol sensor)
could be used to measure a person’s blood-alcohol level (e.g., while the
person is talking to the remote user via the smart phone), and to
provide the person’s blood-alcohol level to the remote user. By placing
the sensor near, or incorporating it in, the microphone, such a system
would allow a parent, by participating in a telephone or video call with
their child, to determine whether the child has been drinking alcohol.
Different sensors known in the art could be used to sense different
chemicals in the person’s breath, or detect people’s breathing patterns
through analysis of sound and speed variations, allowing the monitoring
party to determine whether the subject has been using alcohol or other
controlled substances or to conduct breath analysis for other diagnostic
reasons.
The system could also be adapted with a so-called “lab on a chip”
(LOC) integrated in the device itself, or with a suitable attachment
added to it, for the remote testing, for example, of blood samples, where
the smart phone is either used for the collection and sending of the
sample to a testing laboratory for analysis, or is used to carry out the
sample collection and analysis within the device itself. In either case
the system is adapted such that the identity of the subject and their
blood sample are cross-authenticated for the purposes of sample and
analysis integrity as well as patient identity certainty, through the
simultaneous recording of the time-stamped video and time and/or
location (or GPS) stamping of the sample at the point of collection
and/or submission of the sample. This confirmation of identity is
particularly important for regulatory, record keeping and health
insurance reasons in the context of telemedicine, since the individual
will increasingly be performing functions which, till now, have been
carried out typically on-site at the relevant facility, by qualified and
regulated medical or laboratory staff, rather than by the subject using
a networked device, either for upload to the central analysis facility,
or for remote analysis on the device itself.
This, or the collection of other biometric data such as heart rate or
blood pressure, could also be applied in situations where it is
critical for safety reasons, to check, via regular remote video
monitoring in real time, whether, say, a pilot of a plane or a truck or
train driver is in fit and sound condition to be in control of their
vehicle or vessel, or whether, for example, they are experiencing a sudden
incapacity or heart attack, etc. Because the monitored person is being
videoed at the same time as providing time-stamped, geo-stamped and/or
sampled biometric data, there is less possibility for the monitored
person or a third party, to “trick”, “spoof” or bypass the system. In a
patient/doctor remote consultation setting, the system could be used for
secure video consults where also, from a regulatory or health insurance
perspective, the consultation and its occurrence is validated through
the time and/or geo stamp validation. Furthermore, where there is a
requirement for a higher level of authentication, the system could
further be adapted to use facial recognition or biometric algorithms, to
ensure that the correct person is being monitored, or facial expression
analysis could be used for behavioral pattern assessment.
The concern that a monitored party would not wish to be permanently
monitored (e.g., a senior citizen not wanting to have their every move
and action continuously videoed) could be mitigated by the incorporation
of various additional features. In one embodiment, the video would be
permanently recorded in a loop system which uses a reserved memory
space, recording for a predetermined time period of n minutes and then
automatically erasing the video, where n represents the selected minutes
in the loop and E is the event which prevents the recorded loop of n
minutes from being erased, and which triggers both the real-time
transmission of the visible state or actions of the monitored person to
the monitoring party, as well as the ability to rewind, in order for the
monitoring party to be able to review the physical manifestation leading up to E.
The trigger mechanism for E could be, for example, the occurrence of
biometric data outside the predefined range, or the notification of
another anomaly such as a fall alert, activated by movement or location
sensors such as a gyroscope, accelerometer or magnetometer within the
health band device worn by, say the senior citizen, or on their mobile
phone or other networked motion-sensing device in their proximity. The
monitoring party would be able not only to view the physical state of
the monitored party after E, whilst getting a simultaneous read-out of
their relevant biometric data, but also to review the events and
biometric data immediately leading up to the event trigger notification.
Alternatively, it could be further calibrated so that although video is
recorded, as before, in the n loop, no video from the n loop will
actually be transmitted to a monitoring party until the occurrence of E.
The advantages of this system include the respect of the privacy of the
individual, where only the critical event and the time preceding the
event would be available to a third party, resulting also in a desired
optimization of both the necessary transmission bandwidth and the data
storage requirements. It should be appreciated that the foregoing system
could also be configured such that the E notification for remote
senior, infant or patient monitoring is further adapted to include
facial tracking and/or expression recognition features.
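A minimal sketch of the n-minute loop and event trigger E (hypothetical names; a real recorder would buffer encoded video rather than raw frames):

    from collections import deque

    FRAMES_PER_SECOND = 30

    class LoopRecorder:
        """Record into a reserved, fixed-size memory loop of n minutes;
        old frames are erased automatically until event E occurs."""
        def __init__(self, n_minutes):
            self.frames = deque(maxlen=n_minutes * 60 * FRAMES_PER_SECOND)

        def add_frame(self, frame):
            self.frames.append(frame)   # oldest frame silently erased when full

        def on_event(self):
            """E (e.g., out-of-range biometrics, a fall alert): freeze and
            return the n minutes leading up to the event for review, and
            begin real-time transmission to the monitoring party."""
            return list(self.frames)

Until E occurs, nothing leaves the device, which is what preserves both the monitored party's privacy and the transmission bandwidth and storage described above.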
Privacy could be further improved for the user if their video data
and biometric data are stored by themselves, on their own device, or on
their own external, or own secure third-party “cloud” storage, but with
the index metadata of the source material, which enables the sequencing,
extrapolation, searching and general processing of the source data,
remaining at a central server, such as, in the case of medical records
for example, at a doctor’s office or other healthcare facility. Such a
system would enable the monitoring party to have access to the video and
other data at the time of consultation, but with the video etc.
remaining in the possession of the subject. A further advantage of
separating the storage of the video and biometric source data from the
treatment of the data, beyond enhancing the user’s privacy and data
security, is that storing the data locally with the subject, rather
than uploading it to the computational server, results both in reduced
cost and in increased efficiency of storage and data
bandwidth. This would also be of benefit where such remote uploads of
tests, for review by qualified medical staff at a different location
from the subject, occur in areas of lower-bandwidth
network coverage. A choice can also be made to lower the frame rate of
the video material, provided that this is made consistent with sampling
rate to confirm the correct time stamp, as previously described.
It should be appreciated that with information being stored at the
central server (or the host device), various techniques known in the art
can be implemented to secure the information, and prevent unauthorized
individuals or entities from accessing the information. Thus, for
example, a user may be provided (or allowed to create) a user name,
password, and/or any other identifying (or authenticating) information
(e.g., a user biometric, a key fob, etc.), and the host device may be
configured to use the identifying (or authenticating) information to
grant access to the information (or a portion thereof). Similar security
procedures can be implemented for third parties, such as medical
providers, insurance companies, etc., to ensure that the information is
only accessible by authorized individuals or entities. In certain
embodiments, the authentication may allow access to all the stored data,
or to only a portion of the stored data (e.g., a user authentication
may allow access to personal information as well as stored video and/or
biometric data, whereas a third party authentication may only allow
access to stored video and/or biometric data). In other embodiments, the
authentication is used to determine what services are available to an
individual or entity logging into the host device, or the website. For
example, visitors to the website (or non-subscribers) may only be able
to synchronize video/audio data to biometric data and/or perform
rudimentary searching or other processing, whereas a subscriber may be
able to synchronize video/audio data to biometric data and/or perform
more detailed searching or other processing (e.g., to create a highlight
reel, etc.).
It should further be appreciated that while there are advantages to
keeping just the index metadata at the central server (in the interests
of storage and data upload efficiency, as well as providing a common
platform for the interoperability of the different data types) and
storing the video and/or audio data on the user’s own device or cloud
(e.g., iCloud™, DropBox™, OneDrive™, etc.), the present invention is not so
limited. Thus, in certain embodiments, where feasible, it may be
beneficial to (1) store data (e.g., video, audio, biometric data, and
metadata) on the user’s device (e.g., allowing the user device to
operate independent of the host device), (2) store data (e.g., video,
audio, biometric data, and metadata) on the central server (e.g., host
device) (e.g., allowing the user to access the data from any
network-enabled device), or (3) store a first portion (e.g., video and
audio data) on the user’s device and store a second portion (e.g.,
biometric data and metadata) on the central server (e.g., host device)
(e.g., allowing the user to only view the synchronized
video/audio/biometric data when the user device is in communication with
the host device, allowing the user to only search the biometric data
(e.g., to create a “highlight reel”) or rank the biometric data (to
identify and/or list data chronologically, by magnitude (highest to
lowest or lowest to highest), by best or worst reviewed, by most or
least viewed, etc.) when the user device is in
communication with the host device, etc.).
In another embodiment of the present invention, the functionality of
the system is further (or alternatively) limited by the software
operating on the user device and/or the host device. For example, the
software operating on the user device may allow the user to play the
video and/or audio data, but not to synchronize the video and/or audio
data to the biometric data. This may be because the central server is
used to store data critical to synchronization (time-stamp index,
metadata, biometric data, sample rate, etc.) and/or software operating
on the host device is necessary for synchronization. By way of another
example, the software operating on the user device may allow the user to
play the video and/or audio data, either alone or synchronized with the
biometric data, but may not allow the user device (or may limit the
user device’s ability) to search or otherwise extrapolate from, or
process the biometric data to identify relevant portions (e.g., which
may be used to create a “highlight reel” of the synchronized
video/audio/biometric data) or to rank the biometric and/or video data.
This may be because the central server is used to store data critical to
searching and/or ranking the biometric data (biometric data, biometric
metadata, etc.), and/or to run software necessary for searching (or
performing advanced searching of) and/or ranking (or performing advanced
ranking of) the biometric data.
In any or all of the above embodiments, the system could be further
adapted to include password or other forms of authentication to enable
secured access (or deny unauthorized access) to the data in either or
both directions, such that the user requires permission to access
the host, or the host to access the user’s data. Where interaction
between the user and the monitoring party or host is occurring in real
time such as in a secure video consult between patient and their medical
practitioner or other medical staff, data could be exchanged and viewed
through the establishment of a Virtual Private Network (VPN). The
actual data (biometric, video, metadata index, etc.) can alternatively
or further be encrypted both at the data source, for example at the
individual’s storage, whether local or cloud-based, and/or at the
monitoring reviewing party, for example at patient records at the
medical facility, or at the host administration level.
In the context of very young infant monitoring, a critical and often
unexplained problem is Sudden Infant Death Syndrome (SIDS). Whilst the
incidences of SIDS are often unexplained, various devices attempt to
prevent its occurrence. However, by combining the elements of the
current system to include sensor devices in or near the baby’s crib to
measure relevant biometric data including heart rate, sleep pattern,
breath analyzer, and other measures such as ambient temperature,
together with a recording device to capture movement, audible breathing,
or lack thereof (i.e., silence) over a predefined period of time, the
various parameters could be set in conjunction with the time-stamped
video record, by the parent or other monitoring party, to provide a more
comprehensive alert, to initiate a more timely action or intervention
by the user, or indeed to decide that no action response would in fact
be necessary. Additionally, in the case, for example, of a crib
monitoring situation, the system could be configured to develop, from
previous observation, with or without input from a monitoring party, a
learning algorithm to help in discerning what is “normal,” what is false
positive, or what might constitute an anomaly, and therefore a call to
action.
The host application could also be configured to play video data that
has been synchronized to biometric data, or search for the existence of
certain biometric data. For example, as previously discussed, by video
recording with sound a person sleeping, and synchronizing the recording
with biometric data (e.g., sleep patterns, brain activity, snoring,
breathing patterns, etc.), the biometric data can be searched to
identify where certain measures such as sound levels, as measured for
example in decibels, or periods of silences, exceed or drop below a
threshold value, allowing the doctor, nurse, or medical technician to
view the corresponding video portion without having to watch the entire
video of the person sleeping.
Such a method is shown in FIG. 6, starting at step 700, where biometric data and time stamp data (e.g., start time, sample rate) is received (or linked) at step 702. Audio/video data and time stamp data (e.g., start time, etc.) is then received (or linked) at step 704. The time stamp data (from steps 702 and 704)
is then used to synchronize the biometric data with the audio/video
data at step 706. The user is then allowed to operate the audio/video at step 708. If the user selects play, then the audio/video is played at step 710. If the user selects search, then the user is allowed to search the biometric data at step 712. Finally, if the user selects stop, then the video is stopped at step 714.
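Sketched in code, the flow of FIG. 6 might look like the following (hypothetical helper logic; the step numbers refer to the figure):

    def synchronize(biometric, audio_video, start_time, sample_rate_spm=30):
        """Steps 702-706: link each biometric sample to its offset from
        the shared start time so it can be shown alongside the video."""
        interval_s = 60.0 / sample_rate_spm
        return {"av": audio_video,
                "bio": [(i * interval_s, v) for i, v in enumerate(biometric)],
                "start": start_time}

    def operate(session, commands):
        """Steps 708-714: honor the user's play/search/stop selections."""
        for command in commands:
            if command == "play":
                print("playing", session["av"])                # step 710
            elif command == "search":
                print("searchable samples:", session["bio"])   # step 712
            elif command == "stop":
                print("stopped")                               # step 714
                break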
It should be appreciated that the present invention is not limited to
the steps shown in FIG. 6. For example, a method that allows a user to
search for biometric data that meets at least one condition, play the
corresponding portion of the video (or a portion just before the
condition), and stop the video from playing after the biometric data no
longer meets the at least one condition (or just after the biometric
data no longer meets the condition) is within the spirit and scope of
the present invention. By way of another example, if the method involves
interacting between the user device and the host device to synchronize
the video/audio data and the biometric data and/or search the biometric
data, then the method may further involve the steps of uploading the
biometric data and/or metadata to the host device (e.g., in this
embodiment the video/audio data may be stored on the user device), and
using the biometric data and/or metadata to create a time-stamp index
for synchronization and/or to search the biometric data for relevant or
meaningful data (e.g., data that exceeds a threshold, etc.). By way of
yet another example, the method may not require step 706
if the audio/video data and the biometric data are played together
(synchronized) in real-time, or at the time the data is being played
(e.g., at step 710).
In one embodiment of the present invention, as shown in FIG. 8, the video data 800,
which may also include audio data, starts at a time “T” and continues
for a duration of “n.” The video data is preferably stored in memory
(locally and/or remotely) and linked to other data, such as an
identifier 802, start time 804, and duration 806.
Such data ties the video data to at least a particular session and a
particular start time, and identifies the duration of the video included
therein. In one embodiment of the present invention, each session can
include different activities. For example, a trip to a destination in
Berlin, or following a specific itinerary on a particular day (session)
may involve a bike ride through the city (first activity) and a walk
through a park (second activity). Thus, as shown in FIG. 9, the
identifier 802 may include both a session identifier 902, uniquely identifying the session via a globally unique identifier (GUID), and an activity identifier 904,
uniquely identifying the activity via a globally unique identifier
(GUID), where the session/activity relationship is that of a
parent/child.
In one embodiment of the present invention, as shown in FIG. 10, the biometric data 1000 is stored in memory and linked to the identifier 802 and a sample rate “m” 1004. This allows the biometric data to be linked to video data upon playback. For example, if identifier 802 is one, start time 804 is 1:00 PM, video duration is one minute, and the sample rate 1004 is 30 spm, then playing the video at 2:00 PM would result in the first biometric value (biometric(1)) being displayed (e.g., below the video, over the video, etc.) at 2:00 PM, the second biometric value (biometric(2))
being displayed (e.g., below the video, over the video, etc.) two
seconds later, and so on until the video ends at 2:01 PM. While
self-realization data can be stored like biometric data (e.g., linked to
a sample rate), if such data is only received periodically, it may be
more advantageous to store this data 1100 as shown in FIG. 11, i.e., linked to the identifier 802 and a time-stamp 1104, where the time-stamp is either the time that the self-realization data 1100 was received or an offset between this time and the start time 804 (e.g., ten minutes and four seconds after the start time, etc.).
This can be seen, for example, in FIG. 14, where video data starts at
time T, biometric data is sampled every two seconds (30 spm), and
self-realization data is received at time T+3 (or three units past the
start time). While the video 1402 is playing, a first biometric value 1404 is displayed at time T+2, the first self-realization data 1406 is displayed at time T+3, and a second biometric value 1408
is displayed at time T+4. By storing data in this fashion, both video
and non-video data can be stored separately from one another and
synchronized in real-time, or at the time the video is being played. It
should be appreciated that while separate storage of data may be
advantageous for devices having minimal memory and/or processing power,
the client platform may be configured to create new video data, or data
that includes both video and non-video data displayed synchronously.
Such a feature may be advantageous in creating a highlight reel, which can
then be shared using social media websites, such as Facebook™ or
Youtube™, and played using standard playback software, such as
Quicktime™. As discussed in greater detail below, a highlight reel may
include various portions (or clips) of video data (e.g., when certain
activity takes place, etc.) along with corresponding biometric data.
When sampled data is subsequently displayed, the client platform can
be configured to display this data using certain extrapolation
techniques. For example, in one embodiment of the present invention, as
shown in FIG. 12, where a first biometric value 1202 is displayed at T+1, a second biometric value 1204 is displayed at T+2, and a third biometric value 1206
is displayed at T+3, biometric data can be displayed at non-sampled
times using known extrapolation techniques, including linear and
non-linear interpolation and all other extrapolation and/or
interpolation techniques generally known to those skilled in the art. In
another embodiment of the present invention, as shown in FIG. 13, the
first biometric value 1202 remains on the display until the second biometric value 1204 is displayed, the second biometric value 1204 remains on the display until the third biometric value 1206 is displayed, and so on.
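Both display strategies fit in a single lookup (a sketch only; linear interpolation is shown for the FIG. 12 case, but any known extrapolation or interpolation technique could be substituted):

    def display_value(samples, t_s, interval_s=2.0, mode="hold"):
        """Value to display at time t_s between sampled points: FIG. 13
        holds the last sample; FIG. 12 interpolates between neighbors."""
        i = min(int(t_s // interval_s), len(samples) - 1)
        if mode == "hold" or i == len(samples) - 1:
            return samples[i]
        fraction = (t_s - i * interval_s) / interval_s
        return samples[i] + fraction * (samples[i + 1] - samples[i])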
With respect to linking data to an identifier, which may be linked to
other data (e.g., start time, sample rate, etc.), if the data is
received in real-time, the data can be linked to the identifier(s) for
the current session (and/or activity). However, when data is received
after the fact (e.g., after a session has ended), there are several ways
in which the data can be linked to a particular session and/or activity
(or identifier(s) associated therewith). The data can be manually
linked (e.g., by the user) or automatically linked via the application.
With respect to the latter, this can be accomplished, for example, by
comparing the duration of the received data (e.g., the video length)
with the duration of the session and/or activity, by assuming that the
received data is related to the most recent session and/or activity, or
by analyzing data included within the received data. For example, in one
embodiment, data included with the received data (e.g., metadata) may
identify a time and/or location associated with the data, which can then
be used to link the received data to the session and/or activity. In
another embodiment, the computing device could display or play data
(e.g., a barcode, such as a QR code, a sound, such as a repeating
sequence of notes, etc.) that identifies the session and/or activity. An
external video/audio recorder could record the identifying data (as
displayed or played by the computing device) along with (e.g., before,
after, or during) the user and/or his/her surroundings. The application
could then search the video/audio data for identifying data, and use
this data to link the video/audio data to a session and/or activity. The
identifying portion of the video/audio data could then be deleted by
the application if desired. In an alternate embodiment, a barcode (e.g.,
a QR code) could be printed on a physical device (e.g., a medical
testing module, which may allow communication of medical data over a
network (e.g., via a smart phone)) and used (as previously described) to
synchronize video of the user using the device to data provided by the
device. In the case of a medical testing module, the barcode printed on
the module could be used to synchronize video of the testing to the test
result provided by the module. In yet another embodiment, both the
computing device and the external video/audio recorder are used to
record video and/or audio of the user (e.g., the user stating “begin
Berlin biking session,” etc.) and to use the user-provided data to link
the video/audio data to a session and/or activity. For example, the
computing device may be configured to link the user-provided data with a
particular session and/or activity (e.g., one that is started, one that
is about to start, one that just ended, etc.), and to use the
user-provided data in the video/audio data to link the video/audio data
to the particular session and/or activity.
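The duration-comparison option, for instance, could be as simple as the following sketch (hypothetical record layout; the QR-code and spoken-marker options above would replace the overlap test with a decode-and-match step):

    def link_to_session(clip_start, clip_length_s, sessions):
        """Automatically link an after-the-fact clip to the session whose
        recorded window it overlaps the most, if it overlaps any at all."""
        def overlap(session):
            lo = max(clip_start, session["start"])
            hi = min(clip_start + clip_length_s,
                     session["start"] + session["length_s"])
            return max(0.0, hi - lo)
        best = max(sessions, key=overlap)
        return best["id"] if overlap(best) > 0 else None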
In one embodiment of the present invention, the client platform (or
application) is configured to operate on a smart phone or a tablet. The
platform (either alone or together with software operating on the host
device) may be configured to create a session, receive video and
non-video data during the session, and playback video data together
(synchronized) with non-video data. The platform may also allow a user
to search for a session, search for certain video and/or non-video
events, and/or create a highlight reel. FIGS. 15-29 show exemplary
screen shots of such a platform.
For example, FIG. 15 shows an exemplary “sign in” screen 1500,
allowing a user to sign into the application and have access to
application-related, user-specific data, as stored on the computing
device and/or the host computing device. The login may involve a user ID
and password unique to the application, the company cloud, or a social
service website, such as Facebook™.
Once the user is signed in, the user may be allowed to create a session via an exemplary “create session” screen 1600,
as shown in FIG. 16. In creating a session, the user may be allowed to
select a camera (e.g., internal to the computing device, external to the
computing device (e.g., accessible via the Internet, connected to the
computing device via a wired or wireless connection), etc.) that will be
providing video data. Once a camera is selected, video data 1602
from the camera may be displayed on the screen. The user may also be
allowed to select a biometric device (e.g., internal to the computing
device, external to the computing device (e.g., accessible via the
Internet, connected to the computing device via a wired or wireless
connection), etc.) that will be providing biometric data. Once a
biometric device is selected, biometric data 1604 from
the biometric device may be displayed on the screen. The user can then
start the session by clicking the “start session” button 1608.
While the selection process is preferably performed before the session
is started, the user may defer selection of the camera and/or biometric
device until after the session is over. This allows the application to
receive data that is not available in real-time, or is being provided by
a device that is not yet connected to the computing device (e.g., an
external camera that will be plugged into the computing device once the
session is over).
It should be appreciated that in a preferred embodiment of the present invention, clicking the “start session” button 1608 not only starts a timer 1606
that indicates a current length of the session, but it triggers a start
time that is stored in memory and linked to a globally unique
identifier (GUID) for the session. By linking the video and biometric
data to the GUID, and linking the GUID to the start time, the video and
biometric data is also (by definition) linked to the start time. Other
data, such as sample rate, can also be linked to the biometric data,
either by linking the data to the biometric data, or linking the data to
the GUID, which is in turn linked to the biometric data.
Either before the session is started, or after the session is over,
the user may be allowed to enter a session name via an exemplary
“session name” screen 1700, as shown in FIG. 17.
Similarly, the user may also be allowed to enter a session description
via an exemplary “session description” screen 1800, as shown in FIG. 18.
FIG. 19 shows an exemplary “session started” screen 1900, which is a screen that the user might see while the session is running. On this screen, the user may see the video data 1902 (if provided in real-time), the biometric data 1904 (if provided in real-time), and the current running time of the session 1906. If the user wishes to pause the session, the user can press the “pause session” button 1908,
or if the user wishes to stop the session, the user can press the “stop
session” button (not shown). By pressing the “stop session” button (not
shown), the session is ended, and a stop time is stored in memory and
linked to the session GUID. Alternatively, by pressing the “pause
session” button 1908, a pause time (first pause time)
is stored in memory and linked to the session GUID. Once paused, the
session can then be resumed (e.g., by pressing the “resume session”
button, not shown), which will result in a resume time (first resume
time) being stored in memory and linked to the session GUID. Regardless
of whether a session is started and stopped (i.e., resulting in a single
continuous video), or started, paused (any number of times), resumed
(any number of times), and stopped (i.e., resulting in a plurality of
video clips), for each start/pause time stored in memory, there should
be a corresponding stop/resume time stored in memory.
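That bookkeeping could be kept as a simple list of marks against the session GUID (an illustrative sketch, not a prescribed schema):

    import time, uuid

    def new_session():
        return {"guid": uuid.uuid4(), "marks": []}

    def mark(session, kind):
        """Record start/pause/resume/stop times against the session GUID;
        every start/pause should be paired with a later stop/resume."""
        assert kind in ("start", "pause", "resume", "stop")
        session["marks"].append((kind, time.time()))

    def clips(session):
        """Each (start-or-resume, pause-or-stop) pair bounds one
        continuous video clip of the session."""
        bounds, opened = [], None
        for kind, stamp in session["marks"]:
            if kind in ("start", "resume"):
                opened = stamp
            elif opened is not None:
                bounds.append((opened, stamp))
                opened = None
        return bounds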
Once a session has been stopped, it can be reviewed via an exemplary “review session” screen 2000,
as shown in FIG. 20. In its simplest form, the review screen may
playback video data linked to the session (e.g., either a single
continuous video if the session does not include at least one
pause/resume, multiple video clips played one after another if the
session includes at least one pause/resume, or multiple video clips
played together if the multiple video clips are related to one another
(e.g., two videos (e.g., from different vantage points) of the user
performing a particular activity, or a first video of the user performing a
particular activity while viewing a second video, such as a training
video)). If the user wants to see non-video data displayed along with the
video data, the user can press the “show graph options” button 2022. By pressing this button, the user is presented with an exemplary “graph display option” screen 2100,
as shown in FIG. 21. Here, the user can select data that he/she would
like to see along with the video data, such as biometric data (e.g.,
heart rate, heart rate variance, user speed, etc.), environmental data
(e.g., temperature, altitude, GPS, etc.), or self-realization data
(e.g., how the user felt during the session). FIG. 22 shows an exemplary
“review session” screen 2000 that includes both video data 2202 and biometric data, which may be shown in graph form 2204 or written form 2206.
If more than one individual can be seen in the video, the application
may be configured to show biometric data on each individual, either at
one time, or as selected by the user (e.g., allowing the user to view
biometric data on a first individual by selecting the first individual,
allowing the user to view biometric data on a second individual by
selecting the second individual, etc.).
FIG. 23 shows an exemplary “map” screen 2300, which
may be used to show GPS data to the user. Alternatively, GPS data can be
presented together with the video data (e.g., below the video data,
over the video data, etc.). An exemplary “summary” screen 2400
of the session may also be presented to the user (see FIG. 24),
displaying session information such as session name, session
description, various metrics, etc.
By storing video and non-video data separately, the data can easily
be searched. For example, FIG. 25 shows an exemplary “biometric search”
screen 2500, where a user can search for a particular
biometric value or range (i.e., a biometric event). By way of example,
the user may want to jump to a point in the session where their heart
rate is between 95 and 105 beats-per-minute (bpm). FIG. 26 shows an
exemplary “first result” screen 2600 where the user’s heart rate is at 100.46 bpm twenty minutes and forty-two seconds into the session (see, e.g., 2608). FIG. 27 shows an exemplary “second result” screen 2700 where the user’s heart rate is at 100.48 bpm twenty-three minutes and forty-eight seconds into the session (see, e.g., 2708).
It should be appreciated that other events can be searched for in a
session, including video events and self-realization events.
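Because the samples are linked to a start time and a sample rate, a biometric event search reduces to a scan that maps matching samples back to video offsets (a sketch; the 95-105 bpm example mirrors FIGS. 25-27):

    def search_biometric(values, low, high, sample_rate_spm=30):
        """Return (offset_seconds, value) for every sample in [low, high],
        so playback can jump straight to those points in the session."""
        interval_s = 60.0 / sample_rate_spm
        return [(i * interval_s, v) for i, v in enumerate(values)
                if low <= v <= high]

    # e.g., search_biometric(heart_rates, 95, 105) might yield
    # [(1242.0, 100.46), (1428.0, 100.48)], i.e., hits at 20:42 and 23:48.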
Not only can data within a session be searched, but so too can data
from multiple sessions. For example, FIG. 28 shows an exemplary “session
search” screen 2800, where a user can enter particular
search criteria, including session date, session length, biometric
events, video event, self-realization event, etc. FIG. 29 shows an
exemplary “list” screen 2900, showing sessions that meet the entered criteria.
The foregoing description of a system and method for using,
processing, and displaying biometric data, or a resultant thereof, has
been presented for the purposes of illustration and description. It is
not intended to be exhaustive or to limit the invention to the precise
forms disclosed, and many modifications and variations are possible in
light of the above teachings. Those skilled in the art will appreciate
that there are a number of ways to implement the foregoing features, and
that the present invention is not limited to any particular way of
implementing these features. The invention is solely defined by the
following claims.