
SenseCam Study Helps Show Humans, Machines What 'Healthy' Looks Like

San Diego, Calif., Nov. 3, 2011— Imagine someone were to take a photo of what you were doing every 15 seconds of every hour you were awake. How many of the resulting images would depict you staring at a computer monitor or television screen? How many would depict you eating a healthy meal or taking a walk outside?

Watch 'a day in the life' of a SenseCam user. Length: 2:53. Video courtesy of Dr. Cathal Gurrin, Dublin City University.

Researchers at the University of California, San Diego’s Center for Wireless and Population Health Systems (CWPHS) have begun to collect precisely this type of visual “life-logging” data with a Microsoft research prototype camera known as the SenseCam. The researchers, who are based at the UCSD division of the California Institute for Telecommunications and Information Technology (Calit2), hope the images they’ve gathered in a recent pilot study will reveal details about the modern “sedentary” lifestyle and teach machines how to help people live more healthfully. 

“The SenseCam helps give researchers and clinicians an idea of the lifestyle choices a person makes from one moment to the next,” says Jacqueline Kerr, a principal investigator for the study and an assistant professor with the UCSD Department of Family and Preventive Medicine. “It’s also a way of presenting feedback so clinicians can recommend behavior modifications. If you show a person their day in a time-lapse video, they’ll likely say, ‘My God! I spend ages in front of the TV!’”

The SenseCam, which is now manufactured commercially as the Vicon Revue, is a lightweight device designed to be worn around the neck so its internal camera can take a wide-angle, first-person point-of-view photo every 10-15 seconds. The resulting images are purposely low-resolution (30-40 KB JPEGs) to protect the user’s privacy: if the camera snaps a photo of the computer screen the user is staring at, the content on that screen is at too low a resolution to read. Users can also delete images they are uncomfortable sharing, or “pause” the camera for five and a half minutes (a feature essentially designed for bathroom breaks). 

What the SenseCam does capture is the “bigger picture” — how often a person eats, for example, or if the majority of physical activity in a person’s day is done in a pleasant leafy green space that is more conducive to exercise, or in a busy urban environment with few usable sidewalks.

“Being able to measure physical activity and dietary intake is important for establishing links to wellness or disease,” adds Paul Kelly, a Ph.D. candidate at Oxford University and a member of the SenseCam research team. “And if you want to do a lifestyle intervention, you need to accurately know what the activity levels are before and after to know whether it has worked.”

SenseCam
The SenseCam is a lightweight device designed to be worn around the neck so its internal camera can take a wide-angle, first-person point-of-view photo every 10-15 seconds.
For the pilot study, the team at CWPHS outfitted 100 individuals in the U.S., UK and New Zealand with SenseCam cameras for at least three days to quantify how much the camera improved upon CWPHS’ existing suite of wearable sensors, known as the Physical Activity and Location Measurement System (PALMS). PALMS measures physical activity (with accelerometers and heart rate/motion sensors, for example), physical location (via GPS) and, if a researcher desires, environmental parameters (such as air quality) to detect patterns in physical activity within the context of a specific environment. But as advanced as it is, even PALMS has its limits, say the researchers.

“With the accelerometers and motion sensors, we know how much the hips are moving but not exactly what the person is doing,” Kelly explains. “And with GPS, all we know is where the person is going. We don’t know if they’re cycling or driving.”  Self-reporting via food or exercise diaries — another popular data-gathering method — is also flawed because individuals tend to give answers that are socially desirable rather than wholly accurate.

“The way people analyzed this type of data in the past was crude,” notes Kerr. “We wanted to know exactly what kind of data we were actually getting from these devices, but following people around and recording what they do can be very expensive. We wanted to see if the SenseCam could be used to improve the measurements we have now.”

Adds Kelly: “The idea is that if we can combine something like SenseCam with existing measures, we can remove the limitations. The images could help researchers determine how accurate an exercise diary is, for example, and then come up with correction factors, like ‘men exaggerate twice as much as women.’ Combined with an accelerometer, it could help us estimate energy expenditure but also see what the individual was doing at the time of the movement.”

For clinicians, the SenseCam provides important contextual data that could assist them in making lifestyle recommendations.

First-person perspective view of a street scene taken with the SenseCam
"Being able to measure physical activity and dietary intake is important for establishing links to wellness or disease,” says Paul Kelly, a member of the SenseCam research team.
“The SenseCam might capture what someone is eating and whether they’re watching the telly at the same time,” explains Kerr. “If they are eating in a social situation, the camera can tell us who they’re with when they eat the worst foods. That’s visually important information for a dietitian.”

Kelly, Kerr and their colleagues on the study, who include CWPHS Director Kevin Patrick of UCSD and San Diego State University (SDSU) associate professor Simon Marshall, as well as Oxford University researchers Charlie Foster and Aiden Doherty, are now analyzing and coding the images to teach computers to automatically recognize behaviors and environments, eventually eliminating the need for laborious human coding.

“To train the machine or validate what we’re finding, you need the ‘truth file,’” says Kerr. “If the SenseCam can be used to help us validate behaviors, that’s one phase.”

The researchers say the SenseCam and similar devices could be used to modify just about any behavior, from smoking to substance abuse to parenting. Alzheimer’s patients and other cognitively impaired individuals might even use camera-based devices to make memory diaries, say the researchers.

Kelly says he envisions a future in which an individual’s general practitioner would ask the patient to wear the SenseCam (or a similarly equipped mobile technology) for a week and then suggest behavior modifications.

“If you want to change someone’s behavior you have to personalize it,” he notes. “The doctor might say, ‘You ate with your family only once this week. You ate this many times while driving. You only walked once this week. If you can walk again, here’s what might happen.’ If your body mass index is too high, rather than giving you generic information, the doctor can look at the information from the SenseCam and tailor a plan. Later the doctor can see whether the patient actually followed that advice.”

“You still have to have willpower,” concludes Kerr, “but at least you have the images to remind you. The images are just so powerful.”

Media Contacts

Tiffany Fox, (858) 246-0353, tfox@ucsd.edu