Smart Environment Technologies for Healthcare and Sustainability

AirSense: Indoor Air Quality Sensing and Assessment to Promote Healthy and Sustainable Lifestyles

In the United States, people spend approximately 90 percent of their day inside buildings and inhale about 15 kilograms of air on average every day. Because it is closely tied to the comfort and health of occupants, indoor air quality (IAQ) plays a significant role in people's daily lives. In recent years, a growing body of research has shown that the air quality inside homes, workplaces, and other buildings can be much worse than the outdoor air quality in many populated modern cities. However, since most indoor air pollutants are colorless, odorless, or too small to be seen, it is difficult for people to sense and assess their indoor air quality. To raise public awareness of indoor air quality and promote healthy lifestyles, we have designed and developed an indoor air quality monitoring and assessment system, called AirSense, which helps people sense and assess the air quality of their living environments. AirSense senses ambient environmental information and the concentrations of several of the most common indoor air pollutants. It uploads the sensed data to a cloud service that allows people to explore their everyday indoor air quality. In addition, by analyzing the sensed air quality data on the mobile phone, AirSense can identify household activities that are potential sources of indoor air pollution and suggest actions people can take to improve their IAQ. We conducted a four-week field study with three families to validate the effectiveness of AirSense. Initial results indicate that AirSense effectively raises people's awareness of IAQ and motivates them to change unhealthy behaviors in order to keep their indoor air clean.
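To make the two core ideas concrete (mapping a pollutant reading to a human-readable category, and flagging a sudden concentration spike as a likely household pollution event), here is a minimal sketch. The breakpoints and the `jump` threshold are illustrative only; a real system would use the official AQI tables and a calibrated detector.

```python
def pm25_category(ug_m3):
    # Illustrative breakpoints only; a deployed system would use the
    # official AQI tables for its region.
    for limit, label in [(12, "good"), (35, "moderate"),
                         (55, "unhealthy for sensitive groups"),
                         (float("inf"), "unhealthy")]:
        if ug_m3 <= limit:
            return label

def pollution_events(readings, jump=15):
    """Flag sample indices where the concentration jumps sharply,
    a crude proxy for a polluting household activity starting."""
    return [i for i in range(1, len(readings))
            if readings[i] - readings[i - 1] > jump]

# A short run of hypothetical PM2.5 readings: a cooking-like spike at index 3.
readings = [8, 9, 10, 60, 55, 30, 12]
print([pm25_category(r) for r in readings[:2]])  # ['good', 'good']
print(pollution_events(readings))                # [3]
```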

Human Daily Activity Monitoring and Recognition for Personalized Healthcare

The Design of Physical Features for Human Activity Recognition

It is well understood that high-quality features are essential for improving the classification accuracy of any pattern recognition system. In this project, we focus on wearable sensor-based human activity recognition. The objective is to identify the features that best differentiate the various activities in people's daily lives. Specifically, we design a set of new features (called Physical Features) based on the physical parameters of human activities. These physical features are expected to represent activity signals more accurately and concisely than the commonly used statistical features. To systematically evaluate the impact of the physical features on recognition performance, we developed a single-layer feature selection framework. Experimental results indicate that physical features are consistently among the top features selected by different feature selection methods, and that recognition accuracy is 8% higher than when only statistical features are used. Moreover, we show that recognition performance improves further when the single-layer framework is extended to a multi-layer framework that exploits the inherent structure of human activities and performs feature selection and classification hierarchically.
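As a flavor of what a physically motivated feature looks like, here is a minimal sketch of two such features computed from a window of 3-axis accelerometer samples. The feature names and thresholds are illustrative, not the paper's exact feature set.

```python
import numpy as np

def movement_intensity(acc):
    """Mean magnitude of the 3-axis acceleration vector over a window.
    Physically, this reflects how vigorously the body is moving,
    independent of sensor orientation."""
    return float(np.mean(np.linalg.norm(acc, axis=1)))

def signal_magnitude_area(acc):
    """Sum of per-axis mean absolute values (SMA), a feature often used
    to separate static postures from dynamic activities."""
    return float(np.sum(np.mean(np.abs(acc), axis=0)))

# A still posture (gravity only) vs. a vigorous activity (synthetic data).
rng = np.random.default_rng(0)
still = np.tile([0.0, 0.0, 9.8], (128, 1))
active = still + rng.normal(0, 3.0, size=(128, 3))
print(movement_intensity(still) < movement_intensity(active))  # True
```

Unlike a raw statistical moment, each of these quantities has a direct physical reading (overall movement vigor), which is what makes such features compact and discriminative.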


Motion Primitive Modeling and Identification

Motion primitives are defined as the basic units from which complex human activities are constructed. The idea originates from human speech recognition, owing to the similarity between human activity signals and speech signals. In speech recognition, sentences are first divided into isolated words, and each word is further divided into a sequence of phonemes. English has about 50 phonemes shared by all English words. Models are first built for each phoneme; these phoneme models then act as building blocks for words and sentences. Following this idea, the objective of this work is to build activity models based on motion primitives. The key issues for this motion primitive-based model are: (1) constructing useful motion primitives that contain salient motion information; and (2) representing activities based on the extracted primitives in a meaningful way. We have used two approaches to address these issues from different perspectives. The first adopts the Bag-of-Features (BoF) framework, which uses clustering techniques to construct motion primitives. The second formulates the problem as an L0-norm optimization problem that favors the sparsest solutions. Both approaches perform well and achieve a significant improvement over the conventional string-matching-based approach.
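The BoF idea above can be sketched in a few lines: cluster fixed-length signal windows to obtain a codebook of primitives, then represent each activity as a histogram over those primitives. This is a deliberately minimal, self-contained version (plain k-means, synthetic signals), not the project's actual pipeline.

```python
import numpy as np

def build_codebook(windows, k, iters=20, seed=0):
    """Plain k-means over signal windows; the centroids play the role
    of motion primitives (the 'phonemes' of movement)."""
    rng = np.random.default_rng(seed)
    centers = windows[rng.choice(len(windows), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((windows[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = windows[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def bof_histogram(windows, centers):
    """Represent an activity as a normalized histogram over primitives."""
    labels = np.argmin(((windows[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Two synthetic 'activities' that exercise different primitives.
rng = np.random.default_rng(1)
walk = np.sin(np.linspace(0, 8 * np.pi, 400)).reshape(-1, 8) + rng.normal(0, .05, (50, 8))
rest = np.zeros((50, 8)) + rng.normal(0, .05, (50, 8))
codebook = build_codebook(np.vstack([walk, rest]), k=4)
h_walk, h_rest = bof_histogram(walk, codebook), bof_histogram(rest, codebook)
print(h_walk.round(3), h_rest.round(3))
```

The two histograms differ markedly, which is exactly what a downstream classifier exploits.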


Learning and Recognizing Activity Manifolds

Manifold learning is an important technique for effective nonlinear dimensionality reduction in machine learning. In this work, we propose a framework based on manifold learning techniques that embeds high-dimensional human activity signals into a low-dimensional subspace for compact representation and recognition. The key idea stems from the observation that the sensor signals of a subject performing an activity are constrained by body kinematics and by the temporal structure of the activity being performed. Given these constraints, the sensor signals are expected to vary smoothly and lie on a low-dimensional manifold embedded in the high-dimensional input space. Moreover, these manifolds capture the intrinsic activity structure and act as trajectories that characterize different types of activities. We therefore refer to these low-dimensional manifolds as Activity Manifolds. Our experimental results validate the existence of activity manifolds and demonstrate that activity recognition performed on this compact representation achieves promising results.
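To illustrate the principle, here is a minimal Isomap-style sketch (not the paper's method): build a k-nearest-neighbor graph over the samples, compute geodesic distances with Floyd-Warshall, and recover a 1-D embedding with classical MDS. The synthetic data is a smooth 1-D "activity trajectory" curled into 3-D, standing in for real sensor signals.

```python
import numpy as np

def isomap_1d(X, n_neighbors=5):
    """Minimal Isomap sketch: kNN graph -> geodesic distances
    (Floyd-Warshall) -> classical MDS down to one dimension."""
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None], axis=-1)
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):                       # symmetric kNN edges
        for j in np.argsort(d[i])[1:n_neighbors + 1]:
            G[i, j] = G[j, i] = d[i, j]
    for k in range(n):                       # all-pairs shortest paths
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    H = np.eye(n) - np.ones((n, n)) / n      # double-centering for MDS
    B = -0.5 * H @ (G ** 2) @ H
    w, v = np.linalg.eigh(B)
    return v[:, -1] * np.sqrt(max(w[-1], 0.0))

# A 1-D trajectory embedded nonlinearly in 3-D (a noisy helix).
t = np.linspace(0, 3 * np.pi, 60)
X = np.c_[np.cos(t), np.sin(t), 0.3 * t] + np.random.default_rng(2).normal(0, .01, (60, 3))
emb = isomap_1d(X)
```

Despite the nonlinear curl, the recovered 1-D coordinate tracks the underlying trajectory parameter almost perfectly, which is the sense in which a manifold gives a compact yet faithful representation.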


Body Area Sensor Network and Virtual Reality for Computerized Rehabilitation

Microsoft Kinect + Serious Game for Pervasive Virtual Rehabilitation

The use of Virtual Reality (VR) technology for developing rehabilitation tools has attracted significant interest in the physical therapy arena. The core idea of VR-based rehabilitation is to use sensing devices to capture and quantitatively assess the movements of patients under treatment so that their progress can be tracked more accurately. In addition, by integrating VR technology with video games, the goal is for patients to be more motivated and engaged in these physical activities. Traditional motion capture systems are relatively expensive and typically housed at large medical facilities, requiring patient visits and further increasing operational cost. In this project, we are exploring Microsoft Kinect as a VR tool for advancing rehabilitation for patients with spinal cord injuries (SCI). Kinect includes an RGB camera and a depth sensor, which together provide full-body 3D motion capture and joint tracking. More importantly, Kinect is inexpensive, easy to set up, and usable in both home and clinical environments. This "pervasive" accessibility could significantly facilitate rehabilitation by allowing more frequent repetition of exercises outside standard therapy sessions.


Beyond the Standard Clinical Rating Scales: Fine-Grained Motor Function Assessment Using Wearable Technology

Stroke is a leading cause of adult disability and death worldwide. According to the latest report from the American Heart Association, approximately 795,000 people experience a new or recurrent stroke every year in the United States. Existing clinical studies demonstrate that many post-stroke patients can regain their impaired mobility and coordination through comprehensive physical rehabilitation. To design the most appropriate rehabilitation interventions for a patient, it is important to accurately assess the patient's current motor function. Standardized motor function assessments typically require clinicians and physical therapists to supervise patients' movements and rate their performance using standard clinical rating scales. However, this strategy has two drawbacks. First, because the assessment is based on the clinician's subjective judgment, its accuracy and consistency may vary significantly across clinicians. Second, the rating scales cannot record the details of motor performance and thus fail to precisely evaluate the patient's progress during rehabilitation. To bridge these gaps, we are developing a methodology for fine-grained assessment of motor function in stroke survivors using wearable sensing technology. Our approach provides quantitative evaluations of patients' motor performance based on sensor signals and serves as a significant complement to the standard clinical rating scales.
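One example of the kind of quantitative measure a sensor-based assessment can compute, where a rating scale cannot, is movement smoothness. The sketch below computes the log dimensionless jerk (LDLJ) of a speed profile; higher (less negative) values indicate smoother movement. This is an illustrative metric, not the project's full assessment pipeline.

```python
import numpy as np

def log_dimensionless_jerk(speed, dt):
    """LDLJ of a movement's speed profile: -log of the duration- and
    amplitude-normalized integrated squared jerk. Higher values mean
    smoother movement."""
    T = len(speed) * dt
    jerk = np.gradient(np.gradient(speed, dt), dt)
    return float(-np.log((T ** 3 / np.max(speed) ** 2) * np.sum(jerk ** 2) * dt))

dt = 0.01
t = np.arange(0.0, 1.0, dt)
smooth = np.sin(np.pi * t)                       # bell-shaped reaching movement
tremor = smooth + 0.05 * np.sin(40 * np.pi * t)  # same reach with tremor overlaid
print(log_dimensionless_jerk(smooth, dt) > log_dimensionless_jerk(tremor, dt))  # True
```

Even a small tremor drives the jerk integral up sharply, so the metric separates the two movements cleanly while the peak speed and duration, the quantities a coarse rating might reflect, are nearly identical.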


RehabSPOT: The Design of a Customizable Networked Body Area Sensing System for Physical Rehabilitation

Motor-training tasks in conventional therapy interventions are carried out by physical therapists based on years of experience. However, this methodology is limited by the therapists' ability to systematically control stimulus presentation and precisely capture motor performance for diagnosis and evaluation in real time. As a consequence, the status of patients and the progress they achieve during rehabilitation cannot be reliably monitored or precisely evaluated. It is likewise very difficult for physical therapists to evaluate the efficiency of the motor-training tasks they design. To fill this gap, we are developing a body area sensing system called RehabSPOT, which provides numerous capabilities beyond what is available with conventional therapy interventions. Specifically, RehabSPOT collects patients' motion signals from sensors attached to different parts of their bodies. The collected signals are transmitted wirelessly to a desktop machine for display and quantitative assessment by professional medical personnel. Moreover, patients with different levels of impaired mobility normally require different rehabilitation interventions; RehabSPOT supports rapid configuration to facilitate personalized intervention delivery. Our prototype system has been installed in a local clinical center in Southern California for testing and evaluation. Initial feedback from patients under rehabilitation treatment and from professional physical therapists is promising, indicating that the RehabSPOT platform holds great potential to benefit physical therapists' daily work.


Energy Optimization for Networked Body Area Sensing System

Power consumption is a critical issue in body area sensor networks. One important source of power consumption is data sampling at each sensor node: the higher the sampling rate, the more power is consumed. In this work, we explore the feasibility of using a Markov Decision Process (MDP) to learn a data sampling policy for optimal power control in a body area sensor network. The MDP sampling policy is learned from the current human activity, the physiological condition, and the energy available at each sensor node. The policy is optimal in the sense that the data sampling rate is minimized without degrading the activity and physiological condition recognition performance. However, the space complexity of representing the learned MDP policy is exponential in the number of sensor nodes, so a compact representation of the decision policy is needed to deploy the model on sensor nodes with limited memory. We build and compare compact representations of the MDP policy using different supervised learners. Experimental results show that unpruned decision trees and high-confidence pruned decision trees provide the lowest error rate, while the number of tree nodes remains small enough to be stored on the sensors. Ensembles of lower-confidence trees achieve perfect representation with only an order-of-magnitude increase in classifier size compared to individual pruned trees.
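The compression idea can be illustrated with a toy example: a tabular policy over 8 binary sensor-energy variables and a 4-valued activity has 1024 entries, but if the optimal action in fact depends on only a couple of state variables, a small decision tree reproduces it exactly. The policy, state encoding, and tree learner below are all hypothetical simplifications of the paper's setup.

```python
import numpy as np
from itertools import product

def fit_tree(X, y, depth=4):
    """Tiny greedy decision tree over discrete features; returns either
    an action leaf or a (feature, {value: subtree}) node."""
    if len(np.unique(y)) == 1 or depth == 0:
        return int(np.bincount(y).argmax())
    def miss(f):  # misclassifications if we split on feature f
        return sum(int((y[X[:, f] == v] != np.bincount(y[X[:, f] == v]).argmax()).sum())
                   for v in np.unique(X[:, f]))
    f = min(range(X.shape[1]), key=miss)
    return (f, {int(v): fit_tree(X[X[:, f] == v], y[X[:, f] == v], depth - 1)
                for v in np.unique(X[:, f])})

def predict(tree, x):
    while isinstance(tree, tuple):
        tree = tree[1][int(x[tree[0]])]
    return tree

def n_nodes(tree):
    return 1 if not isinstance(tree, tuple) else 1 + sum(map(n_nodes, tree[1].values()))

# Hypothetical policy table: 2**8 * 4 = 1024 states, but suppose the learned
# policy samples fast (action 1) only for vigorous activity when node 0 has energy.
states = np.array([s + (a,) for s in product([0, 1], repeat=8) for a in range(4)])
policy = ((states[:, -1] >= 2) & (states[:, 0] == 1)).astype(int)
tree = fit_tree(states, policy)
print(n_nodes(tree), "tree nodes vs", len(states), "table entries")
```

Here the tree reproduces all 1024 table entries with a handful of nodes, which is the memory argument for storing a learned classifier on the sensor instead of the full policy table.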


Intelligent Mobile Sensing and Computing for Assisted Living

OCRdroid: Optical Character Recognition (OCR) on Mobile Phones as Assistive Technology

Optical Character Recognition (OCR) is a powerful technology for bringing information from our analog lives into the increasingly digital world. By applying OCR, scanned or camera-captured documents are converted into machine-editable soft copies that can be edited, searched, reproduced, and transported with ease. Meanwhile, the mobile phone has become one of the most commonly used electronic devices. Smartphones with powerful microprocessors, high-resolution cameras, and a variety of embedded sensors are widely deployed and becoming ubiquitous. By fully exploiting these advantages, smartphones are becoming powerful portable computing platforms, and we see great opportunities to develop assistive technologies on them. In this project, we focus on enabling OCR on mobile phones and developing OCR-based applications that provide personalized assistance to people with special needs. The framework we have developed, called OCRdroid, combines a lightweight image preprocessing suite installed on the mobile phone with an OCR engine connected to a backend server. We demonstrate the power and functionality of OCRdroid with two applications: (1) PocketPal, a mobile application that extracts text and digits from receipts and digitally tracks one's shopping history; and (2) PocketReader, a mobile application that provides a text-to-speech interface for blind people, reading text extracted from sources such as magazines and newspapers. Initial evaluations in these pilot experiments demonstrate the potential of the OCRdroid framework for real-world OCR-based mobile applications. For more information, please read the paper below and visit the project site, which includes a demo video.
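A typical step in an on-phone preprocessing suite of this kind is binarization, since OCR engines work far better on clean black-and-white input than on raw camera frames. The sketch below implements Otsu's classic thresholding method on a synthetic "document" image; it illustrates the kind of lightweight preprocessing involved, not the actual OCRdroid suite.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the gray level that maximizes the
    between-class variance of the intensity histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # cumulative class probability
    mu = np.cumsum(p * np.arange(256))    # cumulative class mean mass
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0  # guard the empty-class ends
    return int(np.argmax(sigma_b))

# Synthetic 'receipt': dark text pixels (~40) on a light page (~200).
rng = np.random.default_rng(3)
page = np.clip(rng.normal(200, 10, (64, 64)), 0, 255).astype(np.uint8)
page[20:30, 10:50] = np.clip(rng.normal(40, 10, (10, 40)), 0, 255).astype(np.uint8)
t = otsu_threshold(page)
binary = page > t  # True = page background, False = text
```

The threshold lands in the gap between the text and background intensity modes, so the text mask survives binarization even under uneven camera exposure.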


Automatic Fall Detection for Elderly Citizens to Support Independent Living

Falls and fall-induced injuries are a leading cause of death among elderly people. In the United States, more than a third of people aged over 65 fall at least once every year, and 10% to 15% of these falls cause serious injuries. The development of automatic fall detection technology that supports elderly people's independent living and can save lives is therefore a clear necessity. In this project, we focus on designing a body-worn mobile system to detect falls. Compared to traditional video surveillance systems, which have limited aperture range and coverage, we believe this mobile solution better serves the purpose because it enables continuous monitoring. Furthermore, we utilize context information, such as the person's physical activity status, current physiological condition, personal health record (PHR), and location, to improve detection reliability. This context information normally carries strong semantic meaning and provides important "hints" that help detect falls. For example, elderly people are more likely to fall when they have just gotten out of bed after sleep, due to temporary muscle weakness and balance disorders. As another example, people with a history of stroke, Parkinson's disease (PD), diabetes, or previous falls are more likely to fall than people without those conditions. Our system takes advantage of this context information to detect falls and exhibits promising results.
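The basic detection pattern, a high-magnitude impact followed by a period of near-motionless lying, plus a context-dependent prior, can be sketched as follows. All thresholds and context weights here are hypothetical placeholders, not validated clinical values.

```python
import numpy as np

def detect_fall(acc_mag, dt, impact_g=2.5, still_g=0.3, still_window=1.0):
    """Flag a fall when a strong impact is followed by ~1 s of stillness.
    acc_mag is the gravity-removed acceleration magnitude in g units."""
    n_still = int(still_window / dt)
    for i in np.flatnonzero(acc_mag > impact_g):
        after = acc_mag[i + 1:i + 1 + n_still]
        if len(after) == n_still and np.all(after < still_g):
            return True
    return False

def fall_risk_prior(context):
    """Context raises the prior probability of a fall
    (illustrative weights only)."""
    prior = 0.01
    if context.get("just_left_bed"):
        prior *= 3
    if context.get("history_of_falls"):
        prior *= 2
    return min(prior, 1.0)

dt = 0.02
walk = np.abs(np.sin(np.arange(0, 4, dt) * 6)) * 0.8            # periodic gait
fall = np.concatenate([walk[:100], [3.2], np.full(120, 0.05)])  # impact, then still
print(detect_fall(fall, dt), detect_fall(walk, dt))  # True False
```

The context prior is what lets the same accelerometer evidence be weighed differently for a person who has just gotten out of bed or who has a history of falls, which is the reliability gain the paragraph above describes.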


Mobile Labor Market: A Community-Based Participatory Job Hunting and Storytelling Platform for Social Support

Mobile Labor Market (MLM) is a project under Mobile Voices, a storytelling platform for immigrants in Los Angeles to create and publish stories about their lives and communities directly from mobile phones. The project is a collaboration between the Viterbi School of Engineering and the Annenberg School for Communication at the University of Southern California (USC), together with the Institute of Popular Education of Southern California (IDEPSCA), a nonprofit institution serving low-income Latino immigrants in Southern California. The objective of MLM is to build a community-based participatory platform that allows low-wage workers without computer access to participate more fully in job hunting using their mobile phones. To achieve this goal, we developed a prototype system on Nokia N95 mobile phones. Using this system, workers can upload personal profiles, search and apply for jobs, and share their working experiences with people in the community. Contractors and employers can post new jobs, search for applicants, and select applicants for specific jobs. In addition, the prototype includes an evaluation tool through which workers can post reviews of jobs and contractors. The prototype system has also become a research tool for professors at USC. In 2010, the Mobile Voices project won the United Nations World Summit Award in recognition of online and mobile content that promotes global digital access.

Project Report (Unpublished Manuscript)