
Author Name: Jubil T Sunny

Department of Computer Science and Engineering

St. Joseph’s College of Engineering and Technology, Palai, Kerala, India

Paper Title: Applications and Challenges of Human Activity Recognition using Sensors in a Smart Environment

Abstract: Smartphone sensors, such as the accelerometer, gyroscope, and barometer, are increasingly used to detect physical activities. Smartphones, equipped with a rich set of sensors, have recently been explored as an alternative platform for human activity recognition. Automatic recognition of physical activities, commonly referred to as human activity recognition (HAR), has emerged as a key research area in human-computer interaction (HCI) and in mobile and ubiquitous computing. One goal of activity recognition is to provide information on a user’s behavior that allows computing systems to proactively assist users with their tasks. Human activity recognition requires running classification algorithms originating from statistical machine learning. Mostly, supervised or semi-supervised learning techniques are used, and such techniques rely on labeled data, i.e., data associated with a specific class or activity. In most cases, the user is required to label the activities, which increases the burden on the user. Hence, user-independent training and activity recognition are needed to foster the adoption of HAR systems, where the system can use training data from other users to classify the activities of a new subject.

Keywords: Human Activity Recognition

I.  Introduction

Mobile phones, or smartphones, are rapidly becoming the central computing and communication devices in people’s lives. Smartphones, equipped with a rich set of sensors, are being explored as an alternative platform for human activity recognition in the ubiquitous computing domain. Today’s smartphone not only serves as the key computing and communication mobile device of choice, but also comes with a rich set of embedded sensors [1], such as an accelerometer, digital compass, gyroscope, GPS, microphone, and camera. Collectively, these sensors are enabling new applications across a wide variety of domains, such as healthcare, social networking, safety, environmental monitoring, and transportation, and give rise to a new area of research called mobile phone sensing. Human activity recognition systems using different sensing modalities, such as cameras or wearable inertial sensors, have long been an active field of research. Besides the inclusion of sensors such as the accelerometer, compass, gyroscope, proximity sensor, light sensor, GPS, microphone, and camera, the ubiquity and unobtrusiveness of phones and the availability of different wireless interfaces, such as Wi-Fi, 3G, and Bluetooth, make them an attractive platform for human activity recognition. Current research in activity monitoring and reasoning has mainly targeted elderly people, sportspeople, and patients with chronic conditions.

The percentage of elderly people in today’s societies keeps growing. As a consequence, there is a growing problem of supporting older adults with declining cognitive autonomy who wish to continue living independently in their homes rather than being forced into institutional care. Smart environments have been developed to provide such support to elderly people, or to people with risk factors, who wish to remain at home. To qualify as a smart environment, a house should be able to detect what the occupant is doing in terms of daily activities. It should also be able to detect possible emergency situations. Furthermore, once such a system is complete and fully operational, it should be able to detect anomalies or deviations in the occupant’s routine, which could indicate a decline in his or her abilities. To obtain accurate results, as much information as possible must be retrieved from the environment, enabling the system to locate and track the supervised person at every moment, and to detect the position of the limbs and of the objects the person interacts with or intends to interact with. Sometimes, details like gaze direction or hand gestures [1] can provide important information when analyzing human activity. Thus, the supervised person must be located in a smart environment equipped with devices such as sensors, multi-view cameras, or speakers.

Although smartphones are powerful tools, from the user’s point of view they are still passive communication enablers rather than active assistive devices. The next step is to introduce intelligence into these platforms so that they can proactively assist users in their everyday activities. One way to accomplish this is by integrating situational awareness and context recognition into these devices. Smartphones represent an attractive platform for activity recognition, providing built-in sensors and powerful processing units. They are capable of detecting complex everyday activities of the user (e.g., standing, walking, biking) or of the device (e.g., calling), and they are able to exchange information with other devices and systems over a large variety of data communication channels.

Mobile phone sensing is still in its infancy. There is little or no consensus on the sensing architecture for the phone, and common methods for collecting and sharing data have yet to be developed. Mobile phones cannot be overloaded with continuous sensing commitments that undermine the performance of the phone (e.g., by depleting battery power). It is not yet clear which architectural components [4] should run on the phone. Individual mobile phones collect raw data from sensors embedded in the phone, and information is extracted from the sensor data by applying machine learning and data mining techniques. These operations can occur either directly on the phone or in the cloud. Where these components run could be governed by various architectural considerations, such as privacy, real-time feedback to the user, the communication cost between the phone and the cloud, available computing resources, and sensor fusion requirements. The rest of the paper is organized as follows: Section II presents some existing methods. Section III describes important sensors used for human activity recognition. Section IV presents various challenges and applications of activity recognition. Conclusions are presented in Section V.
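To make the sensing-to-inference pipeline above concrete, the following is a minimal illustrative sketch, not the method of any cited work: synthetic accelerometer windows stand in for labeled sensor data (all array shapes, the two activity classes, and the statistical features are assumptions chosen for illustration), features are extracted per window, and a supervised classifier is trained and evaluated, as a phone or cloud back end might do.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(window):
    """Summarize one accelerometer window of shape (n_samples, 3) as
    per-axis mean and std, plus mean and std of the signal magnitude."""
    mag = np.linalg.norm(window, axis=1)
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           [mag.mean(), mag.std()]])

# Synthetic stand-in for labeled data: "standing" windows have low
# variance, "walking" windows have high variance (an assumption made
# purely so the example is self-contained and separable).
rng = np.random.default_rng(0)
windows, labels = [], []
for _ in range(100):
    windows.append(rng.normal(0.0, 0.1, (50, 3))); labels.append(0)  # standing
    windows.append(rng.normal(0.0, 1.0, (50, 3))); labels.append(1)  # walking

X = np.array([extract_features(w) for w in windows])  # shape (200, 8)
y = np.array(labels)

# Supervised classification: train on the first 150 windows,
# evaluate on the held-out 50.
clf = RandomForestClassifier(random_state=0).fit(X[:150], y[:150])
acc = clf.score(X[150:], y[150:])
```

User-independent recognition, as argued in the abstract, would replace this random-style split with a split by subject, training on windows from some users and testing on windows from an unseen user.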
