IJIRST Journal New Impact Factor = 4.371



New Impact Factor Received.
Latest Impact Factor : 4.371(Year-2016)
International Journal for Innovative Research in Science & Technology
Click here for more Details : www.ijirst.org

IJIRST_2016 Impact Factor

IJIRST (International Journal for Innovative Research in Science & Technology) is interested in all aspects of engineering education. Major fields of interest include:

  • Teaching and learning styles
  • Methods, practices and philosophies in engineering
  • Assessment
  • Ethics
  • Inclusivity
  • Sustainability
  • Online and laboratory learning
  • Professional practice
  • Global dimensions of engineering education/globalisation
  • Quality issues
  • Technical teacher training
  • Student communities
  • Curricula in the Bachelor and Master system
  • Faculty development
  • Lifelong learning

National Conference LTNCS-2017



We are pleased to invite you to attend and participate in the National Conference LTNCS-2017, to be held on 17th March 2017, organized by the Computer Engineering Department of SAL INSTITUTE OF TECHNOLOGY AND ENGINEERING RESEARCH, Ahmedabad.


LTNCS-2017 aims to gather technocrats from different states of India on a common platform to promote research activities in all fields of Networking and Cyber Security. LTNCS-2017 covers the following technical topics:

  • Cybercrime
  • Distributed Network
  • Forensic & Cyber Security with Cloud
  • Security issues with big data
  • Ethical hacking

We would like to invite research papers based on original research work from Researchers, Academicians, Faculty Members and Students in various innovative areas of Networking and Cyber Security.

All submitted papers will be peer-reviewed by renowned experts. The reviewed papers will be published in the reputed technical journal IJIRST "International Journal for Innovative Research in Science & Technology", which has an impact factor of 3.559.

We take this opportunity to invite you all to participate in this conference and share your innovative ideas and research. We shall appreciate your participation in the conference and your confirmation at the earliest.

We request you to forward this information to your faculty colleagues, research scholars and students for contributing research papers for LTNCS-2017.

For more information about the conference kindly contact us

Website:       http://conference.ijirst.org/

Email:               ltncs2017@gmail.com

Facebook:         https://www.facebook.com/LTNCS2017/

Contact No:    8128989597

System That Replaces Human Intuition with Algorithms Outperforms Human Teams




Big-data analysis consists of searching for buried patterns that have some kind of predictive power. But choosing which “features” of the data to analyze usually requires some human intuition. In a database containing, say, the beginning and end dates of various sales promotions and weekly profits, the crucial data may not be the dates themselves but the spans between them, or not the total profits but the averages across those spans.
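The date-span example above can be sketched in plain Python; the table, column meanings, and numbers below are invented for illustration and are not taken from the MIT system:

```python
from datetime import date

# Hypothetical promotions table: (start, end, total_profit) per promotion.
promos = [
    (date(2016, 1, 4), date(2016, 1, 18), 14000.0),
    (date(2016, 2, 1), date(2016, 2, 29), 58000.0),
    (date(2016, 3, 7), date(2016, 3, 14), 7000.0),
]

# Derived features: not the raw dates or totals, but the span between the
# dates and the average profit per day across that span.
features = []
for start, end, total in promos:
    span_days = (end - start).days
    features.append((span_days, total / span_days))

print(features)  # spans in days and per-day average profits
```

Feature engineering of exactly this kind, deciding that the span and the per-day average matter more than the raw columns, is the human step the Data Science Machine tries to automate.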

MIT researchers aim to take the human element out of big-data analysis, with a new system that not only searches for patterns but designs the feature set, too. To test the first prototype of their system, they enrolled it in three data science competitions, in which it competed against human teams to find predictive patterns in unfamiliar data sets. Of the 906 teams participating in the three competitions, the researchers’ “Data Science Machine” finished ahead of 615.

In two of the three competitions, the predictions made by the Data Science Machine were 94 percent and 96 percent as accurate as the winning submissions. In the third, the figure was a more modest 87 percent. But where the teams of humans typically labored over their prediction algorithms for months, the Data Science Machine took somewhere between two and 12 hours to produce each of its entries.
“We view the Data Science Machine as a natural complement to human intelligence,” says Max Kanter, whose MIT master’s thesis in computer science is the basis of the Data Science Machine. “There’s so much data out there to be analyzed. And right now it’s just sitting there not doing anything. So maybe we can come up with a solution that will at least get us started on it, at least get us moving.”

Between the lines

Kanter and his thesis advisor, Kalyan Veeramachaneni, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), describe the Data Science Machine in a paper that Kanter will present next week at the IEEE International Conference on Data Science and Advanced Analytics.
Veeramachaneni co-leads the Anyscale Learning for All group at CSAIL, which applies machine-learning techniques to practical problems in big-data analysis, such as determining the power-generation capacity of wind-farm sites or predicting which students are at risk for dropping out of online courses.
“What we observed from our experience solving a number of data science problems for industry is that one of the very critical steps is called feature engineering,” Veeramachaneni says. “The first thing you have to do is identify what variables to extract from the database or compose, and for that, you have to come up with a lot of ideas.”

In predicting dropout, for instance, two crucial indicators proved to be how long before a deadline a student begins working on a problem set and how much time the student spends on the course website relative to his or her classmates. MIT’s online-learning platform MITx doesn’t record either of those statistics, but it does collect data from which they can be inferred.

For More Information Click Here

Decentralized Access Control with Unidentified Authentication for Information Security in Cloud Computing



Paper Title:-Decentralized Access Control with Unidentified Authentication for Information Security in Cloud Computing

Siva Rama Prasad Kollu

Abstract:- Cloud computing has become a popular paradigm for supporting large volumes of data using clusters of commodity computers, and it is the latest effort in delivering and managing computing as a service. The decentralized access control scheme distributes data stored in the cloud to users; only valid users can access the stored information. A valid user's attributes satisfy the access policy attached to the ciphertext. In the proposed decentralized approach, the cloud does not authenticate users individually: users can decrypt the information stored in the cloud only when they possess a matching set of attributes. The scheme supports user revocation and prevents replay attacks. The decentralized access control scheme distributes secret keys to valid users according to their attribute sets, so that only those users can decrypt the stored data using their secret keys. The proposed Token Verification algorithm allows the creator of the data to verify who has modified the document, providing stronger security in access control and authentication. Moreover, our authentication and access control scheme is decentralized and robust, unlike other access control schemes designed for clouds, which are centralized.

Keywords: Decentralized Access Control, Authentication

I.  Introduction

Cloud computing is set of services offered through the internet. Cloud computing is receiving a lot of attention from both academic and industrial worlds. Cloud services are delivered from data centers located throughout the world. The boom in cloud computing has brought lots of security challenges for the consumers and service providers. In cloud computing, users can outsource storage and infrastructure to servers using Internet [2].

Clouds can provide several types of services like applications (e.g., Google Apps, Microsoft online), infrastructures (e.g., Amazon’s EC2, Eucalyptus, Nimbus), and platforms to help developers write applications. Much of the data stored in clouds is highly sensitive, for example, medical records and social networks. Security and privacy are, thus, very important issues in cloud computing. On the one hand, the user should authenticate itself before initiating any transaction; on the other hand, it must be ensured that the cloud does not tamper with the data that is outsourced [1]. User privacy is also required so that the cloud or other users do not know the identity of the user. The cloud can hold the user accountable for the data it outsources, and likewise, the cloud is itself accountable for the services it provides. The validity of the user who stores the data is also verified. Apart from the technical solutions to ensure security and privacy, there is also a need for law enforcement [3].

The cloud can hold the user accountable for the data it outsources, and likewise, the cloud is itself accountable for the services it provides. To provide secure data storage, the data stored in the cloud should be in an encrypted format. There are several types of access control in the cloud: User Based Access Control (UBAC), Role Based Access Control (RBAC) [7], and Attribute Based Access Control (ABAC). In user-based access control there is a list of users who may access the data; only those users can access the data stored in the cloud. In role-based access control, users with a matching set of roles can access the data, and in attribute-based access control, users can access the data only if they have a matching set of attributes. According to the access policy, only users who satisfy certain conditions can access the data stored in the cloud [13]. The scheme prevents replay attacks and supports creation, modification, and reading of data stored in the cloud.
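As a toy illustration of the ABAC idea (not the paper's cryptographic scheme, which enforces the policy through encryption rather than a plain check), an attribute-based access decision reduces to a subset test between the user's attributes and those required by the policy:

```python
def abac_allows(user_attrs: set, required_attrs: set) -> bool:
    """Grant access only if the user holds every attribute the policy requires."""
    return required_attrs.issubset(user_attrs)

# Hypothetical policy attached to a medical record:
policy = {"doctor", "cardiology"}

print(abac_allows({"doctor", "cardiology", "staff"}, policy))  # True
print(abac_allows({"nurse", "cardiology"}, policy))            # False
```

In CP-ABE this subset test is not performed by a trusted server; instead, decryption simply fails unless the user's key attributes satisfy the policy embedded in the ciphertext.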

Ciphertext-Policy Attribute Based Encryption (CP-ABE) is a type of ABAC that provides secure access control, and the authentication and access control scheme is decentralized and robust. Valid users whose attribute sets satisfy the access policy attached to the ciphertext can modify and store data in the cloud, and the validity of the user who stores the data is also verified. Using ABE, the records are encrypted under some access policy and stored in the cloud [4, 5]. Users are given sets of attributes and corresponding keys; only if the users have a matching set of attributes can they decrypt the data stored in the cloud [6]. Access control in health care has been studied. Access control is also gaining importance in online social networking, where users (members) store their personal information, pictures, and videos and share them with selected groups of users or communities they belong to. Access control in online social networking has been studied in [8]. Such data are being stored in clouds. Data stored in clouds are highly sensitive, for example, medical records and social networks. Providing security and privacy is an important issue in cloud computing [9].

There are two main requirements: first, the user should authenticate itself before initiating any transaction; second, it must be ensured that the cloud does not tamper or interfere with the data that is outsourced or sent to the user. The wide acceptance of the web has raised security risks along with its uncountable benefits, and the same is true of cloud computing. User privacy [10] is also required so that the cloud or other users do not know the identity of the user. The cloud can hold the user accountable for the data it outsources to the client, and likewise, the cloud is itself accountable for the services it provides to the client or the user who is accessing the cloud. The validity of the user who stores the data is also verified (by the admin). Apart from the technical solutions to ensure security and privacy in the cloud, there is also a need for law enforcement, such as access policies provided to the clients or users.

For More Information Click Here

Performance of WRF (ARW) over River Basins in Odisha, India During Flood Season 2014 #IJIRST Journal



Paper Title:- Performance of WRF (ARW) over River Basins in Odisha, India During Flood Season 2014

Author Name:- Sumant Kr. Diwakar, Dr. (Mrs.) Surinder Kaur, Dr. Ashok Kumar Das, Anuradha Agarwala

Abstract:- Rainfall forecasts from the India Meteorological Department's (IMD) operational Weather Research and Forecasting - Advanced Research WRF model, WRF (ARW) for short, at 9 km x 9 km resolution, are used to compute rainfall forecasts over river basins in Odisha during the flood season of 2014. The performance of the WRF model at the sub-basin level is studied in detail. It is observed that IMD's WRF (ARW) day-1, day-2, and day-3 correct-forecast percentages lie in the ranges 31-47%, 37-43%, and 28-47% respectively during the flood season 2014.

Keywords: GIS; WRF (ARW); IMD; Flood 2014; Odisha

I.  Introduction

Issuing sub-basin-wise rainfall forecasts during the monsoon season is a difficult task for meteorologists in India, where rainfall has large spatial and temporal variations. India Meteorological Department (IMD), through its Flood Meteorological Offices (FMOs), issues Quantitative Precipitation Forecasts (QPF) sub-basin-wise for all flood-prone river basins in India (IMD, 1994). There are 10 FMOs across India, spread over the flood-prone river basins, and FMO Bhubaneswar, Odisha is one of them. The categories in which QPF is issued are as follows:

Rainfall (in mm): 0, 1-10, 11-25, 26-50, 51-100, >100
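As a rough sketch of how a correct-forecast percentage over these QPF categories might be computed (the binning helper and sample values below are assumptions for illustration, not IMD's actual verification procedure):

```python
import bisect

# Lower edges of the QPF categories 1-10, 11-25, 26-50, 51-100, >100 mm;
# rainfall below 1 mm falls in category 0.
EDGES = [1, 11, 26, 51, 101]

def qpf_category(rain_mm):
    """Map a rainfall amount (mm) to its QPF category index (0..5)."""
    return bisect.bisect_right(EDGES, rain_mm)

def percent_correct(forecast, observed):
    """Fraction of cases where forecast and observed fall in the same category."""
    hits = sum(qpf_category(f) == qpf_category(o) for f, o in zip(forecast, observed))
    return 100.0 * hits / len(forecast)

# Made-up sub-basin values (mm): 4 of the 5 pairs land in the same category.
print(percent_correct([0, 5, 30, 80, 120], [0, 8, 55, 70, 150]))  # 80.0
```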

Odisha is an Indian state on the subcontinent’s east coast, by the Bay of Bengal. It lies between latitudes 17°49′ N and 22°34′ N and longitudes 81°27′ E and 87°29′ E. It is surrounded by the Indian states of West Bengal to the north-east and east, Jharkhand to the north, Chhattisgarh to the west and north-west, and Andhra Pradesh to the south. Bhubaneswar is the capital of Odisha.

Odisha is the 9th largest state by area in India and the 11th largest by population. Odisha has a coastline about 480 km long. The narrow, level coastal strip including the Mahanadi river delta supports the bulk of the population. On the basis of homogeneity, continuity and physiographical characteristics, Odisha has been divided into five major morphological regions. The Odisha Coastal Plain in the east, the Middle Mountainous and Highlands Region, the Central Plateaus, the western rolling uplands and the major flood plains.

For more Information Click Here

Evaluation of Response Reduction Factor using Nonlinear Analysis


, , , , ,

Paper Title:- Evaluation of Response Reduction Factor using Nonlinear Analysis

Author Name:- Tia Toby, Ajesh K. Kottuppillil

Department of Civil Engineering

Abstract:- The main objective of this study is to evaluate the response reduction factor of RC frames. The actual earthquake force is considerably higher than what structures are designed for, but structures cannot be designed for the actual value of earthquake intensity, as the cost of construction would be too high. The actual intensity of earthquake force is therefore reduced by a factor called the response reduction factor R. The value of R depends on the ductility factor, strength factor, structural redundancy, and damping. The concept of the R factor is based on the observation that well-detailed seismic framing systems can sustain large inelastic deformations without collapse and have an excess of lateral strength over design strength. Here, nonlinear static analysis is conducted on regular and irregular RC frames, considering both OMRF and SMRF, to calculate the response reduction factor, and the codal provisions for the same are critically evaluated.

Keywords: Response Reduction Factor, Ductility Factor, Strength Factor, Nonlinear Analysis, Regular and Irregular Frames, OMRF, SMRF

I.  Introduction

The devastating potential of an earthquake can have major consequences on infrastructure and lifelines. In the past few years, the earthquake engineering community has been reassessing its procedures in the wake of devastating earthquakes which have caused extensive damage and loss of life and property. These procedures involve assessing the seismic force demands on the structure and then developing design procedures for the structure to withstand the applied actions. Seismic design follows the same procedure, except for the fact that inelastic deformations may be utilized to absorb certain levels of energy, leading to a reduction in the forces for which structures are designed. This leads to the response modification factor (R factor): the all-important parameter that accounts for over-strength, energy absorption and dissipation, as well as the structural capacity to redistribute forces from highly stressed inelastic regions to other, less stressed locations in the structure. This factor is unique and different for different types of structures and materials. The objective of this paper is to evaluate the response reduction factor of an RC frame designed and detailed as per Indian standards IS 456, IS 1893 and IS 13920. The codal provisions for the same will be critically evaluated. Moreover, parametric studies will be done on both regular and irregular buildings, and finally a comparison of the R value between OMRF and SMRF is also made.

II.  Definition of R Factor and its Components

During an earthquake, a structure may experience a certain degree of inelasticity; the R factor defines this level of inelasticity. The R factor reflects a structure's capability of dissipating energy through inelastic behavior. A statically determinate structure's response to stress is linear until yielding takes place, but once yielding prevails, the structure's behavior changes from elastic to inelastic, and linear elastic structural analysis can no longer be applied. The seismic energy imposed on the structure is so high that designing a structure based on the elastic spectrum would be too costly. To reduce the seismic loads, IS 1893 introduces a “response reduction factor” R. To obtain the exact response, it is recommended to perform nonlinear analysis. Strictly speaking, the R factor is a measure of over-strength and redundancy. It may be defined as a function of various parameters of the structural system, such as strength, ductility, damping, and redundancy.
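A minimal numerical sketch of how R enters design: IS 1893 (Part 1) computes the design horizontal seismic coefficient as Ah = (Z/2)(I/R)(Sa/g), so a larger R (SMRF, R = 5) yields a smaller design base shear than a smaller R (OMRF, R = 3). The zone factor, spectral acceleration, and seismic weight below are illustrative assumptions, not values from the paper:

```python
def design_horizontal_coefficient(Z, I, R, Sa_g):
    """Ah = (Z/2) * (I/R) * (Sa/g), per IS 1893 (Part 1)."""
    return (Z / 2.0) * (I / R) * Sa_g

def design_base_shear(Z, I, R, Sa_g, W):
    """Design base shear Vb = Ah * W, for seismic weight W."""
    return design_horizontal_coefficient(Z, I, R, Sa_g) * W

# Assumed inputs: zone factor Z = 0.24, importance factor I = 1.0,
# Sa/g = 2.5, seismic weight W = 5000 kN.
V_omrf = design_base_shear(0.24, 1.0, 3.0, 2.5, 5000.0)  # R = 3 for OMRF
V_smrf = design_base_shear(0.24, 1.0, 5.0, 2.5, 5000.0)  # R = 5 for SMRF
print(V_omrf, V_smrf)  # the SMRF, with higher assumed ductility, is designed for less force
```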

For More Information Click Here

Performance Assessment for Students using Different Defuzzification Techniques


, , , , ,

Author Name:- Anjana Pradeep

Department of Computer Science & Engineering

St. Joseph College of Engineering and Technology Palai, Kerala, India

Abstract:- The aim of this study is to evaluate the performance of students using a fuzzy expert system. The fuzzy process is based on taking non-precise inputs on the factors affecting student performance and subjecting them to fuzzy arithmetic to obtain a crisp value of the performance. The system classifies each student’s performance by considering various factors using fuzzy logic. Aimed at improving the performance of the fuzzy system, several defuzzification methods other than the built-in methods in MATLAB have been devised in this system for producing more accurate and quantifiable results. This study provides a comparison and in-depth examination of various defuzzification techniques such as the Weighted Average Formula (WAF), the WAF-max method, and the Quality Method (QM). A new defuzzification method named Max-QM, extended from the Quality Method and falling within the same general framework, is also presented and commented upon.

Keywords: Fuzzy logic, Fuzzy Expert System, Defuzzification, Weighted Average Formula, Quality Method

I.  Introduction

An expert system is a software program that can be used to solve complex reasoning tasks that usually require a (human) expert. In other words, an expert system should help a novice, or partly experienced, problem solver, to match acknowledged experts in the particular domain of problem solving that the system is designed to assist. To be more specific, expert systems are generally conceptualized as shown in Fig 1. The user makes an interaction through the interface system and the system questions the user through the same interface in order to obtain the vital information upon which a decision is to be made. Behind this interface, there are two other sub-systems viz. the knowledge base, which is made up of all the domain-specific knowledge that human experts use when solving that category of problems and the inference engine, a system that performs the necessary reasoning and uses knowledge from the knowledge base in order to come to a judgment with respect to the problem modelled [1].

Expert systems have been playing a major role in many disciplines: in medicine, assisting physicians in the diagnosis of diseases; in agriculture, for crop management and insect control; in space technology; and in power systems, for fault diagnosis [5]. Some expert systems have been developed to replace human experts, others to aid them. The use of expert systems is increasing day by day in today’s world [40]. Expert systems are becoming an integral part of engineering education, and even other courses like accounting and management are accepting them as a better way of teaching [4]. Another feature that makes expert systems appealing for students is their ability to adaptively adjust the training for each particular student on the basis of the individual student's learning pace. This feature can be used especially effectively in teaching engineering students: the system should be able to monitor a student’s progress and make a decision about the next step in training.


Fig. 1: Expert system structure

The few expert systems available in the market present a lot of opportunities for students who desire more attention and time to learn their subjects. Some expert systems present an interactive and friendly environment for students, which encourages them to study and to adopt a more practical approach towards learning. Expert systems can also act as an assistant to, or substitute for, the teacher. They focus on each student individually and keep track of each student's learning pace. This behavior provides an autonomous learning procedure for both student and teacher, where teachers act as mentors and students can judge their own performance. Expert systems are thus beneficial not only for students but also for teachers, helping them guide students in a better way.

The integration of fuzzy logic with an expert system, called a fuzzy expert system, enhances its capability, as it is useful for solving real-world problems which do not require a precise solution. There is thus a need to develop a fuzzy expert system, as it can handle imprecise data efficiently and reduces manual work while enhancing the use of the expert system [40].
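One common form of the Weighted Average Formula mentioned in the abstract can be sketched as follows; the output fuzzy sets, representative scores, and firing strengths below are invented for illustration, not taken from the paper's rule base:

```python
def weighted_average_defuzzify(memberships, representative_values):
    """Weighted Average Formula: crisp = sum(mu_i * x_i) / sum(mu_i)."""
    num = sum(m * x for m, x in zip(memberships, representative_values))
    den = sum(memberships)
    return num / den

# Assumed output sets "poor", "average", "good" with representative
# scores 30, 60, 90 and rule firing strengths 0.2, 0.7, 0.4:
crisp = weighted_average_defuzzify([0.2, 0.7, 0.4], [30, 60, 90])
print(crisp)  # a single crisp performance score between 30 and 90
```

The crisp value leans toward the set with the strongest firing strength, which is what makes WAF a fast, intuitive defuzzifier for singleton-style outputs.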

There are various factors inside and outside college that result in poor academic performance of students [2, 3]. Determining all the influencing factors in a single effort is a complex and difficult task. It requires a lot of resources and time for an educator to identify all these factors first and then plan the classroom activities and approaches to teaching and learning. It also requires appropriate training, organizational planning, and skills to conduct such studies for determining the contributing factors inside and outside college. This process of identifying determinants must be given full attention and priority so that teachers may be able to develop instructional strategies ensuring that all students are provided with the opportunities to attain their fullest potential in learning and performance. Using a suitable statistical package, it was found that communication, learning facilities, proper guidance, and family stress were the factors that affect student performance. Communication, learning facilities, and proper guidance showed a positive impact on student performance, while family stress showed a negative impact. Communication is indicated to be a more important factor affecting student performance than learning facilities and proper guidance [3].

In this research article, the seven most important factors affecting students' performance are included: personal factors, college environment, family factors, university factors, teaching factors, attendance, and marks obtained by students. All these factors are scaled and ranked based on the various sub-factors into which the base factors are divided. This study focuses on students' marks and not solely on social, economic, and cultural features. To evaluate students' performance, a fuzzy expert system has been developed considering all seven factors as inputs to the system. The system has been developed using data of students collected from St. Joseph's College of Engineering and Technology, Palai, affiliated to M.G. University.

For More Information Click Here

#IJIRST Journal: A Review on Thermal Insulation and Its Optimum Thickness to Reduce Heat Loss


, , , , ,

Paper Title:- A Review on Thermal Insulation and Its Optimum Thickness to Reduce Heat Loss

Author Name:- Dinesh Kumar Sahu

Department of Mechanical Engineering

Abstract:- An understanding of the mechanisms of heat transfer is becoming increasingly important in today’s world. Conduction and convection heat transfer phenomena are found throughout virtually all of the physical world and the industrial domain. A thermal insulator is a poor conductor of heat and has a low thermal conductivity. In this paper we review how insulation is used in buildings and in manufacturing processes to prevent heat loss or heat gain. Although its primary purpose is an economic one, it also provides more accurate control of process temperatures and protection of personnel, and it prevents condensation on cold surfaces and the resulting corrosion. We also review the critical radius of insulation, the radius at which heat loss is maximum and above which heat loss reduces with increasing radius, and present the selection of an economical insulation material and the optimum thickness of insulation that gives minimum total cost.

Keywords: Heat, Conduction, Convection, Heat Loss, Insulation

I.  Introduction

Heat flow is an inevitable consequence of contact between objects of differing temperature. Thermal insulation provides a region in which thermal conduction is reduced, or in which thermal radiation is reflected rather than absorbed by the lower-temperature body. To change the temperature of an object, energy is required: heat generation to increase the temperature, or heat extraction to reduce it. Once the heat generation or extraction is terminated, a reverse flow of heat returns the temperature to ambient. Maintaining a given temperature therefore requires considerable continuous energy; insulation reduces this energy loss.

Heat may be transferred in three mechanisms: conduction, convection and radiation. Thermal conduction is the molecular transport of heat under the effect of temperature gradient. Convection mechanism of heat occurs in liquids and gases, whereby the flow processes transfer heat. Free convection is flow caused by the differences in density as a result of temperature differences. Forced convection is flow caused by external influences (wind, ventilators, etc.). Thermal radiation mechanism occurs when thermal energy is emitted similar to light radiation.

Heat transfer through insulation material occurs by conduction, while heat loss to or heat gain from the atmosphere occurs by convection and radiation. Materials with a low thermal conductivity are those with a high proportion of small voids containing air or other gases. These voids are not big enough to transmit heat by convection or radiation, and therefore reduce the flow of heat. Thermal insulation materials fall into this category; they may be natural substances or man-made.
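These mechanisms can be combined in a minimal sketch for a cylindrical insulation layer, which also illustrates the critical-radius effect noted in the abstract: heat loss peaks when the outer radius equals k/h. The material properties and temperatures below are assumed values for illustration:

```python
import math

def heat_loss_cylinder(r_in, r_out, k, h, L, dT):
    """Heat loss (W) through a cylindrical insulation layer with outer convection:
    Q = dT / (R_conduction + R_convection)."""
    R_cond = math.log(r_out / r_in) / (2 * math.pi * k * L)  # conduction through the layer
    R_conv = 1.0 / (h * 2 * math.pi * r_out * L)             # convection at the outer surface
    return dT / (R_cond + R_conv)

# Assumed values: k = 0.15 W/m.K, h = 10 W/m2.K, so the critical radius k/h = 15 mm.
k, h = 0.15, 10.0
r_crit = k / h
q_below = heat_loss_cylinder(0.010, 0.012, k, h, 1.0, 100.0)
q_at    = heat_loss_cylinder(0.010, r_crit, k, h, 1.0, 100.0)
q_above = heat_loss_cylinder(0.010, 0.030, k, h, 1.0, 100.0)
print(q_below, q_at, q_above)  # heat loss peaks near r_out = k/h
```

Below the critical radius, adding insulation to a thin pipe or wire actually increases heat loss, because the growing outer surface area helps convection more than the added material resists conduction.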

II.  The Need for Insulation

A thermal insulator is a poor conductor of heat and has a low thermal conductivity. Insulation is used in buildings and in manufacturing processes to prevent heat loss or heat gain. Although its primary purpose is an economic one, it also provides more accurate control of process temperatures and protection of personnel. It prevents condensation on cold surfaces and the resulting corrosion. Such materials are porous, containing a large number of dormant air cells. Thermal insulation delivers the following benefits: [1][2]

A.      Energy Conservation

Conserving energy by reducing the rate of heat flow (Fig. 1) is the primary reason for insulating surfaces. Insulation materials that will perform satisfactorily in the temperature range of -268°C to 1000°C are widely available.


Fig. 1: Thermal insulation retards heat transfer by acting as a barrier in the path of heat flow

For more Information Click Here

A Survey on localization in Wireless Sensor Network by Angle of Arrival #IJIRST Journal



  Author Name:- Shubhra Shubhankari Dash

College:- Veer Surendra Sai University of Technology Burla, Odisha, India

Paper Title:- A Survey on localization in Wireless Sensor Network by Angle of Arrival

Abstract:- A wireless sensor network is a type of sensor network consisting of a large number of sensor nodes, which are low-cost and small in size. Before sensor nodes send data to the user, they must localize themselves. Here we focus on cooperative and distributed localization. We consider a cooperative positioning algorithm, a hybrid positioning method used to improve network-positioning coverage and accuracy, based on ultra-wideband techniques, typically in harsh environments. In cooperative localization, nodes work together to make measurements and then form a map of the network. A method for selecting neighbors for distributed localization is described. Distributed algorithms are simple, operate asynchronously (no global coordination between nodes is required), and can operate in a disconnected network. We also describe a distributed algorithm in which the whole network is divided into clusters; after an initial cluster is localized, the whole network can be localized by stitching it together with the other clusters. Graph rigidity theory is used to define the new structure and its relationships, so the algorithm can provide relative locations to the maximum number of nodes. An AOA/TOA hybrid scheme is also explained; it employs multiple seeds in the line-of-sight scenario. Here the aim is to find the position of a target sensor that does not change its moving direction.

Keywords: WSN, cooperative localization, AOA/TOA hybrid scheme, Distributed localization

I.   Introduction

A wireless sensor network consists of a large number of sensor nodes which can sense or monitor the environment. Before sending data to its neighbors, a node must localize itself, i.e., find out its exact location; this is called “localization”. By localization estimation we can find the absolute position of a sensor node. This can be done with the help of an “anchor node”, a node which knows its own position. Previously, localization was done through GPS, but GPS has limitations: installing a GPS antenna increases the sensor node's size, whereas sensor nodes are required to be small, and it reduces the battery life of the sensor nodes due to high power consumption.

Sensor nodes can be used in military applications such as battlefield surveillance, target tracking, and nuclear, biological and chemical attack detection. The process of estimating the positions of unknown nodes in the network is referred to as node localization. Localization algorithms can be applied not only in the 2D plane but also in 3D. 2D localization requires less energy and time than 3D and provides high accuracy on flat terrain, but position estimates are difficult on harsh or hilly terrain. By using 3D localization, positioning error is improved on harsh terrain as well as on flat terrain.

For 3D localization we use some algorithms [1]:

  • New 3-dimensional DV-Hop localization Algorithm.
  • Novel Centroid Algorithm for 3D.
  • The 3-dimensional accurate positioning algorithm.
  • Unitary Matrix Pencil Algorithm for Range-Based 3D Localization.
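As a flavor of the simplest family above, the following is an illustrative sketch of the basic idea behind centroid localization in 3D: an unknown node estimates its position as the centroid of the anchor nodes it can hear. This is a toy version of the approach, not the algorithm from reference [1].

```python
# Illustrative sketch: 3-D centroid localization. The unknown node's
# position estimate is the centroid of all in-range anchors.
def centroid_3d(anchors):
    """anchors: list of (x, y, z) coordinates of anchors within radio range."""
    n = len(anchors)
    return tuple(sum(a[i] for a in anchors) / n for i in range(3))
```

The centroid scheme needs no ranging hardware at all, which is why it serves as a baseline against which range-based 3D algorithms are compared.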

In WSN localization we can locate the node by some techniques [2]:

A.   Range-Based Techniques

These techniques rely on distance and angle measurements between nodes, using point-to-point distances to identify locations among neighboring nodes. Range-based techniques include Angle of Arrival (AOA), Time of Arrival (TOA), and Received Signal Strength (RSS).
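As a hedged sketch of how TOA ranging works: the node-to-node distance follows directly from the signal's propagation time multiplied by its speed. The function names below are illustrative.

```python
# Sketch of Time-of-Arrival (TOA) ranging.
C = 299_792_458.0  # propagation speed of an RF signal in free space, m/s

def toa_distance(t_send, t_receive):
    """One-way TOA: requires tightly synchronized clocks on both nodes."""
    return C * (t_receive - t_send)

def two_way_toa_distance(t_round, t_process):
    """Two-way (round-trip) TOA: no clock synchronization needed.
    t_process is the responder's known turnaround delay."""
    return C * (t_round - t_process) / 2.0
```

Because light travels about 30 cm per nanosecond, TOA ranging demands very fine timing resolution, which is one reason it pairs well with ultra-wideband signals.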


Applications and Challenges of Human Activity Recognition using Sensors in a Smart Environment



Author Name:- Jubil T Sunny

Department of Computer Science and Engineering

St. Joseph’s College of Engineering and Technology Palai, Kerala, India

Paper Title:- Applications and Challenges of Human Activity Recognition using Sensors in a Smart Environment

Abstract:- We currently use smartphone sensors, such as the accelerometer, gyroscope and barometer, to detect physical activities. Recently, smartphones, equipped with a rich set of sensors, have been explored as alternative platforms for human activity recognition. Automatic recognition of physical activities, commonly referred to as human activity recognition (HAR), has emerged as a key research area in human-computer interaction (HCI) and mobile and ubiquitous computing. One goal of activity recognition is to provide information on a user's behavior that allows computing systems to proactively assist users with their tasks. Human activity recognition requires running classification algorithms originating from statistical machine learning techniques. Mostly, supervised or semi-supervised learning techniques are utilized, and such techniques rely on labeled data, i.e., data associated with a specific class or activity. In most cases the user is required to label the activities, which increases the burden on the user. Hence, user-independent training and activity recognition are required to foster the use of human activity recognition systems, where the system can use training data from other users to classify the activities of a new subject.

Keywords:- Human Activity Recognition

I.  Introduction

Mobile phones or smartphones are rapidly becoming the central computing and communication devices in people's lives. Smartphones, equipped with a rich set of sensors, are explored as an alternative platform for human activity recognition in the ubiquitous computing domain. Today's smartphone not only serves as the key computing and communication mobile device of choice, but also comes with a rich set of embedded sensors [1], such as an accelerometer, digital compass, gyroscope, GPS, microphone, and camera. Collectively, these sensors enable new applications across a wide variety of domains, such as healthcare, social networks, safety, environmental monitoring, and transportation, and give rise to a new area of research called mobile phone sensing. Human activity recognition systems using different sensing modalities, such as cameras or wearable inertial sensors, have long been an active field of research. Besides the included sensors (accelerometer, compass, gyroscope, proximity, light, GPS, microphone, camera), the ubiquity and unobtrusiveness of phones and the availability of different wireless interfaces, such as Wi-Fi, 3G and Bluetooth, make them an attractive platform for human activity recognition. Current research in activity monitoring and reasoning has mainly targeted elderly people, sportsmen, and patients with chronic conditions.
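A common first step when feeding the accelerometer data mentioned above into a recognition pipeline is to compute simple time-domain features over a fixed-length window of samples. The following is a minimal, hypothetical sketch of that step; the feature names are illustrative.

```python
import math

# Hypothetical sketch: time-domain features over a window of tri-axial
# accelerometer samples, a common preprocessing step in HAR pipelines.
def window_features(samples):
    """samples: list of (ax, ay, az) tuples from one fixed-length window."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    n = len(mags)
    mean = sum(mags) / n
    var = sum((m - mean) ** 2 for m in mags) / n
    # Mean magnitude separates static from dynamic activities; the standard
    # deviation helps distinguish, e.g., walking from standing.
    return {"mean_mag": mean, "std_mag": math.sqrt(var)}
```

Real systems add many more features (frequency-domain energy, axis correlations), but even these two already separate stationary from moving activities reasonably well.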

The percentage of elderly people in today's societies keeps growing. As a consequence, supporting older adults who are losing cognitive autonomy but wish to continue living independently at home, as opposed to being forced to live in a hospital, has become a pressing problem. Smart environments have been developed in order to provide support to elderly people or people with risk factors who wish to continue living independently in their homes, as opposed to living in institutional care. In order to be a smart environment, the house should be able to detect what the occupant is doing in terms of daily activities. It should also be able to detect possible emergency situations. Furthermore, once such a system is complete and fully operational, it should be able to detect anomalies or deviations in the occupant's routine, which could indicate a decline in his or her abilities. In order to obtain accurate results, as much information as possible must be retrieved from the environment, enabling the system to locate and track the supervised person at each moment, and to detect the position of the limbs and the objects the person interacts with or intends to interact with. Sometimes, details like gaze direction or hand gestures [1] can provide important information in the process of analyzing human activity. Thus, the supervised person must be located in a smart environment, equipped with devices such as sensors, multiple-view cameras or speakers.

Although smartphone devices are powerful tools, they are still passive communication enablers rather than active assistive devices from the user's point of view. The next step is to introduce intelligence into these platforms to allow them to proactively assist users in their everyday activities. One method of accomplishing this is by integrating situational awareness and context recognition into these devices. Smartphones represent an attractive platform for activity recognition, providing built-in sensors and powerful processing units. They are capable of detecting complex everyday activities of the user (e.g. standing, walking, biking) or the device (e.g. calling), and they are able to exchange information with other devices and systems using a large variety of data communication channels.

Mobile phone sensing is still in its infancy. There is little or no consensus on the sensing architecture for the phone. Common methods for collecting and sharing data need to be developed. Mobile phones cannot be overloaded with continuous sensing commitments that undermine the performance of the phone (e.g., by depleting battery power). It is not clear which architectural components [4] should run on the phone. Individual mobile phones collect raw sensor data from sensors embedded in the phone, and information is extracted from the sensor data by applying machine learning and data mining techniques. These operations can occur either directly on the phone or in the cloud. Where these components run could be governed by various architectural considerations, such as privacy, real-time user feedback, communication cost between the phone and the cloud, available computing resources, and sensor fusion requirements. The rest of the paper is organized as follows: Section II presents some existing methods. Section III describes important sensors used for human activity recognition. Section IV presents various challenges and applications of activity recognition. Conclusions are presented in Section V.
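To illustrate the classification stage that such a pipeline runs on extracted features, here is a hedged, stdlib-only sketch of a nearest-centroid classifier trained on feature vectors labeled by other users and then applied to a new subject, in the spirit of the user-independent training discussed in the abstract. All names are illustrative, not from the paper.

```python
import math

# Minimal nearest-centroid classifier for user-independent HAR (sketch).
def train_centroids(labelled_features):
    """labelled_features: list of (feature_vector, activity_label) pairs
    collected from other users."""
    sums, counts = {}, {}
    for vec, label in labelled_features:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    # One mean feature vector (centroid) per activity class.
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(centroids, vec):
    """Assign a new subject's feature vector to the nearest class centroid."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], vec))
```

A nearest-centroid model is crude compared to the classifiers typically used in HAR work, but it is cheap enough to run entirely on the phone, which matters given the battery constraints noted above.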
