Research · 10 April 2019

State-of-the-art report on the automatic detection of activity in the homes of elderly people for their remote assistance, and the contribution of cognitive computing.

INTRODUCTION

The Securhome Project is part of INTERREG V-A Spain-Portugal 2014-2020, the international cooperation programme financed by the European Regional Development Fund (ERDF) and approved by the European Commission in its Decision C(2015) 893 of 12 February 2015. The programme promotes development along the largest border in Europe, with a planned investment of more than €365M.
The technology needed to detect, remotely and in real time, changes in the behaviour of people assisted at home is based on the recognition of activities from information obtained from sensors. Because this is a multidisciplinary field, several approaches could frame it, such as the intelligent home, the smart home, cyber-physical systems or, from a more general perspective, Ambient Intelligence (AmI), which provides the framework for developing such recognition. Applications related to Securhome include assisted living, elderly care (Uddin, Khaksar, and Torresen 2018), health monitoring, rehabilitation and behavioural analysis.

The Centro de Innovación Experimental del Conocimiento (CEIEC) of the Universidad Francisco de Vitoria (UFV) participates in the Securhome Project through the Universidad Carlos III de Madrid (UC3M). The project aims at the "detection of behavioural changes in elderly people through non-invasive IoT systems with AI", and the participation of UC3M focuses on obtaining a sensory device for homes (DSH). The specific collaboration of the CEIEC is "to develop Deep Learning algorithms that can take advantage of the data captured by the DSH and identify the specific situations in which the carrier of the device finds themselves".

STATE OF THE ART

There are now numerous studies on the care of the elderly, especially considering that human life expectancy has increased in recent years (United Nations World Population Ageing Report 2013). This has caused the population of older people to grow, along with the number of possible health emergencies in their homes which, without due attention from a relative, caregiver or health centre, can become complicated and even cause the death of the person.

Among the work carried out to date there are studies that focus on achieving greater well-being in the daily lives of older people but that do not include any type of alert when a possible emergency situation arises. The study by (Pollack et al. 2003) proposes a system of adaptive and personalised reminders which, with the help of different sensors placed at strategic points in the house and by means of artificial intelligence (AI), decides intelligently whether or not to issue a specific reminder and whether to bring it forward or delay it (depending on the activities that the user is carrying out or will carry out). The proposal creates plans that determine the right time to issue messages, both for Basic Activities of Daily Living (BADL), such as dressing, eating, going to the bathroom, sleeping or hygiene, and for Instrumental Activities of Daily Living (IADL), such as watching television, calling someone or taking a medicine. However, while it may help plan the older person's routine, there is no warning if these planned activities are not done correctly, which could indicate the presence of a possible problem that needs attention.

On the other hand, the study by Lago (Lago, Roncancio, and Jiménez-Guarín 2019) presents a system, called LaPlace, that manages the behaviour patterns observed in the user, optimising their correct interpretation from the information obtained by sensors installed in the home, for example through the adaptive online learning algorithm TIMe. The study specifies that such adaptive learning can be used to observe changes in the person's habitual behaviour, allowing possible cognitive or physical health problems to be detected. However, it does not specify the action the system would take when it witnesses a change in the user's habitual behaviour, so it cannot alert family members when the user is in an emergency. In addition, the article does not mention the types of sensors that were used for the study.

Other studies focus on automatically alerting family members or nearby health centres to the emergencies suffered by these people, such as a fall or a change in behaviour that reveals a motor or cognitive problem, so that they can be attended to promptly, preventing their vital situation from worsening and relieving the load on emergency services. One of these is the work of Botia (Botia, Villa, and Palma 2012), which follows the same principle as the two studies described above. Botia proposes a system of sensors placed in each of the rooms of the user's house, made up of:

- Motion sensors in all rooms. 
- Pressure sensors in different furniture in the home. 
- Sensors in all the doors, to detect their opening and closing. 

In addition to alerting when a fall or another emergency is detected through abnormal behaviour, the system becomes more precise over time, as it learns the daily routine, so that false fall notifications decrease with respect to the first days. This solution degrades when the elderly person has a pet at home and at times when they receive visits.

Another similar system is proposed by (Just Checking n.d.), which provides a monitoring service without video cameras or microphones. This system makes it possible to know the activities that the elderly person is doing at home through different sensors placed in each of the rooms. The sensors used are:

- Wireless motion sensors. These are placed at different strategic points of the house to know whether the user has performed all the actions of their daily routine, for example whether they have entered the kitchen to prepare food or eat, or whether they have entered the bathroom or their bedroom at night to sleep. With the help of these sensors, the system creates an online activity chart where the caregiver can see which rooms the elderly person has visited and for how long.

- Door sensors. These are placed on each of the doors of the user's home. They have two components, a contact and a magnet, which make it possible to know when a door has been opened or closed. Combined with the motion sensors, they make it possible to detect when the user has received a visit and how long the visitor has stayed in the home, as well as when the user has left the home and for how long the house has been empty.

In addition to these sensors, a device must be placed in the home that acts as a temporary database for all the information collected by the sensors, which is then uploaded to the Just Checking servers. The system notifies the caregiver's mobile phone when the elderly person has gone to sleep, has received a visitor in the evening or at night, or has left the door open when leaving home (door notifications can be configured for each of the rooms of the home if the caregiver wishes).

The main problem with this system is that it is not intelligent, so it cannot by itself create activity patterns to detect possible strange behaviour on the part of the user, nor can it know whether the user has fallen. The person who must notice changes in the user's behaviour or immobility is the caregiver, often a close relative who, on a hard day at work, might forget to monitor the person and miss an emergency or possible problem. On the other hand, although it is true that a video camera is a very invasive device, a microphone could be used to allow voice commands, giving the user one more tool in the event of a problem, for example a rescue voice command that automatically sends a notification to the caregiver or to a nearby health centre.

Another motivation of the studies focused on helping older people in emergency situations at home is to reduce the cost of large facilities. However, some of these studies still neglect issues of great importance, such as older people's privacy: a system installed in all the rooms and on different objects of the house can generate anguish in users who feel invaded in their own homes. Related to this, the study proposed by (Principi et al. 2015) uses a device with audio sensors that is connected in a local network to all types of devices. The system allows the user, through voice commands, to trigger automatic telephone calls as a distress alert to a family member or health centre previously indicated in the configuration. The system has two modes that are activated depending on the situation of the home:

- The first mode, voice recognition with calls and distress alerts, is activated when the user is at home.
- The second mode is activated when the user has left home: the system switches to surveillance, monitoring the acoustic environment to detect unusual events and, if such a situation is detected, making an emergency call to the number indicated by the user.

The recognizer has a noise cancellation module that is capable of reducing the sounds produced by the radio or television.  

Other work focuses on the information collected by wearable devices, which can detect falls and alert family members or health centres. One of these works is described in (Pierleoni et al. 2014), in which a device is designed to be worn on the user's ankle. It is capable of sending alert messages to previously indicated phones when it detects that the elderly person has suffered a potential fall; if the person then does not get up within a period of time, a second, critical fall alert message is sent. For detection, the device combines the information from a triaxial accelerometer, a triaxial gyroscope and a triaxial magnetometer using a data fusion algorithm known as an orientation filter, with which it tracks the user's orientation in real time.

On the other hand, the work described in (Chernbumroong et al. 2013) designs a system consisting of three sensors, an accelerometer, a temperature sensor and an altimeter, since the study states that with these three sensors it is possible to classify most of the activities performed by older people. The system is housed in a common sports watch, so that users feel no difference between this device and a normal accessory for telling the time. The data obtained are processed to create behaviour patterns, classifying the movements made by the user into 9 different activities of daily life. Neural networks are used for the classification and recognition of activities. The method can detect various daily activities, including BADLs, such as eating, brushing teeth, walking or sleeping, and IADLs, such as washing dishes, ironing, sweeping the floor and watching television. However, the study only recognises activities, without detecting falls or other emergencies. Moreover, these studies design wearable devices, which need to be worn permanently and are therefore not operative if they are not worn. In addition, people do not want to feel identified as dependent when they go out, and they may be uncomfortable because they are not accustomed to wearing such devices. In fact, there are devices and home systems on the market, but their acceptance is limited by the perception of being invasive. Another disadvantage is that, being removable devices, they are forgotten or simply not used.

Another product currently on the market is ENEST (Nestwork n.d.), a security system worn as a bracelet, which allows:

- Talking and listening: by pressing a button it is possible to communicate quickly with a family member or caregiver.

- Establishing geo-security zones: the device can be configured to delimit a safe geographical area for the user, so that the relative or caregiver receives an alert when the user has crossed its boundary. This is of great importance if, for example, the user suffers from some cognitive problem.

- Detecting falls: an alert is sent to the family member or caregiver if the user has received an accidental impact or suffered a fall.

- Setting an inactivity limit time: a specific time period can be configured as the maximum inactivity limit, so that the device sends an alert when that time is exceeded. This is useful for knowing whether a fall was critical, because the person has not been able to get up, and also if the elderly person has fainted, among many other emergencies.

- Setting a maximum speed: the maximum speed at which the user can move can be configured, which is important if, for example, the user should not drive because of some current condition. When this speed limit is exceeded, an alert or notification is sent to the family member or caregiver.

The main problem with this tool is that it is a wearable device: the person must put it on every day as an accessory and may forget to do so; if they are not accustomed to using such accessories they may find it uncomfortable, and if they are accustomed to wearing bracelets or watches they might not want to use it every day because they also want to wear their own accessories. Moreover, this device cannot notice whether the user has changed their daily routine or behaves unusually in their own home, which in most cases is the usual environment of older people. In addition, the "downtime limit" functionality provides a lot of information if a fall alert has also been received, but on its own this notification can generate many false alarms, as the person may simply be watching television at home. DSH is able to combine that information with that obtained by its different integrated sensors, which allows the device to refine the user's behaviour pattern.

The study described in (Joshi and Nalbalwar 2017) presents a system composed of a single camera that collects information about the user's life; like the DSH home assistant, it is intended to be placed in the room the user occupies most. The system has a process that analyses the information obtained by the camera and determines whether the elderly user has suffered a fall. It does so by detecting four characteristics: the aspect ratio, the orientation angle, the centre of mass and the Hu moment invariants of the images. If the system considers that the user has suffered a fall, it notifies the previously indicated persons by e-mail, attaching part of the recorded video and screen captures; it is also possible to watch a live transmission from the room. This study allows the caregivers of an elderly person to be alerted when he or she has suffered a fall. However, it is not capable of observing emergencies outside the visualised area, as the system does not take into account the time when the user is outside its field of vision. On the other hand, because it uses a video camera, this system will not have the desired acceptance, considering that many users regard the camera as a very invasive device.

The solution proposed in this study, DSH, is a device designed to look aesthetically like a simple decorative object in the home while internally containing all the sensors necessary to recognise the patterns of the user's daily life. Artificial intelligence will carry out training, continuously improving the identification of the stored life patterns. The device must be placed in the room where the user performs the most activities or spends the most time. In principle, it has been considered that the DSH assistant should be placed in the living room, as many elderly people spend much of their time watching television there.

The DSH home assistant, on the other hand, will be a highly configurable device: it will distinguish the presence of a pet, as well as planned family visits. It will also be possible to set days of absence for the elderly user, the type of notification preferred by family members, the family contacts, or the health centre to contact in emergency situations. Another problem discussed above is the size of the systems proposed in the studies mentioned, which may require a large number of sensors, making them very expensive, lengthening installation and being uncomfortable for older people. DSH, on the other hand, is intended to be a single device that does not stand out aesthetically in the domestic environment and only needs to be plugged into the mains, making it cheaper and less invasive.

The DSH assistant does not include a video camera, since the integrated sensors are sufficient to identify the user's routine patterns, which makes it possible to infer that the person has a problem when strange variations appear in the actions carried out throughout the day; even if the problem occurs in another room, the system could detect it on observing that the user has gone a long time without entering the monitored room.

All these aspects (audio) are considered by the DSH device, but in addition the device proposed in our project includes other sensors, so that it not only works by voice commands but also perceives changes in the person's daily behaviour: if the person is unable to pronounce the voice command, the emergency can be identified through other signals.

The DSH device will incorporate a sound sensor, a motion sensor, a temperature sensor and an infrared sensor to detect whether the person is watching television.
Likewise, it will have an artificial intelligence data analysis process that will create a pattern of the user's behaviour, so that if, for example, the user usually enters the room at 10:00 in the morning and by 10:30 or 11:00 has not appeared there, a possible-alert message is sent; in the same way, whenever the DSH device witnesses any change in the user's usual behaviour, it sends a notification to the relatives. A minimal sketch of this kind of rule is shown below.
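The following Python sketch illustrates the expected-entry rule just described. It is purely hypothetical: the names EXPECTED_ENTRY, GRACE_PERIOD and should_alert, and the 30-minute margin, are illustrative assumptions, not part of the DSH specification.

```python
from datetime import datetime, timedelta

# Hypothetical parameters of the expected-entry rule (illustrative values only).
EXPECTED_ENTRY = datetime.strptime("10:00", "%H:%M").time()
GRACE_PERIOD = timedelta(minutes=30)

def should_alert(last_entry_today, now):
    """Return True if the user has not entered the monitored room
    within the grace period after the usual entry time."""
    if last_entry_today is not None:
        return False                      # the user already entered the room today
    deadline = datetime.combine(now.date(), EXPECTED_ENTRY) + GRACE_PERIOD
    return now > deadline

# Example: no entry registered and it is already 10:45 -> send a possible-alert message.
print(should_alert(None, datetime(2019, 4, 10, 10, 45)))  # True
```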
 
DATA PROCESSING

Recognition systems must be able to classify (basic) human activities of everyday life (Cornacchia et al. 2017), whether bodily, such as running, walking, sitting, standing, falling, jumping, lying down or climbing stairs, or interactive, such as hygiene, household cleaning, preparing food or office work.

SENSORS

In order to detect these activities, the data coming from the body or the environment are sampled with sensors, normally between 20 Hz and 50 Hz. The mobile and body-worn sensors can be:

- Mobile devices, which can add contextual location information. This group includes A/M/G sensors (accelerometers, gyroscopes and magnetometers), GPS, the microphone or the camera.

- Wearable devices worn on the body, commonly used to monitor sports activity.

- Specific medical sensors, which are placed on the body to measure biological signals of medical interest, such as body thermometers, pulse counters, blood pressure monitors, oximeters, glucometers, the electrocardiograph (ECG), electromyograph (EMG), electrooculograph (EOG) or electroencephalograph (EEG).

On the other hand, environmental sensors are located in the environment where the person carries out their daily life. Their advantage is that they do not bother the person, since they are not worn on the body; on the other hand, their signals are more affected by noise. Some examples (Acampora et al. 2013):

- Thermometers

- Barometers

- Microphones

- Cameras arranged according to ambient intelligence criteria

- Passive Infrared (PIR), which detects motion

- Active infrared, which also allows identification

- Radio Frequency Identifiers (RFID), to identify and locate objects

- Pressure sensors, which go in chairs, carpets...

- Intelligent tiles, which detect the pressure on the floor

- Magnetic switches, which detect cabinet door openings and closings

- Ultrasounds, which detect movement.

SIGNAL PREPROCESSING

Sensor signals are preprocessed to fill in null values, reduce noise and obtain their relevant characteristics (Nweke et al. 2019). The methods used in data cleansing are: 

- Tree imputation (i-Tree)

- Multi-matrix factorization

- k nearest neighbours (k-NN), sketched after this list

- Discarding instances
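As an illustration of one of these cleansing steps, the following minimal sketch fills missing sensor readings with k-nearest-neighbour imputation using scikit-learn's KNNImputer; the data and the choice of 2 neighbours are synthetic assumptions.

```python
import numpy as np
from sklearn.impute import KNNImputer

# Synthetic sensor matrix with two missing readings (np.nan).
X = np.array([[1.0, 2.0, np.nan],
              [3.0, 4.0, 3.5],
              [np.nan, 6.0, 5.0],
              [8.0, 8.0, 7.0]])

# Each missing value is replaced by the mean of its 2 nearest complete neighbours.
imputer = KNNImputer(n_neighbors=2)
X_clean = imputer.fit_transform(X)
print(X_clean)
```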

Techniques are also used to reduce signal noise, such as frequency-domain transformation or the empirical wavelet transform, high-pass or low-pass filtering, or efficient filters such as the Laplace, Kalman or Gaussian filters.
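A minimal low-pass filtering sketch follows, assuming a 50 Hz sensor stream and an illustrative 5 Hz cut-off (both values are assumptions, not project parameters).

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50.0        # assumed sampling rate (Hz), within the 20-50 Hz range mentioned above
cutoff = 5.0     # illustrative cut-off: keep components below 5 Hz

# 4th-order Butterworth low-pass filter (Wn is the normalised cut-off frequency).
b, a = butter(N=4, Wn=cutoff / (fs / 2), btype="low")

t = np.arange(0, 10, 1 / fs)
noisy = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.random.randn(t.size)  # 1 Hz motion + noise
filtered = filtfilt(b, a, noisy)   # zero-phase filtering of the noisy signal
```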

Preprocessing also employs time segmentation techniques which, along with the selection of the window width, allow interesting features to be extracted; examples are sliding, event-based or energy-based windows.
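A minimal sliding-window segmentation sketch; the window width (100 samples, i.e. 2 seconds at 50 Hz) and 50% overlap are illustrative choices.

```python
import numpy as np

def sliding_windows(signal, width, step):
    """Segment a 1-D sensor signal into fixed-width, possibly overlapping windows."""
    return np.array([signal[i:i + width]
                     for i in range(0, len(signal) - width + 1, step)])

# Example: 2-second windows (100 samples at 50 Hz) with 50% overlap.
x = np.random.randn(1000)
windows = sliding_windows(x, width=100, step=50)
print(windows.shape)  # (19, 100)
```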

To finish the treatment, the data must be made manageable, so their dimensionality is reduced to obtain subsets of variables that, in addition, increase the precision of the classification (Nweke et al. 2019). The methods usually used are:

- Principal Component Analysis (PCA), sketched after this list

- Empirical Cumulative Distribution Function (ECDF)

- Independent Component Analysis (ICA)

- Linear Discriminant Analysis (LDA)
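A minimal PCA sketch on a synthetic feature matrix, keeping the components that explain 95% of the variance (an illustrative threshold).

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic feature matrix: 200 windows x 60 extracted features.
features = np.random.randn(200, 60)

pca = PCA(n_components=0.95)          # keep enough components for 95% of the variance
reduced = pca.fit_transform(features)
print(reduced.shape, pca.explained_variance_ratio_.sum())
```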

RELEVANT CHARACTERIZATION OF THE DATA

Once the signals have been pre-processed, their characteristics are extracted, either with traditional, hand-crafted methods or with deep learning, which can handle large amounts of data and improve accuracy. The traditional methods analyse the signal in the following domains (a minimal feature-extraction sketch is given after the list):

- Time domain

- Frequency domain

- Hilbert-Huang Domain (HHT) 
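A minimal sketch of traditional, hand-crafted feature extraction for a single window, combining time-domain statistics with spectral energy from the FFT; the chosen features are illustrative.

```python
import numpy as np

def handcrafted_features(window, fs=50.0):
    """Extract time-domain statistics and frequency-domain energy from one window."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1 / fs)
    return {
        "mean": np.mean(window),                                  # time domain
        "std": np.std(window),                                    # time domain
        "max": np.max(window),                                    # time domain
        "spectral_energy": np.sum(spectrum ** 2) / len(window),   # frequency domain
        "dominant_freq": freqs[np.argmax(spectrum)],              # frequency domain
    }

print(handcrafted_features(np.random.randn(100)))
```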

On the other hand, deep learning reduces the time spent and the dependence on traditional methods. It uses its multiple layers to differentiate between elementary and high-level activities. Activity recognition with deep learning can be generative (unsupervised) or discriminative (supervised). As generative models the following are used:

- Restricted Boltzmann Machine (RBM), which gives a robust representation of the characteristics but is computationally complex, which makes it difficult to optimise.

- Deep autoencoder (Wang et al. 2016), which reduces dimensionality in a way that is robust and invariant to changes in data distributions, but is not very scalable, requires many sampling steps, is difficult to optimise and does not work well with non-linear characteristics.

- Sparse coding, which reduces dimensionality well and extracts robust features, but is difficult to implement correctly.

And as discriminative models the following are used:

- Convolutional Neural Network (CNN), which plays a very important role in feature extraction (Ignatov 2018), but requires extensive hyperparameter tuning and a large number of samples to minimise overfitting; a minimal 1-D CNN sketch is given after this list.

- Recurrent Neural Network (RNN), very common for temporal modelling and sensor sequences, but difficult to manage and potentially requiring too many parameters to update. A particular example is the Long Short-Term Memory (LSTM) network, which can improve performance by between 4% and 9%, although performance deteriorates as gradients vanish or explode (Ordóñez et al. 2016).
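As a discriminative example, a minimal 1-D CNN sketch in Keras; the window length (128 samples), the number of channels (3 accelerometer axes) and the number of activity classes (6) are assumptions for illustration.

```python
import tensorflow as tf

# Assumed input: windows of 128 samples x 3 accelerometer axes; 6 activity classes.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu", input_shape=(128, 3)),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(6, activation="softmax"),   # one output per activity class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```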

CLASSIFICATION OF ACTIVITIES

Finally, the signals are classified using machine learning techniques: 

- Support vector machines (SVM)

- Decision trees, or their combination into a random forest (a classification sketch with a random forest follows this list)

- Grouping with k nearest neighbours (k-NN) or with K-means

- Hidden Markov Models (HMM)

- Gaussian Mixture Models (GMM)

- Self-organizing maps or Kohonen Networks (SOM)

- Deep learning neural models: Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), LSTM (Long Short-Term Memory) or autoencoders
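A minimal classification sketch with one of these techniques, a random forest over synthetic feature vectors and labels (all data here are placeholders).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic placeholders: 300 windows already reduced to 20-feature vectors, 5 labels.
X = np.random.randn(300, 20)
y = np.random.randint(0, 5, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
```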

TECHNICAL TOOLS

The tools typically used to build the models are (Nweke et al. 2019):  

- Microsoft Cognitive Toolkit

- Deeplearning4J, for Java

- Matlab

- Python libraries: TensorFlow, Theano, Keras, Torch or PyTorch.

COMBINATION TECHNIQUES (FUSION)

Activity recognition has improved naturally with the combination of information, producing greater robustness, generalisation, accuracy, differentiation, complementarity and less noise (Onofri et al. 2016). This leads to greater reliability and less uncertainty in health monitoring and in the identification of everyday activities (Nweke et al. 2019). Fusion can occur at three levels: combining sensors; combining data characteristics, either intuitively, by applying transformations to other domains or through deep learning techniques; and combining classifiers.

COMBINATION OF SENSORS AND APPLICATION OF VARIOUS PREPROCESSING TECHNIQUES

It is normally used to increase reliability and reduce noise in health monitoring or ADL recognition. On the one hand, simultaneous data can be obtained from different types of sensors; on the other, the incoming records can be cleaned to obtain quality data.

COMBINATION OF DIFFERENT TYPES OF SENSORS

At a low level, signals of the same nature (homogeneous) or from different types of sensors (heterogeneous) can be combined in real time, and probabilistic methods can be used to refine the results. The sensors can be physically combined according to their modality, or fusion methods can be applied. The basic sensors that are usually fused are the inertial ones, such as A/M/G (accelerometer, magnetometer, gyroscope), and the multimodal ones, such as biological signals, environment, objects, vision and location. The current trend is to combine several inertial sensors with several multimodal ones (Nweke et al. 2019).

COMBINATION OF RAW DATA CLEANSING METHODS

In addition to combining sensor information, the following combinations of cleaning methods are used: 

- Application of the weighted average and least squares, which makes it possible to correct a potentially inadequate positioning or orientation of the devices.

- Use of a Kalman filter, to correct the signal using the previous temporal values, although it is only valid for linear or Gaussian cases. The Kalman filter is good for merging accelerometer and gyroscope data, and modifications are used such as the Extended Kalman Filter, which is very efficient, the quaternion-based Extended Kalman Filter, or unscented Rao-Blackwellization (a simpler fusion sketch with a complementary filter is given after this list).

- Dempster-Shafer theory, which characterises the imperfections and drifts of the sensor before interpreting its data.

- Epidemic routing, which reduces energy consumption (vital in this context) and transmission delay.

- Graph theory, which combines with activity on social networks or with information from the person's medical history.

- Deep canonical correlation, which learns complex non-linear transformations of heterogeneous data obtaining practically linear correlations.
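As a simpler alternative to the Kalman variants mentioned above, the following sketch fuses accelerometer and gyroscope readings with a complementary filter; the sampling period, the smoothing factor and the synthetic readings are all assumptions for illustration.

```python
import numpy as np

def complementary_filter(acc_angle, gyro_rate, dt=0.02, alpha=0.98):
    """Fuse accelerometer tilt angles (deg) with gyroscope rates (deg/s):
    trust the integrated gyroscope in the short term, the accelerometer in the long term."""
    angle = acc_angle[0]
    fused = []
    for a, g in zip(acc_angle, gyro_rate):
        angle = alpha * (angle + g * dt) + (1 - alpha) * a
        fused.append(angle)
    return np.array(fused)

# Synthetic readings at 50 Hz: noisy accelerometer tilt near 10 deg, static gyroscope.
acc = 10.0 + np.random.randn(100)
gyr = np.zeros(100)
print(complementary_filter(acc, gyr)[-1])   # stays close to 10 deg despite the noise
```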

DATA CONCENTRATION BY CHARACTERISTICS

Data sources from different sensors are combined using machine learning supported by traditional techniques (hand-crafted features) or by deep learning.

EXTRACTION OF FEATURES WITH TRADITIONAL TECHNIQUES

The extraction of relevant features is essential for the recognition of human activity. Together with dimensionality reduction, it minimises classification error and identifies the set of variables that best discriminates the activity:

- By the type of transformation of the variable: in the time domain (central statistical and dispersion values) and in the frequency domain (spectral energy with the Fourier Transform (FFT) or the Cosine Transform), both good for linear problems; the Hilbert-Huang transform is better for non-linear ones.

- By selection of variables: by means of filters, wrapper algorithms, which depend on their classifiers, or embedded methods. Methods such as the Kernel Fisher discriminant, minimum redundancy-maximum relevance, correlation or ReliefF are used, as well as more recent ones such as diversified forward-backward selection with logistic regression, power-aware selection or the elitist binary Wolf search algorithm (EBWSA). A minimal filter-based selection sketch is given after this list.

- By applying machine learning: SVM, k-NN, ANN, decision trees, random forest, HMM, Naïve Bayes, multikernel learning, Gaussian kernels, linear discriminant classifiers or K-means clustering. HMM and trees are used for hierarchical recognition of activities (low and high level); K-means is used to group similar activities before their integration into high-level activities.
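A minimal filter-based selection sketch, keeping the 10 features with the highest mutual information with the activity label; the data and the value of k are synthetic, illustrative choices.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic data: 300 windows x 40 candidate features, 5 activity labels.
X = np.random.randn(300, 40)
y = np.random.randint(0, 5, size=300)

selector = SelectKBest(score_func=mutual_info_classif, k=10)  # keep the 10 best features
X_selected = selector.fit_transform(X, y)
print(X_selected.shape, selector.get_support(indices=True))
```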

FEATURE EXTRACTION WITH DEEP LEARNING

The most common combination is CNN with the following (a minimal CNN+LSTM sketch is given after this list):

- RNN, to establish dependencies between space and time by combining sensors, or to extract characteristics invariant to displacement.

- LSTM, for the recognition of several concurrent activities, although it consumes many resources, which makes it difficult to use in real-time applications.

- Bidirectional LSTM with multimodal sensors for medical monitoring

- Autoencoder, used for fall detection using body sensors

- RBM, to extract characteristics invariant to the individual's movement and to reduce the size of the examples, although it has been applied to scarce datasets and with a single sensor, which reduces generalisation.

- Deep belief network, used for medical pre-diagnosis and specifically for tele-assistance.

- Gated Recurrent Unit (GRU) RNN, for sensor analysis and activity tracking, although it consumes many resources, so it is not advisable in real time.
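A minimal Keras sketch of the CNN + LSTM combination described above; the window length (128 samples), the number of fused channels (9, e.g. A/M/G triads) and the number of activity classes (6) are assumptions for illustration.

```python
import tensorflow as tf

# Assumed input: windows of 128 samples x 9 fused sensor channels; 6 activity classes.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu", input_shape=(128, 9)),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.LSTM(64),                         # temporal dependencies across the window
    tf.keras.layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```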

Deep learning is also used with the support of traditional techniques, which reduces the computational load, although these are not efficient at extracting temporal characteristics:

- CNN, for activity recognition with mobile sensors

- LSTM and mixture density network (MDN), which solve the problem of having few examples to train with, since they generate a synthetic dataset which, to distinguish it from the real one, uses heuristic averages.

- Sparse-coding convolutional network, with sparsification of the fully connected layers and a reduced kernel to offload the working memory, although sparse coding is very difficult to use.

- Deep belief networks combined with sparse coding have medical applications for the elderly and behave in the same way as the previous approach.

Transfer learning is used to shorten training and reduce dependence on how sensors are placed. 

ASSEMBLY OF CLASSIFIERS

The elementary classifiers that are normally used in ensembles are decision trees, SVM, HMM, ANN and LDA, which are combined according to the following design methods:

- Diversification of models, which achieves great differentiation and increases the reliability of prediction and generalisation. The only problem lies in the choice of the classifier.

- Manipulation of input characteristics, which ensures dependencies between the classifiers used and is faster because it reduces the input space, but runs the risk of including irrelevant characteristics and suffers from the fragmentation problem, especially if there are few instances.

- Random initialization, provides differentiation in a non-linear spatial distribution, but requires computational resources for parameter updates

- Data partitioning with bagging, boosting or cross-validation, which, by applying different hypotheses, allows greater differentiation and consistency and less uncertainty. It is unsuitable for many dimensions or for use in isolation.

And this is done according to the following assembly criteria: 

- Combining the class decisions by consensus (majority voting) or by weighted consensus. Widely used, although it offers no more guarantees than using a single classifier. A minimal soft-voting sketch is given after this list.

- Trainable fusion, with Dempster-Shafer theory, weighted sum, localised templates or a random committee. Optimisation improves accuracy and reduces uncertainty, but the outputs can be confused.

- Fusion with support functions, using a posteriori probabilities, Naïve Bayes, aggregation of means, selection of the preferred classifier or the behaviour-knowledge space. Efficient and precise, but it imposes very restrictive conditions on the classifiers, which makes it difficult to implement in practice.
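A minimal sketch of classifier assembly by weighted consensus (soft voting) over three of the elementary classifiers mentioned above, on synthetic data; the weights and dataset parameters are purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic data standing in for activity feature vectors and labels.
X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)

# Soft voting combines the posterior probabilities of the base classifiers;
# the weights implement a (purely illustrative) weighted consensus.
ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(n_estimators=100)),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    voting="soft",
    weights=[2, 2, 1],
)
print(ensemble.fit(X, y).score(X, y))
```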

AMBIENT INTELLIGENCE

On the other hand, Ambient Intelligence (AmI) is a new paradigm that expands people's possibilities through "digital environments" that perceive, adapt and respond to their needs, habits, gestures or emotions. AmI takes advantage of context information, is personalised for each individual, anticipates and adapts to needs, is present everywhere and is non-invasive at the body level.

Normally two types of communication infrastructure are provided for AmI sensors when building a smart environment (Acampora et al. 2013): 

- Body Area Network (BAN, by analogy with LAN), composed of sensors on clothing or skin. Vital signs are monitored and used to improve health and quality of life. Communication is established in three layers: intra-sensor, for distances of around 2 metres; inter-sensor, which communicates with access points; and beyond the BAN, connecting to any point in the metropolitan area through a device that acts as a gateway.

- Wireless Dense/Mesh Sensor Networks (WMSN), consisting of sensors located in everyday objects and places, such as clothing, furniture, etc. The sensors can also act as relays for other sensors and are connected via gateways, access points or mobile or stationary nodes.

Structures are now being developed for more convenient sensor systems, such as epidermal sensors and electromechanical microsensors (MEMS) of the A/G/M type, CO2 detectors, gas sensors or medical sensors.

CHALLENGES TO THE RECOGNITION OF THE ACTIVITY IN HOMES FOR TELECARE

Securhome seeks a technological differentiation that provides a robust and reliable solution. In order to do so, it has to face the following challenges common to research in this area: 

- It is necessary to increase current robustness, generalization and reliability, as well as to reduce uncertainty and increase the precision of classification techniques.

- The massive data collection and the tedious annotation process make it necessary to automate them in order to achieve these objectives.

- Reduce the excessive invasive load of body monitoring devices.

- Video and environmental sensors work in fixed environments which makes them unsuitable for activity recognition.

- Video also invades privacy, locates people and captures collateral information, which is not desirable.

- Environmental sensors are greatly affected by noise and this must be resolved.

- The possible excessive exposure of monitored persons to radiation from some devices.

- It is necessary to study further the fusion between multimodal sensors with other contexts such as social networks or with details of a high level of abstraction. 

Regarding the challenges that deep learning in particular must face, the following stand out:

- The use of deep learning online could bring great benefits in improving detection; however, sensors, especially mobile ones, only use models that have already been trained offline, since communication with the server and local computing on the device are reduced to a minimum to save energy.

- Deep learning requires better accuracy to recognise the activity, so its training needs huge amounts of data. The widespread deployment of these applications on infrastructures such as sensor grids or the Internet of Things (IoT) makes it easier to use crowdsourcing, allowing massive capture of data from multiple individuals to better train the model. Likewise, the interconnection facilitates the automatic transfer of information between different domains (deep transfer learning).

- More flexible models are needed to recognize high-level activities, such as combining sensors or merging information with context.

- There is a new line of research called light deep learning that combines deep learning with traditional techniques, or standard neural networks. 

REFERENCES

Acampora, Giovanni, Diane J. Cook, Parisa Rashidi, and Athanasios V. Vasilakos. 2013. “A Survey on Ambient Intelligence in Healthcare.” Proceedings of the IEEE 101(12):2470–94. Retrieved February 8, 2019 (http://ieeexplore.ieee.org/document/6579688/). 

Botia, Juan A., Ana Villa, and Jose Palma. 2012. “Ambient Assisted Living System for In-Home Monitoring of Healthy Independent Elders.” Expert Systems with Applications 39(9):8136–48. Retrieved February 8, 2019 (https://linkinghub.elsevier.com/retrieve/pii/S095741741200173X). 

Chernbumroong, Saisakul, Shuang Cang, Anthony Atkins, and Hongnian Yu. 2013. “Elderly Activities Recognition and Classification for Applications in Assisted Living.” Expert Systems with Applications 40(5):1662–74. Retrieved February 8, 2019 (https://linkinghub.elsevier.com/retrieve/pii/S0957417412010585). 

Cornacchia, Maria, Koray Ozcan, Yu Zheng, and Senem Velipasalar. 2017. “A Survey on Activity Detection and Classification Using Wearable Sensors.” IEEE Sensors Journal 17(2):386–403. Retrieved February 8, 2019 (http://ieeexplore.ieee.org/document/7742959/). 

Ignatov, Andrey. 2018. “Real-Time Human Activity Recognition from Accelerometer Data Using Convolutional Neural Networks.” Applied Soft Computing 62:915–22. Retrieved February 8, 2019 (https://www.sciencedirect.com/science/article/pii/S1568494617305665). 

Joshi, Nirmala B. and S. L. Nalbalwar. 2017. “A Fall Detection and Alert System for an Elderly Using Computer Vision and Internet of Things.” Pp. 1276–81 in 2017 2nd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT). IEEE. Retrieved February 8, 2019 (http://ieeexplore.ieee.org/document/8256804/). 

Just Checking. n.d. “Hello, We’re Just Checking.” Retrieved February 8, 2019 (https://justchecking.co.uk/about-us). 

Lago, Paula, Claudia Roncancio, and Claudia Jiménez-Guarín. 2019. “Learning and Managing Context Enriched Behavior Patterns in Smart Homes.” Future Generation Computer Systems 91:191–205. Retrieved February 8, 2019 (https://www.sciencedirect.com/science/article/pii/S0167739X18307180). 

Nestwork. n.d. “Solución Personal Móvil de Localización y Emisión de Alertas.” Retrieved February 8, 2019 (http://www.nestwork.eu/que-es-enest/). 

Nweke, Henry Friday, Ying Wah Teh, Ghulam Mujtaba, and Mohammed Ali Al-garadi. 2019. “Data Fusion and Multiple Classifier Systems for Human Activity Detection and Health Monitoring: Review and Open Research Directions.” Information Fusion 46:147–70. Retrieved February 8, 2019 (https://www.sciencedirect.com/science/article/pii/S1566253518304135). 

Onofri, Leonardo, Paolo Soda, Mykola Pechenizkiy, and Giulio Iannello. 2016. “A Survey on Using Domain and Contextual Knowledge for Human Activity Recognition in Video Streams.” Expert Systems with Applications 63:97–111. Retrieved February 8, 2019 (https://www.sciencedirect.com/science/article/pii/S0957417416302913). 

Ordóñez, Francisco Javier, and Daniel Roggen. 2016. “Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.” Sensors 16(1):115. Retrieved February 8, 2019 (http://www.mdpi.com/14248220/16/1/115).

Pierleoni, Paola, Alberto Belli, Lorenzo Palma, Luca Pernini, and Simone Valenti. 2014. “A Versatile Ankle-Mounted Fall Detection Device Based on Attitude Heading Systems.” Pp. 153–56 in 2014 IEEE Biomedical Circuits and Systems Conference (BioCAS) Proceedings. IEEE. Retrieved February 8, 2019 (http://ieeexplore.ieee.org/document/6981668/).

Pollack, Martha E. et al. 2003. “Autominder: An Intelligent Cognitive Orthotic System for People with Memory Impairment.” Robotics and Autonomous Systems 44(3–4):273–82. Retrieved February 8, 2019 (https://www.sciencedirect.com/science/article/pii/S0921889003000770). 

Principi, Emanuele, Stefano Squartini, Roberto Bonfigli, Giacomo Ferroni, and Francesco Piazza. 2015. “An Integrated System for Voice Command Recognition and Emergency Detection Based on Audio Signals.” Expert Systems with Applications 42(13):5668–83. Retrieved February 8, 2019 (https://www.sciencedirect.com/science/article/pii/S0957417415001438). 

Uddin, Md Zia, Weria Khaksar, and Jim Torresen. 2018. “Ambient Sensors for Elderly Care and Independent Living: A Survey.” Sensors (Basel, Switzerland) 18(7). Retrieved February 8, 2019 (http://www.ncbi.nlm.nih.gov/pubmed/29941804). 

Wang, Aiguo, Guilin Chen, Cuijuan Shang, Miaofei Zhang, and Li Liu. 2016. “Human Activity Recognition in a Smart Home Environment with Stacked Denoising Autoencoders.” Pp. 29–40 in. Springer, Cham. Retrieved February 8, 2019 (http://link.springer.com/10.1007/978-3-319-47121-1_3). 
 

 
