
    A novel monitoring system for fall detection in older people

    Indexing: Scopus. This work was supported in part by CORFO - CENS 16CTTS-66390 through the National Center on Health Information Systems; in part by the National Commission for Scientific and Technological Research (CONICYT) through the Program STIC-AMSUD 17STIC-03, "MONITORing for ehealth", and FONDEF ID16I10449, "Sistema inteligente para la gestión y análisis de la dotación de camas en la red asistencial del sector público"; and in part by MEC80170097, "Red de colaboración científica entre universidades nacionales e internacionales para la estructuración del doctorado y magister en informática médica en la Universidad de Valparaíso". The work of V. H. C. De Albuquerque was supported by the Brazilian National Council for Research and Development (CNPq) under Grant 304315/2017-6.

    Each year, more than 30% of people over 65 years old suffer a fall. Falls can cause physical and psychological damage, especially for those who live alone and are unable to get help. Several studies in this field have aimed to alert caregivers to potential falls of older people by using different types of sensors and algorithms. In this paper, we present a novel non-invasive monitoring system for fall detection in older people who live alone. Our proposal uses very-low-resolution thermal sensors to classify a fall and then alert the care staff. We also analyze the performance of three recurrent neural networks for fall detection: long short-term memory (LSTM), gated recurrent unit (GRU), and bidirectional LSTM (Bi-LSTM). As with many learning algorithms, we performed a training phase using different test subjects. After several tests, we observe that the Bi-LSTM approach outperforms the other techniques, reaching 93% accuracy in fall detection. We believe the bidirectional design of Bi-LSTM gives excellent results because each timestep's representation is informed by both prior and subsequent information, in contrast to the unidirectional LSTM and GRU. Information obtained using this system does not compromise the user's privacy, which constitutes an additional advantage of this alternative. © 2013 IEEE. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=842305
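    The bidirectional idea described above can be sketched numerically: a toy recurrent cell (a plain tanh RNN standing in for the paper's LSTM/GRU cells; sequence length, frame size, and hidden width are illustrative assumptions, not the paper's sensor resolution) is run over a frame sequence in both directions, and the two hidden-state streams are concatenated so the classifier at time t sees context before and after a candidate fall event.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a sequence of flattened low-resolution thermal frames.
    T, D, H = 10, 64, 16          # timesteps, frame size, hidden width (illustrative)
    frames = rng.normal(size=(T, D))

    # One shared recurrent cell; LSTM/GRU cells differ only in the gating inside `step`.
    Wx = rng.normal(scale=0.1, size=(D, H))
    Wh = rng.normal(scale=0.1, size=(H, H))

    def step(x, h):
        return np.tanh(x @ Wx + h @ Wh)

    def run(seq):
        h = np.zeros(H)
        outs = []
        for x in seq:
            h = step(x, h)
            outs.append(h)
        return np.stack(outs)

    # Forward pass sees frames 0..t; backward pass sees frames t..T-1.
    h_fwd = run(frames)
    h_bwd = run(frames[::-1])[::-1]

    # The bidirectional representation concatenates both directions.
    h_bi = np.concatenate([h_fwd, h_bwd], axis=1)   # shape (T, 2*H)
    ```

    A fall/no-fall classifier head would then read `h_bi` instead of a single-direction hidden state, which is the structural difference the abstract credits for Bi-LSTM's edge over LSTM and GRU.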

    Application-aware optimization of Artificial Intelligence for deployment on resource-constrained devices

    Artificial intelligence (AI) is changing people's everyday life. AI techniques such as deep neural networks (DNNs) rely on heavy computational models, which are in principle designed to be executed on powerful hardware platforms, such as desktop or server environments. However, the increasing need to apply such solutions in everyday life has encouraged research into methods that allow their deployment on embedded, portable, stand-alone devices, such as mobile phones, which have relatively little memory and computational power. Such methods target both the development of lightweight AI algorithms and their acceleration through dedicated hardware. This thesis focuses on the development of lightweight AI solutions, with attention to deep neural networks, to facilitate their deployment on resource-constrained devices. Focusing on the computer vision field, we show that by combining the self-learning ability of deep neural networks with application-specific knowledge, in the form of feature engineering, it is possible to dramatically reduce the total memory and computational burden, thus allowing deployment on edge devices. The proposed approach is intended to be complementary to existing application-independent network compression solutions. Three main DNN optimization goals have been considered: increasing speed and accuracy, allowing training at the edge, and allowing execution on a microcontroller. For each of these, we deployed the resulting algorithm to the target embedded device and measured its performance.
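    The core idea of trading raw-pixel input for a compact engineered descriptor, so the downstream network shrinks, can be illustrated with a simplified, HOG-like gradient-orientation histogram (a hypothetical stand-in; the thesis's actual feature set is not specified in this abstract):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def orientation_histogram(img, bins=9):
        """Hand-engineered feature: a global gradient-orientation histogram,
        magnitude-weighted (a simplified, HOG-like descriptor)."""
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.arctan2(gy, gx) % np.pi          # orientations folded to [0, pi)
        hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
        return hist / (hist.sum() + 1e-8)          # normalize to sum to 1

    # A 64x64 image: a dense layer on raw pixels needs 4096 weights per unit;
    # on the engineered descriptor it needs only 9.
    img = rng.normal(size=(64, 64))
    feat = orientation_histogram(img)

    # Tiny classifier head operating on the compact feature vector.
    W, b = rng.normal(scale=0.1, size=(9, 2)), np.zeros(2)
    logits = feat @ W + b
    ```

    The design choice mirrors the thesis's argument: application-specific features do part of the representational work up front, so the learned model, and hence its memory and compute footprint, can be far smaller.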

    TractorEYE: Vision-based Real-time Detection for Autonomous Vehicles in Agriculture

    Agricultural vehicles such as tractors and harvesters have for decades been able to navigate automatically and more efficiently using commercially available products such as auto-steering and tractor-guidance systems. However, a human operator is still required inside the vehicle to ensure the safety of the vehicle and, especially, of its surroundings, such as humans and animals. For fully autonomous vehicles to be certified for farming, computer vision algorithms and sensor technologies must detect obstacles with performance equivalent to or better than human level. Furthermore, detection must run in real time to allow vehicles to actuate and avoid collisions.

    This thesis proposes a detection system (TractorEYE), a dataset (FieldSAFE), and procedures to fuse information from multiple sensor technologies to improve obstacle detection and to generate a map. TractorEYE is a multi-sensor detection system for autonomous vehicles in agriculture. It consists of three hardware-synchronized and registered sensors (stereo camera, thermal camera, and multi-beam lidar) mounted on/in a ruggedized, water-resistant casing. Software has been developed to run a total of six detection algorithms (four for the RGB camera, one for the thermal camera, and one for the multi-beam lidar) and to fuse detection information in a common format using either 3D positions or inverse sensor models. A GPU-powered computational platform runs the detection algorithms online. For the RGB camera, a deep learning algorithm, DeepAnomaly, is proposed to perform real-time anomaly detection of distant, heavily occluded, and unknown obstacles in agriculture. Compared to a state-of-the-art object detector, Faster R-CNN, DeepAnomaly detects humans better and at longer ranges (45-90 m) in an agricultural use case, with a smaller memory footprint and 7.3-times faster processing. The low memory footprint and fast processing make DeepAnomaly suitable for real-time applications running on an embedded GPU.

    FieldSAFE is a multi-modal dataset for the detection of static and moving obstacles in agriculture. It includes synchronized recordings from an RGB camera, stereo camera, thermal camera, 360-degree camera, lidar, and radar. Precise localization and pose are provided using IMU and GPS. Ground truth for static and moving obstacles (humans, mannequin dolls, barrels, buildings, vehicles, and vegetation) is available as an annotated orthophoto, with GPS coordinates for moving obstacles. Detection information from multiple detection algorithms and sensors is fused into a map using inverse sensor models and occupancy grid maps.

    This thesis presents many scientific contributions to the state of the art in perception for autonomous tractors, including a dataset, a sensor platform, detection algorithms, and procedures to perform multi-sensor fusion. Furthermore, important engineering contributions to autonomous farming vehicles are presented, such as easily applicable, open-source software packages and algorithms that have been demonstrated in an end-to-end real-time detection system. The contributions of this thesis have demonstrated, addressed, and solved critical issues in utilizing camera-based perception systems, which are essential to make autonomous vehicles in agriculture a reality.
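    The fusion of detections from multiple sensors into an occupancy grid via inverse sensor models is commonly done in log-odds form, so independent measurements combine by addition. A minimal sketch (grid size, detector confidences, and cell coordinates are illustrative assumptions, not FieldSAFE's actual sensor models):

    ```python
    import numpy as np

    # Log-odds occupancy grid: 0 means unknown (p = 0.5).
    grid = np.zeros((50, 50))

    def logit(p):
        return np.log(p / (1 - p))

    def fuse(grid, cells, p_occ):
        """Inverse-sensor-model update: each detector reports, per grid cell,
        a probability of occupancy; independent measurements add in log-odds."""
        for r, c in cells:
            grid[r, c] += logit(p_occ)
        return grid

    # Camera-based detector reports an obstacle in cells (10,10)-(10,11) with p=0.8 ...
    grid = fuse(grid, [(10, 10), (10, 11)], 0.8)
    # ... and the lidar detector confirms cell (10,10) with p=0.9.
    grid = fuse(grid, [(10, 10)], 0.9)

    # Convert back to probabilities for map queries.
    prob = 1 / (1 + np.exp(-grid))
    ```

    The doubly confirmed cell ends up far more confident than the camera-only cell, which is the point of fusing heterogeneous detectors into one map.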

    An energy-efficient elephant detection system using machine learning

    Human-elephant conflicts are a major problem for elephant conservation. Because of habitat fragmentation and loss, elephants searching for food often wander into rice fields and plantations, where they come into contact with humans. According to WILDLABS, an average of 400 people and 100 elephants are killed every year in India alone because of these conflicts. Early warning systems replace the role of human watchers and warn local communities of nearby, potentially life-threatening elephants, thus minimizing human-elephant conflicts. In this Master's thesis we present the structure of an early warning system, which consists of several low-power embedded systems equipped with thermal cameras and a single gateway. The embedded systems are deployed in the field; upon detecting an elephant, they send a warning over a wireless network to the gateway, which can then alert the local community. To detect elephants in captured thermal images we used machine learning methods, specifically convolutional neural networks. The main focus of this thesis was the design, implementation, and evaluation of machine learning models running on microcontrollers under low-power conditions. We designed and trained several accurate image classification models, optimized them for on-device deployment, and compared them against models trained with commercial software in terms of accuracy, inference speed, and size. While writing the firmware, we ported a part of the TensorFlow library and created our own build system, suitable for the libopencm3 platform. We also implemented reporting of inference results over the LoRaWAN network and described a possible server-side solution. Finally, we constructed a fully functional embedded system from various development and evaluation boards and evaluated its power consumption. We show that embedded systems with machine learning capabilities are a viable solution to many real-life problems.
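    Reporting inference results over LoRaWAN favors very small uplink payloads, since airtime is scarce. A minimal sketch of a compact report encoding (the field layout, scaling, and byte widths are hypothetical illustrations, not the thesis's actual message format):

    ```python
    import struct

    def encode_report(node_id, elephant_prob, battery_mv):
        """Pack a detection report into 4 bytes, big-endian:
        1 byte node id, 1 byte probability scaled to 0-255, 2 bytes battery mV."""
        return struct.pack(">BBH", node_id, round(elephant_prob * 255), battery_mv)

    def decode_report(payload):
        """Inverse of encode_report, run server-side after the gateway forwards
        the uplink."""
        node_id, p, mv = struct.unpack(">BBH", payload)
        return node_id, p / 255, mv

    # A field node reports an 87%-confidence elephant detection.
    msg = encode_report(3, 0.87, 3600)
    nid, prob, mv = decode_report(msg)
    ```

    Scaling the probability to one byte costs at most about 0.2% of resolution, a reasonable trade for keeping the whole report within a few bytes of LoRaWAN payload.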

    Optimized energy and air quality management of shared smart buildings in the COVID-19 scenario

    Worldwide, increasing awareness of energy sustainability issues has been the main driver in developing the concept of (Nearly) Zero Energy Buildings, in which reduced energy consumption is (nearly) fully covered by power generated locally from renewable sources. At the same time, recent advances in Internet of Things technologies are among the main enablers of smart homes and buildings. The transition of conventional buildings into active environments that process, elaborate on, and react to online-measured environmental quantities is being accelerated by aspects related to COVID-19, most notably air exchange and the monitoring of occupant density. In this paper, we address the problem of maximizing energy efficiency and the comfort perceived by occupants, defined in terms of thermal comfort, visual comfort, and air quality. The case study of the University of Pisa is considered as a practical example to show preliminary results of the aggregation of environmental data.
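    Jointly maximizing energy efficiency and the three comfort dimensions above is naturally expressed as a weighted objective to be minimized. A minimal sketch (weights, comfort bands, and units are illustrative assumptions, not the paper's formulation):

    ```python
    def comfort_penalty(value, low, high):
        """Zero inside the comfort band, growing quadratically outside it."""
        if value < low:
            return (low - value) ** 2
        if value > high:
            return (value - high) ** 2
        return 0.0

    def objective(energy_kwh, temp_c, co2_ppm, lux, w_energy=1.0, w_comfort=0.5):
        """Cost to minimize: energy use plus weighted occupant discomfort."""
        discomfort = (comfort_penalty(temp_c, 20, 24)            # thermal comfort
                      + comfort_penalty(co2_ppm, 0, 800) / 100   # air quality
                      + comfort_penalty(lux, 300, 500) / 100)    # visual comfort
        return w_energy * energy_kwh + w_comfort * discomfort

    # A ventilation boost costs energy but brings CO2 back into the band:
    baseline = objective(10.0, 22, 1000, 400)   # stale air, cheap
    boosted  = objective(12.0, 22, 750, 400)    # fresh air, more energy
    ```

    Under these illustrative weights, the boosted-ventilation schedule wins despite its higher energy use, which is the kind of trade-off a COVID-19-aware building manager must resolve continuously from online measurements.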

    Compact recurrent neural networks for acoustic event detection on low-energy low-complexity platforms

    Outdoor acoustic event detection is an exciting research field, but it is challenged by the need for complex algorithms and deep learning techniques, which typically require substantial computational, memory, and energy resources. This discourages IoT implementations, where efficient use of resources is required. However, current embedded technologies and microcontrollers have increased their capabilities without penalizing energy efficiency. This paper addresses sound event detection at the edge by optimizing deep learning techniques on resource-constrained embedded platforms for the IoT. The contribution is two-fold: first, a two-stage student-teacher approach is presented to make state-of-the-art neural networks for sound event detection fit on current microcontrollers; second, we test our approach on an ARM Cortex-M4, focusing particularly on issues related to 8-bit quantization. Our embedded implementation achieves 68% recognition accuracy on UrbanSound8K, not far from state-of-the-art performance, with an inference time of 125 ms per second of audio stream and a power consumption of 5.5 mW, in just 34.3 kB of RAM.
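    The student-teacher step above is a form of knowledge distillation: the small student network is trained against a blend of the teacher's softened outputs and the hard labels. A minimal sketch of such a loss (the temperature and blend weight are illustrative hyperparameters; the paper's exact two-stage procedure is not reproduced here):

    ```python
    import numpy as np

    def softmax(z, T=1.0):
        z = np.asarray(z, float) / T           # temperature softens the distribution
        e = np.exp(z - z.max())
        return e / e.sum()

    def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.7):
        """alpha * cross-entropy against the teacher's soft targets
        + (1 - alpha) * cross-entropy against the hard label."""
        p_t = softmax(teacher_logits, T)
        p_s = softmax(student_logits, T)
        soft = -np.sum(p_t * np.log(p_s + 1e-12))
        hard = -np.log(softmax(student_logits)[label] + 1e-12)
        return alpha * soft + (1 - alpha) * hard

    # Student agreeing with the teacher is cheap; disagreeing is penalized.
    loss_match = distillation_loss([5.0, 0.0, 0.0], [5.0, 0.0, 0.0], 0)
    loss_off   = distillation_loss([0.0, 5.0, 0.0], [5.0, 0.0, 0.0], 0)
    ```

    The softened teacher distribution carries inter-class similarity information that hard labels lack, which is what lets a microcontroller-sized student approach the teacher's accuracy.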