10 research outputs found

    Guest Editorial: Computational and smart cameras


    Deep Learning-based Fall Detection Algorithm Using Ensemble Model of Coarse-fine CNN and GRU Networks

    Falls are a major public health issue for the elderly worldwide, since fall-induced injuries carry substantial healthcare costs. Falls can cause serious injuries, even leading to death if the elderly person suffers a "long-lie". Hence, a reliable fall detection (FD) system is required to provide an emergency alarm for first aid. Owing to advances in wearable-device technology and artificial intelligence, fall detection systems have been developed that use machine learning and deep learning methods to analyze signals collected from accelerometers and gyroscopes. To achieve better fall detection performance, this study proposes an ensemble model that combines a coarse-fine convolutional neural network with a gated recurrent unit. The parallel structure of this model captures spatial characteristics at different granularities and models temporal dependencies for feature representation. The study uses the public FallAllD dataset to validate the reliability of the proposed model, which achieves a recall, precision, and F-score of 92.54%, 96.13%, and 94.26%, respectively. The results demonstrate the reliability of the proposed ensemble model in discriminating falls from activities of daily living, and its superior performance compared to the state-of-the-art convolutional neural network long short-term memory (CNN-LSTM) model for FD.
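    The reported metrics are tied together by the standard F-score (the harmonic mean of precision and recall); a quick check, using only the figures from the abstract, confirms they are mutually consistent up to rounding:

```python
def f_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (F1)."""
    return 2 * precision * recall / (precision + recall)

# Values reported for the ensemble model on FallAllD
recall, precision = 92.54, 96.13
f1 = f_score(precision, recall)  # ~94.3, matching the reported 94.26 up to rounding
```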

    New Fast Fall Detection Method Based on Spatio-Temporal Context Tracking of Head by Using Depth Images

    © 2015 by the authors; licensee MDPI, Basel, Switzerland. To deal with the projection problem that arises in fall detection with two-dimensional (2D) grey-scale or color images, this paper proposes a robust fall detection method based on spatio-temporal context tracking over three-dimensional (3D) depth images captured by the Kinect sensor. In the pre-processing procedure, the parameters of the Single-Gauss-Model (SGM) are estimated and the coefficients of the floor-plane equation are extracted from the background images. Once a human subject appears in the scene, the silhouette is extracted by the SGM and the foreground coefficient of ellipses is used to determine the head position. The dense spatio-temporal context (STC) algorithm is then applied to track the head position, and the distance from the head to the floor plane is calculated in every following frame of the depth image. When the distance falls below an adaptive threshold, the centroid height of the human body is used as a second judgment criterion to decide whether a fall incident has happened. Lastly, four groups of experiments with different falling directions are performed. Experimental results show that the proposed method can detect fall incidents occurring in different orientations while requiring only low computational complexity.
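    The core geometric test, the distance from the tracked head position to the floor plane, can be sketched as follows; the plane coefficients, head coordinates, and the 0.4 m threshold below are hypothetical (the paper adapts its threshold at run time):

```python
import math

def point_to_plane_distance(point, plane):
    """Distance from a 3D point (x, y, z) to the plane a*x + b*y + c*z + d = 0."""
    a, b, c, d = plane
    x, y, z = point
    return abs(a * x + b * y + c * z + d) / math.sqrt(a * a + b * b + c * c)

# Hypothetical floor plane z = 0 and a head position 1.6 m above it
floor = (0.0, 0.0, 1.0, 0.0)
head = (0.5, 0.2, 1.6)
height = point_to_plane_distance(head, floor)  # 1.6

FALL_THRESHOLD = 0.4  # metres; placeholder for the paper's adaptive threshold
suspected_fall = height < FALL_THRESHOLD       # False while the subject is upright
```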

    Computer Vision Algorithms for Mobile Camera Applications

    Wearable and mobile sensors have found widespread use in recent years due to their ever-decreasing cost, ease of deployment and use, and ability to provide continuous monitoring, as opposed to sensors installed at fixed locations. Since many smartphones are now equipped with a variety of sensors, including accelerometer, gyroscope, magnetometer, microphone, and camera, it has become more feasible to develop algorithms for activity monitoring, guidance and navigation of unmanned vehicles, autonomous driving, and driver assistance using data from one or more of these sensors. In this thesis, we focus on multiple mobile camera applications and present lightweight algorithms suitable for embedded mobile platforms. The mobile camera scenarios presented in the thesis are: (i) activity detection and step counting from wearable cameras, (ii) door detection for indoor navigation of unmanned vehicles, and (iii) traffic sign detection from vehicle-mounted cameras. First, we present a fall detection and activity classification system developed for the embedded smart camera platform CITRIC. In our system, the camera platform is worn by the subject, as opposed to static sensors installed at fixed locations in certain rooms; therefore, monitoring is not limited to confined areas and extends to wherever the subject may travel, indoors and outdoors. Next, we present a real-time smartphone-based fall detection system, wherein we implement camera- and accelerometer-based fall detection on the Samsung Galaxy S™ 4, fusing the two sensor modalities for a more robust system. Then, we introduce a fall detection algorithm with autonomous thresholding using relative entropy, within the class of Ali-Silvey distance measures. As another wearable camera application, we present a footstep-counting algorithm using a smartphone camera.
This algorithm provides more accurate step counts than using only accelerometer data from smartphones and smartwatches at various body locations. As a second mobile camera scenario, we study autonomous indoor navigation of unmanned vehicles, proposing a novel approach to autonomously detect and verify doorway openings using the Google Project Tango™ platform. The third mobile camera scenario involves vehicle-mounted cameras; more specifically, we focus on traffic sign detection in lower-resolution and noisy videos captured by vehicle-mounted cameras. We present a new method for accurate traffic sign detection, incorporating Aggregate Channel Features and Chain Code Histograms, with the goal of providing much faster training and testing, and comparable or better performance, with respect to deep neural network approaches, without requiring specialized processors. The proposed computer vision algorithms provide promising results for various useful applications despite the limited energy and processing capabilities of mobile devices.
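    A minimal sketch of accelerometer-based fall detection of the kind described above; the fixed 2.5 g threshold is a placeholder for the autonomous relative-entropy thresholding developed in the thesis, and the sample windows are hypothetical:

```python
import math

def accel_magnitude(sample):
    """Magnitude of a 3-axis accelerometer sample, in g."""
    ax, ay, az = sample
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples, threshold_g=2.5):
    """Flag a fall when any sample's magnitude exceeds threshold_g.
    The thesis derives this threshold autonomously via a relative-entropy
    (Ali-Silvey) criterion; a fixed value is used here for illustration."""
    return any(accel_magnitude(s) > threshold_g for s in samples)

walking = [(0.1, 0.2, 1.0), (0.0, 0.1, 1.1)]  # magnitudes near 1 g
impact = walking + [(1.8, 2.0, 1.5)]          # spike above the threshold
```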

    Fall Detection by Using Video

    Cameras have become common in our society, and as a result there is more video available today than ever before. While video can be used for entertainment or archival purposes, it can also be used as a sensor capturing crucial information. The information captured can be put to many uses; one particular use is to identify a fall. The importance of identifying falls is most evident in the older population, which is affected by falls every year. The falls experienced by the elderly are devastating, as they can cause apprehension toward normal life activities and, in some cases, premature death. Another fall-related issue is intentional deception in a business setting with the intent of insurance fraud. Classification algorithms based on video can be constructed to detect falls and separate them as either accidental or intentional. This thesis proposes an algorithm based on frame segmentation and speed components in the x, y, z directions over time t. The speed components are estimated from the video of orthogonally positioned cameras. The algorithm can discern fall activities from others such as sitting on the floor, lying on the floor, or exercising.
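    The speed components over time can be estimated from tracked positions by finite differences between consecutive frames; the frame rate and the centroid track below are hypothetical:

```python
def speeds(positions, fps=30.0):
    """Per-axis speed estimates from consecutive 3D positions via finite
    differences; in the thesis the 3D positions come from two
    orthogonally positioned cameras."""
    dt = 1.0 / fps
    out = []
    for (x0, y0, z0), (x1, y1, z1) in zip(positions, positions[1:]):
        out.append(((x1 - x0) / dt, (y1 - y0) / dt, (z1 - z0) / dt))
    return out

# Hypothetical track: a rapid 0.1 m vertical drop between two frames
track = [(1.0, 1.5, 2.0), (1.0, 1.4, 2.0)]
vx, vy, vz = speeds(track)[0]  # vy = -3.0 m/s at 30 fps
```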

    ENERGY-EFFICIENT LIGHTWEIGHT ALGORITHMS FOR EMBEDDED SMART CAMERAS: DESIGN, IMPLEMENTATION AND PERFORMANCE ANALYSIS

    An embedded smart camera is a stand-alone unit that not only captures images but also includes a processor, memory, and a communication interface. Battery-powered embedded smart cameras introduce many additional challenges, since they have very limited resources such as energy, processing power, and memory. When camera sensors are added to an embedded system, the problem of limited resources becomes even more pronounced. Hence, computer vision algorithms running on these camera boards should be lightweight and efficient. This thesis is about designing and developing computer vision algorithms that are aware of, and successfully overcome, the limitations of embedded platforms in terms of power consumption and memory usage. In particular, we are interested in object detection and tracking methodologies and their impact on the performance and battery life of the CITRIC camera, the embedded smart camera employed in this research. This thesis aims to prolong the lifetime of the embedded smart camera platform without affecting the reliability of the system during surveillance tasks. The reader is therefore walked through the whole design process, from development and simulation, through implementation and optimization, to testing and performance analysis. The work presented in this thesis involves not only software optimization but also hardware-level operations during the stages of object detection and tracking. The performance of the algorithms introduced in this thesis is comparable to state-of-the-art object detection and tracking methods, such as Mixture of Gaussians, Eigen segmentation, and color and coordinate tracking. Unlike the traditional methods, the newly designed algorithms notably reduce memory requirements, as well as memory accesses per pixel.
To accomplish the proposed goals, this work interconnects different levels of the embedded system architecture to make the platform more efficient in terms of energy and resource savings. Thus, the proposed algorithms are optimized at the API, middleware, and hardware levels to access the pixel information of the CMOS sensor directly. Only the required pixels are acquired, in order to reduce unnecessary communication overhead. Experimental results show that, when exploiting the architectural capabilities of an embedded platform, a 41.24% decrease in energy consumption and a 107.2% increase in battery life can be accomplished. Compared to traditional object detection and tracking methods, the proposed work provides an additional 8 hours of continuous processing on 4 AA batteries, increasing the lifetime of the camera to 15.5 hours.
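    The selective pixel-acquisition idea can be sketched in miniature; the ROI coordinates and the list-of-lists frame below are hypothetical stand-ins for the direct CMOS-sensor access described in the thesis:

```python
def read_roi(frame, x, y, w, h):
    """Read only a region of interest instead of the full frame, reducing
    memory accesses per processed pixel (the thesis does this at the
    sensor level; this is a pure-Python stand-in)."""
    return [row[x:x + w] for row in frame[y:y + h]]

# A hypothetical 320x240 frame with synthetic pixel values
full_frame = [[r * 320 + c for c in range(320)] for r in range(240)]
roi = read_roi(full_frame, 100, 60, 32, 24)  # a 32x24 window
saving = 1 - (32 * 24) / (320 * 240)         # 99% fewer pixels read
```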

    Automatic Fall Detection and Activity Classification by a Wearable Embedded Smart Camera


    Intelligent fall detection system for eldercare

    Falls are a leading cause of accidental death among people over the age of 65 in the United States, and fall detection methods have been implemented on a variety of monitoring devices. Because of its advantages in privacy protection, non-invasiveness, and independence from lighting conditions, I design a fall detection system based on a Doppler radar sensor. This dissertation explores different Doppler radar sensor configurations and positions, in both the lab and a real senior-home environment, together with signal processing and machine learning algorithms. First, I design the system based on data collected with three configurations in the lab: two floor radars; one ceiling radar and one wall radar; and one ceiling radar and one floor radar. The sensor positions and features are evaluated with several classifiers: support vector machine, nearest neighbor, naïve Bayes, and hidden Markov model. In the real senior home, I investigate the system by evaluating the detection variance caused by the training dataset, due to variable subjects and environment settings. Moreover, I adapt the automatic fall detection system for an actual retirement-community apartment. I examine different features, Mel-frequency cepstral coefficients (MFCCs), local binary patterns (LBP), and their combination selected with the RELIEF algorithm, and improve detection performance with both a pre-screener and feature selection. I also fuse the radar fall detection system with motion sensors. Finally, I develop a standalone fall detection system whose results are displayed on a designed web page.
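    The physical quantity a Doppler radar measures for a moving body is the Doppler frequency shift, f_d = 2·v·f0/c; a small sketch with an assumed 5.8 GHz carrier (the abstract does not state the carrier frequency used):

```python
C = 3.0e8  # speed of light, m/s

def doppler_shift(velocity_mps, carrier_hz):
    """Doppler frequency shift for a target moving radially with respect
    to a continuous-wave radar: f_d = 2 * v * f0 / c."""
    return 2.0 * velocity_mps * carrier_hz / C

# A falling torso moving at ~2 m/s, seen by a hypothetical 5.8 GHz sensor
shift = doppler_shift(2.0, 5.8e9)  # ~77 Hz, well within the audio band
```

Because such shifts land in the audio range, audio-style features like the MFCCs mentioned in the abstract are a natural fit for radar fall signatures.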

    Social, Private, and Trusted Wearable Technology under Cloud-Aided Intermittent Wireless Connectivity

    There has been an unprecedented increase in the use of smart devices globally, together with novel forms of communication, computing, and control technologies that have paved the way for a new category of devices, known as high-end wearables. While massive deployments of these objects may improve people's lives, unauthorized access to such private equipment and its connectivity is potentially dangerous. Hence, communication enablers together with highly secure human authentication mechanisms have to be designed. In addition, it is important to understand how human beings, as the primary users, interact with wearable devices on a day-to-day basis; usage should be comfortable, seamless, user-friendly, and mindful of urban dynamics. Usually, connectivity between wearables and the cloud is routed through the user's less power-constrained gateway: typically a smartphone, which may have unreliable infrastructure connectivity. In response to these unique challenges, this thesis advocates the adoption of direct, secure, proximity-based communication enablers enhanced with multi-factor authentication (hereafter referred to as MFA) that can integrate and interact with wearable technology. Their intelligent combination, together with connection-establishment automation relying on device/user social relations, would make it possible to reliably grant or deny access under both stable and intermittent connectivity to the trusted authority running in the cloud. The introduction lists the main communication paradigms, applications, conventional network architectures, and relevant wearable-specific challenges. Next, the work examines the improved architecture and security enablers for clusterization between wearable gateways, with proximity-based communication as a baseline.
Relying on this architecture, the author then elaborates on the social ties potentially overlaying direct connectivity management under both reliable and unreliable connections to the trusted cloud. The author argues that social-aware cooperation and trust relations between users and/or the devices themselves are beneficial to the proposed architecture. Next, the author introduces a protocol suite that enables temporary delegation of personal device use depending on the connectivity conditions to the cloud. After these discussions, wearable technology is analyzed as a provider of biometric and behavioral data for enabling MFA. Conventional authentication-factor combination strategies are compared with the 'intelligent' method proposed further on, and the assessment finds significant advantages of the developed solution over existing ones. On the practical side, the performance evaluation of existing cryptographic primitives, as part of the experimental work, shows that the proposed methods can be developed further on modern wearable devices. In summary, the set of enablers developed here for wearable technology connectivity is aimed at enriching people's everyday lives in a secure and usable way, even when communication to the cloud is not consistently available.
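    A naive weighted-score fusion illustrates the kind of conventional factor-combination baseline that the thesis compares its 'intelligent' method against; the factors, scores, weights, and threshold below are all hypothetical:

```python
def mfa_decision(factor_scores, weights, threshold=0.7):
    """Weighted-sum fusion of per-factor confidence scores in [0, 1].
    The thesis proposes a more 'intelligent' combination strategy; this
    fixed-weight rule is only a baseline sketch."""
    total = sum(w * s for w, s in zip(weights, factor_scores))
    return total / sum(weights) >= threshold

# Hypothetical factors: PIN entry, heart-rate biometric, gait pattern
scores = [1.0, 0.8, 0.6]
weights = [0.5, 0.3, 0.2]
granted = mfa_decision(scores, weights)  # weighted mean 0.86 >= 0.7 -> True
```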

    Characterization and recognition of water sounds for monitoring activities of daily living: an approach based on signal processing, acoustics, and perception

    With the ageing of the population, the diagnosis and treatment of dementias such as Alzheimer's disease are social issues of great importance, and the analysis of a patient's activities of daily living is a key element of the diagnosis. In this context, the IMMED project investigates an innovative use of the wearable camera for the remote monitoring of performed activities, aiming to automatically produce indexes that help doctors navigate the individual video recordings. Water sound recognition is very useful here, as it allows a number of activities of interest to doctors to be inferred, including activities related to eating, housekeeping, and hygiene. While previous work exists on water sound recognition, classical learning-based methods are ineffective on the IMMED corpus, whose everyday-life recordings are very heterogeneous and characterized by substantial overlap of different sound sources. We therefore place this work within computational auditory scene analysis, which provides a theoretical framework for recognizing sources in an audio mixture. We propose a new water-flow recognition system based on a new audio feature, called spectral cover, which recognizes water flows in recordings from noisy environments. Experiments on more than seven hours of video validate the approach, which is integrated into the IMMED framework; a complementary classification stage, using Gammatone cepstral coefficients and support vector machines, notably improves the results. Nevertheless, these systems are limited by the difficulty of characterizing, and thus recognizing, water sounds. We therefore widened the analysis to acoustics studies describing the origin of water sounds: according to these analyses, water sounds arise mainly from the vibration of air bubbles in water. Theoretical studies and the analysis of real signals led to a new recognition approach based on the frequency-domain detection of vibrating air bubbles. This system detects a wide variety of liquid sounds, but is limited by overly complex and noisy water flows; it is complementary to the water-flow recognition system but cannot replace it. To compare these results with human listening, we conducted a perceptual study: in a free categorization task on a large set of everyday liquid sounds, participants grouped sounds according to their causal similarity. The analyses identify categories of liquid sounds that reveal different cognitive strategies in identifying water and liquid sounds. A final experiment on the obtained categories highlights the necessary and sufficient character of our two systems on a varied corpus of everyday water sounds; the two approaches thus seem relevant for characterizing and recognizing a large set of sounds produced by water.
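    The claim that water sounds arise mainly from vibrating air bubbles is usually formalized with the standard Minnaert resonance model; this sketch (a textbook acoustics formula, not the thesis's detector) shows why millimetre-size bubbles land squarely in the audible band:

```python
import math

def minnaert_frequency(radius_m, gamma=1.4, pressure_pa=101325.0, rho=998.0):
    """Resonance frequency of an air bubble in water (Minnaert model):
    f = (1 / (2*pi*r)) * sqrt(3 * gamma * p0 / rho),
    with adiabatic index gamma, ambient pressure p0, and water density rho."""
    return math.sqrt(3.0 * gamma * pressure_pa / rho) / (2.0 * math.pi * radius_m)

f_1mm = minnaert_frequency(1e-3)  # ~3.3 kHz for a 1 mm bubble
```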