
    A Novel Approach to Complex Human Activity Recognition

    Human activity recognition is a technology that offers automatic recognition of what a person is doing with respect to body motion and function. The main goal is to recognize a person's activity using different technologies such as cameras, motion sensors, location sensors, and time. Human activity recognition is important in many areas such as pervasive computing, artificial intelligence, human-computer interaction, health care, health outcomes, rehabilitation engineering, occupational science, and the social sciences. There are numerous ubiquitous and pervasive computing systems in which users' activities play an important role. Human activity carries a great deal of information about context and helps systems achieve context-awareness. In the rehabilitation area, it helps with functional diagnosis and assessing health outcomes. Human activity is also an important indicator of participation, quality of life, and lifestyle. There are two classes of human activities based on body motion and function. The first class, simple human activity, involves human body motion and posture, such as walking, running, and sitting. The second class, complex human activity, includes function along with simple human activity, such as cooking, reading, and watching TV. Human activity recognition is an interdisciplinary research area that has been active for more than a decade. Substantial research has been conducted to recognize human activities, but many major issues still need to be addressed; doing so would significantly improve applications of human activity recognition across these areas. Considerable research has been conducted on simple human activity recognition, whereas little has been carried out on complex human activity recognition. However, many key aspects (recognition accuracy, computational cost, energy consumption, mobility) need to be addressed in both areas to improve their viability. This dissertation addresses these key aspects in both areas of human activity recognition and ultimately focuses on recognition of complex activity. It also addresses indoor and outdoor localization, an important parameter, along with time, in complex activity recognition. This work studies accelerometer sensor data to recognize simple human activity, and uses time, location, and simple activity to recognize complex activity.
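The two-stage pipeline described in this abstract (simple activity recognized from accelerometer data, then complex activity inferred from time, location, and the simple activity) can be sketched as a context lookup. This is a minimal illustrative sketch; the rule table, function names, and labels below are hypothetical and not taken from the dissertation.

```python
# Hypothetical rules: (location, simple activity) -> [(hour range, complex activity)]
RULES = {
    ("kitchen", "standing"): [((6, 9), "cooking breakfast"), ((17, 20), "cooking dinner")],
    ("living_room", "sitting"): [((19, 23), "watching TV")],
}

def infer_complex_activity(location, simple_activity, hour):
    """Map a (location, simple activity, time-of-day) triple to a complex activity."""
    for (lo, hi), activity in RULES.get((location, simple_activity), []):
        if lo <= hour < hi:
            return activity
    return "unknown"
```

A real system would learn such mappings from labeled data rather than hand-code them, but the lookup captures how time and location disambiguate a single body posture into different complex activities.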

    Cooperative Localization on Computationally Constrained Devices

    Cooperative localization is a useful way for nodes within a network to share location information in order to better arrive at a position estimate. This is handy in GPS-contested environments (indoor and urban settings). Most systems exploring cooperative localization rely on special hardware or extra devices to store the database or do the computations. Research also tends to deal with specific localization techniques, such as using Wi-Fi, ultra-wideband signals, or accelerometers independently, as opposed to fusing multiple sources together. This research brings cooperative localization to the smartphone platform to take advantage of the multiple sensors that are available. The system runs on Android-powered devices, including the wireless hotspot. To determine the merit of each sensor, analysis was completed to identify successes and failures: the accelerometer, compass, and received signal strength capability were examined to determine their usefulness in cooperative localization. Experiments at meter intervals show the system detected changes in location at each interval with an average standard deviation of 0.44 m. The closest location estimates occurred at 3 m, 4 m, and 6 m, with average errors of 0.15 m, 0.11 m, and 0.07 m, respectively. This indicates that very precise estimates can be achieved with an Android hotspot and mobile nodes.
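Received signal strength (RSS) of the kind examined here is commonly converted to a distance estimate with the log-distance path-loss model. The sketch below assumes illustrative values for the reference power at 1 m and the path-loss exponent; the study's actual calibration is not given in the abstract.

```python
def rss_to_distance(rss_dbm, rss_at_1m=-40.0, path_loss_exp=2.7):
    """Log-distance path-loss model: d = 10 ** ((P_1m - RSS) / (10 * n)).
    rss_at_1m and path_loss_exp are assumed, environment-dependent values."""
    return 10 ** ((rss_at_1m - rss_dbm) / (10 * path_loss_exp))
```

In cooperative setups, several nodes can exchange such distance estimates (plus accelerometer and compass cues) so that each node constrains the others' position estimates.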

    Localino T-shirt: The Real-time Indoor Localization in Ambient Assisted Living Applications

    In the last decade, smart textiles have become very popular as a concept and have found use in many applications, such as military, electronics, automotive, and medical ones. In the medical area, smart textiles research focuses on biomonitoring, telemedicine, rehabilitation, sports medicine, and home healthcare systems. In this research, the development of a smart T-shirt and measurements of its localization accuracy are presented; the T-shirt is intended for use by elderly people for indoor localization in ambient assisted living applications. The proposed smart T-shirt and the work presented are considered applicable to the elderly, toddlers, or even adults in indoor environments where continuous real-time localization is critical. The smart T-shirt integrates a localization sensor, namely the Localino sensor, together with a solar panel for energy harvesting when the user is moving outdoors, as well as a battery/power bank that is connected to both the solar panel and the Localino sensor for charging and power supply, respectively. Moreover, a mock-up house was built, with the Localino platform anchors placed at strategic points within the house area. Localino sensor nodes were installed in all the house rooms, from which the localization accuracy measurements were obtained. Furthermore, the localization accuracy was also measured for a selected number of mobile user scenarios, in order to assess the platform accuracy in both static and mobile user cases. Details about the implementation of the T-shirt, the selection and integration of the electronics parts, the mock-up house, and the localization accuracy measurement results are presented in the paper.

    Performance evaluation of neural network assisted motion detection schemes implemented within indoor optical camera based communications

    This paper investigates the performance of neural network (NN) assisted motion detection (MD) over an indoor optical camera communication (OCC) link. The proposed study is based on the performance evaluation of various NN training algorithms, which provide efficient and reliable MD functionality along with vision, illumination, data communications, and sensing in indoor OCC. To evaluate the proposed scheme, we have carried out an experimental investigation of a static indoor downlink OCC link employing a mobile phone front camera as the receiver and an 8 × 8 red, green, and blue light-emitting diode array as the transmitter. In addition to data transmission, MD is achieved using the camera to observe the user's finger movement in the form of centroids via the OCC link. The captured motion is applied to the NN and is evaluated for a number of MD schemes. The results show that the resilient backpropagation-based NN offers the fastest convergence, with a minimum error of 10⁻⁵ within a processing time window of 0.67 s and a success probability of 100% for MD, compared to other algorithms. We demonstrate that the proposed system with motion offers a bit error rate below the forward error correction limit of 3.8 × 10⁻³ over a transmission distance of 1.17 m.
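Resilient backpropagation, which the paper found to converge fastest, updates each weight using only the sign of its gradient, with a per-weight step size that grows on sign agreement and shrinks on sign flips. The sketch below implements one textbook Rprop- update for a single weight; the hyperparameters are the common defaults, not necessarily those used in the paper.

```python
import math

def rprop_step(w, step, grad, prev_grad,
               inc=1.2, dec=0.5, step_min=1e-6, step_max=50.0):
    """One Rprop- update for a single weight.
    Only the gradient's sign drives the move; the step size adapts:
    same sign as last time -> grow it, sign flip (overshoot) -> shrink it."""
    if grad * prev_grad > 0:
        step = min(step * inc, step_max)
    elif grad * prev_grad < 0:
        step = max(step * dec, step_min)
    if grad != 0:
        w -= math.copysign(step, grad)
    return w, step
```

Because magnitudes are ignored, Rprop is robust to poorly scaled gradients, which is one reason it often converges quickly on small networks like the MD classifier described here.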

    Distributed and adaptive location identification system for mobile devices

    Indoor location identification and navigation need to be as simple, seamless, and ubiquitous as their outdoor GPS-based counterparts. It would be of great convenience to mobile users to be able to continue navigating seamlessly as they move from a GPS-clear outdoor environment into an indoor environment or a GPS-obstructed outdoor environment such as a tunnel or forest. Existing infrastructure-based indoor localization systems lack such capability, on top of potentially facing several critical technical challenges: increased installation cost, centralization, lack of reliability, poor localization accuracy, poor adaptation to the dynamics of the surrounding environment, latency, system-level and computational complexity, repetitive labor-intensive parameter tuning, and user privacy. To this end, this paper presents a novel mechanism with the potential to overcome most (if not all) of the abovementioned challenges. The proposed mechanism is simple, distributed, adaptive, collaborative, and cost-effective. Based on the proposed algorithm, a blind mobile device can utilize, as GPS-like reference nodes, either in-range location-aware compatible mobile devices or preinstalled low-cost infrastructure-less location-aware beacon nodes. The proposed approach is model-based and calibration-free; it uses the received signal strength to periodically and collaboratively measure and update the radio frequency characteristics of the operating environment and thereby estimate the distances to the reference nodes. Trilateration is then used by the blind device to identify its own location, similar to the GPS-based system. Simulation and empirical testing ascertained that the proposed approach can potentially be the core of future localization systems for indoor and GPS-obstructed environments.
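The trilateration step mentioned above can be written in closed form for three reference nodes in 2-D: subtracting the first circle equation from the other two turns the problem into a 2×2 linear system. This is a generic sketch of that standard construction, not the paper's implementation.

```python
def trilaterate(anchors, dists):
    """2-D trilateration from three reference nodes.
    Subtracting circle 1's equation from circles 2 and 3 linearizes the
    quadratic terms, leaving a 2x2 system solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    A = [[2 * (x2 - x1), 2 * (y2 - y1)],
         [2 * (x3 - x1), 2 * (y3 - y1)]]
    b = [r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
         r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    y = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return x, y
```

With noisy RSS-derived distances, more than three references and a least-squares solve would be used instead, but the geometry is the same.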

    Underwater 3D positioning on smart devices

    The emergence of waterproof mobile and wearable devices (e.g., Garmin Descent and Apple Watch Ultra) designed for underwater activities like professional scuba diving opens up opportunities for underwater networking and localization capabilities on these devices. Here, we present the first underwater acoustic positioning system for smart devices. Unlike conventional systems that use floating buoys as anchors at known locations, we design a system where a dive leader can compute the relative positions of all other divers, without any external infrastructure. Our intuition is that in a well-connected network of devices, if we compute the pairwise distances, we can determine the shape of the network topology. By incorporating orientation information about a single diver who is in the visual range of the leader device, we can then estimate the positions of all the remaining divers, even if they are not within sight. We address various practical problems, including detecting erroneous distance estimates, resolving rotational and flipping ambiguities, and designing a distributed timestamp protocol that scales linearly with the number of devices. Our evaluations show that our distributed system, running on underwater deployments of 4-5 commodity smart devices, can perform pairwise ranging and localization with median errors of 0.5-0.9 m and 0.9-1.6 m, respectively.
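Recovering "the shape of the network topology" from pairwise distances, as the abstract describes, is classically done with multidimensional scaling (MDS), which yields positions up to rotation, translation, and reflection; the orientation of one visible diver then fixes those ambiguities. This sketch shows classical MDS only; the paper's actual algorithm may differ.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover a point configuration (up to rotation/translation/reflection)
    from an n x n matrix of pairwise distances: double-center the squared
    distances into a Gram matrix, then factor its top eigenpairs."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix of centered points
    w, v = np.linalg.eigh(B)                 # eigenvalues ascending
    idx = np.argsort(w)[::-1][:dim]          # keep the top `dim` components
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

For noise-free Euclidean distances the reconstruction is exact; with noisy acoustic ranging, the erroneous-estimate filtering the paper mentions would precede this step.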

    Advanced Pedestrian Positioning System to Smartphones and Smartwatches

    In recent years, there has been increasing interest in the development of pedestrian navigation systems for satellite-denied scenarios. The popularization of smartphones and smartwatches is an interesting opportunity for reducing the infrastructure cost of positioning systems. Nowadays, smartphones include inertial sensors that can be used in pedestrian dead-reckoning (PDR) algorithms for the estimation of the user's position. Both smartphones and smartwatches include WiFi capabilities, allowing the computation of the received signal strength (RSS). We develop a new method for combining RSS measurements from two different receivers using a Gaussian mixture model. We also analyze the implications of using a WiFi network designed for communication purposes in an indoor positioning system when the designer cannot control the network configuration. In this work, we design a hybrid positioning system that combines inertial measurements, from low-cost inertial sensors embedded in a smartphone, with RSS measurements through an extended Kalman filter. The system has been validated in a real scenario, and the results show that our system improves the positioning accuracy of the PDR system thanks to the use of two WiFi receivers. The designed system obtains an accuracy of up to 1.4 m in a scenario of 6000 m².
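The fusion step at the heart of such a hybrid system can be illustrated with a scalar Kalman update: the PDR prediction (with its growing uncertainty) is corrected by an RSS-derived position fix. This is a simplified 1-D stand-in for the extended Kalman filter described in the paper, with made-up variances.

```python
def kf_update(x_pred, p_pred, z, r):
    """One scalar Kalman update.
    x_pred, p_pred: PDR-predicted position and its variance.
    z, r: RSS-derived position fix and its variance.
    Returns the fused estimate and its (reduced) variance."""
    k = p_pred / (p_pred + r)        # Kalman gain: how much to trust z
    x = x_pred + k * (z - x_pred)    # blend prediction and measurement
    p = (1 - k) * p_pred             # fusion always shrinks the variance
    return x, p
```

In the full EKF the state is multidimensional (position, heading, step length) and the RSS measurement model is nonlinear, but each update follows this same predict-and-correct pattern.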