    Smart Embedded Passive Acoustic Devices for Real-Time Hydroacoustic Surveys

    This paper describes cost-efficient, innovative, and interoperable ocean passive acoustic sensor systems developed within the European FP7 project NeXOS (Next generation Low-Cost Multifunctional Web Enabled Ocean Sensor Systems Empowering Marine, Maritime and Fisheries Management). These passive acoustic sensors consist of two low-power, innovative digital hydrophone systems with embedded processing of acoustic data, A1 and A2, enabling real-time measurement of the underwater soundscape. An important part of the effort focuses on achieving greater dynamic range and effortless integration on autonomous platforms such as gliders and profilers. A1 is a compact, standalone, low-power digital hydrophone with embedded pre-processing of acoustic data, suitable for mobile platforms with limited autonomy and communication capability. A2 consists of four A1 digital hydrophones with an Ethernet interface and one master unit for data processing, enabling real-time measurement of underwater noise and soundscape sources. In this work the real-time acoustic processing algorithms implemented for A1 and A2 are described, including evaluations of the algorithms' computational load. Results from the real-time test of the A2 assembly at the OBSEA observatory, collected during the verification phase of the project, are presented.
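    The embedded processing on A1/A2 is not detailed in the abstract, but a typical first stage of real-time soundscape measurement is computing the broadband sound pressure level over a sample buffer. A minimal sketch, assuming a flat hydrophone sensitivity in volts per micropascal (the function name and parameter are illustrative, not the NeXOS API):

```python
import math

def spl_db(samples, sensitivity_v_per_upa=1.0):
    """Broadband sound pressure level (dB re 1 uPa) from a buffer of
    hydrophone voltage samples, assuming a flat sensitivity."""
    # Convert voltages to pressure (uPa), then take the RMS.
    pressures = [v / sensitivity_v_per_upa for v in samples]
    rms = math.sqrt(sum(p * p for p in pressures) / len(pressures))
    return 20.0 * math.log10(rms)
```

    On a real A1-class device this kind of per-buffer statistic is what makes on-board pre-processing cheap enough for low-power operation: only a few numbers per second leave the sensor instead of raw audio.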

    Machine Learning for Indoor Localization Using Mobile Phone-Based Sensors

    In this paper we investigate the problem of localizing a mobile device based on readings from its embedded sensors, utilizing machine learning methodologies. We consider a real-world environment, collect a large dataset of 3110 datapoints, and examine the performance of a substantial number of machine learning algorithms in localizing a mobile device. We have found algorithms that give a mean error as low as 0.76 meters, outperforming other indoor localization systems reported in the literature. We also propose a hybrid instance-based approach that results in a tenfold speed increase over standard instance-based methods with no loss of accuracy in a live deployment, allowing for fast and accurate localization. Further, we determine how smaller, less densely collected datasets affect localization accuracy, which is important for use in real-world environments. Finally, we demonstrate that these approaches are appropriate for real-world deployment by evaluating their performance in an online, in-motion experiment. (6 pages, 4 figures.)
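    The abstract does not name the instance-based methods, but the common baseline for sensor-fingerprint localization is k-nearest-neighbor averaging over a database of (position, sensor vector) pairs. A minimal sketch (data layout and names are hypothetical, not the paper's implementation):

```python
import math

def knn_locate(fingerprints, reading, k=3):
    """Estimate (x, y) by averaging the positions of the k stored
    fingerprints whose sensor vectors are closest to the live reading.
    `fingerprints` is a list of ((x, y), vector) pairs."""
    nearest = sorted(
        fingerprints,
        key=lambda fp: math.dist(fp[1], reading),
    )[:k]
    xs = [pos[0] for pos, _ in nearest]
    ys = [pos[1] for pos, _ in nearest]
    return sum(xs) / k, sum(ys) / k
```

    A hybrid approach such as the one described can speed this up by first narrowing the candidate set (for example with a coarse clustering step) before running the exact nearest-neighbor search.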

    Enhancing The Sensing Capabilities of Mobile and Embedded Systems

    In this work, we aim to develop new sensors and sensing platforms that facilitate the creation of new mobile and embedded devices. Mobile and embedded devices have become an integral part of our everyday lives, and the sensing capabilities of these devices have improved throughout the years. Developing new and innovative sensors and sensing platforms will provide the building blocks for new sensing systems. In an effort to facilitate these innovations we have developed two new in-air sonar sensors and a new reconfigurable sensing platform. The first in-air sonar sensor is designed for ranging applications and uses the phone's microphone and rear speaker to generate a wide beam of sound. The second in-air sonar sensor is an external module which uses a narrow beam of sound for high-resolution ranging. This ranging information is then combined with orientation data from the phone's gyroscope, magnetometer, and accelerometer to generate a two-dimensional map of a space. While researching ways of enhancing the sensing capabilities of mobile and embedded devices, we found that the process often requires developing new hardware prototypes. However, developing hardware prototypes is time-consuming. In an effort to lower the barrier to entry for small teams and software researchers, we have developed a new reconfigurable sensing platform that uses a code-first approach to embedded design. Instead of designing software to run within the limited constraints of the hardware, our proposed code-first approach allows software researchers to synthesize the hardware configuration that is required to run their software.
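    For the ranging sonars described above, distance follows from half the round-trip echo delay at the speed of sound in air. A minimal sketch using the standard linear temperature approximation (function name and defaults are illustrative):

```python
def echo_distance_m(delay_s, temp_c=20.0):
    """Convert a round-trip echo delay to target distance, assuming the
    speed of sound in air varies linearly with temperature."""
    speed = 331.3 + 0.606 * temp_c  # m/s, standard approximation
    return speed * delay_s / 2.0
```

    Pairing each such range with the heading reported by the gyroscope/magnetometer at the moment of the ping yields one polar sample (r, theta), which is how the ranging data and orientation data combine into a 2-D map.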

    Temporal Stream Logic: Synthesis beyond the Bools

    Reactive systems that operate in environments with complex data, such as mobile apps or embedded controllers with many sensors, are difficult to synthesize. Synthesis tools usually fail for such systems because the state space resulting from the discretization of the data is too large. We introduce TSL, a new temporal logic that separates control and data. We provide a CEGAR-based synthesis approach for the construction of implementations that are guaranteed to satisfy a TSL specification for all possible instantiations of the data processing functions. TSL provides an attractive trade-off for synthesis. On the one hand, synthesis from TSL, unlike synthesis from standard temporal logics, is undecidable in general. On the other hand, however, synthesis from TSL is scalable, because it is independent of the complexity of the handled data. Among other benchmarks, we have successfully synthesized a music player Android app and a controller for an autonomous vehicle in the Open Race Car Simulator (TORCS).
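    To give a flavor of the control/data separation, here is a hypothetical fragment loosely following TSL's update notation (not taken from the paper's music-player benchmark): the temporal structure is explicit, while data terms such as `play` and `currentTrack` remain uninterpreted functions that any concrete implementation may instantiate.

```
always (
    (playPressed  -> [playback <- play(currentTrack)]) &&
    (pausePressed -> [playback <- pause(playback)])
)
```

    Because the synthesizer never interprets `play` or `pause`, the state space it explores does not grow with the complexity of the track data, which is the source of the scalability claimed above.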

    Comparing Features Extraction Techniques Using J48 for Activity Recognition on Mobile Phones

    Proceedings of: 10th Conference on Practical Applications of Agents and Multi-Agent Systems, 28-30 March, 2012. Workshop on Agents and Multi-agent Systems for Enterprise Integration. Nowadays, mobile phones are not used merely for communication such as calling or sending text messages; they are becoming the main computing device in people's lives. Moreover, thanks to their embedded sensors (accelerometer, digital compass, gyroscope, GPS, and so on), it is possible to improve the user experience. Activity recognition aims to recognize the actions and goals of individuals from a series of observations; in this case an accelerometer is used. This work was supported in part by Projects CICYT TIN2011-28620-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
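    A typical pipeline for accelerometer-based activity recognition windows the raw stream and feeds per-window statistics to the J48 (C4.5) classifier. A minimal sketch of two common features; the exact feature set compared in the paper is not given here, so this choice is illustrative:

```python
import math

def window_features(ax, ay, az):
    """Per-window features often fed to a J48 (C4.5) classifier:
    mean and standard deviation of the acceleration magnitude."""
    mags = [math.sqrt(x * x + y * y + z * z)
            for x, y, z in zip(ax, ay, az)]
    n = len(mags)
    mean = sum(mags) / n
    std = math.sqrt(sum((m - mean) ** 2 for m in mags) / n)
    return mean, std
```

    Using the magnitude rather than the raw axes makes the features largely independent of how the phone is oriented in the pocket, which matters for real deployments.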

    Development of a Wireless Mobile Computing Platform for Fall Risk Prediction

    Falls are a major health risk with which the elderly and disabled must contend. Scientific research on smartphone-based gait detection systems using the Internet of Things (IoT) has recently become an important component in monitoring injuries due to these falls. Analysis of human gait for detecting falls is the subject of many research projects. Progress in these systems, the capabilities of smartphones, and the IoT are enabling the advancement of sophisticated mobile computing applications that detect falls after they have occurred. This detection has been the focus of most fall-related research; however, ensuring preventive measures that predict a fall is the goal of this health monitoring system. By performing a thorough investigation of existing systems and using predictive analytics, we built a novel mobile application/system that uses smartphone and smart-shoe sensors to predict a fall and alert the user before it happens. The major focus of this dissertation has been to develop and implement this unique system to help predict the risk of falls. We used the built-in sensors (accelerometer and gyroscope) in smartphones and a sensor-embedded smart-shoe. The smart-shoe contains four pressure sensors and a Wi-Fi communication module to unobtrusively collect data. The interactions between these sensors and the user resulted in distinct challenges for this research while also creating new performance goals based on the unique characteristics of this system. In addition to providing an exciting new tool for fall prediction, this work makes several contributions to current and future generation mobile computing research.
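    The abstract leaves the prediction features unspecified, but one quantity derivable from four shoe pressure sensors is the center of pressure, whose excursions over a gait cycle are a common balance indicator. A sketch under a hypothetical heel/toe, left/right sensor layout (not necessarily the smart-shoe's actual arrangement):

```python
def center_of_pressure(p_heel_l, p_heel_r, p_toe_l, p_toe_r):
    """Normalized center of pressure from four pressure sensors.
    Returns (lateral, anterior), each in [-1, 1]; large excursions
    over a gait cycle can feed a fall-risk score."""
    total = p_heel_l + p_heel_r + p_toe_l + p_toe_r
    lateral = ((p_heel_r + p_toe_r) - (p_heel_l + p_toe_l)) / total
    anterior = ((p_toe_l + p_toe_r) - (p_heel_l + p_heel_r)) / total
    return lateral, anterior
```

    Streaming this pair over Wi-Fi alongside the phone's accelerometer and gyroscope readings gives a prediction model both balance and motion context per step.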

    SaferCross: Enhancing Pedestrian Safety Using Embedded Sensors of Smartphone

    The number of pedestrian accidents continues to climb. Distraction by smartphones is one of the biggest causes of pedestrian fatalities. In this paper, we develop SaferCross, a mobile system based on the embedded sensors of a smartphone that improves pedestrian safety by preventing smartphone distraction. SaferCross adopts a holistic approach by identifying and developing essential system components that are missing in existing systems and integrating them into a "fully-functioning" mobile system for pedestrian safety. Specifically, we create algorithms for improving the accuracy and energy efficiency of pedestrian positioning, the effectiveness of phone-activity detection, and real-time risk assessment. We demonstrate that SaferCross, through systematic integration of the developed algorithms, performs situation awareness effectively and provides a timely warning to the pedestrian based on information obtained from smartphone sensors and Wi-Fi Direct-based peer-to-peer communication with approaching cars. Extensive experiments are conducted in a department parking lot for both component-level and integrated testing. The results demonstrate that the energy efficiency and positioning accuracy of SaferCross are improved by 52% and 72% on average compared with existing solutions that lack support for positioning accuracy and energy efficiency, and the phone-viewing event detection accuracy is over 90%. The integrated test results show that SaferCross alerts the pedestrian in a timely manner, with an average error of 1.6 sec compared with the ground-truth data, which can easily be compensated by configuring the system to fire an alert message a couple of seconds early. (Published in IEEE Access.)
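    The risk-assessment component is not detailed in the abstract, but a standard building block for warning decisions of this kind is time-to-collision computed from the approaching car's distance and speed. A minimal 1-D sketch; the names and the 5-second threshold are illustrative assumptions, not SaferCross's actual parameters:

```python
def time_to_collision_s(distance_m, car_speed_mps, ped_speed_mps=0.0):
    """1-D time-to-collision between an approaching car and a
    pedestrian; the closing speed is the sum when they move toward
    each other. Returns None if they are not closing."""
    closing = car_speed_mps + ped_speed_mps
    if closing <= 0:
        return None
    return distance_m / closing

def should_alert(distance_m, car_speed_mps, ped_speed_mps=0.0,
                 ttc_threshold_s=5.0):
    """Fire a warning when the estimated time-to-collision drops
    below the configured threshold."""
    ttc = time_to_collision_s(distance_m, car_speed_mps, ped_speed_mps)
    return ttc is not None and ttc < ttc_threshold_s
```

    Raising `ttc_threshold_s` by a couple of seconds is exactly the kind of configuration change the abstract suggests for compensating the measured 1.6 sec alert-timing error.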

    Rmagine: 3D Range Sensor Simulation in Polygonal Maps via Raytracing for Embedded Hardware on Mobile Robots

    Sensor simulation has emerged as a promising and powerful technique for solving many real-world robotic tasks like localization and pose tracking. However, commonly used simulators have high hardware requirements and are therefore used mostly on high-end computers. In this paper, we present an approach to simulating range sensors directly on the embedded hardware of mobile robots that use triangle meshes as environment maps. This library, called Rmagine, allows a robot to simulate sensor data for arbitrary range sensors directly on board via raytracing. Since robots typically have only limited computational resources, Rmagine aims to be flexible and lightweight, while scaling well even to large environment maps. It runs on several platforms, such as laptops or embedded computing boards like the Nvidia Jetson, by putting a unified API over the specific proprietary libraries provided by the hardware manufacturers. This work is designed to support the future development of robotic applications that depend on simulated range data that previously could not be computed in reasonable time on mobile systems.
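    Rmagine's internals are not shown here, but the core of any raytracing range simulator over triangle meshes is a ray/triangle intersection test such as Moller-Trumbore. A minimal pure-Python sketch for a single ray; real implementations batch rays per scan and delegate to vendor acceleration libraries, as the abstract describes:

```python
def ray_triangle_t(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection. Returns the ray
    parameter t of the hit (range = t * |d|), or None on a miss."""
    sub = lambda a, b: (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    cross = lambda a, b: (a[1]*b[2]-a[2]*b[1],
                          a[2]*b[0]-a[0]*b[2],
                          a[0]*b[1]-a[1]*b[0])
    dot = lambda a, b: a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:          # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, p) * inv      # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(tvec, e1)
    v = dot(d, q) * inv         # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv        # distance along the ray
    return t if t > eps else None
```

    Simulating one scan then reduces to casting one such ray per beam of the sensor model and keeping the nearest hit over all mesh triangles, which is the part that acceleration structures and GPU raytracing make fast.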