
    WideSee: towards wide-area contactless wireless sensing

    Contactless wireless sensing without attaching a device to the target has achieved promising progress in recent years. However, one severe limitation is the small sensing range. This paper presents WideSee to realize wide-area sensing with only one transceiver pair. WideSee utilizes the LoRa signal to achieve a larger sensing range and further incorporates the drone's mobility to broaden the sensing area. WideSee presents solutions across software and hardware to overcome two challenges of wide-range contactless sensing: (i) the interference brought by the device mobility and LoRa's high sensitivity; and (ii) the ambiguous target information, such as location, obtained when employing just a single pair of transceivers. We have developed a working prototype of WideSee for human target detection and localization, which are especially useful in emergency scenarios such as search and rescue, and evaluated it with both controlled experiments and a field study in a high-rise building. Extensive experiments demonstrate the great potential of WideSee for wide-area contactless sensing with a single LoRa transceiver pair hosted on a drone.
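
    The abstract does not detail how the location ambiguity is resolved; purely as a hedged illustration, combining range estimates collected at several drone positions into a least-squares fix is one generic way a mobile single transceiver pair could disambiguate a target location. The range-only measurement model and all names below are assumptions, not details from the paper.

        # Hypothetical sketch: least-squares multilateration from ranges taken
        # at several drone waypoints (generic technique, not WideSee's method).
        import numpy as np

        def multilaterate(positions, ranges):
            """Estimate a 2-D target location from ranges measured at known positions."""
            p0, r0 = positions[0], ranges[0]
            # Linearize ||x - p_i||^2 = r_i^2 against the first measurement.
            A = 2.0 * (positions[1:] - p0)
            b = (r0**2 - ranges[1:]**2
                 + np.sum(positions[1:]**2, axis=1) - np.sum(p0**2))
            est, *_ = np.linalg.lstsq(A, b, rcond=None)
            return est

        # Toy example: a target at (12, 5) m observed from three drone waypoints.
        target = np.array([12.0, 5.0])
        waypoints = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 15.0]])
        measured = np.linalg.norm(waypoints - target, axis=1) + np.random.normal(0, 0.3, 3)
        print(multilaterate(waypoints, measured))   # ~ [12, 5]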

    XRLoc: Accurate UWB Localization for XR Systems

    Understanding the location of ultra-wideband (UWB) tag-attached objects and people in the real world is vital to enabling a smooth cyber-physical transition. However, most UWB localization systems today require multiple anchors in the environment, which can be very cumbersome to set up. In this work, we develop XRLoc, providing an accuracy of a few centimeters in many real-world scenarios. This paper delineates the key ideas which allow us to overcome the fundamental restrictions that prevent a single anchor from localizing a device to within an error of a few centimeters. We deploy a VR chess game using everyday objects as a demo and find that our system achieves 2.4 cm median accuracy and 5.3 cm 90th percentile accuracy in dynamic scenarios, performing at least 8× better than state-of-the-art localization systems. Additionally, we implement a MAC protocol to furnish these locations for over 10 tags at update rates of 100 Hz, with a localization latency of ~1 ms.
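
    The abstract does not describe the localization math itself. As a purely illustrative, assumed model (not XRLoc's actual method), a single anchor that measures both a time-of-flight range and an antenna-pair phase difference could place a tag in 2-D roughly as follows; the carrier frequency, antenna spacing, and function names are hypothetical.

        # Illustrative textbook model only: single-anchor localization from a
        # time-of-flight range plus a two-antenna phase difference of arrival.
        import math

        C = 299_792_458.0          # speed of light, m/s
        FREQ = 6.5e9               # assumed UWB carrier frequency, Hz
        LAMBDA = C / FREQ
        ANT_SEP = LAMBDA / 2       # assumed antenna separation, m

        def locate_tag(tof_s, phase_diff_rad):
            """Return (cross-range, along-boresight) tag coordinates in meters."""
            rng = tof_s * C
            # Far-field model: phase_diff = 2*pi*d*sin(theta)/lambda
            sin_theta = phase_diff_rad * LAMBDA / (2 * math.pi * ANT_SEP)
            theta = math.asin(max(-1.0, min(1.0, sin_theta)))
            return rng * math.sin(theta), rng * math.cos(theta)

        # Toy check: tag 3 m away, 20 degrees off boresight.
        theta_true = math.radians(20.0)
        dphi = 2 * math.pi * ANT_SEP * math.sin(theta_true) / LAMBDA
        print(locate_tag(3.0 / C, dphi))   # ~ (1.03, 2.82)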

    Ultra-wide bandwidth systems for the surveillance of railway crossing areas

    Level crossings are critical elements of railway networks where a large number of accidents take place every year. With the recent enforcement of new and higher safety standards for railway transportation systems, dedicated and reliable technologies for level crossing surveillance must be introduced in order to comply with the safety requirements. In this survey, the worldwide problem of level crossing surveillance is addressed, with particular attention to the recent European safety regulations. In this context, the capability of detecting, localizing, and discriminating the vehicle/obstacle that might be entrapped in a level crossing area is considered of paramount importance to save lives and, at the same time, to avoid costly false alarms. The main solutions available today are illustrated and their pros and cons discussed. In particular, the recent ultra-wide bandwidth technology, combined with proper signal processing and backhauling over the already deployed optical fiber backbone, is shown to represent a promising solution for safety improvement in level crossings.

    Motion tracking of iris features to detect small eye movements

    The inability of current video-based eye trackers to reliably detect very small eye movements has led to confusion about the prevalence, or even the existence, of monocular microsaccades (small, rapid eye movements that occur in only one eye at a time). Because current methods often rely on precisely localizing the pupil and/or corneal reflection on successive frames, current microsaccade-detection algorithms often suffer from signal artifacts and a low signal-to-noise ratio. We describe a new video-based eye-tracking methodology which can reliably detect small eye movements over 0.2 degrees (12 arcmin) with very high confidence. Our method tracks the motion of iris features to estimate velocity rather than position, yielding a better record of microsaccades. We provide a more robust, detailed record of miniature eye movements by relying on more stable, higher-order features (such as local features of iris texture) instead of lower-order features (such as pupil center and corneal reflection), which are sensitive to noise and drift.
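
    A rough, generic illustration of the velocity-from-feature-tracking idea described above might look like the following sketch; the feature detector, frame rate, calibration factor, threshold, and video source are assumptions for the example, not details from the paper.

        # Generic sketch: track iris-texture features across frames with optical
        # flow and flag frames whose median feature velocity exceeds a threshold.
        import cv2
        import numpy as np

        FPS = 500.0                 # assumed high-speed eye-camera frame rate
        DEG_PER_PIXEL = 0.01        # assumed pixel-to-degree calibration
        VEL_THRESH = 10.0           # assumed saccade velocity threshold, deg/s

        cap = cv2.VideoCapture("eye_video.avi")          # hypothetical recording
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        # Pick corner-like features (ideally restricted to the iris region).
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                      qualityLevel=0.01, minDistance=5)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            good_new = new_pts[status.ravel() == 1]
            good_old = pts[status.ravel() == 1]
            # Median feature displacement (pixels/frame) -> eye velocity (deg/s).
            disp = np.median(np.linalg.norm(good_new - good_old, axis=-1))
            velocity = disp * DEG_PER_PIXEL * FPS
            if velocity > VEL_THRESH:
                print(f"possible (micro)saccade, ~{velocity:.1f} deg/s")
            prev_gray, pts = gray, good_new.reshape(-1, 1, 2)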

    Passive radar based on WiFi transmissions: signal processing schemes and experimental results

    The aim of this work is to study innovative techniques and processing strategies for a new passive sensor for short-range surveillance. The sensor will be based on the passive radar principle, and WiFi transmissions, which usually provide Internet access within local areas, will be exploited by the passive sensor to detect, localize, and classify targets.
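
    The abstract does not spell out the processing chain; as a hedged illustration of the passive radar principle it mentions, detection typically involves evaluating a cross-ambiguity function between a reference channel (the direct WiFi signal) and a surveillance channel (target echoes). The sketch below uses made-up signal parameters and is not taken from the paper.

        # Minimal cross-ambiguity function (delay/Doppler map), the standard
        # passive-radar detection step; all parameters here are assumptions.
        import numpy as np

        def cross_ambiguity(surv, ref, fs, max_delay, doppler_bins):
            """Return |CAF| over delay (samples) and Doppler (Hz)."""
            n = np.arange(len(ref))
            caf = np.empty((len(doppler_bins), max_delay))
            for i, fd in enumerate(doppler_bins):
                # Remove the hypothesized Doppler shift from the surveillance channel,
                derotated = surv * np.exp(-2j * np.pi * fd * n / fs)
                # then correlate it against delayed copies of the reference channel.
                for tau in range(max_delay):
                    caf[i, tau] = np.abs(np.vdot(ref[: len(ref) - tau], derotated[tau:]))
            return caf

        # Toy example: echo delayed by 30 samples with a +5 kHz Doppler shift.
        fs = 20e6
        rng = np.random.default_rng(0)
        ref = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
        t = np.arange(4096)
        surv = np.roll(ref, 30) * np.exp(2j * np.pi * 5e3 * t / fs) \
               + 0.1 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
        caf = cross_ambiguity(surv, ref, fs, max_delay=64,
                              doppler_bins=np.arange(-10e3, 10e3 + 1, 2.5e3))
        print(np.unravel_index(np.argmax(caf), caf.shape))   # peak near Doppler +5 kHz, delay 30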

    Study and Characterization of a Camera-based Distributed System for Large-Volume Dimensional Metrology Applications

    Large-Volume Dimensional Metrology (LVDM) deals with the dimensional inspection of large objects, with dimensions on the order of tens up to hundreds of meters. Typical LVDM applications concern the assembly/disassembly phases of large objects in industrial engineering. Based on different technologies and measurement principles, a wealth of LVDM systems have been proposed and developed in the literature, e.g., optical systems such as the laser tracker and laser radar, and mechanical systems such as the gantry CMM and the multi-joint articulated-arm CMM. The main existing LVDM systems can be divided into two categories according to their hardware configuration: centralized systems and distributed systems. By definition, a centralized system is a stand-alone unit which works independently to provide measurements of a spatial point, while a distributed system consists of a set of sensors which work cooperatively to provide measurements of a spatial point, where usually an individual sensor cannot measure the coordinates on its own. Representative distributed systems in the literature include iGPS and MScMS-II. The current trend of LVDM systems seems to be oriented towards distributed systems, and indeed distributed systems demonstrate many advantages that distinguish them from conventional centralized systems.
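
    As a generic illustration of how cooperating sensors jointly measure a single spatial point, the sketch below performs two-view linear triangulation from a pair of calibrated cameras; the camera matrices are made up, and this is not the specific algorithm of the system characterized in the thesis.

        # Generic two-view linear triangulation (DLT) from two calibrated cameras.
        import numpy as np

        def triangulate(P1, P2, uv1, uv2):
            """Recover a 3-D point from its pixel observations in two cameras.

            P1, P2   : 3x4 camera projection matrices
            uv1, uv2 : (u, v) pixel coordinates of the same point in each image
            """
            A = np.vstack([
                uv1[0] * P1[2] - P1[0],
                uv1[1] * P1[2] - P1[1],
                uv2[0] * P2[2] - P2[0],
                uv2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]                       # de-homogenize

        # Toy setup: two cameras 2 m apart observing a point at (0.5, 0.2, 5.0) m.
        K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([np.eye(3), np.array([[-2.0], [0.0], [0.0]])])
        X_true = np.array([0.5, 0.2, 5.0, 1.0])
        uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
        uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
        print(triangulate(P1, P2, uv1, uv2))          # ~ [0.5, 0.2, 5.0]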

    Self-localization in Ad Hoc Indoor Acoustic Networks

    The increasing use of mobile technology in everyday life has aroused interest in developing new ways of utilizing the data collected by devices such as mobile phones and wearable devices. Acoustic sensors can be used to localize sound sources if the positions of spatially separate sensors are known or can be determined. However, the process of determining the 3D coordinates by manual measurements is tedious, especially with an increasing number of sensors. Therefore, the localization process has to be automated. Satellite-based positioning is imprecise for many applications and requires line-of-sight to the sky. This thesis studies localization methods for wireless acoustic sensor networks, a process called self-localization.

    This thesis focuses on self-localization from sound, and therefore the term acoustic is used. Furthermore, the development of the methods aims at utilizing ad hoc sensor networks, which means that the sensors are not necessarily installed in premises like meeting rooms and other purpose-built spaces, which often have dedicated audio hardware for spatial audio applications.

    Instead of relying on such spaces and equipment, mobile devices are used and combined to form sensor networks. For instance, a few mobile phones laid on a table can be used to create a sensor network built for an event, and it is inherently dismantled once the event is over, which explains the use of the term ad hoc. Once the positions of the devices are estimated, the network can be used for spatial applications such as sound source localization and audio enhancement via spatial filtering. The main purpose of this thesis is to present methods for the self-localization of such an ad hoc acoustic sensor network. Using off-the-shelf ad hoc devices to establish sensor networks enables the implementation of many spatial algorithms in basically any environment.

    Several acoustic self-localization methods have been introduced over the years. However, they often rely on specialized hardware and calibration signals. This thesis presents methods that are passive and utilize environmental sounds such as speech, from which, by using time delay estimation, the spatial information of the sensor network can be determined. Many previous self-localization methods assume that the audio captured by the sensors is synchronized. This assumption cannot be made in an ad hoc sensor network, since the different sensors are unaware of each other without specific signaling that is not available without special arrangement.

    The methods developed in this thesis are evaluated with simulations and real data recordings. Scenarios in which the targets of positioning are stationary and in motion are studied. The real-world recordings are made in closed spaces such as meeting rooms, with the targets approximately 1–5 meters apart. The positioning accuracy is approximately five centimeters in the stationary scenario and ten centimeters in the moving-target scenario on average. The most important result of this thesis is presenting the first self-localization method that uses environmental sounds and off-the-shelf unsynchronized devices, and that allows the targets of self-localization to move.
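
    Time delay estimation between microphone signals is the core building block mentioned above; a common, generic way to compute it is GCC-PHAT, sketched below. The sample rate and test signals are placeholders, and the thesis may use a different estimator.

        # Generic GCC-PHAT time-delay estimation between two microphone signals.
        import numpy as np

        def gcc_phat(sig, ref, fs, max_tau=None):
            """Estimate the delay (in seconds) of `sig` relative to `ref`."""
            n = len(sig) + len(ref)                  # zero-pad to avoid circular wrap-around
            SIG = np.fft.rfft(sig, n=n)
            REF = np.fft.rfft(ref, n=n)
            R = SIG * np.conj(REF)
            R /= np.abs(R) + 1e-15                   # PHAT weighting: keep phase only
            cc = np.fft.irfft(R, n=n)
            max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
            cc = np.concatenate((cc[-max_shift:], cc[: max_shift + 1]))
            shift = np.argmax(np.abs(cc)) - max_shift
            return shift / fs

        # Toy check: white noise delayed by 25 samples at 16 kHz (~1.56 ms).
        fs = 16000
        ref = np.random.randn(fs)
        sig = np.concatenate((np.zeros(25), ref))[:fs]
        print(gcc_phat(sig, ref, fs))                # ~ 25 / 16000 = 0.0015625 s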