
    Acoustical Ranging Techniques in Embedded Wireless Sensor Networked Devices

    Location sensing provides endless opportunities for a wide range of applications in GPS-obstructed environments, where there is typically a need for a high degree of accuracy. In this article, we focus on robust range estimation, an important prerequisite for fine-grained localization. Motivated by the promise of acoustics in delivering high ranging accuracy, we present the design, implementation and evaluation of acoustic (both ultrasound and audible) ranging systems. We distill the limitations of acoustic ranging and present efficient signal designs and detection algorithms to overcome the challenges of coverage, range, accuracy/resolution, tolerance to the Doppler effect, and audible intensity. We evaluate our proposed techniques experimentally on TWEET, a low-power platform purpose-built for acoustic ranging applications. Our experiments demonstrate an operational range of 20 m (outdoor) and an average accuracy of 2 cm in the ultrasound domain. Finally, we present the design of an audible-range acoustic tracking service that combines the benefits of a near-inaudible acoustic broadband chirp with an approximately twofold increase in Doppler tolerance to achieve better performance.
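    As a rough illustration of the kind of processing such a ranging system performs (a sketch of my own, not the TWEET implementation; the sample rate, chirp band and synchronisation scheme are assumptions), the Python snippet below generates a near-inaudible broadband chirp, detects its arrival by matched-filter cross-correlation, and converts the resulting time of flight into a distance using the speed of sound.

        import numpy as np
        from scipy.signal import chirp

        FS = 48_000        # sample rate in Hz (assumed)
        C_SOUND = 343.0    # speed of sound in air at ~20 degrees C, m/s

        def make_chirp(f0=18_000, f1=22_000, duration=0.01):
            # Near-inaudible linear chirp; band and length are illustrative only.
            t = np.arange(int(duration * FS)) / FS
            return chirp(t, f0=f0, t1=duration, f1=f1)

        def estimate_range(received, template, emit_sample=0):
            # Matched-filter detection: the chirp arrival is the correlation peak.
            # Assumes sender and receiver share a time reference (e.g. an RF sync pulse).
            corr = np.correlate(received, template, mode="valid")
            arrival = int(np.argmax(np.abs(corr)))
            tof = (arrival - emit_sample) / FS      # one-way time of flight, seconds
            return tof * C_SOUND                    # distance in metres

        # Toy usage: a chirp arriving after ~2.8 ms of flight (~0.96 m).
        tmpl = make_chirp()
        rx = 0.01 * np.random.randn(4096)           # background noise
        delay = int(0.0028 * FS)
        rx[delay:delay + len(tmpl)] += tmpl
        print(f"estimated range: {estimate_range(rx, tmpl):.2f} m")

    A real system additionally has to cope with multipath, attenuation over the 20 m operating range and Doppler distortion of the chirp, which is where the signal-design choices discussed in the article come in.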

    Lower bounds for Arrangement-based Range-Free Localization in Sensor Networks

    Colanders are location-aware entities that collaborate to determine the approximate location of mobile or static objects when beacons from an object are received by all colanders within its distance $R$. This model, referred to as arrangement-based localization, does not require distance estimation between entities, which has been shown to be highly erroneous in practice. Colanders are applicable to localization in sensor networks and to tracking of mobile objects. A set $S \subset \mathbb{R}^2$ is an $(R,\epsilon)$-colander if, by placing receivers at the points of $S$, a wireless device with transmission radius $R$ can be localized to within a circle of radius $\epsilon$. We present tight upper and lower bounds on the size of $(R,\epsilon)$-colanders. We also measure the expected size of point sets that will form $(R,\epsilon)$-colanders if they are distributed uniformly over the plane.
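    To make the arrangement-based model concrete, the following sketch (my own grid approximation under assumed parameters, not a construction from the paper) computes the region consistent with which receivers did and did not hear a beacon: the intersection of the $R$-disks around hearing receivers, minus the $R$-disks around silent ones. An arrangement is an $(R,\epsilon)$-colander only if this region fits inside a circle of radius $\epsilon$ for every possible object position; the sketch checks a single position.

        import numpy as np

        def feasible_region(receivers, heard, R, grid):
            # Grid points consistent with the detection pattern: inside every hearing
            # receiver's R-disk and outside every silent receiver's R-disk.
            mask = np.ones(len(grid), dtype=bool)
            for p, h in zip(receivers, heard):
                d = np.linalg.norm(grid - p, axis=1)
                mask &= (d <= R) if h else (d > R)
            return grid[mask]

        def localized_within_eps(receivers, obj, R, eps, extent=6.0, step=0.1):
            # Conservative check: does the feasible region fit in a circle of radius
            # eps centred at its centroid (an over-estimate of the enclosing radius)?
            xs = np.arange(-extent, extent, step)
            grid = np.array([(x, y) for x in xs for y in xs])
            heard = [np.linalg.norm(obj - p) <= R for p in receivers]
            region = feasible_region(np.asarray(receivers), heard, R, grid)
            centre = region.mean(axis=0)
            return bool(np.linalg.norm(region - centre, axis=1).max() <= eps)

        # Toy usage: a square lattice of receivers with spacing 2 and radius R = 2.
        receivers = [(x, y) for x in range(-4, 5, 2) for y in range(-4, 5, 2)]
        print(localized_within_eps(receivers, np.array([0.3, 0.7]), R=2.0, eps=1.5))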

    Optimization of a star search algorithm for the ESMO star tracker

    This paper describes in detail the design and development of a star search algorithm, embedded on a star tracker, by the ISAE/SUPAERO team. The search algorithm is inspired by musical recognition techniques. This work is carried out as part of the ESMO (European Student Moon Orbiter) project by teams of students and professors from ISAE/SUPAERO (Institut Supérieur de l'Aéronautique et de l'Espace). To date, the system engineering studies have been completed, and the work presented here concerns the algorithmic and embedded software development. The physical architecture of the sensor relies on the APS 750 developed by the CIMI laboratory of ISAE/SUPAERO. First, a star search algorithm based on the image acquired in lost-in-space mode (one of the star tracker's operational modes) is presented; it is inspired by techniques of musical recognition, which correlate digital signatures (hashes) with those stored in a database. Musical recognition is based on fingerprinting, i.e. the extraction of points of interest in the studied signal; in the musical context, the signal's spectrogram is used to identify these points. Applying this technique in the image-processing domain requires a tool equivalent to the spectrogram. The points of interest form a hash that is used to search efficiently within a previously sorted database for comparison. The main goals of this search algorithm are to minimise the number of computation steps, so that information can be delivered at a higher frequency, and to increase robustness against possible disturbances.
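    As a loose illustration of the fingerprinting idea (a generic star-identification sketch under my own assumptions, using quantized pairwise angular separations as hash keys and a simple voting scheme, not the ESMO algorithm), the snippet below builds a key-indexed catalogue database and identifies image stars in lost-in-space fashion by look-up and voting.

        import numpy as np
        from collections import defaultdict
        from itertools import combinations

        QUANT = 1e-3  # quantization step for angular separations, in radians (assumed)

        def pair_keys(unit_vectors):
            # Hash keys: quantized angular separation of each pair of star directions.
            keys = []
            for (i, u), (j, v) in combinations(enumerate(unit_vectors), 2):
                angle = np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
                keys.append((int(round(angle / QUANT)), i, j))
            return keys

        def build_database(catalogue_vectors):
            # Pre-computed database: quantized separation -> catalogue star pairs.
            db = defaultdict(list)
            for key, i, j in pair_keys(catalogue_vectors):
                db[key].append((i, j))
            return db

        def identify(image_vectors, db):
            # Lost-in-space identification: every matching pair key casts votes for
            # image-star / catalogue-star pairings; the most-voted candidate wins.
            votes = defaultdict(lambda: defaultdict(int))
            for key, i, j in pair_keys(image_vectors):
                for ci, cj in db.get(key, []):
                    for img, cat in ((i, ci), (i, cj), (j, ci), (j, cj)):
                        votes[img][cat] += 1
            return {img: max(cands, key=cands.get) for img, cands in votes.items()}

    A practical implementation would also search neighbouring keys to tolerate centroiding noise and would confirm the winning hypothesis geometrically before computing an attitude.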

    Speaker Normalization Using Cortical Strip Maps: A Neural Model for Steady State Vowel Identification

    Auditory signals of speech are speaker-dependent, but representations of language meaning are speaker-independent. Such a transformation enables speech to be understood from different speakers. A neural model is presented that performs speaker normalization to generate a pitch-independent representation of speech sounds, while also preserving information about speaker identity. This speaker-invariant representation is categorized into unitized speech items, which serve as input to sequential working memories whose distributed patterns can be categorized, or chunked, into syllable and word representations. The proposed model fits into an emerging model of auditory streaming and speech categorization. The auditory streaming and speaker normalization parts of the model both use multiple strip representations and asymmetric competitive circuits, thereby suggesting that these two circuits arose from similar neural designs. The normalized speech items are rapidly categorized and stably remembered by Adaptive Resonance Theory circuits. Simulations use synthesized steady-state vowels from the Peterson and Barney [J. Acoust. Soc. Am. 24, 175-184 (1952)] vowel database and achieve accuracy rates similar to those achieved by human listeners. These results are compared to behavioral data and other speaker normalization models. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624).
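    Purely as a toy stand-in for the two stages the abstract describes, namely a speaker/pitch-normalizing transform followed by ART-style category learning, the sketch below normalizes formant vectors with a crude log-mean subtraction and assigns them to prototypes under a vigilance test. It is not the cortical strip-map circuit of the paper; the normalization, parameters and formant values are my own illustrative assumptions.

        import numpy as np

        def normalize(formants_hz):
            # Crude speaker normalization: log-formants minus their mean, removing an
            # overall vocal-tract-length / pitch-related offset (a stand-in only).
            x = np.log(np.asarray(formants_hz, dtype=float))
            return x - x.mean()

        class SimpleART:
            # Minimal ART-style online categorizer: each category is a prototype; an
            # input resonates with the closest prototype if similarity exceeds the
            # vigilance threshold, otherwise a new category is created.
            def __init__(self, vigilance=0.9, lr=0.2):
                self.vigilance, self.lr = vigilance, lr
                self.prototypes = []

            def learn(self, x):
                if self.prototypes:
                    sims = [1.0 / (1.0 + np.linalg.norm(x - p)) for p in self.prototypes]
                    k = int(np.argmax(sims))
                    if sims[k] >= self.vigilance:
                        self.prototypes[k] += self.lr * (x - self.prototypes[k])
                        return k
                self.prototypes.append(np.array(x, dtype=float))
                return len(self.prototypes) - 1

        # Toy usage: illustrative F1/F2/F3 values (Hz) for the vowel /i/ from a
        # lower- and a higher-pitched speaker; both should land in one category.
        art = SimpleART(vigilance=0.9)
        print(art.learn(normalize([270, 2290, 3010])),
              art.learn(normalize([310, 2790, 3310])))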