    Distributed Kalman Filtering

    The use of satellite-derived heterogeneous surface soil moisture for numerical weather prediction

    Summer 1996. Bibliography: pages [296]-320.

    Multi-Speaker Tracking with Audio-Visual Information for Robot Perception (Suivi Multi-Locuteurs avec des Informations Audio-Visuelles pour la Perception des Robots)

    Robot perception plays a crucial role in human-robot interaction (HRI). The perception system provides the robot with information about its surroundings and enables the robot to give feedback. In a conversational scenario, a group of people may chat in front of the robot and move freely. In such situations, robots are expected to understand where the people are, who is speaking, and what they are talking about. This thesis concentrates on answering the first two questions, namely speaker tracking and diarization. We use different modalities of the robot's perception system to achieve this goal. Like sight and hearing for a human being, visual and audio information are the critical cues for a robot in a conversational scenario. The advances in computer vision and audio processing over the last decade have revolutionized robot perception abilities. This thesis makes the following contributions. We first develop a variational Bayesian framework for tracking multiple objects. The variational Bayesian framework gives closed-form, tractable solutions, which makes the tracking process efficient. The framework is first applied to visual multiple-person tracking, where birth and death processes are built jointly with the framework to deal with the varying number of people in the scene. Furthermore, we exploit the complementarity of vision and robot motor information: on the one hand, the robot's active motion can be integrated into the visual tracking system to stabilize tracking; on the other hand, visual information can be used to perform motor servoing. Audio and visual information are then combined in the variational framework to estimate the smooth trajectories of speaking people and to infer the acoustic status of a person, speaking or silent. In addition, we apply the model to acoustic-only speaker localization and tracking, where online dereverberation techniques are applied first and their output is fed to the tracking system. Finally, a variant of the acoustic speaker tracking model based on the von Mises distribution is proposed, which is specifically adapted to directional data. All the proposed methods are validated on datasets appropriate to each application.
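    As an illustration of the directional-data idea mentioned in this abstract, the sketch below implements a minimal grid-based Bayes filter over speaker azimuth with a von Mises observation model. It is not the thesis's variational framework; the function names, grid resolution, and concentration parameters (`track_doa`, `kappa_obs`, `kappa_trans`) are illustrative assumptions only.

```python
import numpy as np
from scipy.special import i0


def von_mises_pdf(theta, mu, kappa):
    """Von Mises density on the circle with mean direction mu and concentration kappa."""
    return np.exp(kappa * np.cos(theta - mu)) / (2.0 * np.pi * i0(kappa))


def track_doa(observations, kappa_obs=8.0, kappa_trans=20.0, n_grid=360):
    """Grid-based Bayes filter over speaker azimuth (radians) with von Mises
    observation and transition models; returns a smoothed azimuth per frame."""
    grid = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    belief = np.full(n_grid, 1.0 / n_grid)              # uniform prior over azimuth
    # Circular transition kernel: probability mass spreads to nearby azimuths.
    trans = von_mises_pdf(grid[:, None] - grid[None, :], 0.0, kappa_trans)
    trans /= trans.sum(axis=0, keepdims=True)
    estimates = []
    for z in observations:
        belief = trans @ belief                          # predict step
        belief *= von_mises_pdf(grid, z, kappa_obs)      # update with the noisy DOA
        belief /= belief.sum()
        # Circular mean of the posterior as the point estimate.
        estimates.append(np.angle(np.sum(belief * np.exp(1j * grid))))
    return np.array(estimates)


if __name__ == "__main__":
    true_path = np.linspace(0.0, np.pi / 2, 50)          # speaker slowly moving
    noisy_doa = true_path + 0.2 * np.random.randn(50)    # simulated noisy estimates
    print(track_doa(noisy_doa)[:5])
```

    The von Mises likelihood wraps correctly around the circle, which is the property that makes it better suited to azimuth tracking than a Gaussian on the raw angle.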

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, so some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments, due to issues such as a lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capturing human intent and commands. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for classification of 12 gestures. In real-time tests, an accuracy of 95.6% was achieved for classification of 5 gestures.
It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis also establishes that arm pose changes the measured signal, and introduces a new method of fusing IMU and MMG data to provide classification that is robust to both of these sources of interference. Additionally, an improvement to orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications to robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and trigger this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
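    To make the classification stage concrete, the following sketch applies the LDA and SVM classifiers named in the abstract to fused window features from hypothetical MMG and IMU streams using scikit-learn. The feature set, array shapes, and randomly generated data are placeholders, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def window_features(mmg, imu, win=200):
    """Per-window features from a 6-channel MMG band and an IMU orientation stream:
    RMS and waveform length per MMG channel plus the mean orientation quaternion."""
    feats = []
    for start in range(0, len(mmg) - win + 1, win):
        m = mmg[start:start + win]                        # (win, 6) mechanomyography
        q = imu[start:start + win]                        # (win, 4) quaternions
        rms = np.sqrt((m ** 2).mean(axis=0))
        wl = np.abs(np.diff(m, axis=0)).sum(axis=0)
        feats.append(np.concatenate([rms, wl, q.mean(axis=0)]))
    return np.array(feats)


# Placeholder recordings: synthetic MMG/IMU samples and one random label per window.
rng = np.random.default_rng(0)
mmg = rng.normal(size=(120_000, 6))
imu = rng.normal(size=(120_000, 4))
X = window_features(mmg, imu)                             # (600, 16) fused features
y = rng.integers(0, 12, size=len(X))                      # 12 gesture classes

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=10.0))]:
    pipe = make_pipeline(StandardScaler(), clf)
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.3f}")
```

    With real, labelled recordings in place of the random arrays, the same pipeline yields the kind of LDA/SVM comparison reported in the abstract.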

    Global estimation of precipitation using opaque microwave bands

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 115-125). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. This thesis describes the use of opaque microwave bands for global estimation of precipitation rate. An algorithm was developed for estimating instantaneous precipitation rate for the Advanced Microwave Sounding Unit (AMSU) on the NOAA-15, NOAA-16, and NOAA-17 satellites, and the Advanced Microwave Sounding Unit and Humidity Sounder for Brazil (AMSU/HSB) aboard the NASA Aqua satellite. The algorithm relies primarily on channels in the opaque 54-GHz oxygen and 183-GHz water vapor resonance bands. Many methods for estimating precipitation rate using surface-sensitive microwave window channels have been developed by others. The algorithm involves a set of signal processing components whose outputs are fed into a neural net to produce a rain rate estimate for each 15-km spot. The signal processing components utilize techniques such as principal component analysis for characterizing groups of channels, spatial filtering for cloud-clearing brightness temperature images, and data fusion for sharpening images in order to optimize sensing of small precipitation cells. An effort has been made to make the algorithm as blind to surface variations as possible. The algorithm was trained using data over the eastern U.S. from the NEXRAD ground-based radar network, and was validated through numerical comparisons with NEXRAD data and visual examination of the morphology of precipitation over the eastern U.S. and around the world. It performed reasonably well over the eastern U.S. and showed potential for detecting and estimating falling snow. However, it tended to overestimate rain rate in summer Arctic climates. Adjustments to the algorithm were made by developing a neural-net-based estimator of a multiplicative correction factor based on data from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) on the Aqua satellite. The correction improved estimates in the Arctic to more reasonable levels. The final estimator was a hybrid of the NEXRAD-trained estimator and the AMSR-E-corrected estimator. Climatological metrics were computed over one year during which all AMSU-A/B instruments on NOAA-15, NOAA-16, and NOAA-17 were working. Annual mean rain rates appear to agree morphologically with those from the Global Precipitation Climatology Project. Maps of precipitation frequencies and the diurnal variations of precipitation rate were produced. By Frederick Wey-Min Chen. Ph.D.
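    A minimal sketch of the PCA-plus-neural-net structure described above is given below, using synthetic stand-ins for the brightness-temperature channels and radar-derived rain rates. The channel count, network size, and data are assumptions for illustration only and omit the spatial filtering, cloud clearing, and data fusion steps of the actual algorithm.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training set: rows are 15-km spots, columns are brightness
# temperatures from opaque-band channels; targets are radar rain rates (mm/h).
rng = np.random.default_rng(1)
tb = rng.normal(loc=250.0, scale=15.0, size=(5000, 9))       # brightness temperatures (K)
rain_rate = np.clip(rng.gamma(shape=1.5, scale=2.0, size=5000), 0, None)

# PCA compresses the channel group into a few components; the components
# (not raw channels) are fed to a small neural net that outputs rain rate.
estimator = make_pipeline(
    StandardScaler(),
    PCA(n_components=4),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
estimator.fit(tb, rain_rate)
print(estimator.predict(tb[:3]))                             # rain rate estimates (mm/h)
```

    Working from principal components rather than raw channels mirrors the abstract's goal of characterizing groups of channels while keeping the regression input compact.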

    Remote Sensing

    This dual conception of remote sensing brought us to the idea of preparing two different books: in addition to the first book, which presents recent advances in remote sensing applications, this book is devoted to new techniques for data processing, sensors, and platforms. We do not intend this book to cover all aspects of remote sensing techniques and platforms, since that would be an impossible task for a single volume. Instead, we have collected a number of high-quality, original, and representative contributions in those areas.

    Workshop on Strategies for Calibration and Validation of Global Change Measurements

    The Committee on Environment and Natural Resources (CENR) Task Force on Observations and Data Management hosted a Global Change Calibration/Validation Workshop on May 10-12, 1995, in Arlington, Virginia. The Workshop was convened by Robert Schiffer of NASA Headquarters in Washington, D.C., for the CENR Secretariat with a view toward assessing and documenting lessons learned in the calibration and validation of large-scale, long-term data sets in land, ocean, and atmospheric research programs. The National Aeronautics and Space Administration (NASA)/Goddard Space Flight Center (GSFC) hosted the meeting on behalf of the Committee on Earth Observation Satellites (CEOS) Working Group on Calibration/Validation, the Global Change Observing System (GCOS), and the U.S. CENR. Experts from the international scientific community were brought together to develop recommendations for the calibration and validation of global change data sets taken from instrument series and across generations of instruments and technologies. Forty-nine scientists from nine countries participated; the U.S., Canada, the United Kingdom, France, Germany, Japan, Switzerland, Russia, and Kenya were represented.

    Vision Sensors and Edge Detection

    The Vision Sensors and Edge Detection book reflects a selection of recent developments within the area of vision sensors and edge detection. There are two sections in this book. The first section presents vision sensors with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection, and the second covers image processing techniques such as image measurements, image transformations, filtering, and parallel computing.
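    As a small, hedged example of the edge detection topic covered by the book, the sketch below computes a classical Sobel gradient-magnitude edge map; the threshold and test image are arbitrary, and the book itself surveys many more techniques.

```python
import numpy as np
from scipy import ndimage


def sobel_edges(image, threshold=0.2):
    """Binary edge map from the Sobel gradient magnitude, a standard baseline
    for edge detection."""
    img = image.astype(float)
    gx = ndimage.sobel(img, axis=1)          # horizontal gradient
    gy = ndimage.sobel(img, axis=0)          # vertical gradient
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12                 # normalize to [0, 1]
    return mag > threshold                   # True where an edge is detected


if __name__ == "__main__":
    # Synthetic test image: a bright square on a dark background.
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0
    print(sobel_edges(img).sum(), "edge pixels")
```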

    Multi-sensor remote sensing for drought characterization: current status, opportunities and a roadmap for the future

    Satellite-based remote sensing offers one of the few approaches able to monitor the spatial and temporal development of regional- to continental-scale droughts. A unique element of remote sensing platforms is their multi-sensor capability, which enhances the capacity for characterizing drought from a variety of perspectives. Such aspects include monitoring drought influences on vegetation and hydrological responses, as well as assessing sectoral impacts (e.g., agriculture). With advances in remote sensing systems along with an increasing range of platforms available for analysis, this contribution provides a timely and systematic review of multi-sensor remote sensing drought studies, with a particular focus on drought-related datasets, drought-related phenomena and mechanisms, and drought modeling. To explore this topic, we first present a comprehensive summary of large-scale remote sensing datasets that can be used for multi-sensor drought studies. We then review the role of multi-sensor remote sensing for exploring key drought-related phenomena and mechanisms, including vegetation responses to drought, land-atmosphere feedbacks during drought, drought-induced tree mortality, drought-related ecosystem fires, post-drought recovery and legacy effects, flash drought, as well as drought trends under climate change. A summary of recent modeling advances towards developing integrated multi-sensor remote sensing drought indices is also provided. We conclude that leveraging multi-sensor remote sensing provides unique benefits for regional to global drought studies, particularly in: 1) revealing the complex drought impact mechanisms on ecosystem components; 2) providing continuous long-term drought-related information at large scales; 3) presenting real-time drought information with high spatiotemporal resolution; 4) providing multiple lines of evidence for drought monitoring to improve modeling and prediction robustness; and 5) improving the accuracy of drought monitoring and assessment efforts. We specifically highlight that more mechanism-oriented drought studies that leverage a combination of sensors and techniques (e.g., optical, microwave, hyperspectral, LiDAR, and constellations) across a range of spatiotemporal scales are needed in order to progress and advance our understanding, characterization, and description of drought in the future.
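    As a hedged illustration of the integrated multi-sensor drought indices discussed above, the sketch below combines standardized monthly anomalies of precipitation, soil moisture, and a vegetation index into a simple composite; the weights, variables, and synthetic data are assumptions, and the result is far simpler than operational indices.

```python
import numpy as np


def standardized_anomaly(series, n_per_year=12):
    """Per-calendar-month z-score of a monthly time series (length = years * 12)."""
    x = np.asarray(series, dtype=float).reshape(-1, n_per_year)
    z = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-12)
    return z.ravel()


def composite_drought_index(precip, soil_moisture, ndvi, weights=(0.4, 0.3, 0.3)):
    """Weighted combination of standardized anomalies from three sensors.
    Negative values indicate drier or more stressed conditions than the climatology."""
    z = np.stack([standardized_anomaly(precip),
                  standardized_anomaly(soil_moisture),
                  standardized_anomaly(ndvi)])
    return np.average(z, axis=0, weights=weights)


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    months = 20 * 12                               # 20 years of monthly data
    precip = rng.gamma(2.0, 50.0, months)          # e.g., satellite precipitation (mm)
    sm = rng.beta(2.0, 5.0, months)                # e.g., microwave soil moisture
    ndvi = rng.uniform(0.2, 0.8, months)           # e.g., optical vegetation index
    cdi = composite_drought_index(precip, sm, ndvi)
    print("driest month index:", cdi.min())
```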

    Recent Advances in Indoor Localization Systems and Technologies

    Despite the enormous technical progress seen in the past few years, the maturity of indoor localization technologies has not yet reached the level of GNSS solutions. The 23 selected papers in this book present recent advances and new developments in indoor localization systems and technologies, propose novel or improved methods with increased performance, provide insight into various aspects of quality control, and also introduce some unorthodox positioning methods.
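    As a minimal illustration of one classical positioning method in this space, the sketch below performs linearized least-squares trilateration from ranges to known anchors; the anchor layout and noise level are hypothetical, and the book's papers cover far more sophisticated approaches.

```python
import numpy as np


def trilaterate(anchors, distances):
    """Least-squares position estimate from ranges to known anchor points
    (classical linearized trilateration)."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Linearize by subtracting the first anchor's range equation from the others.
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos


if __name__ == "__main__":
    anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]   # hypothetical beacon positions (m)
    true_pos = np.array([3.0, 4.0])
    ranges = np.linalg.norm(np.asarray(anchors) - true_pos, axis=1) + 0.05
    print(trilaterate(anchors, ranges))              # approximately [3, 4]
```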