    MobiQuery: A Spatiotemporal Query Service for Mobile Users in Sensor Networks

    This paper presents MobiQuery, a spatiotemporal query service that allows mobile users to periodically collect sensor data from the physical environment through wireless sensor networks. A salient feature of MobiQuery is that it can meet stringent spatiotemporal performance constraints, including query latency, data freshness, and changing areas of interest due to user mobility. We present three just-in-time prefetching protocols that enable MobiQuery to achieve the desired spatiotemporal performance despite low node duty cycles, while significantly reducing communication overhead. We validate our approach through both theoretical analysis and extensive simulations under realistic settings, including varying user movement patterns and location errors.
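
    As a rough illustration of the kind of query such a service must satisfy, the sketch below models a periodic spatiotemporal query with latency and freshness bounds and checks whether a prefetched sample is still fresh at delivery time; the field names and values are illustrative assumptions, not taken from the paper.

```python
# A minimal, illustrative sketch of a spatiotemporal query specification.
# Field names and values are assumptions for illustration; they are not from the paper.
from dataclasses import dataclass

@dataclass
class SpatiotemporalQuery:
    center_x: float        # current user position (m)
    center_y: float
    radius: float          # radius of the area of interest (m)
    period: float          # how often results are delivered (s)
    max_latency: float     # bound on query latency (s)
    max_staleness: float   # bound on data freshness (s)

def sample_is_fresh(sample_time: float, delivery_time: float,
                    query: SpatiotemporalQuery) -> bool:
    """A prefetched sample is usable only if it is still fresh when delivered."""
    return (delivery_time - sample_time) <= query.max_staleness

q = SpatiotemporalQuery(center_x=10.0, center_y=5.0, radius=30.0,
                        period=2.0, max_latency=0.5, max_staleness=4.0)
print(sample_is_fresh(sample_time=100.0, delivery_time=103.5, query=q))  # True
```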

    Neuromorphic Camera Denoising using Graph Neural Network-driven Transformers

    Neuromorphic vision is a bio-inspired technology that has triggered a paradigm shift in the computer-vision community and is serving as a key enabler for a multitude of applications. This technology has offered significant advantages including reduced power consumption, reduced processing needs, and communication speed-ups. However, neuromorphic cameras suffer from significant amounts of measurement noise. This noise deteriorates the performance of neuromorphic event-based perception and navigation algorithms. In this paper, we propose a novel noise filtration algorithm to eliminate events which do not represent real log-intensity variations in the observed scene. We employ a Graph Neural Network (GNN)-driven transformer algorithm, called GNN-Transformer, to classify every active event pixel in the raw stream into real log-intensity variation or noise. Within the GNN, a message-passing framework, called EventConv, is carried out to reflect the spatiotemporal correlation among the events, while preserving their asynchronous nature. We also introduce the Known-object Ground-Truth Labeling (KoGTL) approach for generating approximate ground truth labels of event streams under various illumination conditions. KoGTL is used to generate labeled datasets from experiments recorded in challenging lighting conditions. These datasets are used to train and extensively test our proposed algorithm. When tested on unseen datasets, the proposed algorithm outperforms existing methods by 8.8% in terms of filtration accuracy. Additional tests are also conducted on publicly available datasets to demonstrate the generalization capabilities of the proposed algorithm in the presence of illumination variations and different motion dynamics. Compared to existing solutions, qualitative results verified the superior capability of the proposed algorithm to eliminate noise while preserving meaningful scene events.
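
    The classification step described above relies on relating each event to its spatiotemporal neighbours. The sketch below shows one simple way such a neighbourhood graph over (x, y, t) events could be built; the neighbourhood rule and thresholds are assumptions for illustration and are not the paper's EventConv definition.

```python
# Illustrative sketch: building a spatiotemporal neighbourhood graph over event-camera
# events, the kind of structure a message-passing step would operate on. The
# neighbourhood rule and thresholds are assumptions, not the paper's EventConv definition.
import numpy as np

def event_graph_edges(events: np.ndarray, r_px: float = 3.0, dt_s: float = 0.005):
    """events: (N, 4) array of (x, y, t, polarity).
    Returns (i, j) edges linking events that are close in both space and time."""
    edges = []
    order = np.argsort(events[:, 2])                    # process events in temporal order
    ev = events[order]
    for i in range(len(ev)):
        j = i - 1
        while j >= 0 and ev[i, 2] - ev[j, 2] <= dt_s:   # only look back within the time window
            if abs(ev[i, 0] - ev[j, 0]) <= r_px and abs(ev[i, 1] - ev[j, 1]) <= r_px:
                edges.append((int(order[i]), int(order[j])))
            j -= 1
    return edges

events = np.array([[10, 12, 0.000, 1],
                   [11, 12, 0.002, 1],
                   [40, 40, 0.003, 0],
                   [10, 13, 0.004, 1]], dtype=float)
print(event_graph_edges(events))   # [(1, 0), (3, 1), (3, 0)]
```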

    Interactive Planning and Sensing for Aircraft in Uncertain Environments with Spatiotemporally Evolving Threats

    Autonomous aerial, terrestrial, and marine vehicles provide a platform for several applications including cargo transport, information gathering, surveillance, reconnaissance, and search-and-rescue. To enable such applications, two main technical problems are commonly addressed. On the one hand, the motion-planning problem addresses optimal motion to a destination: an application example is the delivery of a package in the shortest time with the least fuel. Solutions to this problem often assume that all relevant information about the environment is available, possibly with some uncertainty. On the other hand, the information-gathering problem addresses the maximization of some metric of information about the environment: application examples include surveillance and environmental monitoring. Solutions to the motion-planning problem in vehicular autonomy assume that information about the environment is available from three sources: (1) the vehicle’s own onboard sensors, (2) stationary sensor installations (e.g. ground radar stations), and (3) other information-gathering vehicles, i.e., mobile sensors, especially with the recent emphasis on collaborative teams of autonomous vehicles with heterogeneous capabilities. Each source typically processes the raw sensor data via estimation algorithms. These estimates are then available to a decision-making system such as a motion-planning algorithm. The motion planner may use some or all of the estimates provided. There is an underlying assumption of “separation” between the motion-planning algorithm and the information about the environment. This separation is common in linear feedback control systems, where estimation algorithms are designed independent of control laws, and control laws are designed with the assumption that the estimated state is the true state. In the case of motion planning, there is no reason to believe that such a separation between the motion-planning algorithm and the sources of estimated environment information will lead to optimal motion plans, even if the motion planner and the estimators are themselves optimal. The goal of this dissertation is to investigate whether the removal of this separation, via interactive motion planning and sensing, can significantly improve the optimality of motion planning. The major contribution of this work is interactive planning and sensing. We consider the problem of planning the path of a vehicle, which we refer to as the actor, to traverse a threat field with minimum threat exposure. The threat field is an unknown, time-variant, and strictly positive scalar field defined on a compact 2D spatial domain – the actor’s workspace. The threat field is estimated by a network of mobile sensors that can measure the threat field pointwise. All measurements are noisy. The objective is to determine a path for the actor to reach a desired goal with minimum risk, which is a measure sensitive not only to the threat exposure itself, but also to the uncertainty therein. A novelty of this problem setup is that the actor can communicate with the sensor network and request that the sensors position themselves, in a procedure we call sensor reconfiguration, such that the actor’s risk is minimized. This work continues with a foundation in motion planning in time-varying fields where waiting is a control input. Waiting is examined in the context of finding an optimal path with considerations for the cost of exposure to a threat field, the cost of movement, and the cost of waiting.
    For example, an application where waiting may be beneficial in motion planning is the delivery of a package where adverse weather may pose a risk to the safety of a UAV and its cargo. In such scenarios, an optimal plan may include “waiting until the storm passes.” Results on the computational efficiency and optimality of considering waiting in path-planning algorithms are presented. In addition, the relationship of waiting in a time-varying field represented with varying levels of resolution, or multiresolution, is studied. Interactive planning and sensing is further developed for the case of time-varying environments. This proposed extension allows for the evaluation of different mission windows, finite sensor-network reconfiguration durations, finite planning durations, and a varying number of available sensors. Finally, the proposed method considers the effect of waiting in the path planner under the interactive planning and sensing for time-varying fields framework. Future work considers various extensions of the proposed interactive planning and sensing framework, including: generalizing the environment using Gaussian processes, sensor reconfiguration costs, multiresolution implementations, nonlinear parameters, decentralized sensor networks, and an application to aerial payload delivery by parafoil.
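
    To make the role of waiting concrete, the sketch below runs a time-expanded Dijkstra search on a small grid with a time-varying threat field in which "wait in place" is an explicit action; the field, costs and horizon are toy assumptions rather than the dissertation's actual problem data.

```python
# Minimal sketch: shortest-path planning on a grid in a time-varying threat field,
# with "wait in place" as an explicit action. The field, costs and horizon are
# illustrative assumptions, not the dissertation's actual problem data.
import heapq

def threat(x, y, t):
    # toy time-varying field: a "storm" over column x == 1 that passes once t >= 3
    return 10.0 if (x == 1 and t < 3) else 1.0

def plan(start, goal, size=4, horizon=12, move_cost=0.5, wait_cost=0.1):
    # state = (x, y, t); cost of entering a state = threat there + action cost
    frontier = [(0.0, start[0], start[1], 0)]
    best = {(start[0], start[1], 0): 0.0}
    while frontier:
        cost, x, y, t = heapq.heappop(frontier)
        if (x, y) == goal:
            return cost
        if t >= horizon or cost > best.get((x, y, t), float("inf")):
            continue
        actions = [(0, 0, wait_cost), (1, 0, move_cost), (-1, 0, move_cost),
                   (0, 1, move_cost), (0, -1, move_cost)]
        for dx, dy, action_cost in actions:
            nx, ny, nt = x + dx, y + dy, t + 1
            if 0 <= nx < size and 0 <= ny < size:
                new_cost = cost + action_cost + threat(nx, ny, nt)
                if new_cost < best.get((nx, ny, nt), float("inf")):
                    best[(nx, ny, nt)] = new_cost
                    heapq.heappush(frontier, (new_cost, nx, ny, nt))
    return float("inf")

# Waiting until the toy "storm" passes yields a cheaper plan than crossing it early.
print(plan(start=(0, 0), goal=(3, 0)))
```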

    A Data-driven Methodology Towards Mobility- and Traffic-related Big Spatiotemporal Data Frameworks

    The human population is increasing at unprecedented rates, particularly in urban areas. This increase, along with the rise of a more economically empowered middle class, brings new and complex challenges to the mobility of people within urban areas. To tackle such challenges, transportation and mobility authorities and operators are trying to adopt innovative Big Data-driven Mobility- and Traffic-related solutions. Such solutions will help decision-making processes that aim to ease the load on an already overloaded transport infrastructure. The information collected from day-to-day mobility and traffic can help to mitigate some of these mobility challenges in urban areas. Road infrastructure and traffic management operators (RITMOs) face several limitations in effectively extracting value from the exponentially growing volumes of mobility- and traffic-related Big Spatiotemporal Data (MobiTrafficBD) that are being acquired and gathered. Research about the topics of Big Data, Spatiotemporal Data and especially MobiTrafficBD is scattered, and the existing literature does not offer a concrete, common methodological approach to set up, configure, deploy and use a complete Big Data-based framework to manage the lifecycle of mobility-related spatiotemporal data, mainly focused on geo-referenced time series (GRTS) and spatiotemporal events (ST Events), extract value from it and support the decision-making processes of RITMOs. This doctoral thesis proposes a data-driven, prescriptive methodological approach towards the design, development and deployment of MobiTrafficBD Frameworks focused on GRTS and ST Events. Besides a thorough literature review on Spatiotemporal Data, Big Data and the merging of these two fields through MobiTrafficBD, the methodological approach comprises a set of general characteristics, technical requirements, logical components, data flows and technological infrastructure models, as well as guidelines and best practices that aim to guide researchers, practitioners and stakeholders, such as RITMOs, throughout the design, development and deployment phases of any MobiTrafficBD Framework. This work is intended to be a supporting methodological guide, based on widely used Reference Architectures and guidelines for Big Data, but enriched with the inherent characteristics and concerns brought about by Big Spatiotemporal Data, such as in the case of GRTS and ST Events. The proposed methodology was evaluated and demonstrated in various real-world use cases that deployed MobiTrafficBD-based Data Management, Processing, Analytics and Visualisation methods, tools and technologies, under the umbrella of several research projects funded by the European Commission and the Portuguese Government.
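
    As a concrete illustration of the two record types the methodology centres on, the sketch below models a GRTS sample and an ST Event as simple data classes; the field names and types are assumptions for illustration only, not the thesis's schemas.

```python
# Illustrative sketch of the two core record types discussed above:
# geo-referenced time series (GRTS) samples and spatiotemporal events (ST Events).
# Field names and types are assumptions for illustration only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class GRTSSample:
    series_id: str        # e.g. a loop detector or a GPS-equipped vehicle
    timestamp: datetime
    lat: float
    lon: float
    value: float          # e.g. speed (km/h) or flow (vehicles/h)

@dataclass
class STEvent:
    event_id: str
    event_type: str       # e.g. "accident", "congestion", "road_closure"
    start: datetime
    end: datetime
    lat: float
    lon: float

sample = GRTSSample("sensor-42", datetime(2023, 5, 1, 8, 30), 38.74, -9.15, 57.0)
event = STEvent("evt-7", "congestion", datetime(2023, 5, 1, 8, 0),
                datetime(2023, 5, 1, 9, 0), 38.74, -9.15)
print(sample, event, sep="\n")
```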

    Learning to represent surroundings, anticipate motion and take informed actions in unstructured environments

    Contemporary robots have become exceptionally skilled at achieving specific tasks in structured environments. However, they often fail when faced with the limitless permutations of real-world unstructured environments. This motivates robotics methods that learn from experience rather than follow a pre-defined set of rules. In this thesis, we present a range of learning-based methods aimed at enabling robots, operating in dynamic and unstructured environments, to better understand their surroundings, anticipate the actions of others, and take informed actions accordingly.

    BEV-Locator: An End-to-end Visual Semantic Localization Network Using Multi-View Images

    Accurate localization ability is fundamental in autonomous driving. Traditional visual localization frameworks approach the semantic map-matching problem with geometric models, which rely on complex parameter tuning and thus hinder large-scale deployment. In this paper, we propose BEV-Locator: an end-to-end visual semantic localization neural network using multi-view camera images. Specifically, a visual BEV (Bird's-Eye-View) encoder extracts and flattens the multi-view images into BEV space. Meanwhile, the semantic map features are structurally embedded as a sequence of map queries. Then a cross-model transformer associates the BEV features and semantic map queries. The localization information of the ego-car is recursively queried out by cross-attention modules. Finally, the ego pose can be inferred by decoding the transformer outputs. We evaluate the proposed method on the large-scale nuScenes and Qcraft datasets. The experimental results show that BEV-Locator is capable of estimating the vehicle pose under versatile scenarios, effectively associating the cross-model information from multi-view images and global semantic maps. The experiments report satisfactory accuracy, with mean absolute errors of 0.052 m, 0.135 m and 0.251° in lateral translation, longitudinal translation and heading angle, respectively.
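
    The core architectural pattern described above (flattened BEV features attended by semantic-map queries and decoded to a pose offset) can be sketched as follows; the dimensions, module names and pooling choice are assumptions, and this is an illustration of cross-attention pose decoding rather than the paper's exact model.

```python
# Minimal sketch (assumed dimensions and names) of the pattern described above:
# flattened BEV features attended by learned semantic-map queries, decoded to a
# pose offset. This illustrates cross-attention pose decoding, not the exact model.
import torch
import torch.nn as nn

class PoseQueryDecoder(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_map_queries=32):
        super().__init__()
        self.map_queries = nn.Parameter(torch.randn(n_map_queries, d_model))
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.pose_head = nn.Linear(d_model, 3)   # (lateral, longitudinal, heading)

    def forward(self, bev_features):
        # bev_features: (B, H*W, d_model), i.e. the flattened BEV grid
        b = bev_features.shape[0]
        queries = self.map_queries.unsqueeze(0).repeat(b, 1, 1)
        attended, _ = self.cross_attn(queries, bev_features, bev_features)
        return self.pose_head(attended.mean(dim=1))   # (B, 3) pose offset

bev = torch.randn(2, 50 * 50, 256)       # batch of 2 flattened 50x50 BEV grids
print(PoseQueryDecoder()(bev).shape)     # torch.Size([2, 3])
```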

    From data acquisition to data fusion : a comprehensive review and a roadmap for the identification of activities of daily living using mobile devices

    This paper reviews the state of the art in sensor fusion techniques applied to the sensors embedded in mobile devices, as a means to help identify the mobile device user’s daily activities. Sensor data fusion techniques are used to consolidate the data collected from several sensors, increasing the reliability of the algorithms for the identification of the different activities. However, mobile devices have several constraints, e.g., low memory, low battery life and low processing power, and some data fusion techniques are not suited to this scenario. The main purpose of this paper is to present an overview of the state of the art and to identify examples of sensor data fusion techniques that can be applied to the sensors available in mobile devices, aiming to identify activities of daily living (ADLs).
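
    As one classic example of a lightweight fusion technique suited to the constraints mentioned above, the sketch below shows a complementary filter that blends gyroscope integration with the accelerometer tilt estimate for a single pitch angle; the parameter values are illustrative assumptions and this is a generic example, not a technique singled out by the paper.

```python
# One classic, lightweight fusion technique suited to mobile-device constraints:
# a complementary filter that blends gyroscope integration (smooth, but drifts)
# with the accelerometer tilt estimate (noisy, but drift-free). Shown for a single
# pitch angle; sample rate and blending factor are illustrative assumptions.
import math

def complementary_filter(pitch_prev, gyro_rate, accel_y, accel_z,
                         dt=0.02, alpha=0.98):
    """pitch_prev and result in radians; gyro_rate in rad/s; accel in m/s^2."""
    pitch_gyro = pitch_prev + gyro_rate * dt          # integrate angular rate
    pitch_accel = math.atan2(accel_y, accel_z)        # tilt from gravity direction
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel

pitch = 0.0
for gyro, ay, az in [(0.1, 0.5, 9.7), (0.1, 0.6, 9.7), (0.0, 0.6, 9.8)]:
    pitch = complementary_filter(pitch, gyro, ay, az)
print(round(pitch, 4))
```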

    Information Dissemination in Mobile Ad-Hoc Geosensor Networks

    Context-based Information Fusion: A survey and discussion

    This survey aims to provide a comprehensive status of recent and current research on context-based Information Fusion (IF) systems, tracing back the roots of the original thinking behind the development of the concept of “context”. It shows how its fortune in the distributed computing world eventually permeated into the world of IF, discussing the current strategies and techniques, and hinting at possible future trends. IF processes can represent context at different levels (structural and physical constraints of the scenario, a priori known operational rules between entities and environment, dynamic relationships modelled to interpret the system output, etc.). In addition to the survey, several novel context exploitation dynamics and architectural aspects peculiar to the fusion domain are presented and discussed.
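
    As a toy illustration of context acting as a structural or physical constraint on a fusion output (the first of the levels listed above), the sketch below snaps a fused position estimate onto the nearest segment of a known road; the geometry and names are assumptions for illustration, not drawn from the survey.

```python
# Illustrative sketch of using context as a physical constraint in fusion:
# a fused position estimate is projected onto the nearest segment of a known
# road network. Geometry and names are toy assumptions for illustration.
import numpy as np

def project_onto_segment(p, a, b):
    """Project point p onto segment a-b (all 2-D numpy arrays)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return a + t * ab

def constrain_to_road(estimate, road_segments):
    """Return the projection of the fused estimate onto the closest road segment."""
    candidates = [project_onto_segment(estimate, a, b) for a, b in road_segments]
    return min(candidates, key=lambda q: np.linalg.norm(q - estimate))

road = [(np.array([0.0, 0.0]), np.array([10.0, 0.0])),
        (np.array([10.0, 0.0]), np.array([10.0, 10.0]))]
print(constrain_to_road(np.array([4.0, 1.2]), road))   # [4. 0.]
```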