
    Multisensor Poisson Multi-Bernoulli Filter for Joint Target-Sensor State Tracking

    In a typical multitarget tracking (MTT) scenario, the sensor state is either assumed known, or tracking is performed in the sensor's (relative) coordinate frame. This assumption does not hold when the sensor, e.g., an automotive radar, is mounted on a vehicle and the target state should be represented in a global (absolute) coordinate frame. It is then important for MTT to consider the uncertain location of the vehicle on which the sensor is mounted. In this paper, we present a multisensor low-complexity Poisson multi-Bernoulli MTT filter, which jointly tracks the uncertain vehicle state and target states. Measurements collected by different sensors mounted on multiple vehicles with varying location uncertainty are incorporated sequentially based on the arrival of new sensor measurements. In doing so, targets observed from a sensor mounted on a well-localized vehicle reduce the state uncertainty of other poorly localized vehicles, provided that a common non-empty subset of targets is observed. A low-complexity filter is obtained by approximations of the joint sensor-feature state density minimizing the Kullback-Leibler divergence (KLD). Results from synthetic as well as experimental measurement data, collected in a vehicle driving scenario, demonstrate the performance benefits of joint vehicle-target state tracking. Comment: 13 pages, 7 figures
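    A common building block behind KLD-minimizing density approximations of the kind the abstract mentions is moment matching: collapsing a Gaussian mixture into the single Gaussian that minimizes the KLD from the mixture. The sketch below is illustrative only (it is not the paper's filter; the function name and inputs are assumptions):

```python
import numpy as np

def moment_match(weights, means, covs):
    """Collapse a Gaussian mixture into one Gaussian by moment matching,
    which minimizes KLD(mixture || Gaussian)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalize mixture weights
    means = np.asarray(means, dtype=float)
    mean = np.einsum("i,ij->j", w, means)    # weighted mean
    dim = means.shape[1]
    cov = np.zeros((dim, dim))
    for wi, mi, Pi in zip(w, means, covs):
        d = (mi - mean)[:, None]
        cov += wi * (Pi + d @ d.T)           # within + between spread
    return mean, cov
```

    For two equal-weight unit-variance 1-D components at -1 and +1, this returns mean 0 and variance 2, the spread of the components plus the spread between them.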

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?

    GRASP News: Volume 9, Number 1

    The past year at the GRASP Lab has been an exciting and productive period. As always, innovation and technical advancement arising from past research has led to unexpected questions and fertile areas for new research. New robots, new mobile platforms, new sensors and cameras, and new personnel have all contributed to the breathtaking pace of change. Perhaps the most significant change is the trend towards multi-disciplinary projects, most notably the multi-agent project (see inside for details on this, and all the other new and on-going projects). This issue of GRASP News covers the developments for the year 1992 and the first quarter of 1993.

    GRASP News Volume 9, Number 1

    A report of the General Robotics and Active Sensory Perception (GRASP) Laboratory

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as low-latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world
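    The output stream described above is simply a sequence of (time, location, polarity) tuples, each emitted when a pixel's log-brightness change crosses a contrast threshold. A minimal sketch of this generation model (the threshold value `C=0.2` and function names are assumptions for illustration):

```python
import math
from dataclasses import dataclass

@dataclass
class Event:
    t: float  # timestamp (microsecond resolution in real sensors)
    x: int    # pixel column
    y: int    # pixel row
    p: int    # polarity: +1 brightness increase, -1 decrease

def emit_events(t, x, y, log_I, last_log_I, C=0.2):
    """Emit one event per contrast-threshold crossing at a pixel,
    updating the pixel's stored log-brightness reference."""
    events = []
    delta = log_I - last_log_I
    while abs(delta) >= C:
        p = 1 if delta > 0 else -1
        events.append(Event(t, x, y, p))
        delta -= p * C
        last_log_I += p * C
    return events, last_log_I
```

    A brightness doubling (log change of ln 2 ≈ 0.69) at one pixel would produce three positive events at threshold 0.2, while an unchanged pixel produces none, which is why static scenes generate almost no data.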

    An Overview about Emerging Technologies of Autonomous Driving

    Since DARPA started the Grand Challenges in 2004 and the Urban Challenges in 2007, autonomous driving has been the most active field of AI applications. This paper gives an overview of the technical aspects of autonomous driving technologies and open problems. We investigate the major fields of self-driving systems, such as perception, mapping and localization, prediction, planning and control, simulation, V2X, and safety. In particular, we elaborate on all these issues within the framework of the data closed loop, a popular platform for solving the long-tailed problems of autonomous driving.

    LiDAR aided simulation pipeline for wireless communication in vehicular traffic scenarios

    Integrated Sensing and Communication (ISAC) is a modern technology under development for Sixth Generation (6G) systems. This thesis focuses on creating a simulation pipeline for dynamic vehicular traffic scenarios and on a novel approach to reducing wireless communication overhead with a Light Detection and Ranging (LiDAR) based system. The simulation pipeline can be used to generate data sets for numerous problems. Additionally, the developed error model for vehicle detection algorithms can be used to identify LiDAR performance with respect to different parameters like LiDAR height, range, and laser point density. LiDAR behavior in a traffic environment is presented as part of the results of this study. A periodic beam index map is developed by capturing the antenna azimuth and elevation angles that denote maximum Reference Signal Receive Power (RSRP) for a simulated receiver grid on the road, and classifying areas using a Support Vector Machine (SVM) algorithm to reduce the number of Synchronization Signal Blocks (SSBs) that need to be sent in Vehicle to Infrastructure (V2I) communication. This approach effectively reduces the wireless communication overhead in V2I communication.
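    The core of the beam index map idea can be sketched in a few lines: each receiver grid point is assigned the beam (SSB index) with the highest RSRP, and only beams that win somewhere on the grid need SSB transmissions. This stand-in uses a plain argmax assignment rather than the thesis's SVM region classification, and the grid values are invented for illustration:

```python
import numpy as np

def beam_index_map(rsrp):
    """Assign each receiver grid point the beam (SSB index) with the
    maximum RSRP. rsrp has shape (n_grid_points, n_beams), values in dBm."""
    return np.argmax(rsrp, axis=1)

# hypothetical 4-point roadside grid, 3 candidate beams (dBm)
rsrp = np.array([
    [-80.0, -95.0, -100.0],
    [-90.0, -85.0,  -99.0],
    [-97.0, -89.0,  -96.0],
    [-82.0, -98.0,  -91.0],
])
bmap = beam_index_map(rsrp)
active_beams = np.unique(bmap)  # only these beams need SSBs sent
```

    Here beam 2 never wins a grid point, so SSB transmissions can be restricted to beams 0 and 1, which is the overhead reduction the abstract describes.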

    Human Motion Trajectory Prediction: A Survey

    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand, and anticipate human behavior becomes increasingly important. Specifically, predicting future positions of dynamic agents, and planning with such predictions in mind, are key tasks for self-driving vehicles, service robots, and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze, and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and the level of contextual information used. We provide an overview of the existing datasets and performance metrics. We discuss limitations of the state of the art and outline directions for further research. Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages

    Dynamic Data Driven Applications System Concept for Information Fusion

    We present a framework of Information Fusion (IF) using the Dynamic Data Driven Applications Systems (DDDAS) concept. Existing literature at the intersection of these two topics supports environmental modeling (e.g., terrain understanding) for context-enhanced applications. Taking advantage of sensor models, statistical methods, and situation-specific spatio-temporal fusion products derived from wide area sensor networks, DDDAS demonstrates robust multi-scale and multi-resolution geographical terrain computations. We highlight the complementary nature of these seemingly parallel approaches and propose a more integrated analytical framework in the context of a cooperative multimodal sensing application. In particular, we use a Wide-Area Motion Imagery (WAMI) application to draw parallels and contrasts between IF and DDDAS systems that warrant an integrated perspective. This preliminary work is aimed at triggering deeper, more insightful research toward exploiting sparsely sampled, piecewise-dense WAMI measurements: an application where the challenges of big data with regard to mathematical fusion relationships and high-performance computation remain significant and will persist. Dynamic data-driven adaptive computations are required to effectively handle the challenges of exponentially increasing data volume for advanced information fusion system solutions such as simultaneous target tracking and identification.