
    Recent Advances in mmWave-Radar-Based Sensing, Its Applications, and Machine Learning Techniques: A Review

    Human gesture detection, obstacle detection, collision avoidance, parking aids, automated driving, medical, meteorological, industrial, agricultural, defense, space, and other relevant fields have all benefited from recent advances in mmWave radar sensor technology. A mmWave radar has several advantages that set it apart from other sensors: it can operate in bright, dazzling, or no-light conditions, it permits greater antenna miniaturization than traditional radars, and it offers better range resolution. Moreover, as more data sets have been made available, the potential for incorporating radar data into machine learning methods for various applications has grown significantly. This review focuses on key performance metrics in mmWave-radar-based sensing, detailed applications, and machine learning techniques used with mmWave radar for a variety of tasks. The article begins with a discussion of the working bands of mmWave radars, then covers the types of mmWave radars and their key specifications, mmWave radar data interpretation, and applications across various domains, and closes with a discussion of machine learning algorithms applied to radar data. Our review serves as a practical reference for beginners developing mmWave-radar-based applications using machine learning techniques.
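    The range-resolution advantage mentioned in this abstract follows from the standard FMCW radar relation ΔR = c/(2B), where B is the chirp sweep bandwidth. The sketch below simply evaluates that relation; the 4 GHz example bandwidth is an illustrative figure for a 77 GHz automotive sensor, not a value taken from the review:

    ```python
    # Standard FMCW radar range resolution: delta_R = c / (2 * B).
    # A wider sweep bandwidth B yields a finer range resolution.

    C = 299_792_458.0  # speed of light in m/s

    def range_resolution(bandwidth_hz: float) -> float:
        """Theoretical range resolution in metres for sweep bandwidth B (Hz)."""
        return C / (2.0 * bandwidth_hz)

    # Illustrative: a 77 GHz automotive radar sweeping 4 GHz resolves ~3.7 cm
    print(f"{range_resolution(4e9) * 100:.1f} cm")
    ```

    Doubling the bandwidth halves ΔR, which is why mmWave bands, with several GHz of spectrum available, outperform lower-frequency radars on range resolution.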

    Survey on Recent Advances in Integrated GNSSs Towards Seamless Navigation Using Multi-Sensor Fusion Technology

    During the past few decades, the presence of global navigation satellite systems (GNSSs) such as GPS, GLONASS, Beidou and Galileo has facilitated positioning, navigation and timing (PNT) for various outdoor applications. With the rapid increase in the number of orbiting satellites per GNSS, enhancements to satellite-based augmentation systems (SBASs) such as EGNOS and WAAS, and the commissioning of new GNSS constellations, PNT capabilities have been pushed to new frontiers. Additionally, recent developments in precise point positioning (PPP) and real-time kinematic (RTK) algorithms have made carrier-phase precision positioning feasible up to three-dimensional localization. With the rapid growth of internet of things (IoT) applications, seamless navigation has become crucial for numerous PNT-dependent applications, especially in sensitive fields such as safety and industrial applications. Over the years, GNSSs have maintained acceptable PNT performance in RTK and PPP applications; however, GNSS faces major challenges in some complicated signal environments. In many scenarios, the GNSS signal deteriorates due to multipath fading and attenuation in densely obstructed environments. Recently, there has been growing demand, e.g. in the autonomous-things domain, for reliable systems that accurately estimate position, velocity and time (PVT) observables. Many such applications also require information about the six-degrees-of-freedom (6-DOF: x, y, z, roll, pitch, and heading) movements of the target anchors. Numerous modern applications benefit from precise PNT solutions, such as unmanned aerial vehicles (UAVs), automatic guided vehicles (AGVs) and intelligent transportation systems (ITS).
Hence, multi-sensor fusion technology has become vital in seamless navigation systems owing to its complementary capabilities to GNSSs. Fusion-based positioning in multi-sensor technology uses measurements from multiple sensors, in addition to the primary GNSS, for further refinement, which results in high-precision, less erroneous localization. Inertial navigation systems (INSs) and their inertial measurement units (IMUs) are the technologies most commonly used to augment GNSS in multi-sensor integrated systems. In this article, we survey the most recent literature on multi-sensor GNSS technology for seamless navigation. We provide an overall perspective on the advantages, challenges and recent developments of the fusion-based GNSS navigation realm, and analyze the gap between scientific advances and commercial offerings. INS/GNSS and IMU/GNSS systems have proven very reliable in GNSS-denied environments where satellite signal degradation is at its peak, which is why both integrated systems are abundant in the relevant literature. In addition, light detection and ranging (LiDAR) systems are widely adopted in the literature for their capability to provide 6-DOF estimates to mobile vehicles and autonomous robots. LiDARs are very accurate; however, their high initial cost makes them unsuitable for low-cost positioning. Moreover, several other techniques from the radio frequency (RF) spectrum are utilized in multi-sensor systems, such as cellular networks, WiFi, ultra-wideband (UWB) and Bluetooth. Cellular-based systems are well suited to outdoor navigation applications, while WiFi-based, UWB-based and Bluetooth-based systems are efficient in indoor positioning systems (IPS). However, to achieve reliable PVT estimates in multi-sensor GNSS navigation, optimal algorithms should be developed to mitigate the estimation errors arising in non-line-of-sight (NLOS) GNSS situations.
Examples of the most commonly used algorithms for trilateration-based positioning are Kalman filters, weighted least squares (WLS), particle filters (PF) and many hybrid algorithms that combine two or more of these. In this paper, the reviewed articles are presented by highlighting their motivation, implementation methodology, modelling and experiments. They are then assessed against their published results, using achieved accuracy, robustness and overall implementation cost-benefit as performance metrics. Our survey assesses the most promising, highly ranked and recent articles, offering insights into the future of GNSS technology with multi-sensor fusion.
©2021 The Authors. Published by ION.
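    As a concrete sketch of one of the algorithms named above, the snippet below implements trilateration by weighted least squares, solved with Gauss-Newton iterations. The anchor layout, weights and noise-free measurement model are illustrative assumptions for the sketch, not taken from any of the surveyed articles:

    ```python
    import numpy as np

    def wls_trilaterate(anchors, ranges, weights, iters=10):
        """Weighted least squares position fix from range measurements.

        anchors: (n, d) known anchor coordinates
        ranges:  (n,) measured distances to each anchor
        weights: (n,) measurement weights (e.g. inverse variances)
        """
        x = anchors.mean(axis=0)                 # initial guess: anchor centroid
        W = np.diag(weights)
        for _ in range(iters):
            pred = np.linalg.norm(anchors - x, axis=1)   # predicted ranges
            resid = ranges - pred                        # range residuals
            J = (x - anchors) / pred[:, None]            # Jacobian d(range)/dx
            dx = np.linalg.solve(J.T @ W @ J, J.T @ W @ resid)
            x = x + dx                                   # Gauss-Newton update
        return x

    # Illustrative 2D example with four anchors and noise-free ranges
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    truth = np.array([3.0, 4.0])
    ranges = np.linalg.norm(anchors - truth, axis=1)
    print(wls_trilaterate(anchors, ranges, np.ones(4)))  # close to [3. 4.]
    ```

    A Kalman filter extends this single-epoch fix over time by fusing each new solution with a motion model, while suspected NLOS-corrupted ranges can be down-weighted through the weights vector.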

    Integrated Sensing and Communications: Towards Dual-functional Wireless Networks for 6G and Beyond

    As the standardization of 5G solidifies, researchers are speculating about what 6G will be. The integration of sensing functionality is emerging as a key feature of the 6G Radio Access Network (RAN), allowing dense cell infrastructures to be exploited to construct a perceptive network. In this IEEE Journal on Selected Areas in Communications (JSAC) Special Issue overview, we provide a comprehensive review of the background, key applications and state-of-the-art approaches of Integrated Sensing and Communications (ISAC). We commence by discussing the interplay between sensing and communications (S&C) from a historical point of view, and then consider the multiple facets of ISAC and the resulting performance gains. By introducing both ongoing and potential use cases, we shed light on industrial progress and standardization activities related to ISAC. We analyze a number of performance tradeoffs between S&C, spanning information-theoretic limits, physical-layer performance tradeoffs, and cross-layer design tradeoffs. Next, we discuss the signal processing aspects of ISAC, namely ISAC waveform design and receive signal processing. As a step further, we provide our vision for the deeper integration of S&C within the framework of perceptive networks, where the two functionalities are expected to assist each other mutually, i.e., via communication-assisted sensing and sensing-assisted communications. Finally, we identify the potential integration of ISAC with other emerging communication technologies, and their positive impacts on the future of wireless networks.

    Sensor Fusion for Object Detection and Tracking in Autonomous Vehicles

    Autonomous driving vehicles depend on their perception system to understand the environment and identify all static and dynamic obstacles surrounding the vehicle. The perception system in an autonomous vehicle uses the sensory data obtained from different sensor modalities to understand the environment and perform a variety of tasks such as object detection and object tracking. Combining the outputs of different sensors to obtain a more reliable and robust outcome is called sensor fusion. This dissertation studies the problem of sensor fusion for object detection and object tracking in autonomous driving vehicles and explores different approaches for utilizing deep neural networks to accurately and efficiently fuse sensory data from different sensing modalities. In particular, it focuses on fusing radar and camera data for 2D and 3D object detection and object tracking tasks. First, the effectiveness of radar and camera fusion for 2D object detection is investigated by introducing a radar region proposal algorithm for generating object proposals in a two-stage object detection network. The evaluation results show significant improvements in speed and accuracy compared to a vision-based proposal generation method. Next, radar and camera fusion is used for joint object detection and depth estimation, where the radar data is used in conjunction with image features not only to generate object proposals but also to provide accurate depth estimates for the detected objects in the scene. A fusion algorithm is also proposed for 3D object detection, where the depth and velocity data obtained from the radar are fused with camera images to detect objects in 3D and accurately estimate their velocities without requiring any temporal information. Finally, radar and camera sensor fusion is used for 3D multi-object tracking by introducing an end-to-end trainable, online network capable of tracking objects in real time.
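    A minimal version of the radar-region-proposal idea described above can be sketched as follows: project each radar detection into the image with a pinhole camera model and drop a depth-scaled anchor box around it. The function name, the intrinsic matrix and the box-sizing heuristic are illustrative assumptions, not the dissertation's actual network:

    ```python
    import numpy as np

    def radar_region_proposals(radar_xyz, K, base_size=120.0):
        """Project radar detections (3D points in the camera frame) into the
        image and return square proposal boxes scaled inversely with depth."""
        boxes = []
        for X, Y, Z in radar_xyz:
            if Z <= 0:                       # point behind the camera
                continue
            u = K[0, 0] * X / Z + K[0, 2]    # pinhole projection, pixel column
            v = K[1, 1] * Y / Z + K[1, 2]    # pixel row
            s = base_size / Z                # nearer objects -> larger boxes
            boxes.append((u - s / 2, v - s / 2, u + s / 2, v + s / 2))
        return boxes

    # Illustrative intrinsics: focal length 1000 px, principal point (640, 360)
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    print(radar_region_proposals([(0.0, 0.0, 10.0)], K))
    # one box, centred on the principal point
    ```

    Seeding a two-stage detector with such radar-derived boxes replaces the exhaustive sliding-window or learned proposal stage, which is where the reported speed gain comes from.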

    Novel Hybrid-Learning Algorithms for Improved Millimeter-Wave Imaging Systems

    Increasing attention is being paid to millimeter-wave (mmWave), 30 GHz to 300 GHz, and terahertz (THz), 300 GHz to 10 THz, sensing applications, including security sensing, industrial packaging, medical imaging, and non-destructive testing. Traditional methods for perception and imaging are being challenged by novel data-driven algorithms that offer improved resolution, localization, and detection rates. Over the past decade, deep learning technology has garnered substantial popularity, particularly in perception and computer vision applications. Whereas conventional signal processing techniques are more easily generalized to various applications, hybrid approaches, in which signal processing and learning-based algorithms are interleaved, pose a promising compromise between performance and generalizability. Furthermore, such hybrid algorithms improve model training by leveraging the known characteristics of radio frequency (RF) waveforms, thus yielding more efficiently trained deep learning models and higher performance than conventional methods. This dissertation introduces novel hybrid-learning algorithms for improved mmWave imaging systems applicable to a host of problems in perception and sensing. Various problem spaces are explored, including static and dynamic gesture classification; precise hand localization for human-computer interaction; high-resolution near-field mmWave imaging using forward synthetic aperture radar (SAR); SAR under irregular scanning geometries; mmWave image super-resolution using deep neural network (DNN) and Vision Transformer (ViT) architectures; and data-level multiband radar fusion using a novel hybrid-learning architecture. Furthermore, we introduce several novel approaches for deep learning model training and dataset synthesis.
    Comment: PhD Dissertation submitted to the UTD ECE Department

    A Review of Sensor Technologies for Perception in Automated Driving

    After more than 20 years of research, advanced driver assistance systems (ADAS) are common in modern vehicles available on the market. Automated Driving systems, still in the research phase and limited in their capabilities, are starting early commercial tests on public roads. These systems rely on information provided by on-board sensors, which describe the state of the vehicle, its environment and other actors. The selection and arrangement of sensors is a key factor in the design of the system. This survey reviews existing, novel and upcoming sensor technologies applied to common perception tasks for ADAS and Automated Driving. These are put in context through a historical review of the most relevant Automated Driving demonstrations, focusing on their sensing setups. Finally, the article presents a snapshot of the future challenges for sensing technologies and perception, finishing with an overview of the commercial initiatives and manufacturer alliances that indicate future market trends in sensor technologies for Automated Vehicles.
    This work has been partly supported by ECSEL Project ENABLE-S3 (grant agreement number 692455-2) and by the Spanish Government through CICYT projects TRA2015-63708-R and TRA2016-78886-C3-1-R.

    Application of advanced technology to space automation

    Automated operations in space are the key to optimized mission design and data acquisition at minimum cost in the future. The results of this study strongly support this statement and should provide further incentive for the immediate development of the specific automation technology defined herein. Essential automation technology requirements were identified for future programs. The study addressed the future role of automation in the space program, the potential benefits to be derived, and the technology efforts that should be directed toward obtaining those benefits.