
    People tracking by cooperative fusion of RADAR and camera sensors

    Accurate 3D tracking of objects from a monocular camera poses challenges due to the loss of depth information during projection. Although ranging by RADAR has proven effective in highway environments, people tracking remains beyond the capability of single-sensor systems. In this paper, we propose a cooperative RADAR-camera fusion method for people tracking on the ground plane. Using the average person height, a joint detection likelihood is calculated by back-projecting detections from the camera onto the RADAR range-azimuth data. Peaks in the joint likelihood, representing candidate targets, are fed into a particle filter tracker. Depending on the association outcome, particles are updated using the associated detections (Tracking by Detection) or by sampling the raw likelihood itself (Tracking Before Detection). Utilizing the raw likelihood data has the advantage that lost targets continue to be tracked even when the camera or RADAR signal falls below the detection threshold. We show that in single-target, uncluttered environments, the proposed method consistently outperforms camera-only tracking. Experiments in a real-world urban environment also confirm that the cooperative fusion tracker produces significantly better estimates, even in difficult and ambiguous situations.
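    The fusion idea in this abstract can be illustrated with a small numerical sketch (not the authors' code): a camera detection, back-projected onto the ground plane via an assumed 1.7 m average person height, constrains azimuth tightly but range only coarsely, while the RADAR does the opposite; their element-wise product gives a sharp joint likelihood peak. The grid sizes, positions, and uncertainties below are made up for illustration.

```python
import numpy as np

# Polar grid: 50 range bins (0.5-25 m), 60 azimuth bins (-30..+30 deg).
ranges = np.linspace(0.5, 25.0, 50)
azimuths = np.deg2rad(np.linspace(-30, 30, 60))

def gaussian_map(r0, az0, sigma_r, sigma_az):
    """Unnormalized Gaussian likelihood on the range-azimuth grid."""
    R, A = np.meshgrid(ranges, azimuths, indexing="ij")
    return np.exp(-0.5 * (((R - r0) / sigma_r) ** 2
                          + ((A - az0) / sigma_az) ** 2))

# Camera: azimuth well constrained by the bounding box; range only
# coarsely, via bounding-box height and the assumed person height.
cam = gaussian_map(r0=10.0, az0=np.deg2rad(5.0),
                   sigma_r=3.0, sigma_az=np.deg2rad(2.0))
# RADAR: sharp in range, coarse in azimuth.
radar = gaussian_map(r0=10.5, az0=np.deg2rad(4.0),
                     sigma_r=0.3, sigma_az=np.deg2rad(8.0))

joint = cam * radar  # cooperative fusion: joint detection likelihood
i, j = np.unravel_index(np.argmax(joint), joint.shape)
print(f"peak at range {ranges[i]:.1f} m, "
      f"azimuth {np.rad2deg(azimuths[j]):.1f} deg")
```

The joint peak lands between the two single-sensor estimates, pulled toward the RADAR in range and toward the camera in azimuth; such peaks are what the paper feeds to the particle filter.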

    Laser space rendezvous and docking tradeoff

    A spaceborne laser radar (LADAR) was configured to meet the requirements for rendezvous and docking with a cooperative object in synchronous orbit. The LADAR, configured using existing pulsed CO2 laser technology and a 1980 system technology baseline, is well suited for the envisioned space tug missions. The performance of a family of candidate LADARs was analyzed. Tradeoff studies as a function of size, weight, and power consumption were carried out for maximum ranges of 50, 100, 200, and 300 nautical miles. The investigation supports the original contention that a rendezvous and docking LADAR can be constructed to offer a cost-effective and reliable solution to the envisioned space missions. In fact, the CO2 LADAR system offers distinct advantages over other candidate systems.
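    A back-of-the-envelope sketch of the kind of tradeoff such a study involves: the monostatic radar range equation applied to a pulsed CO2 LADAR against a point target, showing how received power falls off as 1/R^4 over the studied range set. All numbers (apertures, cross section, efficiency, transmit power) are representative assumptions, not values from the report.

```python
import numpy as np

def received_power(Pt, R, Dt=0.2, Dr=0.2, sigma=1.0,
                   wavelength=10.6e-6, eta=0.5):
    """Received power [W] for a pulsed CO2 LADAR against a point target.

    Pt: transmit power [W], R: range [m], Dt/Dr: aperture diameters [m],
    sigma: optical cross section [m^2], eta: lumped system efficiency.
    """
    Gt = (np.pi * Dt / wavelength) ** 2   # diffraction-limited Tx gain
    Ar = np.pi * (Dr / 2) ** 2            # receiver aperture area
    return eta * Pt * Gt * sigma * Ar / ((4 * np.pi) ** 2 * R ** 4)

nmi = 1852.0  # metres per nautical mile
for R_nmi in (50, 100, 200, 300):
    Pr = received_power(Pt=100.0, R=R_nmi * nmi)
    print(f"{R_nmi:3d} nmi -> received power {Pr:.3e} W")
```

The 1/R^4 dependence is what drives the size/weight/power tradeoff: doubling the maximum range (e.g. 50 to 100 nmi) costs a factor of 16 in link budget, to be bought back with aperture, transmit power, or integration.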

    Prelaunch testing of the GEOS-3 laser reflector array

    Prelaunch testing of the GEOS-3 laser reflector array was used to determine the lidar cross section of the array and the distance between the center of gravity of the satellite and the center of gravity of the reflected laser pulses, both as a function of incidence angle. Experimental data are compared to computed results.
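    As a first-order illustration of why the lidar cross section depends on incidence angle (this is a generic model, not the GEOS-3 test data): for a flat array of identical cube-corner retroreflectors, the cross section falls roughly with the projected active area of each corner cube and drops sharply beyond the cubes' field of view. The sigma0 and cutoff values below are placeholders.

```python
import numpy as np

def array_cross_section(theta_deg, sigma0=1.0e6, fov_deg=45.0):
    """Relative lidar cross section [m^2] vs incidence angle.

    First-order model: cos(theta) projected-area falloff within the
    assumed field of view, zero outside it.
    """
    theta_deg = np.asarray(theta_deg, dtype=float)
    sigma = sigma0 * np.cos(np.deg2rad(theta_deg))
    return np.where(theta_deg < fov_deg, sigma, 0.0)

for th in (0, 15, 30, 45):
    print(f"{th:2d} deg -> {float(array_cross_section(th)):.2e} m^2")
```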

    Advanced radar absorbing ceramic-based materials for multifunctional applications in space environment

    In this review, some results of the experimental activity carried out by the authors on advanced composite materials for space applications are reported. Composites are widely employed in the aerospace industry thanks to their light weight and advanced thermo-mechanical and electrical properties. A critical issue to tackle with engineered materials for space activities is providing two or more specific functionalities within single items/components. In this scenario, carbon-based composites are believed to be ideal candidates for the forthcoming development of aerospace research and space missions, since a widespread variety of multi-functional structures can be realized with these materials. The research results described here suggest that hybrid ceramic/polymeric structures could be employed as spacecraft-specific subsystems in order to simultaneously ensure extreme-temperature resistance and electromagnetic shielding behavior. The morphological and thermo-mechanical analysis of carbon/carbon (C/C) three-dimensional (3D) shell prototypes is reported; then, the microwave characterization of multilayered carbon-filled micro-/nano-composite panels is described. Finally, the possibility of combining the C/C bulk with a carbon-reinforced skin in a synergic arrangement is discussed, with the aid of numerical and experimental analyses.
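    A minimal sketch of how shielding effectiveness (SE) is commonly obtained from the microwave characterization mentioned above: a panel's transmission scattering parameter S21 is measured (e.g. in a waveguide or free-space setup) and converted to dB. The |S21| value used here is made up for illustration.

```python
import math

def shielding_effectiveness_db(s21_mag):
    """SE in dB from the linear magnitude of the transmission
    coefficient S21: SE = -20 log10 |S21|."""
    return -20.0 * math.log10(s21_mag)

# A panel transmitting 1% of the incident field amplitude:
print(shielding_effectiveness_db(0.01))  # -> 40.0 dB
```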

    Eat-Radar: Continuous Fine-Grained Eating Gesture Detection Using FMCW Radar and 3D Temporal Convolutional Network

    Unhealthy dietary habits are considered the primary cause of multiple chronic diseases such as obesity and diabetes. Automatic food intake monitoring has the potential to improve the quality of life (QoL) of people with diet-related diseases through dietary assessment. In this work, we propose a novel contactless radar-based food intake monitoring approach. Specifically, a frequency-modulated continuous-wave (FMCW) radar sensor is employed to recognize fine-grained eating and drinking gestures. A fine-grained eating/drinking gesture comprises the series of movements from raising the hand to the mouth until putting the hand away from the mouth. A 3D temporal convolutional network (3D-TCN) is developed to detect and segment eating and drinking gestures in meal sessions by processing the Range-Doppler Cube (RD Cube). Unlike previous radar-based research, this work collects data in continuous meal sessions. We create a public dataset that contains 48 meal sessions (3,121 eating gestures and 608 drinking gestures) from 48 participants, with a total duration of 783 minutes. Four eating styles (fork & knife, chopsticks, spoon, hand) are included in this dataset. To validate the performance of the proposed approach, 8-fold cross-validation is applied. Experimental results show that the proposed 3D-TCN outperforms a model combining a convolutional neural network and a long short-term memory network (CNN-LSTM), as well as a CNN-Bidirectional LSTM model (CNN-BiLSTM), in eating and drinking gesture detection. The 3D-TCN model achieves segmental F1-scores of 0.887 and 0.844 for eating and drinking gestures, respectively. These results indicate the feasibility of using radar for fine-grained eating and drinking gesture detection and segmentation in meal sessions.
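    The segmental F1-score reported in this abstract can be sketched as follows: a predicted gesture segment counts as a true positive if its temporal IoU with a still-unmatched ground-truth segment exceeds a threshold. The 0.5 threshold and the greedy matching rule here are common choices, not necessarily the paper's exact protocol; the segments are invented for the example.

```python
def seg_iou(a, b):
    """Temporal IoU of two (start, end) segments in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def segmental_f1(pred, truth, thr=0.5):
    """F1 over segments: each truth segment may match at most one
    prediction (greedy, IoU >= thr)."""
    matched, tp = set(), 0
    for p in pred:
        for i, t in enumerate(truth):
            if i not in matched and seg_iou(p, t) >= thr:
                matched.add(i)
                tp += 1
                break
    fp = len(pred) - tp
    fn = len(truth) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

truth = [(2.0, 4.0), (6.0, 7.5), (10.0, 11.0)]  # annotated gestures [s]
pred  = [(2.1, 4.2), (6.4, 7.4), (12.0, 12.5)]  # detector output [s]
print(segmental_f1(pred, truth))  # 2 TP, 1 FP, 1 FN -> F1 = 2/3
```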

    Benefits of Sharing Information from Commercial Airborne Forward-Looking Sensors in the Next Generation Air Transportation System

    The air transportation system of the future will need to support much greater traffic densities than are currently possible, while preserving or improving upon current levels of safety. Concepts are under development to support a Next Generation Air Transportation System (NextGen) that, by some estimates, will need to support up to three times current capacity by the year 2025. Weather and other atmospheric phenomena, such as wake vortices and volcanic ash, constitute major constraints on airspace system capacity and can present hazards to aircraft if encountered. To support safe operations in the NextGen environment, advanced systems for the collection and dissemination of aviation weather and environmental information will be required. The envisioned NextGen Network Enabled Weather (NNEW) infrastructure will be a critical component of the aviation weather support services, providing access to a common weather picture for all system users. By taking advantage of Network Enabled Operations (NEO) capabilities, a virtual 4-D Weather Data Cube with aviation weather information from many sources will be developed. One new source of weather observations may be airborne forward-looking sensors, such as the X-band weather radar. Future sensor systems that are the subject of current research include advanced multi-frequency and polarimetric radar, a variety of lidar technologies, and infrared imaging spectrometers.

    Learning to Detect Open Carry and Concealed Object with 77GHz Radar

    Detecting harmful carried objects plays a key role in intelligent surveillance systems and has widespread applications, for example in airport security. In this paper, we focus on the relatively unexplored area of using low-cost 77 GHz mmWave radar for the carried-object detection problem. The proposed system is capable of real-time detection of three classes of objects - laptop, phone, and knife - under both open carry and concealed cases, where objects are hidden in clothes or bags. This capability is achieved by initial signal processing for localization and the generation of range-azimuth-elevation image cubes, followed by a deep learning-based prediction network and a multi-shot post-processing module for detecting objects. Extensive experiments validating the system's performance on detecting open carry and concealed objects are presented with a self-built radar-camera testbed and collected dataset. Additionally, the influence of different input formats, factors, and parameters on system performance is analyzed, providing an intuitive understanding of the system. This system would be the very first baseline for future works aiming to detect carried objects using 77 GHz radar.
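    The initial signal-processing stage described above can be sketched in simplified form: raw FMCW data become a range-azimuth image via a range FFT along fast time and an angle FFT across receive antennas (the paper additionally forms an elevation dimension). The array sizes and the synthetic target below are illustrative, not the paper's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_rx = 256, 8        # fast-time samples, (virtual) antennas
range_bin, az_bin = 40, 3       # where the synthetic target lives

t = np.arange(n_samples)[:, None]
ant = np.arange(n_rx)[None, :]
# One point target: a complex sinusoid in fast time (beat frequency)
# with a linear phase progression across the antenna array.
signal = (np.exp(2j * np.pi * (range_bin / n_samples) * t)
          * np.exp(2j * np.pi * (az_bin / n_rx) * ant))
noise = 0.1 * (rng.standard_normal((n_samples, n_rx))
               + 1j * rng.standard_normal((n_samples, n_rx)))
data = signal + noise

range_profile = np.fft.fft(data, axis=0)            # fast time -> range
ra_map = np.abs(np.fft.fft(range_profile, axis=1))  # antennas -> azimuth

r, a = np.unravel_index(np.argmax(ra_map), ra_map.shape)
print(f"detected target at range bin {r}, azimuth bin {a}")
```

Stacking such maps over elevation (with a 2D antenna array) yields the range-azimuth-elevation cubes the prediction network consumes.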

    Radar signal processing for sensing in assisted living: the challenges associated with real-time implementation of emerging algorithms

    This article covers radar signal processing for sensing in the context of assisted living (AL). This is presented through three example applications: human activity recognition (HAR) for activities of daily living (ADL), respiratory disorders, and sleep stage (SS) classification. The common challenge of classification is discussed within a framework of measurement/preprocessing, feature extraction, and classification algorithms for supervised learning. Then, the specific challenges of the three applications from a signal processing standpoint are detailed in their specific data processing and ad hoc classification strategies. Here, the focus is on recent trends in the field of activity recognition (multidomain, multimodal, and fusion), health-care applications based on vital signs (superresolution techniques), and comments on outstanding challenges. Finally, this article explores the challenges associated with the real-time implementation of signal processing and classification algorithms.
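    The measurement/preprocessing and feature-extraction stages of such a HAR pipeline can be sketched with a short-time Fourier transform: the complex radar return yields a micro-Doppler spectrogram, from which hand-crafted summary features (here, the Doppler centroid over time) feed a downstream classifier. The simulated signal, sampling rate, and window parameters are illustrative assumptions only.

```python
import numpy as np

fs = 1000.0                               # slow-time sampling rate [Hz]
t = np.arange(0, 2.0, 1.0 / fs)
# Simulated return: body motion at +50 Hz Doppler with a 2 Hz limb swing
# (frequency modulation), mimicking a micro-Doppler signature.
sig = np.exp(2j * np.pi * (50 * t + 20 * np.sin(2 * np.pi * 2 * t)))

win, hop = 128, 64
frames = [sig[i:i + win] * np.hanning(win)
          for i in range(0, len(sig) - win, hop)]
spec = np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)) ** 2
freqs = np.fft.fftshift(np.fft.fftfreq(win, 1.0 / fs))

# Doppler centroid per frame: a classic hand-crafted HAR feature.
centroid = (spec @ freqs) / spec.sum(axis=1)
print(f"mean Doppler centroid: {centroid.mean():.1f} Hz")
```

The centroid track oscillates around the bulk Doppler (+50 Hz here) at the limb-swing rate; features like this, or the spectrogram itself, are what the supervised classifiers discussed in the article consume.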