2 research outputs found

    Radar Perception for Autonomous Unmanned Aerial Vehicles: A Survey

    The advent of consumer and industrial Unmanned Aerial Vehicles (UAVs), commonly referred to as drones, has opened business opportunities in many fields, including logistics, smart agriculture, inspection, surveillance, and construction. In addition, autonomous UAV operations reduce risks by minimizing the time human workers spend in harsh environments and lower costs by automating tasks. For reliability and safety, drones must sense and avoid potential obstacles and must be capable of safely navigating unknown environments. UAV perception must remain reliable in settings such as high dust levels, humidity, intense sun glare, darkness, and fog, which can severely obstruct many conventional sensing methods. Radar systems have unique strengths: they can reliably estimate how far away an object is and measure its relative speed via the Doppler effect. In addition, because radars sense with radio waves, they perform well in rain, fog, snow, or smoky environments. This stands in contrast to optical technologies, such as cameras or Light Detection and Ranging (lidar) sensors, which are more susceptible to the same challenges as the human eye. This survey paper addresses the signal processing challenges involved in exploiting radar systems for advanced perception on unmanned aerial vehicles, considering recent integration trends and technology capabilities. The focus is on signal processing techniques for low-cost and power-efficient radar sensors that operate onboard the UAVs in real time to meet their needs in terms of perception, situational awareness, and navigation. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, safe, and autonomous way for UAVs to perceive and interact with the world.
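    The range and Doppler estimation mentioned above can be illustrated with the classical range-Doppler processing chain for a frequency-modulated continuous-wave (FMCW) radar frame: one FFT over fast time recovers range, a second FFT over slow time recovers relative speed. The Python/NumPy sketch below is purely illustrative and is not taken from the survey; all radar parameters and the simulated target are assumptions.

        # Minimal sketch (assumed parameters, not from the survey): range-Doppler
        # processing of one FMCW frame with a 77 GHz carrier, 1 GHz sweep bandwidth,
        # and 64 chirps of 256 complex samples each.
        import numpy as np

        c = 3e8                   # speed of light (m/s)
        f_carrier = 77e9          # carrier frequency (Hz), assumed
        bandwidth = 1e9           # chirp sweep bandwidth (Hz), assumed
        chirp_time = 50e-6        # chirp duration (s), assumed
        n_samples, n_chirps = 256, 64
        fs = n_samples / chirp_time           # ADC sampling rate (complex samples)

        # Simulate the beat signal of a single target at 12 m approaching at 3 m/s.
        target_range, target_speed = 12.0, 3.0
        f_beat = 2 * target_range * bandwidth / (c * chirp_time)  # range -> beat frequency
        f_dopp = 2 * target_speed * f_carrier / c                 # speed -> Doppler shift
        t_fast = np.arange(n_samples) / fs
        t_slow = np.arange(n_chirps) * chirp_time
        frame = np.exp(1j * 2 * np.pi * (f_beat * t_fast[None, :] + f_dopp * t_slow[:, None]))

        # Range FFT along fast time, Doppler FFT along slow time (across chirps).
        range_fft = np.fft.fft(frame, axis=1)
        rd_map = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)

        # Convert the strongest range-Doppler cell back to physical units.
        dopp_bin, range_bin = np.unravel_index(np.abs(rd_map).argmax(), rd_map.shape)
        range_est = range_bin * c / (2 * bandwidth)
        speed_est = (dopp_bin - n_chirps / 2) * c / (2 * f_carrier * n_chirps * chirp_time)
        print(f"estimated range ~ {range_est:.1f} m, radial speed ~ {speed_est:.1f} m/s")

    An FFT-based chain of this kind is a common starting point for the low-cost, power-efficient onboard processing the survey focuses on, since it maps naturally onto embedded DSP hardware.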

    Wearable monitoring and interpretable machine learning can objectively track progression in patients during cardiac rehabilitation

    Cardiovascular diseases (CVD) are often characterized by their multifactorial complexity, which makes remote monitoring and ambulatory cardiac rehabilitation (CR) therapy challenging. Current wearable multimodal devices enable remote monitoring, and machine learning (ML) and artificial intelligence (AI) can help in tackling such multifaceted datasets. However, for clinical acceptance, easy interpretability of the AI models is crucial. The goal of the present study was to investigate whether a multi-parameter sensor used during a standardized activity test could interpret functional capacity in the longitudinal follow-up of CR patients. A total of 129 patients were followed for 3 months during CR with 6-min walk tests (6MWT) performed while wearing a combined ECG and accelerometer device. Functional capacity was assessed based on the 6MWT distance (6MWD). Linear and nonlinear interpretable models were explored to predict 6MWD. The t-distributed stochastic neighbor embedding (t-SNE) technique was used to embed and visualize high-dimensional data. The performance of support vector machine (SVM) models in predicting functional capacity, combining different features and kernel types, was evaluated. The SVM model using chronotropic response and effort as input features showed a mean absolute error of 42.8 m (±36.8 m). The 3D maps derived with the t-SNE technique visualized the relationship between sensor-derived biomarkers and functional capacity, enabling tracking of patients' evolution throughout the CR program. The current study showed that wearable monitoring combined with interpretable ML can objectively track clinical progression in a CR population. These results pave the way towards ambulatory CR.
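    As a rough illustration of the modelling described above (and not the authors' code), the Python/scikit-learn sketch below fits a kernel SVM regressor on two hypothetical sensor-derived features to predict 6MWD, reports the mean absolute error, and computes a 3D t-SNE embedding of a synthetic feature set; all feature names, data, and hyperparameters are assumptions.

        # Minimal sketch, not the study's pipeline: SVM regression of 6MWT distance
        # (6MWD) plus a 3D t-SNE embedding. Feature names, synthetic data, and
        # hyperparameters are illustrative assumptions.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error
        from sklearn.manifold import TSNE

        rng = np.random.default_rng(0)
        n = 129  # number of patients in the study

        # Hypothetical wearable-derived biomarkers per walking test.
        chrono_response = rng.normal(60, 15, n)   # heart-rate rise during the 6MWT (bpm)
        effort = rng.normal(0.5, 0.15, n)         # accelerometer-derived effort index
        resting_hr = rng.normal(70, 10, n)        # extra features for the embedding
        cadence = rng.normal(105, 10, n)
        features = np.column_stack([chrono_response, effort, resting_hr, cadence])
        distance_6mwd = 300 + 2.5 * chrono_response + 150 * effort + rng.normal(0, 40, n)

        # SVM regression on chronotropic response and effort only; an RBF kernel is
        # one of the kernel types that could be compared.
        X = features[:, :2]
        X_tr, X_te, y_tr, y_te = train_test_split(X, distance_6mwd, test_size=0.25, random_state=0)
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=5.0))
        model.fit(X_tr, y_tr)
        print(f"mean absolute error: {mean_absolute_error(y_te, model.predict(X_te)):.1f} m")

        # 3D t-SNE embedding of the full feature set; colouring the points by 6MWD
        # offline gives the kind of map used to track patient progression.
        embedding = TSNE(n_components=3, perplexity=30, random_state=0).fit_transform(features)
        print(embedding.shape)  # (129, 3)

    In practice the embedding would be computed on the study's real biomarker set and coloured by 6MWD to visualize longitudinal progression through the CR program.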