
    leave a trace - A People Tracking System Meets Anomaly Detection

    Video surveillance has always had a negative connotation, among other reasons because of the loss of privacy it entails and because it does not automatically increase public safety. This could change if it were able to detect atypical (i.e. dangerous) situations in real time, autonomously and anonymously. A prerequisite is the reliable automatic detection of potentially dangerous situations from video data. Classically, this is done by object extraction and tracking; from the derived trajectories, we then want to determine dangerous situations by detecting atypical trajectories. For ethical reasons, however, it is better to develop such a system on data in which no people are threatened or harmed, and in which they know that a tracking system is installed. Moreover, such situations occur rarely in real public CCTV areas and may be captured properly even less often. In the artistic project leave a trace, the tracked objects, people in the atrium of an institutional building, become actors and thus part of the installation. Real-time visualisation allows these actors to interact, which in turn creates many atypical interaction situations on which we can develop our situation detection. The data set has evolved over three years and is therefore large. In this article we describe the tracking system and several approaches for the detection of atypical trajectories.
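    The abstract does not specify which detectors the authors use; a common baseline for atypical-trajectory detection (shown here only as an illustrative sketch, not the paper's method) scores each trajectory by its distance to its nearest neighbours in a reference set of typical paths:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two trajectories,
    each an (N, 2) array of (x, y) points."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def anomaly_score(traj, reference, k=3):
    """Mean Hausdorff distance to the k nearest reference
    trajectories; large scores flag atypical motion."""
    dists = sorted(hausdorff(traj, r) for r in reference)
    return float(np.mean(dists[:k]))

# Toy data: straight walks as "typical", a wide zig-zag as a query.
typical = [np.column_stack([np.linspace(0, 10, 20),
                            np.full(20, y)]) for y in (0.0, 0.5, 1.0)]
zigzag = np.column_stack([np.linspace(0, 10, 20),
                          np.sin(np.linspace(0, 10, 20)) * 5])
assert anomaly_score(zigzag, typical) > anomaly_score(typical[0], typical)
```

    In practice the reference set would be built from the three years of installation data, and the threshold on the score tuned per scene.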

    Multifunctional and compact 3D FMCW MIMO radar system with rectangular array for medium-range applications

    Miralles-Navarro, E.; Multerer, T.; Ganis, A.; Schoenlinner, B.; Prechtel, U.; Meusling, A.; Mietzner, J.... (2018). Multifunctional and compact 3D FMCW MIMO radar system with rectangular array for medium-range applications. IEEE Aerospace and Electronic Systems Magazine, 33(4), 46-54. https://doi.org/10.1109/MAES.2018.160277

    S-band FMCW Radar for Target Tracking and Monitoring

    A Frequency Modulated Continuous Wave (FMCW) Doppler radar was assembled to detect humans as sample targets. It operates at 2.4 GHz with a transmit power of 10.41 dBm. The range resolution of the radar is 2.8 meters at a 53.2 MHz signal bandwidth and a 40 ms chirp waveform. The radar exploits the Doppler principle to acquire range and velocity information of targets, while a Moving Target Indicator (MTI) pulse canceller is used to filter incoming noise. Using the chirp period-bandwidth product of the Frequency Modulated (FM) waveform and a deramping process, the radar's Signal-to-Noise Ratio (SNR) was improved by up to 42 dB. The attained maximum range is about 200 meters for a target with a Radar Cross Section (RCS) of 1 m². The constructed radar can measure the speed of moving targets at 0.645 m/s and above with great accuracy, and can detect and determine the position of pedestrians with a 0.18% error.
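    The quoted 2.8 m resolution follows directly from the standard FMCW relation ΔR = c/2B; a small sketch with the abstract's own parameters (the beat-frequency formula is the textbook deramping relation, not taken from the paper):

```python
C = 3e8  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """FMCW range resolution: dR = c / (2 B)."""
    return C / (2 * bandwidth_hz)

def beat_frequency(target_range_m, bandwidth_hz, chirp_s):
    """Beat frequency after deramping: f_b = 2 B R / (c T)."""
    return 2 * bandwidth_hz * target_range_m / (C * chirp_s)

print(range_resolution(53.2e6))            # ≈ 2.82 m, matching the quoted 2.8 m
print(beat_frequency(200, 53.2e6, 40e-3))  # beat tone for a target at the 200 m maximum range
```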

    Experimental analysis of using radar as an extrinsic sensor for human-robot collaboration

    Collaborative robots are expected to be an integral part of the coming fourth industrial revolution. In human-robot collaboration, the robot and a human share a common workspace and work on a common task, so the safety of the human working with the robot is of utmost importance. A collaborative robot usually relies on various sensors to ensure this safety. This research focuses on establishing a safe environment for a human working alongside a robot by mounting an FMCW radar as an extrinsic sensor through which the robot's workspace is monitored. A customized tracking algorithm is developed for the sensor used in this study, incorporating a dynamically varying gating threshold and information about consecutive missed detections, to track and localize the human around the robot's workspace. The performance of the proposed system in establishing safe human-robot collaboration is examined across several scenarios that arise when a single human operator works alongside the robot, with the radar operating in different modes. An OptiTrack motion capture system is used as ground truth to validate the efficacy of the proposed system.
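    The abstract does not give the exact gating rule; one common pattern consistent with its description (all constants here are illustrative assumptions) is a Mahalanobis gate that widens with each consecutive missed detection, so a briefly occluded person is not lost:

```python
import numpy as np

def gate(measurement, predicted, cov, missed, base=9.21, widen=1.5):
    """Accept a measurement if its squared Mahalanobis distance to the
    predicted track position lies inside a gate that widens with each
    consecutive missed detection."""
    d = np.asarray(measurement) - np.asarray(predicted)
    d2 = float(d @ np.linalg.inv(cov) @ d)
    threshold = base * (widen ** missed)  # dynamically varying gating threshold
    return d2 <= threshold

cov = np.eye(2) * 0.25  # assumed predicted-measurement covariance
print(gate([1.0, 1.0], [0.0, 0.0], cov, missed=0))  # d2 = 8 <= 9.21 -> True
print(gate([2.0, 0.0], [0.0, 0.0], cov, missed=0))  # d2 = 16 > 9.21 -> False
print(gate([2.0, 0.0], [0.0, 0.0], cov, missed=2))  # gate widened to ~20.7 -> True
```

    The base value 9.21 corresponds to a 99% chi-square gate in two dimensions; the widening factor would be tuned experimentally.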

    Augmentation of Visual Odometry using Radar

    As UAVs become viable for more applications, pose estimation remains critical: a UAV needs to know where it is at all times to avoid disaster. However, when UAVs are deployed in areas with poor visual conditions, as in many disaster scenarios, many localization algorithms struggle. This thesis presents VIL-DSO, a visual odometry method for pose estimation that combines several algorithms to improve pose estimation and provide metric scale. It also presents a method for automatically determining an accurate physical transform between radar and camera data, allowing radar information to be projected into the image plane. Finally, this thesis presents EVIL-DSO, a localization method that fuses visual-inertial odometry with radar information. The proposed EVIL-DSO algorithm uses radar information projected into the image plane to create a depth map from which the odometry directly observes feature depth, removing the need for costly depth estimation. Trajectory analysis of the proposed algorithm on outdoor data, compared against differential GPS, shows that it is more accurate in terms of root-mean-square error and has a lower percentage of scale error. Runtime analysis shows that it updates more frequently than other similar algorithms.
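    The projection step the thesis relies on is the standard pinhole model: radar returns are moved into the camera frame by the calibrated extrinsic transform, then mapped to pixels by the intrinsics. A minimal sketch (toy intrinsics and an identity extrinsic, purely for illustration):

```python
import numpy as np

def project_radar(points_radar, T_cam_radar, K):
    """Project 3-D radar returns (N, 3) into the image plane.
    T_cam_radar is the 4x4 radar-to-camera extrinsic transform,
    K the 3x3 camera intrinsic matrix; both assumed calibrated."""
    pts = np.hstack([points_radar, np.ones((len(points_radar), 1))])
    cam = (T_cam_radar @ pts.T)[:3]   # points in the camera frame
    in_front = cam[2] > 0             # keep only points ahead of the camera
    uv = K @ cam[:, in_front]
    uv = uv[:2] / uv[2]               # perspective divide -> pixel coordinates
    return uv.T, cam[2, in_front]     # pixels and their depths for the depth map

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # toy intrinsics
T = np.eye(4)                                                # demo extrinsics
pix, depth = project_radar(np.array([[0.0, 0.0, 10.0]]), T, K)
print(pix, depth)  # a point on the optical axis lands at the principal point (320, 240), depth 10
```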

    All-weather object recognition using radar and infrared sensing

    Autonomous cars are an emergent technology with the capacity to change human lives. The current sensor systems most capable of perception are based on optical sensors; for example, deep neural networks show outstanding results in recognising objects from camera and Light Detection And Ranging (LiDAR) data. However, these sensors perform poorly under adverse weather conditions such as rain, fog, and snow because of their operating wavelengths. This thesis explores new sensing developments based on long-wave polarised infrared (IR) imagery and imaging radar to recognise objects. First, we developed a methodology based on Stokes parameters of polarised infrared data to recognise vehicles using deep neural networks. Second, we explored the potential of using only the power spectrum captured by low-THz radar sensors to perform object recognition in a controlled scenario; this work follows a data-driven approach together with a data augmentation method based on attenuation, range, and speckle noise. Last, we created a new large-scale dataset in the "wild" covering many weather scenarios (sunny, overcast, night, fog, rain and snow), demonstrating radar's robustness in detecting vehicles in adverse weather. High-resolution radar and polarised IR imagery, combined with a deep learning approach, are shown to be a potential alternative to current automotive sensing systems based on visible-spectrum optical technology, as they are more robust in severe weather and adverse light conditions. UK Engineering and Physical Sciences Research Council, grant reference EP/N012402/
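    The thesis names three augmentation ingredients for radar power spectra: attenuation, range, and speckle noise. A hedged sketch of what such a pipeline can look like (the function name, parameter values, and gamma speckle model are assumptions, not the thesis's exact recipe):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_radar(power, atten_db=3.0, range_shift=2, speckle_looks=4):
    """Illustrative radar power-spectrum augmentation: global attenuation
    in dB, a shift along the range axis, and multiplicative speckle drawn
    from a unit-mean gamma distribution."""
    out = power * 10 ** (-atten_db / 10)     # attenuation
    out = np.roll(out, range_shift, axis=0)  # shift along range bins
    speckle = rng.gamma(speckle_looks, 1.0 / speckle_looks, out.shape)
    return out * speckle                     # multiplicative speckle noise

spectrum = np.ones((64, 64))  # dummy range-azimuth power map
aug = augment_radar(spectrum)
print(aug.shape, float(aug.mean()))  # same shape; mean pulled down to ~10**(-0.3) ≈ 0.5
```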

    Detecting and Tracking Vulnerable Road Users' Trajectories Using Different Types of Sensors Fusion

    Vulnerable road user (VRU) detection and tracking has been a key challenge in transportation research. Different types of sensors, such as cameras, LiDAR, and inertial measurement units (IMUs), have been used for this purpose. For detection and tracking with a camera, calibration is necessary to obtain correct GPS trajectories; this is often tedious and requires accurate ground-truth data, and if the camera performs any pan-tilt-zoom function it usually must be recalibrated. In this thesis, we propose camera calibration using an auxiliary sensor: ultra-wideband (UWB). UWB sensors are capable of tracking a road user with ten-centimeter-level accuracy. Once a VRU carrying a UWB traverses the camera view, the UWB GPS data is fused with the camera to perform real-time calibration. As the experimental results in this thesis show, the camera outputs better trajectories after calibration. The UWB is expected to be needed only once to fuse the data and determine the correct trajectories for a given intersection and camera location; all other trajectories collected by that camera can be corrected using the same adjustment. In addition, data analysis was conducted to evaluate the performance of the UWB sensors. This study also predicted pedestrian trajectories using data fused from the UWB and smartphone sensors. UWB GPS coordinates are very accurate, although the sensor lacks modalities such as an accelerometer and gyroscope; smartphone data were used to augment the UWB data. The two datasets were merged on the basis of the closest timestamp, yielding a dataset with precise latitude and longitude from UWB as well as accelerometer, gyroscope, and speed data from smartphones, accurate and rich in parameters. The fused dataset was then used to predict the GPS coordinates of pedestrians and scooters using an LSTM.
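    Merging two sensor streams "on the basis of the closest timestamp" is exactly what an as-of join does. A toy sketch with pandas (the field names and values are illustrative stand-ins, not the thesis's data):

```python
import pandas as pd

# Stand-ins for the two streams.
uwb = pd.DataFrame({"t": pd.to_datetime([0.00, 0.10, 0.20], unit="s"),
                    "lat": [35.10, 35.11, 35.12],
                    "lon": [-80.20, -80.21, -80.22]})
phone = pd.DataFrame({"t": pd.to_datetime([0.03, 0.09, 0.21], unit="s"),
                      "accel": [0.1, 0.4, 0.2],
                      "speed": [1.2, 1.3, 1.1]})

# merge_asof pairs each UWB fix with the phone sample nearest in time,
# giving precise positions plus the smartphone's motion parameters.
fused = pd.merge_asof(uwb.sort_values("t"), phone.sort_values("t"),
                      on="t", direction="nearest")
print(fused[["t", "lat", "accel", "speed"]])
```

    The resulting frame is the kind of position-plus-motion feature table an LSTM trajectory predictor would consume.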

    State-of-the-Art Review on Wearable Obstacle Detection Systems Developed for Assistive Technologies and Footwear

    Walking independently is essential to maintaining our quality of life, but safe locomotion depends on perceiving hazards in the everyday environment. To address this problem, there is an increasing focus on developing assistive technologies that can alert the user to the risk of destabilizing foot contact with either the ground or obstacles, which can lead to a fall. Shoe-mounted sensor systems designed to monitor foot-obstacle interaction are being employed to identify tripping risk and provide corrective feedback. Advances in smart wearable technologies, integrating motion sensors with machine learning algorithms, have led to developments in shoe-mounted obstacle detection. The focus of this review is gait-assisting wearable sensors and hazard detection for pedestrians. This literature represents a research front that is critically important in paving the way towards practical, low-cost wearable devices that can make walking safer and reduce the increasing financial and human costs of fall injuries.