Mitigation of Through-Wall Distortions of Frontal Radar Images using Denoising Autoencoders
Radar images of humans and other concealed objects are considerably distorted
by attenuation, refraction and multipath clutter in indoor through-wall
environments. While several methods have been proposed for removing target
independent static and dynamic clutter, there still remain considerable
challenges in mitigating target dependent clutter especially when the knowledge
of the exact propagation characteristics or analytical framework is
unavailable. In this work we focus on mitigating wall effects using a machine
learning based solution -- denoising autoencoders -- that does not require
prior information of the wall parameters or room geometry. Instead, the method
relies on the availability of a large volume of training radar images gathered
in through-wall conditions and the corresponding clean images captured in
line-of-sight conditions. During the training phase, the autoencoder learns how
to denoise the corrupted through-wall images in order to resemble the free
space images. We have validated the performance of the proposed solution for
both static and dynamic human subjects. The frontal radar images of static
targets are obtained by processing wideband planar array measurement data with
two-dimensional array and range processing. The frontal radar images of dynamic
targets are simulated using narrowband planar array data processed with
two-dimensional array and Doppler processing. In both simulation and
measurement processes, we incorporate considerable diversity in the target and
propagation conditions. Our experimental results, from both simulation and
measurement data, show that the denoised images are considerably more similar
to the free-space images when compared to the original through-wall images.
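The denoising-autoencoder idea described above can be sketched in a few lines: train a network on pairs of corrupted (through-wall) and clean (line-of-sight) images so it learns a mapping from one to the other. The synthetic data, network size, and learning rate below are illustrative stand-ins, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: "clean" line-of-sight images are smooth 8x8
# blobs; "through-wall" inputs are the same images with additive clutter.
def make_pair(n=200, size=8, noise=0.5):
    xs = np.linspace(-1, 1, size)
    clean = np.stack([np.exp(-((xs[None, :] - rng.uniform(-.5, .5))**2 +
                               (xs[:, None] - rng.uniform(-.5, .5))**2) / 0.2)
                      for _ in range(n)]).reshape(n, -1)
    noisy = clean + noise * rng.standard_normal(clean.shape)
    return noisy, clean

X_noisy, X_clean = make_pair()
d, h = X_noisy.shape[1], 16

# One-hidden-layer autoencoder trained to map corrupted inputs to clean targets.
W1 = 0.1 * rng.standard_normal((d, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.standard_normal((h, d)); b2 = np.zeros(d)
lr = 0.05

def forward(x):
    z = np.tanh(x @ W1 + b1)
    return z, z @ W2 + b2

losses = []
for epoch in range(300):
    z, out = forward(X_noisy)
    err = out - X_clean                      # denoising target is the clean image
    losses.append(float((err**2).mean()))
    # Backpropagation for the two-layer network.
    gW2 = z.T @ err / len(err); gb2 = err.mean(0)
    dz = (err @ W2.T) * (1 - z**2)
    gW1 = X_noisy.T @ dz / len(dz); gb1 = dz.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
```

After training, the reconstruction loss against the clean targets should have dropped, which is the whole training objective: make the denoised output resemble the free-space image.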
Monocular Vision as a Range Sensor
One of the most important abilities for a mobile robot is detecting obstacles in order to avoid collisions. Building a map of these obstacles is the next logical step. Most robots to date have used sensors such as passive or active infrared, sonar or laser range finders to locate obstacles in their path. In contrast, this work uses a single colour camera as the only sensor, and consequently the robot must obtain range information from the camera images. We propose simple methods for determining the range to the nearest obstacle in any direction in the robot's field of view, referred to as the Radial Obstacle Profile (ROP). The ROP can then be used to determine the amount of rotation between two successive images, which is important for constructing a 360° view of the surrounding environment as part of map construction.
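The rotation-from-ROP step can be illustrated with a toy sketch: if the Radial Obstacle Profile is sampled over 360 bearings, a pure rotation of the robot shows up as a circular shift of the profile, which circular cross-correlation recovers. The bin count and synthetic range profile below are assumptions for illustration, not the paper's method.

```python
import numpy as np

def estimate_rotation(rop_a, rop_b):
    """Return the circular shift (in bins) that best aligns rop_b to rop_a."""
    a = rop_a - rop_a.mean()
    b = rop_b - rop_b.mean()
    # Circular cross-correlation via the FFT; the peak marks the rotation.
    corr = np.fft.ifft(np.fft.fft(a).conj() * np.fft.fft(b)).real
    return int(np.argmax(corr))

rng = np.random.default_rng(1)
bins = 360                       # one bin per degree of bearing (assumed)
rop = 2.0 + rng.random(bins)     # synthetic range-to-nearest-obstacle profile
shifted = np.roll(rop, 25)       # simulated 25-degree in-place rotation

print(estimate_rotation(rop, shifted))  # → 25
```

In practice the two profiles would come from consecutive camera images rather than a synthetic roll, and translation between frames would distort the shift, but the correlation peak still gives a usable rotation estimate.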
An instinct for detection: psychological perspectives on CCTV surveillance
The aim of this article is to inform and stimulate a proactive, multidisciplinary approach to research and development in surveillance-based detective work. In this article we review some of the key psychological issues and phenomena that practitioners should be aware of. We look at how human performance can be explained with reference to our biological and evolutionary legacy. We show how critical viewing conditions can be in determining whether observers detect or overlook criminal activity in video material. We examine situations where performance can be surprisingly poor, and cover situations where, even once confronted with evidence of these detection deficits, observers still underestimate their susceptibility to them. Finally, we explain why the emergence of these relatively recent research themes presents an opportunity for police and law enforcement agencies to set a new, multidisciplinary research agenda focused on relevant and pressing issues of national and international importance.
GUARDIANS final report
Emergencies in industrial warehouses are a major concern for fire-fighters. The large dimensions, together with the development of dense smoke that drastically reduces visibility, represent major challenges. The GUARDIANS robot swarm is designed to assist fire-fighters in searching a large warehouse. In this report we discuss the technology developed for a swarm of robots searching and assisting fire-fighters. We explain the swarming algorithms which provide the functionality by which the robots react to and follow humans while no communication is required. Next we discuss the wireless communication system, which is a so-called mobile ad-hoc network. The communication network also provides one of the means to locate the robots and humans. Thus the robot swarm is able to locate itself and provide guidance information to the humans. Together with the fire-fighters we explored how the robot swarm should feed information back to the human fire-fighter. We have designed and experimented with interfaces for presenting swarm-based information to human beings.
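A communication-free following behaviour of the kind described above can be sketched as a simple reactive rule: each robot moves toward the sensed human while keeping a short-range separation from its neighbours, using only local sensing. The gains, separation distance, and coordinates below are invented for illustration and are not the GUARDIANS algorithms.

```python
import numpy as np

def step(robots, human, attract=0.2, repel=0.5, min_sep=1.0):
    """One synchronous update: attraction to the human, repulsion when crowded."""
    new = robots.copy()
    for i, p in enumerate(robots):
        v = attract * (human - p)                  # move toward the sensed human
        for j, q in enumerate(robots):
            if i == j:
                continue
            d = p - q
            dist = np.linalg.norm(d)
            if dist < min_sep:                     # short-range separation
                v += repel * d / (dist + 1e-9)
        new[i] = p + v
    return new

rng = np.random.default_rng(2)
robots = rng.uniform(-5, 5, size=(6, 2))           # hypothetical start positions
human = np.array([10.0, 0.0])                      # sensed fire-fighter position

for _ in range(50):
    robots = step(robots, human)

# The swarm should end up clustered around the human without collapsing
# onto a single point, since the repulsion term enforces spacing.
dists = np.linalg.norm(robots - human, axis=1)
```

Because the pairwise repulsion forces cancel over the swarm, the swarm centroid contracts geometrically toward the human while the separation term keeps the formation spread out.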
Radar simulation of human activities in non line-of-sight environments
The capability to detect, track and monitor human activities behind building walls and other non-line-of-sight environments is an important component of security and surveillance operations. Over the years, both ultrawideband and Doppler-based radar techniques have been researched and developed for tracking humans behind walls. In particular, Doppler radars capture some interesting features of the human radar returns called micro-Dopplers that arise from the dynamic movements of the different body parts. All the current research efforts have focused on building hardware sensors with very specific capabilities. This dissertation focuses on developing a physics-based Doppler radar simulator to generate the dynamic signatures of complex human motions in non-line-of-sight environments. The simulation model incorporates dynamic human motion, electromagnetic scattering mechanisms, channel propagation effects and radar sensor parameters. Detailed, feature-by-feature analyses of the resulting radar signatures are carried out to enhance our fundamental understanding of human sensing using radar. First, a methodology for simulating the radar returns from complex human motions in free space is presented. For this purpose, computer animation data from motion capture technologies are exploited to describe the human movements. Next, a fast, simple, primitive-based electromagnetic model is used to simulate the human body. The micro-Dopplers of several human motions such as walking, running, crawling and jumping are generated by integrating the animation models of humans with the electromagnetic model of the human body. Next, a methodology for generating the micro-Doppler radar signatures of humans moving behind walls is presented. This involves combining wall propagation functions derived from the finite-difference time-domain (FDTD) simulation with the free space radar simulations of humans.
The resulting hybrid simulator of the human and wall is used to investigate the effects of both homogeneous and inhomogeneous walls on human micro-Dopplers. The results are further corroborated by basic point-scatterer analysis of different wall effects. The wall studies are followed by an analysis of the effects of flat grounds on human radar signatures. The ground effect is modeled using the method of images and a ground reflection coefficient. A suitable Doppler radar testbed is developed in the laboratory for simulation validation. Measured data of different human activities are collected in both line-of-sight and through-wall environments and the resulting micro-Doppler signatures are compared with the simulation results. The human micro-Dopplers are best observed in the joint time-frequency space. Hence, suitable joint time-frequency transforms are investigated for improving the display and the readability of both simulated and measured spectrograms. Finally, two new Doppler radar paradigms are considered. First, a scenario is considered where multiple, spatially distributed Doppler radars are used to measure the micro-Dopplers of a moving human from different viewing angles. The possibility of using these micro-Doppler data for estimating the positions of different point scatterers on the human body is investigated. Second, a scenario is considered where multiple Doppler radars are collocated in a two-dimensional (2-D) array configuration. The possibility of generating frontal images of human movements using joint Doppler and 2-D spatial beamforming is considered. The performance of this concept is compared with that of conventional 2-D array processing without Doppler processing.
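The point-scatterer reasoning used above can be illustrated with a one-scatterer sketch: a body part moving at radial velocity v produces a Doppler shift of 2v/λ in the slow-time baseband signal. The carrier frequency, sampling rate, and velocity below are illustrative choices, not the dissertation's parameters; a wall would additionally multiply the return by a propagation transfer function.

```python
import numpy as np

c = 3e8
fc = 2.4e9                 # assumed narrowband carrier, Hz
lam = c / fc               # wavelength = 0.125 m
fs = 1000.0                # slow-time sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)

v = 1.5                    # radial velocity of the scatterer, m/s (walking pace)
R = 10.0 - v * t           # scatterer approaching the radar
s = np.exp(-1j * 4 * np.pi * R / lam)   # point-scatterer baseband return

# Estimate the Doppler shift from the spectrum of the slow-time signal.
spec = np.abs(np.fft.fft(s))
freqs = np.fft.fftfreq(len(s), 1 / fs)
f_est = freqs[np.argmax(spec)]

f_true = 2 * v / lam       # expected Doppler shift: 2*1.5/0.125 = 24 Hz
print(f_est)               # → 24.0
```

Summing such returns over many moving body parts, each with its own R(t) taken from motion-capture animation, is exactly what produces the composite micro-Doppler signature; a short-time Fourier transform of the sum then gives the spectrogram in the joint time-frequency space.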
A robot swarm assisting a human fire-fighter
Emergencies in industrial warehouses are a major concern for fire-fighters. The large dimensions, together with the development of dense smoke that drastically reduces visibility, represent major challenges. The GUARDIANS robot swarm is designed to assist fire-fighters in searching a large warehouse. In this paper we discuss the technology developed for a swarm of robots assisting fire-fighters. We explain the swarming algorithms that provide the functionality by which the robots react to and follow humans while no communication is required. Next we discuss the wireless communication system, which is a so-called mobile ad-hoc network. The communication network also provides the means to locate the robots and humans. Thus, the robot swarm is able to provide guidance information to the humans. Together with the fire-fighters we explored how the robot swarm should feed information back to the human fire-fighter. We have designed and experimented with interfaces for presenting swarm-based information to human beings.
Cue combination for 3D location judgements
Cue combination rules have often been applied to the perception of surface shape but not to judgements of object location. Here, we used immersive virtual reality to explore the relationship between different cues to distance. Participants viewed a virtual scene and judged the change in distance of an object presented in two intervals, where the scene changed in size between intervals (by a factor of between 0.25 and 4). We measured thresholds for detecting a change in object distance when there were only 'physical' (stereo and motion parallax) or 'texture-based' cues (independent of the scale of the scene) and used these to predict biases in a distance matching task. Under a range of conditions, in which the viewing distance and position of the target relative to other objects was varied, the ratio of 'physical' to 'texture-based' thresholds was a good predictor of biases in the distance matching task. The cue combination approach, which successfully accounts for our data, relies on quite different principles from those underlying geometric reconstruction.
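The threshold-based prediction described above follows the standard maximum-likelihood cue-combination rule: weight each cue's estimate by its inverse variance, so the ratio of discrimination thresholds fixes the relative weights and hence the predicted bias. The numbers below are illustrative, not the study's data.

```python
import numpy as np

def combine(estimates, sigmas):
    """Inverse-variance (maximum-likelihood) cue combination."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    w /= w.sum()
    return float(np.dot(w, estimates)), w

# Illustrative example: 'physical' cues (stereo + motion parallax) signal a
# 1.0 m change in distance; 'texture-based' cues, fooled by the scene
# rescaling, signal 0.0 m. Thresholds (sigmas) are assumed values.
physical_sigma, texture_sigma = 0.2, 0.4
est, w = combine([1.0, 0.0], [physical_sigma, texture_sigma])

print(round(est, 2))   # physical cue dominates: 0.8
```

Because the weights go as the inverse squared thresholds, halving the 'physical' threshold relative to the 'texture-based' one quadruples its weight, which is why the threshold ratio alone predicts the matching bias.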
Non-line-of-sight tracking of people at long range
A remote-sensing system that can determine the position of hidden objects has
applications in many critical real-life scenarios, such as search and rescue
missions and safe autonomous driving. Previous work has shown the ability to
range and image objects hidden from the direct line of sight, employing
advanced optical imaging technologies aimed at small objects at short range. In
this work we demonstrate a long-range tracking system based on single laser
illumination and single-pixel single-photon detection. This enables us to track
one or more people hidden from view at a stand-off distance of over 50 m. These
results pave the way towards next generation LiDAR systems that will
reconstruct not only the direct-view scene but also the main elements hidden
behind walls or corners.
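The ranging principle behind such a system can be sketched as a toy multilateration: each measured return time constrains the hidden person's distance from a relay spot on a visible surface, and several spot positions pin down the position. The geometry below is a monostatic, noise-free, 2-D simplification with invented coordinates, not the authors' reconstruction method.

```python
import numpy as np

# Hypothetical relay spots scanned across a visible wall, and the hidden
# person's true position (ground truth used only to synthesise measurements).
spots = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])
target = np.array([15.0, 8.0])

# Round-trip distance spot -> person -> spot, as inferred from photon
# return times (monostatic simplification, speed of light divided out).
ranges = 2 * np.linalg.norm(spots - target, axis=1)

def locate(spots, ranges, iters=50):
    """Gauss-Newton least squares on the range equations."""
    p = spots.mean(axis=0)                 # initial guess near the relay wall
    for _ in range(iters):
        diff = p - spots
        d = np.maximum(np.linalg.norm(diff, axis=1), 1e-9)
        residual = 2 * d - ranges          # model minus measurement
        J = 2 * diff / d[:, None]          # Jacobian of the range model
        p = p - np.linalg.lstsq(J, residual, rcond=None)[0]
    return p

est = locate(spots, ranges)
```

With noiseless measurements this converges to the true position in a few iterations; in the real system, timing jitter and the diffuse wall reflections make the inverse problem far harder, which is what the single-photon detection and long-range optics address.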
- …