Integrated Sensing and Communication based Outdoor Multi-Target Detection, Tracking and Localization in Practical 5G Networks
The 6th generation (6G) of wireless networks will likely support a variety of capabilities beyond communication, such as sensing and localization, through the use of communication networks empowered by advanced technologies. Integrated sensing and communication (ISAC) has been recognized as a critical technology as well as a usage scenario for 6G, as widely agreed by leading global standardization bodies. ISAC utilizes communication infrastructure and devices to sense the environment at high resolution and to track and localize nearby moving objects. By meeting the requirements for communication and sensing simultaneously, ISAC-based approaches offer higher spectral and energy efficiency than two separate single-purpose systems, as well as potentially lower cost and easier deployment. A key step towards the standardization and commercialization of ISAC is to carry out comprehensive field trials in practical networks, such as the 5th generation (5G) network, to demonstrate its true capabilities in practical scenarios. In this paper, an ISAC-based outdoor multi-target detection, tracking and localization approach is proposed and validated in 5G networks. The proposed system comprises 5G base stations (BSs) that serve nearby mobile users normally while simultaneously detecting, tracking and localizing drones, vehicles and pedestrians. Comprehensive trial results demonstrate the relatively high accuracy of the proposed method in practical outdoor environments when tracking and localizing single and multiple targets.
Comment: Accepted by an open access journal (appearing on IEEE Xplore soon).
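The abstract does not disclose the tracking algorithm used in the trials; a constant-velocity Kalman filter is a common building block for this kind of position tracking from noisy localization fixes. The sketch below is purely illustrative (the time step, noise levels and motion model are assumptions, not the paper's parameters).

```python
import numpy as np

def make_cv_model(dt=0.1, q=0.5, r=2.0):
    """Constant-velocity model over state (x, y, vx, vy); dt, q, r are illustrative."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # we observe position only
    Q = q * np.eye(4)                           # process noise covariance
    R = r * np.eye(2)                           # measurement noise covariance
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with a position measurement z = (x, y)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

F, H, Q, R = make_cv_model()
x = np.zeros(4)
P = np.eye(4) * 10.0
rng = np.random.default_rng(0)
for t in range(50):
    true_pos = np.array([t * 0.1 * 3.0, 5.0])   # target moving at 3 m/s along x
    z = true_pos + rng.normal(0, 1.0, size=2)   # noisy localization fix
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(x[:2])  # filtered position estimate, close to the true final position
```

A real multi-target system would additionally need data association (e.g. assigning each measurement to a track), which this single-target sketch omits.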
Efficient Pedestrian Detection in Urban Traffic Scenes
Pedestrians are important participants in urban traffic environments, and thus act as an interesting category of objects for autonomous cars. Automatic pedestrian detection is an essential task for protecting pedestrians from collision. In this thesis, we investigate and develop novel approaches by interpreting spatial and temporal characteristics of pedestrians, in three different aspects: shape, cognition and motion. The special upright human body shape, especially the geometry of the head and shoulder area, is the characteristic that most discriminates pedestrians from other object categories. Inspired by the success of Haar-like features for detecting human faces, which also exhibit a uniform shape structure, we propose to design particular Haar-like features for pedestrians. Tailored to a pre-defined statistical pedestrian shape model, Haar-like templates with multiple modalities are designed to describe local differences of the shape structure. Cognition theories aim to explain how human visual systems process input visual signals in an accurate and fast way. By emulating the center-surround mechanism in human visual systems, we design multi-channel, multi-direction and multi-scale contrast features, and boost them to respond to the appearance of pedestrians. In this way, our detector can be regarded as a top-down saliency system. In the last part of this thesis, we exploit the temporal characteristics of moving pedestrians and employ motion information for feature design, as well as for regions of interest (ROIs) selection. Motion segmentation on optical flow fields enables us to select the blobs most likely to contain moving pedestrians; a combination of Histogram of Oriented Gradients (HOG) and motion self-difference features further enables robust detection. We test our three approaches on image and video data captured in urban traffic scenes, which are rather challenging due to dynamic and complex backgrounds.
The achieved results demonstrate that our approaches reach and surpass state-of-the-art performance, and can also be employed for other applications, such as indoor robotics or public surveillance.
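The center-surround mechanism mentioned above can be illustrated with a minimal contrast feature: the difference between the mean intensity of a small centre window and that of a larger surrounding window, computed efficiently via an integral image. The window sizes and single-channel setup here are illustrative; the thesis's actual features are multi-channel, multi-direction and boosted.

```python
import numpy as np

def box_mean(img, k):
    """Mean of every k x k window, via an integral image."""
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = img.shape
    out = (ii[k:h+1, k:w+1] - ii[:h-k+1, k:w+1]
           - ii[k:h+1, :w-k+1] + ii[:h-k+1, :w-k+1]) / (k * k)
    return out

def center_surround(img, c=3, s=9):
    """Contrast = |mean of small centre window - mean of larger surround|."""
    centre = box_mean(img, c)
    surround = box_mean(img, s)
    pad = (s - c) // 2                 # align centre windows with surrounds
    centre = centre[pad:pad + surround.shape[0], pad:pad + surround.shape[1]]
    return np.abs(centre - surround)

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0                # a bright blob on a dark background
resp = center_surround(img)
print(resp.max())                      # strong response at the blob
```

On a uniform image the response is zero everywhere, which is exactly the saliency-like behaviour the feature is meant to have.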
A 58.6 mW 30 Frames/s Real-Time Programmable Multiobject Detection Accelerator With Deformable Parts Models on Full HD 1920×1080 Videos
This paper presents a programmable, energy-efficient, and real-time object detection hardware accelerator for low power and high throughput applications using deformable parts models, with 2x higher detection accuracy than traditional rigid body models. Three methods are used to address the high computational complexity of detecting objects with eight deformable parts: classification pruning for 33x fewer part classifications, vector quantization for 15x memory size reduction, and feature basis projection for 2x reduction in the cost of each classification. The chip was fabricated in a 65 nm CMOS technology, and can process full high definition 1920 × 1080 videos at 60 frames/s without any off-chip storage. The chip has two programmable classification engines (CEs) for multiobject detection. At 30 frames/s, the chip consumes only 58.6 mW (0.94 nJ/pixel, 1168 GOPS/W). At a higher throughput of 60 frames/s, the CEs can be time multiplexed to detect even more than two object classes. This proposed accelerator enables object detection to be as energy-efficient as video compression, which is found in most cameras today.
Funding: United States Defense Advanced Research Projects Agency; Texas Instruments Incorporated.
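The vector-quantization idea behind the 15x memory reduction can be sketched in software: feature vectors are replaced by indices into a small learned codebook, so only the codebook plus one small index per vector needs to be stored. The dimensions and codebook size below are illustrative, not the chip's actual parameters.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means codebook learning (illustrative, not the chip's trainer)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest codeword
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers, labels

rng = np.random.default_rng(1)
features = rng.normal(size=(1000, 32)).astype(np.float32)  # 1000 feature vectors
codebook, codes = kmeans(features, k=16)

# Memory: 1000 x 32 floats versus a 16 x 32 codebook plus one byte per index.
original_bytes = features.nbytes
quantized_bytes = codebook.astype(np.float32).nbytes + len(codes)
print(original_bytes / quantized_bytes)  # compression ratio
```

The trade-off is the quantization error introduced by snapping each vector to its nearest codeword, which the paper's feature basis projection and pruning work around in other ways.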
Developing Predictive Models of Driver Behaviour for the Design of Advanced Driving Assistance Systems
Worldwide, injuries in vehicle accidents have been on the rise in recent years, mainly due to driver error. The main objective of this research is to develop a predictive system for driving maneuvers by analyzing the cognitive (cephalo-ocular) behavior of the driver and the driving behavior (how the vehicle is being driven). Advanced Driving Assistance Systems (ADAS) include different driving functions, such as vehicle parking, lane departure warning, blind spot detection, and so on. While much research has been performed on developing automated co-driver systems, little attention has been paid to the fact that the driver plays an important role in driving events. Therefore, it is crucial to monitor events and factors that directly concern the driver. To this end, we perform a quantitative and qualitative analysis of driver behavior to find its relationship with driver intentionality and driving-related actions. We have designed and developed an instrumented vehicle (RoadLAB) that is able to record several synchronized streams of data, including the surrounding environment of the driver, vehicle functions, and driver cephalo-ocular behavior, such as gaze/head information. We subsequently analyze and study the behavior of several drivers to find out whether there is a meaningful relation between driver behavior and the next driving maneuver.
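The link between driver observations and the next maneuver can be illustrated with a toy classifier: feature vectors (here hypothetically gaze yaw, head yaw, steering angle, normalized speed) are matched to the nearest maneuver centroid. All feature names, prototypes and data below are invented for illustration; the thesis works with real RoadLAB recordings and its own analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_samples(center, n=50):
    """Hypothetical training samples scattered around a prototype vector."""
    return center + rng.normal(0, 0.1, size=(n, 4))

# Hypothetical prototype feature vectors per maneuver:
# (gaze yaw, head yaw, steering angle, normalized speed)
prototypes = {
    "left_turn":  np.array([-0.8, -0.5, -0.6, 0.3]),
    "right_turn": np.array([0.8, 0.5, 0.6, 0.3]),
    "straight":   np.array([0.0, 0.0, 0.0, 0.9]),
}
train = {m: make_samples(c) for m, c in prototypes.items()}
centroids = {m: s.mean(0) for m, s in train.items()}

def predict(x):
    """Nearest-centroid prediction of the next maneuver."""
    return min(centroids, key=lambda m: np.linalg.norm(x - centroids[m]))

print(predict(np.array([-0.7, -0.4, -0.5, 0.25])))  # → "left_turn"
```

A deployed predictor would of course use richer temporal features and a learned model; the point here is only the mapping from driver-centric measurements to maneuver labels.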
A System for the Generation of Synthetic Wide Area Aerial Surveillance Imagery
The development, benchmarking and validation of aerial Persistent Surveillance (PS) algorithms require access to specialist Wide Area Aerial Surveillance (WAAS) datasets. Such datasets are difficult to obtain and are often extremely large both in spatial resolution and temporal duration. This paper outlines an approach to the simulation of complex urban environments and demonstrates the viability of using this approach for the generation of simulated sensor data, corresponding to the use of wide area imaging systems for surveillance and reconnaissance applications. This provides a cost-effective method to generate datasets for vehicle tracking algorithms and anomaly detection methods. The system fuses the Simulation of Urban Mobility (SUMO) traffic simulator with a MATLAB controller and an image generator to create scenes containing uninterrupted door-to-door journeys across large areas of the urban environment. This ‘pattern-of-life’ approach provides three-dimensional visual information with natural movement and traffic flows. This can then be used to provide simulated sensor measurements (e.g. visual band and infrared video imagery) and automatic access to ground-truth data for the evaluation of multi-target tracking systems.
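One small but essential step in generating ground truth from such a pipeline is mapping simulator world coordinates (as produced by a traffic simulator like SUMO) into pixel coordinates of the synthetic overhead frame. The transform parameters below are illustrative, not the paper's.

```python
def world_to_pixel(xy, origin, metres_per_pixel, image_height):
    """Map (x, y) in metres to (col, row) pixels; image row 0 is at the top."""
    x, y = xy
    col = (x - origin[0]) / metres_per_pixel
    row = image_height - (y - origin[1]) / metres_per_pixel
    return int(round(col)), int(round(row))

# A vehicle 125 m east and 50 m north of the scene origin, imaged at
# 0.5 m/pixel on a 1000-pixel-tall frame:
px = world_to_pixel((125.0, 50.0), origin=(0.0, 0.0),
                    metres_per_pixel=0.5, image_height=1000)
print(px)  # (250, 900)
```

Applying this per vehicle per simulation step yields the automatic ground-truth annotations the abstract refers to.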
A study on detection of risk factors of a toddler’s fall injuries using visual dynamic motion cues
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The research in this thesis is intended to aid caregivers’ supervision of toddlers to prevent accidental injuries, especially injuries due to falls in the home environment. There have been very few attempts to develop an automatic system to tackle young children’s accidents despite the fact that they are particularly vulnerable to home accidents and a caregiver cannot give continuous supervision. Vision-based analysis methods have been developed to recognise toddlers’ fall risk factors related to changes in their behaviour or environment. First of all, suggestions to prevent fall events of young children at home were collected from well-known organisations for child safety. A large number of fall records of toddlers who had sought treatment at a hospital were analysed to identify a toddler’s fall risk factors. The factors include clutter on the floor acting as a tripping or slipping hazard, and a toddler moving around or climbing on furniture or room structures.
The major technical problem in detecting the risk factors is to classify foreground objects into human and non-human, and novel approaches have been proposed for the classification. Unlike most existing studies, which focus on human appearance such as skin colour for human detection, the approaches addressed in this thesis use cues related to dynamic motions. The first cue is based on the fact that there is relative motion between human body parts, while typical indoor clutter does not have such parts with diverse motions. In addition, other motion cues are employed to differentiate a human from a pet, since a pet also moves its parts diversely. These cues are the angle changes of an ellipse fitted to each object and the history of its actual height, which capture the varied posture changes of humans and the different body sizes of pets. The methods work well as long as foreground regions are correctly segmented.
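The ellipse-angle cue mentioned above can be sketched compactly: the orientation of the ellipse fitted to a binary foreground mask follows from the mask's second-order central moments. Tracking how this angle changes over frames is what separates varied human postures from rigid clutter; the mask and blob below are illustrative.

```python
import numpy as np

def blob_orientation(mask):
    """Orientation (radians) of the ellipse fitted to a binary mask,
    computed from second-order central image moments."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()
    y = ys - ys.mean()
    mu20 = (x * x).mean()
    mu02 = (y * y).mean()
    mu11 = (x * y).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

mask = np.zeros((40, 40), dtype=bool)
mask[10:30, 18:22] = True          # a tall, thin upright blob
theta = blob_orientation(mask)
print(np.degrees(theta))           # ~90 degrees: the blob's long axis is vertical
```

In the thesis's setting, this angle would be computed per frame and its change over time used as one of several motion cues.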
Mobile robot dynamic motion planning to avoid collisions with humans
In various scenarios, e.g. on a construction site, in a shopping mall, or on a street, a mobile robot confronts pedestrians from time to time. Consequently, the robot's future paths should avoid pedestrians to keep them safe. Research on this topic has gradually increased. Many processes of human-robot interaction, e.g. humans and robots walking towards each other, moving forward together, or meeting on the road, are similar to the scenario above. It is therefore necessary to generate a safe path before the robot sets off, namely to plan a global path from the starting point to the goal. To achieve a continuous, safe, and efficient path, it is essential first to predict the pedestrians' future paths. In this thesis, a causal human trajectory prediction method is adopted to predict future paths from discontinuous previous clues, with CNNs and RNNs used to encode the history information for predicting future trajectories. For path planning, the A* algorithm is used to search for a path that detours around obstacles.
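The A* search mentioned above can be sketched in a few lines on an occupancy grid; the grid, unit step cost and Manhattan heuristic are illustrative, while a real planner would treat predicted pedestrian positions as (time-varying) blocked cells.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """4-connected A* over a grid of 0 (free) / 1 (blocked) cells."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tick = count()                       # tie-breaker so the heap never compares nodes
    open_set = [(h(start), next(tick), 0, start, None)]
    came, best_g = {}, {start: 0}
    while open_set:
        _, _, g_cur, cur, parent = heapq.heappop(open_set)
        if cur in came:
            continue                     # already expanded with a better cost
        came[cur] = parent
        if cur == goal:                  # reconstruct the path back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0):
                ng = g_cur + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), next(tick),
                                              ng, (nr, nc), cur))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],   # a wall the path must detour around
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(path)              # shortest detour around the wall via column 3
```

The returned path detours through the only gap in the wall, which is exactly the obstacle-avoiding behaviour described in the abstract.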