5,316 research outputs found
Radar and RGB-depth sensors for fall detection: a review
This paper reviews recent works in the literature on systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends in this field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years, in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of falls and their consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors stems from their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and for users' acceptance and compliance, compared with other sensor technologies such as video cameras or wearables. Furthermore, combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper provides.
Convolutional Neural Network on Three Orthogonal Planes for Dynamic Texture Classification
Dynamic Textures (DTs) are sequences of images of moving scenes that exhibit
certain stationarity properties in time such as smoke, vegetation and fire. The
analysis of DT is important for recognition, segmentation, synthesis or
retrieval for a range of applications including surveillance, medical imaging
and remote sensing. Deep learning methods have shown impressive results and are
now the new state of the art for a wide range of computer vision tasks
including image and video recognition and segmentation. In particular,
Convolutional Neural Networks (CNNs) have recently proven to be well suited for
texture analysis with a design similar to a filter bank approach. In this
paper, we develop a new approach to DT analysis based on a CNN method applied
on three orthogonal planes xy, xt and yt. We train CNNs on spatial frames
and temporal slices extracted from the DT sequences and combine their outputs
to obtain a competitive DT classifier. Our results on a wide range of commonly
used DT classification benchmark datasets prove the robustness of our approach.
Significant improvement of the state of the art is shown on the larger
datasets.
Comment: 19 pages, 10 figures
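The three-orthogonal-planes idea above can be illustrated with a minimal NumPy sketch (function name and array shapes are illustrative, not from the paper): each plane yields a 2D slice of the video tensor that a 2D CNN can then process.

```python
import numpy as np

def orthogonal_plane_slices(video):
    """Extract one example slice per orthogonal plane of a video.

    video: array of shape (T, H, W) -- a grayscale dynamic texture sequence.
    Returns (xy, xt, yt) slices taken through the centre of the sequence.
    """
    T, H, W = video.shape
    xy = video[T // 2, :, :]   # spatial frame at the middle time step, shape (H, W)
    xt = video[:, H // 2, :]   # temporal slice at a fixed row, shape (T, W)
    yt = video[:, :, W // 2]   # temporal slice at a fixed column, shape (T, H)
    return xy, xt, yt

# Tiny synthetic "video": 8 frames of 16x16 noise
rng = np.random.default_rng(0)
clip = rng.random((8, 16, 16))
xy, xt, yt = orthogonal_plane_slices(clip)
print(xy.shape, xt.shape, yt.shape)  # (16, 16) (8, 16) (8, 16)
```

In the paper's setting, many such slices are extracted per sequence and a separate CNN is trained per plane, with the three outputs combined for classification.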
Video content analysis for automated detection and tracking of humans in CCTV surveillance applications
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The problems of achieving a high detection rate with a low false alarm rate for human detection and tracking in video sequences, performance scalability, and improving response time are addressed in this thesis. The underlying causes are the effect of scene complexity, human-to-human interactions, scale changes, and scene background-human interactions. A two-stage processing solution, namely human detection and human tracking, with two novel pattern classifiers is presented. Scale-independent human detection is achieved by processing in the wavelet domain using square wavelet features. These features, used to characterise human silhouettes at different scales, are similar to the rectangular features used in [Viola 2001]. At the detection stage, two detectors are combined to improve the detection rate. The first detector is based on the shape outline of humans extracted from the scene using a reduced-complexity outline extraction algorithm. A shape mismatch measure is used to differentiate between the human and the background class. The second detector uses rectangular features as primitives for silhouette description in the wavelet domain. The marginal distribution of features collocated at a particular position on a candidate human (a patch of the image) is used to describe the silhouette statistically. Two similarity measures are computed between a candidate human and the model histograms of the human and non-human classes. The similarity measure is used to discriminate between the human and the non-human class. At the tracking stage, a tracker based on the joint probabilistic data association filter (JPDAF) for data association and motion correspondence is presented. Track clustering is used to reduce hypothesis enumeration complexity.
Towards improving the response time with increases in frame dimension, scene complexity, and number of channels, a scalable algorithmic architecture and an operating accuracy prediction technique are presented. A scheduling strategy for improving the response time and throughput by parallel processing is also presented.
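The rectangular features mentioned above, in the style of [Viola 2001], are typically computed in constant time per feature via an integral image. A minimal sketch of a two-rectangle feature follows (function names and the demo image are illustrative, not from the thesis):

```python
import numpy as np

def integral_image(img):
    # Cumulative sums along both axes allow O(1) rectangle sums.
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] using the integral image ii."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def two_rect_feature(img, r0, c0, h, w):
    """Two-rectangle feature: sum of the left half minus the right half."""
    ii = integral_image(img.astype(np.float64))
    half = w // 2
    left = rect_sum(ii, r0, c0, r0 + h, c0 + half)
    right = rect_sum(ii, r0, c0 + half, r0 + h, c0 + 2 * half)
    return left - right

img = np.zeros((4, 4))
img[:, :2] = 1.0          # bright left half, dark right half
print(two_rect_feature(img, 0, 0, 4, 4))  # 8.0
```

The thesis applies such features in the wavelet domain rather than on raw pixels, but the constant-time rectangle-sum mechanism is the same.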
Temporal phase unwrapping using deep learning
The multi-frequency temporal phase unwrapping (MF-TPU) method, as a classical
phase unwrapping algorithm for fringe projection profilometry (FPP), is capable
of eliminating the phase ambiguities even in the presence of surface
discontinuities or spatially isolated objects. For the simplest and most
efficient case, two sets of 3-step phase-shifting fringe patterns are used: the
high-frequency one is for 3D measurement and the unit-frequency one is for
unwrapping the phase obtained from the high-frequency pattern set. The final
measurement precision or sensitivity is determined by the number of fringes
used within the high-frequency pattern, under the precondition that the phase
can be successfully unwrapped without triggering the fringe order error.
Consequently, in order to guarantee a reasonable unwrapping success rate, the
fringe number (or period number) of the high-frequency fringe patterns is
generally restricted to about 16, resulting in limited measurement accuracy. On
the other hand, using additional intermediate sets of fringe patterns can
unwrap the phase with higher frequency, but at the expense of a prolonged
pattern sequence. Inspired by recent successes of deep learning techniques for
computer vision and computational imaging, in this work, we report that the
deep neural networks can learn to perform TPU after appropriate training, an
approach termed deep-learning-based temporal phase unwrapping (DL-TPU), which can
substantially improve the unwrapping reliability compared with MF-TPU even in
the presence of different types of error sources, e.g., intensity noise, low
fringe modulation, and projector nonlinearity. We further experimentally
demonstrate for the first time, to our knowledge, that the high-frequency phase
obtained from 64-period 3-step phase-shifting fringe patterns can be directly
and reliably unwrapped from one unit-frequency phase using DL-TPU.
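The classical MF-TPU step that DL-TPU replaces can be sketched as follows: the fringe order k is recovered by comparing the scaled unit-frequency phase with the wrapped high-frequency phase. This is a minimal NumPy illustration on synthetic phases; variable names and the demo setup are illustrative, not from the paper.

```python
import numpy as np

def mf_tpu_unwrap(phi_high, phi_unit, freq_ratio):
    """Classical multi-frequency temporal phase unwrapping.

    phi_high:   wrapped high-frequency phase in (-pi, pi]
    phi_unit:   unit-frequency phase in [0, 2*pi) -- unambiguous, since a
                single fringe spans the whole field of view
    freq_ratio: number of fringes in the high-frequency pattern
    """
    # Fringe order from the scaled unit-frequency phase
    k = np.round((freq_ratio * phi_unit - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Synthetic check: a linear absolute phase ramp over 16 fringes
freq = 16
x = np.linspace(0, 1, 200, endpoint=False)
phi_abs = 2 * np.pi * freq * x                  # true absolute phase
phi_high = np.angle(np.exp(1j * phi_abs))       # wrapped to (-pi, pi]
phi_unit = 2 * np.pi * x                        # unit-frequency phase
recovered = mf_tpu_unwrap(phi_high, phi_unit, freq)
print(np.allclose(recovered, phi_abs))  # True
```

With noisy phases, the rounding step fails once the error in `freq_ratio * phi_unit` approaches half a fringe period, which is why the abstract notes that the fringe number is conventionally limited to about 16 while DL-TPU targets 64.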