Development Of A High Performance Mosaicing And Super-Resolution Algorithm
In this dissertation, a high-performance mosaicing and super-resolution algorithm is described. The scale-invariant feature transform (SIFT)-based mosaicing algorithm builds an initial mosaic, which is iteratively updated by a robust super-resolution algorithm to achieve the final high-resolution mosaic. Two different types of datasets are used for testing: high-altitude balloon data and unmanned aerial vehicle data. To evaluate the algorithm, five performance metrics are employed: mean square error, peak signal-to-noise ratio, singular value decomposition, slope of the reciprocal singular value curve, and cumulative probability of blur detection. Extensive testing shows that the proposed algorithm is effective in improving the captured aerial data and that the performance metrics accurately quantify this improvement.
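Two of the evaluation metrics named above, mean square error and peak signal-to-noise ratio, can be sketched directly. The images, noise level, and peak value below are illustrative, not taken from the dissertation's data.

```python
import numpy as np

def mse(a, b):
    """Mean square error between two same-shape images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(a, b)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / err)

# Toy example: a reference frame and a mildly noisy version of it.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0, 5, size=ref.shape), 0, 255)
print(round(psnr(ref, noisy), 1))  # roughly 34 dB for sigma = 5 noise
```

In a super-resolution loop such as the one described, PSNR against ground truth would be recomputed after each mosaic update to confirm the iteration is improving the result.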
INTELLIGENT VIDEO SURVEILLANCE OF HUMAN MOTION: ANOMALY DETECTION
Intelligent video surveillance is a system for highlight extraction and video summarization that recognizes the activities occurring in a video without human supervision. Surveillance systems are extremely helpful for guarding against dangerous situations. In this project, we propose a system that can track and detect abnormal behavior in an indoor environment. By concentrating on the inside-house environment, we aim to detect abnormal behavior between an adult and a toddler in order to prevent abuse. In general, the framework of a video surveillance system includes the following stages: background estimation, segmentation, detection, tracking, and behavior understanding and description. We use trained behavior profiles to collect descriptions and generate behavior statistics for later anomaly detection. We begin by modeling the simplest actions, such as stomping, slapping, kicking, and pointing a sharp or blunt object, which do not require sophisticated modeling. A method for modeling actions with more complex dynamics is then discussed. The resulting system manages to track the adult figure, the toddler figure, and a harmful object as a third subject, and can thereby alert security personnel. For future work, we recommend continuing to design methods for higher-level representation of complex activities, so that anomaly detection can be matched against real-time video surveillance. We also propose embedding the system in a hardware solution that triggers an output when a match is detected.
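The statistical behavior profile described in this abstract can be sketched as a Gaussian model fitted to per-frame motion features, flagging frames that drift too far from the profile. The feature values and threshold below are hypothetical, not from the project.

```python
import numpy as np

# Hypothetical per-frame motion features (e.g. speed, limb extension);
# a real system would extract these from the tracking stage.
rng = np.random.default_rng(1)
normal_frames = rng.normal(loc=[1.0, 0.5], scale=[0.2, 0.1], size=(500, 2))

# Build the "behavior profile": mean and covariance of normal activity.
mu = normal_frames.mean(axis=0)
cov = np.cov(normal_frames, rowvar=False)
cov_inv = np.linalg.inv(cov)

def is_anomalous(x, threshold=5.0):
    """Flag a frame whose Mahalanobis distance from the profile is too large."""
    d = x - mu
    return float(d @ cov_inv @ d) ** 0.5 > threshold

print(is_anomalous(np.array([1.0, 0.5])))  # typical frame -> False
print(is_anomalous(np.array([4.0, 2.0])))  # sudden violent motion -> True
```

The threshold trades missed detections against false alarms; in practice it would be tuned on held-out footage of normal adult-toddler interaction.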
Camera positioning for 3D panoramic image rendering
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. Virtual camera realisation and the proposition of a trapezoidal camera architecture are the two broad contributions of this thesis. Firstly, multiple cameras and their arrangement constitute a critical component affecting the integrity of visual content acquisition for multi-view video. Currently, linear, convergent, and divergent arrays are the prominent camera topologies adopted. However, the large number of cameras required and their synchronisation are two of the prominent challenges usually encountered. The use of virtual cameras can significantly reduce the number of physical cameras used with respect to any of the known camera structures, hence alleviating some of the other implementation issues. This thesis explores the use of image-based rendering, with and without geometry, in implementations leading to the realisation of virtual cameras. The virtual camera implementation was carried out from the perspective of a depth map (geometry) and the use of multiple image samples (no geometry). Prior to the virtual camera realisation, the generation of depth maps was investigated using region-match measures widely used for solving the image point correspondence problem. The constructed depth maps were compared with ones generated using the dynamic programming approach. In both the geometry and no-geometry approaches, the virtual cameras lead to the rendering of views from a textured depth map, the construction of a 3D panoramic image of a scene by stitching multiple image samples and performing superposition on them, and the computation of a virtual scene from a stereo pair of panoramic images. The quality of these rendered images was assessed through objective or subjective analysis using the Imatest software. Furthermore, metric reconstruction of a scene was performed by re-projection of pixel points from multiple image samples with a single centre of projection, using a sparse bundle adjustment algorithm. The statistical summary obtained after the application of this algorithm provides a gauge for the efficiency of the optimisation step. The optimised data was then visualised in the Meshlab software environment, providing the reconstructed scene.
Secondly, with any of the well-established camera arrangements, all cameras are usually constrained to the same horizontal plane. Occlusion therefore becomes an extremely challenging problem, and a robust camera set-up is required to recover the hidden parts of scene objects. To adequately meet the visibility condition for scene objects, given that occlusion of the same scene objects can occur, a multi-plane camera structure is highly desirable. This thesis therefore also explores a trapezoidal camera structure for image acquisition. The approach is to assess the feasibility and potential of several physical cameras of the same model sparsely arranged on the edges of an efficient trapezoid graph. This was implemented in both Matlab and Maya; the depth maps rendered in Matlab are of better quality.
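The region-match depth estimation mentioned above can be sketched as block matching that minimises a sum-of-squared-differences cost over candidate disparities. This is a minimal stand-in (the thesis also compares against dynamic programming); the toy image pair and window sizes are illustrative.

```python
import numpy as np

def disparity_ssd(left, right, patch=3, max_disp=8):
    """Per-pixel disparity by minimising sum-of-squared-differences
    between patches; a minimal region-match stereo sketch."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.int64)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.sum((ref - right[y - half:y + half + 1,
                                         x - d - half:x - d + half + 1]) ** 2)
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Toy pair: the right view is the left view shifted by 2 pixels,
# so the true disparity is 2 everywhere.
rng = np.random.default_rng(2)
left = rng.random((20, 30))
right = np.roll(left, -2, axis=1)
d = disparity_ssd(left, right)
print(int(np.median(d[5:15, 12:25])))  # → 2
```

Real correspondence search adds smoothness constraints and occlusion handling; this sketch only shows the matching cost that drives it.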
A LITERATURE STUDY ON CROWD (PEOPLE) COUNTING WITH THE HELP OF SURVEILLANCE VIDEOS
Crowd counting in video falls into two broad categories: (a) ROI counting, which estimates the total number of people in a region at a certain time instant, and (b) LOI counting, which counts the people who cross a detection line within a certain time duration. LOI counting can be developed using feature-tracking techniques, where the features are either tracked into trajectories that are then clustered into object tracks, or crowd blobs are extracted and counted from a temporal slice of the video. ROI counting can be developed using two families of techniques: detection-based methods and feature-based (pixel) regression methods. Detection-based methods detect people individually and count them, utilizing any of the following: background differencing, joint motion-and-appearance segmentation, silhouette or shape matching, and standard object recognition. Regression approaches extract features such as foreground pixels and interest points, form feature vectors from them, and use machine learning algorithms to estimate the number of pedestrians. Common features, according to recent surveys, are edges, wavelet coefficients, and combinations of large feature sets; common regressors include linear regression, neural networks, Gaussian process regression, and discrete classifiers. This paper presents a decade-long survey of people (crowd) counting in surveillance videos.
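The simplest regression approach the survey lists, linear regression from a foreground-pixel feature to a people count, can be sketched as follows. The training pairs are invented for illustration; a real system would obtain the pixel counts from background subtraction on annotated ROI frames.

```python
import numpy as np

# Hypothetical training data: foreground-pixel counts vs. ground-truth people counts.
pixels = np.array([120, 250, 380, 500, 640, 760], dtype=np.float64)
people = np.array([1, 2, 3, 4, 5, 6], dtype=np.float64)

# Fit count ≈ a * pixels + b by least squares.
A = np.stack([pixels, np.ones_like(pixels)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, people, rcond=None)

def estimate_count(fg_pixels):
    """Predict a people count from a foreground-pixel count."""
    return a * fg_pixels + b

print(round(estimate_count(440), 2))  # close to 3.5 for this toy fit
```

The same framework swaps in richer features (edges, interest points, wavelet coefficients) and stronger regressors (neural networks, Gaussian processes) without changing the training loop.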
Computer Vision for Multimedia Geolocation in Human Trafficking Investigation: A Systematic Literature Review
The task of multimedia geolocation is becoming an increasingly essential
component of the digital forensics toolkit to effectively combat human
trafficking, child sexual exploitation, and other illegal acts. Typically,
metadata-based geolocation information is stripped when multimedia content is
shared via instant messaging and social media. The intricacy of geolocating,
geotagging, or finding geographical clues in this content is often overly
burdensome for investigators. Recent research has shown that contemporary
advancements in artificial intelligence, specifically computer vision and deep
learning, show significant promise towards expediting the multimedia
geolocation task. This systematic literature review thoroughly examines the
state-of-the-art leveraging computer vision techniques for multimedia
geolocation and assesses their potential to expedite human trafficking
investigation. The review provides a comprehensive overview of computer vision-based approaches to multimedia geolocation, identifies their applicability in combating human trafficking, and highlights the potential implications of enhanced multimedia geolocation for prosecuting human trafficking. In total, 123 articles inform this systematic literature review. The findings suggest numerous promising directions for future impactful research on the subject.
Multimodal Indexing of Presentation Videos
This thesis presents four novel methods to help users efficiently and effectively retrieve information from unstructured and unsourced multimedia sources, in particular the increasing amount and variety of presentation videos such as those in e-learning, conference recordings, corporate talks, and student presentations. We demonstrate a system to summarize, index, and cross-reference such videos, and measure the quality of the produced indexes as perceived by the end users. We introduce four major semantic indexing cues: text, speaker faces, graphics, and mosaics, going beyond standard tag-based searches and simple video playback. This work aims at recognizing visual content "in the wild", where the system cannot rely on any additional information besides the video itself. For text, within a scene-text detection and recognition framework, we present a novel locally optimal adaptive binarization algorithm, implemented with integral histograms. It determines an optimal threshold that maximizes the between-class variance within a subwindow, with computational complexity independent of the size of the window itself. We obtain character recognition rates of 74%, validated against ground truth from 8 presentation videos spanning over 1 hour and 45 minutes, almost doubling the baseline performance of an open-source OCR engine. For speaker faces, we detect, track, match, and finally select a humanly preferred face icon per speaker, based on three quality measures: resolution, amount of skin, and pose. We register an 87% accordance (51 out of 58 speakers) between the face indexes automatically generated from three unstructured presentation videos of approximately 45 minutes each and human preferences recorded through Mechanical Turk experiments.
For diagrams, we locate graphics inside frames showing a projected slide, cluster them with an online algorithm based on a combination of visual and temporal information, and select and color-correct their representatives to match human preferences recorded through Mechanical Turk experiments. We register 71% accuracy (57 out of 81 unique diagrams properly identified, selected, and color-corrected) on three hours of video containing five different presentations. For mosaics, we combine two existing stitching measures to extend video images into a world coordinate system. The set of frames to be registered into a mosaic is sampled according to the PTZ camera movement, which is computed through least-squares estimation starting from the luminance constancy assumption. A local-feature-based stitching algorithm is then applied to estimate the homography among a set of video frames, and median blending is used to render pixels in overlapping regions of the mosaic. For two of these indexes, namely faces and diagrams, we present two novel MTurk-derived user data collections to determine viewer preferences, and show that our methods match them in selection. The net result of this thesis allows users to search, inside a video collection as well as within a single video clip, for a segment of a presentation by professor X on topic Y containing graph Z.
Highly efficient low-level feature extraction for video representation and retrieval.
Witnessing the omnipresence of digital video media, the research community has
raised the question of its meaningful use and management. Stored in immense
multimedia databases, digital videos need to be retrieved and structured in an
intelligent way, relying on the content and the rich semantics involved. Current
Content Based Video Indexing and Retrieval systems face the problem of the semantic
gap between the simplicity of the available visual features and the richness of user
semantics.
This work focuses on the issues of efficiency and scalability in video indexing and
retrieval to facilitate a video representation model capable of semantic annotation. A
highly efficient algorithm for temporal analysis and key-frame extraction is developed.
It is based on the prediction information extracted directly from the compressed domain
features and the robust scalable analysis in the temporal domain. Furthermore,
a hierarchical quantisation of the colour features in the descriptor space is presented.
Derived from the extracted set of low-level features, a video representation model that
enables semantic annotation and contextual genre classification is designed.
Results demonstrate the efficiency and robustness of the temporal analysis algorithm
that runs in real time maintaining the high precision and recall of the detection task.
Adaptive key-frame extraction and summarisation achieve a good overview of the
visual content, while the colour quantisation algorithm efficiently creates a
hierarchical set of descriptors. Finally, the video representation model, supported
by the genre classification algorithm, achieves excellent results in an automatic
annotation system by linking the video clips with a limited lexicon of related keywords.
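The temporal analysis and key-frame extraction described above can be sketched as a shot-change detector that emits a key frame whenever the content statistics drift from the last key frame. Note the thesis works on compressed-domain prediction information; the pixel-domain colour histogram below is a simplified stand-in, and the sequence and threshold are illustrative.

```python
import numpy as np

def key_frames(frames, threshold=0.3):
    """Select frame indices whose normalised histogram differs from the
    last key frame by more than `threshold` in total-variation distance."""
    keys = [0]
    ref = np.histogram(frames[0], bins=32, range=(0, 255))[0].astype(float)
    ref /= ref.sum()
    for i in range(1, len(frames)):
        h = np.histogram(frames[i], bins=32, range=(0, 255))[0].astype(float)
        h /= h.sum()
        if 0.5 * np.abs(h - ref).sum() > threshold:  # total-variation distance
            keys.append(i)
            ref = h
    return keys

# Toy sequence: a dark "shot" of 5 frames followed by a bright one.
rng = np.random.default_rng(4)
shot_a = [np.clip(rng.normal(60, 10, (32, 32)), 0, 255) for _ in range(5)]
shot_b = [np.clip(rng.normal(180, 10, (32, 32)), 0, 255) for _ in range(5)]
print(key_frames(shot_a + shot_b))  # → [0, 5]
```

Operating on compressed-domain prediction data instead of decoded pixels, as the thesis does, avoids full decoding and is what makes the analysis real-time and scalable.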
Representations for Cognitive Vision : a Review of Appearance-Based, Spatio-Temporal, and Graph-Based Approaches
The emerging discipline of cognitive vision requires a proper representation of visual information, including spatial and temporal relationships, scenes, events, semantics, and context. This review article summarizes existing representational schemes in computer vision which might be useful for cognitive vision, and discusses promising future research directions. The various approaches are categorized into appearance-based, spatio-temporal, and graph-based representations for cognitive vision. While the representation of objects has been covered extensively in computer vision research, from both a reconstruction and a recognition point of view, cognitive vision will also require new ideas on how to represent scenes. We introduce new concepts for scene representations and discuss how these might be efficiently implemented in future cognitive vision systems.
Long-Term Memory Motion-Compensated Prediction
Long-term memory motion-compensated prediction extends the spatial displacement vector utilized in block-based hybrid video coding by a variable time delay, permitting the use of more frames than just the previously decoded one for motion-compensated prediction. The long-term memory covers several seconds of decoded frames at the encoder and decoder. The use of multiple frames for motion compensation in most cases provides significantly improved prediction gain. The variable time delay has to be transmitted as side information, requiring an additional bit rate which may be prohibitive when the long-term memory becomes too large. Therefore, we control the bit rate of the motion information by employing rate-constrained motion estimation. Simulation results are obtained by integrating long-term memory prediction into an H.263 codec. Reconstruction PSNR improvements of up to 2 dB for the Foreman sequence and 1.5 dB for the Mother–Daughter sequence are demonstrated in comparison to the TMN-2.0 H.263 coder. These PSNR improvements correspond to bit-rate savings of up to 34% and 30%, respectively. Mathematical inequalities are used to speed up motion estimation while achieving the full prediction gain.
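The rate-constrained estimation described above minimises a Lagrangian cost, distortion plus lambda times the bits spent on the motion vector and the time delay, jointly over all frames in the memory. The sketch below uses SAD for distortion and a crude additive bit-cost model; the frame sizes, lambda, and bit costs are illustrative, not the H.263 entropy-coded values.

```python
import numpy as np

def rc_motion_search(block, refs, y, x, search=2, lam=4.0):
    """Pick (frame delay, motion vector) minimising SAD + lambda * rate,
    where rate charges extra bits for older frames and larger vectors.
    A toy stand-in for long-term memory rate-constrained estimation."""
    h, w = block.shape
    best = None
    for delay, ref in enumerate(refs):  # refs[0] = most recently decoded frame
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if yy < 0 or xx < 0 or yy + h > ref.shape[0] or xx + w > ref.shape[1]:
                    continue
                sad = np.abs(block - ref[yy:yy + h, xx:xx + w]).sum()
                rate = (abs(dy) + abs(dx)) + 2 * delay  # crude bit-cost model
                cost = sad + lam * rate
                if best is None or cost < best[0]:
                    best = (cost, delay, (dy, dx))
    return best[1], best[2]

# Toy memory: the current block truly matches the older frame at offset (1, 0).
rng = np.random.default_rng(5)
old = rng.random((16, 16)) * 255
recent = rng.random((16, 16)) * 255
block = old[5:9, 4:8].copy()
print(rc_motion_search(block, [recent, old], 4, 4))  # → (1, (1, 0))
```

Because the delay's rate cost enters the same Lagrangian, the search only reaches into older frames when the distortion saving outweighs the extra side-information bits, which is exactly what keeps the motion bit rate under control as the memory grows.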