633 research outputs found

    Towards automated visual surveillance using gait for identity recognition and tracking across multiple non-intersecting cameras

    No full text
    Although personal privacy has become a major concern, surveillance technology is now ubiquitous in modern society, driven mainly by rising crime and the need to provide secure and safer environments. Recent research has confirmed that people can be recognized by the way they walk, i.e. their gait. This study investigates the use of gait for detecting people as well as identifying them across different cameras. We present a new approach for tracking and identifying people between non-intersecting, uncalibrated stationary cameras based on gait analysis. A vision-based markerless extraction method derives gait kinematics and anthropometric measurements to produce a gait signature. The novelty of our approach is motivated by recent research in biometrics and forensic gait analysis. Experimental results affirm that the approach robustly detects walking people and extracts gait features across different camera viewpoints, achieving an identity recognition rate of 73.6% over 2270 video sequences. Furthermore, the results confirm the method's potential for identity tracking in real surveillance systems, recognizing walking individuals across views with an average recognition rate of 92.5% for cross-camera matching between two non-overlapping views.
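    The abstract does not detail how a gait signature is built from the extracted measurements. As a rough illustration, here is a minimal sketch of one common construction, assuming a markerless tracker already supplies per-frame 2D hip, knee and ankle positions; all names and the choice of Fourier descriptors are illustrative, not the authors' implementation:

```python
# Sketch: a view-tolerant gait signature from joint trajectories.
import numpy as np

def joint_angle(hip, knee, ankle):
    """Knee flexion angle (radians) from three 2D points."""
    thigh, shank = hip - knee, ankle - knee
    cos_a = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

def gait_signature(frames, n_harmonics=5):
    """Fourier magnitudes of the knee-angle series: discarding phase makes
    the signature tolerant to unsynchronised cameras."""
    angles = np.array([joint_angle(f["hip"], f["knee"], f["ankle"]) for f in frames])
    spectrum = np.fft.rfft(angles - angles.mean())
    mags = np.abs(spectrum[1:n_harmonics + 1])
    return mags / (np.linalg.norm(mags) + 1e-9)  # scale-normalised

def match_score(sig_a, sig_b):
    """Smaller is better; the accept threshold is tuned per deployment."""
    return float(np.linalg.norm(sig_a - sig_b))
```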

    On Using Gait in Forensic Biometrics

    No full text
    Given the continuing advances in gait biometrics, it appears prudent to investigate translating these techniques for forensic use. We address the question of what confidence might be given to a comparison between any two such measurements. We use the locations of the ankle, knee and hip to derive a measure of the match between walking subjects in image sequences; the Instantaneous Posture Match algorithm, using Haar templates, kinematics and anthropometric knowledge, is used to determine their locations. This is demonstrated using real CCTV recorded at Gatwick Airport, laboratory images from the multi-view CASIA-B dataset, and an example of real scene-of-crime video. To assess the measurement confidence we study the mean intra- and inter-match scores as a function of database size. These measures converge to constant and separate values, indicating that the match measure derived from individual comparisons is considerably smaller than the average match measure from a population.
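    The confidence study described above amounts to comparing mean within-subject and between-subject match scores as the database grows. A minimal sketch of that computation follows, assuming some pairwise `match_score` distance and a simple dictionary layout for the database (both are assumptions, not the paper's code):

```python
# Sketch: mean intra- vs inter-subject match scores over a database.
import itertools
import numpy as np

def mean_scores(db, match_score):
    """db: {subject_id: [signature, ...]}. Returns (mean_intra, mean_inter)."""
    intra, inter = [], []
    for sid, sigs in db.items():
        # All pairs of observations of the same subject.
        intra += [match_score(a, b) for a, b in itertools.combinations(sigs, 2)]
    for (_, g1), (_, g2) in itertools.combinations(db.items(), 2):
        # All cross-subject pairs.
        inter += [match_score(a, b) for a in g1 for b in g2]
    return np.mean(intra), np.mean(inter)

# Evaluating mean_scores on growing subsets of the database should show both
# means converging to separated constants, as the abstract reports.
```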

    Face recognition technologies for evidential evaluation of video traces

    Get PDF
    Human recognition from video traces is an important task in forensic investigations and evidence evaluation. Compared with other biometric traits, the face is one of the most popular modalities for human recognition because its collection is non-intrusive and requires little cooperation from subjects. Moreover, face images taken at a long distance can still provide reasonable resolution, a merit most biometric modalities, such as iris and fingerprint, lack. In this chapter, we discuss automatic face recognition technologies for evidential evaluation of video traces. We first introduce the general concepts in both forensic and automatic face recognition, then analyse the difficulties of face recognition from videos. We summarise and categorise the approaches for handling different uncontrollable factors in difficult recognition conditions. Finally, we discuss some challenges and trends in face recognition research in both forensics and biometrics. Given its merits tested in many deployed systems and its great potential in other emerging applications, considerable research and development effort is expected to be devoted to face recognition in the near future.

    DALES: Automated Tool for Detection, Annotation, Labelling and Segmentation of Multiple Objects in Multi-Camera Video Streams

    Get PDF
    In this paper, we propose a new software tool called DALES to extract semantic information from multi-view videos based on the analysis of their visual content. Our system is fully automatic and well suited to multi-camera environments. Once the multi-view video sequences are loaded into DALES, the software detects, counts and segments the visual objects evolving in the provided video streams. These objects of interest are then labelled, and the related frames are annotated with the corresponding semantic content. Moreover, a textual script is automatically generated from the video annotations. DALES shows excellent performance in terms of accuracy and computational speed and is robustly designed to ensure view synchronization.
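    To make the final step concrete, the sketch below turns per-frame, per-camera detections into a textual script of the kind described; the data layout and field names are hypothetical, not DALES's actual format:

```python
# Sketch: generating a textual annotation script from tracked detections.
from dataclasses import dataclass

@dataclass
class Detection:
    camera: str       # e.g. "cam1"
    frame: int
    label: str        # e.g. "person", "car"
    track_id: int
    bbox: tuple       # (x, y, w, h) in pixels

def to_script(detections):
    """One line per detection, ordered by camera then time."""
    lines = [
        f"{d.camera} frame {d.frame}: {d.label} #{d.track_id} at {d.bbox}"
        for d in sorted(detections, key=lambda d: (d.camera, d.frame))
    ]
    return "\n".join(lines)
```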

    Payload crew interface design criteria and techniques. Task 1: Inflight operations and training for payloads

    Get PDF
    Guidelines are developed for use in control and display panel design for payload operations performed on the aft flight deck of the orbiter. Preliminary payload procedures are defined. Crew operational concepts are developed. Payloads selected for operational simulations were the shuttle UV optical telescope (SUOT), the deep sky UV survey telescope (DUST), and the shuttle UV stellar spectrograph (SUSS). The advanced technology laboratory payload, consisting of 11 experiments, was selected for detailed evaluation because of the availability of operational data and its operational complexity.

    Enabling Cross-Camera Collaboration for Video Analytics on Distributed Smart Cameras

    Full text link
    Overlapping cameras offer exciting opportunities to view a scene from different angles, allowing for more advanced, comprehensive and robust analysis. However, existing visual analytics systems for multi-camera streams are mostly limited to (i) per-camera processing and aggregation and (ii) workload-agnostic centralized processing architectures. In this paper, we present Argus, a distributed video analytics system with cross-camera collaboration on smart cameras. We identify multi-camera, multi-target tracking as the primary task of multi-camera video analytics and develop a novel technique that avoids redundant, processing-heavy identification tasks by leveraging object-wise spatio-temporal association in the overlapping fields of view across multiple cameras. We further develop a set of techniques to perform these operations across distributed cameras without cloud support at low latency by (i) dynamically ordering the camera and object inspection sequence and (ii) flexibly distributing the workload across smart cameras, taking into account network transmission and heterogeneous computational capacities. Evaluation on three real-world overlapping camera datasets with two Nvidia Jetson devices shows that Argus reduces the number of object identifications and end-to-end latency by up to 7.13x and 2.19x (4.86x and 1.60x compared to the state of the art), while achieving comparable tracking quality.
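    The core saving comes from spatial association in overlapping views: if an object already identified in camera A projects onto a tracked object in camera B, the identity can be reused without re-running the heavy identification model. A minimal sketch of that association follows, assuming a pre-calibrated ground-plane homography between the two views; the names are illustrative, not Argus's actual API:

```python
# Sketch: reuse identities across overlapping views via homography projection.
import numpy as np

def project_point(H, pt):
    """Map a ground-plane point through the 3x3 homography H."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

def associate(track_a_foot, tracks_b, H_ab, gate=50.0):
    """Return the camera-B track whose foot point is nearest the projection
    of camera A's track, if within `gate` pixels; else None."""
    proj = project_point(H_ab, track_a_foot)
    best, best_d = None, gate
    for t in tracks_b:
        d = np.linalg.norm(proj - np.asarray(t["foot"]))
        if d < best_d:
            best, best_d = t, d
    return best  # if found, reuse best["identity"] instead of re-identifying
```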

    FARSEC: A Reproducible Framework for Automatic Real-Time Vehicle Speed Estimation Using Traffic Cameras

    Full text link
    Estimating the speed of vehicles using traffic cameras is a crucial task for traffic surveillance and management, enabling more optimal traffic flow, improved road safety and lower environmental impact. Transportation-dependent systems, such as navigation and logistics, stand to benefit greatly from reliable speed estimation. While prior research in this area reports competitive accuracy, the proposed solutions lack reproducibility and robustness across different datasets. To address this, we provide a novel framework for automatic real-time vehicle speed calculation, which copes with more diverse data from publicly available traffic cameras to achieve greater robustness. Our model employs novel techniques to estimate the length of road segments via depth map prediction. Additionally, our framework handles realistic conditions such as camera movements and different video stream inputs automatically. We compare our model to three well-known models in the field using their benchmark datasets. While our model does not set a new state of the art in prediction performance, the results are competitive on realistic CCTV videos. At the same time, our end-to-end pipeline offers more consistent results, an easier implementation and better compatibility. Its modular structure facilitates reproducibility and future improvements.
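    Once a road segment's real-world length is known (here, via depth map prediction), speed estimation reduces to converting a vehicle's pixel displacement into metres and dividing by elapsed time. The sketch below shows that arithmetic; the per-segment metres-per-pixel scale is assumed to come from the depth step and the names are illustrative, not FARSEC's exact pipeline:

```python
# Sketch: vehicle speed from a pixel-space track and a known scale.
import numpy as np

def speed_kmh(positions_px, metres_per_px, fps):
    """positions_px: (N, 2) track of the vehicle's centre over N frames."""
    p = np.asarray(positions_px, dtype=float)
    # Path length in pixels, converted to metres via the segment scale.
    dist_m = np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1)) * metres_per_px
    elapsed_s = (len(p) - 1) / fps
    return (dist_m / elapsed_s) * 3.6

# e.g. a 30-frame track at 25 fps covering 300 px at 0.05 m/px:
# 15 m over 1.16 s, i.e. roughly 46.5 km/h.
```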

    Enhanced target detection in CCTV network system using colour constancy

    Get PDF
    The focus of this research is to study how targets can be detected more faithfully in a multi-camera CCTV network system using spectral features for detection. The objective of the work is to develop colour constancy (CC) methodology that keeps the spectral features of the scene in a constant, stable state irrespective of variable illumination and camera calibration issues. Unlike previous work in the field of target detection, two versions of CC algorithms were developed during the course of this work, both capable of maintaining colour constancy for every image pixel in the scene: 1) a method termed Enhanced Luminance Reflectance CC (ELRCC), which consists of a pixel-wise sigmoid function for adaptive dynamic range compression; 2) the Enhanced Target Detection and Recognition Colour Constancy (ETDCC) algorithm, which employs a bidirectional pixel-wise non-linear transfer function (PWNLTF), a centre-surround luminance enhancement and a Grey Edge white-balancing routine. The effectiveness of target detection for all developed CC algorithms has been validated using the multi-camera 'Imagery Library for Intelligent Detection Systems' (iLIDS), 'Performance Evaluation of Tracking and Surveillance' (PETS) and 'Ground Truth Colour Chart' (GTCC) datasets. It is shown that the developed CC algorithms enhance target detection efficiency by over 175% compared with no CC enhancement. This research has resulted in one journal paper published in Optical Engineering together with three conference papers on the subject of the research.
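    As a point of reference for the white-balancing component, here is a minimal sketch of the standard Grey Edge step: the illuminant is estimated from per-channel average edge strength and divided out. This is the textbook formulation only, not the thesis's full ELRCC/ETDCC pipeline:

```python
# Sketch: Grey Edge white balancing (first-order, Minkowski norm p=1).
import numpy as np

def grey_edge_balance(img):
    """img: float array (H, W, 3) in [0, 1]. Returns a white-balanced copy."""
    # Per-channel mean absolute derivative in x and y ~ edge energy.
    dx = np.abs(np.diff(img, axis=1)).mean(axis=(0, 1))
    dy = np.abs(np.diff(img, axis=0)).mean(axis=(0, 1))
    illuminant = dx + dy
    # Normalise so an already-neutral scene is left unchanged.
    illuminant /= illuminant.sum() / 3.0
    return np.clip(img / illuminant, 0.0, 1.0)
```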

    Proto-type installation of a double-station system for the optical-video-detection and orbital characterisation of a meteor/fireball in South Korea

    Get PDF
    We give a detailed description of the installation and operation of a double-station meteor detection system which formed part of a research and education project between the Korea Astronomy and Space Science Institute and Daejeon Science High School. A total of six light-sensitive CCD cameras were installed, three at the SOAO and three at the BOAO observatory. A double-station observation of a meteor event enables the determination of its three-dimensional orbit in space. This project was initiated in response to the Jinju fireball event in March 2014; the cameras were installed in October/November 2014. The two stations are identical in hardware as well as software. Each station employs sensitive Watec-902H2 cameras in combination with relatively fast f/1.2 lenses. Various fields of view were used to measure differences in the detection rates of meteor events. We employed the SonotaCo UFO software suite for meteor detection and subsequent analysis. The system setup and installation/operation experience are described and first results are presented. We also give a brief overview of historic as well as recent meteor (fall) detections in South Korea. For more information please consult http://meteor.kasi.re.kr.
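    The reason two stations are required is that a single camera gives only a direction to the meteor; with two sight lines from known positions, the 3D point is recovered as their closest approach. The sketch below is textbook ray triangulation in a common Earth-fixed frame, not the SonotaCo pipeline:

```python
# Sketch: triangulating a meteor position from two station sight lines.
import numpy as np

def triangulate(p1, d1, p2, d2):
    """p*: station position (3,), d*: sight-line direction (3,).
    Returns the midpoint of the shortest segment between the two rays."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # approaches 0 for near-parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0

# Repeating this per video frame yields the 3D trajectory, from which the
# pre-atmospheric orbit can then be derived.
```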

    Person re-Identification over distributed spaces and time

    Get PDF
    Replicating the human visual system and the cognitive abilities the brain uses to process visual information is an area of substantial scientific interest. With the prevalence of video surveillance cameras, part of this effort has gone into providing useful automated counterparts to human operators. A prominent task in visual surveillance is matching people between disjoint camera views, or re-identification. This allows operators to locate people of interest and track them across cameras, and can serve as a precursory step to multi-camera activity analysis. However, owing to the contrasting conditions between camera views and their effects on people's appearance, re-identification is a non-trivial task. This thesis proposes solutions for reducing the visual ambiguity in observations of people between camera views.

    The thesis first looks at a method for mitigating the effects of differing lighting conditions between camera views on people's appearance, building on work modelling inter-camera illumination from known pairs of images. A Cumulative Brightness Transfer Function (CBTF) is proposed to estimate the mapping of colour brightness values from limited training samples. Unlike previous methods that use a mean-based representation of a training set, the cumulative nature of the CBTF retains colour information from under-represented samples. Additionally, the bi-directionality of the mapping function is explored to maximise re-identification accuracy by ensuring samples are accurately mapped between cameras.

    Secondly, an extension to the CBTF framework is proposed that addresses changing lighting conditions within a single camera. As the CBTF requires manually labelled training samples, it is limited to static lighting conditions and is less effective if the lighting changes. This Adaptive CBTF (A-CBTF) differs from previous approaches, which either do not consider lighting change over time or rely on camera transition time information to update. By utilising contextual information drawn from the background in each camera view, the lighting change within a single camera can be estimated. This background lighting model allows colour information to be mapped back to the original training conditions, removing the need for retraining.

    Thirdly, a novel reformulation of re-identification as a ranking problem is proposed. Previous methods use a score based on a direct distance measure over a feature set to produce a correct/incorrect match result. Rather than offering an operator a single outcome, the ranking paradigm gives the operator a ranked list of possible matches and lets them make the final decision. Using a Support Vector Machine (SVM) ranking method, a weighting on the appearance features can be learned that exploits the fact that not all image features are equally important to re-identification. An Ensemble-RankSVM is also proposed to address scalability by separating the training samples into smaller subsets and boosting the trained models.

    Finally, the thesis considers a practical application of the ranking paradigm in a real-world system encompassing both the re-identification stage and the precursory extraction and tracking stages, forming an aid for CCTV operators. Segmentation and detection are combined to extract relevant information from the video, and several matching techniques are combined with temporal priors to form a more comprehensive overall matching criterion. The effectiveness of the proposed approaches is tested on datasets from a variety of challenging environments, including offices, apartment buildings, airports and outdoor public spaces.
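    The CBTF idea above is essentially cumulative-histogram matching: brightness histograms are pooled over all training pairs before matching the cumulative distributions, so sparsely represented brightness values still contribute. A minimal sketch follows, assuming a simple pooled-pixel data layout; the thesis's exact normalisation may differ:

```python
# Sketch: a Cumulative Brightness Transfer Function as a lookup table.
import numpy as np

def cbtf(pixels_cam_a, pixels_cam_b, levels=256):
    """pixels_*: 1-D int arrays of brightness values (0..levels-1) pooled
    from the same people observed in each camera. Returns a LUT A -> B."""
    ha = np.bincount(pixels_cam_a, minlength=levels).cumsum().astype(float)
    hb = np.bincount(pixels_cam_b, minlength=levels).cumsum().astype(float)
    ha /= ha[-1]
    hb /= hb[-1]
    # For each level in A, find the level in B at the same cumulative
    # proportion (inverse-CDF matching).
    return np.searchsorted(hb, ha).clip(0, levels - 1)

# lut = cbtf(pix_a, pix_b)
# mapped = lut[image_cam_a]  # re-colour camera-A observations into camera-B terms
```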