
    Hierarchical video surveillance architecture: a chassis for video big data analytics and exploration

    There is increasing reliance on video surveillance systems for the systematic derivation, analysis and interpretation of the data needed for predicting, planning, evaluating and implementing public safety. This is evident from the massive number of surveillance cameras deployed across public locations. For example, in July 2013, the British Security Industry Association (BSIA) reported that over 4 million CCTV cameras had been installed in Britain alone. The BSIA also revealed that only 1.5% of these are state owned. In this paper, we propose a framework that allows access to data from privately owned cameras, with the aim of increasing the efficiency and accuracy of public safety planning, security activities, and decision support systems that are based on video-integrated surveillance systems. The accuracy of results obtained from government-owned public safety infrastructure would improve greatly if privately owned surveillance systems ‘expose’ relevant video-generated metadata events, such as triggered alerts, and also permit queries against a metadata repository. Subsequently, a police officer, for example, with an appropriate level of system permission, can query unified video systems across a large geographical area such as a city or a country to predict the location of an entity of interest, such as a pedestrian or a vehicle. This becomes possible with our proposed novel hierarchical architecture, the Fused Video Surveillance Architecture (FVSA). At the high level, FVSA comprises a hardware framework supported by a multi-layer abstraction software interface. It presents video surveillance systems as an adapted computational grid of intelligent services, enabled to integrate and communicate with other compatible systems in the Internet of Things (IoT).
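The federated query described above can be sketched as follows. This is a minimal illustration only: the class and method names (`Camera`, `MetadataRegistry`, `query_entity`) and the single `permitted` flag standing in for the paper's permission levels are assumptions, not the FVSA's actual interfaces.

```python
# Hypothetical sketch of an FVSA-style federated metadata query:
# privately and state-owned cameras expose metadata events, and a
# registry aggregates sightings of one entity across all of them.

class Camera:
    def __init__(self, camera_id, location, private=True):
        self.camera_id = camera_id
        self.location = location
        self.private = private
        self.events = []  # exposed video-generated metadata events

    def add_event(self, entity, timestamp):
        self.events.append({"entity": entity, "time": timestamp})


class MetadataRegistry:
    """Unifies metadata events from state- and privately-owned cameras."""

    def __init__(self):
        self.cameras = []

    def register(self, camera):
        self.cameras.append(camera)

    def query_entity(self, entity, permitted=True):
        # A real system would enforce per-user permission levels;
        # a single boolean stands in for that check here.
        if not permitted:
            return []
        sightings = [
            (c.camera_id, c.location, e["time"])
            for c in self.cameras
            for e in c.events
            if e["entity"] == entity
        ]
        # Chronological order gives a movement trail for the entity.
        return sorted(sightings, key=lambda s: s[2])


cam_a = Camera("cam-a", "High Street", private=True)
cam_a.add_event("vehicle-XYZ", 100)
cam_b = Camera("cam-b", "Market Square", private=False)
cam_b.add_event("vehicle-XYZ", 160)

registry = MetadataRegistry()
registry.register(cam_a)
registry.register(cam_b)
trail = registry.query_entity("vehicle-XYZ")
```

A query with insufficient permission returns nothing, while an authorized one yields the entity's sightings in time order across both ownership domains.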

    Quantifying Mechanical Properties of Automotive Steels with Deep Learning Based Computer Vision Algorithms

    This paper demonstrates that the instrumented indentation test (IIT), together with a trained artificial neural network (ANN), can characterize the mechanical properties of local parts of a welded steel structure, such as a weld nugget or heat-affected zone. Aside from the force-indentation depth curves generated by the IIT, the profile of the surface deformed after the indentation test also correlates strongly with the material’s plastic behavior. The profile of the indented surface was therefore used as the training dataset for an ANN that determines the material parameters of the welded zones. Images showing the three-dimensional deformation of the indented surface were analyzed with computer vision algorithms, and the resulting data were used to train the ANN to characterize the mechanical properties. Moreover, this method was applied to images of the specimen surface taken with a simple light microscope. It is therefore possible to quantify the mechanical properties of automotive steels with four independent methods: (1) the force-indentation depth curve; (2) the profile of the indented surface; (3) analysis of the 3D-measurement image; and (4) evaluation of images taken by a simple light microscope. The results show very good agreement between the material parameters obtained from the trained ANN and the experimental uniaxial tensile test. They also show that the mechanical properties of an unknown steel can be determined solely by analyzing images of its surface after a simple indenter has been pushed into it.
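The core regression task (indented-surface profile in, material parameters out) can be illustrated with a toy fit. The paper trains an ANN on measured profiles; here a linear least-squares model on synthetic, noiseless data stands in for that regression, so all dimensions and numbers below are fabricated for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_profile_points = 200, 16
# Hidden "true" linear map from profile shape to two material
# parameters (e.g. a yield-strength-like and a hardening-exponent-like
# value) -- purely synthetic, for illustration only.
true_map = rng.normal(size=(n_profile_points, 2))

# Synthetic indented-surface profiles and their ground-truth parameters.
profiles = rng.normal(size=(n_samples, n_profile_points))
params = profiles @ true_map

# Fit the inverse model: profile samples -> material parameters.
learned_map, *_ = np.linalg.lstsq(profiles, params, rcond=None)

# Check the fit on an unseen profile.
test_profile = rng.normal(size=(1, n_profile_points))
pred = test_profile @ learned_map
truth = test_profile @ true_map
max_err = float(np.max(np.abs(pred - truth)))
```

With noiseless linear data the map is recovered essentially exactly; real profile-to-parameter relations are nonlinear, which is why the paper uses an ANN rather than a linear model.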

    Toward Accurate and Efficient Feature Selection for Speaker Recognition on Wearables

    Due to the user-interface limitations of wearable devices, voice-based interfaces are becoming more common; speaker recognition may then address the authentication requirements of wearable applications. Wearable devices have a small form factor, a limited energy budget, and limited computational capacity. In this paper, we examine the challenge of computing speaker recognition on small wearable platforms, and specifically of reducing resource use (energy use, response time) by trimming the input through careful feature selection. For our experiments, we analyze four feature-selection algorithms and three feature sets for speaker identification and speaker verification. Our results show that Principal Component Analysis (PCA) with frequency-domain features had the highest accuracy, Pearson Correlation (PC) with time-domain features had the lowest energy use, and Recursive Feature Elimination (RFE) with frequency-domain features had the least latency. Our results can guide developers in choosing feature sets and configurations for speaker-authentication algorithms on wearable platforms.
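The PCA-based trimming the abstract reports as most accurate can be sketched as follows. The feature matrix is synthetic and the dimensions (40 frequency-domain features per frame, reduced to 8) are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

n_frames, n_features = 300, 40   # e.g. 40 filter-bank energies per frame
X = rng.normal(size=(n_frames, n_features))

# Centre the data, then take the principal components via SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 8                            # trimmed feature count for the wearable
components = Vt[:k]              # (k, n_features) projection matrix
X_reduced = Xc @ components.T    # (n_frames, k) compact features

# Fraction of total variance retained by the k components.
explained = float((S[:k] ** 2).sum() / (S ** 2).sum())
```

Only the small projection matrix and the k-dimensional features need to live on the device, which is the source of the energy and latency savings the paper measures.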

    Clustering of Physical Activities for Quantified Self and mHealth Applications
