4,550 research outputs found

    Smart video sensors for 3D scene reconstruction of large infrastructures

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/s11042-012-1184-z

    This paper introduces a new 3D-based surveillance solution for large infrastructures. Our proposal is based on an accurate 3D reconstruction using the rich information obtained from a network of intelligent video-processing nodes. In this manner, if the scenario to cover is modeled in 3D with high precision, it is possible to locate the detected objects in the virtual representation. Moreover, as an improvement over previous 2D solutions, the possibility of modifying the viewpoint enables the application to choose the perspective that best suits the current state of the scenario. In this sense, the contextualization of the detected events in a 3D environment can offer a much better understanding of what is happening in the real world and where exactly it is happening. Details of the video-processing nodes are given, as well as of the 3D reconstruction tasks performed afterwards. The possibilities of such a system are described and the performance obtained is analyzed.

    This work has been partially supported by the ViCoMo project (ITEA2 project IP08009 funded by the Spanish MICINN with project TSI-020400-2011-57), the Spanish Government (TIN2009-14103-C03-03, DPI2008-06737-C02-01/02 and DPI 2011-28507-C02-02) and European FEDER funds.

    Ripollés Mateu, ÓE.; Simó Ten, JE.; Benet Gilabert, G.; Vivó Hernando, RA. (2014). Smart video sensors for 3D scene reconstruction of large infrastructures. Multimedia Tools and Applications 73(2):977-993. https://doi.org/10.1007/s11042-012-1184-z
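    The core geometric step such a system relies on, placing a 2D detection from a calibrated node into the 3D scene model, can be illustrated with a short sketch. This is not code from the paper; it assumes a pinhole camera with known intrinsics K and a world-to-camera pose (R, t), and it locates a detection by intersecting its viewing ray with the ground plane z = 0.

```python
import numpy as np

def locate_on_ground_plane(pixel, K, R, t):
    """Back-project an image point onto the world ground plane z = 0.

    pixel : (u, v) image coordinates, e.g. the foot point of a detection box
    K     : 3x3 camera intrinsic matrix
    R, t  : world-to-camera rotation (3x3) and translation (3,)
    Returns the 3D world point where the viewing ray meets z = 0.
    """
    # Ray direction in camera coordinates for the given pixel.
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    ray_cam = np.linalg.inv(K) @ uv1
    # Express the ray in world coordinates; the camera centre is -R^T t.
    ray_world = R.T @ ray_cam
    cam_center = -R.T @ t
    # Intersect with z = 0: cam_center.z + s * ray_world.z = 0.
    s = -cam_center[2] / ray_world[2]
    return cam_center + s * ray_world
```

    In a setup like the one described, each video-processing node would report image-plane detections and the reconstruction stage would apply a projection of this kind per camera before rendering the result in the virtual scene.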

    Scalable software architecture for on-line multi-camera video processing

    Get PDF
    In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
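    As a hedged illustration of the supervisor/worker split described above, the sketch below models the Central Unit as a parent process that spawns one Processing Unit per camera and collects detections over queues. The names (processing_unit, detect_objects) and the per-camera queueing are assumptions made for the example, not the paper's actual API.

```python
import multiprocessing as mp

def detect_objects(frame):
    # Hypothetical stand-in for the per-camera 2D object detection module.
    return []

def processing_unit(cam_id, frame_queue, result_queue):
    """One PU: pulls frames for its camera and runs the detection step."""
    while True:
        frame = frame_queue.get()
        if frame is None:                      # poison pill from the Central Unit
            break
        result_queue.put((cam_id, detect_objects(frame)))

if __name__ == "__main__":
    n_cameras = 2
    result_queue = mp.Queue()
    frame_queues = [mp.Queue(maxsize=8) for _ in range(n_cameras)]
    # The Central Unit supervises one PU process per camera.
    pus = [mp.Process(target=processing_unit, args=(i, q, result_queue))
           for i, q in enumerate(frame_queues)]
    for p in pus:
        p.start()
    for q in frame_queues:                     # feed one dummy frame, then shut down
        q.put(b"frame-bytes")
        q.put(None)
    for p in pus:
        p.join()
```

    Keeping each PU in its own process isolates the acquisition and processing phases per camera, so adding cameras only adds processes, which is one simple way to obtain the kind of scalability the abstract reports.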

    CCTV Surveillance System, Attacks and Design Goals

    Get PDF
    Closed-circuit television (CCTV) surveillance systems are frequently the subject of debate. Some parties seek to promote their benefits, such as their use in criminal investigations and in providing a feeling of safety to the public. They have also been on the receiving end of bad press when intrusiveness is considered to have outweighed the benefits. The correct design and use of such systems is paramount to ensure that a CCTV surveillance system meets the needs of the user, provides a tangible benefit, and provides safety and security for the wider law-abiding public. In focusing on the normative aspects of CCTV, the paper raises questions concerning the efficiency of understanding contemporary forms of ‘social ordering practices’ primarily in terms of technical rationalities while neglecting other, more material and ideological processes involved in the construction of social order. In this paper, a 360-degree view is presented of the assessment of the diverse CCTV video surveillance systems (VSS) of the recent past and present in accordance with technology. Further, an attempt has been made to compare different VSS in terms of their operational strengths and the attacks against them. Finally, the paper concludes with a number of future research directions in the design and implementation of VSS.

    Feature-based calibration of distributed smart stereo camera networks

    Get PDF
    A distributed smart camera network is a collective of vision-capable devices with enough processing power to execute algorithms for collaborative vision tasks. A true 3D sensing network applies to a broad range of applications, and local stereo vision capabilities at each node offer the potential for a particularly robust implementation. A novel spatial calibration method for such a network is presented, which obtains pose estimates suitable for collaborative 3D vision in a distributed fashion using two stages of registration on robust 3D features. The method is first described in a general, modular sense, assuming some ideal vision and registration algorithms. Then, existing algorithms are selected for a practical implementation. The method is designed independently of networking details, making only a few basic assumptions about the underlying network's capabilities. Experiments using both software simulations and physical devices are designed and executed to demonstrate performance.
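    The pairwise registration at the heart of such a calibration can be sketched as a least-squares rigid alignment of matched 3D features (the Kabsch/SVD solution). This is a generic illustration under the assumption that correspondences between the features seen by two nodes are already known; the paper's actual two-stage procedure and its robust matching are not reproduced here.

```python
import numpy as np

def rigid_registration(P, Q):
    """Least-squares rigid transform (R, t) aligning point set P onto Q.

    P, Q : (N, 3) arrays of matched 3D feature positions seen by two nodes.
    Returns R (3x3) and t (3,) such that Q ~ P @ R.T + t (Kabsch/SVD method).
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                   # guard against reflections
    t = cq - R @ cp
    return R, t
```

    In a distributed network, each node could run an alignment of this kind against its neighbours' feature sets and chain the resulting poses into a common reference frame.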

    Autonomous Multicamera Tracking on Embedded Smart Cameras

    Get PDF
    There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus
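    Tracking in the abstract is attributed to the well-known CamShift algorithm, which is available in OpenCV; the sketch below shows a single tracking instance driven by a hue-histogram back-projection. The handover via mobile agents is only hinted at in a comment, and the helper name and video source are assumptions made for the example.

```python
import cv2

def track_with_camshift(cap, init_window):
    """Track one object with CamShift on a hue-histogram back-projection.

    cap         : an opened cv2.VideoCapture
    init_window : (x, y, w, h) bounding box of the object in the first frame
    Yields the rotated rectangle returned by CamShift for each frame.
    """
    ok, frame = cap.read()
    x, y, w, h = init_window
    hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window = tuple(init_window)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        rot_rect, window = cv2.CamShift(back_proj, window, criteria)
        # A handover policy could fire here, e.g. when `window` drifts
        # towards the border of this camera's field of view.
        yield rot_rect
```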

    The Evolution of First Person Vision Methods: A Survey

    Full text link
    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, user-machine interaction and so on. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges and opportunities within the field. Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction

    Video Surveillance-Based Intelligent Traffic Management in Smart Cities

    Get PDF
    Visualization of video is considered an important part of visual analytics. Several challenges arise from massive video content; these can be addressed with data analytics, which is consequently gaining significance. As rapid progression in digital technologies has resulted in an explosion of video data, raising the need to create visualization and computer graphics from videos, a state-of-the-art algorithm is proposed in this chapter for 3D conversion of traffic video contents and their display on Google Maps. Glyph-based, time-stamped visualization is employed efficiently in surveillance videos and utilized for event detection. This method of visualization can reduce the complexity of the data while giving a complete view of the videos in a collection. The effectiveness of the proposed system has been shown by obtaining numerous unprocessed videos; the algorithm is tested on these videos without regard to field conditions. The proposed visualization technique produces promising results and is found effective in conveying meaningful information while alleviating the need to exhaustively search a colossal amount of video data.
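    One way to realize the image-to-map step implied by "displaying on Google Maps" is a planar homography fitted from a few reference points whose pixel and geographic coordinates are both known. The sketch below is an assumption about how such a mapping could be wired up with OpenCV; the coordinate values and the function name pixel_to_geo are placeholders, not values or code from the chapter.

```python
import cv2
import numpy as np

# Four reference points: pixel positions in the camera image and the
# corresponding (longitude, latitude) pairs read off the map.  The numbers
# below are placeholders, not values from the chapter.
img_pts = np.float32([[120, 540], [860, 530], [700, 180], [260, 190]])
geo_pts = np.float32([[74.3201, 31.5801], [74.3212, 31.5802],
                      [74.3210, 31.5815], [74.3203, 31.5814]])

H, _ = cv2.findHomography(img_pts, geo_pts)

def pixel_to_geo(u, v):
    """Map one detection's image position to map coordinates."""
    p = np.float32([[[u, v]]])
    lon, lat = cv2.perspectiveTransform(p, H)[0, 0]
    return float(lat), float(lon)
```

    The resulting latitude/longitude pairs could then be pushed to a map overlay, with the glyphs encoding the time stamp of each detection.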