
    Review of Person Re-identification Techniques

    Person re-identification across different surveillance cameras with disjoint fields of view has become one of the most interesting and challenging subjects in intelligent video surveillance. Although several methods have been developed and proposed, certain limitations and unresolved issues remain. In all existing re-identification approaches, feature vectors are extracted from segmented still images or video frames, and different similarity or dissimilarity measures are applied to these vectors. Some methods use simple constant metrics, whereas others learn models to obtain optimised metrics. Some build models based on local colour or texture information, and others model the gait of people. In general, the main objective of all these approaches is to achieve higher accuracy rates at lower computational cost. This study summarises several developments in the recent literature and discusses the various available methods used in person re-identification; their advantages and disadvantages are described and compared.
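    The contrast the abstract draws between simple constant metrics and learned, optimised metrics can be sketched as follows. The feature vectors and the metric matrix M are illustrative assumptions, not taken from any of the surveyed methods; in practice M would be learned from labelled identity pairs.

```python
import numpy as np

def euclidean_dist(a, b):
    """Simple constant metric: plain Euclidean distance between feature vectors."""
    return float(np.linalg.norm(a - b))

def mahalanobis_dist(a, b, M):
    """Learned metric: distance parameterised by a positive semi-definite matrix M."""
    d = a - b
    return float(np.sqrt(d @ M @ d))

# Toy colour-histogram features for two detections of (possibly) the same person.
gallery = np.array([0.2, 0.5, 0.3])
probe   = np.array([0.25, 0.45, 0.3])

print(euclidean_dist(gallery, probe))               # small distance suggests the same identity
print(mahalanobis_dist(gallery, probe, np.eye(3)))  # with M = I this reduces to the Euclidean distance
```

With M equal to the identity matrix the learned metric collapses to the constant one, which is why metric-learning methods can only match or improve on the Euclidean baseline for a given feature representation.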

    Enhancing camera surveillance using computer vision: a research note

    Purpose - The growth of police-operated surveillance cameras has outpaced the ability of humans to monitor them effectively. Computer vision is a possible solution. An ongoing research project on the application of computer vision within a municipal police department is described. Design/methodology/approach - Following a demystification of computer vision technology, its potential for police agencies is developed, with a focus on computer vision as a solution for two common surveillance camera tasks: live monitoring of multiple surveillance cameras and summarizing archived video files. Three unaddressed research questions are considered: can specialized computer vision applications for law enforcement be developed at this time; how will computer vision be utilized within existing public safety camera monitoring rooms; and what are the system-wide impacts of a computer vision capability on local criminal justice systems? Findings - Despite computer vision becoming accessible to law enforcement agencies, its impact has not been discussed or adequately researched, and there is little knowledge of computer vision or its potential in the field. Originality/value - This paper introduces and discusses computer vision from a law enforcement perspective and will be valuable to police personnel tasked with monitoring large camera networks and considering computer vision as a system upgrade.

    Camera localization using trajectories and maps

    We propose a new Bayesian framework for automatically determining the position (location and orientation) of an uncalibrated camera using the observations of moving objects and a schematic map of the passable areas of the environment. Our approach takes advantage of static and dynamic information on the scene structure through prior probability distributions for object dynamics. The proposed approach restricts the plausible positions where the sensor can be located while taking into account the inherent ambiguity of the given setting. The framework samples from the posterior probability distribution for the camera position via data-driven MCMC, guided by an initial geometric analysis that restricts the search space. A Kullback-Leibler divergence analysis then yields the final camera position estimate, while explicitly isolating ambiguous settings. The proposed approach is evaluated in synthetic and real environments, showing satisfactory performance in both ambiguous and unambiguous settings.
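    The role of the Kullback-Leibler divergence in separating confident from ambiguous position estimates can be illustrated on a toy discretised posterior. The three-cell distributions below are illustrative assumptions, not data from the paper; the idea is only that a posterior far from uniform (high KL divergence) signals an unambiguous camera position.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions over candidate map cells."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Posterior mass over three candidate camera positions:
unimodal = [0.90, 0.05, 0.05]   # unambiguous: one dominant position
bimodal  = [0.50, 0.45, 0.05]   # ambiguous: two comparably plausible positions
uniform  = [1/3, 1/3, 1/3]      # maximally uninformative reference

# Divergence from uniform quantifies how decisive the posterior is.
print(kl_divergence(unimodal, uniform))
print(kl_divergence(bimodal, uniform))
```

The unimodal posterior scores a markedly higher divergence from uniform than the bimodal one, so thresholding this quantity is one simple way to flag ambiguous settings explicitly.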

    INCORPORATING MACHINE VISION IN PRECISION DAIRY FARMING TECHNOLOGIES

    The inclusion of precision dairy farming technologies in dairy operations is an area of increasing research and industry direction. Machine vision-based systems are suitable for the dairy environment as they do not inhibit workflow, are capable of continuous operation, and can be fully automated. The research of this dissertation developed and tested three machine vision-based precision dairy farming technologies tailored to the latest generation of RGB+D cameras. The first system tested various imaging approaches for the potential use of machine vision for automated dairy cow feed intake monitoring. The second system monitored the gradual change in body condition score (BCS) for 116 cows over a nearly seven-month period. Several automated BCS systems have previously been proposed, but none have monitored the gradual change in BCS for a duration of this magnitude. These gradual changes convey a great deal of beneficial and immediate information on the health condition of every individual cow being monitored. The third system focused on automated dairy cow feature detection using Haar cascade classifiers to detect anatomical features, including the tailhead, hips, and rear regions of the cow body. These features were chosen to aid machine vision applications in determining whether and where a cow is present in an image or video frame. Once a cow has been detected, it must then be automatically identified to keep the system fully automated; this was also studied with a machine vision-based approach in this research as a complementary aspect to cow detection. Such systems have the potential to catch poor health conditions developing early on, aid in balancing the diet of the individual cow, and help farm management to better allocate resources, monetary and otherwise, in an appropriate and efficient manner.
Several different applications of this research are also discussed along with future directions for research, including the potential for additional automated precision dairy farming technologies, integrating many of these technologies into a unified system, and the use of alternative, potentially more robust machine vision cameras
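    The Haar cascade classifiers mentioned above are built from Haar-like rectangle features evaluated cheaply on an integral image. The sketch below shows that core computation on a synthetic vertical-edge patch; it is a minimal illustration of the feature type, not the dissertation's trained detector, and the patch values are made up.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: entry (y, x) holds the sum of img[:y+1, :x+1]."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle [y, y+h) x [x, x+w) via four table lookups."""
    ii = np.pad(ii, ((1, 0), (1, 0)))  # zero-pad so x = 0 / y = 0 work
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(img, x, y, w, h):
    """Two-rectangle Haar-like feature: left-half sum minus right-half sum."""
    ii = integral_image(img.astype(np.int64))
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# A vertical edge: dark left half, bright right half -> strong negative response.
img = np.hstack([np.zeros((4, 4)), np.full((4, 4), 255)])
print(haar_two_rect(img, 0, 0, 8, 4))
```

A cascade chains thousands of such features, selected by boosting, so that most non-object windows are rejected after evaluating only a handful of them.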

    Object Detection in Omnidirectional Images

    Nowadays, computer vision (CV) is widely used to solve real-world problems, which pose increasingly higher challenges. In this context, the use of omnidirectional video in a growing number of applications, along with the fast development of Deep Learning (DL) algorithms for object detection, drives the need for further research to improve existing methods originally developed for conventional 2D planar images. However, the geometric distortion that common sphere-to-plane projections produce, mostly visible in objects near the poles, in addition to the lack of open-source labelled omnidirectional image datasets, has made an accurate spherical image-based object detection algorithm a hard goal to achieve. This work is a contribution to developing datasets and machine learning models particularly suited for omnidirectional images represented in planar format through the well-known Equirectangular Projection (ERP). To this end, DL methods are explored to improve the detection of visual objects in omnidirectional images by considering the inherent distortions of ERP. An experimental study was first carried out to find out whether the error rate and type of detection errors were related to the characteristics of ERP images. This study revealed that the error rate of object detection using existing DL models with ERP images depends on the object's spherical location in the image. Based on these findings, a new object detection framework is proposed to obtain a uniform error rate across all spherical image regions. The results show that the pre- and post-processing stages of the implemented framework effectively reduce the dependency of detection performance on the image region.
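    The latitude-dependent distortion of ERP images that the abstract refers to can be made concrete with a small sketch. The mapping from ERP pixel coordinates to spherical longitude/latitude is standard; the image size and the 1/cos(latitude) horizontal stretching factor below are a simplified illustration of why objects near the poles are harder to detect.

```python
import math

def erp_to_sphere(u, v, width, height):
    """Map an ERP pixel (u, v) to spherical longitude/latitude in radians."""
    lon = (u / width) * 2 * math.pi - math.pi    # in [-pi, pi)
    lat = math.pi / 2 - (v / height) * math.pi   # pi/2 at top row, -pi/2 at bottom
    return lon, lat

def horizontal_stretch(lat):
    """ERP horizontal stretching factor: 1 at the equator, growing toward the poles."""
    return 1.0 / math.cos(lat)

# An object at the image centre sits on the equator and is undistorted;
# the same object near the top row is smeared across many more pixels.
_, lat_eq   = erp_to_sphere(960, 540, 1920, 1080)  # image centre -> equator
_, lat_pole = erp_to_sphere(960, 60, 1920, 1080)   # near the top -> high latitude
print(horizontal_stretch(lat_eq))
print(horizontal_stretch(lat_pole))
```

Because a detector trained on undistorted planar images sees roughly the equatorial statistics, its error rate grows with this stretch factor, which motivates the region-aware pre- and post-processing the work proposes.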

    The behavior of Atlantic cod, Gadus morhua, in an offshore net pen

    Aquaculture is one of the fastest growing food producing sectors of the world, with an annual compounding growth rate of 8.8% (since 1970). In spite of the rapid growth, scientific and public concerns have arisen about the sustainability and environmental impacts of the industry, including aquaculture's dependence on wild fish products, eutrophication from animal waste and uneaten food, and escapement of genetically altered farming stock. The use of behavioral studies may help refine commercial aquaculture by obtaining information to design operations that optimize growth and feed utilization, while increasing production and animal well-being. The goal of this study was to design and develop a system for monitoring fine-scale fish behavior in an offshore aquaculture net pen, using a combination of ultrasonic telemetry and underwater video. Additionally, 32 Atlantic cod, Gadus morhua, were studied, via ultrasonic telemetry, to provide a preliminary analysis of activity rhythms, cage utilization, and feeding behavior within a net pen. The first chapter provides a detailed description of a self-contained data collection system designed to study cod at a scale previously unavailable. Ultrasonic telemetry was used to monitor individual cod while a video system monitored group dynamics. A preliminary evaluation of the telemetry system documented high signal retention capable of logging fish positions every two seconds. Laboratory studies showed no influence of transmitter implantation on swimming speed, behavior or feeding. Additionally, our data documented that sampling rates over 10 seconds per location caused significant error in calculations of activity. The second chapter provided an analysis of cod movement, daily activity rhythms, behavior, swimming speeds and cage use. Individual cod behavior remained independent of conspecifics and consisted primarily of milling behavior. Cod exhibited clear diurnal rhythms, with activity highest during daytime hours.
Analysis of cage utilization documented inefficient use of the net pen, with individual space use limited to small overlapping areas within the bottom half of the net pen. Additionally, operational stresses were documented to elicit dramatic changes in behavior. The third chapter used feeding behavior, along with stomach content analysis, to assess feeding efficiency. Aggressive feeding behavior was displayed in 42.7 +/- 4.6% of cod daily, while 25.8 +/- 3.7% of cod displayed no interest in feeding during a feeding cycle. Additionally, 31.5 +/- 4.5% of cod displayed an intermediate feeding behavior whereby fish moved into the feeding area but did not make vertical movements toward the feed source. Stomach content analysis revealed that 77.6 +/- 14.1% of cod stomachs contained recently consumed pellets. Together, the stomach content and ultrasonic telemetry results suggest that cod displayed multiple feeding strategies: aggressive feeding, non-aggressive feeding, and non-feeding or scavenging.

    Object Tracking

    Object tracking consists of estimating the trajectories of moving objects in a sequence of images. Automating computer object tracking is a difficult task: the dynamics of multiple changing parameters representing the features and motion of the objects, as well as temporary partial or full occlusion of the tracked objects, have to be considered. This monograph presents the development of object tracking algorithms, methods and systems. Both the state of the art of object tracking methods and new trends in research are described in this book. Fourteen chapters are split into two sections: Section 1 presents new theoretical ideas, whereas Section 2 presents real-life applications. Despite the variety of topics contained in this monograph, it constitutes a consistent body of knowledge in the field of computer object tracking. The intention of the editor was to follow up the very quick progress in the development of methods as well as the extension of their applications.

    Measuring trustworthiness of image data in the internet of things environment

    Internet of Things (IoT) image sensors generate huge volumes of digital images every day. However, the easy availability and usability of photo-editing tools, vulnerabilities in communication channels and malicious software have made forgery attacks on image sensor data effortless, exposing IoT systems to cyberattacks. In IoT applications such as smart cities and surveillance systems, smooth operation depends on sensors sharing data with other sensors of identical or different types. Therefore, a sensor must be able to rely on the data it receives from other sensors; in other words, the data must be trustworthy. Sensors deployed in IoT applications are usually limited in processing and battery power, which prohibits the use of complex cryptography and security mechanisms and the adoption of universal security standards by IoT device manufacturers. Hence, estimating the trust of image sensor data is a defensive solution, as these data are used for critical decision-making processes. To our knowledge, only one published work has estimated the trustworthiness of digital images in forensic applications. However, that study's method depends on machine learning prediction scores returned by existing forensic models, which limits its usage where the underlying forensic models require different approaches (e.g., machine learning predictions, statistical methods, digital signatures, perceptual image hashes). Multi-type sensor data correlation and context awareness, absent from that study's model, can improve the trust measurement. To address these issues, novel techniques are introduced to accurately estimate the trustworthiness of IoT image sensor data with the aid of complementary non-imagery (numeric) data-generating sensors monitoring the same environment. The trust estimation models run on edge devices, relieving sensors of computationally intensive tasks.
First, to detect local image forgery (splicing and copy-move attacks), an innovative image forgery detection method is proposed based on the Discrete Cosine Transform (DCT), Local Binary Patterns (LBP) and a new feature extraction method using the mean operator. Using a Support Vector Machine (SVM), the proposed method is extensively tested on four well-known publicly available greyscale and colour image forgery datasets and on an IoT-based image forgery dataset that we built. Experimental results reveal the superiority of the proposed method over recent state-of-the-art methods in terms of widely used performance metrics and computational time, and demonstrate robustness against low availability of forged training samples. Second, a robust trust estimation framework for IoT image data is proposed, leveraging numeric data-generating sensors deployed in the same area of interest (AoI) in an indoor environment. As low-cost sensors allow many IoT applications to use multiple types of sensors to observe the same AoI, the complementary numeric data of one sensor can be exploited to measure the trust value of another image sensor's data. A theoretical model is developed using Shannon's entropy to derive the uncertainty associated with an observed event and Dempster-Shafer theory (DST) for decision fusion. The proposed model's efficacy in estimating the trust score of image sensor data is analysed by observing a fire event using IoT image and temperature sensor data in an indoor residential setup under different scenarios. The proposed model produces highly accurate trust scores in all scenarios with authentic and forged image data. Finally, as the outdoor environment varies dynamically due to different natural factors (e.g., lighting variations between day and night, the presence of different objects, smoke, fog, rain, or shadow in the scene), a novel trust framework is proposed that is suitable for outdoor environments with these contextual variations.
A transfer learning approach is adopted to derive a decision about an observation from image sensor data, while a statistical approach derives a decision about the same observation from numeric data generated by other sensors deployed in the same AoI. These decisions are then fused using CertainLogic and compared with DST-based fusion. A testbed was set up using a Raspberry Pi, an image sensor, a temperature sensor, an edge device, LoRa nodes, a LoRaWAN gateway and servers to evaluate the proposed techniques. The results show that CertainLogic is more suitable for measuring the trustworthiness of image sensor data in an outdoor environment.
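    The Dempster-Shafer decision fusion step can be sketched with a simplified frame of discernment containing just two hypotheses, trustworthy (T) and forged (F), plus the uncertain set {T, F}. The mass values below are illustrative assumptions, not figures from the thesis; the point is only how Dempster's rule redistributes conflicting evidence from the image and temperature sensors.

```python
def combine_dst(m1, m2):
    """Dempster's rule of combination for mass functions over the frame
    {T} (trustworthy), {F} (forged), and {T, F} (uncertain)."""
    keys = ["T", "F", "TF"]

    def meet(a, b):
        # Intersection of the two focal sets; None means the empty set (conflict).
        if a == "TF":
            return b
        if b == "TF":
            return a
        return a if a == b else None

    combined = {k: 0.0 for k in keys}
    conflict = 0.0
    for a in keys:
        for b in keys:
            inter = meet(a, b)
            if inter is None:
                conflict += m1[a] * m2[b]
            else:
                combined[inter] += m1[a] * m2[b]
    # Normalise by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Illustrative evidence: both sensors lean toward "trustworthy".
image_mass = {"T": 0.6, "F": 0.1, "TF": 0.3}
temp_mass  = {"T": 0.7, "F": 0.1, "TF": 0.2}
fused = combine_dst(image_mass, temp_mass)
print(fused["T"])  # agreement between the two sensors raises belief in trustworthiness
```

When independent sources agree, the fused belief in T exceeds either source's individual mass, which is the behaviour that makes multi-sensor fusion useful for trust scoring.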

    Unmanned Aircraft Systems in the Cyber Domain

    Unmanned Aircraft Systems are an integral part of the US national critical infrastructure. The authors have endeavored to bring a breadth and quality of information to the reader that is unparalleled in the unclassified sphere. This textbook will fully immerse and engage the reader/student in the cyber-security considerations of the rapidly emerging technology known as unmanned aircraft systems (UAS). The first edition covered National Airspace (NAS) policy issues; information security (INFOSEC); UAS vulnerabilities in key systems (Sense and Avoid / SCADA); navigation and collision avoidance systems; stealth design; intelligence, surveillance and reconnaissance (ISR) platforms; weapons systems security; electronic warfare considerations; data-links; jamming; operational vulnerabilities; and still-emerging political scenarios that affect US military and commercial decisions. This second edition discusses state-of-the-art technology issues facing US UAS designers. It focuses on counter unmanned aircraft systems (C-UAS), especially research designed to mitigate and terminate threats posed by swarms. Topics include high-altitude platforms (HAPS) for wireless communications; C-UAS and large-scale threats; acoustic countermeasures against swarms and building an Identify Friend or Foe (IFF) acoustic library; updates to the legal and regulatory landscape; UAS proliferation along the Chinese New Silk Road sea and land routes; and ethics in this new age of autonomous systems and artificial intelligence (AI).

    Advanced Technologies in Sheep Extensive Farming on a Climate Change Context

    Climate change represents a serious issue that negatively impacts animal performance. Sheep production in the Mediterranean region is mainly characterized by extensive farming systems in which animals are exposed to high temperatures during summer. New technologies for monitoring animal welfare and the environment could mitigate the impact of climate change, supporting the sustainability of animal production and ensuring food security. The present chapter summarizes the more recent advanced technologies based on passive sensors, wearable sensors, and the combination of different technologies with the latest machine learning protocols tested for sheep farming, aimed at monitoring animal welfare. A focus on precision technology solutions to detect heat stress is also presented.