
    The University Defence Research Collaboration In Signal Processing

    This chapter describes the development of algorithms for the automatic detection of anomalies in multi-dimensional, undersampled and incomplete datasets. The challenge in this work is to identify and classify behaviours as normal or abnormal, safe or threatening, from an irregular and often heterogeneous sensor network. Many defence and civilian applications can be modelled as complex networks of interconnected nodes with unknown or uncertain spatio-temporal relations. The behaviour of such heterogeneous networks can exhibit dynamic properties, reflecting evolution in both network structure (new nodes appearing and existing nodes disappearing) and inter-node relations. The UDRC work has addressed not only the detection of anomalies, but also the identification of their nature and their statistical characteristics. Normal patterns and changes in behaviour have been incorporated to provide an acceptable balance between true positive rate, false positive rate, performance and computational cost. Data quality measures have been used to ensure the models of normality are not corrupted by unreliable and ambiguous data. The context of each node's activity in a complex network offers an even more efficient anomaly detection mechanism. This has allowed the development of efficient approaches which not only detect anomalies but also go on to classify their behaviour.
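The abstract above does not specify the detection method, so the following is only a minimal, hypothetical sketch of one common ingredient of node-level anomaly detection: scoring each node's activity against the network-wide baseline with robust (median/MAD) statistics, which are not pulled toward the anomaly the way a mean/standard deviation would be. The activity data is invented.

```python
import numpy as np

# Hypothetical per-node activity counts (rows: timesteps, columns: nodes).
# Most nodes fluctuate around a shared baseline; node 3 has drifted.
activity = np.array([
    [ 9,  9, 10, 30, 10,  9],
    [10, 10, 10, 28, 11, 10],
    [11, 10, 11, 31, 11,  9],
    [10, 10, 10, 29, 10, 10],
])

def robust_node_scores(activity):
    """Score each node by the deviation of its mean activity from the
    network-wide median, in units of median absolute deviation (MAD)."""
    node_means = activity.mean(axis=0)
    med = np.median(node_means)
    mad = np.median(np.abs(node_means - med)) + 1e-9  # guard against MAD == 0
    return (node_means - med) / mad

scores = robust_node_scores(activity)
flagged = np.where(np.abs(scores) > 5.0)[0]
print(flagged)  # [3]
```

A real system would of course also model temporal dynamics and inter-node relations, as the chapter discusses; this sketch only shows the per-node scoring idea.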

    2015 Oil Observing Tools: A Workshop Report

    Since 2010, the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA) have provided satellite-based pollution surveillance in United States waters to regulatory agencies such as the United States Coast Guard (USCG). These technologies provide agencies with useful information regarding possible oil discharges. Unfortunately, there has been confusion as to how to interpret the images collected by these satellites and other aerial platforms, which can generate misunderstandings during spill events. Remote sensor packages on aircraft and satellites have advantages and disadvantages vis-à-vis human observers, because they do not “see” features or surface oil the same way. In order to improve observation capabilities during oil spills, applicable technologies must be identified, and then evaluated with respect to their advantages and disadvantages for the incident. In addition, differences between sensors (e.g., visual, IR, multispectral sensors, radar) and platform packages (e.g., manned/unmanned aircraft, satellites) must be understood so that reasonable approaches can be made if applicable and then any data must be correctly interpreted for decision support. NOAA convened an Oil Observing Tools Workshop to focus on the above actions and identify training gaps for oil spill observers and remote sensing interpretation to improve future oil surveillance, observation, and mapping during spills. The Coastal Response Research Center (CRRC) assisted NOAA’s Office of Response and Restoration (ORR) with this effort. The workshop was held on October 20-22, 2015 at NOAA’s Gulf of Mexico Disaster Response Center in Mobile, AL. The expected outcome of the workshop was an improved understanding, and greater use of technology to map and assess oil slicks during actual spill events. 
Specific workshop objectives included:
• Identify new developments in oil observing technologies useful for real-time (or near real-time) mapping of spilled oil during emergency events.
• Identify merits and limitations of current technologies and their usefulness to emergency response mapping of oil and reliable prediction of oil surface transport and trajectory forecasts. Current technologies include: the traditional human aerial observer, unmanned aircraft surveillance systems, aircraft with specialized sensor packages, and satellite earth observing systems.
• Assess training needs for visual observation (human observers with cameras) and sensor technologies (including satellites) to build skills and enhance proper interpretation for decision support during actual events.

    Automatic visual detection of human behavior: a review from 2000 to 2014

    Due to advances in information technology (e.g., digital video cameras, ubiquitous sensors), the automatic detection of human behaviors from video has become a very active research topic. In this paper, we perform a systematic literature review of this topic from 2000 to 2014, covering a selection of 193 papers searched from six major scientific publishers. The selected papers were classified into three main subjects: detection techniques, datasets and applications. The detection techniques were divided into four categories (initialization, tracking, pose estimation and recognition). The list of datasets includes eight examples (e.g., Hollywood action). Finally, several application areas were identified, including human detection, abnormal activity detection, action recognition, player modeling and pedestrian detection. Our analysis provides a road map to guide future research in designing automatic visual human behavior detection systems. This work is funded by the Portuguese Foundation for Science and Technology (FCT - Fundacao para a Ciencia e a Tecnologia) under research Grant SFRH/BD/84939/2012.

    Advanced Occupancy Measurement Using Sensor Fusion

    With roughly half of the energy used in buildings attributed to Heating, Ventilation, and Air Conditioning (HVAC) systems, there is clearly great potential for energy saving through improved building operations. Accurate knowledge of localised and real-time occupancy numbers can have compelling control applications for HVAC systems. However, existing technologies applied to building occupancy measurement are limited, such that a precise and reliable occupant count is difficult to obtain. For example, passive infrared (PIR) sensors commonly used for occupancy sensing in lighting control applications cannot differentiate between occupants grouped together; video sensing is often limited by privacy concerns; and atmospheric gas sensors (such as CO2 sensors) may be affected by the presence of electromagnetic interference (EMI) and may not show clear links between occupancy and sensor values. Past studies have indicated the need for a heterogeneous multi-sensory fusion approach for occupancy detection to address the shortcomings of existing occupancy detection systems. The aim of this research is to develop an advanced instrumentation strategy to monitor occupancy levels in non-domestic buildings, whilst facilitating lower energy use and maintaining an acceptable indoor climate. Accordingly, a novel multi-sensor based approach for occupancy detection in open-plan office spaces is proposed. The approach combines information from various low-cost and non-intrusive indoor environmental sensors, with the aim of merging the advantages of the various sensors whilst minimising their weaknesses. The proposed approach offers the potential for explicit information indicating occupancy levels to be captured. The proposed occupancy monitoring strategy has two main components: hardware system implementation and data processing. The hardware system implementation included a custom-made sound sensor and the refinement of CO2 sensors for EMI mitigation.
Two test beds were designed and implemented to support the research studies, including proof-of-concept and experimental studies. Data processing was carried out in several stages, with the ultimate goal being to detect occupancy levels. First, features of interest were extracted from all collected sensory data. Second, a symmetrical uncertainty analysis was applied to determine the predictive strength of individual sensor features. Third, a candidate feature subset was determined using a genetic-based search. Finally, a back-propagation neural network model was adopted to fuse the candidate multi-sensory features for estimation of occupancy levels. Several test cases were implemented to demonstrate and evaluate the effectiveness and feasibility of the proposed occupancy detection approach. Results have shown the potential of the proposed heterogeneous multi-sensor fusion based approach as an advanced strategy for the development of reliable occupancy detection systems in open-plan office buildings, capable of facilitating improved control of building services. In summary, the proposed approach has the potential to: (1) detect occupancy levels with an accuracy reaching 84.59% during occupied instances; (2) maintain an average occupancy detection accuracy of 61.01% in the event of sensor failure or drop-off (such as CO2 sensor drop-off); (3) monitor occupancy levels using just sound and motion sensors in a naturally ventilated space; and (4) facilitate potential daily energy savings reaching 53% if implemented for occupancy-driven ventilation control.
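The symmetrical uncertainty analysis mentioned in the abstract has a standard definition, SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)), which normalises mutual information to [0, 1]. A small illustrative sketch of computing it for discretised sensor features follows; the sensor readings here are invented, not taken from the study.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a discrete sequence."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), in [0, 1].
    1 means either variable fully predicts the other; 0 means independence."""
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0:
        return 0.0
    # Joint entropy via paired symbols, then I(X; Y) = H(X) + H(Y) - H(X, Y)
    joint = entropy([f"{a}|{b}" for a, b in zip(x, y)])
    mi = hx + hy - joint
    return 2.0 * mi / (hx + hy)

# Hypothetical discretised sensor features vs. occupancy level
occupancy = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 2])
co2_level = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 2])  # tracks occupancy exactly
noise     = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0])  # unrelated feature

print(symmetrical_uncertainty(co2_level, occupancy))  # 1.0
print(symmetrical_uncertainty(noise, occupancy))      # close to 0
```

Ranking features by SU against the occupancy label, as sketched here, is one way to realise the "predictive strength of individual sensor features" step before the genetic search over feature subsets.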

    Sensor fusion in smart camera networks for ambient intelligence

    This short report introduces the topics of PhD research that was conducted in 2008-2013 and defended in July 2013. The PhD thesis covers sensor fusion theory, gathers it into a framework with design rules for fusion-friendly design of vision networks, and elaborates on the rules through fusion experiments performed with four distinct applications of Ambient Intelligence.

    Vision-Based 2D and 3D Human Activity Recognition


    Vision Based Activity Recognition Using Machine Learning and Deep Learning Architecture

    Human activity recognition, with wide application in fields such as video surveillance, sports, human interaction, and elderly care, has had a great influence on improving people's standard of living. With the constant development of new architectures and models, and the increase in the computational capability of systems, the adoption of machine learning and deep learning for activity recognition has shown great improvement, with high performance, in recent years. My research goal in this thesis is to design and compare machine learning and deep learning models for activity recognition through videos collected from different media in the field of sports. Human activity recognition (HAR) is mostly the task of automatically recognising the action performed by a human from data collected from different sources. Based on the literature review, most data collected for analysis is either time-series data collected through different sensors or video-based data collected through cameras. So, firstly, our research analyses and compares different machine learning and deep learning architectures with sensor-based data collected from the accelerometer of a smartphone placed at different positions on the human body. Without any hand-crafted feature extraction methods, we found that deep learning architectures outperform most machine learning architectures, and that the use of multiple sensors yields higher accuracy than a dataset collected from a single sensor. Secondly, as collecting data from sensors in real time is not feasible in all fields, such as sports, we study activity recognition using video datasets. For this, we used two state-of-the-art deep learning architectures, previously trained on big annotated datasets, with transfer learning methods for activity recognition on three different sports-related publicly available datasets.
Extending the study to the different activities performed in a single sport, and to avoid the current trend of using special cameras and expensive set-ups around the court for data collection, we developed our own video dataset using sports coverage of basketball games broadcast through broadcasting media. A detailed analysis and experiments based on different criteria, such as the range of shots taken and scoring activities, are presented for 8 different activities using state-of-the-art deep learning architectures for video classification.
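The abstract does not detail how frame-level outputs become a clip-level label. One common late-fusion step when a pretrained image backbone is reused for video via transfer learning is to average the per-frame class scores over time and take the argmax; the sketch below illustrates this with invented class names and scores, not the thesis's actual classes or data.

```python
import numpy as np

# Hypothetical per-frame class scores for one basketball clip, as a frozen
# pretrained backbone with a small classification head might produce.
# Shape: (frames, classes).
CLASSES = ["free_throw", "three_pointer", "layup"]
frame_scores = np.array([
    [0.2, 0.7, 0.1],
    [0.1, 0.8, 0.1],
    [0.3, 0.5, 0.2],
    [0.6, 0.3, 0.1],   # an ambiguous frame
])

def video_prediction(scores):
    """Average frame-level scores over time, then take the argmax.
    This simple late fusion turns a per-frame image classifier into a
    clip-level video classifier, smoothing over ambiguous frames."""
    return CLASSES[int(np.argmax(scores.mean(axis=0)))]

print(video_prediction(frame_scores))  # "three_pointer"
```

More elaborate temporal models (e.g., recurrent or 3D-convolutional heads) replace this averaging step, but the average-then-argmax baseline is a useful reference point.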

    From Concept to Market: Surgical Robot Development

    Surgical robotics and its supporting technologies have become a prime example of modern applied information technology infiltrating our everyday lives. The development of these systems spans four decades, and only in the last few years have they achieved the market value and the rising customer base imagined by the early developers. This chapter guides the reader through the historical development of the most important systems, and provides references and lessons learnt for current engineers facing similar challenges. Special emphasis is put on system validation, assessment and clearance, as the most commonly cited barrier hindering the wider deployment of a system.