
    10411 Abstracts Collection -- Computational Video

    From 10.10.2010 to 15.10.2010, the Dagstuhl Seminar 10411 "Computational Video" was held in Schloss Dagstuhl - Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are collected in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Volume 110, Number 16 - Tuesday, February 19, 2013


    Factors Influencing Grass Utilization Patterns And Growth Performance In Outdoor Swine

    Six gestating Yorkshire sows were evaluated in a pasture grazing system over spring, summer-to-fall, and winter trials. The pasture was divided into four grass sections: 1) endophyte-infected Kentucky 31 Tall Fescue, 2) non-toxic endophyte-infected Max Q Fescue, 3) a multispecies mix of Redtop, Kentucky Bluegrass, and Kentucky 31 Fescue, and 4) common Bermudagrass. Each sow was fitted with a global positioning system (GPS) unit by Telespial Systems, which reported animal position to researchers at all times. The collected data were then used to determine how often different areas of the pasture were frequented. Grass score assessment was conducted after the sows were removed from pasture to determine associations between the percentage of time spent in each grass section and grass integrity. Growth performance was evaluated in offspring selected from the six Yorkshire sows in the winter trial: 40 Yorkshire crosses (Yorkshire x Yorkshire, Large Black x Yorkshire, and Berkshire x Yorkshire) that were finished in a hoop structure. An automated Feed Intake and Recording Equipment (FIRE) system was used to supply feed, weigh each pig, and measure feed intake. Growth performance was evaluated by measuring average daily gain (ADG) and feed intake (FI); feed efficiency (FE) was calculated from feed intake and average daily gain. Grass type did not influence the frequency of grass section use by the sows. Based on the collected data, time spent in the individual grass sections was Bermudagrass = 13.95%, Multispecies = 13.87%, Max Q = 18.94%, and Kentucky 31 Tall Fescue = 15.76%. Grass integrity data showed a higher frequency of grass score values of two (37.92%) and three (38.57%). Overall, the sows spent a greater percentage of time in the grass areas (62.52%) than on the platform (37.46%).
Growth performance of the sows' offspring was not impacted by breed of sire: 1) Yorkshire cross, FI = 1.5 kg, ADG = 1.5 kg, FE = 1.0; 2) Berkshire cross, FI = 1.4 kg, ADG = 1.5 kg, FE = 0.97; 3) Tamworth cross, FI = 1.5 kg, ADG = 1.5 kg, FE = 1.0. Gender did influence feed efficiency, with gilts attaining ADG values similar to males at better FE: 1) Male, FI = 1.6 kg, ADG = 1.5 kg, FE = 1.03; 2) Female, FI = 1.4 kg, ADG = 1.4 kg, FE = 0.9.
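The growth-performance metrics above can be sketched as follows. This is illustrative only: the abstract does not state the exact FE formula, so a gain-to-feed ratio (ADG / FI) is assumed here, and the pig record used is hypothetical, not study data.

```python
# Sketch of the growth-performance metrics, assuming FE = ADG / FI
# (gain-to-feed ratio); the abstract does not give the exact formula.

def average_daily_gain(start_weight_kg, end_weight_kg, days):
    """Average daily gain (ADG) over a finishing period."""
    return (end_weight_kg - start_weight_kg) / days

def feed_efficiency(adg_kg, feed_intake_kg):
    """Gain-to-feed ratio: kg gained per kg of feed consumed."""
    return adg_kg / feed_intake_kg

# Hypothetical pig record (not from the study data).
adg = average_daily_gain(start_weight_kg=30.0, end_weight_kg=120.0, days=60)
fe = feed_efficiency(adg, feed_intake_kg=1.6)
print(round(adg, 2), round(fe, 2))  # 1.5 0.94
```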

    FRIEND: A Cyber-Physical System for Traffic Flow Related Information Aggregation and Dissemination

    The major contribution of this thesis is to lay the theoretical foundations of FRIEND, a cyber-physical system for traffic Flow-Related Information aggrEgatioN and Dissemination. By integrating resources and capabilities at the nexus between the cyber and physical worlds, FRIEND will contribute to aggregating traffic flow data collected by the huge fleet of vehicles on our roads into a comprehensive, near real-time synopsis of traffic flow conditions. We anticipate providing drivers with a meaningful, color-coded, at-a-glance view of flow conditions ahead, alerting them to congested traffic. FRIEND can be used to provide accurate information about traffic flow and to propagate this information. The workhorse of FRIEND is the ubiquitous lane delimiters (a.k.a. cat's eyes) on our roadways, which at the moment are used simply as dumb reflectors. Our main vision is that by endowing cat's eyes with a modest power source and detection and communication capabilities, they will play an important role in collecting, aggregating, and disseminating traffic flow conditions to the driving public. We envision the cat's eyes system to be supplemented by road-side units (RSUs) deployed at regular intervals (e.g., every kilometer or so). The RSUs placed on opposite sides of the roadway constitute a logical unit and are connected by optical fiber under the median. Unlike inductive loop detectors, adjacent RSUs along the roadway are not connected to each other, thus avoiding the huge cost of optical fiber. Each RSU contains a GPS device (for time synchronization), an active Radio Frequency Identification (RFID) tag for communication with passing cars, a radio transceiver for RSU-to-RSU communication, and a laptop-class computing device. The physical components of FRIEND collect traffic flow-related data from passing vehicles.
The collected data are used by FRIEND's inference engine to build beliefs about the state of the traffic, to detect traffic trends, and to disseminate relevant traffic flow-related information along the roadway. The second contribution of this thesis is the development of an incident detection and classification algorithm that can classify different types of traffic incidents and then notify the appropriate recipients. We also compare our incident detection technique with other VANET techniques. Our third contribution is a novel strategy for information dissemination on highways. First, we aim to prevent secondary accidents. Second, we notify drivers far away from the accident of an expected delay, giving them the option to continue or exit before reaching the incident location. A new mechanism tracks the source of the incident while notifying drivers away from the accident. The longer an incident persists, the farther the information needs to be propagated. Furthermore, the denser the traffic, the faster it will back up: on high-density highways, an incident may form a backup of vehicles faster than on low-density highways. To account for this, information is propagated as a function of traffic density and elapsed time.
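The idea of propagating information as a function of density and time can be sketched with a toy model. The linear queue-growth rule, the function name, and every parameter value below are illustrative assumptions, not the thesis's actual dissemination algorithm.

```python
# Toy model of density- and time-dependent alert range: denser traffic
# backs up faster, so alerts must travel farther upstream. All
# parameters are illustrative assumptions.

def dissemination_range_km(elapsed_min, density_veh_per_km,
                           jam_density=150.0,
                           base_growth_km_per_min=0.05,
                           safety_margin_km=1.0):
    """Distance upstream of an incident that should receive alerts.

    The queue is assumed to grow linearly with elapsed time, scaled
    by how close the road is to jam density.
    """
    density_factor = min(density_veh_per_km / jam_density, 1.0)
    queue_km = base_growth_km_per_min * density_factor * elapsed_min
    return queue_km + safety_margin_km

# Same elapsed time, denser traffic -> wider alert range.
light = dissemination_range_km(elapsed_min=30, density_veh_per_km=30)
heavy = dissemination_range_km(elapsed_min=30, density_veh_per_km=120)
print(light, heavy)  # 1.3 2.2
```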

    Evaluation of Robust Deep Learning Pipelines Targeting Low SWaP Edge Deployment

    The deep learning technique of convolutional neural networks (CNNs) has greatly advanced the state of the art for computer vision tasks such as image classification and object detection. These solutions rely on large systems leveraging power-hungry GPUs to provide the computational power needed to achieve such performance. However, the size, weight, and power (SWaP) requirements of these conventional GPU-based deep learning systems are not suitable when a solution requires deployment to so-called Edge environments such as autonomous vehicles, unmanned aerial vehicles (UAVs), and smart security cameras. The objective of this work is to benchmark FPGA-based alternatives to conventional GPU systems that have the potential to offer similar CNN inference performance while being delivered in a low-SWaP platform suitable for Edge deployment. In this thesis we create equivalent pipelines for both GPU and FPGA which implement deep learning models for both image classification and object detection tasks. Beyond baseline benchmarking, we additionally quantify the impact on inference performance of two common real-world image degradation scenarios (simulated contrast-reduced capture and salt-and-pepper sensor noise) and their associated correction methods (gamma correction and median kernel filtering), selected as illustrative examples. The baseline system analysis, coupled with these additional robustness evaluations, provides a statistically significant benchmark comparison targeting a breadth of interest for the computer vision community.
We have conducted the following experiments to demonstrate the FPGA as an effective alternative to the GPU implementation when deployed to Edge environments: (1) we developed a hardware video processing architecture with an associated library of hardware processing functions to prototype a base FPGA ecosystem; (2) we established through benchmarking that two common CNN models (ResNet-50 and YOLO version 3) suffer a mere 1% drop in performance on FPGA versus GPU; (3) we show a quantitative baseline analysis of the image degradation/correction on the associated testing datasets; and (4) we demonstrated that our FPGA-based computer vision system is an ideal platform for Edge deployment given its comparable robustness to input degradation when optimal correction is applied. The significance of these findings is the demonstration of our FPGA-based solution as the superior candidate for Edge-deployed vision systems: our experiments show inference performance competitive with the conventional GPU solution and, with the correction methods applied, equivalent robustness to noise encountered during in-the-wild imaging, all at far lower SWaP requirements.
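The two degradation/correction pairs named above (contrast reduction corrected by gamma correction, salt-and-pepper noise corrected by median filtering) can be sketched in NumPy. The parameter values and the naive 3x3 filter below are illustrative, not the pipeline used in the thesis.

```python
import numpy as np

# Sketch of the two degradation/correction pairs on a grayscale image
# in [0, 1]. Parameter values are illustrative assumptions.

def add_salt_and_pepper(img, amount=0.05, rng=None):
    """Flip a fraction `amount` of pixels to pure black or white."""
    rng = rng or np.random.default_rng(0)
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < amount / 2] = 0.0          # pepper
    noisy[mask > 1 - amount / 2] = 1.0      # salt
    return noisy

def median_filter3(img):
    """Naive 3x3 median filter (edge pixels left unchanged for brevity)."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i-1:i+2, j-1:j+2])
    return out

def reduce_contrast(img, factor=0.5):
    """Compress the intensity range around mid-gray."""
    return 0.5 + factor * (img - 0.5)

def gamma_correct(img, gamma=0.8):
    """Power-law intensity correction."""
    return np.clip(img, 0, 1) ** gamma

img = np.linspace(0, 1, 64).reshape(8, 8)
restored = median_filter3(add_salt_and_pepper(img))
corrected = gamma_correct(reduce_contrast(img))
```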

    Interactive visualization of event logs for cybersecurity

    Hidden cyber threats revealed with new visualization software Eventpa

    A Hybrid Deep Learning Approach for Human Action Recognition

    Video-based Human Action Recognition (HAR) aims to identify human actions and movements from moving image sequences.
This field of computer vision has increasingly attracted the efforts of the research community due to its wide range of applications, ranging from healthcare to human-computer interaction. Recently, deep learning techniques addressing HAR problems have achieved promising performance, especially with the advent of depth sensors and the introduction of large-scale, challenging datasets. In this thesis we present a deep learning multimodal method addressing Human Action Recognition, utilizing RGB, depth, and skeletal information. The latter is represented by a 2D color image transformed into the spectral domain using the Discrete Fourier Transform (DFT), called an "Activity Image." The proposed network is based on a hybrid CNN-LSTM architecture. Initially, the classification ability of each of the modalities was tested independently; as a next step, different fusion approaches were evaluated. The architecture that achieved the best accuracy score was the multimodal approach, i.e., fusing RGB, depth, and Activity Images. The method was evaluated on subsets of activities from two large-scale datasets: common activities of daily living from the PKU-MMD dataset and medical conditions from the NTU RGB+D dataset.
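The Activity Image encoding described above can be sketched as follows: a skeleton sequence is packed into a 2D array (frames x flattened joint coordinates), scaled to image range, and mapped to the spectral domain with a 2D DFT. The shapes and normalization below are illustrative assumptions, not the thesis's exact recipe.

```python
import numpy as np

# Sketch of an "Activity Image": skeleton sequence -> 2D pseudo-image
# -> DFT magnitude. Normalization details are illustrative assumptions.

def activity_image(skeleton_seq):
    """skeleton_seq of shape (frames, joints, 3) -> DFT magnitude image."""
    frames, joints, dims = skeleton_seq.shape
    flat = skeleton_seq.reshape(frames, joints * dims)
    # Normalize coordinates to [0, 255], like a pseudo-color image.
    lo, hi = flat.min(), flat.max()
    img = (flat - lo) / (hi - lo + 1e-8) * 255.0
    # Discrete Fourier Transform; keep the log-scaled magnitude.
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(spectrum))

# Example: 32 frames of 25 joints (e.g., Kinect v2 skeletons).
seq = np.random.default_rng(0).normal(size=(32, 25, 3))
img = activity_image(seq)
print(img.shape)  # (32, 75)
```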

    Wireless sensor network as a distributed database

    Wireless sensor networks (WSNs) have played a role in various fields. In-network data processing is one of the most important and challenging techniques, as it affects the key features of WSNs: energy consumption, node life cycles, and network performance. In in-network processing, an intermediate node or aggregator fuses or aggregates sensor data collected from a group of sensors before transferring them to the base station. The advantage of this approach is that it minimizes the amount of information transferred, which matters given the nodes' scarce computational and energy resources. This thesis introduces the development of a hybrid in-network data processing scheme for WSNs that satisfies the WSNs' constraints. An architecture for in-network data processing is proposed at three levels: clustering, data compression, and data mining. At the clustering level, Neighbour-aware Multipath Cluster Aggregation (NMCA) is designed, combining cluster-based and multipath approaches to handle different packet loss rates. At the compression level, data compression schemes and an Optimal Dynamic Huffman (ODH) algorithm compress data at the cluster head. At the data mining level, a semantic data-mining model for fire detection is developed to extract information from the raw data, improving data accuracy and extracting fire events in simulation. A demonstration indoor location system with the in-network data processing approach is built to test the energy reduction achieved by the designed strategy. In conclusion, the added benefits that the technical work can provide for in-network data processing are discussed, and specific contributions and future work are highlighted.
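The core idea of in-network processing, a cluster head fusing its members' readings and forwarding one summary packet instead of many raw packets, can be sketched minimally. This illustrates the general aggregation principle only, not the NMCA or ODH algorithms developed in the thesis.

```python
# Minimal sketch of cluster-level in-network aggregation: the cluster
# head fuses member readings into one summary tuple before forwarding
# to the base station, cutting N packets down to 1.

def aggregate_cluster(readings):
    """Fuse raw sensor readings into one (min, max, mean) summary."""
    n = len(readings)
    return (min(readings), max(readings), sum(readings) / n)

# Four member nodes report temperatures; the head sends one packet.
readings = [21.5, 22.0, 21.8, 35.0]   # outlier could signal a fire event
summary = aggregate_cluster(readings)
print(summary)  # (min, max, mean)
```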