
    BioClimate: a Science Gateway for Climate Change and Biodiversity research in the EUBrazilCloudConnect project

    Climate and biodiversity systems are closely linked across a wide range of scales. To better understand the mutual interaction between climate change and biodiversity, there is a strong need for multidisciplinary skills, scientific tools, and access to a large variety of heterogeneous, often distributed, data sources. To that end, the EUBrazilCloudConnect project provides a user-oriented research environment built on top of a federated cloud infrastructure across Europe and Brazil, serving key needs in different scientific domains and validated through a set of use cases. Among them, the most data-centric one focuses on climate change and biodiversity research. As part of this use case, the BioClimate Science Gateway has been implemented to provide end-users transparent access to (i) a highly integrated, user-friendly environment, (ii) a large variety of data sources, and (iii) different analytics & visualization tools serving a wide spectrum of user needs and requirements. This paper presents a complete overview of BioClimate and the related scientific environment, in particular its Science Gateway, delivered to the end-user community at the end of the project.

    This work was supported by the EU FP7 EUBrazilCloudConnect Project (Grant Agreement 614048) and CNPq/Brazil (Grant Agreement no. 490115/2013-6).

    Fiore, S.; Elia, D.; Blanquer Espert, I.; Brasileiro, FV.; Nuzzo, A.; Nassisi, P.; Rufino, LAA. et al. (2019). BioClimate: a Science Gateway for Climate Change and Biodiversity research in the EUBrazilCloudConnect project. Future Generation Computer Systems. 94:895-909. https://doi.org/10.1016/j.future.2017.11.034

    DESIGN OPTIMIZATION OF EMBEDDED SIGNAL PROCESSING SYSTEMS FOR TARGET DETECTION

    Sensor networks for automated detection of targets, such as pedestrians and vehicles, are highly relevant in defense and surveillance applications. For this purpose, a variety of target detection algorithms and systems using different types of sensors have been proposed in the literature. Among them, systems based on non-image sensors are of special interest in many practical deployment scenarios because of their power efficiency and low computational loads. In this thesis, we investigate low power sensor systems for detecting people and vehicles using non-image sensors such as acoustic and seismic sensors. Our investigation is focused on design optimization across trade-offs including real-time performance, energy efficiency, and target detection accuracy, which are key design evaluation metrics for this class of systems. Design and implementation of low power, embedded target detection systems can be decomposed into two major, inter-related subproblems: (a) algorithm development, which encompasses the development or selection of detection algorithms and optimization of their parameters, and (b) system development, which involves the mapping of the algorithms derived from (a) into real-time, energy efficient implementations on the targeted embedded platforms. In this thesis, we address both of these subproblems in an integrated manner. That is, we investigate novel algorithmic techniques for improvement of accuracy without excessive computational complexity, and we develop new design methodologies, tools, and implementations for efficient realization of target detection algorithms on embedded platforms. We focus specifically on target detection systems that employ acoustic and seismic sensing modalities. These selected modalities support the low power design objectives of our work. However, we envision that our developed algorithms and implementation techniques can be extended readily to other types or combinations of relevant sensing modalities. 
    Throughout this research, we have developed prototypes of our new algorithms and design methods on embedded platforms, and we have experimented with these prototypes to demonstrate our findings and iteratively improve upon the achieved implementation trade-offs. The main contributions of this thesis are summarized in the following.

    (1) Classification algorithm for acoustic and seismic signals. We have developed a new classification algorithm for discrimination among people, vehicles, and noise. The algorithm is based on a new fusion technique for acoustic and seismic signals. Our new fusion technique was evaluated through experiments using actual measured datasets, which were collected from different sensors installed in different locations and at different times of day. Our proposed classification algorithm was shown to achieve a significant reduction in the number of false alarms compared to a baseline fusion approach.

    (2) Joint target localization and classification framework using sensor networks. We designed a joint framework for target localization and classification using a single generalized model for non-imaging-based multi-modal sensor data. For target localization, we exploited both sensor data and estimated dynamics within a local neighborhood. We validated the capabilities of our framework by using an actual multi-modal dataset, which includes ground-truth GPS information (e.g., time and position) and data from co-located seismic and acoustic sensors. Experimental results showed that our framework achieves better classification accuracy compared to state-of-the-art fusion algorithms using temporal accumulation, and achieves more accurate target localizations than a baseline target localization approach.

    (3) Design and optimization of target detection systems on embedded platforms using dataflow methods. We developed a foundation for our system-level design research by introducing a new rapid prototyping methodology and associated software tool. Using this tool, we presented the design and implementation of a novel, multi-mode embedded signal processing system for detection of people and vehicles related to our algorithmic contributions. We applied a strategically-configured suite of single- and dual-modality signal processing techniques together with dataflow-based design optimization for energy-efficient, real-time implementation. Through experiments using a Raspberry Pi platform, we demonstrated the capability of our target detection system to provide efficient operational trade-offs among detection accuracy, energy efficiency, and processing speed.

    (4) Software synthesis from dataflow schedule graphs on multicore platforms. We developed new software synthesis methods and tools for design and implementation of embedded signal processing systems using dataflow schedule graphs (DSGs). DSGs provide formal representations of dataflow schedules, which encapsulate information about the assignment of computational tasks (signal processing modules) to processing resources and the ordering of tasks that are assigned to the same resource. Building on fundamental DSG modeling concepts from the literature, we developed the first algorithms and supporting software synthesis tools for mapping DSG representations into efficient multi-threaded implementations. Our tools replace ad-hoc multicore signal processing system development processes with a structured process that is rooted in dataflow formalisms and supported by a high degree of automation. We evaluated our new DSG methods and tools through a demonstration involving multi-threaded implementation of our proposed classification algorithm and associated fusion technique for acoustic/seismic signals.
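The core idea behind mapping a dataflow schedule onto threads, assigning each actor to a processing resource and giving each resource one worker that fires its actors in order, can be illustrated with a deliberately tiny sketch; the actors, queue, and schedule here are invented for illustration and are not taken from the thesis tools:

```python
import threading
import queue

# Two toy dataflow "actors" (signal processing modules).
def scale(x):
    return 2 * x

def offset(x):
    return x + 1

# The schedule assigns `scale` to resource 0 and `offset` to resource 1;
# the queue plays the role of the dataflow edge between them.
edge = queue.Queue()
results = []

def resource0(tokens):
    for t in tokens:          # fire `scale` once per input token
        edge.put(scale(t))
    edge.put(None)            # end-of-stream marker

def resource1():
    while (t := edge.get()) is not None:
        results.append(offset(t))

t0 = threading.Thread(target=resource0, args=([1, 2, 3],))
t1 = threading.Thread(target=resource1)
t0.start(); t1.start()
t0.join(); t1.join()
print(results)  # [3, 5, 7]
```

Each worker preserves its resource's task ordering while the queue carries tokens between resources, which is the structural pattern a DSG-driven synthesis tool would generate automatically.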

    Myriad: a distributed machine vision application framework

    This thesis examines the potential for the application of distributed computing frameworks to industrial and also lightweight consumer-level Machine Vision (MV) applications. Traditional, stand-alone MV systems have many benefits in well-defined, tightly-controlled industrial settings, but expose limitations in interactive, de-localised and small-task applications that seek to utilise vision techniques. In these situations, single-computer solutions fail to suffice, and greater flexibility in terms of system construction, interactivity and localisation is required. Network-connected and distributed vision systems are proposed as a remedy to these problems, providing dynamic, componentised systems that may optionally be independent of location, or take advantage of networked computing tools and techniques, such as web servers, databases, proxies, wireless networking, secure connectivity, distributed computing clusters, web services and load balancing. The thesis discusses a system named Myriad, a distributed computing framework for Machine Vision applications. Myriad is composed of components, such as image processing engines and equipment controllers, which behave as enhanced web servers and communicate using simple HTTP requests. The roles of HTTP-based distributed computing servers in simplifying rapid development of networked applications and integrating those applications with existing networked tools and business processes are explored. Prototypes of Myriad components, written in Java, along with supporting PHP, Perl and Prolog scripts and user interfaces in C#, Java, VB and C++/Qt, are examined. Each component includes a scripting language named MCS, enabling remote clients (or other Myriad components) to issue single commands or execute sequences of commands locally to the component in a sustained session.

    The advantages of server-side scripting in this manner for distributed computing tasks are outlined, with emphasis on Machine Vision applications, as a means to overcome network connection issues and address problems where consistent processing is required. Furthermore, the opportunities to utilise scripting to form complex distributed computing network topologies and fully-autonomous federated networked applications are described, and examples are given of how to achieve functionality such as clusters of image processing nodes. Through the medium of experimentation involving the remote control of a model train set, cameras and lights, the ability of Myriad to perform the traditional roles of fixed, stand-alone Machine Vision systems is supported, along with discussion of opportunities to incorporate these elements into network-based dynamic collaborative inspection applications. In an example of 2D packing of remotely-acquired shapes, distributed computing extensions to Machine Vision tasks are explored, along with integration into larger business processes. Finally, the thesis examines the use of Machine Vision techniques and Myriad components to construct distributed computing applications with the addition of vision capabilities, leading to a new class of image-data-driven applications that exploit mobile computing and Pervasive Computing trends.
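The component model described above, vision engines acting as enhanced web servers driven by plain HTTP requests, can be sketched minimally as follows; the endpoint name and command are invented placeholders, not Myriad's actual MCS interface:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ComponentHandler(BaseHTTPRequestHandler):
    """A toy 'image processing engine' exposing one operation over HTTP."""

    def do_GET(self):
        if self.path.startswith("/threshold"):   # hypothetical operation name
            body = b"ok: applied threshold"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):                # silence per-request logging
        pass

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), ComponentHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A remote client (or another component) issues a single command as a GET.
reply = urllib.request.urlopen(f"http://127.0.0.1:{port}/threshold?level=128").read()
print(reply.decode())  # ok: applied threshold
server.shutdown()
```

Because each component speaks ordinary HTTP, it can sit behind proxies, load balancers, and web servers exactly as the abstract describes.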

    The Sensor Network Workbench: Towards Functional Specification, Verification and Deployment of Constrained Distributed Systems

    As the commoditization of sensing, actuation and communication hardware increases, so does the potential for dynamically tasked sense-and-respond networked systems (i.e., Sensor Networks or SNs) to replace existing disjoint and inflexible special-purpose deployments (closed-circuit security video, anti-theft sensors, etc.). While various solutions have emerged to many individual SN-centric challenges (e.g., power management, communication protocols, role assignment), perhaps the largest remaining obstacle to widespread SN deployment is that those who wish to deploy, utilize, and maintain a programmable Sensor Network lack the programming and systems expertise to do so. The contributions of this thesis center on the design, development and deployment of the SN Workbench (snBench). snBench embodies an accessible, modular programming platform coupled with a flexible and extensible run-time system that, together, support the entire life-cycle of distributed sensory services. As it is impossible to find a one-size-fits-all programming interface, this work advocates the use of tiered layers of abstraction that enable a variety of high-level, domain-specific languages to be compiled to a common (thin-waist) tasking language; this common tasking language is statically verified and can be subsequently re-translated, if needed, for execution on a wide variety of hardware platforms.

    snBench provides: (1) a common sensory tasking language (Instruction Set Architecture) powerful enough to express complex SN services, yet simple enough to be executed by highly constrained resources with soft, real-time constraints; (2) a prototype high-level language (and corresponding compiler) to illustrate the utility of the common tasking language and the tiered programming approach in this domain; (3) an execution environment and a run-time support infrastructure that abstract a collection of heterogeneous resources into a single virtual Sensor Network, tasked via this common tasking language; and (4) novel formal methods (i.e., static analysis techniques) that verify safety properties and infer implicit resource constraints to facilitate resource allocation for new services. This thesis presents these components in detail, as well as two specific case studies: the use of snBench to integrate physical and wireless network security, and the use of snBench as the foundation for semester-long student projects in a graduate-level Software Engineering course.
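As a rough illustration of what static verification of a thin-waist tasking language can look like, the sketch below checks a tiny expression tree for unknown operators and infers an implicit resource constraint (a sensor-read budget) before anything is deployed. The operator set, expression encoding, and budget are hypothetical, not snBench's actual ISA:

```python
# op name -> arity; "read_sensor" is the only resource-consuming primitive.
KNOWN_OPS = {"read_sensor": 0, "avg": 2, "gt": 2}
MAX_SENSOR_READS = 4           # hypothetical per-node budget

def verify(expr, counts=None):
    """Statically walk the task expression; reject unknown ops, bad arity,
    or programs whose inferred sensor usage exceeds the budget."""
    counts = counts if counts is not None else {"read_sensor": 0}
    op, *args = expr
    if op not in KNOWN_OPS:
        raise ValueError(f"unknown op: {op}")
    if op == "read_sensor":
        counts["read_sensor"] += 1
    elif len(args) != KNOWN_OPS[op]:
        raise ValueError(f"{op}: wrong arity")
    for a in args:
        if isinstance(a, tuple):  # literals (e.g. 30) need no checking
            verify(a, counts)
    if counts["read_sensor"] > MAX_SENSOR_READS:
        raise ValueError("exceeds inferred sensor-read budget")
    return counts

# Task: "is the average of two sensor readings above 30?"
task = ("gt", ("avg", ("read_sensor",), ("read_sensor",)), 30)
print(verify(task))  # {'read_sensor': 2}
```

The appeal of the thin-waist design is that any number of high-level languages can compile down to such a small, checkable core.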

    Human Machine Interaction

    In this book, the reader will find a set of papers divided into two sections. The first section presents different proposals focused on the human-machine interaction development process. The second section is devoted to different aspects of interaction, with a special emphasis on physical interaction.

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
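The event data model described above (a stream of timestamped, signed per-pixel brightness changes) can be made concrete with a minimal sketch that accumulates events over a time window into a crude "event frame"; the resolution and event values are arbitrary toy data:

```python
H, W = 4, 4

# Each event: (timestamp in microseconds, x, y, polarity),
# where polarity is the sign of the brightness change.
events = [(10, 1, 2, +1), (15, 1, 2, +1), (40, 3, 0, -1)]

def event_frame(events, t0, t1):
    """Sum signed events in [t0, t1) into an HxW image-like grid."""
    frame = [[0] * W for _ in range(H)]
    for t, x, y, p in events:
        if t0 <= t < t1:      # only events inside the window contribute
            frame[y][x] += p
    return frame

f = event_frame(events, 0, 50)
print(f[2][1], f[0][3])  # 2 -1
```

Windowed accumulation like this is one of the simplest ways to feed asynchronous event streams into conventional frame-based vision pipelines, at the cost of discarding the fine temporal resolution that makes these sensors attractive.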

    An Abstraction Framework for Tangible Interactive Surfaces

    This cumulative dissertation discusses, by the example of four subsequent publications, the various layers of a tangible interaction framework, which has been developed in conjunction with an electronic musical instrument with a tabletop tangible user interface. Based on the experiences collected during the design and implementation of that particular musical application, this research mainly concentrates on the definition of a general-purpose abstraction model for the encapsulation of physical interface components that are commonly employed in the context of an interactive surface environment. Along with a detailed description of the underlying abstraction model, this dissertation also describes an actual implementation in the form of a detailed protocol syntax, which constitutes the common element of a distributed architecture for the construction of surface-based tangible user interfaces. The initial implementation of the presented abstraction model within an actual application toolkit comprises the TUIO protocol and the related computer-vision-based object and multi-touch tracking software reacTIVision, along with its principal application within the Reactable synthesizer. The dissertation concludes with an evaluation and extension of the initial TUIO model, presenting TUIO2, a next-generation abstraction model designed for a more comprehensive range of tangible interaction platforms and related application scenarios.
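A loose sketch of the kind of bundle structure a TUIO-style protocol conveys per update: an "alive" list of currently tracked session IDs, per-object "set" states, and a frame sequence number. This is a simplified paraphrase of that pattern, not the exact TUIO wire format (real TUIO messages are carried over OSC and include more attributes):

```python
def make_bundle(fseq, objects):
    """Build one update bundle for the tracked objects.

    objects: session_id -> (x, y, angle), with x/y normalized to [0, 1].
    """
    alive = ["alive"] + sorted(objects)                 # who is on the surface
    sets = [["set", sid, x, y, angle]                   # per-object state
            for sid, (x, y, angle) in sorted(objects.items())]
    return [alive] + sets + [["fseq", fseq]]            # frame sequence number

bundle = make_bundle(7, {3: (0.25, 0.5, 0.0)})
print(bundle)  # [['alive', 3], ['set', 3, 0.25, 0.5, 0.0], ['fseq', 7]]
```

Grouping alive/set/fseq into one bundle lets a receiver detect object removal (an ID missing from "alive") and drop out-of-order frames, which is why trackers like reacTIVision and clients such as the Reactable can stay consistent over lossy transports.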