
    A Planning Pipeline for Large Multi-Agent Missions

    In complex multi-agent applications, human operators are often tasked with planning and managing large heterogeneous teams of humans and autonomous vehicles. Although the use of these autonomous vehicles broadens the scope of meaningful applications, many of their systems remain unintuitive and difficult to master for human operators whose expertise lies in the application domain rather than at the platform level. Current research focuses on developing the individual capabilities necessary to plan multi-agent missions of this scope, placing little emphasis on integrating these components into a full pipeline. This paper presents a complete and user-agnostic planning pipeline for large multi-agent missions known as the HOLII GRAILLE. The system takes a holistic approach to mission planning by integrating capabilities in human-machine interaction, flight path generation, and validation and verification. Component modules of the pipeline are explored individually, as is their integration into a whole system. Lastly, implications for future mission planning are discussed.

    Close-range mini-UAV photogrammetry for architecture survey

    The survey of historical façades involves several bottlenecks, mainly related to the geometrical structure, the decorative framework, the presence of natural or artificial obstacles, and environmental limitations. The urban context imposes additional restrictions, constraining ground-based acquisition and leading to loss of building data. The integration of TLS and close-range photogrammetry overcomes some of these problems, but not the shadowing effects caused by the ground point of view. In recent years, the widespread use of UAVs in survey activity has enlarged survey capabilities, enabling a deeper knowledge in architectural analysis. In the meantime, regulations governing UAV use have been introduced in different countries, strongly restricting their application in urban areas. Recently, very small and light platforms have been presented that can partially escape these regulatory restrictions, opening up very interesting future scenarios. This article presents the application of one of these very small RPAS (less than 300 g), equipped with a low-cost camera, to a close-range photogrammetric survey of a historical building façade in Bologna (Italy). The analysis aims to assess the system's accuracy and its capacity for acquiring detail. The final aim of the paper is to validate the use of this new platform in an architectural survey pipeline, widening the future application of close-range photogrammetry in the architecture acquisition process.
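    The accuracy and detail-acquisition capacity discussed above ultimately hinge on the ground sample distance (GSD) of the imagery. A minimal sketch of that calculation follows; the sensor pitch, focal length, and acquisition distance are purely illustrative assumptions, since the abstract does not report the platform's actual camera parameters.

    ```python
    # Ground sample distance (GSD) for a close-range facade survey.
    # All numeric values below are illustrative, not the paper's data.

    def gsd_mm_per_px(pixel_pitch_um: float, focal_length_mm: float,
                      object_distance_m: float) -> float:
        """GSD = pixel pitch * object distance / focal length."""
        pixel_pitch_mm = pixel_pitch_um / 1000.0
        distance_mm = object_distance_m * 1000.0
        return pixel_pitch_mm * distance_mm / focal_length_mm

    # e.g. 1.55 um pixels, 4.5 mm lens, facade 10 m away
    print(round(gsd_mm_per_px(1.55, 4.5, 10.0), 2))  # mm on the facade per pixel
    ```

    Halving the acquisition distance halves the GSD, which is why flying a sub-300 g platform close to the façade can recover detail that ground-based stations miss.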

    From FPGA to ASIC: A RISC-V processor experience

    This work documents a design flow using these tools on the Lagarto RISC-V processor, and the RTL design considerations that must be taken into account when moving from a design for FPGA to a design for ASIC.

    Neuroimaging study designs, computational analyses and data provenance using the LONI pipeline.

    Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges: management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu
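    The core idea behind such graphical workflow environments is a directed acyclic graph of processing modules executed in dependency order. The toy executor below illustrates that model only; it is not the LONI Pipeline API, and the module names are invented for illustration.

    ```python
    # A minimal DAG workflow executor: each module lists the modules it
    # depends on, and execution proceeds in topological order.
    from graphlib import TopologicalSorter

    # Hypothetical neuroimaging steps; a real pipeline would invoke tools here.
    workflow = {
        "convert_format": set(),
        "skull_strip": {"convert_format"},
        "register": {"skull_strip"},
        "segment": {"register"},
    }

    def run(workflow):
        order = list(TopologicalSorter(workflow).static_order())
        for step in order:
            print(f"running {step}")  # placeholder for a real tool invocation
        return order

    run(workflow)
    ```

    Provenance tracking then amounts to recording, for each node, its inputs, parameters, and tool version as the graph executes.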

    Trade-off analysis and design of a Hydraulic Energy Scavenger

    In recent years there has been growing interest in intelligent, autonomous devices for household applications. In the near future this technology will be part of our society; sensing and actuation will be integrated into the environment of our houses by means of energy scavengers and wireless microsystems. These systems will be capable of monitoring the environment, communicating with people and with each other, actuating, and supplying themselves independently. This concept is now possible thanks to the low power consumption of electronic devices and the accurate design of energy scavengers that harvest energy from the surrounding environment. In principle, an autonomous device comprises three main subsystems: an energy scavenger, an energy storage unit and an operational stage. The energy scavenger harvests very small amounts of energy from the surroundings and converts it into electrical energy. This energy can be stored in a small storage unit such as a small battery or capacitor, thus being available as a power supply. The operational stage can perform a variety of tasks depending on the application. Within its application range, this kind of system presents several advantages with respect to regular devices using external energy supplies. They can be simpler to install, as no external connections are needed; they are environmentally friendly and might be economically advantageous in the long term. Furthermore, their autonomous nature permits application in locations where the local energy grid is not present and allows them to be 'hidden' in the environment, independent of interaction with humans. In the present paper, an energy-harvesting system used to supply a hydraulic control valve of a heating system for a typical residential application is studied. The system converts the kinetic energy of the water flow inside the pipes of the heating system to power the energy scavenger.
The harvesting unit is composed of a hydraulic turbine that converts the kinetic energy of the water flow into rotational motion to drive a small electric generator. The design phases comprise a trade-off analysis to define the most suitable hydraulic turbine and electric generator for the energy scavenger, and an optimization of the components to satisfy the system specifications.
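    The power budget of such a scavenger follows from the standard hydraulic power relation P = ρ g Q H, scaled by the turbine and generator efficiencies. The sketch below is a back-of-the-envelope estimate only; the flow rate, head, and efficiency figures are illustrative assumptions, not the paper's design values.

    ```python
    # Rough electrical power estimate for a pipe-mounted hydraulic scavenger.
    # All numeric inputs below are assumed values for illustration.

    RHO = 1000.0   # water density, kg/m^3
    G = 9.81       # gravitational acceleration, m/s^2

    def electrical_power_w(flow_l_per_min: float, head_m: float,
                           eta_turbine: float, eta_generator: float) -> float:
        """P_elec = eta_t * eta_g * rho * g * Q * H."""
        q_m3_s = flow_l_per_min / 1000.0 / 60.0   # L/min -> m^3/s
        p_hydraulic = RHO * G * q_m3_s * head_m
        return eta_turbine * eta_generator * p_hydraulic

    # e.g. 10 L/min across 2 m of equivalent head, modest efficiencies
    print(electrical_power_w(10.0, 2.0, 0.35, 0.60))
    ```

    Even under these assumptions the output is well below a watt, which is why the low power consumption of modern electronics is a precondition for this class of device.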

    Instruction-Level Abstraction (ILA): A Uniform Specification for System-on-Chip (SoC) Verification

    Modern Systems-on-Chip (SoC) designs are increasingly heterogeneous and contain specialized semi-programmable accelerators in addition to programmable processors. In contrast to the pre-accelerator era, when the ISA played an important role in verification by enabling a clean separation of concerns between software and hardware, verification of these "accelerator-rich" SoCs presents new challenges. From the perspective of hardware designers, there is a lack of a common framework for the formal functional specification of accelerator behavior. From the perspective of software developers, there exists no unified framework for reasoning about software/hardware interactions of programs that interact with accelerators. This paper addresses these challenges by providing a formal specification and high-level abstraction for accelerator functional behavior. It formalizes the concept of an Instruction-Level Abstraction (ILA), developed informally in our previous work, and shows its application in modeling and verification of accelerators. This formal ILA extends the familiar notion of instructions to accelerators and provides a uniform, modular, and hierarchical abstraction for modeling software-visible behavior of both accelerators and programmable processors. We demonstrate the applicability of the ILA through several case studies of accelerators (for image processing, machine learning, and cryptography), and a general-purpose processor (RISC-V). We show how the ILA model facilitates equivalence checking between two ILAs, and between an ILA and its hardware finite-state machine (FSM) implementation. Further, this equivalence checking supports accelerator upgrades using the notion of ILA compatibility, similar to processor upgrades using ISA compatibility. Comment: 24 pages, 3 figures, 3 tables
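    The equivalence checking the abstract describes can be illustrated at toy scale: two instruction-level models are equivalent if every instruction maps every reachable state to the same next state in both. The 4-bit accumulator "accelerator" below is invented for illustration (real ILA checking uses symbolic methods over much larger state spaces, not enumeration).

    ```python
    # Toy instruction-level equivalence check between a spec model and an
    # implementation model of a hypothetical 4-bit accumulator accelerator.
    from itertools import product

    def spec_step(state, instr, operand):
        """Specification-level next-state function."""
        if instr == "ADD":
            return (state + operand) & 0xF   # 4-bit wraparound via masking
        if instr == "CLR":
            return 0
        return state                          # NOP leaves state unchanged

    def impl_step(state, instr, operand):
        """Implementation model, written differently but intended to match."""
        if instr == "CLR":
            return 0
        if instr == "ADD":
            return (state + operand) % 16     # wraparound via modulo
        return state

    def equivalent():
        """Exhaustively compare both models over all states and instructions."""
        for state, instr, operand in product(range(16), ("ADD", "CLR", "NOP"),
                                             range(16)):
            if spec_step(state, instr, operand) != impl_step(state, instr, operand):
                return False
        return True

    print(equivalent())
    ```

    Checking an ILA against an RTL implementation follows the same shape, except the "step" on the hardware side is the FSM advanced until the instruction's architectural updates are visible.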

    The Dark Energy Survey Data Management System

    The Dark Energy Survey collaboration will study cosmic acceleration with a 5000 deg² grizY survey in the southern sky over 525 nights from 2011-2016. The DES data management (DESDM) system will be used to process and archive these data and the resulting science-ready data products. The DESDM system consists of an integrated archive, a processing framework, an ensemble of astronomy codes and a data access framework. We are developing the DESDM system for operation in the high performance computing (HPC) environments at NCSA and Fermilab. Operating the DESDM system in an HPC environment offers both speed and flexibility. We will employ it for our regular nightly processing needs, and for more compute-intensive tasks such as large-scale image coaddition campaigns, extraction of weak lensing shear from the full survey dataset, and massive seasonal reprocessing of the DES data. Data products will be available to the Collaboration and later to the public through a virtual-observatory-compatible web portal. Our approach leverages investments in publicly available HPC systems, greatly reducing hardware and maintenance costs to the project, which must deploy and maintain only the storage, database platforms, and orchestration and web portal nodes that are specific to DESDM. In Fall 2007, we tested the current DESDM system on both simulated and real survey data. We used TeraGrid to process 10 simulated DES nights (3 TB of raw data), ingesting and calibrating approximately 250 million objects into the DES Archive database. We also used DESDM to process and calibrate over 50 nights of survey data acquired with the Mosaic2 camera. Comparison to truth tables in the case of the simulated data, and internal crosschecks in the case of the real data, indicate that astrometric and photometric data quality is excellent. Comment: To be published in the proceedings of the SPIE conference on Astronomical Instrumentation (held in Marseille in June 2008).
This preprint is made available with the permission of SPIE. Further information, together with a preprint containing full-quality images, is available at http://desweb.cosmology.uiuc.edu/wik