
    FPGA-based module for SURF extraction

    We present a complete hardware and software solution: an FPGA-based embedded computer vision module capable of running the SURF image feature extraction algorithm. Aside from image analysis, the module embeds a Linux distribution that allows running programs tailored to particular applications. The module is based on a Virtex-5 FXT FPGA, which features powerful configurable logic and an embedded PowerPC processor. We describe the module hardware as well as the custom FPGA image processing cores that implement the algorithm's most computationally expensive stage, interest point detection. The module's overall performance is evaluated and compared to CPU- and GPU-based solutions. Results show that the embedded module achieves distinctiveness comparable to the SURF software implementation running on a standard CPU while being faster and consuming significantly less power and space. It therefore makes the SURF algorithm usable in applications with power and space constraints, such as autonomous navigation of small mobile robots.
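
    The detection stage the authors accelerate is SURF's box-filter approximation of the Hessian determinant, evaluated over an integral image so that every rectangular filter lobe costs four memory lookups. A minimal NumPy sketch of that response at the smallest SURF scale (9x9 filters) is given below; the lobe geometry is simplified and the area normalization omitted, so it illustrates the data flow the FPGA cores implement rather than the paper's exact design.

```python
import numpy as np

def integral_image(img):
    # Running sum over rows then columns: any rectangle sum becomes O(1).
    return img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)

def box(ii, r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] from at most four integral-image lookups.
    s = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        s -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        s -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s

def hessian_response(img):
    # Approximate det(Hessian) map at the smallest SURF scale (9x9 filters).
    ii = integral_image(img)
    h, w = img.shape
    resp = np.zeros((h, w))
    for r in range(4, h - 4):
        for c in range(4, w - 4):
            # Dyy: three stacked 3x5 lobes, the middle one weighted -2.
            dyy = (box(ii, r - 4, c - 2, r - 1, c + 3)
                   - 2 * box(ii, r - 1, c - 2, r + 2, c + 3)
                   + box(ii, r + 2, c - 2, r + 5, c + 3))
            # Dxx: the same pattern rotated by 90 degrees.
            dxx = (box(ii, r - 2, c - 4, r + 3, c - 1)
                   - 2 * box(ii, r - 2, c - 1, r + 3, c + 2)
                   + box(ii, r - 2, c + 2, r + 3, c + 5))
            # Dxy: four 3x3 lobes in the quadrants around (r, c).
            dxy = (box(ii, r - 3, c - 3, r, c)
                   + box(ii, r + 1, c + 1, r + 4, c + 4)
                   - box(ii, r - 3, c + 1, r, c + 4)
                   - box(ii, r + 1, c - 3, r + 4, c))
            # SURF's determinant approximation with the usual 0.9 weight.
            resp[r, c] = dxx * dyy - (0.9 * dxy) ** 2
    return resp
```

    Interest points are then the local maxima of this response across position and scale, which is why the integral image and the box sums dominate the computational cost that the FPGA cores offload.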

    A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

    Fully autonomous miniaturized robots (e.g., drones) with artificial intelligence (AI)-based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence. Visual navigation based on AI approaches, such as deep neural networks (DNNs), is becoming pervasive for standard-size drones but is considered out of reach for nano-drones with a size of a few cm². In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop, end-to-end DNN-based visual navigation. To achieve this goal we developed a complete methodology for parallel execution of complex DNNs directly on board resource-constrained, milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source CrazyFlie 2.0 nano-quadrotor. As part of our general methodology we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on board within a strict 6 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can be used to span a wide performance range: at its peak performance corner it achieves 18 fps while still consuming, on average, just 3.5% of the power envelope of the deployed nano-aircraft.

    Comment: 15 pages, 13 figures, 5 tables, 2 listings, accepted for publication in the IEEE Internet of Things Journal (IEEE IOTJ).
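
    To make "closed-loop end-to-end DNN-based visual navigation" concrete, the skeleton below mimics the kind of control loop the abstract describes. Everything here is an assumption for illustration: the function names are invented, and the dual output (steering angle plus collision probability, in the style of the DroNet-type network cited as [1]) is a common choice rather than a confirmed detail; the real system runs the CNN on the GAP8 cluster, not in Python.

```python
import numpy as np

MAX_SPEED = 1.5  # m/s forward-speed cap; illustrative value, not from the paper

def capture_frame():
    # Stand-in for the nano-drone's low-resolution grayscale camera driver.
    return np.zeros((200, 200), dtype=np.uint8)

def dnn_forward(frame):
    # Stand-in for the on-board CNN; assumed to emit a steering angle
    # and a collision probability (DroNet-style outputs).
    return 0.0, 0.1

def send_setpoint(speed, steering):
    # Stand-in for the flight-controller interface on the quadrotor.
    print(f"setpoint: v={speed:.2f} m/s, steer={steering:+.2f} rad")

def control_step():
    frame = capture_frame()
    steering, p_collision = dnn_forward(frame)
    # Throttle forward speed as collision probability rises, so the
    # perception output directly closes the loop on the flight commands.
    send_setpoint((1.0 - p_collision) * MAX_SPEED, steering)

if __name__ == "__main__":
    for _ in range(6):  # the paper sustains 6 such iterations per second
        control_step()
```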

    Storing and Querying Probabilistic XML Using a Probabilistic Relational DBMS

    This work explores the feasibility of storing and querying probabilistic XML in a probabilistic relational database. Our approach is to adapt known techniques for mapping XML to relational data such that the possible worlds are preserved. We show that this approach can work for any XML-to-relational technique by adapting a representative schema-based technique (inlining) as well as a representative schemaless one (XPath Accelerator). We investigate the maturity of probabilistic relational databases for this task through experiments with one of the state-of-the-art systems, called Trio.
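
    For a flavor of the schemaless direction, the sketch below stores XML nodes in the XPath Accelerator's pre/post-order encoding and attaches a probability column, so that a descendant-axis query remains a simple range predicate. Table and column names are assumptions for illustration; the paper's actual encoding of possible worlds is more involved than a single probability per node.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE node (
        pre   INTEGER PRIMARY KEY,  -- preorder rank
        post  INTEGER,              -- postorder rank
        level INTEGER,              -- depth in the tree
        tag   TEXT,
        p     REAL                  -- probability attached to the node
    )""")
# Tiny document: persons(person(name, phone), person)
rows = [
    (0, 4, 0, "persons", 1.0),
    (1, 2, 1, "person",  1.0),
    (2, 0, 2, "name",    0.8),   # uncertain sub-element
    (3, 1, 2, "phone",   0.6),
    (4, 3, 1, "person",  1.0),
]
conn.executemany("INSERT INTO node VALUES (?, ?, ?, ?, ?)", rows)

# Descendant axis in XPath Accelerator style:
# d is a descendant of v iff d.pre > v.pre AND d.post < v.post.
cur = conn.execute("""
    SELECT d.tag, d.p
    FROM node v JOIN node d ON d.pre > v.pre AND d.post < v.post
    WHERE v.pre = 1
    ORDER BY d.pre""")
print(cur.fetchall())   # [('name', 0.8), ('phone', 0.6)]
```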

    Video in e-learning systems

    The world of multimedia is changing rapidly, and we must be prepared to adopt and exploit new teaching methods. This is especially true for educational films, both as standalone video and as video delivered over the Internet. Our first task is to decide whether the material to be developed will be an independent film or part of an e-learning course; the method of construction differs in the two cases. The next step is to select the film's target group. There is a wide range of possible viewers or participants (who must have a certain level of basic knowledge), and the result should also be accessible to people with disabilities. The final product ought to be suitable for e-learning and distance learning alike. The use of information technology in education is now widespread, and the spread of e-learning is part of this trend. E-learning can be used effectively only if the system is filled with electronic educational material. The most effective materials are multimedia materials, and their effectiveness can be further improved by the use of video.

    From Design to Production Control Through the Integration of Engineering Data Management and Workflow Management Systems

    At a time when many companies are under pressure to reduce "time-to-market", the management of product information from the early stages of design through assembly to manufacture and production has become increasingly important. Similarly, in the construction of high-energy physics devices, the collection of (often evolving) engineering data is central to the subsequent physics analysis. Traditionally, design engineers in industry have employed Engineering Data Management systems (also called Product Data Management systems) to coordinate and control access to documented versions of product designs. However, these systems provide control only at the collaborative design level and are seldom used beyond design. Workflow management systems, on the other hand, are employed in industry to coordinate and support the more complex and repeatable work processes of the production environment. Commercial workflow products cannot support the highly dynamic activities found both in the design stages of product development and in rapidly evolving workflow definitions. The integration of product data management with workflow management can provide support for product development from initial CAD/CAM collaborative design through to the support and optimisation of production workflow activities. This paper investigates this integration, proposes a philosophy for the support of product data throughout the full development and production lifecycle, and demonstrates its usefulness in the construction of CMS detectors.

    Comment: 18 pages, 13 figures.

    Product Line Architecture for Hadrontherapy Control System: Applications Development and Certification

    Hadrontherapy is the treatment of cancer with charged ion beams. Because these beams must be accelerated to very high energies, the particle accelerators used in this treatment are complex and composed of several sub-systems, and control systems are employed for their supervision and control. The Italian National Hadrontherapy Facility (CNAO) is currently modernizing one of the software environments of its control system. This project will allow the integration of new types of devices, such as mobile devices, into the control system, as well as the introduction of newer technologies into the environment. To achieve this, this work began with the requirements analysis and the definition of a product line architecture for applications of the upgraded control system environment. The product line architecture focuses on reliability, maintainability, and ease of compliance with medical software certification directives. This was followed by the design and development of several software services that allow the environment's applications to communicate with other components of the control system, covering remote file access, relational data access, and OPC-UA. In addition, several libraries and tools have been developed to support the development of future control system applications following the defined product line architecture. Lastly, a pilot application was created using the tools developed during this work, together with the preliminary results of a cross-environment integration project. The approach is evaluated by comparing the developed tools to their legacy counterparts and by estimating the impact of future applications following the defined product line architecture.
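
    As a sense of the protocol-level interaction behind one of those services, the snippet below reads a single value over OPC-UA using the python-opcua client library. The endpoint URL and node identifier are hypothetical, a running server is assumed, and the CNAO environment is not necessarily Python-based; this only illustrates what an OPC-UA read looks like.

```python
from opcua import Client  # python-opcua package

# Hypothetical endpoint and node id, for illustration only; a reachable
# OPC-UA server must be running for this to connect.
client = Client("opc.tcp://localhost:4840")
try:
    client.connect()
    node = client.get_node("ns=2;s=Accelerator.BeamEnergy")
    print("value:", node.get_value())
finally:
    client.disconnect()
```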

    The Thin Gap Chambers database experience in test beam and preparations for ATLAS

    Thin gap chambers (TGCs) are used for the muon trigger system in the forward region of the LHC experiment ATLAS. The TGCs are expected to provide a trigger signal within the 25 ns bunch spacing. An extensive system test of the ATLAS muon spectrometer has been performed in the H8 beam line at the CERN SPS during the last few years. A relational database was used to store the conditions of the tests as well as the configuration of the system. This database provided the detector control system with the information needed to configure the front-end electronics, and it is used to assist online operation and maintenance. The same database stores the non-event conditions and configuration parameters needed later by the offline reconstruction software. A larger-scale database has been produced to support the whole TGC system, integrating all production, QA test, and assembly information. A 1/12th model of the whole TGC system is currently in use for testing the performance of this database in configuring and tracking the condition of the system. A prototype of the database was first implemented during the H8 test beams. This paper describes the database structure, its interface to other systems, and its operational performance.

    Comment: Proceedings IEEE Nuclear Science Symposium 2005, Stockholm, Sweden, May 2005.
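
    A minimal relational sketch of the two roles the abstract assigns to the database, configuration pushed to the front-end electronics and conditions kept for offline reconstruction, is given below. All table and column names are invented for illustration; the ATLAS schema is far richer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Configuration: parameters loaded into the front-end electronics.
    CREATE TABLE configuration (
        chamber_id TEXT,
        board      TEXT,
        parameter  TEXT,
        value      TEXT,
        valid_from TEXT
    );
    -- Conditions: measured state of the system during a run, stored
    -- for later offline reconstruction.
    CREATE TABLE conditions (
        run_number  INTEGER,
        chamber_id  TEXT,
        quantity    TEXT,   -- e.g. high voltage, gas flow, threshold
        value       REAL,
        recorded_at TEXT
    );
""")
conn.execute("INSERT INTO configuration VALUES "
             "('T5', 'ASD-01', 'threshold_mV', '100', '2005-05-01')")
conn.execute("INSERT INTO conditions VALUES "
             "(1234, 'T5', 'HV_V', 2900.0, '2005-05-02T10:00:00')")
print(conn.execute("SELECT * FROM conditions").fetchall())
```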

    NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps

    Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving many state-of-the-art (SOA) visual processing tasks. Even though graphics processing units (GPUs) are most often used in training and deploying CNNs, their power efficiency is less than 10 GOp/s/W for single-frame runtime inference. We propose a flexible and efficient CNN accelerator architecture called NullHop that implements SOA CNNs useful for low-power and low-latency application scenarios. NullHop exploits the sparsity of neuron activations in CNNs to accelerate the computation and reduce memory requirements. The flexible architecture allows high utilization of available computing resources across kernel sizes ranging from 1x1 to 7x7. NullHop can process up to 128 input and 128 output feature maps per layer in a single pass. We implemented the proposed architecture on a Xilinx Zynq FPGA platform and present results showing how our implementation reduces external memory transfers and compute time in five different CNNs, ranging from small networks up to the widely known large VGG16 and VGG19. Post-synthesis simulations using Mentor ModelSim in a 28 nm process with a clock frequency of 500 MHz show that the VGG19 network achieves over 450 GOp/s. By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the MAC units, and reaches a power efficiency of over 3 TOp/s/W in a core area of 6.3 mm². As further proof of NullHop's usability, we interfaced its FPGA implementation with a neuromorphic event camera for real-time interactive demonstrations.
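
    The compression idea behind the sparse representation is easy to see in miniature: after ReLU, most activations are zero, so a feature map can be shipped as a one-bit-per-entry sparsity mask plus the list of non-zero values, and zero entries never enter the MAC pipeline. The NumPy sketch below shows that encoding and its round trip; the accelerator's actual bitstream format differs in detail.

```python
import numpy as np

def encode(fmap):
    # One bit per entry marks non-zeros; only non-zero values are stored.
    mask = fmap != 0
    return np.packbits(mask.ravel()), fmap[mask], fmap.shape

def decode(packed_mask, values, shape):
    # Rebuild the dense map by scattering values where the mask is set.
    n = int(np.prod(shape))
    mask = np.unpackbits(packed_mask)[:n].reshape(shape).astype(bool)
    out = np.zeros(shape, dtype=values.dtype)
    out[mask] = values
    return out

# ReLU output of a toy layer: most entries end up zero.
fmap = np.maximum(0, np.random.randn(8, 8).astype(np.float32) - 1.0)
packed, vals, shape = encode(fmap)
ratio = fmap.nbytes / (packed.nbytes + vals.nbytes)
print(f"compression ratio ~{ratio:.1f}x")
assert np.array_equal(decode(packed, vals, shape), fmap)
```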