
    Extending OpenVX for Model-based Design of Embedded Vision Applications

    Developing computer vision applications for low-power heterogeneous systems is gaining increasing interest in the embedded systems community. Even more interesting is the tuning of such embedded software for the target architecture when it is driven by multiple constraints (e.g., performance, peak power, energy consumption). Indeed, developers frequently run into system-level inefficiencies and bottlenecks that cannot be quickly addressed by traditional methods. In this context, OpenVX has been proposed as the standard platform for developing portable, optimized, and power-efficient vision applications targeting embedded systems. Nevertheless, adopting OpenVX for rapid prototyping, early algorithm parametrization, and validation of complex embedded applications is a very challenging task. This paper presents a methodology to integrate a model-based design environment with OpenVX. The methodology allows applying Matlab/Simulink for the model-based design, parametrization, and validation of computer vision applications. It then allows the automatic synthesis of the application model into an OpenVX description for hardware- and constraint-aware application tuning. Experimental results were obtained with an application for digital image stabilization developed in Simulink and then automatically synthesized into OpenVX-VisionWorks code for an NVIDIA Jetson TX1 board.
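
    For readers unfamiliar with OpenVX, the sketch below illustrates the kind of graph-based description such a synthesis flow targets. It is a minimal, generic example written against the standard OpenVX C API (a Gaussian blur followed by Sobel gradient extraction); the image size and node choice are arbitrary assumptions, and it is not the image-stabilization pipeline generated by the paper.

        #include <VX/vx.h>
        #include <cstdio>

        int main() {
            // Create the OpenVX context and an empty processing graph.
            vx_context context = vxCreateContext();
            vx_graph graph = vxCreateGraph(context);

            // Input and output images live in the context; the intermediate image is
            // virtual, so the runtime may place it wherever is most efficient.
            vx_image input   = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
            vx_image blurred = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_U8);
            vx_image grad_x  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_S16);
            vx_image grad_y  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_S16);

            // Build the dataflow graph: Gaussian blur -> Sobel gradients.
            vxGaussian3x3Node(graph, input, blurred);
            vxSobel3x3Node(graph, blurred, grad_x, grad_y);

            // Verification lets the runtime validate the graph and apply
            // hardware-aware optimizations before execution.
            if (vxVerifyGraph(graph) == VX_SUCCESS) {
                vxProcessGraph(graph);  // execute the whole pipeline
            } else {
                std::printf("graph verification failed\n");
            }

            vxReleaseImage(&input);
            vxReleaseImage(&blurred);
            vxReleaseImage(&grad_x);
            vxReleaseImage(&grad_y);
            vxReleaseGraph(&graph);
            vxReleaseContext(&context);
            return 0;
        }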

    A model-based design flow for embedded vision applications on heterogeneous architectures

    The ability to gather information from images comes naturally to humans and is one of the principal inputs for understanding the external world. Computer vision (CV) is the process of extracting such knowledge from the visual domain in an algorithmic fashion. The computational power required to process this information is very high. Until recently, the only feasible way to meet non-functional requirements such as performance was to develop custom hardware, which is costly, time-consuming, and cannot be reused for general purposes. The recent introduction of low-power and low-cost heterogeneous embedded boards, in which CPUs are combined with accelerators such as GPUs, DSPs, and FPGAs, combines the hardware efficiency needed for non-functional requirements with the flexibility of software development. Embedded vision is the term used for CV algorithms applied in the embedded field, which usually must satisfy, besides functional requirements, non-functional requirements such as real-time performance, power, and energy efficiency. Rapid prototyping, early algorithm parametrization, testing, and validation of complex embedded video applications for such heterogeneous architectures is a very challenging task. This thesis presents a comprehensive framework that: 1) is based on a model-based paradigm. Differently from standard state-of-the-art approaches, which require designers to manually model the algorithm in a programming language, the proposed approach allows for rapid prototyping, algorithm validation, and parametrization in a model-based design environment (i.e., Matlab/Simulink). The framework relies on a multi-level design and verification flow by which the high-level model is semi-automatically refined towards the final automatic synthesis onto the target hardware device. 2) Relies on a polyglot parallel programming model. The proposed model combines different programming languages and environments, such as C/C++, OpenMP, PThreads, OpenVX, OpenCV, and CUDA, to best exploit different levels of parallelism while guaranteeing semi-automatic customization. 3) Optimizes application performance and energy efficiency through a novel algorithm for the mapping and scheduling of the application tasks on the heterogeneous computing elements of the device. This algorithm, called exclusive earliest finish time (XEFT), takes into consideration the possible multiple implementations of tasks for different computing elements (e.g., a task primitive for CPU and an equivalent parallel implementation for GPU). It introduces and takes advantage of the notion of exclusive overlap between primitives to improve load balancing. This thesis is the result of three years of research activity, during which all the incremental steps made to compose the framework have been tested on real case studies.
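
    To make the scheduling idea concrete, the sketch below shows an earliest-finish-time style assignment when every task has alternative implementations on different computing elements. It is an illustration written for this summary, with hypothetical task names and cost estimates; it omits task dependencies and the exclusive-overlap analysis that distinguishes XEFT from plain earliest-finish-time list scheduling.

        #include <algorithm>
        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        // One possible implementation of a task on a specific computing element.
        struct Implementation {
            std::string device;   // e.g. "CPU" or "GPU"
            double runtime;       // estimated execution time of this variant
        };

        struct Task {
            std::string name;
            std::vector<Implementation> variants;  // alternative CPU/GPU implementations
        };

        int main() {
            // Hypothetical task list with per-device cost estimates.
            std::vector<Task> tasks = {
                {"color_convert", {{"CPU", 4.0}, {"GPU", 1.5}}},
                {"optical_flow",  {{"CPU", 9.0}, {"GPU", 2.0}}},
                {"histogram",     {{"CPU", 2.0}, {"GPU", 3.5}}},
            };

            // Time at which each device becomes free.
            std::map<std::string, double> device_ready = {{"CPU", 0.0}, {"GPU", 0.0}};

            // Greedy earliest-finish-time mapping: for every task, pick the
            // implementation/device pair that finishes soonest given the current load.
            for (const Task& t : tasks) {
                const Implementation* best = nullptr;
                double best_finish = 0.0;
                for (const Implementation& impl : t.variants) {
                    double finish = device_ready[impl.device] + impl.runtime;
                    if (best == nullptr || finish < best_finish) {
                        best = &impl;
                        best_finish = finish;
                    }
                }
                device_ready[best->device] = best_finish;
                std::cout << t.name << " -> " << best->device
                          << " (finishes at " << best_finish << ")\n";
            }
            return 0;
        }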

    Rapid Prototyping of Embedded Vision Systems: Embedding Computer Vision Applications into Low-Power Heterogeneous Architectures

    Embedded vision is a disruptive new technology in the vision industry. It is a revolutionary concept with far-reaching implications, and it is opening up new applications and shaping the future of entire industries. It is applied in self-driving cars, autonomous vehicles in agriculture, and digital dermascopes that help specialists make more accurate diagnoses, among many other unique and cutting-edge applications. The design of such systems gives rise to new challenges for embedded software developers. Embedded vision applications are characterized by stringent performance constraints, to guarantee real-time behaviour, and at the same time by energy constraints, to save battery on mobile platforms. In this paper, we address these challenges by proposing an overall view of the problem and by analysing current solutions. We present our latest results on embedded vision design automation along two main aspects: the adoption of the model-based paradigm for embedded vision rapid prototyping, and the application of heterogeneous programming languages to improve system performance. The paper presents our recent results on the design of a localization and mapping application combined with image recognition based on deep learning, optimized for an NVIDIA Jetson TX2.

    EyeRIS: A General-Purpose System for Eye Movement Contingent Display Control

    In experimental studies of visual performance, the need often emerges to modify the stimulus according to the eye movements performed by the subject. The methodology of Eye Movement-Contingent Display (EMCD) enables accurate control of the position and motion of the stimulus on the retina. EMCD procedures have been used successfully in many areas of vision science, including studies of visual attention, eye movements, and physiological characterization of neuronal response properties. Unfortunately, the difficulty of real-time programming and the unavailability of flexible and economical systems that can be easily adapted to the diversity of experimental needs and laboratory setups have prevented the widespread use of EMCD control. This paper describes EyeRIS, a general-purpose system for performing EMCD experiments on a Windows computer. Based on a digital signal processor with analog and digital interfaces, this integrated hardware and software system is responsible for sampling and processing oculomotor signals and subject responses and for modifying the stimulus displayed on a CRT according to the gaze-contingent procedure specified by the experimenter. EyeRIS is designed to update the stimulus within a delay of 10 ms. To thoroughly evaluate EyeRIS' performance, this study (a) examines the response of the system in a number of EMCD procedures and computational benchmarking tests, (b) compares the accuracy of implementation of one particular EMCD procedure, retinal stabilization, to that produced by a standard tool used for this task, and (c) examines EyeRIS' performance in one of the many EMCD procedures that cannot be executed by means of any other currently available device. National Institute of Health (EY15732-01)
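
    As a rough illustration of what a gaze-contingent procedure such as retinal stabilization computes on every update (this is not EyeRIS code; the function names, the simulated gaze signal, and the toy display call are assumptions made for the sketch), the stimulus is shifted by the eye's displacement from a reference position so that the image stays approximately fixed on the retina:

        #include <cstdio>
        #include <utility>

        // Hypothetical stand-ins for the eye tracker and the display; a real system
        // would read the oculomotor signal from hardware and redraw on the CRT.
        static std::pair<double, double> sample_gaze(int t) {
            // Simulated gaze drifting rightwards by 0.1 deg per sample.
            return {0.1 * t, 0.0};
        }

        static void draw_stimulus(double x_deg, double y_deg) {
            std::printf("stimulus drawn at (%.2f, %.2f) deg\n", x_deg, y_deg);
        }

        int main() {
            const double stim_x = 2.0, stim_y = 0.0;   // nominal stimulus position

            // Reference gaze position captured at the start of the trial.
            auto [ref_x, ref_y] = sample_gaze(0);

            // Gaze-contingent loop: each iteration stands for one sampling period
            // (EyeRIS aims to update the display within roughly 10 ms of an eye movement).
            for (int t = 1; t <= 5; ++t) {
                auto [gaze_x, gaze_y] = sample_gaze(t);

                // Retinal stabilization: shift the stimulus by the eye's displacement
                // from the reference, so the image stays roughly fixed on the retina.
                draw_stimulus(stim_x + (gaze_x - ref_x), stim_y + (gaze_y - ref_y));
            }
            return 0;
        }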

    Engineering evolutionary control for real-world robotic systems

    Evolutionary Robotics (ER) is the field of study concerned with the application of evolutionary computation to the design of robotic systems. Two main issues have prevented ER from being applied to real-world tasks, namely scaling to complex tasks and the transfer of control to real-robot systems. Finding solutions to complex tasks is challenging for evolutionary approaches due to the bootstrap problem and deception. When the task goal is too difficult, the evolutionary process drifts in regions of the search space with equally low levels of performance and therefore fails to bootstrap. Furthermore, the search space tends to become rugged (deceptive) as task complexity increases, which can lead to premature convergence. Another prominent issue in ER is the reality gap. Behavioral control is typically evolved in simulation and only transferred to the real robotic hardware when a good solution has been found. Since simulation is an abstraction of the real world, the accuracy of the robot model and its interactions with the environment is limited. As a result, control evolved in a simulator tends to display lower performance in reality than in simulation. In this thesis, we present a hierarchical control synthesis approach that enables the use of ER techniques for complex tasks on real robotic hardware by mitigating the bootstrap problem, deception, and the reality gap. We recursively decompose a task into sub-tasks and synthesize control for each sub-task. The individual behaviors are then composed hierarchically. The possibility of incrementally transferring control as the controller is composed allows transferability issues to be addressed locally in the controller hierarchy. Our approach features hybridity, allowing different control synthesis techniques to be combined. We demonstrate our approach in a series of tasks that go beyond the complexity of tasks where ER has previously been successfully applied. We further show that hierarchical control can be applied in single-robot systems and in multi-robot systems. Given our long-term goal of enabling the application of ER techniques to real-world tasks, we systematically validate our approach on real robotic hardware. For one of the demonstrations in this thesis, we have designed and built a swarm robotic platform, and we show the first successful transfer of evolved and hierarchical control to a swarm of robots outside of controlled laboratory conditions. This work has been supported by the Portuguese Foundation for Science and Technology (Fundação para a Ciência e Tecnologia) under the grants SFRH/BD/76438/2011 and EXPL/EEI-AUT/0329/2013, and by Instituto de Telecomunicações under the grant UID/EEA/50008/2013.
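
    The hierarchical composition described above can be pictured as controllers that delegate to sub-controllers. The sketch below only illustrates that structure: the class names, the threshold-based arbitration rule, and the two behaviors are invented for this summary, and any node could equally be an evolved neural controller or a hand-coded one, reflecting the hybridity mentioned in the abstract.

        #include <iostream>
        #include <memory>

        // Sensor snapshot and actuator command, reduced to toy fields.
        struct Observation { double distance_to_goal; double obstacle_distance; };
        struct Command { double left_speed; double right_speed; };

        // Common interface for primitive behaviors (evolved or hand-coded)
        // and for composite controllers.
        class Controller {
        public:
            virtual ~Controller() = default;
            virtual Command act(const Observation& obs) = 0;
        };

        class GoToGoal : public Controller {
        public:
            Command act(const Observation&) override { return {1.0, 1.0}; }   // drive forward
        };

        class AvoidObstacle : public Controller {
        public:
            Command act(const Observation&) override { return {1.0, -1.0}; }  // turn in place
        };

        // Composite node: arbitrates between its sub-behaviors. A fixed distance
        // threshold is used here purely for illustration.
        class AvoidThenGo : public Controller {
        public:
            AvoidThenGo() : avoid_(new AvoidObstacle), go_(new GoToGoal) {}
            Command act(const Observation& obs) override {
                return (obs.obstacle_distance < 0.3) ? avoid_->act(obs) : go_->act(obs);
            }
        private:
            std::unique_ptr<Controller> avoid_, go_;
        };

        int main() {
            AvoidThenGo root;                    // the composed, hierarchical controller
            Command c1 = root.act({5.0, 1.0});   // path clear: go-to-goal behavior
            Command c2 = root.act({5.0, 0.1});   // obstacle close: avoidance behavior
            std::cout << c1.left_speed << "," << c1.right_speed << "  "
                      << c2.left_speed << "," << c2.right_speed << "\n";
            return 0;
        }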

    Towards the 3D Web with Open Simulator

    Continuing advances and reduced costs in computational power, graphics processors, and network bandwidth have led to 3D immersive multi-user virtual worlds becoming increasingly accessible while offering an improved and engaging Quality of Experience. At the same time, the functionality of the World Wide Web continues to expand alongside the computing infrastructure it runs on, and pages can now routinely accommodate many forms of interactive multimedia components as standard features (streaming video, for example). Inevitably there is an emerging expectation that the Web will expand further to incorporate immersive 3D environments. This is exciting because humans are well adapted to operating in 3D environments, and it is challenging because existing software and skill sets are focused around competencies in 2D Web applications. Open Simulator (OpenSim) is a freely available open-source toolkit that empowers users to create and deploy their own 3D environments in the same way that anyone can create and deploy a Web site. Its characteristics can be seen as a set of references as to how the 3D Web could be instantiated. This paper describes experiments carried out with OpenSim to better understand network and system issues, and presents experience in using OpenSim to develop and deliver applications for education and cultural heritage. Evaluation is based upon observations of these applications in use and measurements of systems both in the lab and in the wild.