A short curriculum of the Robotics and Technology of Computers Lab
Our research Lab is directed by Prof. Anton Civit. It is an interdisciplinary group of 23 researchers who carry out their teaching and research work at the Escuela Politécnica Superior (Higher Polytechnic School) and the Escuela de Ingeniería Informática (Computer Engineering School). The main research fields are: a) Industrial and mobile robotics, b) Neuro-inspired processing using electronic spikes, c) Embedded and real-time systems, d) Parallel and massive processing computer architectures, e) Information technologies for rehabilitation, disability and older people, f) Web accessibility and usability.
In this paper, the Lab's history is presented and its main publications and research projects over the last few years are summarized.
Near real-time stereo vision system
The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system comprises two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
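As a rough illustration of the pipeline described above (bandpass pyramids plus correlation-based matching), the Python sketch below builds a Laplacian pyramid with OpenCV and estimates a coarse disparity map. It is a sketch under stated assumptions: the function and file names are made up for this example, a sum-of-squared-differences window cost stands in for the patent's least-squares correlation, and the Bayesian confidence estimation step is omitted.

```python
# Illustrative sketch: Laplacian (bandpass) pyramids plus windowed matching
# at a coarse pyramid level. Not the original system's implementation.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Return a list of bandpass-filtered images, finest level first."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
        lap.append(gauss[i] - up)
    return lap

def disparity_at_level(left, right, max_disp=16, win=5):
    """Dense disparity at one pyramid level by windowed SSD along the scanline."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch_l = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            for d in range(max_disp):
                patch_r = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.sum((patch_l - patch_r) ** 2)
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Usage (hypothetical file names): match the coarsest bandpass level
# of a rectified stereo pair.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
coarse_disparity = disparity_at_level(laplacian_pyramid(left)[-1],
                                      laplacian_pyramid(right)[-1])
```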
Quick and energy-efficient Bayesian computing of binocular disparity using stochastic digital signals
Reconstruction of the tridimensional geometry of a visual scene using the
binocular disparity information is an important issue in computer vision and
mobile robotics, which can be formulated as a Bayesian inference problem.
However, computation of the full disparity distribution with an advanced
Bayesian model is usually an intractable problem, and proves computationally
challenging even with a simple model. In this paper, we show how probabilistic
hardware using distributed memory and alternate representation of data as
stochastic bitstreams can solve that problem with high performance and energy
efficiency. We put forward a way to express discrete probability distributions
using stochastic data representations and perform Bayesian fusion using those
representations, and show how that approach can be applied to disparity
computation. We evaluate the system using a simulated stochastic implementation
and discuss possible hardware implementations of such architectures and their
potential for sensorimotor processing and robotics.
Comment: Preprint of an article submitted for publication in the International Journal of Approximate Reasoning and accepted pending minor revision.
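A minimal software simulation of the idea, not the authors' hardware architecture: each probability is encoded as a stochastic bitstream whose fraction of 1s approximates the value, the product of two independent likelihoods becomes a bitwise AND of their streams, and the fused values are normalised over the disparity hypotheses. All names and numbers below are illustrative.

```python
# Toy stochastic-computing simulation of Bayesian fusion over disparity
# hypotheses; values and stream length are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def to_bitstream(p, length=4096):
    """Encode a probability p in [0, 1] as a random bitstream with mean ~p."""
    return rng.random(length) < p

def fuse(stream_a, stream_b):
    """Product of two probabilities = AND of two independent bitstreams."""
    return stream_a & stream_b

def decode(stream):
    """Recover the probability estimate as the fraction of 1s."""
    return stream.mean()

# Two per-hypothesis likelihoods over 8 candidate disparities (toy values).
likelihood_cue_a = np.array([0.1, 0.2, 0.6, 0.8, 0.5, 0.2, 0.1, 0.05])
likelihood_cue_b = np.array([0.2, 0.3, 0.5, 0.9, 0.4, 0.3, 0.1, 0.1])

fused = np.array([
    decode(fuse(to_bitstream(a), to_bitstream(b)))
    for a, b in zip(likelihood_cue_a, likelihood_cue_b)
])
posterior = fused / fused.sum()          # normalise over disparity hypotheses
print("MAP disparity index:", posterior.argmax())
```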
Miniaturized embedded stereo vision system (MESVS)
Stereo vision is one of the fundamental problems of computer vision, and one of the oldest and most heavily investigated areas of 3D vision. Recent advances in stereo matching methodologies, the availability of high-performance and efficient algorithms, and fast, affordable hardware have allowed researchers to develop several stereo vision systems capable of operating in real time. Although a multitude of such systems exist in the literature, the majority of them concentrate only on raw performance and quality rather than on factors such as dimensions and power requirements, which are of significant importance in embedded settings.
In this thesis a new miniaturized embedded stereo vision system (MESVS) is presented, which fits within a 5x5 cm package, is power efficient, and is cost-effective. Furthermore, through the application of embedded programming techniques and careful optimization, MESVS achieves real-time performance of 20 frames per second. This work discusses the various challenges involved in the design and implementation of this system and the measures taken to tackle them.
Design of a Real-time Image-based Distance Sensing System by Stereo Vision on FPGA
A stereo vision system is a robust method to sense distance information in a scene. This research explores the stereo vision system from the fundamentals of stereo vision and computer stereo vision algorithms to the final implementation of the system on an FPGA chip. In a stereo vision system, images are captured by a pair of stereo image sensors. The distance information can be derived from the disparities between the stereo image pair, based on the theory of binocular geometry. With the increasing focus on 3D vision, stereo vision is becoming a hot topic in the areas of computer games, robot vision and medical applications. In particular, most stereo vision systems are expected to be used in real-time applications.
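For concreteness, the binocular-geometry relation referred to above reduces, for a rectified pair, to depth Z = f * B / d, with focal length f (pixels), baseline B and disparity d. The short sketch below uses placeholder values, not figures from the thesis.

```python
# Toy illustration of depth recovery from disparity; all numbers are placeholders.
focal_length_px = 700.0   # assumed focal length, pixels
baseline_m = 0.12         # assumed camera baseline, metres
disparity_px = 35.0       # measured disparity for one pixel, pixels

depth_m = focal_length_px * baseline_m / disparity_px
print(f"estimated depth: {depth_m:.2f} m")   # ~2.40 m
```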
In this thesis, several stereo correspondence algorithms that determine the disparities between a stereo image pair are examined. The algorithms can be categorized into global and local stereo algorithms depending on the optimization techniques. The global algorithms examined are the Dynamic Time Warping (DTW) algorithm and the DTW with quantization algorithm, while the local algorithms examined are the window-based Sum of Squared Differences (SSD), Sum of Absolute Differences (SAD) and Census transform correlation algorithms. Based on an analysis of these algorithms, the window-based SAD correlation algorithm is proposed for implementation on an FPGA platform.
The proposed algorithm is implemented on an Altera DE2 board featuring an Altera Cyclone II 2C35 FPGA. The implemented module of the algorithm is simulated using ModelSim-Altera to verify the correctness of its functionality. Together with a pair of stereo image sensors and an LCD monitor, a stereo vision system is built. The entire system realizes a real-time video frame rate of 16.83 frames per second at an image resolution of 640 by 480 and produces disparity maps in which objects are clearly distinguished by their relative distance information.
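The following Python sketch mirrors, in software, the structure of a window-based SAD correspondence module of the kind proposed above: for each candidate disparity the right image is shifted, absolute differences are aggregated over a fixed window, and a winner-takes-all rule selects the disparity per pixel. The window size and disparity range are illustrative, not the settings used in the thesis.

```python
# Generic window-based SAD disparity computation (winner-takes-all),
# written to reflect the fixed-window, fixed-range structure that maps
# well to hardware pipelines; parameters are placeholders.
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=64, win=9):
    """Disparity map from window-aggregated SAD costs over a rectified pair."""
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    h, w = left.shape
    cost_volume = np.full((max_disp, h, w), np.inf, np.float32)
    for d in range(max_disp):
        # Shift the right image by d pixels and box-filter |L - R| over the window.
        shifted = np.zeros_like(right)
        shifted[:, d:] = right[:, : w - d]
        cost_volume[d] = uniform_filter(np.abs(left - shifted), size=win)
    # Lowest aggregated cost per pixel wins.
    return cost_volume.argmin(axis=0).astype(np.uint8)
```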
Viewfinder: final activity report
The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources.
The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces and semi-autonomous robot navigation.
The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they will autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the system. The human interface has to ensure that the human supervisor and human interveners are provided with a reduced but relevant overview of the ground and of the robots and human rescue workers therein.
Motion analysis report
Human motion analysis is the task of converting actual human movements into computer-readable data. Such movement information may be obtained through active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing decouples the position-measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-dimensional tracking systems, and image processing systems based on multiple views and photogrammetric calculations.