Image Understanding and Robotics Research at Columbia University
The research investigations of the Vision/Robotics Laboratory at Columbia University reflect the diversity of interests of its four faculty members, two staff programmers, and 15 Ph.D. students. Several of the projects involve either a visiting computer science post-doc, other faculty members in the department or the university, or researchers at AT&T Bell Laboratories or Philips Laboratories. We list below a summary of our interests and results, together with the principal researchers associated with them. Since it is difficult to separate those aspects of robotic research that are purely visual from those that are vision-like (for example, tactile sensing) or vision-related (for example, integrated vision-robotic systems), we have listed all robotic research that is not purely manipulative.
Celestial compass sensor mimics the insect eye for navigation under cloudy and occluded skies
Insects use the sun’s position (even when concealed) as a compass for navigation by filtering celestial light intensity and polarisation through their compound eyes. To replicate this functionality, we present a sensor that imitates essential aspects of insect eyes, particularly the fan-like arrangement of polarised light receptors in their dorsal rim area. Our sensor comprises a ring of eight pairs of photodiodes (evaluating two orthogonal orientations of polarised light) to analyse the skylight coming from different directions. Because the layout of our sensor aligns with the polarised light pattern in the sky, a circular-mean model that integrates information spatially across the analysers can estimate the solar azimuth. When using the same sensor design, our model achieves lower compass errors than alternative (and computationally more complex) algorithms, especially under cloudy and occluded skies. Thus, the morphology and processing of the insect celestial compass provide an efficient and robust directional input for navigation.
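To make the circular-mean model concrete, here is a minimal sketch (not the authors' exact algorithm): each of the eight analyser directions contributes its photodiode-pair polarisation contrast as a weight on a doubled-angle unit vector, the circular mean of those vectors gives the dominant polarisation axis, and the solar azimuth follows from the roughly 90° offset between that axis and the sun's direction. The analyser angles, contrast measure, and offset handling below are illustrative assumptions.

```python
import numpy as np

def estimate_solar_azimuth(i_parallel, i_perpendicular, analyser_angles):
    """Toy circular-mean estimate of the solar azimuth from 8 analyser pairs.

    i_parallel, i_perpendicular : photodiode responses of each pair (length-8 arrays)
    analyser_angles             : orientation of each analyser in radians
    The contrast measure and the 90-degree offset are illustrative assumptions.
    """
    # Polarisation contrast per pair: +1 along the analyser axis, -1 across it,
    # and close to 0 for unpolarised light (e.g. under thick cloud).
    contrast = (i_parallel - i_perpendicular) / (i_parallel + i_perpendicular)
    # Circular mean on doubled angles, because a polarisation axis repeats every 180 degrees.
    z = np.sum(contrast * np.exp(2j * np.asarray(analyser_angles)))
    polarisation_axis = 0.5 * np.angle(z)
    # In the dorsal sky the E-vector lies roughly perpendicular to the solar azimuth,
    # so this estimate still carries a 180-degree ambiguity.
    return (polarisation_axis + np.pi / 2) % (2 * np.pi)

# Fan-like analyser layout spanning a half-turn (assumed, for illustration).
analyser_angles = np.linspace(0.0, np.pi, 8, endpoint=False)
```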
Using a mobile robot for hazardous substances detection in a factory environment
Industries that work with toxic materials need extensive safety protocols to avoid accidents. Instead of having fixed sensors, the concept of mounting the sensors on a mobile robot that performs the scanning along a defined path is cheaper, configurable, and adaptable. This work describes a mobile robot, equipped with several gas sensors and a Light Detection and Ranging (LIDAR) device, that follows a trajectory based on waypoints, simulating a working Autonomous Guided Vehicle (AGV). At the same time, the robot continuously measures toxic gas concentrations: it follows the trajectory while the gas concentration stays below a defined value; otherwise, it starts an autonomous leakage search based on a search algorithm that finds the leak position while avoiding obstacles in real time. The proposed methodology is verified in simulation based on a model of the real robot. Three path-planning algorithms were developed and their performance compared. The LIDAR was integrated with the path planning to propose an obstacle avoidance system that uses a dilation technique to enlarge the obstacles, thus accounting for the robot's dimensions. Moreover, if needed, the robot can be operated remotely with visual feedback. In addition, a controller was designed for the robot, and the gas sensor data are processed with a Finite Impulse Response (FIR) filter. A low-cost AGV was also developed to compete in the Festival Nacional de Robótica (Portuguese Robotics Open) 2019 in Gondomar; the robot's control and the software solution for the competition are described.
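The obstacle-dilation step mentioned in the abstract is a standard way to account for the robot's footprint in grid-based planning. Below is a minimal sketch, not the thesis's implementation: it assumes the LIDAR scan has already been converted to a boolean occupancy grid, and the robot radius and cell size are hypothetical values.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def inflate_obstacles(occupancy, robot_radius_m, cell_size_m):
    """Enlarge occupied cells by the robot radius so a point-sized planner stays safe.

    occupancy      : 2-D bool array, True where the LIDAR marked an obstacle
    robot_radius_m : assumed robot radius in metres
    cell_size_m    : grid resolution in metres per cell
    """
    radius_cells = int(np.ceil(robot_radius_m / cell_size_m))
    # Disc-shaped structuring element roughly matching the robot footprint.
    y, x = np.ogrid[-radius_cells:radius_cells + 1, -radius_cells:radius_cells + 1]
    footprint = x**2 + y**2 <= radius_cells**2
    return binary_dilation(occupancy, structure=footprint)

# Example: a single occupied cell grows into a disc the planner must route around.
grid = np.zeros((20, 20), dtype=bool)
grid[10, 10] = True
inflated = inflate_obstacles(grid, robot_radius_m=0.25, cell_size_m=0.05)
```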
High-precision grasping and placing for mobile robots
This work presents a manipulation system for multiple labware in life science laboratories using H20 mobile robots. The H20 robot is equipped with a Kinect V2 sensor to identify and estimate the position of the required labware on the workbench. Local feature recognition based on the SURF algorithm is used; the recognition process is performed both for the labware to be grasped and for the workbench holder. Different grippers and labware containers are designed to manipulate labware of different weights and to realize safe transportation.
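As a rough illustration of the SURF-based recognition step (a sketch only, not the paper's pipeline, and assuming an OpenCV build with the non-free xfeatures2d contrib module), labware can be recognised by matching SURF descriptors between a reference template image and the camera frame:

```python
import cv2

def match_labware(template_path, scene_path, min_matches=10):
    """Match SURF features between a labware template and a scene image.

    Requires opencv-contrib-python built with the non-free xfeatures2d module.
    File paths and the match threshold are illustrative.
    """
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_t, des_t = surf.detectAndCompute(template, None)
    kp_s, des_s = surf.detectAndCompute(scene, None)

    # Lowe's ratio test on k-nearest-neighbour matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_t, des_s, k=2)
            if m.distance < 0.7 * n.distance]
    return len(good) >= min_matches, good
```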
Image Understanding and Robotics Research at Columbia University
Over the past year, the research investigations of the Vision/Robotics Laboratory at Columbia University have reflected the interests of its four faculty members, two staff programmers, and 16 Ph.D. students. Several of the projects involve other faculty members in the department or the university, or researchers at AT&T, IBM, or Philips. We list below a summary of our interests and results, together with the principal researchers associated with them. Since it is difficult to separate those aspects of robotic research that are purely visual from those that are vision-like (for example, tactile sensing) or vision-related (for example, integrated vision-robotic systems), we have listed all robotic research that is not purely manipulative. The majority of our current investigations are deepenings of work reported last year; this was the second year of both our basic Image Understanding contract and our Strategic Computing contract. Therefore, the form of this year's report closely resembles last year's. Although there are a few new initiatives, mainly we report the new results we have obtained in the same five basic research areas. Much of this work is summarized on a video tape that is available on request. We also note two service contributions this past year. The Special Issue on Computer Vision of the Proceedings of the IEEE, August 1988, was co-edited by one of us (John Kender [27]). And the upcoming IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 1989, is co-program chaired by one of us (John Kender [23]).
Vision technology/algorithms for space robotics applications
Automation and robotics for space applications have been proposed to increase productivity, improve reliability, increase flexibility, and improve safety, as well as to automate time-consuming tasks, increase the productivity and performance of crew-accomplished tasks, and perform tasks beyond the capability of the crew. This paper provides a review of efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. The evolution of future vision/sensing is projected to include the fusion of multiple sensors ranging from microwave to optical, with multimode capability covering position, attitude, recognition, and motion parameters. The key features of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.
Mobile Robots Navigation
Mobile robot navigation includes different interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, constructing a spatial representation from the sensory information perceived; (iv) localization, estimating the robot's position within the spatial map; (v) path planning, finding a path (optimal or not) towards a goal location; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors all over the world. Research cases are documented in 32 chapters organized into 7 categories, described next.
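These activities are often organised as a repeating sense-plan-act cycle. The sketch below is only one illustrative arrangement of the six activities listed above; the interface names and the loop structure are assumptions, not taken from any chapter of the book.

```python
from typing import Any, Optional, Protocol

class Navigator(Protocol):
    """Illustrative interfaces for the six navigation activities (names assumed)."""
    def perceive(self) -> Any: ...                      # (i) acquire and interpret sensor data
    def explore(self, map_, pose) -> Any: ...           # (ii) choose the next direction to go
    def update_map(self, map_, obs, pose) -> Any: ...   # (iii) build the spatial representation
    def localize(self, map_, obs) -> Any: ...           # (iv) estimate the pose within the map
    def plan_path(self, map_, pose, goal) -> list: ...  # (v) find a path (optimal or not)
    def execute(self, path, obs) -> None: ...           # (vi) issue and adapt motor actions

def navigation_step(nav: Navigator, map_: Any, goal: Optional[Any]):
    """One cycle through the interrelated activities described above."""
    obs = nav.perceive()
    pose = nav.localize(map_, obs)
    map_ = nav.update_map(map_, obs, pose)
    target = goal if goal is not None else nav.explore(map_, pose)
    path = nav.plan_path(map_, pose, target)
    nav.execute(path, obs)
    return map_, pose
```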