3D head tracking using normal flow constraints in a vehicle environment
Head tracking is a key component in applications such as human-computer interaction, person monitoring, driver monitoring, video conferencing, and object-based compression. The motion of a driver's head can tell us a lot about his/her mental state, e.g. whether he/she is drowsy, alert, aggressive, comfortable, tense, or distracted. This paper reviews an optical-flow-based method to track the head pose, both orientation and position, of a person and presents results from real-world data recorded in a car environment.
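The normal-flow constraint that underlies this kind of optical-flow tracking can be sketched in a few lines (a minimal NumPy illustration under the brightness-constancy assumption, not the paper's implementation):

```python
import numpy as np

def normal_flow(prev, curr):
    """Estimate per-pixel normal flow (the motion component along the
    image gradient) from two consecutive grayscale frames.

    Uses the brightness-constancy constraint Ix*u + Iy*v + It = 0;
    only the flow component parallel to the gradient is recoverable,
    with signed magnitude -It / |grad I|.
    """
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    # Spatial gradients (np.gradient returns axis-0 then axis-1 derivatives)
    # and the temporal derivative between the two frames.
    Iy, Ix = np.gradient(prev)
    It = curr - prev
    grad_mag = np.sqrt(Ix**2 + Iy**2)
    eps = 1e-6                                # avoid division by zero
    magnitude = -It / (grad_mag + eps)        # signed speed along the gradient
    nx = Ix / (grad_mag + eps)                # unit gradient direction
    ny = Iy / (grad_mag + eps)
    return magnitude * nx, magnitude * ny     # (u_n, v_n) components
```

Only the flow component along the image gradient is recoverable pointwise (the aperture problem); a 3D head-pose tracker then fits a rigid-motion model to many such constraints.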
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
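The (time, location, sign) event encoding described above is easy to illustrate: one of the simplest event representations is to accumulate per-pixel polarities into a 2D frame that frame-based algorithms can consume (an illustrative sketch, not taken from the survey):

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate a stream of events into a 2D polarity histogram.

    Each event is (t, x, y, polarity) with polarity in {-1, +1},
    matching the (time, location, sign) encoding of brightness changes.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, p in events:
        frame[y, x] += p          # sum signed brightness changes per pixel
    return frame

# Example stream with microsecond timestamps: two positive changes at
# pixel (3, 2) and one negative change at pixel (5, 4).
events = [(0.000001, 3, 2, +1), (0.000002, 3, 2, +1), (0.000003, 5, 4, -1)]
frame = events_to_frame(events, height=8, width=8)
```

Richer representations (time surfaces, voxel grids, learned embeddings) keep more of the temporal information that this simple accumulation discards.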
Keyframe-based monocular SLAM: design, survey, and future directions
Extensive research in the field of monocular SLAM over the past fifteen years
has yielded workable systems that have found their way into various applications
in robotics and augmented reality. Although filter-based monocular SLAM systems
were common for a time, the more efficient keyframe-based solutions are
becoming the de facto methodology for building a monocular SLAM system. The
objective of this paper is threefold: first, the paper serves as a guideline
for people seeking to design their own monocular SLAM according to specific
environmental constraints. Second, it presents a survey that covers the various
keyframe-based monocular SLAM systems in the literature, detailing the
components of their implementation, and critically assessing the specific
strategies adopted in each proposed solution. Third, the paper provides insight
into the direction of future research in this field, to address the major
limitations still facing monocular SLAM; namely, in the issues of illumination
changes, initialization, highly dynamic motion, poorly textured scenes,
repetitive textures, map maintenance, and failure recovery.
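As an illustration of the keyframe-based methodology the survey covers, a typical keyframe-insertion heuristic might look like the following (thresholds and criteria are hypothetical; actual systems differ in their exact tests):

```python
def should_insert_keyframe(n_tracked, n_tracked_in_ref, frames_since_kf,
                           min_ratio=0.9, max_gap=20, min_tracked=50):
    """Decide whether the current frame should become a new keyframe.

    Keyframe-based systems typically add a keyframe when tracking quality
    degrades relative to the reference keyframe, or when too many frames
    have passed since the last one; all thresholds here are illustrative.
    """
    if n_tracked < min_tracked:
        return False   # tracking too weak: likely lost, not a good keyframe
    if frames_since_kf >= max_gap:
        return True    # enforce temporal coverage of the map
    # Insert when we track notably fewer points than the reference keyframe.
    return n_tracked < min_ratio * n_tracked_in_ref
```

The tension this heuristic manages, between map compactness and tracking robustness, is one of the design choices such a survey compares across systems.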
A Non-Rigid Map Fusion-Based RGB-Depth SLAM Method for Endoscopic Capsule Robots
In the field of gastrointestinal (GI) tract endoscopy, ingestible wireless
capsule endoscopy is considered a minimally invasive novel diagnostic
technology to inspect the entire GI tract and to diagnose various diseases and
pathologies. Since the development of this technology, medical device companies
and many groups have made significant progress to turn such passive capsule
endoscopes into robotic active capsule endoscopes to achieve almost all
functions of current active flexible endoscopes. However, robotic capsule
endoscopy still faces some challenges. One such challenge is the precise
localization of these active devices in the 3D world, which is essential for
accurate three-dimensional (3D) mapping of the inner organ. A reliable 3D map of
the explored inner organ could assist doctors in making a more intuitive and
correct diagnosis. In this paper we propose, to our knowledge for the first
time in the literature, a visual simultaneous localization and mapping (SLAM) method
specifically developed for endoscopic capsule robots. The proposed RGB-Depth
SLAM method is capable of capturing comprehensive dense globally consistent
surfel-based maps of the inner organs explored by an endoscopic capsule robot
in real time. This is achieved by using dense frame-to-model camera tracking
and windowed surfel-based fusion coupled with frequent model refinement through
non-rigid surface deformations.
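The surfel-fusion step can be sketched as a confidence-weighted running average per surfel (an illustrative simplification; the actual method additionally handles non-rigid deformation and model refinement):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Surfel:
    position: np.ndarray   # 3D point in the map
    normal: np.ndarray     # unit surface normal
    radius: float          # spatial extent of the surfel
    weight: float          # accumulated observation confidence

def fuse(surfel, pos_obs, normal_obs, w_obs=1.0):
    """Blend a new measurement into a model surfel by weighted averaging.

    Each observation is merged in proportion to its confidence, so the
    surfel converges toward repeatedly observed geometry (a sketch, not
    the paper's exact update rule).
    """
    w = surfel.weight + w_obs
    surfel.position = (surfel.weight * surfel.position + w_obs * pos_obs) / w
    n = surfel.weight * surfel.normal + w_obs * normal_obs
    surfel.normal = n / np.linalg.norm(n)   # re-normalize the averaged normal
    surfel.weight = w
    return surfel
```

Frame-to-model tracking then aligns each new RGB-D frame against the map rendered from these surfels, closing the loop between tracking and fusion.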
Vision-based interface applied to assistive robots
This paper presents two vision-based interfaces for disabled people to command a mobile robot for personal assistance. The developed interfaces can be subdivided according to the algorithm of image processing implemented for the detection and tracking of two different body regions. The first interface detects and tracks movements of the user's head, and these movements are transformed into linear and angular velocities in order to command a mobile robot. The second interface detects and tracks movements of the user's hand, and these movements are similarly transformed. In addition, this paper also presents the control laws for the robot. The experimental results demonstrate good performance and balance between complexity and feasibility for real-time applications.
Fil: Pérez Berenguer, María Elisa. Universidad Nacional de San Juan. Facultad de Ingeniería. Departamento de Electrónica y Automática. Gabinete de Tecnología Médica; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina
Fil: Soria, Carlos Miguel. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - San Juan. Instituto de Automática. Universidad Nacional de San Juan. Facultad de Ingeniería. Instituto de Automática; Argentina
Fil: López Celani, Natalia Martina. Universidad Nacional de San Juan. Facultad de Ingeniería. Departamento de Electrónica y Automática. Gabinete de Tecnología Médica; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina
Fil: Nasisi, Oscar Herminio. Universidad Nacional de San Juan. Facultad de Ingeniería. Instituto de Automática; Argentina
Fil: Mut, Vicente Antonio. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - San Juan. Instituto de Automática. Universidad Nacional de San Juan. Facultad de Ingeniería. Instituto de Automática; Argentina
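The mapping from tracked head movements to linear and angular robot velocities can be sketched as follows (gains, axis conventions, and dead zone are illustrative assumptions, not the paper's control law):

```python
def head_to_velocity(dx, dy, k_lin=0.5, k_ang=1.0, dead_zone=0.05):
    """Map normalized head displacement to mobile-robot commands.

    dx is rightward and dy is upward head displacement, both in [-1, 1].
    Vertical motion drives linear velocity, horizontal motion drives
    angular velocity; a dead zone suppresses small involuntary movements.
    """
    def shape(u, k):
        # Zero output inside the dead zone, proportional outside it.
        return 0.0 if abs(u) < dead_zone else k * u
    v = shape(dy, k_lin)     # tilt head up -> move forward
    w = shape(-dx, k_ang)    # turn head left -> positive (left) turn rate
    return v, w
```

A hand-tracking interface like the paper's second one could reuse the same shaping, swapping the head displacement for the tracked hand position.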