EndoSLAM Dataset and An Unsupervised Monocular Visual Odometry and Depth Estimation Approach for Endoscopic Videos: Endo-SfMLearner
Deep learning techniques hold promise to develop dense topography
reconstruction and pose estimation methods for endoscopic videos. However,
currently available datasets do not support effective quantitative
benchmarking. In this paper, we introduce a comprehensive endoscopic SLAM
dataset consisting of 3D point cloud data for six porcine organs, capsule and
standard endoscopy recordings as well as synthetically generated data. A Panda
robotic arm, two commercially available capsule endoscopes, two conventional
endoscopes with different camera properties, and two high precision 3D scanners
were employed to collect data from 8 ex-vivo porcine gastrointestinal
(GI)-tract organs. In total, 35 sub-datasets are provided with 6D pose ground
truth for the ex-vivo part: 18 sub-datasets for colon, 12 sub-datasets for
stomach and 5 sub-datasets for small intestine, while four of these contain
polyp-mimicking elevations carried out by an expert gastroenterologist.
Synthetic capsule endoscopy frames from GI-tract with both depth and pose
annotations are included to facilitate the study of simulation-to-real transfer
learning algorithms. Additionally, we propose Endo-SfMLearner, an unsupervised
monocular depth and pose estimation method that combines residual networks with
a spatial attention module that directs the network's focus to
distinguishable and highly textured tissue regions. The proposed approach makes
use of a brightness-aware photometric loss to improve the robustness under fast
frame-to-frame illumination changes. To exemplify the use-case of the EndoSLAM
dataset, the performance of Endo-SfMLearner is extensively compared with the
state-of-the-art. The codes and the link for the dataset are publicly available
at https://github.com/CapsuleEndoscope/EndoSLAM. A video demonstrating the
experimental setup and procedure is accessible through
https://www.youtube.com/watch?v=G_LCe0aWWdQ.
Comment: 27 pages, 16 figures.
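The brightness-aware photometric loss lends itself to a compact illustration. Below is a minimal PyTorch sketch of one way such a loss can be built, aligning the warped image's global brightness statistics to the target before the photometric comparison; the function name and the affine mean/std alignment are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def brightness_aware_photometric_loss(warped, target, eps=1e-6):
    """L1 photometric loss with a global affine brightness alignment.

    warped, target: (B, C, H, W) image tensors. The warped image is
    rescaled so its per-image mean and standard deviation match the
    target's, so the loss penalizes structure mismatches rather than
    global frame-to-frame illumination changes.
    """
    mu_w = warped.mean(dim=(1, 2, 3), keepdim=True)
    mu_t = target.mean(dim=(1, 2, 3), keepdim=True)
    std_w = warped.std(dim=(1, 2, 3), keepdim=True)
    std_t = target.std(dim=(1, 2, 3), keepdim=True)

    # Affine alignment removes a global gain/offset between the frames.
    aligned = (warped - mu_w) / (std_w + eps) * std_t + mu_t
    return (aligned - target).abs().mean()
```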
Autocamera Calibration for traffic surveillance cameras with wide angle lenses
We propose a method for the automatic calibration of traffic surveillance
cameras with wide-angle lenses. A few minutes of video footage is sufficient for
the entire calibration process. This method takes the height
of the camera from the ground plane as the only user input to overcome the
scale ambiguity. The calibration is performed in two stages: (1) intrinsic
calibration and (2) extrinsic calibration. Intrinsic calibration is achieved by
assuming an equidistant fisheye distortion and an ideal camera model. Extrinsic
calibration is accomplished by estimating the two vanishing points on the
ground plane from the motion of vehicles at perpendicular intersections. The
first stage of intrinsic calibration is also valid for thermal cameras.
Experiments have been conducted to demonstrate the effectiveness of this
approach on visible as well as thermal cameras.
Index Terms: fish-eye, calibration, thermal camera, intelligent
transportation systems, vanishing point
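For reference, the equidistant fisheye model assumed in the intrinsic stage maps a ray at angle theta from the optical axis to the image radius r = f * theta, whereas an ideal pinhole camera maps it to r = f * tan(theta). Below is a minimal NumPy sketch of point undistortion under that model; the function and parameter names are hypothetical, not from the paper.

```python
import numpy as np

def undistort_equidistant(points, f, cx, cy):
    """Map pixels from an equidistant fisheye image to an ideal pinhole
    image with the same focal length f and principal point (cx, cy).

    Equidistant model: r_fish = f * theta; pinhole: r_pin = f * tan(theta),
    where theta is the angle between the incoming ray and the optical
    axis (valid for theta < 90 degrees, where tan is finite).
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0] - cx, pts[:, 1] - cy
    r = np.hypot(x, y)
    theta = r / f                       # invert r_fish = f * theta
    scale = np.ones_like(r)
    nz = r > 0
    scale[nz] = f * np.tan(theta[nz]) / r[nz]
    return np.stack([x * scale + cx, y * scale + cy], axis=1)
```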
3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection
Cameras are a crucial exteroceptive sensor for self-driving cars as they are
low-cost and small, provide appearance information about the environment, and
work in various weather conditions. They can be used for multiple purposes such
as visual navigation and obstacle detection. We can use a surround multi-camera
system to cover the full 360-degree field-of-view around the car. In this way,
we avoid blind spots which can otherwise lead to accidents. To minimize the
number of cameras needed for surround perception, we utilize fisheye cameras.
Consequently, standard vision pipelines for 3D mapping, visual localization,
obstacle detection, etc. need to be adapted to take full advantage of the
availability of multiple cameras rather than treating each camera individually. In
addition, processing of fisheye images has to be supported. In this paper, we
describe the camera calibration and subsequent processing pipeline for
multi-fisheye-camera systems developed as part of the V-Charge project. This
project seeks to enable automated valet parking for self-driving cars. Our
pipeline is able to precisely calibrate multi-camera systems, build sparse 3D
maps for visual navigation, visually localize the car with respect to these
maps, generate accurate dense maps, as well as detect obstacles based on
real-time depth map extraction.
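A multi-camera pipeline of this kind hinges on expressing every camera relative to a common rig frame, so that a single rig pose localizes all cameras at once. Below is a minimal sketch of that bookkeeping with 4x4 homogeneous transforms; the names are illustrative assumptions, and fisheye distortion is omitted for brevity.

```python
import numpy as np

def project_point(X_world, T_rig_world, T_cam_rig, K):
    """Project a 3D world point into one camera of a calibrated rig.

    T_rig_world: 4x4 transform taking world coordinates to the rig frame
                 (the output of visual localization).
    T_cam_rig:   4x4 fixed extrinsic taking rig coordinates to this
                 camera (the output of multi-camera calibration).
    K:           3x3 pinhole intrinsics; fisheye distortion is omitted.
    """
    X_h = np.append(np.asarray(X_world, dtype=float), 1.0)
    X_cam = T_cam_rig @ T_rig_world @ X_h   # world -> rig -> camera
    x = K @ X_cam[:3]
    return x[:2] / x[2]                     # perspective division
```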
Interoperable services based on activity monitoring in ambient assisted living environments
Ambient Assisted Living (AAL) is considered the main technological solution that will enable the aged and people in recovery to maintain their independence, and a consequent high quality of life, for a longer period of time than would otherwise be the case. This goal is achieved by monitoring human activities and deploying the appropriate collection of services to set environmental features and satisfy user preferences in a given context. However, both human monitoring and service deployment are particularly hard to accomplish due to the uncertainty and ambiguity characterising human actions and the heterogeneity of the hardware devices composing an AAL system. This research addresses both of the aforementioned challenges by introducing 1) an innovative system, based on a Self Organising Feature Map (SOFM), for automatically classifying the resting locations of a moving object in an indoor environment and 2) a strategy able to generate context-aware Fuzzy Markup Language (FML) services in order to maximize user comfort and the hardware interoperability level. The overall system runs on a distributed embedded platform with a specialised ceiling-mounted video sensor for intelligent activity monitoring. The system has the ability to learn resting locations, to measure overall activity levels, to detect specific events such as potential falls, and to deploy the right sequence of fuzzy services modelled through FML for supporting people in that particular context. Experimental results show less than 20% classification error in monitoring human activities and providing the right set of services, demonstrating the robustness of our approach over others in the literature, with minimal power consumption.
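A Self Organising Feature Map of the kind used here can be sketched compactly: each training sample pulls its best-matching grid unit, and that unit's grid neighbours, towards it. The following is a minimal NumPy illustration of the update rule, not the authors' implementation; the grid size and learning schedule are arbitrary assumptions.

```python
import numpy as np

def train_sofm(samples, grid_h=5, grid_w=5, epochs=20, lr0=0.5, sigma0=2.0):
    """Minimal Self Organising Feature Map over 2D location samples.

    Each sample pulls its best-matching unit (BMU) and, via a Gaussian
    neighbourhood on the grid, the BMU's neighbours towards it. After
    training, each grid unit's weight vector is a candidate resting
    location; classifying a point means finding its nearest unit.
    """
    rng = np.random.default_rng(0)
    w = rng.random((grid_h, grid_w, samples.shape[1]))
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                # decaying learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 1e-3   # shrinking neighbourhood
        for s in samples:
            d = np.linalg.norm(w - s, axis=2)
            by, bx = np.unravel_index(d.argmin(), d.shape)  # winner unit
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma**2))
            w += lr * h[..., None] * (s - w)
    return w
```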
Multi-camera Realtime 3D Tracking of Multiple Flying Animals
Automated tracking of animal movement provides great quantities of data,
allowing analyses that would not otherwise be possible. The additional
capability of tracking in realtime - with minimal latency - opens up the
experimental possibility of manipulating sensory feedback, thus allowing
detailed explorations of the neural basis for control of behavior. Here we
describe a new system capable of tracking the position and body orientation of
animals such as flies and birds. The system operates with less than 40 msec
latency and can track multiple animals simultaneously. To achieve these
results, a multi-target tracking algorithm was developed based on the Extended
Kalman Filter and the Nearest Neighbor Standard Filter data association
algorithm. In one implementation, an eleven-camera system is capable of
tracking three flies simultaneously at 60 frames per second using a gigabit
network of nine standard Intel Pentium 4 and Core 2 Duo computers. This
manuscript presents the rationale and details of the algorithms employed and
shows three implementations of the system. An experiment was performed using
the tracking system to measure the effect of visual contrast on the flight
speed of Drosophila melanogaster. At low contrasts, speed is more variable and
faster on average than at high contrasts. Thus, the system is already a useful
tool to study the neurobiology and behavior of freely flying animals. If
combined with other techniques, such as 'virtual reality'-type computer
graphics or genetic manipulation, the tracking system would offer a powerful
new way to investigate the biology of flying animals.
Comment: 18 pages with 9 figures.
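The Nearest Neighbor Standard Filter step mentioned above associates each track's predicted position with the closest unclaimed measurement inside a validation gate. Below is a minimal sketch of that greedy association; the names and the Euclidean gate are illustrative assumptions, and the EKF prediction/update steps are omitted.

```python
import numpy as np

def nearest_neighbor_associate(predicted, measured, gate=10.0):
    """Greedy nearest-neighbour data association.

    predicted: (n, d) array of predicted track positions (e.g. from an
               EKF time update).
    measured:  (m, d) array of detections in the current frame.
    Each track claims its closest still-unclaimed measurement, provided
    the distance lies within the validation gate. Returns a dict
    {track_index: measurement_index}; unmatched tracks are omitted.
    """
    measured = np.asarray(measured, dtype=float)
    assignments = {}
    claimed = set()
    for i, p in enumerate(np.asarray(predicted, dtype=float)):
        dists = np.linalg.norm(measured - p, axis=1)
        for j in np.argsort(dists):
            if j not in claimed and dists[j] <= gate:
                assignments[i] = int(j)
                claimed.add(int(j))
                break
    return assignments
```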
RUR53: an Unmanned Ground Vehicle for Navigation, Recognition and Manipulation
This paper proposes RUR53: an Unmanned Ground Vehicle able to autonomously
navigate through, identify, and reach areas of interest; and there recognize,
localize, and manipulate work tools to perform complex manipulation tasks. The
proposed contribution includes a modular software architecture in which each
module solves a specific sub-task and which can be easily extended to satisfy new
requirements. Included indoor and outdoor tests demonstrate the capability of
the proposed system to autonomously detect a target object (a panel) and
precisely dock in front of it while avoiding obstacles. They show it can
autonomously recognize and manipulate target work tools (i.e., wrenches and
valve stems) to accomplish complex tasks (i.e., use a wrench to rotate a valve
stem). A specific case study is described in which the proposed modular
architecture allows an easy switch to a semi-teleoperated mode. The paper
exhaustively describes both the hardware and software setup of
RUR53, its performance when tested at the 2017 Mohamed Bin Zayed International
Robotics Challenge, and the lessons we learned when participating in this
competition, where we ranked third in the Grand Challenge in collaboration with
the Czech Technical University in Prague, the University of Pennsylvania, and
the University of Lincoln (UK).
Comment: This article has been accepted for publication in Advanced Robotics, published by Taylor & Francis.
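The modular architecture described above can be pictured as a pipeline of interchangeable sub-task modules threaded through a shared state. The sketch below is a hypothetical illustration of that idea, not the RUR53 codebase; the module names and state keys are invented for the example.

```python
from abc import ABC, abstractmethod

class Module(ABC):
    """One sub-task in a modular robot architecture (hypothetical)."""
    @abstractmethod
    def run(self, state: dict) -> dict:
        ...

class DockAtPanel(Module):
    def run(self, state):
        state["docked"] = True            # placeholder: navigate and dock
        return state

class RotateValveStem(Module):
    def run(self, state):
        # Placeholder: use the wrench on the valve stem once docked.
        state["valve_rotated"] = state.get("docked", False)
        return state

def execute(pipeline, state):
    """Run the modules in order, threading a shared state through."""
    for module in pipeline:
        state = module.run(state)
    return state

print(execute([DockAtPanel(), RotateValveStem()], {}))
```

Replacing a single module such as DockAtPanel with a teleoperation stub, while leaving the rest of the pipeline untouched, is the kind of switch the case study describes.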