DALES: Automated Tool for Detection, Annotation, Labelling and Segmentation of Multiple Objects in Multi-Camera Video Streams
In this paper, we propose a new software tool called DALES to extract semantic information
from multi-view videos based on the analysis of their visual content. Our system is fully automatic
and is well suited for multi-camera environments. Once the multi-view video sequences are
loaded into DALES, the software performs detection, counting, and segmentation of the visual
objects evolving in the provided video streams. These objects of interest are then labelled,
and the related frames are annotated with the corresponding semantic content. Moreover, a
textual script is automatically generated from the video annotations. The DALES system shows
excellent performance in terms of accuracy and computational speed and is robustly designed
to ensure view synchronization.
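The abstract describes a detect-label-annotate pipeline that ends in a generated textual script. Below is a minimal sketch of that final annotation step; DALES's actual internals are not given in the abstract, so the `TrackedObject` data model and the `annotate` function are purely illustrative assumptions:

```python
from dataclasses import dataclass

# Hypothetical data model: every name below is illustrative,
# not DALES's real API.
@dataclass
class TrackedObject:
    frame: int   # index of the video frame the object appears in
    camera: int  # camera/view the detection came from
    label: str   # semantic label assigned after segmentation

def annotate(objects):
    """Produce a textual annotation script: one line per detection,
    ordered by frame and then by camera."""
    lines = []
    for obj in sorted(objects, key=lambda o: (o.frame, o.camera)):
        lines.append(f"frame {obj.frame} cam {obj.camera}: {obj.label}")
    return "\n".join(lines)

detections = [
    TrackedObject(frame=0, camera=1, label="person"),
    TrackedObject(frame=0, camera=2, label="person"),
    TrackedObject(frame=1, camera=1, label="car"),
]
print(annotate(detections))
```

The per-frame, per-camera ordering is one simple way such a script could keep multi-view annotations synchronized.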
The Design and Operation of The Keck Observatory Archive
The Infrared Processing and Analysis Center (IPAC) and the W. M. Keck
Observatory (WMKO) operate the Keck Observatory Archive (KOA). At the end of
2013, KOA completed the ingestion of data from all eight active observatory
instruments. KOA will continue to ingest all newly obtained observations, at an
anticipated volume of 4 TB per year. The data are transmitted electronically
from WMKO to IPAC for storage and curation. Access to data is governed by a
data use policy, and approximately two-thirds of the data in the archive are
public.
Comment: 12 pages, 4 figs, 4 tables. Presented at Software and
Cyberinfrastructure for Astronomy III, SPIE Astronomical Telescopes +
Instrumentation 2014, June 2014, Montreal, Canada.
Vision-Based Production of Personalized Video
In this paper we present a novel vision-based system for the automated production of personalized video souvenirs for visitors in leisure and cultural heritage venues. Visitors are visually identified and tracked through a camera network. The system produces a personalized DVD souvenir at the end of a visitor's stay, allowing visitors to relive their experiences. We analyze how we identify visitors by fusing facial and body features, how we track visitors, how the tracker recovers from failures due to occlusions, and how we annotate and compile the final product. Our experiments demonstrate the feasibility of the proposed approach.
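The identification step fuses facial and body features. One common way to do this is score-level fusion, sketched below; the fixed weight and threshold are illustrative assumptions, since the abstract does not give the paper's actual fusion rule:

```python
def identify(similarities, threshold=0.5, w_face=0.6):
    """Return the gallery visitor whose fused face/body similarity
    score is highest, or None if no score clears the threshold.

    similarities maps visitor_id -> (face_sim, body_sim), both in [0, 1].
    The weight w_face is an illustrative choice, not a calibrated value.
    """
    best_id, best_score = None, threshold
    for visitor_id, (face_sim, body_sim) in similarities.items():
        # Weighted score-level fusion of the two cues.
        score = w_face * face_sim + (1.0 - w_face) * body_sim
        if score > best_score:
            best_id, best_score = visitor_id, score
    return best_id

gallery = {"visitor_A": (0.9, 0.4), "visitor_B": (0.3, 0.8)}
print(identify(gallery))  # visitor_A: 0.6*0.9 + 0.4*0.4 = 0.70
```

Fusing at the score level (rather than concatenating raw features) lets each cue degrade gracefully, e.g. when the face is occluded but clothing is visible.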
A mosaic of eyes
Autonomous navigation is a traditional research topic in intelligent robotics and vehicles, which requires a robot to perceive its environment through onboard sensors such as cameras or laser scanners in order to drive toward its goal. Most research to date has focused on developing a large and smart brain to give robots autonomous capability. There are three fundamental questions to be answered by an autonomous mobile robot: 1) Where am I going? 2) Where am I? and 3) How do I get there? To answer these basic questions, a robot requires a massive spatial memory and considerable computational resources to accomplish perception, localization, path planning, and control. It is not yet possible to deliver the centralized intelligence required for real-life applications, such as autonomous ground vehicles and wheelchairs in care centers. In fact, most autonomous robots try to mimic how humans navigate, interpreting images taken by cameras and then making decisions accordingly. They may encounter the following difficulties:
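The third question, "How do I get there?", is classically answered by path planning over a map. As a minimal stand-in (illustrative only, not any specific system's algorithm), breadth-first search on a small occupancy grid finds a shortest obstacle-free route once perception and localization have answered the other two questions:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid via BFS.
    0 = free cell, 1 = obstacle; returns a list of (row, col)
    cells from start to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # visited set + parent pointers
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk parent pointers back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(bfs_path(grid, (0, 0), (2, 0)))
```

Real systems replace BFS with planners that handle continuous space and kinematic constraints, but the map-then-search structure is the same.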
- …