
    The Cyborg Astrobiologist: Testing a Novelty-Detection Algorithm on Two Mobile Exploration Systems at Rivas Vaciamadrid in Spain and at the Mars Desert Research Station in Utah

    (ABRIDGED) In previous work, two platforms were developed for testing computer-vision algorithms for robotic planetary exploration (McGuire et al. 2004b, 2005; Bartolo et al. 2007). The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone-camera platform has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon color, (ii) integrate a field-capable digital microscope on the wearable-computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone-camera connected to a netbook computer at the Mars Desert Research Station in Utah. Together, this systems engineering and field testing have allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. To test the algorithm, we acquired sequences of images of geologic outcrops in Utah and Spain consisting of various rock types and colors. The algorithm robustly recognized previously observed units by their color, while requiring only a single image or a few images to learn colors as familiar, demonstrating its fast learning capability. Comment: 28 pages, 12 figures; accepted for publication in the International Journal of Astrobiology.
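
    The abstract emphasizes one-shot color learning for novelty detection. As a rough illustration of that behavior, here is a minimal histogram-based sketch, not the authors' Hopfield formulation; the class name, bin counts, and thresholds are invented for this example. Bins of a coarse hue/saturation histogram seen in training images are marked familiar, and pixels from unseen bins are flagged as novel:

    ```python
    import numpy as np

    class ColorNoveltyDetector:
        """Minimal sketch: familiar colors are tracked as occupied bins of a
        coarse hue/saturation histogram; pixels outside those bins are novel.
        Assumes 8-bit HSV input, e.g. from cv2.cvtColor(img, cv2.COLOR_BGR2HSV)."""

        def __init__(self, bins=16):
            self.bins = bins
            self.familiar = np.zeros((bins, bins), dtype=bool)  # hue x saturation

        def _bin_indices(self, hsv):
            # OpenCV 8-bit hue spans 0..179, saturation 0..255.
            h = np.minimum((hsv[..., 0].astype(int) * self.bins) // 180, self.bins - 1)
            s = np.minimum((hsv[..., 1].astype(int) * self.bins) // 256, self.bins - 1)
            return h, s

        def learn(self, hsv):
            # A single image marks its occupied color bins as familiar,
            # mirroring the fast learning reported in the abstract.
            h, s = self._bin_indices(hsv)
            self.familiar[h, s] = True

        def novelty_mask(self, hsv):
            # True wherever a pixel falls in a never-seen color bin.
            h, s = self._bin_indices(hsv)
            return ~self.familiar[h, s]
    ```

    Running `novelty_mask` on each image of a sequence and then calling `learn` on it reproduces the "familiar after a single image" behavior described above.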

    Enabling Technologies for Deep Space Imaging

    From the beginning of the Space Age, imagery, particularly motion imagery, has been part of crewed and un-crewed missions. As technologies have evolved, the imagery has become better, more compelling, and more useful for operations and for monitoring of systems, crew, and spacecraft. As we now look forward to crewed missions beyond low-Earth orbit, such as the Lunar Orbital Platform-Gateway being considered as a precursor to future crewed Mars missions, there are both opportunities and challenges in implementing a multi-faceted imaging system that advances mission capabilities and technology. This paper presents a vision for an imaging system that is relevant for operations of the ISS and future crewed missions in deep space, with a detailed look at some of the key innovative technologies required to enable such a system. Specific enabling technologies include: innovative camera systems capable of providing a 360° field of view without moving parts; Ultra-High Definition (or higher) resolution; High Efficiency Video Coding compression; compatibility with Delay Tolerant Networking protocols; and intelligent systems capable of monitoring the field of view for un-crewed missions. Opportunities where standardization can enable interoperability are also identified.
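
    For the HEVC compression item in the list above, a hedged sketch of how motion imagery might be compressed with the open-source libx265 encoder via ffmpeg; the flight system's actual encoder and settings are not specified in the abstract, and the file names here are placeholders:

    ```python
    import subprocess

    def encode_hevc(src: str, dst: str, crf: int = 28) -> None:
        """Compress a video file with HEVC (H.265) using ffmpeg/libx265.
        Illustrative only; crf trades quality against bitrate."""
        subprocess.run(
            ["ffmpeg", "-i", src,     # source motion imagery
             "-c:v", "libx265",       # HEVC video codec
             "-crf", str(crf),        # constant-rate-factor quality setting
             "-preset", "medium",     # encoder speed/efficiency trade-off
             dst],
            check=True,
        )
    ```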

    Virtual Reality via Object Pose Estimation and Active Learning: Realizing Telepresence Robots with Aerial Manipulation Capabilities

    This paper presents a novel telepresence system for advancing aerial manipulation in dynamic and unstructured environments. The proposed system features not only a haptic device but also a virtual reality (VR) interface that provides real-time 3D displays of the robot’s workspace as well as haptic guidance to its remotely located operator. To realize this, multiple sensors are utilized, namely a LiDAR, cameras, and IMUs. To process the acquired sensory data, pose estimation pipelines are devised for industrial objects of both known and unknown geometries. We further propose an active learning pipeline to increase the sample efficiency of a pipeline component that relies on a Deep Neural Network (DNN) based object detector. These algorithms jointly address various challenges encountered during the execution of perception tasks in industrial scenarios. In the experiments, exhaustive ablation studies are provided to validate the proposed pipelines. Methodologically, these results commonly suggest how an awareness of the algorithms’ own failures and uncertainty (“introspection”) can be used to tackle the encountered problems. Moreover, outdoor experiments are conducted to evaluate the effectiveness of the overall system in enhancing aerial manipulation capabilities. In particular, with flight campaigns over days and nights, from spring to winter, and with different users and locations, we demonstrate over 70 robust executions of pick-and-place, force-application, and peg-in-hole tasks with the DLR cable-Suspended Aerial Manipulator (SAM). As a result, we show the viability of the proposed system in future industrial applications.
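
    As an illustration of the active-learning idea mentioned above, a generic least-confidence sketch under our own assumptions, not necessarily the paper's exact selection criterion: images on which the DNN detector is most ambiguous are prioritized for human labeling, improving sample efficiency.

    ```python
    def select_for_labeling(detections_per_image, k=20):
        """Rank images by detector ambiguity and return the indices of the
        k most ambiguous ones. Each detection is a dict with a 'score' in
        [0, 1]; images with no detections count as maximally uncertain."""
        ranked = []
        for idx, dets in enumerate(detections_per_image):
            best = max((d["score"] for d in dets), default=0.5)
            ambiguity = 1.0 - abs(2.0 * best - 1.0)  # peaks when score is 0.5
            ranked.append((ambiguity, idx))
        ranked.sort(reverse=True)
        return [idx for _, idx in ranked[:k]]
    ```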

    Video based vehicle detection for advance warning Intelligent Transportation System

    Video-based vehicle detection and surveillance technologies are an integral part of Intelligent Transportation Systems (ITS), due to their non-intrusiveness and capability of capturing both global and specific vehicle behavior data. The initial goal of this thesis was to develop an efficient advance warning ITS for detection of congestion at work zones and special events based on video detection. The goals accomplished by this thesis are: (1) the successful development of the advance warning ITS using off-the-shelf components, and (2) the development and evaluation of an improved vehicle detection and tracking algorithm. The advance warning ITS developed includes off-the-shelf equipment such as Autoscope (a video-based vehicle detector), digital video recorders, RF transceivers, high-gain Yagi antennas, variable message signs, and interface processors. The video-based detection system used requires calibration and fine-tuning of configuration parameters for accurate results. Therefore, an in-house video-based vehicle detection system was developed using the Harris corner detection algorithm to eliminate the need for complex calibration and contrast modifications. The algorithm was implemented using the OpenCV library on an Arcom Olympus development kit running the Windows XP Embedded (WinXPE) operating system. The algorithm's performance is evaluated for accuracy in vehicle speed and count. The performance of the proposed algorithm is equivalent or superior to that of the Autoscope system, without requiring any calibration or illumination adjustments.
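
    The detector is built on Harris corner features via OpenCV. A minimal sketch of that front end follows; the parameter values are illustrative, not the thesis's tuned configuration:

    ```python
    import cv2
    import numpy as np

    def detect_corners(frame, thresh_ratio=0.01):
        """Return (row, col) coordinates of strong Harris corners in a BGR
        frame; such corners serve as features to track across frames for
        vehicle counting and speed estimation."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # blockSize=2, Sobel aperture ksize=3, Harris parameter k=0.04
        response = cv2.cornerHarris(gray, 2, 3, 0.04)
        # Keep only responses above a fraction of the strongest corner.
        return np.argwhere(response > thresh_ratio * response.max())
    ```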

    New Generation of Instrumented Ranges: Enabling Automated Performance Analysis

    Military training conducted on physical ranges that match a unit’s future operational environment provides an invaluable experience. Today, to conduct a training exercise while ensuring a unit’s performance is closely observed, evaluated, and reported on in an After Action Review, the unit requires a number of instructors to accompany its different elements. Training organized on ranges for urban warfighting brings an additional level of complexity: the high level of occlusion typical of these environments multiplies the number of evaluators needed. While units have great need for such training opportunities, they may not have the necessary human resources to conduct them successfully. In this paper, we report on our US Navy/ONR-sponsored project aimed at a new generation of instrumented ranges, and the early results we have achieved. We suggest a radically different concept: instead of recording multiple video streams that need to be reviewed and evaluated by a number of instructors, our system will focus on capturing dynamic individual warfighter pose data and performing automated performance evaluation. We will use an in situ network of automatically controlled pan-tilt-zoom video cameras and personal position and orientation sensing devices. Our system will record video, reconstruct dynamic 3D individual poses, analyze and recognize events, evaluate performances, generate reports, provide real-time free exploration of recorded data, and even allow the user to generate ‘what-if’ scenarios that were never recorded. The most direct benefit for an individual unit will be the ability to conduct training with fewer human resources, while having a more quantitative account of its performance (dispersion across the terrain, ‘weapon flagging’ incidents, number of patrols conducted). Instructors will have immediate feedback on some elements of the unit’s performance. Having data sets for multiple units will enable historical trend analysis, thus providing new insights and benefits for the entire service.
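
    As one example of the quantitative metrics mentioned above, dispersion across the terrain could be summarized as mean distance from the unit centroid. This is an illustrative measure under our own assumptions; the project's actual analytics are not detailed in the abstract:

    ```python
    import numpy as np

    def unit_dispersion(positions):
        """Mean distance of each warfighter from the unit centroid.
        positions: (N, 2) array of planar (x, y) coordinates in meters."""
        pts = np.asarray(positions, dtype=float)
        centroid = pts.mean(axis=0)
        return float(np.linalg.norm(pts - centroid, axis=1).mean())
    ```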

    3D Sensor Placement and Embedded Processing for People Detection in an Industrial Environment

    Papers I, II, and III are extracted from the dissertation and uploaded as separate documents to meet post-publication requirements for self-archiving of IEEE conference papers. At a time when autonomy is being introduced in more and more areas, computer vision plays a very important role. In an industrial environment, the ability to create a real-time virtual version of a volume of interest provides a broad range of possibilities, including safety-related systems such as vision-based anti-collision and personnel tracking. In an offshore environment, where such systems are not common, the task is challenging due to rough weather and environmental conditions, but the result of introducing such safety systems could potentially be lifesaving, as personnel work close to large, heavy, and often poorly instrumented moving machinery and equipment. This thesis presents research on important topics related to enabling computer vision systems in industrial and offshore environments, including a review of the most important technologies and methods. A prototype 3D sensor package is developed, consisting of different sensors and a powerful embedded computer. This, together with a novel, highly scalable point cloud compression and sensor fusion scheme, allows a real-time 3D map of an industrial area to be created. The question of where to place the sensor packages in an environment where occlusions are present is also investigated. The result is a set of algorithms for automatic sensor placement optimisation, where the goal is to place sensors such that the covered volume of interest is maximised, with as few occluded zones as possible. The method also includes redundancy constraints, whereby important sub-volumes can be defined that must be viewed by more than one sensor. Lastly, a people detection scheme is developed that uses a merged point cloud from six different sensor packages as input. Using a combination of point cloud clustering, flattening, and convolutional neural networks, the system successfully detects multiple people in an outdoor industrial environment, providing real-time 3D positions. The sensor packages and methods are tested and verified at the Industrial Robotics Lab at the University of Agder, and the people detection method is also tested in a relevant outdoor industrial testing facility. The experiments and results are presented in the papers attached to this thesis.
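
    For the clustering stage of the people detection scheme described above, a hedged sketch using DBSCAN with invented parameter values; the flattening and CNN classification steps that follow in the thesis's pipeline are omitted here:

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    def candidate_person_clusters(points, eps=0.3, min_points=40):
        """Cluster a merged (N, 3) point cloud (meters) and keep clusters
        whose vertical extent is plausible for a standing person."""
        labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)
        clusters = []
        for lbl in set(labels) - {-1}:  # label -1 marks noise points
            cluster = points[labels == lbl]
            if 1.2 < np.ptp(cluster[:, 2]) < 2.2:  # ~human height range
                clusters.append(cluster)
        return clusters
    ```

    In the full pipeline, each surviving cluster would then be flattened to a 2D projection and classified by the convolutional neural network to confirm it is a person.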