4,140 research outputs found

    Cosmic cookery : making a stereoscopic 3D animated movie.

    This paper describes our experience making a short stereoscopic movie visualizing the development of structure in the universe during the 13.7 billion years from the Big Bang to the present day. Aimed at a general audience for the Royal Society's 2005 Summer Science Exhibition, the movie illustrates how the latest cosmological theories based on dark matter and dark energy are capable of producing structures as complex as spiral galaxies, and it allows the viewer to directly compare observations from the real universe with theoretical results. 3D is an inherent feature of the cosmology data sets, and stereoscopic visualization provides a natural way to present the images to the viewer, in addition to allowing researchers to visualize these vast, complex data sets. The movie was presented via passive, linearly polarized projection onto a 2 m wide screen, but it also had to play back on a Sharp RD3D display and in anaglyph projection at venues without dedicated stereoscopic display equipment. Additionally, lenticular prints were made from key images in the movie. We discuss the following technical challenges during the stereoscopic production process: (1) controlling the depth presentation, (2) editing the stereoscopic sequences, and (3) generating compressed movies in display-specific formats. We conclude that the generation of high-quality stereoscopic movie content using desktop tools and equipment is feasible. This does require careful quality control and manual intervention, but we believe these overheads are worthwhile when presenting inherently 3D data, as the result is significantly increased impact and better understanding of complex 3D scenes.
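    Controlling the depth presentation, as discussed above, amounts to budgeting on-screen parallax. A minimal sketch (the parallel-rig model, parameter names, and numbers below are illustrative assumptions, not taken from the paper):

    ```python
    def screen_parallax_mm(interaxial_mm, focal_mm, sensor_width_mm,
                           screen_width_mm, convergence_m, depth_m):
        """Horizontal on-screen parallax of a point at depth_m for a
        parallel stereo rig converged (by image shift) at convergence_m."""
        # Sensor disparity: d = i * f * (1/Zc - 1/Z), depths converted to mm.
        disparity_mm = interaxial_mm * focal_mm * (1.0 / convergence_m - 1.0 / depth_m) / 1000.0
        # Magnify the sensor disparity up to the projection screen.
        return disparity_mm * (screen_width_mm / sensor_width_mm)

    # Points beyond the convergence distance land behind the screen
    # (positive parallax); nearer points pop out of it (negative parallax).
    assert screen_parallax_mm(65, 35, 36, 2000, convergence_m=5.0, depth_m=50.0) > 0
    assert screen_parallax_mm(65, 35, 36, 2000, convergence_m=5.0, depth_m=2.0) < 0
    ```

    Keeping this parallax within a comfortable fraction of the viewer's eye separation across the whole shot is the kind of manual quality control the abstract refers to.
    
    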

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, the new Kinect One was released by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which is a generic basis for evaluating range cameras such as Kinect. The experiments have been designed to capture the individual effects of the Kinect devices as separately as possible, and in such a way that they can also be adapted to any other range sensing device. The overall goal of this paper is to provide solid insight into the pros and cons of either device, so that scientists interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device. Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).
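    The ToF principle behind the Kinect One can be illustrated with the standard phase-shift ranging relation (a textbook formula, not taken from the paper's experiments): depth follows from the phase delay of the modulated illumination, and the modulation frequency fixes the unambiguous range.

    ```python
    import math

    C = 299_792_458.0  # speed of light in m/s

    def tof_depth_m(phase_rad, mod_freq_hz):
        """Depth from the measured phase shift: d = c * phi / (4 * pi * f)."""
        return C * phase_rad / (4 * math.pi * mod_freq_hz)

    def unambiguous_range_m(mod_freq_hz):
        """Phase wraps at 2*pi, so the unambiguous range is c / (2 * f)."""
        return C / (2 * mod_freq_hz)

    # At 16 MHz modulation, range readings wrap roughly every 9.4 m.
    assert abs(unambiguous_range_m(16e6) - 9.37) < 0.01
    ```

    Range-wrapping of this kind is one reason experiments that isolate individual sensor effects are useful when comparing ToF against structured-light devices.
    
    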

    Visuomotor control, eye movements, and steering : A unified approach for incorporating feedback, feedforward, and internal models

    The authors present an approach to the coordination of eye movements and locomotion in naturalistic steering tasks. It is based on recent empirical research, in particular on driver eye movements, that poses challenges for existing accounts of how we visually steer a course. They first analyze how the ideas of feedback and feedforward processes and internal models are treated in control-theoretic steering models within vision science and engineering, which share an underlying architecture but have historically developed in very separate ways. The authors then show how these traditions can be naturally (re)integrated with each other and with contemporary neuroscience, to better understand the skill and gaze strategies involved. They then propose a conceptual model that (a) gives a unified account of the coordination of gaze and steering control, (b) incorporates higher-level path planning, and (c) draws on the literature on paired forward and inverse models in predictive control. Although each of these (a–c) has been considered before (also in the context of driving), integrating them into a single framework, and the authors' multiple-waypoint identification hypothesis within that framework, are novel. The proposed hypothesis is relevant to all forms of visually guided locomotion. Peer reviewed.
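    The paired forward/inverse model idea in (c) can be sketched in a few lines. In this toy example (the vehicle dynamics, gains, and function names are illustrative assumptions, not the authors' model), an inverse model maps a desired heading change to a steering command, while a forward model predicts the vehicle's response so that control can operate on the prediction rather than on delayed sensory feedback:

    ```python
    def inverse_model(desired_heading, heading, gain=2.0):
        """Map the desired heading change to a steering command."""
        return gain * (desired_heading - heading)

    def forward_model(heading, steer, dt=0.1, sensitivity=0.5):
        """Predict the next heading from the efference copy of the command."""
        return heading + dt * sensitivity * steer

    heading, target = 0.0, 1.0
    for _ in range(100):
        steer = inverse_model(target, heading)   # command from the inverse model
        heading = forward_model(heading, steer)  # act on the internal prediction
    assert abs(heading - target) < 1e-3          # converges to the desired heading
    ```

    In the predictive-control literature, the mismatch between the forward model's prediction and delayed sensory feedback is what drives learning of both models.
    
    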

    The Application of Driver Models in the Safety Assessment of Autonomous Vehicles: A Survey

    Driver models play a vital role in developing and verifying autonomous vehicles (AVs). Previously, they were mainly applied in traffic flow simulation to model realistic driver behavior. With the development of AVs, driver models have attracted much attention again due to their potential contributions to AV certification. Simulation-based testing is considered an effective measure to accelerate AV testing because it is safe and efficient. Nonetheless, realistic driver models are prerequisites for valid simulation results. Additionally, an AV is assumed to be at least as safe as a careful and competent driver. Therefore, driver models are indispensable for AV safety assessment. However, no comparison or discussion of driver models regarding their utility to AVs has appeared in the last five years, despite their necessity for the release of AVs. This motivates us to present a comprehensive survey of driver models in this paper and compare their applicability. Requirements for driver models, in terms of their application to AV safety assessment, are discussed. A summary of driver models for simulation-based testing and AV certification is provided. Evaluation metrics are defined to compare their strengths and weaknesses. Finally, an architecture for a careful and competent driver model is proposed. Challenges and future work are elaborated. This study gives related researchers, especially regulators, an overview and helps them to define appropriate driver models for AVs.

    Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High Speed Scenarios

    Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. However, event cameras output little information when the amount of motion is limited, such as when the camera is almost still. Conversely, standard cameras provide instant and rich information about the environment most of the time (in low-speed and good-lighting scenarios), but they fail severely in the case of fast motions or difficult lighting such as high-dynamic-range or low-light scenes. In this paper, we present the first state estimation pipeline that leverages the complementary advantages of these two sensors by fusing events, standard frames, and inertial measurements in a tightly coupled manner. We show on the publicly available Event Camera Dataset that our hybrid pipeline leads to an accuracy improvement of 130% over event-only pipelines, and 85% over standard-frames-only visual-inertial systems, while still being computationally tractable. Furthermore, we use our pipeline to demonstrate - to the best of our knowledge - the first autonomous quadrotor flight using an event camera for state estimation, unlocking flight scenarios that were not reachable with traditional visual-inertial odometry, such as low-light environments and high-dynamic-range scenes. Comment: 8 pages, 9 figures, 2 tables.
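    The tightly coupled fusion described above is far more involved than can be shown here, but the underlying idea, an inertial prediction corrected by whichever visual measurements are available, can be illustrated with a toy one-dimensional Kalman filter (the models, noise values, and the "event"/"frame" measurement roles below are illustrative assumptions, not the paper's pipeline):

    ```python
    import random

    def kf_step(x, P, u, q, measurements):
        """One 1-D Kalman cycle: predict with inertial input u (process
        noise q), then sequentially fuse each (z, r) position reading."""
        x, P = x + u, P + q                      # inertial prediction
        for z, r in measurements:                # fuse whatever is available
            k = P / (P + r)                      # Kalman gain
            x, P = x + k * (z - x), (1 - k) * P
        return x, P

    random.seed(0)
    truth, x, P = 0.0, 0.0, 1.0
    for _ in range(50):
        truth += 0.1                              # constant-velocity motion
        z_evt = truth + random.gauss(0, 0.3)      # noisy "event" position fix
        z_frm = truth + random.gauss(0, 0.1)      # noisy "frame" position fix
        x, P = kf_step(x, P, 0.1, 0.01, [(z_evt, 0.09), (z_frm, 0.01)])
    assert abs(x - truth) < 0.5                   # fused estimate tracks truth
    ```

    The sequential-update loop is why such a filter degrades gracefully: when one sensor goes quiet (e.g. events during still motion), the remaining measurements still correct the inertial drift.
    
    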

    Autonomous flight and remote site landing guidance research for helicopters

    Automated low-altitude flight and landing in remote areas within a civilian environment are investigated, where initial cost, ongoing maintenance costs, and system productivity are important considerations. An approach has been taken which has: (1) utilized those technologies developed for military applications which are directly transferable to a civilian mission; (2) exploited and developed technology areas where new methods or concepts are required; and (3) undertaken research with the potential to lead to innovative methods or concepts required to achieve a manual and fully automatic remote-area low-altitude flight and landing capability. The project has resulted in a definition of a system operational concept that includes a sensor subsystem, a sensor fusion/feature extraction capability, and a guidance and control law concept. These subsystem concepts have been developed to sufficient depth to enable further exploration within the NASA simulation environment, and to support programs leading to flight test.

    Video browsing interfaces and applications: a review

    We present a comprehensive review of the state of the art in video browsing and retrieval systems, with special emphasis on interfaces and applications. There has been a significant increase in activity (e.g., storage, retrieval, and sharing) employing video data in the past decade, both for personal and professional use. The ever-growing amount of video content available for human consumption and the inherent characteristics of video data, which, if presented in its raw format, is rather unwieldy and costly, have become driving forces for the development of more effective solutions to present video contents and allow rich user interaction. As a result, there are many contemporary research efforts toward developing better video browsing solutions, which we summarize. We review more than 40 different video browsing and retrieval interfaces and classify them into three groups: applications that use video-player-like interaction, video retrieval applications, and browsing solutions based on video surrogates. For each category, we present a summary of existing work, highlight the technical aspects of each solution, and compare them against each other.

    Software for recording and capture video sequences for poultry and laying hens facility (BIOTERIUM)

    Behavior can sometimes be observed directly, but it can also be affected by the presence of human observers. Technological devices have strongly advanced our understanding of certain aspects of animal behavior, and small cameras borne by study animals offer a reliable alternative to direct observation. One possible way to make animal welfare assessments easier and faster is the application of audio and video data analysis. The main goal of this research is to determine the requirements for, and then build, an alternative to CCTV software: new proprietary software, developed in the Matlab® language, to record and capture video sequences to digital memory hardware. The proposed software's stakeholder needs are written following the specification in the ISO/IEC/IEEE 29148:2011 standard, and the life cycle adopted for the development was based on the ISO/IEC/IEEE 12207:2008 standard. The main user interface was generated using the Matlab® GUI (Graphical User Interface). In the results, a summary table with the final Stakeholder Requirements Specification (StRS) document was generated, and the main interface was coded. The system validation took place in the animal houses (bioterium) over a period of three months, and the data collection and software usability met all the requirements listed. In conclusion, users considered the developed software a good tool to help research in poultry and laying hens' facilities (bioterium).

    A multivariable sampled-data model of an automobile driver

    In this thesis, a multivariable system model of driver performance in basic driving tasks is presented. The driver model described acts as a serial-process, priority-accessed, time-sharing computer: it processes whichever input or output task currently possesses the highest priority. Input tasks are represented by continuous signals sampled intermittently according to priority laws. Output tasks are modeled as simple analog processes operating on the last few intermittently generated output controls. An individual priority rule is constructed for each input and output task. The performance of the driver in the lateral control task involves a feedforward pattern, which is a consequence of the fact that the driver looks several feet ahead along the pathway. A laboratory analysis of the feedforward aspects of the driver in the single-input, single-output lateral control task is described. --Abstract, page ii
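    The serial-process, priority-accessed time-sharing idea can be sketched as follows; the task names, periods, and the specific priority rule (time since last service divided by task period) are illustrative assumptions, not the thesis's priority laws:

    ```python
    def run_scheduler(tasks, steps):
        """Serial-process time-sharing: at each time step, service the one
        task whose priority -- time since it was last serviced, divided by
        its characteristic period -- is currently highest."""
        last = {name: 0 for name, _ in tasks}
        trace = []
        for t in range(1, steps + 1):
            name, _ = max(tasks, key=lambda task: (t - last[task[0]]) / task[1])
            last[name] = t
            trace.append(name)
        return trace

    trace = run_scheduler([("lane_keeping", 1.0), ("speed_control", 3.0)], 12)
    # The shorter-period (more urgent) input task gets sampled more often.
    assert trace.count("lane_keeping") > trace.count("speed_control")
    ```

    Because only one task is processed per step, attention to any one input is necessarily intermittent, which is exactly what motivates the sampled-data treatment of the inputs.
    
    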

    A Doppler Lidar system with preview control for wind turbine load mitigation

    This dissertation focuses on the development of a system for wind turbines to mitigate the loads caused by unstable wind speeds. The work is divided into two main parts: a cost-efficient Doppler wind Lidar system is developed, based on a short-coherence-length laser combined with a multiple-length delay-line concept; and a preview pitch controller is developed, based on a combination of two-degree-of-freedom (2-DOF) feedback/feedforward control with model predictive control.
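    The benefit of combining a feedforward term driven by a wind preview with ordinary feedback can be illustrated with a toy simulation (the first-order plant, gains, and gust profile below are illustrative assumptions, not the dissertation's turbine model):

    ```python
    def simulate(wind, kp=0.8, ff_gain=1.0, dt=0.1):
        """Toy first-order plant x' = -x + wind - pitch: the feedforward
        term cancels the previewed wind; feedback trims the residual."""
        x, setpoint, errors = 0.0, 0.0, []
        for w in wind:
            pitch = ff_gain * w + kp * (x - setpoint)  # preview FF + P feedback
            x = x + dt * (-x + w - pitch)              # explicit Euler step
            errors.append(abs(x - setpoint))
        return errors

    gust = [0.0] * 10 + [1.0] * 40                 # a step gust, known in advance
    with_preview = max(simulate(gust, ff_gain=1.0))
    without_preview = max(simulate(gust, ff_gain=0.0))
    assert with_preview < without_preview          # preview reduces the peak excursion
    ```

    This is the essential argument for lidar preview: feedback alone can only react after the gust has already loaded the rotor, whereas the feedforward term acts as the gust arrives.
    
    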