7 research outputs found

    Probabilistic Robot Localization using Visual Landmarks

    Effective robot navigation and route planning are impossible unless the position of the robot within its environment is known. Motion sensors that track the relative movement of a robot are inherently unreliable, so it is necessary to use cues from the external environment to periodically localize the robot. There are many methods for accomplishing this, most of which either probabilistically estimate the robot's movement based on range sensors, or require enough unique visual landmarks to geometrically calculate the robot's position at any time. In this project I examined the feasibility of using the probabilistic Monte Carlo localization algorithm to estimate a robot's location based on occasional visual landmark cues. Using visual landmarks has several advantages over using range sensor data, in that landmark readings are less affected by unexpected objects and can be used for fast global localization. To test this system I designed a robot capable of navigating Olin-Rice by observing pieces of colored paper placed at regular intervals along the halls, as an extension of my summer 2005 research on RUPART. The localization system could not localize the robot in many situations due to the sparse nature of the landmarks, but results suggest that with minor modifications the system could become a reliable localization scheme.
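    Below is a minimal particle-filter sketch of the Monte Carlo localization idea this abstract describes, not the project's actual code: the landmark coordinates, noise parameters, and the predict/update/resample helpers are illustrative assumptions.

    # A minimal sketch of Monte Carlo localization with occasional landmark cues.
    # Landmark positions, particle count, and noise values are assumed for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical map: colored-paper landmarks along a hallway, as (x, y) in meters.
    LANDMARKS = np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 0.0]])

    N = 500                                              # number of particles
    particles = rng.uniform([-1, -1, 0], [11, 1, 2 * np.pi], size=(N, 3))  # x, y, heading
    weights = np.full(N, 1.0 / N)

    def predict(particles, v, w, dt, motion_noise=(0.05, 0.02)):
        """Propagate each particle with the noisy odometry estimate (speed v, turn rate w)."""
        noise = rng.normal(0.0, motion_noise, size=(len(particles), 2))
        particles[:, 2] += (w + noise[:, 1]) * dt
        particles[:, 0] += (v + noise[:, 0]) * dt * np.cos(particles[:, 2])
        particles[:, 1] += (v + noise[:, 0]) * dt * np.sin(particles[:, 2])

    def update(particles, weights, measured_range, landmark, range_std=0.3):
        """Reweight particles by how well they explain the range to a sighted landmark."""
        expected = np.linalg.norm(particles[:, :2] - landmark, axis=1)
        weights *= np.exp(-0.5 * ((expected - measured_range) / range_std) ** 2)
        weights += 1e-300                                # guard against all-zero weights
        weights /= weights.sum()

    def resample(particles, weights):
        """Draw a fresh particle set in proportion to the weights."""
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

    # One cycle: move forward, then correct with an occasional landmark sighting.
    predict(particles, v=1.0, w=0.0, dt=1.0)
    update(particles, weights, measured_range=4.2, landmark=LANDMARKS[1])
    particles, weights = resample(particles, weights)
    print("estimated pose:", np.average(particles, axis=0, weights=weights))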

    Sensor Fusion for Human Safety in Industrial Workcells

    No full text
    Current manufacturing practices require complete physical separation between people and active industrial robots. These precautions ensure safety, but are inefficient in terms of time and resources, and place limits on the types of tasks that can be performed. In this paper, we present a real-time, sensor-based approach for ensuring the safety of people in close proximity to robots in an industrial workcell. Our approach fuses data from multiple 3D imaging sensors of different modalities into a volumetric evidence grid and segments the volume into regions corresponding to background, robots, and people. Surrounding each robot is a danger zone that dynamically updates according to the robot's position and trajectory. Similarly, surrounding each person is a dynamically updated safety zone. A collision between danger and safety zones indicates an impending actual collision, and the affected robot is stopped until the problem is resolved. We demonstrate and experimentally evaluate the concept in a prototype industrial workcell augmented with stereo and range cameras.
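    A minimal sketch of the danger-zone/safety-zone overlap test the paper describes, not the authors' implementation: the grid resolution, workcell dimensions, and zone radii below are assumed values chosen only for illustration.

    # Voxelize spherical danger/safety zones on a coarse 3D grid and test for overlap.
    # Grid size, resolution, and radii are illustrative assumptions.
    import numpy as np

    RES = 0.1                                  # meters per voxel (assumed)
    GRID_SHAPE = (60, 60, 30)                  # 6 m x 6 m x 3 m workcell (assumed)

    def zone_mask(center, radius):
        """Boolean mask of voxels within `radius` meters of `center` (x, y, z in meters)."""
        coords = np.indices(GRID_SHAPE).astype(float) * RES
        dist = np.sqrt(((coords - np.reshape(center, (3, 1, 1, 1))) ** 2).sum(axis=0))
        return dist <= radius

    def zones_collide(robot_pose, person_pose, robot_radius=0.8, person_radius=0.5):
        """True if the robot's danger zone overlaps the person's safety zone."""
        danger = zone_mask(robot_pose, robot_radius)
        safety = zone_mask(person_pose, person_radius)
        return bool(np.any(danger & safety))

    # Example: with these radii, a person 1.0 m from the robot triggers a stop.
    if zones_collide(robot_pose=(2.0, 2.0, 1.0), person_pose=(3.0, 2.0, 1.0)):
        print("Impending collision: stop the affected robot")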

    A Bottom-up Approach to the Financial Markets

    No full text

    Data-Driven Models & Mathematical Finance: Opposition or Apposition?

    No full text

    Foundation for a General Strain Theory of Crime and Delinquency

    No full text