    Pararover: A Remote Controlled Vehicle with Omnidirectional Sensors

    The process of teleoperation can be described as allowing a remote user to control a vehicle by interpreting sensor information captured by the vehicle. One method that is frequently used to implement teleoperation is to provide the user with a real-time video display from a perspective camera mounted on the vehicle. This method limits the remote user to seeing the environment in which the vehicle operates through the fixed viewpoint at which the camera is mounted. Having a fixed viewpoint is extremely limiting and significantly impedes the ability of the remote user to navigate properly. One way to address this problem is to mount the perspective camera on a pan-tilt device. This is rarely done because it is expensive and introduces a significant increase in implementation complexity from both the mechanical and electrical points of view. With the advent of omnidirectional camera technology, there is now a second, more attractive alternative. This paper describes the Pararover, a remote controlled vehicle constructed in the summer of 1998 to demonstrate the use of omnidirectional camera technology and a virtual reality display for vehicular teleoperation, audio-video surveillance and forward reconnaissance.
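
    A central operation in such a system is unwarping the donut-shaped omnidirectional image into a view the remote user can interpret. The abstract does not give the implementation, so the sketch below is only illustrative: it assumes a calibrated image center and inner/outer mirror radii, and uses a simple linear radial mapping rather than the exact mirror geometry; unwrap_omni and all of its parameters are hypothetical names.

        import numpy as np

        def unwrap_omni(img, cx, cy, r_in, r_out, out_w=1024, out_h=256):
            # Map each panorama pixel to a source pixel: columns sweep the
            # azimuth angle, rows interpolate between the outer and inner
            # circles of the omnidirectional image (linear approximation).
            theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
            radius = np.linspace(r_out, r_in, out_h)  # top row = outer rim
            t, r = np.meshgrid(theta, radius)
            src_x = np.clip((cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
            src_y = np.clip((cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
            return img[src_y, src_x]  # nearest-neighbor panorama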

    Micro Phase Shifting

    We consider the problem of shape recovery for real world scenes, where a variety of global illumination (inter-reflections, subsurface scattering, etc.) and illumination defocus effects are present. These effects introduce systematic and often significant errors in the recovered shape. We introduce a structured light technique called Micro Phase Shifting, which overcomes these problems. The key idea is to project sinusoidal patterns with frequencies limited to a narrow, high-frequency band. These patterns produce a set of images over which global illumination and defocus effects remain constant for each point in the scene. This enables high quality reconstructions of scenes which have traditionally been considered hard, using only a small number of images. We also derive theoretical lower bounds on the number of input images needed for phase shifting and show that Micro PS achieves this bound.
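
    For context, the core per-pixel computation in any phase-shifting method is recovering the wrapped phase from N images of sinusoidal patterns shifted by 2*pi*k/N. The sketch below shows only this standard decode at a single frequency, not the paper's narrow-band multi-frequency scheme or its unwrapping step; decode_phase is a hypothetical name.

        import numpy as np

        def decode_phase(images):
            # Each image satisfies I_k = A + B*cos(phi + d_k) with
            # d_k = 2*pi*k/N, so the wrapped phase is
            # phi = atan2(-sum_k I_k*sin(d_k), sum_k I_k*cos(d_k)).
            n = len(images)
            shifts = 2.0 * np.pi * np.arange(n) / n
            stack = np.stack([np.asarray(im, dtype=np.float64) for im in images])
            s = np.tensordot(np.sin(shifts), stack, axes=1)
            c = np.tensordot(np.cos(shifts), stack, axes=1)
            return np.arctan2(-s, c)  # wrapped phase in (-pi, pi]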

    A Theory of Catadioptric Image Formation

    Conventional video cameras have limited fields of view, which make them restrictive for certain applications in computational vision. A catadioptric sensor uses a combination of lenses and mirrors placed in a carefully arranged configuration to capture a much wider field of view. When designing a catadioptric sensor, the shape of the mirror(s) should ideally be selected to ensure that the complete catadioptric system has a single effective viewpoint. The reason a single viewpoint is so desirable is that it is a requirement for the generation of pure perspective images from the sensed image(s). In this paper, we derive and analyze the complete class of single-lens single-mirror catadioptric sensors which satisfy the fixed viewpoint constraint. Some of the solutions turn out to be degenerate with no practical value, while other solutions lead to realizable sensors. We also derive an expression for the spatial resolution of a catadioptric sensor, and include a preliminary analysis of the defocus blur caused by the use of a curved mirror.
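
    To make the fixed viewpoint constraint concrete: one well-known realizable solution in this class is the hyperboloidal mirror, with the effective viewpoint at one focus of the hyperboloid and the camera pinhole at the other, so that every scene ray aimed at the viewpoint reflects through the pinhole. With the viewpoint at the origin and the pinhole at distance c along the mirror axis, the profile (in LaTeX notation, with r the distance from the axis) is

        \frac{\left(z - \frac{c}{2}\right)^2}{a^2} - \frac{r^2}{b^2} = 1,
        \qquad a^2 + b^2 = \left(\frac{c}{2}\right)^2,

    which places the two foci at z = 0 and z = c. This parameterization is standard conic geometry given here for illustration, not the paper's own derivation.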

    AirCode: Unobtrusive Physical Tags for Digital Fabrication

    We present AirCode, a technique that allows the user to tag physically fabricated objects with given information. An AirCode tag consists of a group of carefully designed air pockets placed beneath the object surface. These air pockets are easily produced during the fabrication process of the object, without any additional material or postprocessing. Meanwhile, the air pockets affect only the scattering light transport under the surface, and are thus difficult to notice with the naked eye. However, using a computational imaging method, the tags become detectable. We present a tool that automates the design of air pockets for the user to encode information. The AirCode system also allows the user to retrieve the information from captured images via a robust decoding algorithm. We demonstrate our tagging technique with applications for metadata embedding, robotic grasping, as well as conveying object affordances.
    Comment: ACM UIST 2017 Technical Paper
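
    The computational imaging step rests on separating light that bounces directly off the surface from light scattered beneath it, where the air pockets leave their signature. The abstract does not spell out the algorithm, so the sketch below shows the classic fast direct/global separation idea (per-pixel max/min under shifted high-frequency illumination) as one plausible building block; separate_direct_global and fraction_lit are hypothetical names.

        import numpy as np

        def separate_direct_global(images, fraction_lit=0.5):
            # Under high-frequency patterns that light a fraction alpha of
            # the scene, each pixel sees max = direct + alpha*global and
            # min = alpha*global across the shifted patterns, so:
            stack = np.stack([np.asarray(im, dtype=np.float64) for im in images])
            i_max = stack.max(axis=0)
            i_min = stack.min(axis=0)
            direct = i_max - i_min
            global_ = i_min / fraction_lit  # = 2*min for half-lit patterns
            return direct, global_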