    Calibration Wizard: A Guidance System for Camera Calibration Based on Modelling Geometric and Corner Uncertainty

    It is well known that the accuracy of a calibration depends strongly on the choice of camera poses from which images of a calibration object are acquired. We present a system -- Calibration Wizard -- that interactively guides a user towards taking optimal calibration images. For each new image to be taken, the system computes, from all previously acquired images, the pose that leads to the globally maximum reduction of expected uncertainty on intrinsic parameters, and then guides the user towards that pose. We also show how to incorporate uncertainty in corner point position in a novel, principled manner, for both calibration and computation of the next best pose. Synthetic and real-world experiments are performed to demonstrate the effectiveness of Calibration Wizard. Comment: Oral presentation at ICCV 201
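The abstract describes choosing the next pose that maximally reduces expected uncertainty on the intrinsics. A minimal sketch of one common way to formalize this, assuming an A-optimality criterion on the Gauss-Newton information matrix; the function names, the trace-of-covariance objective, and the stacked-Jacobian update are illustrative assumptions, not the paper's stated formulation:

```python
import numpy as np

def expected_uncertainty(J_prev, J_candidate):
    """A-optimality proxy: trace of the intrinsics covariance,
    approximated as (J^T J)^-1 over all observations acquired so far
    plus the candidate pose's simulated observations."""
    J = np.vstack([J_prev, J_candidate])
    info = J.T @ J                        # Gauss-Newton information matrix
    return np.trace(np.linalg.inv(info))

def next_best_pose(J_prev, candidate_jacobians):
    """Pick the candidate pose whose simulated observations shrink
    the expected uncertainty on the intrinsics the most."""
    scores = [expected_uncertainty(J_prev, Jc) for Jc in candidate_jacobians]
    return int(np.argmin(scores))
```

In this toy form, a candidate pose whose simulated corner observations carry more information (larger Jacobian entries with respect to the intrinsics) yields a smaller residual covariance trace and is selected.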

    Pictures in Your Mind: Using Interactive Gesture-Controlled Reliefs to Explore Art

    Tactile reliefs offer many benefits over the more classic raised line drawings or tactile diagrams, as depth, 3D shape, and surface textures are directly perceivable. Although often created for blind and visually impaired (BVI) people, a wider range of people may benefit from such multimodal material. However, some reliefs are still difficult to understand without proper guidance or accompanying verbal descriptions, hindering autonomous exploration. In this work, we present a gesture-controlled interactive audio guide (IAG) based on recent low-cost depth cameras that can be operated directly with the hands on relief surfaces during tactile exploration. The interactively explorable, location-dependent verbal and captioned descriptions promise rapid tactile accessibility to 2.5D spatial information in a home or education setting, to online resources, or as a kiosk installation at public places. We present a working prototype, discuss design decisions, and present the results of two evaluation studies: the first with 13 BVI test users and the second, a follow-up study, with 14 test users across a wide range of people with differences and difficulties associated with perception, memory, cognition, and communication. The participant-led research method of this latter study prompted new, significant, and innovative developments.
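The core sensing step of such a depth-camera guide can be illustrated with a small sketch: compare a live depth frame against a background model of the relief surface, flag pixels where a fingertip hovers within a touch threshold, and map the touch to a described region. This is a hedged illustration under assumed millimetre depth units; the thresholds, function names, and region-map representation are hypothetical, not the prototype's actual pipeline:

```python
import numpy as np

def detect_touch(depth_frame, relief_depth, touch_mm=8.0, noise_mm=2.0):
    """Boolean mask of pixels where the hand is above the relief surface
    by more than sensor noise but less than the touch threshold."""
    diff = relief_depth - depth_frame   # positive where hand is nearer the camera
    return (diff > noise_mm) & (diff < touch_mm)

def touched_region(touch_mask, region_map):
    """Map touched pixels to the relief region with the most hits, so the
    matching audio description can be triggered. Region 0 means no region."""
    hits = touch_mask & (region_map > 0)
    if not hits.any():
        return 0
    ids, counts = np.unique(region_map[hits], return_counts=True)
    return int(ids[np.argmax(counts)])
```

A real system would additionally need fingertip segmentation and temporal filtering to suppress spurious touches during free exploration.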

    3-D immersive screen experiments

    We are currently piloting a range of computer-simulated science experiments as 3-D virtual environments. These are rendered on a PC in 3-D and use photographs of specific parts of the actual apparatus as textures to add realism to the simulation. In particular, photographs are used to represent the consequential views of an experiment. These particular views may also be animated depending on the state of the experiment. The work combines the photographic approach of the Interactive Screen Experiments (ISEs) with the advantages of a fully simulated 3-D environment where the user can interact with the apparatus in a more natural and intuitive way. The potential advantages are that users can quickly adapt to the environment and in particular the controls. They gain realistic views of the physicality of the experiment as they are not just seeing it from a particular viewpoint, but from wherever they see fit to place themselves within the experiment's scene. They are immersed in the experiment in a way that mitigates some of the objections to online as opposed to real laboratory experimentation. It is also the case that the results of an initial calibration or setup carry over into the main part of the experiment. This is perceived as an extremely important teaching element of Physics practicals, as the user learns that care in setting up an experiment is an essential part of being able to get good results. Furthermore, there is no need to represent scales, read-outs or controls as separate parts of the interface; these can all be rendered at their correct physical positions within the experiment. The first of these experiments, based on the use of a diffraction grating, has been fully implemented and has been evaluated with a Physics A level class. The application and its evaluation will be presented. A more complicated experiment using a spectrometer has also been modelled, which raises issues of complexity. These issues will also be discussed.
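The diffraction-grating experiment mentioned above is governed by the standard grating equation d sin(theta) = m*lambda, which such a simulation must evaluate to place the principal maxima correctly in the scene. A minimal sketch of that calculation; the function name and parameter choices are illustrative, not taken from the described software:

```python
import math

def diffraction_angles(wavelength_nm, lines_per_mm, max_order=3):
    """Angles (degrees) of the principal maxima from the grating equation
    d*sin(theta) = m*lambda, for each order m = 1..max_order that exists
    (i.e. for which sin(theta) <= 1)."""
    d = 1e-3 / lines_per_mm            # slit spacing in metres
    lam = wavelength_nm * 1e-9         # wavelength in metres
    angles = []
    for m in range(1, max_order + 1):
        s = m * lam / d
        if s <= 1.0:
            angles.append(math.degrees(math.asin(s)))
    return angles
```

For example, the sodium D line (589 nm) on a 600 lines/mm grating produces first- and second-order maxima near 20.7 and 45.0 degrees, with no third order.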

    This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer

    We address the problem of controlling the workspace of a 3-DoF mobile robot. In a human-robot shared space, robots should navigate in a human-acceptable way according to the users' demands. For this purpose, we employ virtual borders, i.e. non-physical borders, to allow a user to restrict the robot's workspace. To this end, we propose an interaction method based on a laser pointer to intuitively define virtual borders. This interaction method uses a previously developed framework based on robot guidance to change the robot's navigational behavior. Furthermore, we extend this framework to increase flexibility by considering different types of virtual borders, i.e. polygons and curves separating an area. We evaluated our method with 15 non-expert users concerning correctness, accuracy and teaching time. The experimental results revealed high accuracy and linear teaching time with respect to the border length, while correctly incorporating the borders into the robot's navigational map. Finally, our user study showed that non-expert users can employ our interaction method. Comment: Accepted at 2019 Third IEEE International Conference on Robotic Computing (IRC), supplementary video: https://youtu.be/lKsGp8xtyI
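Incorporating a polygonal virtual border into a navigational map can be illustrated with a small sketch: rasterize the user-defined polygon into an occupancy grid so the planner treats the enclosed area as forbidden. This is a generic illustration, assuming a simple 2-D grid map and a ray-casting point-in-polygon test; the function names and grid representation are assumptions, not the paper's framework:

```python
import numpy as np

def point_in_polygon(x, y, poly):
    """Ray-casting test: a point is inside if a horizontal ray from it
    crosses the polygon boundary an odd number of times."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge spans the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def add_virtual_border(grid, polygon, resolution=1.0):
    """Mark every cell of a 2-D occupancy grid that lies inside the
    user-defined polygon (map-frame (x, y) vertices) as forbidden (1)."""
    h, w = grid.shape
    for r in range(h):
        for c in range(w):
            if point_in_polygon(c * resolution, r * resolution, polygon):
                grid[r, c] = 1
    return grid
```

Curve-type borders that merely separate an area would need a different treatment (e.g. closing the curve against the map boundary before rasterizing), which this sketch omits.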