
    SPRK: A Low-Cost Stewart Platform For Motion Study In Surgical Robotics

    To simulate body-organ motion due to breathing, heartbeats, or peristaltic movements, we designed SPRK (Stewart Platform Research Kit), a low-cost, miniaturized platform that translates and rotates phantom tissue. The platform measures 20 cm x 20 cm x 10 cm, so it fits in the workspace of a da Vinci Research Kit (DVRK) surgical robot, and costs $250, two orders of magnitude less than a commercial Stewart platform. It has a range of motion of +/- 1.27 cm in translation along the x, y, and z directions and offers motion modes for sinusoidal and breathing-inspired motion. Modular platform mounts were also designed for pattern-cutting and debridement experiments. The platform's positional controller has a time constant of 0.2 seconds, and the root-mean-square error is 1.22 mm, 1.07 mm, and 0.20 mm in the x, y, and z directions, respectively. All details, CAD models, and control software for the platform are available at github.com/BerkeleyAutomation/sprk.
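
    The released control software lives in the linked repository; as a minimal sketch of what the sinusoidal motion mode might look like, the following generates setpoints bounded by the platform's reported +/- 1.27 cm travel. The function name, sampling rate, and default amplitude and frequency are illustrative assumptions, not the actual SPRK interface.

    ```python
    # Illustrative sketch only; names and defaults are assumptions, not the
    # released SPRK code (see github.com/BerkeleyAutomation/sprk).
    import numpy as np

    RANGE_CM = 1.27  # reported translation limit, +/- 1.27 cm per axis

    def sinusoidal_offset(t, amplitude_cm=1.0, freq_hz=0.25, axis=2):
        """Return an (x, y, z) platform offset in cm at time t seconds."""
        offset = np.zeros(3)
        offset[axis] = np.clip(amplitude_cm * np.sin(2 * np.pi * freq_hz * t),
                               -RANGE_CM, RANGE_CM)  # stay inside the workspace
        return offset

    # Stream setpoints at 50 Hz for one 4 s cycle (0.25 Hz ~ resting breath rate).
    setpoints = [sinusoidal_offset(t) for t in np.arange(0.0, 4.0, 0.02)]
    ```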

    Spatial Programming for Industrial Robots through Task Demonstration

    We present an intuitive system for programming industrial robots by demonstration, using markerless gesture recognition and mobile augmented reality. The approach covers gesture-based task definition and adaptation through human demonstration, as well as task evaluation through augmented reality. A 3D motion-tracking system and a handheld device form the basis of the presented spatial programming system. In this publication, we present a prototype for programming an assembly sequence consisting of several pick-and-place tasks. A scene reconstruction provides pose estimation of known objects using the handheld's 2D camera, so the programmer can define the program through natural bare-hand manipulation of these objects with direct visual feedback in the augmented reality application. The program can be adapted by gestures and subsequently transmitted to an arbitrary industrial robot controller through a unified interface. Finally, we discuss an application of the presented spatial programming approach to robot-based welding tasks.
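
    As an illustration of how a demonstrated assembly sequence could be represented and replayed through a unified, controller-agnostic interface, consider the sketch below; the class and method names are assumptions for exposition, not the paper's implementation.

    ```python
    # Hypothetical data model for a demonstrated pick-and-place sequence; a
    # vendor-specific backend would implement the RobotController methods.
    from dataclasses import dataclass
    from typing import List, Tuple

    Pose = Tuple[float, float, float, float, float, float]  # x, y, z, roll, pitch, yaw

    @dataclass
    class PickPlaceTask:
        object_id: str      # known object recognized by the scene reconstruction
        pick_pose: Pose     # pose estimated from the demonstrator's manipulation
        place_pose: Pose    # target pose confirmed in the AR preview

    class RobotController:
        """Unified interface; concrete controllers implement these calls."""
        def move_to(self, pose: Pose) -> None: ...
        def grasp(self) -> None: ...
        def release(self) -> None: ...

    def execute_sequence(tasks: List[PickPlaceTask], robot: RobotController) -> None:
        for task in tasks:
            robot.move_to(task.pick_pose)
            robot.grasp()
            robot.move_to(task.place_pose)
            robot.release()
    ```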

    The Aesthetic Uncanny: Staging Dorian Gray

    This article discusses my theatrical adaptation of Oscar Wilde's The Picture of Dorian Gray (1891) for the Edinburgh Festival Fringe (2008). Freud's concept of the uncanny (1919) was treated as a purely aesthetic phenomenon and related to late-nineteenth-century social and literary preoccupations such as Christianity, the supernatural, and glamorous, criminal homosexuality. These considerations led to a conceptual ground plan that allowed for experiments during rehearsal in a form of theatrical shorthand.

    Deep Drone Racing: From Simulation to Reality with Domain Randomization

    Dynamically changing environments, unreliable state estimation, and operation under severe resource constraints are fundamental challenges that limit the deployment of small autonomous drones. We address these challenges in the context of autonomous, vision-based drone racing in dynamic environments. A racing drone must traverse a track with possibly moving gates at high speed. We enable this functionality by combining the performance of a state-of-the-art planning and control system with the perceptual awareness of a convolutional neural network (CNN). The resulting modular system is both platform- and domain-independent: it is trained in simulation and deployed on a physical quadrotor without any fine-tuning. The abundance of simulated data, generated via domain randomization, makes our system robust to changes in illumination and gate appearance. To the best of our knowledge, our approach is the first to demonstrate zero-shot sim-to-real transfer on the task of agile drone flight. We extensively test the precision and robustness of our system, both in simulation and on a physical platform, and show significant improvements over the state of the art.
    Comment: Accepted as a Regular Paper in the IEEE Transactions on Robotics. arXiv admin note: substantial text overlap with arXiv:1806.0854
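
    As a sketch of the domain-randomization idea described above (randomizing illumination and gate appearance so the CNN transfers zero-shot to the real track), consider the following; the specific parameters and their ranges are illustrative assumptions, not the authors' configuration.

    ```python
    # Each training image is rendered under a freshly sampled simulator
    # configuration; the parameter set and ranges here are hypothetical.
    import random

    def sample_randomized_scene():
        """Sample one simulator configuration for a training image."""
        return {
            "light_intensity": random.uniform(0.3, 1.5),    # dim to overexposed
            "light_azimuth_deg": random.uniform(0.0, 360.0),
            "gate_hue_shift": random.uniform(-0.5, 0.5),    # recolor the gates
            "gate_texture_id": random.randrange(20),        # swap gate textures
            "background_id": random.randrange(50),          # vary the backdrop
        }

    # Each training sample pairs a randomized rendering with the relative gate
    # pose the CNN must predict for the planner.
    dataset = [sample_randomized_scene() for _ in range(10000)]
    ```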

    Virtual Reality based Telerobotics Framework with Depth Cameras

    This work describes a virtual reality (VR) based robot teleoperation framework that relies on scene visualization from depth cameras and implements human-robot and human-scene interaction gestures. We suggest that mounting a camera on the slave robot's end-effector (an in-hand camera) allows the operator to achieve better visualization of the remote scene and improves task performance. We experimentally compared the operator's ability to understand the remote environment in four visualization modes: a single external static camera, an in-hand camera, an in-hand camera plus an external static camera, and an in-hand camera with OctoMap occupancy mapping. The last option gave the operator the best understanding of the remote environment while requiring relatively little communication bandwidth. Consequently, we propose grasping methods compatible with VR-based teleoperation using the in-hand camera. Video demonstration: https://youtu.be/3vZaEykMS_E
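
    To illustrate why the occupancy-mapping mode is bandwidth-friendly, the sketch below fuses depth-camera points into a coarse voxel set so that only occupied cells need to be transmitted; it uses a flat voxel set rather than the actual OctoMap octree, and all names and the resolution are illustrative assumptions.

    ```python
    # Simplified stand-in for OctoMap-style integration of depth frames.
    import numpy as np

    VOXEL = 0.05  # 5 cm voxels; resolution chosen for illustration only

    def integrate_cloud(occupied: set, points_world: np.ndarray) -> set:
        """Mark voxels hit by depth-camera points as occupied."""
        keys = np.floor(points_world / VOXEL).astype(int)
        occupied.update(map(tuple, keys))
        return occupied

    # Transmitting only the occupied voxel keys instead of full depth frames
    # is what keeps the communication bandwidth small.
    cloud = np.random.rand(1000, 3)  # stand-in for one transformed depth frame
    occupied_voxels = integrate_cloud(set(), cloud)
    ```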

    PDA Interface for Humanoid Robots

    To fulfill the need for natural, user-friendly means of interacting with and reprogramming toy and humanoid robots, a growing trend in robotics research investigates the integration of methods for gesture recognition and natural speech processing. Unfortunately, efficient methods for speech and vision processing remain computationally expensive and thus cannot easily be exploited on cost- and size-limited platforms. Personal Digital Assistants (PDAs) are ideal low-cost platforms for providing simple speech- and vision-based communication for a robot. This paper investigates the use of PDA interfaces to provide multi-modal means of interacting with humanoid robots. We present PDA applications in which the robot can track and imitate the user's arm and head motions and can learn a simple vocabulary to label objects and actions by associating the user's verbal utterances with the user's gestures. The PDA applications are tested on two humanoid platforms: Robota, a mini doll-shaped robot used as an educational toy with children, and DB, a full-body humanoid robot with 30 degrees of freedom.
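
    The vocabulary-learning idea, associating a heard word with the object selected by a concurrent gesture, can be sketched as a simple co-occurrence counter. This toy model is an assumption for exposition, not the paper's actual learning method.

    ```python
    # Hypothetical word-to-object association by co-occurrence counting.
    from collections import defaultdict

    # counts[word][object_id] -> how often the word co-occurred with that object
    counts = defaultdict(lambda: defaultdict(int))

    def observe(word: str, pointed_object: str) -> None:
        """Record one co-occurrence of an utterance and a gesture target."""
        counts[word][pointed_object] += 1

    def label_for(word: str) -> str:
        """Return the object most often selected while the word was spoken."""
        return max(counts[word], key=counts[word].get)

    observe("ball", "obj_3"); observe("ball", "obj_3"); observe("ball", "obj_1")
    assert label_for("ball") == "obj_3"
    ```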

    Single firm product diffusion model for single-function and fusion products

    The prosperity of multifunction products (also referred to as fusion products) has changed the landscape of the marketplace for several electronics products. To illustrate, as fusion products gain popularity in cellular phones and office machines, we observe that single-function products (e.g., stand-alone PDAs and stand-alone scanners) gradually disappear from the market as they are supplanted by fusion products. This paper presents a product diffusion model that captures the diffusion transition from two distinct single-function products to one fusion product. We investigate the optimal launch time of the fusion product under various conditions and conduct a numerical analysis to demonstrate the dynamics among the three products. As in previous multi-generation single-product diffusion models, we find that the planning horizon, the products' relative profit margins, and substitution effects are important to the launch-time decision. However, several unique factors warrant special consideration when a firm introduces a fusion product to the market: the firm's competitive role, buyers' consolidation of purchases into a multifunction product, the fusion technology, and the age of current single-function products.
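
    A hedged numerical sketch of this kind of dynamics follows: two Bass-style single-function products diffuse independently until the fusion product launches, after which a fraction of their would-be adopters consolidate into the fusion product. The coefficients, the launch time, and the substitution rule are illustrative, not the paper's model.

    ```python
    # Toy three-product diffusion simulation; all parameters are assumptions.
    def simulate(T=40, launch=10, m=(1000.0, 800.0, 1500.0),
                 p=0.03, q=0.38, switch=0.15):
        """Return per-period adopters for products A, B, and fusion F."""
        N = [0.0, 0.0, 0.0]  # cumulative adopters of A, B, F
        history = []
        for t in range(T):
            adopts = []
            for i in range(3):
                if i == 2 and t < launch:
                    adopts.append(0.0)  # fusion product not yet on the market
                    continue
                adopts.append((p + q * N[i] / m[i]) * (m[i] - N[i]))  # Bass rate
            if t >= launch:
                # A share of would-be A/B adopters consolidate into F instead.
                for i in (0, 1):
                    moved = switch * adopts[i]
                    adopts[i] -= moved
                    adopts[2] += moved
            for i in range(3):
                N[i] += adopts[i]
            history.append(tuple(adopts))
        return history
    ```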

    JAWS: Just A Wild Shot for Cinematic Transfer in Neural Radiance Fields

    This paper presents JAWS, an optimization-driven approach that achieves robust transfer of visual cinematic features from a reference in-the-wild video clip to a newly generated clip. To this end, we rely on an implicit neural representation (INR) to compute a clip that shares the same cinematic features as the reference clip. We propose a general formulation of a camera optimization problem in an INR that computes extrinsic and intrinsic camera parameters as well as timing. By leveraging the differentiability of neural representations, we can back-propagate our designed cinematic losses, measured on proxy estimators, through a NeRF network directly to the proposed cinematic parameters. We also introduce specific enhancements, such as guidance maps, to improve overall quality and efficiency. The results demonstrate the capacity of our system to replicate well-known camera sequences from movies, adapting the framing, camera parameters, and timing of the generated video clip to maximize similarity with the reference clip.
    Comment: CVPR 2023. Project page with videos and code: http://www.lix.polytechnique.fr/vista/projects/2023_cvpr_wan
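
    The optimization loop the abstract describes can be sketched as follows: the camera parameters are the only trainable tensor, and a cinematic loss on proxy features is back-propagated through a differentiable renderer. Here a dummy function stands in for the NeRF and the proxy estimators, and all names and dimensions are assumptions.

    ```python
    # Minimal sketch of differentiable camera optimization; not JAWS itself.
    import torch

    cam = torch.zeros(8, requires_grad=True)  # 6-DoF pose + focal + time offset
    optimizer = torch.optim.Adam([cam], lr=1e-2)

    def render_proxy(cam_params: torch.Tensor) -> torch.Tensor:
        """Placeholder differentiable render -> proxy cinematic features."""
        return torch.tanh(cam_params[:4] * 2.0)  # stands in for NeRF + estimator

    reference_features = torch.tensor([0.3, -0.1, 0.5, 0.0])  # from the wild clip

    for step in range(200):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(render_proxy(cam), reference_features)
        loss.backward()   # gradients flow to the camera parameters directly
        optimizer.step()
    ```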