
    A multi-projector CAVE system with commodity hardware and gesture-based interaction

    Spatially-immersive systems such as CAVEs provide users with surrounding virtual worlds by projecting 3D models on multiple screens around the viewer. Compared to alternative immersive systems such as HMDs, CAVE systems are a powerful tool for collaborative inspection of virtual environments due to better use of peripheral vision, lower sensitivity to tracking errors, and richer communication among users. Unfortunately, traditional CAVE setups require sophisticated equipment, including stereo-ready projectors and tracking systems, with high acquisition and maintenance costs. In this paper we present the design and construction of a passive-stereo, four-wall CAVE system based on commodity hardware. Our system works with any mix of a wide range of projector models that can be replaced independently at any time, and achieves high resolution and brightness at minimum cost. The key ingredients of our CAVE are a self-calibration approach that guarantees continuity across the screens, and a gesture-based interaction approach based on a clever combination of skeletal data from multiple Kinect sensors.
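
    The abstract does not detail how the skeletal streams are combined; a common baseline for multi-Kinect fusion is a confidence-weighted average of joint positions after registering every sensor into a shared world frame. The sketch below (Python/NumPy, hypothetical function name) illustrates only that baseline idea, not the authors' actual method.

    ```python
    import numpy as np

    def fuse_skeletons(skeletons, confidences):
        """Confidence-weighted average of per-joint positions from several
        sensors, assuming each skeleton was already transformed into a
        common world frame.
        skeletons:   list of (n_joints, 3) arrays, one per sensor
        confidences: list of (n_joints,) arrays of tracking confidence"""
        pos = np.stack(skeletons)               # (n_sensors, n_joints, 3)
        w = np.stack(confidences)[..., None]    # (n_sensors, n_joints, 1)
        return (w * pos).sum(axis=0) / np.clip(w.sum(axis=0), 1e-9, None)
    ```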

    Intention recognition for gaze controlled robotic minimally invasive laser ablation

    Eye tracking technology has shown promising results for allowing hands-free control of robotically mounted cameras and tools. However, existing systems offer only limited support for the full range of camera motions in a safe, intuitive manner. This paper introduces a framework for the recognition of surgeon intention, allowing activation and control of the camera through natural gaze behaviour. The system is resistant to noise such as blinking, while allowing the surgeon to look away safely at any time. Furthermore, this paper presents a novel approach to controlling the translation of the camera along its optical axis using a combination of eye tracking and stereo reconstruction. Combining the two allows the system to determine which point in 3D space the user is fixating on, enabling a translation of the camera to the optimal viewing distance. In addition, the eye tracking information is used to perform automatic targeting for laser ablation: the desired target point of the laser, mounted on a separate robotic arm, is determined from the gaze, removing the need to manually adjust the laser's target point before starting each new ablation. The calibration methodology used to obtain millimetre precision for the laser targeting without the aid of visual servoing is described. Finally, a user study validating the system is presented, showing clear improvement, with median task times under half those of a manually controlled robotic system.
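
    As a sketch of the underlying geometry (assumptions: pinhole model and a dense depth map from the stereo reconstruction; names are hypothetical, not from the paper), the 3D fixation point can be obtained by looking up the stereo depth at the fixated pixel and back-projecting it through the camera intrinsics:

    ```python
    import numpy as np

    def fixation_point_3d(gaze_px, depth_map, K):
        """Back-project the fixated pixel into camera coordinates.
        gaze_px:   (u, v) pixel the surgeon is fixating
        depth_map: dense depth image in metres from the stereo pair
        K:         3x3 intrinsic matrix of the reference camera"""
        u, v = int(round(gaze_px[0])), int(round(gaze_px[1]))
        z = depth_map[v, u]                    # depth along the optical axis
        x = (u - K[0, 2]) * z / K[0, 0]
        y = (v - K[1, 2]) * z / K[1, 1]
        return np.array([x, y, z])             # fixation point, camera frame
    ```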

    Computer- and robot-assisted Medical Intervention

    Medical robotics includes assistive devices used by the physician to make diagnostic or therapeutic practice easier and more efficient. This chapter focuses on such systems. It introduces the general field of Computer-Assisted Medical Interventions, its aims and its different components, and describes the place of robots in that context. The evolution of general design and control paradigms in the development of medical robots is presented, and issues specific to this application domain are discussed. A view of existing systems, on-going developments and future trends is given, and a case study is detailed. Other types of robotic help in the medical environment (such as assisting a handicapped person, rehabilitating a patient, or replacing damaged or removed limbs or organs) are outside the scope of this chapter.
    Comment: Handbook of Automation, Shimon Nof (Ed.), 2009

    Technical note: TRACKFlow, a new versatile microscope system for fission track analysis

    We present TRACKFlow, a new system with dedicated modules for the fission track (FT) laboratory. It is based on the motorised Nikon Eclipse Ni-E upright microscope with the Nikon DS-Ri2 full-frame camera and is embedded within the Nikon NIS-Elements Advanced Research software package. TRACKFlow decouples image acquisition from analysis to reduce scheduling pressure on the microscope. The system further aims to be versatile and adaptable to multiple preparation protocols and analysis approaches. It is suited to small-scale laboratories and is also ready for upscaling to high-throughput imaging. The versatility of the system, based on the operator's full access to the NIS-Elements package, exceeds that of other FT systems and extends beyond the dedicated FT microscope towards a general microscope for the Earth Sciences with dedicated modules for FT research. TRACKFlow consists of a number of user-friendly protocols based on the well plate design, which allows sequential scanning of multiple samples without the need to replace the slide on the stage. All protocols include a sub-protocol to scan a map of the mount for easy navigation through the samples on the stage. Two protocols are designed for the External Detector Method (EDM) and the LA–ICP–MS apatite fission track (LAFT) approach, with tools for repositioning and calibration to the external detector. Two other tools are designed for large crystals, such as the Durango age standard and U-doped glass external detectors. These protocols generate a regular grid of points and inspect whether each point is suitable for analysis; both also include an option to image each retained point. One further protocol measures etch pit diameters, and a last protocol prepares a list of coordinates for correlative microscopy. In a later phase of development, TRACKFlow can be expanded towards fully autonomous calibration, grain detection and imaging.
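
    The grid-based protocols can be pictured as a generate-then-filter loop. The Python sketch below is a hypothetical illustration of that logic only, not the actual NIS-Elements implementation:

    ```python
    import numpy as np

    def candidate_grid(x0, y0, x1, y1, pitch):
        """Regular grid of candidate analysis points over a crystal's
        bounding box (stage coordinates, same units as pitch)."""
        xs = np.arange(x0, x1, pitch)
        ys = np.arange(y0, y1, pitch)
        return [(float(x), float(y)) for y in ys for x in xs]

    def retained_points(points, is_suitable):
        """Keep only points passing a suitability test, e.g. 'inside the
        polished surface and free of cracks or inclusions'."""
        return [p for p in points if is_suitable(p)]
    ```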

    Challenges in development of the American Sign Language Lexicon Video Dataset (ASLLVD) corpus

    The American Sign Language Lexicon Video Dataset (ASLLVD) consists of videos of >3,300 ASL signs in citation form, each produced by 1-6 native ASL signers, for a total of almost 9,800 tokens. This dataset, including multiple synchronized videos showing the signing from different angles, will be shared publicly once the linguistic annotations and verifications are complete. Linguistic annotations include gloss labels, sign start and end time codes, start and end handshape labels for both hands, and morphological and articulatory classifications of sign type. For compound signs, the dataset includes annotations for each morpheme. To facilitate computer vision-based sign language recognition, the dataset also includes numeric ID labels for sign variants, video sequences in uncompressed raw format, camera calibration sequences, and software for skin region extraction. We discuss here some of the challenges involved in the linguistic annotations and categorizations. We also report an example computer vision application that leverages the ASLLVD: the formulation employs a HandShapes Bayesian Network (HSBN), which models the transition probabilities between start and end handshapes in monomorphemic lexical signs. Further details and statistics for the ASLLVD dataset, as well as information about annotation conventions, are available from http://www.bu.edu/asllrp/lexicon
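
    The HSBN's central table is the start-to-end handshape transition distribution. One straightforward way to estimate such a table from the annotations is smoothed counting; the sketch below (hypothetical, not the authors' implementation) uses Laplace smoothing so unseen transitions keep a small non-zero probability:

    ```python
    from collections import Counter

    def transition_probs(tokens, alpha=1.0):
        """Estimate P(end handshape | start handshape) from a list of
        (start_shape, end_shape) annotation pairs, with Laplace smoothing."""
        shapes = sorted({s for pair in tokens for s in pair})
        counts = Counter(tokens)
        table = {}
        for s in shapes:
            total = sum(counts[(s, e)] for e in shapes) + alpha * len(shapes)
            table[s] = {e: (counts[(s, e)] + alpha) / total for e in shapes}
        return table
    ```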

    Paper-based Mixed Reality Sketch Augmentation as a Conceptual Design Support Tool

    This undergraduate student paper explores the use of mixed reality techniques as a support tool for conceptual design. A proof of concept was developed to illustrate the principle, and using it as an example, a small group of designers was interviewed to determine their views on the technology. These interviews are the main contribution of this paper. Several interesting applications were identified, suggesting possible usage in a wide range of domains. Paper-based sketching, mixed reality and sketch augmentation techniques complement each other, and the combination results in a highly intuitive interface.

    Calibration by correlation using metric embedding from non-metric similarities

    This paper presents a new intrinsic calibration method that allows us to calibrate a generic single-view point camera just by waving it around. From the video sequence obtained while the camera undergoes random motion, we compute the pairwise time correlation of the luminance signal for a subset of the pixels. We show that, if the camera undergoes random uniform motion, the pairwise correlation of any pixel pair is a function of the distance between the pixel directions on the visual sphere. This leads to formalizing calibration as a problem of metric embedding from non-metric measurements: we want to find the disposition of pixels on the visual sphere from similarities that are an unknown function of the distances. This problem is a generalization of multidimensional scaling (MDS) that has so far resisted a comprehensive observability analysis (can we reconstruct a metrically accurate embedding?) and a solid generic solution (how to do so?). We show that observability depends both on local geometric properties (curvature) and on global topological properties (connectedness) of the target manifold. In contrast to the Euclidean case, on the sphere we can recover the scale of the point distribution, therefore obtaining a metrically accurate solution from non-metric measurements. We describe an algorithm that is robust across manifolds and can recover a metrically accurate solution when the metric information is observable. We demonstrate the performance of the algorithm for several cameras (pin-hole, fish-eye, omnidirectional) and obtain results comparable to calibration using classical methods. Additional synthetic benchmarks show that the algorithm performs as theoretically predicted for all corner cases of the observability analysis.
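
    The overall pipeline (correlate, map similarities to distances, embed, normalise to the sphere) can be caricatured in a few lines. The sketch below substitutes the paper's observability-aware algorithm with classical MDS and an ad hoc similarity-to-distance proxy, so it conveys only the shape of the computation, not the actual method:

    ```python
    import numpy as np

    def embed_pixels(luma, dim=3):
        """luma: (n_pixels, n_frames) luminance signals recorded while the
        camera moves. Returns unit vectors approximating the pixel
        directions on the visual sphere, up to a global rotation."""
        c = np.corrcoef(luma)                     # pairwise similarities
        d = np.sqrt(np.clip(1.0 - c, 0.0, None))  # crude monotone proxy
        n = d.shape[0]
        j = np.eye(n) - np.ones((n, n)) / n       # centering matrix
        b = -0.5 * j @ (d ** 2) @ j               # classical MDS Gram matrix
        w, v = np.linalg.eigh(b)                  # ascending eigenvalues
        x = v[:, -dim:] * np.sqrt(np.clip(w[-dim:], 0.0, None))
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    ```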

    Academical and Research Wiimote Applications

    IADIS Multi Conference on Computer Science and Information Systems 2008, Amsterdam, The Netherlands, July 22-24, 2008.
    This paper proposes the Wii Remote controller, better known as the Wiimote, as a useful tool for educators and researchers. The rapid development of fields such as Wireless Sensor and Actuator Networks or Hybrid Systems, and of their applications, requires engineers with solid knowledge in these areas. Here the Wiimote is a compelling alternative to other options, thanks to its wide variety of analog and digital components at a very low price and the good documentation available on the Internet. As this paper shows, the possible academic and research uses of the Wiimote are almost endless and cover many interesting problems in control engineering.
