
    Augmented Reality

    Augmented reality (AR) is a natural development from virtual reality (VR), which emerged several decades earlier, and it complements VR in many ways. Because the user can see real and virtual objects simultaneously, AR is far more intuitive, although it is not free of human-factors and other constraints. AR applications also demand less time and effort, since the entire virtual scene and environment need not be constructed. In this book, several new and emerging application areas of AR are presented, divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security and surveillance. The second section deals with AR in medicine, biology, and the human body. The third and final section presents a number of new and useful applications in daily living and learning.

    Virtuality Supports Reality for e-Health Applications

    Strictly speaking, the word “virtuality” and the expression “virtual reality” refer to things simulated or created by a computer that do not really exist. More and more often, such things are labelled with the adjective “virtual” or “digital”, or with the prefixes “e-” or “cyber-”: we speak, for instance, of virtual, digital, e- or cyber- communities, cash, business, greetings, books, and even pets. Virtuality offers interesting advantages over “simple” reality, since it can reproduce, augment and even overcome reality. Reproduction no longer means, as it did until recently, that a camera films a scene from a fixed point of view and a player shows it; today a scene can be reproduced dynamically, moving the point of view in practically any direction, so that “real” becomes “realistic”. Virtuality can augment reality in the sense that graphics are pulled out of a television screen (or computer, laptop or handheld display) and integrated with the real-world environment, adding information that is useful, and often essential, for the user. For example, apps are now available even for iPhone users that overlay graphical information on the live camera view of the surroundings, so that the heights of mountains, the names of streets, or the positions of satellites can be read directly over the real mountains, streets and sky. But virtuality can even overcome reality, since it can produce and make visible what is hidden, inaccessible or long gone, and can even provide an alternative, unreal world. We can thus see virtually into matter down to atomic dimensions, take a virtual tour of a past century, or give visibility to hypothetical lands that would otherwise be difficult or impossible to describe. These are the fundamental reasons for the naturally growing interest in “producing” virtuality.
Here we discuss some of the available methods for “producing” virtuality, pointing out in particular some of the steps necessary for “crossing” from reality “towards” virtuality. Between these two parallel worlds, the “real” and the “virtual”, interactions can also exist, and this can lead to further advantages. We treat both “production” and “interaction”, with the aim of focusing attention on how virtuality can be applied in biomedical fields, since virtual reality has been shown to furnish important and relevant benefits in e-health applications. For example, virtual tomography combines 3D anatomical imaging from several CT (computed tomography) or MRI (magnetic resonance imaging) scans with a computer-generated kinesthetic interface, yielding a useful tool for diagnosis and treatment. With new endovascular simulation techniques, a head-mounted display superimposes 3D images on the patient’s skin to guide implantable devices inside blood vessels. Among all possibilities, we chose to investigate the fields where we believe virtual applications can furnish the most meaningful advantages: surgery simulation, cognitive and neurological rehabilitation, postural and motor training, and brain-computer interfaces. We offer the reader a necessarily partial but nonetheless fundamental view of what virtual reality can do to improve medical treatment and, in the end, the quality of our lives.

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented; surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments, following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced; experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living.
The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
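The gaze-selection step described above (an instrument is chosen once the surgeon fixates it) is commonly implemented with a dwell-time criterion. The sketch below is a minimal illustration of that idea, not the thesis implementation; the screen regions, sample format and one-second dwell threshold are all assumptions.

```python
DWELL_SECONDS = 1.0  # assumed dwell threshold for a selection

class GazeSelector:
    """Minimal dwell-time gaze selection: a region is selected once the
    gaze stays inside its screen rectangle for a full dwell period."""

    def __init__(self, regions, dwell=DWELL_SECONDS):
        self.regions = regions   # name -> (x0, y0, x1, y1) screen rectangles
        self.dwell = dwell
        self.current = None      # region currently being fixated
        self.since = None        # timestamp when the fixation started

    def update(self, x, y, t):
        """Feed one gaze sample (pixels, seconds); return a selection or None."""
        hit = next((name for name, (x0, y0, x1, y1) in self.regions.items()
                    if x0 <= x <= x1 and y0 <= y <= y1), None)
        if hit != self.current:          # gaze moved to a new region (or away)
            self.current, self.since = hit, t
            return None
        if hit is not None and t - self.since >= self.dwell:
            self.since = t               # reset so the selection does not repeat
            return hit
        return None

# Hypothetical instrument regions on the display:
selector = GazeSelector({"forceps": (0, 0, 100, 100),
                         "scissors": (200, 0, 300, 100)})
assert selector.update(50, 50, 0.0) is None       # fixation starts
assert selector.update(55, 52, 0.5) is None       # still dwelling
assert selector.update(52, 48, 1.1) == "forceps"  # dwell threshold reached
```

A real system would add fixation filtering and calibration, but the dwell criterion is the core of touchless selection.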

    Gesture Recognition and Control for Semi-Autonomous Robotic Assistant Surgeons

    The next stage of robotics development is to introduce autonomy and cooperation with human agents in tasks that require high levels of precision and/or exert considerable physical strain. To guarantee the highest possible safety standards, the best approach is to devise a deterministic automaton that performs identically for each operation. Clearly, such an approach inevitably fails to adapt to changing environments or different human companions. In a surgical scenario, the highest variability occurs in the timing of the different actions performed within the same phase. This thesis explores the solutions adopted in pursuing automation in robotic minimally invasive surgery (R-MIS) and presents a novel cognitive control architecture that uses a multi-modal neural network, trained on a cooperative task performed by human surgeons, to produce an action segmentation that provides the required timing for actions, while maintaining full phase-execution control via a deterministic supervisory controller and full execution safety via a velocity-constrained model-predictive controller.

    A Fuzzy Logic Architecture for Rehabilitation Robotic Systems

    Robots have become widely incorporated into rehabilitation over the last decade to compensate for lost functions in disabled individuals. Controlling rehabilitation robots remotely brings many benefits, including, but not restricted to, shorter hospital stays, lower cost, and a higher level of care. The main goal of this work is an effective solution for caring for patients remotely. The problem of the remote control of rehabilitation robots is ongoing and highly challenging. In this paper, a remote wrist rehabilitation system is presented. The developed system is a sophisticated robot providing the two wrist movements (flexion/extension and abduction/adduction). Additionally, the proposed system provides a software interface enabling physiotherapists to control the rehabilitation process remotely. The patient’s safety during therapy is achieved through the integration of a fuzzy controller in the system’s control architecture. The fuzzy controller adjusts the robot’s action according to the pain felt by the patient; with this fuzzy logic approach, the system can adapt effectively to the patient’s condition. The Message Queuing Telemetry Transport (MQTT) protocol is used to overcome latency during the human-robot interaction. Using a Kinect camera, the control technique is made gestural: the physiotherapist’s gestures are detected and transmitted to the software interface, where they are processed and sent to the robot. The acquired measurements are recorded in a database that can later be used to monitor the patient’s progress during the treatment protocol. The experimental results show the effectiveness of the developed remote rehabilitation system.
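A pain-adaptive fuzzy controller of the kind described can be sketched as follows. This is a minimal illustration, not the paper's controller: the triangular membership functions, the 0-10 pain scale, and the per-rule output speeds are all assumptions. The reported pain is fuzzified into low/medium/high, and the robot's speed scale is the membership-weighted average of the rule outputs.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pain_to_speed(pain):
    """Map a reported pain level (0..10) to a robot speed scale in [0, 1]."""
    mu_low  = tri(pain, -1, 0, 5)    # rule: low pain    -> full speed
    mu_med  = tri(pain,  2, 5, 8)    # rule: medium pain -> half speed
    mu_high = tri(pain,  5, 10, 11)  # rule: high pain   -> stop
    weights = (mu_low, mu_med, mu_high)
    speeds  = (1.0, 0.5, 0.0)
    total = sum(weights)
    # Weighted-average defuzzification; stop outright if no rule fires.
    return sum(w * s for w, s in zip(weights, speeds)) / total if total else 0.0
```

With these assumed memberships, `pain_to_speed(0)` gives full speed, `pain_to_speed(5)` half speed, and `pain_to_speed(10)` stops the robot, with smooth interpolation in between.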

    ISMCR 1994: Topical Workshop on Virtual Reality. Proceedings of the Fourth International Symposium on Measurement and Control in Robotics

    This symposium on measurement and control in robotics included sessions on: (1) rendering, including tactile perception and applied virtual reality; (2) applications in simulated medical procedures and telerobotics; (3) tracking sensors in a virtual environment; (4) displays for virtual reality applications; (5) sensory feedback, including a virtual environment application with partial gravity simulation; and (6) applications in education, entertainment, technical writing, and animation.

    A white paper: NASA virtual environment research, applications, and technology

    Research support for virtual environment technology development has been part of NASA's human factors research program since 1985. Under the auspices of the Office of Aeronautics and Space Technology (OAST), initial funding was provided to the Aerospace Human Factors Research Division, Ames Research Center, which resulted in the origination of this technology. Since 1985, other Centers have begun using and developing this technology. At each research and space flight center, NASA missions have been major drivers of the technology. This White Paper was the joint effort of all the Centers that have been involved in the development of the technology and its applications to their unique missions. Appendix A lists those who worked to prepare the document, directed by Dr. Cynthia H. Null, Ames Research Center, and Dr. James P. Jenkins, NASA Headquarters. This White Paper describes the technology and its applications in NASA Centers (Chapters 1, 2 and 3), the potential roles it can take in NASA (Chapters 4 and 5), and a roadmap of the next 5 years (FY 1994-1998). The audience for this White Paper consists of managers, engineers, scientists and the general public with an interest in Virtual Environment technology. Those who read the paper will determine whether this roadmap, or others, are to be followed.

    Exploration Of Robotics Need In The Medical Field And Robotic Arm Operation Via Glove Control

    This thesis project is an exercise in gaining hands-on experience redesigning and modifying a robotic system. It also involves understanding the current need for robotic applications in hospital settings. To achieve this, a thorough literature review of the current state of robotics in hospital settings was conducted, and a number of interviews with medical care professionals were completed. Three main themes emerged from the literature review and five from the interviews; these will be presented in this thesis report. The next phase of the project involved redesigning a system composed of two main parts: a glove and a robotic arm. The glove consists of multiple flex sensors and an inertial measurement unit (IMU) that send data to an Arduino, which processes the data and transmits a signal over Bluetooth to the robotic arm. The robotic arm consists of servo motors that move according to the signal received from the glove. The results of the current performance of the system will be presented.
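The glove-to-arm pipeline described above typically reduces to mapping each raw flex-sensor reading to a servo angle before transmission. The sketch below illustrates that mapping only; the 10-bit ADC range, the 0-180 degree servo limits, and the function names are assumptions for illustration, not the project's calibration.

```python
FLEX_MIN, FLEX_MAX = 200, 800    # assumed usable range of a 10-bit flex-sensor ADC
SERVO_MIN, SERVO_MAX = 0, 180    # typical hobby-servo angle range (degrees)

def flex_to_angle(reading):
    """Clamp one raw flex reading and map it linearly to a servo angle."""
    reading = max(FLEX_MIN, min(FLEX_MAX, reading))
    span = (reading - FLEX_MIN) / (FLEX_MAX - FLEX_MIN)
    return round(SERVO_MIN + span * (SERVO_MAX - SERVO_MIN))

def glove_frame_to_command(readings):
    """One glove sample (one reading per finger) -> servo angles for the arm."""
    return [flex_to_angle(r) for r in readings]

# A straight finger, a half-bent finger, and a fully bent finger:
assert glove_frame_to_command([200, 500, 800]) == [0, 90, 180]
```

On the actual hardware this mapping would run on the Arduino, with the resulting angles packed into the Bluetooth message that drives the servos.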

    Multimodal human hand motion sensing and analysis - a review
