    Technology of swallowable capsule for medical applications

    Medical technology has undergone major breakthroughs in recent years, especially in the area of examination tools for diagnostic purposes. This paper reviews swallowable capsule technology for the examination of the gastrointestinal system for various diseases. The wireless camera pill offers a more advanced method for diagnosing gastrointestinal diseases than many traditional examination methods, such as gastroscopy with an endoscope. After years of innovation, commercial swallowable pills have been produced and applied in clinical practice. These smart pills can cover the examination of the gastrointestinal system and not only provide physicians with far more useful data than is available from traditional methods, but also eliminate the painful endoscopy procedure. In this paper, the key state-of-the-art technologies in existing Wireless Capsule Endoscopy (WCE) systems are fully reported and recent research progress related to these technologies is reviewed. The paper ends with a further discussion of the current technical bottlenecks and future research in this area.

    Innovation in Mobile Learning: A European Perspective

    In the evolving landscape of mobile learning, European researchers have conducted significant mobile learning projects, representing a distinct perspective on mobile learning research and development. Our paper aims to explore how these projects have arisen, showing the driving forces of European innovation in mobile learning. We propose context as a central construct in mobile learning and examine theories of learning for the mobile world, based on physical, technological, conceptual, social and temporal mobility. We also examine the impacts of mobile learning research on educational practices and the implications for policy. Throughout, we identify lessons learnt from European experiences to date.

    Shape and deformation measurement using heterodyne range imaging technology

    Range imaging is emerging as a promising alternative technology for applications that require non-contact visual inspection of object deformation and shape. Previously, we presented a solid-state full-field heterodyne range imaging device capable of capturing three-dimensional images with sub-millimetre range resolution. Using a heterodyne indirect time-of-flight configuration, this system simultaneously measures distance (and intensity) for each pixel in a camera's field of view. In this paper we briefly describe our range imaging system and its principle of operation. By performing measurements on several metal objects, we demonstrate the potential capabilities of this technology for surface profiling and deformation measurement. In addition to verifying system performance, the reported examples highlight some important system limitations. With these in mind, we subsequently discuss the further developments required to enable the use of this device as a robust and practical tool in non-destructive testing and measurement applications.
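
    The abstract mentions a heterodyne indirect time-of-flight configuration but not its processing chain. As a rough, hedged illustration of the general principle only, the sketch below recovers per-pixel range from the phase of the low-frequency beat signal using the standard relation d = c*phi / (4*pi*f_mod); the frame count, the 40 MHz modulation frequency, and all names are assumptions for illustration, not details taken from the paper.

        # Hypothetical sketch -- not the authors' implementation.
        import numpy as np

        C = 299_792_458.0   # speed of light, m/s
        F_MOD = 40e6        # assumed illumination modulation frequency, Hz

        def range_from_frames(frames):
            """frames: (N, H, W) beat-signal samples spanning one beat period
            per pixel. Returns per-pixel distance in metres."""
            n = frames.shape[0]
            k = np.arange(n).reshape(-1, 1, 1)
            # Phase of the first DFT bin of the beat signal at each pixel
            bin1 = np.sum(frames * np.exp(-2j * np.pi * k / n), axis=0)
            phase = np.mod(np.angle(bin1), 2 * np.pi)   # wrap to [0, 2*pi)
            # Standard indirect time-of-flight phase-to-distance relation
            return C * phase / (4 * np.pi * F_MOD)

    At 40 MHz modulation this gives an unambiguous range of roughly 3.75 m; the actual device parameters are not stated in the abstract.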

    Exploring Robot Teleoperation in Virtual Reality

    This thesis presents research on VR-based robot teleoperation, with a focus on remote environment visualisation in virtual reality, the effects of remote environment reconstruction scale in virtual reality on the human operator's ability to control the robot, and the human operator's visual attention patterns when teleoperating a robot from virtual reality. A VR-based robot teleoperation framework was developed; it is compatible with various robotic systems and cameras, allowing for teleoperation and supervised control with any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices. Point clouds are a common way to visualise remote environments in 3D, but they often have distortions and occlusions, making it difficult to accurately represent objects' textures. This can lead to poor decision-making during teleoperation if objects are inaccurately represented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation. Two studies were conducted to understand the effect of virtual world dynamic scaling on the teleoperation workflow. The first study investigated the use of rate mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual world scale. The results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload. The second study examined how operators used the virtual world scale in supervised control, comparing the scales participants chose at the beginning and end of a 3-day experiment. The results showed that as operators became better at the task, they as a group adopted a different virtual world scale, and participants' prior video gaming experience also affected the scale they chose. Similarly, the visual attention study investigated how the human operator's visual attention changes as they become better at teleoperating a robot using the framework. The results revealed which objects in the VR-reconstructed remote environment were most important to operators, as indicated by their visual attention patterns, as well as shifts in their visual priorities as they became better at teleoperating the robot. The study also demonstrated that operators' prior video gaming experience affects their ability to teleoperate the robot and their visual attention behaviours.
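
    As a rough illustration of the difference between the constant and variable (scale-dependent) rate-mode mappings described above, the sketch below maps joystick deflection to end-effector speed either with a fixed gain or with a gain divided by the current virtual-world scale; the gain value, the scaling rule, and the class name are illustrative assumptions rather than the thesis's actual implementation.

        # Hypothetical sketch -- not the thesis's control code.
        from dataclasses import dataclass

        @dataclass
        class RateController:
            base_gain: float = 0.05   # assumed: m/s of end-effector speed per unit deflection
            variable: bool = False    # True: gain follows the VR world scale

            def command_velocity(self, joystick, world_scale=1.0):
                """joystick in [-1, 1]; world_scale > 1 means the remote scene
                is magnified in VR. Returns a signed end-effector speed (m/s)."""
                gain = self.base_gain
                if self.variable:
                    # Assumed rule: magnified scene -> slower robot, so apparent
                    # motion in VR stays roughly constant.
                    gain = self.base_gain / max(world_scale, 1e-6)
                return gain * max(-1.0, min(1.0, joystick))

        constant = RateController(variable=False)
        variable = RateController(variable=True)
        print(constant.command_velocity(0.8, world_scale=4.0))  # ~0.04 m/s
        print(variable.command_velocity(0.8, world_scale=4.0))  # ~0.01 m/s

    The sketch only shows the mapping structure; the thesis's findings concern how these two mappings affect task performance and perceived workload.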

    Education in the Wild: Contextual and Location-Based Mobile Learning in Action. A Report from the STELLAR Alpine Rendez-Vous Workshop Series

    Augmenting the field experience: a student-led comparison of techniques and technologies

    In this study we report on our experiences of creating and running a student fieldtrip exercise that allowed students to compare a range of approaches to the design of technologies for augmenting landscape scenes. The main study site is around Keswick in the English Lake District, Cumbria, UK, an attractive upland environment popular with tourists and walkers. The aim of the exercise for the students was to assess the effectiveness of various forms of geographic information in augmenting real landscape scenes, as mediated through a range of techniques and technologies. These techniques were: computer-generated acetate overlays showing annotated wireframe views from certain key points; a custom-designed application running on a PDA; a mediascape running on the mScape software on a GPS-enabled mobile phone; Google Earth on a tablet PC; and a head-mounted in-field Virtual Reality system. Each group of students had all five techniques available to them and was tasked with comparing them in the context of creating a visitor guide to the area centred on the field centre. Here we summarise their findings and reflect upon some of the broader research questions emerging from the project.

    Handheld Guides in Inspection Tasks: Augmented Reality versus Picture

    Inspection tasks focus on observation of the environment and are required in many industrial domains. Inspectors usually execute these tasks by using a guide, such as a paper manual, and directly observing the environment. The effort required to match the information in a guide with the information in the environment, and the constant gaze shifts required between the two, can severely lower an inspector's work efficiency. Augmented reality (AR) allows the information in a guide to be overlaid directly on the environment. This can decrease the amount of effort required for information matching, thus increasing work efficiency. AR guides on head-mounted displays (HMDs) have been shown to increase efficiency. Handheld AR (HAR) is not as efficient as HMD-AR in terms of manipulability, but it is more practical and features better information input and sharing capabilities. In this study, we compared two handheld guides: an AR interface that shows 3D registered annotations, that is, annotations having a fixed 3D position in the AR environment, and a non-AR picture interface that displays non-registered annotations on static images. We focused on inspection tasks that involve high information density and require the user to move, as well as to perform several viewpoint alignments. The results of our comparative evaluation showed that use of the AR interface resulted in lower task completion times, fewer errors, fewer gaze shifts, and a lower subjective workload. We are the first to present findings of a comparative study of an HAR interface and a picture interface when used in tasks that require the user to move and execute viewpoint alignments, focusing only on direct observation. Our findings can be useful for AR practitioners and psychology researchers.
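
    For readers unfamiliar with the terminology, a hedged sketch of what "3D registered" implies computationally is given below: the AR annotation is anchored at a fixed world position and re-projected into the handheld camera image every frame, whereas the picture interface draws a non-registered annotation at a fixed pixel location on a static image. The pinhole camera model, intrinsics, and function names are assumptions for illustration only, not the study's implementation.

        # Hypothetical sketch of per-frame re-projection of a registered annotation.
        import numpy as np

        def project_registered(anchor_world, cam_pose_world, K):
            """anchor_world: (3,) annotation position fixed in the environment.
            cam_pose_world: (4, 4) camera-to-world transform for this frame.
            K: (3, 3) pinhole intrinsics. Returns pixel coordinates (u, v)."""
            world_to_cam = np.linalg.inv(cam_pose_world)
            p_cam = world_to_cam[:3, :3] @ anchor_world + world_to_cam[:3, 3]
            uvw = K @ p_cam                      # perspective projection
            return uvw[0] / uvw[2], uvw[1] / uvw[2]

        # A non-registered picture guide would instead store (u, v) once per
        # static image, so the annotation cannot follow the live camera view.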