760 research outputs found

    Proposed Fuzzy Real-Time HaPticS Protocol Carrying Haptic Data and Multisensory Streams

    Get PDF
    Sensory and haptic data transfers to critical real-time applications over the Internet require better-than-best-effort transport with strict, timely, reliable and ordered delivery. Multi-sensory applications usually combine video and audio streams with real-time control and sensory data, which are aggregated and compressed within real-time flows. Such real-time flows are vulnerable to synchronization problems when combined with poor Internet links. Apart from the use of differentiated QoS and MPLS services, several haptic transport protocols have been proposed to confront such issues, focusing on minimizing flow-rate disruption while maintaining a steady transmission rate at the sender. Nevertheless, these protocols fail to cope with network variations and the queuing delays introduced by Internet routers. This paper proposes a new haptic protocol that tries to alleviate such inadequacies using three metrics calculated at the receiver end and propagated to the sender: mean frame delay, jitter and frame loss. In order to adjust the flow rate dynamically in a fuzzy-controlled manner, the proposed protocol includes a fuzzy controller in its protocol structure. The proposed FRTPS protocol (Fuzzy Real-Time haPticS protocol) feeds crisp inputs into a fuzzification process followed by fuzzy control rules in order to calculate a crisp output service class, denoted as the Service Rate Level (SRL). Experimental results of FRTPS over RTP show that FRTPS outperforms RTP in cases of congestion incidents, out-of-order deliveries and goodput.
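The fuzzification, rule evaluation and defuzzification pipeline the abstract describes can be sketched as follows. The membership breakpoints, rule base and five-level SRL scale here are illustrative assumptions, not values from the FRTPS paper:

```python
# Hypothetical sketch of a crisp-in/crisp-out fuzzy controller mapping the
# three receiver metrics (mean frame delay, jitter, frame loss) to a
# Service Rate Level (SRL). All thresholds and rules are assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(value, low, mid, high):
    """Map a crisp metric to (low, med, high) membership degrees."""
    return {
        "low": tri(value, -low, 0.0, mid),
        "med": tri(value, low, mid, high),
        "high": tri(value, mid, high, 2 * high),
    }

def srl(mean_delay_ms, jitter_ms, loss_pct):
    d = fuzzify(mean_delay_ms, 20, 100, 300)
    j = fuzzify(jitter_ms, 5, 25, 80)
    l = fuzzify(loss_pct, 0.5, 2.0, 8.0)
    # Illustrative rule base: the worst metric dominates the service rate.
    rules = {
        5.0: min(d["low"], j["low"], l["low"]),    # pristine network
        3.0: max(d["med"], j["med"], l["med"]),    # moderate degradation
        1.0: max(d["high"], j["high"], l["high"]), # congested: throttle hard
    }
    # Weighted-average defuzzification to a crisp SRL.
    den = sum(rules.values())
    return sum(level * w for level, w in rules.items()) / den if den else 3.0

print(srl(10, 2, 0.1))   # healthy link -> 5.0
print(srl(250, 60, 6))   # congested link -> low SRL
```

The sender would then map the crisp SRL to a transmission rate, throttling as the receiver-reported metrics degrade.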

    Quantification of human operator skill in a driving simulator for applications in human adaptive mechatronics

    Get PDF
    Nowadays, the Human Machine System (HMS) is considered to be a proven technology and plays an important role in various human activities. However, this system requires the human alone to have an in-depth understanding of the machine's operation, and is thus a one-way relationship. Therefore, researchers have recently developed Human Adaptive Mechatronics (HAM) to overcome this problem and balance the roles of the human and the machine in any HMS. HAM differs from an ordinary HMS in its ability to adapt to changes in its surroundings and to the changing skill level of humans. Nonetheless, the main problem in HAM is quantifying the human skill level in machine manipulation as part of human recognition. Therefore, this thesis proposes a formula to quantify and classify the skill of a human operator in driving a car, as an example application involving humans and machines. The formula is evaluated using logical conditions and the definition of skill in HAM in terms of time and error. The skill indices are classified into five levels: Very Highly Skilled, Highly Skilled, Medium Skilled, Low Skilled and Very Low Skilled. Driving was selected because it is considered to be a complex mechanical task that involves skill, a human and a machine. However, as the safety of the human subjects when performing the required tasks in various situations had to be considered, a driving simulator was used. The simulator was designed using Microsoft Visual Studio, was controlled using a USB steering wheel and pedals, and was able to record the human's path and include the desired effects on the road. Thus, two experiments involving the driving simulator were performed; 20 human subjects with varying numbers of years of experience in driving and gaming took part. In the first experiment, the subjects were asked to drive in Expected and Guided Conditions (EGC).
Five guided tracks were used to show the variety of driving skill: straight, circular, elliptical, square and triangular. The results of this experiment indicate that the tracking error is inversely proportional to the elapsed time. In the second experiment, the subjects experienced Sudden Transitory Conditions (STC). Two types of unexpected driving situation were used: a tyre puncture and a slippery surface. This experiment demonstrated that the tracking error is not directly proportional to the elapsed time. Both experiments also examined the correlation between experience and skill. For the first time, a new skill index formula is proposed based on the logical conditions and the definition of skill in HAM.
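A skill index defined in terms of time and error, with the five-level classification the abstract names, can be sketched like this. The equal weighting, reference values and thresholds below are assumptions for demonstration; the thesis' actual formula differs:

```python
# Illustrative time-and-error skill index following the HAM definition of
# skill. t_ref/e_ref, the 50/50 weighting and the level thresholds are
# hypothetical parameters, not taken from the thesis.

def skill_index(elapsed_time, tracking_error, t_ref=60.0, e_ref=1.0):
    """Score in [0, 1]: faster completion and lower error -> higher skill."""
    time_score = max(0.0, 1.0 - elapsed_time / (2 * t_ref))
    error_score = max(0.0, 1.0 - tracking_error / (2 * e_ref))
    return 0.5 * time_score + 0.5 * error_score

def classify(index):
    """Map the index to the five skill levels named in the abstract."""
    levels = [(0.8, "Very Highly Skilled"), (0.6, "Highly Skilled"),
              (0.4, "Medium Skilled"), (0.2, "Low Skilled")]
    for threshold, label in levels:
        if index >= threshold:
            return label
    return "Very Low Skilled"

# A quick lap (40 s) with small tracking error (0.3 m).
print(classify(skill_index(elapsed_time=40, tracking_error=0.3)))
```

In the thesis, such an index would be computed per subject from the recorded simulator path against the guided track.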

    An Overview of Self-Adaptive Technologies Within Virtual Reality Training

    Get PDF
    This overview presents the current state of the art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment is increasingly used in five key areas: medical, industrial & commercial training, serious games, rehabilitation and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR, including haptic devices, stereo graphics, adaptive content, assessment and autonomous agents. Automation of VR training can contribute to the automation of actual procedures, including remote and robot-assisted surgery, which reduces injury and improves the accuracy of the procedure. Automated haptic interaction can enable tele-presence and tactile interaction with virtual artefacts from either remote or simulated environments. Automation, machine learning and data-driven features play an important role in providing trainee-specific, individually adaptive training content. Data from trainee assessment can form an input to autonomous systems for customised training and automated difficulty levels that match individual requirements. Self-adaptive technology has previously been developed within individual technologies of VR training. One conclusion of this research is that no such framework yet exists: an enhanced portable framework that combines automation of the core technologies would be beneficial, producing a reusable automation framework for VR training.

    An intelligent navigation system for an unmanned surface vehicle

    Get PDF
    Merged with duplicate record 10026.1/2768 on 27.03.2017 by CS (TIS). A multi-disciplinary research project has been carried out at the University of Plymouth to design and develop an Unmanned Surface Vehicle (USV) named Springer. The work presented herein relates to the formulation of a robust, reliable, accurate and adaptable navigation system to enable Springer to undertake various environmental monitoring tasks. Synergistically, sensor mathematical modelling, fuzzy logic, Multi-Sensor Data Fusion (MSDF), Multi-Model Adaptive Estimation (MMAE), fault-adaptive data acquisition and a user interface system are combined to enhance the robustness and fault tolerance of the onboard navigation system. This thesis provides not only a holistic framework but also a concourse of computational techniques in the design of a fault-tolerant navigation system. One of the principal novelties of this research is the use of various fuzzy-logic-based MSDF algorithms to provide an adaptive heading angle under various fault situations for Springer. This algorithm adapts the process noise covariance matrix (Q) and the measurement noise covariance matrix (R) in order to address one of the disadvantages of Kalman filtering. This algorithm has been implemented on Springer in real time, and the results demonstrate excellent robustness qualities. In addition to the fuzzy-logic-based MSDF, a unique MMAE algorithm has been proposed in order to provide an alternative approach to enhancing the fault tolerance of the heading angles for Springer. To the author's knowledge, the work presented in this thesis suggests a novel way forward in the development of autonomous navigation system design, and it is therefore considered that the work constitutes a contribution to knowledge in this area of study. The work presented in this thesis can also be extended in a number of ways to many other challenging domains. DEVONPORT MANAGEMENT LTD, J&S MARINE LTD AND SOUTH WEST WATER PL
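The idea of adapting a noise covariance to tolerate sensor faults can be illustrated with a minimal one-dimensional heading filter. The rule thresholds and gains below are a crude stand-in for the thesis' fuzzy rules, chosen only for demonstration:

```python
# Minimal 1-D sketch: adapt the measurement noise covariance R from the
# innovation so that a faulty heading reading is trusted less. The
# thresholds and inflation factors are illustrative assumptions.

def adapt_R(innovation, base_R=1.0):
    """Inflate R as the innovation grows (a crude fuzzy-rule stand-in)."""
    mag = abs(innovation)
    if mag < 1.0:
        return base_R           # measurement consistent: trust the sensor
    if mag < 5.0:
        return base_R * 10.0    # suspicious: down-weight
    return base_R * 100.0       # likely fault: nearly ignore

def kalman_step(x, P, z, Q=0.01):
    # Predict (constant-heading model).
    x_pred, P_pred = x, P + Q
    # Update with adaptive R.
    innovation = z - x_pred
    R = adapt_R(innovation)
    K = P_pred / (P_pred + R)
    return x_pred + K * innovation, (1 - K) * P_pred

x, P = 90.0, 1.0                        # heading estimate (deg) and variance
for z in [90.5, 89.8, 150.0, 90.2]:     # third reading is a sensor fault
    x, P = kalman_step(x, P, z)
print(round(x, 1))  # estimate stays near 90 despite the outlier
```

A fixed-R filter would be dragged far off course by the 150-degree outlier; inflating R keeps the Kalman gain small for implausible measurements.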

    Towards the development of safe, collaborative robotic freehand ultrasound

    Get PDF
    The use of robotics in medicine is of growing importance for modern health services, as robotic systems have the capacity to improve upon human tasks, thereby enhancing the treatment ability of a healthcare provider. In the medical sector, ultrasound imaging is an approach without the high cost and radiation emissions associated with MRI and CT imaging, respectively. Over the past two decades, considerable effort has been invested in freehand ultrasound robotics research and development. However, this research has focused on the feasibility of the application, not on robotic fundamentals such as motion control, calibration, and contextual awareness. Instead, much of the work concentrates on custom-designed robots, ultrasound image generation and visual servoing, or teleoperation. Research on these topics often suffers from important limitations that impede its use in an adaptable, scalable, real-world manner. In particular, while custom robots may be designed for a specific application, commercial collaborative robots are a more robust and economical solution. Likewise, various robotic ultrasound studies have shown the feasibility of using basic force control, but rarely explore controller tuning in the context of patient safety and deformable skin in an unstructured environment. Moreover, many studies evaluate novel visual servoing approaches, but do not consider the practicality of relying on external measurement devices for motion control. These studies neglect the importance of robot accuracy and calibration, which allow a system to safely navigate its environment while reducing the imaging errors associated with positioning. Hence, while the feasibility of robotic ultrasound has been the focal point of previous studies, there is a lack of attention to what occurs between system design and image output.
This thesis addresses limitations of the current literature through three distinct contributions. Given the force-controlled nature of an ultrasound robot, the first contribution presents a closed-loop calibration approach using impedance control and low-cost equipment. Accuracy is a fundamental requirement for high-quality ultrasound image generation and targeting, especially when following a specified path along a patient or synthesizing 2D slices into a 3D ultrasound image. However, even though most industrial robots are inherently precise, they are not necessarily accurate, and while robot calibration itself has been extensively studied, many approaches rely on expensive and highly delicate equipment. Experimental testing, validated with a laser tracker, showed that the proposed method is comparable in quality to traditional calibration: the absolute accuracy of a collaborative robot was improved to a maximum error of 0.990 mm, representing a 58.4% improvement over the nominal model. The second contribution explores collisions and contact events, as they are a natural by-product of applications involving physical human-robot interaction (pHRI) in unstructured environments. Robot-assisted medical ultrasound is an example of a task where simply stopping the robot upon contact detection may not be an appropriate reaction strategy. Thus, the robot should have an awareness of the body contact location to properly plan force-controlled trajectories along the human body using the imaging probe. This is especially true for remote ultrasound systems, where safety and manipulability are important elements to consider when operating a remote medical system through a communication network. A framework is proposed for robot contact classification using the built-in sensor data of a collaborative robot. Unlike previous studies, this classification does not discern between intended and unintended contact scenarios, but rather classifies what was involved in the contact event. The classifier can discern different ISO/TS 15066:2016-specific body areas along a human-model leg with 89.37% accuracy. Altogether, this contact-distinction framework allows for more complex reaction strategies and tailored robot behaviour during pHRI. Lastly, given that the success of an ultrasound task depends on the capability of the robot system to handle pHRI, pure motion control is insufficient. Force control techniques are necessary to achieve effective and adaptable behaviour of a robotic system in the unstructured ultrasound environment while also ensuring safe pHRI. While force control does not require explicit knowledge of the environment, the control parameters must be tuned to achieve an acceptable dynamic behaviour. The third contribution proposes a simple and effective online tuning framework for force-based robotic freehand ultrasound motion control. Within the context of medical ultrasound, different human body locations have different stiffness and require unique tunings. Through real-world experiments with a collaborative robot, the framework tuned motion control for optimal and safe trajectories along a human leg phantom. The optimization process successfully reduced the mean absolute error (MAE) of the motion contact force to 0.537 N through the evolution of eight motion control parameters. Furthermore, contextual awareness through motion classification can offer a framework for pHRI optimization and safety through predictive motion behaviour, with a future goal of autonomous pHRI. As such, a classification pipeline, trained using the tuning-process motion data, was able to reliably classify the future force-tracking quality of a motion session with an accuracy of 91.82%.
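The tuning idea, searching controller parameters to minimise the force-tracking MAE, can be sketched on a toy one-dimensional contact model. The spring plant, the two gains (rather than the thesis' eight parameters) and the random-search strategy are all illustrative assumptions:

```python
# Hypothetical sketch of online force-control tuning: search PD gains that
# minimise the mean absolute force-tracking error (MAE) against a simulated
# 1-D spring-like contact. Plant, gains and search are assumptions.

import random

def simulate_mae(kp, kd, stiffness=500.0, target=5.0, steps=200, dt=0.01):
    """Run a PD force controller on a linear spring; return the force MAE."""
    x = 0.0            # probe indentation (m)
    prev_err = target
    err_sum = 0.0
    for _ in range(steps):
        force = stiffness * x          # contact force from the spring model
        err = target - force
        v = kp * err + kd * (err - prev_err) / dt   # velocity command
        x += v * dt
        prev_err = err
        err_sum += abs(err)
    return err_sum / steps

random.seed(0)
best_mae, best_gains = float("inf"), None
for _ in range(100):                   # simple random search over the gains
    kp, kd = random.uniform(0, 0.01), random.uniform(0, 0.001)
    mae = simulate_mae(kp, kd)
    if mae < best_mae:
        best_mae, best_gains = mae, (kp, kd)

print(best_mae < simulate_mae(1e-4, 0.0))  # tuned gains beat a naive guess
```

In the thesis a phantom leg replaces the toy spring and an evolutionary optimizer replaces random search, but the loop structure, propose parameters, execute a contact motion, score the force MAE, is the same.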

    Flexible robotic control via co-operation between an operator and an ai-based control system

    Get PDF
    This thesis addresses the problem of variable autonomy in teleoperated mobile robots. Variable autonomy refers to the approach of incorporating several different levels of autonomous capability (Levels of Autonomy (LOA)), ranging from pure teleoperation (the human has complete control of the robot) to full autonomy (the robot has control of every capability), within a single robot. Most robots used for demanding and safety-critical tasks (e.g. search and rescue, hazardous-environment inspection) are currently teleoperated in simple ways, but could soon start to benefit from variable autonomy. The use of variable autonomy would allow Artificial Intelligence (AI) control algorithms to autonomously take control of certain functions when the human operator is suffering from high workload, high cognitive load, anxiety, or other distractions and stresses. In contrast, some circumstances may still necessitate direct human control of the robot. More specifically, this thesis focuses on investigating the issues of dynamically changing LOA (i.e. during task execution) using either Human-Initiative (HI) or Mixed-Initiative (MI) control. MI refers to a peer-to-peer relationship between the robot and the operator in terms of the authority to initiate actions and LOA switches. HI refers to the human operator switching LOA based on their judgment, with the robot having no capacity to initiate LOA switches. A HI and a novel expert-guided MI controller are presented in this thesis. These controllers were evaluated using a multidisciplinary systematic experimental framework that combines quantifiable and repeatable performance-degradation factors for both the robot and the operator. The thesis presents statistically validated evidence that variable autonomy, in the form of HI and MI, provides advantages compared to using only teleoperation or only autonomy in various scenarios.
Lastly, analyses of the interactions between the operators and the variable autonomy systems are reported. These analyses highlight the importance of personality traits and preferences, trust in the system, and the operator's understanding of the system, in the context of human-robot interaction with the proposed controllers.
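The HI/MI distinction can be made concrete with a small switching policy: under HI only the human triggers LOA switches, while under MI the robot may also initiate one. The performance metric and threshold below are hypothetical, not the expert-guided policy from the thesis:

```python
# Illustrative Mixed-Initiative (MI) LOA switching. Either party may
# initiate a switch between teleoperation and autonomy; the performance
# threshold and override rule are assumptions for demonstration.

TELEOPERATION, AUTONOMY = "teleoperation", "autonomy"

def mi_switch(current_loa, operator_request, task_performance, threshold=0.5):
    """Return the next LOA. The operator's request always wins (as in HI);
    otherwise the robot initiates a switch when performance degrades."""
    if operator_request is not None:
        return operator_request
    if task_performance < threshold:
        # Robot-initiated switch: hand control to the other party.
        return AUTONOMY if current_loa == TELEOPERATION else TELEOPERATION
    return current_loa

print(mi_switch(TELEOPERATION, None, 0.3))      # robot escalates to autonomy
print(mi_switch(AUTONOMY, TELEOPERATION, 0.9))  # operator override honoured
```

A pure HI controller is the degenerate case where the performance-based branch is removed and only `operator_request` can change the LOA.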