
    Towards automated visual flexible endoscope navigation

    Background: The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards wider application of flexible endoscopes, with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomic steering mechanism now forms a barrier to extending flexible endoscope applications. Automating the navigation of endoscopes could be a solution to this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research.
    Methods: A systematic literature search was performed using three general search terms in two medical–technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed; ultimately, 26 were included.
    Results: Navigation is often based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date.
    Conclusions: Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
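    To make the lumen-centralization idea concrete, here is a minimal sketch of the technique as commonly described in this literature: the lumen tends to appear as the darkest region of the endoscopic image, so its centroid gives a steering target. The percentile threshold, function names, and the proportional-steering suggestion are illustrative assumptions, not taken from any specific reviewed system.

```python
# Minimal sketch of lumen centralization: steer toward the centroid of the
# darkest image region, which usually corresponds to the lumen.
import cv2
import numpy as np

def lumen_steering_offset(frame_bgr, dark_percentile=5):
    """Return (dx, dy) offset of the lumen centroid from the image center,
    normalized to [-1, 1] per axis, or None if no lumen candidate is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (9, 9), 0)  # suppress specular highlights/noise
    # Take the darkest few percent of pixels as the lumen candidate region.
    thresh = np.percentile(gray, dark_percentile)
    mask = (gray <= thresh).astype(np.uint8)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:  # e.g., wall contact, bubbles, motion blur
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = gray.shape
    return ((cx - w / 2) / (w / 2), (cy - h / 2) / (h / 2))
```

    A controller would bend the endoscope tip proportionally to this offset to re-center the lumen; a None result should pause or retract rather than steer blindly, which is exactly the robustness to bubbles and motion blur the review calls for.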

    A first approach to a taxonomy-based classification framework for hand grasps

    Many solutions have been proposed to help amputees regain lost functionality. In order to interact with the outer world and the objects that populate it, it is crucial for these subjects to be able to perform essential grasps. In this paper we propose a preliminary solution for the online classification of 8 basic hand grasps from physiological signals, namely surface electromyography (sEMG), exploiting a quantitative taxonomy of the considered movements. The hierarchical organization of the taxonomy allows the classification phase to be decomposed into decisions between pairs of movement groups. The idea is that the closer to the root, the harder the classification, but at the same time a misclassification error is less problematic, since the two movements will be close to each other. The proposed solution is subject-independent: signals from many different subjects are used by the probabilistic framework to model the input signals. The information has been modeled offline using a Gaussian Mixture Model (GMM) and then tested online on an unseen subject using Gaussian-based classification. In order to process the signal online, an accurate preprocessing phase is needed; in particular, we apply the Wavelet Transform (WT) to the electromyography (EMG) signal. Thanks to this approach we are able to develop a robust and general solution, which can adapt quickly to new subjects with no need for a long and draining training phase. In this preliminary study we reached a mean accuracy of 76.5%, rising up to 97.29% at the higher levels of the taxonomy.
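    A minimal sketch of the pipeline the abstract outlines: wavelet features extracted from sEMG windows, one GMM fitted per grasp class on pooled multi-subject data offline, and online classification of an unseen subject by maximum likelihood. The feature choice (log-energy of DWT subbands), wavelet family, and window handling are illustrative assumptions, and the hierarchical pairwise decomposition is omitted for brevity.

```python
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

def wavelet_features(window, wavelet="db4", level=3):
    """Log-energy of each DWT subband for one multichannel sEMG window
    (shape: channels x samples)."""
    feats = []
    for ch in window:
        coeffs = pywt.wavedec(ch, wavelet, level=level)
        feats.extend(np.log(np.sum(c ** 2) + 1e-12) for c in coeffs)
    return np.array(feats)

def train_grasp_models(windows_by_class, n_components=3):
    """Fit one GMM per grasp class on pooled multi-subject features."""
    models = {}
    for label, windows in windows_by_class.items():
        X = np.vstack([wavelet_features(w) for w in windows])
        models[label] = GaussianMixture(n_components, covariance_type="full",
                                        random_state=0).fit(X)
    return models

def classify(window, models):
    """Assign the grasp whose GMM gives the new window the highest likelihood."""
    x = wavelet_features(window).reshape(1, -1)
    return max(models, key=lambda lbl: models[lbl].score(x))
```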

    Real-time human motion analysis and grasping force using the OptiTrack system and Flexi-force sensor

    Biologically inspired robotic hands have important applications in industry and biomedical robotics. The grasping capacity of robotic hands is crucial for a robotic system. This paper presents an experimental study of the finger forces and movements of a human hand during the grasping operation in real time. It focuses on two topics: measuring grasping force using Flexi-force sensors and analysing human hand action during the grasping operation. The findings show that lifting required higher forces than the grasp force measured in the static phase.
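    As a rough illustration of how force readings from a FlexiForce-style piezoresistive sensor could be converted to newtons before the grasp/lift comparison, here is a small sketch. The voltage-divider constants and two-point calibration are illustrative assumptions, not the paper's setup; any real rig needs its own calibration against known loads.

```python
import numpy as np

V_SUPPLY = 5.0        # drive voltage of the divider (assumed)
R_FIXED = 10_000.0    # fixed resistor in ohms (assumed)

def sensor_conductance(v_out):
    """Sensor conductance from the divider output voltage; FlexiForce
    conductance is approximately linear in applied force."""
    r_sensor = R_FIXED * (V_SUPPLY - v_out) / np.maximum(v_out, 1e-6)
    return 1.0 / r_sensor

def calibrate(v1, f1, v2, f2):
    """Two-point linear fit force = a * conductance + b from known loads."""
    g1, g2 = sensor_conductance(v1), sensor_conductance(v2)
    a = (f2 - f1) / (g2 - g1)
    return a, f1 - a * g1

def to_force(v_samples, a, b):
    """Convert a stream of divider voltages to force in newtons."""
    return a * sensor_conductance(np.asarray(v_samples)) + b
```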

    Design and assembly of a magneto-inertial wearable device for ecological behavioural analysis of infants

    There is recent evidence showing how brain development is strictly linked to action: movements shape, and are in turn shaped by, cortical and sub-cortical areas. In particular, the spontaneous movements of newborn infants matter for developing the capability of generating voluntary skilled movements. Therefore, studying infants' spontaneous movements can be useful to understand the main developmental milestones achieved by humans from birth onward. This work focuses on the design and development of a mechatronic wearable device for ecological movement analysis called WAMS (Wrist and Ankle Movement Sensor). The design and assembly of the device are presented, as well as the communication protocol and the synchronization with other marker-based optical movement analysis systems.
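    One common way to achieve the synchronization mentioned here is to align the IMU stream to the optical system in software: differentiate a marker trajectory twice to get acceleration and find the lag that maximizes its cross-correlation with the accelerometer magnitude. The sketch below assumes this software approach and illustrative signal names; the paper's actual protocol may instead use a hardware trigger.

```python
import numpy as np

def estimate_lag(imu_acc_mag, marker_pos, fs):
    """Return the IMU-to-optical time offset in seconds.
    imu_acc_mag: |accelerometer| stream; marker_pos: (n, 3) marker positions;
    both already resampled to the same rate fs."""
    # Numerical second derivative of marker position -> acceleration magnitude.
    opt_acc = np.linalg.norm(
        np.gradient(np.gradient(marker_pos, axis=0), axis=0), axis=1)
    # Normalize both signals, then pick the lag of peak cross-correlation.
    a = (imu_acc_mag - imu_acc_mag.mean()) / imu_acc_mag.std()
    b = (opt_acc - opt_acc.mean()) / opt_acc.std()
    xcorr = np.correlate(a, b, mode="full")
    lag_samples = np.argmax(xcorr) - (len(b) - 1)
    return lag_samples / fs
```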

    Adaptive motor control and learning in a spiking neural network realised on a mixed-signal neuromorphic processor

    Neuromorphic computing is a new paradigm for the design of both computing hardware and algorithms, inspired by biological neural networks. Its event-based nature and inherent parallelism make neuromorphic computing a promising paradigm for building efficient neural-network-based architectures for the control of fast and agile robots. In this paper, we present a spiking neural network architecture that uses sensory feedback to control the rotational velocity of a robotic vehicle. When the velocity reaches the target value, the mapping from the target velocity of the vehicle to the correct motor command, both represented in the spiking neural network on the neuromorphic device, is autonomously stored on the device using on-chip plastic synaptic weights. We validate the controller using a wheel motor of a miniature mobile vehicle with an inertial measurement unit as the sensory feedback, and demonstrate online learning of a simple 'inverse model' in a two-layer spiking neural network on the neuromorphic chip. The prototype neuromorphic device, which features 256 spiking neurons, allows us to realise a simple proof-of-concept architecture for purely neuromorphic motor control and learning. The architecture can easily be scaled up if a larger neuromorphic device is available.
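    A minimal rate-based sketch of the learning scheme the abstract describes: once feedback indicates the measured velocity has reached the target, the currently active (target-velocity, motor-command) pair is stored with a Hebbian update on plastic weights, so the network gradually learns an inverse model from desired velocity to motor command. The population sizes, one-hot place coding, and update rule are illustrative assumptions standing in for the chip's actual spiking dynamics.

```python
import numpy as np

N_VEL, N_CMD = 16, 16                  # neurons coding velocity / command
W = np.zeros((N_VEL, N_CMD))           # plastic "on-chip" weights

def encode(value, n, lo=-1.0, hi=1.0):
    """One-hot place coding of a scalar into a population of n neurons."""
    idx = int(np.clip((value - lo) / (hi - lo) * (n - 1), 0, n - 1))
    a = np.zeros(n)
    a[idx] = 1.0
    return a

def learn_step(target_vel, measured_vel, motor_cmd, tol=0.05, lr=1.0):
    """Potentiate the weight between the active target-velocity neuron and
    the active command neuron whenever the target velocity is reached."""
    if abs(measured_vel - target_vel) < tol:
        pre, post = encode(target_vel, N_VEL), encode(motor_cmd, N_CMD)
        W[:] = np.clip(W + lr * np.outer(pre, post), 0.0, 1.0)

def recall_command(target_vel):
    """Read out the command the learned inverse model associates with a
    desired velocity, decoded back to the [-1, 1] range."""
    activation = encode(target_vel, N_VEL) @ W
    return np.argmax(activation) / (N_CMD - 1) * 2.0 - 1.0
```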

    Fast human motion prediction for human-robot collaboration with wearable interfaces

    In this paper, we aim at improving human motion prediction during human-robot collaboration in industrial facilities by exploiting contributions from both physical and physiological signals. Improved human-machine collaboration could prove useful in several areas, and it is crucial for interacting robots to understand human movement as soon as possible to avoid accidents and injuries. In this perspective, we propose a novel human-robot interface capable of anticipating the user's intention while performing reaching movements on a workbench, in order to plan the action of a collaborative robot. The proposed interface can find many applications in the Industry 4.0 framework, where autonomous and collaborative robots will be an essential part of innovative facilities. Motion intention prediction and motion direction prediction levels have been developed to improve detection speed and accuracy. A Gaussian Mixture Model (GMM) has been trained with IMU and EMG data following an evidence accumulation approach to predict the reaching direction. Novel dynamic stopping criteria have been proposed to flexibly adjust the trade-off between early anticipation and accuracy according to the application. The outputs of the two predictors are used as external inputs to a Finite State Machine (FSM) that controls the behaviour of a physical robot according to the user's action or inaction. Results show that our system outperforms previous methods, achieving a real-time classification accuracy of 94.3 ± 2.9% after 160.0 ± 80.0 ms from movement onset.
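    A minimal sketch of GMM evidence accumulation with a dynamic stopping criterion in the spirit of the abstract: per-frame log-likelihoods from per-direction GMMs are summed over time, and a decision is emitted as soon as the leading direction's posterior clears a confidence threshold, trading accuracy for earliness. The threshold value and the fused IMU+EMG feature vectors are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def accumulate_and_decide(frames, direction_models, threshold=0.95):
    """frames: iterable of feature vectors (fused IMU+EMG) over time.
    direction_models: dict mapping direction label -> fitted GaussianMixture.
    Returns (direction, n_frames_used), or (None, n) if never confident."""
    labels = list(direction_models)
    log_evidence = np.zeros(len(labels))
    n = 0
    for n, x in enumerate(frames, start=1):
        x = np.asarray(x).reshape(1, -1)
        # Accumulate each class's log-likelihood of the new frame.
        log_evidence += [direction_models[lbl].score(x) for lbl in labels]
        # Normalized posterior under a uniform prior over directions.
        post = np.exp(log_evidence - log_evidence.max())
        post /= post.sum()
        if post.max() >= threshold:        # dynamic stopping: decide early
            return labels[int(post.argmax())], n
    return None, n
```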