Neuro-electronic technology in medicine and beyond
This dissertation examines the technology and social issues involved in interfacing electronics directly to the human nervous system, in particular the methods for both reading from and stimulating nerves. The development and use of cochlear implants is discussed and compared with recent developments in artificial vision. The final sections consider a future for non-medical applications of neuro-electronic technology. Social attitudes towards use for both medical and non-medical purposes are discussed, and the viability of use in the latter case is assessed.
Towards Natural Control of Artificial Limbs
The use of implantable electrodes has long been regarded as the solution for more natural control of artificial limbs, as these offer access to long-term stable and physiologically appropriate sources of control, as well as the possibility of eliciting appropriate sensory feedback via neurostimulation. Although these ideas have been explored since the 1960s, the lack of a long-term stable human-machine interface has prevented the use of even the simplest implanted electrodes in clinically viable limb prostheses. In this thesis, a novel human-machine interface for bidirectional communication between implanted electrodes and the artificial limb was developed and clinically implemented. Long-term stability was achieved via osseointegration, which has been shown to provide stable skeletal attachment. By extending this technology into a communication gateway, the longest clinical implementation of prosthetic control sourced by implanted electrodes has been achieved, as well as the first in modern times. The first recipient has used it without interruption in daily and professional activities for over one year. Prosthetic control was found to improve in resolution while requiring less muscular effort, and to be resilient to motion artifacts, limb position, and environmental conditions. To support this work, the literature was reviewed in search of reliable and safe neuromuscular electrodes that could be immediately used in humans. Additional work was conducted to improve the signal-to-noise ratio and increase the amount of information retrievable from extraneural recordings. Different signal-processing and pattern-recognition algorithms were investigated and further developed towards real-time, simultaneous prediction of limb movements. These algorithms were used to demonstrate that higher functionality can be restored by intuitive control of distal joints, and that such control remains viable over time when using epimysial electrodes.
Lastly, the long-term viability of direct nerve stimulation to produce intuitive sensory feedback was also demonstrated. The ability to permanently and reliably access implanted electrodes, thus making them viable for prosthetic control, is potentially the main contribution of this work. Furthermore, the opportunity to chronically record from and stimulate the neuromuscular system opens new avenues for the prediction of complex limb motions and an increased understanding of somatosensory perception. The technology developed here, combining stable attachment with permanent and reliable human-machine communication, is therefore considered by the author a critical step towards more functional artificial limbs.
Augmented navigation
Spinal fixation procedures carry the inherent risk of damaging vulnerable anatomical structures such as the spinal cord, nerve roots, and blood vessels. To prevent complications, several technological aids have been introduced. Surgical navigation is the most widely used, and guides the surgeon by providing the position of the surgical instruments and implants in relation to the patient anatomy based on radiographic images. Navigation can be extended with a robotic arm that replaces the surgeon's hand to increase accuracy. Another line of surgical aids is tissue-sensing equipment, which recognizes different tissue types and provides a warning system built into surgical instruments. All these technologies are under continuous development and the optimal solution is yet to be found. The aim of this thesis was to study the use of Augmented Reality (AR), Virtual Reality (VR), Artificial Intelligence (AI), and tissue-sensing technology in spinal navigation to improve precision and prevent surgical errors.
The aim of Paper I was to develop and validate an algorithm for automating the intraoperative planning of pedicle screws. An AI algorithm for automatic segmentation of the spine and screw-path suggestion was developed and evaluated. In a clinical study of advanced deformity cases, the algorithm provided correct suggestions for 86% of all pedicles, or 95% when cases with extremely altered anatomy were excluded.
Paper II evaluated the accuracy of pedicle screw placement using a novel augmented reality surgical navigation (ARSN) system, harboring the above-developed algorithm. Twenty consecutively enrolled patients, eligible for deformity correction surgery in the thoracolumbar region, were operated on using the ARSN system. In this cohort, we found a pedicle screw placement accuracy of 94%, as measured according to the Gertzbein grading scale.
The primary goal of Paper III was to validate an extension of the ARSN system for placing pedicle screws using instrument tracking and VR. In a porcine cadaver model, it was demonstrated that VR instrument tracking could successfully be integrated with the ARSN system, resulting in pedicle devices placed within 1.7 ± 1.0 mm of the planned path.
Paper IV examined the feasibility of a robot-guided system for semi-automated, minimally invasive, pedicle screw placement in a cadaveric model. Using the robotic arm, pedicle devices were placed within 0.94 ± 0.59 mm of the planned path. The use of a semi-automated surgical robot was feasible, providing a higher technical accuracy compared to non-robotic solutions.
Paper V investigated the use of a tissue sensing technology, diffuse reflectance spectroscopy (DRS), for detecting the cortical bone boundary in vertebrae during pedicle screw insertions. The technology could accurately differentiate between cancellous and cortical bone and warn the surgeon before a cortical breach. Using machine learning models, the technology demonstrated a sensitivity of 98% [range: 94-100%] and a specificity of 98% [range: 91-100%].
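Sensitivity and specificity, as reported for the DRS breach-detection models, follow directly from a binary confusion matrix. A minimal sketch with hypothetical counts (illustrative only, not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN): fraction of actual breaches detected.
    Specificity = TN/(TN+FP): fraction of non-breaches correctly cleared."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts for a cortical-breach classifier (not the study's data):
sens, spec = sensitivity_specificity(tp=49, fn=1, tn=49, fp=1)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # sensitivity=98%, specificity=98%
```

With 49 of 50 breaches detected and 49 of 50 non-breaches correctly cleared, both metrics come out at 98%, the scale of the figures reported above.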
In conclusion, several technological aids can be used to improve accuracy during spinal fixation procedures. In this thesis, the advantages of adding AR, VR, AI and tissue sensing technology to conventional navigation solutions were studied
Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges
In recent years, new research has brought the field of EEG-based Brain-Computer Interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper, we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely "Communication and Control", "Motor Substitution", "Entertainment", and "Motor Recovery". We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user-machine adaptation algorithms, the exploitation of users' mental states for BCI reliability and confidence measures, the incorporation of principles in human-computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology including better EEG devices.
Aerospace Medicine and Biology: A continuing bibliography with indexes, supplement 171
This bibliography lists 186 reports, articles, and other documents introduced into the NASA scientific and technical information system in August 1977
Navigation system based in motion tracking sensor for percutaneous renal access
Doctoral thesis in Biomedical Engineering. Minimally invasive kidney interventions are performed daily to diagnose and treat several renal diseases. Percutaneous renal access (PRA) is an essential but challenging stage for most of these procedures, since its outcome is directly linked to the physician's ability to precisely visualize and reach the anatomical target.
Nowadays, PRA is always guided with medical imaging assistance, most frequently using X-ray-based imaging (e.g. fluoroscopy). Radiation in the surgical theater therefore represents a major risk to the medical team, and its exclusion from PRA would directly reduce the dose exposure of both patients and physicians.
To solve these problems, this thesis aims to develop a new hardware/software framework to intuitively and safely guide the surgeon during PRA planning and puncturing.
In terms of surgical planning, a set of methodologies was developed to increase the certainty of reaching a specific target inside the kidney. The most relevant abdominal structures for PRA were automatically clustered into different 3D volumes. To that end, primitive volumes were merged as a local optimization problem using the minimum description length principle and image statistical properties. A multi-volume ray-casting method was then used to highlight each segmented volume. Results show that it is possible to detect all abdominal structures surrounding the kidney and to correctly estimate a virtual trajectory.
Concerning the percutaneous puncturing stage, both electromagnetic and optical tracking solutions were developed and tested in multiple in vitro, in vivo, and ex vivo trials. The optical tracking solution aids in establishing the desired puncture site and choosing the best virtual puncture trajectory. However, this system requires a line of sight to different optical markers placed at the needle base, limiting the accuracy when tracking inside the human body. Results show that the needle tip can deflect from its initial straight-line trajectory with an error higher than 3 mm. Moreover, a complex registration procedure and initial setup are needed.
A real-time electromagnetic tracking solution was therefore developed. Here, a catheter was inserted trans-urethrally towards the renal target. This catheter has a position and orientation electromagnetic sensor on its tip that functions as a real-time target locator. A needle integrating a similar sensor is then used. From the data provided by both sensors, a virtual puncture trajectory is computed and displayed in 3D visualization software. In vivo tests showed median renal and ureteral puncture times of 19 and 51 seconds, respectively (ranges 14 to 45 and 45 to 67 seconds). These results represent a puncture-time improvement of 75% to 85% compared with state-of-the-art methods.
3D sound and vibrotactile feedback were also developed to provide additional information about the needle orientation. Using these kinds of feedback, it was verified that the surgeon tends to follow a virtual puncture trajectory with fewer deviations from the ideal trajectory, being able to anticipate movements even without looking at a monitor. Best results show that 3D sound sources were correctly identified 79.2 ± 8.1% of the time with an average angulation error of 10.4°, and vibration sources were accurately identified 91.1 ± 3.6% of the time with an average angulation error of 8.0°.
In addition to the electromagnetic tracking framework, three circular ultrasound transducers with a needle working channel were built. Different fabrication setups were explored in terms of piezoelectric materials, transducer construction, single vs. multi-array configurations, and backing and matching material design. The A-scan signals retrieved from each transducer were filtered and processed to automatically detect reflected echoes and to alert the surgeon when undesirable anatomical structures lie within the puncture path. The transducers were mapped in a water tank and tested in a study involving 45 phantoms. Results showed that the beam cross-sectional area oscillates around the ceramic radius and that echo signals could be automatically detected in phantoms longer than 80 mm.
It is therefore expected that introducing the proposed system into the PRA procedure will guide the surgeon along the optimal path towards the precise kidney target, increasing the surgeon's confidence and reducing complications (e.g. organ perforation) during PRA. Moreover, the developed framework has the potential to make PRA radiation-free for both patient and surgeon
and to broaden the use of PRA to less specialized surgeons. This work was supported by the Portuguese Science and Technology Foundation through PhD grant SFRH/BD/74276/2010, funded by FCT/MEC (PIDDAC) and by the Fundo Europeu de Desenvolvimento Regional (FEDER) through Programa COMPETE - Programa Operacional Factores de Competitividade (POFC) of QREN.
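The virtual puncture trajectory described above is derived from two tracked poses: the catheter-tip sensor at the renal target and the needle sensor. A minimal geometric sketch, assuming both positions are already expressed in the same tracker coordinate frame (the function name and values are illustrative, not the thesis software):

```python
import numpy as np

def puncture_trajectory(needle_tip, target):
    """Straight-line trajectory between two tracked 3D positions:
    returns the unit direction vector and the remaining distance (mm)."""
    needle_tip = np.asarray(needle_tip, dtype=float)
    target = np.asarray(target, dtype=float)
    delta = target - needle_tip
    distance = np.linalg.norm(delta)
    return delta / distance, distance

# Hypothetical sensor readings in a common tracker frame (mm):
direction, dist_mm = puncture_trajectory([10.0, 0.0, 0.0], [10.0, 30.0, 40.0])
print(direction, dist_mm)  # unit vector [0. 0.6 0.8], 50.0 mm
```

Recomputing this vector each frame as both sensors stream new poses is what lets the visualization (and the 3D sound / vibrotactile cues) reflect needle deviations in real time.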
Novel Bidirectional Body - Machine Interface to Control Upper Limb Prosthesis
Objective. The journey of a bionic prosthetic user is characterized by the opportunities and limitations involved in adopting a device (the prosthesis) that should enable activities of daily living (ADL). Within this context, experiencing a bionic hand as a functional (and, possibly, embodied) limb constitutes the premise for mitigating the risk of its abandonment through the continuous use of the device. To achieve such a result, different aspects must be considered for making the artificial limb an effective support for carrying out ADLs. Among them, intuitive and robust control is fundamental to improving amputees’ quality of life using upper limb prostheses. Still, as artificial proprioception is essential to perceive the prosthesis movement without constant visual attention, a good control framework may not be enough to restore practical functionality to the limb. To overcome this, bidirectional communication between the user and the prosthesis has been recently introduced and is a requirement of utmost importance in developing prosthetic hands. Indeed, closing the control loop between the user and a prosthesis by providing artificial sensory feedback is a fundamental step towards the complete restoration of the lost sensory-motor functions. Within my PhD work, I proposed the development of a more controllable and sensitive human-like hand prosthesis, i.e., the Hannes prosthetic hand, to improve its usability and effectiveness.
Approach. To achieve the objectives of this thesis work, I developed a modular and scalable software and firmware architecture to control the Hannes prosthetic multi-Degree-of-Freedom (DoF) system and to fit all users' needs (hand aperture, wrist rotation, and wrist flexion in different combinations). On top of this, I developed several Pattern Recognition (PR) algorithms to translate electromyographic (EMG) activity into complex movements. However, stability and repeatability were still unmet requirements in multi-DoF upper limb systems; hence, I started by investigating different strategies to produce a more robust control. To do this, EMG signals were collected from trans-radial amputees using an array of up to six sensors placed over the skin. Secondly, I developed a vibrotactile system to implement haptic feedback to restore proprioception and create a bidirectional connection between the user and the prosthesis. Similarly, I implemented object stiffness detection to restore tactile sensation able to connect the user with the external world. This closed-loop control between EMG and vibration feedback is essential to implementing a Bidirectional Body-Machine Interface that strongly impacts amputees' daily lives. For each of these three activities: (i) implementation of robust pattern-recognition control algorithms, (ii) restoration of proprioception, and (iii) restoration of the feeling of the grasped object's stiffness, I performed a study where data from healthy subjects and amputees was collected, in order to demonstrate the efficacy and usability of my implementations. In each study, I evaluated both the algorithms and the subjects' ability to use the prosthesis by means of the F1Score parameter (offline) and the Target Achievement Control (TAC) test (online). With this test, I analyzed the error rate, path efficiency, and time efficiency in completing different tasks.
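Pattern-recognition pipelines like the one described typically start from windowed time-domain features of each EMG channel. A minimal sketch of two such descriptors, RMS and mean absolute value, on a toy signal (the window sizes and feature choice are common in the EMG literature but illustrative here, not the exact Hannes pipeline):

```python
import numpy as np

def emg_features(signal, window, step):
    """Sliding-window RMS and mean-absolute-value (MAV) features,
    two time-domain descriptors widely used in EMG pattern recognition."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        rms = np.sqrt(np.mean(w ** 2))   # signal power within the window
        mav = np.mean(np.abs(w))         # average rectified amplitude
        feats.append((rms, mav))
    return np.array(feats)

# Toy sinusoid standing in for one EMG channel (1000 samples):
x = np.sin(np.linspace(0, 20 * np.pi, 1000))
f = emg_features(x, window=200, step=100)
# Each row of f is one (RMS, MAV) pair per 200-sample window, 50% overlap.
```

Feature vectors like these, stacked across channels, are what a classifier (such as the NLR discussed below) maps to movement classes.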
Main results. Among the several tested methods for Pattern Recognition, Non-Linear Logistic Regression (NLR) proved to be the best algorithm in terms of F1Score (99%, robustness), whereas the minimum number of electrodes needed for its functioning was determined to be 4 in the conducted offline analyses. Further, I demonstrated that its low computational burden allowed its implementation and integration on a microcontroller running at a sampling frequency of 300 Hz (efficiency). Finally, the online implementation allowed the subject to simultaneously control the Hannes prosthesis DoFs in a bioinspired and human-like way. In addition, I performed further tests with the same NLR-based control by endowing it with closed-loop proprioceptive feedback. In this scenario, the TAC test yielded an error rate of 15% and a path efficiency of 60% in experiments where no other sources of information were available (no visual or audio feedback). These results demonstrated an improvement in the controllability of the system with an impact on user experience.
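The F1Score used to compare the classifiers is the harmonic mean of precision and recall. A minimal sketch with hypothetical per-class counts (illustrative only, not the study's data):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision (TP/(TP+FP)) and recall (TP/(TP+FN))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for one movement class of an EMG classifier:
print(round(f1_score(tp=99, fp=1, fn=1), 2))  # 0.99
```

An F1 of 0.99 corresponds to the 99% figure reported for the NLR classifier; unlike raw accuracy, F1 penalizes both missed movements and false activations, which matters for prosthesis control.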
Significance. The obtained results confirmed the hypothesis that the implemented closed-loop approach improves the robustness and efficiency of prosthetic control. The bidirectional communication between the user and the prosthesis can restore lost sensory functionality, with promising implications for direct translation into clinical practice.
Exploiting Temporal Image Information in Minimally Invasive Surgery
Minimally invasive procedures rely on medical imaging instead of the surgeon's direct vision. While preoperative images can be used for surgical planning and navigation, once the surgeon arrives at the target site real-time intraoperative imaging is needed. However, acquiring and interpreting these images can be challenging and much of the rich temporal information present in these images is not visible. The goal of this thesis is to improve image guidance for minimally invasive surgery in two main areas. First, by showing how high-quality ultrasound video can be obtained by integrating an ultrasound transducer directly into delivery devices for beating-heart valve surgery. Second, by extracting hidden temporal information through video processing methods to help the surgeon localize important anatomical structures. Prototypes of delivery tools, with integrated ultrasound imaging, were developed for both transcatheter aortic valve implantation and mitral valve repair. These tools provided an on-site view that shows the tool-tissue interactions during valve repair. Additionally, augmented reality environments were used to add more anatomical context that aids in navigation and in interpreting the on-site video. Other procedures can be improved by extracting hidden temporal information from the intraoperative video. In ultrasound-guided epidural injections, dural pulsation provides a cue in finding a clear trajectory to the epidural space. By processing the video using extended Kalman filtering, subtle pulsations were automatically detected and visualized in real-time. A statistical framework for analyzing periodicity was developed based on dynamic linear modelling. In addition to detecting dural pulsation in lumbar spine ultrasound, this approach was used to image tissue perfusion in natural video and generate ventilation maps from free-breathing magnetic resonance imaging.
A second statistical method, based on spectral analysis of pixel intensity values, allowed blood flow to be detected directly from high-frequency B-mode ultrasound video. Finally, pulsatile cues in endoscopic video were enhanced through Eulerian video magnification to help localize critical vasculature. This approach shows particular promise in identifying the basilar artery in endoscopic third ventriculostomy and the prostatic artery in nerve-sparing prostatectomy. A real-time implementation was developed which processed full-resolution stereoscopic video on the da Vinci Surgical System
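Spectral analysis of pixel-intensity values, as used above to detect blood flow, amounts to finding a dominant periodic component in each pixel's time series. A minimal single-pixel sketch on a synthetic trace (the sampling rate and pulsation frequency are illustrative, not the thesis implementation):

```python
import numpy as np

def dominant_frequency(pixel_series, fps):
    """Return the dominant non-DC frequency (Hz) of a pixel-intensity
    time series, using the magnitude of its real FFT."""
    spectrum = np.abs(np.fft.rfft(pixel_series - np.mean(pixel_series)))
    freqs = np.fft.rfftfreq(len(pixel_series), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Synthetic pixel trace: a 1.2 Hz "cardiac" pulsation sampled at 30 fps:
fps = 30
t = np.arange(0, 10, 1.0 / fps)
trace = 0.5 + 0.05 * np.sin(2 * np.pi * 1.2 * t)
print(dominant_frequency(trace, fps))  # ~1.2 (the injected pulsation frequency)
```

Pixels whose dominant frequency falls in the cardiac band can then be highlighted, which is the intuition behind both the spectral blood-flow detection and the Eulerian magnification of pulsatile cues described above.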
Treadmill training augmented with real-time visualisation feedback and function electrical stimulation for gait rehabilitation after stroke : a feasibility study
Motor rehabilitation typically requires patients to perform task-specific training, in which biofeedback can be instrumental for encouraging neuroplasticity after stroke. Treadmill training augmented with real-time visual feedback and functional electrical stimulation (FES) may have a beneficial synergistic effect on this process. This study aims to develop a multi-channel FES (MFES) system with stimulation triggers based on the phase of the gait cycle, determined using a 3D motion capture system. A feasibility study was conducted to determine whether this enhanced treadmill gait training system is suitable for stroke survivors in clinical practice. The real-time biomechanical visual feedback system with computerised MFES was developed using six motion-capture cameras installed around a treadmill. This system was designed to stimulate the pretibial muscle for correcting foot drop, the gastrocnemius-soleus for facilitating push-off, and the quadriceps and hamstring for improving knee stability. Dynamic avatar movement and step length/ratio were displayed on a monitor, providing patients with real-time visual biofeedback. Participants received up to 20 minutes of enhanced treadmill training once or twice per week for 6 weeks. The training programme, pre- and post-training ability, and adverse events of each participant were recorded. Feedback was also collected from participants and physiotherapists regarding their experience. Eight out of ten participants fully completed their programme. In total, 67 training sessions were carried out. All participants had a good attendance rate. The number and duration of training sessions ranged from 5 to 20, and 11 to 20 minutes, respectively. The MFES system successfully improved gait patterns during training, and feedback from participants and physiotherapists regarding their experience of the research intervention was overwhelmingly positive.
In conclusion, this enhanced treadmill gait training system is feasible for use in gait rehabilitation after stroke. However, a well-designed clinical trial with a larger sample size is needed to determine clinical efficacy on gait recovery.
Brachial Plexus Injury
In this book, specialists from different countries and continents share their knowledge and experience in brachial plexus surgery. It discusses the different types of brachial plexus injury and advances in surgical treatments