
    Estimating and understanding motion: from diagnostic to robotic surgery

    Estimating and understanding motion from an image sequence is a central topic in computer vision. Interest in this topic is high because many of the events that occur in our environment are dynamic, which makes motion estimation and understanding a natural component and a key factor in a wide range of applications, including object recognition, 3D shape reconstruction, autonomous navigation and medical diagnosis. In particular, we focus on the medical domain, in which understanding the human body for clinical purposes requires retrieving the organs' complex motion patterns, which is in general a hard problem when using only image data. In this thesis, we address this problem by posing the question: how can we achieve realistic motion estimation to offer a better clinical understanding? We answer this question by using a variational formulation as a basis for understanding one of the most complex motions in the human body, the motion of the heart, through three different applications: (i) cardiac motion estimation for diagnosis, (ii) force estimation and (iii) motion prediction, the latter two for robotic surgery. First, we focus on a central topic in cardiac imaging: the estimation of cardiac motion. The main aim is to offer objective and understandable measures to physicians to help them in the diagnosis of cardiovascular diseases. We employ ultrafast ultrasound data and motion-estimation tools drawn from diverse areas, such as low-rank analysis and variational deformation, to perform realistic cardiac motion estimation. The significance is that, by taking low-rank data with carefully chosen penalization, synergies can be created in this complex variational problem. We demonstrate how our proposed solution deals with complex deformations through careful numerical experiments using realistic and simulated data. We then move from diagnosis to robotic surgery, where surgeons perform delicate procedures remotely through robotic manipulators without directly interacting with the patient. As a result, they lack force feedback, an important primary sense for increasing surgeon-patient transparency and for avoiding injuries and high mental workload. To solve this problem, we follow the conservation principles of continuum mechanics, in which the change in shape of an elastic object is directly proportional to the applied force. We therefore create a variational framework to acquire the deformation that the tissues undergo due to an applied force. This information is then used in a learning system to find the nonlinear relationship between the given data and the applied force. We carried out experiments with in-vivo and ex-vivo data and combined statistical, graphical and perceptual analyses to demonstrate the strength of our solution. Finally, we explore robotic cardiac surgery, which allows complex procedures to be carried out, including Off-Pump Coronary Artery Bypass Grafting (OPCABG). This procedure avoids the complications associated with Cardiopulmonary Bypass (CPB), since the heart is not arrested and the surgery is performed on a beating heart. Surgeons therefore have to deal with a dynamic target that compromises their dexterity and the precision of the surgery. To compensate for the heart motion, we propose a solution composed of three elements: an energy function to estimate the 3D heart motion, a specular highlight detection strategy and a prediction approach for increasing the robustness of the solution. We evaluate our solution using phantom and realistic datasets. We conclude the thesis by reporting our findings on these three applications and by highlighting the dependency between motion estimation and motion understanding in any dynamic event, particularly in clinical scenarios.
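    The abstract does not give the exact functional, but as an illustrative sketch only (the symbols below are assumptions, not the thesis notation), a variational motion-estimation energy of the kind described typically balances a data-fidelity term against a regulariser on the displacement field u, with an additional penalty when the image sequence is decomposed into a low-rank component L and a sparse residual S:

```latex
E(u, L, S) = \int_\Omega \rho\big( I_1(x + u(x)) - I_0(x) \big)\, dx
           + \lambda \int_\Omega \psi\big( \nabla u(x) \big)\, dx
           + \mu \,\lVert L \rVert_{*} + \nu \,\lVert S \rVert_{1},
\qquad \text{subject to } I = L + S .
```

    Here rho and psi are robust penalty functions (quadratic or total-variation type), lambda, mu and nu are weighting parameters, and the nuclear norm is the standard way of encoding a low-rank assumption; minimising such an energy yields a dense displacement field of the kind that the diagnostic and force-estimation applications described above rely on.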

    Randomly connected networks generate emergent selectivity and predict decoding properties of large populations of neurons

    Advances in neural recording methods enable sampling from populations of thousands of neurons during the performance of behavioral tasks, raising the question of how recorded activity relates to the theoretical models of computations underlying performance. In the context of decision making in rodents, patterns of functional connectivity between choice-selective cortical neurons, as well as broadly distributed choice information in both excitatory and inhibitory populations, were recently reported [1]. The straightforward interpretation of these data suggests a mechanism relying on specific patterns of anatomical connectivity to achieve selective pools of inhibitory as well as excitatory neurons. We investigate an alternative mechanism for the emergence of these experimental observations using a computational approach. We find that a randomly connected network of excitatory and inhibitory neurons generates single-cell selectivity, patterns of pairwise correlations, and indistinguishable excitatory and inhibitory readout weight distributions, as observed in recorded neural populations. Further, we make the readily verifiable experimental predictions that, for this type of evidence accumulation task, there are no anatomically defined sub-populations of neurons representing choice, and that the choice preference of a particular neuron changes with the details of the task. This work suggests that distributed stimulus selectivity and patterns of functional organization in population codes could be emergent properties of randomly connected networks.
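    To see the flavour of the argument, the sketch below (not the authors' model; network sizes, gains and the evidence inputs are invented here) builds a randomly connected excitatory/inhibitory rate network, drives it with two choice conditions, and checks that individual cells nevertheless look choice-selective and that a linear readout assigns similar weight distributions to excitatory and inhibitory cells:

```python
# Minimal sketch (not the authors' model): a randomly connected E/I rate network
# driven by two "choice" inputs. We check that purely random connectivity still
# yields single-cell choice selectivity and that readout weights for excitatory
# and inhibitory cells look alike. All sizes and gains are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
N_E, N_I = 240, 60
N = N_E + N_I
dt, T, tau = 0.1, 50.0, 10.0

# Dale's law: excitatory columns positive, inhibitory columns negative.
J = np.abs(rng.normal(0.0, 1.0 / np.sqrt(N), (N, N)))
J[:, N_E:] *= -4.0                       # stronger inhibition for rough balance

# Two random (untuned) input vectors, one per choice.
B = rng.normal(0.0, 1.0, (N, 2))

def simulate(choice, trials=40):
    """Trial-by-trial steady rates for a given choice (0 or 1)."""
    rates = np.zeros((trials, N))
    for k in range(trials):
        r = np.zeros(N)
        for _ in range(int(T / dt)):
            inp = B[:, choice] + rng.normal(0.0, 0.5, N)   # noisy evidence
            r += dt / tau * (-r + np.tanh(J @ r + inp))
        rates[k] = r
    return rates

R0, R1 = simulate(0), simulate(1)

# Single-cell selectivity: difference in trial-averaged rate between choices.
selectivity = R0.mean(0) - R1.mean(0)
print("fraction of cells that look choice-selective:",
      np.mean(np.abs(selectivity) > selectivity.std()))

# Linear readout of choice from all cells (least-squares on trial rates).
X = np.vstack([R0, R1])
y = np.hstack([np.zeros(len(R0)), np.ones(len(R1))])
w, *_ = np.linalg.lstsq(X - X.mean(0), y - y.mean(), rcond=None)
print("readout weight spread, E vs I:", w[:N_E].std(), w[N_E:].std())
```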

    The computational neurology of active vision

    In this thesis, we appeal to recent developments in theoretical neurobiology – namely, active inference – to understand the active visual system and its disorders. Chapter 1 reviews the neurobiology of active vision. This introduces some of the key conceptual themes around attention and inference that recur through subsequent chapters. Chapter 2 provides a technical overview of active inference, and its interpretation in terms of message passing between populations of neurons. Chapter 3 applies the material in Chapter 2 to provide a computational characterisation of the oculomotor system. This deals with two key challenges in active vision: deciding where to look, and working out how to look there. The homology between this message passing and the brain networks solving these inference problems provides a basis for in silico lesion experiments, and an account of the aberrant neural computations that give rise to clinical oculomotor signs (including internuclear ophthalmoplegia). Chapter 4 picks up on the role of uncertainty resolution in deciding where to look, and examines the role of beliefs about the quality (or precision) of data in perceptual inference. We illustrate how abnormal prior beliefs influence inferences about uncertainty and give rise to neuromodulatory changes and visual hallucinatory phenomena (of the sort associated with synucleinopathies). We then demonstrate how synthetic pharmacological perturbations that alter these neuromodulatory systems give rise to the oculomotor changes associated with drugs acting upon these systems. Chapter 5 develops a model of visual neglect, using an oculomotor version of a line cancellation task. We then test a prediction of this model using magnetoencephalography and dynamic causal modelling. Chapter 6 concludes by situating the work in this thesis in the context of computational neurology. This illustrates how the variational principles used here to characterise the active visual system may be generalised to other sensorimotor systems and their disorders.
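    As a hedged illustration of the "deciding where to look" theme (this is not the thesis' active-inference scheme; the states, precisions and candidate locations below are invented), the sketch scores fixation locations by the expected reduction in posterior uncertainty about a hidden scene state, saccades to the best one, and then updates its beliefs with the observation obtained there:

```python
# Minimal sketch (not the thesis model): saccade selection as expected
# information gain. A hidden scene state has 3 possible values; each candidate
# fixation yields an observation whose reliability is set by a precision-like
# parameter. The agent looks where expected uncertainty resolution is largest.
import numpy as np

rng = np.random.default_rng(1)
n_states = 3
prior = np.ones(n_states) / n_states           # belief about the scene

def likelihood(precision):
    """Confusion matrix p(o | s) for one fixation location."""
    L = np.full((n_states, n_states), (1 - precision) / (n_states - 1))
    np.fill_diagonal(L, precision)
    return L                                    # rows: state, cols: observation

locations = {"left": likelihood(0.90),
             "centre": likelihood(0.50),
             "right": likelihood(0.34)}         # ~uninformative view

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

def expected_info_gain(prior, L):
    """Average posterior-entropy reduction over predicted observations."""
    p_obs = prior @ L                           # predictive distribution over o
    gain = 0.0
    for o in range(n_states):
        post = prior * L[:, o]
        post /= post.sum()
        gain += p_obs[o] * (entropy(prior) - entropy(post))
    return gain

gains = {loc: expected_info_gain(prior, L) for loc, L in locations.items()}
target = max(gains, key=gains.get)
print("expected information gain per location:", gains)
print("saccade to:", target)

# Simulate the fixation and update beliefs with Bayes' rule.
true_state = 2
obs = rng.choice(n_states, p=locations[target][true_state])
posterior = prior * locations[target][:, obs]
posterior /= posterior.sum()
print("posterior after fixation:", posterior)
```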

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
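    As a rough illustration of that descriptive level (a minimal sketch, not Clark's or any specific author's formulation; the generative weights and precisions are invented), the loop below shows a single stage of predictive coding: top-down predictions are compared with the input, and only the precision-weighted prediction error drives the update of the internal estimate, so feedback carries predictions while feedforward traffic carries what was not predicted:

```python
# Minimal single-level predictive coding sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(0.0, 1.0, (5, 2))          # generative (top-down) weights, assumed known
g = lambda mu: W @ mu                     # prediction of sensory input from the cause

mu_true = np.array([1.0, -0.5])
x = g(mu_true) + rng.normal(0.0, 0.05, 5)  # noisy sensory input

pi = 1.0 / 0.05**2                        # sensory precision (inverse variance)
mu = np.zeros(2)                          # initial belief about the hidden cause
lr = 1e-4                                 # small step size for the belief update

for _ in range(200):
    eps = pi * (x - g(mu))                # precision-weighted prediction error (feedforward)
    mu += lr * (W.T @ eps)                # belief update driven by the error signal

print("inferred cause:", mu, " true cause:", mu_true)
```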

    The selection and evaluation of a sensory technology for interaction in a warehouse environment

    In recent years, Human-Computer Interaction (HCI) has become a significant part of modern life, as it has improved human performance in completing daily tasks using computerised systems. The increase in the variety of bio-sensing and wearable technologies on the market has propelled designers towards more efficient, effective and fully natural User Interfaces (UI), such as the Brain-Computer Interface (BCI) and the Muscle-Computer Interface (MCI). BCI and MCI have been used for various purposes, such as controlling wheelchairs, piloting drones, providing alphanumeric input to a system and improving sports performance. Workers in a warehouse environment experience various challenges. Because they often have to carry objects (referred to as being hands-full), it is difficult for them to interact with traditional devices. Noise is present in many industrial environments and is a major cause of communication problems; this has reduced the popularity of verbal interfaces for computer applications such as Warehouse Management Systems. Another factor that affects the performance of workers is action slips caused by a lack of concentration during, for example, routine picking activities. These can have a negative impact on job performance and cause a worker to execute a task incorrectly. This research project investigated the current challenges workers experience in a warehouse environment and the technologies utilised in this environment. The latest automation and identification systems and technologies are identified and discussed, specifically those which have addressed known problems. Sensory technologies were identified that enable interaction between a human and a computerised warehouse environment. Biological and natural human behaviours that are applicable to interaction with a computerised environment were described and discussed. The interactive behaviours included the visual, auditory, speech production and physiological movement behaviours, and other natural human behaviours such as paying attention, action slips and counting items were also investigated. A number of modern sensory technologies, devices and techniques for HCI were identified with the aim of selecting and evaluating an appropriate sensory technology for MCI. MCI technologies enable a computer system to recognise hand and other gestures of a user, creating a means of direct interaction between a user and a computer, as they are able to detect specific features extracted from a biological or physiological activity. Machine Learning (ML) is then applied to train a computer system to detect these features and convert them into a computer interface. An application of biomedical signals (bio-signals) in HCI using a MYO Armband for MCI is presented. An MCI prototype (MCIp) was developed and implemented to allow a user to provide input to an HCI in both hands-free and hands-full situations. The MCIp was designed and developed to recognise the hand-finger gestures of a person when both hands are free or when holding an object, such as a cardboard box. The MCIp applies an Artificial Neural Network (ANN) to classify features extracted from the surface electromyography signals acquired by the MYO Armband around the forearm muscles. Using the ANN, the MCIp classified gestures with an accuracy of 34.87% in the hands-free situation.
The MCIp furthermore enabled users to provide numeric input to the system hands-full with an accuracy of 59.7%, after a training session of only 10 seconds per gesture. The results were obtained from eight participants. Similar experimentation with the MYO Armband had not been found in the literature at the time this document was submitted. Based on this novel experimentation, the main contribution of this research study is the suggestion that a MYO Armband, as a commercially available muscle-sensing device, has potential as an MCI for recognising finger gestures both hands-free and hands-full. An accurate MCI can increase the efficiency and effectiveness of an HCI tool when applied to different applications in a warehouse, where noise and hands-full activities pose a challenge. Future work to improve its accuracy is proposed.
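    As a hedged sketch of the pipeline the abstract describes (not the thesis implementation; the EMG data below are synthetic stand-ins, and the feature choices and network size are assumptions), the code windows an 8-channel surface-EMG stream, extracts simple time-domain features per channel, and trains a small feed-forward ANN to classify the gesture label:

```python
# Minimal sketch of an EMG gesture classifier of the kind described above.
# Synthetic signals stand in for the armband's 8-channel sEMG stream.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_channels, window, n_gestures, reps = 8, 200, 5, 120

# Fixed per-gesture activation pattern over the 8 channels (synthetic stand-in).
patterns = rng.uniform(0.2, 1.0, (n_gestures, n_channels))

def emg_window(gesture):
    """One synthetic window of raw EMG: zero-mean noise scaled per channel."""
    scale = patterns[gesture] * (1 + 0.3 * rng.standard_normal(n_channels))
    return rng.standard_normal((window, n_channels)) * scale

def features(sig):
    """Classic time-domain EMG features per channel: RMS and mean absolute value."""
    return np.concatenate([np.sqrt((sig ** 2).mean(0)), np.abs(sig).mean(0)])

X = np.array([features(emg_window(g))
              for g in range(n_gestures) for _ in range(reps)])
y = np.repeat(np.arange(n_gestures), reps)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

# Small feed-forward ANN mapping feature vectors to gesture labels.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out gesture accuracy:", clf.score(X_te, y_te))
```

    In a real deployment the synthetic generator would be replaced by the armband's streaming interface, and short per-user calibration recordings (such as the 10-second training sessions reported above) would supply the labelled windows.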