
    Markov-Gibbs Random Field Approach for Modeling of Skin Surface Textures

    Medical imaging has been contributing to dermatology by providing computer-based assistance through 2D digital imaging of skin and processing of the resulting images. Skin imaging can be made more effective by the inclusion of 3D skin features. Furthermore, clinical examination of skin consists of both visual and tactile inspection. The tactile sensation is related to 3D surface profiles and mechanical parameters. The 3D imaging of skin can also be integrated with haptic technology for computer-based tactile inspection. The research objective of this work is to model 3D surface textures of skin. A 3D image acquisition setup capturing skin surface textures at high resolution (~0.1 mm) has been used. An algorithm to extract a 2D grayscale texture (height map) from the 3D texture is presented. The extracted 2D textures are then modeled using the Markov-Gibbs random field (MGRF) modeling technique. The modeling results for the MGRF model depend on the characteristics of the input texture. Homogeneous, spatially invariant texture patterns are modeled successfully. From observation of skin samples, we identify three key features of 3D skin profiles: curvature of the underlying limb, wrinkle/line-like features, and fine textures. The skin samples are distributed into three input sets to examine the MGRF model's response to each of these 3D features. The first set contains all three features. The second set is obtained after elimination of curvature and contains both wrinkle/line-like features and fine textures. The third set is also obtained after elimination of curvature but contains fine textures only. MGRF modeling for set 1 did not result in any visual similarity; hence the curvature of the underlying limb cannot be modeled successfully and constitutes an inhomogeneous feature. For set 2, the wrinkle/line-like features can be modeled with low/medium visual similarity depending on their spatial invariance.
The results for set 3 show that fine textures of skin are almost always modeled successfully with medium/high visual similarity and constitute a homogeneous feature. We conclude that the MGRF model is able to model fine textures of skin successfully, which are on a scale of ~0.1 mm. Surface profiles at this resolution can provide the haptic sensation of roughness and friction. Therefore, fine textures can be an important cue to different skin conditions perceived through tactile inspection via a haptic device.
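The height-map-to-MGRF step can be sketched computationally. The following is a minimal illustration of estimating pairwise Gibbs potentials from co-occurrence statistics of a grayscale height map; the offsets, the number of grey levels, and the log-frequency potential estimate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gibbs_potentials(img, offsets, levels=16):
    """Estimate pairwise MGRF potentials for a grayscale height map.

    Potentials are approximated as log co-occurrence frequencies of
    quantised grey-level pairs at each given (dy, dx) offset (dy, dx >= 0).
    """
    q = (img / (img.max() + 1e-9) * (levels - 1)).astype(int)  # quantise grey levels
    potentials = {}
    for dy, dx in offsets:
        a = q[:q.shape[0] - dy, :q.shape[1] - dx]   # reference pixels
        b = q[dy:, dx:]                             # offset neighbours
        hist = np.zeros((levels, levels))
        np.add.at(hist, (a.ravel(), b.ravel()), 1)  # co-occurrence counts
        freq = hist / hist.sum()
        potentials[(dy, dx)] = np.log(freq + 1e-9)  # log-frequency estimate
    return potentials
```

Synthesising a texture from such potentials would additionally require a Gibbs sampler; only the parameter-estimation step is shown here.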

    Facial and Bodily Expressions for Control and Adaptation of Games (ECAG 2008)


    Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning

    Intraluminal procedures have opened up a new sub-field of minimally invasive surgery that uses flexible instruments to navigate through complex luminal structures of the body, resulting in reduced invasiveness and improved patient benefits. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body. Robotics has emerged as a promising solution to this problem. However, to achieve successful robotic intraluminal interventions, the control of the instrument needs to be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously, while still allowing the surgeon to retain overall control of the procedure. The approach facilitates the development of specialized algorithms such as Deep Reinforcement Learning (DRL) for subtasks like navigation and tissue manipulation to produce robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments using simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation, while also reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
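As a toy stand-in for the subtask-automation idea (not the thesis's deep RL agents or surgical simulators), the reinforcement-learning control loop for a navigation subtask can be sketched with tabular Q-learning on a one-dimensional, discretised lumen; all states, rewards, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

# Toy navigation subtask: steer an instrument tip along a discretised
# lumen (states 0..9) to a target state, learning with tabular Q-learning.
rng = np.random.default_rng(0)
n_states, target, actions = 10, 7, (-1, +1)
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.5, 0.9, 0.2       # learning rate, discount, exploration

for episode in range(500):
    s = 0                               # each attempt starts at the lumen entrance
    for _ in range(50):
        # epsilon-greedy action selection
        a = rng.integers(len(actions)) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(max(s + actions[a], 0), n_states - 1)
        r = 1.0 if s2 == target else -0.01          # reward reaching the target
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if s2 == target:
            break

# Greedy policy below the target: every state should choose "advance" (+1).
greedy = [int(Q[s].argmax()) for s in range(target)]
```

Replacing the table with a neural network and the toy lumen with a surgical simulator yields the deep-RL setting the thesis works in.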

    Medical Robotics

    The first generation of surgical robots is already being installed in a number of operating rooms around the world. Robotics is being introduced to medicine because it allows for unprecedented control and precision of surgical instruments in minimally invasive procedures. So far, robots have been used to position an endoscope, perform gallbladder surgery, and correct gastroesophageal reflux and heartburn. The ultimate goal of the robotic surgery field is to design a robot that can be used to perform closed-chest, beating-heart surgery. The use of robotics in surgery will undoubtedly expand over the coming decades. Minimally Invasive Surgery (MIS) is a revolutionary approach in surgery. In MIS, the operation is performed with instruments and viewing equipment inserted into the body through small incisions created by the surgeon, in contrast to open surgery with large incisions. This minimizes surgical trauma and damage to healthy tissue, resulting in shorter patient recovery time. The aim of this book is to provide an overview of the state of the art and to present new ideas, original results, and practical experiences in this expanding area. Many chapters in the book present advanced research in this growing field. The book provides a critical analysis of clinical trials and an assessment of the benefits and risks of applying these technologies. This book is certainly a small sample of the research activity on Medical Robotics going on around the globe as you read it, but it surely covers a good deal of what has been done in the field recently, and as such it works as a valuable source for researchers interested in the involved subjects, whether they are currently “medical roboticists” or not.

    Affective Computing

    This book provides an overview of state of the art research in Affective Computing. It presents new ideas, original results and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, in the second section (Chapters 8 to 11) we present research on the perception and generation of emotional expressions using full-body motions. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. In the last section of the book (Chapters 17 to 22) we present applications related to affective computing.

    Quantifying Texture Scale in Accordance With Human Perception

    Visual texture has multiple perceptual attributes (e.g. regularity, isotropy, etc.), including scale. The scale of visual texture has been defined as the size of the repeating unit (or texel) of which the texture is composed. Not all textures are formed through the placement of a clearly discernible repeating unit (e.g. irregular and stochastic textures). There is currently no rigorous definition of texture scale that is applicable to textures of a wide range of regularities. We hypothesised that texture scale ought to extend to these less regular textures. Non-overlapping sample windows (or patches) taken from a texture appear increasingly similar as the size of the window gets larger. This is true irrespective of whether the texture is formed by the placement of a discernible repeating unit or not. We propose the following new characterisation for texture scale: “the smallest window size beyond which the texture appears consistent”. We perform two psychophysical studies and report data that demonstrate consensus across subjects and across methods of probing in the assessment of texture scale. We then present an empirical algorithm for the estimation of scale based on this characterisation. We demonstrate agreement between the algorithm and (subjective) human assessment with an RMS accuracy of 1.2 just-noticeable-differences, a significant improvement over previously published algorithms. We provide two ground-truth perceptual datasets, one for each of our psychophysical studies, for the texture scale of the entire Brodatz album, together with confidence levels for each of our estimates. Finally, we make available an online tool which researchers can use to obtain texture scale estimates by uploading images of textures.
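The window-consistency characterisation lends itself to a direct computational sketch: grow the window size until histograms of non-overlapping patches stop disagreeing. The histogram comparison, the L1 distance, and the tolerance below are illustrative assumptions, not the paper's empirical algorithm.

```python
import numpy as np

def patch_dissimilarity(img, w, bins=32, max_patches=16):
    """Mean pairwise L1 distance between histograms of non-overlapping w×w patches."""
    H, W = img.shape
    patches = [img[y:y + w, x:x + w]
               for y in range(0, H - w + 1, w)
               for x in range(0, W - w + 1, w)][:max_patches]
    hists = [np.histogram(p, bins=bins, range=(0, 1), density=True)[0]
             for p in patches]
    d = [np.abs(h1 - h2).mean()
         for i, h1 in enumerate(hists) for h2 in hists[i + 1:]]
    return float(np.mean(d))

def estimate_scale(img, sizes, tol=0.1):
    """Smallest window size (from ascending `sizes`) whose patches look consistent."""
    for w in sizes:
        if patch_dissimilarity(img, w) <= tol:
            return w
    return sizes[-1]                    # fall back to the largest candidate
```

For a texture tiled from an 8×8 unit, windows smaller than the tile disagree while windows of the tile size (and multiples) agree, so the estimate lands on the repeating-unit size.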

    Discovering a Domain Knowledge Representation for Image Grouping: Multimodal Data Modeling, Fusion, and Interactive Learning

    In visually-oriented specialized medical domains such as dermatology and radiology, physicians explore interesting image cases from medical image repositories for comparative case studies to aid clinical diagnoses, educate medical trainees, and support medical research. However, general image classification and retrieval approaches fail to group medical images from the physicians' viewpoint. This is because fully-automated learning techniques cannot yet bridge the gap between image features and domain-specific content, owing to the absence of expert knowledge. Understanding how experts get information from medical images is therefore an important research topic. As a prior study, we conducted data elicitation experiments in which physicians were instructed to inspect each medical image towards a diagnosis while describing the image content to a student seated nearby. The experts' eye movements and their verbal descriptions of the image content were recorded to capture various aspects of expert image understanding. This dissertation aims at an intuitive approach to extracting expert knowledge, which is to find patterns in expert data elicited from image-based diagnoses. These patterns are useful for understanding both the characteristics of the medical images and the experts' cognitive reasoning processes. The transformation from the viewed raw image features to their interpretation as domain-specific concepts requires experts' domain knowledge and cognitive reasoning. This dissertation also approximates this transformation using a matrix factorization-based framework, which helps project multiple expert-derived data modalities to high-level abstractions. To combine additional expert interventions with computational processing capabilities, an interactive machine learning paradigm is developed that treats experts as an integral part of the learning process.
Specifically, experts locally refine the medical image groups presented by the learned model, in order to incrementally re-learn the model globally. This paradigm avoids onerous expert annotations for model training, while aligning the learned model with the experts' sense-making.
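A matrix factorization-based fusion of expert-derived modalities can be illustrated with a minimal non-negative matrix factorization (NMF) sketch. The feature dimensions, the modality split, and the Lee–Seung multiplicative updates below are illustrative assumptions, not the dissertation's actual framework.

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Multiplicative-update NMF: V (n×m, non-negative) ≈ W (n×k) @ H (k×m)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1        # non-negative initialisation
    H = rng.random((k, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # standard Lee–Seung updates
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Fuse modalities by stacking hypothetical per-image feature blocks
# (image features, gaze features, verbal-description features) side by side,
# then factorising the joint matrix into k latent "concepts".
rng = np.random.default_rng(1)
img_feats = rng.random((20, 12))        # hypothetical image features (rows = images)
gaze_feats = rng.random((20, 6))        # hypothetical gaze features
text_feats = rng.random((20, 10))       # hypothetical description features
V = np.hstack([img_feats, gaze_feats, text_feats])
W, H = nmf(V, k=4)
groups = W.argmax(axis=1)               # crude grouping by dominant latent concept
```

Rows of W then give each image's loading on the shared latent concepts, which is the kind of high-level abstraction the projection step aims at; interactive refinement would adjust this grouping and re-fit.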