
    The next challenge for world wide robotized tele-echography experiment (WORTEX 2012): from engineering success to healthcare delivery.

    Access to good-quality healthcare remains difficult for many patients, whether they live in developed or developing countries. In developed countries, specialist medical expertise is concentrated in major hospitals in urban settings, both to improve clinical outcomes and as a strategy to reduce the costs of specialist healthcare delivery. In developing countries, millions of people have limited, if any, routine access to a healthcare system, and economic and cultural factors may further restrict the accessibility of any services. In both cases, geographical, socio-political, cultural and economic factors produce ‘medically isolated areas’ where patients find themselves disadvantaged in terms of timely diagnosis and expert and/or expensive treatment. The robotized tele-echography approach, also referred to as robotized tele-ultrasound, offers a potential solution to diagnostic imaging in medically isolated areas. It is designed for patients requiring ultrasound scans for routine care (e.g., antenatal care) and for diagnostic imaging to investigate acute and emergency medical conditions, including trauma care and responses to natural disasters such as earthquakes. The robotized tele-echography system can hold any standard ultrasound probe; this lightweight system is positioned on the patient’s body by a healthcare assistant. The medical expert, a clinician with expertise in ultrasound imaging and diagnosis, is in a distant location and, using a dedicated joystick, remotely controls the scanning via any available communication link (Internet, satellite). The WORTEX 2012 intercontinental trials of the system, conducted last year, successfully demonstrated the feasibility of remote robotized tele-echography in a range of cultural, technical and clinical contexts. In addition to this engineering success, the trials provided positive feedback from the participating clinicians and patients on using the system and on its perceived potential to transform healthcare in medically isolated areas. The next challenge is to show evidence that this innovative technology can deliver on its promise if introduced into routine healthcare.
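
    The abstract above describes a teleoperation loop: the expert's joystick motions are streamed over an ordinary network link (Internet or satellite) to the robot holding the probe, which applies them within safe limits. The snippet below is a minimal sketch of such a loop under an assumed message format and an assumed UDP transport; it is illustrative only and not the actual WORTEX 2012 protocol or interface.

```python
# Illustrative tele-echography command link (assumed message format and transport,
# not the WORTEX 2012 protocol): the expert station streams small probe-orientation
# increments read from a joystick, and the robot station clamps and applies them.
import json
import socket

ROBOT_ADDR = ("127.0.0.1", 9000)  # placeholder endpoint for the robot station

def send_probe_command(sock, d_roll, d_pitch, d_yaw, d_force):
    """Expert side: pack one joystick sample and send it over UDP."""
    msg = {"droll": d_roll, "dpitch": d_pitch, "dyaw": d_yaw, "dforce": d_force}
    sock.sendto(json.dumps(msg).encode(), ROBOT_ADDR)

def apply_probe_command(state, msg, tilt_limit=30.0, force_limit=5.0):
    """Robot side: integrate the increment, clamping tilt (deg) and contact force (N)."""
    state["roll"] = max(-tilt_limit, min(tilt_limit, state["roll"] + msg["droll"]))
    state["pitch"] = max(-tilt_limit, min(tilt_limit, state["pitch"] + msg["dpitch"]))
    state["yaw"] = (state["yaw"] + msg["dyaw"]) % 360.0
    state["force"] = max(0.0, min(force_limit, state["force"] + msg["dforce"]))
    return state

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_probe_command(sock, d_roll=0.5, d_pitch=-0.2, d_yaw=0.0, d_force=0.1)
```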

    Medical image computing and computer-aided medical interventions applied to soft tissues. Work in progress in urology

    Until recently, Computer-Aided Medical Interventions (CAMI) and Medical Robotics have focused on rigid, non-deformable anatomical structures. Nowadays, special attention is paid to soft tissues, which raise complex issues due to their mobility and deformation. Minimally invasive digestive surgery was probably one of the first fields where soft tissues were handled, through the development of simulators, tracking of anatomical structures and specific assistance robots. However, other clinical domains, for instance urology, are also concerned. Indeed, laparoscopic surgery, new tumour destruction techniques (e.g. HIFU, radiofrequency, or cryoablation), increasingly early detection of cancer, and the use of interventional and diagnostic imaging modalities have recently opened new challenges for urologists and the scientists involved in CAMI. Over the last five years, this has resulted in a very significant increase in research on and development of computer-aided urology systems. In this paper, we describe the main problems related to computer-aided diagnosis and therapy of soft tissues and give a survey of the different types of assistance offered to the urologist: robotization, image fusion, and surgical navigation. Both research projects and operational industrial systems are discussed.
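
    Among the types of assistance listed above, image fusion typically begins with a rigid alignment of corresponding landmarks from two modalities before any soft-tissue (deformable) modelling. The snippet below is a generic least-squares (Kabsch) alignment of two point sets, shown only to make that step concrete; it is not a method from the surveyed systems.

```python
# Minimal rigid point-set alignment (Kabsch/Procrustes), the kind of step used when
# fusing landmarks from two imaging modalities; a generic illustration only.
import numpy as np

def rigid_register(src, dst):
    """Return rotation R and translation t minimising sum ||R @ src_i + t - dst_i||^2."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.random((10, 3))                # synthetic landmarks in modality A
    theta = np.radians(20.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    dst = src @ R_true.T + np.array([5.0, -2.0, 1.0])   # same landmarks in modality B
    R, t = rigid_register(src, dst)
    print(np.allclose(src @ R.T + t, dst))   # True: rigid transform recovered
```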

    Lungs cancer nodules detection from CT scan images with convolutional neural networks

    Lung cancer is a life-threatening disease that has been a worldwide problem for a long time. The only plausible solution for this type of disease is early detection, because at preliminary stages it can be treated or cured. With recent medical advancements, the Computerized Tomography (CT) scan is the best available technique for imaging internal body organs. Sometimes even experienced doctors are unable to identify cancer just by looking at a CT scan. During the past few years, a great deal of research has been devoted to lung cancer detection, but these efforts have struggled to achieve high accuracy. The main objective of this research was to find an appropriate method for classifying nodules and non-nodules. For classification, the dataset was taken from the Japanese Society of Radiological Technology (JSRT), comprising 247 three-dimensional images. The images were preprocessed into grayscale images. The lung cancer detection model was built using Convolutional Neural Networks (CNN). The model achieved an accuracy of 88% with a lowest loss rate of 0.21% and was found to be better than other, more complex classification methods.
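
    The abstract states only that a CNN classifies preprocessed grayscale images as nodule or non-nodule; the architecture itself is not given. The sketch below is therefore an assumed, minimal stand-in (input resolution, layer sizes, and optimizer are illustrative choices, not the authors' configuration).

```python
# Minimal binary nodule / non-nodule CNN; an illustrative stand-in, since the
# abstract does not specify the paper's architecture. All hyperparameters are assumptions.
from tensorflow.keras import layers, models

def build_nodule_cnn(input_shape=(128, 128, 1)):
    """Grayscale image in, probability of 'nodule' out."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # nodule vs. non-nodule
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_nodule_cnn().summary()  # print the assumed layer stack
```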

    Sensor-based navigating mobile robots for people with disabilities

    People with severe physical disabilities need help with everyday tasks, such as getting dressed, eating, brushing their teeth, scratching themselves, drinking, etc. They also need support to be able to work. They are usually helped by one or more persons.

    From Concept to Market: Surgical Robot Development

    Surgical robotics and its supporting technologies have become a prime example of modern applied information technology infiltrating our everyday lives. The development of these systems spans four decades, yet only the last few years have brought the market value and rising customer base imagined by the early developers. This chapter guides the reader through the historical development of the most important systems and provides references and lessons learned for current engineers facing similar challenges. Special emphasis is put on system validation, assessment and clearance, the most commonly cited barrier hindering the wider deployment of a system.

    Robotic Platforms for Assistance to People with Disabilities

    People with congenital and/or acquired disabilities constitute a large group of dependent people today. Robotic platforms to help people with disabilities are being developed with the aim of providing both rehabilitation treatment and assistance to improve their quality of life. High demand for robotic platforms that provide assistance during rehabilitation is expected given the state of world health during the COVID-19 pandemic, which has left countries facing major challenges in ensuring the health and autonomy of their disabled populations. Robotic platforms are necessary to ensure assistance and rehabilitation for disabled people in the current global situation. Their capabilities in this area must be continuously improved to benefit the healthcare sector in terms of chronic disease prevention, assistance, and autonomy. For this reason, research on human–robot interaction in these assistive robotic environments must grow and advance, because the topic demands sensitive and intelligent robotic platforms equipped with complex sensory systems, rich handling functionalities, safe control strategies, and intelligent computer vision algorithms. This Special Issue has published eight papers covering recent advances in the field of robotic platforms that assist disabled people in daily or clinical environments. The papers address innovative solutions in this field, including affordable assistive robotic devices, new computer vision techniques for intelligent and safe human–robot interaction, and advances in mobile manipulators for assistive tasks.

    Robotic Ultrasound Imaging: State-of-the-Art and Future Perspectives

    Ultrasound (US) is one of the most widely used modalities for clinical intervention and diagnosis, owing to its non-invasive, radiation-free, and real-time imaging. However, free-hand US examinations are highly operator-dependent. Robotic US Systems (RUSS) aim to overcome this shortcoming by offering reproducibility, while also aiming to improve dexterity and to enable intelligent, anatomy- and disease-aware imaging. In addition to enhancing diagnostic outcomes, RUSS also holds the potential to provide medical interventions for populations suffering from a shortage of experienced sonographers. In this paper, we categorize RUSS as teleoperated or autonomous. For teleoperated RUSS, we summarize the technical developments and clinical evaluations. This survey then focuses on recent work on autonomous robotic US imaging. We demonstrate that machine learning and artificial intelligence are the key techniques enabling intelligent, patient- and process-specific, motion- and deformation-aware robotic image acquisition. We also show that research on artificial intelligence for autonomous RUSS has directed the research community toward understanding and modeling expert sonographers' semantic reasoning and actions. Here, we call this process the recovery of the "language of sonography". This side result of research on autonomous robotic US acquisition could be considered as valuable and essential as the progress made in the robotic US examination itself. This article provides both engineers and clinicians with a comprehensive understanding of RUSS by surveying the underlying techniques. (Accepted by Medical Image Analysis.)
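
    To make the teleoperated/autonomous distinction concrete, the sketch below caricatures the acquisition loop of an autonomous RUSS: acquire a frame, score it with a learned quality or anatomy-visibility model, and nudge the probe pose until the view is acceptable. Every callable here is a placeholder, not an interface from any surveyed system.

```python
# Caricature of an autonomous robotic ultrasound acquisition loop; acquire_image,
# score_quality, and move_probe are placeholders, not a real RUSS API.
import numpy as np

def acquire_image(pose):
    """Placeholder: return a synthetic US frame for the given probe pose."""
    return np.random.rand(256, 256)

def score_quality(frame):
    """Placeholder for a learned image-quality / anatomy-visibility model."""
    return float(frame.mean())

def move_probe(pose, step):
    """Placeholder: apply a small in-plane translation to the probe pose (metres)."""
    return pose + step

def autonomous_sweep(start_pose, target_score=0.55, max_steps=50):
    pose = np.asarray(start_pose, dtype=float)
    frame = acquire_image(pose)
    for _ in range(max_steps):
        if score_quality(frame) >= target_score:
            break                                   # acceptable view found
        pose = move_probe(pose, step=np.array([0.002, 0.0, 0.0]))  # 2 mm step
        frame = acquire_image(pose)
    return pose, frame

if __name__ == "__main__":
    final_pose, _ = autonomous_sweep(start_pose=[0.0, 0.0, 0.0])
    print("stopped at probe pose:", final_pose)
```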

    Impact of Ear Occlusion on In-Ear Sounds Generated by Intra-oral Behaviors

    We conducted a case study with one volunteer and a recording setup to detect sounds induced by the following actions: jaw clenching, tooth grinding, reading, eating, and drinking. The setup consisted of two in-ear microphones, where the left ear was semi-occluded with a commercially available earpiece and the right ear was occluded with a mouldable silicone earpiece. Investigations in the time and frequency domains demonstrated that for behaviors such as eating, tooth grinding, and reading, sounds could be recorded with both sensors. For jaw clenching, however, occluding the ear with a mouldable earpiece was necessary to enable its detection. This can be attributed to the fact that the mouldable earpiece sealed the ear canal and isolated it from the environment, resulting in a detectable change in pressure. In conclusion, our work suggests that detecting behaviors such as eating, grinding, and reading with a semi-occluded ear is possible, whereas behaviors such as clenching require complete occlusion of the ear if the activity is to be easily detectable. Nevertheless, the latter approach may limit real-world applicability because it hinders hearing.
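
    As a companion to the time- and frequency-domain analysis described above, the snippet below shows one generic way to compute band energy from a two-channel in-ear recording with SciPy and compare the semi-occluded and occluded channels; the file name, band limits, and spectrogram settings are placeholders rather than the study's actual setup.

```python
# Generic time/frequency inspection of a two-channel in-ear recording; the file name
# and the analysis band are placeholders, not the configuration used in the case study.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def band_energy(signal, fs, f_lo=20.0, f_hi=1000.0):
    """Sum spectrogram energy in a low-frequency band, where occluded-ear body sounds tend to sit."""
    freqs, _times, Sxx = spectrogram(signal, fs=fs, nperseg=1024)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(Sxx[band].sum())

if __name__ == "__main__":
    fs, audio = wavfile.read("inear_recording.wav")  # placeholder stereo file
    audio = audio.astype(np.float64)
    left, right = audio[:, 0], audio[:, 1]           # semi-occluded vs. fully occluded ear
    print("semi-occluded band energy:", band_energy(left, fs))
    print("occluded band energy     :", band_energy(right, fs))
```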