Intraoperative Navigation Systems for Image-Guided Surgery
Recent technological advances in medical imaging equipment have dramatically improved image accuracy, providing useful information that was previously unavailable to clinicians. In the surgical context, intraoperative imaging is crucial to the success of the operation.
Many nontrivial scientific and technical problems need to be addressed in order to efficiently exploit the different information sources now available in advanced
operating rooms. In particular, it is necessary to provide: (i) accurate tracking of
surgical instruments, (ii) real-time matching of images from different modalities, and
(iii) reliable guidance toward the surgical target. Satisfying all of these requisites
is needed to realize effective intraoperative navigation systems for image-guided
surgery.
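For concreteness, the following is a minimal sketch of the SVD-based rigid point registration (the Kabsch/Horn method) commonly used to address requisites (i) and (ii), e.g. to align fiducials localized in two modalities or in tracker and image space; it is a generic illustration with made-up coordinates, not the pipeline developed in this thesis.

```python
# Minimal sketch of rigid point-based registration (Kabsch/Horn method).
# Illustrative only, not the thesis's actual pipeline: it aligns two point
# sets (e.g., fiducials localized in two modalities, or tracker vs. image space).
import numpy as np

def rigid_register(source, target):
    """Return rotation R and translation t such that R @ source_i + t ~= target_i.

    source, target: (N, 3) arrays of corresponding 3-D points.
    """
    src_centroid = source.mean(axis=0)
    tgt_centroid = target.mean(axis=0)
    src_centered = source - src_centroid
    tgt_centered = target - tgt_centroid

    # Cross-covariance and SVD give the least-squares optimal rotation.
    H = src_centered.T @ tgt_centered
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

# Usage: fiducials picked in preoperative CT vs. the same points touched
# with a tracked pointer (all coordinates here are hypothetical).
ct_points = np.array([[10.0, 2.0, 3.0], [4.0, 8.0, 1.0], [7.0, 5.0, 9.0], [2.0, 6.0, 4.0]])
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
tracker_points = ct_points @ R_true.T + np.array([5.0, -2.0, 1.0])
R, t = rigid_register(ct_points, tracker_points)
residual = np.linalg.norm(ct_points @ R.T + t - tracker_points, axis=1)
print("per-fiducial registration residual (mm):", residual.round(6))
```

The per-fiducial residuals printed at the end correspond to the fiducial registration error that navigation systems typically report as a quality check before guidance begins.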
Various solutions have been proposed and successfully tested in the field of image navigation systems over the last ten years; nevertheless, several problems regarding the precision, usability and capabilities of existing systems still arise in most applications. Identifying and solving these issues represents an urgent scientific challenge.
This thesis investigates the current state of the art in the field of intraoperative
navigation systems, focusing in particular on the challenges related to efficient and
effective usage of ultrasound imaging during surgery.
The main contributions of this thesis to the state of the art are:
- Techniques for automatic motion compensation and therapy monitoring, applied to a novel ultrasound-guided surgical robotic platform in the context of abdominal tumor thermoablation.
- Novel image-fusion-based navigation systems for ultrasound-guided neurosurgery in the context of brain tumor resection, highlighting their applicability as off-line surgical training instruments.
The proposed systems, which were designed and developed in the framework of
two international research projects, have been tested in real or simulated surgical
scenarios, showing promising results toward their application in clinical practice.
Three-dimensional imaging with multiple degrees of freedom using data fusion
This paper presents an overview of research work, together with some novel strategies and results, on using data fusion in 3-D imaging with multiple information sources. We examine
a variety of approaches and applications such as 3-D
imaging integrated with polarimetric and multispectral imaging,
low levels of photon flux for photon-counting 3-D imaging,
and image fusion in both multiwavelength 3-D digital holography
and 3-D integral imaging. Results demonstrate the benefits
data fusion provides for different purposes, including visualization
enhancement under different conditions, and 3-D reconstruction
quality improvement.
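As a generic illustration of pixel-level image fusion (not the specific fusion rules studied in this paper), the sketch below combines two co-registered images, such as reconstructions at two wavelengths, with a local-variance weighting so that the locally sharper source dominates; the test images and window size are hypothetical.

```python
# Generic illustration of pixel-level fusion of two co-registered images.
# The local-variance weighting rule is a common textbook choice, not the
# specific fusion strategy of the paper.
import numpy as np

def local_variance(img, win=5):
    """Local variance of a 2-D image over a win x win window."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].var()
    return out

def fuse(img_a, img_b, win=5, eps=1e-9):
    """Weight each pixel by local contrast so the sharper source dominates."""
    va, vb = local_variance(img_a, win), local_variance(img_b, win)
    w = va / (va + vb + eps)
    return w * img_a + (1.0 - w) * img_b

# Hypothetical example: two noisy views of the same scene, each informative
# only in one half; the fused image keeps the better half of each.
rng = np.random.default_rng(0)
scene = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
img_a = scene + rng.normal(0.0, 0.01, scene.shape); img_a[:, 32:] = 0.5
img_b = scene + rng.normal(0.0, 0.01, scene.shape); img_b[:, :32] = 0.5
fused = fuse(img_a, img_b)
print("RMSE a / b / fused:",
      *(np.sqrt(((x - scene) ** 2).mean()).round(4) for x in (img_a, img_b, fused)))
```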
Fine Art Pattern Extraction and Recognition
This is a reprint of articles from the Special Issue published online in the open access journal Journal of Imaging (ISSN 2313-433X) (available at: https://www.mdpi.com/journal/jimaging/special_issues/faper2020)
Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots
Autonomous robots that assist humans in day-to-day living tasks are becoming
increasingly popular. Autonomous mobile robots operate by sensing and
perceiving their surrounding environment to make accurate driving decisions. A
combination of several different sensors, such as LiDAR, radar, ultrasound
sensors and cameras, is utilized to sense the surrounding environment of
autonomous vehicles. These heterogeneous sensors simultaneously capture various
physical attributes of the environment. Such multimodality and redundancy of
sensing need to be positively utilized for reliable and consistent perception
of the environment through sensor data fusion. However, these multimodal sensor
data streams are different from each other in many ways, such as temporal and
spatial resolution, data format, and geometric alignment. For the subsequent
perception algorithms to utilize the diversity offered by multimodal sensing,
the data streams need to be spatially, geometrically and temporally aligned
with each other. In this paper, we address the problem of fusing the outputs of
a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image
sensor for free space detection. The outputs of the LiDAR scanner and the image
sensor are of different spatial resolutions and need to be aligned with each
other. A geometrical model is used to spatially align the two sensor outputs,
followed by a Gaussian Process (GP) regression-based resolution matching
algorithm to interpolate the missing data with quantifiable uncertainty. The
results indicate that the proposed sensor data fusion framework significantly
aids the subsequent perception steps, as illustrated by the performance
improvement of an uncertainty-aware free space detection algorithm.
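To make the resolution-matching step concrete, here is a minimal sketch in which a Gaussian Process is fitted to sparse depth samples (assumed already projected into image coordinates by the geometric model) and then queried on a dense pixel grid, yielding both a mean depth and a per-pixel standard deviation; the RBF-plus-noise kernel, the scikit-learn implementation, and all numbers are illustrative assumptions rather than the configuration used in the paper.

```python
# Minimal sketch of GP-based resolution matching: sparse LiDAR depths,
# already projected into image coordinates, are interpolated to a dense
# pixel grid together with a per-pixel uncertainty estimate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Hypothetical sparse measurements: (u, v) pixel locations with depth in metres.
uv_sparse = rng.uniform(0.0, 64.0, size=(200, 2))
depth_sparse = 5.0 + 0.05 * uv_sparse[:, 0] + 0.1 * np.sin(uv_sparse[:, 1] / 8.0)
depth_sparse += rng.normal(0.0, 0.02, depth_sparse.shape)   # sensor noise

# Fit a GP with an RBF kernel plus a noise term (illustrative choice).
kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(uv_sparse, depth_sparse)

# Query a dense pixel grid; the GP returns a mean depth and a standard
# deviation that downstream perception can treat as uncertainty.
uu, vv = np.meshgrid(np.arange(0, 64, 4), np.arange(0, 64, 4))
uv_dense = np.column_stack([uu.ravel(), vv.ravel()]).astype(float)
depth_mean, depth_std = gp.predict(uv_dense, return_std=True)
print("predicted depth range (m):", depth_mean.min().round(2), depth_mean.max().round(2))
print("mean predictive std (m):  ", depth_std.mean().round(3))
```

The predictive standard deviation is what allows a downstream free space detector to be uncertainty aware, for instance by discounting regions where the interpolated depth is poorly constrained.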
State of the art of audio- and video-based solutions for AAL
Working Group 3. Audio- and Video-based AAL Applications.
It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety.
The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives.
In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals (e.g., speech recordings).
Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL solutions that ensure ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.
This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users.
The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted.
The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential offered by the silver economy is overviewed.
Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications
With the rapid development of science and technology, health and environmental problems have become among the most significant challenges facing humanity. Frontier research at the intersection of information science, computer technology, electronic engineering and biomedical engineering applies modern engineering methods to explore means of early diagnosis, treatment and rehabilitation for diseases such as cancer. This paper reviews computer-assisted minimally invasive surgical navigation, multimodal medical big data, methodology and clinical applications: starting from the concept of minimally invasive surgical navigation, it introduces preoperative and intraoperative multimodal medical imaging methods for medical big data; describes the core workflow of advanced minimally invasive surgical navigation, including computational anatomical models, intraoperative real-time navigation schemes, three-dimensional visualization methods and interactive software techniques; and summarizes the clinical applications of the various minimally invasive surgical approaches. The advantages and disadvantages of surgical navigation technologies in clinical use worldwide are discussed, and the latest technical methods in the field are analyzed. On this basis, the paper identifies the development trends of minimally invasive surgery toward digitalization, personalization, precision, integrated diagnosis and therapy, robotization and high levels of intelligence.
[Abstract] Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation.
X.L. acknowledges funding from the Fundamental Research Funds for the Central Universities. T.M.P. acknowledges funding from the Canadian Foundation for Innovation, the Canadian Institutes for Health Research, the Natural Sciences and Engineering Research Council of Canada, and a grant from Intuitive Surgical Inc.
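As a generic illustration of the coordinate chain underlying such navigation systems (a sketch only, not any specific system covered in this review), the snippet below maps an endoscope tip measured in the tracker frame into preoperative CT coordinates by composing a patient registration transform with the tracked sensor pose; all matrices and offsets are hypothetical.

```python
# Illustrative sketch of the coordinate chain typical of navigated
# endoscopy: a tool tip measured in the tracker frame is mapped into
# preoperative CT coordinates through a patient registration transform.
# All numbers below are hypothetical, not taken from any cited system.
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Tracker-to-CT registration (e.g., obtained from fiducial-based registration).
theta = np.deg2rad(15.0)
R_reg = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])
T_ct_tracker = homogeneous(R_reg, np.array([120.0, -40.0, 15.0]))   # mm

# Pose of the tracked sensor on the endoscope, as reported by the tracker,
# and the calibrated offset from the sensor to the endoscope tip.
T_tracker_sensor = homogeneous(np.eye(3), np.array([250.0, 10.0, 80.0]))
tip_in_sensor = np.array([0.0, 0.0, 35.0, 1.0])    # homogeneous point

# Chain the transforms: CT <- tracker <- sensor <- tip.
tip_in_ct = T_ct_tracker @ T_tracker_sensor @ tip_in_sensor
print("endoscope tip in CT coordinates (mm):", tip_in_ct[:3].round(1))
```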