Scale Stain: Multi-Resolution Feature Enhancement in Pathology Visualization
Digital whole-slide images of pathological tissue samples have recently
become feasible for use within routine diagnostic practice. These gigapixel
sized images enable pathologists to perform reviews using computer workstations
instead of microscopes. Existing workstations visualize scanned images by
providing a zoomable image space that reproduces the capabilities of the
microscope. This paper presents a novel visualization approach that enables
filtering of the scale-space according to color preference. The visualization
method reveals diagnostically important patterns that are otherwise not
visible. The paper demonstrates how this approach has been implemented into a
fully functional prototype that lets the user navigate the visualization
parameter space in real time. The prototype was evaluated for two common
clinical tasks with eight pathologists in a within-subjects study. The data
reveal that task efficiency increased by 15% using the prototype, with
maintained accuracy. By analyzing behavioral strategies, it was possible to
conclude that efficiency gain was caused by a reduction of the panning needed
to perform systematic search of the images. The prototype system was well
received by the pathologists, who did not detect any risks that would hinder use
in clinical routine.
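The idea of filtering the scale-space by color preference can be sketched roughly as follows. This is a minimal illustration only: the pyramid construction, the color-distance metric, and the blending weights are assumptions for the sketch, not the paper's actual algorithm.

```python
import numpy as np

def gaussian_pyramid(img, levels):
    """Build a crude multi-resolution pyramid by repeated 2x2 mean downsampling."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        prev = pyr[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        down = prev[:h, :w].reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))
        pyr.append(down)
    return pyr

def color_weight(level, target_rgb, sigma=60.0):
    """Per-pixel weight: how close each pixel is to the preferred stain color."""
    dist = np.linalg.norm(level - np.asarray(target_rgb, float), axis=-1)
    return np.exp(-(dist ** 2) / (2 * sigma ** 2))

def scale_filtered_view(img, target_rgb, levels=3):
    """Blend pyramid levels so regions matching the preferred color are
    emphasized from coarser scales (nearest-neighbor upsampling).
    Assumes an RGB image whose sides are powers of two, for simplicity."""
    pyr = gaussian_pyramid(img, levels)
    out = pyr[0].copy()
    for lvl in pyr[1:]:
        up = np.repeat(np.repeat(lvl, img.shape[0] // lvl.shape[0], axis=0),
                       img.shape[1] // lvl.shape[1], axis=1)
        up = up[:img.shape[0], :img.shape[1]]
        w = color_weight(up, target_rgb)[..., None]
        out = (1 - w) * out + w * up
    return out.astype(np.uint8)
```

In an interactive tool, `target_rgb` and `sigma` would be the real-time visualization parameters the user navigates.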
Whole slide image registration for the study of tumor heterogeneity
Consecutive thin sections of tissue samples make it possible to study local
variation in e.g. protein expression and tumor heterogeneity by staining for a
new protein in each section. In order to compare and correlate patterns of
different proteins, the images have to be registered with high accuracy. The
problem we want to solve is registration of gigapixel whole slide images (WSI).
This presents three challenges: (i) Images are very large; (ii) Thin sections
result in artifacts that make global affine registration prone to very large
local errors; (iii) Local affine registration is required to preserve correct
tissue morphology (local size, shape and texture). In our approach we compare
WSI registration based on automatic and manual feature selection on either the
full image or natural sub-regions (as opposed to square tiles). Working with
natural sub-regions in an interactive tool makes it possible to exclude
regions containing scientifically irrelevant information. We also present a new
way to visualize local registration quality by a Registration Confidence Map
(RCM). With this method, intra-tumor heterogeneity and characteristics of the
tumor microenvironment can be observed and quantified.
Comment: MICCAI 2018 - Computational Pathology and Ophthalmic Medical Image Analysis - COMPA
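The Registration Confidence Map idea, scoring how well each natural sub-region is registered, can be sketched as below. As a stand-in for the paper's local affine fit, this sketch fits only a translation per region and scores confidence by the mean landmark residual; the scoring function is an assumption for illustration.

```python
from statistics import mean

def fit_translation(src_pts, dst_pts):
    """Least-squares translation aligning source landmarks to destination
    landmarks (a simplified stand-in for a per-region local affine fit)."""
    tx = mean(d[0] - s[0] for s, d in zip(src_pts, dst_pts))
    ty = mean(d[1] - s[1] for s, d in zip(src_pts, dst_pts))
    return tx, ty

def region_confidence(src_pts, dst_pts):
    """Confidence decreases with the mean residual left after the fit."""
    tx, ty = fit_translation(src_pts, dst_pts)
    resid = mean(((s[0] + tx - d[0]) ** 2 + (s[1] + ty - d[1]) ** 2) ** 0.5
                 for s, d in zip(src_pts, dst_pts))
    return 1.0 / (1.0 + resid)

def registration_confidence_map(regions):
    """regions: {region_id: (src_landmarks, dst_landmarks)} -> {region_id: score}."""
    return {rid: region_confidence(src, dst)
            for rid, (src, dst) in regions.items()}
```

A region whose landmarks are related by a pure shift scores 1.0; regions with large local errors, e.g. from sectioning artifacts, score lower and can be flagged for review.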
Handheld image acquisition with real-time vision for human-computer interaction on mobile applications
Master's thesis (Mestrado Integrado), Biomedical Engineering and Biophysics (Clinical Engineering and Medical Instrumentation), Universidade de Lisboa, Faculdade de Ciências, 2019.
Many important diseases manifest themselves in the retina, both primary retinal conditions and systemic disorders. Diabetic retinopathy, glaucoma and age-related macular degeneration are some of the most frequent ocular disorders and the leading causes of blindness in developed countries. Since these disorders are becoming increasingly prevalent, there has been a need to encourage high-coverage screening among the most susceptible population.
As its function requires the retina to see the outside world, the optical components in front of it must be transparent for image formation. This makes the retinal tissue, and thereby brain tissue, accessible for imaging in a non-invasive manner. There are several approaches to visualizing the retina, including fluorescein angiography, optical coherence tomography and fundus photography. Fraunhofer's EyeFundusScope (EFS) prototype is a handheld smartphone-based fundus camera that does not require pupil dilation. It employs machine learning algorithms to process the image in search of lesions that are often associated with diabetic retinopathy, making it a pre-diagnostic tool. The robustness of this computer vision algorithm, as well as the diagnostic performance of ophthalmologists and neurologists, depends strongly on the quality of the acquired images. The consistency of handheld capture, in turn, depends on proper human interaction. In order to improve the user's contribution to the retinal acquisition procedure, a new graphical user interface was designed and implemented in the EFS Acquisition App. The intended approach is to make the EFS easier to use by non-ophthalmic trained personnel, in both clinical and non-clinical environments.
Comprised of several interaction elements created to suit the needs of the acquisition procedure, the graphical user interface should help the user position and align the EFS illumination beam with the patient's pupil, as well as keep track of the time between acquisitions on the same eye. Initially, several versions of rotational interaction elements were designed and later implemented in the EFS Acquisition App. These used data from the smartphone's inertial sensors to give real-time feedback to the user while moving the EFS. Besides the rotational interaction elements, a time-lapse indicator and an eye indicator were also designed and implemented in the EFS. Usability tests took place after three configurations had been implemented and corrected with the help of a model eye ophthalmoscope trainer. A protocol for the different use-case scenarios was also elaborated, and a tutorial was created. Results from the usability tests show that the new graphical user interface had a very positive outcome. The majority of users adapted very quickly to the new interface, and for many it contributed to a successful acquisition task. In the future, combining inertial sensor data with image recognition may prove to be the foundation of a more efficient interaction technique in clinical practice. Furthermore, the new graphical user interface could provide the EFS with an application for educational purposes.
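The kind of real-time feedback the rotational interaction elements derive from inertial sensors can be sketched as follows. This is an illustrative computation only, not the EFS app's actual code: the axis conventions and the alignment tolerance are assumptions, and the real app runs on Android sensor APIs rather than Python.

```python
import math

def tilt_feedback(ax, ay, az, tol_deg=3.0):
    """From a gravity (accelerometer) reading, compute device pitch and roll
    in degrees, and report whether the device is within an alignment
    tolerance that a rotational UI element could display to the user."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    aligned = abs(pitch) <= tol_deg and abs(roll) <= tol_deg
    return pitch, roll, aligned
```

A UI element would redraw continuously from such values, turning green (for instance) once `aligned` holds.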
Microscope 2.0: An Augmented Reality Microscope with Real-time Artificial Intelligence Integration
The brightfield microscope is instrumental in the visual examination of both
biological and physical samples at sub-millimeter scales. One key clinical
application has been in cancer histopathology, where the microscopic assessment
of the tissue samples is used for the diagnosis and staging of cancer and thus
guides clinical therapy. However, the interpretation of these samples is
inherently subjective, resulting in significant diagnostic variability.
Moreover, in many regions of the world, access to pathologists is severely
limited due to lack of trained personnel. In this regard, Artificial
Intelligence (AI) based tools promise to improve the access and quality of
healthcare. However, despite significant advances in AI research, integration
of these tools into real-world cancer diagnosis workflows remains challenging
because of the costs of image digitization and difficulties in deploying AI
solutions. Here we propose a cost-effective solution to the integration of AI:
the Augmented Reality Microscope (ARM). The ARM overlays AI-based information
onto the current view of the sample through the optical pathway in real-time,
enabling seamless integration of AI into the regular microscopy workflow. We
demonstrate the utility of ARM in the detection of lymph node metastases in
breast cancer and the identification of prostate cancer with a latency that
supports real-time workflows. We anticipate that ARM will remove barriers
towards the use of AI in microscopic analysis and thus improve the accuracy and
efficiency of cancer diagnosis. This approach is applicable to other microscopy
tasks and AI algorithms in the life sciences and beyond.
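The real-time constraint described above can be sketched as a frame loop with a latency budget. This is a hedged sketch of the general pattern, not the ARM's implementation: the capture, inference, and display callables and the 100 ms budget are placeholders.

```python
import time

def run_overlay_loop(capture_frame, predict_heatmap, display,
                     budget_ms=100, frames=10):
    """Minimal real-time loop: grab the current field of view, run the model,
    and push the result to the optical overlay. Frames that exceed the
    latency budget skip the overlay update so the eyepiece view never stalls."""
    shown = 0
    for _ in range(frames):
        t0 = time.perf_counter()
        frame = capture_frame()          # camera attached to the optical path
        heatmap = predict_heatmap(frame) # AI inference on the live view
        elapsed_ms = (time.perf_counter() - t0) * 1000
        if elapsed_ms <= budget_ms:
            display(heatmap)             # micro-display injected into eyepiece
            shown += 1
    return shown
```

The key design point is that overlay updates are best-effort: a slow inference degrades to the plain microscope view rather than blocking it.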
Halcyon -- A Pathology Imaging and Feature Analysis and Management System
Halcyon is a new pathology imaging analysis and feature management system
based on W3C linked-data open standards, designed to scale to the voluminous
feature output of deep-learning pipelines. Halcyon supports multiple users
through a web-based UX, with access to all user data over a standards-based web
API that allows integration with other processes and software systems. Identity
management and data security are also provided.
Comment: 15 pages, 11 figures. arXiv admin note: text overlap with arXiv:2005.0646
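What storing a deep-learning feature as W3C linked data can look like is sketched below as a JSON-LD record. The vocabulary terms and `example.org` IRIs are illustrative placeholders, not Halcyon's actual schema.

```python
import json

def feature_to_jsonld(feature_id, slide_uri, geometry_wkt, prob, label):
    """Encode one detected feature as a JSON-LD document so it can be stored
    and queried as linked data. All vocabulary IRIs here are hypothetical."""
    doc = {
        "@context": {
            "prob": "http://example.org/vocab#probability",
            "label": "http://example.org/vocab#classLabel",
            "onSlide": {"@id": "http://example.org/vocab#onSlide",
                        "@type": "@id"},
            "geometry": "http://example.org/vocab#geometryWKT",
        },
        "@id": feature_id,          # IRI identifying this feature
        "onSlide": slide_uri,       # IRI of the whole-slide image
        "geometry": geometry_wkt,   # region outline, e.g. as WKT
        "prob": prob,
        "label": label,
    }
    return json.dumps(doc)
```

Because every feature is a self-describing document with global identifiers, millions of such records can be merged, filtered, and served over a standards-based web API without a bespoke schema per pipeline.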
Intelligent computing applications to assist perceptual training in medical imaging
The research presented in this thesis represents a body of work which addresses issues in medical imaging, primarily as it applies to breast cancer screening and laparoscopic surgery. The concern here is how computer-based methods can aid medical practitioners in these tasks. Thus, research is presented which develops both new techniques for analysing radiologists' performance data and new approaches for examining surgeons' visual behaviour when they are undertaking laparoscopic training.
Initially, a new chest X-ray self-assessment application is described which has been developed to assess and improve radiologists' performance in detecting lung cancer. Then, in breast cancer screening, a method of identifying potential poor-performance outliers at an early stage in a national self-assessment scheme is demonstrated. Additionally, a method is presented to determine whether a radiologist, in using this scheme, has correctly localised and identified an abnormality or made an error.
One issue in appropriately measuring radiological performance in breast screening is that both the size of the clinical monitors used and the difficulty of linking the medical image to the observer's line of sight hinder suitable eye tracking. Consequently, a new method is presented which links these two items.
Laparoscopic surgeons have similar issues to radiologists in interpreting a medical display, but with the added complication of hand-eye co-ordination. Work is presented which examines whether visual-search feedback on surgeons' operations can be a useful training aid.