363 research outputs found

    Imparting 3D representations to artificial intelligence for a full assessment of pressure injuries.

    During recent decades, researchers have shown great interest in machine learning techniques as a means of extracting meaningful information from the large amounts of data collected each day. In the medical field especially, images play a significant role in the detection of several health issues. Medical image analysis therefore contributes substantially to the diagnostic process and is a natural setting for intelligent systems. Deep Learning (DL) has recently captured the interest of researchers, as it has proven effective at detecting underlying features in data and has outperformed classical machine learning methods. The main objective of this dissertation is to demonstrate the efficiency of Deep Learning techniques, applied through medical imaging, in tackling one of the important health issues facing our society. Pressure injuries are a dermatology-related health issue associated with increased morbidity and health care costs. Managing pressure injuries appropriately is increasingly important for all professionals in wound care. Using 2D photographs and 3D meshes of these wounds, collected from collaborating hospitals, our mission is to create intelligent systems for a full non-intrusive assessment of these wounds. Five main tasks have been achieved in this study: a literature review of wound imaging methods using machine learning techniques; the classification and segmentation of the tissue types inside the pressure injury; the segmentation of these wounds; the design of an end-to-end system which measures, from 3D meshes, all the quantitative information needed for an efficient assessment of PIs; and the integration of the assessment imaging techniques in a web-based application.
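
    As an illustration of the kind of quantitative measures such an end-to-end 3D assessment system could report, the following minimal Python sketch (not the dissertation's implementation) estimates a wound's surface area and maximum depth from a triangulated mesh, assuming the annotated wound region is given as numpy arrays of vertices and triangle indices and that a wound-opening plane has been fitted separately:

        import numpy as np

        def wound_surface_area(vertices, faces):
            # Sum of triangle areas over the annotated wound region.
            v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
            cross = np.cross(v1 - v0, v2 - v0)     # per-triangle edge cross products
            return float(0.5 * np.linalg.norm(cross, axis=1).sum())

        def wound_max_depth(vertices, plane_point, plane_normal):
            # Largest distance of a wound vertex below a plane fitted to the wound
            # opening; assumes the normal points away from the body surface.
            n = plane_normal / np.linalg.norm(plane_normal)
            depths = (vertices - plane_point) @ n
            return float(abs(depths.min()))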

    A Framework for the Semantics-aware Modelling of Objects

    The evolution of 3D visual content calls for innovative methods for modelling shapes based on their intended usage, function and role in a complex scenario. Although various attempts have been made in this direction, shape modelling still focuses mainly on geometry. However, 3D models have a structure, given by the arrangement of salient parts, and shape and structure are deeply related to semantics and functionality. Changing geometry without semantic clues may invalidate such functionalities or the meaning of objects or their parts. We approach the problem by considering semantics as the formalised knowledge related to a category of objects; the geometry can vary provided that the semantics is preserved. We represent the semantics and the variable geometry of a class of shapes through the parametric template: an annotated 3D model whose geometry can be deformed provided that some semantic constraints remain satisfied. In this work, we design and develop a framework for the semantics-aware modelling of shapes, offering the user a single application environment where the whole workflow of defining the parametric template and applying semantics-aware deformations can take place. In particular, the system provides tools for the selection and annotation of geometry based on formalised contextual knowledge; shape analysis methods to derive new knowledge implicitly encoded in the geometry and possibly enrich the given semantics; a set of constraints that the user can apply to salient parts; and a deformation operation that takes the semantic constraints into account and provides an optimal solution. The framework is modular, so that new tools can be added continuously. While producing some innovative results in specific areas, the goal of this work is the development of a comprehensive framework combining state-of-the-art techniques and new algorithms, enabling the user to conceptualise her/his knowledge and model geometric shapes. The original contributions concern the formalisation of the concept of annotation, with attached properties, and of the relations between significant parts of objects; a new technique for guaranteeing the persistence of annotations after significant changes in a shape's resolution; the exploitation of shape descriptors for the extraction of quantitative information and the assessment of shape variability within a class; and the extension of popular cage-based deformation techniques to include constraints on the allowed displacement of vertices. In this thesis, we report the design and development of the framework as well as results in two application scenarios, namely product design and archaeological reconstruction.
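
    As a minimal sketch of the deformation step described above (under simplifying assumptions, not the framework's actual implementation), the following Python snippet applies a cage-based deformation, given precomputed generalized barycentric weights such as mean value coordinates, and checks a toy semantic constraint on an annotated part; the over-stretch rule and its threshold are hypothetical:

        import numpy as np

        def deform(weights, cage_vertices):
            # weights: (n_model_vertices, n_cage_vertices) generalized barycentric
            # coordinates; cage_vertices: (n_cage_vertices, 3) displaced cage.
            return weights @ cage_vertices

        def part_not_overstretched(orig_vertices, new_vertices, part_idx, max_ratio=1.5):
            # Hypothetical semantic constraint: the bounding-box diagonal of an
            # annotated part may not grow beyond max_ratio times its original length.
            def diag(v):
                return np.linalg.norm(v.max(axis=0) - v.min(axis=0))
            return diag(new_vertices[part_idx]) <= max_ratio * diag(orig_vertices[part_idx])

    In a framework such as the one described above, a constrained optimization would search for cage displacements that keep all such constraints satisfied, rather than merely rejecting a deformation after the fact.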

    Mobile Wound Assessment and 3D Modeling from a Single Image

    The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously difficult-to-reach patients. We have designed a complete mobile wound assessment platform to ameliorate the many challenges related to chronic wound care. Chronic wounds and infections are the most severe, costly and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of the wound on the body, for example when the image is taken at close range. In our solution, end-users capture an image of the wound with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks, and is stored securely in the cloud for remote tracking. We use an interactive, semi-automated approach to allow users to specify the location of the wound on the body. To accomplish this we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To view wounds interactively in 3D, we present an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In doing so, we demonstrate an approach to 3D wound reconstruction that works even for a single wound image.
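
    As a sketch of the general idea behind projective texture mapping (assuming a simple pinhole camera model rather than the platform's exact algorithm), the following Python snippet projects anatomy-model vertices into the wound photograph to obtain per-vertex texture coordinates:

        import numpy as np

        def projective_uvs(vertices, K, R, t, image_width, image_height):
            # vertices: (N, 3) world coordinates of the anatomy model.
            # K: 3x3 camera intrinsics; R, t: world-to-camera rotation and translation.
            cam = vertices @ R.T + t              # transform vertices into the camera frame
            pix = cam @ K.T                       # apply intrinsics
            pix = pix[:, :2] / pix[:, 2:3]        # perspective divide -> pixel coordinates
            u = pix[:, 0] / image_width           # normalise to [0, 1] texture space
            v = 1.0 - pix[:, 1] / image_height    # flip v to match the image origin convention
            visible = cam[:, 2] > 0               # keep only vertices in front of the camera
            return np.stack([u, v], axis=1), visible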

    Medical image synthesis using generative adversarial networks: towards photo-realistic image synthesis

    This work addresses photo-realism in synthetic images. We introduce a modified generative adversarial network, StencilGAN: a perceptually-aware generative adversarial network that synthesizes images based on overlaid labelled masks. This technique can be a prominent solution to the scarcity of resources in the healthcare sector.
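
    As a minimal sketch of the general idea of a mask-conditioned generator (illustrative only; this is not the StencilGAN architecture), the following PyTorch snippet maps a one-hot overlaid label mask to a synthetic image:

        import torch
        import torch.nn as nn

        class MaskConditionedGenerator(nn.Module):
            def __init__(self, n_labels, out_channels=1):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(n_labels, 64, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, out_channels, 1), nn.Tanh(),
                )

            def forward(self, label_mask):
                # label_mask: (batch, n_labels, H, W) one-hot semantic mask
                return self.net(label_mask)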

    Application of mixed and virtual reality in geoscience and engineering geology

    Visual learning and efficient communication in mining and geotechnical practice are crucial, yet often challenging. With the advancement of Virtual Reality (VR) and Mixed Reality (MR), a new era of geovisualization has emerged. This thesis demonstrates the capabilities of a virtual continuum approach across varying scales of geoscience applications. An application that aids analysis in small-scale geological investigations was constructed using a 3D holographic drill core model. A virtual core logger was also developed to assist logging in the field and subsequent communication by visualizing the core in a complementary holographic environment. Enriched logging practices enhance interpretation, with potential economic and safety benefits for mining and geotechnical infrastructure projects. A mine-scale model of the LKAB mine in Sweden was developed to improve communication about mining-induced subsidence between geologists, engineers and the public. GPS, InSAR and micro-seismicity data were hosted in a single database, which was geovisualized through Virtual and Mixed Reality. The wide array of applications presented in this thesis illustrates the potential of Mixed and Virtual Reality and the improvements gained over current conventional geological and geotechnical data collection, interpretation and communication at all scales, from the micro-scale (e.g. thin section) to the macro-scale (e.g. mine).

    Development, Implementation and Pre-clinical Evaluation of Medical Image Computing Tools in Support of Computer-aided Diagnosis: Respiratory, Orthopedic and Cardiac Applications

    Over the last decade, image processing tools have become crucial components of all clinical and research efforts involving medical imaging and associated applications. The imaging data available to radiologists continue to increase their workload, raising the need for efficient identification and visualization of the image data required for clinical assessment. Computer-aided diagnosis (CAD) in medical imaging has evolved in response to the need for techniques that can assist radiologists to increase throughput while reducing human error and bias, without compromising the outcome of screening, diagnosis or disease assessment. More intelligent, yet simple, consistent and less time-consuming methods will become more widespread, reducing user variability while also revealing information in a clearer, more visual way. Several routine image processing approaches, including localization, segmentation, registration, and fusion, are critical for enhancing and enabling the development of CAD techniques. However, changes in clinical workflow require significant adjustments and re-training and, despite the efforts of the academic research community to develop state-of-the-art algorithms and high-performance techniques, their footprint often hampers their clinical use. Currently, the main challenge seems to be not the lack of tools and techniques for medical image processing, analysis, and computing, but rather the lack of clinically feasible solutions that leverage the already developed and existing tools and techniques, as well as a demonstration of the potential clinical impact of such tools. Recently, more and more effort has been dedicated to devising new algorithms for localization, segmentation or registration, while their intended clinical use and actual utility are dwarfed by scientific, algorithmic and developmental novelty that results only in incremental improvements over existing algorithms. In this thesis, we propose and demonstrate the implementation and evaluation of several methodological guidelines that ensure the development of image processing tools (localization, segmentation and registration) and illustrate their use across several medical imaging modalities (X-ray, computed tomography, ultrasound and magnetic resonance imaging) and several clinical applications: lung CT image registration in support of the assessment of pulmonary nodule growth rate and disease progression from thoracic CT images; automated reconstruction of standing X-ray panoramas from multi-sector X-ray images for the assessment of long-limb mechanical axis and knee misalignment; and left and right ventricle localization, segmentation, reconstruction and ejection fraction measurement from cine cardiac MRI or multi-plane trans-esophageal ultrasound images for cardiac function assessment. When devising and evaluating our tools, we use clinical patient data to illustrate the inherent clinical challenges associated with highly variable imaging data that need to be addressed before potential pre-clinical validation and implementation. In an effort to provide plausible solutions to the selected applications, the proposed methodological guidelines ensure the development of image processing tools that achieve sufficiently reliable solutions which not only have the potential to address the clinical needs, but are also sufficiently streamlined to be translated into eventual clinical tools, provided proper implementation.
    G1: Reduce the number of degrees of freedom (DOF) of the designed tool; a plausible example is avoiding the use of inefficient non-rigid image registration methods. This guideline addresses the risk of artificial deformation during registration and clearly aims at reducing complexity and the number of degrees of freedom.
    G2: Use shape-based features to represent the image content most efficiently, for instance edges instead of, or in addition to, intensities and motion, where useful. Edges capture the most useful information in the image and can be used to identify the most important image features. As a result, this guideline ensures more robust performance when key image information is missing.
    G3: Implement the method efficiently. This guideline focuses on efficiency in terms of the minimum number of steps required and on avoiding the recalculation of terms that only need to be calculated once in an iterative process. An efficient implementation leads to reduced computational effort and improved performance.
    G4: Commence the workflow by establishing an optimized initialization and gradually converge toward the final acceptable result. This guideline aims to ensure reasonable outcomes in consistent ways, avoiding convergence to local minima while gradually ensuring convergence to the global minimum solution.
    These guidelines lead to the development of interactive, semi-automated or fully-automated approaches that still enable clinicians to perform final refinements, while reducing the overall inter- and intra-observer variability and ambiguity, increasing accuracy and precision, and yielding mechanisms that can aid in providing an overall more consistent diagnosis in a timely fashion.
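
    As a toy illustration of guidelines G3 and G4 (not taken from the thesis), the following Python sketch evaluates a registration similarity metric, normalized cross-correlation, over a caller-supplied set of candidate transform parameters; the terms that depend only on the fixed image are computed once outside the loop (G3), and the candidate list can be ordered coarse-to-fine starting from a good initialization (G4). The names warp and candidate_params are hypothetical placeholders for the caller's resampler and search strategy.

        import numpy as np

        def best_ncc_params(fixed, moving, warp, candidate_params):
            f = fixed - fixed.mean()                       # fixed-image terms computed once (G3)
            f_norm = np.linalg.norm(f)
            best_p, best_ncc = None, -np.inf
            for p in candidate_params:                     # e.g. coarse-to-fine candidates (G4)
                w = warp(moving, p)
                wc = w - w.mean()
                ncc = float((f * wc).sum() / (f_norm * np.linalg.norm(wc) + 1e-12))
                if ncc > best_ncc:
                    best_p, best_ncc = p, ncc
            return best_p, best_ncc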

    Computer-aided detection and diagnosis of breast cancer in 2D and 3D medical imaging through multifractal analysis

    This Thesis describes the research work performed in the scope of a doctoral research program and presents its conclusions and contributions. The research activities were carried out in industry with Siemens S.A. Healthcare Sector, in integration with a research team. Siemens S.A. Healthcare Sector is one of the world's biggest suppliers of products, services and complete solutions in the medical sector. The company offers a wide selection of diagnostic and therapeutic equipment and information systems. Siemens products for medical imaging and in vivo diagnostics include: ultrasound, computed tomography, mammography, digital breast tomosynthesis, magnetic resonance, equipment for angiography and coronary angiography, nuclear imaging, and many others. Siemens has vast experience in healthcare and, at the beginning of this project, was strategically interested in solutions to improve the detection of breast cancer in order to increase its competitiveness in the sector. The company owns several patents related to self-similarity analysis, which formed the background of this Thesis. Furthermore, Siemens intended to explore commercially the computer-aided automatic detection and diagnosis field for portfolio integration. The in-depth knowledge acquired by the University of Beira Interior in this area, together with this Thesis, will therefore allow Siemens to apply the most recent scientific progress to the detection of breast cancer, and it is foreseeable that together a new technology with high potential can be developed. The project resulted in the submission of two invention disclosures for evaluation at Siemens A.G., two articles published in peer-reviewed journals indexed in the ISI Science Citation Index, two further articles submitted to peer-reviewed journals, and several international conference papers. This work on computer-aided diagnosis in breast imaging led to innovative software and novel research and development processes, for which the project received the Siemens Innovation Award in 2012. It was very rewarding to carry out such a technological and innovative project in a socially sensitive area such as breast cancer. In breast cancer, early detection and correct diagnosis are of the utmost importance for prescribing effective and efficient therapy that can increase the survival rate of the disease. Multifractal theory was initially introduced in the context of signal analysis, and its usefulness has been demonstrated in describing the physiological behaviour of bio-signals and even in detecting and predicting pathologies. In this Thesis, three multifractal methods were extended to two-dimensional (2D) images and compared for the detection of microcalcifications in mammograms. One of these methods was also adapted for the classification of breast masses in 2D cross-sections obtained by breast magnetic resonance imaging (MRI) into groups of probably benign masses and masses suspicious of malignancy. A new multifractal analysis method using three-dimensional (3D) lacunarity was proposed for the classification of breast masses in volumetric 3D breast MRI images. Multifractal analysis revealed differences in the underlying complexity of microcalcification locations relative to normal tissues, enabling good detection accuracy in mammograms. Additionally, tissue features extracted by multifractal analysis made it possible to identify the cases typically recommended for biopsy in 2D breast MRI images. 3D multifractal analysis was effective in classifying benign and malignant breast lesions in 3D breast MRI images, and was more accurate for this classification than the 2D method or the standard analysis of tumour kinetic contrast enhancement. In conclusion, multifractal analysis provides useful information for computer-aided detection in mammography and computer-aided diagnosis in 2D and 3D breast MRI, with the potential to complement the radiologists' interpretation.
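
    As a rough illustration of the kind of 3D lacunarity measure described above (a generic gliding-box estimator, not the Thesis' specific method), the following Python sketch computes the lacunarity of a binary 3D volume at a single box size r as the ratio of the second moment to the squared first moment of the box-mass distribution:

        import numpy as np

        def lacunarity_3d(volume, r):
            # volume: binary 3D numpy array; r: edge length of the gliding box.
            masses = []
            nx, ny, nz = volume.shape
            for x in range(nx - r + 1):
                for y in range(ny - r + 1):
                    for z in range(nz - r + 1):
                        masses.append(volume[x:x + r, y:y + r, z:z + r].sum())
            masses = np.asarray(masses, dtype=float)
            mean = masses.mean()
            # Lambda(r) = <M^2> / <M>^2 over all gliding-box positions.
            return float((masses ** 2).mean() / (mean ** 2)) if mean > 0 else np.nan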

    Computer aided diagnosis system using dermatoscopical image

    Computer Aided Diagnosis (CAD) systems for melanoma detection aim to mirror the expert dermatologist's decision when examining a dermoscopic or clinical image. Computer Vision techniques, which may or may not be based on expert knowledge, are used to characterize the lesion image. This information is fed to a machine learning algorithm, which outputs a diagnosis suggestion. This research falls within this field and addresses the objective of implementing a complete CAD system using state-of-the-art descriptors and dermoscopy images as input. Some of the descriptors are based on expert knowledge and others are typical of a wide variety of problems. Images are initially transformed into oRGB, a perceptual color space, both to enhance the information that the images provide and to bring human perception into the machine algorithms. Feature selection is also performed to find the features that really contribute to discriminating between benign and malignant pigmented skin lesions (PSL). The trade-off between robust model fitting and statistically significant system evaluation is critical when working with small datasets, which is indeed the case here; this topic is not generally considered in works related to PSLs. Consequently, a method that optimizes the compromise between these two goals is proposed, yielding non-overfitted models and statistically significant measures of performance. In this manner, different systems can be compared in a fairer way. A database which enjoys wide international acceptance among dermatologists is used for the experiments.
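
    A common way to realize the kind of compromise described above on a small dataset is nested cross-validation, in which hyper-parameters are tuned in an inner loop and performance is estimated only from the outer loop. The following scikit-learn sketch shows the general pattern; the SVM classifier, the hyper-parameter grid and ROC AUC scoring are illustrative assumptions, not the system described in the abstract:

        import numpy as np
        from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
        from sklearn.svm import SVC

        def nested_cv_auc(features, labels):
            inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
            outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
            # Inner loop: hyper-parameter tuning only.
            model = GridSearchCV(SVC(probability=True),
                                 {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
                                 cv=inner, scoring="roc_auc")
            # Outer loop: performance estimate reported to the user.
            scores = cross_val_score(model, features, labels, cv=outer, scoring="roc_auc")
            return scores.mean(), scores.std()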

    Toward Assessment of Lung Water Content Using Wireless Cardio-Pulmonary Stethoscope Measurements

    M.S.