1,838 research outputs found

    Histopathological image analysis : a review

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole-slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has become amenable to computerized image analysis and machine learning techniques. Analogous to the role of computer-aided diagnosis (CAD) algorithms in medical imaging, which complement the opinion of the radiologist, CAD algorithms are now being developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the state of the art in CAD technology for digitized histopathology. The paper also briefly describes the development and application of novel image analysis technology for several specific histopathology-related problems being pursued in the United States and Europe.

    Novel 3D Ultrasound Elastography Techniques for In Vivo Breast Tumor Imaging and Nonlinear Characterization

    Breast cancer accounts for about 29% of all cancers in women worldwide and for about 14% of all female cancer deaths. Tissue biopsy is routinely performed, although about 80% of biopsies yield a benign result; biopsy is the most costly part of breast cancer examination and is invasive in nature. Ultrasound elastography has been proposed to reduce unnecessary biopsy procedures and achieve early diagnosis. In this research, tissue displacement fields were estimated using ultrasound waves and used to infer the elastic properties of tissues. Ultrasound radiofrequency data acquired at consecutive increments of tissue compression were used to compute local tissue strains with a cross-correlation method. In vitro and in vivo experiments were conducted on different tissue types to demonstrate the ability to construct 2D and 3D elastograms that help distinguish stiff from soft tissue. Based on the constructed strain volumes, a novel nonlinear classification method for human breast tumors is introduced. Multi-compression elastography imaging is elucidated in this study to differentiate malignant from benign tumors based on their nonlinear mechanical behavior under compression. A pilot study on ten patients was performed in vivo, and classification results were compared with biopsy diagnosis, the gold standard. Various nonlinear parameters based on different models were evaluated and compared with two commonly used parameters: relative stiffness and relative tumor size. Moreover, different types of strain components were constructed in 3D for strain imaging, including normal axial, first principal, maximum shear, and von Mises strains. Interactive segmentation algorithms were also evaluated and applied to the constructed volumes to delineate the stiff tissue by showing its isolated 3D shape. The 3D elastography results were in good agreement with the biopsy outcomes, and the new classification method discriminated benign from malignant tumors better than the commonly used parameters. The nonlinear parameters were statistically significant (p < 0.05); one parameter, the power-law exponent, was highly significant (p < 0.001). Additionally, volumetric strain images reconstructed using the maximum shear strains enhanced the tumor's boundary against the surrounding soft tissue. This edge enhancement improved overall segmentation performance and diminished the boundary-leakage effect. 3D segmentation provided an additional reliable means of determining the tumor's size by estimating its volume. In summary, the proposed elastographic techniques can help predetermine the tumor's type, shape, and size, key features that help the physician decide the sort and extent of treatment. The methods can also be extended to diagnose other types of tumors, such as prostate and cervical tumors. This research is aimed toward the development of a novel 'virtual biopsy' method that may reduce the number of unnecessary painful biopsies and diminish the increasing risk of cancer.
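    The cross-correlation strain estimation described in this abstract can be illustrated with a minimal 1D sketch. This is not the authors' implementation: the window length, search range, and simulated stretched signal below are arbitrary choices, and real pipelines add sub-sample interpolation and 2D/3D tracking.

```python
import numpy as np

def axial_strain(rf_pre, rf_post, win=64, search=16, hop=32):
    """Estimate axial displacement and strain along one RF A-line by
    windowed normalized cross-correlation between pre- and
    post-compression radiofrequency signals."""
    centers, disp = [], []
    for start in range(search, len(rf_pre) - win - search, hop):
        ref = rf_pre[start:start + win]
        best_cc, best_lag = -np.inf, 0
        # exhaustive integer-lag search for the best-matching window
        for lag in range(-search, search + 1):
            seg = rf_post[start + lag:start + lag + win]
            cc = np.dot(ref, seg) / (np.linalg.norm(ref) * np.linalg.norm(seg) + 1e-12)
            if cc > best_cc:
                best_cc, best_lag = cc, lag
        centers.append(start + win // 2)
        disp.append(best_lag)
    centers = np.asarray(centers, float)
    disp = np.asarray(disp, float)
    strain = np.gradient(disp, centers)  # spatial derivative of displacement
    return centers, disp, strain
```

    Under a uniform applied strain, the recovered displacement grows roughly linearly with depth and the strain estimates cluster around the true value.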

    Dagstuhl Reports : Volume 1, Issue 2, February 2011

    Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
    Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro Pezzè, Martin C. Rinard, Westley Weimer and Andreas Zeller
    Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
    Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
    Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Youn

    Image Processing and Analysis for Preclinical and Clinical Applications

    Radiomics is one of the most successful branches of research in the field of image processing and analysis, as it provides valuable quantitative information for personalized medicine. It has the potential to discover features of disease that cannot be appreciated with the naked eye in both preclinical and clinical studies. In general, all quantitative approaches based on biomedical images, such as positron emission tomography (PET), computed tomography (CT), and magnetic resonance imaging (MRI), have a positive clinical impact on the detection of biological processes and diseases as well as on predicting response to treatment. This Special Issue, “Image Processing and Analysis for Preclinical and Clinical Applications”, addresses some gaps in this field to improve the quality of research in the clinical and preclinical environment. It consists of fourteen peer-reviewed papers covering a range of topics and applications related to biomedical image processing and analysis.

    Parameter estimation of neuron models using in-vitro and in-vivo electrophysiological data

    Spiking neuron models can accurately predict the response of neurons to somatically injected currents if the model parameters are carefully tuned. Predicting the response of in-vivo neurons to natural stimuli presents a far more challenging modeling problem. In this study, an algorithm is presented for parameter estimation of spiking neuron models. The algorithm is a hybrid evolutionary algorithm that uses a spike train metric as a fitness function. We apply this to parameter discovery in modeling two experimental data sets: in-vitro current-injection responses from a regular-spiking pyramidal neuron are modeled using spiking neurons, and in-vivo extracellular auditory data are modeled using a two-stage model consisting of a stimulus filter and a spiking neuron model.
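    The idea of evolutionary parameter search driven by a spike-train-metric fitness can be sketched with a deliberately simplified stand-in: a leaky integrate-and-fire model, a van Rossum-style distance, and a basic elitist evolution strategy. The paper's hybrid algorithm, model class, and metric are richer; all model forms and parameter ranges below are invented for illustration.

```python
import numpy as np

def lif_spike_times(i_inj, tau, v_th, dt=1.0):
    """Leaky integrate-and-fire simulation; returns spike sample indices."""
    v, spikes = 0.0, []
    for t, i in enumerate(i_inj):
        v += dt * (-v / tau + i)
        if v >= v_th:
            spikes.append(t)
            v = 0.0  # reset after a spike
    return np.asarray(spikes, dtype=int)

def van_rossum(a, b, n, tau_d=20.0):
    """van Rossum-style distance: filter both spike trains with an
    exponential kernel and take the Euclidean norm of the difference."""
    t = np.arange(n, dtype=float)
    fa, fb = np.zeros(n), np.zeros(n)
    for s in a:
        fa[s:] += np.exp(-(t[s:] - s) / tau_d)
    for s in b:
        fb[s:] += np.exp(-(t[s:] - s) / tau_d)
    return float(np.sqrt(np.sum((fa - fb) ** 2)))

def fit_lif(i_inj, target, n_gen=30, pop=20, seed=0):
    """Elitist evolutionary search over (tau, v_th) with the
    spike-train distance to the target train as the fitness."""
    rng = np.random.default_rng(seed)
    n = len(i_inj)
    params = rng.uniform([5.0, 0.5], [50.0, 5.0], size=(pop, 2))
    def fitness(p):
        return van_rossum(lif_spike_times(i_inj, p[0], p[1]), target, n)
    for _ in range(n_gen):
        order = np.argsort([fitness(p) for p in params])
        elite = params[order[:pop // 4]]          # keep the best quarter
        kids = elite[rng.integers(0, len(elite), pop - len(elite))]
        kids = kids * rng.normal(1.0, 0.1, kids.shape)  # multiplicative mutation
        params = np.vstack([elite, kids])
    return min(params, key=fitness)
```

    Because the fitness is a distance between spike trains rather than between membrane traces, the search rewards models that reproduce spike timing, which is the property the study cares about.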

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, modelling and simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to allow a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High-performance computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of high-performance computing with modelling and simulation is therefore arguably required to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest to these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Grid Analysis of Radiological Data

    IGI-Global Medical Information Science Discoveries Research Award 2009. Grid technologies and infrastructures can contribute to harnessing the full power of computer-aided image analysis into clinical research and practice. Given the volume of data, the sensitivity of medical information, and the joint complexity of medical datasets and computations expected in clinical practice, the challenge is to fill the gap between the grid middleware and the requirements of clinical applications. This chapter reports on the goals, achievements, and lessons learned from the AGIR (Grid Analysis of Radiological Data) project. AGIR addresses this challenge through a combined approach. On one hand, leveraging the grid middleware through core grid medical services (data management, responsiveness, compression, and workflows) targets the requirements of medical data processing applications. On the other hand, grid-enabling a panel of applications ranging from algorithmic research to clinical use cases both exploits and drives the development of the services.

    White matter hyperintensities classified according to intensity and spatial location reveal specific associations with cognitive performance.

    White matter hyperintensities (WMHs) on T2-weighted images are radiological signs of cerebral small vessel disease. As their total volume is only variably associated with cognition, a new approach that integrates multiple radiological criteria is warranted. Location may matter, as periventricular WMHs have been shown to be associated with cognitive impairment. WMHs that appear hypointense on T1-weighted images (T1w) may also indicate the most severe component of WMHs. We developed an automatic method that sub-classifies WMHs into four categories (periventricular/deep and T1w-hypointense/non-T1w-hypointense) using MRI data from 684 community-dwelling older adults from the Whitehall II study. To test whether location and intensity information affect associations with cognition, we derived two general linear models using either overall or subdivided volumes. Results showed that periventricular T1w-hypointense WMHs were significantly associated with poorer performance in the trail making A (p = 0.011), digit symbol (p = 0.028) and digit coding (p = 0.009) tests. We found no association between total WMH volume and cognition. These findings suggest that sub-classifying WMHs according to both location and intensity in T1w reveals specific associations with cognitive performance.
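    The four-way sub-classification (periventricular vs deep, T1w-hypointense vs not) can be sketched as a pair of voxelwise rules applied inside the WMH mask. This is an illustrative stand-in, not the study's method: the 10 mm periventricular distance and the median-based hypointensity threshold are invented parameters.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def classify_wmh(wmh_mask, ventricle_mask, t1w, voxel_mm=1.0,
                 peri_dist_mm=10.0, hypo_frac=0.8):
    """Sub-classify WMH voxels into four categories by location
    (periventricular vs deep) and T1w intensity (hypointense or not)."""
    # distance (mm) from every voxel to the nearest ventricle voxel
    dist_mm = distance_transform_edt(~ventricle_mask) * voxel_mm
    peri = dist_mm <= peri_dist_mm
    # illustrative hypointensity rule: below a fraction of the median
    # T1w intensity over the WMH voxels themselves
    hypo = t1w < hypo_frac * np.median(t1w[wmh_mask])
    labels = np.zeros(wmh_mask.shape, dtype=np.int8)
    labels[wmh_mask & peri & hypo] = 1    # periventricular, T1w-hypointense
    labels[wmh_mask & peri & ~hypo] = 2   # periventricular, non-T1w-hypointense
    labels[wmh_mask & ~peri & hypo] = 3   # deep, T1w-hypointense
    labels[wmh_mask & ~peri & ~hypo] = 4  # deep, non-T1w-hypointense
    return labels
```

    Summing the volume of each label then yields the four subdivided volumes that enter the general linear models in place of the single total WMH volume.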

    Bayesian Reconstruction of Magnetic Resonance Images using Gaussian Processes

    A central goal of modern magnetic resonance imaging (MRI) is to reduce the time required to produce high-quality images. Efforts have included hardware and software innovations such as parallel imaging, compressed sensing, and deep learning-based reconstruction. Here, we propose and demonstrate a Bayesian method to build statistical libraries of magnetic resonance (MR) images in k-space and use these libraries to identify optimal subsampling paths and reconstruction processes. Specifically, we compute a multivariate normal distribution based upon Gaussian processes using a publicly available library of T1-weighted images of healthy brains. We combine this library with physics-informed envelope functions to retain only meaningful correlations in k-space. This covariance function is then used to select a series of ring-shaped subsampling paths using Bayesian optimization, such that they optimally explore space while remaining practically realizable in commercial MRI systems. Combining optimized subsampling paths found for a range of images, we compute a generalized sampling path that, when used for novel images, produces higher structural similarity and lower error than previously reported reconstruction processes (i.e., 96.3% structural similarity and <0.003 normalized mean squared error from sampling only 12.5% of the k-space data). Finally, we use this reconstruction process on pathological data without retraining to show that reconstructed images are clinically useful for stroke identification.
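    At the heart of this approach is standard conditioning of a multivariate normal: unobserved k-space values are filled in with their conditional mean given the sampled ones. The following is a generic, real-valued sketch of that single step; the paper operates on complex k-space data with physics-informed covariances and ring-shaped sampling paths, none of which is reproduced here.

```python
import numpy as np

def gp_reconstruct(mu, cov, obs_idx, obs_vals, jitter=1e-6):
    """Fill in the missing entries of a multivariate-normal vector with
    their conditional mean given the observed entries:
        x_m | x_o ~ N(mu_m + S_mo S_oo^{-1} (x_o - mu_o), ...)."""
    n = len(mu)
    obs_idx = np.asarray(obs_idx)
    miss_idx = np.setdiff1d(np.arange(n), obs_idx)
    # observed-observed block (with jitter for numerical stability)
    # and missing-observed cross-covariance block
    S_oo = cov[np.ix_(obs_idx, obs_idx)] + jitter * np.eye(len(obs_idx))
    S_mo = cov[np.ix_(miss_idx, obs_idx)]
    w = np.linalg.solve(S_oo, obs_vals - mu[obs_idx])
    x = mu.astype(float).copy()
    x[obs_idx] = obs_vals                  # keep sampled values exactly
    x[miss_idx] = mu[miss_idx] + S_mo @ w  # conditional mean elsewhere
    return x
```

    The quality of the reconstruction is governed entirely by how well the library-derived covariance captures correlations between sampled and unsampled locations, which is why the subsampling path itself can be optimized against the covariance.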