Ultrasound segmentation using U-Net: learning from simulated data and testing on real data
Segmentation of ultrasound images is an essential task in both diagnosis and
image-guided interventions given the ease of use and low cost of this imaging
modality. As manual segmentation is tedious and time consuming, a growing body
of research has focused on the development of automatic segmentation
algorithms. Deep learning algorithms have shown remarkable achievements in this
regard; however, they need large training datasets. Unfortunately, preparing
large labeled datasets in ultrasound images is prohibitively difficult.
Therefore, in this study, we propose the use of simulated ultrasound (US)
images for training the U-Net deep learning segmentation architecture and test
on tissue-mimicking phantom data collected by an ultrasound machine. We
demonstrate that the architecture trained on the simulated data transfers to
real data; simulated data can therefore serve as an alternative training
dataset when real datasets are not available. The second contribution of this
paper is that we train our U-Net on envelope and B-mode images of the
simulated dataset, and test the trained network on real envelope and B-mode
images of the phantom, respectively. We show that test results are superior
for the envelope data compared to the B-mode images.
Comment: Accepted in EMBC 201
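The envelope-versus-B-mode comparison above hinges on the log-compression step that turns envelope-detected ultrasound data into a B-mode image. A minimal sketch of that mapping, assuming a standard dynamic-range normalization (the 60 dB default and the normalization details are illustrative, not taken from the paper):

```python
import numpy as np

def envelope_to_bmode(envelope, dynamic_range_db=60.0):
    """Convert envelope-detected ultrasound data to a B-mode image via
    standard log compression; output is clipped to [0, 1]."""
    env = np.asarray(envelope, dtype=np.float64)
    env = env / env.max()                      # normalize to peak amplitude
    bmode_db = 20.0 * np.log10(env + 1e-12)    # amplitude -> decibels
    bmode = (bmode_db + dynamic_range_db) / dynamic_range_db
    return np.clip(bmode, 0.0, 1.0)

# toy envelope frame: the peak maps to 1.0, -60 dB maps to ~0.0
frame = np.array([[1.0, 0.1], [0.01, 0.001]])
img = envelope_to_bmode(frame)
```

Training on envelope data instead simply skips this compression, so the network sees the raw amplitude distribution rather than its log-compressed version.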
Computational Dynamic Features Extraction from Anonymized Medical Images
Images convey meaning more directly than written words, which is why they are used in a variety of human endeavors, including but not limited to medicine. Medical image datasets are used in clinical settings to diagnose and confirm medical disorders for which physical examination alone may not suffice. However, the medical profession's ethics of patient confidentiality creates a barrier to the availability of medical datasets for research; this work removes that barrier by anonymizing sensitive identity information. Furthermore, Content-Based Image Retrieval (CBIR) using texture as the content was developed to overcome the information overload associated with data retrieval systems.
Images acquired from various imaging modalities in Digital Imaging and Communications in Medicine (DICOM) format were obtained from several hospitals in Nigeria. A database of these images was created and then anonymized, producing a new anonymized database. Texture features were extracted from the anonymized images, and these features were used to implement the retrieval system.
The anonymized images were checked in a DICOM viewer to verify that all files were successfully anonymized; the result obtained was 100%. A texture retrieval test was performed, and ranking the returned search images with the Similarity Distance Measure formula significantly reduced image overload. This work thus addresses the non-availability of datasets for researchers in medical imaging by providing datasets that interested parties can use without violating patient confidentiality law. It also reduces the risk of hackers obtaining useful information from patients' datasets, and CBIR using texture as content mitigates the problem of information overload.
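The abstract does not specify the texture descriptor or the exact Similarity Distance Measure formula, but the general CBIR pattern it describes can be sketched with hypothetical first-order texture statistics and a Euclidean distance for ranking (all feature choices below are illustrative assumptions):

```python
import numpy as np

def texture_features(image):
    """Hypothetical texture descriptor: first-order statistics plus
    horizontal/vertical gradient energy (a stand-in for the paper's
    unspecified texture content)."""
    img = np.asarray(image, dtype=np.float64)
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return np.array([img.mean(), img.std(), (gx ** 2).mean(), (gy ** 2).mean()])

def similarity_distance(query, candidate):
    """Euclidean distance between feature vectors; smaller means more similar."""
    return float(np.linalg.norm(texture_features(query) - texture_features(candidate)))

def retrieve(query, database, top_k=3):
    """Rank database images by similarity distance to the query image."""
    ranked = sorted(range(len(database)),
                    key=lambda i: similarity_distance(query, database[i]))
    return ranked[:top_k]
```

Because retrieval returns only the closest matches rather than the whole collection, this is the mechanism by which a CBIR front end reduces information overload.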
Expanding the medical physicist curricular and professional programme to include Artificial Intelligence
Purpose: To provide a guideline curriculum related to Artificial Intelligence (AI) for the education and training of European Medical Physicists (MPs). Materials and methods: The proposed curriculum consists of two levels: Basic (introducing MPs to the pillars of knowledge, development and applications of AI, in the context of medical imaging and radiation therapy) and Advanced. Both are common to the subspecialties (diagnostic and interventional radiology, nuclear medicine, and radiation oncology). The learning outcomes of the training are presented as knowledge, skills and competences (KSC approach). Results: For the Basic section, KSCs were stratified into four subsections: (1) Medical imaging analysis and AI basics; (2) Implementation of AI applications in clinical practice; (3) Big data and enterprise imaging; and (4) Quality, regulatory and ethical issues of AI processes. For the Advanced section, a common block was proposed, to be further elaborated by each subspecialty core curriculum. The learning outcomes were also translated into a syllabus of a more traditional format, including practical applications. Conclusions: This AI curriculum is the first attempt to create a guideline expanding the current educational framework for Medical Physicists in Europe. It should be considered a document to top the subspecialties' curricula, to be adapted by national training and regulatory bodies. The proposed educational program can be implemented via the European School of Medical Physics Expert (ESMPE) course modules and, to some extent, also by the national competent EFOMP organizations, to reach the medical physicist community in Europe widely.
Peer reviewed
Privacy-preserving model learning on a blockchain network-of-networks.
Objective: To facilitate clinical/genomic/biomedical research, constructing generalizable predictive models using cross-institutional methods while protecting privacy is imperative. However, state-of-the-art methods assume a "flattened" topology, while real-world research networks may consist of "networks-of-networks", which raises practical issues including training on small data for rare diseases/conditions, prioritizing locally trained models, and maintaining models for each level of the hierarchy. In this study, we focus on developing a hierarchical approach that inherits the benefits of privacy-preserving methods, retains the advantages of adopting blockchain, and addresses practical concerns on a research network-of-networks.
Materials and methods: We propose a framework that combines level-wise model learning, blockchain-based model dissemination, and a novel hierarchical consensus algorithm for model ensemble. We developed an example implementation, HierarchicalChain (hierarchical privacy-preserving modeling on blockchain), evaluated it on 3 healthcare/genomic datasets, and compared its predictive correctness, learning iterations, and execution time with a state-of-the-art method designed for a flattened network topology.
Results: HierarchicalChain improves predictive correctness for small training datasets and provides correctness comparable to the competing method, with more learning iterations and similar per-iteration execution time; it inherits the benefits of privacy-preserving learning and the advantages of blockchain technology, and it immutably records the models for each level.
Discussion: HierarchicalChain is independent of the core privacy-preserving learning method, as well as of the underlying blockchain platform. Further studies are warranted for various types of network topology, complex data, and privacy concerns.
Conclusion: We demonstrated the potential of utilizing the information from the hierarchical network-of-networks topology to improve prediction.
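The hierarchical consensus algorithm itself is not detailed in the abstract. As an illustration of the general idea only, a toy level-weighted vote that can prioritize locally trained models over higher-level ones might look like this (the level scheme and weights are assumptions, not the paper's algorithm):

```python
from collections import defaultdict

def hierarchical_ensemble(predictions, level_weights):
    """Illustrative hierarchical consensus: each model votes for a label
    with a weight determined by its level in the network-of-networks,
    e.g. to prioritize locally trained models.
    `predictions` maps model name -> (level, predicted label)."""
    votes = defaultdict(float)
    for level, label in predictions.values():
        votes[label] += level_weights[level]
    return max(votes, key=votes.get)

# toy example: two site-level (level 0) models outvote one
# network-level (level 1) model despite its larger per-model weight
preds = {"site_a": (0, 1), "site_b": (0, 1), "network": (1, 0)}
consensus = hierarchical_ensemble(preds, level_weights={0: 1.0, 1: 1.5})
```

Tuning the per-level weights is what lets such a scheme trade off local specialization against the broader, network-level models.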
DIPPAS: A Deep Image Prior PRNU Anonymization Scheme
Source device identification is an important topic in image forensics since
it allows tracing an image back to its origin. Its forensic counterpart is
source device anonymization, that is, masking any trace on the image that can
be useful for identifying the source device. A typical trace exploited for
source device identification is the Photo Response Non-Uniformity (PRNU), a
noise pattern left by the device on the acquired images. In this paper, we
devise a methodology for suppressing such a trace from natural images without
significant impact on image quality. Specifically, we turn PRNU anonymization
into an optimization problem in a Deep Image Prior (DIP) framework. In a
nutshell, a Convolutional Neural Network (CNN) acts as generator and returns an
image that is anonymized with respect to the source PRNU, still maintaining
high visual quality. With respect to widely-adopted deep learning paradigms,
our proposed CNN is not trained on a set of input-target pairs of images.
Instead, it is optimized to reconstruct the PRNU-free image from the original
image under analysis itself. This makes the approach particularly suitable in
scenarios where large heterogeneous databases are analyzed and prevents any
problem due to lack of generalization. Through numerical examples on publicly
available datasets, we prove our methodology to be effective compared to
state-of-the-art techniques.
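For background, the PRNU identification statistic that such anonymization aims to defeat is typically a normalized correlation between an image's noise residual and a candidate device fingerprint K. A rough sketch, using a simple local-mean residual in place of the wavelet denoising used in practice (the fingerprint model and all parameters below are illustrative):

```python
import numpy as np

def noise_residual(image, kernel=3):
    """Crude noise residual: image minus a local mean filter.
    Real PRNU pipelines use wavelet-based denoising instead."""
    img = np.asarray(image, dtype=np.float64)
    pad = kernel // 2
    padded = np.pad(img, pad, mode="edge")
    smooth = np.zeros_like(img)
    for dy in range(kernel):
        for dx in range(kernel):
            smooth += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return img - smooth / kernel ** 2

def prnu_correlation(residual, fingerprint):
    """Normalized cross-correlation between a noise residual and a
    candidate device fingerprint K; a high value suggests a match."""
    a = residual - residual.mean()
    b = fingerprint - fingerprint.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))
```

An anonymization scheme such as the one proposed here succeeds when this correlation drops to chance level for the true source device while the image itself remains visually unchanged.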