14 research outputs found

    Distributed E-learning Based on SOA

    Get PDF

    Progressive multi-atlas label fusion by dictionary evolution

    Get PDF
    Accurate segmentation of anatomical structures in medical images is important in recent imaging-based studies. In the past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the representation coefficients estimated in the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework that seeks suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of the representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to existing labeling methods to improve their label fusion performance, i.e., by extending their single-layer static dictionary to a multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results on the ADNI dataset than the counterpart methods using only a single-layer static dictionary.
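
    The single-layer, static-dictionary baseline that this work extends can be sketched as follows. This is an illustrative assumption, not the paper's implementation: `label_fusion`, the ridge solver (standing in for sparse coding), and the toy data are our own, and the multi-layer dynamic dictionary is omitted.

```python
import numpy as np

def label_fusion(x, D, labels, lam=0.1):
    """Single-layer patch-based label fusion (simplified sketch).

    x      : (d,) intensity vector of the input image patch
    D      : (d, n) atlas patch dictionary (columns are atlas patches)
    labels : (n,) anatomical label of each atlas patch's centre voxel
    lam    : ridge regularisation weight
    """
    # estimate representation coefficients in the image domain
    w = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
    w = np.clip(w, 0.0, None)          # keep non-negative fusion weights
    if w.sum() == 0:
        return labels[0]
    # apply the coefficients as a weighted vote in the label domain
    votes = {}
    for wi, li in zip(w, labels):
        votes[li] = votes.get(li, 0.0) + wi
    return max(votes, key=votes.get)

# toy example: 3 atlas patches carrying two labels
D = np.array([[1.0, 0.0, 1.0],
              [1.0, 0.0, 0.9],
              [0.0, 1.0, 0.1]])
labels = np.array([1, 2, 1])
x = np.array([1.0, 1.0, 0.0])          # resembles atlas patches 1 and 3
print(label_fusion(x, D, labels))      # prints 1
```

    The gap the paper targets is visible here: `w` is fitted purely to patch appearance, so nothing guarantees it is the best set of weights for the label-domain vote.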

    7T-guided super-resolution of 3T MRI

    Get PDF
    High-resolution MR images can depict rich details of brain anatomical structures and show subtle changes in longitudinal data. 7T MRI scanners can acquire MR images with higher resolution and better tissue contrast than routine 3T MRI scanners. However, 7T MRI scanners are currently more expensive and less available in clinical and research centers. We therefore propose a method to generate super-resolution 3T MRI that resembles 7T MRI, referred to as 7T-like MR images in this paper.

    Reconstruction of 7T-Like Images From 3T MRI

    Get PDF
    In recent MRI practice, ultra-high-field (7T) MR imaging provides higher resolution and better tissue contrast than routine 3T MRI, which may help in more accurate and earlier diagnosis of brain diseases. However, 7T MRI scanners are currently more expensive and less available at clinical and research centers. This motivates us to propose a method for reconstructing, from 3T MRI, images close to the quality of 7T MRI, called 7T-like images, to improve quality in terms of resolution and contrast. By doing so, post-processing tasks such as tissue segmentation can be done more accurately, and details of brain tissues can be seen with higher resolution and contrast. To this end, we acquired a unique dataset that includes paired 3T and 7T images scanned from the same subjects, and propose a hierarchical reconstruction based on group sparsity in a novel multi-level Canonical Correlation Analysis (CCA) space to improve the quality of a 3T MR image toward 7T-like MRI. First, overlapping patches are extracted from the input 3T MR image. Then, by extracting the most similar patches from all the aligned 3T and 7T images in the training set, paired 3T and 7T dictionaries are constructed for each patch. It is worth noting that, for training, we use pairs of 3T and 7T MR images from each training subject. Then, we propose multi-level CCA to map the paired 3T and 7T patch sets to a common space to increase their correlation. In this space, each input 3T MRI patch is sparsely represented by the 3T dictionary, and the obtained sparse coefficients are used together with the corresponding 7T dictionary to reconstruct the 7T-like patch. Also, to enforce structural consistency between adjacent patches, group sparsity is employed. This reconstruction is performed with changing patch sizes in a hierarchical framework. Experiments were conducted on 13 subjects with both 3T and 7T MR images. The results show that our method outperforms previous methods and is able to recover better structural details. Also, to place our proposed method in a medical application context, we evaluated the influence of post-processing methods, such as brain tissue segmentation, on the reconstructed 7T-like MR images. The results show that our 7T-like images lead to higher accuracy in segmentation of white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), and skull, compared to segmentation of 3T MR images.
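The coefficient-transfer step at the heart of the pipeline can be sketched as below. This is a hedged simplification: `reconstruct_7t_patch` and the toy dictionaries are our own names and data, the multi-level CCA mapping and group sparsity are omitted, and plain ridge regression stands in for sparse coding.

```python
import numpy as np

def reconstruct_7t_patch(x3, D3, D7, lam=0.1):
    """Transfer representation coefficients from a 3T to a 7T dictionary.

    x3 : (d,) input 3T patch
    D3 : (d, n) dictionary of 3T training patches
    D7 : (d, n) dictionary of the paired 7T training patches
    """
    # represent the 3T patch over the 3T dictionary (ridge regression) ...
    w = np.linalg.solve(D3.T @ D3 + lam * np.eye(D3.shape[1]), D3.T @ x3)
    # ... and apply the same coefficients to the paired 7T dictionary
    return D7 @ w

# toy pair: the "7T" patches are the "3T" patches at double contrast,
# so a faithful transfer should roughly double the input patch
D3 = np.array([[1.0, 0.0],
               [1.0, 1.0]])
D7 = 2.0 * D3
x3 = np.array([1.0, 1.0])
print(reconstruct_7t_patch(x3, D3, D7))   # close to [2, 2]
```

Because the coefficients are estimated on the 3T side only, the quality of the transfer depends on how well the paired dictionaries correlate, which is exactly what the CCA space is meant to improve.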

    Image tampering detection based on level or type of blurriness

    No full text
    With the development of sophisticated photo-editing tools, image manipulation and forgery can be done easily, and detection of tampered images by human eyes is difficult. Since images can be used in journalism, medical diagnosis, police investigation, and as court evidence, image tampering can be a threat to the security of people and human society. Therefore, detection of image forgery is an urgent issue, and development of reliable methods for image integrity examination and image forgery detection is important. Image splicing is one of the most common types of image tampering. In image splicing, if the original image and the spliced region are inconsistent in terms of blur type or blur level, such inconsistency can be used as evidence of splicing. In addition, after splicing, traces of the splicing boundary in the form of sharp edges are left in the tampered image, which differ from the normal edges in the image. However, the forger may use post-processing operations, such as resizing the tampered image into a smaller size or artificially blurring the splicing boundary, to remove the splicing traces or visual anomalies. In such a case, the existing tampering detection methods are less reliable. In this thesis, we address the problem of splicing detection and localization by proposing three methods: 1) splicing localization by exposing blur type inconsistency between the spliced region and the original image; 2) splicing localization based on inconsistency between blur and depth in the spliced region; and 3) splicing detection based on splicing boundary artifacts. To locate the spliced region based on blur type inconsistency, we propose a blur type detection feature to classify the image blocks based on blur type. This feature is used in a classification framework to classify the spliced and the authentic regions. To locate the splicing based on the inconsistency between blur and depth, we estimate two depths based on the defocus blur cue and image content cues. The inconsistency between these two depths is used for splicing localization. To detect splicing based on splicing boundary artifacts, we propose two sharpness features, called Maximum Local Variation (MLV) and Content Aware Total Variation (CATV), to measure the local sharpness of the image. These sharpness features are incorporated in a machine learning framework to classify the image as authentic or spliced. Different from the previous splicing detection methods, the first two methods are reliable in the case of artificial blurring of the splicing boundary, and all of our proposed methods are robust in general to image resizing.
    Doctor of Philosophy (EEE)
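    A Maximum Local Variation map of the kind named above can be sketched as follows, assuming the common formulation of MLV as the largest absolute intensity difference between a pixel and its 8-connected neighbours; the function name and toy image are illustrative, not the thesis's code.

```python
import numpy as np

def max_local_variation(img):
    """MLV sharpness map: for each pixel, the largest absolute intensity
    difference to its 8-connected neighbours. Large values indicate
    sharp local content such as a splicing boundary."""
    img = img.astype(float)
    p = np.pad(img, 1, mode='edge')    # replicate borders
    h, w = img.shape
    mlv = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            mlv = np.maximum(mlv, np.abs(img - shifted))
    return mlv

# toy image: a sharp vertical step edge next to flat regions
img = np.array([[0, 0, 10, 10],
                [0, 0, 10, 10]], dtype=float)
print(max_local_variation(img))   # 10 along the edge, 0 in flat areas
```

    In a splicing detector, statistics of such a map over local windows would feed the machine-learning classifier; the classifier itself is outside this sketch.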

    Blurred image splicing localization by exposing blur type inconsistency

    No full text
    In a tampered blurred image generated by splicing, the spliced region and the original image may have different blur types. Splicing localization in such an image is a challenging problem when a forger uses post-processing operations as anti-forensics, removing the splicing traces by resizing the tampered image or blurring the boundary of the spliced region. Such operations remove the artifacts on which detection relies, making splicing detection difficult. In this paper, we overcome this problem by proposing a novel framework for blurred image splicing localization based on partial blur type inconsistency. In this framework, after block-based image partitioning, a local blur type detection feature is extracted from the estimated local blur kernels. The image blocks are classified as out-of-focus or motion blur based on this feature to generate invariant blur type regions. Finally, a fine splicing localization is applied to increase the precision of the region boundaries. We can use the blur type differences between the regions to trace the inconsistency for splicing localization. Our experimental results show the efficiency of the proposed method in the detection and classification of the out-of-focus and motion blur types. For splicing localization, the results demonstrate that our method works well in detecting inconsistency in the partial blur types of tampered images. However, our method can be applied to blurred images only.
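
    The intuition behind per-block blur-type classification can be illustrated with a toy feature. This is a stand-in of our own, not the paper's kernel-based feature: motion blur suppresses gradients along the motion direction, so the anisotropy of a block's gradient distribution grows, while out-of-focus blur stays near-isotropic.

```python
import numpy as np

def blur_type_feature(block):
    """Toy blur-type feature: eigenvalue ratio of the gradient
    covariance matrix. Ratios near 1 suggest isotropic (out-of-focus)
    blur; large ratios suggest directional (motion) blur."""
    gy, gx = np.gradient(block.astype(float))
    C = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    lo, hi = np.linalg.eigvalsh(C)     # ascending eigenvalues
    return hi / (lo + 1e-9)

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
# horizontal "motion blur": average 5 horizontally shifted copies
motion = sum(np.roll(noise, s, axis=1) for s in range(5)) / 5.0
# "out-of-focus blur": average a 3x3 neighbourhood of shifted copies
defocus = sum(np.roll(np.roll(noise, sx, axis=1), sy, axis=0)
              for sx in range(3) for sy in range(3)) / 9.0
print(blur_type_feature(motion) > blur_type_feature(defocus))
```

    A block-wise map of such a feature, followed by region grouping, is the kind of pipeline the framework above builds, with its own more robust kernel-derived feature in place of this toy.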

    E-learning

    Get PDF
    E-learning is a vast and complex research topic that poses challenges in every aspect: educational and pedagogical strategies and techniques, and the tools for achieving them; usability, accessibility, and user interface design; knowledge sharing and collaborative environments; technologies, architectures, and protocols; user activity monitoring, assessment, and evaluation; experiences, case studies, and more. This book's authors come from all over the world; their ideas, studies, findings, and experiences are a valuable contribution to enriching our knowledge in the field of e-learning. The book consists of 18 chapters divided into three sections. The first section covers architectures and environments for e-learning; the second presents research on user interaction and technologies for building usable e-learning environments, which are the basis for realizing educational and pedagogical aims; and the last illustrates applications, laboratories, and experiences.

    Preparation and Quality Control of 68Ga-Citrate for PET Applications

    No full text
    Objective(s): In nuclear medicine studies, gallium-68 (68Ga) citrate has recently become known as a suitable infection-imaging agent for positron emission tomography (PET). In this study, using an in-house produced 68Ge/68Ga generator, a simple technique for the synthesis and quality control of 68Ga-citrate was introduced, followed by preliminary animal studies. Methods: 68GaCl3 eluted from the generator was studied in terms of quality control factors, including radiochemical purity (assessed by HPLC and RTLC), chemical purity (assessed by ICP-OES), radionuclidic purity (evaluated by HPGe), and breakthrough. 68Ga-citrate was prepared from the eluted 68GaCl3 and sodium citrate under various reaction conditions. Stability of the complex was evaluated in human serum for 2 h at 37°C, followed by biodistribution studies in rats for 120 min. Results: 68Ga-citrate was prepared with acceptable radiochemical purity (>97% by ITLC and >98% by HPLC), specific activity (4-6 GBq/mM), and chemical purity (Sn, Fe). Conclusion: This study demonstrated the feasibility of in-house preparation and quality control of 68Ga-citrate, using a commercially available 68Ge/68Ga generator, for PET imaging throughout the country.
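
    Because the serum-stability (2 h) and biodistribution (120 min) windows are comparable to the physical half-life of 68Ga (about 68 min), decay correction of measured activities matters in such studies. A minimal sketch, with the helper name `remaining_fraction` as our own assumption:

```python
import math

GA68_HALF_LIFE_MIN = 67.7   # approximate physical half-life of 68Ga

def remaining_fraction(t_min, half_life=GA68_HALF_LIFE_MIN):
    """Fraction of the initial 68Ga activity remaining after t_min
    minutes of radioactive decay."""
    return math.exp(-math.log(2) * t_min / half_life)

# after the 120 min biodistribution window, roughly 29% of the
# eluted activity is left
print(round(remaining_fraction(120), 2))   # prints 0.29
```

    Dividing a measured count rate by this fraction gives the decay-corrected activity at elution time.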
