
    FRNET: Flattened Residual Network for Infant MRI Skull Stripping

    Skull stripping for brain MR images is a basic segmentation task. Although many methods have been proposed, most of them focus mainly on adult MR images. Skull stripping for infant MR images is more challenging due to the small size and dynamic intensity changes of brain tissues during early development. In this paper, we propose a novel CNN-based framework to robustly extract the brain region from infant MR images without any human assistance. Specifically, we propose a simplified but more robust flattened residual network architecture (FRnet). We also introduce a new boundary loss function to highlight ambiguous and low-contrast regions between brain and non-brain areas. To make the whole framework more robust to MR images of varying quality, we further introduce an artifact simulator for data augmentation. We trained and tested the proposed framework on a large dataset (N=343) covering newborns to 48-month-olds, and obtained better performance than state-of-the-art methods in all age groups.
    Comment: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI)
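    The boundary-loss idea can be sketched as a cross-entropy term that up-weights voxels inside a band around the brain boundary. This is a minimal illustration, not the paper's loss: the band width, the weight value, the exact loss form, and the periodic (np.roll-based) morphology are all assumptions made to keep the sketch dependency-free.

```python
import numpy as np

def boundary_weighted_ce(pred, target, width=2, boundary_weight=5.0):
    """Cross-entropy up-weighted inside a band around the mask boundary.

    pred:   predicted brain probabilities in [0, 1]
    target: binary ground-truth brain mask
    Note: np.roll gives periodic boundaries, fine for interior masks.
    """
    def dilate(mask, iterations):
        m = mask.copy()
        for _ in range(iterations):
            grown = m.copy()
            for ax in range(m.ndim):
                grown |= np.roll(m, 1, ax) | np.roll(m, -1, ax)
            m = grown
        return m

    erosion = ~dilate(~target, width)        # erosion via dual dilation
    band = dilate(target, width) ^ erosion   # boundary band of the mask
    eps = 1e-7
    t = target.astype(float)
    ce = -(t * np.log(pred + eps) + (1 - t) * np.log(1 - pred + eps))
    weights = np.where(band, boundary_weight, 1.0)
    return float((weights * ce).mean())
```

    With this weighting, a misclassified voxel at the brain edge costs several times more than the same error deep inside the background, which is one simple way to make ambiguous, low-contrast boundary regions dominate training.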

    High-order localized spoof surface plasmon resonances and experimental verifications

    We theoretically demonstrated and experimentally verified high-order radial spoof localized surface plasmon resonances supported by textured metal particles. Through an effective medium theory and exact numerical simulations, we show the emergence of these geometrically originated electromagnetic modes at microwave frequencies. The occurrence of high-order radial spoof plasmon resonances is experimentally verified in ultrathin disks. Their spectral and near-field properties are characterized experimentally, showing excellent agreement with theoretical predictions. Our findings shed light on the nature of spoof localized surface plasmons and open the way to the design of broadband plasmonic devices able to operate in very different frequency regimes.
    Comment: 29 pages, 10 figures

    Segmentation of perivascular spaces in 7 T MR image using auto-context model with orientation-normalized features

    Quantitative study of perivascular spaces (PVSs) in brain magnetic resonance (MR) images is important for understanding the brain lymphatic system and its relationship with neurological diseases. One of the major challenges is the accurate extraction of PVSs, which are very thin tubular structures with various orientations in three-dimensional (3D) MR images. In this paper, we propose a learning-based PVS segmentation method to address this challenge. Specifically, we first determine a region of interest (ROI) using the anatomical brain structure and the vesselness information derived from eigenvalues of image derivatives. Then, within the ROI, we extract a number of randomized Haar features, which are normalized with respect to the principal directions of the underlying image derivatives. The classifier is trained using a random forest model that can effectively learn both discriminative features and classifier parameters to maximize the information gain. Finally, a sequential learning strategy is used to further incorporate various contextual patterns around the thin tubular structures into the classifier. For evaluation, we apply the proposed method to 7T brain MR images scanned from 17 healthy subjects aged 25 to 37. Performance is measured by voxel-wise segmentation accuracy, cluster-wise classification accuracy, and the similarity of geometric properties, such as volume, length, and diameter distributions, between the predicted and the true PVSs. Moreover, accuracy is also evaluated on simulated images with motion artifacts and lacunes to demonstrate the potential of our method for segmenting PVSs in elderly and patient populations. The experimental results show that our proposed method outperforms all existing PVS segmentation methods.
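    The vesselness ROI step can be illustrated with a Frangi-style filter built from Hessian eigenvalues. This is a simplified sketch, not the paper's implementation: the box smoothing, finite-difference Hessian, and the parameters alpha, beta, and c are illustrative assumptions.

```python
import numpy as np

def hessian_vesselness(img, alpha=0.5, beta=0.5, c=0.1):
    """Frangi-style vesselness for bright tubular structures in a 3D volume."""
    # Light separable box smoothing before differentiation.
    s = img
    for ax in range(3):
        s = (np.roll(s, 1, ax) + s + np.roll(s, -1, ax)) / 3.0
    # Hessian from repeated central differences.
    grads = np.gradient(s)
    H = np.empty(img.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            H[..., i, j] = np.gradient(grads[i], axis=j)
    # Eigenvalues ordered by magnitude: |l1| <= |l2| <= |l3|.
    lam = np.linalg.eigvalsh(H)
    lam = np.take_along_axis(lam, np.argsort(np.abs(lam), axis=-1), axis=-1)
    l1, l2, l3 = lam[..., 0], lam[..., 1], lam[..., 2]
    eps = 1e-10
    Ra = np.abs(l2) / (np.abs(l3) + eps)                # line vs. plate
    Rb = np.abs(l1) / (np.sqrt(np.abs(l2 * l3)) + eps)  # blob measure
    S = np.sqrt(l1**2 + l2**2 + l3**2)                  # overall strength
    v = (1 - np.exp(-Ra**2 / (2 * alpha**2))) \
        * np.exp(-Rb**2 / (2 * beta**2)) \
        * (1 - np.exp(-S**2 / (2 * c**2)))
    v[(l2 > 0) | (l3 > 0)] = 0.0  # bright tubes require l2, l3 < 0
    return v
```

    Thresholding such a vesselness map is one plausible way to obtain the tubular ROI in which the Haar features are then extracted.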

    Photonic Floquet time crystals

    The public and scientists often see the same topic from different perspectives, yet on time crystals they ask the same questions: What is a time crystal? Is there a material that is spontaneously crystalline in time? This study synthesizes a photonic material of Floquet time crystals and experimentally observes its indicative period-2T beating. We explicitly reconstruct a discrete time-crystalline ground state and, using an appropriately designed photonic Floquet simulator, reveal the rigid period-doubling as a signature of the spontaneous breaking of discrete time-translational symmetry. Rather than resulting from an exquisite many-body interaction, the photonic time crystal derives from a single-particle topological phase that can be readily accessed by many pertinent nonequilibrium and periodically driven platforms. Our observation will drive theoretical and technological interest in condensed matter physics and topological photonics, and demystify time crystals for the non-scientific public.
    Comment: 39 pages, 5 figures, supplementary materials, 6 supplementary figures

    A Quantization-Friendly Separable Convolution for MobileNets

    As deep learning (DL) is rapidly pushed to edge computing, researchers have invented various ways to make inference more efficient on mobile/IoT devices, such as network pruning and parameter compression. Quantization, as one of the key approaches, can effectively offload the GPU and makes it possible to deploy DL on a fixed-point pipeline. Unfortunately, not all existing network designs are friendly to quantization. For example, while the popular lightweight MobileNetV1 successfully reduces parameter size and computation latency with separable convolutions, our experiments show that its quantized models have a large accuracy gap relative to their floating-point counterparts. To resolve this, we analyzed the root cause of the quantization loss and propose a quantization-friendly separable convolution architecture. Evaluated on the ImageNet2012 image classification task, our modified MobileNetV1 model achieves 68.03% top-1 accuracy with 8-bit inference, almost closing the gap to the floating-point pipeline.
    Comment: Accepted at the 1st Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications (EMC^2 2018)
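    The kind of quantization loss the abstract describes can be illustrated with a toy symmetric per-tensor quantizer. This is a generic sketch, not the paper's scheme: when one channel's weight range dwarfs another's (a situation depthwise convolutions are prone to), a single shared scale rounds the small-range channel entirely to zero.

```python
import numpy as np

def quantize_dequantize(w, num_bits=8):
    """Simulate symmetric uniform quantization with one scale per tensor."""
    qmax = 2 ** (num_bits - 1) - 1           # 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax         # shared scale for all values
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                         # dequantized approximation
```

    Quantizing a small-range channel together with a large-range one destroys it, while quantizing it alone (a stand-in for per-channel scales or for redesigning the layer to equalize ranges) preserves it almost exactly.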

    Reconstruction of 7T-Like Images From 3T MRI

    Ultra-high-field (7T) MR imaging provides higher resolution and better tissue contrast than routine 3T MRI, which may enable earlier and more accurate diagnosis of brain diseases. However, 7T MRI scanners are currently more expensive and less available at clinical and research centers. This motivates us to propose a method for reconstructing images close to the quality of 7T MRI, called 7T-like images, from 3T MRI, to improve quality in terms of resolution and contrast. By doing so, post-processing tasks such as tissue segmentation can be done more accurately, and brain tissue details can be seen with higher resolution and contrast. To this end, we acquired a unique dataset that includes paired 3T and 7T images scanned from the same subjects, and propose a hierarchical reconstruction based on group sparsity in a novel multi-level Canonical Correlation Analysis (CCA) space to improve the quality of a 3T MR image to be 7T-like. First, overlapping patches are extracted from the input 3T MR image. Then, by extracting the most similar patches from all the aligned 3T and 7T images in the training set, paired 3T and 7T dictionaries are constructed for each patch. It is worth noting that, for training, we use pairs of 3T and 7T MR images from each training subject. We then propose multi-level CCA to map the paired 3T and 7T patch sets to a common space to increase their correlation. In this space, each input 3T MRI patch is sparsely represented by the 3T dictionary, and the obtained sparse coefficients are used together with the corresponding 7T dictionary to reconstruct the 7T-like patch. To maintain structural consistency between adjacent patches, group sparsity is employed. This reconstruction is performed with changing patch sizes in a hierarchical framework.
    Experiments were conducted on 13 subjects with both 3T and 7T MR images. The results show that our method outperforms previous methods and recovers better structural details. To place the proposed method in a medical application context, we also evaluated the influence of post-processing methods such as brain tissue segmentation on the reconstructed 7T-like MR images. Results show that our 7T-like images lead to higher accuracy in the segmentation of white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), and skull, compared to segmentation of 3T MR images.
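    The coupled-dictionary step can be sketched as follows: code a 3T patch over the 3T dictionary, then reuse the same coefficients with the paired 7T dictionary. This is only an illustration of the shared-coefficient idea; ridge-regularized least squares stands in for the paper's group-sparse coding in the CCA space, and the function name and `lam` parameter are hypothetical.

```python
import numpy as np

def reconstruct_7t_patch(p3, D3, D7, lam=0.1):
    """Code p3 over the 3T dictionary D3, then decode with the paired D7.

    p3: 3T patch (vector), D3/D7: column dictionaries with paired atoms.
    """
    # alpha = argmin ||p3 - D3 a||^2 + lam ||a||^2  (ridge stand-in)
    A = D3.T @ D3 + lam * np.eye(D3.shape[1])
    alpha = np.linalg.solve(A, D3.T @ p3)
    # The same coefficients, applied to the paired 7T atoms.
    return D7 @ alpha
```

    Because the 3T and 7T atoms are paired patch-for-patch, coefficients learned in one modality transfer structure to the other; the paper's multi-level CCA mapping increases the correlation between the paired sets before this coding step.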

    Segment Anything in 3D with NeRFs

    The Segment Anything Model (SAM) has demonstrated its effectiveness in segmenting any object/part in various 2D images, yet its ability in 3D has not been fully explored. The real world is composed of numerous 3D scenes and objects. Due to the scarcity of accessible 3D data and the high cost of its acquisition and annotation, lifting SAM to 3D is a challenging but valuable research avenue. With this in mind, we propose a novel framework to Segment Anything in 3D, named SA3D. Given a neural radiance field (NeRF) model, SA3D allows users to obtain the 3D segmentation result of any target object via only one-shot manual prompting in a single rendered view. With the input prompts, SAM cuts out the target object from the corresponding view. The obtained 2D segmentation mask is projected onto 3D mask grids via density-guided inverse rendering. 2D masks from other views are then rendered; these are mostly incomplete but serve as cross-view self-prompts that are fed into SAM again. The completed masks are then projected onto the mask grids. This procedure is executed iteratively until accurate 3D masks are learned. SA3D can adapt to various radiance fields effectively without any additional redesign. The entire segmentation process can be completed in approximately two minutes without any engineering optimization. Our experiments demonstrate the effectiveness of SA3D in different scenes, highlighting the potential of SAM in 3D scene perception. The project page is at https://jumpat.github.io/SA3D/.
    Comment: Work in progress. Project page: https://jumpat.github.io/SA3D
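    The density-guided projection step can be caricatured in a toy orthographic setting: a 2D mask is broadcast along its rays and written into the 3D mask grid only where the volume density is high. Real SA3D casts NeRF rays and weights by rendered density; the function below is a deliberately simplified sketch with hypothetical names.

```python
import numpy as np

def lift_mask_to_grid(mask2d, density, mask_grid, axis=0, thresh=0.5):
    """Toy density-guided inverse rendering with orthographic rays.

    mask2d:    2D boolean segmentation mask from SAM
    density:   3D volume density (stand-in for NeRF density)
    mask_grid: 3D boolean mask grid, updated in place and returned
    """
    m = np.expand_dims(mask2d, axis)       # broadcast mask along the ray axis
    hit = (density > thresh) & (m > 0)     # occupied voxels under the mask
    mask_grid |= hit                       # accumulate across views/iterations
    return mask_grid
```

    In the full method this projection alternates with rendering the partial mask from a new view, prompting SAM with it, and projecting the completed 2D mask back, so the 3D mask grows over iterations.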

    Enhancement of Perivascular Spaces in 7 T MR Image using Haar Transform of Non-local Cubes and Block-matching Filtering

    Perivascular spaces (PVSs) in the brain have a close relationship with typical neurological diseases. Quantitative studies of PVSs are meaningful but usually difficult due to their thin, weak signals and the background noise in 7 T brain magnetic resonance images (MRI). To clearly distinguish PVSs in 7 T MRI, we propose a novel PVS enhancement method based on the Haar transform of non-local cubes. Specifically, we extract a certain number of cubes from a small neighborhood to form a cube group, and then perform the Haar transform on each cube group. The Haar transform coefficients are processed with a nonlinear function to amplify the weak signals relevant to the PVSs and to suppress noise. The enhanced image is reconstructed using the inverse Haar transform of the processed coefficients. Finally, we apply block-matching 4D filtering to the enhanced image to remove any remaining noise, and thus obtain an enhanced and denoised 7 T MRI for PVS segmentation. We apply two existing methods to perform PVS segmentation: (1) vesselness thresholding and (2) random forest classification. The experimental results show that PVS segmentation performance can be significantly improved by using the enhanced and denoised 7 T MRI.
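    The enhancement step can be sketched with a one-level Haar transform across a group of cubes stacked along a new axis: soft-threshold the detail band to suppress noise, amplify what survives, and invert the transform. The soft-threshold-then-amplify rule, `gain`, and `noise_sigma` are illustrative assumptions standing in for the paper's nonlinear coefficient function.

```python
import numpy as np

def haar_enhance(group, gain=2.0, noise_sigma=0.01):
    """Enhance a group of similar cubes via a 1-level Haar transform.

    group: similar cubes stacked on axis 0 (even count required).
    """
    assert group.shape[0] % 2 == 0
    a = (group[0::2] + group[1::2]) / np.sqrt(2)   # approximation band
    d = (group[0::2] - group[1::2]) / np.sqrt(2)   # detail band
    # Nonlinear processing: kill noise-level details, amplify the rest.
    d = np.sign(d) * np.maximum(np.abs(d) - noise_sigma, 0.0) * gain
    # Inverse orthonormal Haar transform.
    out = np.empty_like(group)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

    Differences between cubes that exceed the noise level (such as a faint PVS present in some cubes of the group) come back amplified, while sub-threshold differences are flattened out, which is the intended weak-signal boost plus noise suppression.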