284 research outputs found

    Single Image Compressed Sensing MRI via a Self-Supervised Deep Denoising Approach

    Full text link
    Popular methods in compressed sensing (CS) are dependent on deep learning (DL), where large amounts of data are used to train non-linear reconstruction models. However, ensuring generalisability across, and access to, multiple datasets is challenging to realise for real-world applications. To address these concerns, this paper proposes a single-image, self-supervised (SS) CS-MRI framework that enables a joint deep and sparse regularisation of CS artefacts. The approach effectively dampens structured CS artefacts, which can be difficult to remove when assuming sparse reconstruction alone, or when relying solely on the inductive biases of a CNN to produce noise-free images. Image quality is thereby improved compared to either approach alone. Metrics are evaluated using Cartesian 1D masks on a brain and a knee dataset, with PSNR improving by 2-4 dB on average. Comment: 5 pages, 4 figures, 2 tables, conference
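
    As a rough illustration of the kind of sparse regularisation with k-space data consistency that the abstract refers to (not the paper's self-supervised method), the following sketch runs iterative soft-thresholding (ISTA) on a 1D Cartesian under-sampled toy phantom; the mask, the phantom, and the choice to impose sparsity directly in the image domain are all illustrative assumptions.

    import numpy as np

    def ista_cs_mri(kspace, mask, lam=0.01, n_iter=50):
        # Minimise ||M F x - y||^2 + lam * ||x||_1, with sparsity assumed
        # directly in the image domain for brevity (a wavelet or TV transform
        # would typically be used in practice).
        y = kspace * mask                      # measured (under-sampled) k-space
        x = np.fft.ifft2(y)                    # zero-filled initial estimate
        for _ in range(n_iter):
            # Gradient step on the data-consistency term (F is unitary, step = 1)
            residual = mask * np.fft.fft2(x) - y
            x = x - np.fft.ifft2(residual)
            # Proximal step: complex soft-thresholding promotes sparsity
            mag = np.abs(x)
            x = np.where(mag > lam, (mag - lam) / np.maximum(mag, 1e-12), 0) * x
        return x

    # Toy usage: a blocky phantom and a random 1D phase-encode (column) mask
    rng = np.random.default_rng(0)
    img = np.zeros((128, 128)); img[40:90, 50:80] = 1.0
    mask = np.broadcast_to(rng.random(128) < 0.33, (128, 128))
    recon = ista_cs_mri(np.fft.fft2(img), mask)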

    AliasNet: Alias Artefact Suppression Network for Accelerated Phase-Encode MRI

    Full text link
    Sparse reconstruction is an important aspect of MRI, helping to reduce acquisition time and improve spatial-temporal resolution. Popular methods are based mostly on compressed sensing (CS), which relies on the random sampling of k-space to produce incoherent (noise-like) artefacts. Due to hardware constraints, 1D Cartesian phase-encode under-sampling schemes are popular for 2D CS-MRI. However, 1D under-sampling limits 2D incoherence between measurements, yielding structured aliasing artefacts (ghosts) that may be difficult to remove assuming a 2D sparsity model. Reconstruction algorithms typically deploy direction-insensitive 2D regularisation for these direction-associated artefacts. Recognising that phase-encode artefacts can be separated into contiguous 1D signals, we develop two decoupling techniques that enable explicit 1D regularisation and leverage the excellent 1D incoherence characteristics. We also derive a combined 1D + 2D reconstruction technique that takes advantage of spatial relationships within the image. Experiments conducted on retrospectively under-sampled brain and knee data demonstrate that combining the proposed 1D AliasNet modules with existing 2D deep learned (DL) recovery techniques leads to an improvement in image quality. We also find that AliasNet enables superior scaling of performance compared to increasing the size of the original 2D network layers. AliasNet therefore improves the regularisation of aliasing artefacts arising from phase-encode under-sampling by tailoring the network architecture to account for their expected appearance. The proposed 1D + 2D approach is compatible with any existing 2D DL recovery technique deployed for this application.
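
    A minimal sketch of the decoupling idea described above (in NumPy, not the AliasNet architecture itself): because the readout direction is fully sampled under 1D phase-encode schemes, an inverse FFT along the readout axis alone turns the 2D recovery problem into independent 1D problems, one per readout position, which can then be regularised with 1D models. Axis conventions and FFT centring are simplified assumptions here.

    import numpy as np

    def to_hybrid_space(kspace, readout_axis=0):
        # Inverse FFT along the fully sampled readout axis only (fftshift
        # conventions omitted for brevity). The phase-encode axis stays in
        # k-space, so each readout position carries an independent 1D
        # under-sampled signal with its own contiguous aliasing pattern.
        return np.fft.ifft(kspace, axis=readout_axis)

    def to_kspace(hybrid, readout_axis=0):
        # Undo the readout-axis transform after per-line 1D processing.
        return np.fft.fft(hybrid, axis=readout_axis)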

    The Wisdom of Older Technology (Non-)Users

    Get PDF
    Older adults consistently reject digital technology even when it is designed to be accessible and trustworthy.

    Resolution- and Stimulus-agnostic Super-Resolution of Ultra-High-Field Functional MRI: Application to Visual Studies

    Full text link
    High-resolution fMRI provides a window into the brain's mesoscale organization. Yet, higher spatial resolution increases scan times to compensate for the low signal- and contrast-to-noise ratio. This work introduces a deep learning-based 3D super-resolution (SR) method for fMRI. By incorporating a resolution-agnostic image augmentation framework, our method adapts to varying voxel sizes without retraining. We apply this technique to localize fine-scale motion-selective sites in the early visual areas. Detection of these sites typically requires a resolution higher than 1 mm isotropic, whereas here we visualize them based on lower-resolution (2-3 mm isotropic) fMRI data. Remarkably, the super-resolved fMRI is able to recover high-frequency detail of the interdigitated organization of these sites (relative to the color-selective sites), even with training data sourced from different subjects and experimental paradigms -- including non-visual resting-state fMRI -- underscoring its robustness and versatility. Quantitative and qualitative results indicate that our method has the potential to enhance the spatial resolution of fMRI, leading to a drastic reduction in acquisition time. Comment: ISBI 2024 final version
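
    A brief sketch of what a resolution-agnostic augmentation step could look like (an illustrative stand-in, not the authors' pipeline): a high-resolution volume is blurred and resampled to a randomly chosen coarser voxel size, then resampled back onto the original grid, yielding training pairs that span a range of effective resolutions.

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def random_resolution_pair(hr_volume, hr_voxel_mm=1.0, lr_range_mm=(2.0, 3.0), rng=None):
        # Draw a random coarser voxel size (e.g. 2-3 mm isotropic)
        rng = rng or np.random.default_rng()
        lr_voxel_mm = rng.uniform(*lr_range_mm)
        factor = hr_voxel_mm / lr_voxel_mm            # < 1 means downsampling
        # Anti-alias blur roughly matched to the resolution drop (heuristic)
        sigma = 0.5 * (lr_voxel_mm / hr_voxel_mm)
        lowres = zoom(gaussian_filter(hr_volume, sigma), factor, order=1)
        # Resample back to the high-res grid so input and target shapes match
        back = zoom(lowres, np.array(hr_volume.shape) / np.array(lowres.shape), order=1)
        return back.astype(np.float32), hr_volume.astype(np.float32)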

    Fractal Compressive Sensing

    Full text link
    This paper introduces a sparse projection matrix composed of discrete (digital) periodic lines that create a pseudo-random (p.frac) sampling scheme. Our approach enables random Cartesian sampling whilst employing deterministic, one-dimensional (1D) trajectories derived from the discrete Radon transform (DRT). Unlike radial trajectories, DRT projections can be back-projected without interpolation. Thus, we also propose a novel reconstruction method based on the exact projections of the DRT, called finite Fourier reconstruction (FFR). We term this combined p.frac and FFR strategy finite compressive sensing (FCS), with image recovery demonstrated on experimental and simulated data; image quality comparisons are made with 1D and two-dimensional (2D) Cartesian random sampling, as well as radial under-sampling in a more constrained experiment. Our experiments indicate FCS enables a 3-5 dB gain in peak signal-to-noise ratio (PSNR) for 2-, 4- and 8-fold under-sampling compared to 1D Cartesian random sampling. This paper aims to: review common sampling strategies for compressed sensing (CS)-magnetic resonance imaging (MRI) to motivate a projective and Cartesian sampling scheme; compare the incoherence of these sampling strategies and the proposed p.frac; and compare the reconstruction quality of the sampling schemes under various reconstruction strategies to determine the suitability of p.frac for CS-MRI. It is hypothesised that, because p.frac is a highly incoherent sampling scheme, reconstructions will be of high quality compared to 1D Cartesian phase-encode under-sampling. Comment: 12 pages, 10 figures, 1 table
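
    The following toy example (not the p.frac construction itself) shows how discrete periodic lines of the kind used by the DRT can tile a Cartesian k-space grid: on a prime-sized N x N array, the line with slope m and offset t is the point set {(x, (m*x + t) mod N)}, so each line visits every column exactly once and distinct slopes intersect in exactly one point. The choice of slopes and offsets below is purely illustrative.

    import numpy as np

    def periodic_line_mask(N, slopes, rng=None):
        # Boolean N x N sampling mask with one discrete periodic line per slope.
        # N should be prime so that distinct slopes overlap in exactly one point.
        rng = rng or np.random.default_rng()
        mask = np.zeros((N, N), dtype=bool)
        x = np.arange(N)
        for m in slopes:
            t = rng.integers(N)              # random offset for each chosen slope
            mask[(m * x + t) % N, x] = True  # one sample per column along the line
        return mask

    mask = periodic_line_mask(257, slopes=range(0, 257, 8))   # every 8th slope
    print("sampling fraction:", mask.mean())   # a little under 33/257 due to overlaps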

    Plasmonic coupling in closed-packed ordered gallium nanoparticles

    Full text link
    Plasmonic gallium (Ga) nanoparticles (NPs) are well known to exhibit good performance in numerous applications, such as surface-enhanced fluorescence and Raman spectroscopy or biosensing. However, to reach the optimal optical performance, the strength of the localized surface plasmon resonances (LSPRs) must be enhanced, particularly by suitably narrowing the NP size distribution, among other factors. With this purpose, our previous work demonstrated the production of hexagonally ordered arrays of Ga NPs using templates of aluminium (Al) shallow pit arrays, whose LSPRs were observed in the VIS region. Quantitative analysis of the optical properties by spectroscopic ellipsometry confirmed an outstanding improvement of the LSPR intensity and full width at half maximum (FWHM) due to the imposed ordering. Here, by engineering the template dimensions, and therefore tuning the Ga NP size, we expand the LSPRs of the Ga NPs to cover a wider range of the electromagnetic spectrum, from the UV to the IR regions. More interestingly, the factors that cause this optical performance improvement are studied with the universal plasmon ruler equation, supported by discrete dipole approximation simulations. The results allow us to conclude that the plasmonic coupling between NPs arising in the ordered systems is the main cause of the optimized optical response. The research is supported by the MINECO (CTQ2014-53334-C2-2-R, CTQ2017-84309-C2-2-R and MAT201676824-C3-1-R) and Comunidad de Madrid (P2018/NMT4349 and S2018/NMT-4321 NANOMAGCOST) projects. ARC acknowledges the Ramón y Cajal program (under contract number RYC-2015-18047).
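
    For context, the universal plasmon ruler equation mentioned above models the fractional LSPR shift of a coupled particle pair as a near-exponential function of the gap-to-diameter ratio, dlambda/lambda0 ≈ A·exp(-(s/D)/tau), with s the edge-to-edge gap and D the particle diameter. The prefactor A and decay constant tau (~0.2) used below are representative literature-style values, not those fitted in the paper.

    import numpy as np

    def plasmon_ruler_shift(gap_nm, diameter_nm, A=0.2, tau=0.2):
        # Fractional LSPR red-shift for a coupled particle pair with
        # edge-to-edge gap `gap_nm` and diameter `diameter_nm`.
        return A * np.exp(-(gap_nm / diameter_nm) / tau)

    # Example: 50 nm particles with a 5 nm gap vs. a 25 nm gap
    print(plasmon_ruler_shift(5.0, 50.0))    # ~0.12 -> strong coupling
    print(plasmon_ruler_shift(25.0, 50.0))   # ~0.016 -> weak coupling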

    ICT under constraint: exposing tensions in collaboratively prioritising ICT innovation for climate targets

    Get PDF
    The international treaty known as the Paris Agreement requires global greenhouse gas emissions to decrease at a pace that will limit global warming to 1.5 degrees Celsius. Given the pressure on all sectors to reduce their emissions to meet this target, the ICT sector must begin to explore how to innovate under constraint for the first time. This could mean facing the unprecedented dilemma of having to choose between innovations, in which case the community will need to develop processes for making collective decisions regarding which innovations are most deserving of their carbon costs. In this paper, we expose tensions in collaboratively prioritising ICT innovation under constraints, and discuss the considerations and approaches the ICT sector may require to make such decisions effectively across the sector. This opens up a new area of research in which we envision that HCI expertise can inform and resolve such tensions for values-based and target-led ICT innovation towards a sustainable future.

    Proposals for Innovation and Improvement of the Quality of Life in Caprine Pastoralist Communities of Subsistence in the Monte Desert, Argentina

    Get PDF
    Through an alliance between the main environmental policy organizations and the academic sector, the National Observatory on Land Degradation and Desertification (ONDTyD) was created. The ONDTyD provides information on the status and trends of land degradation and desertification in order to promote prevention and mitigation measures and to advise public and private decision-makers in Argentina. It is based on the development of 17 pilot sites that constitute the local-level network, providing bio-physical and socio-economic indicators of land degradation. Within this network, the pilot site in the Monte, the largest dry region of Argentina (Lavalle desert, Mendoza), aims to improve the living conditions of native communities dedicated to subsistence goat farming and living below the poverty line. Precipitation ranges from 80 to 100 mm/year, strongly affecting productive activities. The proposal includes innovative elements in an area whose natural resources have been devastated. It is framed within a conception of rural territorial development that generates sustainable development strategies for rural indigenous communities, improves the status of the ecosystem through integral management of natural and cultural resources, and improves the socioeconomic conditions of inhabitants, reconciling ecosystem regeneration with investment in infrastructure and services, diversification of productive activities and generation of employment. An interdisciplinary group designed the proposal and the integrated desertification assessment in the field, with active community participation through their knowledge, land and livestock. The pilot case can be replicated throughout the territory. The work combines participatory and integrated methodologies, showing that the Observatory is a successful example of partnership building between the political and scientific-technological sectors in Argentina.

    Multi-contrast MRI Super-resolution via Implicit Neural Representations

    Full text link
    Clinical routine and retrospective cohorts commonly include multi-parametric Magnetic Resonance Imaging; however, the scans are mostly acquired in different anisotropic 2D views due to signal-to-noise-ratio and scan-time constraints. Views acquired in this way suffer from poor out-of-plane resolution and affect downstream volumetric image analysis, which typically requires isotropic 3D scans. Combining different views of multi-contrast scans into high-resolution isotropic 3D scans is challenging due to the lack of a large training cohort, which calls for a subject-specific framework. This work proposes a novel solution to this problem leveraging Implicit Neural Representations (INR). Our proposed INR jointly learns two different contrasts of complementary views in a continuous spatial function and benefits from exchanging anatomical information between them. Trained within minutes on a single commodity GPU, our model provides realistic super-resolution across different pairs of contrasts in our experiments with three datasets. Using Mutual Information (MI) as a metric, we find that our model converges to an optimum MI amongst sequences, achieving anatomically faithful reconstruction. Code is available at: https://github.com/jqmcginnis/multi_contrast_inr
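
    A minimal single-contrast sketch of an implicit neural representation (illustrative only; the paper's model jointly handles two contrasts and complementary views): a small MLP with random Fourier features maps continuous 3D coordinates to intensity, so after fitting the acquired voxels it can be queried on an arbitrary isotropic grid. All sizes, the encoding scale and the toy data below are assumptions.

    import torch
    import torch.nn as nn

    class FourierINR(nn.Module):
        def __init__(self, n_freqs=64, hidden=256, scale=10.0):
            super().__init__()
            # Random Fourier features lift (x, y, z) into a higher-frequency basis
            self.register_buffer("B", torch.randn(3, n_freqs) * scale)
            self.mlp = nn.Sequential(
                nn.Linear(2 * n_freqs, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, xyz):                 # xyz: (N, 3), normalised to [-1, 1]
            proj = 2 * torch.pi * xyz @ self.B
            feats = torch.cat([proj.sin(), proj.cos()], dim=-1)
            return self.mlp(feats)              # (N, 1) predicted intensity

    # Fit on coordinates/intensities of the acquired (anisotropic) voxels,
    # then sample the trained model on an isotropic grid for super-resolution.
    model = FourierINR()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    coords = torch.rand(4096, 3) * 2 - 1        # stand-in for voxel centres
    target = torch.rand(4096, 1)                # stand-in for voxel intensities
    for _ in range(200):
        opt.zero_grad()
        loss = ((model(coords) - target) ** 2).mean()
        loss.backward()
        opt.step()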