
    RFQD - a Decelerating Radio Frequency Quadrupole for the CERN Antiproton Facility

    The RFQD is designed to decelerate antiprotons of momentum 100 MeV/c (kinetic energy 5.31 MeV) down to a kinetic energy variable between ~10 keV and 120 keV. Inside the RFQ body, which is at ground potential, the four-rod RF structure is mounted on insulating supports. It can be biased between ±60 kV DC to achieve the continuous adjustment of the output energy required by the ASACUSA experiment at the CERN Antiproton Decelerator (AD). The different parts of the system are described and the present status is reported.
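
    The quoted momentum-to-kinetic-energy conversion follows from the relativistic energy-momentum relation, T = sqrt((pc)^2 + (mc^2)^2) - mc^2. The sketch below is a quick sanity check only; the antiproton rest energy of 938.272 MeV is an assumed standard value, not given in the abstract.

```python
import math

# Relativistic check of the RFQD entrance energy:
# total energy E = sqrt((pc)^2 + (m c^2)^2); kinetic energy T = E - m c^2.
M_PBAR_MEV = 938.272  # assumed antiproton rest energy in MeV (not stated in the abstract)

def kinetic_energy_mev(pc_mev: float, rest_energy_mev: float = M_PBAR_MEV) -> float:
    """Kinetic energy (MeV) of a particle with momentum pc (MeV) and rest energy (MeV)."""
    return math.sqrt(pc_mev ** 2 + rest_energy_mev ** 2) - rest_energy_mev

print(f"{kinetic_energy_mev(100.0):.2f} MeV")  # ~5.31 MeV for 100 MeV/c antiprotons
```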

    First operating experience with the CERN decelerating RFQ for antiprotons

    The RFQD decelerates antiprotons from a momentum of 100 MeV/c (kinetic energy 5.31 MeV) down to a kinetic energy variable between ~10 keV and 120 keV. A novel feature is the implementation of a floating internal RF structure, mounted on HV insulators, which allows continuous post-deceleration or acceleration by a DC bias. A description of the system is given, followed by reports on the first operating experience with the ASACUSA experiment, dedicated performance measurements, and consolidation progress.

    Deep MR Fingerprinting with total-variation and low-rank subspace priors

    Deep learning (DL) has recently emerged to address the heavy storage and computation requirements of the baseline dictionary-matching (DM) approach for Magnetic Resonance Fingerprinting (MRF) reconstruction. When fed non-iterated back-projected images, however, such networks cannot fully resolve the spatially correlated corruption caused by undersampling artefacts. We propose an accelerated iterative reconstruction that minimizes these artefacts before the images are fed into the network. This is done through a convex regularization that jointly promotes spatio-temporal regularities of the MRF time-series. Except for training, the rest of the parameter estimation pipeline is dictionary-free. We validate the proposed approach on synthetic and in-vivo datasets.
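
    As a rough illustration of the kind of iterative pre-processing described above, the sketch below alternates a data-consistency step with a low-rank temporal-subspace projection and a per-frame total-variation proximal step. It is a toy Cartesian-mask version: the masking operator, the rank, the weights and the use of scikit-image's denoise_tv_chambolle are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# Toy sketch of a low-rank + total-variation regularized MRF reconstruction (not the paper's code).
rng = np.random.default_rng(0)
T, N = 40, 64                                     # time frames, image size
x_true = rng.standard_normal((T, N, N))           # stand-in MRF time-series (real-valued toy data)
mask = rng.random((T, N, N)) < 0.25               # retrospective Cartesian undersampling pattern

def forward(x):                                   # image series -> sampled k-space
    return mask * np.fft.fft2(x, axes=(-2, -1))

def backproj(k):                                  # sampled k-space -> image series
    return np.fft.ifft2(mask * k, axes=(-2, -1)).real

y = forward(x_true)                               # "acquired" undersampled data

# Temporal subspace: here taken from the toy ground truth; in practice it would come
# from an SVD of a simulated Bloch-response dictionary.
U, _, _ = np.linalg.svd(x_true.reshape(T, -1), full_matrices=False)
Uk = U[:, :5]                                     # keep a rank-5 temporal subspace

x = backproj(y)                                   # zero-filled back-projection as initialisation
for _ in range(30):
    x = x - backproj(forward(x) - y)                                   # data-consistency step
    x = (Uk @ (Uk.T @ x.reshape(T, -1))).reshape(T, N, N)              # low-rank subspace projection
    x = np.stack([denoise_tv_chambolle(f, weight=0.02) for f in x])    # TV proximal step per frame
```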

    Archaeological Survey of the Proposed Location for a Borrow Pit, West of Solsberry, Greene County, Indiana

    Abstracts are made available for research purposes. To view the full report, please contact the staff of the Glenn A. Black Laboratory of Archaeology (www.gbl.indiana.edu). At the request of Duncan Robertson, Inc., the Glenn A. Black Laboratory of Archaeology, Indiana University (GBL) performed a cultural resources survey of the proposed location for excavation of a borrow pit west of Solsberry, Greene County, Indiana. A total of approximately 0.45 acres was surveyed. The purposes of the survey were 1) to identify and document all of the cultural resources in the project area, 2) to evaluate any sites with regard to their eligibility for inclusion on the National Register of Historic Places (NRHP) and the Indiana Register of Historic Sites and Structures (IRHSS), and 3) to make recommendations for the protection of significant and potentially significant sites. Fieldwork was conducted on August 5, 1999 by GBL archaeologist Andrew A. White. No cultural materials were discovered in the project area. Cultural resource clearance is recommended for the project area provided that all earth-moving activities are restricted to the currently proposed area of impact.

    Deep learning-based parameter mapping for joint relaxation and diffusion tensor MR Fingerprinting

    Magnetic Resonance Fingerprinting (MRF) enables the simultaneous quantification of multiple properties of biological tissues. It relies on a pseudo-random acquisition and the matching of acquired signal evolutions to a precomputed dictionary. However, the dictionary does not scale to higher-dimensional parameter spaces, limiting MRF to the simultaneous mapping of only a small number of parameters (generally proton density, T1 and T2). Inspired by diffusion-weighted SSFP imaging, we present a proof of concept of a novel MRF sequence with embedded diffusion-encoding gradients along all three axes to efficiently encode orientational diffusion as well as T1 and T2 relaxation. We take advantage of a convolutional neural network (CNN) to reconstruct multiple quantitative maps from this single, highly undersampled acquisition. We bypass expensive dictionary matching by learning the implicit physical relationships between the spatiotemporal MRF data and the T1, T2 and diffusion tensor parameters. The predicted parameter maps and the derived scalar diffusion metrics agree well with state-of-the-art reference protocols. Orientational diffusion information is captured, as seen from the estimated primary diffusion directions. In addition, the joint acquisition and reconstruction framework proves capable of preserving tissue abnormalities in multiple sclerosis lesions.
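
    A minimal sketch of a dictionary-free mapping network of the kind described: a small fully convolutional net that regresses quantitative maps directly from the MRF time-series. The architecture, layer sizes and the eight-channel output layout (T1, T2 and six diffusion-tensor elements) are assumptions for illustration, not the paper's network.

```python
import torch
import torch.nn as nn

class MRFParameterCNN(nn.Module):
    """Toy fully convolutional net: MRF time frames (as channels) -> quantitative parameter maps."""
    def __init__(self, n_frames: int = 200, n_params: int = 8, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_frames, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, n_params, kernel_size=1),   # per-voxel regression head
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_frames, H, W) magnitude MRF images -> (batch, n_params, H, W) maps
        return self.net(x)

model = MRFParameterCNN()
frames = torch.randn(1, 200, 64, 64)            # one toy slice with 200 time points
maps = model(frames)                            # e.g. T1, T2 and six tensor elements per voxel
reference = torch.zeros_like(maps)              # training would regress against reference maps here
loss = nn.functional.mse_loss(maps, reference)
```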

    Wearable Eye Tracking for Multisensor Physical Activity Recognition

    This paper explores the use of wearable eye tracking to detect physical activities and location information during assembly and construction tasks involving small groups of up to four people. Large physical activities, such as carrying heavy items and walking, are analysed alongside more precise hand-tool activities, such as using a drill or a screwdriver. In a first analysis, gaze-invariant features from the eye tracker are classified (using Naive Bayes) alongside features obtained from wrist-worn accelerometers and microphones. An evaluation is presented using data from an 8-person dataset containing over 600 physical activity events, performed under real-world (noisy) conditions. Despite the challenges of working with complex, and sometimes unreliable, data, we show that event-based precision and recall of 0.66 and 0.81, respectively, can be achieved by combining all three sensing modalities (using experiment-independent training and temporal smoothing). In a further analysis, we apply state-of-the-art computer vision methods such as object recognition, scene recognition, and face detection to generate features from the eye-trackers' egocentric videos. Activity recognition trained on the output of an object recognition model (e.g., VGG16 trained on ImageNet) could predict precise activities with an (overall average) f-measure of 0.45. The location of participants was similarly obtained using visual scene recognition, with average precision and recall of 0.58 and 0.56, respectively.
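
    The multisensor fusion and participant-independent evaluation described above can be sketched as follows. The feature dimensions, class count and the use of scikit-learn's GaussianNB with leave-one-participant-out splitting are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import precision_score, recall_score

# Toy sketch: feature-level fusion of eye-tracker, wrist-accelerometer and microphone features,
# classified with Gaussian Naive Bayes under leave-one-participant-out evaluation.
rng = np.random.default_rng(0)
n_windows = 800
eye_feats = rng.standard_normal((n_windows, 6))     # e.g. gaze-invariant statistics per window
acc_feats = rng.standard_normal((n_windows, 12))    # e.g. wrist accelerometer statistics
mic_feats = rng.standard_normal((n_windows, 8))     # e.g. audio energy/spectral features
X = np.hstack([eye_feats, acc_feats, mic_feats])    # simple concatenation-based fusion
y = rng.integers(0, 5, n_windows)                   # toy activity labels (5 classes)
groups = rng.integers(0, 8, n_windows)              # participant id per window (8 participants)

y_true, y_pred = [], []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = GaussianNB().fit(X[train_idx], y[train_idx])   # participant-independent training
    y_true.extend(y[test_idx])
    y_pred.extend(clf.predict(X[test_idx]))

print(precision_score(y_true, y_pred, average="macro", zero_division=0),
      recall_score(y_true, y_pred, average="macro", zero_division=0))
```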

    Design of the 28.15 MHz cavity for RHIC

    Rapid three-dimensional multiparametric MRI with quantitative transient-state imaging

    Novel methods for quantitative, transient-state multiparametric imaging are increasingly being demonstrated for the assessment of disease and treatment efficacy. Here, we build on these methods by assessing the most common non-Cartesian readout trajectories (2D/3D radials and spirals), demonstrating efficient anti-aliasing with a k-space view-sharing technique, and proposing novel methods for parameter inference with neural networks that incorporate the estimation of proton density. Our results show good agreement with gold-standard and phantom references for all readout trajectories at 1.5 T and 3 T. Parameters inferred with the neural network were within 6.58% of the parameters inferred with a high-resolution dictionary. Concordance correlation coefficients were above 0.92 and the normalized root-mean-squared error ranged between 4.2% and 12.7% with respect to gold-standard phantom references for T1 and T2. In-vivo acquisitions demonstrate sub-millimetric isotropic resolution in under five minutes, with reconstruction and inference times under 7 minutes. Our 3D quantitative transient-state imaging approach could enable high-resolution multiparametric tissue quantification within clinically acceptable acquisition and reconstruction times.
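
    The agreement metrics quoted above can be computed as in the sketch below, using Lin's concordance correlation coefficient and an RMSE normalised by the reference dynamic range, applied to a synthetic T1 map. The exact normalisation used in the paper is an assumption.

```python
import numpy as np

def concordance_correlation(est: np.ndarray, ref: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between estimated and reference maps."""
    est, ref = est.ravel(), ref.ravel()
    cov = np.mean((est - est.mean()) * (ref - ref.mean()))
    return 2 * cov / (est.var() + ref.var() + (est.mean() - ref.mean()) ** 2)

def nrmse_percent(est: np.ndarray, ref: np.ndarray) -> float:
    """RMSE normalised by the dynamic range of the reference map, in percent."""
    rmse = np.sqrt(np.mean((est - ref) ** 2))
    return 100 * rmse / (ref.max() - ref.min())

# Toy usage: a synthetic T1 reference map (ms) and a slightly perturbed estimate
rng = np.random.default_rng(0)
t1_ref = rng.uniform(300, 1500, size=(64, 64))
t1_est = t1_ref + rng.normal(0, 30, size=(64, 64))
print(concordance_correlation(t1_est, t1_ref), nrmse_percent(t1_est, t1_ref))
```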