
    Tracking Performance of the Scintillating Fiber Detector in the K2K Experiment

    The K2K long-baseline neutrino oscillation experiment uses a Scintillating Fiber Detector (SciFi) to reconstruct charged particles produced in neutrino interactions in the near detector. We describe the track reconstruction algorithm and the performance of the SciFi after three years of operation. Comment: 24 pages, 18 figures, and 1 table. Preprint submitted to NI

    Automated classification of three-dimensional reconstructions of coral reefs using convolutional neural networks

    © The Author(s), 2020. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Hopkinson, B. M., King, A. C., Owen, D. P., Johnson-Roberson, M., Long, M. H., & Bhandarkar, S. M. Automated classification of three-dimensional reconstructions of coral reefs using convolutional neural networks. PLoS One, 15(3), (2020): e0230671, doi: 10.1371/journal.pone.0230671. Coral reefs are biologically diverse and structurally complex ecosystems, which have been severely affected by human actions. Consequently, there is a need for rapid ecological assessment of coral reefs, but current approaches require time-consuming manual analysis, either during a dive survey or on images collected during a survey. Reef structural complexity is essential for ecological function but is challenging to measure and often relegated to simple metrics such as rugosity. Recent advances in computer vision and machine learning offer the potential to alleviate some of these limitations. We developed an approach to automatically classify 3D reconstructions of reef sections and assessed the accuracy of this approach. 3D reconstructions of reef sections were generated using commercial Structure-from-Motion software with images extracted from video surveys. To generate a 3D classified map, locations on the 3D reconstruction were mapped back into the original images to extract multiple views of the location. Several approaches were tested to merge information from multiple views of a point into a single classification; all of them used convolutional neural networks to classify or extract features from the images but differed in the strategy employed for merging information. The merging strategies entailed voting, probability averaging, and a learned neural-network layer. All approaches performed similarly, achieving overall classification accuracies of ~96% and >90% accuracy on most classes. With this high classification accuracy, these approaches are suitable for many ecological applications. This study was funded by grants from the Alfred P. Sloan Foundation (BMH, BR2014-049; https://sloan.org) and the National Science Foundation (MHL, OCE-1657727; https://www.nsf.gov). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
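
    As a hedged illustration of the view-merging strategies mentioned in this abstract, the sketch below implements two of them (majority voting and probability averaging) over per-view class probabilities. The array shapes, the number of classes, and the CNN that would produce these probabilities are assumptions for illustration, not details taken from the paper.

```python
# Sketch of merging per-view CNN classifications for one 3D reef point.
# Assumes each view has already been scored by a CNN into class probabilities.
import numpy as np

def merge_by_voting(view_probs: np.ndarray) -> int:
    """view_probs: (n_views, n_classes) softmax outputs for one point.
    Each view votes for its most probable class; ties go to the lowest index."""
    votes = np.argmax(view_probs, axis=1)
    return int(np.bincount(votes, minlength=view_probs.shape[1]).argmax())

def merge_by_averaging(view_probs: np.ndarray) -> int:
    """Average the per-view class probabilities, then take the argmax."""
    return int(view_probs.mean(axis=0).argmax())

# Hypothetical example: three views of one point, four benthic classes.
probs = np.array([[0.70, 0.10, 0.10, 0.10],
                  [0.55, 0.25, 0.10, 0.10],
                  [0.20, 0.60, 0.10, 0.10]])
print(merge_by_voting(probs), merge_by_averaging(probs))
```

    The third strategy reported in the abstract, a learned neural-network merging layer, would replace these fixed rules with trainable weights and is not sketched here.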

    Development of a Computer Vision-Based Three-Dimensional Reconstruction Method for Volume-Change Measurement of Unsaturated Soils during Triaxial Testing

    Problems associated with unsaturated soils are ubiquitous in the U.S., where expansive and collapsible soils are some of the most widely distributed and costly geologic hazards. Solving these widespread geohazards requires a fundamental understanding of the constitutive behavior of unsaturated soils. Over the past six decades, the suction-controlled triaxial test has been established as a standard approach to characterizing the constitutive behavior of unsaturated soils. However, this type of test requires costly equipment and time-consuming testing procedures. To overcome these limitations, a photogrammetry-based method has recently been developed to measure the global and localized volume changes of unsaturated soils during triaxial testing. However, this method relies on software to detect coded targets, which often requires tedious manual correction of incorrect coded-target detections. To address this limitation of the photogrammetry-based method, this study developed a photogrammetric computer vision-based approach for automatic target recognition and 3D reconstruction for volume-change measurement of unsaturated soils in triaxial tests. A deep learning method was used to improve the accuracy and efficiency of coded target recognition. A photogrammetric computer vision method and a ray tracing technique were then developed and validated to reconstruct three-dimensional models of the soil specimen.
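
    The photogrammetric reconstruction step named in this abstract can be illustrated with a minimal sketch of linear (DLT) triangulation, which recovers a target's 3D position from its pixel coordinates in two calibrated views. The projection matrices and pixel coordinates are assumed inputs, and the refraction correction through the triaxial cell that motivates the paper's ray tracing is omitted here.

```python
# Sketch of two-view triangulation of a coded target via the direct linear
# transform. P1, P2 are assumed 3x4 camera projection matrices; x1, x2 are
# the target's (u, v) pixel coordinates in each view.
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray,
                x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """Return the least-squares 3D position of the target in world coordinates."""
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

    In a real triaxial setup, the rays would additionally be traced through the acrylic cell and cell fluid before triangulation, which is the role of the ray tracing technique described in the abstract.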

    Fat fraction mapping using bSSFP Signal Profile Asymmetries for Robust multi-Compartment Quantification (SPARCQ)

    Purpose: To develop a novel quantitative method for detection of different tissue compartments based on bSSFP signal profile asymmetries (SPARCQ) and to provide a validation and proof-of-concept for voxel-wise water-fat separation and fat fraction mapping. Methods: The SPARCQ framework uses phase-cycled bSSFP acquisitions to obtain bSSFP signal profiles. For each voxel, the profile is decomposed into a weighted sum of simulated profiles with specific off-resonance and relaxation time ratios. From the obtained set of weights, voxel-wise estimations of the fractions of the different components and their equilibrium magnetization are extracted. For the entire image volume, component-specific quantitative maps as well as banding-artifact-free images are generated. A SPARCQ proof-of-concept was provided for water-fat separation and fat fraction mapping. Noise robustness was assessed using simulations. A dedicated water-fat phantom was used to validate fat fractions estimated with SPARCQ against gold-standard 1H MRS. Quantitative maps were obtained in knees of six healthy volunteers, and SPARCQ repeatability was evaluated in scan-rescan experiments. Results: Simulations showed that fat fraction estimations are accurate and robust for signal-to-noise ratios above 20. Phantom experiments showed good agreement between SPARCQ and gold-standard (GS) fat fractions (fF(SPARCQ) = 1.02*fF(GS) + 0.00235). In volunteers, quantitative maps and banding-artifact-free water-fat-separated images obtained with SPARCQ demonstrated the expected contrast between fatty and non-fatty tissues. The coefficient of repeatability of SPARCQ fat fraction was 0.0512. Conclusion: The SPARCQ framework was proposed as a novel quantitative mapping technique for detecting different tissue compartments, and its potential was demonstrated for quantitative water-fat separation. Comment: 20 pages, 7 figures, submitted to Magnetic Resonance in Medicine
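
    A minimal sketch of the per-voxel decomposition described in this abstract, assuming a precomputed dictionary of simulated bSSFP profiles and a non-negative least-squares fit: the weights over fat-labelled dictionary atoms give a fat-fraction estimate. The dictionary construction and the fat/water labelling are assumptions; the paper's own profile simulation and fitting details are not reproduced here.

```python
# Sketch of fitting one voxel's phase-cycled bSSFP profile as a non-negative
# weighted sum of simulated dictionary profiles, then reading off a fat fraction.
import numpy as np
from scipy.optimize import nnls

def fit_voxel(profile: np.ndarray, dictionary: np.ndarray,
              is_fat: np.ndarray) -> float:
    """profile: (n_phase_cycles,) measured magnitude profile for one voxel.
    dictionary: (n_phase_cycles, n_atoms) simulated profiles spanning assumed
    off-resonance values and relaxation-time ratios.
    is_fat: boolean (n_atoms,) flags marking fat-like dictionary atoms.
    Returns an estimated fat fraction in [0, 1]."""
    weights, _ = nnls(dictionary, profile)   # non-negative weights per atom
    total = weights.sum()
    return float(weights[is_fat].sum() / total) if total > 0 else 0.0
```

    Summing the fitted weights per component class is one straightforward way to turn the dictionary weights into component fractions; the abstract does not specify the exact estimator used.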