12 research outputs found

    WYSIWYP: What You See Is What You Pick

    Full text link

    Feature-driven Volume Visualization of Medical Imaging Data

    Get PDF
    Direct volume rendering (DVR) is a volume visualization technique that has proven to be a very powerful tool in many scientific visualization domains. Diagnostic medical imaging is one such domain, in which DVR provides new capabilities for the analysis of complex cases and improves the efficiency of image interpretation workflows. However, the full potential of DVR in the medical domain has not yet been realized. A major obstacle to better integration of DVR in the medical domain is the time-consuming process of optimizing the rendering parameters needed to generate diagnostically relevant visualizations, in which the important features hidden in image volumes are clearly displayed: the shape and spatial localization of tumors, their relationships with adjacent structures, and temporal changes in the tumors. In current workflows, clinicians must manually specify the transfer function (TF), viewpoint (camera), clipping planes, and other visual parameters. Another obstacle to the adoption of DVR in the medical domain is the ever-increasing volume of imaging data. Advances in image acquisition techniques have led to a rapid expansion in the size of the data, in the form of higher resolutions, temporal image acquisition to track treatment responses over time, and an increase in the number of imaging modalities used in a single procedure. The manual specification of the rendering parameters under these circumstances is very challenging. This thesis proposes a set of innovative methods that visualize important features in multi-dimensional and multi-modality medical images by automatically or semi-automatically optimizing the rendering parameters. 
Our methods enable the visualizations necessary for diagnostic procedures, in which a 2D slice of interest (SOI) can be augmented with 3D anatomical context to provide accurate spatial localization of 2D features in the SOI; the rendering parameters are automatically computed to guarantee the visibility of 3D features; and changes in 3D features can be tracked in temporal data under the constraint of consistent contextual information. We also present a method for the efficient computation of visibility histograms (VHs) using adaptive binning, which allows our optimal DVR to be automated and visualized in real time. We evaluated our methods by producing visualizations for a variety of clinically relevant scenarios and imaging data sets, and we examined the computational performance of our methods for these scenarios
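The core quantity in a visibility histogram is each sample's contribution as seen by the viewer: its opacity weighted by the transmittance accumulated in front of it along the ray. A minimal single-threaded sketch of this accumulation (with uniform binning; the thesis's adaptive binning is a refinement not shown here, and the function names and toy transfer function are illustrative):

```python
import numpy as np

def visibility_histogram(volume_rays, transfer_alpha, n_bins=64):
    """Accumulate per-sample visibility into scalar-value bins.

    volume_rays: (n_rays, n_samples) scalar values in [0, 1]
    transfer_alpha: callable mapping a scalar value to opacity in [0, 1]
    """
    hist = np.zeros(n_bins)
    for ray in volume_rays:
        transmittance = 1.0  # fraction of light surviving to the current sample
        for s in ray:
            alpha = transfer_alpha(s)
            visibility = transmittance * alpha        # what the viewer actually sees
            hist[min(int(s * n_bins), n_bins - 1)] += visibility
            transmittance *= (1.0 - alpha)            # front-to-back compositing
            if transmittance < 1e-4:                  # early ray termination
                break
    return hist / max(hist.sum(), 1e-12)              # normalize to a distribution

# Toy example: random rays with a linear-ramp opacity transfer function
rng = np.random.default_rng(0)
rays = rng.random((10, 32))
vh = visibility_histogram(rays, lambda s: 0.1 * s)
```

A TF optimizer can then compare such a histogram against a target visibility distribution for the features of interest and adjust the opacity mapping accordingly.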

    The University of Maine Historic Preservation Master Plan

    Get PDF
    Historic Preservation Plan for the University of Maine. The plan represents the efforts of University of Maine personnel, including administrators, Facilities Management staff, faculty, and students, together with consulting architects, historians, and landscape architects, working closely with the CPC (and, through representation on the CPC, the CBAC) and the MHPC. The University’s Board of Visitors periodically reviewed the work of the planning team and offered enthusiastic support. The goals of the Historic Preservation Master Plan are to:
    • identify and document historic resources of the core campus of the University of Maine;
    • identify more recent buildings and landscapes of the University of Maine that have acquired, or are expected to acquire, significance in the future;
    • determine and document the existing conditions of these resources;
    • recommend appropriate preservation treatments and uses for these resources;
    • publicize and protect these resources through designation under institutional, local, state, and federal historic preservation processes;
    • put in place University policies and procedures that will assure adequate protection, maintenance, and appropriate use of these resources;
    • use University resources to educate the University community about the importance and value of campus historic resources;
    • protect the historic resources of the University in order to maintain strong ties between the institution and its alumni family; and
    • provide campus planners with specific and practical information to assist them with the day-to-day management of the physical plant and with long-range development decisions

    Probing the Universe with Space Based Low-Frequency Radio Measurements

    Full text link
    Due to Earth’s ionosphere, it is not possible to image the sky below 10 MHz from the ground. Any waves below this cutoff frequency are absorbed by the plasma in Earth’s ionosphere, whose free electron density determines the cutoff. A constellation of small spacecraft above the ionosphere could enable radio imaging from space at frequencies below this cutoff, but the logistics and costs of flying multiple satellites, kilometers apart and positioned precisely enough to form a radio array, have until recently made this infeasible. With the lowering costs and increasing reliability of smallsats, radio arrays in space are finally set to open this new window through which we may observe the universe in a new light. For complex sources in the sky, analytical formulas are not enough to predict array performance; full simulations must be run to evaluate potential array configurations. Simulated outputs must be compared against a realistic input model to ensure that a given array configuration can meet its defined scientific requirements. Space-based arrays also introduce additional challenges: novel data processing, errors arising from location retrieval of the receivers, and budgeting for data transmission. In this thesis I demonstrate the feasibility of several space-based radio arrays by simulating their performance under realistic conditions. I outline the science goals involving radio imaging below 10 MHz for a range of solar, astrophysical, and magnetospheric targets. I then outline different strategies for creating synthetic apertures in space that are well suited to each of these targets. I describe the calculations needed for each style of correlation and create a data processing and science analysis pipeline to showcase the imaging performance of each simulated array. 
I show that the SunRISE and RELIC array concepts are both able to meet their main scientific goals of localizing solar radio bursts and mapping radio galaxies, respectively. I describe a novel way in which I use magnetohydrodynamic simulations of a solar eruption alongside real radio data to predict the sky brightness patterns of the radio bursts for input to the SunRISE pipeline across different theories of particle acceleration. This technique provides initial predictions of the location of solar type II burst generation in a coronal mass ejection that SunRISE can potentially confirm. I also demonstrate the feasibility of a lunar near-side array powerful enough to image the Earth’s synchrotron emission, along with a zoo of brighter auroral emissions. Synchrotron measurements would provide a unique proxy measurement of the global energetic electron distribution in the Earth’s radiation belts. Such an array could also pinpoint the location of brighter transient events, such as Auroral Kilometric Radiation, with high precision, providing local, small-scale electron data in addition to global data. The time finally seems ripe for low-frequency radio astronomy to make its move to outer space. The increased feasibility of small satellites is a huge game changer for the entire space industry, incentivizing mission designs that take advantage of the distributed nature of multiple small, inexpensive spacecraft to do the jobs traditionally done, or unable to be done, by larger, more costly single spacecraft. In that same spirit, this work acts as a helpful starting point for the general mission design, data processing, and science analysis required for distributed space-based radio arrays.
PHD, Atmospheric, Oceanic & Space Science, University of Michigan, Horace H. Rackham School of Graduate Studies
https://deepblue.lib.umich.edu/bitstream/2027.42/153416/1/alexhege_1.pd
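The ionospheric cutoff mentioned above is the electron plasma frequency, f_p = (1/2π)·sqrt(n_e·e²/(ε₀·m_e)), often quoted via the shortcut f_p ≈ 8.98 kHz × sqrt(n_e [cm⁻³]). A quick sanity check (function name and the example density are illustrative):

```python
import math

def plasma_frequency_hz(n_e_cm3: float) -> float:
    """Electron plasma frequency for an electron density given in cm^-3."""
    e = 1.602176634e-19        # elementary charge, C
    m_e = 9.1093837015e-31     # electron mass, kg
    eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
    n_e_m3 = n_e_cm3 * 1e6     # convert cm^-3 to m^-3
    return math.sqrt(n_e_m3 * e * e / (eps0 * m_e)) / (2 * math.pi)

# A dense daytime ionospheric F-layer (~1e6 electrons/cm^3) cuts off near 9 MHz,
# which is why the sky below ~10 MHz is unobservable from the ground.
print(f"{plasma_frequency_hz(1e6) / 1e6:.2f} MHz")  # → 8.98 MHz
```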

    Multi-element superconducting nanowire single photon detectors

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 140-148).
    Single-photon-detector arrays can provide unparalleled performance and detailed information in applications that require precise timing and single-photon sensitivity. Such arrays have been demonstrated using a number of single-photon-detector technologies, but the high performance of superconducting nanowire single photon detectors (SNSPDs) and the unavoidable overhead of cryogenic cooling make SNSPDs particularly likely to be used in applications that require the highest-performance detectors available. These applications are also the most likely to benefit from and fully utilize the large amount of information and the performance advantages provided by a single-photon-detector array. Although the performance advantages of individual SNSPDs have been investigated since their first demonstration in 2001, the advantages gained by building arrays of multiple SNSPDs may be even more unique among single-photon-detector technologies. First, the simplicity and nanoscale dimensions of these detectors make it possible to easily operate multiple elements and to space these elements so closely that the active area of an array is essentially identical to that of a single element. This ability to eliminate seam loss between elements, as well as the performance advantages gained by using multiple smaller elements, makes the multi-element approach an attractive way to increase general detector performance (detection efficiency and maximum counting rate) and to provide new capabilities (photon-number, spatial, and spectral resolution). 
Additionally, in contrast to semiconductor-based single-photon detectors, SNSPDs have a negligible probability of spontaneously emitting photons during the detection process, eliminating a potential source of crosstalk between array elements. However, the SNSPD can be susceptible to other forms of crosstalk, such as thermal or electromagnetic interactions between elements, so it was important to investigate the operation and limitations of multi-element SNSPDs. This thesis will introduce the concept of a multi-element SNSPD with a continuous active area and will investigate its performance advantages, its potential drawbacks, and finally its application to intensity correlation measurements.
    This work is sponsored by the United States Air Force under Contract #FA8721-05-C-0002. Opinions, interpretations, recommendations and conclusions are those of the authors and are not necessarily endorsed by the United States Government.
    by Eric Dauler. Ph.D
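The photon-number-resolution benefit of a multi-element detector can be illustrated with a simple combinatorial model: if n photons fall uniformly on N equal-area elements, the array resolves the photon number only when no two photons hit the same element. A sketch of that probability (the model and function name are illustrative, not taken from the thesis):

```python
from math import perm

def p_all_distinct(n_photons: int, n_elements: int) -> float:
    """Probability that n photons land on n distinct elements,
    assuming uniform illumination across equal-area elements."""
    if n_photons > n_elements:
        return 0.0  # pigeonhole: some element must fire twice
    return perm(n_elements, n_photons) / n_elements ** n_photons

# With 4 elements, two simultaneous photons are resolved 75% of the time;
# 16 elements raise this to ~94%.
print(p_all_distinct(2, 4), p_all_distinct(2, 16))
```

This is why even a modest number of elements yields useful photon-number resolution while keeping a continuous active area.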

    Feature peeling

    No full text
    We present a novel rendering algorithm that analyses the ray profiles along the line of sight. The profiles are subdivided, according to encountered peaks and valleys, at so-called transition points. The sensitivity of these transition points is calibrated via two thresholds: the slope threshold is based on the magnitude of a peak following a valley, while the peeling threshold measures the depth of the transition point relative to the neighboring rays. This technique separates the dataset into a number of feature layers. The user can scroll through the layers, inspecting various features from the current view position. While our technique was inspired by the opacity peeling approach, we demonstrate that we can reveal detectable features even in the third and fourth layers for both CT and MRI datasets