
    Progressive refinement rendering of implicit surfaces

    The visualisation of implicit surfaces can be an inefficient task when such surfaces are complex and highly detailed. Visualising a surface by first converting it to a polygon mesh may lead to an excessive polygon count, while visualising it by direct ray casting is often a slow procedure. In this paper we present a progressive refinement renderer for implicit surfaces that are Lipschitz continuous. The renderer first displays a low resolution estimate of what the final image is going to be and, as the computation progresses, increases the quality of this estimate at an interactive frame rate. This renderer provides a quick previewing facility that significantly reduces the design cycle of a new and complex implicit surface. The renderer is also capable of completing an image faster than a conventional implicit surface rendering algorithm based on ray casting.
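    The geometric tool such renderers rely on is the Lipschitz bound itself: along a ray, |f| can change by at most L per unit of distance, so |f(p)|/L is always a safe step size. The sketch below shows this basic Lipschitz-bound ray marching (sphere tracing) in Python; the function names and the unit-sphere example are illustrative only, and the paper's progressive renderer is considerably more elaborate than this.

        # Minimal sphere-tracing sketch for a Lipschitz continuous implicit
        # surface f(p) = 0. Because |f| changes by at most L per unit of
        # distance, |f(p)| / L is a step that cannot skip over a root.
        # All names are illustrative, not taken from the paper.
        import numpy as np

        def sphere_trace(f, lipschitz, origin, direction,
                         t_max=100.0, eps=1e-4, max_steps=256):
            """March along origin + t*direction (direction assumed unit
            length) until f is within eps of zero, or give up."""
            t = 0.0
            for _ in range(max_steps):
                p = origin + t * direction
                d = f(p)
                if abs(d) < eps:          # close enough to the surface
                    return t              # hit: parametric distance
                t += abs(d) / lipschitz   # largest provably safe step
                if t > t_max:
                    break
            return None                   # ray missed the surface

        # Example: the unit sphere f(p) = |p| - 1 has Lipschitz constant 1.
        hit = sphere_trace(lambda p: np.linalg.norm(p) - 1.0, 1.0,
                           np.array([0.0, 0.0, -3.0]),
                           np.array([0.0, 0.0, 1.0]))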

    A progressive refinement approach for the visualisation of implicit surfaces

    Visualising implicit surfaces with the ray casting method is a slow procedure. The design cycle of a new implicit surface is, therefore, fraught with long latency times, as a user must wait for the surface to be rendered before deciding what changes to introduce in the next iteration. In this paper, we present an attempt at reducing the design cycle of an implicit surface modeler by introducing a progressive refinement rendering approach to the visualisation of implicit surfaces. This progressive refinement renderer provides a quick previewing facility. It first displays a low quality estimate of what the final rendering is going to be and, as the computation progresses, increases the quality of this estimate at a steady rate. The progressive refinement algorithm is based on the adaptive subdivision of the viewing frustum into smaller cells. An estimate of the variation of the implicit function inside each cell is obtained with an affine arithmetic range estimation technique. Overall, we show that our progressive refinement approach not only provides the user with visual feedback as the rendering advances but is also capable of completing the image faster than a conventional implicit surface rendering algorithm based on ray casting.
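    The subdivision step described here can be sketched with a conservative range estimate per cell: if the bounds of f over a cell exclude zero, the cell cannot contain the surface and is discarded; otherwise it is split and refined. The sketch below uses plain interval arithmetic on axis-aligned boxes as a simpler stand-in for the authors' affine arithmetic over frustum cells; all names and the sphere example are illustrative.

        # Adaptive cell subdivision driven by a conservative range estimate.
        # Interval arithmetic on boxes stands in for the paper's affine
        # arithmetic on frustum cells; this is a sketch, not their code.
        def subdivide(f_range, lo, hi, depth, max_depth, leaves):
            """Keep only cells whose range estimate straddles zero."""
            fmin, fmax = f_range(lo, hi)
            if fmin > 0.0 or fmax < 0.0:
                return                      # cell provably empty of surface
            if depth == max_depth:
                leaves.append((lo, hi))     # candidate cell at finest level
                return
            axis = max(range(3), key=lambda i: hi[i] - lo[i])  # longest axis
            mid = 0.5 * (lo[axis] + hi[axis])
            hi_left = list(hi); hi_left[axis] = mid
            lo_right = list(lo); lo_right[axis] = mid
            subdivide(f_range, lo, tuple(hi_left), depth + 1, max_depth, leaves)
            subdivide(f_range, tuple(lo_right), hi, depth + 1, max_depth, leaves)

        def sq_range(a, b):
            """Range of x*x for x in [a, b]."""
            low = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
            return low, max(a * a, b * b)

        def sphere_range(lo, hi):
            """Interval bounds of f(x, y, z) = x^2 + y^2 + z^2 - 1 on a box."""
            parts = [sq_range(a, b) for a, b in zip(lo, hi)]
            return (sum(p[0] for p in parts) - 1.0,
                    sum(p[1] for p in parts) - 1.0)

        leaves = []
        subdivide(sphere_range, (-2.0, -2.0, -2.0), (2.0, 2.0, 2.0), 0, 6, leaves)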

    Streamlining Sound Speed Profile Pre-Processing: Case Studies and Field Trials

    High rate sound speed profiling systems have the potential to maximize the efficiency of multibeam echosounder systems (MBES) by increasing accuracy at the outer edges of the swath, where refraction effects are at their worst. In some cases, high rate sampling on the order of tens of casts per hour is required to capture the spatio-temporal oceanographic variability, and this increased sampling rate can challenge the data acquisition workflow if refraction corrections are to be applied in real time. Common bottlenecks result from sound speed profile (SSP) pre-processing requirements, e.g. file format conversion, cast extension, reduction of the number of points in the cast, filtering, etc. Without the ability to quickly pre-process SSP data, the MBES operator can quickly become overwhelmed with SSP related tasks, potentially to the detriment of their other duties. A series of algorithms is proposed in which SSPs are automatically pre-processed to meet the input criteria of MBES acquisition systems; specifically, the problems of cast extrapolation and thinning are addressed. The algorithmic performance will be assessed in terms of sounding uncertainty through a series of case studies in a variety of oceanographic conditions and water depths. Results from a field trial in the French Mediterranean will be used to assess the improvement in real-time MBES acquisition workflow and survey accuracy, and will also highlight where further improvements can be made in the pre-processing pipeline.
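    As one concrete illustration of the thinning problem, a cast can be reduced with a recursive maximum-error rule in the Douglas-Peucker style: keep only the samples needed so that linear interpolation between kept points stays within a tolerance of the full profile. This is a generic sketch, not necessarily the paper's algorithm, and the 0.2 m/s tolerance is an arbitrary placeholder.

        # Illustrative SSP thinning by recursive max-error reduction.
        # Assumes depths are strictly increasing; tol is in m/s.
        def thin_profile(depths, speeds, tol=0.2):
            """Return the smallest subset of (depth, speed) samples whose
            linear interpolation stays within tol of the full cast."""
            keep = {0, len(depths) - 1}
            def recurse(i, j):
                # Find the sample between i and j deviating most from the
                # straight line joining the two endpoints.
                worst, worst_err = None, tol
                for k in range(i + 1, j):
                    t = (depths[k] - depths[i]) / (depths[j] - depths[i])
                    interp = speeds[i] + t * (speeds[j] - speeds[i])
                    err = abs(speeds[k] - interp)
                    if err > worst_err:
                        worst, worst_err = k, err
                if worst is not None:       # tolerance exceeded: keep and split
                    keep.add(worst)
                    recurse(i, worst)
                    recurse(worst, j)
            recurse(0, len(depths) - 1)
            idx = sorted(keep)
            return [depths[k] for k in idx], [speeds[k] for k in idx]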

    The role of graphics super-workstations in a supercomputing environment

    A new class of very powerful workstations has recently become available that integrates near-supercomputer computational performance with very powerful, high quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding of results, communication of results), and real-time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).

    Reducing Radiation Dose to the Female Breast during CT Coronary Angiography: A Simulation Study Comparing Breast Shielding, Angular Tube Current Modulation, Reduced kV, and Partial Angle Protocols Using an Unknown-location Signal-detectability Metric

    Purpose: The authors compared the performance of five protocols intended to reduce dose to the breast during computed tomography (CT) coronary angiography scans using a model observer unknown-location signal-detectability metric. Methods: The authors simulated CT images of an anthropomorphic female thorax phantom for a 120 kV reference protocol and five “dose reduction” protocols intended to reduce dose to the breast: 120 kV partial angle (posteriorly centered), 120 kV tube-current modulated (TCM), 120 kV with shielded breasts, 80 kV, and 80 kV partial angle (posteriorly centered). Two image quality tasks were investigated: the detection and localization of 4-mm, 3.25 mg/ml and 1-mm, 6.0 mg/ml iodine contrast signals randomly located in the heart region. For each protocol, the authors plotted the signal detectability, as quantified by the area under the exponentially transformed free response characteristic curve estimator (ÂFE), as well as noise and contrast-to-noise ratio (CNR), versus breast and lung dose. In addition, the authors quantified each protocol's dose performance as the percent difference in dose relative to the reference protocol achieved while maintaining equivalent ÂFE. Results: For the 4-mm signal-size task, the 80 kV full scan and 80 kV partial angle protocols decreased dose to the breast (80.5% and 85.3%, respectively) and lung (80.5% and 76.7%, respectively) with ÂFE = 0.96, but also resulted in an approximately three-fold increase in image noise. The 120 kV partial protocol reduced dose to the breast (17.6%) at the expense of increased lung dose (25.3%). The TCM algorithm decreased dose to the breast (6.0%) and lung (10.4%). Breast shielding increased breast dose (67.8%) and lung dose (103.4%). The 80 kV and 80 kV partial protocols demonstrated greater dose reductions for the 4-mm task than for the 1-mm task, and the shielded protocol showed a larger increase in dose for the 4-mm task than for the 1-mm task. In general, the CNR curves indicate a similar relative ranking of protocol performance as the corresponding ÂFE curves; however, the CNR metric overestimated the performance of the shielded protocol for both tasks, leading to corresponding underestimates in the relative dose increases compared to those obtained when using the ÂFE metric. Conclusions: The 80 kV and 80 kV partial angle protocols demonstrated the greatest reduction to breast and lung dose; however, the subsequent increase in image noise may be deemed clinically unacceptable. Tube output for these protocols can be adjusted to achieve a more desirable noise level with lesser breast dose savings. Breast shielding increased breast and lung dose when maintaining equivalent ÂFE. The results demonstrated that comparisons of dose performance depend on both the image quality metric and the specific task, and that CNR may not be a reliable metric of signal detectability.
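    For reference, the CNR figure this abstract warns about is typically computed from two fixed regions of interest, which is exactly why it carries no information about signal location. The definition sketched below is the standard one, not the authors' exact pipeline, and the mask-based interface is illustrative.

        # Standard contrast-to-noise ratio from two regions of interest.
        # Unlike a localization metric such as ÂFE, this assumes the
        # signal region is known, so it cannot penalize location errors.
        import numpy as np

        def cnr(image, signal_mask, background_mask):
            """CNR = |mean(signal) - mean(background)| / std(background)."""
            signal = image[signal_mask].mean()
            background = image[background_mask].mean()
            noise = image[background_mask].std()
            return abs(signal - background) / noise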