388 research outputs found

    Improved Encoding for Compressed Textures

    For the past few decades, graphics hardware has supported mapping a two-dimensional image, or texture, onto a three-dimensional surface to add detail during rendering. The complexity of modern applications using interactive graphics hardware has driven an explosion in the amount of data needed to represent these images. To reduce the memory required to store and transmit textures, graphics hardware manufacturers have introduced hardware decompression units into the texturing pipeline. Textures may now be stored compressed in memory and decoded at run-time to access the pixel data. To encode images for use with these hardware features, many compression algorithms are run offline as a preprocessing step, often the most time-consuming step in the asset preparation pipeline. This research presents several techniques to quickly serve compressed texture data. With the goal of interactive compression rates while maintaining compression quality, three algorithms are presented in the class of endpoint compression formats. The first uses intensity dilation to estimate compression parameters for low-frequency signal-modulated compressed textures and offers up to a 3X improvement in compression speed. The second, FasTC, shows that by estimating the final compression parameters, partition-based formats can choose an approximate partitioning and offer orders-of-magnitude faster encoding speed. The third, SegTC, improves further on partition selection by using a global segmentation to find the boundaries between image features. This segmentation offers an additional 2X improvement over FasTC while maintaining similar compressed quality. Also presented is a case study in using texture compression to benefit two-dimensional concave path rendering. Compressing the pixel coverage textures used for compositing yields both an increase in rendering speed and a decrease in storage overhead.
Additionally, an algorithm is presented that uses a single layer of indirection to adaptively select the compressed block size for each texture, giving a 2X increase in compression ratio for textures of mixed detail. Finally, a texture storage representation that is decoded at runtime on the GPU is presented. The decoded texture is still compressed for graphics hardware but uses 2X fewer bytes for storage and network bandwidth.
Doctor of Philosophy
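The endpoint compression formats described above share a common core: each fixed-size block stores two endpoints plus a small per-pixel index into values interpolated between them. The sketch below illustrates that core on a 4×4 grayscale block; the min/max endpoint choice and 2-bit indices are deliberate simplifications for illustration, not the parameter-estimation methods (intensity dilation, FasTC, SegTC) contributed by this work.

```python
import numpy as np

def encode_block(block):
    """Encode a 4x4 grayscale block with two endpoints and a 2-bit
    index per pixel, in the spirit of endpoint compression formats.
    Endpoints here are simply the block min/max; real encoders search
    or estimate them, trading compression speed against quality."""
    lo, hi = block.min(), block.max()
    # Four reconstruction levels interpolated between the endpoints.
    palette = np.linspace(lo, hi, 4)
    # Each pixel stores the 2-bit index of its nearest palette entry.
    indices = np.abs(block.reshape(-1, 1) - palette).argmin(axis=1)
    return lo, hi, indices

def decode_block(lo, hi, indices):
    # The hardware-side decode: rebuild the palette, look up each pixel.
    palette = np.linspace(lo, hi, 4)
    return palette[indices].reshape(4, 4)

block = np.array([[10, 20, 30, 40]] * 4, dtype=float)
lo, hi, idx = encode_block(block)
approx = decode_block(lo, hi, idx)
```

The storage win is the point: 16 pixels collapse to two endpoints and 16 two-bit indices, and decoding is cheap enough to run per texel fetch in hardware.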

    Solar Magnetic Tracking. I. Software Comparison and Recommended Practices

    Feature tracking and recognition are increasingly common tools for data analysis, but are typically implemented on an ad hoc basis by individual research groups, limiting the usefulness of derived results when selection effects and algorithmic differences are not controlled. Specific results that are affected include the solar magnetic turnover time; the distributions of sizes, strengths, and lifetimes of magnetic features; and the physics of both small-scale flux emergence and the small-scale dynamo. In this paper, we present the results of a detailed comparison between four tracking codes applied to a single set of data from SOHO/MDI, describe the interplay between desired tracking behavior and the parameterization of tracking algorithms, and make recommendations for feature selection and tracking practice in future work.
Comment: In press for Astrophys. J. 200
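As a concrete illustration of the kind of pipeline being compared, the sketch below labels features in a thresholded magnetogram mask and associates them between consecutive frames by maximal pixel overlap. The 4-connectivity and the overlap rule are illustrative choices, not those of any of the four codes compared in the paper.

```python
import numpy as np
from collections import deque

def label_features(mask):
    """4-connected component labeling of a boolean mask (e.g. a
    thresholded magnetogram). Returns an int image, 0 = background."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:  # breadth-first flood fill of one feature
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels

def associate(prev_labels, next_labels):
    """Match features between frames by maximal pixel overlap -- one of
    the association rules whose parameterization such codes must fix."""
    matches = {}
    for lab in np.unique(prev_labels[prev_labels > 0]):
        overlap = next_labels[(prev_labels == lab) & (next_labels > 0)]
        if overlap.size:
            vals, counts = np.unique(overlap, return_counts=True)
            matches[int(lab)] = int(vals[counts.argmax()])
    return matches
```

Every step here (threshold, connectivity, association rule) is exactly the kind of parameter whose uncontrolled variation across groups motivates the paper's comparison.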

    A Compact Neutron Scatter Camera Using Optical Coded-Aperture Imaging

    The detection and localization of fast neutron sources is an important capability for a number of nuclear security areas such as emergency response and arms control treaty verification. Neutron scatter cameras are one technology that can be used to accomplish this task, but current instruments tend to be large (meter scale) and not portable. Using optical coded-aperture imaging, fast plastic scintillator, and fast photodetectors sensitive to single photons, a portable neutron scatter camera was designed and simulated. The design was optimized, an experimental prototype was constructed, and neutron imaging was demonstrated with a tagged 252Cf source in the lab.
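The coded-aperture step can be illustrated with a toy 1-D model: detector counts are the source scene convolved with the open/closed mask pattern, and correlating with a balanced decoding pattern recovers the source position. The cyclic quadratic-residue mask below is a standard URA-style choice used only for illustration; the instrument's actual aperture and reconstruction are more involved.

```python
import numpy as np

def encode(scene, mask):
    """Detector counts: circular convolution of the source scene with
    the aperture's open/closed pattern (1 = open, 0 = opaque)."""
    n = len(scene)
    return np.array([sum(scene[(i - j) % n] * mask[j] for j in range(n))
                     for i in range(n)])

def decode(detector, mask):
    """Correlate with a balanced decoding pattern G = 2*mask - 1, which
    yields a sharp peak for masks with delta-like autocorrelation
    (uniformly redundant arrays)."""
    n = len(detector)
    g = 2 * np.asarray(mask) - 1
    return np.array([sum(detector[(i + j) % n] * g[j] for j in range(n))
                     for i in range(n)])

# Quadratic residues mod 7 give a 7-element cyclic URA-style mask.
mask = [0, 1, 1, 0, 1, 0, 0]
scene = np.zeros(7); scene[2] = 1.0      # a single point source
reconstruction = decode(encode(scene, mask), mask)
```

Because roughly half the aperture is open, a coded mask collects far more signal than a single pinhole while still localizing the source after decoding.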

    Effect of scan time on resting state parameters

    In the past decade, interest in studying the spontaneous low-frequency fluctuations (LFF) in the resting-state brain has steadily grown. By measuring LFF (< 0.08 Hz) in blood-oxygen-level-dependent (BOLD) signals, resting-state functional magnetic resonance imaging (rs-fMRI) has proven to be a powerful tool for exploring brain network connectivity and functionality. Rs-fMRI data can be used to organize the brain into resting-state networks (RSNs). In this thesis, rs-fMRI data are used to determine the minimum data acquisition time necessary to detect local intrinsic brain activity as a function of both the amplitude of low-frequency fluctuations (ALFF) and the fractional amplitude of low-frequency fluctuations (fALFF) in BOLD signals in healthy subjects. The data are obtained from 22 healthy subjects to use as a baseline for future rs-fMRI analysis. Voxel-wise analysis is performed on the whole brain, the gray matter volume, and two previously established RSNs, the default mode network (DMN) and the visual system network, for all the subjects in this study. Pearson's correlation coefficients (r-values) are calculated for each subject. The entire time series for one subject is divided into 31 subsections and r-values are calculated between each consecutive pair of subsections, giving 30 r-values in total. To better understand what the results mean across and within subjects, Fisher transformations are applied to the 30 calculated r-values for each subject to obtain a normal z-distribution. The mean across the 22 subjects' z-values is calculated for group analysis, yielding 30 mean values. Finally, an exponential curve-fit model is calculated across the 22 subjects using the calculated mean values, and an asymptotic growth model is used to detect the minimum data acquisition time required to obtain both ALFF and fALFF of the BOLD signals at rest.
The results show that the minimum time required to detect ALFF and fALFF of the BOLD signals at rest is 12 and 13.33 minutes, respectively. Future studies can focus on determining the minimum scanner time using similar analysis for different physiological states of the brain.
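A minimal sketch of the two amplitude measures used above, assuming a single voxel time series and the conventional low-frequency band below 0.08 Hz (the lower band edge and normalization here are illustrative choices):

```python
import numpy as np

def alff_falff(ts, tr, band=(0.01, 0.08)):
    """ALFF: mean spectral amplitude inside the low-frequency band.
    fALFF: amplitude in that band as a fraction of the amplitude over
    the whole frequency range. `tr` is the repetition time in seconds."""
    ts = ts - ts.mean()
    amp = np.abs(np.fft.rfft(ts)) * 2 / len(ts)   # one-sided amplitudes
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    alff = amp[in_band].mean()
    falff = amp[in_band].sum() / amp[1:].sum()    # skip the DC bin
    return alff, falff
```

The Fisher transformation used for the group analysis is simply z = arctanh(r) (`np.arctanh` in this notation), after which z-values can be averaged across subjects and curve-fit as described.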

    FPO++: Efficient Encoding and Rendering of Dynamic Neural Radiance Fields by Analyzing and Enhancing Fourier PlenOctrees

    Fourier PlenOctrees have been shown to be an efficient representation for real-time rendering of dynamic Neural Radiance Fields (NeRF). Despite their many advantages, they suffer from artifacts introduced by the involved compression when combined with recent state-of-the-art techniques for training the static per-frame NeRF models. In this paper, we perform an in-depth analysis of these artifacts and leverage the resulting insights to propose an improved representation. In particular, we present a novel density encoding that adapts the Fourier-based compression to the characteristics of the transfer function used by the underlying volume rendering procedure and leads to a substantial reduction of artifacts in the dynamic model. Furthermore, we show an augmentation of the training data that relaxes the periodicity assumption of the compression. We demonstrate the effectiveness of our enhanced Fourier PlenOctrees in the scope of quantitative and qualitative evaluations on synthetic and real-world scenes.
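The compression at the heart of a Fourier PlenOctree can be sketched per voxel: a time-varying density is stored as its lowest-frequency Fourier coefficients and reconstructed with an inverse transform. This toy version shows both the compactness and the periodicity assumption that the paper's training-data augmentation relaxes; the plain truncation below is illustrative and is not the paper's proposed density encoding.

```python
import numpy as np

def compress_timeline(values, k):
    """Keep the k lowest-frequency complex Fourier coefficients of a
    per-voxel time series -- the core storage idea of a Fourier
    PlenOctree leaf."""
    return np.fft.rfft(values)[:k]

def reconstruct(coeffs, n):
    """Inverse transform with the discarded coefficients zeroed.
    Reconstruction is inherently periodic in the time dimension."""
    full = np.zeros(n // 2 + 1, dtype=complex)
    full[:len(coeffs)] = coeffs
    return np.fft.irfft(full, n)
```

A smooth density timeline reconstructs almost exactly from a handful of coefficients, while sharp temporal changes ring under truncation: the class of artifact the paper analyzes and mitigates.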

    Scanline calculation of radial influence for image processing

    Efficient methods for the calculation of radial influence are described and applied to two image processing problems: digital halftoning and mixed-content image compression. The methods operate recursively on scanlines of image values, spreading intensity from scanline to scanline in proportions approximating a Cauchy distribution. For error diffusion halftoning, experiments show that this recursive scanline spreading provides an ideal pattern of distribution of error. Error diffusion using masks generated to provide this distribution of error alleviates error diffusion "worm" artifacts. The recursive scanline-by-scanline application of a spreading filter and a complementary filter can be used to reconstruct an image from its horizontal and vertical pixel difference values. When combined with the use of a downsampled image, the reconstruction is robust to incomplete and quantized pixel difference data. Such gradient field integration methods are described in detail, proceeding from the representation of images by gradient values along contours through to a variety of efficient algorithms. Comparisons show that this form of gradient field integration by convolution provides reduced distortion compared to other high-speed gradient integration methods. The reduced distortion can be attributed to success in approximating a radial pattern of influence. An approach to edge-based image compression is proposed using integration of gradient data along edge contours together with regularly sampled low-resolution image data. This edge-based image compression model is similar to previous sketch-based image coding methods but allows a simple and efficient calculation of an edge-based approximation image. A low-complexity implementation of this approach to compression is described. The implementation extracts and represents gradient data along edge contours as pixel differences and calculates an approximate image by performing integration of the pixel difference data by scanline convolution.
The implementation was developed as a prototype for compression of mixed-content image data in printing systems. Compression results are reported and the strengths and weaknesses of the implementation are identified.
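A minimal sketch of recursive scanline spreading: each row passes a filtered fraction of its accumulated value to the next row, so a point's influence widens and decays with distance from its source. The specific kernel and spread fraction below are illustrative placeholders, not the masks or proportions derived in the work.

```python
import numpy as np

def spread_influence(image, alpha=0.5, kernel=(0.25, 0.5, 0.25)):
    """Recursive scanline spreading: a fraction `alpha` of each row's
    accumulated value is horizontally filtered and added to the next
    row. Repeated small filters compound row by row, so influence
    widens with distance, approximating a heavy-tailed radial falloff."""
    out = image.astype(float).copy()
    carry = np.zeros(image.shape[1])
    for y in range(image.shape[0]):
        out[y] += carry                                   # receive spread
        carry = alpha * np.convolve(out[y], kernel, mode="same")
    return out

# A single bright pixel in the top row spreads into a widening,
# decaying footprint in the rows below.
img = np.zeros((5, 5)); img[0, 2] = 1.0
footprint = spread_influence(img)
```

The same recursion run with a complementary filter is what enables the gradient-field integration by scanline convolution described above.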

    UAV-Sim: NeRF-based Synthetic Data Generation for UAV-based Perception

    Tremendous variations coupled with large degrees of freedom in UAV-based imaging conditions lead to a significant lack of data for adequately learning UAV-based perception models. Using various synthetic renderers in conjunction with perception models is prevalent for creating synthetic data to augment learning in the ground-based imaging domain. However, severe challenges in the austere UAV-based domain require distinctive solutions to image synthesis for data augmentation. In this work, we leverage recent advancements in neural rendering to improve static and dynamic novel-view UAV-based image synthesis, especially from high altitudes, capturing salient scene attributes. Finally, we demonstrate that a considerable performance boost is achieved when a state-of-the-art detection model is optimized primarily on hybrid sets of real and synthetic data instead of the real or synthetic data separately.
Comment: Video Link: https://www.youtube.com/watch?v=ucPzbPLqqp

    Mapping the Monoceros Ring in 3D with Pan-STARRS1

    Using the Pan-STARRS1 survey, we derive limiting-magnitude, spatial-completeness, and density maps that we use to probe the three-dimensional structure and estimate the stellar mass of the so-called Monoceros Ring. The Monoceros Ring is an enormous and complex stellar substructure in the outer Milky Way disk. It is most visible across the large Galactic anticenter region, 120° < l < 240°, −30° < b < +40°. We estimate its stellar mass density profile along every line of sight in 2° × 2° pixels over the entire 30,000 deg² Pan-STARRS1 survey using the previously developed match software. By parsing this distribution into a radially smooth component and the Monoceros Ring, we obtain its mass and distance from the Sun along each relevant line of sight. The Monoceros Ring is significantly closer to us in the south (6 kpc) than in the north (9 kpc). We also create 2D cross-sections parallel to the Galactic plane that show 135° of the Monoceros Ring in the south and 170° in the north. We show that the northern and southern structures are also roughly concentric circles, suggesting that they may be waves rippling from a common origin. Excluding the Galactic plane (∼±4°), we observe an excess mass of 4 × 10⁶ M⊙ across 120° < l < 240°. If we interpolate across the Galactic plane, we estimate that this region contains 8 × 10⁶ M⊙. If we assume (somewhat boldly) that the Monoceros Ring is a set of two Galactocentric rings, its total mass is 6 × 10⁷ M⊙. Finally, if we assume that it is a set of two circles centered at a point 4 kpc from the Galactic center in the anticentral direction, as our data suggest, we estimate its mass to be 4 × 10⁷ M⊙.