
    Shape from periodic texture using the eigenvectors of local affine distortion

    This paper shows how the local slant and tilt angles of regularly textured curved surfaces can be estimated directly, without the need for iterative numerical optimization. We work in the frequency domain and measure texture distortion using the affine distortion of the pattern of spectral peaks. The key theoretical contribution is to show that the directions of the eigenvectors of the affine distortion matrices can be used to estimate the local slant and tilt angles of tangent planes to curved surfaces. In particular, the leading eigenvector points in the tilt direction. Although not as geometrically transparent, the direction of the second eigenvector can be used to estimate the slant. The required affine distortion matrices are computed from correspondences between spectral peaks, established on the basis of their energy ordering. We apply the method to a variety of real-world and synthetic imagery.
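
    A minimal numerical sketch of the eigenvector idea, assuming a symmetric 2×2 distortion matrix of the form R(τ) diag(cos σ, 1) R(τ)ᵀ (the spatial-domain convention, in which foreshortening compresses the texture along the tilt direction; in the frequency domain the eigenvalue roles are swapped). The construction and angle recovery below are illustrative, not the paper's estimation pipeline:

```python
import numpy as np

def tilt_slant_from_affine(A):
    """Estimate tilt and slant angles from a 2x2 affine texture-distortion matrix.

    Assumes (hypothetically) that A is the symmetric spatial-domain distortion
        A = R(tau) @ diag(cos(sigma), 1) @ R(tau).T,
    so the eigenvector of the smaller eigenvalue gives the tilt direction and
    the eigenvalue ratio gives cos(slant).
    """
    S = 0.5 * (A + A.T)                      # guard against small numerical asymmetry
    eigvals, eigvecs = np.linalg.eigh(S)     # eigenvalues in ascending order
    v_min = eigvecs[:, 0]                    # direction of strongest foreshortening
    tilt = np.arctan2(v_min[1], v_min[0]) % np.pi   # tilt direction, in [0, pi)
    slant = np.arccos(np.clip(eigvals[0] / eigvals[1], -1.0, 1.0))
    return tilt, slant

# Example: a plane slanted 40 degrees with tilt 25 degrees.
sigma, tau = np.radians(40), np.radians(25)
R = np.array([[np.cos(tau), -np.sin(tau)], [np.sin(tau), np.cos(tau)]])
A = R @ np.diag([np.cos(sigma), 1.0]) @ R.T
print(np.degrees(tilt_slant_from_affine(A)))   # ~[25, 40]
```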

    Deep Reflectance Maps

    Undoing the image formation process, and thereby decomposing appearance into its intrinsic properties, is a challenging task due to the under-constrained nature of this inverse problem. While significant progress has been made on inferring shape, materials, and illumination from images alone, progress in an unconstrained setting is still limited. We propose a convolutional neural architecture to estimate reflectance maps of specular materials in natural lighting conditions. We achieve this in an end-to-end learning formulation that directly predicts a reflectance map from the image itself. We show how to improve the estimates by incorporating additional supervision in an indirect scheme that first predicts surface orientation and then predicts the reflectance map via learning-based sparse data interpolation. In order to analyze performance on this difficult task, we propose a new challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg) using both synthetic and real images. Furthermore, we show the application of our method to a range of image-based editing tasks on real images.
    Comment: project page: http://homes.esat.kuleuven.be/~krematas/DRM
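
    As a rough illustration of the end-to-end formulation (not the paper's architecture), the sketch below shows a small convolutional encoder-decoder regressing a reflectance map from an image crop; the layer sizes, the 128 × 128 resolution, and the L2 loss are assumptions made here for brevity:

```python
import torch
import torch.nn as nn

class ReflectanceMapNet(nn.Module):
    """Illustrative encoder-decoder that regresses a reflectance map from an image.

    Not the architecture from the paper; channel counts and output resolution
    are placeholder choices.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, image):
        return self.decoder(self.encoder(image))

# Training-step sketch: L2 loss against a ground-truth reflectance map.
net = ReflectanceMapNet()
image = torch.rand(1, 3, 128, 128)    # input photograph crop
target = torch.rand(1, 3, 128, 128)   # stands in for a synthetic ground-truth map
loss = nn.functional.mse_loss(net(image), target)
loss.backward()
```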

    3DTouch: A wearable 3D input device with an optical sensor and a 9-DOF inertial measurement unit

    We present 3DTouch, a novel wearable 3D input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the gap for a 3D input device that is self-contained, mobile, and works universally across various 3D platforms. This paper presents a low-cost solution to designing and implementing such a device. Our approach relies on a relative positioning technique using an optical laser sensor and a 9-DOF inertial measurement unit. 3DTouch is self-contained and designed to work universally on various 3D platforms. The device employs touch input for the benefits of passive haptic feedback and movement stability. Moreover, with touch interaction, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices. We propose a set of 3D interaction techniques, including selection, translation, and rotation, using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. Modular solutions like 3DTouch open up a whole new design space for interaction techniques to build on.
    Comment: 8 pages, 7 figures
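
    A hypothetical sketch of the kind of relative-positioning fusion described: the optical sensor's planar displacement is rotated into world coordinates by the IMU orientation and accumulated into a 3D cursor position. The function name, quaternion convention, and scale factor are assumptions, not details from the paper:

```python
import numpy as np

def update_cursor(position, optical_dxdy, imu_quaternion, counts_per_mm=1000.0):
    """Advance a 3D cursor from relative optical displacement and IMU orientation.

    The optical sensor reports (dx, dy) in its own plane; the IMU orientation
    (unit quaternion, w-x-y-z) rotates that planar motion into world
    coordinates. Scale factor and conventions are illustrative assumptions.
    """
    w, x, y, z = imu_quaternion
    # Rotation matrix from the sensor (finger) frame to the world frame.
    R = np.array([
        [1 - 2*(y*y + z*z),     2*(x*y - w*z),     2*(x*z + w*y)],
        [    2*(x*y + w*z), 1 - 2*(x*x + z*z),     2*(y*z - w*x)],
        [    2*(x*z - w*y),     2*(y*z + w*x), 1 - 2*(x*x + y*y)],
    ])
    dx, dy = np.asarray(optical_dxdy) / counts_per_mm   # sensor counts -> millimetres
    local_motion = np.array([dx, dy, 0.0])               # motion lies in the touch plane
    return position + R @ local_motion

pos = np.zeros(3)
pos = update_cursor(pos, optical_dxdy=(120, -40), imu_quaternion=(1.0, 0.0, 0.0, 0.0))
print(pos)   # [0.12, -0.04, 0.0] with the identity orientation
```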

    Visualizing Magnitude and Direction in Flow Fields

    In weather visualizations, it is common to see vector data represented by glyphs placed on grids. The glyphs either do not encode magnitude in readable steps or have designs that interfere with the data, and the grids form strong but irrelevant patterns. Directional, quantitative glyphs bent along streamlines are more effective for visualizing flow patterns. With the goal of improving the perception of flow patterns in weather forecasts, we designed and evaluated two variations on a glyph commonly used to encode wind speed and direction in weather visualizations. We tested the ability of subjects to determine wind direction and speed; the results show the new designs are superior to the traditional design. In a second study, we designed and evaluated new methods for representing modeled wave data using similar streamline-based designs. We asked subjects to rate the marine weather visualizations; the results revealed a preference for some of the new designs.
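
    As a small illustration of how glyphs can be bent along streamlines (an assumed forward-Euler tracer with an arrowhead for direction, not the paper's rendering method):

```python
import numpy as np
import matplotlib.pyplot as plt

def trace_streamline(seed, field, n_steps=20, step=0.2):
    """Trace a short streamline by forward-Euler integration of a vector field.

    `field(x, y)` returns (u, v); the seed points, step size, and step count
    below are illustrative choices, not values from the paper.
    """
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        u, v = field(*pts[-1])
        speed = np.hypot(u, v)
        if speed < 1e-9:
            break
        pts.append(pts[-1] + step * np.array([u, v]) / speed)  # unit-speed step
    return np.array(pts)

# A simple circulating field standing in for wind data.
field = lambda x, y: (-y, x)
for seed in [(1, 0), (0, 1.5), (-2, 0.5)]:
    line = trace_streamline(seed, field)
    plt.plot(line[:, 0], line[:, 1], lw=2)
    plt.annotate("", xy=line[-1], xytext=line[-2],
                 arrowprops=dict(arrowstyle="->"))  # arrowhead encodes direction
plt.gca().set_aspect("equal")
plt.show()
```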

    Application of Fractal Dimension for Quantifying Noise Texture in Computed Tomography Images

    Purpose: Evaluation of noise texture information in CT images is important for assessing image quality. Noise texture is often quantified by the noise power spectrum (NPS), which requires numerous image realizations to estimate. This study evaluated fractal dimension for quantifying noise texture as a scalar metric that can potentially be estimated from one image realization.
    Methods: The American College of Radiology (ACR) CT accreditation phantom was scanned on a clinical scanner (Discovery CT750, GE Healthcare) at 120 kV and 25 and 90 mAs. Images were reconstructed using filtered back projection (FBP/ASIR 0%) with varying reconstruction kernels: Soft, Standard, Detail, Chest, Lung, Bone, and Edge. For each kernel, images were also reconstructed using the ASIR 50% and ASIR 100% iterative reconstruction (IR) methods. Fractal dimension was estimated using the differential box-counting algorithm applied to images of the uniform section of the ACR phantom. The two-dimensional NPS and one-dimensional radially averaged NPS were estimated using established techniques. By changing the radiation dose, the effect of noise magnitude on fractal dimension was evaluated. The Spearman correlation between the fractal dimension and the frequency of the NPS peak was calculated. The number of images required to reliably estimate fractal dimension was determined and compared to the number of images required to estimate the NPS-peak frequency. The effect of region of interest (ROI) size on fractal dimension estimation was evaluated. The feasibility of estimating fractal dimension in an anthropomorphic phantom and a clinical image was also investigated, with the resulting fractal dimension compared to that estimated within the uniform section of the ACR phantom.
    Results: Fractal dimension was strongly correlated with the frequency of the peak of the radially averaged NPS curve, with a Spearman rank-order coefficient of 0.98 (P < 0.01) for ASIR 0%. The mean fractal dimension at ASIR 0% was 2.49 (Soft), 2.51 (Standard), 2.52 (Detail), 2.57 (Chest), 2.61 (Lung), 2.66 (Bone), and 2.70 (Edge). A reduction in fractal dimension was observed with increasing ASIR levels for all investigated reconstruction kernels. Fractal dimension was found to be independent of noise magnitude. Fractal dimension was successfully estimated from four ROIs of size 64 × 64 pixels or one ROI of 128 × 128 pixels. Fractal dimension was found to be sensitive to non-noise structures in the image, such as ring artifacts and anatomical structure. Fractal dimension estimated within a uniform region of an anthropomorphic phantom and a clinical head image matched that estimated within the ACR phantom for filtered back projection reconstruction.
    Conclusions: Fractal dimension correlated with the NPS-peak frequency and was independent of noise magnitude, suggesting that the scalar metric of fractal dimension can be used to quantify the change in noise texture across reconstruction approaches. The results demonstrated that fractal dimension can be estimated from four 64 × 64-pixel ROIs or one 128 × 128-pixel ROI within a head CT image, which may make it amenable to quantifying noise texture within clinical images.
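
    A minimal sketch of the differential box-counting estimate applied to a single ROI; the box sizes and grey-level scaling below are illustrative defaults rather than the study's settings:

```python
import numpy as np

def differential_box_counting(image, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a grayscale ROI by differential box counting.

    A sketch of the standard DBC scheme: count the number of grey-level boxes
    of height h needed over each s-by-s block, then fit log N(r) against
    log(1/r) with r = s/M.
    """
    img = np.asarray(image, dtype=float)
    M = min(img.shape)
    G = img.max() - img.min() + 1e-9          # grey-level range of the ROI
    log_inv_r, log_N = [], []
    for s in box_sizes:
        h = s * G / M                         # box height in grey-level units
        n_boxes = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = img[i:i + s, j:j + s]
                # Boxes of height h spanning this block's intensity range.
                n_boxes += int(np.ceil((block.max() - img.min()) / h)
                               - np.ceil((block.min() - img.min()) / h)) + 1
        log_inv_r.append(np.log(M / s))       # log(1/r)
        log_N.append(np.log(n_boxes))
    slope, _ = np.polyfit(log_inv_r, log_N, 1)   # fractal dimension = slope
    return slope

# Usage sketch on a noise-only ROI (stands in for the uniform phantom section).
roi = np.random.default_rng(0).normal(size=(128, 128))
print(differential_box_counting(roi))   # a value between 2 and 3; rougher noise -> higher
```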

    Modeling, Estimation, and Pattern Analysis of Random Texture on 3-D Surfaces

    To recover 3-D structure from a shaded and textured surface image, neither shape-from-shading nor shape-from-texture analysis alone is enough, because both radiance and texture information coexist within the scene surface. A new 3-D texture model is developed by considering the scene image as the superposition of a smooth shaded image and a random texture image. To describe the random part, orthographic projection is adopted to account for the non-isotropic intensity distribution caused by the slant and tilt of a 3-D textured surface, and the Fractional Differencing Periodic (FDP) model is chosen to describe the random texture, because this model can simultaneously represent the coarseness and the pattern of the 3-D textured surface and is flexible enough to synthesize both long-term and short-term correlation structures of random texture. Since the object is described by a model with several free parameters whose values are determined directly from its projected image, it is possible to extract 3-D information and texture pattern directly from the image without any preprocessing; thus, the cumulative error introduced by each preprocessing step can be minimized. For estimating the parameters, a hybrid method that uses both least-squares and maximum-likelihood estimates is applied, and both parameter estimation and synthesis are done in the frequency domain. Among the texture pattern features that can be obtained from a single surface image, the fractal scaling parameter plays a major role in classifying and/or segmenting the different texture patterns tilted and slanted by 3-D rotation, because of its rotational and scaling invariance. Also, since the fractal scaling factor represents the coarseness of the surface, each texture pattern has its own fractal scale value; in particular, at the boundary between different textures, this value is higher than within a single texture. Based on these facts, a new classification method and a segmentation scheme for 3-D rotated texture patterns are developed.
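
    As a loose frequency-domain illustration (not the hybrid least-squares/maximum-likelihood FDP fit used in the work), a fractal scaling exponent can be read off the slope of the radially averaged power spectrum of a texture patch:

```python
import numpy as np

def fractal_scaling_from_spectrum(patch):
    """Estimate a fractal scaling exponent from the radially averaged power spectrum.

    A common surrogate: fit P(f) ~ f^(-beta) on a log-log scale and report
    beta, which, like the fractal scaling parameter described above, is
    invariant to in-plane rotation of the texture.
    """
    patch = np.asarray(patch, dtype=float)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean()))) ** 2
    n = patch.shape[0]
    yy, xx = np.indices(spectrum.shape)
    radius = np.hypot(xx - n // 2, yy - n // 2).astype(int)
    # Radially average the 2-D spectrum into a 1-D profile P(f).
    counts = np.bincount(radius.ravel())
    radial = np.bincount(radius.ravel(), spectrum.ravel()) / np.maximum(counts, 1)
    freqs = np.arange(1, n // 2)             # skip DC and frequencies beyond Nyquist
    beta, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
    return -beta                             # positive exponent: larger = smoother texture

rng = np.random.default_rng(1)
print(fractal_scaling_from_spectrum(rng.normal(size=(128, 128))))  # ~0 for white noise
```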

    Spatially resolved texture analysis of Napoleonic War era copper bolts

    The spatial resolution achievable by a time-of-flight neutron strain scanner has been harnessed using a new data analysis methodology (NyRTex) to determine, nondestructively, the spatial variation of crystallographic texture in objects of cultural heritage. Previous studies on the crystallographic texture at the centre of three Napoleonic War era copper bolts, which demonstrated the value of this technique in differentiating between the different production processes of the different types of bolts, were extended to four copper bolts from the wrecks of HMS Impregnable (completed 1786), HMS Amethyst (1799), HMS Pomone (1805) and HMS Maeander (1840), along with a cylindrical 'segment' of a further incomplete bolt from HMS Pomone. These included bolts with works stamps, allowing comparison with documentary accounts of the manufacturing processes used, and the results demonstrated unequivocally that bolts with a 'Westwood and Collins' patent stamp were made using the Collins rather than the Westwood process. In some bolts there was a pronounced variation in texture across the cross section. In some cases this is consistent with what is known of the types of hot and cold working used, but the results from the latest study might also suggest that, even in the mature phase of this technology, some hand finishing was sometimes necessary. This examination of bolts from a wider range of dates is an important step in increasing our understanding of the introduction and evolution of copper fastenings in Royal Navy warships.