
    Joint Material and Illumination Estimation from Photo Sets in the Wild

    Faithful manipulation of shape, material, and illumination in 2D Internet images would greatly benefit from a reliable factorization of appearance into material (i.e., diffuse and specular) and illumination (i.e., environment maps). On the one hand, current methods that produce very high-fidelity results typically require controlled settings, expensive devices, or significant manual effort. On the other hand, methods that are automatic and work on 'in the wild' Internet images often extract only low-frequency lighting or diffuse materials. In this work, we propose to make use of a set of photographs in order to jointly estimate the non-diffuse materials and sharp lighting in an uncontrolled setting. Our key observation is that seeing multiple instances of the same material under different illumination (i.e., environments), and different materials under the same illumination, provides valuable constraints that can be exploited to yield a high-quality solution (i.e., specular materials and environment illumination) for all the observed materials and environments. Similar constraints also arise when observing multiple materials in a single environment, or a single material across multiple environments. The core of this approach is an optimization procedure that uses two neural networks, trained on synthetic images, to predict good gradients in parametric space given observations of reflected light. We evaluate our method on a range of synthetic and real examples to generate high-quality estimates, qualitatively compare our results against state-of-the-art alternatives via a user study, and demonstrate photo-consistent image manipulation that is otherwise very challenging to achieve.
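    The learned-gradient optimization at the core of this abstract could be sketched roughly as below. This toy Python version is purely illustrative: the two trained gradient-predictor networks are replaced by analytic gradients of a trivial bilinear appearance model (observed intensity = material × light), and all names are assumptions, not the paper's code.

```python
import numpy as np

def optimize(observations, n_iters=500, lr=0.05):
    """Jointly recover per-material and per-environment parameters.

    observations[i, j]: toy 'appearance' of material i under environment j.
    Stand-in for the paper's setup, where shared materials/environments
    across photos constrain the joint factorization.
    """
    n_mat, n_env = observations.shape
    material = np.ones(n_mat)   # unknown reflectances
    light = np.ones(n_env)      # unknown illumination strengths
    for _ in range(n_iters):
        pred = np.outer(material, light)     # toy rendering model
        resid = pred - observations
        # stand-ins for the two networks' predicted parameter-space gradients
        g_mat = resid @ light
        g_env = material @ resid
        material -= lr * g_mat
        light -= lr * g_env
    return material, light
```

    As in the paper's setting, the factorization is only determined up to a global scale between material and illumination; the product (the predicted appearance) is what the data constrains.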

    MC²: Multi-wavelength and dynamical analysis of the merging galaxy cluster ZwCl 0008.8+5215: An older and less massive Bullet Cluster

    We analyze a rich dataset including Subaru/SuprimeCam, HST/ACS and WFC3, Keck/DEIMOS, Chandra/ACIS-I, and JVLA/C and D array observations for the merging galaxy cluster ZwCl 0008.8+5215. With a joint Subaru/HST weak gravitational lensing analysis, we identify two dominant subclusters and estimate their masses to be M_200 = 5.7^{+2.8}_{-1.8} × 10^{14} M_⊙ and 1.2^{+1.4}_{-0.6} × 10^{14} M_⊙. We estimate the projected separation between the two subclusters to be 924^{+243}_{-206} kpc. We perform a clustering analysis on confirmed cluster member galaxies and estimate the line-of-sight velocity difference between the two subclusters to be 92 ± 164 km s^{-1}. We further motivate, discuss, and analyze the merger scenario through an analysis of the 42 ks of Chandra/ACIS-I and JVLA/C and D polarization data. The X-ray surface brightness profile reveals a remnant core reminiscent of the Bullet Cluster. The X-ray luminosity in the 0.5-7.0 keV band is 1.7 ± 0.1 × 10^{44} erg s^{-1} and the X-ray temperature is 4.90 ± 0.13 keV. The radio relics are polarized up to 40%. We implement a Monte Carlo dynamical analysis and estimate the merger velocity at pericenter to be 1800^{+400}_{-300} km s^{-1}. ZwCl 0008.8+5215 is a low-mass version of the Bullet Cluster and therefore may prove useful in testing alternative models of dark matter. We do not find significant offsets between dark matter and galaxies, as the uncertainties are large with the current lensing data. Furthermore, in the east, the BCG is offset from other luminous cluster galaxies, which poses a puzzle for defining dark matter -- galaxy offsets.
    Comment: 22 pages, 19 figures, accepted for publication in the Astrophysical Journal on March 13, 201
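    The asymmetric error bars quoted above (e.g. 5.7^{+2.8}_{-1.8}) follow the usual convention of summarizing a Monte Carlo posterior sample by its median with 16th/84th-percentile bounds. A minimal sketch of that convention, not the paper's pipeline, using a synthetic right-skewed sample:

```python
import numpy as np

def asymmetric_errors(samples):
    """Median with asymmetric 68% errors: med (+plus / -minus)."""
    lo, med, hi = np.percentile(samples, [15.9, 50.0, 84.1])
    return med, hi - med, med - lo

# synthetic stand-in for an MC mass posterior, median near 5.7 (x10^14 Msun)
rng = np.random.default_rng(0)
samples = rng.lognormal(mean=np.log(5.7), sigma=0.35, size=100_000)
med, plus, minus = asymmetric_errors(samples)
```

    For a right-skewed posterior such as this log-normal, the upper error exceeds the lower one, matching the pattern of the quoted mass estimates.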

    AFFECT-PRESERVING VISUAL PRIVACY PROTECTION

    The prevalence of wireless networks and the convenience of mobile cameras enable many new video applications beyond security and entertainment. From behavioral diagnosis to wellness monitoring, cameras are increasingly used for observations in various educational and medical settings. Videos collected for such applications are considered protected health information under privacy laws in many countries. Visual privacy protection techniques, such as blurring or object removal, can be used to mitigate privacy concerns, but they also obliterate important visual cues of affect and social behaviors that are crucial for the target applications. In this dissertation, we propose to balance privacy protection and the utility of the data by preserving privacy-insensitive information, such as pose and expression, which is useful in many applications involving visual understanding. The Intellectual Merits of the dissertation include a novel framework for visual privacy protection that manipulates the facial image and body shape of individuals, and which: (1) is able to conceal the identity of individuals; (2) provides a way to preserve the utility of the data, such as expression and pose information; and (3) balances the utility of the data against the capacity of the privacy protection. The Broader Impacts of the dissertation focus on the significance of privacy protection for visual data, and the inadequacy of current privacy-enhancing technologies in preserving affect and behavioral attributes of the visual content, which are highly useful for behavior observation in educational and medical settings. The work in this dissertation represents one of the first attempts at achieving both goals simultaneously.
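    As a point of reference for the baselines the abstract criticizes, a naive pixelation of a face region could look like the sketch below (pure NumPy; region coordinates and block size are hypothetical). It conceals identity by quantizing the region into uniform blocks, but in doing so it also destroys exactly the expression cues the dissertation aims to preserve.

```python
import numpy as np

def pixelate(image, y0, y1, x0, x1, block=8):
    """Return a copy of `image` with the [y0:y1, x0:x1] region pixelated.

    Each block x block tile is replaced by its mean value, discarding
    high-frequency detail (identity) along with affect cues.
    """
    out = image.copy()
    region = out[y0:y1, x0:x1]          # view into the copy
    h, w = region.shape[:2]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            blk = region[by:by + block, bx:bx + block]
            blk[...] = blk.mean(axis=(0, 1))   # flatten tile to its mean
    return out
```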

    Transfer function design based on user selected samples for intuitive multivariate volume exploration

    Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user-selected samples in the spatial domain to make multivariate volumetric data visualization more accessible to domain users. Specifically, the user starts the visualization by probing features of interest on slices, and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high-dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality-reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine them using segmentation brushes. Our system is interactive, and its different views are tightly linked. Use cases show that our system has been successfully applied to simulation and complicated seismic datasets.
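    The KDE step could be sketched as follows, assuming SciPy's gaussian_kde and a hypothetical interface in which the generated transfer function maps multivariate voxel values to opacity (all names are illustrative, not the paper's code):

```python
import numpy as np
from scipy.stats import gaussian_kde

def make_hdtf(selected_samples, max_opacity=1.0):
    """Build an opacity transfer function from user-probed samples.

    selected_samples: array of shape (n_vars, n_samples), the multivariate
    values queried at the user's slice selections. Opacity is proportional
    to the estimated density of the selection at each voxel's values.
    """
    kde = gaussian_kde(selected_samples)
    peak = kde(selected_samples).max()   # normalize by the densest sample

    def opacity(values):                 # values: shape (n_vars, n_voxels)
        return max_opacity * kde(values) / peak

    return opacity
```

    Voxels whose values resemble the user's selection receive high opacity, while dissimilar voxels fade out, which is what makes the selected feature stand out in the rendering.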

    Robust density modelling using the Student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show, through experiments over two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
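    The robustness argument can be illustrated directly: the Student's t log-density penalizes an outlying observation far less than the Gaussian, so a t-based observation model is less distorted by bad feature extractions. A minimal SciPy comparison (illustrative only, not the paper's HMM):

```python
from scipy.stats import norm, t

# A typical feature value vs. an extraction failure far in the tail.
x_typical, x_outlier = 0.5, 8.0

# Log-likelihood drop from the typical point to the outlier under each model.
gauss_gap = norm.logpdf(x_typical) - norm.logpdf(x_outlier)
t_gap = t.logpdf(x_typical, df=3) - t.logpdf(x_outlier, df=3)
# The Gaussian's log-likelihood falls off quadratically at the outlier,
# while the heavy-tailed t (df=3) falls off only logarithmically.
```

    In an HMM, observation log-likelihoods accumulate over frames, so a single quadratic Gaussian penalty from one outlying frame can dominate the path score; the bounded-influence t keeps such frames from overwhelming the decoding.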

    Doctor of Philosophy

    Visualization and exploration of volumetric datasets has been an active area of research for over two decades. During this period, volumetric datasets used by domain users have evolved from univariate to multivariate. Volume datasets are typically explored and classified via transfer function design and visualized using direct volume rendering. To improve classification results and to enable the exploration of multivariate volume datasets, multivariate transfer functions have emerged. In this dissertation, we describe our research on multivariate transfer function design. To improve the classification of univariate volumes, various one-dimensional (1D) or two-dimensional (2D) transfer function spaces have been proposed; however, these methods work on only some datasets. We propose a novel transfer function method that provides better classifications by combining different transfer function spaces. Methods have been proposed for exploring multivariate simulations; however, these approaches are not suitable for complex real-world datasets and may be unintuitive for domain users. To this end, we propose a method based on user-selected samples in the spatial domain to make complex multivariate volume data visualization more accessible for domain users. However, this method still requires users to fine-tune transfer functions in parameter-space transfer function widgets, which may not be familiar to them. We therefore propose GuideME, a novel slice-guided semiautomatic multivariate volume exploration approach. GuideME provides the user an easy-to-use, slice-based interface that suggests feature boundaries and allows the user to select features via click and drag; an optimal transfer function is then automatically generated by optimizing a response function. Throughout the exploration process, the user does not need to interact with the parameter views at all.
    Finally, real-world multivariate volume datasets are usually also of large size, often exceeding the GPU memory and even the main memory of standard workstations. We propose a ray-guided, out-of-core, interactive volume rendering and efficient query method to support large and complex multivariate volumes on standard workstations.
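    The ray-guided out-of-core idea, loading volume bricks from disk only when a ray actually touches them and keeping a small working set resident, could be sketched with an LRU cache standing in for limited GPU memory (all names hypothetical, not the dissertation's implementation):

```python
from collections import OrderedDict

class BrickCache:
    """Demand-loaded brick store with least-recently-used eviction."""

    def __init__(self, load_fn, capacity=4):
        self.load_fn = load_fn        # reads one brick from disk by id
        self.capacity = capacity      # max bricks resident at once
        self.cache = OrderedDict()    # brick_id -> brick data, LRU order
        self.loads = 0                # disk fetches actually performed

    def get(self, brick_id):
        """Fetch a brick a ray has intersected, loading it if absent."""
        if brick_id in self.cache:
            self.cache.move_to_end(brick_id)      # mark as recently used
        else:
            self.cache[brick_id] = self.load_fn(brick_id)
            self.loads += 1
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)    # evict least recently used
        return self.cache[brick_id]
```

    Because rays request only the bricks they traverse, occluded or empty regions of a volume larger than memory are never fetched at all, which is the essence of ray-guided out-of-core rendering.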