
    User-assisted intrinsic images

    For many computational photography applications, the lighting and materials in the scene are critical pieces of information. We seek to obtain intrinsic images, which decompose a photo into the product of an illumination component that represents lighting effects and a reflectance component that is the color of the observed material. This is an under-constrained problem and automatic methods are challenged by complex natural images. We describe a new approach that enables users to guide an optimization with simple indications such as regions of constant reflectance or illumination. Based on a simple assumption on local reflectance distributions, we derive a new propagation energy that enables a closed-form solution using linear least-squares. We achieve fast performance by introducing a novel downsampling that preserves local color distributions. We demonstrate intrinsic image decomposition on a variety of images and show applications.
    Funding: National Science Foundation (U.S.) (NSF CAREER award 0447561); Institut national de recherche en informatique et en automatique (France) (Associate Research Team “Flexible Rendering”); Microsoft Research (New Faculty Fellowship); Alfred P. Sloan Foundation (Research Fellowship); Quanta Computer, Inc. (MIT-Quanta T Party)
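    The product model stated in this abstract, image = reflectance × illumination, becomes a sum in the log domain, and the user strokes become linear difference constraints, which is enough to sketch a closed-form least-squares solver. The sketch below is a simplification under assumed weights and 4-connected pixel pairs, not the paper's propagation energy or downsampling scheme.

```python
# Minimal sketch of user-guided intrinsic decomposition as linear least-squares.
# Assumptions (not from the paper): a grayscale log-image, 4-connected pixel pairs,
# fixed weights, and two stroke types (constant reflectance / constant illumination).
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def decompose(log_I, refl_stroke, illum_stroke, w_stroke=100.0, w_smooth=1.0):
    """log_I: HxW log-intensity image; *_stroke: HxW boolean user masks."""
    H, W = log_I.shape
    idx = lambda y, x: y * W + x
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    for y in range(H):
        for x in range(W):
            for dy, dx in ((0, 1), (1, 0)):            # right / down neighbour pairs
                yy, xx = y + dy, x + dx
                if yy >= H or xx >= W:
                    continue
                i, j = idx(y, x), idx(yy, xx)
                if refl_stroke[y, x] and refl_stroke[yy, xx]:
                    w, target = w_stroke, 0.0          # constant reflectance: r_i = r_j
                elif illum_stroke[y, x] and illum_stroke[yy, xx]:
                    # constant illumination: reflectance carries the image difference,
                    # r_i - r_j = log I_i - log I_j
                    w, target = w_stroke, log_I[y, x] - log_I[yy, xx]
                else:
                    w, target = w_smooth, 0.0          # mild smoothness prior on r
                rows += [eq, eq]; cols += [i, j]; vals += [w, -w]
                rhs.append(w * target)
                eq += 1
    A = coo_matrix((vals, (rows, cols)), shape=(eq, H * W)).tocsr()
    r = lsqr(A, np.asarray(rhs))[0].reshape(H, W)      # log reflectance, up to a constant
    return np.exp(r), np.exp(log_I - r)                # reflectance, illumination
```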

    CNN based Learning using Reflection and Retinex Models for Intrinsic Image Decomposition

    Most of the traditional work on intrinsic image decomposition relies on deriving priors about scene characteristics. On the other hand, recent research uses deep learning models as an in-and-out black box and does not consider the well-established, traditional image formation process as the basis of its intrinsic learning process. As a consequence, although current deep learning approaches show superior performance when considering quantitative benchmark results, traditional approaches are still dominant in achieving high qualitative results. In this paper, the aim is to exploit the best of the two worlds. A method is proposed that (1) is empowered by deep learning capabilities, (2) considers a physics-based reflection model to steer the learning process, and (3) exploits the traditional approach to obtain intrinsic images by exploiting reflectance and shading gradient information. The proposed model is fast to compute and allows for the integration of all intrinsic components. To train the new model, an object-centered, large-scale dataset with intrinsic ground-truth images is created. The evaluation results demonstrate that the new model outperforms existing methods. Visual inspection shows that the image formation loss function augments color reproduction and the use of gradient information produces sharper edges. Datasets, models and higher resolution images are available at https://ivi.fnwi.uva.nl/cv/retinet.
    Comment: CVPR 201
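    The abstract's combination of an image formation loss with reflectance and shading gradient terms can be written down compactly. The sketch below only illustrates that combination; the network, loss weights and exact terms of the published RetiNet model are assumptions here, not reproductions.

```python
# Hedged sketch of a physics-guided intrinsic loss: direct supervision, an
# image-formation (reconstruction) term, and gradient terms on both components.
import torch
import torch.nn.functional as F

def grad_xy(t):
    """Finite-difference gradients of an image tensor along x and y."""
    return t[..., :, 1:] - t[..., :, :-1], t[..., 1:, :] - t[..., :-1, :]

def intrinsic_loss(pred_albedo, pred_shading, gt_albedo, gt_shading, image,
                   w_form=1.0, w_grad=0.5):
    # Direct supervision on the intrinsic components.
    loss = F.l1_loss(pred_albedo, gt_albedo) + F.l1_loss(pred_shading, gt_shading)
    # Image-formation term: the predicted components must multiply back to the input.
    loss = loss + w_form * F.l1_loss(pred_albedo * pred_shading, image)
    # Gradient terms sharpen reflectance edges and keep shading transitions faithful.
    for (px, py), (gx, gy) in ((grad_xy(pred_albedo), grad_xy(gt_albedo)),
                               (grad_xy(pred_shading), grad_xy(gt_shading))):
        loss = loss + w_grad * (F.l1_loss(px, gx) + F.l1_loss(py, gy))
    return loss
```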

    High intrinsic hydrolytic activity of cyanobacterial RNA polymerase compensates for the absence of transcription proofreading factors

    The vast majority of organisms possess transcription elongation factors, the functionally similar bacterial Gre and eukaryotic/archaeal TFIIS/TFS. Their main cellular functions are to proofread errors of transcription and to restart elongation via stimulation of RNA hydrolysis by the active centre of RNA polymerase (RNAP). However, a number of taxons lack these factors, including one of the largest and most ubiquitous groups of bacteria, cyanobacteria. Using cyanobacterial RNAP as a model, we investigated alternative mechanisms for maintaining a high fidelity of transcription and for RNAP arrest prevention. We found that this RNAP has very high intrinsic proofreading activity, resulting in nearly as low a level of in vivo mistakes in RNA as in Escherichia coli. Features of the cyanobacterial RNAP hydrolysis are reminiscent of the Gre-assisted reaction: the energetic barrier is similarly low, and the reaction involves water activation by a general base. This RNAP is resistant to ubiquitous and most regulatory pausing signals, decreasing the probability of going off-pathway and thus falling into arrest. We suggest that cyanobacterial RNAP has a specific Trigger Loop domain conformation and isomerises more easily into a hydrolytically proficient state, possibly aided by the RNA 3′-end. Cyanobacteria likely passed these features of transcription to their evolutionary descendants, chloroplasts.

    A Method of Drusen Measurement Based on the Geometry of Fundus Reflectance

    BACKGROUND: The hallmarks of age-related macular degeneration, the leading cause of blindness in the developed world, are the subretinal deposits known as drusen. Drusen identification and measurement play a key role in clinical studies of this disease. Current manual methods of drusen measurement are laborious and subjective. Our purpose was to expedite clinical research with an accurate, reliable digital method. METHODS: An interactive semi-automated procedure was developed to level the macular background reflectance for the purpose of morphometric analysis of drusen. Twelve color fundus photographs of patients with age-related macular degeneration and drusen were analyzed. After digitizing the photographs, the underlying background pattern in the green channel was leveled by an algorithm based on the elliptically concentric geometry of the reflectance in the normal macula: the gray scale values of all structures within defined elliptical boundaries were raised sequentially until a uniform background was obtained. Segmentation of drusen and area measurements in the central and middle subfields (1000 μm and 3000 μm diameters) were performed by uniform thresholds. Two observers using this interactive semi-automated software measured each image digitally. The mean digital measurements were compared to independent stereo fundus gradings by two expert graders (stereo Grader 1 estimated the drusen percentage in each of the 24 regions as falling into one of four standard broad ranges; stereo Grader 2 estimated drusen percentages in 1% to 5% intervals). RESULTS: The mean digital area measurements had a median standard deviation of 1.9%. The mean digital area measurements agreed with stereo Grader 1 in 22/24 cases. The 95% limits of agreement between the mean digital area measurements and the more precise stereo gradings of Grader 2 were -6.4% to +6.8% in the central subfield and -6.0% to +4.5% in the middle subfield. The mean absolute differences between the digital measurements and the gradings of stereo Grader 2 were 2.8 ± 3.4% in the central subfield and 2.2 ± 2.7% in the middle subfield. CONCLUSIONS: Semi-automated, supervised drusen measurements may be done reproducibly and accurately with adaptations of commercial software. This technique for macular image analysis has potential for use in clinical research.
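    The leveling-and-thresholding pipeline in the METHODS paragraph can be sketched in a few lines. The routine below is a rough approximation with assumed inputs (a green-channel array, an ellipse centre and a list of semi-axes); the clinical software's actual elliptical model, zone definitions and thresholds are not reproduced.

```python
# Sketch: raise concentric elliptical zones of the green channel to a common
# background level, then segment drusen in a subfield with a uniform threshold.
import numpy as np

def level_background(green, cx, cy, semi_axes):
    """green: 2-D green-channel image; semi_axes: increasing list of (a, b) in pixels."""
    H, W = green.shape
    y, x = np.mgrid[0:H, 0:W]
    leveled = green.astype(float).copy()
    zones, prev = [], np.zeros((H, W), bool)
    for a, b in semi_axes:                       # split image into elliptical rings
        inside = ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0
        zones.append(inside & ~prev)
        prev = inside
    target = max(np.median(leveled[z]) for z in zones if z.any())
    for z in zones:                              # raise each ring's background level
        if z.any():
            leveled[z] += target - np.median(leveled[z])
    return leveled

def drusen_area_percent(leveled, subfield_mask, threshold):
    """Percentage of subfield pixels brighter than the uniform threshold."""
    drusen = (leveled > threshold) & subfield_mask
    return 100.0 * drusen.sum() / subfield_mask.sum()
```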

    Mobile learning: benefits of augmented reality in geometry teaching

    As a consequence of the technological advances and the widespread use of mobile devices to access information and communication in the last decades, mobile learning has become a spontaneous learning model, providing more flexible and collaborative technology-based learning. Thus, mobile technologies can create new opportunities for enhancing the pupils’ learning experiences. This paper presents the development of a game to assist teaching and learning, aiming to help students acquire knowledge in the field of geometry. The game was intended to develop the following competences in primary school learners (8-10 years): a better visualization of geometric objects on a plane and in space; understanding of the properties of geometric solids; and familiarization with the vocabulary of geometry. Findings show that, by using the game, students improved their rate of correct responses in classifying and differentiating edges, vertices and faces of 3D solids by around 35%.
    This research was supported by the Arts and Humanities Research Council Design Star CDT (AH/L503770/1), the Portuguese Foundation for Science and Technology (FCT) projects LARSyS (UID/EEA/50009/2013) and CIAC-Research Centre for Arts and Communication.

    Unsupervised Deep Single-Image Intrinsic Decomposition using Illumination-Varying Image Sequences

    Machine learning based Single Image Intrinsic Decomposition (SIID) methods decompose a captured scene into its albedo and shading images by using the knowledge of a large set of known and realistic ground truth decompositions. Collecting and annotating such a dataset is an approach that cannot scale to sufficient variety and realism. We free ourselves from this limitation by training on unannotated images. Our method leverages the observation that two images of the same scene but with different lighting provide useful information on their intrinsic properties: by definition, albedo is invariant to lighting conditions, and cross-combining the estimated albedo of a first image with the estimated shading of a second one should lead back to the second one's input image. We transcribe this relationship into a siamese training scheme for a deep convolutional neural network that decomposes a single image into albedo and shading. The siamese setting allows us to introduce a new loss function including such cross-combinations, and to train solely on (time-lapse) images, discarding the need for any ground truth annotations. As a result, our method has the good properties of i) taking advantage of the time-varying information of image sequences in the (pre-computed) training step, ii) not requiring ground truth data to train on, and iii) being able to decompose single images of unseen scenes at runtime. To demonstrate and evaluate our work, we additionally propose a new rendered dataset containing illumination-varying scenes and a set of quantitative metrics to evaluate SIID algorithms. Despite its unsupervised nature, our results compete with state-of-the-art methods, including supervised and non data-driven methods.
    Comment: To appear in Pacific Graphics 201
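    The cross-combination constraint described above translates directly into a loss. A minimal sketch of the siamese objective follows, assuming a network `net` that maps an image to an (albedo, shading) pair; the paper's weighting and any additional regularizers are omitted.

```python
# Sketch of the siamese cross-combination loss for two views of one scene
# under different lighting (no ground-truth decompositions required).
import torch
import torch.nn.functional as F

def siamese_siid_loss(net, img1, img2):
    a1, s1 = net(img1)
    a2, s2 = net(img2)
    # Albedo is lighting-invariant, so both estimates should agree.
    loss = F.l1_loss(a1, a2)
    # Each decomposition must reconstruct its own input image...
    loss = loss + F.l1_loss(a1 * s1, img1) + F.l1_loss(a2 * s2, img2)
    # ...and cross-combining the albedo of one image with the shading of the
    # other should reproduce the image whose shading was used.
    loss = loss + F.l1_loss(a1 * s2, img2) + F.l1_loss(a2 * s1, img1)
    return loss
```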

    Segmentation of the left ventricle of the heart in 3-D+t MRI data using an optimized nonrigid temporal model

    Modern medical imaging modalities provide large amounts of information in both the spatial and temporal domains, and the incorporation of this information in a coherent algorithmic framework is a significant challenge. In this paper, we present a novel and intuitive approach to combine 3-D spatial and temporal (3-D + time) magnetic resonance imaging (MRI) data in an integrated segmentation algorithm to extract the myocardium of the left ventricle. A novel level-set segmentation process is developed that simultaneously delineates and tracks the boundaries of the left ventricle muscle. By encoding prior knowledge about cardiac temporal evolution in a parametric framework, an expectation-maximization algorithm optimally tracks the myocardial deformation over the cardiac cycle. The expectation step deforms the level-set function while the maximization step updates the prior temporal model parameters to perform the segmentation in a nonrigid sense.
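    The alternation described in the last sentence follows the usual expectation-maximization pattern. The skeleton below only illustrates that structure, with placeholder `deform_level_set` and `fit_temporal_model` callables standing in for the paper's level-set evolution and parametric temporal prior.

```python
# EM-style skeleton for 3-D+t left-ventricle segmentation (structure only).
def segment_cardiac_cycle(frames, phi0, deform_level_set, fit_temporal_model,
                          n_iters=10):
    """frames: list of 3-D volumes over the cycle; phi0: initial level-set function."""
    phis = [phi0.copy() for _ in frames]          # one level set per cardiac phase
    theta = fit_temporal_model(phis)              # initial temporal prior parameters
    for _ in range(n_iters):
        # E-step: evolve each frame's level set under its image term and the
        # temporal prior predicted by the current parameters theta.
        phis = [deform_level_set(phi, frame, theta, t)
                for t, (phi, frame) in enumerate(zip(phis, frames))]
        # M-step: refit the temporal model so it best explains the new contours.
        theta = fit_temporal_model(phis)
    return phis, theta
```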

    Live User-guided Intrinsic Video For Static Scenes

    We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes and editing of material appearance.
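    Re-projecting constraints stored on the 3-D reconstruction into new camera views is what makes the strokes persistent across frames. A simplified sketch under a pinhole-camera assumption is given below; the system's actual constraint storage, fusion and occlusion handling are not reproduced.

```python
# Sketch: project 3-D stroke constraints into a target view to get a 2-D label map.
import numpy as np

def reproject_constraints(points_3d, labels, K, world_to_cam, image_shape):
    """points_3d: Nx3 world-space constraint points; labels: N stroke labels (>= 1).
    K: 3x3 intrinsics; world_to_cam: 4x4 extrinsics. Returns an HxW label map (0 = free)."""
    H, W = image_shape
    R, t = world_to_cam[:3, :3], world_to_cam[:3, 3]
    cam = points_3d @ R.T + t                          # world -> camera coordinates
    in_front = cam[:, 2] > 1e-6                        # keep points in front of the camera
    proj = (cam[in_front] / cam[in_front, 2:3]) @ K.T  # perspective projection
    u = np.round(proj[:, 0]).astype(int)
    v = np.round(proj[:, 1]).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    label_map = np.zeros((H, W), dtype=int)
    label_map[v[valid], u[valid]] = np.asarray(labels)[in_front][valid]
    return label_map
```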