
    Data analysis using scale-space filtering and Bayesian probabilistic reasoning

    This paper describes a program for analysis of output curves from a Differential Thermal Analyzer (DTA). The program first extracts probabilistic qualitative features from a DTA curve of a soil sample, and then uses Bayesian probabilistic reasoning to infer the minerals in the soil. The qualifier module employs a simple and efficient extension of scale-space filtering suitable for handling DTA data. We have observed that points can vanish from contours in the scale-space image when filtering operations are not highly accurate. To handle the problem of vanishing points, perceptual organization heuristics are used to group the points into lines. Next, these lines are grouped into contours by using additional heuristics. Probabilities are associated with these contours using domain-specific correlations. A Bayes tree classifier processes probabilistic features to infer the presence of different minerals in the soil. Experiments show that the algorithm that uses domain-specific correlation to infer qualitative features outperforms a domain-independent algorithm that does not.
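    The core idea of the qualifier module (smooth the curve at several scales, then track features across scales) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual algorithm: the synthetic curve, the choice of scales, and the use of second-derivative zero-crossings as the tracked feature are all assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def scale_space_features(curve, sigmas):
        """Smooth a 1-D curve at increasing Gaussian scales and record the
        zero-crossings of its second derivative (peak/valley boundaries),
        one list of crossing indices per scale."""
        features = []
        for sigma in sigmas:
            smoothed = gaussian_filter1d(curve, sigma)
            d2 = np.gradient(np.gradient(smoothed))
            crossings = np.where(np.diff(np.sign(d2)) != 0)[0]
            features.append(crossings.tolist())
        return features

    # A synthetic DTA-like curve: two thermal peaks of different widths.
    x = np.linspace(0, 10, 500)
    curve = np.exp(-(x - 3) ** 2) + 0.5 * np.exp(-((x - 7) ** 2) / 0.5)
    levels = scale_space_features(curve, sigmas=[1, 2, 4, 8])
    ```

    Tracking how these crossings move or vanish from fine to coarse scales yields the scale-space contours the paper groups with heuristics.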

    Few-Shot Single-View 3-D Object Reconstruction with Compositional Priors

    The impressive performance of deep convolutional neural networks in single-view 3D reconstruction suggests that these models perform non-trivial reasoning about the 3D structure of the output space. However, recent work has challenged this belief, showing that complex encoder-decoder architectures perform similarly to nearest-neighbor baselines or simple linear decoder models that exploit large amounts of per-category data in standard benchmarks. On the other hand, settings where 3D shape must be inferred for new categories with few examples are more natural and require models that generalize about shapes. In this work we demonstrate experimentally that naive baselines do not apply when the goal is to learn to reconstruct novel objects using very few examples, and that in a \emph{few-shot} learning setting, the network must learn concepts that can be applied to new categories, avoiding rote memorization. To address deficiencies in existing approaches to this problem, we propose three approaches that efficiently integrate a class prior into a 3D reconstruction model, allowing the model to account for intra-class variability and imposing an implicit compositional structure that the model should learn. Experiments on the popular ShapeNet database demonstrate that our method significantly outperforms existing baselines on this task in the few-shot setting.

    Primal-dual coding to probe light transport

    We present primal-dual coding, a photography technique that enables direct fine-grain control over which light paths contribute to a photo. We achieve this by projecting a sequence of patterns onto the scene while the sensor is exposed to light. At the same time, a second sequence of patterns, derived from the first and applied in lockstep, modulates the light received at individual sensor pixels. We show that photography in this regime is equivalent to a matrix probing operation in which the elements of the scene's transport matrix are individually re-scaled and then mapped to the photo. This makes it possible to directly acquire photos in which specific light transport paths have been blocked, attenuated or enhanced. We show captured photos for several scenes with challenging light transport effects, including specular inter-reflections, caustics, diffuse inter-reflections and volumetric scattering. A key feature of primal-dual coding is that it operates almost exclusively in the optical domain: our results consist of directly-acquired, unprocessed RAW photos or differences between them.
    Funding: Alfred P. Sloan Foundation (Research Fellowship); United States Defense Advanced Research Projects Agency (DARPA Young Faculty Award); Massachusetts Institute of Technology Media Laboratory (Consortium Members).
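    The matrix-probing equivalence stated in the abstract can be checked numerically on a toy transport matrix: accumulating diag(m_t) T l_t over the exposure equals re-scaling each entry of T by a probing matrix and summing rows. The matrix sizes, random patterns, and variable names here are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy transport matrix: T[i, j] = light reaching sensor pixel i
    # when projector pixel j is lit (hypothetical 4x4 scene).
    T = rng.random((4, 4))

    # Primal patterns l_t (projector) and dual patterns m_t (sensor mask),
    # applied in lockstep over the exposure.
    L = rng.integers(0, 2, size=(10, 4)).astype(float)  # 10 primal patterns
    M = rng.integers(0, 2, size=(10, 4)).astype(float)  # 10 dual patterns

    # Photo accumulated over the exposure: sum_t diag(m_t) @ T @ l_t
    photo = sum(m[:, None] * T @ l for m, l in zip(M, L))

    # Equivalently, a probing matrix Pi = sum_t m_t l_t^T re-scales each
    # element of T individually, and the photo is the resulting row sums.
    Pi = M.T @ L
    photo2 = (Pi * T) @ np.ones(4)
    assert np.allclose(photo, photo2)
    ```

    Choosing the primal and dual patterns so that Pi is zero on unwanted entries of T is what blocks or attenuates specific light paths optically.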

    Optical computing for fast light transport analysis


    Image-based photo hulls for fast and photo-realistic new view synthesis

    We present an efficient image-based rendering algorithm that generates views of a scene's photo hull. The photo hull is the largest 3D shape that is photo-consistent with photographs taken of the scene from multiple viewpoints. Our algorithm, image-based photo hulls (IBPH), like the image-based visual hulls (IBVH) algorithm from Matusik et al. on which it is based, takes advantage of epipolar geometry to efficiently reconstruct the geometry and visibility of a scene. Our IBPH algorithm differs from IBVH in that it utilizes the color information of the images to identify scene geometry. These additional color constraints result in more accurately reconstructed geometry, which often projects to better synthesized virtual views of the scene. We demonstrate our algorithm running in a real-time 3D telepresence application using video data acquired from multiple viewpoints.

    A patch-based approach to 3D plant shoot phenotyping

    The emerging discipline of plant phenomics aims to measure key plant characteristics, or traits, though as yet the set of plant traits that should be measured by automated systems is not well defined. Methods capable of recovering generic representations of the 3D structure of plant shoots from images would provide a key technology underpinning quantification of a wide range of current and future physiological and morphological traits. We present a fully automatic approach to image-based 3D plant reconstruction which represents plants as series of small planar sections that together model the complex architecture of leaf surfaces. The initial boundary of each leaf patch is refined using a level set method, optimising the model based on image information, curvature constraints and the position of neighbouring surfaces. The reconstruction process makes few assumptions about the nature of the plant material being reconstructed. As such it is applicable to a wide variety of plant species and topologies, and can be extended to canopy-scale imaging. We demonstrate the effectiveness of our approach on real images of wheat and rice plants, an artificial plant with challenging architecture, as well as a novel virtual dataset that allows us to compute distance measures of reconstruction accuracy. We also illustrate the method’s potential to support the identification of individual leaves, and so the phenotyping of plant shoots, using a spectral clustering approach.

    Topological evaluation of volume reconstructions by voxel carving

    Space or voxel carving [1, 4, 10, 15] is a technique for creating a three-dimensional reconstruction of an object from a series of two-dimensional images captured from cameras placed around the object at different viewing angles. However, little work has been done to date on evaluating the quality of space carving results. This paper extends the work reported in [8], where application of persistent homology was initially proposed as a tool for providing a topological analysis of the carving process along the sequence of 3D reconstructions with increasing number of cameras. We now give a more extensive treatment by: (1) developing the formal framework by which persistent homology can be applied in this context; (2) computing persistent homology of the 3D reconstructions of 66 new frames, including different poses, resolutions and camera orders; (3) studying what information about stability, topological correctness and influence of the camera orders on the carving performance can be drawn from the computed barcodes.
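    The kind of topological bookkeeping described (tracking how the reconstruction's topology evolves as cameras are added) can be sketched in miniature on a boolean occupancy grid. This sketch tracks only the number of connected components (Betti-0) rather than full persistent homology, and the grid, masks, and function names are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import label

    def carve(grid_shape, silhouette_masks):
        """Toy voxel carving: start from a full occupancy grid, intersect
        the boolean constraint contributed by each camera in turn, and
        record the number of connected components after every camera."""
        occupied = np.ones(grid_shape, dtype=bool)
        betti0 = []
        for keep in silhouette_masks:          # one boolean mask per camera
            occupied &= keep
            betti0.append(int(label(occupied)[1]))
        return betti0

    # Hypothetical 2-camera example on a 1x5x5 grid: the second camera
    # carves away a middle slab and splits the reconstruction in two.
    g = (1, 5, 5)
    cam1 = np.ones(g, dtype=bool)
    cam2 = np.ones(g, dtype=bool)
    cam2[:, :, 2] = False
    print(carve(g, [cam1, cam2]))  # → [1, 2]
    ```

    In the paper's setting, pairing such birth/death events of topological features across the camera sequence is what produces the persistence barcodes.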

    Where is cognition? Towards an embodied, situated, and distributed interactionist theory of cognitive activity

    In recent years researchers from a variety of cognitive science disciplines have begun to challenge some of the core assumptions of the dominant theoretical framework of cognitivism, including the representational-computational view of cognition, the sense-model-plan-act understanding of cognitive architecture, and the use of a formal task description strategy for investigating the organisation of internal mental processes. Challenges to these assumptions are illustrated using empirical findings and theoretical arguments from fields such as situated robotics, dynamical systems approaches to cognition, situated action and distributed cognition research, and sociohistorical studies of cognitive development. Several shared themes are extracted from the findings in these research programmes, including: a focus on agent-environment systems as the primary unit of analysis; an attention to agent-environment interaction dynamics; a vision of the cognizer's internal mechanisms as essentially reactive and decentralised in nature; and a tendency for mutual definitions of agent, environment, and activity. It is argued that, taken together, these themes signal the emergence of a new approach to cognition called embodied, situated, and distributed interactionism. This interactionist alternative has many resonances with the dynamical systems approach to cognition. However, that approach does not provide a theory of the implementing substrate sufficient for an interactionist theoretical framework. It is suggested that such a theory can be found in a view of animals as autonomous systems, coupled with a portrayal of the nervous system as a regulatory, coordinative, and integrative bodily subsystem.
Although a number of recent simulations show connectionism's promise as a computational technique for simulating the role of the nervous system from an interactionist perspective, this embodied connectionist framework does not lend itself to understanding the advanced 'representation-hungry' cognition we witness in much human behaviour. It is argued that this problem can be solved by understanding advanced cognition as the re-use of basic perception-action skills and structures, and that this feat is enabled by a general education within a social symbol-using environment.