
    A model-based approach to recovering the structure of a plant from images

    We present a method for recovering the structure of a plant directly from a small set of widely-spaced images. Structure recovery is more complex than shape estimation, but the resulting structure estimate is more closely related to phenotype than is a 3D geometric model. The method we propose is applicable to a wide variety of plants, but is demonstrated on wheat. Wheat is made up of thin elements with few identifiable features, making it difficult to analyse using standard feature matching techniques. Our method instead analyses the structure of plants using only their silhouettes. We employ a generate-and-test method, using a database of manually modelled leaves and a model for their composition to synthesise plausible plant structures which are evaluated against the images. The method is capable of efficiently recovering accurate estimates of plant structure in a wide variety of imaging scenarios, with no manual intervention.
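The generate-and-test strategy described above can be sketched in a few lines: synthesise candidate structures from a leaf database, score each against the observed silhouettes, and keep the best. This is a minimal toy sketch, not the paper's method; the function names, the pixel-set representation of silhouettes, and the two-leaf composition rule are all illustrative assumptions.

```python
import random

def silhouette_error(candidate, silhouettes):
    """Score a candidate structure against observed silhouettes.

    Here both are toy 2D pixel sets; the error is the symmetric
    set difference summed over all views.
    """
    return sum(len(candidate ^ view) for view in silhouettes)

def generate_and_test(leaf_database, silhouettes, n_candidates=100, seed=0):
    """Return the synthesised candidate with the lowest silhouette error."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_candidates):
        # Compose a plausible plant from randomly chosen database leaves
        # (a stand-in for the paper's composition model).
        candidate = set().union(*rng.sample(leaf_database, k=2))
        err = silhouette_error(candidate, silhouettes)
        if err < best_err:
            best, best_err = candidate, err
    return best, best_err
```

In the real system the candidate would be rendered into each camera view and compared against binary silhouette images; the skeleton of the search loop is the same.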

    The Whole is Greater than the Sum of the Parts: Optimizing the Joint Science Return from LSST, Euclid and WFIRST

    The focus of this report is on the opportunities enabled by the combination of LSST, Euclid and WFIRST, the optical surveys that will be an essential part of the next decade's astronomy. The sum of these surveys has the potential to be significantly greater than the contributions of the individual parts. As is detailed in this report, the combination of these surveys should give us multi-wavelength high-resolution images of galaxies and broadband data covering much of the stellar energy spectrum. These stellar and galactic data have the potential of yielding new insights into topics ranging from the formation history of the Milky Way to the mass of the neutrino. However, enabling the astronomy community to fully exploit this multi-instrument data set is a challenging technical task: for much of the science, we will need to combine the photometry across multiple wavelengths with varying spectral and spatial resolution. We identify some of the key science enabled by the combined surveys and the key technical challenges in achieving the synergies. Comment: Whitepaper developed at the June 2014 U. Penn Workshop; 28 pages, 3 figures.

    Agent and object aware tracking and mapping methods for mobile manipulators

    The age of the intelligent machine is upon us. They exist in our factories, our warehouses, our military, our hospitals, on our roads, and on the moon. Most of these things we call robots. When placed in a controlled or known environment such as an automotive factory or a distribution warehouse, they perform their given roles with exceptional efficiency, achieving far more than is within reach of a humble human being. Despite the remarkable success of intelligent machines in such domains, they have yet to make a wholehearted deployment into our homes. The missing link between the robots we have now and the robots that are soon to come to our houses is perception. Perception as we mean it here refers to a level of understanding beyond the collection and aggregation of sensory data. Much of the available sensory information is noisy and unreliable: our homes contain many reflective surfaces, repeating textures on large flat surfaces, and many disruptive moving elements, including humans. These environments change over time, with objects frequently moving within and between rooms. This idea of change in an environment is fundamental to robotic applications, as in most cases we expect robots to be effectors of such change. We can identify two particular challenges that must be solved for robots to make the jump to less structured environments: how to manage noise and disruptive elements in observational data, and how to understand the world as a set of changeable elements (objects) which move over time within a wider environment. In this thesis we look at one possible approach to solving each of these problems. For the first challenge, we use proprioception aboard a robot with an articulated arm to handle difficult and unreliable visual data caused both by the robot and by the environment. We use sensor data aboard the robot to improve the pose tracking of a visual system when the robot moves rapidly, with high jerk, or when observing a scene with little visual variation. For the second challenge, we build a model of the world on the level of rigid objects, and relocalise them both as they change location between different sequences and as they move. We use semantics, image keypoints, and 3D geometry to register and align objects between sequences, showing how their position has moved between disparate observations.
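The first challenge above amounts to trusting proprioceptive (kinematic) data more when visual tracking is unreliable. A minimal sketch of that confidence-weighted fallback, for a single scalar pose component, is below; the function name and the scalar representation are illustrative assumptions, and the thesis's actual tracker is far more sophisticated.

```python
def fuse_pose(visual_pose, kinematic_pose, visual_confidence):
    """Blend two pose estimates by confidence.

    visual_confidence in [0, 1]: low values (fast motion, high jerk,
    little visual texture) shift weight toward the kinematic estimate
    derived from the robot's own joint sensors.
    """
    w = max(0.0, min(1.0, visual_confidence))  # clamp to [0, 1]
    return w * visual_pose + (1.0 - w) * kinematic_pose
```

A full system would apply this idea on SE(3) poses with proper uncertainty propagation (e.g. a Kalman-style filter), but the fallback principle is the same.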

    Land and cryosphere products from Suomi NPP VIIRS: overview and status

    The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument was launched in October 2011 as part of the Suomi National Polar-orbiting Partnership (S-NPP). The VIIRS instrument was designed to improve upon the capabilities of the operational Advanced Very High Resolution Radiometer and provide observation continuity with NASA's Earth Observing System's Moderate Resolution Imaging Spectroradiometer (MODIS). Since the VIIRS first-light images were received in November 2011, NASA- and NOAA-funded scientists have been working to evaluate the instrument performance and generate land and cryosphere products to meet the needs of the NOAA operational users and the NASA science community. NOAA's focus has been on refining a suite of operational products known as Environmental Data Records (EDRs), which were developed according to project specifications under the National Polar-orbiting Operational Environmental Satellite System (NPOESS). The NASA S-NPP Science Team has focused on evaluating the EDRs for science use, developing and testing additional products to meet science data needs, and providing MODIS data product continuity. This paper presents to-date findings of the NASA Science Team's evaluation of the VIIRS land and cryosphere EDRs, specifically Surface Reflectance, Land Surface Temperature, Surface Albedo, Vegetation Indices, Surface Type, Active Fires, Snow Cover, Ice Surface Temperature, and Sea Ice Characterization. The study concludes that, for MODIS data product continuity and earth system science, an enhanced suite of land and cryosphere products and associated data system capabilities are needed beyond the EDRs currently available from VIIRS.
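Of the EDRs listed above, the Vegetation Indices product is the most compact to illustrate: the standard NDVI is computed from red and near-infrared surface reflectance (for VIIRS imagery bands, red is roughly band I1 at ~0.64 µm and NIR band I2 at ~0.87 µm). This is a generic sketch of the formula, not the operational EDR algorithm, which includes quality flags and atmospheric correction.

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index from surface reflectance.

    NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]; dense vegetation
    reflects strongly in the NIR, giving values near 1.
    """
    denom = nir + red
    return (nir - red) / denom if denom != 0 else 0.0
```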

    Calibration of non-conventional imaging systems


    Photogrammetry for 3D Reconstruction in SOLIDWORKS and its Applications in Industry

    Indiana University-Purdue University Indianapolis (IUPUI)
    Close-range, image-based photogrammetry and LIDAR laser scanning are commonly utilized methodologies for capturing real objects. 3D models of already existing parts can be reconstructed by laser scanning and photogrammetry, and these 3D models are useful in applications such as quality inspection and reverse engineering. Each technique has its merits and limitations. Though laser scanners have higher accuracy, they require a higher initial investment. Close-range photogrammetry is known for its simplicity, versatility, and effective detection of complex surfaces and 3D measurement of parts, and can be initiated at a comparatively much lower initial cost with acceptable accuracy. Currently, many industries use photogrammetry for reverse engineering and quality inspection purposes. But for photogrammetric object reconstruction they rely on different software packages: commercial or open-source codes for reconstruction, and separate stand-alone software for reverse engineering and mesh deviation analysis. The problem statement for this thesis is therefore to integrate photogrammetry, reverse engineering, and deviation analysis into one state-of-the-art workflow. The objectives of this thesis are as follows: 1. Comparative study of available source codes to identify a suitable and stable code for integration, and to understand the photogrammetry methodology of that particular code. 2. Creation of a taskpane add-in using the SOLIDWORKS API to integrate the selected photogrammetry methodology and expose its parameters. 3. Demonstration of the photogrammetric workflow, followed by reverse engineering case studies to showcase the potential of the integration. 4. Parametric study of number of images versus accuracy. 5. Comparison of scan results and photogrammetry results with actual CAD data.
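The deviation analysis step mentioned above compares a reconstructed point cloud against CAD reference geometry. A crude sketch of the core computation, using nearest-neighbour point-to-point distance as a stand-in for true point-to-mesh distance, is shown below; the function names and the brute-force search are illustrative, not the thesis's implementation.

```python
import math

def deviation(scan_points, cad_points):
    """Per-point deviation of a scanned cloud from CAD reference points.

    For each scanned point, returns the Euclidean distance to its
    nearest CAD point (a simple proxy for point-to-mesh deviation).
    """
    return [min(math.dist(p, q) for q in cad_points) for p in scan_points]
```

Production tools use spatial indices (k-d trees) and signed point-to-triangle distances, but the reported colour-mapped deviation values come from exactly this kind of per-point comparison.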

    Object Tracking in Distributed Video Networks Using Multi-Dimensional Signatures

    From being an expensive toy in the hands of governmental agencies, computers have evolved a long way from the huge vacuum-tube-based machines to today's small but more than a thousand times more powerful personal computers. Computers have long been investigated as the foundation for an artificial vision system. The computer vision discipline has seen rapid development over the past few decades, from rudimentary motion detection systems to complex model-based object motion analysis algorithms. Our work is one such improvement over previous algorithms developed for the purpose of object motion analysis in video feeds. Our work is based on the principle of multi-dimensional object signatures. Object signatures are constructed from individual attributes extracted through video processing. While past work has proceeded on similar lines, the lack of a comprehensive object definition model severely restricts the application of such algorithms to controlled situations. In conditions with varying external factors, such algorithms perform less efficiently due to inherent assumptions of constancy of attribute values. Our approach assumes a variable environment where the attribute values recorded for an object are prone to variability. The variation in the accuracy of object attribute values has been addressed by incorporating weights for each attribute that vary according to local conditions at a sensor location. This ensures that attribute values with higher accuracy can be accorded more credibility in the object matching process. Variations in attribute values (such as surface color of the object) were also addressed by applying error corrections such as shadow elimination to the detected object profile. Experiments were conducted to verify our hypothesis. The results established the validity of our approach, as higher matching accuracy was obtained with our multi-dimensional approach than with a single-attribute-based comparison.
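The weighted-attribute matching idea above can be sketched compactly: each signature maps attribute names to normalised values, and per-sensor weights encode the local reliability of each attribute. This is an illustrative sketch, not the thesis's algorithm; the function name, the dictionary representation, and the absolute-difference similarity are all assumptions.

```python
def signature_similarity(sig_a, sig_b, weights):
    """Weighted similarity between two multi-dimensional object signatures.

    sig_a, sig_b: dicts mapping attribute name -> value in [0, 1].
    weights: dict mapping attribute name -> reliability weight, set
    according to local conditions at the sensor (e.g. lighting).
    Returns a score in [0, 1]; 1.0 means identical signatures.
    """
    total_w = sum(weights.values())
    score = 0.0
    for attr, w in weights.items():
        # More reliable attributes contribute more to the match score.
        score += w * (1.0 - abs(sig_a[attr] - sig_b[attr]))
    return score / total_w
```

Down-weighting an attribute such as surface colour under poor lighting then degrades a match gracefully instead of causing a hard mismatch, which is the behaviour the abstract argues for.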