
    Period Integrals of CY and General Type Complete Intersections

    We develop a global Poincaré residue formula to study period integrals of families of complex manifolds. For any compact complex manifold $X$ equipped with a linear system $V^*$ of generically smooth CY hypersurfaces, the formula expresses period integrals in terms of a canonical global meromorphic top form on $X$. Two important ingredients of our construction are the notion of a CY principal bundle and a classification of such rank one bundles. We also generalize our construction to CY and general type complete intersections. When $X$ is an algebraic manifold with a sufficiently large automorphism group $G$ and $V^*$ is a linear representation of $G$, we construct a holonomic D-module that governs the period integrals. The construction is based in part on the theory of tautological systems developed in \cite{LSY1}, joint with R. Song. The approach allows us to explicitly describe a Picard-Fuchs type system for complete intersection varieties of general type, as well as CY, in any Fano variety, and in a homogeneous space in particular. In addition, the approach provides a new perspective on old examples such as CY complete intersections in a toric variety or partial flag variety.
    Comment: An erratum is included to correct Theorem 3.12 (Uniqueness of CY structure).
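    As a hedged illustration of the classical local picture that this abstract's global formula generalizes (the notation $\Omega$ and the normalization here are assumptions, not taken from the paper):

    ```latex
    % For a smooth CY hypersurface Y_f = \{ f = 0 \} \subset X cut out by a
    % section f of the anticanonical bundle, the period integrals take the form
    \Pi_\gamma(f) \;=\; \int_{\gamma} \operatorname{Res}\!\left(\frac{\Omega}{f}\right),
    \qquad \gamma \in H_{n-1}(Y_f,\mathbb{Z}),
    % where \Omega is a meromorphic top form on X with first-order pole along Y_f
    % and Res denotes the Poincaré residue.
    ```

    The content of the global formula is that such an $\Omega$ can be produced canonically on $X$ itself, rather than chart by chart.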

    Randomizable phenology-dependent corn canopy for simulated remote sensing of agricultural scenes

    Crop health assessment and yield prediction from multi-spectral remote sensing imagery are ongoing areas of interest in precision agriculture. It is in these contexts that simulation-based techniques are useful to investigate system parameters, perform preliminary experiments, etc., because remote sensing systems can be prohibitively expensive to design, deploy, and operate. However, such techniques require realistic and reliable models of the real world. We thus present a randomizable time-dependent model of corn (Zea mays L.) canopy, which is suitable for procedural generation of high-fidelity virtual corn fields at any time in the vegetative growth phase, with application to simulated remote sensing of agricultural scenes. This model unifies a physiological description of corn growth subject to environmental factors with a parametric description of corn canopy geometry, and prioritizes computational efficiency in the context of ray tracing for light transport simulation. We provide a reference implementation in C++, which includes a software plug-in for the 5th edition of the Digital Imaging and Remote Sensing Image Generation tool (DIRSIG5), in order to make simulation of agricultural scenes more readily accessible. For validation, we use our DIRSIG5 plug-in to simulate multi-spectral images of virtual corn plots that correspond to those of a United States Department of Agriculture (USDA) site at the Beltsville Agricultural Research Center (BARC), where reference data were collected in the summer of 2018. We show in particular that 1) the canopy geometry as a function of time is in agreement with field measurements, and 2) the radiance predicted by a DIRSIG5 simulation of the virtual corn plots is in agreement with radiance-calibrated imagery collected by a drone-mounted MicaSense RedEdge imaging system. 
    We lastly remark that DIRSIG5 is able to simulate imagery directly as digital counts, given detailed knowledge of the detector array, e.g., quantum efficiency, read noise, and well capacity. It is therefore feasible to investigate the parameter space of a remote sensing system via “end-to-end” simulation.
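    A minimal sketch of the kind of randomizable, phenology-dependent canopy parameterization the abstract describes. All function names, constants (growing degree days per leaf, jitter range), and the bell-shaped leaf-length profile are illustrative assumptions, not the paper's actual physiological model:

    ```python
    import math
    import random

    def collared_leaf_count(gdd, gdd_per_leaf=52.0, max_leaves=20):
        """Collared leaves as a function of accumulated growing degree days
        (illustrative linear phenology; real constants will differ)."""
        return min(max_leaves, int(gdd / gdd_per_leaf))

    def sample_plant(gdd, rng):
        """Draw one randomized corn plant: per-leaf length and azimuth."""
        n = collared_leaf_count(gdd)
        leaves = []
        for rank in range(1, n + 1):
            # Leaf length peaks at mid-rank: a bell-shaped profile along the stalk.
            base_len = 0.8 * math.exp(-((rank - n / 2) ** 2) / (2 * (n / 4 + 1e-6) ** 2))
            length = base_len * rng.uniform(0.9, 1.1)   # +/-10% random jitter
            azimuth = rng.uniform(0.0, 360.0)           # leaf orientation, degrees
            leaves.append({"rank": rank, "length_m": round(length, 3),
                           "azimuth_deg": round(azimuth, 1)})
        return leaves

    rng = random.Random(42)                 # seeded, so fields are reproducible
    plant = sample_plant(gdd=520.0, rng=rng)
    print(len(plant))  # 10 leaves at 520 GDD with 52 GDD per collared leaf
    ```

    Seeding the generator lets an entire virtual field be regenerated deterministically, which matters when the same scene must be re-rendered across many simulated sensor configurations.
    
    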

    The Decomposition Theorem and the topology of algebraic maps

    We give a motivated introduction to the theory of perverse sheaves, culminating in the Decomposition Theorem of Beilinson, Bernstein, Deligne and Gabber. A goal of this survey is to show how the theory develops naturally from classical constructions used in the study of topological properties of algebraic varieties. While most proofs are omitted, we discuss several approaches to the Decomposition Theorem and indicate some important applications and examples.
    Comment: 117 pages. New title. Major structure changes. Final version of a survey to appear in the Bulletin of the AMS.

    SSR-2D: Semantic 3D Scene Reconstruction from 2D Images

    Most deep learning approaches to comprehensive semantic modeling of 3D indoor spaces require costly dense annotations in the 3D domain. In this work, we explore a central 3D scene modeling task, namely, semantic scene reconstruction without using any 3D annotations. The key idea of our approach is to design a trainable model that employs both incomplete 3D reconstructions and their corresponding source RGB-D images, fusing cross-domain features into volumetric embeddings to predict complete 3D geometry, color, and semantics with only 2D labeling, which can be either manual or machine-generated. Our key technical innovation is to leverage differentiable rendering of color and semantics to bridge 2D observations and unknown 3D space, using the observed RGB images and 2D semantics as supervision, respectively. We additionally develop a learning pipeline and corresponding method to enable learning from imperfect predicted 2D labels; such labels can also be acquired by synthesizing an augmented set of virtual training views that complement the original real captures, enabling a more efficient self-supervision loop for semantics. In summary, we propose an end-to-end trainable solution jointly addressing geometry completion, colorization, and semantic mapping from limited RGB-D images, without relying on any 3D ground-truth information. Our method achieves state-of-the-art semantic scene reconstruction performance on two large-scale benchmark datasets, Matterport3D and ScanNet, surpassing baselines even when they use costly 3D annotations. To our knowledge, our method is also the first 2D-driven method addressing completion and semantic segmentation of real-world 3D scans.
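    The core supervision idea, rendering 3D semantic predictions into a 2D view and comparing against 2D labels, can be sketched in miniature. This toy uses a trivial "first voxel hit per ray" renderer and plain cross-entropy; the paper's actual differentiable renderer and loss are more involved, and every name here is an illustrative assumption:

    ```python
    import math

    def render_semantics(vox_probs, ray_vox):
        # Each pixel takes the class distribution of the voxel its ray hits
        # (toy stand-in for a differentiable semantic renderer).
        return [vox_probs[v] for v in ray_vox]

    def semantic_2d_loss(vox_probs, ray_vox, labels_2d):
        # Cross-entropy between rendered semantics and 2D labels:
        # the 3D volume is supervised from 2D annotations alone.
        pix = render_semantics(vox_probs, ray_vox)
        nll = [-math.log(max(p[y], 1e-8)) for p, y in zip(pix, labels_2d)]
        return sum(nll) / len(nll)

    # Two voxels with three-class probabilities; two pixels whose rays
    # hit voxel 0 and voxel 1 respectively.
    vox_probs = [[0.7, 0.2, 0.1],
                 [0.1, 0.8, 0.1]]
    ray_vox   = [0, 1]
    labels_2d = [0, 1]        # ground-truth 2D semantic labels
    loss = semantic_2d_loss(vox_probs, ray_vox, labels_2d)
    print(round(loss, 3))
    ```

    Because the rendering step is differentiable in the real system, the gradient of this 2D loss flows back into the volumetric embedding, which is what removes the need for 3D ground truth.
    
    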

    Learning Sequential Acquisition Policies for Robot-Assisted Feeding

    Full text link
    A robot providing mealtime assistance must perform specialized maneuvers with various utensils in order to pick up and feed a range of food items. Beyond these dexterous low-level skills, an assistive robot must also plan these strategies in sequence over a long horizon to clear a plate and complete a meal. Previous methods in robot-assisted feeding introduce highly specialized primitives for food handling without a means to compose them together. Meanwhile, existing approaches to long-horizon manipulation lack the flexibility to embed highly specialized primitives into their frameworks. We propose Visual Action Planning OveR Sequences (VAPORS), a framework for long-horizon food acquisition. VAPORS learns a policy for high-level action selection by leveraging learned latent plate dynamics in simulation. To carry out sequential plans in the real world, VAPORS delegates action execution to visually parameterized primitives. We validate our approach on complex real-world acquisition trials involving noodle acquisition and bimanual scooping of jelly beans. Across 38 plates, VAPORS acquires food much more efficiently than baselines, generalizes across realistic plate variations such as toppings and sauces, and qualitatively appeals to user feeding preferences in a survey conducted across 49 individuals. Code, datasets, videos, and supplementary materials can be found on our website: https://sites.google.com/view/vaporsbot
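    The high-level planning loop described above, picking a food-handling primitive by querying a learned dynamics model of the plate, can be sketched as follows. The primitive names, the scalar "remaining food" state, and the hand-coded gains are all illustrative stand-ins for VAPORS' learned latent dynamics and visually parameterized primitives:

    ```python
    PRIMITIVES = ["twirl", "group", "scoop"]   # hypothetical primitive names

    def predict_next_state(state, action):
        """Toy dynamics model: each primitive removes a fixed fraction of the
        remaining food (stand-in for learned latent plate dynamics)."""
        gain = {"twirl": 0.30, "group": 0.10, "scoop": 0.45}[action]
        return max(0.0, state - gain * state)

    def select_action(state):
        """Greedy high-level policy: choose the primitive whose predicted
        next state leaves the least food on the plate."""
        return min(PRIMITIVES, key=lambda a: predict_next_state(state, a))

    state = 1.0        # fraction of food remaining
    plan = []
    while state > 0.05:
        action = select_action(state)
        plan.append(action)              # high-level plan; execution is
        state = predict_next_state(state, action)  # delegated to primitives
    print(plan)
    ```

    In the real system the dynamics are learned in simulation and the state is a latent visual embedding, so the same select-predict loop can trade off primitives (e.g., grouping scattered food before scooping) rather than greedily repeating one skill.
    
    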
