
    AFFECT-PRESERVING VISUAL PRIVACY PROTECTION

    The prevalence of wireless networks and the convenience of mobile cameras enable many new video applications beyond security and entertainment. From behavioral diagnosis to wellness monitoring, cameras are increasingly used for observation in various educational and medical settings. Videos collected for such applications are considered protected health information under privacy laws in many countries. Visual privacy protection techniques, such as blurring or object removal, can be used to mitigate privacy concerns, but they also obliterate important visual cues of affect and social behavior that are crucial for the target applications. In this dissertation, we propose to balance privacy protection and the utility of the data by preserving privacy-insensitive information, such as pose and expression, which is useful in many applications involving visual understanding. The Intellectual Merits of the dissertation include a novel framework for visual privacy protection that manipulates the facial images and body shapes of individuals and: (1) conceals the identity of individuals; (2) provides a way to preserve the utility of the data, such as expression and pose information; and (3) balances the utility of the data against the capacity of the privacy protection. The Broader Impacts of the dissertation focus on the significance of privacy protection for visual data and the inadequacy of current privacy-enhancing technologies in preserving the affect and behavioral attributes of visual content, which are highly useful for behavior observation in educational and medical settings. The work in this dissertation represents one of the first attempts to achieve both goals simultaneously.
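
    As a rough illustration of the privacy/utility trade-off described above (not the dissertation's actual framework, which manipulates facial appearance and body shape with learned models), the Python sketch below pixelates detected face regions while retaining only coarse, privacy-insensitive metadata for downstream analysis. The OpenCV Haar-cascade detector and the returned bounding boxes are stand-ins for the richer pose and expression cues preserved by the proposed method.

        # Hypothetical illustration only: obscure identity in the face region while
        # keeping coarse utility information (face location/size as a stand-in for
        # pose/expression cues). Not the dissertation's method.
        import cv2

        def anonymize_frame(frame, pixel_size=16):
            """Return (protected_frame, utility_metadata)."""
            cascade = cv2.CascadeClassifier(
                cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

            utility = []  # privacy-insensitive information retained for analysis
            out = frame.copy()
            for (x, y, w, h) in faces:
                utility.append({"bbox": (int(x), int(y), int(w), int(h))})
                roi = out[y:y + h, x:x + w]
                # Pixelate: downsample then upsample so identity cues are destroyed.
                small = cv2.resize(roi, (pixel_size, pixel_size),
                                   interpolation=cv2.INTER_LINEAR)
                out[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                                   interpolation=cv2.INTER_NEAREST)
            return out, utility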

    Deep learning of curvature features for shape completion

    The paper presents a novel solution to the issue of incomplete regions in 3D meshes obtained through digitization. Traditional methods for estimating the surface of missing geometry and topology often yield unrealistic outcomes for intricate surfaces. To overcome this limitation, the paper proposes a neural network-based approach that generates points in areas where geometric information is lacking. The method employs 2D inpainting techniques on color images obtained from the original mesh parameterization and curvature values. The network used in this approach can reconstruct the curvature image, which then serves as a reference for generating a polygonal surface that closely resembles the predicted one. The paper’s experiments show that the proposed method effectively fills complex holes in 3D surfaces with a high degree of naturalness and detail. This paper improves on the previous work with a more in-depth explanation of the different stages of the approach, as well as an extended results section with exhaustive experiments. Funding: Spanish Ministry of Science and Technology under projects PID2020-119478GB-I00 and TED2021-132702B-C21, MCIN/AEI/10.13039/501100011033, and the European Regional Development Fund (ERDF).
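
    A minimal sketch of the inpainting stage's data flow, assuming the mesh has already been parameterized into a 2D curvature image with a binary hole mask. The paper trains a neural network for this step; the placeholder below substitutes OpenCV's classical inpainting purely to illustrate how missing curvature values could be filled before surface generation.

        # Stand-in for the learned inpainting stage, under the assumption that
        # curv_img (per-pixel curvature from the mesh parameterization) and
        # hole_mask (1 where geometry is missing) are already available.
        import numpy as np
        import cv2

        def complete_curvature_image(curv_img: np.ndarray, hole_mask: np.ndarray) -> np.ndarray:
            """Fill missing curvature values; returns a curvature image of the same shape."""
            lo, hi = np.nanmin(curv_img), np.nanmax(curv_img)
            # Normalize to 8-bit, since cv2.inpaint expects 8-bit input images.
            img8 = (255 * (np.nan_to_num(curv_img, nan=lo) - lo) / max(hi - lo, 1e-9)).astype(np.uint8)
            mask8 = (hole_mask > 0).astype(np.uint8) * 255
            filled8 = cv2.inpaint(img8, mask8, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
            # Map back to the original curvature range for later surface generation.
            return lo + (hi - lo) * filled8.astype(np.float32) / 255.0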

    Yield prediction by machine learning from UAS‑based multi‑sensor data fusion in soybean

    Nowadays, automated phenotyping of plants is essential for precise and cost-effective improvement of the efficiency of crop genetics. In recent years, machine learning (ML) techniques have shown great success in the classification and modelling of crop parameters. In this research, we consider the capability of ML to perform grain yield prediction in soybeans by combining data from different optical sensors via RF (Random Forest) and XGBoost (eXtreme Gradient Boosting). During the 2018 growing season, a panel of 382 soybean recombinant inbred lines was evaluated in a yield trial at the Agronomy Center for Research and Education (ACRE) in West Lafayette (Indiana, USA). Images were acquired by the Parrot Sequoia multispectral sensor and the S.O.D.A. compact digital camera on board a senseFly eBee UAS (Unmanned Aircraft System) at the R4 and early R5 growth stages. Next, a standard photogrammetric pipeline was carried out via SfM (Structure from Motion). The multispectral imagery was used to analyse the spectral response of the soybean end-member in 2D. In addition, the RGB images were used to reconstruct the study area in 3D, evaluating the physiological growth dynamics per plot via height variations and crop volume estimations. As ground truth, destructive grain yield measurements were taken at the end of the growing season. Funding: "Development of Analytical Tools for Drone-based Canopy Phenotyping in Crop Breeding" (American Institute of Food and Agriculture).
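
    A hedged sketch of the modelling step only, assuming the UAS-derived features (spectral indices, canopy height, crop volume) have already been fused into one table per plot. The file name and column names below are placeholders, not the study's actual data layout.

        # Plot-level yield prediction from fused UAS features with the two models
        # named in the abstract. "plot_features.csv" and its columns are hypothetical.
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score
        from xgboost import XGBRegressor

        df = pd.read_csv("plot_features.csv")             # one row per breeding plot
        X = df.drop(columns=["plot_id", "yield_kg_ha"])   # fused multispectral + 3D canopy features
        y = df["yield_kg_ha"]                             # destructive ground-truth yield

        models = {
            "RF": RandomForestRegressor(n_estimators=500, random_state=0),
            "XGBoost": XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=4),
        }
        for name, model in models.items():
            r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
            print(f"{name}: mean CV R^2 = {r2.mean():.3f}")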

    Ghost on the Shell: An Expressive Representation of General 3D Shapes

    The creation of photorealistic virtual worlds requires the accurate modeling of 3D surface geometry for a wide range of objects. For this, meshes are appealing since they 1) enable fast physics-based rendering with realistic material and lighting, 2) support physical simulation, and 3) are memory-efficient for modern graphics pipelines. Recent work on reconstructing and statistically modeling 3D shape, however, has critiqued meshes as being topologically inflexible. To capture a wide range of object shapes, any 3D representation must be able to model solid, watertight shapes as well as thin, open surfaces. Recent work has focused on the former, and methods for reconstructing open surfaces do not support fast reconstruction with material and lighting or unconditional generative modelling. Inspired by the observation that open surfaces can be seen as islands floating on watertight surfaces, we parameterize open surfaces by defining a manifold signed distance field on watertight templates. With this parameterization, we further develop a grid-based and differentiable representation that parameterizes both watertight and non-watertight meshes of arbitrary topology. Our new representation, called Ghost-on-the-Shell (G-Shell), enables two important applications: differentiable rasterization-based reconstruction from multiview images and generative modelling of non-watertight meshes. We empirically demonstrate that G-Shell achieves state-of-the-art performance on non-watertight mesh reconstruction and generation tasks, while also performing effectively for watertight meshes. Comment: Technical Report (26 pages, 16 figures; project page: https://gshell3d.github.io/).
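
    To make the "islands floating on watertight surfaces" intuition concrete, the sketch below extracts an open sub-mesh from a watertight template by thresholding a per-vertex manifold signed distance field. This shows only the non-differentiable core idea with placeholder arrays; it is not G-Shell's grid-based differentiable representation.

        # Assumed inputs: a watertight template mesh (vertices, faces) and a manifold
        # signed distance value per vertex (msdf <= 0 marks the open "island" to keep).
        import numpy as np

        def extract_open_surface(vertices: np.ndarray, faces: np.ndarray, msdf: np.ndarray):
            """
            vertices: (V, 3) watertight template mesh vertices
            faces:    (F, 3) triangle indices
            msdf:     (V,)   manifold signed distance per vertex
            Returns the open sub-mesh (kept vertices, reindexed faces).
            """
            keep_face = np.all(msdf[faces] <= 0.0, axis=1)   # faces fully inside the island
            kept_faces = faces[keep_face]
            used = np.unique(kept_faces)
            remap = -np.ones(len(vertices), dtype=np.int64)  # old index -> new index
            remap[used] = np.arange(len(used))
            return vertices[used], remap[kept_faces]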

    Multimodal breast imaging: Registration, visualization, and image synthesis

    The benefit of registration and fusion of functional images with anatomical images is well appreciated with the advent of combined positron emission tomography and x-ray computed tomography scanners (PET/CT). This is especially true in breast cancer imaging, where modalities such as high-resolution and dynamic contrast-enhanced magnetic resonance imaging (MRI) and F-18-FDG positron emission tomography (PET) have steadily gained acceptance in addition to x-ray mammography, the primary detection tool. The increased interest in combined PET/MRI images has fueled the demand for appropriate registration and fusion algorithms. A new approach to MRI-to-PET non-rigid breast image registration was developed and evaluated based on the locations of a small number of fiducial skin markers (FSMs) visible in both modalities. The observed FSM displacement vectors between MRI and PET, distributed piecewise linearly over the breast volume, produce a deformed finite-element mesh that reasonably approximates the non-rigid deformation of the breast tissue between the MRI and PET scans. The method does not require a biomechanical breast tissue model, and it is robust and fast. The method was evaluated both qualitatively and quantitatively on patients and on a deformable breast phantom, and it yields quality images with an average target registration error (TRE) below 4 mm. The importance of appropriately jointly displaying (i.e., fusing) the registered images has often been neglected and underestimated. A combined MRI/PET image has the benefit of directly showing the spatial relationships between the two modalities, increasing the sensitivity, specificity, and accuracy of diagnosis. Additional information on the morphology and dynamic behavior of a suspicious lesion can be provided, allowing more accurate lesion localization, including mapping of hyper- and hypo-metabolic regions, as well as better lesion-boundary definition, improving accuracy when grading the breast cancer and assessing the need for biopsy. Eight promising fusion-for-visualization techniques were evaluated by radiologists from University Hospital in Syracuse, NY. Preliminary results indicate that the radiologists were better able to perform a series of tasks when reading the fused PET/MRI data sets using color tables generated by a newly developed genetic algorithm, as compared to other commonly used schemes. The lack of a known ground truth hinders the development and evaluation of new algorithms for tasks such as registration and classification. A preliminary mesh-based breast phantom containing 12 distinct tissue classes, along with the tissue properties necessary for the simulation of dynamic positron emission tomography scans, was created. The phantom contains multiple components which can be separately manipulated, using geometric transformations, to represent populations or a single individual imaged in multiple positions. This phantom will support future multimodal breast imaging work.
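
    A simplified sketch of the marker-driven warping idea, assuming corresponding fiducial skin marker (FSM) coordinates are available in both modalities. SciPy's piecewise-linear interpolator stands in for the dissertation's deformed finite-element mesh, and all arrays are placeholders.

        # Interpolate the displacement observed at the FSMs across the breast volume
        # and apply it to MRI-space points. Not the dissertation's FEM-based method.
        import numpy as np
        from scipy.interpolate import LinearNDInterpolator

        def warp_points(mri_fsm, pet_fsm, points):
            """
            mri_fsm, pet_fsm: (N, 3) corresponding marker positions in MRI and PET space.
            points:           (M, 3) MRI-space points to map into PET space.
            """
            disp = pet_fsm - mri_fsm                 # observed FSM displacement vectors
            interp = LinearNDInterpolator(mri_fsm, disp, fill_value=0.0)  # piecewise linear;
            return points + interp(points)           # zero shift outside the marker hull

        # Target registration error at held-out landmarks (placeholder arrays):
        # tre = np.linalg.norm(warp_points(mri_fsm, pet_fsm, mri_landmarks) - pet_landmarks, axis=1)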

    A neural network-based exploratory learning and motor planning system for co-robots

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system, we used the 11-degree-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
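
    The toy sketch below illustrates exploratory learning on a 2-DOF planar arm as a stand-in for the 11-DOF Calliope: random motor babbling produces (joint angle, hand position) pairs, and a small neural network learns the inverse mapping used for reaching. The link lengths, network size, and goal are arbitrary illustrative choices, not values from the paper.

        # "Learning by doing" on a toy planar arm: babble, observe, fit an inverse model.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        L1, L2 = 1.0, 0.8                              # link lengths (arbitrary units)

        def forward(q):
            """Simulated vision: hand position for joint angles q of shape (N, 2)."""
            x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
            y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
            return np.column_stack([x, y])

        # Motor babbling: random joint commands and the resulting hand positions.
        # The elbow bends in one direction only, so the inverse map stays single-valued.
        q_babble = np.column_stack([rng.uniform(-np.pi / 2, np.pi / 2, 5000),
                                    rng.uniform(0.1, np.pi - 0.1, 5000)])
        xy_babble = forward(q_babble)

        # Internal model learned from babbled data: hand position -> joint angles.
        inverse_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
        inverse_model.fit(xy_babble, q_babble)

        # Motor planning: query the model for joint angles that reach a goal position.
        goal = np.array([[1.2, 0.5]])
        q_goal = inverse_model.predict(goal)
        print("reach error:", float(np.linalg.norm(forward(q_goal) - goal)))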