
    Interactive inspection of complex multi-object industrial assemblies

    The final publication is available at Springer via http://dx.doi.org/10.1016/j.cad.2016.06.005
    The use of virtual prototypes and digital models containing thousands of individual objects is commonplace in complex industrial applications such as the cooperative design of large ships. Designers need to select and edit specific sets of objects during interactive inspection sessions, which standard visualization systems for huge models do not support. In this paper we discuss in detail the concept of the rendering front in multiresolution trees, its properties, and the algorithms that construct the hierarchy and render it efficiently, applied to very complex CAD models so that the model structure and the identities of objects are preserved. We also propose an algorithm for the interactive inspection of huge models that uses a rendering budget and supports selection of individual objects and sets of objects, displacement of the selected objects, and real-time collision detection during these displacements. Our solution, based on the analysis of several existing view-dependent visualization schemes, uses a Hybrid Multiresolution Tree that mixes layers of exact geometry, simplified models and impostors, together with a time-critical, view-dependent algorithm and a Constrained Front. The algorithm has been successfully tested in real industrial environments; the models involved are presented and discussed in the paper.
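    A minimal sketch of the budgeted, view-dependent refinement idea described above; the node fields and greedy policy are assumptions for illustration, not the paper's Hybrid Multiresolution Tree or Constrained Front.

```python
import heapq
from dataclasses import dataclass, field

@dataclass
class Node:
    error: float                      # view-dependent screen-space error
    cost: int                         # primitives needed to render this node
    children: list = field(default_factory=list)

def refine_front(root, budget):
    """Greedily refine the rendering front: repeatedly replace the node
    with the largest error by its children while the budget allows."""
    front = [(-root.error, id(root), root)]   # max-heap keyed on error
    used = root.cost
    while front:
        _, _, node = heapq.heappop(front)
        extra = sum(c.cost for c in node.children) - node.cost
        if node.children and used + extra <= budget:
            used += extra             # refine: swap the node for its children
            for c in node.children:
                heapq.heappush(front, (-c.error, id(c), c))
        else:
            yield node                # node stays on the front

root = Node(10.0, 100, [Node(5.0, 80), Node(4.0, 90)])
print([n.cost for n in refine_front(root, budget=300)])   # -> [80, 90]
```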

    VolumeEVM: A new surface/volume integrated model

    Volume visualization is a very active research area in the field of scientific visualization. The Extreme Vertices Model (EVM) has proven to be a complete intermediate model for visualizing and manipulating volume data with a surface rendering approach. However, integrating the advantages of surface rendering with the superiority of volume rendering for visual exploration would produce a far more complete visualization and editing system for volume data. We therefore define an enhanced EVM-based model that incorporates the volumetric information required to achieve a nearly direct volume visualization technique. VolumeEVM maintains the same EVM-based data structure plus a sorted list of density values corresponding to the interior voxels of the EVM-encoded volumes of interest (VoIs); a function relating the interior voxels of the EVM to this set of densities had to be defined. This report presents the definition of this new integrated surface/volume model based on the well-known EVM encoding and proposes implementations of the main software-based direct volume rendering techniques through the proposed model.
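    A rough sketch of the surface/volume pairing: the real model encodes VoI boundaries via extreme vertices, but the essential idea, a canonical ordering of interior voxels paired one-to-one with a density list, can be illustrated with a dense occupancy grid standing in for the EVM.

```python
import numpy as np

class VolumeEVMSketch:
    """Toy stand-in: occupancy marks the VoI's interior voxels (the part
    the EVM encodes compactly); volume holds the raw density field."""
    def __init__(self, occupancy, volume):
        self.occupancy = occupancy
        # Fix a canonical (lexicographic) traversal of interior voxels...
        self.coords = np.argwhere(occupancy)
        # ...and store one density per interior voxel, in that same order.
        self.densities = volume[tuple(self.coords.T)]

    def density_at(self, x, y, z):
        """A voxel-to-density lookup like the function the report defines."""
        hit = np.flatnonzero((self.coords == (x, y, z)).all(axis=1))
        return self.densities[hit[0]] if hit.size else None

occ = np.zeros((4, 4, 4), dtype=bool)
occ[1:3, 1:3, 1:3] = True                        # a 2x2x2 VoI
vol = np.random.default_rng(1).random((4, 4, 4))
model = VolumeEVMSketch(occ, vol)
print(model.density_at(1, 1, 2))                 # interior voxel -> density
print(model.density_at(0, 0, 0))                 # exterior voxel -> None
```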

    Synthetic 3D Pap smear nucleus generation

    Gómez Aguilar, S. (2010). Synthetic 3D Pap smear nucleus generation. http://hdl.handle.net/10251/10215

    Interaction of cortical networks mediating object motion detection by moving observers

    Published in final edited form as: Exp Brain Res. 2012 August; 221(2): 177–189. doi:10.1007/s00221-012-3159-8
    The task of parceling perceived visual motion into self-motion and object-motion components is critical to safe and accurate visually guided navigation. In this paper, we used functional magnetic resonance imaging to determine the cortical areas functionally active in this task and the pattern of connectivity among them, in order to investigate the cortical regions and networks that allow subjects to detect object motion separately from induced self-motion. Subjects were presented with nine textured objects during simulated forward self-motion and were asked to identify the target object, which had an additional, independent motion component toward or away from the observer. Cortical activation was distributed among occipital, intra-parietal and fronto-parietal areas. We performed a network analysis of connectivity data derived from partial correlation and multivariate Granger causality analyses among the functionally active areas. This revealed four coarsely separated network clusters: bilateral V1 and V2; visually responsive occipito-temporal areas, including bilateral LO, V3A, KO (V3B) and hMT; bilateral VIP, DIPSM and right precuneus; and a cluster of higher, primarily left-hemispheric regions, including the central sulcus, post-, pre- and sub-central sulci, pre-central gyrus, and FEF. We suggest that the visually responsive networks are involved in forming the representation of the visual stimulus, while the higher, left-hemisphere cluster mediates the interpretation of the stimulus for action. Our main focus was on the relationships among the visually responsive areas during our task. To determine the properties of the mechanism corresponding to the visual processing networks, we compared subjects' psychophysical performance to a model of object motion detection based solely on relative motion among objects and found it inconsistent with observer performance. Our results support the use of scene context (e.g., eccentricity, depth) in the detection of object motion. We suggest that the cortical activation and visually responsive networks provide a potential substrate for this computation.
    This work was supported by NIH grant RO1NS064100 to L.M.V. We thank Victor Solo for discussions regarding models of functional connectivity and our subjects for participating in the psychophysical and fMRI experiments. This research was carried out in part at the Athinoula A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital, using resources provided by the Center for Functional Neuroimaging Technologies, P41RR14075, a P41 Regional Resource supported by the Biomedical Technology Program of the National Center for Research Resources (NCRR), National Institutes of Health. This work also involved the use of instrumentation supported by the NCRR Shared Instrumentation Grant Program and/or High-End Instrumentation Grant Program, specifically grant number S10RR021110.
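    Of the two connectivity measures used, the partial-correlation step is straightforward to sketch: invert the covariance of the ROI time series and normalize the resulting precision matrix. The synthetic time series and ROI labels below are placeholders, not the study's data.

```python
import numpy as np

def partial_correlation(ts):
    """ts: (timepoints, rois) array of ROI time series."""
    precision = np.linalg.pinv(np.cov(ts, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)   # standard precision-matrix formula
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 4))        # placeholder: e.g. V1, V3A, hMT, VIP
print(partial_correlation(ts).round(2))
```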

    The role of the ventral intraparietal area (VIP/pVIP) in parsing optic flow into visual motion caused by self-motion and visual motion produced by object-motion

    Retinal image motion is a composite signal that contains information about two behaviourally significant factors: self-motion and the movement of environmental objects. It is thought that the brain separates these two signals, and although multiple brain regions have been identified that respond selectively to the composite optic flow signal, which brain region(s) perform the parsing process remains unknown. Here, we present original evidence that the putative human ventral intraparietal area (pVIP), a region known to receive optic flow signals as well as independent self-motion signals from other sensory modalities, plays a critical role in the parsing process and acts to isolate object-motion. We localised pVIP using its multisensory response profile and then tested its relative responses to simulated object-motion and self-motion stimuli; responses in pVIP were much stronger to stimuli that specified object-motion. We report two further observations that will be significant for the future direction of research in this area: first, activation in pVIP was suppressed by distant stationary objects compared to the absence of objects or closer objects; second, several other brain regions share with pVIP both selectivity for visual object-motion over visual self-motion and a multisensory response profile.
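    The ROI contrast at the heart of the test can be sketched as a paired comparison of condition responses; the per-subject numbers below are arbitrary placeholders, not the reported data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject pVIP responses (e.g. % signal change).
object_motion = np.array([1.8, 2.1, 1.6, 2.4, 1.9])
self_motion = np.array([0.9, 1.2, 0.7, 1.1, 1.0])

t, p = stats.ttest_rel(object_motion, self_motion)   # paired t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```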

    Live User-guided Intrinsic Video For Static Scenes

    We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In a first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant-shading and constant-reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor or interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows this information to be robustly propagated to novel views by re-projection. We leverage this information to improve the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications, such as recoloring of objects, relighting of scenes, and editing of material appearance.
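    The propagation-by-re-projection idea is easy to sketch: a stroke stored as 3D points on the reconstructed geometry is projected into a novel camera so its constraints apply in that view too. The pinhole model and camera parameters below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def reproject(points_3d, K, R, t):
    """points_3d: (N,3) world points; K: 3x3 intrinsics; R,t: world->camera."""
    cam = points_3d @ R.T + t          # rigid transform into the novel camera
    uv = cam @ K.T                     # pinhole projection
    return uv[:, :2] / uv[:, 2:3]      # perspective divide -> pixel coords

K = np.array([[525.0, 0.0, 320.0],     # made-up RGB-D-style intrinsics
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
stroke = np.array([[0.10, 0.0, 1.5],   # a constant-shading stroke in 3D
                   [0.12, 0.0, 1.5]])
print(reproject(stroke, K, np.eye(3), np.zeros(3)))
```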

    Real-Time Object Removal in Augmented Reality

    Diminished reality, a sub-topic of augmented reality (in which digital information is overlaid on an environment), is the perceived removal of an object from an environment. Previous approaches to diminished reality used digital replacement techniques, inpainting, and multi-view homographies; however, few used a virtual representation of the real environment, limiting their domains to planar environments. This thesis provides a framework for achieving real-time diminished reality on an augmented reality headset. Using state-of-the-art hardware, we combine a virtual representation of the real environment with inpainting to remove existing objects from complex environments. Our work is competitive with previous results, with a similar qualitative outcome under the limitations of available technology. Additionally, by implementing new texturing algorithms, a more detailed representation of the real environment is achieved.
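    The inpainting ingredient can be shown in isolation with OpenCV; in the thesis the object mask would come from the virtual representation of the environment, so the hand-built mask and file names below are stand-ins.

```python
import cv2
import numpy as np

frame = cv2.imread("view.png")                  # placeholder camera frame
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
mask[100:180, 200:300] = 255                    # pixels covered by the object

# Fill the masked region from its surroundings (Telea's method).
diminished = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("diminished.png", diminished)
```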

    OpenFab: A programmable pipeline for multimaterial fabrication

    Figure 1: Three rhinos, defined and printed using OpenFab. For each print, the same geometry was paired with a different fablet, a shader-like program that procedurally defines surface detail and material composition throughout the object volume. This produces three unique prints by using displacements, texture mapping, and continuous volumetric material variation as a function of distance from the surface.
    3D printing hardware is rapidly scaling up to output continuous mixtures of multiple materials at increasing resolution over ever larger print volumes. This poses an enormous computational challenge: large high-resolution prints comprise trillions of voxels and petabytes of data, and simply modeling and describing an input with spatially varying material mixtures at this scale is challenging. Existing 3D printing software is insufficient; in particular, most software is designed to support only a few million primitives, with discrete material choices per object. We present OpenFab, a programmable pipeline for the synthesis of multi-material 3D printed objects, inspired by RenderMan and modern GPU pipelines. The pipeline supports procedural evaluation of geometric detail and material composition using shader-like fablets, allowing models to be specified easily and efficiently. We describe a streaming architecture for OpenFab: only a small fraction of the final volume is stored in memory, and output is fed to the printer with little startup delay. We demonstrate it on a variety of multi-material objects.
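    The fablet concept, evaluating material composition per volume sample as a function of distance from the surface, can be mimicked with a toy function; OpenFab's actual fablet language and API differ, so everything here is illustrative.

```python
def fablet(depth_from_surface, shell=1.0):
    """Return (rigid, flexible) material fractions for one volume sample,
    blending a rigid shell into a flexible core with depth."""
    w = min(max(depth_from_surface / shell, 0.0), 1.0)   # clamp to [0, 1]
    return (1.0 - w, w)

for depth in (0.0, 0.25, 0.5, 1.0, 2.0):
    print(depth, fablet(depth))
```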