
    Sketching space

    In this paper, we present a sketch modelling system which we call Stilton. The program resembles a desktop VRML browser, allowing a user to navigate a three-dimensional model in a perspective projection, or panoramic photographs that the program maps onto the scene as a `floor' and `walls'. We place an imaginary two-dimensional drawing plane in front of the user, and any geometric information that the user sketches onto this plane may be reconstructed to form solid objects through an optimization process. We show how the system can be used to reconstruct geometry from panoramic images, or to add new objects to an existing model. While panoramic images can greatly assist with some aspects of site familiarization and qualitative assessment of a site, without the addition of some foreground geometry they offer only limited utility in a design context. We therefore suggest that the system may be of use for `just-in-time' CAD recovery of complex environments, such as shop floors or construction sites, by recovering objects through sketched overlays where other methods, such as automatic line retrieval, may be impossible. The result of using the system in this manner is the `sketching of space' - sketching out a volume around the user - and once the geometry has been recovered, the designer is free to quickly sketch design ideas into the newly constructed context, or to analyze the space around them. Although end-user trials have not yet been undertaken, we believe that this implementation may afford a user interface that is both accessible and robust, and that the rapid growth of pen-computing devices will further stimulate activity in this area.
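    As a loose illustration of the drawing-plane idea (not code from the paper), the Python sketch below casts a ray through a sketched pixel and intersects it with a plane placed a fixed distance in front of the camera; the intrinsics matrix K, the plane placement, and the function name are assumptions made here for illustration.

    import numpy as np

    def unproject_to_plane(px, py, K, plane_normal, plane_d):
        # Cast a ray through pixel (px, py) and intersect it with the drawing
        # plane n.x + d = 0 (camera at the origin, looking along +z).
        ray = np.linalg.inv(K) @ np.array([px, py, 1.0])   # pinhole ray direction
        t = -plane_d / (plane_normal @ ray)                # n.(t * ray) + d = 0
        return t * ray

    # Example: a drawing plane 2 m in front of the camera (hypothetical intrinsics).
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    p3d = unproject_to_plane(400, 260, K, np.array([0.0, 0.0, 1.0]), -2.0)
    print(p3d)   # sketched point lifted onto the plane at z = 2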

    A framework for digital sunken relief generation based on 3D geometric models

    Sunken relief is a special art form of sculpture whereby the depicted shapes are sunk into a given surface. It is traditionally created by laboriously carving materials such as stone. Sunken reliefs often utilize engraved lines or strokes to strengthen the impression of a 3D presence and to highlight features which would otherwise remain unrevealed. In other types of relief, smooth surfaces and their shadows convey such information in a coherent manner. Existing methods for relief generation focus on forming a smooth surface with a shallow depth which conveys the presence of 3D figures. Such methods unfortunately do not serve the art form of sunken relief, as they omit the feature lines. We propose a framework to produce sunken reliefs from a known 3D geometry, transforming the 3D objects into three layers of input so that the contour lines are incorporated seamlessly with the smooth surfaces. The three input layers take advantage of the geometric information and the visual cues to assist the relief generation. This framework adapts existing techniques in line drawing and relief generation, and combines them organically for this particular purpose.
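    A minimal sketch of how a smooth relief layer and an engraved line layer might be combined (illustrative only; the layer weights and the function name are invented here, not taken from the paper):

    import numpy as np

    def sunken_relief(depth, line_mask, relief_depth=0.05, groove_depth=0.02):
        # depth: normalised depth map of the rendered 3D model (0 = near, 1 = far)
        # line_mask: binary image of extracted feature/contour lines
        # The weights are placeholder values chosen for the example.
        span = depth.max() - depth.min() + 1e-9
        smooth = relief_depth * (depth - depth.min()) / span   # compressed smooth layer
        engraved = groove_depth * line_mask                    # engraved line layer
        # Sink the figure below the base plane (negative heights) and carve
        # grooves where the contour lines fall.
        return -(smooth + engraved)

    # Toy 4x4 depth map with one engraved line across the middle.
    depth = np.linspace(0.0, 1.0, 16).reshape(4, 4)
    lines = np.zeros((4, 4)); lines[2, :] = 1.0
    print(sunken_relief(depth, lines))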

    A laboratory breadboard system for dual-arm teleoperation

    The computing architecture of a novel dual-arm teleoperation system is described. The novelty of this system is that: (1) the master arm is not a replica of the slave arm; it is not specific to any manipulator and can be used to control various robot arms with software modifications; and (2) the force feedback to the general-purpose master arm is derived from force-torque sensor data originating from the slave hand. The computing architecture of this breadboard system is a fully synchronized pipeline with unique methods for data handling, communication and mathematical transformations. The computing system is modular and thus inherently extendable. The local control loops at both sites operate at a 100 Hz rate, and the end-to-end bilateral (force-reflecting) control loop operates at a 200 Hz rate, each loop without interpolation. This provides high-fidelity control. This end-to-end system elevates teleoperation to a new level of capability via the use of sensors, microprocessors, novel electronics, and real-time graphics displays. A description is given of a graphic simulation system connected to the dual-arm teleoperation breadboard system. High-fidelity graphic simulation of a telerobot (called the Phantom Robot) is used for preview and predictive displays, for planning, and for real-time control under communication time delays of several seconds. High-fidelity graphic simulation is obtained by using appropriate calibration techniques.
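    A toy sketch of a fixed-rate force-reflecting loop of the kind described, with stand-in sensor and actuator functions; the 200 Hz rate matches the abstract, but the function names, gain, and data are hypothetical.

    import time

    RATE_HZ = 200                 # end-to-end bilateral (force-reflecting) loop rate
    DT = 1.0 / RATE_HZ

    def read_slave_wrench():
        # Stand-in for the slave hand's force-torque sensor (6 components).
        return [0.0] * 6

    def drive_master_feedback(wrench, gain=0.5):
        # Stand-in for commanding force feedback on the master arm.
        return [gain * w for w in wrench]

    for _ in range(1000):         # about 5 s of the loop
        t0 = time.perf_counter()
        drive_master_feedback(read_slave_wrench())
        # Sleep off the remainder of the cycle to hold the fixed rate.
        time.sleep(max(0.0, DT - (time.perf_counter() - t0)))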

    3D freeform surfaces from planar sketches using neural networks

    A novel intelligent approach to 3D freeform surface reconstruction from planar sketches is proposed. A multilayer perceptron (MLP) neural network is employed to induce 3D freeform surfaces from planar freehand curves. Planar curves were used to represent the boundaries of a freeform surface patch. The curves were varied iteratively and sampled to produce training data with which to train and test the neural network. The results obtained demonstrate that the network successfully learned the inverse-projection map and correctly inferred the respective surfaces from fresh curves.
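    A minimal sketch of such a training setup under stated assumptions: random stand-in data replaces the sampled curve/surface pairs, and scikit-learn's MLPRegressor stands in for the multilayer perceptron; shapes and layer sizes are illustrative.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Toy stand-in data: each sample flattens 4 boundary curves sampled at
    # 16 (x, y) points; the target is an 8x8 grid of surface heights.  In the
    # paper, pairs come from iteratively varied and sampled curves.
    n_samples = 200
    X = rng.normal(size=(n_samples, 4 * 16 * 2))
    y = rng.normal(size=(n_samples, 8 * 8))

    mlp = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300)
    mlp.fit(X, y)
    surface = mlp.predict(X[:1]).reshape(8, 8)   # inferred surface heights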

    Constraining the bright-end of the UV luminosity function for z ≈ 7-9 galaxies: results from CANDELS/GOODS-South

    The recent Hubble Space Telescope near-infrared imaging with Wide Field Camera 3 (WFC3) of the Great Observatories Origins Deep Survey South (GOODS-S) field in the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) programme, covering nearly 100 arcmin², along with the already existing Advanced Camera for Surveys optical data, makes possible the search for bright galaxy candidates at redshift z ≈ 7–9 using the Lyman break technique. We present the first analysis of z′-drop z ≈ 7 candidate galaxies in this area, finding 19 objects. We also analyse Y-drops at z ≈ 8, trebling the number of bright (HAB < 27 mag) Y-drops from our previous work, and compare our results with those of other groups based on the same data. The bright high-redshift galaxy candidates we find serve to better constrain the bright end of the luminosity function at these redshifts, and may also be more amenable to spectroscopic confirmation than the fainter ones presented in previous work on smaller fields (the Hubble Ultra Deep Field and the WFC3 Early Release Science observations). We also examine the agreement with previous luminosity functions derived from WFC3 drop-out counts, finding generally good agreement, except for the luminosity function of Yan et al. at z ≈ 8, which is strongly ruled out.
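    For illustration only, a drop-out selection of the general Lyman-break form might be coded as below; the bands, thresholds, and function name are placeholders, not the selection criteria used in the paper.

    def is_z_drop(z850, Y105, J125, detected_in_optical):
        # Illustrative Lyman-break (drop-out) colour cut with placeholder values.
        strong_break = (z850 - Y105) > 1.0     # flux drop blueward of the break
        flat_continuum = (Y105 - J125) < 0.5   # blue/flat UV continuum redward
        return strong_break and flat_continuum and not detected_in_optical

    print(is_z_drop(z850=27.8, Y105=26.2, J125=26.0, detected_in_optical=False))  # True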

    Near-Infrared Super Resolution Imaging with Metallic Nanoshell Particle Chain Array

    We propose a near-infrared super-resolution imaging system without a lens or a mirror, using instead an array of metallic nanoshell particle chains. The imaging array can plasmonically transfer the near-field components of dipole sources in both incoherent and coherent manners, and the super-resolution images can be reconstructed in the output plane. By tuning the parameters of the metallic nanoshell particles, the plasmon resonance band of the isolated nanoshell particle red-shifts to the near-infrared region, and near-infrared super-resolution images are obtained. We calculate the field intensity distribution at different planes of the imaging process using the finite element method and find that the array has super-resolution imaging capability at near-infrared wavelengths. We also show that the image formation depends strongly on the coherence of the dipole sources and on the image-array distance.
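    A small sketch of the coherent-versus-incoherent image-formation distinction the abstract refers to (generic field superposition, not the paper's plasmonic transfer model; the sample fields are made up):

    import numpy as np

    def image_intensity(fields, coherent=True):
        # fields: complex field contributions of each dipole source at the
        # output plane, shape (n_sources, n_points).
        fields = np.asarray(fields)
        if coherent:
            return np.abs(fields.sum(axis=0)) ** 2       # fields interfere
        return (np.abs(fields) ** 2).sum(axis=0)          # intensities add

    # Two sources with opposite phase ramps: fringes appear only coherently.
    x = np.linspace(-1.0, 1.0, 9)
    e1 = np.exp(1j * 2 * np.pi * x)
    e2 = np.exp(-1j * 2 * np.pi * x)
    print(image_intensity([e1, e2], coherent=True))    # oscillates between 4 and 0
    print(image_intensity([e1, e2], coherent=False))   # constant 2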

    Fast Back-Projection for Non-Line of Sight Reconstruction

    Recent works have demonstrated non-line-of-sight (NLOS) reconstruction using the time-resolved signal from multiply scattered light. These works combine ultrafast imaging systems with computation which back-projects the recorded space-time signal to build a probabilistic map of the hidden geometry. Unfortunately, this computation is slow, becoming a bottleneck as the imaging technology improves. In this work, we propose a new back-projection technique for NLOS reconstruction which is up to a thousand times faster than previous work, with almost no quality loss. We build on the observation that the hidden geometry probability map can be built as the intersection of the three-bounce space-time manifolds defined by the light illuminating the hidden geometry and by the visible point receiving the scattered light from that hidden geometry. This allows us to pose the reconstruction of the hidden geometry as the voxelization of these space-time manifolds, which has lower theoretical complexity and is easily implementable on the GPU. We demonstrate the efficiency and quality of our technique against previous methods on both captured and synthetic data.
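    For context, a naive (slow) back-projection of the kind this paper accelerates might look like the sketch below; the array shapes, units, and the omission of the laser-to-wall and wall-to-sensor path segments are simplifying assumptions made here.

    import numpy as np

    def backproject(voxels, laser_pts, sensor_pts, transients, c=3e8, bin_width=4e-12):
        # voxels: (V, 3) candidate points of the hidden geometry
        # laser_pts, sensor_pts: (L, 3) and (S, 3) points on the visible wall
        # transients: (L, S, T) time-resolved intensity, bin_width seconds per bin
        heat = np.zeros(len(voxels))
        for li, l in enumerate(laser_pts):
            d_l = np.linalg.norm(voxels - l, axis=1)
            for si, s in enumerate(sensor_pts):
                # Path length on the hidden side of the three-bounce light path.
                d = d_l + np.linalg.norm(voxels - s, axis=1)
                bins = (d / (c * bin_width)).astype(int)
                valid = bins < transients.shape[-1]
                heat[valid] += transients[li, si][bins[valid]]
        return heat   # probabilistic map of the hidden geometry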