36,095 research outputs found

    Interferometric Optical Tomography

    Get PDF
    Embodiments of the present disclosure provide systems and methods for constructing a profile of a sample object. Briefly described, in architecture, one embodiment of the system, among others, can be implemented as follows. An interferometer device is used to collect interference images of a sample object at a sequence of angles around the sample object. Accordingly, a controller device rotates the sample object to enable acquisition of the interference images; and a projection generator produces projections of the sample object from the interference images at the sequence of angles. Further, a tomographic device constructs the profile of the sample object from the projections of the interference images. The profile is capable of characterizing small index variations of less than 1x10^-4. Other systems and methods are also included. Georgia Tech Research Corporation
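
    The disclosure names the pipeline stages (interference images -> projections -> tomographic reconstruction) but no specific algorithm. As a minimal sketch, a refractive-index profile could be recovered from unwrapped phase projections with standard filtered back-projection; all names below are hypothetical and scikit-image is assumed:

        import numpy as np
        from skimage.transform import iradon

        def reconstruct_index_profile(phase_projections, angles_deg, wavelength, pixel_size):
            """Recover a 2D refractive-index variation map from phase projections.

            phase_projections: (n_pixels, n_angles) array of unwrapped phase,
            one column per rotation angle (a sinogram).
            """
            # Each phase value is the line integral of the index variation:
            # phi = (2*pi/lambda) * integral(delta_n dl), so divide out the
            # prefactor to obtain a sinogram of delta_n line integrals.
            sinogram = phase_projections * wavelength / (2.0 * np.pi)
            # Filtered back-projection inverts the Radon transform; dividing by
            # the physical pixel size converts per-pixel units back to delta_n.
            return iradon(sinogram, theta=angles_deg, filter_name='ramp') / pixel_size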

    Transverse Beam Profiles

    Full text link
    The performance and safe operation of a particle accelerator is closely connected to the transverse emittance of the beams it produces. For this reason, many techniques have been developed over the years for monitoring the transverse distribution of particles along accelerator chains or over machine cycles. The definition of beam profiles is explained and the different techniques available for the detection of the particle distributions are explored. Examples of concrete applications of these techniques are given. Comment: 37 pages, 53 figures
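
    The article surveys detection techniques rather than a single algorithm, but the moment analysis that profile monitors typically feed into is simple enough to sketch. The helper below is hypothetical and assumes the first and last ten samples of the profile are background-dominated:

        import numpy as np

        def profile_moments(positions, intensities):
            """Centroid and RMS width of a measured 1D transverse beam profile."""
            # Estimate a constant baseline from the profile tails and remove it,
            # clipping negative residuals so they cannot skew the moments.
            baseline = np.median(np.concatenate([intensities[:10], intensities[-10:]]))
            weights = np.clip(intensities - baseline, 0.0, None)
            centroid = np.average(positions, weights=weights)
            rms = np.sqrt(np.average((positions - centroid) ** 2, weights=weights))
            return centroid, rms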

    Video Registration in Egocentric Vision under Day and Night Illumination Changes

    Full text link
    With the spread of wearable devices and head mounted cameras, a wide range of applications requiring precise user localization is now possible. In this paper we propose to treat the problem of obtaining the user position with respect to a known environment as a video registration problem. Video registration, i.e. the task of aligning an input video sequence to a pre-built 3D model, relies on matching local keypoints extracted from the query sequence to a 3D point cloud. The overall registration performance is strictly tied to the quality of this 2D-3D matching, and can degrade under steep changes in environmental conditions such as the lighting differences between day and night. To effectively register an egocentric video sequence under these conditions, we propose to tackle the source of the problem: the matching process. To overcome the shortcomings of standard matching techniques, we introduce a novel embedding space that allows us to obtain robust matches by jointly taking into account local descriptors, their spatial arrangement and their temporal robustness. The proposal is evaluated using unconstrained egocentric video sequences, both in terms of matching quality and resulting registration performance, using different 3D models of historical landmarks. The results show that the proposed method can outperform state-of-the-art registration algorithms, in particular when dealing with the challenges of night and day sequences.
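
    The paper's embedding space is not specified in this abstract; for context, the standard 2D-3D matching baseline it improves on is nearest-neighbor descriptor matching with Lowe's ratio test. A minimal numpy sketch with hypothetical names:

        import numpy as np

        def ratio_test_matches(query_desc, model_desc, ratio=0.8):
            """Nearest-neighbor 2D-3D matching with Lowe's ratio test.

            query_desc: (n, d) descriptors from an egocentric frame.
            model_desc: (m, d) descriptors attached to 3D points (m >= 2).
            Returns index pairs (query_idx, model_idx) that pass the test.
            """
            # Pairwise squared Euclidean distances between descriptor sets.
            d2 = ((query_desc[:, None, :] - model_desc[None, :, :]) ** 2).sum(-1)
            order = np.argsort(d2, axis=1)
            best, second = order[:, 0], order[:, 1]
            rows = np.arange(d2.shape[0])
            # Keep a match only if it is clearly better than the runner-up.
            keep = np.sqrt(d2[rows, best]) < ratio * np.sqrt(d2[rows, second])
            return rows[keep], best[keep]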

    The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM

    Full text link
    New vision sensors, such as the Dynamic and Active-pixel Vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array. These sensors have great potential for high-speed robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and very high dynamic range. However, new algorithms are required to exploit these sensors' characteristics and cope with their unconventional output, which consists of a stream of asynchronous brightness changes (called "events") and synchronous grayscale frames. For this purpose, we present and release a collection of datasets captured with a DAVIS in a variety of synthetic and real environments, which we hope will motivate research on new algorithms for high-speed and high-dynamic-range robotics and computer-vision applications. In addition to global-shutter intensity images and asynchronous events, we provide inertial measurements and ground-truth camera poses from a motion-capture system. The latter allows quantitative comparison of the pose accuracy of ego-motion estimation algorithms. All the data are released both as standard text files and binary files (i.e., rosbag). This paper provides an overview of the available data and describes a simulator that we release open-source to create synthetic event-camera data. Comment: 7 pages, 4 figures, 3 tables
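
    A typical first step with the text-file release is parsing the event stream. The sketch below assumes (without checking the dataset documentation) the common one-event-per-line layout 'timestamp x y polarity':

        import numpy as np

        def load_events(path):
            """Load asynchronous events from a plain-text file.

            Assumes one event per line: 'timestamp x y polarity'.
            """
            data = np.loadtxt(path)
            t = data[:, 0]                # event time in seconds
            x = data[:, 1].astype(int)    # pixel column
            y = data[:, 2].astype(int)    # pixel row
            p = data[:, 3].astype(int)    # polarity: 1 = brightness up, 0 = down
            return t, x, y, p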

    Single-shot layered reflectance separation using a polarized light field camera

    Get PDF
    We present a novel computational photography technique for single-shot separation of diffuse/specular reflectance as well as novel angular domain separation of layered reflectance. Our solution consists of a two-way polarized light field (TPLF) camera which simultaneously captures two orthogonal states of polarization. A single photograph of a subject acquired with the TPLF camera under polarized illumination then enables standard separation of diffuse (depolarizing) and polarization-preserving specular reflectance using light field sampling. We further demonstrate that the acquired data also enables novel angular separation of layered reflectance, including separation of specular reflectance and single scattering in the polarization-preserving component, and separation of shallow scattering from deep scattering in the depolarizing component. We apply our approach to efficient acquisition of facial reflectance, including diffuse and specular normal maps, and novel separation of photometric normals into layered reflectance normals for layered facial renderings. We demonstrate that our proposed single-shot layered reflectance separation is comparable to an existing multi-shot technique that relies on structured lighting, while achieving separation results under a variety of illumination conditions.
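
    Once the two orthogonal polarization states have been extracted from the light field, the standard diffuse/specular step the abstract mentions reduces to a per-pixel difference. A minimal sketch of that classical separation (the paper's novel angular/layered separation is not reproduced here):

        import numpy as np

        def separate_reflectance(img_parallel, img_cross):
            """Classical polarization-difference reflectance separation.

            Under polarized illumination the cross-polarized image contains
            only half of the depolarized diffuse reflectance, while the
            parallel image adds the polarization-preserving specular part.
            """
            diffuse = 2.0 * img_cross
            specular = np.clip(img_parallel - img_cross, 0.0, None)
            return diffuse, specular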

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Full text link
    Recently, the new Kinect One has been released by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which forms a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed to capture the individual effects of the Kinect devices in as isolated a way as possible, and so that they can be adopted to evaluate any other range sensing device. The overall goal of this paper is to provide a solid insight into the pros and cons of either device. Thus, scientists interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected, specific benefits and potential problems of either device. Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).
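
    The seven experimental setups are not detailed in this abstract; as an illustration of the kind of measurement such a framework contains, the hypothetical helper below estimates per-pixel depth precision over repeated frames of a static scene (invalid pixels assumed to be encoded as 0):

        import numpy as np

        def depth_precision(frames):
            """Per-pixel temporal noise of a range camera viewing a static scene.

            frames: (n_frames, h, w) stack of depth maps in millimeters.
            """
            stack = np.where(frames > 0, frames.astype(float), np.nan)
            noise = np.nanstd(stack, axis=0)       # per-pixel std. dev. (mm)
            valid = np.mean(frames > 0, axis=0)    # fraction of valid returns
            return noise, valid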

    Efficient completeness inspection using real-time 3D color reconstruction with a dual-laser triangulation system

    Get PDF
    In this chapter, we present the final system resulting from the European Project "3DComplete", which aimed to create a low-cost and flexible quality inspection system capable of capturing 2.5D color data for completeness inspection. The system uses a single color camera to simultaneously capture 3D data via laser triangulation and color texture via a special projector that casts a narrow line of white light; the two are combined into a color 2.5D model in real time. Many examples of completeness inspection tasks are reported that are extremely difficult to analyze with state-of-the-art 2D-based methods. Our system has been integrated into a real production environment, showing that completeness inspection incorporating 3D technology can be readily achieved in a short time and at low cost.
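
    The triangulation details are not given in this abstract; as a minimal sketch of the laser-triangulation step under a simplified geometry (a single bright line on a dark scene, laser sheet at a fixed angle theta to the viewing axis), with all parameter names hypothetical:

        import numpy as np

        def laser_line_profile(frame, baseline_row, mm_per_px, tan_theta):
            """Extract one height profile row from a laser-triangulation frame.

            frame: (h, w) grayscale image showing a single bright laser line.
            """
            # Sub-pixel line position per column via intensity-weighted centroid
            # (assumes the laser line dominates each column's intensity).
            rows = np.arange(frame.shape[0], dtype=float)[:, None]
            weights = frame.astype(float)
            peak = (rows * weights).sum(axis=0) / np.maximum(weights.sum(axis=0), 1e-9)
            # The line's displacement from the reference row maps to height
            # through the triangulation angle: h = d * mm_per_px / tan(theta).
            return (baseline_row - peak) * mm_per_px / tan_theta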