32,280 research outputs found

    Development of a Computer Vision-Based Three-Dimensional Reconstruction Method for Volume-Change Measurement of Unsaturated Soils during Triaxial Testing

    Problems associated with unsaturated soils are ubiquitous in the U.S., where expansive and collapsible soils are some of the most widely distributed and costly geologic hazards. Solving these widespread geohazards requires a fundamental understanding of the constitutive behavior of unsaturated soils. Over the past six decades, the suction-controlled triaxial test has been established as the standard approach to characterizing the constitutive behavior of unsaturated soils. However, this type of test requires costly equipment and time-consuming testing procedures. To overcome these limitations, a photogrammetry-based method was recently developed to measure the global and localized volume changes of unsaturated soils during triaxial testing. That method, however, relies on software to detect coded targets, which often requires tedious manual correction of incorrectly detected targets. To address this limitation, this study developed a photogrammetric computer vision-based approach for automatic target recognition and 3D reconstruction for volume-change measurement of unsaturated soils in triaxial tests. A deep learning method was used to improve the accuracy and efficiency of coded target recognition. A photogrammetric computer vision method and a ray tracing technique were then developed and validated to reconstruct three-dimensional models of the soil specimens.
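
    The ray tracing step mentioned above must account for refraction as each camera ray passes from air into the acrylic cell wall and the confining water before reaching the specimen surface. As an illustration only, and not the paper's implementation, the following minimal sketch applies the vector form of Snell's law at a single planar interface; the function name, the planar-interface geometry, and the refractive-index values are assumptions made for the example.

        import numpy as np

        def refract(direction, normal, n1, n2):
            """Refract a unit ray direction across a planar interface (vector Snell's law).

            direction : unit vector of the incoming ray
            normal    : unit surface normal pointing toward the incoming ray
            n1, n2    : refractive indices of the incident and transmitting media
            Returns the refracted unit direction, or None on total internal reflection.
            """
            d = direction / np.linalg.norm(direction)
            n = normal / np.linalg.norm(normal)
            eta = n1 / n2
            cos_i = -np.dot(n, d)                  # cosine of the incidence angle
            sin_t2 = eta**2 * (1.0 - cos_i**2)     # squared sine of the refraction angle
            if sin_t2 > 1.0:                       # total internal reflection
                return None
            cos_t = np.sqrt(1.0 - sin_t2)
            return eta * d + (eta * cos_i - cos_t) * n

        # Example: a camera ray entering acrylic (n ~ 1.49) from air at 30 degrees incidence.
        ray = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
        surface_normal = np.array([0.0, 0.0, 1.0])
        print(refract(ray, surface_normal, 1.0003, 1.49))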

    Deployable Payloads with Starbug

    We explore the range of wide-field multi-object instrument concepts that take advantage of the unique capabilities of the Starbug focal-plane positioning concept. Advances on familiar instrument concepts, such as fiber positioners and deployable fiber-fed IFUs, are discussed along with image relays and deployable active sensors. We conceive of deployable payloads as components of systems more traditionally regarded as part of the telescope rather than the instrument, such as adaptive optics and ADCs. Also presented are some of the opportunities offered by the truly unique capabilities of Starbug, such as microtracking to apply intra-field distortion correction during the course of an observation. Comment: 12 pages, 8 figures, to be published in Proc. SPIE 6273, "Opto-Mechanical Technologies for Astronomy"

    Reconfigurable quantum metamaterials

    By coupling controllable quantum systems into larger structures, we introduce the concept of a quantum metamaterial. Conventional metamaterials represent one of the most important frontiers in optical design, with applications in diverse fields ranging from medicine to aerospace. Until now, however, metamaterials have themselves been classical structures that interact only with the classical properties of light. Here we describe a class of dynamic metamaterials, based on the quantum properties of coupled atom-cavity arrays, which are intrinsically lossless, reconfigurable, and operate fundamentally at the quantum level. We show how this new class of metamaterial could be used to create a reconfigurable quantum superlens possessing a negative index gradient for single-photon imaging. With the inherent features of quantum superposition and entanglement of metamaterial properties, this new class of dynamic quantum metamaterial opens a new vista for quantum science and technology. Comment: 16 pages, 8 figures
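
    For orientation only: coupled atom-cavity arrays of the kind mentioned above are commonly described by a Jaynes-Cummings-Hubbard Hamiltonian. The expression below is that standard textbook form, not a model taken from the paper; here \omega_c is the cavity frequency, \omega_a the atomic transition frequency, g the atom-cavity coupling, and \kappa the photon hopping rate between neighbouring cavities.

        H = \sum_i \Big[ \omega_c\, a_i^{\dagger} a_i + \omega_a\, \sigma_i^{+}\sigma_i^{-} + g\big( a_i^{\dagger}\sigma_i^{-} + a_i\,\sigma_i^{+} \big) \Big] - \kappa \sum_{\langle i,j \rangle} \big( a_i^{\dagger} a_j + a_j^{\dagger} a_i \big), \qquad (\hbar = 1)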

    Precision Astrometry of a Sample of Speckle Binaries and Multiples with the Adaptive Optics Facilities at the Hale and Keck II Telescopes

    Using the adaptive optics facilities at the 200-inch Hale and 10-m Keck II telescopes, we observed in the near infrared a sample of 12 binary and multiple stars and one open cluster. We used the near diffraction-limited images of these systems to measure the relative separations and position angles between their components. In this paper, we investigate and correct for the influence of differential chromatic refraction and chip distortions on our relative astrometric measurements. Over one night, we achieve an astrometric precision typically well below 1 milliarcsecond and occasionally as small as 40 microarcseconds. Such precision is in principle sufficient to astrometrically detect planetary-mass objects around the components of nearby binary and multiple stars. Since we do not yet have sufficiently large data sets for the observed sample of stars to detect planets, we instead provide limits on planetary-mass objects based on the achieved astrometric precision. Comment: 18 pages, 8 figures, 9 tables, to appear in MNRAS
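
    As a side note on the relative astrometry itself, and not code from the paper, the separation and position angle quoted for a pair are simple functions of the measured on-sky offsets. A minimal sketch, assuming the offsets of the secondary relative to the primary (RA offset scaled by cos Dec, and Dec offset) are already in milliarcseconds:

        import numpy as np

        def separation_and_pa(d_ra_cosdec_mas, d_dec_mas):
            """Separation (mas) and position angle (degrees, measured from North through East)
            from the relative offsets of the secondary with respect to the primary."""
            rho = np.hypot(d_ra_cosdec_mas, d_dec_mas)
            theta = np.degrees(np.arctan2(d_ra_cosdec_mas, d_dec_mas)) % 360.0
            return rho, theta

        # Example: secondary 120 mas east and 50 mas north of the primary.
        print(separation_and_pa(120.0, 50.0))   # -> (130.0 mas, ~67.4 deg)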

    Fast Determination of Soil Behavior in the Capillary Zone Using Simple Laboratory Tests

    INE/AUTC 13.1

    Mitigation of Through-Wall Distortions of Frontal Radar Images using Denoising Autoencoders

    Radar images of humans and other concealed objects are considerably distorted by attenuation, refraction, and multipath clutter in indoor through-wall environments. While several methods have been proposed for removing target-independent static and dynamic clutter, considerable challenges remain in mitigating target-dependent clutter, especially when the exact propagation characteristics or an analytical framework are unavailable. In this work, we focus on mitigating wall effects using a machine learning based solution, denoising autoencoders, that does not require prior information about the wall parameters or room geometry. Instead, the method relies on the availability of a large volume of training radar images gathered in through-wall conditions and the corresponding clean images captured in line-of-sight conditions. During the training phase, the autoencoder learns to denoise the corrupted through-wall images so that they resemble the free-space images. We have validated the performance of the proposed solution for both static and dynamic human subjects. The frontal radar images of static targets are obtained by processing wideband planar array measurement data with two-dimensional array and range processing. The frontal radar images of dynamic targets are simulated using narrowband planar array data processed with two-dimensional array and Doppler processing. In both the simulations and the measurements, we incorporate considerable diversity in the target and propagation conditions. Our experimental results, from both simulation and measurement data, show that the denoised images are considerably more similar to the free-space images than the original through-wall images are.
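
    To make the training setup concrete, the sketch below is a minimal convolutional denoising autoencoder in PyTorch, trained with a pixelwise MSE loss between its output on a through-wall image and the paired free-space image. The layer sizes, the 128x128 single-channel image shape, and the synthetic tensors standing in for paired data are illustrative assumptions, not details taken from the paper.

        import torch
        import torch.nn as nn

        class DenoisingAutoencoder(nn.Module):
            """Conv encoder-decoder that maps a through-wall radar image
            to an estimate of the corresponding free-space image."""
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 32 -> 64
                    nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 64 -> 128
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        model = DenoisingAutoencoder()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        # Synthetic stand-ins for a batch of paired (through-wall, free-space) images,
        # normalized to [0, 1]; a real run would iterate over measured image pairs.
        through_wall = torch.rand(8, 1, 128, 128)
        free_space = torch.rand(8, 1, 128, 128)

        for step in range(5):
            optimizer.zero_grad()
            denoised = model(through_wall)
            loss = loss_fn(denoised, free_space)   # push the output toward the clean image
            loss.backward()
            optimizer.step()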

    The effect of representation location on interaction in a tangible learning environment

    Drawing on the 'representation' TUI framework [21], this paper reports a study that investigated the concept of 'representation location' and its effect on interaction and learning. A reacTIVision-based tangible interface was designed and developed to support children learning about the behaviour of light. Children aged eleven years worked with the environment in groups of three. Findings suggest that different representation locations lend themselves to different levels of abstraction and engender different forms and levels of activity, particularly with respect to the speed of dynamics and differences in group awareness. Furthermore, the studies illustrated interaction effects according to the different physical correspondence metaphors used, particularly with respect to combining familiar physical objects with digitally based table-top representations. The implications of these findings for learning are discussed.