
    Second generation Robo-AO instruments and systems

    The prototype Robo-AO system at the Palomar Observatory 1.5-m telescope is the world's first fully automated laser adaptive optics instrument. Scientific operations commenced in June 2012, and more than 12,000 observations have since been performed at the ~0.12" visible-light diffraction limit. Two new infrared cameras providing high-speed tip-tilt sensing and a 2' field of view will be integrated in 2014. In addition to a Robo-AO clone for the 2-m IGO and the natural guide star variant KAPAO at the 1-m Table Mountain telescope, a second generation of facility-class Robo-AO systems is in development for the 2.2-m University of Hawai'i and 3-m IRTF telescopes, which will provide higher Strehl ratios, sharper imaging (~0.07"), and correction down to λ = 400 nm. Comment: 11 pages, 4 figures, 3 tables.
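
    As a rough aside (not part of the paper), the quoted angular resolutions follow from the Rayleigh diffraction limit, θ ≈ 1.22 λ/D. A minimal Python sketch, assuming a representative visible wavelength of 600 nm:

        RAD_TO_ARCSEC = 206264.806  # arcseconds per radian

        def diffraction_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
            """Rayleigh-criterion angular resolution, in arcseconds."""
            return 1.22 * wavelength_m / aperture_m * RAD_TO_ARCSEC

        # Apertures quoted in the abstract: Palomar 1.5 m, UH 2.2 m, IRTF 3 m.
        for d_m in (1.5, 2.2, 3.0):
            print(f"D = {d_m} m -> {diffraction_limit_arcsec(600e-9, d_m):.3f} arcsec")
        # ~0.10" at 1.5 m and ~0.07" at 2.2 m: the 2.2-m value matches the quoted
        # ~0.07" imaging, while the delivered ~0.12" at 1.5 m is slightly broader
        # than this ideal limit.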

    The Robo-AO-2 facility for rapid visible/near-infrared AO imaging and the demonstration of hybrid techniques

    We are building a next-generation laser adaptive optics system, Robo-AO-2, for the UH 2.2-m telescope that will deliver robotic, diffraction-limited observations at visible and near-infrared wavelengths in unprecedented numbers. The superior Maunakea observing site, expanded spectral range and rapid response to high-priority events represent a significant advance over the prototype. Robo-AO-2 will include a new reconfigurable natural guide star sensor for exquisite wavefront correction on bright targets and the demonstration of potentially transformative hybrid AO techniques that promise to extend the faintness limit of current and future exoplanet adaptive optics systems. Comment: 15 pages.

    Intelligent composite layup by the application of low cost tracking and projection technologies

    Hand layup remains the dominant forming process for creating the widest range of complex-geometry and mixed-material composite parts. However, the process is still poorly understood and poorly informed, limiting productivity. This paper addresses this issue by proposing a novel, low-cost system that guides a laminator in real time from a predetermined instruction set, thus improving the standardisation of produced components. The current methodologies are critiqued and future trends predicted, prior to introducing the required inputs and outputs and developing the implemented system. As a demonstrator, a U-shaped component typical of the complex geometry found in many difficult-to-manufacture composite parts was chosen, and its drapeability assessed using a kinematic drape simulation tool. An experienced laminator's knowledge base was then used to divide the tool into a finite number of features, with layup conducted by projecting and sequentially highlighting target features while tracking the laminator's hand movements across the ply. The system has been implemented with affordable hardware and demonstrates tangible benefits in comparison to currently employed laser-based systems. It has shown remarkable success to date, with rapid Technology Readiness Level advancement, and is a major stepping stone towards augmenting manual labour, with further benefits including more appropriate automation.
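
    The guided-layup loop described above (divide the tool into features, highlight one feature at a time, advance when the tracked hand reaches it) can be sketched as follows. This is a minimal illustration only; project_feature and get_hand_position are hypothetical placeholder callbacks, not an interface from the paper.

        from typing import Callable, List, Tuple

        Point = Tuple[float, float]

        def point_in_polygon(p: Point, poly: List[Point]) -> bool:
            """Ray-casting test: is point p inside the feature polygon?"""
            x, y = p
            inside = False
            j = len(poly) - 1
            for i in range(len(poly)):
                xi, yi = poly[i]
                xj, yj = poly[j]
                if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
                    inside = not inside
                j = i
            return inside

        def guide_layup(features: List[List[Point]],
                        get_hand_position: Callable[[], Point],
                        project_feature: Callable[[List[Point]], None]) -> None:
            """Highlight each feature in turn; advance once the tracked hand enters it."""
            for feature in features:
                project_feature(feature)          # projector highlights the target region
                while not point_in_polygon(get_hand_position(), feature):
                    pass                          # poll the low-cost hand tracker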

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Microsoft has recently released the new Kinect One, the next generation of real-time range-sensing devices, based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison of both devices. To conduct the comparison, we propose a framework of seven experimental setups, which forms a generic basis for evaluating range cameras such as the Kinect. The experiments were designed to capture the individual effects of each Kinect device in as isolated a manner as possible, and in a way that they can be adopted for any other range-sensing device. The overall goal of this paper is to provide solid insight into the pros and cons of either device, so that scientists interested in using Kinect range-sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device. Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).
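
    For reference, the two depth-recovery principles being compared reduce to two textbook relations: phase-based time of flight, d = c·Δφ / (4π·f_mod), and structured-light triangulation, z = b·f / disparity. The Python sketch below illustrates the formulas only (it is not either device's actual processing pipeline); the modulation frequency, baseline and focal length are assumed example values.

        from math import pi

        C = 299_792_458.0  # speed of light, m/s

        def tof_depth_m(phase_shift_rad: float, mod_freq_hz: float) -> float:
            """Time of flight (Kinect One): depth from the phase shift of the
            modulated illumination, d = c * dphi / (4 * pi * f_mod)."""
            return C * phase_shift_rad / (4.0 * pi * mod_freq_hz)

        def structured_light_depth_m(baseline_m: float, focal_px: float, disparity_px: float) -> float:
            """Structured light (Kinect v1): depth by triangulating the shift of a
            projected pattern, z = b * f / disparity."""
            return baseline_m * focal_px / disparity_px

        print(tof_depth_m(phase_shift_rad=1.0, mod_freq_hz=80e6))       # ~0.30 m
        print(structured_light_depth_m(0.075, 580.0, 30.0))             # ~1.45 m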