
    Making SPIFFI SPIFFIER: Upgrade of the SPIFFI instrument for use in ERIS and performance analysis from re-commissioning

    SPIFFI is an AO-fed integral field spectrograph operating as part of SINFONI on the VLT, which will be upgraded and reused as SPIFFIER in the new VLT instrument ERIS. In January 2016, we used new technology developments to perform an early upgrade to optical subsystems in the SPIFFI instrument so that ongoing scientific programs can make use of enhanced performance before ERIS arrives in 2020. We report on the upgraded components and the performance of SPIFFI after the upgrade, including gains in throughput and spatial and spectral resolution. We show results from re-commissioning, highlighting the potential for scientific programs to use the capabilities of the upgraded SPIFFI. Finally, we discuss the additional upgrades for SPIFFIER which will be implemented before it is integrated into ERIS. Comment: 20 pages, 12 figures. Proceedings from SPIE Astronomical Telescopes and Instrumentation 201

    3DTouch: A wearable 3D input device with an optical sensor and a 9-DOF inertial measurement unit

    We present 3DTouch, a novel 3D wearable input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the gap for a 3D input device that is self-contained, mobile, and universally working across various 3D platforms. This paper presents a low-cost solution to designing and implementing such a device. Our approach relies on a relative positioning technique using an optical laser sensor and a 9-DOF inertial measurement unit. 3DTouch is self-contained and designed to work universally on various 3D platforms. The device employs touch input for the benefits of passive haptic feedback and movement stability. Moreover, with touch interaction, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices. We propose a set of 3D interaction techniques including selection, translation, and rotation using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. Modular solutions like 3DTouch open up a whole new design space for interaction techniques to build on. Comment: 8 pages, 7 figures
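The abstract does not spell out the fusion algorithm, but a common way to combine a drifting-but-smooth gyroscope with a noisy-but-drift-free accelerometer is a complementary filter, with the resulting orientation used to rotate the optical sensor's 2D surface displacement into a common frame. The sketch below is an illustrative assumption, not the paper's implementation; all function names and the blend weight are hypothetical.

```python
import math

def complementary_tilt(prev_angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    # Blend the integrated gyro rate (smooth, but drifts) with the
    # accelerometer tilt estimate (noisy, but drift-free).
    gyro_angle = prev_angle + gyro_rate * dt
    accel_angle = math.atan2(accel_x, accel_z)  # tilt from the gravity vector
    return alpha * gyro_angle + (1 - alpha) * accel_angle

def rotate_displacement(dx, dy, angle):
    # Rotate the optical sensor's 2D surface displacement by the fingertip
    # orientation so successive moves accumulate in one reference frame.
    c, s = math.cos(angle), math.sin(angle)
    return (c * dx - s * dy, s * dx + c * dy)
```

With the fingertip level and at rest, the filter reproduces the accelerometer's zero tilt; a 90-degree orientation maps a purely lateral optical displacement onto the other axis.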

    Continuous Modeling of 3D Building Rooftops From Airborne LIDAR and Imagery

    In recent years, a number of mega-cities have provided 3D photorealistic virtual models to support the decision-making processes for maintaining their infrastructure and environment more effectively. 3D virtual city models are static snapshots of the environment, representing the status quo at the time of data acquisition. However, cities are dynamic systems that continuously change over time, so their virtual representations need to be updated regularly and in a timely manner to allow for accurate analysis and the simulated results that decisions are based upon. The concept of "continuous city modeling" is to progressively reconstruct city models by accommodating changes recognized in the spatio-temporal domain, while preserving unchanged structures. However, developing a universal intelligent machine enabling continuous modeling remains a challenging task. This thesis therefore proposes a novel research framework for continuously reconstructing 3D building rooftops using multi-sensor data. To achieve this goal, we first propose a 3D building rooftop modeling method using airborne LiDAR data. The main focus is the implementation of an implicit regularization method that imposes a data-driven building regularity on the noisy boundaries of roof planes for reconstructing 3D building rooftop models. The implicit regularization process is implemented in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). Secondly, we propose a context-based geometric hashing method to align newly acquired image data with existing building models. The novelty is the use of context features to achieve robust and accurate matching results. Thirdly, the existing building models are refined by a newly proposed sequential fusion method. The main advantage of the proposed method is its ability to progressively refine modeling errors frequently observed in LiDAR-driven building models. The refinement process is conducted in the framework of MDL combined with HAT. Markov Chain Monte Carlo (MCMC) coupled with Simulated Annealing (SA) is employed to perform a global optimization. The results demonstrate that the proposed continuous rooftop modeling methods show promise for supporting various critical decisions, not only by reconstructing 3D rooftop models accurately, but also by updating the models using multi-sensor data
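The MDL-with-simulated-annealing combination described above can be illustrated with a toy sketch: a description-length score trades data misfit against model complexity (here, the number of roof planes k), and annealing searches for its global minimum. The objective, constants, and cooling schedule below are illustrative assumptions, not the thesis's actual formulation.

```python
import math
import random

# Toy MDL objective (assumption): description length = residual coding cost
# + model coding cost, as a function of the number of roof planes k.
def description_length(k, n_points=1000, true_k=4, lam=25.0):
    misfit = n_points * math.exp(-k / true_k)  # residual shrinks with more planes
    return misfit + lam * k                    # complexity penalty grows with k

def anneal(k0=1, k_max=20, steps=2000, t0=50.0, seed=7):
    # Simulated annealing over the integer hypothesis k: propose +/-1 moves,
    # always accept improvements, accept worsenings with Boltzmann probability.
    rng = random.Random(seed)
    k, best = k0, k0
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3  # linear cooling schedule
        cand = min(max(k + rng.choice([-1, 1]), 1), k_max)
        delta = description_length(cand) - description_length(k)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            k = cand
        if description_length(k) < description_length(best):
            best = k
    return best
```

The annealer recovers the k that minimizes the description length; in the MDL framing, neither a too-simple nor a too-detailed rooftop hypothesis wins.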

    Cosmic Shears Should Not Be Measured In Conventional Ways

    A long-standing problem in weak lensing is how to construct cosmic shear estimators from galaxy images. Conventional methods average over a single quantity per galaxy to estimate each shear component. We show that any such shear estimator must reduce to a highly nonlinear form when the galaxy image is described by three parameters (pure ellipse), even in the absence of the point spread function (PSF). In the presence of the PSF, we argue that this class of shear estimators likely does not exist. Alternatively, we propose a new way of measuring the cosmic shear: instead of averaging over a single value from each galaxy, we average over two numbers, and then take the ratio to estimate the shear component. In particular, the two numbers correspond to the numerator and denominator that generate the quadrupole moments of the galaxy image in Fourier space, as proposed in Zhang (2008). This yields a statistically unbiased estimate of the shear component. Consequently, measurements of the n-point spatial correlations of the shear fields should also be modified: one needs to take the ratio of two correlation functions to get the desired, unbiased shear correlation. Comment: 13 pages, MNRAS in press. The title has been changed from "Ideal Cosmic Shear Estimators Do Not Exist" to the current one. Instead of showing that conventional/ideal shear estimators do not exist in the presence of PSF, we show in the current version that conventional shear estimators do not exist in convenient form
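The key statistical point, ratio of averages versus average of ratios, can be demonstrated with a toy model. The noise model below is an illustrative assumption, not the paper's galaxy-image formalism: each galaxy yields a numerator n_i and denominator d_i whose ensemble means satisfy E[n] = g * E[d] for true shear g.

```python
import random

def measure_galaxy(g, rng):
    size = rng.uniform(0.8, 1.2)          # intrinsic galaxy-to-galaxy variation
    n = g * size + rng.gauss(0.0, 0.02)   # noisy numerator
    d = size + rng.gauss(0.0, 0.15)       # noisy denominator
    return n, d

rng = random.Random(42)
g_true = 0.03
samples = [measure_galaxy(g_true, rng) for _ in range(100000)]

# Ratio of the two averages: statistically consistent estimate of g,
# since E[n] / E[d] = g.
g_hat = sum(n for n, _ in samples) / sum(d for _, d in samples)

# Average of per-galaxy ratios (the "single quantity per galaxy" route):
# biased, because E[n/d] != E[n]/E[d] when the denominator is noisy.
g_biased = sum(n / d for n, d in samples) / len(samples)
```

Running this, g_hat lands on the input shear while g_biased is systematically offset by the denominator noise, mirroring the paper's argument for taking the ratio after averaging.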

    Intelligent computational techniques and virtual environment for understanding cerebral visual impairment patients

    Cerebral Visual Impairment (CVI) is a medical area concerned with studying the effect of brain damage on the visual field (VF). People with CVI are unable to construct a complete 3-dimensional view of what they see through their eyes in their brain. They therefore have difficulties with mobility, and exhibit behaviours that others find hard to understand, owing to their visual impairment. A branch of Artificial Intelligence (AI) is the simulation of behaviour by building computational models that help explain how people solve problems or why they behave in a certain way. This project describes a novel intelligent system that simulates the navigation problems faced by people with CVI. This will help relatives, friends, and ophthalmologists of CVI patients understand more about their difficulties in navigating their everyday environment. The navigation simulation system is implemented using the Unity3D game engine. Virtual scenes of different living environments are also created using the Unity modelling software. The vision of the avatar in the virtual environment is implemented using a camera provided by the 3D game engine. Given the visual field chart of a CVI patient, the system automatically creates a filter (mask) that mimics the visual defect and places it in front of the avatar's visual field. The filters are created by extracting, classifying, and converting the symbols of the affected areas in the visual field chart to numerical values, which are then converted to textures that mask the vision. Each numeric value represents a level of transparency or opacity according to the severity of the visual defect in that region. The filters represent the vision masks. Unity3D supports physical properties that facilitate representing the VF defects as structures of rays, where the length of each ray depends on the VF defect's numeric value: greater values (a higher percentage of opacity) are represented by shorter rays, while smaller values (a higher percentage of transparency) are represented by longer rays. The lengths of all rays together form the vision map (how far the patient can see). Algorithms for navigation based on the generated rays have been developed to enable the avatar to move around in given virtual environments. The avatar depends on the generated vision map and exhibits different behaviours to simulate the navigation problems of real patients; its navigation behaviour differs from patient to patient according to their different defects. An experiment navigating virtual environments (scenes) using the HTC Vive headset was conducted under different scenarios, designed to use different VF defects within different scenes. The experiment simulates the patient's navigation in virtual environments with static objects (rooms) and in virtual environments with moving objects. The actions of the experiment participants (avoid/bump) match the avatar's under the same scenario. This project has created a system that helps the parents and relatives of CVI patients understand what the patient encounters. It also aids specialists and educators in taking into account the difficulties that patients experience, so that appropriate educational programs can be designed and developed to help each individual patient
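The opacity-to-ray-length mapping described above can be sketched as a simple inverse relation: fully opaque (blind) regions of the VF chart get the shortest rays, fully transparent regions the longest. The linear mapping and the length bounds below are illustrative assumptions; the thesis does not specify the exact function.

```python
def ray_length(opacity, max_length=10.0, min_length=0.5):
    # opacity in [0, 1]: 1.0 = fully blind region (shortest ray),
    # 0.0 = fully transparent region (longest ray).
    # Linear inverse mapping (assumption); bounds are hypothetical units.
    o = min(max(opacity, 0.0), 1.0)
    return max_length - o * (max_length - min_length)

# Vision map: one ray length per VF-chart region
vision_map = [ray_length(o) for o in (0.0, 0.25, 1.0)]
```

Casting one ray per region with these lengths gives the avatar a per-direction "how far can I see" map that drives its avoid/bump behaviour.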

    Near-IR 2D-Spectroscopy of the 4''x 4'' region around the Active Galactic Nucleus of NGC1068 with ISAAC/VLT

    We present new near-IR long-slit spectroscopic data obtained with ISAAC on VLT/ANTU (ESO/Paranal) of the central 4''x 4'' region surrounding the central engine of NGC1068. Brackett gamma (Bg) and H2 emission line maps and line profile grids are produced, at a spatial resolution of ~0.5'' and a spectral resolution of 35 km/s. Two conspicuous knots of H2 emission are detected at about 1'' on each side of the central engine along PA=90deg, with a projected velocity difference of 140 km/s: this velocity jump was interpreted in Alloin et al. (2001) as the signature of a rotating disk of molecular material. Another knot with both H2 and Bg emission is detected to the North of the central engine, close to the radio source C where the small-scale radio jet is redirected, and close to the brightest [OIII] cloud NLR-B. At the achieved spectral resolution, the H2 emission line profiles appear highly asymmetric, with their low-velocity wing systematically more extended than their high-velocity wing. A simple way to account for the changes of the H2 line profiles (peak shift with respect to the systemic velocity, width, asymmetry) over the entire 4''x 4'' region is to consider that a radial outflow is superimposed on the emission of the rotating molecular disk. We present a model of such a kinematical configuration and compare our predicted H2 emission profiles to the observed ones. Comment: 15 pages, 11 figures, accepted for publication in A&
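The rotation-plus-outflow configuration invoked above has a standard line-of-sight projection: circular motion projects as cos(phi) along the disk azimuth, radial motion as sin(phi), both scaled by sin(inclination). The sketch below is a minimal kinematical toy, not the paper's model; the speeds and inclination are hypothetical placeholders, chosen only so the rotation term alone reproduces a symmetric velocity jump between the two sides of the disk.

```python
import math

def v_los(phi, v_rot=70.0, v_out=40.0, incl=math.radians(40.0)):
    # Line-of-sight velocity for a thin inclined disk (assumption):
    # phi is the azimuth in the disk plane from the line of nodes.
    rotation = v_rot * math.cos(phi)  # circular motion projects as cos(phi)
    outflow = v_out * math.sin(phi)   # radial outflow projects as sin(phi)
    return (rotation + outflow) * math.sin(incl)

# Projected velocity difference between the two sides of the major axis,
# where only the rotation term contributes:
delta_v = v_los(0.0) - v_los(math.pi)
```

The outflow term vanishes on the major axis but shifts and skews the profiles elsewhere, which is qualitatively the behaviour the paper uses to explain the observed peak shifts and asymmetries.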

    Automated Classification of Airborne Laser Scanning Point Clouds

    Making sense of the physical world has always been at the core of mapping. Until recently, this has depended on the human eye. Using airborne lasers, it has become possible to quickly "see" more of the world in many more dimensions. The resulting enormous point clouds serve as data sources for applications far beyond the original mapping purposes, ranging from flood protection and forestry to threat mitigation. Processing these large quantities of data requires novel methods. In this contribution, we develop models to automatically classify ground cover and soil types. Following the logic of machine learning, we critically review the advantages of supervised and unsupervised methods. Focusing on decision trees, we improve accuracy by including beam vector components and using a genetic algorithm. We find that our approach delivers consistently high-quality classifications, surpassing classical methods
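The genetic-algorithm step mentioned above is typically used to search for the feature subset (e.g., whether to include the beam vector components) that maximizes classifier accuracy. The sketch below shows that search pattern with a toy fitness function standing in for held-out classification accuracy; the feature indices and fitness are illustrative assumptions, not the paper's setup.

```python
import random

# Hypothetical "informative" feature indices (e.g., height, intensity,
# beam-vector z-component); the fitness rewards including them and
# lightly penalizes subset size, as a stand-in for validation accuracy.
INFORMATIVE = {0, 2, 5}

def fitness(mask):
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in INFORMATIVE)
    return hits - 0.1 * sum(mask)

def ga_select(n_features=8, pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)     # parents
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]           # one-point crossover
            i = rng.randrange(n_features)
            if rng.random() < 0.2:
                child[i] ^= 1                   # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = ga_select()
```

In the paper's setting, `fitness` would instead train a decision tree on the masked features and return its validation accuracy.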

    Tracking and Fusion Methods for Extended Targets Parameterized by Center, Orientation, and Semi-axes

    Improvements in sensor technology, e.g., the development of automotive Radio Detection and Ranging (RADAR) and Light Detection and Ranging (LIDAR) sensors that provide a higher level of detail about the sensor's environment, have introduced new opportunities but also new challenges for target tracking. In classic target tracking, targets are assumed to be points. However, this assumption is no longer valid if targets occupy more than one sensor resolution cell, creating the need for extended target models that capture the shape in addition to the kinematic parameters. Different shape models are possible, and this thesis focuses on an elliptical shape, parameterized by center, orientation, and semi-axis lengths; this parameterization can be used to model rectangles as well. Furthermore, this thesis is concerned with multi-sensor fusion for extended targets, which can improve target tracking by combining information gathered from different sensors or perspectives. We also consider estimation for extended targets: to account for uncertainties, the target is modeled by a probability density, from which a so-called point estimate must be extracted. Extended target tracking presents a variety of challenges due to the spatial extent, even for basic shapes like ellipses and rectangles. Among these challenges is the choice of the target model, e.g., how the measurements are distributed across the shape. Additional challenges arise for sensor fusion, as it is unclear how best to account for geometric properties when combining two extended targets. Finally, the extent needs to be involved in the estimation. Traditional methods often use simple uniform distributions across the shape, which do not properly portray reality, while more complex methods require optimization techniques or large amounts of data. In addition, traditional estimation uses metrics such as the Euclidean distance between state vectors. However, these might no longer be valid because they do not consider the geometric properties of the targets' shapes: e.g., rotating an ellipse by 180 degrees results in the same ellipse, but the Euclidean distance between the parameter vectors is not 0. The same holds in multi-sensor fusion, where simply combining the corresponding elements of the state vectors can lead to counter-intuitive fusion results. In this work, we compare different elliptic trackers and discuss more complex measurement distributions across the shape's surface or contour. Furthermore, we discuss the problems that can occur when fusing extended target estimates from different sensors and how to handle them by providing a transformation into a special density. We then discuss how a different metric, namely the Gaussian Wasserstein (GW) distance, can be used to improve target estimation. We define an estimator and propose an approximation based on an extension of the square root distance. It can be applied to the posterior densities of the aforementioned trackers to incorporate the unique properties of ellipses in the estimation process. We also discuss how this can be applied to rectangular targets. Finally, we evaluate and discuss our approaches, showing the benefits of more complex target models in simulations and on real data, and demonstrating our estimation and fusion approaches against classic methods on simulated data.
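The 180-degree argument above can be made concrete with the Gaussian Wasserstein distance: represent an ellipse (center c, orientation theta, semi-axes a, b) as a Gaussian with mean c and covariance R(theta) diag(a^2, b^2) R(theta)^T, and compare Gaussians instead of parameter vectors. The sketch below is a minimal illustration under that standard correspondence (not the thesis's full estimator), using the closed-form square root of a 2x2 SPD matrix.

```python
import math

def ellipse_cov(theta, a, b):
    # Covariance whose 1-sigma contour is the ellipse:
    # R(theta) * diag(a^2, b^2) * R(theta)^T
    c, s = math.cos(theta), math.sin(theta)
    d1, d2 = a * a, b * b
    return [[c * c * d1 + s * s * d2, c * s * (d1 - d2)],
            [c * s * (d1 - d2), s * s * d1 + c * c * d2]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sqrtm2(M):
    # Closed-form square root of a 2x2 SPD matrix:
    # sqrt(M) = (M + sqrt(det M) * I) / sqrt(tr M + 2 * sqrt(det M))
    tau = M[0][0] + M[1][1]
    s = math.sqrt(M[0][0] * M[1][1] - M[0][1] * M[1][0])
    t = math.sqrt(tau + 2 * s)
    return [[(M[0][0] + s) / t, M[0][1] / t],
            [M[1][0] / t, (M[1][1] + s) / t]]

def gw_distance(m1, S1, m2, S2):
    # Gaussian Wasserstein distance between N(m1, S1) and N(m2, S2):
    # d^2 = |m1 - m2|^2 + tr(S1 + S2 - 2 * sqrt(sqrt(S2) S1 sqrt(S2)))
    R = sqrtm2(S2)
    inner = sqrtm2(mat_mul(mat_mul(R, S1), R))
    tr = (S1[0][0] + S1[1][1] + S2[0][0] + S2[1][1]
          - 2 * (inner[0][0] + inner[1][1]))
    d2 = (m1[0] - m2[0]) ** 2 + (m1[1] - m2[1]) ** 2 + max(tr, 0.0)
    return math.sqrt(d2)

# Same ellipse, orientations differing by 180 degrees:
S_a = ellipse_cov(0.3, 4.0, 2.0)
S_b = ellipse_cov(0.3 + math.pi, 4.0, 2.0)
d = gw_distance([0.0, 0.0], S_a, [0.0, 0.0], S_b)
```

Here `d` is (numerically) zero even though the parameter vectors differ in orientation by pi, whereas shifting only the center leaves the shape term zero and reduces the GW distance to the plain center offset.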