9,865 research outputs found

    Maximum information photoelectron metrology

    Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. To obtain the maximum information from angle-resolved photoionization experiments, it is desirable to record the full 3D photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy agree well with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry breaking in certain regions of momentum space, revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum-information measurements of this coherent observable.

    Fast Color Space Transformations Using Minimax Approximations

    Color space transformations are frequently used in image processing, graphics, and visualization applications. In many cases, these transformations are complex nonlinear functions, which prohibits their use in time-critical applications. In this paper, we present a new approach called Minimax Approximations for Color-space Transformations (MACT). We demonstrate MACT on three commonly used color space transformations. Extensive experiments on a large and diverse image set and comparisons with well-known multidimensional lookup table interpolation methods show that MACT achieves an excellent balance among four criteria: ease of implementation, memory usage, accuracy, and computational speed.
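The abstract does not reproduce MACT's actual approximations, but the underlying idea, replacing an expensive nonlinear transform with a cheap polynomial of near-minimax quality, can be illustrated. A minimal sketch, assuming numpy: interpolating the cube root used in the XYZ-to-CIELAB transform at Chebyshev nodes, which closely tracks the true minimax fit (a full Remez exchange would refine it). The function name and the interval are illustrative choices, not from the paper.

```python
import numpy as np

def near_minimax_poly(f, a, b, degree):
    """Interpolate f at Chebyshev nodes on [a, b].

    Chebyshev-node interpolation closely approximates the true minimax
    (equioscillating) polynomial; a Remez exchange would refine it further.
    """
    k = np.arange(degree + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (degree + 1)))   # nodes on [-1, 1]
    x = 0.5 * (a + b) + 0.5 * (b - a) * nodes                  # mapped to [a, b]
    return np.polynomial.polynomial.Polynomial.fit(x, f(x), degree)

# Example: the cube root used in the XYZ -> CIELAB transform, on [0.1, 1].
p = near_minimax_poly(np.cbrt, 0.1, 1.0, 5)

t = np.linspace(0.1, 1.0, 10_000)
max_err = float(np.max(np.abs(p(t) - np.cbrt(t))))   # worst-case error on the interval
```

Evaluating the degree-5 polynomial needs only a handful of multiply-adds per pixel, which is the kind of trade-off (small bounded error for large speedup) the paper's four criteria weigh.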

    Improved motion segmentation based on shadow detection

    In this paper, common colour models for background subtraction and problems related to their utilisation are discussed. A novel approach to represent chrominance information in a form more suitable for robust background modelling and shadow suppression is proposed. Our method relies on the ability to represent colours in terms of a 3D-polar coordinate system with saturation independent of the brightness function; specifically, we build upon an Improved Hue, Luminance, and Saturation (IHLS) space. A further peculiarity of the approach is that we deal with the problem of unstable hue values at low saturation by modelling the hue-saturation relationship using saturation-weighted hue statistics. The effectiveness of the proposed method is shown in an experimental comparison with approaches based on RGB, normalised RGB, and HSV.
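Saturation-weighted hue statistics can be sketched with circular statistics: each pixel's hue contributes a vector whose length is its saturation, so near-achromatic pixels with unstable hue carry almost no weight. A minimal illustration of this idea, assuming numpy; it is not the paper's exact formulation.

```python
import numpy as np

def saturation_weighted_hue_mean(hue_deg, saturation):
    """Circular mean of hue, weighted by saturation.

    Each hue angle contributes a vector scaled by its saturation, so
    low-saturation pixels (whose hue is numerically unstable) barely
    influence the estimate.
    """
    h = np.radians(np.asarray(hue_deg, dtype=float))
    s = np.asarray(saturation, dtype=float)
    x = np.sum(s * np.cos(h))
    y = np.sum(s * np.sin(h))
    mean = np.degrees(np.arctan2(y, x)) % 360.0
    confidence = np.hypot(x, y) / np.sum(s)   # 1.0 = hues tightly clustered
    return float(mean), float(confidence)

# Hues straddling the 0/360 wrap-around average to ~0, not ~180.
mean_h, conf_h = saturation_weighted_hue_mean([350.0, 10.0], [1.0, 1.0])

# A near-achromatic outlier (saturation 0.01) barely shifts the mean.
mean_2, _ = saturation_weighted_hue_mean([0.0, 180.0], [1.0, 0.01])
```

Note the ordinary arithmetic mean of 350° and 10° would give 180°, the opposite hue; the vector formulation handles the wrap-around for free.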

    Collision-free inverse kinematics of the redundant seven-link manipulator used in a cucumber picking robot

    The paper presents results of research on an inverse kinematics algorithm that has been used in a functional model of a cucumber-harvesting robot consisting of a redundant P6R manipulator. In a first, generic approach, the inverse kinematics problem was reformulated as a non-linear programming problem and solved with a Genetic Algorithm (GA). Although solutions were easily obtained, the considerable calculation time needed to solve the problem prevented on-line implementation. To circumvent this problem, a second, less generic approach was developed, consisting of a mixed numerical-analytic solution of the inverse kinematics problem that exploits the particular structure of the P6R manipulator. With the latter approach, calculation time was considerably reduced. During the early stages of the cucumber-harvesting project, this inverse kinematics algorithm was used off-line to evaluate the ability of the robot to harvest cucumbers using 3D information obtained from a cucumber crop in a real greenhouse. Thereafter, the algorithm was employed successfully in a functional model of the cucumber harvester to determine whether cucumbers were hanging within the reachable workspace of the robot and to determine a collision-free harvest posture to be used for motion control of the manipulator during harvesting. The inverse kinematics algorithm is presented and demonstrated with some illustrative examples of cucumber harvesting, both off-line during the design phase and on-line during a field test.
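The GA formulation of inverse kinematics (minimise end-effector position error over joint angles) can be illustrated on a toy problem. A minimal sketch, assuming numpy, for a hypothetical planar 3R arm rather than the paper's P6R manipulator; the link lengths, population size, selection scheme, and mutation schedule are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
L = np.array([1.0, 0.8, 0.5])   # link lengths of a hypothetical planar 3R arm

def forward(theta):
    """End-effector (x, y) for joint angles theta of shape (..., 3)."""
    angles = np.cumsum(theta, axis=-1)
    return np.stack([np.sum(L * np.cos(angles), axis=-1),
                     np.sum(L * np.sin(angles), axis=-1)], axis=-1)

def ga_ik(target, pop=200, gens=150, sigma0=0.3):
    """GA inverse kinematics: evolve joint vectors to minimise position error."""
    P = rng.uniform(-np.pi, np.pi, size=(pop, 3))
    for g in range(gens):
        err = np.linalg.norm(forward(P) - target, axis=-1)
        elite = P[np.argsort(err)[: pop // 5]]               # truncation selection
        idx = rng.integers(0, len(elite), size=(pop, 2))
        children = elite[idx].mean(axis=1)                   # blend crossover
        children += rng.normal(0.0, sigma0 * 0.97 ** g, children.shape)  # decaying mutation
        P = np.vstack([elite, children])[:pop]               # elitism keeps the best
    err = np.linalg.norm(forward(P) - target, axis=-1)
    best = int(np.argmin(err))
    return P[best], float(err[best])

theta_best, err_best = ga_ik(np.array([1.2, 0.8]))
```

Because the arm is redundant (three joints, two target coordinates), many joint vectors solve the same target; the GA simply returns one of them, which is also why the paper could layer extra criteria such as collision avoidance on top of the same formulation.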

    Head motion tracking in 3D space for drivers

    This work presents a computer vision module capable of tracking the head motion of drivers in 3D space. The module was designed to be part of an integrated system for analysing driver behaviour, replacing the costly and often cumbersome equipment and accessories otherwise used to track a driver's head. The vision module operates in five stages: image acquisition, head detection, facial feature extraction, facial feature detection, and 3D reconstruction of the tracked facial features. Firstly, in the image acquisition stage, two synchronized monochromatic cameras are used to set up a stereoscopic system that later simplifies the 3D reconstruction of the head. Secondly, the driver's head is detected to reduce the size of the search space for finding facial features. Thirdly, after obtaining a pair of images from the two cameras, the facial feature extraction stage combines image processing algorithms and epipolar geometry to track the chosen features, which in our case are the two eyes and the tip of the nose. Fourthly, in a detection stage, the 2D tracking results are consolidated by combining a neural network algorithm with the geometry of the human face to discriminate erroneous results. Finally, in the last stage, the 3D model of the head is reconstructed from the 2D tracking results (i.e. tracking performed in each image independently) and the calibration of the stereo pair. In addition, 3D measurements along the six axes of motion known as the degrees of freedom of the head (longitudinal, vertical, lateral, roll, pitch, and yaw) are obtained. The results are validated by running our algorithms on pre-recorded video sequences of drivers using a driving simulator, and comparing the resulting 3D measurements with those provided by a motion tracking device installed on the driver's head.
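The final reconstruction step, recovering a 3D point from its two image projections and the stereo calibration, is standard linear (DLT) triangulation. A minimal sketch, assuming numpy; the camera matrices and the test point below are toy values, not from the thesis.

```python
import numpy as np

def project(P, X):
    """Project 3D point X with a 3x4 camera matrix P to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    Each observed pixel (u, v) yields two linear constraints on the
    homogeneous point X; the solution is the null vector of the stacked
    4x4 system, taken from the SVD.
    """
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # right singular vector of smallest value
    return X[:3] / X[3]

# Toy stereo rig: identical intrinsics, second camera shifted along x (baseline).
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.3], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free projections the point is recovered exactly up to numerical precision; with real 2D tracking noise the same least-squares formulation returns the best linear estimate, which is why consolidating the 2D tracks first (the fourth stage above) pays off.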

    Spatial relationship between bone formation and mechanical stimulus within cortical bone: Combining 3D fluorochrome mapping and poroelastic finite element modelling

    Bone is a dynamic tissue and adapts its architecture in response to biological and mechanical factors. Here we investigate how cortical bone formation is spatially controlled by the local mechanical environment in the murine tibia axial loading model (C57BL/6). We obtained the 3D locations of new bone formation by performing 'slice and view' 3D fluorochrome mapping of the entire bone, and compared these sites with the regions of high fluid velocity or strain energy density estimated using a finite element model validated against a bone surface strain map acquired ex vivo using digital image correlation. For the comparison, 2D maps of the average bone formation and peak mechanical stimulus across the entire endosteal and periosteal surface of the tibial cortex were created. Results showed that bone formed on the periosteal and endosteal surfaces in regions of high fluid flow, whereas peak strain energy density predicted bone formation only periosteally. Understanding how the mechanical stimulus spatially relates to the regions of cortical bone formation in response to loading will eventually guide loading-regime therapies to maintain or restore bone mass at specific sites in skeletal pathologies.

    Automatic segmentation and reconstruction of traffic accident scenarios from mobile laser scanning data

    Virtual reconstruction of historic sites, planning of restorations and attachments of new building parts, as well as forest inventory are a few examples of fields that benefit from the application of 3D surveying data. Compared with the 2D photo-based documentation and manual distance measurements used originally, the 3D information obtained from multi-camera and laser scanning systems brings a noticeable improvement in surveying times and in the amount of generated 3D information. The 3D data allow detailed post-processing and better visualization of all relevant spatial information. Yet, extracting the required information from the raw scan data and generating usable visual output still requires time-consuming, complex, user-driven data processing with the commercially available 3D software tools. In this context, automatic object recognition from 3D point cloud and depth data has been discussed in many different works. The developed tools and methods, however, usually focus only on a certain kind of object or on the detection of learned invariant surface shapes. Although the resulting methods are applicable for certain data segmentation tasks, they are not necessarily suitable for arbitrary tasks due to the varying requirements of the different fields of research. This thesis presents a more widely applicable solution for automatic scene reconstruction from 3D point clouds, targeting street scenarios, specifically the task of traffic accident scene analysis and documentation. The data, obtained by sampling the scene with a mobile scanning system, are evaluated, segmented, and finally used to generate detailed 3D information of the scanned environment. To realize this aim, this work adapts and validates various existing approaches to laser scan segmentation for accident-relevant scene information, including road surfaces and markings, vehicles, walls, trees, and other salient objects.
    The approaches are evaluated regarding their suitability and limitations for the given tasks, as well as the possibilities of combining them with other procedures. The knowledge obtained is used to develop new algorithms and procedures that allow a satisfactory segmentation and reconstruction of the scene, corresponding to the available sampling densities and precisions. Besides the segmentation of the point cloud data, this thesis presents different visualization and reconstruction methods to achieve a wider range of possible applications of the developed system for data export and utilization in different third-party software tools.
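A common building block for segmenting road surfaces out of street-scene point clouds, not necessarily the thesis's own pipeline, is a RANSAC plane fit: repeatedly hypothesise a plane from three random points and keep the hypothesis with the most inliers. A minimal sketch, assuming numpy; the tolerance and the synthetic scene are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def ransac_plane(points, n_iter=200, tol=0.05):
    """RANSAC plane fit: returns (normal, d, inlier_mask) with n.p + d ~ 0."""
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = (None, None)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                              # degenerate (collinear) sample
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < tol    # distance of every point to plane
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers

# Synthetic scene: a flat "road" near z = 0 plus scattered clutter above it.
ground = np.column_stack([rng.uniform(-10, 10, 500),
                          rng.uniform(-10, 10, 500),
                          rng.normal(0.0, 0.005, 500)])
clutter = np.column_stack([rng.uniform(-10, 10, 100),
                           rng.uniform(-10, 10, 100),
                           rng.uniform(0.5, 3.0, 100)])
normal, d, mask = ransac_plane(np.vstack([ground, clutter]))
```

Removing the inliers of the dominant plane leaves the off-ground points (vehicles, walls, trees), which is typically the first split such a segmentation pipeline performs.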

    Probing Ultrafast Dynamics with Time-resolved Multi-dimensional Coincidence Imaging: Butadiene

    Time-resolved coincidence imaging of photoelectrons and photoions represents the most complete experimental measurement of ultrafast excited-state dynamics: a multi-dimensional measurement for a multi-dimensional problem. Here we present experimental data from recent coincidence imaging experiments, undertaken with the aim of gaining insight into the complex ultrafast excited-state dynamics of 1,3-butadiene initiated by absorption of 200 nm light. We discuss photoion and photoelectron mappings of increasing dimensionality, and focus particularly on the time-resolved photoelectron angular distributions (TRPADs), expected to be a sensitive probe of the electronic evolution of the excited state and to provide significant information beyond the time-resolved photoelectron spectrum (TRPES). Complex temporal behaviour is observed in the TRPADs, revealing their sensitivity to the dynamics while also emphasising the difficulty of interpreting these complex observables. From the experimental data some details of the wavepacket dynamics are discerned relatively directly, and we make some tentative comparisons with existing ab initio calculations in order to gain deeper insight into the experimental measurements; finally, we sketch out some considerations for taking this comparison further in order to bridge the gap between experiment and theory. Comment: 18 pages, 10 figures. Pre-print of JMO submission.

    Identification of snow avalanche release areas and flow characterization based on seismic data studies

    The main objectives of this PhD thesis are the identification of snow avalanche release areas and the characterization of the snow avalanche flow regime using seismic data. We aim to develop new methods for the detailed study of the seismic signal generated by snow avalanches, considered as a moving seismic source. One of the main foundations for achieving these objectives is the use of widely tested seismological methods designed for the study of single seismic sources. To adapt them to the record of a moving source generating seismic vibrations (snow avalanches), we apply windowing methods so that the seismic signal can be processed in small portions. Over each time window, we study the polarization of the ground particle motion (3D) and obtain the frequency content (Power Spectral Density). Sections linked to the relative position of the avalanche front with respect to the seismic sensor can be identified in the seismic signal produced by snow avalanches. The evolution of the total envelope of the seismic signal (At(t)) allows us to establish a criterion for identifying the different sections based on the seismic signal amplitude. Furthermore, we are able to identify the moment when the snow avalanche mass starts to move (the first vibrations generated by the avalanche) using an appropriate configuration of the STA/LTA algorithm adapted for our purposes. For each section of the seismic signal, we apply a different methodology to obtain information about the snow avalanche flow in that part of the avalanche. We seek to identify the snow avalanche release area by analysing the signal produced at the beginning of the snow avalanche mass movement (Signal Onset - SON).
    The study of the polarization of the ground particle motion (3D, ZNE coordinate system) makes it possible to identify the area where the first vibrations are generated and, by extension, to link it to the start of the snow avalanche mass movement. By analysing the seismic signal section corresponding to the snow avalanche mass flowing over the seismic sensor (Signal Over the sensor - SOV), we characterize the snow avalanche flow and identify the different regions in the snow avalanche body. The seismic signal is rotated to the QLT coordinate system in order to better link the information in the seismic signal to the flow progression direction. Work on this PhD thesis has also enabled us to establish a homogenization of the seismic data processing steps for studying snow avalanches. We automated these processes for all the data acquired at the Vallée de la Sionne test site between the winter seasons of 2013 and 2020, thereby creating a database of more than 420 seismic events (source: automatic data acquisition system activations at VDLS). From all these events we identified the snow avalanches and tested the procedures designed in this thesis for release area identification and flow characterization. Although the designed methods are subject to some limitations, we consider that the contribution of novel approaches for the study of the seismic signal produced by moving sources has been demonstrated. The release area identification has a very good success rate (78%) in a supervised application. The automated execution could be improved with a better isolation process for the SON section of the seismic signal. The method for flow characterization provides new information regarding the interaction of the flow with the ground and with the snow cover.
We consider that future studies in this direction may yield information about the snow avalanche basal friction as an indirect measurement.
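The STA/LTA detector used above to pick the onset of avalanche motion is a standard seismological trigger: the ratio of a short-term to a long-term average of the signal energy spikes when an emergent event begins. A minimal sketch of the classic windowed version, assuming numpy; the window lengths, threshold, and synthetic record are illustrative, not the thesis's configuration.

```python
import numpy as np

def sta_lta(signal, fs, sta_win=0.5, lta_win=10.0):
    """Classic STA/LTA ratio over the squared signal (energy).

    Returned index i corresponds to input sample (lta_win * fs - 1) + i,
    i.e. both averaging windows end at that sample.
    """
    ns, nl = int(sta_win * fs), int(lta_win * fs)
    cf = np.asarray(signal, dtype=float) ** 2
    csum = np.concatenate([[0.0], np.cumsum(cf)])
    sta = (csum[ns:] - csum[:-ns]) / ns      # short-term average
    lta = (csum[nl:] - csum[:-nl]) / nl      # long-term average
    m = min(len(sta), len(lta))              # align both to end at the same sample
    return sta[-m:] / (lta[-m:] + 1e-12)

def trigger_onset(ratio, threshold=3.0):
    """Index of the first ratio sample exceeding the threshold (or None)."""
    idx = np.flatnonzero(ratio > threshold)
    return int(idx[0]) if idx.size else None

# Synthetic record: 20 s of background noise, then a 5 s higher-amplitude event.
rng = np.random.default_rng(2)
fs = 100
noise = 0.1 * rng.standard_normal(20 * fs)
event = 1.0 * rng.standard_normal(5 * fs)
ratio = sta_lta(np.concatenate([noise, event]), fs)
trig = trigger_onset(ratio)
onset_sample = int(10.0 * fs) - 1 + trig     # map ratio index back to input samples
```

The short window reacts quickly to the emergent energy while the long window tracks the slowly varying background, so the ratio crosses the threshold within a few samples of the event onset (sample 2000 in this synthetic record).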

    Acceleration Techniques for Photo Realistic Computer Generated Integral Images

    The research work presented in this thesis has approached the task of accelerating the generation of photo-realistic integral images produced by integral ray tracing. Ray tracing is a computationally expensive algorithm, which spawns one or more rays through each pixel of the image into the space containing the scene; ray tracing integral images therefore consumes more processing time than rendering conventional images. The unique characteristics of the 3D integral camera model have been analysed, and it has been shown that coherency aspects different from those of normal ray tracing can be exploited to accelerate the generation of photo-realistic integral images. The image-space coherence has been analysed, describing the relation between rays and projected shadows in the rendered scene. The shadow cache algorithm has been adapted to minimise shadow intersection tests in integral ray tracing, since shadow intersection tests make up the majority of the intersection tests in ray tracing. Novel pixel-tracing styles are developed specifically for integral ray tracing to improve the image-space coherence and the performance of the shadow cache algorithm. Acceleration of photo-realistic integral image generation using the image-space coherence between shadows and rays has been achieved with up to 41% time saving. It has also been shown that applying the new pixel-tracing styles does not affect the scalability of integral ray tracing running on parallel computers. A novel integral reprojection algorithm has been developed through geometrical analysis of integral image generation in order to exploit the temporal-spatial coherence between integral frames. A new derivation of the integral projection matrix for projecting points through an axial model of a lenticular lens has been established. Rapid generation of 3D photo-realistic integral frames has been achieved at a speed four times faster than normal generation.
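The shadow cache idea, remembering the last occluder found for a light so that coherent shadow rays from neighbouring pixels are usually resolved by a single intersection test, can be sketched independently of the integral-imaging specifics. A minimal illustration with sphere occluders, assuming numpy; all names are hypothetical and the scene is a toy.

```python
import numpy as np

def ray_hits_sphere(origin, direction, center, radius, max_t):
    """True if origin + t * direction (0 < t < max_t) hits the sphere.

    `direction` is assumed to be unit length.
    """
    oc = origin - center
    b = oc @ direction
    c = oc @ oc - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return False
    t = -b - np.sqrt(disc)                   # nearer intersection distance
    return 1e-6 < t < max_t

class ShadowCache:
    """Per-light cache of the last occluder found.

    Shadow rays from neighbouring pixels tend to be blocked by the same
    object, so testing the cached occluder first usually answers the
    query with one intersection test instead of a full scene traversal.
    """
    def __init__(self):
        self.last = None                     # index of last blocking sphere

    def in_shadow(self, origin, direction, max_t, spheres):
        if self.last is not None and ray_hits_sphere(
                origin, direction, *spheres[self.last], max_t):
            return True                      # cache hit: one test sufficed
        for i, (c, r) in enumerate(spheres):
            if i != self.last and ray_hits_sphere(origin, direction, c, r, max_t):
                self.last = i                # remember the new occluder
                return True
        self.last = None                     # unoccluded: invalidate cache
        return False

spheres = [(np.array([0.0, 0.0, 5.0]), 1.0), (np.array([3.0, 0.0, 5.0]), 1.0)]
cache = ShadowCache()
origin = np.zeros(3)
hit1 = cache.in_shadow(origin, np.array([0.0, 0.0, 1.0]), 10.0, spheres)  # fills cache
hit2 = cache.in_shadow(origin, np.array([0.0, 0.0, 1.0]), 10.0, spheres)  # cache hit
miss = cache.in_shadow(origin, np.array([1.0, 0.0, 0.0]), 10.0, spheres)  # resets cache
```

The thesis's pixel-tracing styles aim to order pixels so that consecutive shadow rays stay coherent, which is precisely what keeps a cache like this hot.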