
    Defocusing digital particle image velocimetry and the three-dimensional characterization of two-phase flows

    Defocusing digital particle image velocimetry (DDPIV) is the natural extension of planar PIV techniques to the third spatial dimension. In this paper we give details of the defocusing optical concept by which scalar and vector information can be retrieved within large volumes. The optical model and computational procedures are presented with the specific purpose of mapping the number density, the size distribution, the associated local void fraction, and the velocity of bubbles or particles in two-phase flows. Every particle or bubble is characterized in terms of size and spatial coordinates, which are used to compute a true three-component velocity field by spatial three-dimensional cross-correlation. The spatial resolution and uncertainty limits are established through numerical simulations, and the performance of the DDPIV technique is established in terms of number density and void fraction. Finally, the velocity evaluation methodology, using the spatial cross-correlation technique, is described and discussed in terms of velocity accuracy.
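
    The velocity step the abstract describes reduces to locating the peak of a three-dimensional cross-correlation between two voxelized particle fields. Below is a minimal Python sketch of that idea, assuming particle coordinates from two exposures have already been recovered; the function names and grid handling are illustrative, not the authors' implementation.

```python
import numpy as np

def voxelize(points, shape, lo, hi):
    """Bin 3D particle coordinates (N x 3 array) into a voxel field."""
    grid = np.zeros(shape)
    dims = np.asarray(shape)
    idx = ((points - lo) / (hi - lo) * (dims - 1)).round().astype(int)
    idx = np.clip(idx, 0, dims - 1)
    for i, j, k in idx:
        grid[i, j, k] += 1.0
    return grid

def mean_displacement(vol_a, vol_b):
    """Peak of the FFT-based 3D cross-correlation between two exposures
    gives the dominant particle displacement in voxel units."""
    corr = np.fft.ifftn(np.fft.fftn(vol_a).conj() * np.fft.fftn(vol_b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past N/2 correspond to negative displacements (FFT wrap-around).
    return np.array([p - n if p > n // 2 else p
                     for p, n in zip(peak, corr.shape)])
```

    Dividing the peak displacement by the exposure interval and the voxel pitch converts it to velocity; in practice the correlation is evaluated per interrogation sub-volume to resolve a full three-component velocity field.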

    A method for three-dimensional particle sizing in two-phase flows

    A method is devised for true three-dimensional (3D) particle sizing in two-phase systems. Based on a ray-optics approximation of the Mie scattering theory for spherical particles, and under given assumptions, the principle is applicable to intensity data from scatterers within arbitrary interrogation volumes. It requires knowledge of each particle's 3D location and intensity, and of the spatial distribution of the incident light intensity throughout the measurement volume. The new methodology is particularly suited for Lagrangian measurements: we demonstrate its use with the defocusing digital particle image velocimetry technique, a 3D measurement technique that provides the location, intensity, and velocity of particles in large volume domains. We provide a method to characterize the volumetric distribution of the incident illumination, and we assess the size measurement uncertainty experimentally.
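
    Under a ray-optics approximation, the light scattered by a spherical particle scales with the square of its diameter and with the local incident intensity, so sizing amounts to inverting I_s ≈ C · I_0(x) · d². A minimal sketch under that assumption; the Gaussian-sheet illumination model and the calibration constant below are placeholders for illustration, not the paper's measured volumetric distribution.

```python
import numpy as np

def particle_diameter(i_scattered, position, incident_map, calib_const):
    """Ray-optics sizing: scattered intensity scales with the square of
    the diameter and with the local incident intensity, so
        d = sqrt(I_scattered / (C * I_incident(x))).
    incident_map is a callable giving incident intensity at a 3D point;
    calib_const C would come from particles of known size."""
    i0 = incident_map(position)
    return np.sqrt(i_scattered / (calib_const * i0))

# Illustrative Gaussian light-sheet model (an assumption, not the
# paper's characterized illumination field).
incident = lambda x: np.exp(-0.5 * (x[2] / 0.5) ** 2)
d = particle_diameter(i_scattered=180.0,
                      position=np.array([1.0, 2.0, 0.1]),
                      incident_map=incident, calib_const=4.0e3)
```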

    Heteroscedastic Gaussian processes for uncertainty modeling in large-scale crowdsourced traffic data

    Accurately modeling traffic speeds is a fundamental part of efficient intelligent transportation systems. Nowadays, with the widespread deployment of GPS-enabled devices, it has become possible to crowdsource the collection of speed information to road users (e.g. through mobile applications or dedicated in-vehicle devices). Despite its rather wide spatial coverage, crowdsourced speed data also brings very important challenges, such as the highly variable measurement noise in the data due to a variety of driving behaviors and sample sizes. When not properly accounted for, this noise can severely compromise any application that relies on accurate traffic data. In this article, we propose the use of heteroscedastic Gaussian processes (HGP) to model the time-varying uncertainty in large-scale crowdsourced traffic data. Furthermore, we develop an HGP conditioned on sample size and traffic regime (SRC-HGP), which makes use of sample size information (probe vehicles per minute) as well as previously observed speeds in order to more accurately model the uncertainty in observed speeds. Using 6 months of crowdsourced traffic data from Copenhagen, we empirically show that the proposed heteroscedastic models produce significantly better predictive distributions than current state-of-the-art methods for both speed imputation and short-term forecasting tasks.
    Comment: 22 pages, Transportation Research Part C: Emerging Technologies (Elsevier)
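
    The core idea, stripped of the authors' full SRC-HGP specification, is a GP whose observation noise varies per data point: a speed averaged over n_i probe vehicles can be modeled with variance shrinking like 1/n_i. A minimal numpy sketch under that simplifying assumption (the RBF kernel and the 1/n_i noise model are illustrative choices, not the paper's):

```python
import numpy as np

def rbf(xa, xb, var=1.0, ls=1.0):
    """Squared-exponential kernel on 1-D inputs."""
    return var * np.exp(-0.5 * (xa[:, None] - xb[None, :]) ** 2 / ls ** 2)

def hgp_predict(x, y, n_probes, x_star, noise_var=25.0):
    """GP regression with per-point noise: each observed speed is a mean
    over n_i probe vehicles, so its noise variance is noise_var / n_i --
    a simple stand-in for conditioning the noise model on sample size."""
    K = rbf(x, x) + np.diag(noise_var / n_probes)
    Ks = rbf(x, x_star)
    L = np.linalg.cholesky(K + 1e-9 * np.eye(len(x)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf(x_star, x_star).diagonal() - np.einsum('ij,ij->j', v, v)
    return mean, var
```

    The payoff is in the predictive distribution: intervals widen at times when few probe vehicles pass and tighten when sample sizes are large, which a homoscedastic GP cannot express.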

    Generating descriptive text from functional brain images

    Recent work has shown that it is possible to take brain images acquired while a subject viewed a scene and reconstruct an approximation of that scene from the images. Here we show that it is also possible to generate _text_ from brain images. We began with images collected as participants read names of objects (e.g., "Apartment"). Without accessing information about the object viewed for an individual image, we were able to generate from it a collection of semantically pertinent words (e.g., "door," "window"). Across images, the sets of words generated overlapped consistently with those contained in articles about the relevant concepts from the online encyclopedia Wikipedia. The technique described, if developed further, could offer an important new tool in building human-computer interfaces for use in clinical settings.
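
    One simplified way to realize the pipeline the abstract sketches: learn a map from voxel activity to a semantic feature space, then rank a vocabulary by similarity to the decoded vector. The linear decoder W and the word vectors below are placeholders for whatever trained model and corpus-derived features are used, not the authors' method.

```python
import numpy as np

def generate_words(brain_image, W, word_vecs, vocab, k=5):
    """Decode a voxel pattern to a semantic vector with a pre-trained
    linear map W, then return the k vocabulary words whose semantic
    vectors are most similar (cosine similarity) -- a loose stand-in
    for the text-generation step described in the abstract."""
    sem = W @ brain_image                   # predicted semantic features
    sem = sem / np.linalg.norm(sem)
    wv = word_vecs / np.linalg.norm(word_vecs, axis=1, keepdims=True)
    scores = wv @ sem                       # cosine score per word
    return [vocab[i] for i in np.argsort(scores)[::-1][:k]]
```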

    Multi-Output Gaussian Processes for Crowdsourced Traffic Data Imputation

    Traffic speed data imputation is a fundamental challenge for data-driven transport analysis. In recent years, with the ubiquity of GPS-enabled devices and the widespread use of crowdsourcing alternatives for the collection of traffic data, transportation professionals increasingly look to such user-generated data for many analysis, planning, and decision support applications. However, due to the mechanics of the data collection process, crowdsourced traffic data such as probe-vehicle data are highly prone to missing observations, making accurate imputation crucial for the success of any application that relies on this type of data. In this article, we propose the use of multi-output Gaussian processes (GPs) to model the complex spatial and temporal patterns in crowdsourced traffic data. While the Bayesian nonparametric formalism of GPs allows us to model observation uncertainty, the multi-output extension based on convolution processes effectively enables us to capture complex spatial dependencies between nearby road segments. Using 6 months of crowdsourced traffic speed ("probe vehicle") data for several locations in Copenhagen, the proposed approach is empirically shown to significantly outperform popular state-of-the-art imputation methods.
    Comment: 10 pages, IEEE Transactions on Intelligent Transportation Systems, 201
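
    A compact way to see how a multi-output GP shares information across road segments is the intrinsic coregionalization model, a simpler relative of the convolution-process construction used in the paper: the covariance factorizes into a temporal kernel times a segment-correlation matrix B. A sketch with purely illustrative numbers:

```python
import numpy as np

def icm_kernel(t, seg, k_time, B):
    """Intrinsic coregionalization: Cov[(t,i),(t',j)] = B[i,j] * k_time(t,t')."""
    return B[np.ix_(seg, seg)] * k_time(t[:, None], t[None, :])

k_time = lambda a, b: np.exp(-0.5 * (a - b) ** 2 / 15.0 ** 2)  # 15-min length scale
B = np.array([[1.0, 0.8], [0.8, 1.0]])       # correlation between two segments

t_obs = np.array([0., 5., 10., 0., 5.])      # minutes
seg_obs = np.array([0, 0, 0, 1, 1])          # segment of each observation
y_obs = np.array([52., 49., 47., 50., 48.])  # speeds (km/h)

# Impute the missing reading on segment 1 at t = 10 by conditioning
# jointly on both segments' observations.
t_all, seg_all = np.append(t_obs, 10.), np.append(seg_obs, 1)
K_full = icm_kernel(t_all, seg_all, k_time, B)
K = K_full[:-1, :-1] + 1.0 * np.eye(len(t_obs))  # 1.0 = obs noise variance
y_c = y_obs - y_obs.mean()                       # constant-mean GP
imputed = y_obs.mean() + K_full[:-1, -1] @ np.linalg.solve(K, y_c)
```

    Because B couples the outputs, segment 1's missing value borrows strength from segment 0's readings at nearby times, which is exactly the behavior that makes multi-output GPs attractive for sparse probe data.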

    Scalable Population Synthesis with Deep Generative Modeling

    Population synthesis is concerned with the generation of synthetic yet realistic representations of populations. It is a fundamental problem in the modeling of transport, where synthetic populations of micro-agents represent a key input to most agent-based models. In this paper, a new methodological framework for how to 'grow' pools of micro-agents is presented. The framework adopts a deep generative modeling approach from machine learning based on a Variational Autoencoder (VAE). Compared to previous population synthesis approaches, including Iterative Proportional Fitting (IPF), Gibbs sampling, and traditional generative models such as Bayesian Networks or Hidden Markov Models, the proposed method allows fitting the full joint distribution in high dimensions. The proposed methodology is compared with a conventional Gibbs sampler and a Bayesian Network using a large-scale Danish trip diary. It is shown that, while these two methods outperform the VAE in the low-dimensional case, they both suffer from scalability issues when the number of modeled attributes increases. It is also shown that the Gibbs sampler essentially replicates the agents from the original sample when the required conditional distributions are estimated as frequency tables. In contrast, the VAE addresses the problem of sampling zeros by generating agents that are virtually different from those in the original data but have similar statistical properties. The presented approach can support agent-based modeling at all levels by enabling richer synthetic populations with smaller zones and more detailed individual characteristics.
    Comment: 27 pages, 15 figures, 4 tables
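
    A minimal PyTorch sketch of the VAE at the heart of such a framework: encode one-hot agent attributes to a latent Gaussian, decode back, and later sample new agents by decoding draws from the prior. The layer sizes and the Bernoulli reconstruction term are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PopulationVAE(nn.Module):
    """Toy VAE for synthesizing agents with one-hot-encoded attributes."""
    def __init__(self, x_dim, z_dim=10, h_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def elbo_loss(logits, x, mu, logvar):
    # Reconstruction term (per-attribute cross-entropy also works) + KL to prior.
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Sampling new agents = decoding draws from the latent prior N(0, I):
# model = PopulationVAE(x_dim=120)
# agents = torch.sigmoid(model.dec(torch.randn(1000, 10)))
```

    Sampling from the prior rather than resampling the training data is what lets the model generate plausible attribute combinations absent from the original sample, i.e. address the sampling-zeros problem the abstract highlights.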