
    Machine learning for flow field measurements: a perspective

    Advancements in machine-learning (ML) techniques are driving a paradigm shift in image processing, and flow diagnostics with optical techniques are no exception. Considering the existing and foreseeable disruptive developments in flow field measurement techniques, we elaborate this perspective, focused in particular on the field of particle image velocimetry. The driving forces behind recent advancements in ML methods for flow field measurements are reviewed in terms of image preprocessing, data treatment and conditioning. Finally, possible routes for further developments are highlighted. Stefano Discetti acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 949085). Yingzheng Liu acknowledges financial support from the National Natural Science Foundation of China (11725209).

    Machine Learning for Fluid Mechanics

    The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques to extract information from data that could be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of the history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications. Comment: To appear in the Annual Reviews of Fluid Mechanics, 202

    An end-to-end KNN-based PTV approach for high-resolution measurements and uncertainty quantification

    We introduce a novel end-to-end approach to improving the resolution of PIV measurements. The method blends information from different snapshots, without the need for time-resolved measurements, on the grounds of the similarity of flow regions in different snapshots. The main hypothesis is that, with a sufficiently large ensemble of statistically-independent snapshots, it is feasible to identify flow structures that are morphologically similar but occur at different time instants. Measured individual vectors from different snapshots with similar flow organisation can thus be merged, resulting in an artificially increased particle concentration. This allows the interrogation region to be refined and, consequently, the spatial resolution to be increased. The measurement domain is split into subdomains, and the similarity is enforced only on a local scale, i.e. morphologically-similar regions are sought only among subdomains corresponding to the same flow region. The identification of locally-similar snapshots is based on an unsupervised K-nearest-neighbour search in a space of significant flow features. Such features are defined in terms of a Proper Orthogonal Decomposition, performed in subdomains on the original low-resolution data, obtained either with standard cross-correlation or with binning of Particle Tracking Velocimetry data with a relatively large bin size. A refined bin size is then selected according to the number of sufficiently close snapshots identified. The statistical dispersion of the velocity vectors within the bin is used to estimate the uncertainty and to select the optimal K which minimises it.
    The method is tested and validated against datasets with a progressively increasing level of complexity: two virtual experiments based on direct simulations of the wake of a fluidic pinball and a channel flow, and experimental data collected in a turbulent boundary layer. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 949085). Funding for APC: Universidad Carlos III de Madrid (Read & Publish Agreement CRUE-CSIC 2022).
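The pipeline described above — POD features per subdomain, then an unsupervised K-nearest-neighbour search to pool vectors from similar snapshots — can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' code; the ensemble size, grid, mode count and K are all arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 200 low-resolution snapshots of one subdomain,
# each flattened to 64 velocity components (an 8x8 grid, one component).
snapshots = rng.standard_normal((200, 64))

# POD of the ensemble via SVD of the mean-subtracted data matrix.
mean = snapshots.mean(axis=0)
fluct = snapshots - mean
_, s, vt = np.linalg.svd(fluct, full_matrices=False)

# Project every snapshot onto the leading r POD modes -> feature space.
r = 10
features = fluct @ vt[:r].T          # shape (200, r)

# Unsupervised K-nearest-neighbour search around snapshot 0, done here
# with a brute-force Euclidean distance in the feature space.
k = 5
d = np.linalg.norm(features - features[0], axis=1)
neighbours = np.argsort(d)[1:k + 1]  # exclude the query snapshot itself

# Vectors from the K similar snapshots can then be pooled to refine the
# interrogation bin; their spread gives an uncertainty estimate, which
# in the paper drives the selection of the optimal K.
pooled = snapshots[neighbours]
uncertainty = pooled.std(axis=0)
print(neighbours.shape, uncertainty.shape)
```

In the actual method this search is repeated per subdomain, and the merged vectors are re-binned with a smaller bin size than the original data allowed.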

    Stochastic particle advection velocimetry (SPAV): theory, simulations, and proof-of-concept experiments

    Particle tracking velocimetry (PTV) is widely used to measure time-resolved, three-dimensional velocity and pressure fields in fluid dynamics research. Inaccurate localization and tracking of particles is a key source of error in PTV, especially for single-camera defocusing, plenoptic imaging, and digital in-line holography (DIH) sensors. To address this issue, we developed stochastic particle advection velocimetry (SPAV): a statistical data loss that improves the accuracy of PTV. SPAV is based on an explicit particle advection model that predicts particle positions over time as a function of the estimated velocity field. The model can account for non-ideal effects like drag on inertial particles. A statistical data loss that compares the tracked and advected particle positions, accounting for arbitrary localization and tracking uncertainties, is derived and approximated. We implement our approach using a physics-informed neural network, which simultaneously minimizes the SPAV data loss, a Navier-Stokes physics loss, and a wall boundary loss, where appropriate. Results are reported for simulated and experimental DIH-PTV measurements of laminar and turbulent flows. Our statistical approach significantly improves the accuracy of PTV reconstructions compared to a conventional data loss, resulting in an average error reduction of close to 50%. Furthermore, our framework can be readily adapted to work with other data assimilation techniques such as state observers, Kalman filters, and adjoint-variational methods.
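The core idea of the SPAV data loss — advect particles with the estimated velocity field and penalize the noise-weighted mismatch with the tracked positions — can be sketched in a few lines. This is a toy illustration under simplifying assumptions (2D, single forward-Euler step, known isotropic localization noise); all variable names and values are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tracked particle positions at one instant (50 particles,
# 2D) and a velocity estimate at those positions (toy shear flow).
x0 = rng.uniform(0.0, 1.0, size=(50, 2))
u_est = np.stack([np.ones(50), 0.1 * x0[:, 0]], axis=1)
dt = 0.01

# Synthetic "tracked" positions at the next frame: true motion plus
# localization noise, standing in for real PTV tracks.
sigma = 1e-3
x1_tracked = x0 + u_est * dt + rng.normal(0.0, sigma, size=x0.shape)

# Advection model: predict next-frame positions with a forward-Euler step.
x1_advected = x0 + u_est * dt

# Statistical data loss: squared residuals between tracked and advected
# positions, weighted by the assumed localization variance. In SPAV this
# term is minimized jointly with physics and boundary losses.
residual = x1_tracked - x1_advected
loss = np.mean(np.sum(residual**2, axis=1)) / sigma**2
print(residual.shape)
```

With a correct velocity estimate the residual is pure localization noise, so the weighted loss hovers around 2 (the number of spatial dimensions); a biased velocity field inflates it, which is what drives the optimization.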

    Adaptive FPGA NoC-based Architecture for Multispectral Image Correlation

    An adaptive FPGA architecture based on the NoC (Network-on-Chip) approach is used for multispectral image correlation. The architecture must support several distance algorithms, depending on the characteristics of the spectral images and the precision required for authentication. An analysis of these distance algorithms is therefore required, based on algorithmic complexity, result precision, execution time and adaptability of the implementation. This paper presents a comparison of these distance computation algorithms on one spectral database. The results of an RGB algorithm implementation are discussed.
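The trade-off between distance algorithms discussed above can be illustrated in software. The sketch below compares three common distance measures on a pair of toy spectral signatures; the specific metrics and values are assumptions for illustration, since the abstract does not name the algorithms compared.

```python
import numpy as np

# Toy multispectral pixel signatures (illustrative values only).
a = np.array([0.20, 0.50, 0.70, 0.40])
b = np.array([0.25, 0.45, 0.65, 0.50])

# Three distance measures with different complexity/precision trade-offs:
manhattan = np.sum(np.abs(a - b))          # adders only: cheapest in hardware
euclidean = np.sqrt(np.sum((a - b) ** 2))  # needs multipliers and a sqrt
corr_dist = 1.0 - np.corrcoef(a, b)[0, 1]  # correlation distance: costliest

print(manhattan, euclidean, corr_dist)
```

On an FPGA the Manhattan distance maps to simple adder trees, while the correlation distance requires accumulating means and cross-products, which is the kind of complexity-versus-precision choice the adaptive NoC architecture is meant to accommodate.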

    Adaptive Sampling in Particle Image Velocimetry


    Dynamic Adaptive Real-Time Particle Image Velocimetry

    Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 65-67). Particle Image Velocimetry (PIV) is a technique that allows for the detailed visualization of fluid flow. By performing computational analysis on images taken by a high-sensitivity camera that monitors the movement of laser-illuminated tracer particles over time, PIV produces a vector field describing instantaneous velocity measurements of the fluid captured in the field of view. Nearly all PIV implementations perform offline processing of the collected data, which limits the scope of the technique's applications. Recently, however, researchers have begun to explore the use of FPGAs or PCs to greatly improve the efficiency of these algorithms and obtain real-time speeds for use in feedback loops. Such approaches are very promising and can help expand the use of PIV into previously unexplored fields, such as high-performance Unmanned Aerial Vehicles (UAVs). Yet these real-time algorithms can be improved even further. This thesis outlines an approach to make real-time PIV algorithms more accurate and versatile, in large part by applying principles from another emerging technique called adaptive PIV, and in doing so it also addresses new issues created by the conversion of traditional PIV to a real-time context. This thesis also documents the implementation of this Dynamic Adaptive Real-Time PIV (DARTPIV) algorithm on a PC with CUDA parallel computing, with its performance and results analyzed in the context of normal real-time PIV. By Samvaran Sharma, M. Eng.
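The computational kernel that real-time PIV implementations parallelize is the cross-correlation of interrogation windows between two frames: the correlation peak gives the displacement vector. A minimal FFT-based sketch of that kernel, on a synthetic window with a known imposed shift (not the thesis's CUDA code), looks like this:

```python
import numpy as np

rng = np.random.default_rng(2)

# Frame-A interrogation window and a frame-B window that is a copy
# shifted by a known displacement (circularly, for this toy example).
win_a = rng.random((32, 32))
shift = (3, 5)                               # imposed displacement (dy, dx)
win_b = np.roll(win_a, shift, axis=(0, 1))

# Circular cross-correlation via the FFT: the peak of the correlation
# plane sits at the displacement of B relative to A.
corr = np.fft.ifft2(np.fft.fft2(win_b) * np.conj(np.fft.fft2(win_a))).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
print(dy, dx)
```

Real implementations window and zero-pad the data and interpolate the peak to sub-pixel accuracy; adaptive PIV additionally resizes and deforms these interrogation windows per region, which is the refinement the thesis brings into the real-time setting.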

    Roadmap on signal processing for next generation measurement systems

    Signal processing is a fundamental component of almost any sensor-enabled system, with a wide range of applications across different scientific disciplines. Time series data, images, and video sequences are representative forms of signals that can be enhanced and analysed for information extraction and quantification. Recent advances in artificial intelligence and machine learning are shifting research attention towards intelligent, data-driven signal processing. This roadmap presents a critical overview of state-of-the-art methods and applications, aiming to highlight future challenges and research opportunities for next generation measurement systems. It covers a broad spectrum of topics ranging from basic to industrial research, organized in concise thematic sections that reflect the trends and the impacts of current and future developments per research field. Furthermore, it offers guidance to researchers and funding agencies in identifying new prospects.