5,134 research outputs found

    Machine Learning for Fluid Mechanics

    Full text link
    The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments, and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques to extract information from data that could be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of the history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications.
    Comment: To appear in the Annual Review of Fluid Mechanics, 2020
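    As a hedged illustration of the kind of data-driven technique such a review covers, the sketch below computes a proper orthogonal decomposition (POD) of flow snapshots via the singular value decomposition; the snapshot matrix, grid size, and energy threshold are made up for illustration, with random data standing in for simulation output.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is one flow-field snapshot
# flattened to a vector (random data stands in for simulation output).
n_points, n_snapshots = 2000, 100
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((n_points, n_snapshots))

# Subtract the temporal mean so the modes describe fluctuations.
mean_flow = snapshots.mean(axis=1, keepdims=True)
fluctuations = snapshots - mean_flow

# POD modes are the left singular vectors; singular values rank their energy.
modes, singular_values, _ = np.linalg.svd(fluctuations, full_matrices=False)

# Keep the leading modes that capture, say, 90% of the fluctuation energy.
energy = np.cumsum(singular_values**2) / np.sum(singular_values**2)
rank = int(np.searchsorted(energy, 0.90) + 1)
reduced_basis = modes[:, :rank]
print(f"Retained {rank} POD modes for 90% of the fluctuation energy")
```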

    Learned simulation as the engine of physical scene understanding

    Get PDF
    Human cognition evokes the abilities of reasoning, communication, and interaction. This includes the interpretation of real-world physics so as to understand its underlying laws. Theories postulate the similarity of human reasoning about these phenomena with simulations for physical scene understanding, which combines perception, to comprehend the current dynamical state, with reasoning, to predict the time evolution of a given system. In this context, we propose the development of a system for learned simulation. Given a design objective, an algorithm is trained to learn an approximation of the real dynamics and thereby build a digital twin of the environment. The underlying physics is then emulated with information coming from observations of the scene. For this purpose, we use a commodity camera to acquire data exclusively from video recordings. We focus on the sloshing problem as a benchmark. Fluids are present in many of our daily actions and pose a physically rich challenge for the proposed system: they are highly deformable, nonlinear, and exhibit a dominant dissipative behavior, making them complex to emulate. In addition, we only have access to partial measurements of their dynamical state, since a commodity camera only provides information about the free surface. The result is a system capable of perceiving and reasoning about the dynamics of the fluid. This cognitive digital twin provides an interpretation of the state of the fluid to integrate its dynamical evolution in real time, updated with information observed from the physical twin. The system, originally trained for one liquid, adapts to other fluids through reinforcement learning and produces accurate results for previously unseen liquids. Augmented reality (AR) is used to offer the user a visual interpretation of the solutions and to include information about the dynamics that is not accessible to the human eye. This objective is achieved through manifold learning and machine learning techniques, such as neural networks, enriched with physical information. We use inductive biases based on knowledge of thermodynamics to develop machine intelligence systems that fulfill these principles and provide physically meaningful solutions for the dynamics. This problem is one of the main challenges in the development of robotic systems for fluid manipulation: in actions such as pouring or moving containers, sloshing dynamics play a central role in the correct performance of assistive systems for the elderly and of industrial applications that involve liquids.
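    The thesis describes a thermodynamics-informed, learned integrator; as a loose, minimal sketch of the general idea of learned simulation (not the architecture used in the work), the code below fits a linear one-step model to observed states and rolls it forward as a simple digital twin. All data and dimensions are synthetic assumptions.

```python
import numpy as np

# Hypothetical training data: observed states x_t (rows) extracted from video;
# a damped oscillation stands in for real sloshing measurements.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
states = np.stack([np.exp(-0.2 * t) * np.cos(3.0 * t),
                   np.exp(-0.2 * t) * np.sin(3.0 * t)], axis=1)

# Fit a linear one-step model x_{t+1} ~ x_t A by least squares
# (a stand-in for the learned, physics-informed integrator in the thesis).
X, Y = states[:-1], states[1:]
A, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Roll the learned model forward from the last observed state: this is the
# "digital twin" integrating the dynamics without further observations.
x = states[-1]
rollout = []
for _ in range(100):
    x = x @ A
    rollout.append(x)
print("Predicted state after 100 steps:", rollout[-1])
```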

    Data efficiency in imitation learning with a focus on object manipulation

    Get PDF
    Imitation is a natural human behaviour that helps us learn new skills. Modelling this behaviour in robots, however, poses many challenges. This thesis investigates the challenge of handling expert demonstrations efficiently, so as to minimise the number of demonstrations required for robots to learn. To achieve this, it focuses on demonstration data efficiency at various steps of the imitation process. Specifically, it presents new methodologies for acquiring, augmenting, and combining demonstrations in order to improve the overall imitation process. Firstly, the thesis explores an inexpensive and non-intrusive way of acquiring dexterous human demonstrations. Human hand actions are quite complex, especially when they involve object manipulation. The proposed framework tackles this by using a camera to capture the hand information and then retargeting it to a dexterous hand model, combining inverse kinematics with stochastic optimisation. The demonstrations collected with this framework can then be used in the imitation process. Secondly, the thesis presents a novel way to apply data augmentation to demonstrations. The main difficulty in augmenting demonstrations is that their trajectory-based nature means naive augmentations can produce unsuccessful behaviours. Whilst previous works require additional knowledge about the task or the demonstrations to achieve this, the proposed method performs the augmentation automatically: it introduces a correction network that corrects the augmented trajectories based on the distribution of the original expert demonstrations. Lastly, the thesis investigates data efficiency in a multi-task scenario, where it additionally proposes a data combination method whose aim is to automatically divide a set of tasks into sub-behaviours. Contrary to previous works, it does this without any additional knowledge about the tasks, using both task-specific and shareable modules. This minimises negative transfer and allows the method to be applied to various task sets with different commonalities.
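    As a hedged sketch of the augmentation idea (not the thesis's actual network), the code below perturbs expert trajectories and passes them through a stand-in `correction_net` that pulls them back toward the expert distribution; all data, shapes, and the correction rule are illustrative assumptions.

```python
import numpy as np

# Hypothetical expert demonstrations: each trajectory is a sequence of states
# (e.g. end-effector positions) over T timesteps; values here are synthetic.
rng = np.random.default_rng(2)
T, dim, n_demos = 50, 3, 20
demos = rng.standard_normal((n_demos, T, dim)).cumsum(axis=1)
expert_mean = demos.mean(axis=0)

def correction_net(trajectory, strength=0.5):
    """Stand-in for the learned correction network: pulls a perturbed
    trajectory back toward the expert distribution (here, its mean)."""
    return (1 - strength) * trajectory + strength * expert_mean

def augment(demo, noise_scale=0.1):
    """Perturb a demonstration, then correct it so it stays plausible."""
    noisy = demo + noise_scale * rng.standard_normal(demo.shape)
    return correction_net(noisy)

# Build an enlarged dataset from the small set of expert demonstrations.
augmented = np.stack([augment(demos[i % n_demos]) for i in range(100)])
print("Augmented dataset shape:", augmented.shape)
```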

    Quantum State Estimation and Tracking for Superconducting Processors Using Machine Learning

    Get PDF
    Quantum technology has been growing rapidly; in particular, experiments with superconducting qubits and circuit QED have allowed us to explore the light-matter interaction at its most fundamental level. The study of coherent dynamics between two-level systems and resonator modes can provide insight into fundamental aspects of quantum physics, such as how the state of a system evolves while being continuously observed. To study such an evolving quantum system, experimenters need to verify the accuracy of state preparation and control, since quantum systems are very fragile and sensitive to environmental disturbance. In this thesis, I look at these continuous monitoring and state estimation problems from a modern point of view. With the help of machine learning techniques, it has become possible to explore regimes that are not accessible with traditional methods: for example, tracking the state of a superconducting transmon qubit continuously when its dynamics are fast compared with the detector bandwidth. These results open up a new area of quantum state tracking, enabling us to potentially diagnose errors that occur during quantum gates. In addition, I investigate the use of supervised machine learning, in the form of a modified denoising autoencoder, to simultaneously remove experimental noise and encode one- and two-qubit quantum state estimates into a minimum number of nodes within the latent layer of a neural network. I automate the decoding of these latent representations into positive density matrices and compare them to similar estimates obtained via linear inversion and maximum likelihood estimation. Using a superconducting multiqubit chip, I experimentally verify that the neural network estimates the quantum state with greater fidelity than either traditional method. Furthermore, the network can be trained using only product states and still achieve high fidelity for entangled states. This simplification of the training overhead permits the network to aid experimental calibration, such as the diagnosis of multi-qubit crosstalk. As quantum processors increase in size and complexity, I expect automated methods such as those presented in this thesis to become increasingly attractive.
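    The abstract does not spell out the decoding step, but a common way to turn a real-valued network output into a valid quantum state is a Cholesky-style parameterisation; the sketch below shows this construction as an assumed illustration, with made-up parameter values.

```python
import numpy as np

def vector_to_density_matrix(params, dim):
    """Map a real parameter vector (e.g. a decoded latent representation) to a
    positive semidefinite, trace-one density matrix via a Cholesky factor.
    This is a standard construction, assumed here for illustration."""
    n_diag = dim
    n_off = dim * (dim - 1) // 2
    assert params.size == n_diag + 2 * n_off
    L = np.zeros((dim, dim), dtype=complex)
    # Exponentiate the diagonal entries so the factor is strictly valid.
    L[np.diag_indices(dim)] = np.exp(params[:n_diag])
    tril = np.tril_indices(dim, k=-1)
    L[tril] = params[n_diag:n_diag + n_off] + 1j * params[n_diag + n_off:]
    rho = L @ L.conj().T          # positive semidefinite by construction
    return rho / np.trace(rho)    # normalise to unit trace

# Example: decode 4 + 2*6 = 16 real parameters into a two-qubit (4x4) state.
rng = np.random.default_rng(3)
rho = vector_to_density_matrix(rng.standard_normal(16), dim=4)
print("Trace:", np.trace(rho).real, "Min eigenvalue:", np.linalg.eigvalsh(rho).min())
```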

    Deep convolutional neural networks for statistical downscaling of climate change projections

    Get PDF
    Regional climate projections are in high demand from different socioeconomic sectors to elaborate their adaptation and mitigation plans for climate change. Nevertheless, state-of-the-art Global Climate Models (GCMs) have very coarse spatial resolutions, which limits their use in most practical applications and impact studies. One way to increase this limited spatial resolution is to establish empirical/statistical functions which link the local variable of interest (e.g. temperature and/or precipitation at a given site) with a set of large-scale atmospheric variables (e.g. geopotential and/or winds at different vertical levels) that are typically well reproduced by GCMs. In this context, this Thesis explores the suitability of deep learning, and in particular modern Convolutional Neural Networks (CNNs), as statistical downscaling techniques to produce regional climate change projections over Europe. To achieve this goal, the capacity of CNNs to reproduce the local variability of precipitation and temperature fields in present climate conditions is first assessed by comparing their performance with that of a set of traditional benchmark statistical methods. Subsequently, their suitability to produce plausible future (up to 2100) high-resolution scenarios is put to the test by comparing their projected signals of change with those given by a set of state-of-the-art GCMs from CMIP5 and Regional Climate Models (RCMs) from the flagship EURO-CORDEX initiative. In addition, a variety of interpretability analyses are carried out to gain confidence and knowledge on the use of CNNs for climate applications, which have typically been discarded until now for being considered "black boxes".
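    As a hedged sketch of a CNN-based statistical downscaling model (layer widths, grid sizes, and variable counts are illustrative assumptions, not the configuration used in the Thesis), the code below maps stacked large-scale predictor fields on a coarse grid to local temperature values at a set of target points.

```python
import torch
import torch.nn as nn

# Illustrative sizes: e.g. 5 large-scale variables at 4 vertical levels,
# stacked as channels on a coarse grid, downscaled to 1000 local points.
n_predictors, coarse_h, coarse_w = 20, 16, 32
n_local_points = 1000

class DownscalingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_predictors, 50, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(50, 25, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(25, 10, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(10 * coarse_h * coarse_w, n_local_points)

    def forward(self, x):
        z = self.features(x).flatten(start_dim=1)
        return self.head(z)  # one value per local point (e.g. daily temperature)

model = DownscalingCNN()
batch = torch.randn(8, n_predictors, coarse_h, coarse_w)  # synthetic predictors
print(model(batch).shape)  # torch.Size([8, 1000])
```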

    Neural Network Methods for Radiation Detectors and Imaging

    Full text link
    Recent advances in image data processing through machine learning, and especially deep neural networks (DNNs), allow for new optimization and performance-enhancement schemes for radiation detectors and imaging hardware through data-endowed artificial intelligence. We give an overview of data generation at photon sources, deep learning-based methods for image processing tasks, and hardware solutions for deep learning acceleration. Most existing deep learning approaches are trained offline, typically using large amounts of computational resources. However, once trained, DNNs can achieve fast inference speeds and can be deployed to edge devices. A new trend is edge computing, with lower energy consumption (hundreds of watts or less) and real-time analysis potential. While popularly used for edge computing, electronics-based hardware accelerators ranging from general-purpose processors such as central processing units (CPUs) to application-specific integrated circuits (ASICs) are constantly reaching performance limits in latency, energy consumption, and other physical constraints. These limits give rise to next-generation analog neuromorphic hardware platforms, such as optical neural networks (ONNs), for highly parallel, low-latency, and low-energy computing to boost deep learning acceleration.
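    As a hedged, illustrative sketch of the offline-training / edge-inference workflow described above (the model, file name, and input size are assumptions, not from the paper), the code below defines a small denoising-style CNN and exports it to ONNX so it could run on an edge accelerator.

```python
import torch
import torch.nn as nn

# A small denoising-style CNN for single-channel detector frames, assumed to
# have been trained offline; here it is only defined and exported.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
model.eval()

# Fixed input size for the exported graph (e.g. 256x256 single-channel frames).
dummy_frame = torch.randn(1, 1, 256, 256)
torch.onnx.export(model, dummy_frame, "detector_denoiser.onnx",
                  input_names=["frame"], output_names=["denoised"])
print("Exported ONNX model for edge inference")
```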