223 research outputs found

    Machine Learning for Fluid Mechanics

    Full text link
    The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques to extract information from data that could be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of past history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications.
    Comment: To appear in the Annual Reviews of Fluid Mechanics, 202
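    As a rough illustration of the data-driven modeling this review covers, the sketch below builds a reduced-order representation of flow snapshot data via proper orthogonal decomposition computed with the SVD. The snapshot matrix is synthetic and the mode count r is an arbitrary assumption; this is a generic example, not taken from the article itself.

```python
import numpy as np

# Minimal sketch: proper orthogonal decomposition (POD) of flow snapshots,
# a common data-driven modeling step in this literature.
# X holds synthetic "snapshots": each column is a flattened velocity field.
rng = np.random.default_rng(0)
n_points, n_snapshots = 2000, 100
X = rng.standard_normal((n_points, n_snapshots))

# Subtract the mean flow, then compute POD modes via the SVD.
X_mean = X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X - X_mean, full_matrices=False)

# Keep the leading r modes and project snapshots into the reduced space.
r = 10
modes = U[:, :r]                 # spatial POD modes
coeffs = modes.T @ (X - X_mean)  # temporal coefficients, shape (r, n_snapshots)

# Reconstruct with r modes; the retained energy indicates how much was truncated.
X_rec = X_mean + modes @ coeffs
print("captured energy fraction:", (s[:r] ** 2).sum() / (s ** 2).sum())
```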

    Sensorimotor Representation Learning for an “Active Self” in Robots: A Model Survey

    Get PDF
    Safe human-robot interactions require robots to be able to learn how to behave appropriately in spaces populated by people and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided a rigid set of rules for operations. In humans, these capabilities are thought to be related to our ability to perceive our body in space, sensing the location of our limbs during movement, being aware of other objects and agents, and controlling our body parts to interact with them intentionally. Toward the next generation of robots with bio-inspired capacities, in this paper, we first review the developmental processes of underlying mechanisms of these abilities: The sensory representations of body schema, peripersonal space, and the active self in humans. Second, we provide a survey of robotics models of these sensory representations and robotics models of the self; and we compare these models with the human counterparts. Finally, we analyze what is missing from these robotics models and propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents by developing sensory representations through self-exploration.
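    The proposed framework is only described at a conceptual level in the abstract; the following is a hypothetical minimal sketch of one ingredient it suggests, learning a sensory representation through self-exploration: an agent fits a forward model from random motor babbling and uses prediction error as a crude self/other cue. All names, dimensions, and the linear model are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: an agent learns a forward model of its own body through
# "motor babbling" (random self-exploration). Prediction error on new sensory
# input can then serve as a crude self/other signal.
rng = np.random.default_rng(1)
motor_dim, sensory_dim = 4, 6
true_map = rng.standard_normal((sensory_dim, motor_dim))  # unknown body dynamics

# Self-exploration: issue random motor commands and record sensory outcomes.
M = rng.uniform(-1, 1, size=(500, motor_dim))
S = M @ true_map.T + 0.01 * rng.standard_normal((500, sensory_dim))

# Fit a linear forward model s_hat = m @ W by least squares.
W, *_ = np.linalg.lstsq(M, S, rcond=None)

# Low prediction error -> sensation consistent with the agent's own movement
# ("self"); high error -> likely caused by something else in the environment.
m_new = rng.uniform(-1, 1, motor_dim)
s_self = m_new @ true_map.T
s_other = s_self + rng.standard_normal(sensory_dim)
print("self error:", np.linalg.norm(m_new @ W - s_self))
print("other error:", np.linalg.norm(m_new @ W - s_other))
```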

    Artificial self-awareness for robots

    Get PDF
    Robots are evolving and entering various sectors and aspects of life. While humans are aware of their bodies and capabilities, which helps them work on a task in different environments, robots are not. This thesis defines and develops a robotic artificial self-awareness framework. The aim is to allow robots to adapt to their environment and better manage their task. The robot's artificial self-awareness knowledge is captured in levels, where each level helps a robot acquire a higher self-awareness competence. These levels are inspired by Rochat's [1] self-awareness development levels in humans, where each level is associated with a complexity of self-knowledge. Self-awareness lets humans distinguish themselves from the environment, understand themselves, and control their capabilities. This work focuses on the first and second levels of self-awareness, achieved through differentiation and situation (the minimal self). Artificial self-awareness level-1 proposes the first step towards a basic, minimal self-awareness in a robot. Artificial self-awareness level-2 proposes an increasing capacity of self-awareness knowledge in the robot. The thesis posits an experimental methodology to evaluate whether the robot can differentiate and situate itself from the environment, and to test whether artificial self-awareness level-1 and level-2 increase a robot's self-certainty in an unseen environment. The research utilises deep neural network techniques to allow a dual-arm robot to identify itself within different environments. Robot vision and proprioception are captured using a camera and the robot's sensors to build a model that allows the robot to differentiate itself from the environment. The level-1 results indicate that a robot can distinguish itself with an accuracy of 80.3% on average across different environmental settings and under confounding input signals. The level-2 results show that a robot can situate itself in different environments with an accuracy of 86.01%, yielding a 5.71% increase in artificial self-certainty. This work helps a robot be aware of itself in different environments.
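    As a hedged illustration of the kind of model the thesis describes (vision and proprioception fused by a deep network to differentiate the robot from its environment), the sketch below defines a small PyTorch network run on dummy data. The architecture, input sizes, and joint count are assumptions, not the thesis's actual model.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a network fuses camera images with proprioceptive joint
# readings to decide whether the robot itself is visible in the scene.
class SelfDifferentiationNet(nn.Module):
    def __init__(self, n_joints: int = 14):
        super().__init__()
        self.vision = nn.Sequential(              # small CNN over RGB frames
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proprio = nn.Sequential(nn.Linear(n_joints, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(32 + 32, 64), nn.ReLU(),
                                  nn.Linear(64, 1))  # logit: "self present"

    def forward(self, image, joints):
        z = torch.cat([self.vision(image), self.proprio(joints)], dim=1)
        return self.head(z)

# Usage on dummy data: a batch of 8 camera frames plus joint-angle vectors.
model = SelfDifferentiationNet()
logits = model(torch.randn(8, 3, 128, 128), torch.randn(8, 14))
probs = torch.sigmoid(logits)  # probability that the robot sees itself
```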

    Exploring Audio Sensing in Detecting Social Interactions Using Smartphone Devices

    Get PDF
    In recent years, the rapid proliferation of smartphone devices has provided powerful and portable methodologies for integrating sensing systems which can run continuously and provide feedback in real time. The mobile crowd-sensing of human behaviour is an emerging computing paradigm that poses the challenge of sensing the everyday social interactions of people who carry smartphone devices. Typical smartphone sensors and the mobile crowd-sensing paradigm compose a process where the sensors present, such as the microphone, are used to infer social relationships between people in diverse social settings, where environmental factors can be dynamic and the infrastructure of buildings can vary. Typical approaches to detecting social interactions between people use co-location as a proxy for real-world interaction. Such approaches can under-perform in challenging situations where multiple social interactions occur in close proximity to each other, for example when people are in a queue at the supermarket but not part of the same social interaction. Other approaches are limited by requiring all participants of a social interaction to carry a smartphone at all times and to have the sensing app installed. The problem here is the feasibility of the sensing system, which relies heavily on each participant's smartphone acting as a node within a social graph, connected by weighted edges of proximity between devices; when users uninstall the app or disable background sensing, the system is unable to accurately determine the correct number of participants. In this thesis, we present two novel approaches to detecting co-located social interactions using smartphones. The first relies on the use of WiFi and audio signals to distinguish social groups interacting within a few meters of each other with 88% precision. We orchestrated preliminary experiments using WiFi as a proxy for co-location between people who are socially interacting. Initial results showed that in more challenging scenarios, WiFi is not accurate enough to determine whether people are socially interacting within the same social group. We then used audio as a second modality to capture the sound patterns of conversations and to identify and segment social groups in close proximity to each other. Through a range of real-world experiments (social interactions in meeting, coffee shop, and conference scenarios), we demonstrate a technique that utilises WiFi fingerprinting along with sound fingerprinting to identify these social groups. We built a system around this technique, then optimised its power consumption and improved its performance to 88% precision in the most challenging scenarios using duty cycling and data averaging techniques. The second approach explores the feasibility of detecting social interactions without requiring all social contacts to carry a sensing device. This work explores the use of supervised and unsupervised deep learning techniques before settling on an autoencoder model for a speaker identification task. We demonstrate how machine learning can be applied to audio data collected from a single device as a speaker identification framework. Speech is used as the input to our autoencoder model and then classified against a list of "social contacts" to determine whether the user has spoken to a person before. By doing this, the system can count the number of social contacts belonging to the user and develop a database of common social contacts. Using 100 randomly generated social conversations and state-of-the-art deep learning techniques, we demonstrate how this system can accurately distinguish new and existing speakers from a data set of voices and count the number of daily social interactions a user encounters, with a precision of 75%. We then tune the model using hyperparameter optimisation to ensure that it is well suited to the task. Unlike most systems in the literature, this approach works without modifying the existing infrastructure of a building and without all participants needing to install the same app.
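    The speaker-identification idea above can be sketched as follows: an autoencoder embeds voice features, and a new utterance is matched against stored "social contact" embeddings by cosine similarity, adding a new contact when no match is found. Feature dimensions, the similarity threshold, and the random inputs are illustrative assumptions rather than the thesis's actual pipeline.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: an autoencoder compresses a voice feature vector (e.g.
# averaged MFCCs) into an embedding; new speakers are matched against stored
# "social contact" embeddings by cosine similarity.
class VoiceAutoencoder(nn.Module):
    def __init__(self, n_features: int = 40, n_latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def is_known_contact(embedding, contact_db, threshold=0.9):
    """Return True if the embedding matches any stored contact embedding."""
    if not contact_db:
        return False
    sims = [torch.cosine_similarity(embedding, c, dim=0) for c in contact_db]
    return max(sims).item() >= threshold

# Usage on dummy MFCC-style features: unmatched speakers are added to the
# database, so the system can count distinct social contacts over a day.
model = VoiceAutoencoder()
contact_db = []
for _ in range(5):
    features = torch.randn(40)
    _, z = model(features)
    if not is_known_contact(z, contact_db):
        contact_db.append(z.detach())
print("distinct contacts seen:", len(contact_db))
```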

    Advancing Gesture Recognition with Millimeter Wave Radar

    Full text link
    Wireless sensing has attracted significant interest over the years, and with the dawn of emerging technologies, it has become more integrated into our daily lives. Among the various wireless communication platforms, WiFi has gained widespread deployment in indoor settings. Consequently, the utilization of ubiquitous WiFi signals for detecting indoor human activities has garnered considerable attention in the past decade. However, more recently, mmWave radar-based sensing has emerged as a promising alternative, offering advantages such as enhanced sensitivity to motion and increased bandwidth. This thesis introduces innovative approaches to enhance contactless gesture recognition by leveraging emerging low-cost millimeter wave radar technology. It makes three key contributions. Firstly, a cross-modality training technique is proposed, using mmWave radar as a supplementary aid for training WiFi-based deep learning models. The proposed model enables precise gesture detection based solely on WiFi signals, significantly improving WiFi-based recognition. Secondly, a novel beamforming-based gesture detection system is presented, utilizing commodity mmWave radars for accurate detection in low signal-to-noise scenarios. By steering multiple beams around the gesture performer, independent views of the gesture are captured. A self-attention-based deep neural network intelligently fuses information from these beams, surpassing single-beam accuracy. The model incorporates a unique data augmentation algorithm accounting for Doppler shift and multipath effects, enhancing generalization. Notably, the proposed method achieves superior gesture classification performance, outperforming state-of-the-art approaches by 31-43% with only two beams. Thirdly, the research explores receiver antenna diversity in mmWave radars, using deep learning techniques to combine data from multiple receiver antennas and leveraging their inherent diversity to further improve gesture recognition accuracy. Extensive experimentation and evaluation demonstrate substantial advancements in contactless gesture recognition using low-cost mmWave radar technology.
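    A minimal sketch of the beam-fusion step described above, assuming per-beam Doppler/range features have already been extracted: a self-attention layer mixes information across beams before a small classifier predicts the gesture. Feature size, number of heads, and gesture count are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: per-beam features are treated as a short sequence and
# fused with self-attention before gesture classification.
class BeamFusionClassifier(nn.Module):
    def __init__(self, feat_dim: int = 64, n_heads: int = 4, n_gestures: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.classifier = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                        nn.Linear(64, n_gestures))

    def forward(self, beam_feats):                 # (batch, n_beams, feat_dim)
        fused, _ = self.attn(beam_feats, beam_feats, beam_feats)
        return self.classifier(fused.mean(dim=1))  # pool over beams, then classify

# Usage: two beams per gesture sample, each already reduced to a 64-d feature.
model = BeamFusionClassifier()
logits = model(torch.randn(16, 2, 64))             # 16 samples, 2 beams
pred = logits.argmax(dim=1)
```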
