Imbibition in Disordered Media
The physics of liquids in porous media gives rise to many interesting
phenomena, including imbibition where a viscous fluid displaces a less viscous
one. Here we discuss the theoretical and experimental progress made in recent
years in this field. The emphasis is on an interfacial description, akin to the
focus of a statistical physics approach. Coarse-grained equations of motion
have been recently presented in the literature. These contain terms that take
into account the pertinent features of imbibition: non-locality and the
quenched noise that arises from the random environment, fluctuations of the
fluid flow and capillary forces. The theoretical progress has highlighted the
presence of intrinsic length-scales that invalidate the scale invariance often
assumed in kinetic roughening processes, such as that of a two-phase boundary
in liquid penetration. Another important fact is that the
macroscopic fluid flow, the kinetic roughening properties, and the effective
noise in the problem are all coupled. Many possible deviations from simple
scaling behaviour exist, and we outline the experimental evidence. Finally,
prospects for further work, both theoretical and experimental, are discussed.

Comment: Review article, to appear in Advances in Physics, 53 pages, LaTeX
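The coarse-grained equations mentioned above are often introduced by first writing down a local interface equation with quenched noise; a minimal sketch (generic notation of my own choosing, before adding the non-local terms specific to imbibition) is the quenched Edwards-Wilkinson form:

```latex
\partial_t h(x,t) = \nu \nabla^2 h(x,t) + F + \eta\big(x, h(x,t)\big)
```

Here h is the interface height, ν a surface-tension-like stiffness, F the mean driving force, and η a quenched noise evaluated at the interface position, encoding the random environment. In the imbibition literature the Laplacian term is typically replaced by a non-local kernel reflecting conserved fluid transport in the bulk, which is one source of the intrinsic length-scales discussed in the review.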
Automatic facial expression analysis
Humans spend a large amount of their time interacting with computers of one type or another. However, computers are emotionally blind and indifferent to the affective states of their users. Human-computer interaction that does not consider emotions ignores a whole channel of available information.
Faces contain a large portion of our emotionally expressive behaviour. We use facial expressions to display our emotional states and to manage our interactions. Furthermore, we express and read emotions in faces effortlessly. However, automatic understanding of facial expressions is a very difficult task computationally, especially in the presence of highly variable pose, expression and illumination. My work furthers the field of automatic facial expression tracking by tackling these issues, bringing emotionally aware computing closer to reality.
Firstly, I present an in-depth analysis of the Constrained Local Model (CLM) for facial expression and head pose tracking. I propose a number of extensions that make the localisation of facial features more accurate.
Secondly, I introduce a 3D Constrained Local Model (CLM-Z) which takes full advantage of depth information available from various range scanners. CLM-Z is robust to changes in illumination and shows better facial tracking performance.
Thirdly, I present the Constrained Local Neural Field (CLNF), a novel instance of the CLM that deals with the issues of facial tracking in complex scenes. It achieves this through the use of a novel landmark detector and a novel CLM fitting algorithm. CLNF outperforms state-of-the-art models for facial tracking in the presence of difficult illumination and varying pose.
Lastly, I demonstrate how tracked facial expressions can be used for emotion inference from videos. I also show how the tools developed for facial tracking can be applied to emotion inference in music.
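The core idea behind CLM-style fitting can be sketched in a few lines: raw landmark-detector peaks are regularized by projecting them onto a PCA shape subspace so the fitted shape stays plausible. This is a toy simplification with names of my own choosing, not the thesis' implementation:

```python
import numpy as np

def fit_shape(peaks, mean_shape, basis):
    """Project detector peaks onto a PCA shape model.

    peaks: (n, 2) raw landmark-detector peaks
    mean_shape: (2n,) mean shape vector
    basis: (2n, k) orthonormal PCA shape modes
    """
    x = peaks.reshape(-1)
    b = basis.T @ (x - mean_shape)     # shape-model coefficients
    fitted = mean_shape + basis @ b    # nearest shape within the PCA subspace
    return fitted.reshape(-1, 2), b
```

In a full CLM the peaks themselves are re-estimated from local response maps around the current shape estimate and the projection is iterated; CLNF additionally replaces the patch experts with neural-field landmark detectors.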
Computer graphic manipulations in the study of face perceptions
The face is of unparalleled importance in communication, containing cues used not only in identity recognition but also for the assessment of character, mood, health and attractiveness. Computer graphic image (CGI) manipulation has enabled the effects of facial cues on perception to be studied from cognitive neuroscience and evolutionary psychology perspectives. A set of studies employing novel computer graphic methods to investigate facial expression, symmetry and dynamic cues related to taste is presented in six experimental chapters (2-7). In Chapter 2, novel photo-realistic stimuli are employed to study the perceptual lateralization of facial cues for perceptions of age, gender, attractiveness, expression and lip-reading. Results suggest a right hemisphere lateralization for all perceptions except lip-reading, which appears left lateralized. Previous studies with photographic and CGI manipulations have implied that humans, unlike other animals, prefer asymmetry in attractiveness judgements. In Chapter 3, new, more appropriate CGI techniques were applied to investigate facial symmetry preference. In a series of experiments, humans were found to judge more symmetrical faces as more attractive, and possible individual differences in symmetry preference strength were investigated. CGI techniques have enabled consistent qualities related to attractiveness and age to be captured from groups of face images and subsequently manipulated. In Chapter 4, these techniques are applied to capture and manipulate qualities associated with perceived skin health. Chapter 5 represents a foray into dynamic cues related to food consumption using video. Possible facial cues to the strength, taste and hedonic value of flavours that an observed individual was consuming were investigated. Chapter 6 presents a novel test investigating individual differences in the percept of neutral expression.
To illustrate the test: when asked to make faces expressively neutral, depressed individuals chose higher levels of anger and disgust compared to controls. The test used novel 'anti-face' expression stimuli. These were later used in Chapter 7 to investigate a recent finding that adaptation to the anti-faces of individuals (faces with the opposite characteristics to a particular individual) facilitated recognition of subsequently presented corresponding individuals. The presence of analogous effects for emotional expressions was found. This effect appears to be robust to changes in individual identity, pattern masking and delays of up to a second between the adaptation and test stimuli. Overall, the thesis demonstrates the use of CGI manipulation in testing hypotheses from a variety of areas within face perception and presents a number of novel techniques that may be useful in future face perception research.
Artificial intelligence system for continuous affect estimation from naturalistic human expressions
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.

Automatic affect estimation from human expressions has been acknowledged as an active research topic in the computer vision community. Most reported affect recognition systems, however, only consider subjects performing well-defined acted expressions under very controlled conditions, so they are not robust enough for real-life recognition tasks with subject variation, acoustic surroundings and illumination changes. In this thesis, an artificial intelligence system is proposed to continuously (represented along a continuum, e.g., from -1 to +1) estimate affect behaviour in terms of latent dimensions (e.g., arousal and valence) from naturalistic human expressions. To tackle these issues, feature representation and machine learning strategies are addressed. In feature representation, human expression is represented by modalities such as audio, video, physiological signals and text. Hand-crafted features are extracted from each modality per frame, in order to match the consecutive affect labels. However, the extracted features may be missing information due to several factors such as background noise or lighting conditions. The Haar Wavelet Transform is employed to determine whether a noise cancellation mechanism in feature space should be considered in the design of the affect estimation system. Beyond hand-crafted features, deep learning features are also analysed layer-wise, across convolutional and fully connected layers. Convolutional Neural Networks such as AlexNet, VGGFace and ResNet are selected as deep learning architectures for feature extraction from facial expression images. A multimodal fusion scheme is then applied, fusing deep learning and hand-crafted features together to improve performance. In machine learning strategies, a two-stage regression approach is introduced. In the first stage, baseline regression methods such as Support Vector Regression are applied to estimate each affect dimension per frame. In the second stage, subsequent models such as the Time Delay Neural Network, Long Short-Term Memory and the Kalman Filter are proposed to model the temporal relationships between consecutive estimates of each affect dimension. In doing so, the temporal information employed by a subsequent model is not biased by the high variability present in consecutive frames, and at the same time the network can exploit the slowly changing emotional dynamics more efficiently. Following the two-stage regression approach for unimodal affect analysis, the fusion of information from different modalities is elaborated. Continuous emotion recognition in the wild is addressed by investigating mathematical modelling for each emotion dimension. Linear Regression, Exponent Weighted Decision Fusion and Multi-Gene Genetic Programming are implemented to quantify the relationship between the modalities. In summary, the research work presented in this thesis reveals a fundamental approach to automatically and continuously estimate affect values from naturalistic human expressions. The proposed system, which consists of feature smoothing, deep learning features, a two-stage regression framework and mathematical fusion between modalities, is demonstrated. It offers a strong basis towards the development of artificial intelligence systems for continuous affect estimation, and more broadly towards building a real-time emotion recognition system for human-computer interaction.

Majlis Amanah Rakyat (MARA), Malaysia
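The two-stage idea above, a per-frame baseline regressor followed by a temporal model that smooths consecutive estimates, can be sketched minimally. Here closed-form ridge regression and exponential smoothing stand in for the SVR and the LSTM/Kalman models of the thesis; this is an illustrative simplification, not the thesis' pipeline:

```python
import numpy as np

def stage_one(X, y, lam=1.0):
    """Per-frame baseline regressor (ridge stand-in for SVR).

    Closed-form solution of (X^T X + lam I) w = X^T y.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def stage_two(pred, alpha=0.3):
    """Temporal model over consecutive frame estimates (exponential smoothing)."""
    out = np.empty_like(pred)
    out[0] = pred[0]
    for t in range(1, len(pred)):
        # blend the new noisy estimate with the smoothed history
        out[t] = alpha * pred[t] + (1 - alpha) * out[t - 1]
    return out
```

The second stage sees only the first stage's outputs, so the temporal model is trained on a low-dimensional, already-denoised signal rather than on raw frame features.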
Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems
Advances in artificial intelligence (AI) are fueling a new paradigm of
discoveries in natural sciences. Today, AI has started to advance natural
sciences by improving, accelerating, and enabling our understanding of natural
phenomena at a wide range of spatial and temporal scales, giving rise to a new
area of research known as AI for science (AI4Science). Being an emerging
research paradigm, AI4Science is unique in that it is an enormous and highly
interdisciplinary area. Thus, a unified and technical treatment of this field
is needed yet challenging. This work aims to provide a technically thorough
account of a subarea of AI4Science; namely, AI for quantum, atomistic, and
continuum systems. These areas aim at understanding the physical world from the
subatomic (wavefunctions and electron density), atomic (molecules, proteins,
materials, and interactions), to macro (fluids, climate, and subsurface) scales
and form an important subarea of AI4Science. A unique advantage of focusing on
these areas is that they largely share a common set of challenges, thereby
allowing a unified and foundational treatment. A key common challenge is how to
capture physics first principles, especially symmetries, in natural systems by
deep learning methods. We provide an in-depth yet intuitive account of
techniques to achieve equivariance to symmetry transformations. We also discuss
other common technical challenges, including explainability,
out-of-distribution generalization, knowledge transfer with foundation and
large language models, and uncertainty quantification. To facilitate learning
and education, we provide categorized lists of resources that we found to be
useful. We strive to be thorough and unified and hope this initial effort may
trigger more community interest and effort to further advance AI4Science.
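A standard way to obtain invariance to a symmetry transformation (the simplest case of the equivariance discussed above) is to build features only from quantities the symmetry preserves. A minimal sketch for rotations of atomic coordinates, with illustrative function names of my own choosing:

```python
import numpy as np

def distance_features(pos):
    """Pairwise distances of a point cloud.

    pos: (n, 3) coordinates. Distances depend only on differences of
    positions and their norms, so they are unchanged by any rotation
    (orthogonal transform) of the input: rotation-invariant features.
    """
    diff = pos[:, None, :] - pos[None, :, :]
    return np.linalg.norm(diff, axis=-1)
```

Models that consume such invariant features (as many interatomic potentials do) are rotation-invariant by construction; equivariant architectures go further and let intermediate features transform covariantly with the input, which the survey's techniques address in depth.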