3,219 research outputs found
A Spark Of Emotion: The Impact of Electrical Facial Muscle Activation on Emotional State and Affective Processing
Facial feedback, which involves the brain receiving information about the activation of facial muscles, has the potential to influence our emotional states and judgments. The extent to which this applies is still a matter of debate, particularly considering a failed replication of a seminal study. One factor contributing to the lack of replication of facial feedback effects may be the imprecise manipulation of facial muscle activity, in terms of both degree and timing. To overcome these limitations, this thesis proposes a non-invasive method for inducing precise facial muscle contractions: facial neuromuscular electrical stimulation (fNMES). I begin by presenting a systematic literature review that lays the groundwork for standardising the use of fNMES in psychological research by evaluating its application in existing studies. This review highlights two issues: the scarcity of fNMES studies in psychological research and the inconsistent reporting of stimulation parameters. I provide practical recommendations for researchers interested in implementing fNMES. I then conducted an online experiment to investigate participants' willingness to take part in fNMES research. This experiment revealed that concerns over potential burns and involuntary muscle movements are significant deterrents to participation; understanding these anxieties is critical for participant management and expectation setting. Finally, two laboratory studies are presented that investigated the facial feedback hypothesis using fNMES. The first study showed that feelings of happiness and sadness, and changes in peripheral physiology, can be induced by stimulating the corresponding facial muscles with 5 seconds of fNMES. The second showed that fNMES-induced smiling alters the perception of ambiguous facial emotions, creating a bias towards happiness, and alters neural correlates of face processing, as measured with event-related potentials (ERPs).
In summary, the thesis presents promising results for testing the facial feedback hypothesis with fNMES, and provides practical guidelines and recommendations for researchers interested in using fNMES in psychological research.
Computational and experimental studies on the reaction mechanism of bio-oil components with additives for increased stability and fuel quality
As one of the world’s largest palm oil producers, Malaysia faces a major disposal problem, as vast amounts of oil palm biomass waste are produced. To address this problem, the biomass waste can be liquefied into biofuel with fast pyrolysis technology. However, further upgrading of fast pyrolysis bio-oil via direct solvent addition is required to overcome its undesirable attributes. In addition, the high production cost of biofuels often hinders their commercialisation. Thus, the designed solvent-oil blend needs to meet both fuel functionality and economic targets to be competitive with conventional diesel fuel.
In this thesis, a multi-stage computer-aided molecular design (CAMD) framework was employed for bio-oil solvent design. In the design problem, molecular signature descriptors were applied to accommodate different classes of property prediction models. However, the complexity of the CAMD problem increases with the height of the signature, owing to the combinatorial nature of higher-order signatures. Thus, a consistency rule was developed to reduce the size of the CAMD problem. The CAMD problem was then further extended to address economic aspects via a fuzzy multi-objective optimisation approach.
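The fuzzy multi-objective step can be illustrated in miniature: each objective gets a linear membership function between a worst and a best target value, and a candidate's overall score is its least-satisfied objective (max-min aggregation). This is a generic sketch of that idea; the objective names and numbers below are illustrative only, not the thesis's actual property models:

```python
def membership(value, worst, best):
    """Linear fuzzy membership: 0 at the worst acceptable value, 1 at the best."""
    if best > worst:  # benefit-type objective (higher is better)
        mu = (value - worst) / (best - worst)
    else:             # cost-type objective (lower is better)
        mu = (worst - value) / (worst - best)
    return max(0.0, min(1.0, mu))

def max_min_score(candidate, objectives):
    """Overall satisfaction = the least-satisfied objective (max-min aggregation)."""
    return min(membership(candidate[name], worst, best)
               for name, (worst, best) in objectives.items())

# Hypothetical solvent candidates scored on fuel quality (higher is better)
# and production cost (lower is better); all names and values are made up.
objectives = {"quality": (0.0, 1.0), "cost": (100.0, 40.0)}
candidates = {
    "solvent_A": {"quality": 0.8, "cost": 70.0},
    "solvent_B": {"quality": 0.6, "cost": 50.0},
}
best = max(candidates, key=lambda c: max_min_score(candidates[c], objectives))
```

Under max-min aggregation, solvent_B wins here: its worst objective (quality, 0.6) is still better than solvent_A's worst (cost, 0.5), which is how the fuzzy formulation trades off fuel functionality against economics.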
Next, a rough-set-based machine learning (RSML) model was proposed to correlate feedstock characterisation and pyrolysis conditions with pyrolysis bio-oil properties by generating decision rules. The generated decision rules were analysed from a scientific standpoint to identify the underlying patterns while ensuring the rules were logical. These decision rules can be used to select the optimal feedstock composition and pyrolysis conditions to produce pyrolysis bio-oil with targeted fuel properties.
Next, the results obtained from the computational approaches were verified through experimental study. The generated pyrolysis bio-oils were blended with the identified solvents at various mixing ratios. In addition, emulsification of the solvent-oil blend in diesel was conducted with the help of surfactants. Lastly, potential extensions and prospective work for this study are discussed in the later part of this thesis. To conclude, this thesis presented a combination of computational and experimental approaches to upgrading the fuel properties of pyrolysis bio-oil. As a result, high-quality biofuel can be generated as a cleaner-burning replacement for conventional diesel fuel.
Conversations on Empathy
In the aftermath of a global pandemic, amidst new and ongoing wars, genocide, inequality, and staggering ecological collapse, some in the public and political arena have argued that we are in desperate need of greater empathy — be this with our neighbours, refugees, war victims, the vulnerable or disappearing animal and plant species. This interdisciplinary volume asks the crucial questions: How does a better understanding of empathy contribute, if at all, to our understanding of others? How is it implicated in the ways we perceive, understand and constitute others as subjects? Conversations on Empathy examines how empathy might be enacted and experienced either as a way to highlight forms of otherness or, instead, to overcome what might otherwise appear to be irreducible differences. It explores the ways in which empathy enables us to understand, imagine and create sameness and otherness in our everyday intersubjective encounters focusing on a varied range of "radical others" – others who are perceived as being dramatically different from oneself. With a focus on the importance of empathy to understand difference, the book contends that the role of empathy is critical, now more than ever, for thinking about local and global challenges of interconnectedness, care and justice
LIPIcs, Volume 251, ITCS 2023, Complete Volume
A novel evaluation framework for recommender systems in big data environments
Henriques, R., & Pinto, L. (2023). A novel evaluation framework for recommender systems in big data environments. Expert Systems with Applications. https://doi.org/10.1016/j.eswa.2023.120659
We gratefully acknowledge the support of Aptoide in providing access to the data which made this project possible. This work was supported by national funds through FCT (Fundação para a Ciência e a Tecnologia), under the project UIDB/04152/2020 - Centro de Investigação em Gestão de Informação (MagIC)/NOVA IMS.
Recommender systems were first introduced to solve information overload problems in enterprises. Over the last few decades, they have found applications in several major websites related to e-commerce, music and video streaming, travel and movie sites, social media, and mobile app stores. Several methods have been proposed over the years to build recommender systems, yet very little work has been done on recommender-system evaluation metrics. The most common approach to measuring a recommender system's performance in offline settings is to employ micro- or macro-averaged versions of standard machine-learning measures. Profit and other business-oriented metrics have been proposed for other predictive analytics problems, such as churn prediction, but no such metrics have emerged for the recommender system context. In this work, we propose a novel evaluation metric that incorporates information from the online platform userbase's behaviour. The rationale of this metric is that the recommender system ought to improve customers' repeated use of an online platform beyond the baseline level (i.e. in the absence of a recommender system). An empirical application of this novel metric is also presented in a real-world mobile app store, which integrates the dynamics of large-scale big data environments, a common deployment scenario for these types of recommender systems.
The resulting profit metric is shown to correlate with the existing metrics while also being capable of integrating cost information, thereby providing additional business context, which allows us to differentiate between two similarly performing models.
AI-based design methodologies for hot form quench (HFQ®)
This thesis aims to develop advanced design methodologies that fully exploit the capabilities of the Hot Form Quench (HFQ®) process for stamping complex geometric features in high-strength aluminium alloy structural components. While previous research has focused on material models for FE simulations, such simulations are not suitable for early-phase design due to their high computational cost and expertise requirements. This project has two main objectives: first, to develop design guidelines for the early-stage design phase; and second, to create a machine-learning-based platform that can optimise 3D geometries under hot stamping constraints, for both early- and late-stage design. With these methodologies, the aim is to facilitate the incorporation of HFQ capabilities into component geometry design, enabling the full realisation of its benefits.
To achieve the objectives of this project, two main efforts were undertaken. Firstly, the analysis of aluminium alloys for stamping deep corners was simplified by identifying the effects of corner geometry and material characteristics on post-form thinning distribution. New equation sets were proposed to model trends and design maps were created to guide component design at early stages. Secondly, a platform was developed to optimise 3D geometries for stamping, using deep learning technologies to incorporate manufacturing capabilities. This platform combined two neural networks: a geometry generator based on Signed Distance Functions (SDFs), and an image-based manufacturability surrogate model. The platform used gradient-based techniques to update the inputs to the geometry generator based on the surrogate model's manufacturability information. The effectiveness of the platform was demonstrated on two geometry classes, Corners and Bulkheads, with five case studies conducted to optimise under post-stamped thinning constraints. Results showed that the platform allowed for free morphing of complex geometries, leading to significant improvements in component quality.
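The platform's gradient-based update loop can be caricatured in one dimension: a "generator" maps a latent code to a geometry parameter, a differentiable "surrogate" predicts post-form thinning, and gradient descent on the latent code pushes the predicted thinning below a limit. Every function and number here is an illustrative stand-in, not the thesis's neural networks or SDF geometry:

```python
# Toy stand-in for the optimisation pipeline: latent code z -> geometry g ->
# predicted thinning.  We descend the surrogate's gradient with respect to z
# until the predicted thinning satisfies the constraint.

def generator(z):
    """Latent code -> corner 'radius' (toy: larger |z| means a gentler corner)."""
    return 1.0 + z * z

def thinning(g):
    """Toy surrogate: sharper corners (small g) thin more."""
    return 1.0 / g

def d_thinning_dz(z):
    """Chain rule through generator and surrogate: d(1/(1+z^2))/dz."""
    g = generator(z)
    return -2.0 * z / (g * g)

def optimise(z, limit, lr=0.5, steps=200):
    """Gradient descent on z until predicted thinning drops below `limit`."""
    for _ in range(steps):
        if thinning(generator(z)) <= limit:
            break
        z -= lr * d_thinning_dz(z)
    return z

z_opt = optimise(z=0.1, limit=0.5)
```

The real platform backpropagates through an SDF-based geometry generator and an image-based manufacturability surrogate instead of these closed-form toys, but the update rule, gradient descent on the generator's inputs guided by the surrogate, has the same shape.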
The research outcomes represent a significant contribution to the field of technologically advanced manufacturing methods and offer promising avenues for future research. The developed methodologies provide practical solutions for designers to identify optimal component geometries, ensuring manufacturing feasibility and reducing design development time and costs. The potential applications of these methodologies extend to real-world industrial settings and can significantly contribute to the continued advancement of the manufacturing sector.
Learning and Control of Dynamical Systems
Despite the remarkable success of machine learning in various domains in recent years, our understanding of its fundamental limitations remains incomplete. This knowledge gap poses a grand challenge when deploying machine learning methods in critical decision-making tasks, where incorrect decisions can have catastrophic consequences. To effectively utilize these learning-based methods in such contexts, it is crucial to explicitly characterize their performance. Over the years, significant research efforts have been dedicated to the learning and control of dynamical systems whose underlying dynamics are unknown or only partially known a priori and must be inferred from collected data. However, many of these classical results have focused on asymptotic guarantees, providing limited insight into the amount of data required to achieve desired control performance while satisfying operational constraints such as safety and stability, especially in the presence of statistical noise.
In this thesis, we study the statistical complexity of learning and control of unknown dynamical systems. By utilizing recent advances in statistical learning theory, high-dimensional statistics, and control theoretic tools, we aim to establish a fundamental understanding of the number of samples required to achieve desired (i) accuracy in learning the unknown dynamics, (ii) performance in the control of the underlying system, and (iii) satisfaction of the operational constraints such as safety and stability. We provide finite-sample guarantees for these objectives and propose efficient learning and control algorithms that achieve the desired performance at these statistical limits in various dynamical systems. Our investigation covers a broad range of dynamical systems, starting from fully observable linear dynamical systems to partially observable linear dynamical systems, and ultimately, nonlinear systems.
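As a minimal concrete instance of learning unknown dynamics from data (a textbook baseline, not one of the thesis's algorithms), consider ordinary least-squares identification of a linear system x_{t+1} = A x_t + w_t from a single trajectory; the estimation error shrinking with trajectory length is exactly the finite-sample behaviour the thesis quantifies. The system matrix and noise level below are arbitrary:

```python
import numpy as np

# Estimate the dynamics matrix A of x_{t+1} = A x_t + w_t by least squares
# from one trajectory, and watch the error shrink as more samples arrive.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2], [0.0, 0.8]])   # an arbitrary stable system

def estimate_A(T, noise=0.1):
    x = np.zeros(2)
    X, Y = [], []
    for _ in range(T):
        x_next = A_true @ x + noise * rng.standard_normal(2)
        X.append(x)
        Y.append(x_next)
        x = x_next
    X, Y = np.array(X), np.array(Y)
    # Least squares: solve X @ coef = Y, so coef approximates A^T
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef.T

err_small = np.linalg.norm(estimate_A(50) - A_true)    # short trajectory
err_large = np.linalg.norm(estimate_A(5000) - A_true)  # long trajectory
```

Finite-sample theory of the kind developed in the thesis makes this qualitative picture precise: it bounds how large T must be, as a function of noise and system properties, for the error to fall below a target with high probability.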
We deploy our learning and control algorithms in various adaptive control tasks in real-world control systems and demonstrate their strong empirical performance along with their learning, robustness, and stability guarantees. In particular, we implement one of our proposed methods, Fourier Adaptive Learning and Control (FALCON), on an experimental aerodynamic testbed under extreme turbulent flow dynamics in a wind tunnel. The results show that FALCON achieves state-of-the-art stabilization performance and consistently outperforms conventional and other learning-based methods by at least 37%, despite using 8 times less data. The superior performance of FALCON arises from its physically and theoretically accurate modeling of the underlying nonlinear turbulent dynamics, which yields rigorous finite-sample learning and performance guarantees. These findings underscore the importance of characterizing the statistical complexity of learning and control of unknown dynamical systems.
Efficiently Sampling the PSD Cone with the Metric Dikin Walk
Semi-definite programs represent a frontier of efficient computation. While there has been much progress on semi-definite optimization, with moderate-sized instances currently solvable in practice by the interior-point method, the basic problem of sampling semi-definite solutions remains a formidable challenge. The direct application of known polynomial-time algorithms for sampling general convex bodies to semi-definite sampling leads to a prohibitively high running time. In addition, known general methods require an expensive rounding phase as pre-processing. Here we analyze the Dikin walk, by first adapting it to general metrics, then devising suitable metrics for the PSD cone with affine constraints. The resulting mixing time and per-step complexity are considerably smaller, and by an appropriate choice of the metric, the dependence on the number of constraints can be made polylogarithmic. We introduce a refined notion of self-concordant matrix functions and give rules for combining different metrics. Along the way, we further develop the theory of interior-point methods for sampling.
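For intuition, a toy version of the plain (Euclidean-metric) Dikin walk on a simple polytope can be sketched as follows; the paper's contribution, general metrics and tailored metrics for the PSD cone, goes well beyond this sketch. The polytope, step radius, and step count are arbitrary choices:

```python
import numpy as np

# Educational sketch of the Dikin walk on a polytope {x : Ax <= b}, here a
# 2-D box.  Proposals are uniform in the Dikin ellipsoid of the log-barrier
# Hessian at the current point, with a Metropolis filter for reversibility.
rng = np.random.default_rng(1)
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.ones(4)

def hessian(x):
    s = b - A @ x                      # slacks, all > 0 inside the polytope
    return (A / s[:, None] ** 2).T @ A # sum_i a_i a_i^T / s_i^2

def dikin_step(x, r=0.4):
    H = hessian(x)
    L = np.linalg.cholesky(np.linalg.inv(H))
    g = rng.standard_normal(2)
    u = rng.uniform() ** 0.5           # radius for a uniform point in the unit disk
    y = x + r * (L @ (u * g / np.linalg.norm(g)))
    if np.any(b - A @ y <= 0):
        return x                       # left the polytope: reject
    Hy = hessian(y)
    if (x - y) @ Hy @ (x - y) > r ** 2:
        return x                       # x not in y's ellipsoid: reject
    accept = min(1.0, np.sqrt(np.linalg.det(Hy) / np.linalg.det(H)))
    return y if rng.uniform() < accept else x

x = np.zeros(2)
samples = []
for _ in range(2000):
    x = dikin_step(x)
    samples.append(x)
samples = np.asarray(samples)
```

Because the Dikin ellipsoid shrinks near the boundary, every proposal stays strictly feasible without any projection step; the paper's metric Dikin walk replaces this log-barrier Hessian with more general self-concordant metrics.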
Less is More: Restricted Representations for Better Interpretability and Generalizability
Deep neural networks are prevalent in supervised learning across a wide range of tasks, such as image classification, machine translation, and even scientific discovery.
Their success often comes at the cost of interpretability and generalizability. The increasing complexity of models and the involvement of a pre-training process make this inexplicability more pressing. Likewise, deep neural networks perform outstandingly when labeled data are abundant but are prone to overfitting when labeled data are limited, which demonstrates the difficulty of generalizing to different datasets.
This thesis aims to improve interpretability and generalizability by restricting representations. We choose to approach interpretability by focusing on attribution analysis to understand which features contribute to prediction on BERT, and to approach generalizability by focusing on effective methods in a low-data regime.
We consider two strategies for restricting representations: (1) adding a bottleneck, and (2) introducing compression. Given input x, suppose we want to learn y with the latent representation z (i.e. x→z→y). Adding a bottleneck means adding a function R such that L(R(z)) < L(z), and introducing compression means adding a function R such that L(R(y)) < L(y), where L denotes the number of bits. In other words, the restriction is applied either in the middle of the pipeline or at its end.
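The role of L as a description length can be made concrete with a toy example that uses gzip's compressed size as a stand-in for the number of bits; the vectors and the rounding operation below are purely illustrative, not the thesis's R:

```python
import gzip

# Toy illustration of "restriction in bits": gzip's compressed size stands in
# for L(.), and rounding plays the role of a bottleneck function R.
def L(obj):
    return 8 * len(gzip.compress(repr(obj).encode()))   # length in bits

z = [i * 0.12345678 for i in range(64)]   # a "latent representation"
R_z = [round(v, 2) for v in z]            # bottleneck: a coarser version of z

assert L(R_z) < L(z)   # the restricted representation needs fewer bits
```

The inequality L(R(z)) < L(z) holds here because rounding discards information, which is the essence of both strategies: a bottleneck discards it mid-pipeline, compression discards redundancy at the end.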
We first introduce how adding an information bottleneck can help attribution analysis and apply it to investigate BERT's behavior on text classification in Chapter 3.
We then extend this attribution method to analyze passage reranking in Chapter 4, where we conduct a detailed analysis to understand cross-layer and cross-passage behavior.
Adding a bottleneck not only provides insight into deep neural networks but can also be used to increase generalizability.
In Chapter 5, we demonstrate the equivalence between adding a bottleneck and performing neural compression. We then leverage this finding in a framework called Non-Parametric learning by Compression with Latent Variables (NPC-LV), and show how optimized neural compressors can be used for non-parametric image classification with few labeled data.
To further investigate how compression alone helps non-parametric learning without latent variables (NPC), we carry out experiments with the universal compressor gzip on text classification in Chapter 6.
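A minimal sketch of this compressor-based approach, in the spirit of Chapter 6: gzip's compressed lengths yield a Normalized Compression Distance (NCD), and a nearest-neighbour rule assigns labels. The training texts and labels below are toy examples, not the thesis's datasets:

```python
import gzip

# Parameter-free text classification: if two texts share content, compressing
# their concatenation costs little more than compressing the longer one alone,
# so their NCD is small.  A 1-nearest-neighbour rule then assigns the label.
def C(s):
    return len(gzip.compress(s.encode()))

def ncd(a, b):
    return (C(a + " " + b) - min(C(a), C(b))) / max(C(a), C(b))

train = [
    ("the striker scored a late winning goal", "sport"),
    ("midfield pressing decided the match", "sport"),
    ("the central bank raised interest rates", "finance"),
    ("quarterly earnings beat market forecasts", "finance"),
]

def classify(text):
    # label of the nearest training example under NCD
    return min(train, key=lambda ex: ncd(text, ex[0]))[1]
```

No parameters are trained and no vectors are learned; all the "modeling" is delegated to the compressor, which is what makes the method attractive in low-data regimes.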
In Chapter 7, we elucidate methods that adopt the perspective of compression without the actual process of compression, using T5.
Using experimental results in passage reranking, we show that our method is highly effective in a low-data regime when only one thousand query-passage pairs are available.
In addition to the weakly supervised scenario, we also extend our method to large language models like GPT under almost no supervision, namely in one-shot and zero-shot settings. The experiments, presented in Chapter 8, show that without extra parameters or in-context learning, GPT can be used for semantic similarity, text classification, and text ranking, and can outperform strong baselines.
The thesis thus tackles two big challenges in machine learning, "interpretability" and "generalizability", by restricting representations. We provide both theoretical derivations and empirical results to show the effectiveness of information-theoretic approaches. We not only design new algorithms but also provide numerous insights into why and how "compression" is so important for understanding deep neural networks and improving generalizability.