Addiction in context
The dissertation provides a comprehensive exploration of the interplay between social and cultural factors in substance use, focusing on alcohol use disorder (AUD) and cannabis use disorder (CUD). It begins by introducing the concept of social plasticity, which posits that adolescents' heightened sensitivity to their social environment increases their susceptibility to AUD, but that this same sensitivity increases the potential for recovery in the transition to adulthood.
A series of studies delves into how social cues affect alcohol craving and consumption. One study using functional magnetic resonance imaging (fMRI) investigated social alcohol cue reactivity and its relationship to social drinking behavior, revealing increased craving but no significant change in brain activity in response to alcohol cues. Another fMRI study compared social processes in alcohol cue reactivity between adults and adolescents, showing age-related differences in how social attunement affects drinking behavior.
Shifting focus to cannabis, the dissertation discusses how cultural factors, including norms, legal policies, and attitudes, influence cannabis use and the processes underlying CUD. The research examined various facets of cannabis use: how cannabinoid concentrations in hair correlate with self-reported use, the effects of cannabis and cigarette co-use on brain reactivity, and cross-cultural differences in CUD between Amsterdam and Texas. Finally, the evidence on the relationship between cannabis use, CUD, and mood disorders is reviewed, suggesting a bidirectional relationship: cannabis use may precede the onset of bipolar disorder and contribute to the development and worse prognosis of mood disorders, while mood disorders may in turn lead to more cannabis use.
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Foundations of Node Representation Learning
Low-dimensional node representations, also called node embeddings, are a cornerstone in the modeling and analysis of complex networks. In recent years, advances in deep learning have spurred the development of novel neural-network-inspired methods for learning node representations, which have largely surpassed classical 'spectral' embeddings in performance. Yet little work asks the central questions of this thesis: Why do these novel deep methods outperform their classical predecessors, and what are their limitations?
We pursue several paths to answering these questions. To further our understanding of deep embedding methods, we explore their relationship with spectral methods, which are better understood, and show that some popular deep methods are equivalent to spectral methods in a certain natural limit. We also introduce the problem of inverting node embeddings in order to probe what information they contain. Further, we propose a simple, non-deep method for node representation learning, and find it to often be competitive with modern deep graph networks in downstream performance.
To better understand the limitations of node embeddings, we prove some upper and lower bounds on their capabilities. Most notably, we prove that node embeddings are capable of exact low-dimensional representation of networks with bounded maximum degree or arboricity, and we further show that a simple algorithm can find such exact embeddings for real-world networks. By contrast, we also prove inherent limits on the ability of random graph models, including those derived from node embeddings, to capture key structural properties of networks without simply memorizing a given graph.
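Since the thesis contrasts deep methods with classical spectral embeddings, a minimal sketch of the latter may help fix ideas. This is a generic Laplacian-eigenvector embedding, not the specific construction analysed in the thesis, and the 4-cycle example graph is ours:

```python
# A minimal sketch (not the thesis's method) of a classical spectral
# node embedding: rows of the low eigenvectors of the graph Laplacian.
import numpy as np

def spectral_embedding(adj, dim):
    """Embed nodes via eigenvectors of the combinatorial Laplacian.

    adj : (n, n) symmetric 0/1 adjacency matrix
    dim : embedding dimension
    """
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                   # combinatorial Laplacian L = D - A
    vals, vecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    # skip the trivial constant eigenvector (eigenvalue ~ 0)
    return vecs[:, 1:dim + 1]

# illustrative graph: the 4-cycle 0-1-2-3-0
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
emb = spectral_embedding(adj, 2)
print(emb.shape)  # (4, 2): one 2-dimensional embedding per node
```

Each node receives one row of the eigenvector matrix; deep methods replace this fixed linear-algebraic construction with learned encoders.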
Operational modal analysis and prediction of remaining useful life for rotating machinery
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
The significance of rotating machinery spans areas from household items to vital industry sectors such as aerospace, automotive, railway, sea transport, resource extraction, and manufacturing. Hence, our technologised society depends on the efficient and reliable operation of rotating machinery. To contribute to this aim, this thesis leverages quantities measurable during operation for structural-mechanical evaluation, employing Operational Modal Analysis (OMA) and the prediction of Remaining Useful Life (RUL). Modal parameters determined by OMA are central to the design, testing, and validation of rotating machinery. This thesis introduces the first open parametric simulation dataset of rotating machinery during an acceleration run. As there is a lack of similar open datasets suitable for OMA, it lays a foundation for improved reproducibility and comparability of future research. Based on this, the Averaged Order-Based Modal Analysis (AOBMA) method is developed. The novel addition of scaling and weighted averaging of individual machine orders in AOBMA reduces the analysis effort of the existing Order-Based Modal Analysis (OBMA) method by providing a unified set of modal parameters with higher accuracy. AOBMA showed a lower mean absolute relative error of 0.03 for damping ratio estimations across the compared modes, while OBMA provided an error of 0.32, depending on the processed order. Under excitation with high harmonic contributions, AOBMA also identified the highest number of modes accurately among the compared methods. At a harmonic ratio of 0.8, for example, AOBMA identified an average of 11.9 modes per estimation, while OBMA and baseline OMA followed with 9.5 and 9 modes, respectively.
Moreover, this is the first study to systematically evaluate the impact of excitation conditions on the compared methods, finding an advantage of OBMA and AOBMA over traditional OMA in mode shape estimation accuracy. While OMA can be used to evaluate significant structural changes, Machine Learning (ML) methods have seen substantially greater success in condition monitoring, including RUL prediction. However, as these methods often require large amounts of time- and cost-intensive training data, a novel data-efficient RUL prediction methodology is introduced that takes advantage of distinct healthy and faulty condition data. When the number of training sequences from an open dataset is reduced to 5%, an average prediction Root Mean Square Error (RMSE) of 24.9 operation cycles is achieved, outperforming the baseline method's RMSE of 28.1. Motivated by environmental considerations, the impact of data reduction on the training duration of several method variants is quantified. When the full training set is utilised, the most resource-saving variant of the proposed approach achieves an average training duration of 8.9% of that of the baseline method.
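For concreteness, the RMSE figures quoted above (24.9 vs. 28.1 operation cycles) follow the standard root-mean-square error over predicted remaining-useful-life values. A minimal sketch, with illustrative numbers that are not from the thesis:

```python
# Hedged sketch of how an RUL prediction RMSE (in operation cycles)
# is computed; the prediction values below are illustrative only.
import math

def rmse(predicted, actual):
    """Root Mean Square Error between predicted and true RUL values."""
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)
    )

pred = [110, 95, 60, 30]   # predicted remaining cycles (illustrative)
true = [100, 90, 70, 25]   # true remaining cycles (illustrative)
print(round(rmse(pred, true), 2))  # → 7.91
```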
Mesoscopic Physics of Quantum Systems and Neural Networks
We study three different kinds of mesoscopic systems, which lie in the intermediate region between macroscopic and microscopic scales and consist of many interacting constituents:
We consider particle entanglement in one-dimensional chains of interacting fermions. By employing a field-theoretical bosonization calculation, we obtain the one-particle entanglement entropy in the ground state and its time evolution after an interaction quantum quench, which causes relaxation towards non-equilibrium steady states. By pushing the boundaries of numerical exact diagonalization and density matrix renormalization group computations, we are able to scale accurately to the thermodynamic limit, where we make contact with the analytic field theory model. This allows us to fix an interaction cutoff, required in the continuum bosonization calculation to account for the short-range interaction of the lattice model, such that the bosonization result provides accurate predictions for the one-body reduced density matrix in the Luttinger liquid phase.
Establishing a better understanding of how to control entanglement in mesoscopic systems is also crucial for building qubits for a quantum computer. We further study a popular scalable qubit architecture that is based on Majorana zero modes (MZMs) in topological superconductors. The two major challenges in realizing Majorana qubits currently lie in trivial pseudo-Majorana states that mimic signatures of the topological bound states, and in strong disorder in the proposed topological hybrid systems, which destroys the topological phase. We study coherent transport through interferometers with a Majorana wire embedded in one arm.
By combining analytical and numerical considerations, we explain the occurrence of an amplitude maximum as a function of the Zeeman field at the onset of the topological phase, a signature unique to MZMs, which has recently been measured experimentally [Whiticar et al., Nature Communications, 11(1):3212, 2020]. By placing an array of gates in proximity to the nanowire, we make a fruitful connection to the field of machine learning, using the CMA-ES algorithm to tune the gate voltages so as to maximize the amplitude of coherent transmission. We find that the algorithm is capable of learning disorder profiles and even of restoring Majorana modes that were fully destroyed by strong disorder, by optimizing a feasible number of gates.
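The gate-tuning idea can be caricatured with a toy objective: search for gate voltages that maximize a transmission amplitude using a simple (1+1) evolution strategy. Both the quadratic landscape and the strategy are illustrative stand-ins; the thesis applies the real CMA-ES algorithm to a physical transport model.

```python
# Toy stand-in for CMA-ES gate tuning: a (1+1) evolution strategy
# climbing an illustrative quadratic "transmission amplitude" landscape.
# TARGET and the objective are invented for illustration.
import random

random.seed(0)
TARGET = [0.3, -0.7, 1.1, 0.2]      # hidden optimal gate voltages (toy)

def amplitude(gates):
    """Illustrative objective: peaks at the hidden optimum."""
    return -sum((g - t) ** 2 for g, t in zip(gates, TARGET))

gates = [0.0] * 4                    # initial gate voltages
best = amplitude(gates)
sigma = 0.5                          # mutation step size
for _ in range(3000):
    trial = [g + random.gauss(0, sigma) for g in gates]
    a = amplitude(trial)
    if a > best:                     # keep only improving candidates
        gates, best = trial, a
    sigma *= 0.999                   # slowly shrink the search radius

print(best)  # close to the maximum value 0
```

CMA-ES improves on this sketch by also adapting the full covariance of the mutation distribution, which is what lets it learn structured disorder profiles.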
Deep neural networks are another popular machine learning approach; they not only have many direct applications to physical systems but also behave similarly to physical mesoscopic systems. To comprehend the effects of the complex dynamics of training, we employ Random Matrix Theory (RMT) as a zero-information hypothesis: before training, the weights are randomly initialized and therefore perfectly described by RMT. After training, we attribute deviations from these predictions to learned information in the weight matrices.
Conducting a careful numerical analysis, we verify that the spectra of weight matrices consist of a random bulk and a few important large singular values, whose corresponding vectors carry almost all learned information. By adding label noise to the training data, we further find that more singular values in intermediate parts of the spectrum contribute, by fitting the randomly labeled images. Based on these observations, we propose a noise-filtering algorithm that both removes the singular values storing the noise and reverts the level repulsion of the large singular values caused by the random bulk.
On discretisation drift and smoothness regularisation in neural network training
The deep learning recipe of casting real-world problems as mathematical optimisation and tackling the optimisation by training deep neural networks with gradient-based methods has undoubtedly proven a fruitful one. The understanding of why deep learning works, however, has lagged behind its practical significance. We aim to take steps towards an improved understanding of deep learning, with a focus on optimisation and model regularisation. We start by investigating gradient descent (GD), a discrete-time algorithm at the basis of most popular deep learning optimisation algorithms. Understanding the dynamics of GD has been hindered by the presence of discretisation drift, the numerical integration error between GD and its often-studied continuous-time counterpart, the negative gradient flow (NGF). To add to the toolkit available to study GD, we derive novel continuous-time flows that account for discretisation drift. Unlike the NGF, these new flows can be used to describe learning-rate-specific behaviours of GD, such as training instabilities observed in supervised learning and two-player games. We then translate insights from continuous time into mitigation strategies for unstable GD dynamics, by constructing novel learning rate schedules and regularisers that do not require additional hyperparameters.
Like optimisation, smoothness regularisation is another pillar of deep learning's success, with wide use in supervised learning and generative modelling. Despite their individual significance, the interactions between smoothness regularisation and optimisation have yet to be explored. We find that smoothness regularisation affects optimisation across multiple deep learning domains, and that incorporating smoothness regularisation in reinforcement learning leads to a performance boost that can be recovered using adaptations to optimisation methods.
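Discretisation drift can already be seen on a quadratic: gradient descent on f(x) = x²/2 is explicit Euler integration of the negative gradient flow x(t) = x₀·exp(−t), and the gap between the two grows with the learning rate. The toy objective is ours, chosen only to make the drift computable in closed form:

```python
# Discretisation drift on a toy quadratic: GD with learning rate lr is
# explicit Euler for the NGF dx/dt = -x, whose exact solution is
# x(t) = x0 * exp(-t). The drift is the gap after equal elapsed time.
import math

def gd(x0, lr, steps):
    """Gradient descent on f(x) = x^2 / 2 (gradient is x)."""
    x = x0
    for _ in range(steps):
        x -= lr * x
    return x

x0, steps = 1.0, 10
for lr in (0.01, 0.5):
    t = lr * steps                          # elapsed continuous time
    drift = abs(gd(x0, lr, steps) - x0 * math.exp(-t))
    print(f"lr={lr}: drift={drift:.5f}")   # drift grows with lr
```

At lr = 0.01 the two trajectories nearly coincide, while at lr = 0.5 they diverge visibly, which is why NGF-based analyses miss learning-rate-specific behaviours of GD.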
AI for time-resolved imaging: from fluorescence lifetime to single-pixel time of flight
Time-resolved imaging is a field of optics which measures the arrival time of light at the camera. This thesis looks at two time-resolved imaging modalities: fluorescence lifetime imaging, and time-of-flight measurement for depth imaging and ranging. Both of these applications require temporal accuracy on the order of picoseconds to nanoseconds (10⁻¹²–10⁻⁹ s).
This demands special camera technology and optics that can sample light intensity extremely quickly, much faster than an ordinary video camera. However, such detectors can be very expensive compared to regular cameras while offering lower image quality. Further, the information of interest is often hidden (encoded) in the raw temporal data. Therefore, computational imaging algorithms are used to enhance, analyse, and extract information from time-resolved images.
"A picture is worth a thousand words". This describes a fundamental blessing and curse of image analysis: images contain extreme amounts of data. Consequently, it is very difficult to design algorithms that encompass all the possible pixel permutations and combinations that can encode this information. Fortunately, the rise of AI and machine learning (ML) allow us to instead create algorithms in a data-driven way. This thesis demonstrates the application of ML to time-resolved imaging tasks, ranging from parameter estimation in noisy data and decoding of overlapping information, through super-resolution, to inferring 3D information from 1D (temporal) data
Computational modelling and optimal control of interacting particle systems: connecting dynamic density functional theory and PDE-constrained optimization
Processes that can be described by systems of interacting particles are ubiquitous in nature, society, and industry, ranging from animal flocking, the spread of diseases, and formation of opinions to nano-filtration, brewing, and printing. In real-world applications it is often relevant to not only model a process of interest, but to also optimize it in order to achieve a desired outcome with minimal resources, such as time, money, or energy.
Mathematically, the dynamics of interacting particle systems can be described using Dynamic Density Functional Theory (DDFT). The resulting models are nonlinear, nonlocal partial differential equations (PDEs) that include convolution integral terms. Such terms also enter the naturally arising no-flux boundary conditions. Due to the nonlocal, nonlinear nature of such problems they are challenging both to analyse and solve numerically.
In order to optimize processes that are modelled by PDEs, one can apply tools from PDE-constrained optimization. The aim here is to drive a quantity of interest towards a target state by varying a control variable. This is constrained by a PDE describing the process of interest, in which the control enters as a model parameter. Such problems can be tackled by deriving and solving the (first-order) optimality system, which couples the PDE model with a second PDE and an algebraic equation. Solving such a system numerically is challenging, since large matrices arise in its discretization, for which efficient solution strategies have to be found. Most work in PDE-constrained optimization addresses problems in which the control is applied linearly, and which are constrained by local, often linear PDEs, since introducing nonlinearity significantly increases the complexity in both the analysis and numerical solution of the optimization problem.
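As a generic illustration of the structure just described (our notation and model choice, not the thesis's DDFT setting), consider a tracking objective with a linear control w constrained by an advection-diffusion equation:

```latex
% Illustrative linear-control problem (not the thesis's model):
\min_{\rho,\,w}\;\frac{1}{2}\bigl\|\rho-\widehat{\rho}\bigr\|_{L^2}^2
  +\frac{\beta}{2}\|w\|_{L^2}^2
\quad\text{s.t.}\quad
\partial_t\rho=\nabla^2\rho-\nabla\cdot(\rho\,\mathbf{v})+w.
% The first-order optimality system couples this state equation with an
% adjoint PDE running backwards in time,
-\partial_t q=\nabla^2 q+\mathbf{v}\cdot\nabla q-(\rho-\widehat{\rho}),
% and the algebraic gradient condition
\beta w-q=0,
% whose joint discretization yields the large coupled matrix systems
% mentioned above.
```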
However, in order to optimize real-world processes described by nonlinear, nonlocal DDFT models, one has to develop an optimal control framework for such models. The aim is to drive the particles to some desired distribution by applying the control either linearly, through a particle source, or bilinearly, through an advective field. The optimization process is constrained by the DDFT model, which describes how the particles move under the influence of advection, diffusion, external forces, and particle–particle interactions. To tackle this, the (first-order) optimality system is derived, which, since it involves nonlinear (integro-)PDEs that are coupled nonlocally in space and time, is significantly harder to solve than in the standard case. Novel numerical methods are developed, effectively combining pseudospectral methods and iterative solvers, to solve such a system efficiently and accurately.
In a next step, this framework is extended so that it can capture and optimize industrially relevant processes, such as brewing and nano-filtration. To do so, extensions to both the DDFT model and the numerical method are made. Firstly, since industrial processes often involve tubes, funnels, channels, or tanks of various shapes, the PDE model itself, as well as the optimization problem, needs to be solved on complicated domains. This is achieved by developing a novel spectral element approach that is compatible with both the PDE solver and the optimal control framework. Secondly, many industrial processes, such as nano-filtration, involve more than one type of particle. Therefore, the DDFT model is extended to describe multiple particle species. Finally, depending on the application of interest, additional physical effects need to be included in the model. In this thesis, to model sedimentation processes in brewing, the model is modified to capture volume exclusion effects.
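The pseudospectral treatment of the convolution terms can be sketched in one dimension: on a periodic grid, the nonlocal interaction term ∫ρ(x′)V(x−x′)dx′ becomes a pointwise product of Fourier coefficients. The Gaussian kernel and cosine density below are illustrative choices, not a model from the thesis:

```python
# Pseudospectral evaluation of a nonlocal DDFT-style interaction term
# on a periodic 1D grid: the convolution integral turns into a product
# in Fourier space. Density and kernel here are illustrative only.
import numpy as np

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
rho = 1.0 + 0.3 * np.cos(x)            # illustrative particle density
d = np.minimum(x, 2 * np.pi - x)       # periodic distance to the origin
V = np.exp(-d ** 2)                    # illustrative interaction kernel

# convolution integral as a pointwise product of Fourier coefficients
conv = np.real(np.fft.ifft(np.fft.fft(rho) * np.fft.fft(V))) * dx

# sanity check against direct O(n^2) quadrature at one grid point
direct0 = sum(rho[k] * V[-k % n] for k in range(n)) * dx
print(abs(conv[0] - direct0) < 1e-10)
```

This O(n log n) evaluation is what makes repeated solves of the coupled nonlocal optimality system tractable, compared with direct O(n²) quadrature at every time step.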
LIPIcs, Volume 261, ICALP 2023, Complete Volume