
    Single-cell time-series analysis of metabolic rhythms in yeast

    The yeast metabolic cycle (YMC) is a biological rhythm in budding yeast (Saccharomyces cerevisiae). It entails oscillations in the concentrations and redox states of intracellular metabolites, oscillations in transcript levels, temporal partitioning of biosynthesis, and, in chemostats, oscillations in oxygen consumption. Most studies of the YMC have been based on chemostat experiments, and it is unclear whether YMCs arise from interactions between cells or are generated independently by each cell. This thesis aims to characterise the YMC in single cells and its response to nutrient and genetic perturbations. Specifically, I use microfluidics to trap and separate yeast cells, then record the time-dependent intensity of flavin autofluorescence, which is a component of the YMC. Single-cell microfluidics produces a large amount of time series data, and the noisy, short time series produced by biological experiments restrict the computational tools that are useful for analysis. I developed a method to filter time series, a machine learning model to classify whether time series are oscillatory, and an autocorrelation method to examine the periodicity of time series data. My experimental results show that yeast cells exhibit oscillations in flavin fluorescence. Specifically, I show that in high-glucose conditions, cells generate flavin oscillations asynchronously within a population, and that these flavin oscillations couple with the cell division cycle. I show that cells can individually reset the phase of their flavin oscillations in response to abrupt nutrient changes, independently of the cell division cycle. I also show that deletion strains generate flavin oscillations whose behaviour differs from that of dissolved oxygen oscillations in chemostat conditions. Finally, I use flux balance analysis to address whether proteomic constraints in cellular metabolism mean that temporal partitioning of biosynthesis is advantageous for the yeast cell, and whether such partitioning explains the timing of the metabolic cycle. My results show that under proteomic constraints, it is advantageous for the cell to synthesise biomass components sequentially, because doing so shortens the timescale of biomass synthesis. However, the advantage of sequential over parallel biosynthesis is smaller when both carbon and nitrogen sources are limiting. This thesis thus confirms the autonomous generation of flavin oscillations and suggests a model in which the YMC responds to nutrient conditions and subsequently entrains the cell division cycle. It also raises the possibility that subpopulations in the culture explain chemostat-based observations of the YMC. Furthermore, this thesis paves the way for using computational methods to analyse large datasets of oscillatory time series, which is useful for various fields of study beyond the YMC.
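
    As an illustration of the autocorrelation approach mentioned above, the following is a minimal sketch (not the thesis code) that estimates the dominant period of a noisy oscillatory signal from the first positive local maximum of its autocorrelation function; the synthetic signal and sampling interval are illustrative assumptions.

```python
import numpy as np

def autocorrelation(x):
    """Normalised autocorrelation of a mean-subtracted 1-D signal."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]

def estimate_period(x, dt=1.0):
    """Dominant period = lag of the first positive local maximum of
    the ACF; very noisy data may need the ACF smoothed first."""
    acf = autocorrelation(x)
    for k in range(1, len(acf) - 1):
        if acf[k - 1] < acf[k] > acf[k + 1] and acf[k] > 0:
            return k * dt
    return None  # no oscillation detected

# toy usage: noisy oscillation with a 50-sample period
t = np.arange(500)
signal = np.sin(2 * np.pi * t / 50) + 0.3 * np.random.randn(t.size)
print(estimate_period(signal))  # roughly 50
```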

    Natural and Technological Hazards in Urban Areas

    Natural hazard events and technological accidents are separate causes of environmental impacts. Natural hazards are physical phenomena that have been active over geological time, whereas technological hazards result from actions or facilities created by humans. In recent times, combined natural and man-made hazards have also been induced. Overpopulation and urban development in areas prone to natural hazards increase the impact of natural disasters worldwide. Additionally, urban areas are frequently characterized by intense industrial activity and rapid, poorly planned growth that threatens the environment and degrades the quality of life. Therefore, proper urban planning is crucial to minimize fatalities and reduce the environmental and economic impacts that accompany both natural and technological hazardous events.

    Machine learning applications in search algorithms for gravitational waves from compact binary mergers

    Gravitational waves from compact binary mergers are now routinely observed by Earth-bound detectors. These observations enable exciting new science, as they have opened a new window to the Universe. However, extracting gravitational-wave signals from the noisy detector data is a challenging problem. The most sensitive search algorithms for compact binary mergers use matched filtering, an algorithm that compares the data with a set of expected template signals. As detectors are upgraded and more sophisticated signal models become available, the number of required templates will increase, which can make some sources computationally prohibitive to search for. The computational cost is of particular concern when low-latency alerts must be issued to maximize the time for electromagnetic follow-up observations. One potential solution for reducing computational requirements that has begun to be explored in the last decade is machine learning. However, different proposed deep learning searches target varying parameter spaces and use metrics that are not always comparable to the existing literature. Consequently, a clear picture of the capabilities of machine learning searches has been sorely missing. In this thesis, we closely examine the sensitivity of various deep learning gravitational-wave search algorithms and introduce new methods to detect signals from binary black hole and binary neutron star mergers at previously untested statistical confidence levels. By using the sensitive distance as our core metric, we allow for a direct comparison of our algorithms to state-of-the-art search pipelines. As part of this thesis, we organized a global mock data challenge to create a benchmark for machine learning search algorithms targeting compact binaries. The tools developed in this thesis are made available to the wider community as open-source software. Our studies show that, depending on the parameter space, deep learning gravitational-wave search algorithms are already competitive with current production search pipelines. We also find that strategies developed for traditional searches can be effectively adapted to their machine learning counterparts. In regions where matched filtering becomes computationally expensive, however, available deep learning algorithms are also limited in their capability: we find reduced sensitivity to long-duration signals compared to the excellent results for short-duration binary black hole signals.
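
    For readers unfamiliar with matched filtering, here is a simplified sketch assuming white Gaussian noise; production pipelines instead work in the frequency domain and whiten the data with the detector noise spectrum. The template, noise level, and injection below are toy assumptions.

```python
import numpy as np

def matched_filter_snr(data, template, noise_sigma=1.0):
    """Correlate a known template against the data at every offset and
    normalise so the output is in units of signal-to-noise ratio.
    Valid only for white Gaussian noise with standard deviation
    noise_sigma; real detector noise is coloured."""
    template = np.asarray(template, dtype=float)
    norm = noise_sigma * np.sqrt(np.dot(template, template))
    return np.correlate(data, template, mode="valid") / norm

# toy usage: hide a scaled frequency-sweeping "chirp" in white noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256)
template = np.sin(2 * np.pi * 30.0 * t**2)  # crude chirp stand-in
data = rng.normal(size=4096)
data[1000:1256] += 5.0 * template / np.sqrt(np.dot(template, template))
snr = matched_filter_snr(data, template)
print(snr.argmax(), snr.max())  # peak SNR of roughly 5 near offset 1000
```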

    Complex systems methods characterizing nonlinear processes in the near-Earth electromagnetic environment: recent advances and open challenges

    Learning from successful applications of methods originating in statistical mechanics, complex systems science, or information theory in one scientific field (e.g., atmospheric physics or climatology) can provide important insights or conceptual ideas for other areas (e.g., space sciences), or even stimulate new research questions and approaches. For instance, the quantification and attribution of dynamical complexity in the output time series of nonlinear dynamical systems is a key challenge across scientific disciplines. Especially in the field of space physics, an early and accurate detection of characteristic dissimilarity between normal and abnormal states (e.g., pre-storm activity vs. magnetic storms) has the potential to vastly improve space weather diagnosis and, consequently, the mitigation of space weather hazards. This review provides a systematic overview of existing nonlinear dynamical systems-based methodologies, along with key results of their previous applications in a space physics context, which particularly illustrate how complementary modern complex systems approaches have recently shaped our understanding of nonlinear magnetospheric variability. The rising number of corresponding studies demonstrates that the multiplicity of nonlinear time series analysis methods developed during the last decades offers great potential for uncovering relevant yet complex processes interlinking different geospace subsystems, variables and spatiotemporal scales.
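
    One concrete example of quantifying dynamical complexity in an output time series is permutation entropy (Bandt and Pompe, 2002); the sketch below is a minimal illustration of the measure and is not tied to any specific study covered in the review.

```python
import math
from itertools import permutations
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Normalised permutation entropy: counts ordinal patterns of
    `order` samples spaced `delay` apart; returns a value in [0, 1],
    low for regular dynamics and near 1 for white noise."""
    x = np.asarray(x, dtype=float)
    counts = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        counts[tuple(np.argsort(window))] += 1
    probs = np.array([c for c in counts.values() if c > 0]) / n
    return -np.sum(probs * np.log(probs)) / math.log(math.factorial(order))

# a regular signal scores well below an irregular one
t = np.linspace(0, 20 * np.pi, 2000)
print(permutation_entropy(np.sin(t)))              # low (regular)
print(permutation_entropy(np.random.randn(2000)))  # near 1 (noise)
```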

    Regional groundwater levels in crystalline aquifers: structural domains, groundwater level monitoring, and factors controlling the response time and variability

    This thesis aims to determine the degree to which fracture networks control the response time and fluctuation of groundwater levels in regional crystalline aquifers, in comparison to topography, sediment deposits, precipitation and snowmelt. In this respect, the compartmentalization of the crystalline aquifer into structural domains is necessary in order to take into account the heterogeneity of the crystalline aquifer with respect to the different fracture networks existing in the rock mass. Field investigations were conducted in the Lanaudière region, Quebec, Canada, where the underlying crystalline rock outcrops in several locations, allowing access to outcrops for fracture sampling. In addition, four unequipped boreholes drilled into the crystalline rock were available for fracture sampling. Typically, fracture sampling involves the collection of multiple fracture samples, which comprise numerous fracture clusters. Grouping fracture samples into structural domains is generally useful for geologists, hydrogeologists, and geomechanicians, as a region of fractured rock is subdivided into sub-regions with similar behavior in terms of their hydromechanical properties. One of the commonly used methods for grouping fracture samples into structural domains is Mahtab and Yegulalp's method, which considers the orientation of fracture clusters but ignores several fracture parameters, such as fracture spacing, aperture, and persistence, that are important for fluid circulation in the rock mass. In this thesis, we propose a new cluster-based similarity method that considers cluster orientation as well as aperture, persistence and spacing. In addition, a method for compartmentalizing a given study area into structural domains using Voronoi diagrams is also proposed. The proposed method is more suitable than the previous one for applications in hydrogeology and rock mechanics, especially for regional studies of fluid flow in the rock mass. The study of the response time and variability of groundwater levels requires a groundwater level monitoring network. The inclusion of private boreholes in these monitoring networks can provide a cost-effective means of obtaining a larger data set; however, their use is limited by the fact that frequent pumping generates outliers in the recorded time series. In this thesis, a slope criterion is applied to identify and remove outliers from groundwater level time series from exploited private boreholes. Nevertheless, the removal of outliers creates a missing value problem, which biases the subsequent time series analysis. Thus, 14 imputation methods were used to replace missing values. The proposed approach is applied to groundwater level time series from a monitoring network of 20 boreholes in the Lanaudière region, Quebec, Canada. The slope criterion is shown to be very effective in identifying outliers in exploited private boreholes. Among the characteristics of the missing value pattern, the gap size and gap position in the time series are the most important parameters affecting the performance of the imputation methods. Among the imputation methods tested, linear and Stineman interpolations and Kalman filtering were the most effective. This thesis demonstrates that privately operated boreholes can be used for groundwater monitoring by removing outliers and imputing missing values. At local and regional scales, groundwater level is controlled by several factors.
The most commonly studied factors are climatic, geologic and geomorphologic controls on groundwater level variability and response time, and in many cases only one controlling factor is considered in the analysis. However, many other factors can affect groundwater level variability and response time, such as sediment deposit properties and fracture network characteristics in crystalline aquifers. In this study, a more inclusive approach is used that considers climatic, geomorphological, and fracture network parameters as potential controlling factors. A total of 18 parameters were analyzed for interrelationships, as each controlling factor is described by several parameters. The study analyzed a two-year record of groundwater levels in 20 boreholes drilled into the crystalline rock of the Canadian Shield in the Lanaudière region, Quebec, Canada. Factors associated with geomorphology and the fracture network are related to groundwater level variability and its response time. Of the various parameters analyzed in each controlling factor, sediment thickness and local slope (geomorphological factor), as well as average persistence and equivalent hydraulic conductivity (fracture network factor), are most closely related to groundwater level variability and response time. However, further studies are needed to elucidate the physical processes behind certain interrelationships between fracture network parameters and groundwater level variability parameters.
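
    A minimal sketch of the slope-criterion idea described above, assuming a pandas time series and an illustrative slope threshold; the thesis's actual criterion and its Stineman and Kalman imputation variants are not reproduced here.

```python
import pandas as pd

def remove_pumping_outliers(series, max_slope):
    """Slope criterion: mask samples whose absolute change from the
    previous sample exceeds max_slope (level units per time step),
    as expected from pumping rather than natural recharge/discharge.
    Note the recovery sample right after a spike is masked too."""
    slope = series.diff().abs().fillna(0.0)
    return series.where(slope <= max_slope)

# hypothetical hourly levels (m) with two pumping-drawdown spikes
idx = pd.date_range("2021-01-01", periods=8, freq="h")
levels = pd.Series([10.0, 10.1, 7.2, 10.2, 10.3, 6.9, 10.4, 10.5],
                   index=idx)
cleaned = remove_pumping_outliers(levels, max_slope=0.5)

# removing outliers leaves gaps; impute them (the thesis found linear
# and Stineman interpolation and Kalman filtering most effective)
imputed = cleaned.interpolate(method="linear")
print(imputed)
```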

    Machine learning in solar physics

    The application of machine learning in solar physics has the potential to greatly enhance our understanding of the complex processes that take place in the atmosphere of the Sun. By using techniques such as deep learning, we are now in a position to analyze large amounts of data from solar observations and identify patterns and trends that may not have been apparent using traditional methods. This can help us improve our understanding of explosive events like solar flares, which can have a strong effect on the Earth's environment; predicting such hazardous events is crucial for our technological society. Machine learning can also improve our understanding of the inner workings of the Sun itself by allowing us to go deeper into the data and to propose more complex models to explain them. Additionally, the use of machine learning can help to automate the analysis of solar data, reducing the need for manual labor and increasing the efficiency of research in this field. Comment: 100 pages, 13 figures, 286 references; accepted for publication as a Living Review in Solar Physics (LRSP).

    Seamless Multimodal Biometrics for Continuous Personalised Wellbeing Monitoring

    Artificially intelligent perception is increasingly present in the lives of every one of us. Vehicles are no exception, (...) In the near future, pattern recognition will have an even stronger role in vehicles, as self-driving cars will require automated ways to understand what is happening around (and within) them and act accordingly. (...) This doctoral work focused on advancing in-vehicle sensing through research on novel computer vision and pattern recognition methodologies for both biometrics and wellbeing monitoring. The main focus has been on electrocardiogram (ECG) biometrics, a trait well known for its potential for seamless driver monitoring. Major efforts were devoted to achieving improved performance in identification and identity verification in off-the-person scenarios, which are well known for increased noise and variability. Here, end-to-end deep learning ECG biometric solutions were proposed, and important topics were addressed such as cross-database and long-term performance, waveform relevance through explainability, and interlead conversion. Face biometrics, a natural complement to the ECG in seamless unconstrained scenarios, was also studied in this work. The open challenges of masked face recognition and interpretability in biometrics were tackled in an effort to evolve towards algorithms that are more transparent, trustworthy, and robust to significant occlusions. Within the topic of wellbeing monitoring, improved solutions to multimodal emotion recognition in groups of people and activity/violence recognition in in-vehicle scenarios were proposed. Finally, we also proposed a novel way to learn template security within end-to-end models, dismissing additional separate encryption processes, and a self-supervised learning approach tailored to sequential data, in order to ensure data security and optimal performance. (...) Comment: Doctoral thesis presented and approved on the 21st of December 2022 to the University of Porto.
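
    As a rough illustration of what "end-to-end" means here (raw signal in, identity out, with no hand-crafted features), a minimal 1-D convolutional network mapping raw ECG segments to identity logits might look like the sketch below; this is not the architecture proposed in the thesis, and all layer sizes are hypothetical.

```python
import torch
import torch.nn as nn

class ECGIdentifier(nn.Module):
    """Toy end-to-end 1-D CNN: raw single-lead ECG segment in,
    identity logits out; hypothetical layer sizes throughout."""
    def __init__(self, n_subjects, segment_len=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global pooling over time
        )
        self.classifier = nn.Linear(32, n_subjects)

    def forward(self, x):                   # x: (batch, 1, segment_len)
        z = self.features(x).squeeze(-1)    # (batch, 32) embedding
        return self.classifier(z)           # identity logits

# toy usage: a batch of four raw ECG segments, twenty enrolled subjects
model = ECGIdentifier(n_subjects=20)
logits = model(torch.randn(4, 1, 1000))
print(logits.shape)  # torch.Size([4, 20])
```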

    Time series analysis and forecasting of gold (XAUUSD) prices using machine learning

    Every day, a large volume of retail and commercial banking trades takes place, involving around 11 billion in gold. To make a profit in this volatile market, we need to develop different tools to predict and analyze future prices in order to make suitable decisions. In my research, I used historical gold price data obtained from the Dukascopy Swiss banking group and applied AI tools such as LSTM and ARIMA to predict future prices.
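
    A minimal sketch of the ARIMA side of such a forecast, using statsmodels on synthetic prices; the order (1, 1, 1), the data, and the horizon are illustrative assumptions, not the study's configuration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# hypothetical daily XAUUSD closing prices standing in for the
# historical Dukascopy data used in the study
rng = np.random.default_rng(1)
prices = pd.Series(
    1800 + np.cumsum(rng.normal(0, 5, 500)),  # random-walk-like series
    index=pd.date_range("2022-01-01", periods=500, freq="D"),
)

# a simple ARIMA(1, 1, 1) fit; order selection (e.g. via AIC)
# is omitted here for brevity
model = ARIMA(prices, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=30)  # 30-day-ahead point forecast
print(forecast.tail())
```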

    Probabilistic Solar Proxy Forecasting with Neural Network Ensembles

    Space weather indices are commonly used to drive forecasts of thermosphere density, which directly affects objects in low-Earth orbit (LEO) through atmospheric drag. One of the most commonly used space weather proxies, F10.7 (the 10.7 cm solar radio flux), correlates well with solar extreme ultraviolet (EUV) energy deposition into the thermosphere. Currently, the USAF contracts Space Environment Technologies (SET), which uses a linear algorithm to forecast F10.7. In this work, we introduce methods using neural network ensembles with multi-layer perceptrons (MLPs) and long short-term memory (LSTM) networks to improve on the SET predictions. We make predictions only from historical F10.7 values, but also investigate data manipulation methods (backwards averaging and lookback) as well as multi-step and dynamic forecasting to improve the forecasts. This work shows an improvement over the baseline when using ensemble methods. The best models found in this work are ensemble approaches using multi-step or a combination of multi-step and dynamic predictions. Nearly all approaches offer an improvement, with the best models improving by between 45 and 55% in relative MSE. Other relative error metrics were also shown to improve greatly when ensemble methods were used. We were further able to leverage the ensemble approach to provide a distribution of predicted values, allowing an investigation into forecast uncertainty. Our work found models that produced less biased predictions at elevated and high solar activity levels. Uncertainty was also investigated through the use of a calibration error score (CES) metric; our best ensemble reached a CES similar to that of other work. Comment: 23 pages, 12 figures, 5 tables
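
    The ensemble idea can be sketched as follows: several small MLPs trained on bootstrap resamples of windowed F10.7 history, with the spread of their one-step predictions serving as a simple forecast distribution. This is a hedged illustration rather than the paper's models; the synthetic series, the 27-day lookback (one solar rotation), and the ensemble size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(series, lookback):
    """Frame a 1-D series as (lookback window -> next value) samples."""
    X = np.array([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

# hypothetical F10.7 history (solar flux units) standing in for the
# observed record
rng = np.random.default_rng(0)
days = np.arange(2000)
f107 = 120 + 40 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 5, 2000)
X, y = make_windows(f107, lookback=27)

# ensemble of MLPs trained on bootstrap resamples; the spread of the
# members' predictions gives a simple forecast distribution
preds = []
for seed in range(10):
    idx = rng.integers(0, len(X), len(X))  # bootstrap resample
    mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                       random_state=seed)
    mlp.fit(X[idx], y[idx])
    preds.append(mlp.predict(X[-1:]))  # one-step-ahead forecast
preds = np.array(preds).ravel()
print(preds.mean(), preds.std())  # point forecast and its uncertainty
```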