
    The Hyperdimensional Transform for Distributional Modelling, Regression and Classification

    Hyperdimensional computing (HDC) is an increasingly popular computing paradigm with immense potential for future intelligent applications. Although the main ideas already took form in the 1990s, HDC recently gained significant attention, especially in the field of machine learning and data science. In addition to efficiency, interoperability, and explainability, HDC offers attractive properties for generalization, as it can be seen as an attempt to combine connectionist ideas from neural networks with symbolic aspects. In recent work, we introduced the hyperdimensional transform, revealing deep theoretical foundations for representing functions and distributions as high-dimensional holographic vectors. Here, we present the power of the hyperdimensional transform to a broad data science audience. We use the hyperdimensional transform as a theoretical basis and provide insight into state-of-the-art HDC approaches for machine learning. We show how existing algorithms can be modified and how this transform can lead to a novel, well-founded toolbox. Beyond the standard regression and classification tasks of machine learning, our discussion includes various aspects of statistical modelling, such as representation, learning and deconvolving distributions, sampling, Bayesian inference, and uncertainty estimation.
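    To make the basic encoding ideas behind HDC concrete, the following minimal sketch (not taken from the paper; the dimensionality, item names, and bipolar encoding are illustrative assumptions) shows random hypervectors, binding, bundling, and similarity-based retrieval.

```python
# Minimal HDC sketch (illustrative, not the paper's implementation): random
# bipolar hypervectors, binding (*), bundling (+), and cosine-similarity recall.
import numpy as np

D = 10_000                                    # hypervector dimensionality (assumed)
rng = np.random.default_rng(0)

def item():
    return rng.choice([-1, 1], size=D)        # random bipolar hypervector

colour, shape = item(), item()                # role vectors
red, circle, blue = item(), item(), item()    # filler vectors

# Holographic record: bind each role to its filler, then bundle the pairs
record = colour * red + shape * circle

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Unbinding with the role vector approximately recovers the filler
probe = record * colour                       # ~ red + noise
print(f"sim(probe, red)  = {cosine(probe, red):.2f}")    # high (~0.7)
print(f"sim(probe, blue) = {cosine(probe, blue):.2f}")   # near zero
```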

    In a squeeze: Epibiosis may affect the distribution of kelp forests

    The processes limiting the population recovery of the kelp Saccharina latissima after recent large‐scale loss from the south coast of Norway are poorly understood. Previous investigations do, however, suggest that the impacts of biotic interactions (epibiosis and competition) and increased water turbidity are important. We investigated the depth‐related patterns of growth, epibiosis, and mortality in two sample populations of kelp, from the south and the southwest coast of Norway. The investigations were performed over a period of seven months, in a crossed translocational study, where kelps were mounted on rigs at six depths (1, 3, 6, 9, 15, and 24 m). In a second experiment, the amounts of light blocked by different epibiont layers growing on the kelp frond were investigated. While growth decreased with depth in spring and summer, the kelp grew faster at 15 m than at shallower depths in fall. Survival was low both in shallow water and below 15 m depth. Epibionts covered the kelp growing at depths from 1 to 9 m, and the laboratory study showed that the coverage may have deprived the individuals of as much as 90% of the available light. Although the depth‐related results we present apply, in the strictest sense, only to kelp translocated on rigs, we argue that the relative patterns are relevant for natural populations. Growth and survival of S. latissima are likely to be reduced by heavy loads of epibionts, while depths where epibionts are sparse may be close to the lower limit of the kelp's depth distribution along the south coast of Norway. This suggests that a vertical squeeze, or narrowing of the distribution range of kelp forests, may be occurring in Norway.

    The GARCH-EVT-Copula model and simulation in scenario-based asset allocation

    Financial market integration, in particular portfolio allocations from advanced economies to South African markets, continues to strengthen volatility linkages and quicken volatility transmissions between participating markets. Largely as a result, South African portfolios are net recipients of returns and volatility shocks emanating from major world markets. In light of these and other sources of risk, this dissertation proposes a methodology to improve risk management systems in funds by building a contemporary asset allocation framework that offers practitioners an opportunity to explicitly model combinations of hypothesised global risks and the effects on their investments. The framework models portfolio return variables and their key risk driver variables separately and then joins them to model their combined dependence structure. The separate modelling of univariate and multivariate (MV) components offers the benefit of capturing the data-generating processes with improved accuracy. Univariate variables were modelled using ARMA-GARCH-family structures paired with a variety of skewed and leptokurtic conditional distributions. Model residuals were fitted using the Peaks-over-Threshold method from Extreme Value Theory for the tails and a non-parametric kernel density for the interior, forming a complete semi-parametric distribution (SPD) for each variable. Asset and risk factor returns were then combined and their dependence structure jointly modelled with a MV Student t copula. Finally, the SPD margins and Student t copula were used to construct a MV meta t distribution. Monte Carlo simulations were generated from the fitted MV meta t distribution, on which an out-of-sample test was conducted. The 2014-to-2015 horizon served as a proxy for an out-of-sample, forward-looking scenario for a set of key risk factors against which a hypothetical, diversified portfolio was optimised. Traditional mean-variance and contemporary mean-CVaR optimisation techniques were used and their results compared. As an addendum, performance over the in-sample 2008 financial crisis was reported.
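    A minimal sketch of the modelling pipeline described above, under stated assumptions: synthetic return data, a plain AR(1)-GARCH(1,1) filter via the arch package, a peaks-over-threshold fit for the upper tail only, and a Student t copula with a fixed (not estimated) degrees-of-freedom parameter. It illustrates the approach rather than reproducing the dissertation's implementation.

```python
# Illustrative GARCH-EVT-copula sketch on synthetic data; assumes the `arch`
# and `scipy` packages. Not the dissertation's code.
import numpy as np
from arch import arch_model
from scipy import stats

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=(1000, 2)) * 0.01   # two synthetic return series

# Step 1: univariate AR(1)-GARCH(1,1) filters with skew-t errors
std_resid = []
for j in range(returns.shape[1]):
    res = arch_model(100 * returns[:, j], mean="AR", lags=1,
                     vol="GARCH", p=1, q=1, dist="skewt").fit(disp="off")
    std_resid.append(np.asarray(res.std_resid)[1:])      # drop the AR(1) start-up value
z = np.column_stack(std_resid)

# Step 2: semi-parametric margins -- empirical CDF in the interior,
# generalised Pareto (peaks-over-threshold) in the upper tail (lower tail analogous)
def semiparametric_cdf(x, tail_frac=0.10):
    u = stats.rankdata(x) / (len(x) + 1.0)
    hi = np.quantile(x, 1 - tail_frac)
    c, _, scale = stats.genpareto.fit(x[x > hi] - hi, floc=0)
    upper = x > hi
    u[upper] = 1 - tail_frac * stats.genpareto.sf(x[upper] - hi, c, loc=0, scale=scale)
    return np.clip(u, 1e-6, 1 - 1e-6)

U = np.column_stack([semiparametric_cdf(z[:, j]) for j in range(z.shape[1])])

# Step 3: Student t copula -- map uniforms to t scores and estimate the correlation matrix
nu = 6.0                                                  # copula degrees of freedom (assumed fixed)
R = np.corrcoef(stats.t.ppf(U, df=nu), rowvar=False)

# Step 4: Monte Carlo scenarios from the fitted meta-t dependence structure
sims = stats.multivariate_t.rvs(loc=np.zeros(2), shape=R, df=nu, size=10_000, random_state=1)
U_sim = stats.t.cdf(sims, df=nu)    # uniform scenarios; map back through the SPD quantiles in practice
print("simulated scenario matrix:", U_sim.shape)
```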
    The final Objective (7) addressed management and conservation strategies for the NMBM. The NMBM wetland database that was produced during this research is currently being used by the Municipality and will be added to the latest National Wetland Map. From the database, and tools developed in this research, approximately 90 wetlands have been identified as being highly vulnerable due to anthropogenic and environmental factors (Chapter 6) and should be earmarked as key conservation priority areas. Based on field experience and data collected, this study has also made conservation and rehabilitation recommendations for eight locations. Recommendations are also provided for six more wetland systems (or regions) that should be prioritised for further research, as these systems lack fundamental information on where the threat of anthropogenic activities affecting them is greatest. This study has made a significant contribution to understanding the underlying geomorphological processes in depressions, seeps and wetland flats. The desktop mapping component of this study illustrated the dominance of wetlands in the wetter parts of the Municipality. Perched wetland systems were identified in the field, on shallow bedrock, calcrete or clay. The prevalence of these perches in depressions, seeps and wetland flats also highlighted the importance of rainfall in driving wetland formation in the NMBM, by allowing water to pool on these perches. These perches are likely to be a key factor in the high number of small, ephemeral wetlands that were observed in the study area, compared to other semi-arid regions. Therefore, this research highlights the value of multi-faceted and multi-scalar wetland research and how similar approaches should be used in future research. The approach used, along with the tools and methods developed in this study, has facilitated the establishment of priority areas for conservation and management within the NMBM. Furthermore, the research approach has revealed emergent wetland properties that are only apparent when looking at different spatial scales. This research has highlighted the complex biological and geomorphological interactions between wetlands that operate over various spatial and temporal scales. As such, wetland management should occur across a wetland complex, rather than at individual sites, to account for these multi-scalar influences.

    Methods for Optimization and Regularization of Generative Models

    This thesis studies the problem of regularizing and optimizing generative models, often using insights and techniques from kernel methods. The work proceeds in three main themes. Conditional score estimation. We propose a method for estimating conditional densities based on a rich class of RKHS exponential family models. The algorithm works by solving a convex quadratic problem for fitting the gradient of the log density, the score, thus avoiding the need for estimating the normalizing constant. We show the resulting estimator to be consistent and provide convergence rates when the model is well-specified. Structuring and regularizing implicit generative models. In a first contribution, we introduce a method for learning Generative Adversarial Networks, a class of Implicit Generative Models, using a parametric family of Maximum Mean Discrepancies (MMD). We show that controlling the gradient of the critic function defining the MMD is vital for having a sensible loss function. Moreover, we devise a method to enforce exact, analytical gradient constraints. As a second contribution, we introduce and study a new generative model suited for data with low intrinsic dimension embedded in a high dimensional space. This model combines two components: an implicit model, which can learn the low-dimensional support of data, and an energy function, to refine the probability mass by importance sampling on the support of the implicit model. We further introduce algorithms for learning such a hybrid model and for efficient sampling. Optimizing implicit generative models. We first study the Wasserstein gradient flow of the Maximum Mean Discrepancy in a non-parametric setting and provide smoothness conditions on the trajectory of the flow to ensure global convergence. We identify cases where these conditions do not hold and propose a new algorithm based on noise injection to mitigate this problem. In a second contribution, we consider the Wasserstein gradient flow of generic loss functionals in a parametric setting. This flow is invariant to the model's parameterization, just like the Fisher gradient flows in information geometry. It has the additional benefit of being well defined even for models with varying supports, which makes it particularly well suited for implicit generative models. We then introduce a general framework for approximating the Wasserstein natural gradient by leveraging a dual formulation of the Wasserstein pseudo-Riemannian metric that we restrict to a Reproducing Kernel Hilbert Space. The resulting estimator is scalable and provably consistent as it relies on Nyström methods.
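    As a hedged illustration of the divergence at the core of the MMD-based contributions above, the sketch below computes a (biased) empirical squared MMD with a Gaussian kernel on synthetic samples; the bandwidth choice and data are assumptions, and the unbiased U-statistic estimator is omitted for brevity.

```python
# Minimal sketch: squared Maximum Mean Discrepancy (MMD) with a Gaussian kernel.
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)), evaluated pairwise
    d2 = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-d2 / (2 * bandwidth**2))

def mmd2(x, y, bandwidth=1.0):
    # Biased V-statistic estimate of MMD^2(P, Q) from samples x ~ P, y ~ Q
    kxx = gaussian_kernel(x, x, bandwidth).mean()
    kyy = gaussian_kernel(y, y, bandwidth).mean()
    kxy = gaussian_kernel(x, y, bandwidth).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, size=(500, 2))      # samples from the "data" distribution
q = rng.normal(0.5, 1.0, size=(500, 2))      # samples from the "model" distribution
print(f"MMD^2 estimate: {mmd2(p, q):.4f}")
```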

    Development of Protocols for Metabolomics in Biomedical Research using Chemometrics

    Metabolomics is a rapidly growing research field. It aims for quantification of all the metabolites in a biological sample such as plasma, saliva, cerebrospinal fluid or cells. Because the metabolite levels in a biological sample are the end result of the regulatory processes in cells, metabolomics is a very powerful approach for characterisation of phenotypes. Metabolomics has been used to find disease biomarkers, to investigate the influence of heavy metals on metabolism, and to elucidate gene function. However, analysis of the complete metabolome puts high demands on the methods used. For instance, the methods should be unbiased to accurately depict the in vivo status in the cell. Furthermore, the methods must have very high resolution and sensitivity to allow detection of all metabolites. To meet these demanding goals, the protocols used in metabolomics need to be thoroughly optimised. The amount of information contained in the metabolome is immense. Consequently, the data set collected from a metabolomics study is very large. To extract the relevant information from such large sets of data, efficient methods are needed both to plan experiments and to convert the data to useful information. For this task, chemometrics is an ideal approach as it allows efficient experimental planning and multivariate data analysis. The experimental planning is sometimes referred to as statistical experimental design or design of experiments. It aims to systematically and simultaneously vary experimental factors in a structured manner. Hence, fewer experiments are generally needed to efficiently map how the system is affected by prevailing factors. The multivariate data analysis employs powerful projection and regression methods to find patterns in data, create system models and classify data. Hence, chemometrics provides a framework for efficient experimental design and an efficient approach for information retrieval. In this thesis, two thorough developments of metabolomics protocols and three metabolomics investigations, relevant to metabolic regulation in diabetes patients and insulin-producing cells, are presented. The design of experiments approach and multivariate data analysis were applied. The developed protocols were optimised and validated for the analysis of human blood plasma and adherent cell cultures, respectively, and included optimisation from sample preparation to analysis with gas chromatography/mass spectrometry. The first of the metabolomics studies aimed to find biomarkers reflecting metabolic regulation during an oral glucose tolerance test in humans to aid in the diagnosis of diabetes. The second study was performed on clonal β-cells and aimed to find metabolic regulation coupled to the amplifying pathway of insulin secretion. The last study aimed to identify metabolic dysregulation in clonal β-cells growing under lipotoxic and glucotoxic conditions, respectively. In all studies, metabolomics extended and deepened the understanding of metabolic regulation in cells and patients. As such, metabolomics will help to find explanations for metabolic diseases such as diabetes.
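    As a hedged illustration of the chemometric toolbox mentioned above (design of experiments plus multivariate projection), the sketch below builds a two-level full-factorial design and projects synthetic metabolite profiles with PCA; the factor names and data are assumptions, not from the thesis.

```python
# Illustrative chemometrics sketch: a two-level full-factorial design and a
# PCA projection of synthetic metabolite data (factor names are assumed).
import itertools
import numpy as np
from sklearn.decomposition import PCA

# Design of experiments: every combination of three two-level factors (coded -1/+1)
factors = {"extraction_time": [-1, 1], "solvent_ratio": [-1, 1], "derivatisation_temp": [-1, 1]}
design = np.array(list(itertools.product(*factors.values())))
print(f"{design.shape[0]} runs for {len(factors)} factors")   # 2^3 = 8 runs

# Multivariate data analysis: project simulated metabolite profiles onto
# their first two principal components
rng = np.random.default_rng(1)
metabolite_matrix = rng.normal(size=(8, 50))      # 8 samples x 50 metabolites
scores = PCA(n_components=2).fit_transform(metabolite_matrix)
print(scores.shape)                                # (8, 2) score matrix
```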

    THE EFFECTS OF THE EFFORTFUL SWALLOW MANEUVER ON HYOLARYNGEAL MOVEMENT AND TONGUE-TO-PALATE PRESSURE

    Purpose: The purpose of this study was to investigate the effects of the effortful swallow maneuver under two different instructions on tongue-to-palate pressure and hyolaryngeal displacement in healthy adults. Studying typical kinematic parameters and pressure generation in healthy individuals is critical for differentiating normal from pathological patterns and for determining swallowing parameters that can be targeted to optimize and individualize treatment plans for people with swallowing disorders. The primary objectives in this study were: (1) to determine the physiological effects of two different types of effortful swallows on anterior and posterior tongue pressure generation, hyoid displacement, and hyoid-larynx approximation in healthy adults, (2) to determine age-related differences in tongue-to-palate pressure and hyolaryngeal displacement in healthy adults, and (3) to determine the association between perceived effort used to swallow and tongue pressure within swallowing conditions. Method: Forty healthy adults (20 younger, 20 older) participated in this study. All participants were in general good health, were screened for normal oral structures, function, and swallowing skills, had normal tongue strength, were eating a normal diet, and had normal auditory comprehension skills. Experimental procedures included simultaneous data acquisition of tongue pressure, submental muscle activity, and hyolaryngeal movement during normal saliva swallows and effortful saliva swallows under two different instructions (tongue emphasis and neck squeezing). Measures of tongue pressure were obtained using the Iowa Oral Performance Instrument. Submental muscle activity during swallows was assessed using surface electromyography. Hyoid excursion and hyoid-larynx approximation were obtained using ultrasonography. All outcome measures were scaled to account for differences between participants and they reflected activity during swallowing. Moreover, participants rated their perceived effort used to swallow with a visual analog scale. Results: Significant tongue pressure differences were observed between swallowing conditions and tongue regions. The effortful swallows performed with tongue emphasis (EFSst) and pharyngeal squeezing (EFSsp) produced greater tongue-to-palate pressures than normal swallows (NSs). Additionally, posterior tongue pressures were greater than pressures generated in the anterior tongue region during NSs and EFSst. Hyolaryngeal measures were also greater during EFSst and EFSsp than NSs. Significant differences were found between the two types of effortful swallows in tongue pressure and hyoid displacement measurements. Overall, EFSst produced greater changes in these physiological measures than EFSsp. Significant age-related differences were only found in hyoid-larynx approximation during EFSst. Moderate correlations were identified between tongue pressure and hyoid displacement during NSs and EFSst and between tongue pressure and hyoid-larynx approximation during NSs and EFSst. Results also showed that participants perceived greater effort used to swallow during EFSst and EFSsp than NSs. Finally, there was a significant, moderate correlation between perceived swallowing effort and objectively measured tongue-to-palate pressure during NSs and EFSst. Conclusions: The effortful swallow maneuver increases tongue-to-palate pressure and hyolaryngeal excursion in healthy adults across the age span. 
Additionally, different instructions for the effortful swallow affect those physiological measures. These findings have the potential to guide treatment decisions when recommending and training the effortful swallow maneuver. It may be helpful for clinicians to individualize and determine the optimal effortful swallow instructions for each patient based on their physiological swallowing impairments.

    Dry granulation via roller compaction: investigation on scale up strategies integrating process parameters and critical material attributes

    Granulation processes for solid oral dosage forms are commonly used in the pharmaceutical industry to enhance the quality of the final product, i.e. tablets. Today, roller compaction is one of the most common granulation techniques for solid oral dosage forms as it provides advantages such as simple operation due to integrated process control mechanisms, suitability for water- or heat-sensitive APIs, and the opportunity for implementation in a continuous manufacturing process. Although roller compaction has been intensively investigated, the impact of upscaling from a small to a larger roller compactor, or vice versa, is not fully understood. To address this knowledge gap, this thesis investigated the effect of a scale up on the quality attributes of intermediate and final products. In this context, the controversially discussed topic of reduced tabletability of roller compacted granules, caused by work hardening phenomena, the particle size enlargement effect, granule porosity and lubricant sensitivity, was investigated. Two formulations, one predominantly plastic deforming and the other predominantly brittle deforming, were used. Both had been previously characterised with respect to their compressibility, tabletability and compactibility, and were processed at both scales to differentiate between material- and scale-dependent effects on the intermediate and final product. Finally, a successful scale up strategy was developed to achieve the same product quality for all scales. The solid fraction of the ribbons is well known as a key intermediate critical quality attribute for downstream processing in a roller compaction process. Different established analytical methods were compared for the measurement of the solid fraction of ribbons. The GeoPycnometer method (volume displacement) proved to be the most reliable and most robust method. Subsequently, both formulations were dry granulated at both scales with equal process settings. A higher solid fraction of the ribbons was obtained for both formulations at the larger scale. For the predominantly plastic deforming formulation, the particle size distribution of the granules was similar at both scales, yet the tablets produced at the large scale showed a lower tensile strength, mainly driven by the work hardening effect and sensitivity towards lubricant. The increased solid fraction of the ribbon produced at the large scale compared to the small scale correlated with a lower tensile strength of the tablets. In contrast, negligible differences in the tensile strength of the tablets between the two scales were observed for the predominantly brittle deforming formulation, although the particle size distribution of the granules differed at the higher specific compaction forces of the large scale. This was driven by the impact of the brittle deforming component, which enhances the fracturing behaviour of the granules and results in negligible susceptibility to work hardening, lubricant and the particle size enlargement effect. In conclusion, even though differences existed between ribbons produced at both scales, these could be balanced if the formulation contains a high proportion of a brittle component. This strategy enhances the robustness of the scalability of the process and of the final product quality. Previously it was demonstrated that a different solid fraction of the ribbon resulted in a different tensile strength of the tablets between scales for the predominantly plastic formulation. 
This formulation, however, is commonly used to counteract the main disadvantage of roller compaction: the reduced tabletability of granules (loss of tablet tensile strength). To account for this, a new approach (Scale Model) was developed for the predominantly plastic formulation to achieve the same solid fraction of the ribbon at both scales. The same solid fraction at both scales resulted in the same granule porosity, compressibility and tablet tensile strength, although a different particle size distribution of the granules was obtained. This demonstrated that the particle size distribution of the granules should not be considered the main intermediate quality attribute for a successful scale up of a roller compaction process, because it is the porosity and the compressibility of the granules that define the microstructure of a tablet during tableting and, subsequently, the resulting tensile strength of the tablets. The Scale Model approach provided a practicable solution for the pharmaceutical industry to scale the process from small development batches to commercial batches and still achieve equal quality of the tablets. To investigate the higher solid fraction observed at the large scale at equal process settings, a new NIR-based analytical method was developed to measure the solid fraction distribution along the roll width. It was possible to predict the solid fraction of unknown samples from their NIR spectra, with a considerable reduction in analysis time compared to the GeoPycnometer method, which measures the “total” solid fraction of the ribbon. The effect of the cheek plates (lower solid fraction at the edges) decreased with increasing distance from the cheek plates, which was especially the case for the larger scale with its broader roll width. This led to a higher “total” solid fraction of the ribbons produced at the large scale compared to the small scale at equal process settings. These results explained the previously observed different quality attributes of intermediate and final products. The proposed scale up approach showed that the differences in the resulting granules and tablets between scales can be balanced for a predominantly plastic deforming formulation through adaptation of the specific compaction force. Thus, adapting the specific compaction force based on measurements of the “total” solid fraction (GeoPycnometer) is a suitable scale up strategy for a roller compaction process. Moreover, the solid fraction of a tablet (compressibility) was an important factor, which motivated the development of theoretical models to predict the solid fraction of unknown powder mixtures based on single-component compression analysis. A newly developed theoretical Percolation model and a modified Kawakita model were evaluated. An exponential model was added to elucidate whether the two parametrised models with a theoretical background are superior in terms of solid fraction predictability compared to a model without parametrised variables. Four mixtures, with varying fractions of a plastic and a brittle deforming component, were compressed over a wide pressure range. Based on single compression analysis of the pure excipients and application of these models, it was possible to predict the solid fraction of all mixtures. The Kawakita model showed overall superior prediction accuracy, whereas the Percolation model resulted in the best fit for mixtures containing the plastic deforming component in the range of 48%–72%. 
Both models were in good agreement, with residuals below 3%. The prediction could serve as systematic guidance for the formulator in selecting appropriate excipients, depending on the active pharmaceutical ingredient, to build quality into the drug product according to the Quality by Design approach. In summary, this thesis provides profound new knowledge and appropriate guidance for the scale up of a roller compaction process. An effect of a scale up of a roller compaction process on the quality attributes of intermediate and final products was demonstrated. This effect can be balanced by applying the proposed scale up strategy or by diminishing the formulation's susceptibility to scale-dependent effects through an increased proportion of a predominantly brittle deforming component in the formulation.
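    As a hedged illustration of the compression modelling step described above, the sketch below fits the classical Kawakita equation, C = abP / (1 + bP), to synthetic pressure/volume-reduction data; the parameter values and data are assumptions, not results from the thesis.

```python
# Illustrative Kawakita fit on synthetic compression data (not thesis data).
import numpy as np
from scipy.optimize import curve_fit

def kawakita(P, a, b):
    # C = degree of volume reduction at compression pressure P (MPa)
    return a * b * P / (1.0 + b * P)

pressure = np.array([25, 50, 100, 150, 200, 250, 300], dtype=float)     # MPa
true_a, true_b = 0.62, 0.05                                             # assumed "true" values
rng = np.random.default_rng(0)
volume_reduction = kawakita(pressure, true_a, true_b) + rng.normal(0, 0.005, pressure.size)

(a_fit, b_fit), _ = curve_fit(kawakita, pressure, volume_reduction, p0=(0.5, 0.01))
print(f"a = {a_fit:.3f} (maximum volume reduction), 1/b = {1 / b_fit:.1f} MPa")
```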

    A scientific framework for evaluating coral reef resilience to climate change


    Global patterns of international fisheries conflict

    International conflict over fishery resources is a growing security concern. Increasing incidence of conflict, diminishing fishery resources and climate impacts on marine systems have alerted the international community to the potential of fisheries conflict as a security threat. However, we lack knowledge on conflict events over time, as well as the contextual variables shaping conflict. To address that gap, Jessica's thesis provides a detailed understanding of the patterns of international fisheries conflict, including its frequency, type, geography, temporal dimensions, immediate drivers and underlying conditions (1974–2016). Insight into these patterns can aid the development of improved conflict management strategies and policies to ensure future ocean security.

    Developmental differences in the prospective organisation of goal-directed movement between children with autism and typically developing children: a smart tablet serious game study

    Movement is prospective. It structures self-generated engagement with objects and social partners and is fundamental to children's learning and development. In autistic children, previous reports of differences in movement kinematics compared to neurotypical peers suggest that its prospective organisation might be disrupted. Here, we employed a smart tablet serious game paradigm to assess differences in the feedforward and feedback mechanisms of prospective action organisation between autistic and neurotypical preschool children. We analysed 3926 goal-directed finger movements made during smart-tablet ecological gameplay, from 28 children with Childhood Autism (ICD-10; ASD) and 43 neurotypical children (TD), aged 3–6 years. Using linear and generalised linear mixed-effect models, we found the ASD group executed movements with longer movement time (MT) and time to peak velocity (TTPV), lower peak velocity (PV), with PV less likely to occur in the first movement unit (MU) and with a greater number of movement units after peak velocity (MU-APV). Interestingly, compared to the TD group, the ASD group showed smaller increases in PV, TTPV and MT with an increase in age (ASD × age interaction), together with a smaller reduction in MU-APV and an increase in MU-APV at shorter target distances (ASD × Dist interaction). Our results are the first to highlight different developmental trends in anticipatory feedforward and compensatory feedback mechanisms of control, contributing to differences in movement kinematics observed between autistic and neurotypical children. These findings point to differences in integration of prospective perceptuomotor information, with implications for embodied cognition and learning from self-generated action in autism.
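    As a hedged illustration of the linear mixed-effect modelling approach mentioned above, the sketch below fits a random-intercept model of movement time on synthetic data with statsmodels; the variable names, effect sizes and data are assumptions, not the study's results.

```python
# Illustrative linear mixed-effect model on synthetic data (not the study's
# data): movement time ~ group * age with a random intercept per child.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_children, n_trials = 40, 50
data = pd.DataFrame({
    "child": np.repeat(np.arange(n_children), n_trials),
    "group": np.repeat(rng.choice(["ASD", "TD"], size=n_children), n_trials),
    "age": np.repeat(rng.uniform(3, 6, size=n_children), n_trials),
})
child_intercept = np.repeat(rng.normal(0, 50, size=n_children), n_trials)
data["MT"] = (600 + 40 * (data["group"] == "ASD") - 30 * data["age"]
              + child_intercept + rng.normal(0, 80, size=len(data)))   # simulated movement time (ms)

# Random-intercept model: MT ~ group * age + (1 | child)
result = smf.mixedlm("MT ~ group * age", data, groups=data["child"]).fit()
print(result.summary())
```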