
    Genetic Association Testing of Copy Number Variation

    Copy-number variation (CNV) has been implicated in many complex diseases. It is of great interest to detect and locate such regions through genetic association testing. However, association testing is complicated by the fact that CNVs usually span multiple markers, so that the markers are correlated with each other. To overcome this difficulty, it is desirable to pool information across markers. In this thesis, we propose a kernel-based method for aggregating marker-level tests: we first obtain a p-value from an association test at every marker, and the CNV association test is then based on a statistic that combines these p-values. In addition, we explore several aspects of its implementation. Because p-values are correlated among markers, obtaining the null distribution of the test statistic for kernel-based aggregation of marker-level tests is not straightforward. To solve this problem, we develop two approaches, one permutation-based and one correlation-based, and demonstrate that both preserve the family-wise error rate of the test procedure. Many implementation aspects of the kernel-based method are compared through empirical power studies in a number of simulations constructed from real data from a pharmacogenomic study of gemcitabine. Further performance comparisons between the permutation-based and correlation-based approaches are also presented, and both approaches are applied to the real data. The main contribution of the dissertation is the development of marker-level association testing as a competitive and powerful approach for detecting phenotype-associated CNVs. Furthermore, the approach is extended to the high-dimensional setting with high efficiency.
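
    The abstract above does not spell out the exact test statistic, so the following is only a minimal sketch, assuming a Gaussian kernel over marker positions and a phenotype-permutation null; the function names, the two-sample t-test at each marker, and the bandwidth are illustrative choices, not the method developed in the thesis.

```python
# Minimal sketch, assuming a Gaussian kernel over marker positions and a
# phenotype-permutation null; not the exact statistic developed in the thesis.
import numpy as np
from scipy import stats

def marker_pvalues(cn, y):
    """Per-marker association p-values (two-sample t-test of copy number vs. binary phenotype)."""
    return np.array([stats.ttest_ind(cn[y == 1, j], cn[y == 0, j]).pvalue
                     for j in range(cn.shape[1])])

def kernel_aggregate(pvals, pos, bandwidth=5e4):
    """Kernel-weighted combination of -log10 p-values around each marker position."""
    pos = np.asarray(pos, dtype=float)
    w = np.exp(-0.5 * ((pos[:, None] - pos[None, :]) / bandwidth) ** 2)
    return (w * -np.log10(pvals)[None, :]).sum(axis=1) / w.sum(axis=1)

def permutation_test(cn, y, pos, n_perm=1000, seed=0):
    """Permutation-based family-wise p-value for the maximum aggregated statistic."""
    rng = np.random.default_rng(seed)
    obs = kernel_aggregate(marker_pvalues(cn, y), pos).max()
    null = np.array([kernel_aggregate(marker_pvalues(cn, rng.permutation(y)), pos).max()
                     for _ in range(n_perm)])
    return obs, (1 + (null >= obs).sum()) / (n_perm + 1)
```

    In this sketch, taking the maximum aggregated statistic over markers and comparing it against its permutation null is what controls the family-wise error rate despite the correlation between markers.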

    Spike Trains as Event Sequences: Fundamental Implications


    Isomorphisms between psychological processes and neural mechanisms: From stimulus elements to genetic markers of activity

    Traditional learning theory has developed models that can accurately predict and describe the course of learned behavior. These “psychological process” models rely on hypothetical constructs that are usually thought to be not directly measurable or manipulable. Recently, and mostly in parallel, the neural mechanisms underlying learning have been fairly well elucidated. The argument in this essay is that we can successfully uncover isomorphisms between process and mechanism and that this effort will help advance our theories about both processes and mechanisms. We start with a brief review of error-correction circuits as a successful example. Then we turn to the concept of stimulus elements, where the conditional stimulus is hypothesized to be constructed of a multitude of elements only some of which are sampled during any given experience. We discuss such elements with respect to how they explain acquisition of associative strength as an incremental process. Then we propose that for fear conditioning, stimulus elements and basolateral amygdala projection neurons are isomorphic and that the activational state of these “elements” can be monitored by the expression of the mRNA for activity-regulated cytoskeletal protein (ARC). Finally we apply these ideas to analyze recent data examining ARC expression during contextual fear conditioning and find that there are indeed many similarities between stimulus elements and amygdala neurons. The data also suggest some revisions in the conceptualization of how the population of stimulus elements is sampled from
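
    As a concrete illustration of the stimulus-element idea described above (a toy simulation under assumed parameters, not a model from the essay), the following shows how error-correction updates applied only to a randomly sampled subset of elements produce incremental acquisition of associative strength; the element count, sampling probability, and learning rate are arbitrary.

```python
# Toy simulation of stimulus sampling with error correction: only a random
# subset of elements is active on each trial, and only those elements are
# updated, so associative strength is acquired incrementally. Parameter
# values are arbitrary illustrations.
import numpy as np

def simulate_acquisition(n_elements=100, p_sample=0.2, n_trials=50,
                         alpha=0.3, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    v = np.zeros(n_elements)                                 # associative strength of each element
    curve = []
    for _ in range(n_trials):
        sampled = rng.random(n_elements) < p_sample          # elements active on this trial
        error = lam - v[sampled].sum()                       # prediction error for the compound
        v[sampled] += alpha * error / max(sampled.sum(), 1)  # error shared among sampled elements
        curve.append(v[sampled].sum())                       # compound strength after the update
    return np.array(curve)
```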

    MRI Artefact Augmentation: Robust Deep Learning Systems and Automated Quality Control

    Quality control (QC) of magnetic resonance imaging (MRI) is essential to establish whether a scan or dataset meets a required set of standards. In MRI, many potential artefacts must be identified so that problematic images can either be excluded or accounted for in further image processing or analysis. To date, the gold standard for the identification of these issues is visual inspection by experts. A primary source of MRI artefacts is patient movement, which can affect clinical diagnosis and impact the accuracy of deep learning systems. In this thesis, I present a method to simulate motion artefacts from artefact-free images to augment convolutional neural networks (CNNs), increasing training appearance variability and robustness to motion artefacts. I show that models trained with artefact augmentation generalise better and are more robust to real-world artefacts, with negligible cost to performance on clean data. I argue that it is often better to optimise frameworks end-to-end with artefact augmentation than to learn to retrospectively remove artefacts, thus enforcing robustness to artefacts at the feature-level representation of the data. The labour-intensive and subjective nature of QC has increased interest in automated methods. To address this, I approach MRI quality estimation as the uncertainty in performing a downstream task, using probabilistic CNNs to predict segmentation uncertainty as a function of the input data. Extending this framework, I introduce a novel decoupled uncertainty model, enabling separate uncertainty predictions for different types of image degradation. Trained with an extended k-space artefact augmentation pipeline, the model provides informative measures of uncertainty on problematic real-world scans classified by QC raters and enables sources of segmentation uncertainty to be identified. Suitable quality for algorithmic processing may differ from an image's perceptual quality. Exploring this, I pose MRI visual quality assessment as an image restoration task. Using Bayesian CNNs to recover clean images from noisy data, I show that the uncertainty indicates the possible recoverability of an image. A multi-task network combining uncertainty-aware artefact recovery with tissue segmentation highlights the distinction between visual and algorithmic quality, implying that, depending on the downstream task, less data should be discarded for purely visual quality reasons.
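
    The thesis describes simulating motion artefacts from artefact-free images for augmentation; the sketch below illustrates only the general k-space idea and is not the thesis's pipeline. It assumes a single in-plane translation part-way through a Cartesian acquisition, implemented as a linear phase ramp applied to the later phase-encode lines; the function name and parameters are hypothetical.

```python
# Illustrative k-space motion corruption (not the thesis's pipeline): a single
# in-plane translation part-way through acquisition is modelled as a linear
# phase ramp applied to the phase-encode lines acquired after the movement.
import numpy as np

def add_motion_artefact(image, max_shift=4.0, seed=None):
    """Corrupt a 2-D magnitude image with a simulated mid-scan translation."""
    rng = np.random.default_rng(seed)
    ny, nx = image.shape
    k = np.fft.fftshift(np.fft.fft2(image))                # k-space of the clean image
    dy, dx = rng.uniform(-max_shift, max_shift, size=2)    # translation in pixels
    ky = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]
    kx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
    phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))      # spatial shift = linear phase in k-space
    start = rng.integers(ny // 4, 3 * ny // 4)             # movement happens mid-acquisition
    k[start:, :] *= phase[start:, :]                       # only later phase-encode lines are affected
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))       # magnitude image with ghosting/ringing
```

    Augmenting training batches with such corrupted copies of clean images is one way to expose a CNN to motion artefacts without collecting artefact-laden scans.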

    A conformal test of linear models via permutation-augmented regressions

    Permutation tests are widely recognized as robust alternatives to tests based on normal theory. Random permutation tests have frequently been employed to assess the significance of variables in linear models. Despite their widespread use, existing random permutation tests lack finite-sample, assumption-free guarantees for controlling type I error in partial correlation tests. To address this long-standing challenge, we develop a conformal test through permutation-augmented regressions, which we refer to as PALMRT. PALMRT not only achieves power competitive with conventional methods but also provides reliable control of the type I error at no more than 2α for any targeted level α, for arbitrary fixed designs and error distributions. We confirmed this through extensive simulations. Compared to the cyclic permutation test (CPT), which also offers theoretical guarantees, PALMRT does not significantly compromise power or impose stringent requirements on the sample size, making it suitable for diverse biomedical applications. We further illustrate their differences in a long-Covid study, where PALMRT validated key findings previously identified using the t-test, while CPT suffered from a drastic loss of power. We endorse PALMRT as a robust and practical hypothesis test for scientific research because of its superior error control, power preservation, and simplicity.
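
    The abstract does not detail PALMRT's construction, so no attempt is made to reproduce it here. For orientation only, the sketch below implements a standard Freedman-Lane random permutation test for one coefficient in a linear model, i.e. the kind of conventional random permutation test that, unlike PALMRT, carries no finite-sample type I error guarantee.

```python
# A standard Freedman-Lane random permutation test for one coefficient in a
# linear model, shown for orientation only; this is NOT the PALMRT construction
# and, unlike PALMRT, offers no finite-sample type I error guarantee.
import numpy as np

def freedman_lane(y, x, Z, n_perm=2000, seed=0):
    """Permutation p-value for H0: the coefficient of x is 0 in y ~ x + Z."""
    rng = np.random.default_rng(seed)
    Z1 = np.column_stack([np.ones(len(y)), Z])   # nuisance design with intercept
    H = Z1 @ np.linalg.pinv(Z1)                  # hat matrix of the null (nuisance-only) model
    resid = y - H @ y                            # residuals under the null model

    def t_stat(yy):
        X = np.column_stack([x, Z1])
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        e = yy - X @ beta
        sigma2 = e @ e / (len(yy) - X.shape[1])
        cov = sigma2 * np.linalg.pinv(X.T @ X)
        return beta[0] / np.sqrt(cov[0, 0])

    t_obs = abs(t_stat(y))
    t_null = np.array([abs(t_stat(H @ y + rng.permutation(resid)))   # permute null residuals
                       for _ in range(n_perm)])
    return (1 + (t_null >= t_obs).sum()) / (n_perm + 1)
```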

    PHYSICAL TESTING OF POTENTIAL FOOTBALL HELMET DESIGN ENHANCEMENTS

    Football is a much-loved sport in the United States. Unfortunately, it is also hard on the players and puts them at very high risk of concussion. To combat this, an inventor in Santa Barbara brought a new design to Cal Poly to be tested. The design was tested at small scale first in order to draw some preliminary conclusions about it. To fully test the helmet design, however, full-scale testing was required. To carry out this testing, a drop tower was built based on the National Operating Committee on Standards for Athletic Equipment (NOCSAE) specification. The drop tower designed for Cal Poly is a lower-cost and highly portable version of the standard NOCSAE design. Using this drop tower and a 3D-printed prototype, the new design was tested at full scale.
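
    The abstract does not state which impact metric was evaluated; NOCSAE drop tests are commonly scored with the Gadd Severity Index computed from the headform acceleration trace, so the following is a hypothetical sketch of that computation (the pulse shape, sampling rate, and pass threshold are illustrative assumptions, not results from the thesis).

```python
# Hypothetical scoring sketch: NOCSAE drop tests are commonly evaluated with the
# Gadd Severity Index, SI = integral of a(t)^2.5 dt, where a(t) is headform
# acceleration in g and t is in seconds. The abstract does not state which
# metric was used; the pulse and the 1200 pass threshold below are only the
# commonly cited values.
import numpy as np

def severity_index(accel_g, dt):
    """Gadd Severity Index from a uniformly sampled acceleration trace (in g), trapezoidal rule."""
    a = np.asarray(accel_g, dtype=float)
    return float(np.sum((a[:-1] ** 2.5 + a[1:] ** 2.5) / 2.0) * dt)

# Example: a 10 ms half-sine pulse peaking at 120 g, sampled at 10 kHz.
t = np.arange(0, 0.010, 1e-4)
pulse = 120 * np.sin(np.pi * t / 0.010)
si = severity_index(pulse, dt=1e-4)
passes = si < 1200   # commonly cited NOCSAE pass threshold
```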

    Data analysis tools for mass spectrometry proteomics

    ABSTRACT: Proteins are large biomolecules that consist of amino acid chains. They differ from one another in their amino acid sequences, which are mainly dictated by the nucleotide sequences of their corresponding genes. Proteins fold into specific three-dimensional structures that determine their activity. Because many proteins act as catalysts in biochemical reactions, they are considered the executive molecules of the cell, and their research is therefore fundamental to biotechnology and medicine. Currently, the most common method to investigate the activity, interactions, and functions of proteins on a large scale is high-throughput mass spectrometry (MS). Mass spectrometers are used to measure molecular masses or, more specifically, mass-to-charge ratios. Typically, the proteins are digested into peptides and their masses are measured by mass spectrometry. The masses are matched against known sequences to acquire peptide identifications, and subsequently the proteins from which the peptides originated are quantified. The data gathered from these experiments contain a lot of noise, leading to loss of relevant information and even to wrong conclusions. The noise can be related, for example, to differences in sample preparation or to technical limitations of the analysis equipment. In addition, assumptions regarding the data might be wrong or the chosen statistical methods might not be suitable. Taken together, these issues can lead to irreproducible results. Developing algorithms and computational tools to overcome them is of utmost importance. Thus, this work aims to develop new computational tools to address these problems. In this PhD thesis, the performance of existing label-free proteomics methods is evaluated and new statistical data analysis methods are proposed. The tested methods include several widely used normalization methods, which are thoroughly evaluated using multiple gold-standard datasets. Various statistical methods for differential expression analysis are also evaluated. Furthermore, new methods to calculate differential expression statistics are developed, and their superior performance compared to existing methods is shown using a wide set of metrics. The tools are published as open-source software packages.

    TIIVISTELMÄ (translated from Finnish): Proteins are large biomolecules made up of amino acid chains. They differ from one another in the order of their amino acids, which is mainly determined by the genes encoding the proteins. In addition, proteins fold into three-dimensional structures that in turn define their function. Because proteins act as catalysts in biochemical reactions, they are considered to play a central role in cells, and their study is therefore regarded as important. Currently, the most common method for studying protein activity, interactions, and functions on a large scale is high-throughput mass spectrometry (MS). Mass spectrometers are used to measure the masses of molecules, or more precisely their mass-to-charge ratios. Typically, proteins are digested into peptides for mass measurement. The masses observed with the mass spectrometer are compared against a database compiled from known protein sequences so that the peptides can be identified. From the peptides, the proteins can in turn be inferred and quantified. The data collected in these experiments normally contain a great deal of noise, which can lead to the loss of relevant information and, at worst, to wrong conclusions. This noise can stem, for example, from differences in sample handling or from technical limitations of the measurement instruments. In addition, assumptions about the nature of the data may be incorrect, or statistical models unsuited to the data may be applied. At worst, this leads to situations in which the results of a study cannot be reproduced. Developing computational tools and algorithms to prevent these problems is therefore of primary importance for the reliability of research. This work accordingly focuses on applications that aim to solve problems arising in this area. The study compares commonly used quantitative proteomics software and the most common data normalization methods, and develops new data analysis tools. The comparisons between methods are carried out on several standard datasets whose true content is known. The study also compares a set of statistical methods for detecting differences between samples, develops entirely new and efficient methods, and demonstrates their superior performance relative to earlier methods. All tools developed in the study have been published as open-source software.
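
    As context for the normalization and differential expression comparisons described above, the following is a generic sketch of one label-free workflow step: median normalization of log2 intensities followed by a per-protein Welch t-test with Benjamini-Hochberg correction. It is illustrative only and is not one of the new methods developed in the thesis.

```python
# Generic label-free analysis step for context (not one of the new methods
# developed in the thesis): median normalization of log2 intensities, then a
# per-protein Welch t-test with Benjamini-Hochberg correction.
import numpy as np
from scipy import stats

def median_normalize(log_intensities):
    """Shift each sample (column) so that all samples share the same median log2 intensity."""
    X = np.asarray(log_intensities, dtype=float)
    col_medians = np.nanmedian(X, axis=0)
    return X - col_medians + np.nanmean(col_medians)

def differential_expression(X, group):
    """Welch t-test per protein (row) between two sample groups, with BH-adjusted p-values."""
    group = np.asarray(group)
    a, b = X[:, group == 0], X[:, group == 1]
    p = stats.ttest_ind(a, b, axis=1, equal_var=False, nan_policy='omit').pvalue
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)            # raw BH values in ascending p order
    q_sorted = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity from the largest p down
    q = np.empty(m)
    q[order] = np.clip(q_sorted, 0.0, 1.0)
    return p, q
```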

    Neutralizing antibody responses in HIV-1 dual infection: lessons for vaccine design

    Includes abstract. Includes bibliographical references