    Technologies for Proteomic and Genomic Biomarker Analysis

    In the first part of this dissertation, we systematically validated the use of molecular weight cut-off ultrafiltration for the separation and enrichment of low-molecular-weight (LMW) peptides from human serum. Under optimized conditions, both free-phase and bound LMW peptides could be separated and enriched. Coupled with MALDI-TOF MS proteomic pattern analysis, the method proved highly efficient and reproducible. Three marker peaks were found capable of distinguishing normal from ovarian cancer samples. A novel organic solvent precipitation method coupled with enzymatic deglycosylation was also developed for biomarker detection from human serum. This method generated reproducible free-phase peptide patterns comparable with those obtained by the ultrafiltration method. A potential marker was found to be up-regulated in benign and ovarian cancer patients; it was further identified as des-alanine-fibrinopeptide A using LC tandem mass spectrometry. In the second part of this dissertation, a new sample preparation procedure was developed to improve the MALDI-TOF analysis of low-concentration oligonucleotides. The oligonucleotide solutions are first dispensed and allowed to shrink onto a small spot on an anchoring target. A small volume (0.1 µL) of saturated 3-HPA matrix solution is then added on top of each dried oligonucleotide spot. Samples prepared by this procedure are homogeneous and reduce the need to search for 'sweet' spots. The increased shot-to-shot and sample-to-sample reproducibility makes the procedure useful for high-throughput quantitative analysis. It allowed robust detection of oligonucleotides at the 0.01 µM level and of mini-sequencing products produced using only 50 fmol of extension primer. In addition, a strategy called probe-clamping-primer-extension-PCR (PCPE-PCR) was developed to detect MRS alterations in a large background of wild-type DNA. PCR errors often generate false positive mutant alleles; in PCPE-PCR, mutant single-strand DNA molecules are preferentially produced and enriched. Thereafter, the r

    Parametric study of size, curvature and free edge effects on the predicted strength of bonded composite joints

    This paper presents the effects of size, curvature and free edges of laboratory lap joints on the debond fracture behaviour of joints that represent fuselage skin structures more realistically than conventional flat, narrow specimens. Finite Element Analysis is used in conjunction with Cohesive Zone Modelling (CZM) to predict the strength of selected joint features. The modelling approach was verified on a simple single lap joint geometry, and four realistic joint features were then modelled with the validated approach. The results show that moderate curvature has a negligible effect on the peak load, and that there is a significant difference in the load vs displacement response between flat lab coupon joints with free edges and realistic curved joints with constrained edges. Further detail design features were investigated, including (i) the joint runout and (ii) the presence of initial damage (thumbnail delamination). The modelling results show that the joggle configuration affects the distribution of interlaminar stresses, which in turn affects damage initiation and propagation. Fracture behaviour from different initial crack geometries associated with wider specimens has also been simulated. From a design standpoint, an expansion of modelling capability is suggested to reduce the number of component tests in the traditional test pyramid.
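    Cohesive Zone Modelling of the kind used here represents the bondline with a traction-separation law. As a rough illustration only, the sketch below evaluates a bilinear (triangular) traction-separation law in Python; the bilinear form and all property values are assumptions chosen for illustration, not the specific law or material data used in the paper.

```python
def bilinear_traction(delta, sigma_max=30.0e6, G_c=500.0, K=1.0e14):
    """Return traction (Pa) for a given opening displacement delta (m).

    sigma_max : peak traction (Pa)
    G_c       : fracture toughness, area under the curve (J/m^2)
    K         : initial (penalty) stiffness (Pa/m)
    All values are illustrative assumptions.
    """
    delta_0 = sigma_max / K            # opening at damage initiation
    delta_f = 2.0 * G_c / sigma_max    # opening at complete failure
    if delta <= delta_0:               # linear elastic branch
        return K * delta
    if delta >= delta_f:               # fully debonded, no traction transferred
        return 0.0
    # linear softening branch between initiation and failure
    return sigma_max * (delta_f - delta) / (delta_f - delta_0)

# Example: traction partway through the softening branch
print(bilinear_traction(1.5e-5))
```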

    Overcoming Catastrophic Forgetting in Graph Neural Networks

    Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks. Prior methods have focused on overcoming this problem for convolutional neural networks (CNNs), where input samples such as images lie on a grid domain, but have largely overlooked graph neural networks (GNNs) that handle non-grid data. In this paper, we propose a novel scheme dedicated to overcoming the catastrophic forgetting problem and hence strengthening continual learning in GNNs. At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP), applicable to arbitrary forms of GNNs in a plug-and-play fashion. Unlike the mainstream of CNN-based continual learning methods, which rely solely on slowing down the updates of parameters important to the downstream task, TWP explicitly explores the local structures of the input graph and attempts to stabilize the parameters playing pivotal roles in the topological aggregation. We evaluate TWP on different GNN backbones over several datasets and demonstrate that it yields performance superior to the state of the art. Code is publicly available at https://github.com/hhliu79/TWP. Comment: Accepted by AAAI 202
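    The abstract describes stabilizing parameters that are important both to the task loss and to the topological aggregation. The sketch below is a hypothetical simplification of such an importance-weighted penalty in PyTorch; the gradient-magnitude importance estimate and the single lam weight are assumptions, not the published TWP formulation (the authors' actual code is at the linked repository).

```python
import torch

def importance_weighted_penalty(model, old_params, importance, lam=1.0):
    """Quadratic penalty discouraging drift of parameters deemed important
    after the previous task (sketch of the general idea behind such methods)."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        if name in importance:
            penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return lam * penalty

def estimate_importance(model, loss):
    """Hypothetical importance score: absolute gradient of the loss w.r.t. each
    parameter. TWP additionally derives a topology-aware term from the graph
    aggregation, which is omitted in this sketch."""
    params = list(model.parameters())
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    return {name: g.detach().abs()
            for (name, _), g in zip(model.named_parameters(), grads) if g is not None}

# Usage during training on a new task (sketch):
#   total_loss = task_loss + importance_weighted_penalty(model, old_params, importance)
```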

    Quality and Rate Control of JPEG XR

    Driven by the need for seismic data compression with high dynamic range and 32-bit resolution, we propose two algorithms to efficiently and precisely control the signal-to-noise ratio (SNR) and bit rate in JPEG XR image compression, allowing users to compress seismic data to a target SNR or a target bit rate. Based on the quantization properties of JPEG XR and the nature of blank macroblocks, we build a reliable model between the quantization parameter (QP) and SNR. This enables us to estimate the appropriate QP for a target quality in the JPEG XR encoder.
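    The core of the approach is a model relating QP to SNR that lets the encoder pick a QP for a target quality. The sketch below illustrates only the general idea: fit a model from a few trial encodes, then invert it. The linear SNR-vs-QP form, the sample values, and the function names are assumptions, not the paper's actual model or any JPEG XR API.

```python
import numpy as np

def fit_qp_snr_model(qp_samples, snr_samples):
    """Fit an SNR = a*QP + b model from trial encodes.
    The linear form is an illustrative assumption, not the paper's exact model."""
    a, b = np.polyfit(qp_samples, snr_samples, deg=1)
    return a, b

def qp_for_target_snr(target_snr, a, b, qp_min=1, qp_max=255):
    """Invert the fitted model to choose a QP for a desired SNR."""
    qp = (target_snr - b) / a
    return int(np.clip(round(qp), qp_min, qp_max))

# Example with made-up (QP, SNR in dB) trial measurements
a, b = fit_qp_snr_model([8, 16, 32, 64], [72.0, 60.5, 48.8, 36.9])
print(qp_for_target_snr(55.0, a, b))
```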

    A General Method to Couple Prior Distributions

    There are many statistical models based on the relationship between marginal distributions and a joint distribution. Such models are widely used in medicine, biology, finance, and other fields. Many modern medical datasets contain observations from multiple time points and treatment conditions. Adaptive shrinkage, a general method to estimate marginal prior distributions, has been developed to analyze such data for a single time point or condition, but few methods have been developed to analyze the joint distribution across different time points or conditions, mainly because of the difficulty of constructing multi-dimensional prior distributions with dependent variables. A few Bayesian methods can be applied to this type of data. Although it is non-trivial to estimate the joint distribution directly, the marginal prior distributions can easily be estimated separately. In this thesis, I develop a simple, general and straightforward method to couple prior distributions for multi-dimensional genetic effects. The main goal is to study the relationship between the signs of a phenotype's effect at different time points. I couple prior distributions to model the joint distribution and estimate parameters at multiple time points. Copula estimation, as described in Copula Theory and Its Applications, provides a parametric copula inference method for this estimation. I construct a model and develop a method to couple prior distributions and estimate parameters at multiple time points by deriving useful expressions, implementing data simulations in R, and using maximum likelihood estimation. I simulate data from both the real copula model and a multivariate normal distribution; the true model performs better in estimating the parameters. This copula model successfully bridges the gap between the joint distribution and dependent marginal distributions. There is still room to improve the copula model.
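    A copula couples separately estimated marginal distributions into one joint distribution. The snippet below is a minimal sketch of that idea using a Gaussian copula in Python (the thesis itself works in R); the Gaussian copula choice, the normal marginals, and the correlation value are illustrative assumptions, not the model estimated in the thesis.

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(marginal_a, marginal_b, rho, n, seed=None):
    """Draw n pairs whose marginals follow the given scipy frozen distributions
    and whose dependence comes from a Gaussian copula with correlation rho."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)     # latent correlated Gaussians
    u = stats.norm.cdf(z)                                     # uniform margins (the copula)
    return marginal_a.ppf(u[:, 0]), marginal_b.ppf(u[:, 1])  # back-transform to marginals

# Example: couple two normal priors on a genetic effect at two time points and
# check how often the effect keeps the same sign across the two time points.
x1, x2 = gaussian_copula_sample(stats.norm(0, 1), stats.norm(0, 2), rho=0.7, n=5000, seed=1)
print(np.mean(np.sign(x1) == np.sign(x2)))
```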

    Learning Options via Compression

    Identifying statistical regularities in solutions to some tasks in multi-task reinforcement learning can accelerate the learning of new tasks. Skill learning offers one way of identifying these regularities by decomposing pre-collected experiences into a sequence of skills. A popular approach to skill learning is maximizing the likelihood of the pre-collected experience with latent variable models, where the latent variables represent the skills. However, there are often many solutions that maximize the likelihood equally well, including degenerate solutions. To address this underspecification, we propose a new objective that combines the maximum likelihood objective with a penalty on the description length of the skills. This penalty incentivizes the skills to maximally extract common structures from the experiences. Empirically, our objective learns skills that solve downstream tasks in fewer samples compared to skills learned from only maximizing likelihood. Further, while most prior works in the offline multi-task setting focus on tasks with low-dimensional observations, our objective can scale to challenging tasks with high-dimensional image observations. Comment: Published at NeurIPS 202
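    The proposed objective adds a description-length penalty on the skills to the usual maximum-likelihood term. The sketch below shows one way such an objective can be written in PyTorch; the entropy-of-skill-usage proxy for description length and the beta weight are illustrative assumptions, not the paper's exact penalty.

```python
import torch
import torch.nn.functional as F

def compression_objective(action_logits, actions, skill_logits, beta=0.1):
    """Maximum likelihood plus a description-length penalty (minimal sketch).

    action_logits : (T, num_actions) decoder predictions given the inferred skills
    actions       : (T,) actions from the pre-collected experience
    skill_logits  : (T, num_skills) per-step posterior over which skill is active
    """
    # Likelihood term: negative log-likelihood of the demonstrated actions.
    nll = F.cross_entropy(action_logits, actions)

    # Description-length term: entropy of the average skill-usage distribution,
    # an illustrative proxy -- reusing a few common skills yields a shorter code.
    avg_usage = skill_logits.softmax(dim=-1).mean(dim=0)
    code_length = -(avg_usage * torch.log(avg_usage + 1e-8)).sum()

    return nll + beta * code_length
```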

    Modeling and Application of a New Nonlinear Fractional Financial Model

    The paper proposes a new nonlinear dynamic econometric model with a fractional derivative, defined in the Jumarie sense. The corresponding discrete financial system is obtained by removing the limit operation in the Jumarie derivative. We estimate the coefficients and parameters of the model using the least-squares principle. The new approach to financial system modeling is illustrated by an application to the Japanese national financial system, which consists of interest rate, investment, and inflation. Empirical results with different discretization time steps are shown, and the actual data are compared against the data estimated by the empirical model. We find that our discrete financial model can accurately describe the actual data on interest rate, investment, and inflation.
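    Discretizing the fractional derivative and fitting coefficients by least squares are the two computational steps described above. The sketch below uses a Grünwald-Letnikov-style fractional difference purely as an illustrative stand-in (the paper works with the Jumarie definition) and fits a toy linear relation to synthetic data; none of the series or coefficients correspond to the paper's Japanese data.

```python
import numpy as np

def fractional_difference(x, alpha):
    """Grünwald-Letnikov-style fractional difference of order alpha (0 < alpha < 1),
    used here only as an illustrative discrete stand-in for a fractional derivative."""
    n = len(x)
    w = np.ones(n)                       # weights w_k = (-1)^k * C(alpha, k)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return np.array([np.dot(w[:t + 1], x[t::-1]) for t in range(n)])

# Least-squares fit of a toy relation between the fractional difference of a
# synthetic "interest rate" series and two synthetic driver variables.
rng = np.random.default_rng(0)
interest = 0.01 * np.cumsum(rng.normal(size=200))    # synthetic interest-rate path
drivers = rng.normal(size=(200, 2))                   # synthetic investment, inflation
y = fractional_difference(interest, alpha=0.8)
coef, *_ = np.linalg.lstsq(drivers, y, rcond=None)
print(coef)
```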