
    Cooperation between the B method and automata theory to check component interoperability

    Component interoperability is one of the essential issues in component-based development, since it allows the composition of reusable heterogeneous components developed by different people. In this paper, we propose an approach to formally verify component interoperability at the signature, semantics, and protocol levels. It is based on the B formal method for specifying component interfaces and on finite transition systems for specifying component protocols. Verification is carried out with the B theorem prover and by checking the simulation relation between transition systems. This approach makes it possible to decide whether two components can interoperate when assembled together and whether one component can be replaced by another.
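    As a rough illustration of the protocol-level check (not the paper's actual B/automata tooling), the following Python sketch tests whether one finite transition system can simulate another; the dictionary encoding, function names, and example protocols are all our own.

```python
# Minimal sketch: checking that one labelled transition system can
# simulate another (greatest fixed point). The dict encoding and the
# example protocols are illustrative, not the paper's tooling.

def simulates(t1, t2, init1, init2):
    """True if every labelled step of t1 (from init1) can be matched,
    step by step, by t2 (from init2)."""
    states1 = set(t1) | {d for trans in t1.values() for _, d in trans}
    states2 = set(t2) | {d for trans in t2.values() for _, d in trans}
    # Start from the full relation and refine it until stable.
    rel = {(s1, s2) for s1 in states1 for s2 in states2}
    changed = True
    while changed:
        changed = False
        for (s1, s2) in list(rel):
            for (label, d1) in t1.get(s1, []):
                # s2 must offer a matching labelled step staying in rel.
                if not any(l2 == label and (d1, d2) in rel
                           for (l2, d2) in t2.get(s2, [])):
                    rel.discard((s1, s2))
                    changed = True
                    break
    return (init1, init2) in rel

# A provider protocol and a candidate replacement with an extra service:
provider    = {"idle": [("open", "busy")], "busy": [("close", "idle")]}
replacement = {"i": [("open", "b")], "b": [("close", "i"), ("reset", "i")]}
print(simulates(provider, replacement, "idle", "i"))  # True: safe substitution
```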

    Transcription Factor-DNA Binding Via Machine Learning Ensembles

    We present ensemble methods in a machine learning (ML) framework that combines predictions from five known motif/binding-site exploration algorithms. For a given TF, the ensemble starts with position weight matrices (PWMs) for the motif, collected from the component algorithms. Using dimension reduction, we identify significant PWM-based subspaces for analysis. Within each subspace, a machine classifier is built to identify the TF's gene (promoter) targets (Problem 1). These PWM-based subspaces form an ML-based sequence analysis tool. Problem 2 (finding binding motifs) is solved by agglomerating the k-mer (string) feature PWM-based subspaces that stand out in identifying gene targets. We approach Problem 3 (locating binding sites) with a novel machine learning approach that uses promoter string features and ML importance scores in a classification algorithm to locate binding sites across the genome. For target gene identification, this method improves performance (measured by the F1 score) by about 10 percentage points over (a) the motif scanning method and (b) the coexpression-based association method. The top motif outperformed the five component algorithms as well as two other common algorithms (BEST and DEME). For identifying individual binding sites on a benchmark cross-species database (Tompa et al., 2005), we match the best performer without much human intervention, and we improve performance on mammalian TFs. The ensemble can integrate orthogonal information from different weak learners (potentially using entirely different types of features) into a machine learner that performs consistently better for more TFs. The TF gene target identification component (Problem 1 above) is useful in constructing a transcriptional regulatory network from known TF-target associations. The ensemble is easily extendable to include more tools as well as future PWM-based information.
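    To make the PWM machinery concrete, here is a toy Python sketch of the window-scoring step that underlies such component algorithms; the matrix values, sequence, and names are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Toy sketch of PWM-based scoring. The 4 x L matrix (rows A, C, G, T)
# and all names are invented; real PWMs come from the motif finders.

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def pwm_scores(sequence, pwm):
    """Log-odds score of every k-mer window of a sequence under a PWM."""
    k = pwm.shape[1]
    return [
        sum(pwm[BASES[base], j] for j, base in enumerate(sequence[i:i + k]))
        for i in range(len(sequence) - k + 1)
    ]

pwm = np.log2(np.array([[0.7, 0.1, 0.1],     # A
                        [0.1, 0.7, 0.1],     # C
                        [0.1, 0.1, 0.1],     # G
                        [0.1, 0.1, 0.7]])    # T
              / 0.25)                        # uniform background
promoter = "ACTGACTTACT"
best = max(pwm_scores(promoter, pwm))
print(f"best window score: {best:.2f}")     # feature fed to a classifier
```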

    A Vision-Based Vehicle Follower Navigation Using Fuzzy Logic Controller

    This research presents a vision-based approach to ground-vehicle follower navigation. The system uses a fuzzy logic controller to navigate itself. The prototype has two components: the vision system component and the actuating component. The vision system component is controlled by a microprocessor, a Raspberry Pi; the actuating component is controlled by a microcontroller, an Arduino Mega. The vision system component uses Camshift tracking, and illumination inconsistency is corrected with histogram equalization. The parameters obtained from the pilot test are used to design appropriate fuzzy membership functions and rules. Two types of rule set were tested: the first, method A, uses 15 fuzzy logic rules, whereas the second, method B, adds three hedge rules to the existing 15. The results show that both methods produce desirable behaviour, as the prototype is able to navigate itself to follow the lead vehicle, with method B producing the better results.
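    As a minimal illustration of the kind of fuzzy rule involved (not the paper's actual 15- or 18-rule controller), the sketch below maps a horizontal target offset to a steering command using triangular membership functions; all breakpoints and consequents are invented.

```python
# Minimal fuzzy steering sketch. Membership breakpoints, labels, and
# defuzzification constants are invented, not the paper's rule base.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer(offset):
    """Map horizontal target offset (pixels, -100..100) to a steering
    command via weighted-average (Sugeno-style) defuzzification."""
    left   = tri(offset, -100, -60, 0)    # target is to the left
    centre = tri(offset,  -40,   0, 40)   # target roughly centred
    right  = tri(offset,    0,  60, 100)  # target is to the right
    # Rule consequents: crisp steering values for each label.
    num = left * (-1.0) + centre * 0.0 + right * 1.0
    den = left + centre + right
    return num / den if den else 0.0

print(steer(30))   # gentle right turn
print(steer(-80))  # hard left turn
```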

    Reconstruction of lensing from the cosmic microwave background polarization

    Gravitational lensing of the cosmic microwave background (CMB) polarization field has been recognized as a potentially valuable probe of the cosmological density field. We apply likelihood-based techniques to the problem of lensing of CMB polarization and show that if the B-mode polarization is mapped, then likelihood-based techniques allow significantly better lensing reconstruction than is possible with the previous quadratic-estimator approach. With this method, the ultimate limit to lensing reconstruction is not set by the lensed CMB power spectrum. Second-order corrections are known to produce a curl component of the lensing deflection field that cannot be described by a potential; we show that this does not significantly affect the reconstruction at noise levels above 0.25 microK arcmin. The reduction of the mean squared error in the lensing reconstruction relative to the quadratic method ranges from a factor of two at a noise level of 1.4 microK arcmin to a factor of ten at 0.25 microK arcmin, depending on the angular scale of interest.
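    For orientation, the remapping that both the quadratic estimator and the likelihood method invert is the standard lensing relation (a textbook form, not this paper's specific derivation):

```latex
% Lensed polarization as a remapping of the unlensed field by the
% deflection grad(phi); the curl component enters only at second order.
\tilde{P}(\hat{\mathbf{n}}) = P\bigl(\hat{\mathbf{n}} + \nabla\phi(\hat{\mathbf{n}})\bigr),
\qquad P \equiv Q + iU .
```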

    Visual task identification and characterisation using polynomial models

    Developing robust and reliable control code for autonomous mobile robots is difficult, because the interaction between a physical robot and the environment is highly complex, subject to noise and variation, and therefore partly unpredictable. This means that to date it is not possible to predict robot behaviour based on theoretical models. Instead, current methods to develop robot control code still require a substantial trial-and-error component in the software design process. This paper proposes a method of dealing with these issues by (a) establishing task-achieving sensor-motor couplings through robot training, and (b) representing these couplings through transparent mathematical functions that can be used to form hypotheses and theoretical analyses of robot behaviour. We demonstrate the viability of this approach by teaching a mobile robot to track a moving football and subsequently modelling this task using the NARMAX system identification technique.
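    As a toy stand-in for the system-identification step (not the paper's identified football-tracking model), the following sketch fits a transparent polynomial NARX-style model to logged sensor-motor pairs by least squares; the regressor set and synthetic data are ours.

```python
import numpy as np

# Toy stand-in for polynomial system identification: fit
# u(t) = f(s(t), s(t-1)) to logged sensor/motor pairs by least squares.
# The regressors and synthetic data are ours, not the paper's model.

rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, 200)                   # sensor reading (e.g. ball bearing)
u = 0.8 * s[1:] - 0.3 * s[:-1] + 0.1 * s[1:]**2 \
    + 0.02 * rng.standard_normal(199)         # logged motor command + noise

# Design matrix of polynomial regressors up to degree 2.
X = np.column_stack([s[1:], s[:-1], s[1:]**2, s[1:] * s[:-1], np.ones(199)])
coef, *_ = np.linalg.lstsq(X, u, rcond=None)

names = ["s(t)", "s(t-1)", "s(t)^2", "s(t)s(t-1)", "1"]
for n, c in zip(names, coef):
    print(f"{n:>10}: {c:+.3f}")   # transparent, inspectable model terms
```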

    A six-factor asset pricing model

    The present study introduces a human capital component into the Fama and French five-factor model, proposing an equilibrium six-factor asset pricing model. The study employs four sets of portfolios mimicking size and industry with varying dimensions. The first set consists of three groups of six portfolios each, sorted on size to B/M, size to investment, and size to momentum. The second set comprises five index portfolios; the third, four groups of twenty-five portfolios each, sorted on size to B/M, size to investment, size to profitability, and size to momentum; and the final set comprises thirty industry portfolios. To estimate the parameters of the six-factor asset pricing model for the four sets of portfolios, we use OLS and a robust instrumental-variables technique based on the generalized method of moments (IVGMM). The results of the relevance, endogeneity, overidentifying-restrictions, and Hausman specification tests indicate that the parameter estimates of the six-factor model using IVGMM are robust and perform better than the OLS approach. The human capital component shares the predictive power equally with the other factors in the framework in explaining the variation in portfolio returns. Furthermore, we assess the t-ratio of the human capital component for each of the IVGMM estimates of the six-factor model across the four sets of portfolios. The t-ratios of the human capital component in the eighty-three IVGMM estimates all exceed 3.00, with reference to the standard proposed by Harvey et al. (2016). This indicates the empirical success of the six-factor asset pricing model in explaining the variation in asset returns.
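    The time-series form of such a model is easy to sketch. Below is a compact OLS illustration of a six-factor regression on simulated data; the factor labels follow the abstract, but the numbers are invented and the paper's IVGMM estimation is not reproduced.

```python
import numpy as np

# Compact OLS sketch of a six-factor time-series regression,
#   R_p - R_f = a + b*MKT + s*SMB + h*HML + r*RMW + c*CMA + l*LBR + e,
# where LBR stands for the human capital (labour income) factor.
# All data are simulated; IVGMM estimation is not reproduced here.

rng = np.random.default_rng(1)
T = 600                                        # months
F = rng.standard_normal((T, 6)) * 0.03         # MKT, SMB, HML, RMW, CMA, LBR
beta_true = np.array([1.0, 0.4, 0.2, 0.1, -0.1, 0.3])
excess_ret = F @ beta_true + 0.01 * rng.standard_normal(T)

X = np.column_stack([np.ones(T), F])           # intercept + six factors
coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
resid = excess_ret - X @ coef
se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)) * resid.var(ddof=7))
print("alpha and betas:", np.round(coef, 3))
print("t-ratios:      ", np.round(coef / se, 1))   # cf. the |t| > 3 hurdle
```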

    Fast and precise map-making for massively multi-detector CMB experiments

    Future cosmic microwave background (CMB) polarisation experiments aim to measure an unprecedentedly small signal: the primordial gravitational-wave component of the polarisation field, the B-mode. To achieve this, they will analyse huge datasets, involving years' worth of time-ordered data (TOD) from massively multi-detector focal planes. This creates the need for fast and precise methods to complement the maximum-likelihood (ML) approach in analysis pipelines. In this paper, we investigate fast map-making methods as applied to long-duration, massively multi-detector, ground-based experiments, in the context of the search for B-modes. We focus on two alternative map-making approaches, destriping and TOD filtering, comparing their performance on simulated multi-detector polarisation data. We have written an optimised, parallel destriping code, the DEStriping CARTographer (DESCART), that is generalised for massive focal planes, including the potential effect of cross-correlated 1/f noise in the TOD. We also determine the scaling of computing time for destriping applied to a simulated full-season dataset for a realistic experiment. We find that destriping can outperform filtering in estimating both the large-scale E- and B-mode angular power spectra. In particular, filtering can produce significant spurious B-mode power via E-B mixing. Whilst this can be removed, it contributes to the variance of B-mode bandpower estimates at scales near the primordial B-mode peak. For the experimental configuration we simulate, this affects the possible detection significance for primordial B-modes. Destriping is a viable fast alternative to the full ML approach that does not cause the problems associated with filtering, and it is flexible enough to fit into both ML and Monte-Carlo pseudo-Cl pipelines.
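    A toy one-dimensional destriper conveys the idea: model the TOD as a map signal plus one unknown offset per baseline chunk plus white noise, and solve for the offsets and the map together. The simple alternating solver and all names below are illustrative, not the DESCART implementation.

```python
import numpy as np

# Toy 1-D destriper: TOD = sky[pointing] + per-chunk offset + noise.
# Alternate between binning a map and re-fitting the chunk offsets.
# This little solver is illustrative, not the DESCART code.

rng = np.random.default_rng(2)
npix, nsamp, base_len = 32, 4096, 256
sky = rng.standard_normal(npix)                 # input map
point = rng.integers(0, npix, nsamp)            # pointing: sample -> pixel
offsets_true = np.repeat(rng.standard_normal(nsamp // base_len), base_len)
tod = sky[point] + offsets_true + 0.1 * rng.standard_normal(nsamp)

offsets = np.zeros(nsamp)
for _ in range(20):                             # alternate map <-> offsets
    # Bin the offset-cleaned TOD into a map (per-pixel average).
    map_est = (np.bincount(point, tod - offsets, npix)
               / np.bincount(point, minlength=npix))
    # Re-fit one offset per baseline chunk against the map residual.
    resid = (tod - map_est[point]).reshape(-1, base_len)
    offsets = np.repeat(resid.mean(axis=1), base_len)

print("offset RMS error:", np.round(np.std(offsets - offsets_true), 3))
```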

    Charge variants characterization and release assay development for co-formulated antibodies as a combination therapy

    Combination therapy is a fast-growing strategy to maximize therapeutic benefit to patients. Co-formulation of two or more therapeutic proteins has advantages over the administration of multiple separate medications, including reduced medication errors and greater convenience for patients. Characterization of co-formulated biologics can be challenging because of the high degree of similarity in the physicochemical properties of the co-formulated proteins, especially at different concentrations of the individual components. We present the results of a deamidation study of one monoclonal antibody component (mAb-B) in co-formulated combination antibodies (referred to as COMBO) that contain various ratios of mAb-A and mAb-B. A single deamidation site in the complementarity-determining region of mAb-B was identified as a critical quality attribute (CQA) due to its impact on biological activity. A conventional charge-based method of monitoring mAb-B deamidation presented specificity and robustness challenges, especially when mAb-B was a minor component of the COMBO, making it unsuitable for lot release and stability testing. We developed and qualified a new, quality-control-friendly method based on a single quadrupole Dalton mass detector (QDa) to monitor site-specific deamidation. Our approach can also be used as a multi-attribute method for monitoring other quality attributes in COMBO. This analytical paradigm is applicable to the identification of CQAs in combination therapeutic molecules and to the subsequent development of a highly specific, highly sensitive, and sufficiently robust method for routine monitoring of CQAs in lot release testing and stability studies.
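    The relative-quantitation arithmetic behind such a mass-based assay is simple to sketch: site-specific percent deamidation from extracted-ion peak areas of the native and deamidated (+0.984 Da) peptide. All peak areas and lot labels below are hypothetical.

```python
# Trivial sketch of percent-deamidation arithmetic from extracted-ion
# peak areas of the native and deamidated (+0.984 Da) peptide forms.
# All numbers and lot labels are hypothetical.

def percent_deamidation(area_native, area_deamidated):
    total = area_native + area_deamidated
    return 100.0 * area_deamidated / total if total else 0.0

# Hypothetical peak areas for the mAb-B CDR peptide in two COMBO lots:
for lot, native, deam in [("lot A (1:1)", 9.6e5, 4.1e4),
                          ("lot B (1:4)", 2.3e5, 1.2e4)]:
    print(f"{lot}: {percent_deamidation(native, deam):.1f}% deamidated")
```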