    CAMUR: Knowledge extraction from RNA-seq cancer data through equivalent classification rules

    Knowledge extraction methods for Next Generation Sequencing data are in high demand. In this work, we focus on RNA-seq gene expression analysis, and specifically on case-control studies with rule-based supervised classification algorithms that build a model able to discriminate cases from controls. State-of-the-art algorithms compute a single classification model that contains few features (genes). In contrast, our goal is to elicit a larger amount of knowledge by computing many classification models, and therefore to identify most of the genes related to the predicted class.
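
    The iterative idea can be sketched with scikit-learn: fit a small rule-like classifier (a shallow decision tree standing in for CAMUR's rule-based learner), record the genes its rules use, mask them, and refit to collect further models of comparable accuracy. The expression matrix X, the labels y, and the accuracy threshold are illustrative assumptions, not details taken from CAMUR.

        # Sketch of the iterative multi-model idea (not the CAMUR implementation):
        # repeatedly fit a rule-like classifier, record the genes it uses, then
        # mask those genes and refit, collecting several near-equivalent models.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        def iterate_models(X, y, min_accuracy=0.8, max_models=10):
            """X: samples x genes expression matrix, y: case/control labels (assumed inputs)."""
            available = np.ones(X.shape[1], dtype=bool)   # genes still usable
            models = []
            for _ in range(max_models):
                cols = np.flatnonzero(available)
                if cols.size == 0:
                    break
                clf = DecisionTreeClassifier(max_depth=3, random_state=0)
                if cross_val_score(clf, X[:, cols], y, cv=5).mean() < min_accuracy:
                    break                                 # remaining genes no longer discriminate
                clf.fit(X[:, cols], y)
                used = cols[np.unique(clf.tree_.feature[clf.tree_.feature >= 0])]
                models.append((clf, used))                # keep the model and the genes in its rules
                available[used] = False                   # exclude those genes in the next round
            return models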

    DAKS: An R Package for Data Analysis Methods in Knowledge Space Theory

    Knowledge space theory is part of psychometrics and provides a theoretical framework for the modeling, assessment, and training of knowledge. It utilizes the idea that some pieces of knowledge may imply others, and is based on order and set theory. We introduce the R package DAKS for performing basic and advanced operations in knowledge space theory. This package implements three inductive item tree analysis algorithms for deriving quasi orders from binary data, the original, corrected, and minimized corrected algorithms, in sample as well as in population quantities. It provides functions for computing population and estimated asymptotic variances of, and one- and two-sample Z tests for, the diff fit measures, and for switching between test item and knowledge state representations. Other features are a function for computing response pattern and knowledge state frequencies, a tool for simulating data (based on a finite mixture latent variable model) and quasi orders, and a Hasse diagram drawing device. We describe the functions of the package and demonstrate their usage with real and simulated data examples.
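
    Since DAKS itself is an R package, the Python sketch below only illustrates the counting idea behind inductive item tree analysis: for a binary response matrix, count the counterexamples to each candidate implication "mastering item j implies mastering item i" and keep the pairs whose counts stay within a tolerance. The response matrix and the tolerance are assumed inputs; the original, corrected, and minimized corrected algorithms refine this basic scheme.

        # Illustrative counterexample counting behind inductive item tree analysis
        # (simplified; the DAKS R package implements the full algorithms).
        import numpy as np

        def candidate_implications(data, tolerance=0):
            """data: subjects x items binary response matrix (assumed input).
            Returns pairs (i, j) read as 'mastering item j implies mastering item i',
            kept when the number of counterexamples (j solved, i not) <= tolerance."""
            data = np.asarray(data)
            n_items = data.shape[1]
            implications = set()
            for i in range(n_items):
                for j in range(n_items):
                    if i == j:
                        continue
                    counterexamples = np.sum((data[:, j] == 1) & (data[:, i] == 0))
                    if counterexamples <= tolerance:
                        implications.add((i, j))
            return implications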

    Adiabatic Quantum State Generation and Statistical Zero Knowledge

    The design of new quantum algorithms has proven to be an extremely difficult task. This paper considers a different approach to the problem, by studying the problem of 'quantum state generation'. This approach provides intriguing links between many different areas: quantum computation, adiabatic evolution, analysis of spectral gaps and ground states of Hamiltonians, rapidly mixing Markov chains, the complexity class statistical zero knowledge, quantum random walks, and more. We first show that many natural candidates for quantum algorithms can be cast as a state generation problem. We define a paradigm for state generation, called 'adiabatic state generation', and develop tools for it, which include methods for implementing very general Hamiltonians and ways to guarantee non-negligible spectral gaps. We use our tools to prove that adiabatic state generation is equivalent to state generation in the standard quantum computing model, and finally we show how to apply our techniques to generate interesting superpositions related to Markov chains.
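
    As a reminder of why non-negligible spectral gaps are the crucial quantity here, one commonly quoted rough form of the adiabatic run-time condition is reproduced below in LaTeX; the exact constants and exponents vary between formulations and are not taken from this paper.

        % Rough adiabatic run-time condition (a common textbook formulation,
        % not this paper's precise statement); gamma(s) is the spectral gap of H(s).
        \[
          T \;\gg\; \frac{\max_{s \in [0,1]} \left\lVert \tfrac{dH(s)}{ds} \right\rVert}
                         {\min_{s \in [0,1]} \gamma(s)^{2}},
          \qquad \gamma(s) = \text{spectral gap of } H(s).
        \]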

    Necessary and Sufficient Conditions on Partial Orders for Modeling Concurrent Computations

    Partial orders are used extensively for modeling and analyzing concurrent computations. In this paper, we define two properties of partially ordered sets, width-extensibility and interleaving-consistency, and show that a partial order can be a valid state-based model: (1) of some synchronous concurrent computation iff it is width-extensible, and (2) of some asynchronous concurrent computation iff it is width-extensible and interleaving-consistent. We also show a duality between the event-based and state-based models of concurrent computations, and give algorithms to convert models between the two domains. When applied to the problem of checkpointing, our theory leads to a better understanding of some existing results and algorithms in the field. It also leads to efficient detection algorithms for predicates whose evaluation requires knowledge of states from all the processes in the system.
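
    To make the state-based view concrete, the sketch below enumerates the down-sets (consistent cuts) of a finite partial order given as a precedence relation; these are exactly the global states through which a computation modeled by that order can pass. The event names and precedence pairs are hypothetical, and the brute-force enumeration is the textbook construction rather than the algorithms developed in the paper.

        # Enumerate the down-sets (consistent global states) of a finite partial order
        # given by a precedence relation; a generic illustration, not the paper's algorithms.
        from itertools import combinations

        def global_states(events, precedes):
            """events: list of event names; precedes: set of (a, b) pairs meaning a happens before b.
            Returns every subset closed under predecessors, i.e. the reachable global states."""
            states = []
            for r in range(len(events) + 1):
                for subset in combinations(events, r):
                    cut = set(subset)
                    # a cut is consistent iff each predecessor of a member is also a member
                    if all(a in cut for (a, b) in precedes if b in cut):
                        states.append(cut)
            return states

        # Hypothetical two-process run: p1 executes a then b, p2 executes c then d,
        # and a message sent at a is received before d, so a also precedes d.
        print(global_states(["a", "b", "c", "d"], {("a", "b"), ("c", "d"), ("a", "d")}))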

    Deriving item features relevance from collaborative domain knowledge

    An item-based recommender system works by computing similarities between items, which can exploit past user interactions (collaborative filtering) or item features (content-based filtering). Collaborative algorithms have been proven to achieve better recommendation quality than content-based algorithms in a variety of scenarios, being more effective in modeling user behaviour. However, they cannot be applied when items have no interactions at all, i.e. cold-start items. Content-based algorithms, which are applicable to cold-start items, often require a lot of feature engineering in order to generate useful recommendations. This issue is especially relevant as the content descriptors become large and heterogeneous. The focus of this paper is on how to use a collaborative model's domain-specific knowledge to build a wrapper feature weighting method which embeds collaborative knowledge in a content-based algorithm. We present a comparative study of different state-of-the-art algorithms and propose a more general model. This machine learning approach to feature weighting shows promising results and high flexibility.
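
    One way to picture the wrapper idea is to learn per-feature weights such that a weighted content-based item similarity approximates the similarity produced by a collaborative model. The gradient-descent least-squares sketch below follows that generic framing; the feature matrix F, the collaborative similarity S_cf, and the learning rate are assumed inputs, and the paper's actual formulation may differ.

        # Generic sketch: learn one weight per feature so that the weighted content
        # similarity F diag(w) F^T approximates a collaborative item-item similarity S_cf.
        import numpy as np

        def learn_feature_weights(F, S_cf, lr=0.01, epochs=200):
            """F: items x features matrix, S_cf: items x items collaborative similarity
            (both assumed given). Returns one weight per feature."""
            n_items, n_features = F.shape
            w = np.ones(n_features)
            for _ in range(epochs):
                S_cb = (F * w) @ F.T                 # content similarity under current weights
                err = S_cb - S_cf                    # elementwise approximation error
                # gradient of 0.5 * ||F diag(w) F^T - S_cf||_F^2 with respect to w
                grad = np.einsum("if,ij,jf->f", F, err, F)
                w -= lr * grad / (n_items ** 2)      # scaled gradient step
            return w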