Machine Learning and Integrative Analysis of Biomedical Big Data.
Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) are analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges and exacerbates those associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.
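To make one of the five listed challenges concrete, here is a minimal sketch of handling class imbalance with inverse-frequency class weighting. The dataset, model, and library (scikit-learn) are illustrative assumptions, not tools named by the review.

```python
# Sketch: class imbalance, one of the five challenges listed above.
# Assumes scikit-learn; the synthetic dataset stands in for an
# imbalanced biomedical cohort (e.g., few disease cases, many controls).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Imbalanced toy dataset: roughly 95% majority class.
X, y = make_classification(n_samples=500, weights=[0.95], random_state=0)

# class_weight="balanced" reweights samples inversely to class frequency,
# so the minority class contributes equally to the loss.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X, y)
```

Other strategies the literature covers for this challenge, such as resampling (e.g., SMOTE) or cost-sensitive ensembles, follow the same principle of rebalancing the effective class contributions.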
Online Dispute Resolution Through the Lens of Bargaining and Negotiation Theory: Toward an Integrated Model
[Excerpt] In this article we apply negotiation and bargaining theory to the analysis of online dispute resolution. Our principal objective is to develop testable hypotheses based on negotiation theory that can be used in ODR research. We have not conducted the research necessary to test the hypotheses we develop; however, in a later section of the article we suggest a possible methodology for doing so. There is a vast literature on negotiation and bargaining theory. For the purposes of this article, we realized at the outset that we could use only a small part of that literature in developing a model suitable for empirical testing. We decided to use the behavioral theory of negotiation developed by Richard Walton and Robert McKersie, initially formulated in the 1960s. This theory has stood the test of time. Initially developed to explain union-management negotiations, it has proven useful in analyzing a wide variety of disputes and conflict situations. In constructing their theory, Walton and McKersie built on the work of many previous bargaining theorists, including economists, sociologists, game theorists, and industrial relations scholars. In this article, we have incorporated a consideration of the foundations on which their theory was based. In the concluding section we briefly discuss how other negotiation and bargaining theories might be applied to the analysis of ODR.
Corpus specificity in LSA and Word2vec: the role of out-of-domain documents
Latent Semantic Analysis (LSA) and Word2vec are among the most widely used word embeddings. Despite the popularity of these techniques, the precise mechanisms by which they acquire new semantic relations between words remain unclear. In the present article we investigate whether the capacity of LSA and Word2vec to identify relevant semantic dimensions increases with the size of the corpus. One intuitive hypothesis is that this capacity should increase as the amount of data increases. However, if the corpus grows in topics that are not specific to the domain of interest, the signal-to-noise ratio may weaken. Here we set out to examine and distinguish between these alternative hypotheses. To investigate the effect of corpus specificity and size on word embeddings, we study two ways of progressively eliminating documents: the elimination of random documents vs. the elimination of documents unrelated to a specific task. We show that Word2vec can take advantage of all the documents, obtaining its best performance when trained on the whole corpus. In contrast, specialization of the training corpus (removal of out-of-domain documents), accompanied by a decrease in dimensionality, can increase the quality of LSA word representations while speeding up processing time. Furthermore, we show that specialization without the decrease in LSA dimensionality can produce a strong performance reduction on specific tasks. From a cognitive-modeling point of view, we note that LSA's word-knowledge acquisition may not efficiently exploit higher-order co-occurrences and global relations, whereas Word2vec does.
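The LSA side of this comparison can be sketched in a few lines: build a TF-IDF term-document matrix and factor it with a truncated SVD, so that dropping out-of-domain documents from the corpus and lowering the number of components corresponds to the "specialization with decreased dimensionality" studied above. The library (scikit-learn), toy corpus, and component count are illustrative assumptions, not the paper's setup.

```python
# Sketch of LSA word vectors via TF-IDF + truncated SVD (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy corpus; "specialization" would mean removing documents
# unrelated to the target domain before fitting.
corpus = [
    "the cat sat on the mat",
    "dogs and cats are common pets",
    "stock markets fell sharply today",
    "investors sold shares in the market",
]

# Term-document matrix (documents x terms), TF-IDF weighted.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(corpus)

# LSA: project terms into a low-dimensional latent space.
# Lowering n_components alongside corpus specialization mirrors
# the dimensionality decrease discussed in the abstract.
svd = TruncatedSVD(n_components=2, random_state=0)
svd.fit(X)

# Each row is one vocabulary term's coordinates in the latent space.
word_vectors = svd.components_.T  # shape: (n_terms, n_components)
```

A Word2vec run over the same corpus (e.g., with gensim's `Word2Vec` class) would instead learn vectors from local context windows, which is the mechanism the paper credits with exploiting higher-order co-occurrences.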
An integrative clustering approach combining particle swarm optimization and formal concept analysis
Interactive data exploration with targeted projection pursuit
Data exploration is a vital but often overlooked part of the scientific process, yet few visualisation tools can cope with truly complex data. Targeted Projection Pursuit (TPP) is an interactive data exploration technique that provides an intuitive and transparent interface for data exploration. A prototype has been evaluated quantitatively and found to outperform algorithmic techniques on standard visual analysis tasks.