106 research outputs found
Machine Learning and Integrative Analysis of Biomedical Big Data.
Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) is analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges and exacerbates those associated with single-omics studies. Specialized computational approaches are required to perform integrative analysis of biomedical data acquired from diverse modalities effectively and efficiently. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.
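Two of the challenges listed above, missing data and class imbalance, are often tackled first with simple baselines. As a hedged illustration (not a method from the review; the function and variable names here are invented for this sketch), mean imputation combined with random minority-class oversampling in NumPy might look like:

```python
import numpy as np

def impute_and_balance(X, y, rng=None):
    """Mean-impute missing values, then randomly oversample minority
    classes -- a minimal baseline for two integration challenges
    (missing data and class imbalance)."""
    rng = rng or np.random.default_rng(0)
    X = X.astype(float).copy()
    # Mean imputation: replace NaNs with the per-feature mean.
    col_means = np.nanmean(X, axis=0)
    nan_rows, nan_cols = np.where(np.isnan(X))
    X[nan_rows, nan_cols] = col_means[nan_cols]
    # Random oversampling: duplicate minority samples until balanced.
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    parts_X, parts_y = [X], [y]
    for c, n in zip(classes, counts):
        if n < n_max:
            idx = rng.choice(np.where(y == c)[0], size=n_max - n, replace=True)
            parts_X.append(X[idx])
            parts_y.append(y[idx])
    return np.vstack(parts_X), np.concatenate(parts_y)
```

In practice, imputation and resampling for omics data are usually done with more careful, modality-aware methods; this sketch only shows the shape of the two steps.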
Learning Interpretable Features of Graphs and Time Series Data
Graphs and time series are two of the most ubiquitous representations of modern data. Representation learning of real-world graphs and time-series data is a key component of downstream supervised and unsupervised machine learning tasks such as classification, clustering, and visualization. Because of their inherent high dimensionality, representation learning, i.e., low-dimensional vector-based embedding, of graphs and time-series data is very challenging. Learning interpretable features adds transparency about the roles of the features and facilitates downstream analytics tasks, in addition to maximizing the performance of the downstream machine learning models. In this thesis, we leverage tensor (multidimensional array) decomposition to generate interpretable, low-dimensional feature spaces for graphs and time-series data drawn from three domains: social networks, neuroscience, and heliophysics. We present theoretical models and empirical results on node embedding of social networks, biomarker embedding on fMRI-based brain networks, and prediction and visualization of multivariate time-series-based flaring and non-flaring solar events.
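Tensor decomposition as a route to interpretable low-dimensional features can be sketched with a plain-NumPy CP (PARAFAC) decomposition via alternating least squares. This is the generic textbook formulation, not the thesis's specific models:

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product; rows indexed (i, j) in C order."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def unfold(T, mode):
    """Matricize a 3-way tensor along `mode` (C-order reshape)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, rank, n_iter=300, seed=0):
    """Rank-`rank` CP (PARAFAC) decomposition by alternating least
    squares: T[i,j,k] ~= sum_r A[i,r] * B[j,r] * C[k,r].
    Each factor column is a directly inspectable latent component."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        # Update each factor in turn via the normal equations,
        # holding the other two factors fixed.
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

The factor matrices are what make CP attractive for interpretability: each mode (e.g., nodes, time, subjects) gets its own low-dimensional loading per latent component.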
Feature Selection Based on Sequential Orthogonal Search Strategy
This thesis introduces three new feature selection methods based on a sequential orthogonal search strategy, each addressing a different context of the feature selection problem. The first is a supervised feature selection method called maximum relevance–minimum multicollinearity (MRmMC), which can overcome some shortcomings associated with existing methods that apply the same form of feature selection criterion, especially those based on mutual information. In the proposed method, feature relevance is measured by correlation characteristics based on conditional variance, while redundancy elimination is achieved through multiple correlation assessment using an orthogonal projection scheme. The second method is an unsupervised feature selection based on Locality Preserving Projection (LPP), incorporated into a sequential orthogonal search (SOS) strategy. The locality preserving criterion has proved a successful measure of feature importance in many feature selection methods, but most of them ignore feature correlation and therefore fail to eliminate redundant features. This problem motivated the second method, which evaluates feature importance jointly rather than individually. In this method, the first LPP component, which captures the local largest structure (LLS) of the data, is used as a reference variable to guide the search for significant features; the method is referred to as sequential orthogonal search for local largest structure (SOS-LLS). The third method is also an unsupervised feature selection with essentially the same SOS strategy, but it is specifically designed to be robust to noisy data. As limited work has been reported on feature selection in the presence of attribute noise, the third method attempts to address this gap by further developing the second method.
The third method is designed to deal with attribute noise in the search for significant features; kernel pre-images (KPI) based on kernel PCA replace the first LPP component as the reference variable used in the second method. This scheme is referred to as the sequential orthogonal search for kernel pre-images (SOS-KPI) method. The performance of the three feature selection methods is demonstrated through comprehensive analysis of public real-world datasets with different characteristics and comparative studies with a number of state-of-the-art methods. Results show that each of the proposed methods selects more efficient feature subsets than the other feature selection methods in the comparative studies.
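The core sequential orthogonal search idea, greedily picking the feature whose component orthogonal to the already-selected set best explains the target, then deflating the remaining candidates, can be sketched as follows. This is a generic illustration of the strategy, not the exact MRmMC, SOS-LLS, or SOS-KPI criteria:

```python
import numpy as np

def sequential_orthogonal_select(X, y, k):
    """Greedy forward selection in the spirit of a sequential
    orthogonal search: at each step pick the feature whose component
    orthogonal to the already-selected features is most correlated
    with the target, then deflate all remaining candidates against it.
    Deflation is what suppresses redundant (collinear) features."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    R = X.copy()                       # residual (deflated) features
    selected = []
    for _ in range(k):
        norms = np.linalg.norm(R, axis=0)
        norms[norms < 1e-12] = np.inf  # skip exhausted directions
        score = np.abs(R.T @ y) / norms
        score[selected] = -np.inf      # never re-pick a feature
        j = int(np.argmax(score))
        selected.append(j)
        # Gram-Schmidt step: remove the chosen direction everywhere.
        q = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(q, q @ R)
    return selected
```

A near-duplicate of an already-selected feature loses almost all of its norm after deflation, so it scores poorly at later steps; this is the orthogonal-projection route to redundancy elimination.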
Systems Analytics and Integration of Big Omics Data
A "genotype" is essentially an organism's full hereditary information, which is obtained from its parents. A "phenotype" is an organism's actual observed physical and behavioral properties. These may include traits such as morphology, size, height, eye color, metabolism, etc. One of the pressing challenges in computational and systems biology is genotype-to-phenotype prediction. This is challenging given the amount of data generated by modern Omics technologies. This "Big Data" is so large and complex that traditional data processing applications are not up to the task. Challenges arise in collection, analysis, mining, sharing, transfer, visualization, archiving, and integration of these data. In this Special Issue, there is a focus on the systems-level analysis of Omics data, recent developments in gene ontology annotation, and advances in biological pathways and network biology. The integration of Omics data with clinical and biomedical data using machine learning is explored. This Special Issue covers new methodologies in the context of gene–environment interactions, tissue-specific gene expression, and how external factors or host genetics impact the microbiome
Ranking to Learn and Learning to Rank: On the Role of Ranking in Pattern Recognition Applications
The last decade has seen a revolution in the theory and application of
machine learning and pattern recognition. Through these advancements, variable
ranking has emerged as an active and growing research area and it is now
beginning to be applied to many new problems. The rationale behind this fact is
that many pattern recognition problems are by nature ranking problems. The main
objective of a ranking algorithm is to sort objects according to some criterion,
so that the most relevant items appear early in the produced result list.
Ranking methods can be analyzed from two different methodological perspectives:
ranking to learn and learning to rank. The former aims at studying methods and
techniques to sort objects for improving the accuracy of a machine learning
model. Enhancing a model's performance can be challenging at times. For example,
in pattern classification tasks, different data representations can complicate
and hide the different explanatory factors of variation behind the data. In
particular, hand-crafted features contain many cues that are either redundant
or irrelevant, which reduces the overall accuracy of the classifier.
In such cases, feature selection is used: by producing ranked lists of
features, it helps filter out the unwanted information. Moreover, in real-time
systems (e.g., visual trackers) ranking approaches are used as optimization
procedures which improve the robustness of the system that deals with the high
variability of the image streams that change over time. Conversely,
learning to rank is necessary in the construction of ranking models for
information retrieval, biometric authentication, re-identification, and
recommender systems. In this context, the ranking model's purpose is to sort
objects according to their degrees of relevance, importance, or preference as
defined in the specific application.
Comment: European PhD Thesis. arXiv admin note: text overlap with arXiv:1601.06615, arXiv:1505.06821, arXiv:1704.02665 by other authors
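The learning-to-rank side can be illustrated with a toy pairwise (RankNet-style) linear ranker: fit a scoring function so that items with higher relevance receive higher scores, via logistic loss on all ordered pairs. This is a minimal sketch under assumed names, not a method from the thesis:

```python
import numpy as np

def pairwise_rank_train(X, rel, n_iter=500, lr=0.1):
    """Toy pairwise learning-to-rank: learn a linear score w.x so that
    items with higher relevance `rel` outrank items with lower
    relevance, minimizing logistic loss over ordered pairs."""
    n, d = X.shape
    w = np.zeros(d)
    # All pairs (i, j) where item i should outrank item j.
    pairs = [(i, j) for i in range(n) for j in range(n) if rel[i] > rel[j]]
    diffs = np.array([X[i] - X[j] for i, j in pairs], dtype=float)
    for _ in range(n_iter):
        margins = np.clip(diffs @ w, -50.0, 50.0)
        # Gradient of sum log(1 + exp(-margin)) w.r.t. w.
        grad = -(diffs * (1.0 / (1.0 + np.exp(margins)))[:, None]).sum(axis=0)
        w -= lr * grad / len(pairs)
    return w

def rank_items(X, w):
    """Item indices sorted from highest to lowest predicted relevance."""
    return list(np.argsort(-(X @ w)))
```

The same scoring-then-sorting pattern underlies ranking models for retrieval, re-identification, and recommender systems mentioned above; production systems replace the linear scorer with richer models.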
Application of advanced machine learning techniques to early network traffic classification
The fast-paced evolution of the Internet is drawing a complex context which
imposes demanding requirements to assure end-to-end Quality of Service. The
development of advanced intelligent approaches in networking is envisioning
features that include autonomous resource allocation, fast reaction against
unexpected network events and so on. Internet Network Traffic Classification
constitutes a crucial source of information for Network Management, being decisive
in assisting the emerging network control paradigms. Monitoring traffic flowing
through network devices supports tasks such as network orchestration, traffic
prioritization, network arbitration, and cyberthreat detection, amongst others.
The traditional traffic classifiers became obsolete owing to the rapid evolution
of the Internet. Port-based classifiers suffer significant accuracy losses due to port
masking, while Deep Packet Inspection approaches have severe user-privacy
limitations. The advent of Machine Learning has propelled the application of
advanced algorithms in diverse research areas, and some learning approaches have
proved to be an interesting alternative to the classic traffic classification approaches.
Addressing Network Traffic Classification from a Machine Learning perspective
implies numerous challenges demanding research efforts to achieve feasible
classifiers. In this dissertation, we endeavor to formulate and solve important
research questions in Machine-Learning-based Network Traffic Classification. As a
result of numerous experiments, the knowledge provided in this research constitutes
an engaging case study in which network traffic data from two different
environments are successfully collected, processed, and modeled.
Firstly, we approached the Feature Extraction and Selection processes providing our
own contributions. A Feature Extractor was designed to create Machine-Learning
ready datasets from real traffic data, and a Feature Selection Filter based on fast
correlation is proposed and tested in several classification datasets. Then, the
original Network Traffic Classification datasets are reduced using our Selection
Filter to provide efficient classification models. Many classification models based on
CART Decision Trees were analyzed, exhibiting excellent outcomes in identifying
various Internet applications. The experiments presented in this research comprise
a comparison amongst ensemble learning schemes, an exploratory study on Class
Imbalance and its solutions, and an analysis of IP-header predictors for early traffic
classification. This thesis is presented in the form of a compendium of JCR-indexed
scientific manuscripts; one conference paper is also included.
In the present work we study a wide range of learning approaches employing the
most advanced methodologies in Machine Learning. As a result, we identify the
strengths and weaknesses of these algorithms, providing our own solutions to
overcome the observed limitations. In short, this thesis shows that Machine
Learning offers advanced techniques that open promising prospects in
Internet Network Traffic Classification.
Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática. Doctorado en Tecnologías de la Información y las Telecomunicaciones.
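The abstract does not specify its "Feature Selection Filter based on fast correlation"; a common instantiation of fast correlation-based filtering, assumed here in the style of FCBF with symmetrical uncertainty over discrete flow features, can be sketched as:

```python
import numpy as np
from collections import Counter

def entropy(v):
    """Shannon entropy of a discrete sequence, in bits."""
    n = len(v)
    return -sum((c / n) * np.log2(c / n) for c in Counter(v).values())

def symmetrical_uncertainty(a, b):
    """SU(a, b) = 2 * I(a; b) / (H(a) + H(b)), in [0, 1]."""
    ha, hb = entropy(a), entropy(b)
    if ha + hb == 0:
        return 0.0
    mi = ha + hb - entropy(list(zip(a, b)))   # mutual information
    return 2.0 * mi / (ha + hb)

def fast_correlation_filter(X, y, delta=0.1):
    """FCBF-style filter: keep features whose SU with the class label
    exceeds `delta`, then drop any feature whose SU with an already-kept
    (predominant) feature exceeds its SU with the class."""
    d = len(X[0])
    cols = [[row[j] for row in X] for j in range(d)]
    su_y = [symmetrical_uncertainty(c, y) for c in cols]
    order = sorted((j for j in range(d) if su_y[j] > delta),
                   key=lambda j: -su_y[j])
    kept = []
    for j in order:
        if all(symmetrical_uncertainty(cols[j], cols[k]) < su_y[j]
               for k in kept):
            kept.append(j)
    return kept
```

Applied to flow records, such a filter discards both irrelevant header fields (low SU with the traffic class) and redundant ones (high SU with an already-kept field) before the CART models are trained.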
Simple but Not Simplistic: Reducing the Complexity of Machine Learning Methods
Programa Oficial de Doutoramento en Computación. 5009V01
[Abstract]
The advent of Big Data and the explosion of the Internet of Things has brought
unprecedented challenges to Machine Learning researchers, making the learning task
more complex. Real-world machine learning problems usually have inherent complexities,
such as the intrinsic characteristics of the data, large numbers of instances,
high input dimensionality, dataset shift, etc. All these aspects matter, and call
for new models that can confront these situations. Thus, in this thesis, we have
addressed all these issues, simplifying the machine learning process in the current
scenario. First, we carry out a complexity analysis to see how it influences the
classification task, and whether a prior feature selection step might result in a
decrease of that complexity. Then, we address the process of simplifying learning
with the divide-and-conquer philosophy of the distributed approach. Later, we aim
to reduce the complexity of the feature selection preprocessing through the same
philosophy. Finally, we opt for a different approach following the current philosophy
of Edge Computing, which allows the data produced by Internet of Things devices
to be processed closer to where they were created. The proposed approaches have
demonstrated their capability to reduce the complexity of traditional machine learning
algorithms, and thus it is expected that the contribution of this thesis will open
the doors to the development of new machine learning methods that are simpler,
more robust, and more computationally efficient
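The divide-and-conquer simplification described above can be sketched generically: partition the training data, fit an independent cheap learner on each chunk, and combine the learners by majority vote. The nearest-centroid base learner here is my own choice for illustration, not necessarily the one used in the thesis:

```python
import numpy as np

def fit_centroids(X, y):
    """Nearest-centroid base learner: one mean vector per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_centroids(model, X):
    classes, cents = model
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

def divide_and_conquer_fit(X, y, n_parts=4, seed=0):
    """Divide-and-conquer sketch: shuffle, split the data into
    `n_parts` chunks, and train an independent learner per chunk.
    Each chunk could be processed on a different node."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    return [fit_centroids(X[p], y[p]) for p in np.array_split(idx, n_parts)]

def ensemble_predict(models, X):
    """Combine the per-chunk learners by majority vote."""
    votes = np.stack([predict_centroids(m, X) for m in models])
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

Because each base learner only ever sees one chunk, training cost and memory per node drop roughly by a factor of `n_parts`, which is the complexity reduction the distributed approach is after.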
A practical view of large-scale classification: feature selection and real-time classification
Unpublished doctoral thesis, Universidad Autónoma de Madrid, Escuela Politécnica Superior, May 201