Autism research: An objective quantitative review of progress and focus between 1994 and 2015
The nosology and epidemiology of autism have undergone transformation following the consolidation of once disparate disorders under the umbrella diagnosis, autism spectrum disorder. Despite this re-conceptualization, research initiatives, including the NIMH's Research Domain Criteria and Precision Medicine, highlight the need to bridge psychiatric and psychological classification methodologies with biomedical techniques. Combining traditional bibliometric co-word techniques with tenets of graph theory and network analysis, this article provides an objective thematic review of research between 1994 and 2015 to consider its evolution and focus. Results illustrate growth in autism research since 2006, with a nascent focus on physiology. However, modularity and citation analytics demonstrate the dominance of subjective psychological or psychiatric constructs, which may impede progress in the identification and stratification of biomarkers as endorsed by new research initiatives.
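The co-word technique mentioned in this abstract builds a network whose nodes are keywords and whose edge weights count how often two keywords appear in the same document. A minimal sketch of that construction, using invented keyword sets rather than the article's actual corpus:

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword sets extracted from three abstracts.
docs = [
    {"autism", "biomarkers", "physiology"},
    {"autism", "biomarkers", "diagnosis"},
    {"autism", "diagnosis", "psychiatry"},
]

# Co-word edge weight = number of documents in which two keywords co-occur.
edges = Counter()
for keywords in docs:
    for a, b in combinations(sorted(keywords), 2):
        edges[(a, b)] += 1

print(edges[("autism", "biomarkers")])  # co-occurs in 2 documents
```

The resulting weighted network is what modularity and other graph analytics are then run on to identify thematic clusters.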
The Importance of Being Clustered: Uncluttering the Trends of Statistics from 1970 to 2015
In this paper we retrace the recent history of statistics by analyzing all the papers published in five prestigious statistical journals since 1970, namely: Annals of Statistics, Biometrika, Journal of the American Statistical Association, Journal of the Royal Statistical Society Series B, and Statistical Science. The aim is to construct a kind of "taxonomy" of the statistical papers by organizing and clustering them into main themes. In this sense, being identified in a cluster means being important enough to be uncluttered in the vast and interconnected world of statistical research. Since the main statistical research topics are naturally born, evolve, or die over time, we also develop a dynamic clustering strategy, where a group in one time period is allowed to migrate or to merge into different groups in the following one. Results show that statistics is a very dynamic and evolving science, stimulated by the rise of new research questions and types of data.
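The dynamic strategy described above has to decide, period by period, which current cluster continues which earlier one. A toy sketch of such cluster matching via Jaccard overlap of keyword sets (the threshold and all cluster contents are illustrative assumptions, not the paper's actual procedure):

```python
def match_clusters(prev, curr, threshold=0.3):
    """Link each current-period cluster to the previous-period cluster
    with the highest Jaccard overlap; below the threshold, treat the
    cluster as a newborn topic (returned as None)."""
    links = {}
    for name, members in curr.items():
        best, score = None, 0.0
        for pname, pmembers in prev.items():
            j = len(members & pmembers) / len(members | pmembers)
            if j > score:
                best, score = pname, j
        links[name] = best if score >= threshold else None
    return links

prev = {"bayes": {"mcmc", "prior", "gibbs"}, "survival": {"hazard", "cox"}}
curr = {"comp-bayes": {"mcmc", "prior", "variational"}, "ml": {"lasso", "boosting"}}
print(match_clusters(prev, curr))  # {'comp-bayes': 'bayes', 'ml': None}
```

A migrating topic links to one predecessor; merges show up as several current clusters linking to the same one.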
Graphs in machine learning: an introduction
Graphs are commonly used to characterise interactions between objects of interest. Because they are based on a straightforward formalism, they are used in many scientific fields, from computer science to the historical sciences. In this paper, we give an introduction to some graph-based learning methods, covering both unsupervised and supervised approaches. Unsupervised learning algorithms usually aim at visualising graphs in latent spaces and/or clustering the nodes; both focus on extracting knowledge from graph topologies. While most existing techniques are only applicable to static graphs, where edges do not evolve through time, recent developments have shown that they can be extended to deal with evolving networks. In a supervised context, one generally aims at inferring labels or numerical values attached to nodes using both the graph and, when they are available, node characteristics. Balancing the two sources of information can be challenging, especially as they can disagree locally or globally. In both contexts, supervised and unsupervised, data can be relational (augmented with one or several global graphs) as described above, or graph valued. In the latter case, each object of interest is given as a full graph (possibly completed by other characteristics), and natural tasks include graph clustering (producing clusters of graphs rather than clusters of nodes in a single graph), graph classification, etc.

1 Real networks

One of the first practical studies on graphs can be dated back to the original work of Moreno [51] in the 1930s. Since then, there has been a growing interest in graph analysis, associated with strong developments in the modelling and processing of these data. Graphs are now used in many scientific fields. In biology [54, 2, 7], for instance, metabolic networks can describe pathways of biochemical reactions [41], while in the social sciences networks are used to represent relation ties between actors [66, 56, 36, 34]. Other examples include power grids [71] and the web [75]. Recently, networks have also been considered in other areas such as geography [22] and history [59, 39]. In machine learning, networks are seen as powerful tools to model problems in order to extract information from data and for prediction purposes. This is the object of this paper; for more complete surveys, we refer to [28, 62, 49, 45]. In this section, we introduce notations and highlight properties shared by most real networks. In Section 2, we then consider methods aiming at extracting information from a unique network, focusing in particular on clustering methods where the goal is to find clusters of vertices. Finally, in Section 3, techniques that take a series of networks into account, where each network i
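The node-clustering task this abstract emphasises can be illustrated with a minimal spectral bisection sketch: the sign pattern of the Laplacian's second eigenvector (the Fiedler vector) splits a graph into two communities. This is one generic technique, not necessarily any specific method surveyed in the paper, and the toy graph is invented:

```python
import numpy as np

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the bridge edge (2,3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A  # unnormalised graph Laplacian D - A

# Eigenvectors come back sorted by eigenvalue; column 1 is the Fiedler vector.
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]
clusters = (fiedler > 0).astype(int)
print(clusters)  # nodes 0-2 in one cluster, nodes 3-5 in the other
```

For more than two clusters, the usual extension embeds nodes via the first k eigenvectors and runs k-means in that latent space.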
Numerical Simulations of the Dark Universe: State of the Art and the Next Decade
We present a review of the current state of the art of cosmological dark
matter simulations, with particular emphasis on the implications for dark
matter detection efforts and studies of dark energy. This review is intended
both for particle physicists, who may find the cosmological simulation
literature opaque or confusing, and for astrophysicists, who may not be
familiar with the role of simulations for observational and experimental probes
of dark matter and dark energy. Our work is complementary to the contribution
by M. Baldi in this issue, which focuses on the treatment of dark energy and
cosmic acceleration in dedicated N-body simulations. Truly massive dark
matter-only simulations are being conducted on national supercomputing centers,
employing from several billion to over half a trillion particles to simulate
the formation and evolution of cosmologically representative volumes (cosmic
scale) or to zoom in on individual halos (cluster and galactic scale). These
simulations cost millions of core-hours, require tens to hundreds of terabytes
of memory, and use up to petabytes of disk storage. The field is quite
internationally diverse, with top simulations having been run in China, France,
Germany, Korea, Spain, and the USA. Predictions from such simulations touch on
almost every aspect of dark matter and dark energy studies, and we give a
comprehensive overview of this connection. We also discuss the limitations of
the cold and collisionless DM-only approach, and describe in some detail
efforts to include different particle physics as well as baryonic physics in
cosmological galaxy formation simulations, including a discussion of recent
results highlighting how the distribution of dark matter in halos may be
altered. We end with an outlook for the next decade, presenting our view of how
the field can be expected to progress. (abridged)

Comment: 54 pages, 4 figures, 3 tables; invited contribution to the special issue "The next decade in Dark Matter and Dark Energy" of the new Open Access journal "Physics of the Dark Universe". Replaced with accepted version.
Dynamic clustering of time series with Echo State Networks
In this paper we introduce a novel methodology for the unsupervised analysis of time series, based upon the iterative implementation of a clustering algorithm embedded into the evolution of a recurrent Echo State Network. The main features of the temporal data are captured by the dynamical evolution of the network states, which are then subject to a clustering procedure. We apply the proposed algorithm to time series coming from records of eye movements, called saccades, which are recorded for the diagnosis of a neurodegenerative form of ataxia. This is a hard classification problem, since saccades from patients at an early stage of the disease are practically indistinguishable from those of healthy subjects. The unsupervised clustering algorithm embedded within the recurrent network produces more compact clusters, compared to conventional clustering of static data, and provides a source of information that could aid diagnosis and assessment of the disease.

Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
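The core idea above, using a reservoir's state as a fixed-length representation of a variable-length time series before clustering, can be sketched as follows. The reservoir size, leak rate, and spectral radius here are illustrative assumptions, not the paper's settings, and nearest-distance comparison stands in for the full clustering procedure:

```python
import numpy as np

rng = np.random.default_rng(42)
n_res = 50

# Fixed random reservoir; scaling to spectral radius < 1 encourages
# the echo state property (fading memory of past inputs).
W_in = rng.uniform(-1, 1, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()

def final_state(series, leak=0.3):
    """Drive the reservoir with a 1-D series; the last leaky-tanh state
    serves as a fixed-length feature vector summarising its dynamics."""
    x = np.zeros(n_res)
    for u in series:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u)
    return x

t = np.linspace(0, 8 * np.pi, 400)
slow_a = final_state(np.sin(t) + 0.01 * rng.standard_normal(t.size))
slow_b = final_state(np.sin(t) + 0.01 * rng.standard_normal(t.size))
fast = final_state(np.sin(2 * t))

# Series with the same underlying dynamics land close together in state
# space, while different dynamics land far apart, so a clustering step
# (e.g. k-means on these vectors) can separate them.
assert np.linalg.norm(slow_a - slow_b) < np.linalg.norm(slow_a - fast)
```

In the paper's setting the clustering is applied iteratively to the evolving network states rather than once to the final state, but the representation step is the same in spirit.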
Review of analytical instruments for EEG analysis
Since it was first used in 1926, EEG has been one of the most useful instruments of neuroscience. In order to start using EEG data, we need not only the EEG apparatus but also analytical tools and the skills to understand what our data mean. This article describes several classical analytical tools as well as newer ones that appeared only a few years ago. We hope it will be useful for those researchers who have only started working in the field of cognitive EEG.