The physics of brain network structure, function, and control
The brain is a complex organ characterized by heterogeneous patterns of
structural connections supporting unparalleled feats of cognition and a wide
range of behaviors. New noninvasive imaging techniques now allow these patterns
to be carefully and comprehensively mapped in individual humans and animals.
Yet, it remains a fundamental challenge to understand how the brain's
structural wiring supports cognitive processes, with major implications for the
personalized treatment of mental health disorders. Here, we review recent
efforts to meet this challenge that draw on intuitions, models, and theories
from physics, spanning the domains of statistical mechanics, information
theory, and dynamical systems and control. We begin by considering the
organizing principles of brain network architecture instantiated in structural
wiring under constraints of symmetry, spatial embedding, and energy
minimization. We next consider models of brain network function that stipulate
how neural activity propagates along these structural connections, producing
the long-range interactions and collective dynamics that support a rich
repertoire of system functions. Finally, we consider perturbative experiments
and models for brain network control, which leverage the physics of signal
transmission along structural wires to infer intrinsic control processes that
support goal-directed behavior and to inform stimulation-based therapies for
neurological disease and psychiatric disorders. Throughout, we highlight
several open questions in the physics of brain network structure, function, and
control that will require creative efforts from physicists willing to brave the
complexities of living matter.
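The linear network control framework this abstract alludes to can be sketched in a few lines. The connectivity matrix below is invented for illustration; a real analysis would use a tractography-derived structural matrix, and the Gramian-based "average controllability" metric shown here is only one of several measures used in this literature:

```python
import numpy as np

# Hypothetical structural connectivity for a tiny 4-node network; in
# practice A would come from diffusion-imaging tractography.
A = np.array([
    [0.0, 0.5, 0.0, 0.2],
    [0.5, 0.0, 0.3, 0.0],
    [0.0, 0.3, 0.0, 0.4],
    [0.2, 0.0, 0.4, 0.0],
])
# Scale so the spectral radius is below 1 (stable discrete-time dynamics
# x[t+1] = A x[t] + B u[t]).
A = A / (1.0 + np.max(np.abs(np.linalg.eigvals(A))))

# Control input (e.g. stimulation) enters at node 0 only.
B = np.array([[1.0], [0.0], [0.0], [0.0]])

# Truncated discrete-time controllability Gramian:
#   W = sum_k A^k B B^T (A^T)^k
W = np.zeros((4, 4))
Ak = np.eye(4)
for _ in range(100):
    W += Ak @ B @ B.T @ Ak.T
    Ak = A @ Ak

# Average controllability from node 0: trace of the Gramian. Larger
# values mean inputs at that node can push the system into more states
# with less energy.
avg_control = np.trace(W)
```

A study would repeat this with `B` selecting each region in turn and compare the resulting controllability profiles across subjects or conditions.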
Cognitive computational neuroscience
To learn how cognition is implemented in the brain, we must build
computational models that can perform cognitive tasks, and test such models
with brain and behavioral experiments. Cognitive science has developed
computational models of human cognition, decomposing task performance into
computational components. However, its algorithms still fall short of human
intelligence and are not grounded in neurobiology. Computational neuroscience
has investigated how interacting neurons can implement component functions of
brain computation. However, it has yet to explain how those components interact
to explain human cognition and behavior. Modern technologies enable us to
measure and manipulate brain activity in unprecedentedly rich ways in animals
and humans. However, experiments will yield theoretical insight only when
employed to test brain-computational models. It is time to assemble the pieces
of the puzzle of brain computation. Here we review recent work in the
intersection of cognitive science, computational neuroscience, and artificial
intelligence. Computational models that mimic brain information processing
during perceptual, cognitive, and control tasks are beginning to be developed
and tested with brain and behavioral data.Comment: 31 pages, 4 figure
Reverse-engineering biological networks from large data sets
Much of contemporary systems biology owes its success to the abstraction of a
network, the idea that diverse kinds of molecular, cellular, and organismal
species and interactions can be modeled as relational nodes and edges in a
graph of dependencies. Since the advent of high-throughput data acquisition
technologies in fields such as genomics, metabolomics, and neuroscience, the
automated inference and reconstruction of such interaction networks directly
from large sets of activation data, commonly known as reverse-engineering, has
become a routine procedure. Whereas early attempts at network
reverse-engineering focused predominantly on producing maps of system
architectures with minimal predictive modeling, reconstructions now play
instrumental roles in answering questions about the statistics and dynamics of
the underlying systems they represent. Many of these predictions have clinical
relevance, suggesting novel paradigms for drug discovery and disease treatment.
While other reviews focus predominantly on the details and effectiveness of
individual network inference algorithms, here we examine the emerging field as
a whole. We first summarize several key application areas in which inferred
networks have made successful predictions. We then outline the two major
classes of reverse-engineering methodologies, emphasizing that the type of
prediction that one aims to make dictates the algorithms one should employ. We
conclude by discussing whether recent breakthroughs justify the computational
costs of large-scale reverse-engineering sufficiently to admit it as a mainstay
in the quantitative analysis of living systems.Comment: 24 pages, 2 figures. To appear as Chapter 10 of 'Quantitative
Biology: Theory, Computational Methods and Examples of Models'. Brian Munsky,
Lev Tsimring, William S. Hlavacek, editors. MIT Press, 201
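The simplest instance of the reverse-engineering problem the chapter surveys is correlation-based inference: threshold a similarity matrix computed from activation data to recover an interaction graph. The synthetic data and threshold below are illustrative, not from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic activation data: 200 samples of 5 "species". Species 1 is
# driven by species 0 (the ground-truth edge); the rest are independent.
x0 = rng.normal(size=200)
data = np.column_stack([
    x0,
    0.8 * x0 + 0.2 * rng.normal(size=200),
    rng.normal(size=200),
    rng.normal(size=200),
    rng.normal(size=200),
])

# Reverse-engineering in its crudest form: threshold the absolute
# correlation matrix (excluding self-correlations) to get an adjacency
# matrix. Real methods replace this with regression, information-theoretic,
# or dynamical-model-based scores.
corr = np.corrcoef(data, rowvar=False)
adjacency = (np.abs(corr) > 0.5) & ~np.eye(5, dtype=bool)
```

On this toy example the thresholded matrix recovers the single true edge; the review's point is that which scoring rule to use depends on whether one wants a structural map or a predictive dynamical model.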
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
The interpretation of deep learning models is a challenge due to their size,
complexity, and often opaque internal state. In addition, many systems, such as
image classifiers, operate on low-level features rather than high-level
concepts. To address these challenges, we introduce Concept Activation Vectors
(CAVs), which provide an interpretation of a neural net's internal state in
terms of human-friendly concepts. The key idea is to view the high-dimensional
internal state of a neural net as an aid, not an obstacle. We show how to use
CAVs as part of a technique, Testing with CAVs (TCAV), that uses directional
derivatives to quantify the degree to which a user-defined concept is important
to a classification result--for example, how sensitive a prediction of "zebra"
is to the presence of stripes. Using the domain of image classification as a
testing ground, we describe how CAVs may be used to explore hypotheses and
generate insights for a standard image classification network as well as a
medical application.
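The core TCAV computation can be sketched compactly. The paper derives the CAV from a linear classifier trained to separate concept activations from random ones; the difference-of-means direction below is a simplified stand-in for that classifier's normal vector, and all the activations and gradients here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer activations: 50 examples of a "striped" concept and
# 50 random counterexamples, in a 10-dimensional activation space.
concept = rng.normal(loc=1.0, size=(50, 10))
random_ = rng.normal(loc=0.0, size=(50, 10))

# CAV: the paper fits a linear classifier between the two sets; a
# difference-of-means direction is a simplified stand-in for its normal.
cav = concept.mean(axis=0) - random_.mean(axis=0)
cav /= np.linalg.norm(cav)

# Directional derivatives of the class logit w.r.t. the activations for a
# batch of (hypothetical) "zebra" images: project each gradient onto the
# CAV. A positive sign means the concept pushes the prediction up.
logit_grads = rng.normal(loc=0.5, size=(100, 10))
tcav_score = np.mean(logit_grads @ cav > 0)  # fraction with positive sign
```

The TCAV score is thus the fraction of class examples whose prediction is positively sensitive to the concept direction; the paper additionally tests this score against CAVs built from random splits to rule out chance.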
Challenges and Prospects in Vision and Language Research
Language grounded image understanding tasks have often been proposed as a
method for evaluating progress in artificial intelligence. Ideally, these tasks
should test a plethora of capabilities that integrate computer vision,
reasoning, and natural language understanding. However, rather than behaving as
visual Turing tests, recent studies have demonstrated state-of-the-art systems
are achieving good performance through flaws in datasets and evaluation
procedures. We review the current state of affairs and outline a path forward.
A Novel Semantics and Feature Preserving Perspective for Content Aware Image Retargeting
There is an increasing requirement for efficient image retargeting techniques
to adapt the content to various forms of digital media. With rapid growth of
mobile communications and dynamic web page layouts, one often needs to resize
the media content to adapt to the desired display sizes. Given the varied
layouts of web pages and the typically small screens of handheld portable
devices, important content in the original image becomes obscured when it is
resized by uniform scaling. Images therefore need to be resized in a
content-aware manner that automatically discards irrelevant information and
emphasizes the salient features. Several image retargeting techniques have
been proposed that take the content of the input image into account. However,
these techniques fail to be effective across diverse kinds of images and
target sizes. The major problem is that these algorithms cannot process
images with minimal visual distortion while also retaining the meaning the
image conveys. In this dissertation, we present a novel perspective for
content aware image retargeting, which is well implementable in real time. We
introduce a novel method of analysing semantic information within the input
image while also maintaining the important and visually significant features.
We present the various nuances of our algorithm mathematically and logically,
and show that the results improve on state-of-the-art techniques.
Comment: 74 pages, 46 figures. Master's thesis
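The thesis's own algorithm is not specified in the abstract, but the classic content-aware baseline it improves on is seam carving: repeatedly remove the connected path of pixels with minimal gradient energy. A minimal grayscale sketch of one seam removal, assuming that baseline:

```python
import numpy as np

def remove_vertical_seam(img):
    """Remove one minimal-energy vertical seam from a grayscale image
    (the classic seam-carving baseline, not the thesis's own method)."""
    h, w = img.shape
    # Energy map: magnitude of vertical plus horizontal gradients.
    energy = np.abs(np.gradient(img, axis=0)) + np.abs(np.gradient(img, axis=1))

    # Dynamic programming: cumulative minimal energy from the top row,
    # allowing each pixel to connect to its three upper neighbors.
    cost = energy.copy()
    for i in range(1, h):
        left = np.roll(cost[i - 1], 1)
        left[0] = np.inf
        right = np.roll(cost[i - 1], -1)
        right[-1] = np.inf
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)

    # Backtrack the cheapest seam and delete one pixel per row.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return np.array([np.delete(img[i], seam[i]) for i in range(h)])
```

Retargeting to a target width then just applies this in a loop; content-aware variants like the thesis's replace the gradient energy with semantic or saliency-based importance maps.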
Network Analysis of Particles and Grains
The arrangements of particles and forces in granular materials have a complex
organization on multiple spatial scales that ranges from local structures to
mesoscale and system-wide ones. This multiscale organization can affect how a
material responds or reconfigures when exposed to external perturbations or
loading. The theoretical study of particle-level, force-chain, domain, and bulk
properties requires the development and application of appropriate physical,
mathematical, statistical, and computational frameworks. Traditionally,
granular materials have been investigated using particulate or continuum
models, each of which tends to be implicitly agnostic to multiscale
organization. Recently, tools from network science have emerged as powerful
approaches for probing and characterizing heterogeneous architectures across
different scales in complex systems, and a diverse set of methods have yielded
fascinating insights into granular materials. In this paper, we review work on
network-based approaches to studying granular matter and explore the potential
of such frameworks to provide a useful description of these systems and to
enhance understanding of their underlying physics. We also outline a few open
questions and highlight particularly promising future directions in the
analysis and design of granular matter and other kinds of material networks.
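The basic object in this program is the contact network: particles as nodes, with an edge wherever two particles touch. A minimal sketch with an invented 2D packing, computing the kind of particle-level and bulk descriptors the review discusses:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical packing: 30 particle centers in a unit box, equal radii.
centers = rng.uniform(size=(30, 2))
radius = 0.08

# Contact network: an edge wherever two particles overlap or touch,
# i.e. center-to-center distance below twice the radius.
diff = centers[:, None, :] - centers[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))
adjacency = (dist < 2 * radius) & ~np.eye(30, dtype=bool)

# Descriptors at two scales: per-particle contact numbers (local) and
# the mean coordination number (bulk).
degrees = adjacency.sum(axis=1)
mean_degree = degrees.mean()
```

Force-weighted versions replace the binary adjacency with normal-force magnitudes, and mesoscale structure (force chains, communities) is then extracted from that weighted graph.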
Case studies in network community detection
Community structure describes the organization of a network into subgraphs
that contain a prevalence of edges within each subgraph and relatively few
edges across boundaries between subgraphs. The development of
community-detection methods has occurred across disciplines, with numerous and
varied algorithms proposed to find communities. As we present in this Chapter
via several case studies, community detection is not just an "end game" unto
itself, but rather a step in the analysis of network data which is then useful
for furthering research in the disciplinary domain of interest. These
case-study examples arise from diverse applications, ranging from social and
political science to neuroscience and genetics, and we have chosen them to
demonstrate key aspects of community detection and to highlight that community
detection, in practice, should be directed by the application at hand.
Comment: 21 pages, 5 figures
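Most of the methods the chapter surveys score a candidate partition by Newman-Girvan modularity: the fraction of within-community edges minus what a degree-preserving random graph would give. A minimal sketch on a toy graph of two cliques joined by a bridge (the graph and partitions are invented for illustration):

```python
import numpy as np

# Toy graph: two 3-node cliques joined by a single bridge edge (2-3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

def modularity(A, labels):
    """Newman-Girvan modularity Q of a hard partition:
    Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * [same community]."""
    m = A.sum() / 2.0
    k = A.sum(axis=1)
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / (2 * m)) * same).sum() / (2 * m)

good = np.array([0, 0, 0, 1, 1, 1])  # the two cliques
bad = np.array([0, 1, 0, 1, 0, 1])   # split across the cliques
```

Here `modularity(A, good)` is positive while the mismatched partition scores below zero, which is exactly the signal detection algorithms optimize; the chapter's point is that which objective and resolution to use should follow from the application.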
Radiological images and machine learning: trends, perspectives, and prospects
The application of machine learning to radiological images is an increasingly
active research area that is expected to grow in the next five to ten years.
Recent advances in machine learning have the potential to recognize and
classify complex patterns from different radiological imaging modalities such
as x-rays, computed tomography, magnetic resonance imaging and positron
emission tomography imaging. In many applications, machine learning based
systems have shown comparable performance to human decision-making. The
applications of machine learning are the key ingredients of future clinical
decision making and monitoring systems. This review covers the fundamental
concepts behind various machine learning techniques and their applications in
several radiological imaging areas, such as medical image segmentation, brain
function studies and neurological disease diagnosis, as well as computer-aided
systems, image registration, and content-based image retrieval systems.
In addition, we briefly discuss current challenges and future directions for
the application of machine learning in radiological imaging. By giving insight
into how to take advantage of machine-learning-powered applications, we expect
that clinicians will be able to prevent and diagnose diseases more accurately
and efficiently.
Comment: 13 figures
Learning crystal plasticity using digital image correlation: Examples from discrete dislocation dynamics
Digital image correlation (DIC) is a well-established, non-invasive technique
for tracking and quantifying the deformation of mechanical samples under
strain. While it provides an obvious way to observe incremental and aggregate
displacement information, it seems likely that DIC data sets, which after all
reflect the spatially-resolved response of a microstructure to loads, contain
much richer information than has generally been extracted from them. In this
paper, we demonstrate a machine-learning approach to quantifying the prior
deformation history of a crystalline sample based on its response to a
subsequent DIC test. This prior deformation history is encoded in the
microstructure through the inhomogeneity of the dislocation microstructure, and
in the spatial correlations of the dislocation patterns, which mediate the
system's response to the DIC test load. Our domain consists of deformed
crystalline thin films generated by a discrete dislocation plasticity
simulation. We explore the range of applicability of machine learning (ML) for
typical experimental protocols, and as a function of possible size effects and
stochasticity. Plasticity size effects may directly influence the data,
rendering unsupervised techniques unable to distinguish different plasticity
regimes.
Comment: 35 pages, 31 figures
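The abstract's premise is that spatial correlations of the deformation field carry the deformation history. One natural feature for such an ML pipeline is the spatial autocorrelation of a DIC displacement map, computable via the Wiener-Khinchin theorem; the field below is synthetic and the feature choice is an illustrative assumption, not the paper's stated pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical DIC output: a 32x32 incremental displacement field.
field = rng.normal(size=(32, 32))

# Circular spatial autocorrelation via FFT (Wiener-Khinchin): the inverse
# transform of the power spectrum, normalized so zero lag equals 1.
f = field - field.mean()
power = np.abs(np.fft.fft2(f)) ** 2
autocorr = np.real(np.fft.ifft2(power)) / f.size
autocorr /= autocorr[0, 0]

# Flattened correlation map as a feature vector for a downstream
# classifier of prior deformation history.
feature_vector = autocorr.ravel()
```

Radially averaging `autocorr` would give a compact, orientation-independent descriptor of the dislocation-pattern length scales the paper argues mediate the response.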