250 research outputs found
Goal-directed cross-system interactions in brain and deep learning networks
Deep neural networks (DNNs) have recently emerged as promising models of the mammalian ventral visual stream. However, how the ventral stream adapts to various goal-directed influences and coordinates with higher-level brain regions during learning remains poorly understood. By incorporating top-down influences involving attentional cues, linguistic labels and novel category learning into DNN models, the thesis offers a theoretical modelling account of how the tasks we perform shape representations across levels of the models and in related brain regions, including the ventral visual stream, the hippocampus (HPC) and the ventromedial prefrontal cortex (vmPFC).
The thesis includes three main contributions. In the first contribution, I developed a goal-directed attention mechanism that extends a general-purpose DNN with the ability to reconfigure itself to better suit the current task goal, much like the prefrontal cortex (PFC) modulates activity along the ventral stream.
In the second contribution, I uncovered how linguistic labelling shapes semantic representation by amending an existing DNN to predict both the meaning and the categorical label of an object. Supported by simulation results involving fine-grained and coarse-grained labels, I concluded that differences in label use, whether across languages or levels of expertise, manifest as differences in the semantic representations that support label discrimination.
In the third contribution, I aimed to better understand cross-brain mechanisms in a novel learning task by combining insights on labelling and attention obtained from the preceding efforts. Integrating a DNN with a novel clustering model built on SUSTAIN, the proposed account captures human category learning behaviour and the underlying neural mechanisms across multiple interacting brain areas, including the HPC, vmPFC and the ventral visual stream.
By extending models of the ventral stream to incorporate goal-directed cross-system coordination, I hope the thesis can inform our understanding of the neurobiology supporting object recognition and category learning, which in turn may help advance the design of deep learning models.
Tagba Tone: a case of tier hierarchization
Tagba (Senufo) is a tone language whose tonal patterns appear quite regular at first glance. However, these patterns are difficult to model under standard autosegmental assumptions about the hierarchical placement of the skeleton, segments and tones. In this paper, we present Tagba tonology and demonstrate that it can be accounted for by readjusting the hierarchization of tiers. From this, we argue that the proposed hierarchy, rather than being universal, is parametric and language-specific.
Constructing Languages to Explore Theoretical Principles
The construction of languages has always been related to linguistics. Most of these initiatives address real scientific questions, albeit from a non-academic point of view. The fact that Ferdinand de Saussure's own brother, René de Saussure, wrote a theoretical essay on the construction of the Esperanto word (de Saussure 1914) is an amusing illustration of this. In this paper, we propose a method inspired by experimental archaeology, in which an experiment consists in trying to reproduce an observed artifact using a given construction method. The equivalent approach in linguistics is the generation of linguistic systems from explicitly formulated principles. Trying to generate similar systems pushes the linguist to define explicitly the principles that are needed and to explore all their consequences. In this context, we show that the use of notions induced by the observation of natural languages leads to a certain degree of circularity, and that it is therefore more interesting to explore a priori principles based on very general assumptions.
The Costs and Benefits of Goal-Directed Attention in Deep Convolutional Neural Networks
People deploy top-down, goal-directed attention to accomplish tasks, such as
finding lost keys. By tuning the visual system to relevant information sources,
object recognition can become more efficient (a benefit) and more biased toward
the target (a potential cost). Motivated by selective attention in
categorisation models, we developed a goal-directed attention mechanism that
can process naturalistic (photographic) stimuli. Our attention mechanism can be
incorporated into any existing deep convolutional neural network (DCNN). The
processing stages in DCNNs have been related to the ventral visual stream. In that
light, our attentional mechanism incorporates top-down influences from
prefrontal cortex (PFC) to support goal-directed behaviour. Akin to how
attention weights in categorisation models warp representational spaces, we
introduce a layer of attention weights to the mid-level of a DCNN that amplify
or attenuate activity to further a goal. We evaluated the attentional mechanism
using photographic stimuli, varying the attentional target. We found that
increasing goal-directed attention has benefits (increasing hit rates) and
costs (increasing false alarm rates). At a moderate level, attention improves
sensitivity (i.e., increases d') at only a moderate increase in bias
for tasks involving standard images, blended images, and natural adversarial
images chosen to fool DCNNs. These results suggest that goal-directed attention
can reconfigure general-purpose DCNNs to better suit the current task goal,
much like PFC modulates activity along the ventral stream. In addition to being
more parsimonious and brain-consistent, the mid-level attention approach
performed better than a standard machine learning approach for transfer
learning, namely retraining the final network layer to accommodate the new
task.
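The mid-level attention layer described above can be sketched as channel-wise multiplicative weights applied to a convolutional activation map, amplifying goal-relevant channels and attenuating irrelevant ones. This is a minimal NumPy illustration of the general idea, not the paper's trained mechanism; the channel count and weight values are arbitrary placeholders.

```python
import numpy as np

def apply_goal_attention(activations, attention_weights):
    """Scale each feature channel of a mid-level activation map.

    activations: array of shape (channels, height, width)
    attention_weights: array of shape (channels,); values > 1 amplify
        a channel's activity, values < 1 attenuate it.
    """
    return activations * attention_weights[:, None, None]

rng = np.random.default_rng(0)
acts = rng.random((8, 4, 4))       # toy mid-level activation map

w = np.ones(8)                      # weight of 1.0 leaves a channel unchanged
w[2] = 3.0                          # amplify a (hypothetically) goal-relevant channel
w[5] = 0.2                          # attenuate a (hypothetically) irrelevant channel
out = apply_goal_attention(acts, w)
```

In a real DCNN the same elementwise scaling would be inserted between two convolutional stages, with the weights set according to the current attentional target rather than by hand.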
A note on the strength of vowels
This paper is a modest contribution to the understanding of vocalic strength. Our aim is to show that the strength of consonants and the strength of vowels can be unified. To this end, we propose that the only factor of strength is length. More precisely: branching segments are stronger, and segments sharing their positions with other segments are weaker. We discuss several examples of vowel-related phenomena that illustrate this strength hierarchy.
Effects of ulinastatin and docetaxel on breast tumor growth and expression of IL-6, IL-8, and TNF-α
<p>Abstract</p> <p>Objective</p> <p>This study investigated the effects of ulinastatin (UTI) and docetaxel (Taxotere, TAX) on tumor growth and expression of interleukin-6 (IL-6), interleukin-8 (IL-8), and tumor necrosis factor-α (TNF-α) in breast cancer.</p> <p>Methods</p> <p>MDA-MB-231 human breast carcinoma cells were cultured in vitro and injected into nude mice to establish breast tumor xenografts in vivo. Cultured cells and tumor-bearing mice were randomly divided into four groups for treatment with TAX, UTI, and TAX+UTI. The effects of these drug treatments on cell proliferation and apoptosis were measured using the MTT assay and the Annexin V/propidium iodide (PI) double-staining method, respectively. IL-6, IL-8, and TNF-α expression levels were determined by measuring mRNA transcripts in cultured cells by RT-PCR and cytokine proteins in solid tumors using immunohistochemistry.</p> <p>Results</p> <p>UTI, TAX, and UTI+TAX inhibited the growth of MDA-MB-231 cells in vitro and of tumors in vivo. These drugs, particularly when used in combination, promoted tumor cell apoptosis and down-regulated the expression of the IL-6, IL-8, and TNF-α cytokines.</p> <p>Conclusion</p> <p>Both UTI and TAX inhibited the growth of MDA-MB-231 breast carcinoma cells. UTI enhanced the inhibitory effect of TAX by a mechanism consistent with the down-regulated expression of IL-6, IL-8, and TNF-α.</p>
Routing-Guided Learned Product Quantization for Graph-Based Approximate Nearest Neighbor Search
Given a vector dataset and a query vector, graph-based
Approximate Nearest Neighbor Search (ANNS) aims to build a proximity graph (PG)
as an index of the dataset and approximately return the vectors with minimum
distances to the query by searching over the PG index. It suffers at large
scale because a PG with full vectors is too large to fit into memory, e.g., a
billion-scale dataset in 128 dimensions would consume nearly 600 GB of memory.
To solve this, Product Quantization (PQ)
integrated graph-based ANNS is proposed to reduce the memory usage, using
smaller compact codes of quantized vectors in memory instead of the large
original vectors. Existing PQ methods do not consider the important routing
features of PG, resulting in low-quality quantized vectors that affect the
ANNS's effectiveness. In this paper, we present an end-to-end Routing-guided
learned Product Quantization (RPQ) for graph-based ANNS. It consists of (1) a
\textit{differentiable quantizer} used to make the standard discrete PQ
differentiable so as to suit the back-propagation of end-to-end learning, (2) a
\textit{sampling-based feature extractor} used to extract neighborhood and
routing features of a PG, and (3) a \textit{multi-feature joint training
module} with two types of feature-aware losses to continuously optimize the
differentiable quantizer. As a result, the inherent features of a PG would be
embedded into the learned PQ, generating high-quality quantized vectors.
Moreover, we integrate our RPQ with the state-of-the-art DiskANN and existing
popular PGs to improve their performance. Comprehensive experiments on
real-world large-scale datasets (from 1M to 1B) demonstrate RPQ's superiority,
e.g., a 1.7-4.2x improvement in QPS at the same recall@10 of 95\%.
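For context, standard product quantization, the discrete baseline that RPQ makes differentiable and routing-aware, splits each vector into subspaces and replaces each sub-vector with the index of its nearest codebook centroid. The sketch below is a minimal NumPy implementation of that baseline only, not of RPQ itself; the subspace count, centroid count, and toy dataset are illustrative choices.

```python
import numpy as np

def train_pq(data, n_subspaces=4, n_centroids=16, n_iter=10, seed=0):
    """Train a product quantizer: split vectors into subspaces and
    run a simple k-means in each subspace to build its codebook."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    sub_d = d // n_subspaces
    codebooks = []
    for s in range(n_subspaces):
        sub = data[:, s * sub_d:(s + 1) * sub_d]
        centroids = sub[rng.choice(n, n_centroids, replace=False)].copy()
        for _ in range(n_iter):
            dists = ((sub[:, None, :] - centroids[None]) ** 2).sum(-1)
            assign = dists.argmin(1)
            for c in range(n_centroids):
                pts = sub[assign == c]
                if len(pts):
                    centroids[c] = pts.mean(0)
        codebooks.append(centroids)
    return codebooks

def encode(data, codebooks):
    """Encode each vector as one centroid index per subspace."""
    n_sub = len(codebooks)
    sub_d = data.shape[1] // n_sub
    codes = np.empty((data.shape[0], n_sub), dtype=np.uint8)
    for s, cb in enumerate(codebooks):
        sub = data[:, s * sub_d:(s + 1) * sub_d]
        codes[:, s] = ((sub[:, None, :] - cb[None]) ** 2).sum(-1).argmin(1)
    return codes

rng = np.random.default_rng(1)
data = rng.random((200, 32)).astype(np.float32)
codebooks = train_pq(data)
codes = encode(data, codebooks)
# Each 32-dim float32 vector (128 bytes) compresses to 4 one-byte codes,
# which is the memory saving that makes billion-scale graph-based ANNS feasible.
```

RPQ's contribution is to train such a quantizer end-to-end with losses derived from the proximity graph's neighborhood and routing features, rather than with the pure reconstruction objective this sketch implies.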