
    Generalization of the Lee-O'Sullivan List Decoding for One-Point AG Codes

    We generalize the list decoding algorithm for Hermitian codes proposed by Lee and O'Sullivan, based on Gröbner bases, to general one-point AG codes, under an assumption weaker than the one used by Beelen and Brander. Our generalization enables us to apply the fast algorithm for computing a Gröbner basis of a module proposed by Lee and O'Sullivan, which was not possible in another generalization by Lax.
Comment: article.cls, 14 pages, no figures. The order of authors was changed. To appear in Journal of Symbolic Computation. This is an extended journal version of our earlier conference paper arXiv:1201.624

    On the evaluation codes given by simple d-sequences

    Plane valuations at infinity are classified into five types. Valuations of one of these types determine weight functions which take values in semigroups of Z^2. These semigroups are generated by ÎŽ-sequences in Z^2. We introduce simple ÎŽ-sequences in Z^2 and study the evaluation codes of maximal length that they define. These codes are geometric and come from order domains. We give a bound on their minimum distance which improves the Andersen–Geil one. We also give coset bounds for the involved codes
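The general construction behind such evaluation codes, namely fixing a set of points and a set of functions and letting codewords be the tuples of function values, can be sketched in a few lines. The following toy example over GF(7) uses plain monomials rather than the ÎŽ-sequence order domains studied in the paper; the field size, point set, and monomial basis are all illustrative assumptions.

```python
# A toy evaluation code over GF(7): codewords are evaluations of
# low-degree monomials at a fixed set of plane points. Hypothetical
# illustration of the general construction, not the paper's codes.
Q = 7  # field size (prime, so arithmetic is just mod Q)

points = [(x, y) for x in range(Q) for y in range(Q)]  # all of GF(7)^2
monomials = [(0, 0), (1, 0), (0, 1), (1, 1)]           # exponent pairs (a, b)

def evaluate(coeffs):
    """Encode a message (one coefficient per monomial) into a codeword."""
    return [sum(c * pow(x, a, Q) * pow(y, b, Q)
                for c, (a, b) in zip(coeffs, monomials)) % Q
            for (x, y) in points]

word = evaluate([1, 2, 0, 3])    # codeword of length 49 over GF(7)
zero = evaluate([0, 0, 0, 0])    # the zero codeword
weight = sum(1 for s in word if s != 0)  # Hamming weight
```

Because evaluation is linear in the coefficients, the resulting code is linear, and for toy parameters like these its minimum distance can simply be brute-forced over all messages.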

    Neuroimaging investigations of cortical specialisation for different types of semantic knowledge

    Embodied theories propose that semantic knowledge is grounded in motor and perceptual experiences. This leads to two questions: (1) whether the neural underpinnings of perception are also necessary for semantic cognition; (2) how biases towards different sensorimotor experiences cause brain regions to specialise for particular types of semantic information. This thesis tackles these questions in a series of neuroimaging and behavioural investigations. Regarding question 1, strong embodiment theory holds that semantic representation is re-enactment of the corresponding experiences, and that brain regions for perception are necessary for comprehending modality-specific concepts. In contrast, the weak embodiment view argues that re-enactment may not be necessary, and that areas adjacent to perceptual regions may be sufficient to support semantic representation. In the particular case of motion concepts, lateral occipitotemporal cortex (LOTC) has long been identified as an important area, but the roles of its different subregions are still uncertain. Chapter 3 examined how different parts of LOTC responded to written descriptions of motion and static events, using multiple analysis methods. A series of anterior-to-posterior subregions were analysed with univariate, multivariate pattern analysis (MVPA), and psychophysiological interaction (PPI) analyses. MVPA revealed the strongest decoding effects for motion vs. static events in the posterior parts of LOTC, including both the visual motion area (V5) and posterior middle temporal gyrus (pMTG). In contrast, only the middle portion of LOTC showed increased activation for motion sentences in univariate analyses. PPI analyses showed increased functional connectivity between posterior LOTC and the multiple demand network for motion events. 
These findings suggest that posterior LOTC, which overlapped with the motion perception region V5, is selectively involved in comprehending motion events, while the anterior part of LOTC contributes to general semantic processing. Regarding question 2, the hub-and-spoke theory suggests that the anterior temporal lobe (ATL) acts as a hub, using inputs from modality-specific regions to construct multimodal concepts. However, some researchers propose temporoparietal cortex (TPC) as an additional hub, specialised in processing and integrating interaction and contextual information (e.g., for actions and locations). These hypotheses are summarised as the "dual-hub theory", and different aspects of this theory were investigated in Chapters 4 and 5. Chapter 4 focuses on taxonomic and thematic relations. Taxonomic relations (or categorical relations) occur when two concepts belong to the same category (e.g., ‘dog’ and ‘wolf’ are both canines). In contrast, thematic relations (or associative relations) refer to situations in which two concepts co-occur in events or scenes (e.g., ‘dog’ and ‘bone’), focusing on the interaction or association between concepts. Some studies have indicated ATL specialisation for taxonomic relations and TPC specialisation for thematic relations, but others have reported inconsistent or even converse results. Chapter 4 therefore first conducted an activation likelihood estimation (ALE) meta-analysis of neuroimaging studies contrasting taxonomic and thematic relations. This found that thematic relations reliably engage action and location processing regions (left pMTG and SMG), while taxonomic relations only showed consistent effects in the right occipital lobe. A primed semantic judgement task was then used to test the dual-hub theory’s prediction that taxonomic relations rely heavily on colour and shape knowledge, while thematic relations rely on action and location knowledge. 
This behavioural experiment revealed that action or location priming facilitated thematic relation processing, but colour and shape priming did not facilitate taxonomic relations. This indicates that thematic relations rely more on action and location knowledge, which may explain why they preferentially engage TPC, whereas taxonomic relations are not specifically linked to shape and colour features, which may explain why they did not preferentially engage left ATL. Chapter 5 concentrates on event and object concepts. Previous studies suggest ATL specialisation for coding the semantic similarity of objects, and angular gyrus (AG) specialisation for representing sentence and event structure. In addition, in neuroimaging studies, event semantics are usually investigated using complex, temporally extended stimuli, unlike the single-concept stimuli used to investigate object semantics. Chapter 5 therefore used representational similarity analysis (RSA), univariate analysis, and PPI analysis to explore neural activation patterns for event and object concepts presented as static images. Bilateral AGs encoded semantic similarity for event concepts, with the left AG also coding object similarity. Bilateral ATLs encoded semantic similarity for object concepts but also for events, and the left ATL exhibited stronger coding for events than objects. PPI analysis revealed stronger connections between left ATL and right pMTG, and between right AG and bilateral inferior temporal gyrus (ITG) and middle occipital gyrus, for event concepts compared to object concepts. Consistent with the meta-analysis in Chapter 4, the results in Chapter 5 support partial specialisation in AG for event semantics but do not support ATL specialisation for object semantics. In fact, both the meta-analysis and the Chapter 5 findings suggest greater ATL involvement in coding objects' associations than their similarity. 
To conclude, the thesis provides support for the idea that perceptual brain regions are engaged in conceptual processing, in the case of motion concepts. It also provides evidence for a specialised role for TPC regions in processing thematic relations (pMTG) and event concepts (AG). There was mixed evidence for specialisation within the ATLs and this remains an important target for future research
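The MVPA analyses described above ask whether condition labels can be decoded from distributed activity patterns. A minimal sketch of that logic, using synthetic "voxel" patterns and a nearest-centroid classifier under leave-one-out cross-validation; the numbers, classifier choice, and cross-validation scheme are illustrative assumptions, not the thesis's actual pipeline:

```python
import random

# Nearest-centroid decoding of two conditions ("motion" vs. "static")
# from synthetic 20-voxel patterns, leave-one-sample-out. Real analyses
# use proper run structure, many voxels, and permutation testing.
random.seed(0)

def pattern(mean):
    return [random.gauss(mean, 1.0) for _ in range(20)]  # 20 "voxels"

data = [(pattern(0.8), "motion") for _ in range(12)] + \
       [(pattern(-0.8), "static") for _ in range(12)]

def centroid(patterns):
    return [sum(v) / len(v) for v in zip(*patterns)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

correct = 0
for i, (x, label) in enumerate(data):
    train = [d for j, d in enumerate(data) if j != i]   # hold one sample out
    cents = {lab: centroid([p for p, l in train if l == lab])
             for lab in ("motion", "static")}
    guess = min(cents, key=lambda lab: dist(x, cents[lab]))
    correct += (guess == label)

accuracy = correct / len(data)  # chance level here would be 0.5
```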

    Research on Emotional Conversation Analysis Based on Deep Learning

    The capability of a chatbot to express specific emotions during a conversation is a key part of artificial intelligence, with an intuitive and quantifiable impact on the chatbot's usability and user satisfaction. Enabling machines to recognise emotions in conversation is challenging, mainly because human dialogue innately conveys emotions through long-term experience, abundant knowledge, context, and the intricate patterns between affective states. Recently, many studies on neural emotional conversational models have been conducted. However, enabling a chatbot to control which emotion it responds with, according to its own character, is still underexplored. At this stage, people are no longer satisfied with using a dialogue system to solve specific tasks, and are more eager to achieve genuine emotional communication. During a chat, if the robot can perceive the user's emotions and process them accurately, it can greatly enrich the content of the dialogue and make the user empathise. In emotional dialogue, the ultimate goal is to make the machine understand human emotions and give matching responses. Based on these two points, this thesis explores in depth the emotion recognition in conversation task and the emotional dialogue generation task. Although considerable progress has been made in emotion research in dialogue over the past few years, difficulties and challenges remain due to the complex nature of human emotions. The key contributions of this thesis are summarised below: (1) Researchers have recently paid increasing attention to enhancing natural language models with knowledge graphs, since knowledge graphs encode a large amount of systematic knowledge. A large number of studies have shown that introducing external commonsense knowledge is very helpful for improving feature information. 
We address the task of emotion recognition in conversations using external knowledge to enhance semantics. In this work, we employ the external knowledge graph ATOMIC as a source of knowledge. We propose KES, a new framework that incorporates different elements of external knowledge with conversational semantic role labeling and builds upon them to learn the interactions between the interlocutors participating in a conversation. A conversation is a sequence of coherent and orderly discourses, and capturing long-range context information is a known weakness of traditional recurrent networks. We therefore adopt the Transformer, a structure composed of self-attention and feed-forward layers, instead of the traditional RNN model, to capture remote context information. We design a self-attention layer specialised for text features semantically enhanced with external commonsense knowledge. Two networks composed of LSTMs are then responsible for tracking the individual internal state and the external context state. The proposed model was evaluated on three emotion detection in conversation datasets, and the experimental results show that it outperforms state-of-the-art approaches on most of the tested datasets. (2) We propose an emotional dialogue model based on Seq2Seq, improved in three aspects: model input, encoder structure, and decoder structure, so that the model can generate responses with rich emotion, diversity, and contextual relevance. For the model input, emotional information and positional information are added to the word vectors. For the encoder, the proposed model first encodes the current input and its sentence sentiment to generate a semantic vector, and additionally encodes the context and its sentence sentiment to generate a context vector, adding contextual information while ensuring the independence of the current input. 
On the decoder side, attention is used to calculate weights for the two semantic vectors separately before decoding, so as to fully integrate the local emotional semantic information and the global emotional semantic information. We used seven objective evaluation metrics to assess the model's generated results in terms of context similarity, response diversity, and emotional response. Experimental results show that the model can generate diverse responses with rich sentiment and contextual associations
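The decoder step described above, scoring two semantic vectors and blending them by attention weights, can be sketched as a softmax-weighted combination. The vector contents, dimensions, and dot-product scoring function below are assumptions for illustration, not the thesis's actual model:

```python
import math

# The decoder state scores each semantic vector (local/emotional and
# global/contextual), turns the scores into softmax weights, and blends
# the vectors into a single context for the next decoding step.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(scores):
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(decoder_state, vectors):
    weights = softmax([dot(decoder_state, v) for v in vectors])
    dim = len(vectors[0])
    blended = [sum(w * v[i] for w, v in zip(weights, vectors))
               for i in range(dim)]
    return blended, weights

local_semantic = [0.9, 0.1, 0.3]   # toy "current input + sentiment" vector
global_context = [0.2, 0.8, 0.5]   # toy "context + sentiment" vector
state = [1.0, 0.0, 0.0]            # toy decoder hidden state

blended, weights = attend(state, [local_semantic, global_context])
# the vector more aligned with the decoder state receives more weight
```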

    The neuro-cognitive representation of word meaning resolved in space and time.

    One of the core human abilities is that of interpreting symbols. Prompted with a perceptual stimulus devoid of any intrinsic meaning, such as a written word, our brain can access a complex multidimensional representation, called semantic representation, which corresponds to its meaning. Notwithstanding decades of neuropsychological and neuroimaging work on the cognitive and neural substrate of semantic representations, many questions are left unanswered. The research in this dissertation attempts to unravel one of them: are the neural substrates of different components of concrete word meaning dissociated? In the first part, I review the different theoretical positions and empirical findings on the cognitive and neural correlates of semantic representations. I highlight how recent methodological advances, namely the introduction of multivariate methods for the analysis of distributed patterns of brain activity, broaden the set of hypotheses that can be empirically tested. In particular, they allow the exploration of the representational geometries of different brain areas, which is instrumental to the understanding of where and when the various dimensions of the semantic space are activated in the brain. Crucially, I propose an operational distinction between motor-perceptual dimensions (i.e., those attributes of the objects referred to by the words that are perceived through the senses) and conceptual ones (i.e., the information that is built via a complex integration of multiple perceptual features). In the second part, I present the results of the studies I conducted in order to investigate the automaticity of retrieval, topographical organization, and temporal dynamics of motor-perceptual and conceptual dimensions of word meaning. 
First, I show how the representational spaces retrieved with different behavioral and corpus-based methods (i.e., Semantic Distance Judgment, Semantic Feature Listing, WordNet) appear to be highly correlated and overall consistent within and across subjects. Second, I present the results of four priming experiments suggesting that perceptual dimensions of word meaning (such as implied real-world size and sound) are recovered in an automatic but task-dependent way during reading. Third, thanks to a functional magnetic resonance imaging experiment, I show a representational shift along the ventral visual path: from perceptual features, preferentially encoded in primary visual areas, to conceptual ones, preferentially encoded in mid and anterior temporal areas. This result indicates that complementary dimensions of the semantic space are encoded in a distributed yet partially dissociated way across the cortex. Fourth, by means of a study conducted with magnetoencephalography, I present evidence of an early (around 200 ms after stimulus onset) simultaneous access to both motor-perceptual and conceptual dimensions of the semantic space through different aspects of the signal: inter-trial phase coherence appears to be key for the encoding of perceptual dimensions, while spectral power changes appear to support the encoding of conceptual ones. These observations suggest that the neural substrates of different components of symbol meaning can be dissociated in terms of localization and of the feature of the signal encoding them, while sharing a similar temporal evolution
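The comparison of representational spaces mentioned above is typically done with representational similarity analysis: compute a pairwise dissimilarity matrix for each representation, then correlate their upper triangles. A minimal sketch with made-up feature vectors; the vectors, the Euclidean distance, and the Pearson correlation are illustrative choices, not the dissertation's exact methods:

```python
import math

# Representational similarity analysis (RSA) in miniature: two
# representations of the same four concepts agree when the geometries
# of their pairwise dissimilarities correlate.

def rdm(vectors):
    """Condensed (upper-triangle) dissimilarity: Euclidean distance per pair."""
    out = []
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            out.append(math.dist(vectors[i], vectors[j]))
    return out

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

brain = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.3]]  # 4 concepts
model = [[0.0, 1.0], [0.1, 0.9], [1.0, 0.0], [0.9, 0.2]]  # matched model space

similarity = pearson(rdm(brain), rdm(model))  # high when geometries agree
```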

    Unreliable and resource-constrained decoding

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 185-213).
Traditional information theory and communication theory assume that decoders are noiseless and operate without transient or permanent faults. Decoders are also traditionally assumed to be unconstrained in physical resources like material, memory, and energy. This thesis studies how constraining reliability and resources in the decoder limits the performance of communication systems. Five communication problems are investigated. Broadly speaking, these involve communication using decoders that are wiring-cost-limited, that are memory-limited, that are noisy, that fail catastrophically, and that simultaneously harvest information and energy. For each of these problems, fundamental trade-offs between communication system performance and reliability or resource consumption are established. For decoding repetition codes using consensus decoding circuits, the optimal trade-off between decoding speed and quadratic wiring cost is defined and established. Designing optimal circuits is shown to be NP-complete, but is carried out for small circuit sizes. The natural relaxation of the integer circuit design problem is shown to be a reverse convex program. Random circuit topologies are also investigated. Uncoded transmission is investigated when a population of heterogeneous sources must be categorized due to decoder memory constraints. Quantizers that are optimal for mean Bayes risk error, a novel fidelity criterion, are designed. Human decision making in segregated populations is also studied with this framework. 
The ratio between the costs of false alarms and missed detections is also shown to fundamentally affect the essential nature of discrimination. The effect of noise on iterative message-passing decoders for low-density parity-check (LDPC) codes is studied. Concentration of decoding performance around its average is shown to hold. Density evolution equations for noisy decoders are derived. Decoding thresholds degrade smoothly as decoder noise increases, and in certain cases, arbitrarily small final error probability is achievable despite decoder noisiness. Precise information storage capacity results for reliable memory systems constructed from unreliable components are also provided. Limits to communicating over systems that fail at random times are established. Communication with arbitrarily small probability of error is not possible, but schemes that optimize transmission volume communicated at fixed maximum message error probabilities are determined. System state feedback is shown not to improve performance. For optimal communication with decoders that simultaneously harvest information and energy, a coding theorem that establishes the fundamental trade-off between the rates at which energy and reliable information can be transmitted over a single line is proven. The capacity-power function is computed for several channels; it is non-increasing and concave.
by Lav R. Varshney. Ph.D.
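The simplest decoder in the family discussed above is majority-vote ("consensus") decoding of a repetition code. A minimal Monte Carlo sketch over a binary symmetric channel, with illustrative parameters; the thesis's analysis of wiring costs and decoder noise is far richer than this noiseless-decoder toy:

```python
import random

# Majority-vote decoding of an n-fold repetition code over a binary
# symmetric channel with crossover probability p. Longer repetitions
# drive the post-decoding error rate down.
random.seed(1)

def transmit(bit, n, p):
    """Repeat the bit n times; flip each copy independently with prob p."""
    return [bit ^ (random.random() < p) for _ in range(n)]

def majority(received):
    return int(sum(received) > len(received) / 2)

def error_rate(n, p, trials=2000):
    errors = sum(majority(transmit(0, n, p)) != 0 for _ in range(trials))
    return errors / trials

# For p = 0.2: n = 3 gives roughly 10% residual error, n = 11 about 1%.
short, long_ = error_rate(3, 0.2), error_rate(11, 0.2)
```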

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging from the interdisciplinary area between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible by rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition

    The Role of Unimodal and Transmodal Cortex in Perceptually-Coupled and Decoupled Semantic Cognition: Evidence from fMRI

    Semantic retrieval extends beyond the here-and-now, to draw on abstract knowledge that has been extracted across multiple experiences; for instance, we can easily bring to mind what a dog looks and sounds like, even when a dog is not present in our environment. However, a clear understanding of the neural substrates that support patterns of semantic retrieval that are not immediately driven by stimuli in the environment is lacking. This thesis sought to investigate the neural basis of semantic retrieval within unimodal and heteromodal networks, whilst manipulating the availability of information in the environment. Much of the empirical work takes inspiration from modern accounts of transmodal regions (Lambon Ralph et al. 2017; Margulies et al. 2016), which suggest the anterior temporal lobe (ATL) and default mode network (DMN) support both abstraction and perceptual decoupling. The first empirical chapter examines whether words and experiences activate common neural substrates in sensory regions and where, within the ATLs, representations are transmodal. The second empirical chapter investigates how perceptually-decoupled forms of semantic retrieval in imagination are represented across unimodal and transmodal regions. The third empirical chapter interrogates whether transmodal regions respond in a similar manner to conceptually-guided and perceptually-decoupled cognition, and whether these two factors interact. The data suggest that ventrolateral ATL supports both abstract modality-invariant semantic representations (Chapter 3) and decoupled semantic processing during imagination (Chapter 4). In addition, this thesis found comparable networks, corresponding to the broader DMN, recruited for both conceptual processing and perceptually-decoupled retrieval (Chapter 5). Further interrogation of these sites confirmed that lateral MTG and bilateral angular gyrus were pivotal in combining conceptual information retrieved from memory. 
Collectively, these data suggest that brain regions situated farthest from sensory input systems in both functional and connectivity space are required for the most abstract forms of cognition

    Computation and representation in decision making and emotion

    This thesis deals with three components of an organism’s interactions with its environment: learning, decision making, and emotions. In a series of 5 studies, I detail relationships between these processes, and investigate the representation and computations whereby they are achieved. In the first experiment I show how subjective wellbeing is influenced by one’s own rewards and expectations, but also those of other people. Furthermore, I find that parameter estimates of empathy predict decision-making in a distinct test of economic generosity. In my second study, I ask how stressful experiences modulate subsequent learning, detailing a specific impairment in action-learning under stress which also manifests itself in altered pupillary responses. In the third, I use a hierarchical model of learning to show that subjective uncertainty in aversive contexts predicts several dimensions of acute stress responses. Furthermore, I find that individuals who show greater uncertainty-tuning in their stress responses are better at predicting the presence of threat. In the final pair of studies I ask how decision variables for value-based choice are represented in the brain. I describe the combination of quality and quantity into value estimates in humans, revealing a central role for the Anterior Cingulate Cortex in value integration using functional magnetic resonance imaging. I next characterize the neural code for value in non-human primate frontal cortex, using single-neuron data from collaborators. These two studies provide convergent evidence that the value code may be more diverse and non-linear than previously reported, potentially conferring the ability to incorporate uncertainty signals directly in the activity of value coding neurons
    • 

    corecore