78 research outputs found
Entropy of Polysemantic Words for the Same Part of Speech
In this paper, a special type of polysemantic word, that is, a word with multiple meanings within the same part of speech, is analyzed under the name of neutrosophic words. These words represent the most difficult cases for disambiguation algorithms, as they are among the most ambiguous natural language utterances. To approximate their meanings, we developed a semantic representation framework built from concepts of neutrosophic theory and an entropy measure, into which we incorporate sense-related data. We show the advantages of the proposed framework in a sentiment classification task
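The entropy idea in the abstract can be illustrated with plain Shannon entropy over a word's sense distribution, a simplified sketch only: the paper's neutrosophic measure, and the example word and probabilities below, are assumptions rather than details taken from it.

```python
import math

def sense_entropy(sense_probs):
    """Shannon entropy (in bits) of a word's sense distribution.

    Higher entropy means the word's senses are more evenly likely,
    i.e. the word is more ambiguous for a disambiguation algorithm.
    """
    return -sum(p * math.log2(p) for p in sense_probs if p > 0)

# A hypothetical noun with two equally likely senses is maximally
# ambiguous among two-sense words:
assert abs(sense_entropy([0.5, 0.5]) - 1.0) < 1e-9

# A word dominated by a single sense carries no ambiguity:
assert sense_entropy([1.0]) == 0.0
```

Under this reading, the "most difficult" polysemantic words are those whose within-POS sense distributions approach the uniform (maximum-entropy) case.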
Is Information Out There?
In this paper, I argue that the distinction between information and data lies at the root of much confusion that surrounds the concept of information. Although data are ‘out there’ and concrete, informational content is abstract and always co-constituted by information agents – a set which includes at least linguistically capable human beings. Information is thus not an intrinsic property of concrete data, but rather a relational property, which relies on the existence of information agents. To reach this conclusion I first argue that the semantic content of human-generated data is co-constituted by the information agent. In the second part I broaden the scope and argue that environmental information also depends on information agents. I further consider and reject both Dretske’s view of information as an objective commodity and foundational accounts of information, that take information to be the fundamental ingredient of reality
An Introduction to M-logic: Basic Assumptions Concerning Key Notions
The article presents the theoretical premises of interdisciplinary studies targeting systemic universals of a lingual, mental, physical, and cultural nature. The suggested set of methodological concepts, identified as "mythic logic" (M-logic), employs broad interdisciplinary parallels and encompasses rational-analytic and irrational-synthetic research procedures. The key notions of the suggested approach are neo-anthropocentrism, myth-oriented semiosis theory, the interpretation of fuzzy entities, recognition of the quantum nature of lingual phenomena, causative non-linear logic, the enigmatic (fuzzily synergetic) nature of a system's development, and the inverse nature of systems' fluctuations. These notions are employed in interdisciplinary analysis and suggest further elaboration of a meta-language, dynamic sets of interpretational coordinates, as well as interdisciplinary experimental research
Decoding of metaphoric form of homonymous scientific term by a linguist and an expert
The article considers the problem of distinguishing terminological homonymy as a semantic category, and an attempt is made to model the process of decoding (understanding) metaphorical homonymous scientific terms. Integrating the conceptual provisions of term theory, the theory of metaphor, and the category of homonymy, the author offers a scheme of logical (categorial) and semantic analysis of the dictionary definitions of genetic homonymic terms - from the metaphorical form to the special concept. The deciphering of homonymous terms is limited in this research to two steps: 1) establishing the motivation of the linguistic term form, and 2) determining the denotation of the special concepts designated by one form. As a result of the analysis of dictionary definitions of genetic terms, four types of homonymy have been identified - intralingual, lexical, interscientific, and mixed - and the features of the associative chains formed by the linguist-terminologist and the expert in distinguishing the content of homonymous terms are described
Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
The last decade of machine learning has seen drastic increases in scale and
capabilities. Deep neural networks (DNNs) are increasingly being deployed in
the real world. However, they are difficult to analyze, raising concerns about
using them without a rigorous understanding of how they function. Effective
tools for interpreting them will be important for building more trustworthy AI
by helping to identify problems, fix bugs, and improve basic understanding. In
particular, "inner" interpretability techniques, which focus on explaining the
internal components of DNNs, are well-suited for developing a mechanistic
understanding, guiding manual modifications, and reverse engineering solutions.
Much recent work has focused on DNN interpretability, and rapid progress has
thus far made a thorough systematization of methods difficult. In this survey,
we review over 300 works with a focus on inner interpretability tools. We
introduce a taxonomy that classifies methods by what part of the network they
help to explain (weights, neurons, subnetworks, or latent representations) and
whether they are implemented during (intrinsic) or after (post hoc) training.
To our knowledge, we are also the first to survey a number of connections
between interpretability research and work in adversarial robustness, continual
learning, modularity, network compression, and studying the human visual
system. We discuss key challenges and argue that the status quo in
interpretability research is largely unproductive. Finally, we highlight the
importance of future work that emphasizes diagnostics, debugging, adversaries,
and benchmarking in order to make interpretability tools more useful to
engineers in practical applications
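The survey's two-axis taxonomy can be sketched as a simple classification grid. The category names follow the abstract; the example placement of a probing method is my own illustrative assumption, not the survey's.

```python
# Axis 1: which part of the network a method helps explain.
NETWORK_PARTS = ("weights", "neurons", "subnetworks", "latent_representations")

# Axis 2: when the method is applied relative to training.
TIMING = ("intrinsic", "post_hoc")  # during vs. after training

def classify(part, timing):
    """Place an interpretability method in the taxonomy grid."""
    if part not in NETWORK_PARTS or timing not in TIMING:
        raise ValueError(f"unknown category: {part!r}, {timing!r}")
    return (part, timing)

# e.g. a probing classifier explains latent representations after training:
assert classify("latent_representations", "post_hoc") == (
    "latent_representations", "post_hoc")
```

Every method the survey covers occupies one cell of this 4x2 grid, which is what makes the taxonomy useful for spotting under-explored combinations.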
AdapterEM: Pre-trained Language Model Adaptation for Generalized Entity Matching using Adapter-tuning
Entity Matching (EM) involves identifying different data representations
referring to the same entity from multiple data sources and is typically
formulated as a binary classification problem. It is a challenging problem in
data integration due to the heterogeneity of data representations.
State-of-the-art solutions have adopted NLP techniques based on pre-trained
language models (PrLMs) via the fine-tuning paradigm; however, sequential
fine-tuning of overparameterized PrLMs can lead to catastrophic forgetting,
especially in low-resource scenarios. In this study, we propose a
parameter-efficient paradigm for fine-tuning PrLMs based on adapters, small
neural networks encapsulated between layers of a PrLM, optimizing only the
adapter and classifier weights while the PrLM's parameters are frozen.
Adapter-based methods have been successfully applied to multilingual speech
problems, achieving promising results; however, their effectiveness when
applied to EM is not yet well understood, particularly for generalized EM
with heterogeneous data. We further explore using (i)
pre-trained adapters and (ii) invertible adapters to capture token-level
language representations and demonstrate their benefits for transfer learning
on the generalized EM benchmark. Our results show that our solution achieves
comparable or superior performance to full-scale PrLM fine-tuning and
prompt-tuning baselines while utilizing a significantly smaller computational
footprint of the PrLM parameters
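The adapter idea can be sketched in a few lines of framework-free code. This is a hypothetical illustration: the dimensions, Gaussian initialization, and ReLU nonlinearity are my assumptions, and in the actual method the adapters sit inside a transformer and are trained jointly with the classifier while all PrLM weights stay frozen.

```python
import random

def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

class Adapter:
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual.

    Only these small matrices would be trained; the surrounding PrLM
    layers are left frozen, which is what makes the approach
    parameter-efficient.
    """
    def __init__(self, hidden_dim, bottleneck_dim, seed=0):
        rng = random.Random(seed)
        init = lambda rows, cols: [[rng.gauss(0.0, 0.02) for _ in range(cols)]
                                   for _ in range(rows)]
        self.W_down = init(bottleneck_dim, hidden_dim)  # project down
        self.W_up = init(hidden_dim, bottleneck_dim)    # project back up

    def forward(self, h):
        """h: hidden-state vector produced by a frozen PrLM layer."""
        bottleneck = relu(matvec(self.W_down, h))
        # The residual connection means a near-zero adapter leaves the
        # frozen representation essentially unchanged at initialization.
        return [hi + ui for hi, ui in zip(h, matvec(self.W_up, bottleneck))]

adapter = Adapter(hidden_dim=8, bottleneck_dim=2)
out = adapter.forward([1.0] * 8)
assert len(out) == 8
```

The bottleneck dimension controls the parameter budget: an adapter adds roughly 2 x hidden_dim x bottleneck_dim weights per layer, a small fraction of a full PrLM layer.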
An Approach to Conceptualisation and Semantic Knowledge: Some Preliminary Observations
The paper takes up the question of whether it is possible to transfer the notion of 'semantic knowledge', understood as the human process of making language generate and confer meaning, to machines, one of whose properties is the capability of handling large amounts of information. This issue is presented in an extended introduction to the paper's account of, and proposed solutions to, this intricate problem. Thereafter, the theoretical notion of 'knowledge' is considered in its philosophical, and thereby scientific, context, and the basis of its modern import is traced to Immanuel Kant's deliberations on a priori versus a posteriori knowledge. The author's solution to the predicament of modern ideas about knowledge is the proposed theory of Occurrence Logic, invented by the author, which removes truth-values from valid reasoning; this approach is briefly accounted for. It presupposes a theoretical model of human cognitive systems, and the author has such a model under development which may, in the future, be able to settle the question of what 'semantic knowledge' actually is. So far, the theoretical account points to the critical issue of whether natural language semantics can be grasped as words explaining words, or must include the connection between words and objects in the world. The author favours the latter option. This leads to the question of the functions of the human brain as the organ connecting words with the outer world. The idea of the so-called 'predictive brain' is referred to as a possible solution to the brain/cognition issue, and the paper concludes with the suggestion that an emulation of the interaction between the mentioned cognitive systems may cast new light on the field of Artificial Intelligence
The Importance of Quantum Information in the Stock Market and Financial Decision Making in Conditions of Radical Uncertainty
"The Universe is a coin that's already been flipped, heads or tails predetermined: all we're doing is uncovering it. The 'paradox' is only a conflict between reality and your feeling of what reality 'ought to be'." (Richard Feynman)
The research proceeds along two parallel directions. The first is gaining an understanding of the applicability of quantum mechanics/quantum physics to human decision-making processes in the stock market, with quantum information as a decision-making lever; the second is neuroscience and artificial intelligence using postulates analogous to those of quantum mechanics under conditions of radical uncertainty. Radical uncertainty, grounded in the quantum-mechanical claim that there is no causal certainty, is everywhere in our world. "Radical uncertainty is characterized by vagueness, ignorance, indeterminacy, ambiguity and lack of information. It prefers to create 'mysteries' rather than 'puzzles' with defined solutions. Mysteries are ill-defined problems in which action is required, but the future is uncertain, the consequences unpredictable, and disagreement inevitable. How should we make decisions in these circumstances?" (J. Kay and M. King, 2020). Meanwhile, "uncertainty and ambiguity are at the very core of the stock market", and "narratives are the currency of uncertainty" (N. Mangee, 2022)