A foundation for synthesising programming language semantics
Programming or scripting languages used in real-world systems are seldom designed
with a formal semantics in mind from the outset. Therefore, the first step for developing well-founded analysis tools for these systems is to reverse-engineer a formal
semantics. This can take months or years of effort.
Could we automate this process, at least partially? Though desirable, automatically reverse-engineering semantics rules from an implementation is very challenging,
as found by Krishnamurthi, Lerner and Elberty. They propose automatically learning
desugaring translation rules, mapping the language whose semantics we seek to a simplified, core version, whose semantics are much easier to write. The present thesis
contains an analysis of their challenge, as well as the first steps towards a solution.
Scaling methods with the size of the language is very difficult due to state space
explosion, so this thesis proposes an incremental approach to learning the translation
rules. I present a formalisation that both clarifies the informal description of the challenge by Krishnamurthi et al., and reformulates the problem, shifting the focus to the
conditions for incremental learning. The central definition of the new formalisation is
the desugaring extension problem, i.e. extending a set of established translation rules
by synthesising new ones.
In a synthesis algorithm, the choice of search space is important and non-trivial,
as it needs to strike a good balance between expressiveness and efficiency. The rest
of the thesis focuses on defining search spaces for translation rules via typing rules.
Two prerequisites are required for comparing search spaces. The first is a series of
benchmarks, a set of source and target languages equipped with intended translation
rules between them. The second is an enumerative synthesis algorithm for efficiently
enumerating typed programs. I show how algebraic enumeration techniques can be applied to enumerating well-typed translation rules, and discuss the properties expected
from a type system to ensure that typed programs can be enumerated efficiently.
The thesis presents and empirically evaluates two search spaces. A baseline search
space yields the first practical solution to the challenge. The second search space is
based on a natural heuristic for translation rules, limiting the usage of variables so that
they are used exactly once. I present a linear type system designed to efficiently enumerate translation rules, where this heuristic is enforced. Through informal analysis
and empirical comparison to the baseline, I then show that using linear types can speed
up the synthesis of translation rules by an order of magnitude.
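To make the enumeration idea concrete, here is a minimal sketch of type-directed, size-bounded term enumeration in the algebraic style the abstract refers to. The toy signature, cost model, and helper names (`SIGNATURE`, `enumerate_terms`, `split_budget`) are illustrative assumptions, not the thesis's actual languages or algorithm.

```python
# A minimal sketch of type-directed enumeration: generate every term of a
# requested type that uses exactly `size` constructors, splitting the size
# budget across constructor arguments in the algebraic style.

# Toy constructors, each annotated with argument types and a result type.
SIGNATURE = {
    "Zero": ([], "Nat"),
    "Succ": (["Nat"], "Nat"),
    "Pair": (["Nat", "Nat"], "NatPair"),
}

def enumerate_terms(goal_type, size, env=()):
    """Yield all terms of `goal_type` built from exactly `size` nodes.

    `env` is a tuple of (name, type) bindings for in-scope variables;
    a variable occurrence costs 1, like any other leaf.
    """
    if size <= 0:
        return
    if size == 1:
        # Typing rule for variables: any binding of the right type.
        for name, ty in env:
            if ty == goal_type:
                yield name
    for ctor, (arg_types, result_type) in SIGNATURE.items():
        if result_type != goal_type:
            continue
        if not arg_types:
            if size == 1:
                yield (ctor,)
            continue
        # Typing rule for applications: enumerate argument tuples whose
        # sizes sum to the remaining budget.
        for args in split_budget(arg_types, size - 1, env):
            yield (ctor, *args)

def split_budget(arg_types, budget, env):
    """Distribute `budget` over the argument list, one type at a time."""
    if not arg_types:
        if budget == 0:
            yield ()
        return
    head, *rest = arg_types
    for head_size in range(1, budget + 1):
        for head_term in enumerate_terms(head, head_size, env):
            for rest_terms in split_budget(rest, budget - head_size, env):
                yield (head_term, *rest_terms)

# All Nat-typed terms of size 3 with `x : Nat` in scope:
# ('Succ', ('Succ', 'x')) and ('Succ', ('Succ', ('Zero',))).
for term in enumerate_terms("Nat", 3, env=(("x", "Nat"),)):
    print(term)
```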
An examination of the verbal behaviour of intergroup discrimination
This thesis examined relationships between psychological flexibility, psychological inflexibility, prejudicial attitudes, and dehumanization across three cross-sectional studies with an additional proposed experimental study. Psychological flexibility refers to mindful attention to the present moment, willing acceptance of private experiences, and engaging in behaviours congruent with one’s freely chosen values. Inflexibility, on the other hand, indicates a tendency to suppress unwanted thoughts and emotions, entanglement with one’s thoughts, and rigid behavioural patterns. Study 1 found limited correlations between inflexibility and sexism, racism, homonegativity, and dehumanization. Study 2 demonstrated more consistent positive associations between inflexibility and prejudice. Study 3 controlled for right-wing authoritarianism and social dominance orientation, finding that inflexibility predicted hostile sexism and racism beyond these factors. While showing some relationships, particularly with sexism and racism, psychological inflexibility did not consistently correlate with varied prejudices across studies.
The proposed randomized controlled trial aims to evaluate an Acceptance and Commitment Therapy intervention to reduce sexism through enhanced psychological flexibility. Overall, findings provide mixed support for the utility of flexibility-based skills in addressing complex societal prejudices. Research should continue examining flexibility integrated with socio-cultural approaches to promote equity.
Dataflow Programming and Acceleration of Computationally-Intensive Algorithms
The volume of unstructured textual information continues to grow due to recent technological advancements. This has resulted in an exponential growth of information generated in various formats, including blogs, posts, social networking, and enterprise documents. Numerous Enterprise Architecture (EA) documents are also created daily, such as reports, contracts, agreements, frameworks, architecture requirements, designs, and operational guides. The processing and computation of this massive amount of unstructured information necessitate substantial computing capabilities and the implementation of new techniques. It is critical to manage this unstructured information through a centralized knowledge management platform. Knowledge management is the process of managing information within an organization. This involves creating, collecting, organizing, and storing information in a way that makes it easily accessible and usable. The research involved the development of a textual knowledge management system, and two use cases were considered for extracting textual knowledge from documents. The first case study focused on the safety-critical documents of a railway enterprise. Safety is of paramount importance in the railway industry. Several EA documents, including manuals, operational procedures, and technical guidelines, contain critical information. Digitalization of these documents is essential for analysing the vast amount of textual knowledge they contain to improve the safety and security of railway operations. A case study was conducted between the University of Huddersfield and the Railway Safety Standard Board (RSSB) to analyse EA safety documents using natural language processing (NLP). A graphical user interface was developed that includes various document processing features such as semantic search, document mapping, text summarization, and visualization of key trends. For the second case study, open-source data was utilized, and textual knowledge was extracted. Several features were also developed, including kernel distribution, analysis of key trends, and sentiment analysis of words (such as unique, positive, and negative) within the documents. Additionally, a heterogeneous framework was designed using CPUs/GPUs and FPGAs to analyse the computational performance of document mapping.
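As one concrete illustration of the semantic search feature mentioned above, here is a minimal sketch that ranks documents against a query by cosine similarity. The TF-IDF vectors are a simple lexical stand-in; the abstract does not specify the system at this level of detail, and a production system would likely use dense sentence embeddings.

```python
# A minimal sketch of a semantic search feature: embed each document and
# the query as vectors and rank documents by cosine similarity. TF-IDF is
# a simple stand-in for a real sentence encoder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Operational procedure for track maintenance and inspection.",
    "Safety guidelines for signalling equipment.",
    "Contract terms for rolling stock procurement.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def semantic_search(query: str, top_k: int = 2):
    """Return the top_k documents ranked by similarity to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vectors)[0]
    ranked = sorted(zip(scores, documents), reverse=True)
    return ranked[:top_k]

for score, doc in semantic_search("track inspection safety"):
    print(f"{score:.3f}  {doc}")
```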
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Textual Entailment Recognition with Semantic Features from Empirical Text Representation
Textual entailment recognition is one of the basic natural language understanding (NLU) tasks. Understanding the meaning of sentences is a prerequisite before applying any natural language processing (NLP) techniques to automatically recognize textual entailment. A text entails a hypothesis if and only if the truth value of the hypothesis follows from the text. Classical approaches generally utilize the feature value of each word from word embeddings to represent the sentences. In this paper, we propose a novel approach to identifying the textual entailment relationship between text and hypothesis, introducing a new semantic feature based on empirical threshold-based semantic text representation. We employ an element-wise Manhattan distance vector-based feature that can identify the semantic entailment relationship between the text-hypothesis pair. We carried out several experiments on a benchmark entailment classification (SICK-RTE) dataset. We train several machine learning (ML) algorithms applying both semantic and lexical features to classify the text-hypothesis pairs as entailment, neutral, or contradiction. Our empirical sentence representation technique enriches the semantic information of the texts and hypotheses and proves more effective than the classical ones. In the end, our approach significantly outperforms known methods in understanding the meaning of the sentences for the textual entailment classification task.
Comment: Pre-print for our paper at the International Conference on Speech & Language Technology for Low-resource Languages (SPELLL'2022).
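To illustrate the element-wise Manhattan distance feature described in the abstract, here is a minimal sketch: embed the text and the hypothesis, take the per-dimension absolute difference, and train a standard classifier on the resulting vectors. The `embed` function and the logistic regression classifier are illustrative stand-ins, not the paper's exact pipeline.

```python
# A minimal sketch of the element-wise Manhattan distance feature: the
# per-dimension absolute difference |t - h| between the two sentence
# vectors, fed to an off-the-shelf classifier. `embed` is a deterministic
# stand-in for a real sentence encoder, not the paper's representation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(sentence: str, dim: int = 50) -> np.ndarray:
    # Stand-in encoder: seed a RNG from the sentence so the sketch runs
    # without any pretrained embeddings.
    seed = sum(ord(c) for c in sentence) % (2 ** 32)
    return np.random.default_rng(seed).standard_normal(dim)

def manhattan_feature(text_vec: np.ndarray, hyp_vec: np.ndarray) -> np.ndarray:
    """Element-wise absolute difference between text and hypothesis vectors."""
    return np.abs(text_vec - hyp_vec)

# Tiny stand-in for SICK-RTE-style (text, hypothesis, label) triples.
pairs = [
    ("A man is playing a guitar.", "A person plays an instrument.", "entailment"),
    ("A man is playing a guitar.", "Nobody is playing music.", "contradiction"),
]

X = np.stack([manhattan_feature(embed(t), embed(h)) for t, h, _ in pairs])
y = [label for _, _, label in pairs]

# In the full setting this would be a three-way classifier over
# entailment / neutral / contradiction.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X))
```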
Hyperbolic Image-Text Representations
Visual and linguistic concepts naturally organize themselves in a hierarchy,
where a textual concept "dog" entails all images that contain dogs. Despite
being intuitive, current large-scale vision and language models such as CLIP do
not explicitly capture such hierarchy. We propose MERU, a contrastive model
that yields hyperbolic representations of images and text. Hyperbolic spaces
have suitable geometric properties to embed tree-like data, so MERU can better
capture the underlying hierarchy in image-text data. Our results show that MERU
learns a highly interpretable representation space while being competitive with
CLIP's performance on multi-modal tasks like image classification and
image-text retrieval.
Comment: Technical report.
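For intuition about the geometry involved, here is a minimal sketch of distances in the Lorentz (hyperboloid) model of hyperbolic space, the kind of representation space the abstract describes; a contrastive objective would use the negative distance where CLIP uses cosine similarity. The lifting and curvature handling below follow the standard Lorentz-model formulas and are not taken from MERU's actual code.

```python
# A minimal sketch of distances in the Lorentz (hyperboloid) model of
# hyperbolic space: lift "space" components onto the hyperboloid, then
# compute the geodesic distance via the Lorentzian inner product.
import torch

def lift_to_hyperboloid(x: torch.Tensor, curv: float = 1.0) -> torch.Tensor:
    """Append the time coordinate so that <p, p>_L = -1/curv holds."""
    x_time = torch.sqrt(1.0 / curv + (x * x).sum(-1, keepdim=True))
    return torch.cat([x_time, x], dim=-1)

def lorentz_inner(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Lorentzian inner product: -p_t * q_t + <p_space, q_space>."""
    return -p[..., 0] * q[..., 0] + (p[..., 1:] * q[..., 1:]).sum(-1)

def hyperbolic_distance(p: torch.Tensor, q: torch.Tensor,
                        curv: float = 1.0) -> torch.Tensor:
    """Geodesic distance: arcosh(-curv * <p, q>_L) / sqrt(curv)."""
    inner = torch.clamp(-curv * lorentz_inner(p, q), min=1.0 + 1e-7)
    return torch.acosh(inner) / curv ** 0.5

# Random stand-ins for a batch of image and text embeddings; a contrastive
# loss would use -hyperbolic_distance(img, txt) as the similarity logit.
img = lift_to_hyperboloid(torch.randn(4, 16))
txt = lift_to_hyperboloid(torch.randn(4, 16))
print(hyperbolic_distance(img, txt))
```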
REV: Information-Theoretic Evaluation of Free-Text Rationales
Generating free-text rationales is a promising step towards explainable NLP,
yet evaluating such rationales remains a challenge. Existing metrics have
mostly focused on measuring the association between the rationale and a given
label. We argue that an ideal metric should focus on the new information
uniquely provided in the rationale that is otherwise not provided in the input
or the label. We investigate this research problem from an
information-theoretic perspective using conditional V-information (Hewitt et
al., 2021). More concretely, we propose a metric called REV (Rationale
Evaluation with conditional V-information) to quantify the amount of new,
label-relevant information in a rationale beyond the information already
available in the input or the label. Experiments across four benchmarks with
reasoning tasks, including chain-of-thought, demonstrate the effectiveness of
REV in evaluating rationale-label pairs, compared to existing metrics. We
further demonstrate REV is consistent with human judgments on rationale
evaluations and provides more sensitive measurements of new information in
free-text rationales. When used alongside traditional performance metrics, REV
provides deeper insights into models' reasoning and prediction processes.
Comment: ACL 2023.
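As a rough illustration of the conditional V-information idea, here is a pointwise sketch: score how much more likely the gold label becomes when an evaluator sees the rationale, relative to a baseline carrying only input/label information. The toy scorers stand in for the trained evaluation models the paper uses, and treating REV this way is a simplification of the full metric.

```python
# A pointwise sketch of the conditional V-information idea behind REV:
# the extra log-likelihood the rationale buys for the label, relative to
# a baseline that only restates input/label information. The two scorers
# are placeholders for the paper's trained evaluation models.
import math

def rev_score(eval_with_rationale, eval_baseline,
              x: str, y: str, rationale: str, baseline: str) -> float:
    """REV(x, y, r) ~= log p(y | x, r) - log p(y | x, b)."""
    with_r = eval_with_rationale(f"{x} {rationale}", y)
    without_r = eval_baseline(f"{x} {baseline}", y)
    return with_r - without_r

# Toy scorers returning log-likelihoods: here the rationale makes the
# gold label four times as likely, so REV = log 4 > 0 (new, label-relevant
# information); a vacuous rationale would give REV close to 0.
toy_with = lambda context, label: math.log(0.8)
toy_base = lambda context, label: math.log(0.2)
print(rev_score(toy_with, toy_base,
                "Question: ...", "Answer: ...",
                "because ...", "restatement of the answer"))
```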
ORCA: A Challenging Benchmark for Arabic Language Understanding
Given the crucial role pretrained language models play across NLP, several benchmarks have been proposed to evaluate them. In spite of these efforts, no public
benchmark of diverse nature currently exists for evaluation of Arabic. This
makes it challenging to measure progress for both Arabic and multilingual
language models. This challenge is compounded by the fact that any benchmark targeting Arabic needs to take into account that Arabic is not a single language but rather a collection of languages and varieties. In this
work, we introduce ORCA, a publicly available benchmark for Arabic language
understanding evaluation. ORCA is carefully constructed to cover diverse Arabic
varieties and a wide range of challenging Arabic understanding tasks exploiting
60 different datasets across seven NLU task clusters. To measure current
progress in Arabic NLU, we use ORCA to offer a comprehensive comparison between
18 multilingual and Arabic language models. We also provide a public
leaderboard with a unified single-number evaluation metric (ORCA score) to
facilitate future research.
Comment: All authors contributed equally. Accepted at ACL 2023, Toronto, Canada.
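For illustration, here is one plausible way to compute a unified single-number score of the kind the leaderboard reports: macro-average per-dataset scores within each task cluster, then average across clusters. Whether ORCA's actual aggregation matches this is an assumption, and the cluster and dataset names below are made up.

```python
# A sketch of a unified single-number score of the kind ORCA's leaderboard
# reports: average per-dataset scores within each task cluster, then
# average across clusters so large clusters do not dominate.
from statistics import mean

def unified_score(results: dict) -> float:
    """results maps cluster name -> {dataset name: score}."""
    cluster_means = [mean(scores.values()) for scores in results.values()]
    return mean(cluster_means)

results = {
    "sentence_classification": {"dataset_a": 81.2, "dataset_b": 74.5},
    "structured_prediction": {"dataset_c": 68.9},
}
print(round(unified_score(results), 2))  # 73.38
```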