2,456 research outputs found
Protocols and Structures for Inference: A RESTful API for Machine Learning
Diversity in machine learning APIs (in both software toolkits and web services) works against realising machine learning's full potential, making it difficult to draw on individual algorithms from different products or to compose multiple algorithms to solve complex tasks. This paper introduces the Protocols and Structures for Inference (PSI) service architecture and specification, which presents inferential entities (relations, attributes, learners and predictors) as RESTful web resources that are accessible via a common but flexible and extensible interface. Resources describe the data they ingest or emit using a variant of the JSON schema language, and the API has mechanisms to support non-JSON data and future extension of service features.
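As a rough illustration of the idea, a PSI-style service might describe a predictor resource with a JSON document stating what data it accepts and emits. The sketch below is our own assumption of what such a description could look like; the field names, endpoint path, and example schemas are illustrative, not the actual PSI specification.

```python
import json

# Hypothetical sketch of a PSI-style predictor resource description.
# The field names ("psiType", "accepts", "emits") and the URI layout are
# our illustrative assumptions, not the published PSI specification.

def describe_predictor(name, accepts_schema, emits_schema):
    """Build a JSON resource description for a predictor: the paper's idea of
    describing ingested and emitted data with a JSON-schema-like language."""
    return {
        "psiType": "predictor",
        "uri": f"/predictors/{name}",  # RESTful resource path (assumed)
        "accepts": accepts_schema,     # schema of the inputs the predictor ingests
        "emits": emits_schema,         # schema of the outputs it produces
    }

iris_predictor = describe_predictor(
    "iris-svm",
    accepts_schema={"type": "array", "items": {"type": "number"}, "length": 4},
    emits_schema={"type": "string",
                  "enum": ["setosa", "versicolor", "virginica"]},
)

print(json.dumps(iris_predictor, indent=2))
```

A client could then discover what a learner or predictor expects purely by fetching its description, rather than by reading toolkit-specific documentation.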
The SIMIAN architecture: an object-orientated framework for integrated power system modelling, analysis and control
This paper details the work conducted by the Brunel Institute of Power Systems, UK, into an object-orientated framework for power systems modelling, analysis and control. Based around a central OODBMS (object-orientated database management system), the architecture provides a framework for the construction of analysis and control applications and for the sharing of calculated or real-time data between the applications. Although the paper details the architecture only in so far as its applicability to two applications, the framework is designed such that further applications, either client output (such as control applications) or input (such as SCADA systems), may easily be added to the basic structure. To illustrate the architecture, a load flow simulation application is presented, along with the strategy for incorporating other applications. The mechanism by which these 'applications' interact with the OODBMS and core structure of the architecture is illustrated.
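The load-flow application mentioned above solves the nonlinear power-balance equations of the network. As a rough illustration of what such a computation involves (not the SIMIAN implementation, whose internals the abstract does not give), a minimal Gauss-Seidel iteration for a two-bus system might look like this; all numeric values are our own toy assumptions.

```python
# Minimal Gauss-Seidel load-flow sketch for a two-bus system: bus 1 is the
# slack bus, bus 2 carries a load. All numbers are illustrative, not from
# the paper.

z_line = 0.01 + 0.05j          # line impedance in per unit (assumed)
y = 1 / z_line                 # line admittance
Y21, Y22 = -y, y               # relevant entries of the bus admittance matrix

V1 = 1.0 + 0.0j                # slack bus voltage (held fixed)
S2 = -(0.5 + 0.2j)             # injected power at bus 2 (negative => a load)

V2 = 1.0 + 0.0j                # flat start
for _ in range(100):
    # Gauss-Seidel update: solve the bus-2 power balance for V2
    V2_new = (S2.conjugate() / V2.conjugate() - Y21 * V1) / Y22
    if abs(V2_new - V2) < 1e-10:
        V2 = V2_new
        break
    V2 = V2_new

print(abs(V2))                 # voltage magnitude at the load bus (below 1.0)
```

In the paper's architecture, an application like this would read network data from the central OODBMS and write its solved voltages back for other applications to share.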
Formal Contexts, Formal Concept Analysis, and Galois Connections
Formal concept analysis (FCA) is built on a special type of Galois connection called a polarity. We present new results on Galois connections and then apply them to formal concept analysis. We also approach FCA from the perspective of collections of formal contexts: usually, when doing FCA, a formal context is fixed, whereas we are interested in comparing formal contexts and asking what criteria should be used to determine when one formal context is better than another. Interestingly, we address this issue by studying sets of polarities. (Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455)
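The polarity at the heart of FCA can be made concrete with a small sketch: the two derivation operators map object sets to the attributes they share and back, and their composite is a closure operator. The toy context below is our own example, not from the paper.

```python
# A minimal sketch of the polarity (Galois connection) underlying FCA,
# on a toy formal context of our own devising.

def up(objects, incidence, attrs_universe):
    """A': the attributes shared by every object in the set."""
    return {m for m in attrs_universe
            if all((g, m) in incidence for g in objects)}

def down(attrs, incidence, objs_universe):
    """B': the objects having every attribute in the set."""
    return {g for g in objs_universe
            if all((g, m) in incidence for m in attrs)}

G = {"dove", "hen", "owl"}                      # objects
M = {"flies", "bird", "nocturnal"}              # attributes
I = {("dove", "flies"), ("dove", "bird"),       # incidence relation
     ("hen", "bird"),
     ("owl", "flies"), ("owl", "bird"), ("owl", "nocturnal")}

A = {"dove", "owl"}
print(up(A, I, M))               # attributes common to dove and owl
print(down(up(A, I, M), I, G))   # closure of A: a formal concept's extent
```

The pair (up, down) is antitone in both directions, and `down(up(A))` is the smallest concept extent containing `A`, which is exactly what makes polarities a Galois connection.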
State-of-the-art on evolution and reactivity
This report starts by, in Chapter 1, outlining aspects of querying and updating resources on
the Web and on the Semantic Web, including the development of query and update languages
to be carried out within the Rewerse project.
From this outline, it becomes clear that several existing research areas and topics are of interest for this work in Rewerse. In the remainder of this report we present state-of-the-art surveys in a selection of such areas and topics. More precisely: in Chapter 2 we give an overview of logics for reasoning about state change and updates; Chapter 3 is devoted to briefly describing existing update languages for the Web, and also for updating logic programs; in Chapter 4, event-condition-action rules are surveyed, both in the context of active database systems and in the context of semistructured data; in Chapter 5 we give an overview of some relevant rule-based agent frameworks.
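The event-condition-action rules surveyed in Chapter 4 follow a simple "on event, if condition, do action" shape. A minimal dispatcher sketch, with an illustrative event format of our own choosing, could look like this:

```python
# A minimal sketch of an event-condition-action (ECA) rule engine of the kind
# surveyed in active database research. The event dictionary format and the
# example rule are illustrative assumptions.

class Rule:
    def __init__(self, event_type, condition, action):
        self.event_type = event_type  # ON: the event type that triggers the rule
        self.condition = condition    # IF: a predicate over the event
        self.action = action          # DO: the reaction when the condition holds

def dispatch(rules, event, log):
    """Fire every rule whose event type matches and whose condition holds."""
    for rule in rules:
        if rule.event_type == event["type"] and rule.condition(event):
            rule.action(event, log)

log = []
rules = [
    Rule("update",
         condition=lambda e: e["new_value"] < 0,
         action=lambda e, log: log.append(f"rejected negative {e['field']}")),
]
dispatch(rules, {"type": "update", "field": "stock", "new_value": -3}, log)
print(log)  # ['rejected negative stock']
```

Real active-database systems add coupling modes, composite events, and transactional semantics on top of this basic triggering loop.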
AdapterEM: Pre-trained Language Model Adaptation for Generalized Entity Matching using Adapter-tuning
Entity Matching (EM) involves identifying different data representations
referring to the same entity from multiple data sources and is typically
formulated as a binary classification problem. It is a challenging problem in
data integration due to the heterogeneity of data representations.
State-of-the-art solutions have adopted NLP techniques based on pre-trained
language models (PrLMs) via the fine-tuning paradigm; however, sequential
fine-tuning of overparameterized PrLMs can lead to catastrophic forgetting,
especially in low-resource scenarios. In this study, we propose a
parameter-efficient paradigm for fine-tuning PrLMs based on adapters, small
neural networks encapsulated between layers of a PrLM, optimizing only the
adapter and classifier weights while the PrLM's parameters are frozen.
Adapter-based methods have been successfully applied to multilingual speech
problems with promising results; however, the effectiveness of these
methods when applied to EM is not yet well understood, particularly for
generalized EM with heterogeneous data. Furthermore, we explore using (i)
pre-trained adapters and (ii) invertible adapters to capture token-level
language representations, and demonstrate their benefits for transfer learning
on the generalized EM benchmark. Our results show that our solution achieves
comparable or superior performance to full-scale PrLM fine-tuning and
prompt-tuning baselines while utilizing a significantly smaller fraction
of the PrLM's parameter footprint.
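The adapter idea described above is a bottleneck network with a residual connection, inserted between frozen transformer layers; only its small down/up projections are trained. The pure-Python sketch below illustrates just the forward pass with toy dimensions and weights of our own choosing, not the paper's actual architecture or values.

```python
# A minimal numeric sketch of a bottleneck adapter: down-projection,
# nonlinearity, up-projection, and a residual connection. In a real PrLM this
# sits between frozen layers and only the adapter weights are trained; the
# dimensions and weight values here are toy assumptions.

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, vi) for vi in v]

def adapter(x, W_down, W_up):
    """h = x + W_up . relu(W_down . x)  (residual bottleneck)."""
    bottleneck = relu(matvec(W_down, x))          # project d -> r, with r << d
    return [xi + ui for xi, ui in zip(x, matvec(W_up, bottleneck))]

x = [1.0, -2.0, 0.5, 3.0]            # hidden state of width d = 4
W_down = [[0.1, 0.0, 0.2, 0.0],      # bottleneck of width r = 2
          [0.0, 0.3, 0.0, 0.1]]
W_up = [[0.5, 0.0], [0.0, 0.5], [0.2, 0.0], [0.0, 0.2]]

print(adapter(x, W_down, W_up))
```

Because of the residual connection, an adapter whose up-projection is zero is an identity map, which is why adapters can be added to a frozen PrLM without disturbing its pre-trained behaviour at initialization.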
XML stream transformer generation through program composition and dependency analysis
XML stream transformation, which sequentially processes the input XML data on the fly, makes it possible to process large-sized data within a limited amount of memory. Though efficient in memory use, stream transformation requires stateful programming, which is error-prone and hard to manage. This paper proposes a scheme for generating XML stream transformers. Given an attribute grammar definition of transformation over an XML tree structure, we systematically derive a stream transformer in two steps. First, an attribute grammar definition of the XML stream transformation is inferred by applying a program composition method. Second, a finite state transition machine is constructed through a dependency analysis. Due to the closure property of the program composition method, our scheme also allows modular construction of XML stream transformers. We have implemented a prototype XML stream transformer generator, called altSAX. The experimental results show that the generated transformers are efficient in memory consumption as well as in execution time.
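The "stateful programming" that the paper's generator is designed to avoid writing by hand looks like the SAX-style handler below: the transformer sees a stream of events and must carry its own state, keeping only O(depth) memory instead of the whole tree. This hand-written example (a trivial element rename) is our own illustration, not output of altSAX.

```python
# A hand-written SAX-style stream transformer that renames <item> elements to
# <entry>. It processes the document as a stream of events rather than a tree,
# so memory use does not grow with document size. Illustrative example only;
# not generated by altSAX.
import xml.sax
from io import StringIO

class RenameHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.out = []  # accumulated output; a real transformer would write out

    def startElement(self, name, attrs):
        self.out.append(f"<{'entry' if name == 'item' else name}>")

    def characters(self, content):
        self.out.append(content)

    def endElement(self, name):
        self.out.append(f"</{'entry' if name == 'item' else name}>")

handler = RenameHandler()
xml.sax.parse(StringIO("<list><item>a</item><item>b</item></list>"), handler)
result = "".join(handler.out)
print(result)  # <list><entry>a</entry><entry>b</entry></list>
```

Even this tiny transformer must thread state through event callbacks; for transformations that need information from elsewhere in the tree, that bookkeeping is exactly what makes manual stream programming error-prone.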
Tackling Domain-Specific Winograd Schemas with Knowledge-Based Reasoning and Machine Learning
The Winograd Schema Challenge (WSC) is a commonsense reasoning task that requires background knowledge. In this paper, we contribute to tackling the WSC in four ways. Firstly, we suggest a keyword method to define a restricted domain where distinctive high-level semantic patterns can be found; a thanking domain was defined by keywords, and the data set in this domain is used in our experiments. Secondly, we develop a high-level knowledge-based reasoning method using semantic roles, based on the method of Sharma [Sharma, 2019]. Thirdly, we propose an ensemble method that combines knowledge-based reasoning and machine learning, which shows the best performance in our experiments; as the machine learning method, we used Bidirectional Encoder Representations from Transformers (BERT) [Jacob Devlin et al., 2018; Vid Kocijan et al., 2019]. Lastly, in terms of evaluation, we suggest a "robust" accuracy measurement by modifying that of Trichelair et al. [Trichelair et al., 2018]: as with their switching method, we evaluate a model by considering its performance on trivial variants of each sentence in the test set.
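The "robust" accuracy idea can be sketched directly: a schema only counts as solved if the model is also correct on the candidate-switched variant of the same sentence. The scoring rule below is our reading of the switching evaluation; the toy predictions are illustrative, not the paper's results.

```python
# A sketch of switching-based "robust" accuracy: a prediction counts only if
# the model is also right on the candidate-switched variant of the sentence.
# The toy labels and predictions below are illustrative.

def robust_accuracy(predictions, switched_predictions, labels, switched_labels):
    """Fraction of schemas answered correctly on BOTH the original and the
    switched version of the sentence."""
    correct = sum(
        p == y and ps == ys
        for p, ps, y, ys in zip(predictions, switched_predictions,
                                labels, switched_labels)
    )
    return correct / len(labels)

labels          = ["A", "B", "A", "B"]   # gold answers, originals
switched_labels = ["B", "A", "B", "A"]   # gold answers after switching
preds           = ["A", "B", "A", "A"]   # plain accuracy would be 3/4
switched_preds  = ["B", "A", "A", "A"]   # the third success breaks on switching

print(robust_accuracy(preds, switched_preds, labels, switched_labels))  # 0.5
```

The gap between plain and robust accuracy exposes models that exploit surface cues rather than resolving the pronoun.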
Compensation methods to support generic graph editing: A case study in automated verification of schema requirements for an advanced transaction model
Compensation plays an important role in advanced transaction models, cooperative work, and workflow systems. However, compensation operations are often simply written as a^-1 in the transaction model literature. This notation ignores any operation parameters, results, and side effects. A schema designer intending to use an advanced transaction model is expected (required) to write correct method code. However, in the days of cut-and-paste, this is much easier said than done. In this paper, we demonstrate the feasibility of using an off-the-shelf theorem prover (also called a proof assistant) to perform automated verification of compensation requirements for an OODB schema. We report on the results of a case study in verification for a particular advanced transaction model that supports cooperative applications. The case study is based on an OODB schema that provides generic graph editing functionality for the creation, insertion, and manipulation of nodes and links.
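The point that the bare notation a^-1 hides parameters and results can be illustrated with a tiny graph-editing sketch of our own: compensating a node insertion must remember the insert's result (the generated node id), which the inverse notation never mentions.

```python
# A minimal sketch, with a hypothetical node store, of why compensation needs
# an operation's parameters and results: undoing insert_node requires the node
# id that the forward operation generated, which "a^-1" notation omits.

class Graph:
    def __init__(self):
        self.nodes = {}
        self.next_id = 0

    def insert_node(self, label):
        """Forward operation; returns the new node id, a result that the
        compensation must capture."""
        node_id = self.next_id
        self.next_id += 1
        self.nodes[node_id] = label
        return node_id

    def compensate_insert(self, node_id):
        """Compensation for insert_node, keyed by the forward result."""
        del self.nodes[node_id]

g = Graph()
nid = g.insert_node("A")
g.insert_node("B")
g.compensate_insert(nid)          # undoes only the first insert
print(sorted(g.nodes.values()))   # ['B']
```

Verifying that such pairs really restore the required schema invariants, even in the presence of interleaved operations, is the kind of obligation the paper discharges with a theorem prover.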