When Prolog meets generative models: a new approach for managing knowledge and planning in robotic applications
In this paper, we propose a robot-oriented knowledge management system based on the use of the Prolog language. Our framework hinges on a special organisation of the knowledge base that enables: 1. its efficient population from natural language texts using semi-automated procedures based on Large Language Models, 2. the seamless generation of temporal parallel plans for multi-robot systems through a sequence of transformations, 3. the automated translation of the plan into an executable formalism (behaviour trees). The framework is supported by a set of open-source tools and is demonstrated on a realistic application. Comment: 7 pages, 4 figures, submitted to ICRA 202
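The last step of the pipeline above, translating a plan into behaviour trees, can be sketched as follows. This is a minimal illustration under assumed conventions (the stage/robot/action names and the nested-dict representation are hypothetical, not the paper's actual formalism): steps assigned to different robots at the same stage run in parallel, and stages run in sequence.

```python
# Minimal sketch: mapping a linear multi-robot plan onto a
# behaviour-tree-like nested structure. All names are hypothetical;
# the paper's actual transformation pipeline is more elaborate.

def plan_to_bt(plan):
    """Turn a list of stages (each a list of (robot, action) pairs)
    into a nested behaviour-tree dict: same-stage steps become a
    parallel node, stages become children of a sequence node."""
    stages = []
    for stage in plan:
        if len(stage) == 1:
            robot, action = stage[0]
            stages.append({"type": "action", "robot": robot, "do": action})
        else:
            stages.append({
                "type": "parallel",
                "children": [{"type": "action", "robot": r, "do": a}
                             for r, a in stage],
            })
    return {"type": "sequence", "children": stages}

bt = plan_to_bt([
    [("r1", "goto(shelf)")],
    [("r1", "pick(box)"), ("r2", "goto(table)")],
    [("r1", "place(box, table)")],
])
```

An executor walking this structure would tick the sequence node's children in order, dispatching parallel children concurrently.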
TALK COMMONSENSE TO ME! ENRICHING LANGUAGE MODELS WITH COMMONSENSE KNOWLEDGE
Human cognition is fascinating: it is a mesh of several neural phenomena that drive our ability to constantly reason and infer about the surrounding world. In cognitive computer science, Commonsense Reasoning is the term given to our ability to infer uncertain events and reason about Cognitive Knowledge. The introduction of Commonsense into intelligent systems has long been desired, but the mechanism for this introduction remains a scientific jigsaw. Some implicitly believe that language understanding is enough to achieve some level of Commonsense [90]. On less common ground, others think that enriching language with Knowledge Graphs might be enough for human-like reasoning [63], while still others believe human-like reasoning can only be truly captured with symbolic rules and logical deduction powered by Knowledge Bases, such as taxonomies and ontologies [50]. We focus on the integration of Commonsense Knowledge into Language Models, because we believe that this integration is a step towards a beneficial embedding of Commonsense Reasoning in interactive Intelligent Systems, such as conversational assistants.
Conversational assistants, such as Amazon's Alexa, are user-driven systems. Thus, giving rise to a more human-like interaction is strongly desired in order to truly capture the user's attention and empathy. We believe that such humanistic characteristics can be achieved through the introduction of stronger Commonsense Knowledge and Reasoning to fruitfully engage with users.
To this end, we introduce a new family of models, the Relation-Aware BART (RA-BART), leveraging the language generation abilities of BART [51] together with explicit Commonsense Knowledge extracted from Commonsense Knowledge Graphs to further extend the human-like capabilities of these models.
We evaluate our model on three different tasks: Abstractive Question Answering, Text Generation conditioned on given concepts, and a Multi-Choice Question Answering task. We find that, on the generation tasks, RA-BART outperforms non-knowledge-enriched models; however, it underperforms on the multi-choice question answering task. Our project can be consulted in our open-source, public GitHub repository (Explicit Commonsense).
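The general data flow behind knowledge-enriched generation can be illustrated as follows. This is a hypothetical sketch only: RA-BART's relation-aware mechanism is architectural, not a simple prompt concatenation, and the triple format and token names below are invented for illustration.

```python
# Hypothetical sketch: serialise commonsense triples retrieved for the
# input's concepts and expose them to a seq2seq model as extra context.
# The <rel>/<sep> markers and the triples are invented, not RA-BART's.

def enrich_input(text, triples):
    """Prepend linearised (head, relation, tail) triples to the text."""
    knowledge = " ".join(f"<rel> {h} {r} {t}" for h, r, t in triples)
    return f"{knowledge} <sep> {text}" if triples else text

enriched = enrich_input(
    "Why do people carry umbrellas?",
    [("umbrella", "UsedFor", "staying dry"),
     ("rain", "Causes", "getting wet")],
)
```

A knowledge-enriched encoder would then see both the question and the retrieved relations in one input sequence.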
Rule-Based Intelligence on the Semantic Web: Implications for Military Capabilities
Rules are a key element of the Semantic Web vision, promising to provide a foundation for reasoning capabilities that underpin the intelligent manipulation and exploitation of information content. Although ontologies provide the basis for some forms of reasoning, it is unlikely that ontologies, by themselves, will support the range of knowledge-based services that are likely to be required on the Semantic Web. As such, it is important to consider the contribution that rule-based systems can make to the realization of advanced machine intelligence on the Semantic Web. This report aims to review the current state of the art with respect to semantic rule-based technologies. It provides an overview of the rules, rule languages and rule engines that are currently available to support ontology-based reasoning, and it discusses some of the limitations of these technologies in terms of their inability to cope with uncertain or imprecise data and their poor performance in some reasoning contexts. This report also describes the contribution of reasoning systems to military capabilities, and suggests that current technological shortcomings pose a significant barrier to the widespread adoption of reasoning systems within the defence community. Some solutions to these shortcomings are presented and a timescale for technology adoption within the military domain is proposed. It is suggested that application areas such as semantic integration, semantic interoperability, data fusion and situation awareness provide the best opportunities for technology adoption within the 2015 timeframe. Other capabilities, such as decision support and the emulation of human-style reasoning capabilities, are seen to depend on the resolution of significant challenges that may hinder attempts at technology adoption and exploitation within the 2020 timeframe.
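The kind of ontology-based reasoning the report surveys can be illustrated with a tiny forward-chaining example, in the spirit of semantic rule languages such as SWRL. The facts and class names below are invented for illustration; real systems operate over OWL ontologies, not Python tuples.

```python
# Minimal forward-chaining sketch: close 'type' facts under
# 'subClassOf', the ontology-style subsumption rule
#   subClassOf(A, B) & type(x, A) -> type(x, B).
# All facts are invented for illustration.

def infer_types(facts):
    """Repeatedly apply the subsumption rule until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = {("type", x, b)
               for (p1, a, b) in facts if p1 == "subClassOf"
               for (p2, x, c) in facts if p2 == "type" and c == a}
        if not new <= facts:  # any genuinely new fact derived?
            facts |= new
            changed = True
    return facts

kb = infer_types({
    ("subClassOf", "Tank", "ArmouredVehicle"),
    ("subClassOf", "ArmouredVehicle", "Vehicle"),
    ("type", "unit42", "Tank"),
})
```

Two passes of the rule derive that `unit42` is an `ArmouredVehicle` and, transitively, a `Vehicle`; production rule engines perform this kind of closure at scale.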
Topics in Programming Languages, a Philosophical Analysis through the case of Prolog
[EN] Programming languages seldom find proper anchorage in philosophy of logic, language and science. What is more, philosophy of language seems to be restricted to natural languages and linguistics, and even philosophy of logic is rarely framed in terms of programming-language topics. The logic programming paradigm and Prolog are, thus, the most adequate paradigm and programming language to work on this subject, combining natural language processing and linguistics, logic programming and construction methodology on both algorithms and procedures, within an overall philosophizing declarative status. Not only this, but the dimension of the Fifth Generation Computer Systems project related to strong AI, wherein Prolog took a major role, and its historical frame in the crucial dialectic between procedural and declarative paradigms, and between structuralist and empiricist biases, serves, in exemplar form, to treat philosophy of logic, language and science in the contemporary age as well.
In recounting Prolog's philosophical, mechanical and algorithmic harbingers, the opportunity is open to various routes. We herein shall exemplify some:
- the mechanical-computational background explored by Pascal, Leibniz, Boole, Jacquard, Babbage and Konrad Zuse, until reaching the ACE (Alan Turing) and the EDVAC (von Neumann), offers the backbone of computer architecture; in parallel, the work of Turing, Church, Gödel, Kleene, von Neumann, Shannon and others on computability, thoroughly studied in detail, permits us to interpret the evolving realm of programming languages. The proper line from the lambda calculus to the Algol family, the declarative/procedural split with the C language and Prolog, and the ensuing branching, programming-language explosion and further delimitation, are thereupon inspected so as to relate them to the proper syntax, semantics and philosophical élan of logic programming and Prolog.
From Biological to Synthetic Neurorobotics Approaches to Understanding the Structure Essential to Consciousness (Part 2)
We have been left with a big challenge: to articulate
consciousness and also to prove it in an artificial agent
against a biological standard. After introducing Boltuc's
h-consciousness in the last paper, we briefly reviewed
some salient neurology in order to sketch less a standard
than a series of targets for artificial consciousness, "most-consciousness" and "myth-consciousness."
With these targets on the horizon, we began reviewing the research
program pursued by Jun Tani and colleagues in the isolation
of the formal dynamics essential to either. In this paper,
we describe in detail Tani's research program, in order to
make the clearest case for artificial consciousness in these
systems. In the next paper, the third in the series, we will
return to Boltuc's naturalistic non-reductionism in light of
the neurorobotics models introduced (alongside some
others), and evaluate them more completely.
Statistical Relational Learning for Proteomics: Function, Interactions and Evolution
In recent years, the field of Statistical Relational Learning (SRL) [1, 2] has
produced new, powerful learning methods that are explicitly designed to solve
complex problems, such as collective classification, multi-task learning and
structured output prediction, which natively handle relational data, noise,
and partial information. Statistical-relational methods rely on some form of
First-Order Logic as a general, expressive formal language to encode both the
data instances and the relations or constraints between them. The latter encode
background knowledge on the problem domain, and are used to restrict or bias
the model search space according to the instructions of domain experts. The
new tools developed within SRL allow us to revisit old computational biology
problems in a less ad hoc fashion, and to tackle novel, more complex ones.
Motivated by these developments, in this thesis we describe and discuss the
application of SRL to three important biological problems, highlighting the
advantages, discussing the trade-offs, and pointing out the open problems.
In particular, in Chapter 3 we show how to jointly improve the outputs
of multiple correlated predictors of protein features by means of a very
general probabilistic-logical consistency layer. The logical layer, based on
grounding-specific Markov Logic networks [3], enforces a set of weighted
first-order rules encoding biologically motivated constraints between the
predictions. The refiner then improves the raw predictions so that they least
violate the constraints. Contrary to canonical methods for the prediction
of protein features, which typically take predicted correlated features as
inputs to improve the output post facto, our method can jointly refine all
predictions together, with potential gains in overall consistency. In order
to showcase our method, we integrate three stand-alone predictors of
correlated features, namely subcellular localization (Loctree [4]), disulfide
bonding state (Disulfind [5]), and metal bonding state (MetalDetector [6]),
in a way that takes into account their respective strengths and weaknesses.
The experimental results show that the refiner can improve the performance
of the underlying predictors by removing rule violations. In addition, the
proposed method is fully general, and could in principle be applied to an
array of heterogeneous predictions without requiring any change to the
underlying software.
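The refinement idea can be sketched in miniature: pick the joint assignment of two correlated binary protein features that balances the raw predictor confidences against a weighted rule. The rule and the weight below are invented for illustration; the thesis uses grounding-specific Markov Logic networks rather than this brute-force search.

```python
# Toy sketch of constraint-based refinement of correlated predictions.
# Invented rule: a residue's disulfide and metal bonding states should
# not both be positive; violating assignments are penalised.
from itertools import product

def refine(p_disulfide, p_metal, rule_weight=2.0):
    """Score each joint 0/1 assignment by the product of predictor
    confidences, penalise rule violations, and keep the best."""
    def score(d, m):
        s = (p_disulfide if d else 1 - p_disulfide)
        s *= (p_metal if m else 1 - p_metal)
        if d and m:  # rule violated
            s /= rule_weight
        return s
    return max(product([0, 1], repeat=2), key=lambda dm: score(*dm))

# The raw predictors favour both labels positive (0.9 and 0.6), but the
# rule pushes the joint refinement towards disulfide-only.
best = refine(p_disulfide=0.9, p_metal=0.6)
```

With real Markov Logic, the same trade-off is resolved by weighted maximum-a-posteriori inference over all groundings at once rather than by enumeration.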
In Chapter 4 we consider the multi-level protein–protein interaction (PPI)
prediction problem. In general, PPIs can be seen as a hierarchical process
occurring at three related levels: proteins bind by means of specific domains,
which in turn form interfaces through patches of residues. Detailed knowledge
about which domains and residues are involved in a given interaction has
extensive applications to biology, including better understanding of the
binding process and more efficient drug/enzyme design. We cast the prediction
problem in terms of multi-task learning, with one task per level (proteins,
domains and residues), and propose a machine learning method that
collectively infers the binding state of all object pairs, at all levels,
concurrently. Our method is based on Semantic Based Regularization (SBR) [7],
a flexible and theoretically sound SRL framework that employs First-Order
Logic constraints to tie the learning tasks together. Contrary to most
current PPI prediction methods, which neither identify which regions of a
protein actually instantiate an interaction nor leverage the hierarchy of
predictions, our method resolves the prediction problem down to the residue
level, enforces consistent predictions between the hierarchy levels, and
fruitfully exploits the hierarchical nature of the problem. We present
numerical results showing that our method substantially outperforms the
baseline in several experimental settings, indicating that our multi-level
formulation can indeed lead to better predictions.
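The hierarchical consistency constraint can be sketched as follows: a positive prediction at the residue level should imply positives up the hierarchy (domains, then proteins). Names are illustrative, and the thesis enforces such constraints inside Semantic Based Regularization during learning, not as the post-hoc pass shown here.

```python
# Sketch of upward consistency in the protein/domain/residue hierarchy:
# a predicted residue-residue contact implies an interaction between the
# enclosing domains, which implies one between the enclosing proteins.
# All identifiers are invented.

def enforce_hierarchy(residue_pairs, domain_of, protein_of):
    """Propagate predicted residue-level contacts up the hierarchy."""
    domain_pairs = {(domain_of[r1], domain_of[r2]) for r1, r2 in residue_pairs}
    protein_pairs = {(protein_of[d1], protein_of[d2]) for d1, d2 in domain_pairs}
    return domain_pairs, protein_pairs

domains, proteins = enforce_hierarchy(
    residue_pairs={("r10", "r55")},
    domain_of={"r10": "dA", "r55": "dB"},
    protein_of={"dA": "P1", "dB": "P2"},
)
```

In SBR, the implication is expressed as a First-Order Logic constraint whose violation is penalised during training, so all three tasks are fitted jointly rather than corrected afterwards.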
Finally, in Chapter 5 we consider the problem of predicting drug-resistant
protein mutations through a combination of Inductive Logic Programming [8, 9]
and Statistical Relational Learning. In particular, we focus on viral
proteins: viruses are typically characterized by high mutation rates, which
allow them to quickly develop drug-resistant mutations. Mining relevant rules
from mutation data can be extremely useful to understand the virus adaptation
mechanism and to design drugs that effectively counter potentially resistant
mutants. We propose a simple approach for mutant prediction where the input
consists of mutation data with drug-resistance information, either as sets of
mutations conferring resistance to a certain drug, or as sets of mutants with
information on their susceptibility to the drug. The algorithm learns a set
of relational rules characterizing drug resistance, and uses them to generate
a set of potentially resistant mutants. Learning a weighted combination of
rules allows us to attach a resistance score, as predicted by the statistical
relational model, to each generated mutant, and to select only the
highest-scoring ones. Promising results were obtained in generating resistant
mutations for both nucleoside and non-nucleoside HIV reverse transcriptase
inhibitors. The approach can be generalized quite easily to learning mutants
characterized by more complex rules correlating multiple mutations.
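The weighted-rule scoring step can be sketched in miniature. The rules, weights and mutation names below are invented for illustration (the mutation labels merely follow HIV-style notation); in the thesis the rules are learned by ILP from real drug-resistance data and weighted by the statistical relational model.

```python
# Toy sketch: score candidate mutants with a weighted combination of
# relational rules. A rule fires when the mutant carries all of the
# rule's mutations; the score is the sum of the fired rules' weights.

def resistance_score(mutant, weighted_rules):
    """Sum the weights of all rules whose mutation set the mutant covers."""
    return sum(w for rule, w in weighted_rules if rule <= mutant)

rules = [
    (frozenset({"M184V"}), 1.5),           # single-mutation rule
    (frozenset({"K103N", "Y181C"}), 2.0),  # joint-mutation rule
]
scores = {m: resistance_score(m, rules) for m in [
    frozenset({"M184V"}),
    frozenset({"K103N", "Y181C", "M184V"}),
]}
```

Ranking generated mutants by this score and keeping only the highest-scoring ones mirrors the selection step described above.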
Efficient Decision Support Systems
This series is directed to the diverse managerial professionals who are leading the transformation of individual domains by using expert information and domain knowledge to drive decision support systems (DSSs). The series offers a broad range of subjects addressed in specific areas such as health care, business management, banking, agriculture, environmental improvement, natural resource and spatial management, aviation administration, and hybrid applications of information technology aimed at interdisciplinary issues. This book series is composed of three volumes: Volume 1 consists of general concepts and methodology of DSSs; Volume 2 consists of applications of DSSs in the biomedical domain; Volume 3 consists of hybrid applications of DSSs in multidisciplinary domains. The book is shaped around decision support strategies in the new infrastructure, assisting readers in making full use of creative technology to manipulate input data and to transform information into useful decisions for decision makers.
Database marketing intelligence methodology supported by ontologies and knowledge discovery in databases
Doctoral thesis in Information Technologies and Systems. Nowadays, the environment in which companies operate is turbulent and highly competitive, and there is pressure to develop new approaches to the market and to clients. In this
context, access to information, decision support and knowledge sharing become
essential for organizational performance.
In the marketing domain, several approaches for exploiting the content of
databases have emerged. One of the most successfully used approaches has been
the knowledge discovery in databases process. On the other hand, the need for
knowledge representation and sharing has contributed to a growing development
of ontologies in several areas, such as medicine, aviation or safety.
This work crosses several areas: information technologies and systems
(specifically knowledge discovery in databases), marketing (specifically
database marketing) and ontologies. The main goal of this investigation is the
role of ontologies in supporting and assisting the knowledge discovery in
databases process in a database marketing context. Through distinct
approaches, two ontologies were created: an ontology for the knowledge
discovery in databases process, and an ontology for the database marketing
process supported by knowledge extraction from databases (reusing the former
ontology). The knowledge elicitation and validation process was based on the
Delphi method (database marketing ontology) and on a literature review
(knowledge discovery ontology). Both ontologies were built following two
methodologies: the Methontology methodology for the knowledge discovery
ontology, and the 101 methodology for the database marketing ontology. The
latter stresses the reuse of ontologies, thus enabling the reuse of the
knowledge discovery ontology within the database marketing ontology. Both
ontologies were developed with the Protege-OWL tool, which allows not only the
creation of the whole hierarchy of classes, properties and relationships, but
also the execution of inference methods through Semantic-Web rule-based
languages. The ontologies were then tested in practical cases of knowledge
extraction from marketing databases.
The application of ontologies in this investigation represents a pioneering
and innovative approach, since they are proposed to assist and support each
phase of the knowledge extraction from databases process in the database
marketing context. Through inference over the knowledge base created, it was
possible to assist the user in each phase of the database marketing process,
for instance in selecting marketing activities according to the marketing
objectives (e.g., client profiling) or in data selection actions (e.g., the
type of data to use according to the activity to be performed). Regarding
support for the knowledge discovery in databases process, it was also possible
to infer the type of algorithm to use according to the defined objective, or
the type of data pre-processing activities to carry out according to the type
of data and attribute information.
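The kind of inference described above, recommending a mining task and algorithm family from a marketing objective, can be illustrated with a plain lookup. The mapping below is invented for illustration; in the thesis, this knowledge is encoded in the OWL ontologies and queried with Semantic-Web rule languages rather than hard-coded.

```python
# Illustrative sketch of ontology-assisted method selection: from a
# marketing objective, infer the mining task and candidate algorithm
# families. The mapping is invented, not taken from the thesis.

RECOMMENDATIONS = {
    "customer profiling": ("clustering", ["k-means", "SOM"]),
    "churn prediction": ("classification", ["decision trees", "SVM"]),
    "cross-selling": ("association rules", ["Apriori"]),
}

def recommend(objective):
    """Return the mining task and algorithms suited to an objective."""
    task, algorithms = RECOMMENDATIONS[objective.lower()]
    return {"objective": objective, "task": task, "algorithms": algorithms}

rec = recommend("Customer Profiling")
```

An ontology-backed version gains over this table what the thesis emphasises: the mapping is declarative, shareable, and extensible by inference rules rather than by editing code.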
The integration of both ontologies in a more general context allows us to
propose a methodology aimed at the effective support of the database marketing
process based on the knowledge discovery in databases process, named in this
dissertation Database Marketing Intelligence. To demonstrate the viability of
the proposed methodology, the action-research method was followed, with which
the role of ontologies in assisting knowledge discovery in databases (through
a practical case) in the database marketing context was observed and tested.
For the practical application, a real database about a customer loyalty card
from a Portuguese oil company was used.
The results achieved demonstrate the success of the proposed approach in two
ways: on the one hand, it was possible to formalize and follow the whole
knowledge discovery in databases process; on the other hand, it was possible
to outline a methodology for a concrete domain supported by ontologies
(decision support in the selection of methods and tasks) and by knowledge
discovery in databases.
Fundação para a Ciência e a Tecnologia (FCT) - SFRH/BD/36541/200