Congruence Closure with Free Variables (Work in Progress)
This paper presents preliminary work on the definition of a general framework for handling quantified formulas in SMT solving. Its focus is on the derivation of instances conflicting with a ground context, redefining the approach introduced in [11]. An enhanced version of the classical congruence closure algorithm, able to handle free variables, is presented.
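As background, the classical ground congruence closure algorithm that the paper extends can be sketched as follows. This is a naive fixpoint formulation for illustration only, not the paper's enhanced free-variable version, and all identifiers are my own.

```python
# Naive ground congruence closure: union-find over terms plus a fixpoint
# loop that merges congruent function applications. Illustrative only;
# the paper's version additionally handles free variables.

class CongruenceClosure:
    def __init__(self):
        self.parent = {}      # union-find parent pointers, keyed by term
        self.apps = []        # function applications: (name, args, term)

    def find(self, t):
        self.parent.setdefault(t, t)
        root = t
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[t] != root:      # path compression
            self.parent[t], t = root, self.parent[t]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

    def add_app(self, name, args, term):
        self.apps.append((name, tuple(args), term))

    def assert_eq(self, a, b):
        self.union(a, b)
        changed = True
        while changed:                     # propagate congruence to a fixpoint
            changed = False
            for f, xs, t in self.apps:
                for g, ys, u in self.apps:
                    if (f == g and len(xs) == len(ys)
                            and self.find(t) != self.find(u)
                            and all(self.find(x) == self.find(y)
                                    for x, y in zip(xs, ys))):
                        self.union(t, u)   # f(xs) = g(ys) by congruence
                        changed = True

    def equal(self, a, b):
        return self.find(a) == self.find(b)

cc = CongruenceClosure()
cc.add_app("f", ["a"], "f(a)")
cc.add_app("f", ["b"], "f(b)")
cc.assert_eq("a", "b")
assert cc.equal("f(a)", "f(b)")    # asserting a = b forces f(a) = f(b)
```

The quadratic scan over application pairs is the naive part; efficient implementations index applications by the classes of their arguments.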
Learning, Probability and Logic: Toward a Unified Approach for Content-Based Music Information Retrieval
Within the last 15 years, the field of Music Information Retrieval (MIR) has made tremendous progress in the development of algorithms for organizing and analyzing the ever-increasing and varied amount of music and music-related data available digitally. However, the development of content-based methods to enable or ameliorate multimedia retrieval remains a central challenge. In this perspective paper, we critically look at the problem of automatic chord estimation from audio recordings as a case study of content-based algorithms, and point out several bottlenecks in current approaches: expressiveness and flexibility are obtained at the expense of robustness and vice versa; available multimodal sources of information are little exploited; current architectures offer limited support for modeling multi-faceted and strongly interrelated musical information; and models are typically restricted to short-term analysis that does not account for the hierarchical temporal structure of musical signals. Dealing with music data requires the ability to tackle both uncertainty and complex relational structure at multiple levels of representation. Traditional approaches have generally treated these two aspects separately, with probability and learning the usual way to represent uncertainty in knowledge, and logical representation the usual way to represent knowledge and complex relational information. We advocate that the identified hurdles of current approaches could be overcome by recent developments in the area of Statistical Relational Artificial Intelligence (StarAI), which unifies probability, logic and (deep) learning. We show that existing approaches used in MIR find powerful extensions and unifications in StarAI, and we explain why we think it is time to consider the new perspectives offered by this promising research field.
A Tool for Producing Verified, Explainable Proofs
Mathematicians are reluctant to use interactive theorem provers. In this thesis I argue that this is because proof assistants don't emphasise explanations of proofs, and that in order to produce good explanations, the system must create proofs in a manner that mimics how humans would create proofs. My research goals are to determine what constitutes a human-like proof and to represent human-like reasoning within an interactive theorem prover to create formalised, understandable proofs. Another goal is to produce a framework to visualise the goal states of this system.
To demonstrate this, I present HumanProof: a piece of software built for the Lean 3 theorem prover. It is used for interactively creating proofs that resemble how human mathematicians reason. The system provides a visual, hierarchical representation of the goal and a system for suggesting available inference rules. The system produces output in the form of both natural language and formal proof terms, which are checked by Lean's kernel. This is made possible by a structured goal-state system that interfaces with Lean's tactic system, as detailed in Chapter 3.
In Chapter 4, I present the subtasks automation planning subsystem, which is used to produce equality proofs in a human-like fashion. The basic strategy of the subtasks system is to break a given equality problem into a hierarchy of tasks and then to maintain a stack of these tasks in order to determine the order in which to apply equational rewriting moves. This process produces equality chains for simple problems without having to resort to brute force or specialised procedures such as normalisation. This makes proofs more human-like by breaking the problem into a hierarchical set of tasks in the same way that a human would.
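The stack-of-tasks idea can be illustrated with a toy rewriter. The task and rule encodings below are my own drastic simplifications for illustration, not the thesis's actual data structures.

```python
# Toy illustration of the subtasks idea: keep a stack of goals, and let
# the top of the stack drive which rewrite move is applied next.
# String-based rewriting stands in for real term rewriting.

def rewrite_with_tasks(lhs, rhs, rules, max_steps=50):
    """Try to rewrite `lhs` into `rhs`; return the equality chain or None."""
    tasks = [rhs]          # top task: make the current term equal the goal
    chain = [lhs]
    for _ in range(max_steps):
        if chain[-1] == tasks[-1]:
            tasks.pop()                    # task achieved
            if not tasks:
                return chain               # all tasks done: equality proved
            continue
        for old, new in rules:             # apply the first enabled move
            if old in chain[-1]:
                chain.append(chain[-1].replace(old, new, 1))
                break
        else:
            return None                    # stuck: no applicable move
    return None

# "0+0+x = x" via the rule "0+t -> t", one rewrite at a time.
assert rewrite_with_tasks("0+0+x", "x", [("0+", "")]) == ["0+0+x", "0+x", "x"]
```

The returned chain is exactly the kind of equality chain the abstract describes: each step is one justified rewriting move rather than the output of a black-box normaliser.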
To produce the interface for this software, I also created the ProofWidgets system for Lean 3. This system is detailed in Chapter 5. The ProofWidgets system uses Lean's metaprogramming framework to allow users to write their own interactive, web-based user interfaces to display within the VSCode editor and in an online web-editor. The entire tactic state is available to the rendering engine, and hence expression structure and types of subexpressions can be explored interactively. The ProofWidgets system also allows the user interface to interactively edit the proof document, enabling a truly interactive modality for creating proofs; human-like or not.
In Chapter 6, the system is evaluated by asking real mathematicians about the output of the system, and what it means for a proof to be understandable to them. The user group study asks participants to rank and comment on proofs created by HumanProof alongside natural language and pure Lean proofs. The study finds that participants generally prefer the HumanProof format over the Lean format. The verbal responses collected during the study indicate that providing intuition and signposting are the most important properties of a proof that aid understanding.
Neural-symbolic learning for knowledge base completion
A query answering task computes the prediction scores of ground queries inferred from a Knowledge Base (KB). Traditional symbolic methods solve this task using ‘exact’ provers. However, they are not very scalable and are difficult to apply to current large KBs. Sub-symbolic methods have recently been proposed to address this problem. They must be trained to learn the semantics of the symbolic representation and use it to make predictions about query answering. Such predictions may rely upon unknown rules over the given KB. Not all proposed sub-symbolic systems are capable of inducing rules from the KB, and even more challenging is the learning of rules that are human-interpretable. Some approaches, e.g., those based on a Neural Theorem Prover (NTP), are able to address this problem, but with limited scalability and expressivity of the rules that they can induce.
We take inspiration from the NTP framework and propose three sub-symbolic architectures that solve the query answering task in a scalable manner while supporting the induction of more expressive rules. Two of these architectures, called Topical NTP (TNTP) and Topic-Subdomain NTP (TSNTP), address the scalability aspect. Trained representations of predicates and constants are clustered and the soft-unification of the backward chaining proof procedure that they use is controlled by these clusters. The third architecture, called Negation-as-Failure TSNTP (NAF TSNTP), addresses the expressivity of the induced rules by supporting the learning of rules with negation-as-failure. All these architectures make use of additional hyperparameters that encourage the learning of induced rules during training.
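The soft-unification step these NTP-style architectures rely on can be sketched as follows. The Gaussian-style kernel and the toy embeddings are illustrative assumptions on my part, not values or code from the thesis.

```python
import math

# Soft unification: symbols unify with a score given by the similarity of
# their trained embeddings instead of exact syntactic equality. Clustering
# (as in the TNTP/TSNTP idea above) then restricts which pairs of symbols
# are ever compared, which is where the scalability gain comes from.

def soft_unify(emb1, emb2, mu=1.0):
    """Unification score in (0, 1]; 1.0 for identical embeddings."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(emb1, emb2)))
    return math.exp(-dist / (2 * mu ** 2))

embeddings = {                       # toy "trained" representations
    "grandfatherOf": [0.90, 0.10],
    "grandpaOf":     [0.85, 0.12],
    "locatedIn":     [-0.70, 0.80],
}

# Near-synonymous predicates unify strongly; unrelated ones weakly.
assert soft_unify(embeddings["grandfatherOf"], embeddings["grandpaOf"]) > 0.9
assert soft_unify(embeddings["grandfatherOf"], embeddings["locatedIn"]) < 0.5
```

Because the score is differentiable in the embeddings, backward-chaining proofs built from such unifications can be trained end-to-end, which is what allows rule induction to emerge during training.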
Each architecture is evaluated over benchmark datasets of increasing complexity in the size of the KB, the number of predicates and constants present in the KB, and the level of incompleteness of the KB with respect to the test sets. The evaluation measures the accuracy of query answering prediction and computational time. The former uses two key metrics, AUC_PR and HITS, also adopted by existing sub-symbolic systems that solve the same task, whereas computational time is measured as CPU training time. The performance of our systems is compared against that of existing state-of-the-art sub-symbolic systems, showing that our approaches are in most cases more accurate in solving query answering tasks, whilst being more efficient in computational time. The increased accuracy in some tasks is specifically due to the learning of more expressive rules, thus demonstrating the importance of increased expressivity in rule induction.
Ambient Intelligence in health services based on ontologies and on knowledge discovery in databases and/or knowledge bases
Doctoral thesis in Biomedical Engineering. Advances in new methodologies for problem solving and in Information and Communication Technologies (ICT) enable a fundamental redesign of health care processes, based on the use and integration of data and knowledge at all levels of a healthcare environment. Indeed, new communication technologies will support the transition from institution-centric to patient-centric applications, i.e., the health care system is faced with a series of challenges, namely those concerning the quality of information and the cost-effectiveness of such processes. On the other hand, the health level already achieved in Portugal no longer allows a degradation of the quality of services provided to citizens, to whom the right to health care is constitutionally guaranteed. It is important to reconcile clinical governance with the economic management of health care services, particularly to achieve the desired sustainability of the National Health Service (Serviço Nacional de Saúde).
The delivery of cost-effective health care that allows the patient to take an active part in the caring process, the provision of evidence-based care at all levels of the system, and the effective use and reuse of information are key issues for the health care organization.
Our main aim is to find ways and methods to overcome the current scattering of information across services, departments and health sectors, where personal motivation is difficult to attain and health care treatments are becoming disparate at all levels.
Citizens, especially the sick and the needy, should be guaranteed timely health care that addresses people's real needs, delivered through practices recognized as the best from the clinical point of view. To promote this goal it is important to integrate procedures and to control results. The use of ICT should be maximized through interoperability procedures, creating truly integrated information services that consider the needs and legitimate interests of health care stakeholders.
The information and communication technology infrastructure should, therefore, reflect the view of the health care system as a seamless system in which information can flow across organizational and professional borders. Thus, the work presented in this thesis addresses key principles for a patient-centered use of technologies in health care delivery and in disease management and prevention.
A global vision of hospital care will be consolidated, integrated with other sectors and bodies of know-how, providing citizens with adequate health care services, supported by acceptable and secure information-integration and automation procedures, respecting clinical ontologies and best medical practices, ensuring the needed proximity to citizens and ultimately allowing patients to be monitored and cared for at home.
Simple low cost causal discovery using mutual information and domain knowledge
This thesis examines causal discovery within datasets, in particular observational datasets where
normal experimental manipulation is not possible. A number of machine learning techniques
are examined in relation to their use of knowledge and the insights they can provide regarding
the situation under study. Their use of prior knowledge and the causal knowledge produced by
the learners are examined. Current causal learning algorithms are discussed in terms of their
strengths and limitations. The main contribution of the thesis is a new causal learner, LUMIN, which operates with polynomial time complexity in both the number of variables and the number of records examined. It makes no prior assumptions about the form of the relationships and is capable of making extensive use of available domain information. This learner is compared to a number of current learning algorithms and is shown to be competitive with them.
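The dependence measure named in the title can be computed directly. The sketch below estimates empirical mutual information between two discrete variables; it is only the scoring ingredient of such a learner, with edge orientation from domain knowledge left out, and it is not the thesis's own code.

```python
from collections import Counter
import math

# Empirical mutual information (in bits) between two discrete variables,
# estimated from paired samples: zero for independent variables, rising
# as one variable becomes more predictive of the other.

def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

assert abs(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]) - 1.0) < 1e-9  # fully dependent
assert abs(mutual_information([0, 1, 0, 1], [0, 0, 1, 1])) < 1e-9        # independent
```

Computing this score for every variable pair is quadratic in the number of variables and linear in the number of records, which is consistent with the polynomial complexity claimed for the learner.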
A process model in platform independent and neutral formal representation for design engineering automation
An engineering design process, as part of product development (PD), needs to satisfy ever-changing customer demands by striking a balance between time, cost and quality. In order to achieve a faster lead time, improved quality and reduced PD costs for increased profits, automation methods have been developed with the help of virtual engineering. There are various methods of achieving Design Engineering Automation (DEA) with Computer-Aided (CAx) tools such as CAD/CAE/CAM, Product Lifecycle Management (PLM) and Knowledge Based Engineering (KBE). For example, Computer-Aided Design (CAD) tools enable Geometry Automation (GA), while PLM systems allow for the sharing and exchange of product knowledge throughout the PD lifecycle.
Traditional automation methods are specific to individual products and are hard-coded and bound to a proprietary tool format. Moreover, existing CAx tools and PLM systems offer bespoke islands of automation, in contrast to KBE. KBE as a design method incorporates complete design intent by including re-usable geometric and non-geometric product knowledge, as well as engineering process knowledge, for DEA covering processes such as mechanical design, analysis and manufacturing.
An extensive literature review identified a research gap: there is no generic, structured method of knowledge modelling, both informal and formal, for the mechanical design process with manufacturing knowledge (DFM/DFA) as part of model-based systems engineering (MBSE) for DEA with a KBE approach. In particular, no structured knowledge-modelling technique provides a standardised way to use platform-independent, neutral formal standards for DEA with generative modelling of the mechanical product design process and DFM with preserved semantics. A neutral formal representation in a computer- or machine-understandable format enables open-standard usage.
This thesis provides a contribution to knowledge by addressing this gap in two-steps:
• In the first step, a coherent process model, GPM-DEA, is developed as part of MBSE; it can be used for modelling mechanical design with manufacturing knowledge using a hybrid approach based on the strengths of existing modelling standards such as IDEF0, UML and SysML, with additional constructs as per the author's metamodel. The structured process model is highly granular, with complex interdependencies such as activity, object, function and rule associations, and it captures the effect of the process model on the product at both the component and geometric-attribute levels.
• In the second step, a method is provided to map the schema of the process model to equivalent platform-independent, neutral formal standards using an OWL/SWRL ontology, with system development in the Protégé tool. This enables machine interpretability with semantic clarity for DEA with generative modelling, by building queries and reasoning over a set of generic SWRL functions developed by the author.
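To give the flavour of this formal layer, a generative rule at the second step might look like the following SWRL fragment. Every class and property name here is invented for illustration and does not come from the author's ontology; only the built-in `swrlb:lessThan` is part of the SWRL standard.

```
Activity(?a) ^ producesFeature(?a, ?f) ^ Hole(?f) ^
hasDiameter(?f, ?d) ^ swrlb:lessThan(?d, 3.0)
    -> requiresProcess(?a, SmallHoleDrilling)
```

A reasoner run from Protégé would then infer `requiresProcess` assertions for every matching activity, which is the kind of query-and-reasoning capability over generic rules that the step above describes.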
Model development has been performed with the aid of literature analysis and pilot use cases. Experimental verification with test use cases has confirmed the reasoning and querying capability of the formal axioms in generating accurate results. Among the other key strengths, the knowledge base is generic, scalable and extensible, and hence provides re-usability and wider design-space exploration. The generative modelling capability allows the model to generate activities and objects based on the functional requirements of the mechanical design process with DFM/DFA, and rules based on logic. With the help of an application programming interface, a platform-specific DEA system, such as a KBE tool or a CAD tool enabling GA, or a web page incorporating engineering knowledge for decision support, can consume the relevant part of the knowledge base.
How to Win First-Order Safety Games
First-order (FO) transition systems have recently attracted attention for the verification of parametric systems such as network protocols, software-defined networks or multi-agent workflows like conference management systems. Functional correctness and noninterference of these systems have conveniently been formulated as safety and hypersafety properties, respectively. In this article, we take the step from verification to synthesis, tackling the question whether it is possible to automatically synthesize predicates to enforce safety or hypersafety properties like noninterference. For that, we generalize FO transition systems to FO safety games. For FO games with monadic predicates only, we provide a complete classification into decidable and undecidable cases. For games with non-monadic predicates, we concentrate on universal first-order invariants, since these are sufficient to express a large class of properties, for example noninterference. We identify a non-trivial sub-class where invariants can be proven inductive and FO winning strategies can be effectively constructed. We also show how the extraction of weakest FO winning strategies can be reduced to second-order quantifier elimination. We demonstrate the usefulness of our approach by automatically synthesizing non-trivial FO specifications of messages in a leader election protocol, as well as for paper assignment in a conference management system, to exclude unappreciated disclosure of reports.
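Although the article's first-order setting is far richer (and undecidable in general), the core of safety-game solving can be illustrated on a finite game graph with the classical attractor construction; the encoding below is my own and is not taken from the article.

```python
# Finite-state sketch: compute the set of states from which the
# "reach" player can force a visit to the bad states; the safe player
# wins exactly from the complement. Every state is assumed to have at
# least one successor.

def attractor(states, bad, owner, moves):
    """owner[s] in {"reach", "safe"}; moves[s] is the successor list."""
    attr = set(bad)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in attr:
                continue
            if owner[s] == "reach":
                forced = any(t in attr for t in moves[s])   # one bad move suffices
            else:
                forced = all(t in attr for t in moves[s])   # every move is bad
            if forced:
                attr.add(s)
                changed = True
    return attr

# Tiny game: state 3 is bad; the safe player owns 0 and 2 and can
# avoid 3 forever by looping 0 -> 2 -> 0.
owner = {0: "safe", 1: "reach", 2: "safe", 3: "reach"}
moves = {0: [1, 2], 1: [3], 2: [0], 3: [3]}
assert attractor([0, 1, 2, 3], {3}, owner, moves) == {1, 3}
```

A winning strategy for the safe player falls out of the computation: in every safe state, pick any successor outside the attractor (here, 0 must move to 2, not to 1). Synthesizing the FO analogue of such strategies is what the article addresses.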
The Psychology of Human Thought
The “Psychology of Human Thought” is an open-access collection of peer-reviewed chapters from all areas of higher cognitive processes. The book is intended to be used as a textbook in courses on higher processes, complex cognition, human thought, and related subjects. Chapters include concept acquisition, knowledge representation, inductive and deductive reasoning, problem solving, metacognition, language, expertise, intelligence, creativity, wisdom, development of thought, affect and thought, and sections about history and about methods. The chapters are written by distinguished scholarly experts in their respective fields, coming from such diverse regions as North America, Great Britain, France, Germany, Norway, Israel, and Australia. The level of the chapters is addressed to advanced undergraduates and beginning graduate students.