193 research outputs found
Active search in intensionally specified structured spaces
We consider an active search problem in intensionally specified structured spaces. The ultimate goal in this setting is to discover structures from structurally different partitions of a fixed but unknown target class. An example of such a process is computer-aided de novo drug design. Over the past 20 years, several Monte Carlo search heuristics have been developed for this process. Motivated by these hand-crafted search heuristics, we devise a Metropolis--Hastings sampling scheme where the acceptance probability is given by a probabilistic surrogate of the target property, modelled with a maximum-entropy conditional model. The surrogate model is updated in each iteration upon the evaluation of a selected structure. The proposed approach is consistent, and the empirical evidence indicates that it achieves a large structural variety of discovered targets.
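The loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `propose_neighbor`, `featurize`, and `evaluate` are hypothetical placeholders, and the surrogate here is a simple logistic (maximum-entropy) model updated by one gradient step per evaluation.

```python
import math
import random

def surrogate_prob(weights, features):
    """Logistic (maximum-entropy) surrogate for P(target property | structure)."""
    z = sum(w * f for w, f in zip(weights, features))
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def mh_active_search(start, propose_neighbor, featurize, evaluate,
                     weights, lr=0.1, steps=100, seed=0):
    """Metropolis--Hastings search where the acceptance probability comes
    from a probabilistic surrogate of the target property (hedged sketch)."""
    rng = random.Random(seed)
    current = start
    discovered = []
    for _ in range(steps):
        candidate = propose_neighbor(current, rng)
        # Acceptance ratio given by the surrogate of the target property.
        p_cur = surrogate_prob(weights, featurize(current))
        p_cand = surrogate_prob(weights, featurize(candidate))
        if rng.random() < min(1.0, p_cand / max(p_cur, 1e-12)):
            current = candidate
        # Evaluate the selected structure, then update the surrogate online
        # (one gradient step of logistic regression on the new label).
        label = evaluate(current)          # 1 if it has the target property
        feats = featurize(current)
        pred = surrogate_prob(weights, feats)
        weights = [w + lr * (label - pred) * f
                   for w, f in zip(weights, feats)]
        if label:
            discovered.append(current)
    return discovered, weights
```

On a toy space (integers, with unit-step proposals), the surrogate gradually biases the walk toward structures that evaluate positively, which is the mechanism the abstract credits for structural variety among discovered targets.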
Learning from AI: new trends in database technology
Recently, some researchers in the areas of database data modelling and knowledge representation in artificial intelligence have recognized that they share many common goals. In this survey paper we show the relationship between database and artificial intelligence research. We show that there has been a tendency for data models to incorporate more modelling techniques developed for knowledge representation in artificial intelligence as the desire for more application-oriented semantics, user friendliness, and flexibility has increased. Increasing the semantics of the representation is the key to capturing the "reality" of the database environment, increasing user friendliness, and facilitating the support of multiple, possibly conflicting, user views of the information contained in a database.
kLog: A Language for Logical and Relational Learning with Kernels
We introduce kLog, a novel approach to statistical relational learning. Unlike standard approaches, kLog does not represent a probability distribution directly. It is rather a language to perform kernel-based learning on expressive logical and relational representations. kLog allows users to specify learning problems declaratively. It builds on simple but powerful concepts: learning from interpretations, entity/relationship data modeling, logic programming, and deductive databases. Access by the kernel to the rich representation is mediated by a technique we call graphicalization: the relational representation is first transformed into a graph, in particular a grounded entity/relationship diagram. Subsequently, a choice of graph kernel defines the feature space. kLog supports mixed numerical and symbolic data, as well as background knowledge in the form of Prolog or Datalog programs, as in inductive logic programming systems. The kLog framework can be applied to tackle the same range of tasks that has made statistical relational learning so popular, including classification, regression, multitask learning, and collective classification. We also report on empirical comparisons showing that kLog can be either more accurate, or much faster at the same level of accuracy, than Tilde and Alchemy. kLog is GPLv3-licensed and is available at http://klog.dinfo.unifi.it along with tutorials.
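The graphicalization idea can be illustrated with a toy sketch: ground an entity/relationship interpretation into a graph (one node per entity and per relationship tuple), then let a graph kernel define the feature space. This mirrors the concept only; kLog itself is a Prolog-based language, and the kernel below is a deliberately simple depth-0, label-histogram kernel rather than any kernel kLog ships with.

```python
from collections import Counter

def graphicalize(entities, relations):
    """Ground an interpretation into an entity/relationship graph.
    entities:  {entity_id: label}
    relations: [(relation_label, entity_id_1, entity_id_2), ...]
    Returns (node_labels, edges), with one extra node per relationship
    tuple, as in a grounded E/R diagram."""
    labels = dict(entities)
    edges = []
    for i, (rlabel, a, b) in enumerate(relations):
        rnode = f"r{i}"                 # node for the relationship tuple
        labels[rnode] = rlabel
        edges += [(rnode, a), (rnode, b)]
    return labels, edges

def label_kernel(g1, g2):
    """Depth-0 Weisfeiler-Lehman-style kernel: dot product of the
    node-label histograms of the two graphs."""
    h1 = Counter(g1[0].values())
    h2 = Counter(g2[0].values())
    return sum(h1[label] * h2[label] for label in h1)
```

For two tiny molecule-like interpretations (two `atom` entities joined by one `bond` relation each), the grounded graphs share label histograms `{atom: 2, bond: 1}`, so the kernel value is 2*2 + 1*1 = 5.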
Network models: an assessment
This article – based on a larger study (Pawelec 2009) – has two aims. The more limited one is to present the network models proposed by Ronald Langacker and George Lakoff. I try to show that the two ventures rest on manifestly different assumptions, contrary to the widespread view that they are convergent or complementary. Langacker's declared aim is "descriptive adequacy": his model serves as a global representation of linguistic intuitions, rooted in convention. Lakoff, on the other hand, offers a developmental model: a fairly general abstract schema is "imagistically" specified and transformed, while the more specific schemas serve as the basis for metaphorical transfers. My wider aim is to offer a preliminary assessment of the theoretical justifications and practical potential of network models in lexical semantics.
24th International Conference on Information Modelling and Knowledge Bases
In the last three decades, information modelling and knowledge bases have become essential subjects not only in academic communities related to information systems and computer science but also in the business area where information technology is applied. The series of European–Japanese Conferences on Information Modelling and Knowledge Bases (EJC) originally started as a co-operation initiative between Japan and Finland in 1982. The practical operations were then organised by Professor Ohsuga in Japan and Professors Hannu Kangassalo and Hannu Jaakkola in Finland (Nordic countries). The geographical scope has since expanded to cover Europe and other countries. The workshop character of the conference – discussion, ample time for presentations, and a limited number of participants (50) and papers (30) – is typical for the series. Suggested topics include, but are not limited to:
1. Conceptual modelling: Modelling and specification languages; Domain-specific conceptual modelling; Concepts, concept theories and ontologies; Conceptual modelling of large and heterogeneous systems; Conceptual modelling of spatial, temporal and biological data; Methods for developing, validating and communicating conceptual models.
2. Knowledge and information modelling and discovery: Knowledge discovery, knowledge representation and knowledge management; Advanced data mining and analysis methods; Conceptions of knowledge and information; Modelling information requirements; Intelligent information systems; Information recognition and information modelling.
3. Linguistic modelling: Models of HCI; Information delivery to users; Intelligent informal querying; Linguistic foundations of information and knowledge; Fuzzy linguistic models; Philosophical and linguistic foundations of conceptual models.
4. Cross-cultural communication and social computing: Cross-cultural support systems; Integration, evolution and migration of systems; Collaborative societies; Multicultural web-based software systems; Intercultural collaboration and support systems; Social computing, behavioral modeling and prediction.
5. Environmental modelling and engineering: Environmental information systems (architecture); Spatial, temporal and observational information systems; Large-scale environmental systems; Collaborative knowledge base systems; Agent concepts and conceptualisation; Hazard prediction, prevention and steering systems.
6. Multimedia data modelling and systems: Modelling multimedia information and knowledge; Content-based multimedia data management; Content-based multimedia retrieval; Privacy and context enhancing technologies; Semantics and pragmatics of multimedia data; Metadata for multimedia information systems.
Overall we received 56 submissions. After careful evaluation, 16 papers were selected as long papers, 17 as short papers, 5 as position papers, and 3 for presentation of perspective challenges. We thank all colleagues for their support of this issue of the EJC conference, especially the program committee, the organising committee, and the programme coordination team. The long and short papers presented at the conference are revised after the conference and published in the series "Frontiers in Artificial Intelligence" by IOS Press (Amsterdam). The books "Information Modelling and Knowledge Bases" are edited by the Editing Committee of the conference. We believe that the conference will be productive and fruitful in the advance of research and application of information modelling and knowledge bases. Bernhard Thalheim, Hannu Jaakkola, Yasushi Kiyok
PolyView: an object-oriented data model for supporting multiple user views
In a typical database application, there are many different users with a great variety of skills, needs and perceptions. The problem of supporting this plethora of user views in a dynamic, data-intensive environment is the topic of this dissertation. In traditional record-based systems, all information is represented by an idealized data structure and a set of operations on that structure. User views are defined by simple variations in this structure, such as permuting field names, selecting a subset of the data, or creating links between records. Semantic database models support more complex, "natural" structures. It is often claimed that relativism is supported because semantic schemas can be correctly interpreted (by users) in different ways. The object-oriented paradigm, with its simple and elegant structural semantics, provides both simplicity and richness. Unfortunately, current object-oriented systems provide only a single object interface (or protocol). This dissertation presents PolyView, an object-oriented data model capable of simultaneously supporting many points of view. In PolyView, objects encapsulate a single structure and any number of object interfaces (view instance descriptions). PolyView therefore supports distributed mappings from user views to the underlying database structure. Algorithms are presented for generic methods which retrieve and update information through user views. PolyView "colors" queries (messages) by attaching a view identity to them. As messages are propagated through the schema, each receiving object uses the color to determine how the message is to be processed. The color is used to select the user's protocol and allows different users' queries to be processed through apparently different database structures. Because objects act independently, PolyView is a data-driven system; messages are processed without any centralized control or shared memory. Finally, PolyView provides a set of view transformations which allow view administrators to build object interfaces. Since views are supported by both global and localized mechanisms, there are transformations which operate at each of these levels. Three major categories of transformations are presented in this thesis: those which customize the schema as a whole, transformations for changing the structure of the IS-A hierarchy, and transformations for customizing attributes.
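The colored-message mechanism can be sketched in a few lines. This is a hypothetical illustration of the idea only, not PolyView's actual design: the class, method, and attribute names below are invented, and the "interface" is reduced to a per-color mapping from external names to the single encapsulated structure.

```python
class PolyObject:
    """Toy sketch of PolyView-style colored dispatch: one encapsulated
    structure, many view interfaces selected by the message's color."""

    def __init__(self, state):
        self._state = state          # the single underlying structure
        self._interfaces = {}        # color -> {external name: internal name}

    def add_interface(self, color, mapping):
        """Register a view interface (protocol) under a view color."""
        self._interfaces[color] = mapping

    def send(self, color, attr):
        """Process a message under the given view color: the color selects
        the protocol, so different views see apparently different
        structures over the same data."""
        mapping = self._interfaces[color]
        return self._state[mapping[attr]]

# An HR view sees both name and pay; a public view sees only the name.
emp = PolyObject({"salary_usd": 90000, "full_name": "Ada Lovelace"})
emp.add_interface("hr", {"name": "full_name", "pay": "salary_usd"})
emp.add_interface("public", {"name": "full_name"})
```

A message such as `emp.send("hr", "pay")` resolves through the HR protocol, while the same object rejects `"pay"` under the public color, which is the essence of supporting multiple simultaneous points of view over one structure.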