GPU Parallelism for SAT Solving Heuristics
Modern SAT solvers employ a number of smart techniques and strategies to achieve maximum efficiency in solving the Boolean Satisfiability problem. Among all the components of a solver, the branching heuristic plays a crucial role in the performance of the entire solver. Traditionally, the main branching heuristics that have appeared in the literature have been classified as look-back or look-ahead heuristics. As SAT technology has evolved, the former have become increasingly preferred, because they demand less computational effort. Graphics Processing Units (GPUs) are massively parallel devices that have spread enormously over the past few decades and offer great computing power at a relatively low cost. We describe how to exploit such computational power to efficiently implement look-ahead heuristics. Our aim is to “rehabilitate” these heuristics by showing their effectiveness in the context of a parallel SAT solver.
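As a concrete illustration of the look-ahead idea (not the paper's GPU implementation; the clause representation and all names are illustrative assumptions), the following minimal Python sketch scores each free variable by probing both polarities with unit propagation. The probes are mutually independent, which is exactly what makes the heuristic amenable to massive GPU parallelism.

```python
def unit_propagate(clauses, assignment):
    """DIMACS-style CNF: clauses are lists of nonzero ints; a positive
    literal v means variable v is True. Returns the extended assignment,
    or None on conflict."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    unassigned.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return None                    # clause falsified: conflict
            if len(unassigned) == 1:           # unit clause: force it
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

def lookahead_branch(clauses, assignment, variables):
    """Pick the free variable whose two probes imply the most literals.
    Each (variable, polarity) probe is independent of the others, so on
    a GPU they can all be evaluated in parallel."""
    best, best_score = None, -1
    for v in variables:
        if v in assignment:
            continue
        pos = unit_propagate(clauses, {**assignment, v: True})
        neg = unit_propagate(clauses, {**assignment, v: False})
        score = (len(pos) if pos else 0) + (len(neg) if neg else 0)
        if score > best_score:
            best, best_score = v, score
    return best
```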
An xAI Approach for Data-to-Text Processing with ASP
The generation of natural language text from data series has gained renewed interest as an AI research goal. Not surprisingly, the few proposals in the state of the art are based on training some system to produce a text that describes, and is coherent with, the data provided as input. The main challenges of such approaches are the proper identification of what to say (the key descriptive elements to be addressed in the data) and of how to say it: the correspondence and accuracy between data and text, the presence of contradictions/redundancy in the text, and the control of the amount of synthesis. This paper presents a framework that is compliant with xAI requirements. In particular, we model ASP/Python programs that enable explicit control of accuracy errors and of the amount of synthesis, with provably optimal solutions. The text description is hierarchically organized in a top-down structure, where text is enriched with further details according to logic rules. The structure of the generated natural language descriptions is also managed by logic rules.
2D object reconstruction with ASP
Damage to cultural heritage due to malicious human actions or to natural disasters (e.g., earthquakes, tornadoes) is nowadays more and more frequent. Considerable work is needed by professional restorers to reproduce, as faithfully as possible, the original artwork or architectural work starting from the potsherds. The tool we present in this paper is devised as a digital support for this kind of work. As soon as the fragments of the work are cataloged, a user (possibly young students, and even children, using a tablet or a smartphone as if playing a video game) can propose a partial reconstruction. The final part of the job is left to an ASP program that first performs a pre-processing step to find coherence between (sides of) fragments, and then tries to reconstruct the original object. Experiments here focus on 2D reconstruction (frescoes, reliefs, etc.).
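A toy illustration of what the pre-processing coherence test might look like; the paper does not specify its actual test, so the profile representation and scoring below are purely assumptions. Two fragment sides are sampled as profiles (e.g., of boundary color), and scored by how well one matches the reverse of the other.

```python
import numpy as np

def side_coherence(side_a, side_b):
    """Hypothetical coherence score between two fragment sides, each given
    as a profile sampled at N points along the boundary (e.g., color
    values). One profile is reversed, since matching sides face each
    other in opposite orientations."""
    a = np.asarray(side_a, dtype=float)
    b = np.asarray(side_b, dtype=float)[::-1]
    if a.shape != b.shape:
        return 0.0                       # incompatible side lengths
    # similarity in (0, 1]: 1.0 means the profiles coincide exactly
    return 1.0 / (1.0 + np.linalg.norm(a - b) / len(a))
```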
ECHO: A hierarchical combination of classical and multi-agent epistemic planning problems
The continuous interest in Artificial Intelligence (AI) has brought, among other things, the development of several scenarios where multiple artificial entities interact with each other. As in all other autonomous settings, these multi-agent systems require orchestration. This is generally achieved through techniques derived from the vast field of Automated Planning. Notably, arbitration in multi-agent domains is not only tasked with regulating how the agents act, but must also consider the interactions between the agents' information flows and must, therefore, reason on an epistemic level. This brings a substantial overhead that often diminishes the reasoning process's usability in real-world situations. To address this problem, we present ECHO, a hierarchical framework that embeds classical and multi-agent epistemic (epistemic, for brevity) planners in a single architecture. The idea is to combine (i) classical and (ii) epistemic solvers to efficiently model the agents' interactions with (i) the 'physical world' and (ii) the information flows, respectively. In particular, the presented architecture starts by planning on the 'epistemic level', with a high level of abstraction, focusing only on the information flows. Then it refines the planning process, by means of the classical planner, to fully characterize the interactions with the 'physical' world. To further optimize the solving process, we introduce the concept of macros in epistemic planning and enrich the 'classical' part of the domain with goal networks. Finally, we evaluate our approach in an actual robotic environment, showing that our architecture indeed reduces the overall computational time.
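The two-level loop can be pictured with the following hypothetical Python sketch; `epistemic_plan` and `classical_plan` are stand-ins for the embedded solvers, not ECHO's actual API, and the interfaces are assumptions.

```python
def echo_solve(epistemic_domain, classical_domain, goal,
               epistemic_plan, classical_plan):
    """Sketch of a hierarchical epistemic/classical loop: plan first on
    the epistemic level (information flows only), then refine each
    abstract step into a concrete plan for the 'physical' world."""
    abstract_plan = epistemic_plan(epistemic_domain, goal)
    if abstract_plan is None:
        return None
    concrete_plan = []
    for step in abstract_plan:
        # each epistemic action (possibly a macro bundling several
        # knowledge updates) becomes a subgoal for the classical planner
        refinement = classical_plan(classical_domain, subgoal=step)
        if refinement is None:
            return None          # a real system would backtrack here
        concrete_plan.extend(refinement)
    return concrete_plan
```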
Towards explainable data-to-text generation
In recent years there has been a renewed burst of interest in systems able to textually summarize data, producing natural language text as a description of an input data series. Many of the recently proposed approaches to the data-to-text task are based on Machine Learning (ML) and ultimately rely on Deep Learning (DL) techniques. This technological choice often prevents the system from enjoying explainability properties. In this paper we outline our ongoing research and present a framework that is ML/DL free and is conceived to be compliant with xAI requirements. In particular, we design ASP/Python programs that enable explicit control of the abstraction process, of the descriptions' accuracy and relevance, and of the amount of synthesis. We provide a critical analysis of the xAI features that should be implemented and a working proof of concept that addresses crucial aspects of the abstraction of data. In particular we discuss: how to model and output the abstraction accuracy of a concept w.r.t. the data; how to identify what to say with a controlled synthesis level, i.e., the key descriptive elements to be addressed in the data; and how to represent abstracted information by means of visual annotations on charts. The main advantages of such an approach are a trustworthy and reliable description, a transparent methodology, logically provable output, and a measured accuracy that can control the natural language modulation of descriptions.
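As a toy stand-in for the accuracy control described above (the actual framework uses ASP programs; the function name, error measure, and threshold here are illustrative assumptions), the sketch below abstracts a numeric series into a trend label and reports the measured abstraction error alongside the generated text.

```python
import numpy as np

def describe_trend(series, max_error=0.1):
    """Fit a straight line to a data series, measure how accurately the
    abstraction 'increasing/decreasing/stable' reflects the data, and
    hedge the generated sentence when the measured error is too large."""
    x = np.arange(len(series))
    slope, intercept = np.polyfit(x, series, 1)
    fit = slope * x + intercept
    # mean residual normalized by the data range: the abstraction error
    error = np.abs(series - fit).mean() / (np.ptp(series) or 1.0)
    label = ("stable" if abs(slope) < 1e-3
             else "increasing" if slope > 0 else "decreasing")
    hedge = "roughly " if error > max_error else ""
    return f"the series is {hedge}{label} (abstraction error {error:.0%})"
```

For instance, `describe_trend(np.array([1.0, 1.1, 1.3, 1.6, 2.0]))` would report an increasing trend together with its measured abstraction error.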
Modeling and Solving the Rush Hour puzzle
We introduce the physical puzzle Rush Hour and its generalization. We briefly survey its complexity limits; then we model and solve it using declarative paradigms. In particular, we provide a constraint programming encoding in MiniZinc and a model in Answer Set Programming, and we report and compare experimental results. Although this is simply a game, the kind of reasoning involved is the same that an autonomous vehicle should perform to exit a garage. This shows the potential of logic programming for problems concerning transport and self-driving cars.
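For concreteness, here is a minimal procedural sketch of the puzzle's search space: a plain breadth-first search in Python, distinct from the MiniZinc/ASP encodings the abstract refers to. The state representation and names are illustrative choices.

```python
from collections import deque

SIZE = 6  # classic Rush Hour board is 6x6

def cells(pos, length, horizontal):
    """Grid cells occupied by a vehicle anchored at pos = (row, col)."""
    r, c = pos
    return [(r, c + i) if horizontal else (r + i, c) for i in range(length)]

def solve(vehicles, start):
    """vehicles: name -> (length, horizontal); start: name -> (row, col).
    BFS over board states; the target car 'X' must reach the right edge."""
    root = tuple(sorted(start.items()))
    queue, seen = deque([(root, [])]), {root}
    while queue:
        state, path = queue.popleft()
        pos = dict(state)
        if pos['X'][1] + vehicles['X'][0] == SIZE:      # 'X' at the exit
            return path
        occupied = {c for n, p in pos.items() for c in cells(p, *vehicles[n])}
        for name, (length, horiz) in vehicles.items():
            r, c = pos[name]
            for d in (-1, 1):                           # slide one cell
                nr, nc = (r, c + d) if horiz else (r + d, c)
                new = cells((nr, nc), length, horiz)
                if all(0 <= x < SIZE for cell in new for x in cell) and \
                   not (set(new) - set(cells((r, c), length, horiz))) & occupied:
                    nxt = dict(pos); nxt[name] = (nr, nc)
                    key = tuple(sorted(nxt.items()))
                    if key not in seen:
                        seen.add(key)
                        queue.append((key, path + [(name, d)]))
    return None

# target car 'X' (length 2, horizontal) blocked by truck 'A' (length 3, vertical)
vehicles = {'X': (2, True), 'A': (3, False)}
print(solve(vehicles, {'X': (2, 0), 'A': (0, 4)}))
```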
Constraints Propagation on GPU: A Case Study for AllDifferent
The AllDifferent constraint is a fundamental tool in Constraint Programming. It naturally arises in many problems, from puzzles to scheduling and routing applications. Such popularity has prompted an extensive literature on filtering and propagation for this constraint. Motivated by the benefits that GPUs offer to other branches of AI, this paper investigates the use of GPUs to accelerate filtering and propagation. In particular, we present an efficient parallelization of the AllDifferent constraint on GPU; we analyze different design and implementation choices and evaluate the performance of the resulting system on medium to large instances of the Travelling Salesman Problem, with encouraging results.
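To make the propagation step concrete, here is the naive value-based filtering for AllDifferent in sequential Python; this is deliberately weaker than the matching-based filtering such papers typically parallelize on GPU, and the function name is illustrative.

```python
def alldifferent_propagate(domains):
    """Remove the value of every singleton domain from all other domains,
    iterating to a fixpoint. domains is a list of sets, one per variable.
    Returns False on a wipe-out (some domain becomes empty)."""
    changed = True
    while changed:
        changed = False
        for i, d in enumerate(domains):
            if len(d) != 1:
                continue
            v = next(iter(d))
            for j, e in enumerate(domains):
                if j != i and v in e:
                    e.discard(v)
                    changed = True
                    if not e:
                        return False       # inconsistent: empty domain
    return True

doms = [{1}, {1, 2}, {1, 2, 3}]
alldifferent_propagate(doms)
print(doms)                                # [{1}, {2}, {3}]
```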
The kth nearest neighbor method for estimation of entropy changes from molecular ensembles
All processes involving molecular systems entail a balance between the associated enthalpic and entropic changes. Molecular dynamics simulations of the end-points of a process provide the enthalpy in a straightforward way as an ensemble average. Obtaining absolute entropies is still an open problem, and most commonly pathway methods are used to obtain free energy changes and thereafter entropy changes. The kth nearest neighbor (kNN) method was first proposed as a general method for entropy estimation in the mathematical community 20 years ago. Later, it was applied to compute conformational, positional–orientational, and hydration entropies of molecules. Programs to compute entropies from molecular ensembles, for example from molecular dynamics (MD) trajectories, based on the kNN method are currently available. The kNN method has distinct advantages over traditional methods, namely that it can address high-dimensional spaces, which are impossible to treat with, for example, histogram-based methods without loss of resolution or drastic approximations. Application of the method requires understanding: the kth nearest neighbor method for entropy estimation; the variables relevant to biomolecular and, in general, molecular processes; the metrics associated with such variables; the practical implementation of the method, including requirements and limitations intrinsic to the method; and its applications to conformational, position/orientation, and solvation entropy. Coupling the method with general approximations for the multivariable entropy based on mutual information, it is possible to address high-dimensional problems like those involving the conformation of proteins and nucleic acids, the binding of molecules, and hydration. This article is categorized under: Molecular and Statistical Mechanics > Free Energy Methods; Theoretical and Physical Chemistry > Statistical Mechanics; Structure and Mechanism > Computational Biochemistry and Biophysics.
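For reference, the Kozachenko–Leonenko form of the kNN estimator is $\hat{H} = \psi(N) - \psi(k) + \ln V_d + \frac{d}{N}\sum_{i=1}^{N} \ln r_{i,k}$, where $r_{i,k}$ is the distance from sample $i$ to its kth nearest neighbor and $V_d$ is the volume of the d-dimensional unit ball. A minimal sketch follows, assuming a NumPy/SciPy environment; it is not one of the dedicated programs the abstract mentions, and the function name is illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(samples, k=3):
    """Kozachenko–Leonenko kNN entropy estimate in nats.
    samples: (N, d) array. Duplicate points (zero distances) must be
    jittered or removed beforehand, since log(0) diverges."""
    n, d = samples.shape
    # distance from each point to its k-th nearest neighbor (self excluded)
    r_k = cKDTree(samples).query(samples, k=k + 1)[0][:, -1]
    log_unit_ball = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)
    return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(r_k))

# sanity check: a 2-D standard normal has entropy (d/2) ln(2*pi*e) ~ 2.84 nats
rng = np.random.default_rng(0)
print(knn_entropy(rng.standard_normal((10000, 2))))
```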
Quality differences in cheeses produced by lowland and highland units of the Alpine transhumant system
The characteristics of ripened cheeses depend on a large number of factors, among which animal feeding plays an important role. Several studies have shown the influence of factors linked to forage, such as its quality or method of conservation (Verdier-Metz et al., 1998).