Effectively Solving NP-SPEC Encodings by Translation to ASP
NP-SPEC is a language for specifying problems in NP in a declarative way. Although the semantics of the language was given by referring to Datalog with circumscription, which is very close to ASP, the only existing implementations so far are by means of ECLiPSe Prolog and via Boolean satisfiability solvers. In this paper, we present translations from NP-SPEC into ASP and provide an experimental evaluation of the existing implementations and the proposed translations using various ASP solvers. The results show that translating to ASP clearly has an edge over the existing translation into SAT, which involves an intrinsic grounding process. We also argue that it might be useful to incorporate certain language constructs of NP-SPEC into mainstream ASP.
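Declarative encodings of NP problems, whether in NP-SPEC or ASP, follow a "guess a candidate solution, check the constraints" paradigm. The following is a minimal illustrative sketch of that paradigm in Python (not NP-SPEC or ASP syntax), using graph 3-colouring as the example problem:

```python
from itertools import product

# Illustrative guess-and-check for graph 3-colouring: enumerate candidate
# colourings (the "guess") and keep one satisfying all edge constraints
# (the "check"). ASP solvers do this far more cleverly, but the declarative
# reading of the encoding is the same.

def three_colour(vertices, edges):
    """Return a valid 3-colouring as a dict, or None if none exists."""
    for assignment in product(range(3), repeat=len(vertices)):   # guess
        colour = dict(zip(vertices, assignment))
        if all(colour[u] != colour[v] for u, v in edges):        # check
            return colour
    return None

# A 4-cycle is 2-colourable, hence certainly 3-colourable.
print(three_colour("abcd", [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))
```

An ASP solver replaces the exhaustive loop with grounding plus conflict-driven search, which is precisely where the translations evaluated in the paper gain their edge over SAT-based pipelines.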
Detecting and repairing anomalous evolutions in noisy environments: logic programming formalization and complexity results
In systems where agents must interact with a partially known and dynamic world, sensors can be used to obtain further knowledge about the environment. However, sensors may be unreliable, that is, they may deliver wrong information (due, e.g., to hardware or software malfunctioning) and consequently cause agents to take wrong decisions, a scenario that should be avoided. The paper considers the problem of reasoning in noisy environments in a setting where no data (either certain or probabilistic) about the reliability of sensors is available in advance. Assuming that each agent is equipped with a background theory (in our setting, an extended logic program) encoding its general knowledge about the world, we define a concept of detecting an anomaly perceived in sensor data and the related concept of an agent recovering to a coherent status of information. In this context, the complexities of various anomaly detection and anomaly recovery problems are studied.
IFIP International Conference on Artificial Intelligence in Theory and Practice - Agents
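The detection idea can be made concrete with a much simpler stand-in than the paper's extended-logic-program formalisation: treat the background theory as a set of constraints and flag any sensor reading that violates one of them. This is a hedged, illustrative sketch only; the constraint names and rules below are invented for the example.

```python
# Hedged sketch (not the paper's logic-programming formalisation): the
# background theory is modelled as predicates over (sensor name, value),
# and a reading is anomalous if it violates any of them.

def detect_anomalies(constraints, readings):
    """Return the readings that contradict the background theory."""
    return {name: value for name, value in readings.items()
            if not all(rule(name, value) for rule in constraints)}

# Hypothetical background theory: temperatures are physically bounded below
# absolute-zero-ish limits; door sensors are binary.
constraints = [
    lambda n, v: v >= -50 if n.startswith("temp") else True,
    lambda n, v: v in (0, 1) if n.startswith("door") else True,
]
print(detect_anomalies(constraints, {"temp_1": 21.5, "temp_2": -300.0, "door_1": 1}))
```

The paper's setting is richer: anomalies are defined relative to a nonmonotonic theory, and recovery means restoring a coherent belief state, not merely discarding the offending reading.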
Enhancing Usability and Explainability of Data Systems
The recent growth of data science expanded its reach to an ever-growing user base of nonexperts, increasing the need for usability, understandability, and explainability in these systems. Enhancing usability makes data systems accessible to people with different skills and backgrounds alike, leading to democratization of data systems. Furthermore, proper understanding of data and data-driven systems is necessary for the users to trust the function of the systems that learn from data. Finally, data systems should be transparent: when a data system behaves unexpectedly or malfunctions, the users deserve a proper explanation of what caused the observed incident. Unfortunately, most existing data systems offer limited usability and support for explanations: these systems are usable only by experts with sound technical skills, and even expert users are hindered by the lack of transparency into the systems' inner workings and functions. The aim of my thesis is to bridge the usability gap between nonexpert users and complex data systems, aid all sorts of users, including expert ones, in data and system understanding, and provide explanations that help reason about unexpected outcomes involving data systems. Specifically, my thesis has the following three goals: (1) enhancing usability of data systems for nonexperts, (2) enabling data understanding that can assist users in a variety of tasks, such as achieving trust in data-driven machine learning and data cleaning, and (3) explaining causes of unexpected outcomes involving data and data systems.
For enhancing usability, we focus on example-driven user intent discovery. We develop systems based on example-driven interactions in two different settings: querying relational databases and personalized document summarization. Towards data understanding, we develop a new data-profiling primitive that can characterize tuples for which a machine-learned model is likely to produce untrustworthy predictions. We also develop an explanation framework to explain the causes of such untrustworthy predictions. Additionally, this new data-profiling primitive enables interactive data cleaning. Finally, we develop two explanation frameworks, tailored to provide explanations in debugging data system components, including the data itself. These explanation frameworks focus on explaining the root cause of a concurrent application's intermittent failure and exposing issues in the data that cause a data-driven system to malfunction.
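One simple intuition behind profiling for untrustworthy predictions is that models tend to be unreliable on tuples far from anything seen during training. The sketch below is an illustrative stand-in for that idea, not the thesis's actual primitive; the `radius` threshold and distance measure are assumptions for the example.

```python
# Hedged sketch: flag query tuples whose nearest training tuple lies
# farther away than `radius` -- a crude proxy for "the model has not seen
# anything like this and its prediction may be untrustworthy".

def untrustworthy(train, queries, radius):
    """Return a True/False flag per query tuple (True = likely untrustworthy)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [min(dist(q, t) for t in train) > radius for q in queries]

train = [(0.0, 0.0), (1.0, 1.0), (1.0, 0.0)]
print(untrustworthy(train, [(0.1, 0.1), (5.0, 5.0)], radius=1.0))  # → [False, True]
```

A real profiling primitive would characterize such regions compactly (e.g., as interpretable patterns) rather than scanning the training set per query.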
Optimal Counterfactual Explanations in Tree Ensembles
Counterfactual explanations are usually generated through heuristics that are sensitive to the search's initial conditions. The absence of guarantees of performance and robustness hinders trustworthiness. In this paper, we take a disciplined approach towards counterfactual explanations for tree ensembles. We advocate for a model-based search aiming at "optimal" explanations and propose efficient mixed-integer programming approaches. We show that isolation forests can be modeled within our framework to focus the search on plausible explanations with a low outlier score. We provide comprehensive coverage of additional constraints that model important objectives, heterogeneous data types, structural constraints on the feature space, along with resource and actionability restrictions. Our experimental analyses demonstrate that the proposed search approach requires a computational effort that is orders of magnitude smaller than previous mathematical programming algorithms. It scales up to large data sets and tree ensembles, where it provides, within seconds, systematic explanations grounded on well-defined models solved to optimality.
Comment: Author's Accepted Manuscript (AAM), to be published in the Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 2021. Open source code available at https://github.com/vidalt/OCEA
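The notion of an "optimal" counterfactual can be illustrated without a MIP solver: over a small discrete search space, an optimal counterfactual for a point is simply the closest point (under some cost, here L1 distance) whose prediction differs. This is a hedged toy sketch; the stand-in `predict` rule and the grid are invented for the example, whereas the paper encodes full tree ensembles as mixed-integer programs.

```python
from itertools import product

# Hedged toy sketch of "optimal" counterfactual search: exhaustively find
# the L1-closest point whose prediction flips. The paper achieves the same
# optimality guarantee on real tree ensembles via mixed-integer programming.

def predict(x):  # stand-in classifier: a single linear threshold rule
    return 1 if x[0] + 2 * x[1] > 3 else 0

def optimal_counterfactual(x0, grid):
    """Closest point in `grid` (L1 distance) predicted differently from x0."""
    target = 1 - predict(x0)
    candidates = [x for x in grid if predict(x) == target]
    return min(candidates,
               key=lambda x: sum(abs(a - b) for a, b in zip(x, x0)),
               default=None)

grid = list(product(range(5), repeat=2))
print(optimal_counterfactual((0, 0), grid))  # → (0, 2)
```

Exhaustive search is exponential in the number of features; the MIP formulation is what makes optimality tractable on large ensembles and continuous feature spaces.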
Exploring the visualization of student behavior in interactive learning environments
My research combines Interactive Learning Environments (ILE), Educational Data Mining (EDM), and Information Visualization (InfoVis) to inform analysts, educators, and researchers about user behavior in software, specifically in computer-based educational systems (CBEs), which include intelligent tutoring systems, computer-aided instruction tools, and educational games.
InVis is a novel visualization technique and tool I created for exploring, navigating, and understanding user interaction data. InVis reads in user-interaction data logged from students using educational systems and constructs an Interaction Network from those logs. Using these data, InVis provides an interactive environment that allows instructors and education researchers to navigate and explore the network, building new insights and discoveries about student learning.
I conducted a three-point user study, which included a quantitative task analysis, qualitative feedback, and a validated usability survey. Through this study, I show that creating an Interaction Network and visualizing it with InVis is an effective means of providing information to users about student behavior. In addition to this, I also provide four use-cases describing how InVis has been used to confirm hypotheses and debug software tutors.
A major challenge in visualizing and exploring the Interaction Network is its complexity: there are too many nodes and edges presented to understand the data efficiently. In a typical Interaction Network for twenty students, it is common to have hundreds of nodes, which has proven too many to make sense of. I present a network reduction method, based on edge frequencies, which lowers the number of edges and nodes by roughly 90% while maintaining the most important elements of the Interaction Network. Next, I compare the results of this method with three alternative approaches and show that our reduction method produces the preferred results. I also present an ordering detection method for identifying solution-path redundancy caused by the order of student actions. This method further reduces the number of nodes and edges and moves the resulting network closer to the structure of a simple graph.
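The core of frequency-based reduction can be sketched in a few lines: count how many times each edge was traversed, keep only edges above a threshold, and drop nodes left without incident edges. This is an illustrative sketch under assumed data shapes (a flat list of traversed edges), not InVis's exact algorithm.

```python
from collections import Counter

# Hedged sketch of edge-frequency network reduction: prune edges traversed
# fewer than `min_count` times, then keep only nodes that still touch an edge.

def reduce_network(edge_traversals, min_count):
    freq = Counter(edge_traversals)                    # edge -> traversal count
    kept = {e for e, c in freq.items() if c >= min_count}
    nodes = {v for e in kept for v in e}
    return kept, nodes

# Two common transitions and one rare stray action.
traversals = ([("start", "s1")] * 10 + [("s1", "goal")] * 9
              + [("start", "odd")] * 1)
edges, nodes = reduce_network(traversals, min_count=2)
print(sorted(edges))  # the rare ("start", "odd") edge is pruned
```

On real interaction data, the threshold trades completeness for readability, which is how a reduction of roughly 90% can still preserve the dominant solution paths.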
Understanding successful student solutions covers only a portion of the behaviors we are interested in as researchers and educators using computer-based educational systems; student difficulties are also important. To address areas of student difficulty, I present three different methods and two visual representations that draw the user's attention to nodes where students had difficulty. These methods highlight the nodes with the highest number of successful students, the nodes with the highest number of failing students, and the expected difficulty of each state. Combined with a visual representation, these methods can draw users' focus to potentially important nodes containing areas of difficulty for students. Lastly, I present the latest version of the InVis tool, a platform for investigating student behavior in computer-based educational systems. Through continued use of this tool, researchers can investigate many new hypotheses, research questions, and student behaviors, with the potential to facilitate a wide range of new discoveries.
Abusive adversaries in 5G and beyond IoT
5G and subsequent cellular network generations aim to extend ubiquitous connectivity to billions of Internet-of-Things (IoT) devices for their consumers. Security is a prime concern in this context, as adversaries have evolved to become smart and often employ new attack strategies. Network defenses can be enhanced against attacks by employing behavior models for devices to detect misbehavior. One example is Abusive Modeling (AM), inspired by financial technologies, which defends against adversaries operating with unlimited resources and with no intention of self-profit apart from harming the system. This article investigates behavior modeling against abusive adversaries in the context of 5G-and-beyond security functions for IoT. Security threats and countermeasures are discussed to understand AM. A complexity-security trade-off enables a better understanding of the limitations of state-based behavior modeling and paves the way, as a future direction, for developing more robust solutions against AM.
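State-based behavior modeling, whose limitations the article analyzes, can be illustrated with a minimal sketch: learn the set of state transitions observed in benign device traces, then flag any transition outside that set as potential misbehavior. This is a hedged toy example with invented state names, not the article's model.

```python
# Hedged sketch of state-based behaviour modelling for device misbehaviour:
# the "model" is simply the set of transitions seen in benign traces, and
# any unseen transition in a new trace is flagged.

def learn_model(benign_traces):
    """Collect every observed (state, next_state) pair from benign traces."""
    return {(a, b) for trace in benign_traces for a, b in zip(trace, trace[1:])}

def misbehaving(model, trace):
    """Return the transitions in `trace` that the model never observed."""
    return [(a, b) for a, b in zip(trace, trace[1:]) if (a, b) not in model]

model = learn_model([["idle", "connect", "send", "idle"],
                     ["idle", "connect", "send", "send", "idle"]])
print(misbehaving(model, ["idle", "connect", "send", "flood", "flood"]))
```

The complexity-security trade-off shows up immediately: finer-grained states catch more abuse but blow up the model, which is exactly the limitation that motivates more robust alternatives to pure state-based modeling.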
A Survey on Explainable Anomaly Detection
In the past two decades, most research on anomaly detection has focused on improving the accuracy of the detection, while largely ignoring the explainability of the corresponding methods and thus leaving the explanation of outcomes to practitioners. As anomaly detection algorithms are increasingly used in safety-critical domains, providing explanations for the high-stakes decisions made in those domains has become an ethical and regulatory requirement. Therefore, this work provides a comprehensive and structured survey on state-of-the-art explainable anomaly detection techniques. We propose a taxonomy based on the main aspects that characterize each explainable anomaly detection technique, aiming to help practitioners and researchers find the explainable anomaly detection method that best suits their needs.
Comment: Paper accepted by the ACM Transactions on Knowledge Discovery from Data (TKDD) for publication (preprint version)
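A minimal instance of an explainable anomaly detector helps fix the idea the survey organizes: a per-feature z-score detector is anomalous when any feature deviates strongly, and the offending features themselves constitute the explanation. This is an illustrative sketch only; the survey covers far richer technique families.

```python
import statistics

# Hedged sketch of a trivially explainable anomaly detector: flag a point
# if any feature's z-score exceeds a threshold, and return those features
# with their z-scores as the explanation.

def explainable_anomalies(data, point, threshold=3.0):
    explanation = {}
    for i, values in enumerate(zip(*data)):          # iterate per feature
        mu, sigma = statistics.mean(values), statistics.pstdev(values)
        z = (point[i] - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:
            explanation[i] = round(z, 2)
    return explanation  # feature index -> z-score; empty dict means "normal"

data = [(1.0, 10.0), (1.2, 9.5), (0.9, 10.2), (1.1, 9.8)]
print(explainable_anomalies(data, (1.0, 30.0)))
```

Here explainability comes for free because the score decomposes per feature; for detectors like deep autoencoders or isolation-based methods, producing such attributions is exactly the open problem the survey maps out.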