
    Scaling Causality Analysis for Production Systems

    Causality analysis reveals how program values influence each other. It is important for debugging, optimizing, and understanding the execution of programs. This thesis scales causality analysis to production systems consisting of desktop and server applications as well as large-scale Internet services. This enables developers to employ causality analysis to debug and optimize complex, modern software systems. This thesis shows that it is possible to scale causality analysis both to fine-grained instruction-level analysis and to analysis of Internet-scale distributed systems with thousands of discrete software components, by developing and employing automated methods to observe and reason about causality. First, we observe causality at a fine-grained instruction level by developing the first taint tracking framework to support tracking millions of input sources. We also introduce flexible taint tracking to allow for scoping of different queries and dynamic filtering of inputs, outputs, and relationships. Next, we introduce the Mystery Machine, which uses a "big data" approach to discover causal relationships between software components in a large-scale Internet service. We leverage the fact that large-scale Internet services receive a large number of requests in order to observe counterexamples to hypothesized causal relationships. Using discovered causal relationships, we identify the critical path for request execution and use critical path analysis to explore potential scheduling optimizations. Finally, we explore using causality to make data-quality tradeoffs in Internet services. A data-quality tradeoff is an explicit decision by a software component to return lower-fidelity data in order to improve response time or minimize resource usage. We perform a study of data-quality tradeoffs in a large-scale Internet service to show the pervasiveness of these tradeoffs. We develop DQBarge, a system that enables better data-quality tradeoffs by propagating critical information along the causal path of request processing. Our evaluation shows that DQBarge helps Internet services mitigate load spikes, improve utilization of spare resources, and implement dynamic capacity planning.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/135888/1/mcchow_1.pd
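
    To make the critical-path idea concrete, here is a minimal sketch of extracting a critical path once causal dependencies between request segments are known. The segment names, durations, and dependency edges are hypothetical; the Mystery Machine discovers such edges automatically from large volumes of production traces, rather than taking them as given.

        # Minimal sketch of critical-path analysis over discovered causal
        # dependencies. Segment names, durations, and edges are hypothetical;
        # the Mystery Machine infers such edges from production traces.
        durations = {"load_html": 50, "fetch_css": 30, "fetch_js": 80,
                     "render": 40, "run_js": 60}
        deps = {  # segment: prerequisites that must causally precede it
            "fetch_css": ["load_html"],
            "fetch_js": ["load_html"],
            "render": ["fetch_css"],
            "run_js": ["fetch_js", "render"],
        }

        def critical_path(durations, deps):
            finish = {}   # earliest finish time of each segment
            parent = {}   # predecessor on the longest (critical) path
            def resolve(seg):
                if seg in finish:
                    return finish[seg]
                start = 0
                for pre in deps.get(seg, []):
                    t = resolve(pre)
                    if t > start:
                        start, parent[seg] = t, pre
                finish[seg] = start + durations[seg]
                return finish[seg]
            end = max(durations, key=resolve)
            path = [end]
            while path[-1] in parent:
                path.append(parent[path[-1]])
            return list(reversed(path)), finish[end]

        print(critical_path(durations, deps))
        # (['load_html', 'fetch_js', 'run_js'], 190)

    Segments off the critical path (here, fetch_css and render) have slack, which is exactly what a scheduler can exploit for the optimizations the abstract mentions.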

    Information-Theoretic Causal Discovery

    It is well known that correlation does not equal causation, but how can we infer causal relations from data? Causal discovery tries to answer precisely this question by rigorously analyzing under which assumptions it is feasible to infer causal networks from passively collected, so-called observational data. In particular, causal discovery aims to infer a directed graph among a set of observed random variables under assumptions which are as realistic as possible. A key assumption in causal discovery is faithfulness: we assume that separations in the true graph imply independencies in the distribution and vice versa. If faithfulness holds and we have access to a perfect independence oracle, traditional causal discovery approaches can infer the Markov equivalence class of the true causal graph, i.e., the correct undirected network and even some of the edge directions. In a real-world setting, however, faithfulness may be violated, and we do not have access to such an independence oracle. Beyond that, we are interested in inferring the complete DAG structure, not just the Markov equivalence class. To circumvent or at least alleviate these limitations, we take an information-theoretic approach.
    In the first part of this thesis, we consider violations of faithfulness that can be induced by exclusive-or relations or cancelling paths, and develop a weaker faithfulness assumption, called 2-adjacency faithfulness, to detect some of these mechanisms. Further, we analyze under which conditions it is possible to infer the correct DAG structure even if such violations occur.
    In the second part, we focus on independence testing via conditional mutual information (CMI). CMI is an information-theoretic measure of dependence based on Shannon entropy. We first suggest estimating CMI for discrete variables via normalized maximum likelihood instead of the plug-in maximum likelihood estimator, which tends to overestimate dependencies. On top of that, we show that CMI can be consistently estimated for discrete-continuous mixture random variables by simply discretizing the continuous parts of each variable.
    Last, we consider the problem of distinguishing the two Markov equivalent graphs X → Y and Y → X, which is a necessary step towards discovering all edge directions. To solve this problem, it is necessary to make assumptions about the generating mechanism. We build upon the postulate that the cause is algorithmically independent of its mechanism, and propose two methods to approximate this postulate via the Minimum Description Length (MDL) principle: one for univariate numeric data and one for multivariate mixed-type data. Finally, we combine insights from our MDL-based approach and regression-based methods with strong guarantees, and show that we can identify cause and effect via L0-regularized regression.
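
    For orientation, the sketch below shows the plug-in estimate of I(X;Y|Z) for discrete data that the thesis takes as its starting point. This is only the baseline the thesis criticizes for overestimating dependence; the proposed normalized-maximum-likelihood correction is not reproduced here. Data values are illustrative.

        # Plug-in estimate of conditional mutual information I(X;Y|Z) for
        # discrete samples. I(X;Y|Z) = sum over (x,y,z) of
        # p(x,y,z) * log[ p(x,y,z) p(z) / (p(x,z) p(y,z)) ]; with empirical
        # counts the 1/n factors inside the log cancel.
        from collections import Counter
        from math import log2

        def cmi_plugin(xs, ys, zs):
            n = len(xs)
            pxyz = Counter(zip(xs, ys, zs))
            pxz, pyz, pz = Counter(zip(xs, zs)), Counter(zip(ys, zs)), Counter(zs)
            cmi = 0.0
            for (x, y, z), c in pxyz.items():
                cmi += (c / n) * log2(c * pz[z] / (pxz[(x, z)] * pyz[(y, z)]))
            return cmi

        x = [0, 0, 1, 1, 0, 1, 0, 1]
        y = [0, 0, 1, 1, 1, 0, 0, 1]
        z = [0, 0, 0, 0, 1, 1, 1, 1]
        print(cmi_plugin(x, y, z))  # 0.5 bits: X,Y dependent given Z = 0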

    Multiple Media Correlation: Theory and Applications

    This thesis introduces multiple media correlation, a new technology for the automatic alignment of multiple media objects such as text, audio, and video. This research began with the question: what can be learned when multiple multimedia components are analyzed simultaneously? Most ongoing research in computational multimedia has focused on queries, indexing, and retrieval within a single media type. Video is compressed and searched independently of audio; text is indexed without regard to the temporal relationships it may have to other media data. Multiple media correlation provides a framework for locating and exploiting correlations between multiple, potentially heterogeneous, media streams. The goal is computed synchronization: the determination of temporal and spatial alignments that optimize a correlation function and indicate commonality and synchronization between media objects. The model also provides a basis for comparison of media in unrelated domains. There are many real-world applications for this technology, including speaker localization, musical score alignment, and degraded media realignment. Two applications, text-to-speech alignment and parallel text alignment, are described in detail with experimental validation. Text-to-speech alignment computes the alignment between a textual transcript and speech-based audio. The presented solutions are effective for a wide variety of content and are useful not only for retrieval of content, but also in support of automatic captioning of movies and video. Parallel text alignment provides a tool for the comparison of alternative translations of the same document that is particularly useful to the classics scholar interested in comparing translation techniques or styles. The results presented in this thesis include (a) new media models more useful in analysis applications, (b) a theoretical model for multiple media correlation, (c) two practical application solutions that have widespread applicability, and (d) Xtrieve, a multimedia database retrieval system that demonstrates this new technology and its application to information retrieval. This thesis demonstrates that computed alignment of media objects is practical and can provide immediate solutions to many information retrieval and content presentation problems. It also introduces a new area for research in media data analysis.
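
    One standard way to compute a monotonic temporal alignment that optimizes a cost function is dynamic programming, as in dynamic time warping; a minimal sketch follows. The feature sequences and the cost function are hypothetical stand-ins, and the thesis's actual media models and correlation functions are considerably richer than this.

        # Dynamic-programming alignment of two feature sequences, in the
        # spirit of computed synchronization: find the monotonic alignment
        # minimizing total mismatch cost, then backtrack to recover it.
        def align(a, b, cost=lambda u, v: abs(u - v)):
            n, m = len(a), len(b)
            INF = float("inf")
            d = [[INF] * (m + 1) for _ in range(n + 1)]
            d[0][0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    step = min(d[i-1][j], d[i][j-1], d[i-1][j-1])
                    d[i][j] = cost(a[i-1], b[j-1]) + step
            # Backtrack along cheapest predecessors to recover the path.
            path, i, j = [], n, m
            while i > 0 and j > 0:
                path.append((i - 1, j - 1))
                i, j = min([(i-1, j), (i, j-1), (i-1, j-1)],
                           key=lambda p: d[p[0]][p[1]])
            return d[n][m], list(reversed(path))

        # E.g., align a short transcript feature track to a longer audio one.
        score, path = align([1, 2, 3, 4], [1, 1, 2, 3, 3, 4])
        print(score, path)  # cost 0: each element warps to its matches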

    Experience matters: women's experience of care during facility-based childbirth. A mixed-methods study on postpartum outcomes

    Background: The poor treatment women receive during facility-based childbirth is an escalating global issue with potentially adverse postnatal consequences. My thesis aims to enhance understanding of these consequences, with a focus on postnatal care-seeking behaviour, maternal mental health and breastfeeding patterns in Tucumán, Argentina.
    Objective: I sought to investigate the impact of mistreatment during childbirth (MDC) on postnatal outcomes and to explore the influence of individual, interpersonal and societal factors on this relationship.
    Methods: Employing a pragmatic epistemological framework, I adopted a mixed-methods approach. First, a systematic review of existing literature on mistreatment and its postnatal effects provided a comprehensive foundation for my research. Subsequently, I conducted semi-structured interviews and focus group discussions with women from an underserved community in Tucumán to gain qualitative insights. To complement this, I carried out a prospective cohort study with women who delivered in a public maternity hospital. Data analysis involved the capability, opportunity, motivation, and behaviour (COM-B) model, directed acyclic graphs, and factor analysis to examine behavioural impacts, association pathways, and the operationalisation of MDC. Multivariable models were applied to measure the association between MDC and postnatal outcomes, as sketched below.
    Results: The study revealed that MDC should not be operationalised as a single construct, as women perceive breaches of quality of care differently from direct physical or verbal abuse. Health literacy, social support and self-esteem were identified as psychosocial confounders in the relationship between mistreatment and postnatal outcomes. Only 26% of women in the cohort study in Tucumán accessed postnatal care, with incidences of postpartum depression and anxiety of 67% and 21%, respectively. No statistically significant association was found between MDC and care-seeking behaviour, although a possible trend emerged suggesting that women experiencing physical or verbal MDC could be more likely to seek care than those who were not mistreated.
    Conclusion: Several exploratory hypotheses are presented to explain the trend suggesting that women who are verbally or physically mistreated are more prone to seek care after birth. Additionally, three concrete contributions emerged from this work: 1) the need to differentiate the conceptualisation of MDC from its operationalisation when assessing postnatal effects; 2) the importance of integrating psychosocial factors into the theory of change when designing effective interventions; and 3) the urgency of enhancing postnatal care access to improve maternal and newborn health outcomes, regardless of women's childbirth experiences.
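
    As a hedged illustration of the multivariable step only: a logistic regression of postnatal care-seeking on MDC, adjusting for the DAG-identified psychosocial confounders. All variable names and the simulated data are hypothetical stand-ins; they do not reproduce the study's data or model specification.

        # Hypothetical sketch: adjusted association between MDC and
        # care-seeking, controlling for health literacy, social support,
        # and self-esteem (the confounders named in the abstract).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        literacy = rng.normal(size=n)
        support = rng.normal(size=n)
        esteem = rng.normal(size=n)
        # Simulated exposure and outcome, both influenced by confounders.
        mdc = rng.binomial(1, 1 / (1 + np.exp(-(0.4 * literacy - 0.3 * support))))
        care = rng.binomial(1, 1 / (1 + np.exp(-(0.2 * mdc + 0.5 * literacy))))

        X = sm.add_constant(np.column_stack([mdc, literacy, support, esteem]))
        fit = sm.Logit(care, X).fit(disp=0)
        print(np.exp(fit.params[1]))  # adjusted odds ratio for MDC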

    Integration of Logic and Probability in Terminological and Inductive Reasoning

    This thesis deals with Statistical Relational Learning (SRL), a research area combining principles and ideas from three important subfields of Artificial Intelligence: machine learning, knowledge representation and reasoning on uncertainty. Machine learning is the study of systems that improve their behavior over time with experience; the learning process typically involves a search through various generalizations of the examples, in order to discover regularities or classification rules. A wide variety of machine learning techniques have been developed in the past fifty years, most of which used propositional logic as a (limited) representation language. Recently, more expressive knowledge representations have been considered, to cope with a variable number of entities as well as the relationships that hold amongst them. These representations are mostly based on logic, which, however, has limitations when reasoning on uncertain domains. Lifting these limitations has led to a multitude of different formalisms combining probabilistic reasoning with logics, databases or logic programming, where probability theory provides a formal basis for reasoning on uncertainty.
    In this thesis we consider in particular the proposals for integrating probability in Logic Programming, since the resulting probabilistic logic programming languages present very interesting computational properties. In Probabilistic Logic Programming, the so-called "distribution semantics" has gained wide popularity. This semantics was introduced for the PRISM language (1995) but is shared by many other languages: Independent Choice Logic, Stochastic Logic Programs, CP-logic, ProbLog and Logic Programs with Annotated Disjunctions (LPADs). A program in one of these languages defines a probability distribution over normal logic programs, called worlds. This distribution is then extended to queries, and the probability of a query is obtained by marginalizing the joint distribution of the query and the programs. The languages following the distribution semantics differ in the way they define the distribution over logic programs.
    The first part of this dissertation presents techniques for learning probabilistic logic programs under the distribution semantics. Two problems are considered: parameter learning and structure learning, that is, the problems of inferring values for the parameters, or both the structure and the parameters, of the program from data. This work contributes an algorithm for parameter learning, EMBLEM, and two algorithms for structure learning (SLIPCASE and SLIPCOVER) of probabilistic logic programs (in particular LPADs). EMBLEM is based on the Expectation Maximization approach and computes the expectations directly on the Binary Decision Diagrams that are built for inference. SLIPCASE performs a beam search in the space of LPADs, while SLIPCOVER performs a beam search in the space of probabilistic clauses and a greedy search in the space of LPADs, improving on SLIPCASE's performance. All learning approaches have been evaluated in several relational real-world domains.
    The second part of the thesis concerns the field of Probabilistic Description Logics, where we consider a logical framework suitable for the Semantic Web. Description Logics (DL) are a family of formalisms for representing knowledge. Research in the field of knowledge representation and reasoning usually focuses on methods for providing high-level descriptions of the world that can be effectively used to build intelligent applications. Description Logics have been especially effective as the representation language for formal ontologies. Ontologies model a domain with the definition of concepts and their properties and relations. Ontologies are the structural frameworks for organizing information and are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, etc. They should also make it possible to ask questions about the concepts and instances described, through inference procedures. Recently, the issue of representing uncertain information in these domains has led to probabilistic extensions of DLs. The contribution of this dissertation is twofold: (1) a new semantics for the Description Logic SHOIN(D), based on the distribution semantics for probabilistic logic programs, which embeds probability; (2) a probabilistic reasoner for computing the probability of queries from uncertain knowledge bases following this semantics. The explanations of queries are encoded in Binary Decision Diagrams, with the same technique employed in the learning systems developed for LPADs. This approach has been evaluated on a real-world probabilistic ontology.
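
    The distribution semantics itself can be illustrated by brute force on a toy program: each subset of the probabilistic facts is a "world", and the probability of a query is the total mass of worlds in which it holds. The two-coin program below is a standard textbook-style example, not taken from the thesis; enumeration is exponential in the number of facts, which is precisely why the systems above encode explanations in Binary Decision Diagrams instead.

        # Brute-force distribution semantics for a toy program with two
        # independent probabilistic facts:
        #   0.6 :: heads(c1).   0.6 :: heads(c2).
        #   win :- heads(c1).   win :- heads(c2).
        from itertools import product

        facts = {"heads(c1)": 0.6, "heads(c2)": 0.6}

        def query_holds(world):
            # The query "win" succeeds if either coin lands heads.
            return world["heads(c1)"] or world["heads(c2)"]

        p_win = 0.0
        for values in product([True, False], repeat=len(facts)):
            world = dict(zip(facts, values))     # one normal logic program
            mass = 1.0
            for f, v in world.items():
                mass *= facts[f] if v else 1 - facts[f]
            if query_holds(world):
                p_win += mass                    # marginalize over worlds
        print(p_win)  # 1 - 0.4 * 0.4 = 0.84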

    Hybrid intelligence for data mining

    Today, enormous amounts of data are being recorded in all kinds of activities. This sheer size provides an excellent opportunity for data scientists to retrieve valuable information using data mining techniques. Due to the complexity of data in many neoteric problems, one-size-fits-all solutions are seldom able to provide satisfactory answers. Although the study of data mining has been active, hybrid techniques are rarely scrutinized in detail. Currently, few techniques can handle time-varying properties while performing their core functions, nor do they retrieve and combine information from heterogeneous dimensions, e.g., textual and numerical horizons. This thesis summarizes our investigations of hybrid methods that provide data mining solutions to problems involving non-trivial datasets, such as trajectories, microblogs, and financial data. First, time-varying dynamic Bayesian networks are extended to consider both causal and dynamic regularization requirements. Combined with density-based clustering, the enhancements overcome the difficulties of modeling spatio-temporal data where heterogeneous patterns, data sparseness and distribution skewness are common. Secondly, topic-based methods are proposed for emerging-outbreak and virality prediction on microblogs. Complicated models that consider structural details are popular, while others adopt overly simplified assumptions that sacrifice accuracy for efficiency. Our proposed virality prediction solution delivers the benefits of both worlds: it considers the important characteristics of a structure, yet without the burden of fine details, to reduce complexity. Thirdly, the proposed topic-based approach for microblog mining is extended to sentiment prediction problems in finance. Sentiment-of-topic models are learned from both commentaries and prices for better risk management. Moreover, a previously proposed supervised topic model provides an avenue to associate market volatility with financial news, yet it displays poor resolution at extreme regions. To overcome this problem, an extreme topic model is proposed to predict volatility in financial markets using supervised learning. By mapping extreme events onto Poisson point processes, volatile regions are magnified to reveal their hidden volatility-topic relationships. Lastly, some of the proposed hybrid methods are applied to service computing to verify that they are sufficiently generic for wider applications.
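
    The "extreme events as a Poisson point process" idea can be sketched in a few lines: flag days whose absolute return exceeds a high quantile, fit the Poisson rate of such events, and score the chance of at least one extreme day in an upcoming window. The data, threshold, and horizon below are hypothetical; the thesis couples this event view with supervised topic models over financial news, which this sketch omits.

        # Sketch: map extreme-volatility days to a Poisson point process.
        import numpy as np

        rng = np.random.default_rng(1)
        returns = rng.standard_t(df=3, size=1000) * 0.01  # fat-tailed daily returns
        threshold = np.quantile(np.abs(returns), 0.95)    # "extreme" cutoff
        extreme = np.abs(returns) > threshold

        lam = extreme.sum() / len(returns)   # MLE event rate per day
        horizon = 5                          # days ahead
        p_any = 1 - np.exp(-lam * horizon)   # P(at least one extreme day)
        print(f"rate/day={lam:.3f}, P(extreme within {horizon} days)={p_any:.2f}")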