
    What is the dimension of citation space?

    Citation networks represent the flow of information between agents. They are constrained in time and so form directed acyclic graphs with a causal structure. Here we provide novel quantitative methods to characterise that structure, adapting methods from the causal set approach to quantum gravity: we consider the networks to be embedded in a Minkowski spacetime and measure the dimension of that spacetime using the Myrheim-Meyer and midpoint-scaling estimates. We illustrate these methods on citation networks from the arXiv, Supreme Court judgements from the USA, and patents, and find that otherwise similar citation networks have measurably different dimensions. We suggest that these differences can be interpreted in terms of the level of diversity or narrowness in citation behaviour.
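    The Myrheim-Meyer estimate mentioned in this abstract can be computed directly from a citation DAG: count the causally related pairs C2 (the edges of the transitive closure) and solve f(d) = C2/N^2 for d, where f(d) = Γ(d+1)Γ(d/2)/(4Γ(3d/2)). A minimal sketch in Python, assuming a networkx DiGraph; the root-finding bracket is an illustrative choice and assumes a non-degenerate network.

```python
# Minimal sketch of the Myrheim-Meyer dimension estimate for a DAG.
import networkx as nx
from scipy.special import gamma
from scipy.optimize import brentq

def ordering_fraction(dag):
    """C2 / N^2, where C2 counts causally related pairs,
    i.e. the edges of the transitive closure of the DAG."""
    n = dag.number_of_nodes()
    closure = nx.transitive_closure_dag(dag)
    return closure.number_of_edges() / n**2

def myrheim_meyer_dimension(dag):
    """Solve f(d) = C2 / N^2 for the spacetime dimension d."""
    target = ordering_fraction(dag)
    f = lambda d: gamma(d + 1) * gamma(d / 2) / (4 * gamma(3 * d / 2)) - target
    # f is decreasing in d; the bracket [0.5, 10] covers dimensions of
    # practical interest but will fail for extremely sparse networks.
    return brentq(f, 0.5, 10.0)
```

    As a sanity check, for points sprinkled into an interval of 2D Minkowski spacetime the ordering fraction approaches 1/4, and the estimator returns a value near 2.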

    A Markov model for inferring flows in directed contact networks

    Directed contact networks (DCNs) are a particularly flexible and convenient class of temporal networks, useful for modeling and analyzing the transfer of discrete quantities in communications, transportation, epidemiology, etc. Transfers modeled by contacts typically underlie flows that associate multiple contacts based on their spatiotemporal relationships. To infer these flows, we introduce a simple inhomogeneous Markov model associated with a DCN and show how it can be effectively used for data reduction and anomaly detection through an example of kernel-level information transfers within a computer.
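    The paper's exact construction is not reproduced here; the following is a hedged sketch of one plausible reading, in which the Markov states are individual contacts (src, dst, time), a flow may step from a contact into node v only to a strictly later contact out of v, and transition probabilities are uniform over admissible successors (an illustrative choice, not necessarily the authors').

```python
# Hedged sketch: a Markov model over directed contacts.
# A contact is a (src, dst, time) triple; a flow step can move from
# contact (u, v, t) to any strictly later contact (v, w, t').

def transition_matrix(contacts):
    """contacts: list of (src, dst, time) triples.
    Returns a dict: contact index -> {successor index: probability},
    with uniform probabilities over admissible successors."""
    trans = {}
    for i, (u, v, t) in enumerate(contacts):
        succ = [j for j, (x, _, s) in enumerate(contacts)
                if x == v and s > t]  # flow continues from v, later in time
        p = 1.0 / len(succ) if succ else 0.0
        trans[i] = {j: p for j in succ}
    return trans
```

    Because the admissible successors depend on the timestamps, the resulting chain is inhomogeneous: the transition structure changes as a flow moves forward in time.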

    Flexible Execution of Plans with Choice

    Dynamic plan execution strategies allow an autonomous agent to respond to uncertainties while improving robustness and reducing the need for an overly conservative plan. Executives have improved this robustness by expanding the types of choices made dynamically, such as selecting alternate methods. However, in methods to date, these additional choices introduce substantial run-time latency. This paper presents a novel system called Drake that takes steps towards executing an expanded set of choices dynamically without significant latency. Drake frames a plan as a Disjunctive Temporal Problem (DTP) and executes it with a fast dynamic scheduling algorithm. Prior work demonstrated an efficient technique for dynamic execution of one special type of DTP by using an off-line compilation step to find the possible consistent choices and compactly record the differences between them. Drake extends this work to handle a more general set of choices by recording the minimal differences between the solutions that are required at run-time. On randomly generated structured plans with choice, we show a reduction in the size of the solution set of over two orders of magnitude compared to prior art.
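    Drake's compiled representation is not reproduced here, but the brute-force baseline it improves on is easy to state: choose one disjunct per constraint, test the resulting Simple Temporal Network (STN) for consistency, and repeat over all combinations. A sketch under that reading, with an illustrative encoding of bounds l <= t_j - t_i <= u:

```python
# Brute-force DTP consistency check: the exponential baseline that
# compiled approaches like Drake's aim to avoid re-solving at run time.
import itertools

def stn_consistent(n, bounds):
    """bounds: list of (i, j, l, u) meaning l <= t_j - t_i <= u over
    time points 0..n-1. Consistent iff the distance graph has no
    negative cycle (checked here with Floyd-Warshall)."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, l, u in bounds:
        d[i][j] = min(d[i][j], u)    # t_j - t_i <= u
        d[j][i] = min(d[j][i], -l)   # t_i - t_j <= -l
    for k in range(n):
        for a in range(n):
            for b in range(n):
                d[a][b] = min(d[a][b], d[a][k] + d[k][b])
    return all(d[i][i] >= 0 for i in range(n))

def dtp_consistent(n, disjunctive_constraints):
    """Each constraint is a list of alternative (i, j, l, u) bounds."""
    return any(stn_consistent(n, list(choice))
               for choice in itertools.product(*disjunctive_constraints))
```

    The number of combinations grows exponentially with the number of disjunctive constraints, which is why compactly recording the differences between consistent choices off-line pays off at execution time.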

    Output constraints in multimedia database systems

    Semantic errors exist as long as data are managed. Traditional database systems try to prevent these errors through integrity concepts for stored data, implemented by integrity constraints. However, integrity constraints can only detect semantic errors in elementary data. Multimedia database systems manage elementary data as well as complex media data, such as videos. For these media data we need a much broader consistency concept than traditional database systems provide. In particular, the output of media data must be taken into account: in contrast to alphanumeric data, the semantics of media data can be falsified during output, for instance when a required data quality cannot be reached or when synchronization conditions between media objects cannot be met. We therefore need output constraints, which define when the output of media data is semantically correct; the database system can check these conditions and thus guarantee that the user receives semantically sound data.
    To integrate output constraints into a multimedia database system, we have to consider the modelling, the database-internal representation, and the checking of output constraints. For modelling, we introduce a constraint language based on predicate logic that follows the same principles as traditional constraint languages. Since the language must support temporal and spatial synchronization constraints in almost the same manner, we use Allen relations for defining both kinds of synchronization. The Allen relations used in the constraint language cannot be checked efficiently, so for the database-internal representation the constraints are translated into difference constraints, a class of constraints that allows very efficient checking. For checking the consistency of output constraints we use a graph-based approach as well as an analytical approach; both require a constraint graph as the underlying data structure. We have developed algorithms for efficient checking of output constraints and demonstrated their efficiency experimentally. Finally, media data must be synchronized at output in accordance with the output constraints; to this end we developed the concept of an output schedule, which is generated from the defined output constraints. With the output constraints proposed in this thesis, semantic errors in the management of media data are considerably reduced; the contribution of this work is thus a qualitative improvement in the management of media data by database systems.
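    A hedged sketch of the checking pipeline described above, with a toy encoding: an Allen relation between two intervals is compiled to difference constraints of the form x - y <= c over start/end time points, and consistency is checked by Bellman-Ford-style relaxation over the constraint graph. The two relations shown and the unit epsilon for strict precedence are illustrative choices, not the thesis's full mapping.

```python
# Toy compilation of Allen relations to difference constraints
# (x - y <= c), checked via negative-cycle detection.

def allen_to_diffs(rel, a, b):
    """a, b: interval names. Time points are (name, 's') and (name, 'e')."""
    s, e = 's', 'e'
    if rel == 'meets':   # end of a coincides with start of b
        return [((a, e), (b, s), 0), ((b, s), (a, e), 0)]
    if rel == 'before':  # a ends strictly before b starts (epsilon = 1)
        return [((a, e), (b, s), -1)]
    raise ValueError(rel)

def interval(a):
    """Well-formedness: the start of a precedes its end."""
    return [((a, 's'), (a, 'e'), 0)]

def consistent(diffs):
    """diffs: list of (x, y, c) meaning x - y <= c. Bellman-Ford with a
    virtual source; inconsistent iff some edge remains relaxable, i.e.
    the constraint graph contains a negative cycle."""
    nodes = {v for x, y, _ in diffs for v in (x, y)}
    dist = {v: 0 for v in nodes}
    for _ in range(len(nodes)):
        for x, y, c in diffs:
            if dist[y] + c < dist[x]:
                dist[x] = dist[y] + c
    return all(dist[y] + c >= dist[x] for x, y, c in diffs)

# Example: "A meets B" together with "B before A" is contradictory.
diffs = (interval('A') + interval('B')
         + allen_to_diffs('meets', 'A', 'B')
         + allen_to_diffs('before', 'B', 'A'))
assert not consistent(diffs)
```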

    Network Inference via the Time-Varying Graphical Lasso

    Many important problems can be modeled as a system of interconnected entities, where each entity is recording time-dependent observations or measurements. In order to spot trends, detect anomalies, and interpret the temporal dynamics of such data, it is essential to understand the relationships between the different entities and how these relationships evolve over time. In this paper, we introduce the time-varying graphical lasso (TVGL), a method of inferring time-varying networks from raw time series data. We cast the problem in terms of estimating a sparse time-varying inverse covariance matrix, which reveals a dynamic network of interdependencies between the entities. Since dynamic network inference is a computationally expensive task, we derive a scalable message-passing algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem in an efficient way. We also discuss several extensions, including a streaming algorithm to update the model and incorporate new observations in real time. Finally, we evaluate our TVGL algorithm on both real and synthetic datasets, obtaining interpretable results and outperforming state-of-the-art baselines in terms of both accuracy and scalability.
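    For small problems the TVGL objective can be written down directly; the sketch below uses CVXPY with an element-wise l1 penalty on consecutive differences (one possible choice of temporal penalty) and illustrative values for the regularization weights. The paper's contribution is the scalable ADMM solver, which this direct formulation does not reproduce.

```python
# Direct (non-scalable) formulation of a time-varying graphical lasso
# objective in CVXPY, for small p and T only. The temporal penalty and
# the weights lam, beta are illustrative choices.
import cvxpy as cp
import numpy as np

def tvgl_direct(covs, lam=0.1, beta=1.0):
    """covs: list of T empirical covariance matrices, each p x p."""
    p = covs[0].shape[0]
    thetas = [cp.Variable((p, p), symmetric=True) for _ in covs]
    off = 1 - np.eye(p)  # mask so only off-diagonal entries are penalized
    obj = 0
    for S, Th in zip(covs, thetas):
        obj += -cp.log_det(Th) + cp.trace(S @ Th)          # Gaussian negative log-likelihood
        obj += lam * cp.sum(cp.abs(cp.multiply(off, Th)))  # sparsity penalty
    for T0, T1 in zip(thetas, thetas[1:]):
        obj += beta * cp.sum(cp.abs(T1 - T0))              # temporal smoothness penalty
    constraints = [Th >> 1e-6 * np.eye(p) for Th in thetas]
    return cp.Problem(cp.Minimize(obj), constraints), thetas
```

    Solving with `prob.solve(solver=cp.SCS)` and reading off each `Th.value` yields one estimated inverse covariance matrix, and hence one network, per time step.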

    Managing Supply Chain Events to Build Sense-and-Respond Capability

    As supply chains become more dynamic, there is a need for a sense-and-respond capability to react to events in a real-time manner. In this paper, we propose Petri nets extended with time and color (for case data) as a formalism for doing so. To this end, we describe seven basic patterns that capture modeling concepts arising commonly in supply chains. These basic patterns may be used on their own or combined to create new patterns. Next, we show how to use the patterns as building blocks to model a complete supply chain and analyze it using dependency graphs and simulation. Dependency graphs can be used to analyze the various events and their causes. In addition, simulation was used to analyze various performance indicators (e.g., fill rates, replenishment times, and lead times) under different supply chain strategies, and we performed sensitivity analysis to study the effect of changing parameter values on these indicators. In the experiments, cutting the resolution time for production delays in half (strategy 1) increased the order fill rate from 89% to 95%; similarly, raising the probability of successful alternative sourcing (strategy 2) from 0.5 to 0.7 again increased the order fill rate from 89% to 95%. We show that by modeling timing and causality issues accurately, it is possible to improve supply chain performance.
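    The seven patterns themselves are not reproduced here; the sketch below only illustrates the basic mechanics such a formalism builds on, under illustrative names: tokens carry case data ("color") and a ready time, and a transition fires at time `now` once every input place holds a ready token, producing delayed output tokens.

```python
# Minimal sketch of one firing step in a Petri net extended with time
# and color. Structure and names are illustrative, not the paper's
# pattern catalogue.
import heapq, itertools

_seq = itertools.count()  # tie-breaker so the heap never compares token data

def add_token(marking, place, ready_time, data):
    heapq.heappush(marking[place], (ready_time, next(_seq), data))

def fire(marking, transition, now):
    """marking: dict place -> heap of (ready_time, seq, token_data).
    transition: (inputs, outputs, delay, update), where update maps the
    list of consumed token data to the produced token data."""
    inputs, outputs, delay, update = transition
    if any(not marking[p] or marking[p][0][0] > now for p in inputs):
        return False  # not enabled at time `now`
    data = [heapq.heappop(marking[p])[2] for p in inputs]
    for p in outputs:
        add_token(marking, p, now + delay, update(data))
    return True
```

    Driving such steps from an event list gives a simulation from which indicators like fill rates and lead times can be read off; recording which firings consumed which tokens gives one way to reconstruct a dependency graph linking events to their causes.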