1,308 research outputs found
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Analytical validation of innovative magneto-inertial outcomes: a controlled environment study.
peer reviewed
Mining Butterflies in Streaming Graphs
This thesis introduces two main-memory systems sGrapp and sGradd for performing the fundamental analytic tasks of biclique counting and concept drift detection over a streaming graph. A data-driven heuristic is used to architect the systems. To this end, initially, the growth patterns of bipartite streaming graphs are mined and the emergence principles of streaming motifs are discovered. Next, the discovered principles are (a) explained by a graph generator called sGrow; and (b) utilized to establish the requirements for efficient, effective, explainable, and interpretable management and processing of streams. sGrow is used to benchmark stream analytics, particularly in the case of concept drift detection.
sGrow displays robust realization of streaming growth patterns independent of initial conditions, scale and temporal characteristics, and model configurations. Extensive evaluations confirm the simultaneous effectiveness and efficiency of sGrapp and sGradd. sGrapp achieves a mean absolute percentage error of at most 0.05/0.14 for the cumulative butterfly count in streaming graphs with uniform/non-uniform temporal distribution, and a processing throughput of 1.5 million data records per second. sGrapp's throughput is 160x higher, and its estimation error 0.02x lower, than the baselines'. sGradd demonstrates improving performance over time, achieves zero false detection rates when no drift is present and when a drift has already been detected, and detects sequential drifts within zero to a few seconds of their occurrence, regardless of drift intervals.
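Butterflies are (2,2)-bicliques: two vertices on each side of a bipartite graph, pairwise fully connected. As a point of reference for the quantity sGrapp approximates, a minimal exact counter for a static bipartite graph can be sketched as follows (illustrative only; function and variable names are ours, and sGrapp itself uses a streaming approximation rather than this quadratic enumeration):

```python
from itertools import combinations
from math import comb

def count_butterflies(edges):
    """Exactly count butterflies ((2,2)-bicliques) in a bipartite graph.

    edges: iterable of (left, right) pairs. Baseline for illustration only,
    not sGrapp's streaming estimator.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
    total = 0
    for u, w in combinations(adj, 2):
        shared = len(adj[u] & adj[w])
        # each pair of shared right-vertices closes one butterfly with (u, w)
        total += comb(shared, 2)
    return total

# A complete 2x2 bipartite graph contains exactly one butterfly:
print(count_butterflies([("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]))  # -> 1
```

Streaming systems such as sGrapp avoid this all-pairs pass, which is the point of the approximation and the reported throughput.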
Foundations of Node Representation Learning
Low-dimensional node representations, also called node embeddings, are a cornerstone in the modeling and analysis of complex networks. In recent years, advances in deep learning have spurred the development of novel neural-network-inspired methods for learning node representations, which have largely surpassed classical 'spectral' embeddings in performance. Yet little work asks the central questions of this thesis: Why do these novel deep methods outperform their classical predecessors, and what are their limitations?
We pursue several paths to answering these questions. To further our understanding of deep embedding methods, we explore their relationship with spectral methods, which are better understood, and show that some popular deep methods are equivalent to spectral methods in a certain natural limit. We also introduce the problem of inverting node embeddings in order to probe what information they contain. Further, we propose a simple, non-deep method for node representation learning, and find it to often be competitive with modern deep graph networks in downstream performance.
To better understand the limitations of node embeddings, we prove some upper and lower bounds on their capabilities. Most notably, we prove that node embeddings are capable of exact low-dimensional representation of networks with bounded maximum degree or arboricity, and we further show that a simple algorithm can find such exact embeddings for real-world networks. By contrast, we also prove inherent limits on the ability of random graph models, including those derived from node embeddings, to capture key structural properties of networks without simply memorizing a given graph.
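For context on the classical baseline that deep methods are compared against, a spectral node embedding takes each node's coordinates from the eigenvectors of the graph Laplacian with the smallest nonzero eigenvalues. A minimal sketch (a textbook construction for illustration, not the thesis's exact method; names are ours):

```python
import numpy as np

def spectral_embedding(adj, dim):
    """Classical spectral node embedding.

    adj: dense symmetric adjacency matrix (numpy array).
    Returns an (n, dim) matrix whose rows are node coordinates given by the
    Laplacian eigenvectors with smallest nonzero eigenvalues.
    Illustrative baseline only.
    """
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                       # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(lap)      # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]             # skip the trivial constant eigenvector

# 4-cycle graph, embedded in one dimension
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
emb = spectral_embedding(A, 1)
```

The thesis's equivalence results relate deep methods to constructions of roughly this form in a suitable limit.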
Who, when, and how long? Time-sensitive social network modeling using relational event data
Social interactions between people play a central role in society, and understanding social interaction behavior is thus an important area of study in the social sciences. The Relational Event Model (REM) is a statistical tool that helps us examine the factors that motivate individuals in a social network to engage with each other and the timing of these interactions. An essential aspect of this model lies in its ability to consider the past interactions among individuals in the network, leading to a time-sensitive analysis. The primary question it addresses is how patterns that have emerged from previous interactions explain social interaction behavior and predict when the next interaction is likely to occur and who will be involved. This dissertation contributes to the study of social interaction dynamics using REM in several ways. Firstly, it offers a clear introduction to REM for psychologists, demonstrating its application in uncovering trends in social interaction behavior over time among university freshmen. Three key research questions are explored: What motivates students' social interaction behavior? How do interaction processes change as students get to know each other? How do these evolving processes influence interactions in different contexts? The main findings indicate that patterns of interaction develop early in the acquaintance process and play a significant role in predicting future interaction behavior. Moreover, this work introduces two methodologies that enhance the REM toolkit. One extends REM to explore changes in social interaction behavior over time. Another extension allows us to examine the role of the duration of interactions in explaining future interaction behavior. The proposed methods are evaluated through simulations and applied to real-world cases, including interactions between employees, interactions within a healthcare setting, and interactions amid a violent conflict.
These applications highlight how the proposed methods can be applied to deepen our understanding of how interaction patterns develop over time, aiming to gain insight into when the next interaction is likely to occur, who will be involved, and how long it will last. Finally, the dissertation includes two tutorials for using REM and testing scientific theories related to REM parameters in R. These tutorials offer step-by-step explanations and examples for researchers interested in applying REM to their own social interaction research. This allows researchers to more easily utilize REM and contribute to the further development of knowledge regarding the dynamics of social interaction behavior.
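At the heart of a REM, each dyad (sender, receiver) has an event rate, typically lambda = exp(beta . x), and the log-likelihood combines the log-rate of each observed event with a survival term over the whole risk set for the waiting time. A minimal sketch, assuming covariates that are constant between events (variable names are ours, not the dissertation's; real implementations update the statistics after every event):

```python
import math

def rem_loglik(events, stats, beta):
    """Log-likelihood of a minimal relational event model.

    events: list of (time, sender, receiver), sorted by time.
    stats:  dict mapping each dyad in the risk set to its covariate vector.
    beta:   list of coefficients.
    Each dyad's rate is exp(beta . x); an observed event contributes its
    log-rate minus the summed rate of all dyads times the waiting time.
    """
    ll = 0.0
    t_prev = 0.0
    for t, sender, receiver in events:
        rates = {d: math.exp(sum(b * x for b, x in zip(beta, stats[d])))
                 for d in stats}                    # rates over the risk set
        ll += math.log(rates[(sender, receiver)])   # the event that occurred
        ll -= (t - t_prev) * sum(rates.values())    # survival of all dyads
        t_prev = t
    return ll

# Two dyads, one covariate each; with beta = 0 every rate is exp(0) = 1,
# so one event at t = 1 gives ll = log(1) - 1.0 * 2 = -2.0
stats = {("a", "b"): [0.0], ("b", "a"): [0.0]}
ll = rem_loglik([(1.0, "a", "b")], stats, beta=[0.0])  # -> -2.0
```

Fitting then means maximizing this quantity over beta, which is what dedicated R packages for REM do.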
"Le present est plein de l'avenir, et chargé du passé": Proceedings of the XI International Leibniz Congress, 31 July – 4 August 2023, Leibniz Universität Hannover, Germany. Volume 3
[No abstract available] Funding: Deutsche Forschungsgemeinschaft (DFG), project no. 517991912; VGH Versicherung; Niedersächsisches Ministerium für Wissenschaft und Kultur (MWK)
Explainable temporal data mining techniques to support the prediction task in Medicine
In the last decades, the increasing amount of data available in all fields has raised the necessity to discover new knowledge and explain the hidden information found. On one hand, the rapid increase of interest in, and use of, artificial intelligence (AI) in computer applications has raised a parallel concern about its ability (or lack thereof) to provide understandable, or explainable, results to users. In the biomedical informatics and computer science communities, there is considerable discussion about the "un-explainable" nature of artificial intelligence, where algorithms and systems often leave users, and even developers, in the dark with respect to how results were obtained. Especially in the biomedical context, the necessity to explain an artificial intelligence system's results is legitimated by the importance of patient safety. On the other hand, current database systems enable us to store huge quantities of data. Their analysis through data mining techniques provides the possibility to extract relevant knowledge and useful hidden information. Relationships and patterns within these data could provide new medical knowledge. The analysis of such healthcare/medical data collections could greatly help to observe the health conditions of the population and extract useful information that can be exploited in the assessment of healthcare/medical processes. In particular, the prediction of medical events is essential for preventing disease, understanding disease mechanisms, and increasing patient quality of care. In this context, an important aspect is to verify whether the database content supports the capability of predicting future events. In this thesis, we start by addressing the problem of explainability, discussing some of the most significant challenges that need to be addressed with scientific and engineering rigor in a variety of biomedical domains.
We analyze the "temporal component" of explainability, focusing on different perspectives such as: the use of temporal data, the temporal task, temporal reasoning, and the dynamics of explainability with respect to the user perspective and to knowledge. Starting from this panorama, we focus our attention on two different temporal data mining techniques. With the first, based on trend abstractions, we start from the concept of Trend-Event Pattern and, moving through the concept of prediction, propose a new kind of predictive temporal pattern, namely Predictive Trend-Event Patterns (PTE-Ps). The framework aims to combine complex temporal features to extract a compact and non-redundant predictive set of patterns composed of such temporal features. With the second, based on functional dependencies, we propose a methodology for deriving a new kind of approximate temporal functional dependency, called Approximate Predictive Functional Dependencies (APFDs), based on a three-window framework. We then discuss the concept of approximation, the data complexity of deriving an APFD, the introduction of two new error measures, and finally the quality of APFDs in terms of coverage and reliability. Exploiting these methodologies, we analyze intensive care unit data from the MIMIC dataset.
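Approximate functional dependencies are commonly scored by the minimum fraction of tuples that must be removed for the dependency to hold exactly (the classical g3 error). A generic sketch of that measure for illustration — the thesis's APFDs additionally impose a three-window temporal framework and their own error measures, which are not modeled here; names are ours:

```python
from collections import Counter, defaultdict

def afd_error(rows, lhs, rhs):
    """g3-style error of a functional dependency lhs -> rhs.

    rows: list of dicts (one per tuple); lhs, rhs: lists of attribute names.
    Within each lhs-group, only rows agreeing with the most frequent rhs
    value can be kept; the error is the fraction of rows that must go.
    Generic illustration, not the thesis's APFD derivation.
    """
    groups = defaultdict(Counter)
    for row in rows:
        key = tuple(row[a] for a in lhs)
        groups[key][tuple(row[a] for a in rhs)] += 1
    kept = sum(g.most_common(1)[0][1] for g in groups.values())
    return 1 - kept / len(rows)

# x -> y is violated by one of four rows, so the error is 0.25
rows = [{"x": 1, "y": 1}, {"x": 1, "y": 1}, {"x": 1, "y": 2}, {"x": 2, "y": 3}]
err = afd_error(rows, ["x"], ["y"])  # -> 0.25
```

A temporal variant would additionally restrict which rows may be compared, e.g. to tuples falling in prescribed observation and prediction windows.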
(b2023 to 2014) The UNBELIEVABLE similarities between the ideas of some people (2006-2016) and my ideas (2002-2008) in physics (quantum mechanics, cosmology), cognitive neuroscience, philosophy of mind, and philosophy (this manuscript would require a REVOLUTION in international academy environment!)
Ratio und similitudo: die vernunftkonforme Argumentation im Dialogus des Petrus Alfonsi
Unlike the religious-polemical works of earlier authors, which center on exegetical discussion, Petrus Alfonsi argues in his Dialogus (written around 1110) equally on the basis of auctoritas (the Bible) and of ratio. This contribution discusses how Petrus Alfonsi conceptualizes and implements reason-based argumentation. In one passage, Petrus Alfonsi specifies three sources of rational knowledge. This statement is interpreted through a close reading and by drawing on its source, the work Emunoth we-Deoth of the Jewish philosopher Saadia Gaon. Petrus Alfonsi there distinguishes spontaneous knowledge through the senses, deductive argumentation from generally accepted premises (necessariae rationes), and similitudo, which can be understood as evidence-based argumentation. In the Dialogus, Petrus Alfonsi only rarely argues from premises; again and again one finds argumentation based on observable phenomena. Petrus frequently presents insights of natural philosophy, which he illustrates with examples from nature. For this procedure he also uses the term similitudo.