
    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    Learning permutation symmetries with gips in R

    The study of hidden structures in data presents challenges in modern statistics and machine learning. We introduce the gips package in R, which identifies permutation subgroup symmetries in Gaussian vectors. gips serves two main purposes: exploratory analysis that discovers hidden permutation symmetries, and estimation of the covariance matrix under permutation symmetry. It is competitive with canonical methods in dimensionality reduction while providing a new interpretation of the results. gips implements a novel Bayesian model selection procedure for Gaussian vectors invariant under a permutation subgroup, introduced in Graczyk, Ishi, Kołodziejek, Massam, Annals of Statistics, 50 (3) (2022). Comment: 36 pages, 11 figures
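gips itself is an R package, but the core idea of covariance estimation under permutation symmetry can be sketched in Python: project the sample covariance onto the space of matrices invariant under a given permutation subgroup by averaging over the group. All names below are illustrative, not part of gips's API.

```python
import numpy as np

def symmetrize_covariance(S, perms):
    """Project covariance S onto the matrices invariant under the given
    permutation subgroup: average P_g S P_g^T over all subgroup elements g."""
    p = S.shape[0]
    acc = np.zeros_like(S, dtype=float)
    for perm in perms:
        P = np.eye(p)[list(perm)]  # permutation matrix for g
        acc += P @ S @ P.T
    return acc / len(perms)

# Cyclic subgroup generated by the 3-cycle: identity, shift, shift^2
perms = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
S = np.array([[2.0, 0.5, 0.1],
              [0.5, 2.0, 0.5],
              [0.1, 0.5, 2.0]])
S_sym = symmetrize_covariance(S, perms)  # circulant, i.e. shift-invariant
```

Because the projection averages over a closed group, the result is exactly invariant under every permutation in the subgroup, which is the structural constraint gips exploits when estimating covariance matrices.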

    Mining Butterflies in Streaming Graphs

    This thesis introduces two main-memory systems, sGrapp and sGradd, for performing the fundamental analytic tasks of biclique counting and concept drift detection over a streaming graph. A data-driven heuristic is used to architect the systems. To this end, the growth patterns of bipartite streaming graphs are first mined and the emergence principles of streaming motifs are discovered. Next, the discovered principles are (a) explained by a graph generator called sGrow; and (b) utilized to establish the requirements for efficient, effective, explainable, and interpretable management and processing of streams. sGrow is used to benchmark stream analytics, particularly for concept drift detection. sGrow displays robust realization of streaming growth patterns independent of initial conditions, scale, temporal characteristics, and model configurations. Extensive evaluations confirm the simultaneous effectiveness and efficiency of sGrapp and sGradd. sGrapp achieves a mean absolute percentage error of up to 0.05/0.14 for the cumulative butterfly count in streaming graphs with uniform/non-uniform temporal distribution, and a processing throughput of 1.5 million data records per second. Its throughput is 160x higher, and its estimation error 0.02x lower, than baselines. sGradd demonstrates improving performance over time, achieves zero false detection rates when there is no drift and once a drift has already been detected, and detects sequential drifts within zero to a few seconds of their occurrence, regardless of drift intervals.
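sGrapp is an approximate streaming estimator; as a point of reference, the exact (batch) butterfly count of a small bipartite graph can be computed from pairs of common neighbours. This sketch is a baseline illustration, not sGrapp's algorithm.

```python
from itertools import combinations
from math import comb

def count_butterflies(edges):
    """Exact butterfly ((2,2)-biclique) count in a bipartite graph,
    given as (left, right) edge pairs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
    total = 0
    for u, w in combinations(adj, 2):
        c = len(adj[u] & adj[w])
        total += comb(c, 2)  # each pair of shared neighbours closes a butterfly
    return total

# K_{2,2} contains exactly one butterfly
edges = [("a", 1), ("a", 2), ("b", 1), ("b", 2)]
print(count_butterflies(edges))  # 1
```

This exact pairwise method is quadratic in the number of left vertices; the point of streaming systems like sGrapp is to approximate the same quantity in one pass at much higher throughput.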

    Subgroup discovery for structured target concepts

    The main object of study in this thesis is subgroup discovery, a theoretical framework for finding subgroups in data (i.e., named sub-populations) whose behaviour with respect to a specified target concept is exceptional compared to the rest of the dataset. This is a powerful tool that conveys crucial information to a human audience, but despite past advances it has been limited to simple target concepts. In this work we propose algorithms that bring this framework to novel application domains. We introduce the concept of representative subgroups, which we use not only to ensure the fairness of a sub-population with regard to a sensitive trait, such as race or gender, but also to go beyond known trends in the data. For entities with additional relational information that can be encoded as a graph, we introduce a novel measure of robust connectedness that improves on established alternative measures of density; we then provide a method that uses this measure to discover which named sub-populations are better connected. Our contributions within subgroup discovery culminate in the introduction of kernelised subgroup discovery: a novel framework that enables the discovery of subgroups on i.i.d. target concepts with virtually any kind of structure. Importantly, our framework additionally provides a concrete and efficient tool that works out of the box without any modification, apart from specifying the Gramian of a positive definite kernel. For use within kernelised subgroup discovery, but also in any other kind of kernel method, we additionally introduce a novel random walk graph kernel. Our kernel allows fine-tuning of the alignment between the vertices of the two compared graphs during the counting of random walks, and we propose meaningful structure-aware vertex labels to utilise this new capability.
    With these contributions we thoroughly extend the applicability of subgroup discovery and ultimately re-define it as a kernel method.
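For context, the classical interestingness measure that simple-target subgroup discovery optimizes is weighted relative accuracy (WRAcc), which the kernelised framework above generalizes beyond binary targets. A minimal sketch with illustrative names:

```python
def wracc(subgroup, dataset):
    """Weighted relative accuracy for a binary target:
    coverage * (target rate inside the subgroup - overall target rate)."""
    n = len(dataset)
    covered = [row for row in dataset if subgroup(row)]
    if not covered:
        return 0.0
    p_all = sum(row["target"] for row in dataset) / n
    p_sub = sum(row["target"] for row in covered) / len(covered)
    return (len(covered) / n) * (p_sub - p_all)

data = [
    {"age": 25, "target": 1}, {"age": 30, "target": 1},
    {"age": 45, "target": 0}, {"age": 50, "target": 0},
]
print(wracc(lambda r: r["age"] < 40, data))  # 0.25
```

A positive WRAcc means the sub-population deviates from the dataset-wide target rate while still covering a non-trivial share of the data, which is exactly the "exceptional behaviour" criterion described above.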

    Digital Traces of the Mind: Using Smartphones to Capture Signals of Well-Being in Individuals

    General context and questions
    Adolescents and young adults typically use their smartphone several hours a day. Although there are concerns about how such behaviour might affect their well-being, the popularity of these powerful devices also opens novel opportunities for monitoring well-being in daily life. If successful, monitoring well-being in daily life provides novel opportunities to develop future interventions that provide personalized support to individuals at the moment they require it (just-in-time adaptive interventions). Taking an interdisciplinary approach with insights from communication, computational, and psychological science, this dissertation investigated the relation between smartphone app use and well-being and developed machine learning models to estimate an individual's well-being based on how they interact with their smartphone. To elucidate the relation between smartphone trace data and well-being and to contribute to the development of technologies for monitoring well-being in future clinical practice, this dissertation addressed two overarching questions: RQ1: Can we find empirical support for theoretically motivated relations between smartphone trace data and well-being in individuals? RQ2: Can we use smartphone trace data to monitor well-being in individuals?
    Aims
    The first aim of this dissertation was to quantify the relation between the collected smartphone trace data and momentary well-being at the sample level, but also for each individual, following recent conceptual insights and empirical findings in psychological, communication, and computational science. A strength of this personalized (or idiographic) approach is that it allows us to capture how individuals might differ in how smartphone app use is related to their well-being.
    Considering such interindividual differences is important to determine whether some individuals might benefit from spending more time on their smartphone apps whereas others do not, or even experience adverse effects. The second aim of this dissertation was to develop models for monitoring well-being in daily life. The present work pursued this transdisciplinary aim by taking a machine learning approach and evaluating to what extent we might estimate an individual's well-being based on their smartphone trace data. If such traces can help pinpoint when individuals are unwell, they might be a useful data source for developing future interventions that provide personalized support to individuals at the moment they require it (just-in-time adaptive interventions). With this aim, the dissertation follows current developments in psychoinformatics and psychiatry, where substantial research resources are invested in using smartphone traces and similar data (obtained with smartphone sensors and wearables) to develop technologies for detecting whether an individual is currently unwell or will be in the future.
    Data collection and analysis
    This work combined novel data collection techniques (digital phenotyping and experience sampling methodology) for measuring smartphone use and well-being in the daily lives of 247 student participants. For a period of up to four months, a dedicated application installed on participants' smartphones collected smartphone trace data. In the same period, participants completed a brief smartphone-based well-being survey five times a day (for 30 days in the first month and 30 days in the fourth month; up to 300 assessments in total). At each measurement, this survey comprised questions about the participants' momentary level of procrastination, stress, and fatigue, while sleep duration was measured in the morning.
    Taking a time-series and machine learning approach to analysing these data, I provide the following contributions: Chapter 2 investigates the person-specific relation between passively logged usage of different application types and momentary subjective procrastination; Chapter 3 develops machine learning methodology to estimate sleep duration using smartphone trace data; Chapter 4 combines machine learning and explainable artificial intelligence to discover smartphone-tracked digital markers of momentary subjective stress; Chapter 5 uses a personalized machine learning approach to evaluate whether smartphone trace data contain behavioral signs of fatigue. Collectively, these empirical studies provide preliminary answers to the overarching questions of this dissertation.
    Summary of results
    With respect to the theoretically motivated relations between smartphone trace data and well-being (RQ1), we found that different patterns in smartphone trace data, from time spent on social network, messenger, video, and game applications to smartphone-tracked sleep proxies, are related to well-being in individuals. The strength and nature of this relation depend on the individual and the app usage pattern under consideration. The relation between smartphone app use patterns and well-being is limited in most individuals, but relatively strong in a minority. Whereas some individuals might benefit from using specific app types, others might experience decreases in well-being when spending more time on these apps. With respect to whether we might use smartphone trace data to monitor well-being in individuals (RQ2), we found that smartphone trace data might be useful for this purpose in some individuals and to some extent. They appear most relevant in the context of sleep monitoring (Chapter 3) and have the potential to be included as one of several data sources for monitoring momentary procrastination (Chapter 2), stress (Chapter 4), and fatigue (Chapter 5) in daily life.
    Outlook
    Future interdisciplinary research is needed to investigate whether the relationship between smartphone use and well-being depends on the nature of the activities performed on these devices, the content they present, and the context in which they are used. Answering these questions is essential to unravel the complex puzzle of developing technologies for monitoring well-being in daily life.
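The idiographic (person-specific) modelling described above amounts to fitting one model per participant. The sketch below fits a per-person least-squares model on hypothetical app-use features; it illustrates the approach only and is not the dissertation's actual pipeline.

```python
import numpy as np

def fit_person_models(records):
    """Idiographic sketch: fit one least-squares model per participant,
    predicting a momentary well-being score from app-use features."""
    models = {}
    for pid, (X, y) in records.items():
        X1 = np.column_stack([np.ones(len(X)), X])  # add an intercept column
        coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
        models[pid] = coef  # person-specific [intercept, slope_1, slope_2]
    return models

rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(40, 2))      # e.g. daily hours on two app types
y = 5.0 - 1.2 * X[:, 0] + 0.3 * X[:, 1]  # one person's (noise-free) relation
models = fit_person_models({"p1": (X, y)})
```

Fitting separately per person lets the sign and size of each coefficient differ across individuals, which is exactly the interindividual heterogeneity the results above report.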

    A survey of Bayesian Network structure learning


    Efficient parameterized algorithms on structured graphs

    In classical complexity theory, the worst-case running times of algorithms depend solely on the size of the input. In parameterized complexity the goal is to refine the analysis of the running time of an algorithm by additionally considering a parameter that measures some kind of structure in the input. A parameterized algorithm then utilizes the structure described by the parameter and achieves a running time that is faster than the best general (unparameterized) algorithm for instances of low parameter value. In the first part of this thesis, we carry forward in this direction and investigate the influence of several parameters on the running times of well-known tractable problems. Several presented algorithms are adaptive algorithms, meaning that they match the running time of a best unparameterized algorithm for worst-case parameter values. Thus, an adaptive parameterized algorithm is asymptotically never worse than the best unparameterized algorithm, while it outperforms the best general algorithm already for slightly non-trivial parameter values. As illustrated in the first part of this thesis, for many problems there exist efficient parameterized algorithms regarding multiple parameters, each describing a different kind of structure. In the second part of this thesis, we explore how to combine such homogeneous structures into more general and heterogeneous structures. Using algebraic expressions, we define new combined graph classes of heterogeneous structure in a clean and robust way, and we showcase this for the heterogeneous merge of the parameters tree-depth and modular-width, by presenting parameterized algorithms on such heterogeneous graph classes and obtaining running times that match the homogeneous cases throughout.
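To illustrate the parameterized paradigm (the thesis itself parameterizes already-tractable problems, whereas this classic textbook example is NP-hard), a bounded search tree decides vertex cover in O(2^k · |E|) time, so the running time depends on the parameter k rather than only on the input size:

```python
def vertex_cover(edges, k):
    """Classic FPT bounded-search-tree algorithm: decide whether the graph
    given by its edge list has a vertex cover of size <= k."""
    if not edges:
        return True   # no edges left to cover
    if k == 0:
        return False  # edges remain but no budget
    u, v = edges[0]
    # Any cover must contain u or v to cover this edge: branch on both.
    return (vertex_cover([e for e in edges if u not in e], k - 1)
            or vertex_cover([e for e in edges if v not in e], k - 1))

triangle = [(1, 2), (2, 3), (1, 3)]
print(vertex_cover(triangle, 1))  # False: a triangle needs two vertices
print(vertex_cover(triangle, 2))  # True
```

The recursion tree has at most 2^k leaves, so small parameter values give fast runs on arbitrarily large graphs, which is the structural win that parameterized (and, in the thesis, adaptive) algorithms exploit.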

    Dynamic processes on networks and higher-order structures

    Higher-order interactions are increasingly recognized as a critical aspect in the modeling of complex systems. Higher-order networks provide a framework for studying the relationship between the structure of higher-order interactions and the function of the complex system. However, little is known about how higher-order interactions affect dynamic processes. In this thesis, we develop general frameworks of percolation aimed at understanding the interplay between higher-order network structures and the critical properties of dynamics. We reveal that degree correlations strongly affect the percolation threshold on higher-order networks and, interestingly, that the effect of correlations differs between ordinary percolation and higher-order percolation. We further elucidate the mechanisms responsible for the emergence of discontinuous transitions on higher-order networks. Moreover, we show that triadic regulatory interactions, a general type of higher-order interaction found widely in nature, can turn percolation into a fully-fledged dynamic process that exhibits period doubling and a route to chaos. As an important example of dynamic processes, we further investigate the role of network topology in epidemic spreading. We show that higher-order interactions can induce a non-linear infection kernel in a pandemic, which results in a discontinuous phase transition, hysteresis, and superexponential spreading. Finally, we propose an epidemic model to evaluate the role of automated contact tracing with mobile apps as a new containment measure to mitigate a pandemic. We reveal the non-linear effect of a given fraction of app adoption in the population on the reduction of incidence, and we propose an optimal strategy to mitigate the pandemic with limited resources. Altogether, the thesis provides new insights into the interplay between the topology of higher-order networks and their dynamics. The results obtained may shed light on research in other areas of interest, such as brain function and epidemic spreading.
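The qualitative effect of a non-linear, higher-order infection term can be glimpsed even in a mean-field sketch of simplicial contagion: below the pairwise epidemic threshold the disease dies out, yet a strong three-body (triangle) term can sustain a finite steady state. The equation form and rates below are a generic illustration, not values from the thesis.

```python
def steady_state(beta1, beta2, mu=1.0, i0=0.5, steps=10_000, dt=0.01):
    """Mean-field simplicial contagion (sketch), integrated by forward Euler:
    di/dt = -mu*i + beta1*i*(1-i) + beta2*i^2*(1-i),
    with pairwise rate beta1 and three-body (triangle) rate beta2."""
    i = i0
    for _ in range(steps):
        i += dt * (-mu * i + beta1 * i * (1 - i) + beta2 * i * i * (1 - i))
    return i

# Below the pairwise threshold (beta1 < mu) the epidemic dies out...
low = steady_state(beta1=0.8, beta2=0.0)
# ...but a strong higher-order term sustains an endemic state.
high = steady_state(beta1=0.8, beta2=4.0)
```

The i^2 term is the non-linear infection kernel: it creates a second stable branch that appears discontinuously and coexists with the healthy state, which is the origin of the hysteresis discussed above.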

    Parallel and Flow-Based High Quality Hypergraph Partitioning

    Balanced hypergraph partitioning is a classic NP-hard optimization problem that is a fundamental tool in such diverse disciplines as VLSI circuit design, route planning, sharding distributed databases, optimizing communication volume in parallel computing, and accelerating the simulation of quantum circuits. Given a hypergraph and an integer k, the task is to divide the vertices into k disjoint blocks with bounded size, while minimizing an objective function on the hyperedges that span multiple blocks. In this dissertation we consider the most commonly used objective, the connectivity metric, where we aim to minimize the number of different blocks connected by each hyperedge. The most successful heuristic for balanced partitioning is the multilevel approach, which consists of three phases. In the coarsening phase, vertex clusters are contracted to obtain a sequence of structurally similar but successively smaller hypergraphs. Once sufficiently small, an initial partition is computed. Lastly, the contractions are successively undone in reverse order, and an iterative improvement algorithm is employed to refine the projected partition on each level. An important aspect in designing practical heuristics for optimization problems is the trade-off between solution quality and running time. The appropriate trade-off depends on the specific application, the size of the data sets, and the computational resources available to solve the problem. Existing algorithms are either slow, sequential, and offer high solution quality, or are simple, fast, easy to parallelize, and offer low quality. While this trade-off cannot be avoided entirely, our goal is to close the gaps as much as possible. We achieve this by improving the state of the art in all non-trivial areas of the trade-off landscape with only a few techniques, but employed in two different ways.
    Furthermore, most research on parallelization has focused on distributed memory, which neglects the greater flexibility of shared-memory algorithms and the wide availability of commodity multi-core machines. In this thesis, we therefore design and revisit fundamental techniques for each phase of the multilevel approach, and develop highly efficient shared-memory parallel implementations thereof. We consider two iterative improvement algorithms, one based on the Fiduccia-Mattheyses (FM) heuristic, and one based on label propagation. For these, we propose a variety of techniques to improve the accuracy of gains when moving vertices in parallel, as well as low-level algorithmic improvements. For coarsening, we present a parallel variant of greedy agglomerative clustering with a novel method to resolve cluster join conflicts on-the-fly. Combined with a preprocessing phase for coarsening based on community detection, a portfolio of from-scratch partitioning algorithms, as well as recursive partitioning with work-stealing, we obtain our first parallel multilevel framework. It is the fastest partitioner known; it achieves medium-high quality, beating all parallel partitioners and coming close to the highest-quality sequential partitioner. Our second contribution is a parallelization of an n-level approach, where only one vertex is contracted and uncontracted on each level. This extreme approach aims at high solution quality via very fine-grained, localized refinement, but seems inherently sequential. We devise an asynchronous n-level coarsening scheme based on a hierarchical decomposition of the contractions, as well as a batch-synchronous uncoarsening, and later a fully asynchronous uncoarsening. In addition, we adapt our refinement algorithms, and also use the preprocessing and portfolio.
    This scheme is highly scalable, and achieves the same quality as the highest-quality sequential partitioner (which is based on the same components), but is of course slower than our first framework due to fine-grained uncoarsening. The last ingredient for high quality is an iterative improvement algorithm based on maximum flows. In the sequential setting, we first improve an existing idea by solving incremental maximum flow problems, which leads to smaller cuts and is faster due to engineering efforts. Subsequently, we parallelize the maximum flow algorithm and schedule refinements in parallel. Beyond the quest for the highest quality, we present a deterministically parallel partitioning framework. We develop deterministic versions of the preprocessing, coarsening, and label propagation refinement. Experimentally, we demonstrate that the penalties for determinism in terms of partition quality and running time are very small. All of our claims are validated through extensive experiments, comparing our algorithms with state-of-the-art solvers on large and diverse benchmark sets. To foster further research, we make our contributions available in our open-source framework Mt-KaHyPar. While it seems inevitable that, with ever-increasing problem sizes, we must transition to distributed-memory algorithms, the study of shared-memory techniques is not in vain. With the multilevel approach, even the inherently slow techniques have a role to play in fast systems, as they can be employed to boost quality on coarse levels at little expense. Similarly, techniques for shared-memory parallelism are important, both as soon as a coarse graph fits into memory, and as local building blocks in the distributed algorithm.
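The connectivity metric described above is simple to state: each hyperedge e that spans λ(e) blocks contributes λ(e) − 1 to the objective. A minimal sketch with hypothetical names (not Mt-KaHyPar's API):

```python
def connectivity_objective(hyperedges, block_of):
    """Connectivity metric for a k-way partition: sum over hyperedges of
    (lambda(e) - 1), where lambda(e) is the number of blocks e touches."""
    total = 0
    for e in hyperedges:
        blocks = {block_of[v] for v in e}  # distinct blocks this edge spans
        total += len(blocks) - 1
    return total

hyperedges = [(0, 1, 2), (2, 3), (3, 4, 5)]
block_of = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}  # a 2-way partition
print(connectivity_objective(hyperedges, block_of))  # 1: only (2, 3) is cut
```

An uncut hyperedge contributes 0, so minimizing this objective directly penalizes hyperedges spread across many blocks, which models communication volume in the parallel-computing applications listed above.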