LIPIcs, Volume 251, ITCS 2023, Complete Volume
Modern meat: the next generation of meat from cells
Modern Meat is the first textbook on cultivated meat, with contributions from over 100 experts within the cultivated meat community.
Modern Meat is organized into five broad sections on cultivated meat: Context, Impact, Science, Society, and World.
The 19 chapters of Modern Meat, spread across these five sections, provide detailed entries on cultivated meat. They tour an extensive range of topics, including the impact of cultivated meat on humans and animals, the bioprocess of cultivated meat production, how cultivated meat may become a food option in space and on Mars, and how cultivated meat may affect the economy, culture, and tradition of Asia.
Parallel and Flow-Based High Quality Hypergraph Partitioning
Balanced hypergraph partitioning is a classic NP-hard optimization problem that is a fundamental tool in such diverse disciplines as VLSI circuit design, route planning, sharding distributed databases, optimizing communication volume in parallel computing, and accelerating the simulation of quantum circuits.
Given a hypergraph and an integer k, the task is to divide the vertices into k disjoint blocks of bounded size, while minimizing an objective function on the hyperedges that span multiple blocks.
In this dissertation we consider the most commonly used objective, the connectivity metric, where we aim to minimize the number of different blocks connected by each hyperedge.
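Concretely, the connectivity metric sums λ(e) − 1 over all hyperedges e, where λ(e) counts the distinct blocks that e connects; a minimal reference implementation (illustrative names, not the thesis' code) might look like this:

```python
def connectivity_objective(hyperedges, block_of):
    """Connectivity metric: sum of (lambda(e) - 1) over all hyperedges,
    where lambda(e) is the number of distinct blocks hyperedge e touches."""
    total = 0
    for e in hyperedges:                      # e is an iterable of vertex ids
        blocks = {block_of[v] for v in e}     # distinct blocks spanned by e
        total += len(blocks) - 1              # an uncut hyperedge contributes 0
    return total

# Example: 4 vertices in 2 blocks; two of the three hyperedges are cut.
block_of = {0: 0, 1: 0, 2: 1, 3: 1}
print(connectivity_objective([[0, 1], [1, 2, 3], [0, 2, 3]], block_of))  # 2
```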
The most successful heuristic for balanced partitioning is the multilevel approach, which consists of three phases.
In the coarsening phase, vertex clusters are contracted to obtain a sequence of structurally similar but successively smaller hypergraphs.
Once sufficiently small, an initial partition is computed.
Lastly, the contractions are successively undone in reverse order, and an iterative improvement algorithm is employed to refine the projected partition on each level.
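Schematically, the three phases compose as follows; this is a sketch with a placeholder hypergraph interface, and the contraction limit is a typical tuning choice rather than the thesis' actual API:

```python
def multilevel_partition(hg, k, coarsen_step, initial_partition, refine):
    """Schematic multilevel scheme (placeholder interface, not Mt-KaHyPar's
    API): coarsen until small, partition once, then refine on every level."""
    hierarchy = []
    while hg.num_vertices() > 160 * k:            # a typical contraction limit
        clustering = coarsen_step(hg)             # map: vertex -> cluster id
        hierarchy.append((hg, clustering))
        hg = hg.contract(clustering)              # structurally similar, smaller
    part = initial_partition(hg, k)               # on the smallest hypergraph
    for finer, clustering in reversed(hierarchy):
        part = {v: part[clustering[v]] for v in finer.vertices()}  # project up
        part = refine(finer, part)                # per-level improvement
    return part
```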
An important aspect in designing practical heuristics for optimization problems is the trade-off between solution quality and running time.
The appropriate trade-off depends on the specific application, the size of the data sets, and the computational resources available to solve the problem.
Existing algorithms either offer high solution quality but are slow and sequential, or are simple, fast, and easy to parallelize but offer only low quality.
While this trade-off cannot be avoided entirely, our goal is to close the gaps as much as possible.
We achieve this by improving the state of the art across all non-trivial regions of the trade-off landscape using only a few techniques, employed in two different ways.
Furthermore, most research on parallelization has focused on distributed memory, which neglects the greater flexibility of shared-memory algorithms and the wide availability of commodity multi-core machines.
In this thesis, we therefore design and revisit fundamental techniques for each phase of the multilevel approach, and develop highly efficient shared-memory parallel implementations thereof.
We consider two iterative improvement algorithms, one based on the Fiduccia-Mattheyses (FM) heuristic, and one based on label propagation.
For these, we propose a variety of techniques to improve the accuracy of gains when moving vertices in parallel, as well as low-level algorithmic improvements.
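To illustrate the label-propagation idea, the following toy refiner on a plain graph moves each vertex to the adjacent block with the largest cut-edge gain, subject to a balance constraint. This is a minimal sequential sketch with invented names; the thesis' parallel hypergraph version adds accurate parallel gains and further engineering:

```python
from collections import defaultdict

def label_propagation_refine(adj, part, max_size, rounds=5):
    """Toy label-propagation refinement on a plain graph: move each
    vertex to the adjacent block with the largest cut-edge gain, as
    long as the target block stays below max_size."""
    size = defaultdict(int)
    for v in range(len(adj)):
        size[part[v]] += 1
    for _ in range(rounds):
        moved = False
        for v in range(len(adj)):
            conn = defaultdict(int)               # edges from v into each block
            for u in adj[v]:
                conn[part[u]] += 1
            own = conn.get(part[v], 0)            # edges staying in v's block
            best, gain = part[v], 0
            for b, c in conn.items():
                if b != part[v] and c - own > gain and size[b] < max_size:
                    best, gain = b, c - own
            if best != part[v]:
                size[part[v]] -= 1
                size[best] += 1
                part[v] = best
                moved = True
        if not moved:
            break
    return part

# Two triangles joined by the edge 2-3; one unit of size slack lets the
# refiner repair the initial split so that only the bridge edge is cut.
adj = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
print(label_propagation_refine(adj, [0, 0, 1, 0, 1, 1], max_size=4))
# -> [0, 0, 0, 1, 1, 1]
```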
For coarsening, we present a parallel variant of greedy agglomerative clustering with a novel method to resolve cluster join conflicts on-the-fly.
Combined with a preprocessing phase for coarsening based on community detection, a portfolio of from-scratch partitioning algorithms, as well as recursive partitioning with work-stealing, we obtain our first parallel multilevel framework.
It is the fastest partitioner known and achieves medium-high quality: it beats all parallel partitioners and comes close to the highest-quality sequential partitioner.
Our second contribution is a parallelization of an n-level approach, where only one vertex is contracted and uncontracted on each level.
This extreme approach aims at high solution quality via very fine-grained, localized refinement, but seems inherently sequential.
We devise an asynchronous n-level coarsening scheme based on a hierarchical decomposition of the contractions, as well as a batch-synchronous uncoarsening, and later fully asynchronous uncoarsening.
In addition, we adapt our refinement algorithms, and also use the preprocessing and portfolio.
This scheme is highly scalable, and achieves the same quality as the highest quality sequential partitioner (which is based on the same components), but is of course slower than our first framework due to fine-grained uncoarsening.
The last ingredient for high quality is an iterative improvement algorithm based on maximum flows.
In the sequential setting, we first improve an existing idea by solving incremental maximum flow problems, which leads to smaller cuts and is faster due to engineering efforts.
Subsequently, we parallelize the maximum flow algorithm and schedule refinements in parallel.
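The core idea of flow-based refinement can be sketched on a plain graph: pin a terminal on each side of the current bipartition and let a minimum s-t cut reposition the boundary. The sketch below uses networkx and a naive terminal choice; the thesis instead works on hypergraphs, grows the flow problem incrementally, and pins whole regions to respect balance:

```python
import networkx as nx

def flow_refine_bipartition(G, side_a, side_b):
    """Toy flow-based refinement on a plain graph: a minimum s-t cut
    between one pinned terminal per side repositions the boundary."""
    H = nx.DiGraph()
    for u, v in G.edges():
        H.add_edge(u, v, capacity=1)           # unit capacities for plain edges
        H.add_edge(v, u, capacity=1)
    s, t = side_a[0], side_b[0]                # naive terminal choice
    cut_value, (reach_s, reach_t) = nx.minimum_cut(H, s, t)
    return cut_value, sorted(reach_s), sorted(reach_t)

# Example: on the path 0-1-2-3 the minimum cut crosses a single edge.
print(flow_refine_bipartition(nx.path_graph(4), [0, 1], [2, 3]))
```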
Beyond striving for the highest quality, we present a deterministically parallel partitioning framework.
We develop deterministic versions of the preprocessing, coarsening, and label propagation refinement.
Experimentally, we demonstrate that the penalties for determinism in terms of partition quality and running time are very small.
All of our claims are validated through extensive experiments, comparing our algorithms with state-of-the-art solvers on large and diverse benchmark sets.
To foster further research, we make our contributions available in our open-source framework Mt-KaHyPar.
While it seems inevitable that, with ever-increasing problem sizes, we must transition to distributed-memory algorithms, the study of shared-memory techniques is not in vain.
With the multilevel approach, even the inherently slow techniques have a role to play in fast systems, as they can be employed to boost quality on coarse levels at little expense.
Similarly, techniques for shared-memory parallelism remain important, both once a coarse graph fits into memory and as local building blocks in distributed algorithms.
Beyond Quantity: Research with Subsymbolic AI
How do artificial neural networks and other forms of artificial intelligence interfere with methods and practices in the sciences? Which interdisciplinary epistemological challenges arise when we think about the use of AI beyond its dependency on big data? Not only the natural sciences but also the social sciences and the humanities seem to be increasingly affected by current approaches to subsymbolic AI, which master problems of quality (fuzziness, uncertainty) in a hitherto unknown way. But what are the conditions, implications, and effects of these (potential) epistemic transformations, and how must research on AI be configured to address them adequately?
Compositional synthesis of reactive systems
Synthesis is the task of automatically deriving correct-by-construction implementations from formal specifications. While it is a promising path toward developing verified programs, it is infamous for being hard to solve. Compositionality is recognized as a key technique for reducing the complexity of synthesis. So far, compositional approaches require extensive manual effort. In this thesis, we introduce algorithms that automate these steps. In the first part, we develop compositional synthesis techniques for distributed systems. Providing assumptions on other processes' behavior is fundamental in this setting due to inter-process dependencies. We establish delay-dominance, a new requirement for implementations that allows for implicitly assuming that other processes will not maliciously violate the shared goal. Furthermore, we present an algorithm that computes explicit assumptions on process behavior to address more complex dependencies. In the second part, we transfer the concept of compositionality from distributed to single-process systems. We present a preprocessing technique for synthesis that identifies independently synthesizable system components. We extend this approach to an incremental synthesis algorithm, resulting in more fine-grained decompositions. Our experimental evaluation shows that our techniques automate the required manual efforts, resulting in fully automated compositional synthesis algorithms for both distributed and single-process systems.
Algorithms for Triangles, Cones & Peaks
Three different geometric objects are at the center of this dissertation: triangles, cones and peaks.
In computational geometry, triangles are the most basic shape for planar subdivisions.
In particular, Delaunay triangulations are widely used for manifold applications in engineering, geographic information systems, telecommunication networks, etc.
We present two novel parallel algorithms to construct the Delaunay triangulation of a given point set.
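For reference, a sequential Delaunay triangulation is readily available via SciPy; this baseline only illustrates what the parallel algorithms compute, not how they compute it:

```python
import numpy as np
from scipy.spatial import Delaunay

# Sequential baseline: triangulate 1000 random points in the unit square.
pts = np.random.rand(1000, 2)
tri = Delaunay(pts)
print(tri.simplices.shape)  # (num_triangles, 3): vertex indices per triangle
```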
Yao graphs are geometric spanners that connect each point of a given set to its nearest neighbor in each of k cones drawn around it.
They are used to aid the construction of Euclidean minimum spanning trees
or in wireless networks for topology control and routing.
We present the first implementation of an optimal O(n log n)-time sweepline algorithm to construct Yao graphs.
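For intuition, a naive quadratic-time construction makes the definition concrete; this is illustrative reference code, not the sweepline algorithm:

```python
import math

def yao_graph(points, k):
    """Naive quadratic-time Yao graph: connect each point to its nearest
    neighbor inside each of the k cones of angle 2*pi/k around it."""
    edges = []
    for i, (px, py) in enumerate(points):
        nearest = [None] * k                       # best (dist, index) per cone
        for j, (qx, qy) in enumerate(points):
            if i == j:
                continue
            angle = math.atan2(qy - py, qx - px) % (2 * math.pi)
            cone = int(angle / (2 * math.pi / k)) % k
            d = math.hypot(qx - px, qy - py)
            if nearest[cone] is None or d < nearest[cone][0]:
                nearest[cone] = (d, j)
        for entry in nearest:
            if entry is not None:
                edges.append((i, entry[1]))
    return edges

print(yao_graph([(0, 0), (1, 0), (0, 1), (2, 2)], k=4))
```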
One metric to quantify the importance of a mountain peak is its isolation.
Isolation measures the distance between a peak and the closest point of higher elevation.
Computing this metric from high-resolution digital elevation models (DEMs) requires efficient algorithms.
We present a novel sweep-plane algorithm that can calculate the isolation of all peaks on Earth in mere minutes.
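The definition translates directly into a brute-force reference scan; this is illustrative only, and the sweep-plane algorithm avoids exactly this per-peak full scan:

```python
import math

def isolation(dem, peak):
    """Naive peak isolation on a grid DEM: distance from the peak to the
    nearest cell of strictly higher elevation (infinity for the global
    maximum)."""
    pr, pc = peak
    best = math.inf
    for r, row in enumerate(dem):
        for c, h in enumerate(row):
            if h > dem[pr][pc]:
                best = min(best, math.hypot(r - pr, c - pc))
    return best

dem = [[1, 2, 1],
       [2, 5, 2],
       [1, 2, 9]]
print(isolation(dem, (1, 1)))  # nearest higher cell is (2, 2): sqrt(2)
```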
Evolutionary Reinforcement Learning: A Survey
Reinforcement learning (RL) is a machine learning approach that trains agents
to maximize cumulative rewards through interactions with environments. The
integration of RL with deep learning has recently resulted in impressive
achievements in a wide range of challenging tasks, including board games,
arcade games, and robot control. Despite these successes, there remain several
crucial challenges, including brittle convergence properties caused by
sensitive hyperparameters, difficulties in temporal credit assignment with long
time horizons and sparse rewards, a lack of diverse exploration, especially in
continuous search space scenarios, difficulties in credit assignment in
multi-agent reinforcement learning, and conflicting objectives for rewards.
Evolutionary computation (EC), which maintains a population of learning agents,
has demonstrated promising performance in addressing these limitations. This
article presents a comprehensive survey of state-of-the-art methods for
integrating EC into RL, referred to as evolutionary reinforcement learning
(EvoRL). We categorize EvoRL methods according to key research fields in RL,
including hyperparameter optimization, policy search, exploration, reward
shaping, meta-RL, and multi-objective RL. We then discuss future research
directions in terms of efficient methods, benchmarks, and scalable platforms.
This survey serves as a resource for researchers and practitioners interested
in the field of EvoRL, highlighting the important challenges and opportunities
for future research. With the help of this survey, researchers and
practitioners can develop more efficient methods and tailored benchmarks for
EvoRL, further advancing this promising cross-disciplinary research field.
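To give a flavor of the population-based perspective the survey covers, a toy evolutionary policy search might look like the following; this is a generic elitist evolution-strategy sketch with made-up parameters and a stand-in fitness, not a specific method from the survey:

```python
import numpy as np

def evolve_policy(episode_return, dim, pop_size=50, elite=10,
                  sigma=0.1, generations=100):
    """Toy population-based policy search: evaluate a population of
    policy parameter vectors, keep the elite, mutate around their mean."""
    population = np.random.randn(pop_size, dim)
    for _ in range(generations):
        scores = np.array([episode_return(p) for p in population])
        parents = population[np.argsort(scores)[-elite:]]   # best performers
        mean = parents.mean(axis=0)
        population = mean + sigma * np.random.randn(pop_size, dim)
    return mean

# Stand-in 'episode return': closeness to a hidden optimal parameter vector.
target = np.array([0.5, -1.0, 2.0])
best = evolve_policy(lambda p: -np.linalg.norm(p - target), dim=3)
print(np.round(best, 2))   # converges near [0.5, -1.0, 2.0]
```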
Massively Parallel Continuous Local Search for Hybrid SAT Solving on GPUs
Although state-of-the-art (SOTA) SAT solvers based on conflict-driven clause
learning (CDCL) have achieved remarkable engineering success, their sequential
nature limits the parallelism that may be extracted for acceleration on
platforms such as the graphics processing unit (GPU). In this work, we propose
FastFourierSAT, a highly parallel hybrid SAT solver based on gradient-driven
continuous local search (CLS). This is realized by a novel parallel algorithm
inspired by the Fast Fourier Transform (FFT)-based convolution for computing
the elementary symmetric polynomials (ESPs), which is the major computational
task in previous CLS methods. The complexity of our algorithm matches the best
previous result. Furthermore, the substantial parallelism inherent in our
algorithm can leverage the GPU for acceleration, demonstrating significant
improvement over the previous CLS approaches. We also propose to incorporate
the restart heuristics in CLS to improve search efficiency. We compare our
approach with the SOTA parallel SAT solvers on several benchmarks. Our results
show that FastFourierSAT computes the gradient 100+ times faster than previous
prototypes implemented on CPU. Moreover, FastFourierSAT solves most instances
and demonstrates promising performance on larger-size instances.
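The ESP computation the abstract refers to can be made concrete: the elementary symmetric polynomials e_0..e_n of values x_1..x_n are the coefficients of the product over i of (1 + x_i z), so they arise from repeated convolution, which FFTs can accelerate. A plain NumPy sketch of the slow variant (the function name is ours):

```python
import numpy as np

def esp(xs):
    """Elementary symmetric polynomials e_0..e_n of xs, computed as the
    coefficients of prod_i (1 + x_i * z) via repeated convolution; the
    fast version replaces this with FFT-based divide and conquer."""
    coeffs = np.array([1.0])                    # the constant polynomial 1
    for x in xs:
        coeffs = np.convolve(coeffs, [1.0, x])  # multiply by (1 + x z)
    return coeffs

print(esp([1, 2, 3]))  # [e0, e1, e2, e3] = [1, 6, 11, 6]
```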
Deep Neural Networks for Visual Object Tracking: An Investigation of Performance Optimization
This thesis advances visual object tracking by introducing four key enhancements that address current limitations in training data, network architecture, and tracking methodologies. It proposes a refined sampling strategy for Siamese Networks to enrich training data and develops a more efficient Partially Siamese Network through neural architecture search, achieving superior performance on benchmarks. The work further streamlines tracking with a new transformer-based pipeline and breaks ground with a speech-guided tracking framework, improving human-machine collaboration. These advancements are thoroughly validated, marking significant progress in the visual object tracking domain
Operational Research: methods and applications
Throughout its history, Operational Research has evolved to include methods, models and algorithms that have been applied to a wide range of contexts. This encyclopedic article consists of two main sections: methods and applications. The first summarises the up-to-date knowledge and provides an overview of the state-of-the-art methods and key developments in the various subdomains of the field. The second offers a wide-ranging list of areas where Operational Research has been applied. The article is meant to be read in a nonlinear fashion and used as a point of reference by a diverse pool of readers: academics, researchers, students, and practitioners. The entries within the methods and applications sections are presented in alphabetical order. The authors dedicate this paper to the 2023 Turkey/Syria earthquake victims. We sincerely hope that advances in OR will play a role towards minimising the pain and suffering caused by this and future catastrophes.