
    Natural and Technological Hazards in Urban Areas

    Natural hazard events and technological accidents are distinct causes of environmental impacts. Natural hazards are physical phenomena that have been active throughout geological time, whereas technological hazards result from actions or facilities created by humans. In our time, combined natural and man-made hazard events have also emerged. Overpopulation and urban development in areas prone to natural hazards increase the impact of natural disasters worldwide. Additionally, urban areas are frequently characterized by intense industrial activity and rapid, poorly planned growth that threatens the environment and degrades the quality of life. Proper urban planning is therefore crucial to minimize fatalities and to reduce the environmental and economic impacts that accompany both natural and technological hazardous events.

    Modern meat: the next generation of meat from cells

    Modern Meat is the first textbook on cultivated meat, with contributions from over 100 experts within the cultivated meat community. The book is organized into five broad sections: Context, Impact, Science, Society, and World. Its 19 chapters, spread across these five sections, provide detailed entries on cultivated meat. They tour an extensive range of topics, including the impact of cultivated meat on humans and animals, the bioprocess of cultivated meat production, how cultivated meat may become a food option in space and on Mars, and how cultivated meat may impact the economy, culture, and tradition of Asia.

    Improving low latency applications for reconfigurable devices

    This thesis seeks to improve low latency application performance via architectural improvements in reconfigurable devices. This is achieved by improving resource utilisation and access, and by exploiting the different environments within which reconfigurable devices are deployed. Our first contribution leverages devices deployed at the network level to enable the low latency processing of financial market data feeds. Financial exchanges transmit messages via two identical data feeds to reduce the chance of message loss. We present an approach to arbitrate these redundant feeds at the network level using a Field-Programmable Gate Array (FPGA). With support for any messaging protocol, we evaluate our design using the NASDAQ TotalView-ITCH, OPRA, and ARCA data feed protocols, and provide two simultaneous outputs: one prioritising low latency, and one prioritising high reliability with three dynamically configurable windowing methods. Our second contribution is a new ring-based architecture for low latency, parallel access to FPGA memory. Traditional FPGA memory is formed by grouping block memories (BRAMs) together and accessing them as a single device. Our architecture accesses these BRAMs independently and in parallel. Targeting memory-based computing, which stores pre-computed function results in memory, we benefit low latency applications that rely on highly complex functions, iterative computation, or many parallel accesses to a shared resource. We assess square root, power, trigonometric, and hyperbolic functions within the FPGA, and provide a tool to convert Python functions to our new architecture. Our third contribution extends the ring-based architecture to support any FPGA processing element. We unify E heterogeneous processing elements within compute pools, with each element implementing the same function, and the pool serving D parallel function calls. Our implementation-agnostic approach supports processing elements with different latencies, implementations, and pipeline lengths, as well as non-deterministic latencies. Compute pools evenly balance access to processing elements across the entire application, and are evaluated by implementing eight different neural network activation functions within an FPGA.
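
    The memory-based computing idea behind the second contribution can be illustrated in ordinary Python: pre-compute a function over a quantised input range and answer calls by table lookup, which is what the BRAM-backed architecture serves in parallel. The sketch below is a conceptual illustration under assumed parameters (table size, input range); it is not the thesis's conversion tool.

        import math

        def build_lut(func, n_bits=10, x_min=0.0, x_max=4.0):
            # Pre-compute func over a quantised range; in hardware this table
            # would be spread across independently accessed BRAMs.
            size = 1 << n_bits
            step = (x_max - x_min) / (size - 1)
            return [func(x_min + i * step) for i in range(size)], x_min, step

        def lookup(table, x_min, step, x):
            # Lookup-style evaluation: quantise the input and index the table.
            idx = round((x - x_min) / step)
            idx = max(0, min(len(table) - 1, idx))  # clamp to the table range
            return table[idx]

        # Illustrative use with square root, one of the functions assessed above.
        table, x0, dx = build_lut(math.sqrt)
        print(lookup(table, x0, dx, 2.0))  # ~1.414, limited by the table resolution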

    Parallel and Flow-Based High Quality Hypergraph Partitioning

    Balanced hypergraph partitioning is a classic NP-hard optimization problem that is a fundamental tool in such diverse disciplines as VLSI circuit design, route planning, sharding distributed databases, optimizing communication volume in parallel computing, and accelerating the simulation of quantum circuits. Given a hypergraph and an integer k, the task is to divide the vertices into k disjoint blocks with bounded size, while minimizing an objective function on the hyperedges that span multiple blocks. In this dissertation we consider the most commonly used objective, the connectivity metric, where we aim to minimize the number of different blocks connected by each hyperedge. The most successful heuristic for balanced partitioning is the multilevel approach, which consists of three phases. In the coarsening phase, vertex clusters are contracted to obtain a sequence of structurally similar but successively smaller hypergraphs. Once sufficiently small, an initial partition is computed. Lastly, the contractions are successively undone in reverse order, and an iterative improvement algorithm is employed to refine the projected partition on each level. An important aspect in designing practical heuristics for optimization problems is the trade-off between solution quality and running time. The appropriate trade-off depends on the specific application, the size of the data sets, and the computational resources available to solve the problem. Existing algorithms either are slow and sequential but offer high solution quality, or are simple, fast, and easy to parallelize but offer low quality. While this trade-off cannot be avoided entirely, our goal is to close the gaps as much as possible. We achieve this by improving the state of the art in all non-trivial areas of the trade-off landscape with only a few techniques, but employed in two different ways. Furthermore, most research on parallelization has focused on distributed memory, which neglects the greater flexibility of shared-memory algorithms and the wide availability of commodity multi-core machines. In this thesis, we therefore design and revisit fundamental techniques for each phase of the multilevel approach, and develop highly efficient shared-memory parallel implementations thereof. We consider two iterative improvement algorithms, one based on the Fiduccia-Mattheyses (FM) heuristic and one based on label propagation. For these, we propose a variety of techniques to improve the accuracy of gains when moving vertices in parallel, as well as low-level algorithmic improvements. For coarsening, we present a parallel variant of greedy agglomerative clustering with a novel method to resolve cluster join conflicts on the fly. Combined with a preprocessing phase for coarsening based on community detection, a portfolio of from-scratch partitioning algorithms, and recursive partitioning with work-stealing, we obtain our first parallel multilevel framework. It is the fastest partitioner known and achieves medium-high quality, beating all parallel partitioners and coming close to the highest quality sequential partitioner. Our second contribution is a parallelization of an n-level approach, where only one vertex is contracted and uncontracted on each level. This extreme approach aims at high solution quality via very fine-grained, localized refinement, but seems inherently sequential. We devise an asynchronous n-level coarsening scheme based on a hierarchical decomposition of the contractions, as well as a batch-synchronous uncoarsening and, later, a fully asynchronous uncoarsening. In addition, we adapt our refinement algorithms, and also use the preprocessing and portfolio. This scheme is highly scalable and achieves the same quality as the highest quality sequential partitioner (which is based on the same components), but is of course slower than our first framework due to the fine-grained uncoarsening. The last ingredient for high quality is an iterative improvement algorithm based on maximum flows. In the sequential setting, we first improve an existing idea by solving incremental maximum flow problems, which leads to smaller cuts and is faster due to engineering efforts. Subsequently, we parallelize the maximum flow algorithm and schedule refinements in parallel. Beyond striving for the highest quality, we also present a deterministically parallel partitioning framework. We develop deterministic versions of the preprocessing, coarsening, and label propagation refinement. Experimentally, we demonstrate that the penalties for determinism in terms of partition quality and running time are very small. All of our claims are validated through extensive experiments, comparing our algorithms with state-of-the-art solvers on large and diverse benchmark sets. To foster further research, we make our contributions available in our open-source framework Mt-KaHyPar. While it seems inevitable that, with ever increasing problem sizes, we must transition to distributed memory algorithms, the study of shared-memory techniques is not in vain. With the multilevel approach, even inherently slow techniques have a role to play in fast systems, as they can be employed to boost quality on coarse levels at little expense. Similarly, techniques for shared-memory parallelism are important, both as soon as a coarse graph fits into memory and as local building blocks in the distributed algorithm.
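
    For reference, the connectivity objective described above is commonly written as the sum over hyperedges of (lambda(e) - 1), where lambda(e) is the number of blocks a hyperedge spans. The minimal Python sketch below uses an illustrative hypergraph representation (sets of vertex ids and a vertex-to-block map) that is not tied to the Mt-KaHyPar implementation.

        def connectivity_objective(hyperedges, block_of):
            # Sum over hyperedges of (number of distinct blocks spanned - 1),
            # i.e. the connectivity metric minimised by the partitioners above.
            total = 0
            for edge in hyperedges:                   # each edge is a set of vertex ids
                blocks = {block_of[v] for v in edge}  # distinct blocks the edge connects
                total += len(blocks) - 1              # an uncut edge contributes 0
            return total

        # Illustrative 6-vertex hypergraph divided into k = 2 blocks.
        hyperedges = [{0, 1, 2}, {2, 3}, {3, 4, 5}]
        block_of = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
        print(connectivity_objective(hyperedges, block_of))  # 1: only {2, 3} is cut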

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    The LDBC social network benchmark: Business intelligence workload

    The Social Network Benchmark’s Business Intelligence workload (SNB BI) is a comprehensive graph OLAP benchmark targeting analytical data systems capable of supporting graph workloads. This paper marks the finalization of almost a decade of research in academia and industry via the Linked Data Benchmark Council (LDBC). SNB BI advances the state of the art in synthetic and scalable analytical database benchmarks in many aspects. Its base is a sophisticated data generator, implemented on a scalable distributed infrastructure, that produces a social graph with small-world phenomena, whose value properties follow skewed and correlated distributions and where values correlate with structure. This is a temporal graph in which all nodes and edges follow lifespan-based rules with temporal skew, enabling realistic and consistent temporal inserts and (recursive) deletes. The query workload, which exploits this skew and correlation, is based on LDBC’s “choke point”-driven design methodology and is intended to drive technical and scientific improvements in future (graph) database systems. SNB BI includes the first adoption of “parameter curation” in an analytical benchmark, a technique that ensures stable runtimes of query variants across different parameter values. Two performance metrics characterize peak single-query performance (power) and sustained concurrent query throughput. To demonstrate the portability of the benchmark, we present experimental results on a relational and a graph DBMS. Note that these do not constitute an official LDBC Benchmark Result; only audited results may use this trademarked term.
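
    The parameter curation mentioned above exists to keep the runtime of a query variant stable across its parameter values. The toy sketch below conveys the idea only; it is not LDBC's actual procedure, and the cost proxy, bucket width, and counts are assumptions: bucket candidate parameters by a cost proxy such as expected result size and draw parameters from the most homogeneous bucket.

        from statistics import pstdev

        def curate_parameters(candidates, cost_of, group_width=10, n_needed=5):
            # Toy parameter curation: bucket candidate parameter values by a cost
            # proxy (e.g. neighbourhood or result size) and return values from the
            # bucket with the lowest cost spread, so query variants run in similar time.
            buckets = {}
            for value in candidates:
                key = cost_of(value) // group_width
                buckets.setdefault(key, []).append(value)
            # Prefer buckets that are large enough and whose costs vary the least.
            eligible = [b for b in buckets.values() if len(b) >= n_needed]
            best = min(eligible, key=lambda b: pstdev([cost_of(v) for v in b]))
            return best[:n_needed]

        # Illustrative use: persons with very different friend counts; curation keeps
        # a group whose counts (and hence expected runtimes) are close together.
        friend_counts = {1: 3, 2: 250, 3: 7, 4: 5, 5: 260, 6: 6, 7: 255, 8: 4, 9: 258}
        print(curate_parameters(list(friend_counts), friend_counts.get))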

    Semiconductor quantum dots for photonic quantum repeaters

    Current information exchange is based on optical fibers and satellite communication via free-space links, where security is provided by mathematical complexity. However, this security could be threatened by paradigm shifts in computing technology. Encryption techniques using quantum key distribution based on entangled photons would allow, in theory, for fully secure communication. The very same platform, entangled photons, can also be employed as a core element to establish multi-node secure communication, a concept known as a quantum network. For these reasons, entangled photon sources might become the core of future quantum networks for secure communication. In this thesis, I study GaAs/AlGaAs quantum dots as entangled photon sources. After giving a general overview of the fundamentals of photonic quantum networks and GaAs droplet-etched quantum dots, I focus on two aspects of the development of this technology: first, the limits of these devices as entangled photon sources, and second, applications of entangled photons from quantum dots for secure communication. The former includes effects that degrade entanglement in these quantum dots, in particular multiphoton emission and the optical Stark effect induced by the entangled-photon generation technique used, resonant two-photon excitation. The experimental results demonstrate that multiphoton emission is negligible under practical conditions, which is supported by a probabilistic model. The finite excitation laser pulse duration in resonant two-photon excitation, on the other hand, induces an optical Stark effect. The measurements in this thesis support the theoretical predictions, and a reduction in entanglement with increasing excitation laser pulse length is observed experimentally. Provided certain conditions are met, GaAs/AlGaAs quantum dots emit highly entangled photons, which are utilized in the second part of this thesis by applying them in entanglement-based quantum key distribution protocols. The demonstrations range from the first implementation of quantum dots as entangled photon sources for secure communication in fiber and free space, to a continuous secret key exchange over three days. The second test case, in particular, tackles the challenges of real-life operation such as sunlight and mild rain. Finally, I provide a brief outlook on how entangled photons from GaAs/AlGaAs quantum dots can be used to transfer information from one node of a network, namely a quantum repeater, to another, by proposing an experiment called remote quantum teleportation.
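
    To make the multiphoton discussion concrete, a common first-order way to reason about it is a simple mixture model in which a fraction p of detected pairs is uncorrelated noise whose fidelity with the target Bell state is 1/4. This is an illustrative model with made-up numbers, not the probabilistic model developed in the thesis.

        def measured_fidelity(f_source, p_noise):
            # Toy two-qubit model: a fraction p_noise of coincidences stems from
            # uncorrelated (multiphoton / background) events modelled as white noise
            # (fidelity 1/4 with a Bell state); the rest keeps the source fidelity.
            return (1.0 - p_noise) * f_source + p_noise * 0.25

        def noise_budget(f_source, f_required):
            # Largest tolerable noise fraction before the fidelity drops below a target.
            return (f_source - f_required) / (f_source - 0.25)

        # Illustrative numbers only: a 0.98-fidelity source and a 0.90 target.
        print(measured_fidelity(0.98, 0.02))  # ~0.965
        print(noise_budget(0.98, 0.90))       # ~0.11, i.e. ~11% noisy coincidences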

    Global optimisation of large-scale quadratic programs: application to short-term planning of industrial refinery-petrochemical complexes

    This thesis is driven by an industrial problem arising in the short-term planning of an integrated refinery-petrochemical complex (IRPC) in Colombia. The IRPC of interest is composed of 60 industrial plants and a tank farm for crude mixing and fuel blending consisting of 30 additional units. It considers both domestic and imported crude oil supply, as well as refined product imports such as low sulphur diesel and alkylate. This gives rise to a large-scale mixed-integer quadratically constrained quadratic program (MIQCQP) comprising about 7,000 equality constraints with over 35,000 bilinear terms and 280 binary variables describing operating modes for the process units. Four realistic planning scenarios are recreated to study the performance of the algorithms developed throughout the thesis and to compare them with commercial solvers. Local solvers such as SBB and DICOPT cannot reliably solve such large-scale MIQCQPs: it is usually challenging even to reach a feasible solution with these solvers, and a heuristic procedure is required to initialize the search. On the other hand, global solvers such as ANTIGONE and BARON determine a feasible solution for all the scenarios analysed, but they are unable to close the relaxation gap to less than 40% on average after 10 h of CPU runtime. Overall, this industrial-size problem is thus intractable to global optimality in a monolithic fashion. The first main contribution of the thesis is a deterministic global optimisation algorithm based on cluster decomposition (CL) that divides the network into groups of process units according to their functionality. The algorithm runs through the sequence of clusters and proceeds by alternating between: (i) the (global) solution of a mixed-integer linear program (MILP), obtained by relaxing the bilinear terms based on their piecewise McCormick envelopes and a dynamic partition of their variable ranges, in order to determine an upper bound on the maximal profit; and (ii) the local solution of a quadratically constrained quadratic program (QCQP), after fixing the binary variables and initializing the continuous variables at the relaxed MILP solution point, in order to determine a feasible solution (a lower bound on the maximal profit). Applied to the base case scenario, the CL approach reaches a best solution of 2.964 MMUSD/day and a relaxation gap of 7.5%, a remarkable result for such a challenging MIQCQP problem. The CL approach also vastly outperforms both ANTIGONE (2.634 MMUSD/day, 32% optimality gap) and BARON (2.687 MMUSD/day, 40% optimality gap). The second main contribution is a spatial Lagrangean decomposition, which entails decomposing the IRPC short-term planning problem into a collection of smaller subproblems that can be solved independently to determine an upper bound on the maximal profit. One advantage of this strategy is that each subproblem can be solved to global optimality, potentially providing good initial points for the monolithic problem itself. It furthermore creates a virtual market for trading crude blends and intermediate refined and petrochemical streams, and seeks an optimal trade-off in such a market, with the Lagrange multipliers acting as transfer prices. Decompositions into two to four subproblems are considered, matching the crude management, refinery, petrochemical operations, and fuel blending sections of the IRPC. An optimality gap below 4% is achieved in all four scenarios considered, which is a significant improvement over the cluster decomposition algorithm.
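
    The piecewise McCormick relaxation used in step (i) builds on the standard McCormick envelopes of a bilinear term w = xy over a box, applied on each segment of a partitioned variable range. The sketch below evaluates those envelope inequalities numerically for illustration only (bounds, segment count, and sample points are arbitrary); in the actual MILP they appear as linear constraints on decision variables.

        def mccormick_bounds(x, y, xl, xu, yl, yu):
            # Standard McCormick under/over-estimators of w = x*y on [xl, xu] x [yl, yu].
            lower = max(xl * y + x * yl - xl * yl,
                        xu * y + x * yu - xu * yu)
            upper = min(xu * y + x * yl - xu * yl,
                        xl * y + x * yu - xl * yu)
            return lower, upper

        def piecewise_mccormick_bounds(x, y, xl, xu, yl, yu, n_segments=4):
            # Pick the x-segment containing x and apply McCormick on that segment only,
            # which is what tightens the relaxation in the piecewise scheme.
            width = (xu - xl) / n_segments
            seg = min(int((x - xl) / width), n_segments - 1)
            return mccormick_bounds(x, y, xl + seg * width, xl + (seg + 1) * width, yl, yu)

        # Illustrative numbers: the true product lies inside both relaxations,
        # and the piecewise version gives a tighter interval.
        x, y = 3.0, 3.0
        print(x * y)                                                  # 9.0
        print(mccormick_bounds(x, y, 0.0, 10.0, 0.0, 5.0))            # (0.0, 15.0)
        print(piecewise_mccormick_bounds(x, y, 0.0, 10.0, 0.0, 5.0))  # (7.5, 10.0)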