
    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    Developmental Bootstrapping of AIs

    Although some current AIs surpass human abilities in closed artificial worlds such as board games, their abilities in the real world are limited. They make strange mistakes and do not notice them. They cannot be instructed easily, fail to use common sense, and lack curiosity. They do not make good collaborators. The mainstream approaches for creating AIs are the traditional, manually constructed symbolic AI approach and generative and deep learning approaches, including large language models (LLMs). Neither is well suited to creating robust and trustworthy AIs. Although it is outside the mainstream, the developmental bootstrapping approach has more potential. In developmental bootstrapping, AIs develop competences as human children do. They start with innate competences. They interact with the environment and learn from their interactions. They incrementally extend their innate competences with self-developed ones. They interact with and learn from people, establishing perceptual, cognitive, and common grounding. They acquire the competences they need through bootstrapping. However, developmental robotics has not yet produced AIs with robust adult-level competences. Projects have typically stopped at the Toddler Barrier, corresponding to human development at about two years of age, before speech becomes fluent. They also do not bridge the Reading Barrier: the ability to skillfully and skeptically draw on the socially developed information resources that power current LLMs. The next competences in human cognitive development involve intrinsic motivation, imitation learning, imagination, coordination, and communication. This position paper lays out the logic, prospects, gaps, and challenges for extending the practice of developmental bootstrapping to acquire further competences and create robust, resilient, and human-compatible AIs.

    Comment: 102 pages, 29 figures
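    The developmental sequence sketched above (innate competences, interaction, incremental self-extension) can be made concrete as a control loop. The following is a minimal, hypothetical Python sketch of such a bootstrapping loop over a toy environment; every name in it (Competence, Agent, run_episode) is our own illustration, not from the paper.

        import random

        # Toy sketch of developmental bootstrapping: the agent starts with
        # innate competences only, and promotes behaviours that the
        # environment rewards into new, self-developed competences.
        class Competence:
            def __init__(self, name, act):
                self.name = name
                self.act = act  # maps an observation to an action

        class Agent:
            def __init__(self, innate):
                self.competences = list(innate)  # initially innate only
                self.experience = []             # (obs, action, reward) history

            def step(self, obs):
                skill = random.choice(self.competences)
                return skill, skill.act(obs)

            def learn(self, skill, obs, action, reward):
                self.experience.append((obs, action, reward))
                # Bootstrapping step: freeze a rewarded action into a new
                # competence (it ignores the observation, for simplicity).
                if reward > 0.9:
                    self.competences.append(
                        Competence(skill.name + "+", lambda _obs, a=action: a))

        def run_episode(agent, steps=100):
            for _ in range(steps):
                obs = random.randint(0, 3)
                skill, action = agent.step(obs)
                reward = 1.0 if action == obs else 0.0  # toy reward signal
                agent.learn(skill, obs, action, reward)

        agent = Agent([Competence("guess", lambda obs: random.randint(0, 3))])
        run_episode(agent)
        print("competences after bootstrapping:", len(agent.competences))

    The point of the sketch is the control flow: the competence set is not fixed in advance but grows out of rewarded interaction, which is what distinguishes bootstrapping from training a fixed model once.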

    Parallel and Flow-Based High Quality Hypergraph Partitioning

    Balanced hypergraph partitioning is a classic NP-hard optimization problem and a fundamental tool in disciplines as diverse as VLSI circuit design, route planning, sharding distributed databases, optimizing communication volume in parallel computing, and accelerating the simulation of quantum circuits. Given a hypergraph and an integer k, the task is to divide the vertices into k disjoint blocks of bounded size while minimizing an objective function on the hyperedges that span multiple blocks. In this dissertation we consider the most commonly used objective, the connectivity metric, where we aim to minimize the number of different blocks connected by each hyperedge. The most successful heuristic for balanced partitioning is the multilevel approach, which consists of three phases. In the coarsening phase, vertex clusters are contracted to obtain a sequence of structurally similar but successively smaller hypergraphs. Once the hypergraph is sufficiently small, an initial partition is computed. Lastly, the contractions are undone in reverse order, and an iterative improvement algorithm refines the projected partition on each level.

    An important aspect of designing practical heuristics for optimization problems is the trade-off between solution quality and running time; the appropriate trade-off depends on the specific application, the size of the data sets, and the computational resources available. Existing algorithms are either sequential, slow, and of high solution quality, or simple, fast, easy to parallelize, and of low quality. While this trade-off cannot be avoided entirely, our goal is to close the gaps as much as possible. We achieve this by improving the state of the art in all non-trivial areas of the trade-off landscape with only a few techniques, employed in two different ways. Furthermore, most research on parallelization has focused on distributed memory, which neglects the greater flexibility of shared-memory algorithms and the wide availability of commodity multi-core machines. In this thesis, we therefore design and revisit fundamental techniques for each phase of the multilevel approach and develop highly efficient shared-memory parallel implementations of them.

    We consider two iterative improvement algorithms, one based on the Fiduccia-Mattheyses (FM) heuristic and one based on label propagation. For these, we propose a variety of techniques that improve the accuracy of gains when moving vertices in parallel, as well as low-level algorithmic improvements. For coarsening, we present a parallel variant of greedy agglomerative clustering with a novel method to resolve cluster-join conflicts on the fly. Combined with a community-detection-based preprocessing phase for coarsening, a portfolio of from-scratch partitioning algorithms, and recursive partitioning with work stealing, this yields our first parallel multilevel framework. It is the fastest partitioner known and achieves medium-high quality: it beats all parallel partitioners and comes close to the highest-quality sequential partitioner.

    Our second contribution is a parallelization of the n-level approach, in which only a single vertex is contracted and uncontracted on each level. This extreme approach aims at high solution quality via very fine-grained, localized refinement, but seems inherently sequential. We devise an asynchronous n-level coarsening scheme based on a hierarchical decomposition of the contractions, together with a batch-synchronous and, later, a fully asynchronous uncoarsening. In addition, we adapt our refinement algorithms and reuse the preprocessing and portfolio. This scheme is highly scalable and achieves the same quality as the highest-quality sequential partitioner (which is based on the same components), though the fine-grained uncoarsening makes it slower than our first framework.

    The last ingredient for high quality is an iterative improvement algorithm based on maximum flows. In the sequential setting, we first improve an existing idea by solving incremental maximum flow problems, which leads to smaller cuts and, thanks to engineering efforts, is also faster. Subsequently, we parallelize the maximum flow algorithm itself and schedule refinements in parallel. Beyond the pursuit of the highest quality, we present a deterministically parallel partitioning framework, with deterministic versions of the preprocessing, coarsening, and label propagation refinement. Experimentally, we demonstrate that the penalties for determinism in terms of partition quality and running time are very small.

    All of our claims are validated through extensive experiments that compare our algorithms with state-of-the-art solvers on large and diverse benchmark sets. To foster further research, we make our contributions available in the open-source framework Mt-KaHyPar. While it seems inevitable that ever-increasing problem sizes will force a transition to distributed-memory algorithms, the study of shared-memory techniques is not in vain. With the multilevel approach, even inherently slow techniques have a role to play in fast systems, since they can be employed to boost quality on the coarse levels at little expense. Similarly, shared-memory techniques remain important, both once a coarse graph fits into the memory of a single machine and as local building blocks in distributed algorithms.
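    As a concrete reference point, the connectivity objective described above is simple to state in code: each hyperedge e that touches λ(e) distinct blocks contributes λ(e) − 1 to the objective. The following minimal Python sketch computes it; the hypergraph and partition encodings are our own assumptions for illustration, not Mt-KaHyPar's interface.

        # Connectivity metric of a k-way partition: for each hyperedge e,
        # count the distinct blocks lambda(e) it touches and sum lambda(e) - 1.
        # Encodings (illustrative assumptions, not Mt-KaHyPar's API):
        #   hyperedges: list of iterables of vertex ids
        #   block_of:   dict mapping each vertex id to its block id in 0..k-1
        def connectivity_metric(hyperedges, block_of):
            total = 0
            for edge in hyperedges:
                blocks = {block_of[v] for v in edge}  # distinct blocks spanned
                total += len(blocks) - 1              # uncut hyperedges add 0
            return total

        # Example: 4 vertices split into k = 2 blocks; only (1, 2, 3) is cut.
        hyperedges = [(0, 1), (1, 2, 3), (2, 3)]
        block_of = {0: 0, 1: 0, 2: 1, 3: 1}
        print(connectivity_metric(hyperedges, block_of))  # -> 1

    Refinement algorithms such as FM and label propagation evaluate vertex moves by the change (gain) in exactly this quantity, which is why accurate parallel gain computation is a central concern above.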

    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    Efficiency and Sustainability of the Distributed Renewable Hybrid Power Systems Based on the Energy Internet, Blockchain Technology and Smart Contracts-Volume II

    The climate changes that are becoming visible today are a challenge for the global research community. In this context, renewable energy sources, fuel cell systems, and other energy-generating sources must be optimally combined and connected to the grid using advanced energy transaction methods. This reprint presents the latest solutions for implementing fuel cells and renewable energy in mobile and stationary applications, such as hybrid and microgrid power systems based on the Energy Internet, blockchain technology, and smart contracts; we hope it will be of interest to readers working in these fields.

    Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services

    This reprint collects state-of-the-art research contributions that address challenges in the design, dimensioning, and optimization of emerging 5G networks. The design, dimensioning, and optimization of communication network resources and services have always been an inseparable part of telecom network development. These networks must convey large volumes of traffic and serve traffic streams with highly differentiated requirements in terms of bit rate, service time, and required quality-of-service and quality-of-experience parameters. Such a communication infrastructure presents many important challenges, such as the study of necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, low-layer network design, network management and security issues, and new technologies in general, all of which are discussed in this book.

    D4.2 Intelligent D-Band wireless systems and networks initial designs

    This deliverable presents the results of the ARIADNE project's Task 4.2, Machine-learning-based network intelligence. It describes work conducted on various aspects of network management to deliver system-level, qualitative solutions that leverage diverse machine learning techniques. The chapters present system-level, simulation, and algorithmic models based on multi-agent reinforcement learning and deep reinforcement learning; learning automata for complex event forecasting; a system-level model for proactive handovers and resource allocation; model-driven, deep-learning-based channel estimation and feedback; and strategies for deploying machine-learning-based solutions. In short, D4.2 provides results on promising AI- and ML-based methods investigated in the ARIADNE project, along with their limitations and potential.

    Themelio: a new blockchain paradigm

    Public blockchains hold great promise for building protocols that uphold security properties like transparency and consistency based on internal, incentivized cryptoeconomic mechanisms rather than preexisting trust in participants. Yet user-facing blockchain applications beyond "internal" immediate derivatives of blockchain incentive models, like cryptocurrency and decentralized finance, have not achieved widespread development or adoption. We propose that this is not primarily due to "engineering" problems in aspects such as scaling, but to an overall lack of transferable endogenous trust: the twofold ability to uphold strong, internally generated security guarantees and to translate them into application-level security. We argue that blockchains, due to their foundation on game-theoretic incentive models rather than trusted authorities, are nevertheless uniquely suited to building transferable endogenous trust, despite their current deficiencies. We then survey existing public blockchains and the difficulties and crises they have faced, noting that in almost every case, problems such as governance disputes and ecosystem inflexibility stem from a lack of transferable endogenous trust.

    Next, we introduce Themelio, a decentralized public blockchain designed to support a new blockchain paradigm focused on transferable endogenous trust. Here, the blockchain is used as a low-level, stable, and simple root of trust, capable of sharing this trust with applications through scalable light clients. This contrasts with current blockchains, which are either applications or application execution platforms. We present evidence that this new paradigm is crucial to achieving flexible deployment of blockchain-based trust. We then describe the Themelio blockchain in detail, focusing on three areas key to its overall theme of transferable, strong endogenous trust: a traditional yet enhanced UTXO model with features that allow powerful programmability and light-client composability; a novel proof-of-stake system with unique cryptoeconomic guarantees against collusion; and Themelio's native cryptocurrency, the "mel", which achieves stablecoin-like low volatility without sacrificing decentralization or security.

    Finally, we explore the wide variety of novel, partly off-chain applications enabled by Themelio's decoupled blockchain paradigm. These include Astrape, a privacy-protecting off-chain micropayment network, and Bitforest, a blockchain-based PKI that combines blockchain-backed security guarantees with the performance and administration benefits of traditional systems, as well as sketches of further applications.
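    To make the coin model concrete, the following minimal Python sketch illustrates a generic UTXO-style ledger of the kind the abstract builds on: the state is a set of unspent coins, and a transaction atomically spends some coins and creates new ones. This is our own generic illustration, not Themelio's actual data structures, covenant language, or transaction format.

        from dataclasses import dataclass

        # Generic UTXO-style ledger (illustrative only, not Themelio's format).
        @dataclass(frozen=True)
        class Coin:
            txid: str   # id of the transaction that created this coin
            index: int  # output position within that transaction
            value: int  # denomination, e.g. in the smallest unit of a currency
            owner: str  # party (or covenant) allowed to spend the coin

        class Ledger:
            def __init__(self):
                self.utxos = {}  # (txid, index) -> Coin; the entire state

            def apply_tx(self, txid, inputs, outputs):
                """Spend the coins keyed by `inputs`; create `outputs` as (value, owner)."""
                spent = [self.utxos[key] for key in inputs]  # KeyError if unknown/spent
                if sum(c.value for c in spent) < sum(v for v, _ in outputs):
                    raise ValueError("outputs exceed inputs")  # no money creation
                for key in inputs:
                    del self.utxos[key]  # coins are single-use
                for i, (value, owner) in enumerate(outputs):
                    self.utxos[(txid, i)] = Coin(txid, i, value, owner)

        ledger = Ledger()
        ledger.utxos[("genesis", 0)] = Coin("genesis", 0, 100, "alice")
        ledger.apply_tx("tx1", [("genesis", 0)], [(60, "bob"), (40, "alice")])
        print(sorted(ledger.utxos))  # -> [('tx1', 0), ('tx1', 1)]

    In a design like the one described above, light clients would verify compact proofs about individual entries of this coin set against a trusted header, rather than replaying every transaction, which is what makes the trust transferable to applications.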

    Qualitative impact assessment of land management interventions on Ecosystem Services (“QEIA”). Report 3.7: Cultural Services

    The focus of this project was to provide a rapid qualitative assessment of the impact on Ecosystem Services (ES) of land management interventions proposed for inclusion in Environmental Land Management (ELM) schemes. This involved a review of the current evidence base by ten expert teams drawn from the independent research community, producing a consistent series of ten Evidence Reviews. These reviews were undertaken rapidly at Defra's request and together captured more than 2000 individual sources of evidence. The reviews were then used to inform an Integrated Assessment (IA), which provides a more accessible summary of the evidence with a focus on the actions with the greatest potential magnitude of change for the intended ES, together with their potential co-benefits and trade-offs across the Ecosystem Services and Ecosystem Service Indicators. The final IA table captured scores for 741 actions across 8 Themes, 33 ES, and 53 ES-indicators, giving a total possible matrix of 39,273 scores (741 actions × 53 ES-indicators). It should be noted that this piece of work is just one element of the wider underpinning work Defra has commissioned to support the development of the ELM schemes. The project was carried out in two phases, with the environmental and provisioning services commissioned in Phase 1 and the cultural and regulatory services in a follow-on Phase 2. Because the evidence reviews were needed urgently, there was insufficient time for systematic reviews; the reviews therefore relied on the teams' knowledge of the peer-reviewed and grey literature, with some rapid additional checking of recent reports and papers. This limitation of the review process was clearly explained to and understood by Defra. The review presented here is one of the ten evidence reviews which informed the IA.

    Applications

    Volume 3 describes how resource-aware machine learning methods and techniques are used to successfully solve real-world problems. The book provides numerous specific application examples: in health and medicine, for risk modelling, diagnosis, and treatment selection for diseases; in electronics, steel production, and milling, for quality control during manufacturing processes; and in traffic and logistics, for smart cities and for mobile communications.