56 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    A geo-informatics approach to sustainability assessments of floatovoltaic technology in South African agricultural applications

    South African project engineers recently pioneered the first agricultural floating solar photovoltaic technology systems in the Western Cape wine region. This effort prepared the country for an imminent large-scale diffusion of this promising new climate-mitigation technology. However, hydro-embedded photovoltaic systems interact with environmentally sensitive underlying aquatic ecosystems, causing multiple project assessment uncertainties (energy, land, air, water) compared to ground-mounted photovoltaics. The dissimilar behaviour of floatovoltaic technologies delivers a broader and more diversified range of technical advantages, environmental offset benefits, and economic co-benefits, causing analytical modelling imperfections and tooling mismatches in conventional project assessment techniques. Treating this as a real-world problem of international significance, the literature review identified critical knowledge and methodology gaps as the primary causes of modelling deficiencies and assessment uncertainties. By following a design-thinking methodology, the thesis views the sustainability assessment and modelling problem through a geographical information systems lens, identifying a research opportunity to fill critical knowledge gaps through new theory formulation and geographical knowledge creation. To this end, the investigation proposes a novel object-oriented systems-thinking and climate modelling methodology to study the real-world geospatial behaviour of functioning floatovoltaic systems from a dynamical-systems perspective. As an empirical, feedback-driven object-process methodology, it led the thesis to postulate a new multi-disciplinary sustainability theory that holistically characterises agricultural floatovoltaic projects through ecosystems-based quantitative sustainability-profiling criteria. The study breaks new ground at the frontiers of energy geo-informatics by conceptualising a holistic theoretical framework for characterising floatovoltaic technology ecosystem operations in terms of the technical energy, environmental, and economic (3E) domain responses. It argues for a fully coupled ensemble-analysis model that advances the state of the art by using the 3E theoretical framework as the underpinning computer-program logic blueprint to synthesise the posited theory in a digital twin simulation. Driven by real-world geo-sensor data, this geospatial digital twin can mimic the geodynamical behaviour of floatovoltaics through discrete-time computer simulations in real-time and lifetime digital project enactment exercises. The results show that the theoretical 3E framing enables project due diligence and environmental impact assessment reporting, as it uniquely incorporates balanced-scorecard performance metrics such as water-energy-land-food resource impacts, environmental offset benefits, and the financial feasibility of floatovoltaics. Embedded in a geoinformatics decision-support platform, the 3E theory, framework, and model enable numerical project decision support through an analytic hierarchy process. The experimental results obtained with the digital twin model and decision-support system show that the desktop-based parametric floatovoltaic synthesis toolset can uniquely characterise the broad and diverse spectrum of performance benefits of floatovoltaics in a 3E sustainability profile. The model predicts important impacts of the technology’s land, air, and water preservation qualities, quantifying them in terms of the water, energy, land, and food nexus parameters. The proposed GIS model can quantitatively predict most FPV technology unknowns, thus addressing a contemporary real-world problem that currently jeopardises floating PV project licensing and approvals. Overall, the posited theoretical framework, methodology, model, and reported results provide an improved understanding of floating PV renewable energy systems and their real-world behaviour. Amidst rapidly growing international interest in floatovoltaic solutions, the research advances fresh philosophical ideas with novel theoretical principles that may have far-reaching implications for developing photovoltaic performance models worldwide.
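
    As a concrete illustration of the decision-support step described above, the following minimal sketch derives criteria weights with an analytic hierarchy process. The 3E pairwise judgements on Saaty's 1-9 scale are invented for illustration, not values from the thesis.

```python
# Minimal AHP sketch: priority weights for hypothetical 3E criteria.
import numpy as np

# Pairwise comparisons (illustrative assumptions): how much more important the
# row criterion is than the column criterion (energy, environmental, economic).
A = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1/2],
    [1/2, 2.0, 1.0],
])

# Priority weights are the principal eigenvector of A, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio; Saaty's random index RI for n=3 is 0.58.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print(dict(zip(["energy", "environmental", "economic"], w.round(3))), f"CR={cr:.3f}")
```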

    Two studies in resource-efficient inference: structural testing of networks, and selective classification

    Inference systems suffer costs arising from information acquisition, and from the communication and computational costs of executing complex models. This dissertation proposes, in two distinct themes, systems-level methods to reduce these costs without affecting the accuracy of inference: ancillary low-cost methods cheaply address most queries, while resource-heavy methods are reserved for 'difficult' instances. The first theme concerns testing methods in the structural inference of networks and graphical models, the proposal being that one first cheaply tests whether the structure underlying a dataset differs from a reference structure, and only estimates the new structure if this difference is large. This study focuses on theoretically establishing separations between the costs of testing and learning, to determine when such a strategy has benefits. For two canonical models, the Ising model and the stochastic block model, fundamental limits are derived on the costs of one- and two-sample goodness-of-fit tests by determining information-theoretic lower bounds and developing matching tests. A biphasic behaviour in the costs of testing is demonstrated: there is a critical size scale such that detecting differences smaller than this scale is nearly as expensive as recovering the structure, while detecting larger differences has vanishing cost relative to recovery. The second theme concerns using selective classification (SC), or classification with an option to abstain, to control inference-time costs in the machine learning framework. The proposal is to learn a low-complexity selective classifier that abstains only on hard instances, and to execute more expensive methods upon abstention. A novel SC formulation with a focus on high accuracy is developed and used to obtain both theoretical characterisations and a scheme for learning selective classifiers based on optimising a collection of class-wise decoupled one-sided risks. This scheme attains strong empirical performance and admits efficient implementation, leading to an effective SC methodology. Finally, SC is studied in the online learning setting with feedback provided only upon abstention, modelling the practical lack of reliable labels without expensive feature collection, and a Pareto-optimal low-error scheme is described.
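
    A minimal sketch of the cascade idea shared by both themes, assuming toy stand-ins for the cheap and expensive predictors (the dissertation's actual learning scheme for the selective classifier is more involved):

```python
# Cheap selective classifier answers confident queries; abstentions are
# deferred to an expensive fallback model. Models and threshold are
# illustrative assumptions, not the dissertation's method.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SelectiveCascade:
    cheap: Callable[[float], tuple[int, float]]  # returns (label, confidence)
    expensive: Callable[[float], int]            # costly fallback predictor
    threshold: float = 0.9                       # abstain below this confidence

    def predict(self, x: float) -> int:
        label, conf = self.cheap(x)
        if conf >= self.threshold:
            return label          # cheap path: most queries end here
        return self.expensive(x)  # abstention: pay the full inference cost

# Usage with toy models, where confidence is just |x|:
clf = SelectiveCascade(
    cheap=lambda x: (int(x > 0), abs(x)),
    expensive=lambda x: int(x > 0),
)
print(clf.predict(2.5), clf.predict(0.1))  # cheap path, then deferred path
```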

    LIPIcs, Volume 248, ISAAC 2022, Complete Volume


    Big Data and Artificial Intelligence in Digital Finance

    This open access book presents how cutting-edge digital technologies like Big Data, Machine Learning, Artificial Intelligence (AI), and Blockchain are set to disrupt the financial sector. The book illustrates how recent advances in these technologies enable banks, FinTechs, and financial institutions to collect, process, analyze, and fully leverage the very large amounts of data that are nowadays produced and exchanged in the sector. To this end, the book also describes some of the most popular Big Data, AI, and Blockchain applications in the sector, including novel applications in the areas of Know Your Customer (KYC), Personalized Wealth Management and Asset Management, and Portfolio Risk Assessment, as well as a variety of novel Usage-based Insurance applications based on Internet-of-Things data. Most of the presented applications have been developed, deployed, and validated in real-life digital finance settings in the context of the European Commission-funded INFINITECH project, a flagship innovation initiative for Big Data and AI in digital finance. This book is ideal for researchers and practitioners in Big Data, AI, banking, and digital finance.

    Order-Related Problems Parameterized by Width

    In the main body of this thesis, we study two order-theoretic problems. The first, called Completion of an Ordering, asks to extend a given finite partial order ρ to a complete linear order while respecting some weight constraints. The second is an order reconfiguration problem under width constraints. While the Completion of an Ordering problem is NP-complete, we show that it lies in FPT when parameterized by the interval width of ρ. This ordering problem can be used to model several ordering problems stemming from diverse application areas, such as graph drawing, computational social choice, and computer memory management. Each application yields a special partial order ρ. We also relate the interval width of ρ to parameterizations for these problems that have been studied earlier in the context of these applications, sometimes improving on parameterized algorithms that had been developed for these parameterizations before. This approach also gives practical sub-exponential time algorithms for ordering problems. In our second main result, we combine our parameterized approach with the paradigm of solution diversity. The idea of solution diversity is that instead of developing algorithms that output a single optimal solution, the goal is to investigate algorithms that output a small set of sufficiently good solutions that are sufficiently diverse from one another. In this way, the user can choose the solution most appropriate to the context at hand, and the richness of the solution space is displayed. We show that the diversity version of the Completion of an Ordering problem is fixed-parameter tractable with respect to natural parameters that capture the notion of diversity and the notion of sufficiently good solutions. We apply this algorithm in the study of the Kemeny Rank Aggregation class of problems, a well-studied class lying in the intersection of order theory and social choice theory. Up to this point, we have looked at problems where the goal is to find an optimal solution or a diverse set of good solutions. In the last part, we shift our focus from finding solutions to studying the solution space of a problem. We consider the following order reconfiguration problem: given a graph G together with linear orders τ and τ′ of the vertices of G, can one transform τ into τ′ by a sequence of swaps of adjacent elements in such a way that at each time step the resulting linear order has cutwidth (pathwidth) at most w? We show that this problem always has an affirmative answer when the input linear orders τ and τ′ have cutwidth (pathwidth) at most w/2 (the cutwidth measure is illustrated in the sketch below). Using this result, we establish a connection between two apparently unrelated problems: the reachability problem for two-letter string rewriting systems and the graph isomorphism problem for graphs of bounded cutwidth. This opens an avenue for the study of the famous graph isomorphism problem using techniques from term rewriting theory. In addition to the main part of this work, we present results on two unrelated problems, namely the Steiner Tree problem and the Intersection Non-emptiness problem from automata theory.
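
    For readers unfamiliar with the width measure in the reconfiguration result, the sketch below computes the cutwidth of a linear order: the maximum, over gaps between consecutive positions, of the number of edges crossing that gap. The example graph is an illustrative assumption.

```python
def cutwidth(order, edges):
    """Cutwidth of `order` (a list of vertices) w.r.t. `edges` (vertex pairs)."""
    pos = {v: i for i, v in enumerate(order)}
    width = 0
    for gap in range(len(order) - 1):
        # Count edges with one endpoint at position <= gap and one beyond it.
        crossing = sum(1 for u, v in edges
                       if min(pos[u], pos[v]) <= gap < max(pos[u], pos[v]))
        width = max(width, crossing)
    return width

# A path a-b-c-d has cutwidth 1 in its natural order, but a worse
# order can force more edges across a single cut.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
print(cutwidth(["a", "b", "c", "d"], edges))  # 1
print(cutwidth(["a", "c", "b", "d"], edges))  # 3
```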

    A method for system of systems definition and modeling using patterns of collective behavior

    The Department of Defense ship and aircraft acquisition process, with its capability-based assessments and fleet synthesis studies, relies heavily on the assumption that a functional decomposition of higher-level system of systems (SoS) capabilities into lower-level system and subsystem behaviors is both possible and practical. However, SoS typically exhibit “non-decomposable” behaviors (also known as emergent behaviors) for which no widely accepted representation exists. The presence of unforeseen emergent behaviors, particularly undesirable ones, can make systems vulnerable to attacks, hacks, or other exploitation, or can cause acquisition program schedule delays and cost overruns as programs work to mitigate them. The International Council on Systems Engineering has identified the development of methods for predicting and managing emergent behaviors as one of the top research priorities for the systems engineering profession. Therefore, this thesis develops a method for rendering quantifiable SoS emergent properties and behaviors traceable to the patterns of interaction of their constitutive systems, so that exploitable patterns identified during the early stages of design can be accounted for. The method is designed to fill two gaps in the literature: first, the lack of an approach for mining data to derive a model (i.e., an equation) of the non-decomposable behavior; second, the lack of an approach for qualitatively and quantitatively associating emergent behaviors with the components that cause them. A definition of emergent behavior is synthesized from the literature, along with necessary conditions for its identification. An ontology of emergence that enables studying the emergent behaviors exhibited by self-organized systems via numerical simulations is adapted in order to develop the mathematical approach needed to satisfy the research objective. Within the confines of two carefully qualified assumptions (that the model is valid, and that the model is efficient), it is argued that simulated emergence is bona fide emergence, and that simulations can be used for experimentation without sacrificing rigor. The thesis then puts forward three hypotheses. The first is that self-organized structures imply the presence of a form of data compression, and that this compression can be used to explicitly calculate an upper bound on the number of emergent behaviors a system can possess. The second is that the set of numerical criteria for detecting emergent behavior derived in this research constitutes sufficient conditions for identifying weak and functional emergent behaviors. The third is that affecting the emergent properties of these systems has a bigger impact on the system’s performance than affecting any single component of that system. Using the method developed in this thesis, exploitable properties are identified and component behaviors are modified to attempt the exploit; changes in performance are evaluated using problem-specific measures of merit. The experiments falsify Hypothesis 2 (the numerical criteria are not sufficient conditions) by identifying instances where the criteria produce a false positive, so a set of sufficient conditions for emergent behavior identification remains to be found. Hypothesis 1 was also falsified, based on a worst-case scenario in which the largest possible number of obtainable emergent behaviors was compared against the upper bound computed from the smallest possible data compression of a self-organized system. Hypothesis 3, on the other hand, was supported: new behavior rules based on component-level properties provided less improvement in performance against an adversary than rules based on system-level properties. Overall, the method is shown to be an effective, systematic approach to non-decomposable behavior exploitation, and an improvement over the modern, largely ad hoc approach.
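
    A hedged illustration of the compression intuition behind Hypothesis 1: self-organized (structured) states compress far better than disordered ones. The toy byte-string "states" below are assumptions chosen purely to show the measurement, not the thesis's simulation data.

```python
import zlib, random

random.seed(0)
organized = bytes([i % 4 for i in range(4096)])             # periodic structure
disordered = bytes(random.randrange(256) for _ in range(4096))  # no structure

for name, state in [("organized", organized), ("disordered", disordered)]:
    # Compressed size as a fraction of the original: low for structured states.
    ratio = len(zlib.compress(state)) / len(state)
    print(f"{name}: compressed to {ratio:.2%} of original size")
```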

    Sublinear Computation Paradigm

    This open access book gives an overview of cutting-edge work on a new paradigm called the “sublinear computation paradigm,” which was proposed in the large multiyear academic research project “Foundations of Innovative Algorithms for Big Data,” which ran in Japan from October 2014 to March 2020. To handle the unprecedented explosion of big data sets in research, industry, and other areas of society, there is an urgent need to develop novel methods and approaches for big data analysis. To meet this need, innovative changes in algorithm theory for big data are being pursued. For example, polynomial-time algorithms have thus far been regarded as “fast,” but if a quadratic-time algorithm is applied to a petabyte-scale or larger big data set, problems are encountered in terms of computational resources or running time. To deal with this critical computational and algorithmic bottleneck, linear-, sublinear-, and constant-time algorithms are required. The sublinear computation paradigm is proposed here in order to support innovation in the big data era. A foundation of innovative algorithms has been created by developing computational procedures, data structures, and modelling techniques for big data. The project is organized into three teams that focus on sublinear algorithms, sublinear data structures, and sublinear modelling. The work has provided high-level academic research results of strong computational and algorithmic interest, which are presented in this book. The book consists of five parts: Part I is a single chapter on the concept of the sublinear computation paradigm; Parts II, III, and IV review results on sublinear algorithms, sublinear data structures, and sublinear modelling, respectively; and Part V presents application results. The information presented here will inspire researchers who work in the field of modern algorithms.
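
    A minimal sketch of the paradigm's core idea, assuming a toy data set: answer a global query about a huge input while reading only a number of entries that is independent of the input size (here via Hoeffding-style sampling).

```python
import math, random

def estimate_mean(data, eps=0.05, delta=0.01):
    """Estimate the mean of values in [0, 1] to within eps with probability
    1 - delta, reading only O(log(1/delta)/eps^2) entries of `data`."""
    m = math.ceil(math.log(2 / delta) / (2 * eps ** 2))  # Hoeffding bound
    sample = [data[random.randrange(len(data))] for _ in range(m)]
    return sum(sample) / m

random.seed(1)
data = [random.random() for _ in range(10**6)]  # stand-in for a big data set
print(estimate_mean(data))  # close to 0.5 after ~1060 reads, not 10^6
```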