
    Estimating the Longest Increasing Subsequence in Nearly Optimal Time

    Longest Increasing Subsequence (LIS) is a fundamental statistic of a sequence and has been studied for decades. While the LIS of a sequence of length n can be computed exactly in time O(n log n), the complexity of estimating the (length of the) LIS in sublinear time, especially when LIS ≪ n, is still open. We show that for any integer n and any λ = o(1), there exists a (randomized) non-adaptive algorithm that, given a sequence of length n with LIS ≥ λn, approximates the LIS up to a factor of 1/λ^o(1) in n^o(1)/λ time. Our algorithm improves upon prior work substantially in terms of both approximation and run-time: (i) we provide the first sub-polynomial approximation for LIS in sub-linear time; and (ii) our run-time complexity essentially matches the trivial sample-complexity lower bound of Ω(1/λ), which is required to obtain any non-trivial approximation of the LIS. As part of our solution, we develop two novel ideas which may be of independent interest. First, we define a new Genuine-LIS problem, in which each sequence element may be either genuine or corrupted. In this model, the user receives unrestricted access to the actual sequence, but does not know a priori which elements are genuine. The goal is to estimate the LIS using genuine elements only, with the minimal number of "genuineness tests". The second idea, Precision Forest, enables accurate estimation of compositions of general functions from "coarse" (sub-)estimates. Precision Forest essentially generalizes classical precision sampling, which works only for summations. As a central tool, the Precision Forest is initially pre-processed on a set of samples, which is thereafter repeatedly reused by multiple sub-parts of the algorithm, improving their amortized complexity. Comment: Full version of FOCS 2022 paper.
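For contrast with the sublinear estimators in the abstract, the classical exact O(n log n) computation it refers to can be done with patience sorting. A minimal sketch (function name is our own, not from the paper):

```python
from bisect import bisect_left

def lis_length(seq):
    """Length of the longest strictly increasing subsequence, O(n log n).

    tails[k] holds the smallest possible tail value of an increasing
    subsequence of length k+1 seen so far; each element either extends
    the longest subsequence found or tightens an existing tail.
    """
    tails = []
    for x in seq:
        i = bisect_left(tails, x)   # first tail >= x
        if i == len(tails):
            tails.append(x)         # x extends the longest subsequence
        else:
            tails[i] = x            # x is a smaller tail for length i+1
    return len(tails)
```

For example, `lis_length([3, 1, 4, 1, 5, 9, 2, 6])` returns 4 (one witness is 1, 4, 5, 9). The sublinear algorithms in the paper instead sample positions and trade exactness for run-time.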

    A Faster Algorithm for Calculating Hypervolume

    We present an algorithm for calculating hypervolume exactly, the Hypervolume by Slicing Objectives (HSO) algorithm, that is faster than any previously published. HSO processes objectives instead of points, an idea that has been considered before but never properly evaluated in the literature. We show that both previously studied exact hypervolume algorithms are exponential in at least the number of objectives and that, although HSO is also exponential in the number of objectives in the worst case, it runs in significantly less time, i.e., two to three orders of magnitude less for randomly generated and benchmark data in three to eight objectives. Thus, HSO increases the utility of hypervolume, both as a metric for general optimization algorithms and as a diversity mechanism for evolutionary algorithms.
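In the two-objective base case, the hypervolume HSO eventually reduces to can be computed with a simple sweep. A sketch for maximization with the reference point at the origin (representation and names are our own, not HSO itself, which recursively slices away one objective at a time):

```python
def hypervolume_2d(points, ref=(0.0, 0.0)):
    """Exact hypervolume of a 2-objective (maximization) point set
    relative to a reference point, via a sweep over one objective.

    Dominated points contribute nothing: the sweep only counts the
    part of each slice above the best second objective seen so far.
    """
    # Sort by first objective, descending; accumulate horizontal slices.
    pts = sorted(points, key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv
```

For instance, the front {(3,1), (2,2), (1,3)} dominates a region of area 6 relative to (0,0). The exponential cost discussed in the abstract appears only once the slicing recursion is applied across many objectives.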

    On the Computation of Common Subsumers in Description Logics

    Description logic (DL) knowledge bases are often built by users with expertise in the application domain, but little expertise in logic. To support such users in building their knowledge bases, a number of extension methods have been proposed that provide the user with concept descriptions as a starting point for new concept definitions. The inference service central to several of these approaches is the computation of (least) common subsumers of concept descriptions. In case disjunction of concepts can be expressed in the DL under consideration, the least common subsumer (lcs) is just the disjunction of the input concepts. Such a trivial lcs is of little use as a starting point for a new concept definition to be edited by the user. To address this problem, we propose two approaches to obtain "meaningful" common subsumers in the presence of disjunction, tailored to two different methods of extending DL knowledge bases. More precisely, we devise computation methods for the approximation-based approach and for the customization of DL knowledge bases, extend these methods to DLs with number restrictions, and discuss their efficient implementation.
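In a small DL such as EL (conjunction and existential restriction, no disjunction), the lcs can be computed as a product of description trees. A toy sketch of that standard construction (the representation and names are our own, not the paper's algorithms):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    atoms: frozenset   # atomic concept names, e.g. {"Person"}
    exists: tuple      # existential restrictions: ((role, Concept), ...)

def lcs(c: Concept, d: Concept) -> Concept:
    """Least common subsumer of two EL concepts (description-tree product).

    Keep the atoms common to both concepts, and for every pair of
    existential restrictions on the same role, recurse on the fillers.
    Note the cross product over restrictions: the lcs can grow
    exponentially in the size of the inputs.
    """
    atoms = c.atoms & d.atoms
    exists = tuple((r1, lcs(f1, f2))
                   for r1, f1 in c.exists
                   for r2, f2 in d.exists
                   if r1 == r2)
    return Concept(atoms, exists)
```

For example, the lcs of Person ⊓ ∃hasChild.Doctor and (Person ⊓ Rich) ⊓ ∃hasChild.(Doctor ⊓ Rich) is Person ⊓ ∃hasChild.Doctor, which is more informative than a trivial disjunction would be in a DL that had one.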

    Exploring QCD matter in extreme conditions with Machine Learning

    In recent years, machine learning has emerged as a powerful computational tool and a novel problem-solving perspective for physics, offering new avenues for studying the properties of strongly interacting QCD matter under extreme conditions. This review article aims to provide an overview of the current state of this intersection of fields, focusing on the application of machine learning to theoretical studies in high-energy nuclear physics. It covers diverse aspects, including heavy-ion collisions, lattice field theory, and neutron stars, and discusses how machine learning can be used to explore and facilitate the physics goals of understanding QCD matter. The review also surveys methodological commonalities, ranging from data-driven to physics-driven perspectives. We conclude by discussing the challenges and future prospects of machine learning applications in high-energy nuclear physics, underscoring the importance of incorporating physics priors into the purely data-driven learning toolbox. This review highlights the critical role of machine learning as a valuable computational paradigm for advancing physics exploration in high-energy nuclear physics. Comment: 146 pages, 53 figures.

    Artificial intelligence : A powerful paradigm for scientific research

    Artificial intelligence (AI), coupled with promising machine learning (ML) techniques well known from computer science, is broadly affecting many aspects of various fields, including science and technology, industry, and even our day-to-day life. ML techniques have been developed to analyze high-throughput data with a view to obtaining useful insights, categorizing, predicting, and making evidence-based decisions in novel ways, which will promote the growth of novel applications and fuel the sustainable booming of AI. This paper undertakes a comprehensive survey of the development and application of AI in different aspects of fundamental sciences, including information science, mathematics, medical science, materials science, geoscience, life science, physics, and chemistry. The challenges that each discipline of science meets, and the potential of AI techniques to handle these challenges, are discussed in detail. Moreover, we shed light on new research trends entailing the integration of AI into each scientific discipline. The aim of this paper is to provide a broad research guideline on fundamental sciences with potential infusion of AI, to help motivate researchers to deeply understand the state-of-the-art applications of AI-based fundamental sciences, and thereby to help promote the continuous development of these fundamental sciences. Peer reviewed.

    Network Science and Law: A Sales Pitch and an Application to the Patent Explosion

    The network may be the technological metaphor of the present era. A network, consisting of "nodes" and "links," may be a group of individuals linked by friendship; a group of computers linked by network cables; a system of roads or airline flights -- or another of a virtually limitless variety of systems of connected "things." The past few years have seen an explosion of interest in "network science" in fields from physics to sociology. Network science highlights the role of relationship patterns in determining collective behavior. It underscores and begins to address the difficulty of predicting collective behavior from individual interactions. This Article seeks first to describe how network science can provide new conceptual and empirical approaches to legal questions because of its focus on analyzing the effects of patterns of relationship. Second, the Article illustrates the network approach by describing a study of the network created by patents and the citations between them. Burgeoning patenting has raised concerns about patent quality, reflected in proposed legislation and in renewed Supreme Court attention to patent law. The network approach allows us to get behind the increasing numbers and investigate the relationships between patented technologies. We distinguish between faster technological progress, increasing breadth of patented technologies, and a lower patentability standard as possible explanations for increased patenting. Our analysis suggests that increasing pace and breadth of innovation alone are unlikely to explain the recent evolution of the patent citation network. Since the early 1990s the disparity in likelihood of citation between the most "citable" and least "citable" patents has grown, suggesting that patents may be being issued for increasingly trivial advances. The timing of the increasing stratification is correlated with increasing reliance by the Federal Circuit Court of Appeals on the widely criticized "motivation or suggestion to combine" test for nonobviousness, although we cannot rule out other explanations. The final part of the Article describes how network analysis may be used to address other issues in patent law.
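The citation-disparity measurement described above can be illustrated on a toy citation network (the data, function name, and concentration measure here are invented for illustration, not the Article's actual methodology):

```python
from collections import Counter

def citation_share_of_top(citations, top_frac=0.1):
    """Fraction of all citations received by the most-cited top_frac
    of cited patents. A value growing over time would indicate a
    widening gap between the most and least 'citable' patents.

    `citations` is a list of (citing_patent, cited_patent) pairs.
    """
    indegree = Counter(cited for _, cited in citations)
    counts = sorted(indegree.values(), reverse=True)
    k = max(1, int(len(counts) * top_frac))
    return sum(counts[:k]) / sum(counts)

# Toy network: patent C receives most of the citations.
edges = [("A", "C"), ("B", "C"), ("D", "C"), ("E", "C"),
         ("D", "E"), ("A", "B")]
```

Here the single most-cited patent (C) accounts for 4 of the 6 citations, so `citation_share_of_top(edges)` returns about 0.67; tracking such a share across patent cohorts is one simple way to quantify stratification.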

    The Chirality-Flow Formalism and Optimising Scattering Amplitudes

    This thesis is composed of five papers, which all attempt to optimise calculations of scattering amplitudes in high-energy-physics collisions. These scattering amplitudes are a key part of theoretical predictions for particle-physics experiments like the Large Hadron Collider at CERN. The first four papers are the main topic of the thesis, and describe a novel method called chirality flow. Chirality flow simplifies Feynman-diagram calculations and makes them more intuitive. Papers I, II, and IV describe chirality flow in detail at both tree level and one-loop level, while paper III shows a first implementation of it in the event generator MadGraph5_aMC@NLO. The final paper instead explores the speed, accuracy, and precision of an approximation of the colour part of a scattering amplitude.

    Paper I introduces the chirality-flow formalism, a new pictorial method used to calculate tree-level helicity amplitudes by drawing lines and connecting them to find spinor inner products, instead of doing algebraic manipulations. This method makes calculations more transparent, and often allows one to go from Feynman diagram to spinor inner products in a single line. Massless QED and QCD are treated in full. Paper II extends the chirality-flow formalism of paper I to deal with massive particles, and therefore allows chirality flow to be used for any tree-level Standard Model calculation. Paper III describes our implementation of chirality flow in massless QED in MadGraph5_aMC@NLO. A speed comparison is made showing up to a factor of 10 increase in evaluation speed. Paper IV extends the chirality-flow formalism to the one-loop level for any Standard Model calculation, showing the simplifications in the numerator algebra and the tensor reduction. Paper V describes an extension to the MadGraph5_aMC@NLO event generator in which the kinematics are calculated using Berends-Giele recursions instead of Feynman diagrams, and the colour matrix can be expanded in the number of colours N_c. The speed of the extension, and the accuracy and precision of the colour expansion, are explored.
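The spinor inner products that chirality flow reduces diagrams to can be evaluated numerically from massless momenta. A minimal sketch (one common light-cone parametrization with our own function names, not the thesis's conventions), which exhibits the standard identity |⟨ij⟩|² = 2 pᵢ·pⱼ:

```python
import cmath

def angle_spinor(p):
    """Holomorphic Weyl spinor for a massless momentum p = (E, px, py, pz),
    using light-cone variables p+ = E + pz (assumed > 0 here)."""
    E, px, py, pz = p
    pplus = E + pz
    pperp = complex(px, py)
    return (cmath.sqrt(pplus), pperp / cmath.sqrt(pplus))

def angle_bracket(pi, pj):
    """Spinor inner product <ij>: antisymmetric contraction of two spinors."""
    a1, b1 = angle_spinor(pi)
    a2, b2 = angle_spinor(pj)
    return a1 * b2 - b1 * a2

def dot(p, q):
    """Minkowski product with signature (+, -, -, -)."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]
```

For massless pᵢ, pⱼ the squared modulus of `angle_bracket(pi, pj)` equals `2 * dot(pi, pj)`, i.e. the invariant s_ij; the pictorial flow rules bookkeep exactly which of these products multiply together in an amplitude.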