2,789 research outputs found

    Nearly Optimal Separation Between Partially and Fully Retroactive Data Structures

    Since the introduction of retroactive data structures at SODA 2004, a major unsolved problem has been to bound the gap between the best partially retroactive data structure (where changes can be made to the past, but only the present can be queried) and the best fully retroactive data structure (where the past can also be queried) for any problem. It was proved in 2004 that any partially retroactive data structure with operation time T_{op}(n,m) can be transformed into a fully retroactive data structure with operation time O(sqrt{m} * T_{op}(n,m)), where n is the size of the data structure and m is the number of operations in the timeline [Demaine et al., 2004]. But it has been open for 14 years whether such a gap is necessary. In this paper, we prove nearly matching upper and lower bounds on this gap for all n and m. We improve the upper bound for n << sqrt{m} by showing a new transformation with multiplicative overhead n log m. We then prove a lower bound of min{n log m, sqrt{m}}^{1-o(1)} assuming any of the following conjectures:
    - Conjecture I: Circuit SAT requires 2^{n - o(n)} time on n-input circuits of size 2^{o(n)}. This conjecture is far weaker than the well-believed SETH conjecture from complexity theory, which asserts that CNF SAT with n variables and O(n) clauses already requires 2^{n - o(n)} time.
    - Conjecture II: Online (min,+) product between an integer n x n matrix and n vectors requires n^{3 - o(1)} time. This conjecture is weaker than the APSP conjectures widely used in fine-grained complexity.
    - Conjecture III (3-SUM Conjecture): Given three sets A, B, C of integers, each of size n, deciding whether there exist a in A, b in B, c in C such that a + b + c = 0 requires n^{2 - o(1)} time. This 1995 conjecture [Anka Gajentaan and Mark H. Overmars, 1995] was the first conjecture in fine-grained complexity.
    Our lower bound construction illustrates an interesting power of fully retroactive queries: they can be used to quickly solve batched pair evaluation. We believe this technique can prove useful for other data structure lower bounds, especially dynamic ones.
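    To make the transformation concrete, the sketch below (written for this listing, not code from the paper; the running-sum structure and the names PartialSum and FullSum are invented for illustration) shows the checkpointing idea behind the O(sqrt{m} * T_{op}(n,m)) bound: keep roughly sqrt(m) snapshots of a partially retroactive structure, and answer a query about time t by temporarily replaying the at most roughly sqrt(m) operations between the nearest earlier snapshot and t.

```python
# A minimal sketch (illustration only) of the sqrt(m)-checkpoint transformation
# from partial to full retroactivity, using a toy partially retroactive
# "running sum" whose only query is the present total.
import math
from bisect import insort, bisect_right


class PartialSum:
    """Toy partially retroactive structure: updates may be inserted or deleted
    at any past time, but only the present total can be queried."""
    def __init__(self, total=0):
        self.total = total

    def insert(self, t, v):      # retroactive update "add v at time t"
        self.total += v

    def delete(self, t, v):      # retroactively remove that update
        self.total -= v

    def query_present(self):
        return self.total


class FullSum:
    """Fully retroactive wrapper: query(t) answers as of any past time t."""
    def __init__(self):
        self.ops = []            # sorted list of (time, value)
        self.checkpoints = []    # sorted list of (time, PartialSum of ops up to that time)
        self.pending = 0         # inserts since the last rebuild

    def _rebuild(self):
        block = max(1, math.isqrt(len(self.ops)))
        self.checkpoints, running = [], 0
        for i, (t, v) in enumerate(self.ops, 1):
            running += v
            if i % block == 0:
                self.checkpoints.append((t, PartialSum(running)))
        self.pending = 0

    def insert(self, t, v):
        insort(self.ops, (t, v))
        # A retroactive update touches every checkpoint at or after time t.
        for ct, ps in self.checkpoints:
            if ct >= t:
                ps.insert(t, v)
        self.pending += 1
        if self.pending > math.isqrt(len(self.ops)):
            self._rebuild()      # keep checkpoint spacing near sqrt(m)

    def query(self, t):
        # Nearest checkpoint at or before t (or an empty structure).
        earlier = [(ct, ps) for ct, ps in self.checkpoints if ct <= t]
        ct, ps = earlier[-1] if earlier else (float("-inf"), PartialSum())
        # Temporarily replay the at most ~sqrt(m) operations in (ct, t].
        lo = bisect_right(self.ops, (ct, float("inf")))
        hi = bisect_right(self.ops, (t, float("inf")))
        extra = self.ops[lo:hi]
        for u, v in extra:
            ps.insert(u, v)
        answer = ps.query_present()
        for u, v in extra:       # roll the temporary operations back out
            ps.delete(u, v)
        return answer


if __name__ == "__main__":
    fs = FullSum()
    for time, val in [(10, 1), (20, 2), (30, 4), (5, 8)]:
        fs.insert(time, val)
    print(fs.query(25))          # 11: the updates at times 5, 10 and 20
```

    A retroactive update is also pushed into every snapshot at or after its time, again at most about sqrt(m) snapshots, which is where the multiplicative sqrt(m) overhead in the 2004 transformation comes from.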

    Lower Bounds on Retroactive Data Structures

    We prove essentially optimal fine-grained lower bounds on the gap between a data structure and a partially retroactive version of the same data structure. Precisely, assuming any one of three standard conjectures, we describe a problem that has a data structure where operations run in O(T(n,m)) time per operation, but any partially retroactive version of that data structure requires T(n,m) * m^{1-o(1)} worst-case time per operation, where n is the size of the data structure at any time and m is the number of operations. Any data structure with operations running in O(T(n,m)) time per operation can be converted (via the "rollback method") into a partially retroactive data structure running in O(T(n,m) * m) time per operation, so our lower bound is tight up to an m^{o(1)} factor common in fine-grained complexity.
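    The "rollback method" mentioned above can be sketched as follows (an illustrative sketch, not the paper's code; the key-value store and the name RollbackKV are invented): record every operation together with the information needed to undo it, and serve a retroactive update at time t by undoing all later operations, applying the new one, and replaying the suffix, which touches up to m operations and hence costs O(T(n,m) * m).

```python
# A hedged sketch (illustration only) of the rollback method applied to a toy
# key-value store whose only operation is set(key, value).

_MISSING = object()              # sentinel: "key was absent before this update"


class RollbackKV:
    """Partially retroactive key-value map over a plain dict."""

    def __init__(self):
        self.data = {}           # present state
        self.timeline = []       # (time, key, value), kept in chronological order
        self.undo = []           # undo stack aligned with the timeline

    def _apply(self, key, value):
        self.undo.append((key, self.data.get(key, _MISSING)))
        self.data[key] = value

    def _undo_last(self):
        key, prev = self.undo.pop()
        if prev is _MISSING:
            del self.data[key]
        else:
            self.data[key] = prev

    def insert(self, t, key, value):
        """Retroactively insert the update set(key, value) at time t."""
        suffix = [e for e in self.timeline if e[0] > t]
        for _ in suffix:         # roll back everything after time t
            self._undo_last()
        prefix = [e for e in self.timeline if e[0] <= t]
        self.timeline = prefix + [(t, key, value)] + suffix
        self._apply(key, value)
        for _, k, v in suffix:   # replay the rolled-back suffix
            self._apply(k, v)

    def query_present(self, key):
        return self.data.get(key)


if __name__ == "__main__":
    kv = RollbackKV()
    kv.insert(10, "x", 1)
    kv.insert(30, "y", 2)
    kv.insert(20, "x", 7)        # retroactive write into the past
    print(kv.query_present("x"))  # 7: the later write to "y" is replayed unchanged
```

    The abstract's lower bound says that, for some problem, this m-fold replay overhead is essentially unavoidable in the worst case.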

    16th Scandinavian Symposium and Workshops on Algorithm Theory: SWAT 2018, June 18-20, 2018, Malmö University, Malmö, Sweden


    Limited Liability and the Known Unknown

    Limited liability is a double-edged sword. On the one hand, limited liability may help overcome investors’ risk aversion and facilitate capital formation and economic growth. On the other hand, limited liability is widely believed to contribute to excessive risk-taking and externalization of losses to the public. The externalization problem can be mitigated imperfectly through existing mechanisms such as regulation, mandatory insurance, and minimum capital requirements. These mechanisms would be more effective if information asymmetries between industry and policymakers were reduced. Private businesses typically have better information about industry-specific risks than policymakers. A charge for limited liability entities—resembling a corporate income tax but calibrated to risk levels—could have two salutary effects. First, a well-calibrated limited liability tax could help compensate the public fisc for risks and reduce externalization. Second, a limited liability tax could force private industry actors to reveal information to policymakers and regulators, thereby dynamically improving the public response to externalization risk. Charging firms for limited liability at initially similar rates will lead relatively low-risk firms to forgo limited liability, while relatively high-risk firms will pay for limited liability. Policymakers will then be able to focus on the industries whose firms have self-identified as high risk, and thus develop more finely tailored regulatory responses. Because the benefits of making the proper election are fully internalized by individual firms, whereas the costs of future regulation or limited liability tax changes will be borne collectively by industries, firms will be unlikely to strategically mislead policymakers in electing limited or unlimited liability. By helping to reveal private information and focus regulators’ attention, a limited liability tax could accelerate the pace at which policymakers learn, and therefore, the pace at which regulations improve.

    Conditional Lower Bounds for Dynamic Geometric Measure Problems


    Retroactive Data Structures: A Systematic Mapping (Estruturas de Dados Retroativas: Um Mapeamento Sistemático)

    Given the miniaturization of electronic devices and the amount of data processed by them, the applications developed need to be efficient in terms of memory consumption and temporal complexity. Retroactive data structures are data structures in which it is possible to make a modification in the past and observe the effect of this modification on the timeline. These data structures are used in some geometric problems and in problems related to graphs, such as the shortest-path problem in dynamic graphs. However, implementing these data structures in an optimized way is not trivial. In this scenario, this work presents the results of research on retroactive data structures, comparing the performance of the implementations proposed by several authors against the trivial implementations of these data structures. The research method used was a study of the articles related to retroactive data structures, based on a systematic mapping, together with a performance analysis of these data structures coded in C++. The data structures identified in this study presented better results in terms of space consumption and processing time than their brute-force implementations, but, in some cases, with high constants.
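    As an illustration of the gap between optimized and trivial implementations discussed above, the sketch below (written for this listing, not taken from the mapped papers; the class name is invented) shows a partially retroactive FIFO queue in the style of Demaine, Iacono, and Langerman: because only the present can be queried, the current front is simply the (k+1)-th enqueue in time order, where k is the total number of dequeue operations anywhere in the timeline.

```python
# A hedged sketch (illustration only) of a partially retroactive FIFO queue.
# With a doubly linked list and a pointer to the front, each retroactive update
# is O(1) once its position in the timeline is known; this sketch uses a sorted
# Python list for brevity, so insertions physically cost O(m).
from bisect import insort


class PartiallyRetroactiveQueue:
    def __init__(self):
        self.enqueues = []       # (time, value), sorted by enqueue time
        self.dequeues = 0        # number of dequeue operations in the timeline

    def insert_enqueue(self, t, value):
        """Retroactively add enqueue(value) at time t."""
        insort(self.enqueues, (t, value))
        # Linked-list view: if t precedes the current front, the front pointer
        # moves one node toward the head -- a constant-time pointer update.

    def insert_dequeue(self, t):
        """Retroactively add a dequeue at time t (assumed valid: the queue is
        non-empty at time t)."""
        self.dequeues += 1
        # Linked-list view: the front pointer moves one node toward the tail.

    def front(self):
        """Present-time front of the queue (assumes the queue is non-empty)."""
        return self.enqueues[self.dequeues][1]


if __name__ == "__main__":
    q = PartiallyRetroactiveQueue()
    q.insert_enqueue(10, "a")
    q.insert_enqueue(20, "b")
    q.insert_dequeue(15)         # at time 15 the then-front ("a") was dequeued
    print(q.front())             # "b"
    q.insert_enqueue(5, "z")     # retroactively enqueue "z" before everything
    print(q.front())             # "a": the dequeue at time 15 now removed "z"
```

    A brute-force baseline would instead replay the entire operation log on every update or query, which is the kind of gap the performance comparison above measures.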

    Proactive Regulation of Prosecutors' Offices: Strengthening Disciplinary Committees' Oversight of Prosecutors' Offices Across the United States with ABA Model Rule 5.1

    In the United States, there are currently several mechanisms to deter prosecutorial misconduct, including judicial orders, civil litigation by defendants, enforcement actions by disciplinary authorities, and internal discipline within a prosecutor’s office. Despite these many avenues of oversight, none have successfully prevented misconduct to the degree society demands. Several international legal systems have adopted regulatory frameworks based on the theory of proactive management-based regulation, which mitigates unethical conduct by requiring attorneys to self-assess their internal ethics policies against a rubric of ethics goals set by ethics and disciplinary authorities. While most U.S. jurisdictions have not adopted proactive management-based regulations, attorneys are required by state-enacted versions of American Bar Association (ABA) Model Rule of Professional Conduct 5.1 to maintain internal policies that guarantee ethical practices in their offices. This Note argues that disciplinary committees should meaningfully enforce ABA Model Rule 5.1 by utilizing a proactive regulatory approach. By requiring prosecutors to self-assess and report whether their office policies guarantee ethical prosecutions and prevent misconduct, disciplinary committees can safeguard against prosecutorial misconduct more effectively than current efforts do. Proactive enforcement of Model Rule 5.1 allows disciplinary committees to move beyond a defensive, ex post approach to misconduct and, instead, utilize preventative measures that would demand accountability from historically opaque prosecutors’ offices.

    Spoonerisms: An Analysis of Language Processing in Light of Neurobiology

    Spoonerisms are described as a category of speech errors involving jumbled-up words. The author examines language, the brain, and the correlation between spoonerisms and the neural structures involved in language processing.

    Tax Now or Tax Never: Political Optionality and the Case for Current-Assessment Tax Reform

    The U.S. income tax system is broken. Due to the realization doctrine and taxpayers’ consequent ability to defer taxation of gains, taxpayers can easily minimize or avoid the taxation of investment income, a failure that is magnified many times over when considering the ultra-wealthy. As a result, this small group of taxpayers commands an enormous share of national wealth yet pays paltry taxes relative to the economic income their wealth produces—a predicament that this Article condemns as being economically, politically, and socially harmful. The conventional view among tax law experts has assumed that the problems created by the realization doctrine can be fixed on the back end by adjusting the rules that govern taxation at the time of realization. Specifically, most tax scholars have favored reform proposals that would retain the realization doctrine while aiming to impose taxes in a way that would erase or reduce the financial benefits of deferral. Examples include retrospective capital gains tax reforms, progressive consumption tax reforms, and more incremental reforms such as ending stepped-up basis. However, this Article argues that these future-assessment reform proposals ignore a crucial additional problem of deferral—political optionality. If there is a many-year or longer gap between when either income is earned or wealth is accrued and when tax is assessed, then any number of things can happen in the interim to undermine the eventual assessment and collection of tax. This Article explains three sets of pressures that tend to erode future-assessment reforms over time: (1) policy drift and the need for incremental bolstering of tax reforms, (2) the time value of options, and (3) federal budget rules and related political incentives. In contrast to future-assessment reforms, this Article explains how current-assessment reforms—like wealth tax or accrual-income tax reform proposals—are relatively resistant to these pressures. As this Article demonstrates, both theory and historical experience reveal that future-assessment reforms are fragile and often fail—and that ultra-wealthy taxpayers are well aware of this. Therefore, accounting for the implications of political optionality, only current-assessment reforms are likely to succeed at meaningfully taxing the ultra-wealthy and fixing the personal tax system.

    The existence of species rests on a metastable equilibrium between inbreeding and outbreeding

    Background: Speciation corresponds to the progressive establishment of reproductive barriers between groups of individuals derived from an ancestral stock. Since Darwin did not believe that reproductive barriers could be selected for, he proposed that most events of speciation would occur through a process of separation and divergence, and this point of view is still shared by most evolutionary biologists today.
    Results: I do, however, contend that, if so much speciation occurs, it must result from a process of natural selection, whereby it is advantageous for individuals to reproduce preferentially within a group and reduce their breeding with the rest of the population, leading to a model whereby new species arise not by populations splitting into separate branches, but by small inbreeding groups “budding” from an ancestral stock. This would be driven by several advantages of inbreeding, and mainly by advantageous recessive phenotypes, which could only be retained in the context of inbreeding. Reproductive barriers would thus not arise passively as a consequence of drift in isolated populations, but under the selective pressure of ancestral stocks. Most documented cases of speciation in natural populations appear to fit the model proposed, with more speciation occurring in populations with high inbreeding coefficients, with many recessive characters identified as central to the phenomenon of speciation, and with these recessive mutations expected to be surrounded by patterns of limited genomic diversity.
    Conclusions: Whilst adaptive evolution would correspond to gains of function that would, most of the time, be dominant, the phenomenon of speciation would thus be driven by mutations resulting in the advantageous loss of certain functions, since recessive mutations very often correspond to the inactivation of a gene. A very important further advantage of inbreeding is that it reduces the accumulation of recessive mutations in genomes. A consequence of the model proposed is that the existence of species would correspond to a metastable equilibrium between inbreeding and outbreeding, with excessive inbreeding promoting speciation, and excessive outbreeding resulting in an irreversible accumulation of recessive mutations that could ultimately only lead to extinction.
