
    Multilateral Transparency for Security Markets Through DLT

    Get PDF
    For decades, changing technology and policy choices have worked to fragment securities markets, rendering them so dark that neither the ownership nor the real-time price of securities is generally visible to all parties multilaterally. The policies in the U.S. National Market System and the EU Markets in Financial Instruments Directive, together with universal adoption of the indirect holding system, have pushed Western securities markets into a corner from which escape to full transparency has seemed either impossible or prohibitively expensive. Although the reader has a right to skepticism given the exaggerated promises surrounding blockchain in recent years, we demonstrate in this paper that distributed ledger technology (DLT) has the potential to return fragmented securities markets to multilateral transparency. Leading markets generally lack transparency in two ways that derive from their basic structure: (1) multiple platforms on which trades in the same security are matched have separate bid/ask queues that are not consolidated in real time (fragmented pricing), and (2) high-speed transfers of securities are enabled by placing ownership of the securities in financial institutions, thus preventing transparent ownership (depository or street-name ownership). The distributed nature of DLT allows multiple copies of the same pricing queue to be held simultaneously by a large number of order-matching platforms, curing the problem of fragmented pricing. The same distributed nature would allow the issuers of securities to be nodes in a DLT network, returning control over securities ownership and transfer to those issuers and thus restoring transparent ownership through direct holding with the issuer. A serious objection to DLT is its high latency, with each Bitcoin blockchain transaction taking up to ten minutes. To remedy this, we first propose a private network without cumbersome proof-of-work cryptography.
Second, we introduce into our model the quickly evolving technology of “lightning networks”: layer-two off-chain networks that conduct high-speed transactions with only periodic memorialization in the permanent DLT network. Against the background of existing securities trading and settlement, this Article demonstrates that a DLT network could bring multilateral transparency and thus represent the next step in the evolution of markets in their current configuration.
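The off-chain/on-chain split the abstract describes can be sketched in a few lines: trades match off-chain at high speed, and only netted positions are periodically committed to a hash-chained ledger. All names, quantities, and the one-security netting rule below are invented for illustration, not the Article's design.

```python
import hashlib, json

class Ledger:
    """Append-only chain of records, each linked by the previous hash."""
    def __init__(self):
        self.blocks = []

    def commit(self, record):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps({"prev": prev, "record": record}, sort_keys=True)
        self.blocks.append({"prev": prev, "record": record,
                            "hash": hashlib.sha256(body.encode()).hexdigest()})

class Channel:
    """Off-chain trading channel that nets trades between memorializations."""
    def __init__(self, ledger):
        self.ledger, self.net = ledger, {}

    def trade(self, buyer, seller, qty):
        self.net[buyer] = self.net.get(buyer, 0) + qty
        self.net[seller] = self.net.get(seller, 0) - qty

    def memorialize(self):
        self.ledger.commit(dict(self.net))  # one on-chain write, many trades
        self.net = {}

ledger = Ledger()
channel = Channel(ledger)
channel.trade("A", "B", 100)   # A buys 100 units from B
channel.trade("B", "A", 40)    # B buys 40 back
channel.memorialize()
print(ledger.blocks[-1]["record"])   # netted: {'A': 60, 'B': -60}
```

Many high-frequency trades collapse into a single permanent record, which is how a lightning-style layer can cut latency without giving up the ledger's audit trail.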

    Survey on Combinatorial Register Allocation and Instruction Scheduling

    Full text link
    Register allocation (mapping variables to processor registers or memory) and instruction scheduling (reordering instructions to increase instruction-level parallelism) are essential tasks for generating efficient assembly code in a compiler. In the last three decades, combinatorial optimization has emerged as an alternative to traditional, heuristic algorithms for these two tasks. Combinatorial optimization approaches can deliver optimal solutions according to a model, can precisely capture trade-offs between conflicting decisions, and are more flexible, at the expense of increased compilation time. This paper provides an exhaustive literature review and a classification of combinatorial optimization approaches to register allocation and instruction scheduling, with a focus on the techniques most frequently applied in this context: integer programming, constraint programming, partitioned Boolean quadratic programming, and enumeration. Researchers in compilers and combinatorial optimization can benefit from identifying developments, trends, and challenges in the area; compiler practitioners may discern opportunities and grasp the potential benefit of applying combinatorial optimization.
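Enumeration, the last technique the survey lists, is the easiest to sketch: exhaustively search every dependence-respecting instruction order and keep the one with the smallest makespan. The instructions, dependences, and latencies below are invented, and the single-issue pipeline model is a deliberate simplification.

```python
from itertools import permutations

deps = {                      # instruction -> instructions it depends on
    "load":  set(),
    "add":   {"load"},
    "mul":   {"load"},
    "store": {"add", "mul"},
}
latency = {"load": 3, "add": 1, "mul": 2, "store": 1}

def valid(order):
    """An order is legal if every instruction follows all its dependences."""
    seen = set()
    for ins in order:
        if not deps[ins] <= seen:
            return False
        seen.add(ins)
    return True

def makespan(order):
    """Cycle at which the last instruction finishes (one issue per cycle)."""
    finish, cycle = {}, 0
    for ins in order:
        start = max([cycle] + [finish[d] for d in deps[ins]])
        finish[ins] = start + latency[ins]
        cycle = start + 1
    return max(finish.values())

best = min((o for o in permutations(deps) if valid(o)), key=makespan)
print(best, makespan(best))   # issuing "mul" before "add" hides its latency
```

This is optimal "according to a model" in exactly the survey's sense: the answer is only as good as the latency and issue-width assumptions, and the factorial search is what integer and constraint programming formulations exist to tame.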

    No Harm Done? An Experimental Approach to the Nonidentity Problem

    Get PDF
    Discussions of the non-identity problem presuppose a widely shared intuition that actions or policies that change who comes into existence do not thereby become morally unproblematic. We hypothesize that this intuition isn’t generally shared by the public, which could have widespread implications concerning how to generate support for large-scale, identity-affecting policies relating to matters like climate change. To test this, we ran a version of the well-known dictator game designed to mimic the public’s behavior over identity-affecting choices. We found that the public does seem to behave more selfishly when making identity-affecting choices, which should be concerning. We further hypothesized that one possible mechanism is the notion of harm the public uses in their decision-making, and we find that substantial portions of the population seem to employ distinct notions of harm in their normative thinking. These findings raise puzzling features about the public’s normative thinking that call out for further empirical examination.

    goSLP: Globally Optimized Superword Level Parallelism Framework

    Full text link
    Modern microprocessors are equipped with single instruction multiple data (SIMD) or vector instruction sets which allow compilers to exploit superword level parallelism (SLP), a type of fine-grained parallelism. Current SLP auto-vectorization techniques use heuristics to discover vectorization opportunities in high-level language code. These heuristics are fragile, local and typically only present one vectorization strategy that is either accepted or rejected by a cost model. We present goSLP, a novel SLP auto-vectorization framework which solves the statement packing problem in a pairwise optimal manner. Using an integer linear programming (ILP) solver, goSLP searches the entire space of statement packing opportunities for a whole function at a time, while limiting total compilation time to a few minutes. Furthermore, goSLP optimally solves the vector permutation selection problem using dynamic programming. We implemented goSLP in the LLVM compiler infrastructure, achieving a geometric mean speedup of 7.58% on SPEC2017fp, 2.42% on SPEC2006fp and 4.07% on NAS benchmarks compared to LLVM's existing SLP auto-vectorizer.
    Comment: Published at OOPSLA 201
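The statement-packing decision goSLP optimizes can be illustrated by brute force: enumerate every way to group statements into vector pairs (or leave them scalar) and pick the grouping of minimum total cost. goSLP encodes this search as an ILP over a whole function; the statements, savings, and overheads below are invented for illustration.

```python
stmts = ["a", "b", "c", "d"]
scalar_cost = {s: 2 for s in stmts}
pair_saving = {("a", "b"): 3, ("c", "d"): 3, ("a", "c"): 1}  # others save 0
pack_overhead = 1  # cost of gathering operands into a vector register

def matchings(items):
    """Yield every set of disjoint pairs, allowing unpaired leftovers."""
    if not items:
        yield []
        return
    head, rest = items[0], items[1:]
    for m in matchings(rest):          # leave `head` scalar
        yield m
    for i, other in enumerate(rest):   # or pack `head` with another stmt
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(head, other)] + m

def cost(pairs):
    paired = {s for p in pairs for s in p}
    total = sum(scalar_cost[s] for s in stmts if s not in paired)
    for p in pairs:
        total += sum(scalar_cost[s] for s in p) - pair_saving.get(p, 0) + pack_overhead
    return total

best = min(matchings(stmts), key=cost)
print(best, cost(best))   # packing (a,b) and (c,d) beats all-scalar
```

The point of the ILP formulation is precisely to escape this exponential enumeration while still capturing the global trade-off between vectorization savings and packing overhead, rather than greedily accepting the first profitable pair.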

    Pre-reform Conditions, Intermediate Inputs and Distortions: Solving the Indian Growth Puzzle

    Get PDF
    This paper answers two puzzling questions: why, under a similar set of economic conditions, India's service sector grew while manufacturing did not, and how the economic reforms of the 1990s accelerated productivity growth. The paper provides a novel explanation: two subtle but important distortion-inefficiency mechanisms, which work through distorting the allocation of intermediate inputs, are identified. The interaction of quantitative restrictions and inflexible labor laws resulted in lower-than-optimal materials-per-worker usage. The combination of high inflation and unavailability of credit exacerbated this factor distortion and lowered productivity growth further. Using panel data on Indian industries, I find underutilization of materials compared to labor until recently. This sub-optimal materials-per-worker usage lowers productivity growth. Productivity estimates are negatively related to labor growth and positively related to materials growth. Real wages and labor productivity are negatively related to materials inflation, and this relationship breaks down after the capital market reforms of the 1990s. Since these mechanisms work through intermediate inputs, service sector productivity is not affected as adversely. Estimates show that after the 1990s firms have started oversubstituting materials and capital relative to labor, which can explain the jobless growth in Indian manufacturing.
    Keywords: license quota; labor laws; price change and factor substitution; credit constraints; intermediate inputs; distortions and productivity growth

    Code generation and reorganization in the presence of pipeline constraints

    Full text link

    Achieving network resiliency using sound theoretical and practical methods

    Get PDF
    Computer networks have revolutionized the life of every citizen in our modern interconnected society. The impact of networked systems spans every aspect of our lives, from financial transactions to healthcare and critical services, making these systems an attractive target for malicious entities that aim to make financial or political profit. Specifically, the past decade has witnessed an astounding increase in the number and complexity of sophisticated and targeted attacks, known as advanced persistent threats (APT). Those attacks led to a paradigm shift in the security and reliability communities’ perspective on system design; researchers and government agencies accepted the inevitability of incidents and malicious attacks, and marshaled their efforts into the design of resilient systems. Rather than focusing solely on preventing failures and attacks, resilient systems are able to maintain an acceptable level of operation in the presence of such incidents, and then recover gracefully into normal operation. Alongside prevention, resilient system design focuses on incident detection as well as timely response. Unfortunately, the resiliency efforts of research and industry experts have been hindered by an apparent schism between theory and practice, which allows attackers to maintain the upper hand. This lack of compatibility between the theory and practice of system design is attributed to the following challenges. First, theoreticians often make impractical and unjustifiable assumptions that allow for mathematical tractability while sacrificing accuracy. Second, the security and reliability communities often lack clear definitions of success criteria when comparing different system models and designs. Third, system designers often make implicit or unstated assumptions to favor practicality and ease of design.
Finally, resilient systems are tested in private and isolated environments where validation and reproducibility of the results are not publicly accessible. In this thesis, we set about showing that the proper synergy between theoretical analysis and practical design can enhance the resiliency of networked systems. We illustrate the benefits of this synergy by presenting resiliency approaches that target the inter- and intra-networking levels. At the inter-networking level, we present CPuzzle as a means to protect the transport control protocol (TCP) connection establishment channel from state-exhaustion distributed denial of service (DDoS) attacks. CPuzzle leverages client puzzles to limit the rate at which misbehaving users can establish TCP connections. We modeled the problem of determining the puzzle difficulty as a Stackelberg game and solved for the equilibrium strategy that balances the users’ utilities against CPuzzle’s resilience capabilities. Furthermore, to handle volumetric DDoS attacks, we extend CPuzzle and implement Midgard, a cooperative approach that involves end-users in the process of tolerating and neutralizing DDoS attacks. Midgard is a middlebox that resides at the edge of an Internet service provider’s network and uses client puzzles at the IP level to allocate bandwidth to its users. At the intra-networking level, we present sShield, a game-theoretic network response engine that manipulates a network’s connectivity in response to an attacker who is moving laterally to compromise a high-value asset. To implement such decision-making algorithms, we leverage the recent advances in software-defined networking (SDN) to collect logs and security alerts about the network and implement response actions. However, the programmability offered by SDN comes with an increased chance for design-time bugs that can have drastic consequences on the reliability and security of a networked system.
We therefore introduce BiFrost, an open-source tool that aims to verify safety and security properties of data-plane programs. BiFrost translates data-plane programs into functionally equivalent sequential circuits, and then uses well-established hardware reduction, abstraction, and verification techniques to establish correctness proofs about data-plane programs. By focusing on those four key efforts, CPuzzle, Midgard, sShield, and BiFrost, we believe that this work illustrates the benefits that the synergy between theory and practice can bring into the world of resilient system design. This thesis is an attempt to pave the way for further cooperation and coordination between theoreticians and practitioners, in the hope of designing resilient networked systems.
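The general client-puzzle mechanism CPuzzle builds on is easy to sketch: the server hands a client a challenge and a difficulty, and the client must find a nonce whose hash has a required number of leading zero bits before the server commits connection state. The hash construction and difficulty below are illustrative, not the thesis's exact design.

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    n = int.from_bytes(digest, "big")
    return len(digest) * 8 - n.bit_length()

def solve(challenge: bytes, bits: int) -> int:
    """Client side: brute-force a nonce (expected ~2**bits hash attempts)."""
    nonce = 0
    while True:
        d = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(d) >= bits:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, bits: int) -> bool:
    """Server side: a single hash, so checking is cheap while solving is not."""
    d = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(d) >= bits

nonce = solve(b"demo-challenge", 8)
print(nonce, verify(b"demo-challenge", nonce, 8))
```

Each extra bit of difficulty doubles the client's expected work while the server's verification cost stays one hash; that asymmetric cost is the knob the thesis tunes game-theoretically to throttle misbehaving users.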

    Is There a Puzzle? Compliance with Minority Rights in Turkey (1999-2010)

    Get PDF
    The Helsinki Summit in 1999 represents a turning point for EU–Turkey relations. Turkey gained status as a formal candidate country for the EU, providing a strong incentive to launch democratic reforms for the ultimate reward of membership. Since 2001, the country has launched a number of reforms in minority rights. Many controversial issues, such as denial of the existence of the Kurds, or the lack of property rights granted to non-Muslim minorities in the country, have seen progress. Even though the reforms in minority rights may represent a tremendous step for the Europeanization process of Turkey, the compliance trend in minority rights is neither progressive nor smooth. While there is a consensus within the literature about the acceleration of reforms starting in 2002 and the slowdown by 2005 in almost all policy areas, scholars are divided into two camps regarding the continued slowdown of the reform process versus the revival of the reforms since 2008. I argue, in the present paper, that the compliance process with minority rights in Turkey is puzzling due to the differentiated outcome and the recent revival of behavioral compliance. I aim to shed light on the empirical facts in the least-likely area for reform in the enlargement process. Through a detailed analysis of Turkey's minority-related reform process as an instance of ongoing compliance, the paper contributes to the literature, which is divided on the end result of Europeanization in the country.
    Keywords: Turkey; enlargement; minorities; Europeanization
