
    The Eligibility Definition Used in the Social Security Administration’s Disability Programs Needs to be Changed

    In its October 2003 report on the definition of disability used by the Social Security Administration’s (SSA’s) disability programs [i.e., Social Security Disability Insurance (SSDI) and Supplemental Security Income (SSI) for people with disabilities], the Social Security Advisory Board raises the issue of whether this definition is at odds with the concept of disability embodied in the Americans with Disabilities Act (ADA) and, more importantly, with the aspirations of people with disabilities to be full participants in mainstream social activities and lead fulfilling, productive lives. The Board declares that “the Nation must face up to the contradictions created by the existing definition of disability.” I wholeheartedly agree. Further, I have concluded that we have to make fundamental, conceptual changes to both how we define eligibility for economic security benefits, and how we provide those benefits, if we are ever to fulfill the promise of the ADA. To convince you of that proposition, I will begin by relating a number of facts that paint a very bleak picture – a picture of deterioration in the economic security of the population that the disability programs are intended to serve; a picture of programs that purport to provide economic security, but are themselves financially insecure and subject to cycles of expansion and cuts that undermine their purpose; a picture of programs that are facing their biggest expenditure crisis ever; and a picture of an eligibility determination process that is inefficient and inequitable – one that rations benefits by imposing high application costs on applicants in an arbitrary fashion.
I will then argue that the fundamental reason for this bleak picture is the conceptual definition of eligibility that these programs use – one rooted in a disability paradigm that social scientists, people with disabilities, and, to a substantial extent, the public have rejected as flawed, most emphatically through the passage of the ADA. Current law requires eligibility rules to be based on the premise that disability is medically determinable. That’s wrong because, as the ADA recognizes, a person’s environment matters. I will further argue that programs relying on this eligibility definition must inevitably: reward people if they do not try to help themselves, but not if they do; push the people they serve out of society’s mainstream, fostering a culture of isolation and dependency; relegate many to a lifetime of poverty; and undermine their promise of economic security because of the periodic “reforms” that are necessary to maintain taxpayer support. I conclude by pointing out that to change the conceptual definition for program eligibility, we must also change our whole approach to providing for the economic security of people with disabilities. We need to replace our current “caretaker” approach with one that emphasizes helping people with disabilities help themselves. I will briefly describe features that such a program might require, and point out the most significant challenges we would face in making the transition.

    A framework for security requirements engineering

    This paper presents a framework for security requirements elicitation and analysis, based upon the construction of a context for the system and satisfaction arguments for the security of the system. One starts by enumerating security goals based on the assets in the system. These goals are used to derive security requirements in the form of constraints. The system context is described using a problem-centered notation, and this context is then validated against the security requirements through the construction of a satisfaction argument. The satisfaction argument has two parts: a formal argument that the system can meet its security requirements, and a structured informal argument supporting the assumptions expressed in the formal argument. The construction of the satisfaction argument may fail, revealing either that the security requirement cannot be satisfied in the context or that the context does not contain sufficient information to develop the argument. In this case, designers and architects are asked to provide additional design information to resolve the problems.
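    The goal-to-constraint-to-argument flow described in this abstract can be sketched as a toy data model. This is not the paper's notation; the class names, the context representation, and the clinical example are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """A security requirement: a constraint derived from a security goal,
    plus the informal assumptions the formal argument relies on."""
    constraint: str
    assumptions: list

def satisfaction_argument(context, req):
    """The formal part of the argument succeeds only if every assumption
    is supported by the described system context; otherwise it reveals
    which assumptions are missing."""
    missing = [a for a in req.assumptions if a not in context]
    return (not missing, missing)

# Goal: confidentiality of an asset -> derived constraint -> validation.
req = Requirement(
    constraint="only clinicians may read patient records",
    assumptions=["access control enforced", "clinicians authenticated"],
)
context = {"access control enforced"}   # deliberately incomplete context
holds, missing = satisfaction_argument(context, req)
```

    Here the construction fails (`holds` is false), which in the framework's terms means designers must either supply the missing design information or accept that the requirement cannot be satisfied in this context.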

    Composability in quantum cryptography

    In this article, we review several aspects of composability in the context of quantum cryptography. The first part is devoted to key distribution. We discuss the security criteria that a quantum key distribution protocol must fulfill to allow its safe use within a larger security application (e.g., for secure message transmission). To illustrate the practical use of composability, we show how to generate a continuous key stream by sequentially composing rounds of a quantum key distribution protocol. In the second part, we take a more general point of view, which is necessary for the study of cryptographic situations involving, for example, mutually distrustful parties. We explain the universal composability framework and state the composition theorem, which guarantees that secure protocols can securely be composed into larger applications.
    Comment: 18 pages, 2 figures
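    The sequential composition mentioned in the abstract can be illustrated with a toy simulation. The `qkd_round` function below is a hypothetical stand-in that returns an ideal shared random block (a real QKD round involves quantum channel use and classical post-processing), and the per-round epsilon is an invented illustrative figure:

```python
import secrets

def qkd_round(block_len=16):
    """Stand-in for one round of a QKD protocol: in the ideal
    functionality both parties obtain the same uniform random block."""
    block = secrets.token_bytes(block_len)
    return block, block          # Alice's and Bob's (identical) outputs

def key_stream(rounds, block_len=16, eps_per_round=1e-9):
    """Sequentially compose rounds into a continuous key stream.
    By the composition theorem, the failure probability of the whole
    stream is bounded by the sum of the per-round epsilons."""
    alice, bob = b"", b""
    for _ in range(rounds):
        block_a, block_b = qkd_round(block_len)
        alice += block_a
        bob += block_b
    total_eps = rounds * eps_per_round
    return alice, bob, total_eps

a, b, eps = key_stream(4)   # 4 rounds -> 64-byte stream, eps <= 4e-9
```

    The point of composability is precisely that the `total_eps` bookkeeping is valid even though each round runs inside a larger application.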

    Isogeny-based post-quantum key exchange protocols

    The goal of this project is to understand and analyze supersingular isogeny Diffie-Hellman (SIDH), a post-quantum key exchange protocol whose security rests on the problem of finding an isogeny between supersingular elliptic curves. To that end, we first introduce the reader to cryptography with a focus on key agreement protocols, and motivate the rise of post-quantum cryptography as a necessity in the face of quantum computation. We review some of the known attacks on SIDH and finally study some algorithmic aspects to understand how the protocol can be implemented.
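    SIDH requires supersingular curve arithmetic, but the key-agreement shape it follows is the classic Diffie-Hellman pattern, sketched here over a finite field with illustrative (insecure, made-up) parameters. SIDH replaces the two modular exponentiations with secret isogeny walks, but the commutative "mix my secret with your public value" structure is the same:

```python
import secrets

# Toy Diffie-Hellman over GF(p); parameters are for illustration only.
p = 2**127 - 1        # a Mersenne prime (far too weak for real use)
g = 5                 # public base element

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent

A = pow(g, a, p)      # Alice's public value, sent to Bob
B = pow(g, b, p)      # Bob's public value, sent to Alice

shared_alice = pow(B, a, p)   # Alice computes g^(ab)
shared_bob = pow(A, b, p)     # Bob computes g^(ab)
assert shared_alice == shared_bob
```

    In SIDH, the exponentiation `pow(g, a, p)` is replaced by walking a secret path in a supersingular isogeny graph, and the hardness of recovering `a` from `A` becomes the isogeny-finding problem the abstract refers to.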

    A formal definition and a new security mechanism of physical unclonable functions

    The characteristic novelty of what is generally meant by a "physical unclonable function" (PUF) is precisely defined, in order to supply a firm basis for security evaluations and for the proposal of new security mechanisms. A PUF is defined as a hardware device which implements a physical function with an output value that changes with its argument. A PUF can be clonable, but a secure PUF must be unclonable. This proposed meaning of a PUF is cleanly delineated from the closely related concepts of "conventional unclonable function", "physically obfuscated key", "random-number generator", "controlled PUF" and "strong PUF". The structure of a systematic security evaluation of a PUF, enabled by the proposed formal definition, is outlined. Practically all current and novel physical (but not conventional) unclonable functions are PUFs by our definition. The proposed definition thereby captures the existing intuition about what a PUF is, while remaining flexible enough to encompass further research. In the second part we quantitatively characterize two classes of PUF security mechanisms: the standard one, based on a minimum secret read-out time, and a novel one, based on challenge-dependent erasure of stored information. The new mechanism is shown to allow, in principle, the construction of a "quantum-PUF" that is absolutely secure while not requiring the storage of an exponentially large secret. The construction of a PUF that is mathematically and physically unclonable in principle does not contradict the laws of physics.
    Comment: 13 pages, 1 figure, Conference Proceedings MMB & DFT 2012, Kaiserslautern, Germany
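    The challenge-dependent-erasure mechanism can be caricatured in software. A real PUF derives its device-uniqueness from physical disorder and stores no digital secret at all, so the hidden seed in this hypothetical `ToyPUF` class is purely a modeling device:

```python
import hashlib
import secrets

class ToyPUF:
    """Software caricature of a PUF: a hidden per-device seed makes the
    challenge->response map unique per instance, and each response is
    erased after its first read-out (challenge-dependent erasure)."""

    def __init__(self):
        self._seed = secrets.token_bytes(16)   # models physical disorder
        self._erased = set()

    def respond(self, challenge: bytes) -> bytes:
        if challenge in self._erased:
            raise ValueError("response erased after first read-out")
        self._erased.add(challenge)
        return hashlib.sha256(self._seed + challenge).digest()

puf = ToyPUF()
clone = ToyPUF()          # an independent device: its seed differs
c = b"challenge-1"
r = puf.respond(c)        # first read-out succeeds; a second one raises
```

    The erasure step is what distinguishes this mechanism from the minimum-read-out-time one: an attacker who probes a challenge destroys the very response the legitimate verifier would have needed.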

    Architecture-based Qualitative Risk Analysis for Availability of IT Infrastructures

    An IT risk assessment must deliver the best possible quality of results in a time-effective way. Organisations typically customise general-purpose standard risk assessment methods so that they satisfy the organisation's requirements. In this paper we present the QualTD Model and method, which is meant to be employed together with standard risk assessment methods for the qualitative assessment of availability risks of IT architectures, or parts of them. The QualTD Model is based on our previous quantitative model, but is geared to industrial practice since it does not require quantitative data, which is often too costly to acquire. We validate the model and method in a real-world case by performing a risk assessment on the authentication and authorisation system of a large multinational company and by evaluating the results with respect to the goals of the system's stakeholders. We also review the most popular standard risk assessment methods and analyse which of them can actually be integrated with our QualTD Model.
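    A minimal sketch of architecture-based qualitative availability analysis (not the QualTD Model itself; the dependency graph, component names, and risk levels below are invented): a node's availability risk is the worst qualitative level among its own and those of everything it depends on, directly or transitively:

```python
# Ordered qualitative scale: later entries are worse.
LEVELS = ["low", "medium", "high"]

# Toy IT architecture as a dependency graph (assumed acyclic).
depends_on = {
    "web portal": ["app server", "authn service"],
    "app server": ["database"],
    "authn service": ["directory"],
}

# Qualitative risk assigned to each component on its own.
component_risk = {
    "web portal": "low", "app server": "low", "authn service": "low",
    "database": "medium", "directory": "high",
}

def availability_risk(node):
    """Propagate risk through the architecture: a service inherits the
    worst level among itself and its (transitive) dependencies."""
    own = LEVELS.index(component_risk.get(node, "low"))
    deps = depends_on.get(node, [])
    worst = max([own] + [LEVELS.index(availability_risk(d)) for d in deps])
    return LEVELS[worst]

risk = availability_risk("web portal")   # dominated by the directory
```

    The appeal of a qualitative scheme like this is visible even in the toy: no outage frequencies or repair times are needed, only an ordered scale and the architecture graph.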