
    Paintings and their implicit presuppositions : a preliminary report

    In a series of earlier papers (Social Science Working Papers 350, 355, 357) we have studied the ways in which differences in "implicit presuppositions" (i.e., differences in world views) cause scientists and historians to reach differing conclusions from a consideration of the same evidence. In this paper we show that paintings are characterized by implicit presuppositions similar to those that characterize the written materials -- essays, letters, scientific papers -- we have already studied.

    Paintings and their implicit presuppositions: High Renaissance and Mannerism

    All art historians who are interested in questions of "styles" or "schools" agree in identifying a High Renaissance school of Italian painting. There is, however, a disagreement, which has seemed nonterminating, regarding Mannerism: Is it another distinct school or is it merely a late development of the Renaissance school? We believe that this disagreement can be terminated by distinguishing questions of fact about paintings from questions about the definitions of schools. To this end we have had two representative subsets of paintings--one earlier, one later--rated on four of the dimensions of implicit presuppositions that we have introduced in other Working Papers. When the paintings are scaled in this way, a very distinct profile emerges for the earlier, or Renaissance, paintings. In contrast, the later, or Mannerist, paintings are so heterogeneous that we conclude they are best described as deviations from the Renaissance profile, rather than a separate school. These results are not unimportant--at least for art historians. But they are more important methodologically, inasmuch as the procedures applied here can be used in classifying and distinguishing from one another all kinds of cultural products.
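    As a rough illustration of this procedure (rate each painting on a small set of presupposition dimensions, then compare group profiles), the sketch below computes the mean profile of one group and the spread of a second group around it. The dimension ratings are invented placeholders, not the data used in the study.

```python
import numpy as np

# Hypothetical ratings of paintings on four presupposition dimensions.
# Rows are paintings; columns are the four rated dimensions.
renaissance = np.array([
    [4.1, 3.8, 4.0, 3.9],
    [4.3, 3.7, 4.2, 4.0],
    [4.0, 3.9, 4.1, 3.8],
])
mannerist = np.array([
    [2.1, 4.6, 1.9, 3.0],
    [4.4, 1.8, 3.6, 4.5],
    [1.5, 3.2, 4.8, 2.2],
])

# A "profile" here is simply the mean rating on each dimension.
profile = renaissance.mean(axis=0)

# Tight spread around its own profile for the earlier paintings ...
renaissance_spread = np.linalg.norm(renaissance - profile, axis=1).mean()
# ... versus heterogeneous deviations from that profile for the later ones.
mannerist_deviation = np.linalg.norm(mannerist - profile, axis=1).mean()

print(f"Renaissance spread around own profile: {renaissance_spread:.2f}")
print(f"Mannerist deviation from that profile: {mannerist_deviation:.2f}")
```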

    Non-malleable codes for space-bounded tampering

    Non-malleable codes—introduced by Dziembowski, Pietrzak and Wichs at ICS 2010—are key-less coding schemes in which mauling attempts on an encoding of a given message, w.r.t. some class of tampering adversaries, result in a decoded value that is either identical or unrelated to the original message. Such codes are very useful for protecting arbitrary cryptographic primitives against tampering attacks on memory. Clearly, non-malleability is hopeless if the class of tampering adversaries includes the decoding and encoding algorithms. To circumvent this obstacle, the majority of past research has focused on designing non-malleable codes for various tampering classes, albeit assuming that the adversary is unable to decode. Nonetheless, in many concrete settings this assumption is not realistic.
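    For readers unfamiliar with the notion, the tampering experiment that non-malleability refers to can be sketched abstractly as below. Enc/Dec are placeholder interfaces for an arbitrary key-less coding scheme; the space-bounded construction of the paper itself is not reproduced here.

```python
from typing import Callable, Optional

# Minimal sketch of the tampering experiment underlying non-malleability.
# The special symbol SAME models "the tampered codeword decodes to the
# original message".
SAME = "same*"

def tamper_experiment(enc: Callable[[bytes], bytes],
                      dec: Callable[[bytes], Optional[bytes]],
                      message: bytes,
                      tamper: Callable[[bytes], bytes]) -> object:
    """Encode, let the adversary maul the codeword, then decode."""
    codeword = enc(message)
    mauled = tamper(codeword)   # adversary's tampering function
    decoded = dec(mauled)       # None models the decoder rejecting
    if decoded == message:
        return SAME             # identical to the original message
    return decoded              # otherwise should be unrelated to it

# Non-malleability asks that, for every tampering function in the allowed
# class, the output distribution of this experiment can be simulated
# without knowledge of `message`.
```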

    The chaining lemma and its application

    We present a new information-theoretic result which we call the Chaining Lemma. It considers a so-called "chain" of random variables, defined by a source distribution X(0) with high min-entropy and a number (say, t in total) of arbitrary functions (T1, 
, Tt) which are applied in succession to that source to generate the chain X(1) = T1(X(0)), X(2) = T2(X(1)), 
, X(t) = Tt(X(t−1)). Intuitively, the Chaining Lemma guarantees that, if the chain is not too long, then either (i) the entire chain is "highly random", in that every variable has high min-entropy; or (ii) it is possible to find a point j (1 ≀ j ≀ t) in the chain such that, conditioned on the end of the chain, i.e. X(j), 
, X(t), the preceding part X(0), 
, X(j−1) remains highly random. We think this is an interesting information-theoretic result which is intuitive but nevertheless requires rigorous case analysis to prove. We believe that the above lemma will find applications in cryptography. We give an example of this, namely we show an application of the lemma to protect essentially any cryptographic scheme against memory tampering attacks. We allow several tampering requests; the tampering functions can be arbitrary, but they must be chosen from a bounded-size set of functions that is fixed a priori.
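    Written out, the chain and the dichotomy the lemma establishes look roughly as follows. The thresholds k, k' and the exact conditional min-entropy notion are placeholders for the quantities made precise in the paper.

```latex
% Informal restatement of the chain and the lemma's dichotomy.
\[
  H_\infty\bigl(X^{(0)}\bigr) \ge k, \qquad
  X^{(i)} = T_i\bigl(X^{(i-1)}\bigr) \quad \text{for } i = 1, \dots, t .
\]
If $t$ is not too large, then at least one of the following holds:
\[
  \text{(i)}\quad H_\infty\bigl(X^{(i)}\bigr) \ge k' \quad \text{for every } 0 \le i \le t ;
\]
\[
  \text{(ii)}\quad \exists\, j \in \{1, \dots, t\}:\quad
  \widetilde{H}_\infty\bigl(X^{(0)}, \dots, X^{(j-1)} \,\big|\, X^{(j)}\bigr) \ge k' .
\]
```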

    Astrophysical Implication of Low E(2^+_1) in Neutron-rich Sn Isotopes

    The observation and prediction of unusually depressed first excited 2^+_1 states in even-A neutron-rich isotopes of semi-magic Sn above 132Sn provide motivation for reviewing related problems in nuclear astrophysics. In the present work, the beta-decay rates of the exotic even Sn isotopes (134,136Sn) above the 132Sn core have been calculated as a function of temperature (T). In order to obtain the necessary ft values, B(GT) values corresponding to allowed Gamow-Teller (GT−) beta decay have been calculated theoretically using the shell model. The total decay rate decreases with increasing temperature, as the ground-state population is depleted and the population of excited states with slower decay rates increases. The abundance at each Z value is inversely proportional to the decay constant of the waiting-point nucleus for that particular Z, so the increase in the half-lives of Sn isotopes such as 136Sn might have a substantial impact on r-process nucleosynthesis. Comment: 4th International Workshop on Nuclear Fission and Fission Product Spectroscopy, CEA Cadarache, May 13-16, 2009; 4 pages, 2 figures
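    The temperature dependence described here arises from thermally populating excited states that decay more slowly than the ground state. A minimal sketch of such a Boltzmann-weighted total rate is given below; the level energies, spins and per-state decay constants are invented placeholders, not the shell-model results of the work.

```python
import numpy as np

# Sketch of a thermally averaged beta-decay rate: the total rate is the
# population-weighted average over the states of the decaying nucleus.
K_B = 8.617e-11  # Boltzmann constant in MeV / K

E = np.array([0.0, 0.7, 1.2])       # excitation energies (MeV), hypothetical
J = np.array([0.0, 2.0, 4.0])       # spins, hypothetical
lam = np.array([0.30, 0.05, 0.02])  # per-state decay constants (1/s), hypothetical

def total_rate(T_kelvin: float) -> float:
    """Population-weighted decay constant at temperature T."""
    weights = (2 * J + 1) * np.exp(-E / (K_B * T_kelvin))
    weights /= weights.sum()
    return float(np.sum(weights * lam))

for T9 in (0.5, 1.0, 2.0, 3.0):  # temperature in GK
    lam_T = total_rate(T9 * 1e9)
    # In a simple waiting-point picture the abundance at this Z scales as 1/lambda.
    print(f"T = {T9:.1f} GK  lambda = {lam_T:.3f} /s  relative abundance ~ {1/lam_T:.1f}")
```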

    Non-malleable encryption: simpler, shorter, stronger

    In a seminal paper, Dolev et al. [15] introduced the notion of non-malleable encryption (NM-CPA). This notion is very intriguing since it suffices for many applications of chosen-ciphertext secure encryption (IND-CCA) and yet can be generically built from semantically secure (IND-CPA) encryption, as was shown in the works of Pass et al. [29] and Choi et al. [9], the latter of which provided a black-box construction. In this paper we investigate three questions related to NM-CPA security: 1. Can the rate of the construction by Choi et al. of NM-CPA from IND-CPA be improved? 2. Is it possible to achieve multi-bit NM-CPA security more efficiently from a single-bit NM-CPA scheme than from IND-CPA? 3. Is there a notion stronger than NM-CPA that has natural applications and can be achieved from IND-CPA security? We answer all three questions in the positive. First, we improve the rate in the scheme of Choi et al. by a factor O(λ), where λ is the security parameter. Still, encrypting a message of size O(λ) would require ciphertext and keys of size O(λ²) times that of the IND-CPA scheme, even in our improved scheme. Therefore, we show a more efficient domain extension technique for building a λ-bit NM-CPA scheme from a single-bit NM-CPA scheme, with keys and ciphertext of size O(λ) times that of the one-bit NM-CPA scheme. To achieve our goal, we define and construct a novel type of continuous non-malleable code (NMC), called a secret-state NMC, since we show that standard continuous NMCs are not enough for the natural "encode-then-encrypt-bit-by-bit" approach to work. Finally, we introduce a new security notion for public-key encryption that we dub non-malleability under (chosen-ciphertext) self-destruct attacks (NM-SDA). After showing that NM-SDA is a strict strengthening of NM-CPA and allows for more applications, we nevertheless show that both of our results—(faster) construction from IND-CPA and domain extension from a one-bit scheme—also hold for our stronger NM-SDA security. In particular, the notions of IND-CPA, NM-CPA, and NM-SDA security are all equivalent, lying (plausibly, strictly?) below IND-CCA security.
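    The "encode-then-encrypt-bit-by-bit" idea mentioned above can be sketched abstractly as follows. All primitives are placeholders, and the secret state of the paper's NMC is deliberately not modelled; this is only the shape of the domain extension, not the actual construction.

```python
from typing import Callable, List

# Domain extension sketch: encode the message with a non-malleable code,
# then encrypt each codeword bit under a single-bit NM-CPA scheme.

def extend_encrypt(nmc_encode: Callable[[bytes], List[int]],
                   enc_bit: Callable[[bytes, int], bytes],
                   pk: bytes,
                   message: bytes) -> List[bytes]:
    codeword_bits = nmc_encode(message)              # non-malleable encoding
    return [enc_bit(pk, b) for b in codeword_bits]   # bit-by-bit NM-CPA encryption

def extend_decrypt(nmc_decode: Callable[[List[int]], bytes],
                   dec_bit: Callable[[bytes, bytes], int],
                   sk: bytes,
                   ciphertexts: List[bytes]) -> bytes:
    codeword_bits = [dec_bit(sk, c) for c in ciphertexts]
    # The NMC decoder rejects or outputs an unrelated value if the
    # bit-ciphertexts were mauled.
    return nmc_decode(codeword_bits)
```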

    Suspected sepsis: summary of NICE guidance

    The UK Parliamentary and Health Service Ombudsman inquiry “Time to Act” found failures in the recognition, diagnosis, and early management of those who died from sepsis, which triggered this guidance. In sepsis the body’s immune and coagulation systems are switched on by an infection and cause one or more body organs to malfunction with variable severity. The condition is life threatening. Although most people with infection do not have and will not develop sepsis, non-specific signs and symptoms can lead to late recognition of people who might have sepsis. We would like clinicians to “think sepsis” and recognise symptoms and signs of potential organ failure when they assess someone with infection, in a similar way to thinking “Could this chest pain be cardiac in origin?” This guidance provides a pragmatic approach for patients with infection who are assessed in the community, emergency departments, and hospitals by a wide range of general and specialist healthcare professionals. It includes guidance on assessment of risk factors followed by a detailed structured assessment of potential clinical signs and symptoms of concern. Definitions of sepsis have been developed, but these offer limited explanation of how to confirm or rule out the diagnosis in general clinical settings or in the community. Current mechanisms to diagnose sepsis and guidelines for their use largely apply to critical care settings such as intensive care. We recognised a need for better recognition of sepsis in non-intensive settings and for the diagnosis to be entertained sooner. While sepsis is multifactorial and rarely presents in the same way, the Guideline Development Group considered that use of an easy, structured risk assessment may help clinicians identify those most severely ill who require immediate, potentially lifesaving treatment. This guideline ensures that patients defined as having sepsis by recent definitions are, as a minimum, assessed as moderate to high risk. This guidance is also about appropriate de-escalation if sepsis is unlikely and broad-spectrum antibiotics or hospital admission are not appropriate. This article summarises recommendations from the National Institute for Health and Care Excellence (NICE) guideline for the recognition, diagnosis, and management of sepsis in children and adults. Recommendations and the clinical pathway are available via the NICE website, and the UK Sepsis Trust tools are being revised to align with this guidance. This article is accompanied by an infographic, which displays the NICE guideline as a decision-making tool.

    Simulating Auxiliary Inputs, Revisited

    For any pair (X, Z) of correlated random variables we can think of Z as a randomized function of X. Provided that Z is short, one can make this function computationally efficient by allowing it to be only approximately correct. In folklore this problem is known as simulating auxiliary inputs. This idea of simulating auxiliary information turns out to be a powerful tool in computer science, finding applications in complexity theory, cryptography, pseudorandomness and zero-knowledge. In this paper we revisit this problem, achieving the following results: (a) We discuss and compare the efficiency of known results, finding the flaw in the best known bound claimed in the TCC'14 paper "How to Fake Auxiliary Inputs". (b) We present a novel boosting algorithm for constructing the simulator; our technique essentially fixes the flaw. This boosting proof is of independent interest, as it shows how to handle "negative mass" issues when constructing probability measures in descent algorithms. (c) Our bounds are much better than the bounds known so far. To make the simulator (s, Δ)-indistinguishable we need complexity O(s · 2^{5ℓ} · Δ^{-2}) in time/circuit size, which is better by a factor of Δ^{-2} compared to previous bounds. In particular, with our technique we (finally) get meaningful provable security for the EUROCRYPT'09 leakage-resilient stream cipher instantiated with a standard 256-bit block cipher, like AES-256. Comment: Some typos present in the previous version have been corrected.
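    Roughly, the object being constructed and the claimed complexity can be restated as follows. This is a hedged paraphrase of the abstract, taking (s, Δ)-indistinguishability to mean advantage at most Δ against size-s distinguishers and writing ℓ for the bit length of Z.

```latex
% Simulating auxiliary inputs (informal): given correlated (X, Z) with
% |Z| = \ell bits, find an efficiently computable randomized h such that
% (X, Z) and (X, h(X)) are (s, \epsilon)-indistinguishable:
\[
  \bigl|\Pr[D(X, Z) = 1] - \Pr[D(X, h(X)) = 1]\bigr| \le \epsilon
  \quad \text{for every distinguisher } D \text{ of size at most } s .
\]
% The simulator constructed in the paper has complexity
\[
  O\!\left(s \cdot 2^{5\ell} \cdot \epsilon^{-2}\right)
\]
% in time/circuit size, an improvement by a factor of \epsilon^{-2} over
% previous bounds.
```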

    Efficient public-key cryptography with bounded leakage and tamper resilience

    We revisit the question of constructing public-key encryption and signature schemes with security in the presence of bounded leakage and tampering attacks on memory. For signatures we obtain the first construction in the standard model; for public-key encryption we obtain the first construction free of pairings (avoiding non-interactive zero-knowledge proofs). Our constructions are based on generic building blocks and, as we show, also admit efficient instantiations under fairly standard number-theoretic assumptions. The model of bounded tamper resistance was recently put forward by DamgÄrd et al. (Asiacrypt 2013) as an attractive path to achieving security against arbitrary memory tampering attacks without making hardware assumptions (such as the existence of a protected self-destruct or key-update mechanism), the only restriction being on the number of allowed tampering attempts (which is a parameter of the scheme). This allows one to circumvent known impossibility results for unrestricted tampering (Gennaro et al., TCC 2010), while still capturing realistic tampering attacks.
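    A minimal sketch of the bookkeeping in such a bounded leakage and tampering experiment is given below. The oracle names, budgets and interfaces are illustrative placeholders, not the paper's exact formalisation.

```python
from typing import Callable

class BoundedTamperLeakOracle:
    """Tracks the adversary's bounded tampering and leakage budgets."""

    def __init__(self, secret_key: bytes, tamper_budget: int, leak_budget: int):
        self.sk = secret_key
        self.tamper_budget = tamper_budget   # max number of tampering attempts
        self.leak_budget = leak_budget       # max total leakage in bits
        self.tampered_keys = []

    def tamper(self, f: Callable[[bytes], bytes]) -> None:
        """Apply an arbitrary tampering function to the stored key."""
        if self.tamper_budget <= 0:
            raise PermissionError("tampering budget exhausted")
        self.tamper_budget -= 1
        self.tampered_keys.append(f(self.sk))  # later queries may use this key

    def leak(self, g: Callable[[bytes], bytes], out_bits: int) -> bytes:
        """Leak a bounded-length function of the secret key."""
        if out_bits > self.leak_budget:
            raise PermissionError("leakage budget exhausted")
        self.leak_budget -= out_bits
        return g(self.sk)[: (out_bits + 7) // 8]  # truncate to declared length
```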

    Simon Terrill: Crowd Theory 2004-18, Perspectives, Notes and Comments

    A catalogue publication for a major survey of the monumental Crowd Theory photographs by Melbourne-born, London-based artist Simon Terrill. The publication coincides with an exhibition bringing together all ten Crowd Theory images for the first time, at the Centre for Contemporary Photography, Melbourne. Inside is a range of responses and documents, including images and texts from the time of each event, as well as three newly commissioned essays reflecting on the project.
    • 
