The Interpersonal Hot Hand Fallacy: How Similarity With Previous Winners Increases Subjective Probability of Winning
Organizers of promotional or state lotteries often feature a recent winner in their advertisements, depicted by a photograph and some personal information. We show that potential participants estimate they have higher odds of winning the next drawing when featured previous winners are similar to them (in age, gender, or educational background). This effect, referred to herein as the "Interpersonal Hot Hand" fallacy, in turn increases their likelihood of participation. It disappears when respondents are given objective information on their probability of winning, information that is rarely available in real-world lotteries. We identify moderating variables.
Secure Compilation of Side-Channel Countermeasures: The Case of Cryptographic "Constant-Time"
Software-based countermeasures provide effective mitigation against side-channel attacks, often with minimal efficiency and deployment overheads. Their effectiveness is often amenable to rigorous analysis: specifically, several popular countermeasures can be formalized as information flow policies, and correct implementation of the countermeasures can be verified with state-of-the-art analysis and verification techniques. However, in the absence of further justification, the guarantees only hold for the language (source, target, or intermediate representation) on which the analysis is performed. We consider the problem of preserving side-channel countermeasures by compilation for cryptographic "constant-time", a popular countermeasure against cache-based timing attacks. We present a general method, based on the notion of constant-time-simulation, for proving that a compilation pass preserves the constant-time countermeasure. Using the Coq proof assistant, we verify the correctness of our method and of several representative instantiations.
Provably secure compilation of side-channel countermeasures
Software-based countermeasures provide effective mitigation against side-channel attacks, often with minimal efficiency and deployment overheads. Their effectiveness is often amenable to rigorous analysis: specifically, several popular countermeasures can be formalized as information flow policies, and correct implementation of the countermeasures can be verified with state-of-the-art analysis and verification techniques. However, in the absence of further justification, the guarantees only hold for the language (source, target, or intermediate representation) on which the analysis is performed.
We consider the problem of preserving side-channel countermeasures by compilation, and present a general method for proving that compilation preserves software-based side-channel countermeasures. The crux of our method is the notion of 2-simulation, which adapts to our setting the notion of simulation from compiler verification. Using the Coq proof assistant, we verify the correctness of our method and of several representative instantiations.
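The "constant-time" discipline these abstracts formalize forbids branching and memory accesses that depend on secrets. As an illustrative sketch at the source level (my own example, not from the papers, using Python's standard library rather than a verified toolchain), compare a leaky early-exit comparison with `hmac.compare_digest`, which is designed to take time independent of where the inputs differ:

```python
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    # Early exit leaks the position of the first mismatch via timing.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest runs in time independent of the contents
    # (for equal-length inputs), hiding where the mismatch occurs.
    return hmac.compare_digest(a, b)
```

Source-level care of this kind is exactly what an unverified compiler pass may silently undo, which is the gap the constant-time-simulation proofs are meant to close.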
Model-based Clustering with Missing Not At Random Data
Traditional ways of handling missing values are not designed for the clustering purpose, and they rarely apply to the general case, though frequent in practice, of Missing Not At Random (MNAR) values. This paper proposes to embed MNAR data directly within model-based clustering algorithms. We introduce a mixture model for different types of data (continuous, count, categorical and mixed) to jointly model the data distribution and the MNAR mechanism. Eight different MNAR models are proposed, which may depend on the underlying (unknown) classes and/or the values of the missing variables themselves. We prove the identifiability of the parameters of both the data distribution and the mechanism, whatever the type of data and the mechanism, and propose an EM or Stochastic EM algorithm to estimate them. The code is available at https://github.com/AudeSportisse/Clustering-MNAR. We also prove that MNAR models for which the missingness depends on the class membership have the nice property that statistical inference can be carried out on the data matrix concatenated with the mask by considering a MAR mechanism instead. Finally, we perform empirical evaluations of the proposed sub-models on synthetic data and illustrate the relevance of our method on a medical register, the TraumaBase® dataset.
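The concatenation result mentioned in this abstract, that inference under class-dependent MNAR reduces to MAR on the data matrix augmented with its missingness mask, can be sketched in plain Python (a toy illustration of the data layout only, not the authors' implementation; all names are mine):

```python
def augment_with_mask(rows):
    """Concatenate each row with its missingness mask.

    rows: list of lists, with None marking a missing entry.
    Returns rows where the original values are followed by the mask
    columns (1 = observed, 0 = missing); a clustering algorithm that
    assumes MAR can then be run on this augmented matrix.
    """
    augmented = []
    for row in rows:
        mask = [0 if v is None else 1 for v in row]
        # Keep observed values; 0.0 is an arbitrary placeholder for
        # missing entries in this toy sketch.
        values = [0.0 if v is None else v for v in row]
        augmented.append(values + mask)
    return augmented

data = [[1.2, None, 3.0],
        [None, 0.5, 2.1]]
aug = augment_with_mask(data)
# Each augmented row has twice as many columns: values, then mask.
```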
Model-based clustering with missing not at random data. Missing mechanism
Since the 1990s, model-based clustering has been widely used to classify data. Nowadays, with the increase of available data, missing values are more frequent. We defend the need to embed the missingness mechanism directly within the clustering modeling step. There exist three types of missing data: missing completely at random (MCAR), missing at random (MAR) and missing not at random (MNAR). In all situations, logistic regression is proposed as a natural and flexible candidate model. In this unified context, standard model selection criteria can be used to select between such different missing-data mechanisms, simultaneously with the number of clusters. The practical interest of our proposal is illustrated on data derived from medical studies suffering from many missing data.
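A logistic regression model of missingness, as proposed above, can be caricatured in a few lines (my own sketch with invented parameter names, not the paper's model): the probability that an entry is missing is a sigmoid of a linear score, and the choice of covariate distinguishes the mechanisms.

```python
import math

def missing_prob(x, intercept, slope):
    """Logistic model for the probability that an entry is missing.

    slope == 0 recovers MCAR (missingness independent of the data);
    a nonzero slope on an observed covariate gives MAR, and a nonzero
    slope on the unobserved value x itself gives an MNAR mechanism.
    """
    return 1.0 / (1.0 + math.exp(-(intercept + slope * x)))

# With intercept 0 and slope 0 (MCAR), every entry is missing
# with probability 0.5 regardless of x.
```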
Defined plant extracts can protect human cells against combined xenobiotic effects
Background: Pollutants representative of common environmental contaminants induce intracellular toxicity in human cells, which is generally amplified in combinations. We wanted to test the common pathways of intoxication and detoxification in human embryonic and liver cell lines. We used various pollutants such as Roundup residues, Bisphenol-A and Atrazine, and five precise medicinal plant extracts called Circ1, Dig1, Dig2, Sp1, and Uro1 in order to understand whether specific molecular actions took place or not.

Methods: Kidney and liver are major detoxification organs. We studied the embryonic kidney and hepatic human cell lines E293 and HepG2. The intoxication was induced on the one hand by a formulation of one of the most common herbicides worldwide, Roundup 450 GT+ (glyphosate and specific adjuvants), and on the other hand by a mixture of Bisphenol-A and Atrazine, all found in surface waters, feed and food. The preventive and curative effects of plant extracts were also measured on mitochondrial succinate dehydrogenase activity, on the entry of radiolabelled glyphosate (in Roundup) into cells, and on cytochromes P450 1A2 and 3A4 as well as glutathione S-transferase.

Results: Clear toxicities of the pollutants were observed on both cell lines at very low sub-agricultural dilutions. The prevention of such phenomena took place within 48 h with the plant extracts tested, with success rates ranging between 25-34% for E293 intoxicated by Roundup, and surprisingly up to 71% for HepG2. By contrast, after intoxication, no plant extract was capable of restoring E293 viability within 48 h; however, two medicinal plant combinations did restore the Bisphenol-A/Atrazine-intoxicated HepG2 up to 24-28%. The analysis of underlying mechanisms revealed that plant extracts were not capable of preventing radiolabelled glyphosate from entering cells; however, Dig2 did restore the CYP1A2 activity disrupted by Roundup, had only a mild preventive effect on CYP3A4, and had no effect on glutathione S-transferase.

Conclusions: Environmental pollutants have intracellular effects that can be prevented, or cured in part, by precise medicinal plant extracts in two human cell lines. This appears to be mediated at least in part by cytochromes P450 modulation.
The Last Mile: High-Assurance and High-Speed Cryptographic Implementations
We develop a new approach for building cryptographic implementations. Our approach goes the last mile and delivers assembly code that is provably functionally correct, protected against side-channels, and as efficient as handwritten assembly. We illustrate our approach using ChaCha20Poly1305, one of the two ciphersuites recommended in TLS 1.3, and deliver formally verified vectorized implementations which outperform the fastest non-verified code.

We realize our approach by combining the Jasmin framework, which offers in a single language features of high-level and low-level programming, and the EasyCrypt proof assistant, which offers a versatile verification infrastructure that supports proofs of functional correctness and equivalence checking. Neither of these tools had been used for functional correctness before. Taken together, these infrastructures empower programmers to develop efficient and verified implementations by "game hopping", starting from reference implementations that are proved functionally correct against a specification, and gradually introducing program optimizations that are proved correct by equivalence checking.

We also make several contributions of independent interest, including a new and extensible verified compiler for Jasmin, with a richer memory model and support for vectorized instructions, and a new embedding of Jasmin in EasyCrypt.

This work is partially supported by project ONR N00014-19-1-2292. Manuel Barbosa was supported by grant SFRH/BSAB/143018/2018 awarded by FCT. This work was partially funded by national funds via FCT in the context of project PTDC/CCI-INF/31698/2017.
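The "game hopping" workflow described above, proving an optimized implementation equivalent to a reference one, can be caricatured in Python (my own toy; it has none of the machine-checked rigor of Jasmin/EasyCrypt, where the equivalence is a proof rather than a test):

```python
def add32_ref(a, b):
    """Reference step: 32-bit modular addition, as a spec would state it."""
    return (a + b) % (1 << 32)

def add32_opt(a, b):
    """'Optimized' hop: same operation via a bitmask instead of a modulo."""
    return (a + b) & 0xFFFFFFFF

def equivalent_on(samples):
    # A poor man's equivalence check over sample inputs, standing in for
    # the equivalence proofs that justify each optimization hop.
    return all(add32_ref(a, b) == add32_opt(a, b) for a, b in samples)
```

Each hop in the real framework replaces one such testing step with a proof, so the final vectorized assembly inherits the functional correctness of the reference implementation.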