Termination-insensitive noninterference leaks more than just a bit
Current tools for analysing information flow in programs build upon ideas going back to Denning's work from the 1970s. These systems enforce an imperfect notion of information flow which has become known as termination-insensitive noninterference. Under this version of noninterference, information leaks are permitted if they are transmitted purely by the program's termination behaviour (i.e., whether it terminates or not). This imperfection is the price to pay for having a security condition which is relatively liberal (e.g., allowing while-loops whose termination may depend on the value of a secret) and easy to check. But what exactly is the price? We argue that, in the presence of output, the price is higher than the "one bit" often claimed informally in the literature: effectively, such programs can leak all of their secrets. In this paper we develop a definition of termination-insensitive noninterference suitable for reasoning about programs with outputs. We show that the definition generalises "batch-job" style definitions from the literature and that it is indeed satisfied by a Denning-style program analysis with output. Although more than a bit of information can be leaked by programs satisfying this condition, we show that the best an attacker can do is a brute-force attack, which means that the attacker cannot reliably (in a technical sense) learn the secret in polynomial time in the size of the secret. If we further assume that secrets are uniformly distributed, we show that the advantage the attacker gains when guessing the secret after observing a polynomial amount of output is negligible in the size of the secret.
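The brute-force leak the abstract describes can be sketched in a few lines. This is my own toy illustration, not code from the paper: a Denning-style analysis accepts this program because each public output is independent of the secret's value; the secret only controls when the program stops producing output, i.e., its progress and termination behaviour.

```python
def leaky(secret, fuel):
    """Emit candidates 0, 1, 2, ... and silently diverge on reaching the secret.

    Divergence is modelled with a step budget ('fuel') so the sketch
    terminates; in the real attack the loop simply never ends.
    """
    out, guess = [], 0
    while fuel > 0:
        fuel -= 1
        if guess == secret:
            continue            # spin forever: no further public output
        out.append(guess)       # public output, same value regardless of the secret
        guess += 1
    return out

# An observer who sees outputs 0..k-1 followed by silence learns the secret is k,
# but only after a number of observations linear in the secret's value:
observed = leaky(5, 100)
assert observed == [0, 1, 2, 3, 4]
```

This matches the abstract's bound: the attacker recovers the whole secret, but only by brute force, so a secret of n bits needs on the order of 2^n observed outputs.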
A Semantic Hierarchy for Erasure Policies
We consider the problem of logical data erasure, contrasting with physical
erasure in the same way that end-to-end information flow control contrasts with
access control. We present a semantic hierarchy for erasure policies, using a
possibilistic knowledge-based semantics to define policy satisfaction such that
there is an intuitively clear upper bound on what information an erasure policy
permits to be retained. Our hierarchy allows a rich class of erasure policies
to be expressed, taking account of the power of the attacker, how much
information may be retained, and under what conditions it may be retained.
While our main aim is to specify erasure policies, the semantic framework
allows quite general information-flow policies to be formulated for a variety
of semantic notions of secrecy.
Comment: 18 pages, ICISS 201
Non-interference for deterministic interactive programs
We consider the problem of defining an appropriate notion of non-interference (NI) for deterministic interactive programs. Previous work on the security of interactive programs by O'Neill, Clarkson and Chong (CSFW 2006) builds on earlier ideas due to Wittbold and Johnson (Symposium on Security and Privacy 1990), and argues for a notion of NI defined in terms of strategies modelling the behaviour of users. We show that, for deterministic interactive programs, it is not necessary to consider strategies and that a simple stream model of the users' behaviour is sufficient. The key technical result is that, for deterministic programs, stream-based NI implies the apparently more general strategy-based NI (in fact, we consider a wider class of strategies than those of O'Neill et al.). We give our results in terms of a simple notion of Input-Output Labelled Transition System, thus allowing application of the results to a large class of deterministic interactive programming languages.
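The stream-versus-strategy reduction can be made concrete with a toy model of my own (not the paper's formalism). A strategy is a function from the output history seen so far to the next input; because the program below is deterministic, every strategy induces a unique input stream, and replaying that stream reproduces exactly the same output trace:

```python
def run(strategy, rounds):
    """Drive a fixed deterministic program ('output the running sum, then read')
    with a strategy: a function from the output history to the next input."""
    outputs, total = [], 0
    for _ in range(rounds):
        outputs.append(total)               # output depends only on inputs so far
        total += strategy(tuple(outputs))   # adaptive user picks the next input
    return outputs

def induced_stream(strategy, rounds):
    """The unique input stream a strategy produces against this program."""
    outputs, total, stream = [], 0, []
    for _ in range(rounds):
        outputs.append(total)
        x = strategy(tuple(outputs))
        stream.append(x)
        total += x
    return stream

def run_stream(stream):
    """The same program driven by a fixed (non-adaptive) input stream."""
    outputs, total = [], 0
    for x in stream:
        outputs.append(total)
        total += x
    return outputs

strat = lambda hist: len(hist)   # an adaptive user: next input = outputs seen so far
assert run(strat, 4) == run_stream(induced_stream(strat, 4))
```

Determinism is what makes the induced stream well defined; for nondeterministic programs the strategy's choices can depend on resolved nondeterminism, which is why the general case needs strategies.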
The Meaning of Memory Safety
We give a rigorous characterization of what it means for a programming
language to be memory safe, capturing the intuition that memory safety supports
local reasoning about state. We formalize this principle in two ways. First, we
show how a small memory-safe language validates a noninterference property: a
program can neither affect nor be affected by unreachable parts of the state.
Second, we extend separation logic, a proof system for heap-manipulating
programs, with a memory-safe variant of its frame rule. The new rule is
stronger because it applies even when parts of the program are buggy or
malicious, but also weaker because it demands a stricter form of separation
between parts of the program state. We also consider a number of pragmatically
motivated variations on memory safety and the reasoning principles they
support. As an application of our characterization, we evaluate the security of
a previously proposed dynamic monitor for memory safety of heap-allocated data.
Comment: POST'18 final version
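The noninterference property stated above (a program can neither affect nor be affected by unreachable parts of the state) rests on a notion of heap reachability. As a small sketch of my own, not the paper's formalism, a heap can be modelled as a map from addresses to pointer fields, with the reachable region computed from a root set:

```python
def reachable(heap, roots):
    """Addresses reachable from the root set by following pointer fields.

    heap: dict mapping an address to the list of addresses it points to.
    A memory-safe program holding only the roots can neither read nor
    write any address outside this set.
    """
    seen, frontier = set(), list(roots)
    while frontier:
        a = frontier.pop()
        if a in seen or a not in heap:
            continue
        seen.add(a)
        frontier.extend(heap[a])
    return seen

heap = {1: [2], 2: [], 3: [4], 4: [3]}   # cells 3 and 4 form an unreachable cycle
assert reachable(heap, [1]) == {1, 2}
```

In this model, local reasoning is exactly the observation that any program run with roots `[1]` leaves cells 3 and 4 untouched, which is the shape of the strengthened frame rule the abstract mentions.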
Clinical and prognostic features of patients with peripheral arterial disease of the lower extremities
Purpose. To study the frequency of acute coronary syndrome (ACS) and vascular complications, and to identify predictors of unfavorable cardiovascular prognosis in patients with peripheral arterial disease (PAD), depending on the stage of PAD, during a one-year prospective study. Material and methods. The prospective study included 78 male patients with PAD and 60 male patients with acute myocardial infarction (a control group without symptoms of PAD), aged 38-70 years. Patients with PAD had a high frequency of cardiovascular risk factors: arterial hypertension (AH; 88.0%), coronary artery disease (83.0%), smoking and abdominal obesity (77.0%), hypercholesterolemia (75.0%), and low physical activity (72.0%). Results. During the observation period, vascular complications and ACS were registered in 28.0% and 42.5% of patients with PAD, with a higher frequency of acute myocardial infarction and myocardial revascularization operations than in the control group. Important predictors of unfavorable cardiovascular prognosis among patients with PAD were: severe stages of PAD (3rd and 4th), stable angina of functional class 3, uncontrolled AH (blood pressure above 150/90 mm Hg) and type 2 diabetes mellitus, permanent atrial fibrillation, high smoking intensity, a high level of high-sensitivity C-reactive protein, dyslipidemia, and increased platelet aggregation.
Aim of the work: to study the frequency of acute coronary syndrome (ACS) and vascular complications, and to identify predictors of unfavorable cardiovascular prognosis in patients with obliterating atherosclerosis of the lower-extremity arteries (PAD), based on one year of prospective follow-up. Material and methods. The study included 75 male patients with stage 2-4 PAD, aged 55±7.1 years, with a disease duration of 6±4.5 years. The control group consisted of 60 patients in whom coronary artery disease first manifested as ST-segment-elevation acute myocardial infarction (mean age 55±6.2 years) but who had no clinical manifestations of PAD. Among patients with PAD there was a high frequency of arterial hypertension (AH; 88.0%) and coronary artery disease (83.0%), of smoking and abdominal obesity (77.0%), and of hypercholesterolemia and low physical activity (75.0% and 72.0% of cases, respectively). Results. During the observation period, vascular complications and ACS were registered in 28.0% and 42.5% of patients with PAD; myocardial infarction was three times, and coronary artery bypass surgery 1.5 times, more frequent than in the control group (42.5% vs. 14.0% and 12.0% vs. 8.0%, respectively), with predictors of unfavorable prognosis including a high level of high-sensitivity C-reactive protein (>3 mg/L), dyslipidemia, and increased ADP-induced platelet aggregation.
Lazy Programs Leak Secrets
To preserve confidentiality, information-flow control (IFC) restricts how untrusted code handles secret data. While promising, IFC systems are not perfect; they can still leak sensitive information via covert channels. In this work, we describe a novel exploit of lazy evaluation to reveal secrets in IFC systems. Specifically, we show that lazy evaluation might transport information through the internal timing covert channel, a channel present in systems with concurrency and shared resources. We illustrate our claim with an attack for LIO, a concurrent IFC system for Haskell. We propose a countermeasure based on restricting the implicit sharing caused by lazy evaluation.
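The shape of the internal timing channel can be shown with a deterministic toy simulation, mine rather than the paper's LIO attack: a lazy value is a thunk that is expensive the first time it is forced and cached afterwards. If a secret-handling thread forces a shared thunk, a later public thread's forcing becomes cheap, which shifts the order of public outputs under a shared scheduler.

```python
class Thunk:
    """A shared lazy value: costly on first force, cached (free) afterwards."""
    def __init__(self, cost, value):
        self.cost, self.value, self.forced = cost, value, False

    def force(self):
        steps = 0 if self.forced else self.cost
        self.forced = True
        return steps, self.value

def interleaving(secret):
    """Return the order in which two public outputs appear.

    The high thread forces the shared thunk only when secret is True, so the
    cost (and hence the timing) of the low thread's force depends on the
    secret, even though no secret *value* ever flows to a public output.
    """
    shared = Thunk(3, 42)
    if secret:
        shared.force()          # high thread evaluates under laziness, caching it
    steps, _ = shared.force()   # low thread 1: force the thunk, then output "A"
    time_a = steps + 1
    time_b = 2                  # low thread 2: outputs "B" at a fixed time
    return "AB" if time_a <= time_b else "BA"

assert interleaving(True) == "AB"    # cached: "A" appears first
assert interleaving(False) == "BA"   # must evaluate: "A" is delayed
```

The countermeasure the abstract proposes corresponds, in this model, to denying the low thread the cached result, so its cost no longer depends on what the high thread forced.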
Mathematical description of the grain cleaning process on the experimental facility
Machines with pneumatic systems are used to clean grain of light impurities and dust. Modern grain-cleaning machines for preliminary cleaning with a pneumatic system do not fully meet the increasing performance requirements of modern agricultural production. The article presents the principle of operation and a mathematical description from which the main parameters of the installation can be determined. An analytical research method was used, based on the analysis of differential equations describing the processes of cleaning grain of light impurities and dust. To develop the mathematical model, the Lagrange method of constructing mathematical models was applied. On the basis of this model, a number of equations were constructed that characterize: the transition of the potential energy of the grain mass into the kinetic energy of the grain flow; the dynamic equation of the equilibrium fall of the grain flow; the volume flow rate of the air flow in the intergranular space; the pressure loss of the air flow passing through the blinds; and the average falling speed of the grains. Taking these equations into account, a mathematical model is built in the form of a system of equations. This model makes it possible to determine the main design parameters of the new installation: the geometric dimensions of the working area, as well as the energy costs of the grain-cleaning process. Introducing the developed installation at elevator receiving points makes it possible to increase the efficiency of cleaning grain of light impurities and dust at low unit costs.
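The first relation in the list above, the conversion of the grain mass's potential energy into the kinetic energy of the flow, reduces to the familiar free-fall balance m·g·h = m·v²/2. The snippet below is my own illustration of that single step, not the article's full system of equations:

```python
import math

def inflow_speed(height_m, g=9.81):
    """Grain speed after falling height_m metres, from m*g*h = m*v**2/2.

    The mass cancels, giving v = sqrt(2*g*h); g defaults to standard gravity.
    """
    return math.sqrt(2 * g * height_m)

# A 0.5 m drop inside the working area gives roughly 3.13 m/s:
assert abs(inflow_speed(0.5) - 3.13) < 0.01
```

The remaining equations of the model (equilibrium fall, intergranular air flow rate, pressure loss across the blinds) layer aerodynamic drag terms on top of this balance, which is why the full description needs a system of equations rather than a single formula.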
Very static enforcement of dynamic policies
Security policies are naturally dynamic. Reflecting this, there has been a growing interest in studying information-flow properties which change during program execution, including concepts such as declassification, revocation, and role-change.
A static verification of a dynamic information flow policy, from a semantic perspective, should only need to concern itself with two things: 1) the dependencies between data in a program, and 2) whether those dependencies are consistent with the intended flow policies as they change over time. In this paper we provide a formal ground for this intuition. We present a straightforward extension to the principal flow-sensitive type system introduced by Hunt and Sands (POPL’06, ESOP’11) to infer both end-to-end dependencies and dependencies at intermediate points in a program. This allows typings to be applied to verification of both static and dynamic policies. Our extension preserves the principal type system’s distinguishing feature, that type inference is independent of the policy to be enforced: a single, generic dependency analysis (typing) can be used to verify many different dynamic policies of a given program, thus achieving a clean separation between (1) and (2).
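The separation between (1) dependency inference and (2) policy checking can be sketched with a drastically simplified toy of my own (straight-line assignments only, nothing like the full flow-sensitive type system of Hunt and Sands): the analysis computes, once, which initial variables each variable may depend on, and any number of policies are then checked against that single typing.

```python
def analyse(program, variables):
    """Flow-sensitive dependency analysis for straight-line code.

    program: list of assignments (target, [source variables]),
             e.g. ("x", ["y", "z"]) for x := y + z.
    Returns, for each variable, the set of *initial* variables it may
    depend on; the policy plays no role here.
    """
    deps = {v: {v} for v in variables}       # each variable starts as itself
    for target, sources in program:
        new = set()
        for s in sources:
            new |= deps[s]                   # dependencies flow through assignment
        deps[target] = new
    return deps

def allows(policy, deps, sink):
    """Check one policy against an already-computed typing: the sink may
    depend only on the variables the policy permits."""
    return deps[sink] <= policy[sink]

prog = [("t", ["secret"]), ("out", ["t", "pub"])]
deps = analyse(prog, ["secret", "pub", "t", "out"])
assert deps["out"] == {"secret", "pub"}

# The same typing serves many policies; only the cheap check is re-run:
assert not allows({"out": {"pub"}}, deps, "out")           # secret must not reach out
assert allows({"out": {"pub", "secret"}}, deps, "out")     # declassified: permitted
```

The two `allows` calls are the point: the expensive analysis ran once, and a policy change (here, a declassification) required only re-running the policy check, which is the policy-independence property the abstract highlights.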
We also make contributions to the foundations of dynamic information flow. Arguably, the most compelling semantic definitions for dynamic security conditions in the literature are phrased in the so-called knowledge-based style. We contribute a new definition of knowledge-based progress-insensitive security for dynamic policies. We show that the new definition avoids anomalies of previous definitions and enjoys a simple and useful characterisation as a two-run style property.