184 research outputs found
The PER model of abstract non-interference
Abstract. In this paper, we study the relationship between two models of secure information flow: the PER model (which uses equivalence relations) and the abstract non-interference model (which uses upper closure operators). We embed the lattice of equivalence relations into the lattice of closures, re-interpreting abstract non-interference over the lattice of equivalence relations. For narrow abstract non-interference, we show that the relational interpretation is strictly less general. The relational presentation of abstract non-interference leads to a simplified construction of the most concrete harmless attacker. Moreover, the PER model of abstract non-interference allows us to derive unconstrained attacker models, which do not necessarily either observe all public information or ignore all private information. Finally, we show how abstract domain completeness can be used for enforcing the PER model of abstract non-interference.
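The embedding of equivalence relations into closure operators mentioned above can be sketched by a standard construction (the paper's exact definitions may differ):

```latex
% An equivalence relation R on a set S induces an upper closure operator
% \rho_R on the powerset lattice \wp(S), which closes a set under R:
\rho_R(X) = \{\, s \in S \mid \exists x \in X .\ s \mathrel{R} x \,\}
% \rho_R is monotone, extensive (X \subseteq \rho_R(X)), and idempotent,
% and the map R \mapsto \rho_R is injective, embedding the lattice of
% equivalence relations into the lattice of upper closure operators on \wp(S).
```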
The posterity of Zadeh's 50-year-old paper: A retrospective in 101 Easy Pieces – and a Few More
This article was commissioned by the 22nd IEEE International Conference of Fuzzy Systems (FUZZ-IEEE) to celebrate the 50th Anniversary of Lotfi Zadeh's seminal 1965 paper on fuzzy sets. In addition to Lotfi's original paper, this note itemizes 100 citations of books and papers deemed “important (significant, seminal, etc.)” by 20 of the 21 living IEEE CIS Fuzzy Systems pioneers. Each of the 20 contributors supplied 5 citations, and Lotfi's paper makes the overall list a tidy 101, as in “Fuzzy Sets 101”. This note is not a survey in any real sense of the word, but the contributors did offer short remarks to indicate the reason for inclusion (e.g., historical, topical, seminal, etc.) of each citation. Citation statistics are easy to find and notoriously erroneous, so we refrain from reporting them - almost. The exception is that according to Google Scholar on April 9, 2015, Lotfi's 1965 paper has been cited 55,479 times.
A survey of statistical network models
Networks are ubiquitous in science and have become a focal point for
discussion in everyday life. Formal statistical models for the analysis of
network data have emerged as a major topic of interest in diverse areas of
study, and most of these involve a form of graphical representation.
Probability models on graphs date back to 1959. Along with empirical studies in
social psychology and sociology from the 1960s, these early works generated an
active network community and a substantial literature in the 1970s. This effort
moved into the statistical literature in the late 1970s and 1980s, and the past
decade has seen a burgeoning network literature in statistical physics and
computer science. The growth of the World Wide Web and the emergence of online
networking communities such as Facebook, MySpace, and LinkedIn, and a host of
more specialized professional network communities has intensified interest in
the study of networks and network data. Our goal in this review is to provide
the reader with an entry point to this burgeoning literature. We begin with an
overview of the historical development of statistical network modeling and then
we introduce a number of examples that have been studied in the network
literature. Our subsequent discussion focuses on a number of prominent static
and dynamic network models and their interconnections. We emphasize formal
model descriptions, and pay special attention to the interpretation of
parameters and their estimation. We end with a description of some open
problems and challenges for machine learning and statistics.
Comment: 96 pages, 14 figures, 333 references
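The 1959 probability models on graphs mentioned above are the Erdős–Rényi random graphs; a minimal sketch of sampling from the G(n, p) model (illustrative only, not code from the survey):

```python
import random

def sample_gnp(n, p, seed=None):
    """Sample an Erdos-Renyi G(n, p) random graph: each of the
    n*(n-1)/2 possible undirected edges on n vertices is included
    independently with probability p. Returns the edge list."""
    rng = random.Random(seed)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                edges.append((i, j))
    return edges

edges = sample_gnp(100, 0.05, seed=1)
# The expected number of edges is p * n*(n-1)/2 = 0.05 * 4950 = 247.5.
print(len(edges))
```

Richer static and dynamic models surveyed in the paper (e.g. exponential random graph and stochastic block models) refine this baseline by making edge probabilities depend on node and graph covariates.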
Transparent fault tolerance for scalable functional computation
Reliability is set to become a major concern on emergent large-scale architectures. While there are many parallel languages, and indeed many parallel functional languages, very few address reliability. The notable exception is the widely emulated Erlang distributed actor model that provides explicit supervision and recovery of actors with isolated state.
We investigate scalable transparent fault tolerant functional computation with automatic supervision and recovery of tasks. We do so by developing HdpH-RS, a variant of the Haskell distributed parallel Haskell (HdpH) DSL with Reliable Scheduling. Extending the distributed work stealing protocol of HdpH for task supervision and recovery is challenging. To eliminate elusive concurrency bugs, we validate the HdpH-RS work stealing protocol using the SPIN model checker.
HdpH-RS differs from the actor model in that its principal entities are tasks, i.e. independent stateless computations, rather than isolated stateful actors. Thanks to statelessness, fault recovery can be performed automatically and entirely hidden in the HdpH-RS runtime system. Statelessness is also key for proving a crucial property of the semantics of HdpH-RS: fault recovery does not change the result of the program, akin to deterministic parallelism.
HdpH-RS provides a simple distributed fork/join-style programming model, with minimal exposure of fault tolerance at the language level, and a library of higher level abstractions such as algorithmic skeletons. In fact, the HdpH-RS DSL is exactly the same as the HdpH DSL, hence users can opt in or out of fault tolerant execution without any refactoring.
Computations in HdpH-RS are always as reliable as the root node, no matter how many nodes and cores are actually used. We benchmark HdpH-RS on conventional clusters and an HPC platform: all benchmarks survive Chaos Monkey random fault injection; the system scales well, e.g. up to 1,400 cores on the HPC platform; and reliability and recovery overheads are consistently low even at scale.
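The key idea above, that stateless deterministic tasks can be transparently re-executed after a node failure without changing the program's result, can be illustrated with a generic sketch. This is written in Python for concreteness, not in the HdpH-RS Haskell DSL; the scheduler, names, and failure model are invented for illustration:

```python
import random

class NodeFailure(Exception):
    """Simulated loss of the node a task was scheduled on."""

def flaky_worker(task, fail_prob, rng):
    # Simulate a node that may die mid-task. The task itself is a
    # pure, stateless function, so re-running it is always safe.
    if rng.random() < fail_prob:
        raise NodeFailure
    return task()

def reliable_run(tasks, fail_prob=0.3, seed=0):
    """Run stateless tasks, transparently re-executing any task whose
    node 'failed'. Because tasks are deterministic and stateless, the
    results are identical to those of a failure-free run."""
    rng = random.Random(seed)
    results = []
    for task in tasks:
        while True:
            try:
                results.append(flaky_worker(task, fail_prob, rng))
                break
            except NodeFailure:
                continue  # reschedule: supervision recovers the task
    return results

tasks = [lambda i=i: i * i for i in range(10)]
print(reliable_run(tasks))  # same as [i*i for i in range(10)], despite injected failures
```

With stateful actors, by contrast, a lost actor's state would have to be checkpointed or rebuilt before recovery, which is why HdpH-RS's statelessness lets recovery stay entirely inside the runtime system.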
A Survey on Graph Database Management Techniques for Huge Unstructured Data
Data analysis, data management, and big data have played a major role in both social and business contexts over the last decade, and the graph database is currently an active and trending research topic. A graph database is preferred for dealing with the dynamic and complex relationships in connected data, and offers better results. Every data element is represented as a node; for example, on a social media site, a person is represented as a node with properties such as name, age, likes, and dislikes, and nodes are connected to one another by relationships via edges. Graph databases are expected to be beneficial for business and for social networking sites that generate huge volumes of unstructured data, since such Big Data requires proper and efficient computational techniques. This paper reviews the existing graph data computational techniques and related research work, in order to outline future research directions in graph database management.
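The node/edge representation described above can be sketched as a minimal in-memory property graph (illustrative only; the class and method names are invented, not the API of any real graph database):

```python
class PropertyGraph:
    """Minimal in-memory property graph: nodes carry key/value
    properties, and edges are labelled relationships between nodes."""
    def __init__(self):
        self.nodes = {}   # node_id -> dict of properties
        self.edges = []   # (src_id, label, dst_id) triples

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, label, dst):
        self.edges.append((src, label, dst))

    def neighbours(self, node_id, label=None):
        # Follow outgoing edges, optionally filtered by relationship label.
        return [dst for (src, lab, dst) in self.edges
                if src == node_id and (label is None or lab == label)]

# A person as a node with properties, connected to another via an edge.
g = PropertyGraph()
g.add_node("alice", name="Alice", age=30, likes=["hiking"])
g.add_node("bob", name="Bob", age=27)
g.add_edge("alice", "FRIEND_OF", "bob")
print(g.neighbours("alice", "FRIEND_OF"))  # ['bob']
```

Production graph databases add indexing, a query language, and persistence on top of this basic model, which is what makes them efficient for traversing the dense relationship structure of Big Data workloads.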
Alloy experiments for a least privilege separation kernel
A least privilege separation kernel (LPSK) is part of a long-term project known as the Trusted Computing Exemplar (TCX). A major objective of the TCX is the creation of an open framework for high assurance development. A relatively new specification tool called Alloy has shown potential for high assurance development. We implemented the formal security policy model (FSPM) and the formal top level specification (FTLS) of the TCX LPSK in Alloy and concluded that Alloy has few limitations and is more than sufficiently useful, as measured by utility and ease of use, to include in the TCX framework.
http://archive.org/details/alloyexperiments109453390
Civilian author. Approved for public release; distribution is unlimited.
The Mathematical Abstraction Theory, The Fundamentals for Knowledge Representation and Self-Evolving Autonomous Problem Solving Systems
The intention of the present study is to establish the mathematical fundamentals for automated problem solving, essentially targeted at robotics, by approaching the task universal-algebraically: knowledge is introduced as realizations of generalized free-algebra-based nets, graphs with gluing forms connecting in- and out-edges to nodes. Nets are made to undergo transformations at the conceptual level by type-wise differentiated intervening net rewriting systems that disperse problems into abstract parts, with matching determined by substitution relations. The resulting sets of conceptual nets constitute congruence classes. New results are obtained in the construction of problem solving systems, where solution algorithms are derived in parallel with other candidates applied to the same net classes. By applying parallel transducer paths consisting of net rewriting systems to net classes, congruent quotient algebras are established, and the manifested class rewriting comprises all solution candidates whenever the produced nets lie in the anticipated languages, i.e. are accepted by net automata. Furthermore, new solutions are added to the set of already known ones, thus expanding the solving power over time. Special attention is given to universal abstraction and its generation by net block homomorphisms, and consequently to multiple-order solving systems and the overall decidability of the set of solutions. Via an overlapping presentation of nets, a new abstraction relation among nets is formulated, together with a consequent alphabetical net block renetting system that is proportional, in operational power, to normal forms of renetting systems. A new structure in self-evolving problem solving is established via saturation by groups of equivalence relations and iterative closures of generated quotient transducer algebras over the whole evolution.
Comment: This article is part of my thesis, providing the unity of both knowledge representation and self-evolution in autonomous problem solving mathematical systems, and for that reason draws heavily on my previous work arxiv:1305.563