First-Order Model-Checking in Random Graphs and Complex Networks
Complex networks are everywhere. They appear for example in the form of
biological networks, social networks, or computer networks and have been
studied extensively. Efficient algorithms to solve problems on complex networks
play a central role in today's society. Algorithmic meta-theorems show that
many problems can be solved efficiently. Since logic is a powerful tool to
model problems, it has been used to obtain very general meta-theorems. In this
work, we consider all problems definable in first-order logic and analyze which
properties of complex networks allow them to be solved efficiently.
The mathematical tools used to describe complex networks are random graph models.
We define a property of random graph models called α-power-law-boundedness.
Roughly speaking, a random graph is α-power-law-bounded if it does not admit
strong clustering and its degree sequence is bounded by a power-law distribution
with exponent at least α (i.e., the fraction of vertices with degree k is
roughly proportional to k^(-α)).
We solve the first-order model-checking problem (parameterized by the length
of the formula) in almost linear FPT time on random graph models satisfying
this property with α ≥ 3. This means in particular that one can solve
every problem expressible in first-order logic in almost linear expected time
on these random graph models. This includes for example preferential attachment
graphs, Chung-Lu graphs, configuration graphs, and sparse Erdős-Rényi
graphs. Our results match known hardness results and generalize previous
tractability results on this topic.
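The degree bound in the definition above can be written out schematically; the exponent symbol α and the constant C are notational assumptions, since the abstract states only the power-law bound in words:

```latex
% Schematic degree bound for an \alpha-power-law-bounded random graph:
% the expected fraction of degree-k vertices decays like k^{-\alpha}.
\frac{\bigl|\{\, v : \deg(v) = k \,\}\bigr|}{n}
  \;\lesssim\; C \cdot k^{-\alpha}
  \qquad \text{for all } k \ge 1 .
```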
Incrementalizing Lattice-Based Program Analyses in Datalog
Program analyses detect errors in code, but when code changes frequently, as in an IDE, repeated re-analysis from scratch is unnecessary: it leads to poor performance unless we give up on precision and recall. Incremental program analysis promises to deliver fast feedback without giving up on precision or recall by deriving a new analysis result from the previous one. However, Datalog and other existing frameworks for incremental program analysis are limited in expressive power: they only support the powerset lattice as a representation of analysis results, whereas many practically relevant analyses require custom lattices and aggregation over lattice values. To address this limitation, we present a novel algorithm called DRedL that supports incremental maintenance of recursive lattice-value aggregation in Datalog. The key insight of DRedL is to dynamically recognize increasing replacements of old lattice values by new ones, which allows us to avoid the expensive deletion of the old value. We integrate DRedL into the analysis framework IncA and use IncA to realize incremental implementations of strong-update points-to analysis and string analysis for Java. As our performance evaluation demonstrates, both analyses react to code changes within milliseconds.
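The key insight, recognizing when a new lattice value subsumes the old one so that the expensive deletion phase can be skipped, can be sketched as follows; the class, the max-based number lattice, and all names are illustrative assumptions, not DRedL's actual data structures:

```python
# Minimal sketch of the "increasing replacement" idea behind DRedL
# (names and lattice are illustrative assumptions, not the paper's API).
# Lattice: numbers under max, a simple join-semilattice.

def leq(a, b):
    """Lattice order: here plain <= on numbers."""
    return a <= b

class IncrementalAggregate:
    def __init__(self):
        self.value = {}   # key -> current lattice value
        self.log = []     # records which maintenance path was taken

    def update(self, key, new):
        old = self.value.get(key)
        if old is None or leq(old, new):
            # Increasing replacement: the new value subsumes the old one,
            # so we overwrite in place -- no deletion phase needed.
            self.value[key] = new
            self.log.append(("replace", key))
        else:
            # Non-increasing change: fall back to delete-and-rederive
            # (DRed-style), sketched here as a delete followed by re-insertion.
            del self.value[key]
            self.value[key] = new
            self.log.append(("delete_rederive", key))

agg = IncrementalAggregate()
agg.update("dist", 3)   # first derivation
agg.update("dist", 5)   # 3 <= 5: cheap in-place replacement
agg.update("dist", 2)   # 5 > 2: requires the expensive path
```

In a real engine the two branches differ sharply in cost: the replacement path touches only the affected tuple, while the deletion path must re-derive everything that depended on the old value.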
Semantic Constraint Modeling in Databases Using the Applicative Data Language
There is a growing need to incorporate database integrity subsystems into large information systems in engineering design environments and real-time control and monitoring environments. The objectives of the integrity subsystem are to provide a user interface for constraint specification, to compile the specification into enforcement strategies, and to check data integrity at both compile-time and run-time. The approach proposed by this research is to develop the conceptual view of the database using the Entity Relationship Model (ERM). Users' queries and semantic constraints can be specified by an ER-based data language, the Applicative Data Language (ADL). Any ADL constraint specification is compiled into both a compile-time and a run-time checking strategy for enforcement. The integrity subsystem then automatically maintains the consistency of data whenever there is a change in the database state. The basic constructs of ADL are data structures, functions, and predicates. It takes advantage of the semantic clarification of objects and relationships in the Entity Relationship Model by performing first an object-level computation and then a data-element-level computation. The object-level computation determines how objects are associated with each other. The data-element computation, on the other hand, examines the data values of those associated objects and derives new relations from these values. A semantic constraint, therefore, is formulated as a computation procedure that maps the current database state to a TRUE or FALSE value. The computational syntax of ADL allows us to compile each constraint specification directly into a transition digraph for compile-time constraint checking. This research proposes the incremental computation strategy for efficient run-time constraint checking. The objective of the strategy is to do run-time constraint checking without full evaluation of the database.
The entire computation procedure centers around the user's update. It propagates the incremental changes along the transition digraph to infer the effect of the update upon the new truth value of the semantic constraint. This research concludes that ADL, with its generality in semantic constraint modeling and its enforcement strategies at both compile-time and run-time, is adequate as the architecture for an integrity subsystem supporting an Entity Relationship database.
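The idea of checking a constraint against a single update instead of re-evaluating the whole database can be sketched as follows; the toy schema, the LIMIT constraint, and all names are invented for illustration and are not ADL syntax:

```python
# Illustrative sketch (hypothetical names): a semantic constraint as a
# procedure from database state to True/False, checked incrementally
# against a single update rather than by full re-evaluation.

db = {"salary": {"alice": 50, "bob": 40}}
LIMIT = 100

def constraint_full(state):
    """Full evaluation: every salary must stay below LIMIT."""
    return all(v < LIMIT for v in state["salary"].values())

def constraint_incremental(state, table, key, new_value):
    """Incremental check: if the constraint held before, only the
    updated tuple can violate it, so inspect that tuple alone."""
    if table == "salary":
        return new_value < LIMIT
    return True

assert constraint_full(db)                                 # holds initially
assert constraint_incremental(db, "salary", "bob", 60)     # update allowed
assert not constraint_incremental(db, "salary", "bob", 120)  # update rejected
```

The incremental check inspects one value instead of scanning the table, which is the point of propagating changes along the transition digraph rather than re-running the full constraint.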
Secondary predication in Russian
The paper makes two contributions to the semantic typology of secondary predicates. It provides an explanation of the fact that Russian has no resultative secondary predicates, relating this explanation to the interpretation of secondary predicates in English. And it relates depictive secondary predicates in Russian, which usually occur in the instrumental case, to other uses of the instrumental case in Russian, establishing here, too, a difference from English concerning the scope of the secondary predication phenomenon.
Automatic visual recognition using parallel machines
Invariant features and quick matching algorithms are two major concerns in the area of automatic visual recognition. The former reduces the size of an established model database, and the latter shortens the computation time. This dissertation discusses both line invariants under perspective projection and a parallel implementation of a dynamic programming technique for shape recognition. The feasibility of using parallel machines is demonstrated through the dramatically reduced time complexity.
In this dissertation, our algorithms are implemented on the AP1000 MIMD parallel machine. For processing an object with n features, the time complexity of the proposed parallel algorithm is O(n), while that of a uniprocessor is O(n²). Two applications, one for shape matching and the other for chain-code extraction, demonstrate the usefulness of our methods.
Invariants from four general lines under perspective projection are also discussed. In contrast to approaches that use epipolar geometry, we investigate the invariants under isotropy subgroups. Theoretically speaking, two independent invariants can be found for four general lines in 3D space. In practice, we show how to obtain these two invariants from the projective images of four general lines without the need for camera calibration.
A projective invariant recognition system based on a hypothesis-generation-testing scheme is run on the hypercube parallel architecture. Object recognition is achieved by matching the scene projective invariants to the model projective invariants, a process called transfer. A hypothesis-generation-testing scheme is then implemented on the hypercube parallel architecture.
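The chain-code extraction mentioned above can be illustrated by its sequential core; this is a generic Freeman chain-code sketch, not the dissertation's parallel AP1000 implementation:

```python
# Hedged sketch: Freeman 8-direction chain-code extraction from an
# ordered list of boundary pixels (the dissertation distributes this
# work across processors; only the sequential core is shown here).

# Direction codes: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(boundary):
    """Map each pair of consecutive boundary pixels to its Freeman
    direction code."""
    codes = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:]):
        codes.append(DIRS[(x1 - x0, y1 - y0)])
    return codes

# A unit square traversed counter-clockwise back to its start:
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(chain_code(square))   # -> [0, 2, 4, 6]
```

Because each code depends only on two adjacent pixels, the boundary can be split into segments and encoded independently, which is what makes the parallel O(n) formulation natural.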
Hyper Static Analysis of Programs - An Abstract Interpretation-Based Framework for Hyperproperties Verification
In the context of systems security, information flows play a central role. Unhandled information flows potentially leave the door open to very dangerous types of security attacks, such as code injection or sensitive information leakage. Information flow verification is based on a notion of dependency between a system's objects, which requires specifications expressing relations between different executions of a system. Specifications of this kind, called hyperproperties, go beyond classic trace properties, defined in terms of predicates over single executions. The problem of trace property verification is well studied, both from a theoretical and a practical point of view. Unfortunately, very few works deal with the verification of hyperproperties. Note that hyperproperties are not limited to information flows. Indeed, many other important problems can only be modeled through hyperproperties: process synchronization, availability requirements, integrity issues, error-resistant code checks, just to name a few. The sound verification of hyperproperties is not trivial: it is not easy to adapt the classic verification methods used for trace properties to deal with hyperproperties. The added complexity derives from the fact that hyperproperties are defined over sets of sets of executions, rather than sets of executions, as happens for trace properties. In general, passing to powersets involves many problems from a computability point of view, and this is the case for systems verification as well. This thesis explores the problem of hyperproperties verification in its theoretical and practical aspects. In particular, the aim is to extend verification methods used for trace properties to the more general case of hyperproperties. The verification is performed by exploiting the framework of abstract interpretation, a very general theory for approximating the behavior of discrete dynamic systems.
Apart from the general setting, the thesis focuses on sound verification methods, based on static analysis, for computer programs. As a case study, which is also a leading motivation, the verification of information flow specifications has been taken into account, in the form of Non-Interference and Abstract Non-Interference. The second is a weakening of the first, useful in contexts where Non-Interference is too restrictive a specification. The results of the thesis have been implemented in a prototype analyzer for (Abstract) Non-Interference which is, to the best of the author's knowledge, the first attempt to implement a sound verifier for these specifications based on abstract interpretation and taking into account the expressive power of hyperproperties.
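Why Non-Interference is a hyperproperty rather than a trace property can be illustrated with a toy exhaustive check over pairs of executions; the programs and names below are invented, and a real verifier would use static analysis, as the thesis does, rather than enumeration:

```python
# Illustrative sketch: Non-Interference relates *pairs* of executions,
# so no predicate over a single trace can express it. Toy check by
# exhaustive pairing over small input domains.

from itertools import product

def secure(public, secret):
    return public * 2             # output ignores the secret

def leaky(public, secret):
    return public + secret % 2    # output depends on the secret's parity

def non_interferent(program, publics, secrets):
    """For every public input, all secret inputs must yield the same
    publicly observable output."""
    return all(
        program(p, s1) == program(p, s2)
        for p in publics
        for s1, s2 in product(secrets, repeat=2)
    )

publics, secrets = range(3), range(4)
print(non_interferent(secure, publics, secrets))   # -> True
print(non_interferent(leaky, publics, secrets))    # -> False
```

Abstract Non-Interference weakens this by comparing outputs only up to an abstraction of the observer's power, instead of demanding literal equality as above.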
A Methodology for Evaluating Relational and NoSQL Databases for Small-Scale Storage and Retrieval
Modern systems record large quantities of electronic data capturing time-ordered events, system state information, and behavior. Subsequent analysis enables historic and current system status reporting, supports fault investigations, and may provide insight into emerging system trends. Unfortunately, the management of log data requires ever more efficient and complex storage tools to access, manipulate, and retrieve these records. Truly effective solutions also require a well-planned architecture supporting the needs of multiple stakeholders. Historically, database requirements were well served by relational data models; however, modern non-relational database (NoSQL) solutions, initially intended for "big data" distributed systems, may also provide value for smaller-scale problems such as log data management. However, no evaluation method currently exists to adequately compare the capabilities of traditional (relational) and modern NoSQL solutions for small-scale problems. This research proposes a methodology to evaluate modern data storage and retrieval systems. While the methodology is intended to be generalizable to many data sources, a commercially produced unmanned aircraft system served as a representative use case to test the methodology on aircraft log data. The research first defined the key characteristics of database technologies and used those characteristics to inform laboratory simulations emulating representative examples of modern database technologies (relational, key-value, columnar, document, and graph). Based on those results, twelve evaluation criteria were proposed to compare the relational and NoSQL database types. The Analytical Hierarchy Process was then used to combine literature findings, laboratory simulations, and user inputs to determine the most suitable database type for the log data use case. The study results demonstrate the efficacy of the proposed methodology.
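The Analytical Hierarchy Process weighting step can be sketched as follows; the criteria and comparison values are invented for illustration, and the row-geometric-mean approximation stands in for whatever eigenvector method the study actually used:

```python
# Hedged sketch of the AHP weighting step: derive criterion weights
# from a pairwise comparison matrix (values here are made up).

import math

def ahp_weights(matrix):
    """Approximate the principal eigenvector by the row geometric mean,
    then normalize so the weights sum to 1."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Three hypothetical criteria: query latency, storage footprint,
# schema flexibility. matrix[i][j] says how much more important
# criterion i is than criterion j (Saaty's 1-9 scale).
comparisons = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(comparisons)
print([round(w, 3) for w in weights])
```

In the methodology, weights like these combine the per-criterion scores from the laboratory simulations and user inputs into a single ranking of database types.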
An Object-Oriented System for Engineering Polymer Information
Issues arising from the development of a computer-based information system for engineering polymer data have been explored.
The system was designed with the aim of providing a user-independent representation of engineering polymers that would organise the data pertaining to them and be amenable to extension and evolution to allow for new materials and new properties.
A classification of engineering polymer materials was developed to provide the structure for the representation, and an existing computer information system was modified in order to accommodate it. The classification was designed to create and order classes of similar materials to enable easy access to their information. Criteria for grouping material grades into families and families into a hierarchy were assessed. Existing polymer classifications were analysed; several alternative approaches to the factoring process are described.
The final taxonomy was implemented within the object-oriented information system POISE, written in the language Smalltalk-80™. Inherent in the system is a facility to support browsing of general class information. Other tools developed during the course of the project allow the addition and positioning of new classes, grades, properties and data, and searching for grades by property value or name.
It was shown that a classification based on criteria of similar chemical structure is a prerequisite for extensibility. It was also demonstrated that no such classification will consistently group together grades that are similar in respect of all of their physical and engineering property data for the uses of engineering designers.
A detailed analysis of the properties used to describe grades of engineering polymer gave an insight into the above dichotomy. To accommodate the resulting conflict, the polymer information system was enhanced to incorporate an orthogonal factoring at the grade level in addition to that already created by the final classification based on chemical families.
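The family/grade split described above can be sketched as a small class hierarchy; the class names and property values are hypothetical, and the actual system was written in Smalltalk-80, not Python:

```python
# Illustrative sketch (hypothetical names): grades inherit family-level
# chemistry from the classification hierarchy, while grade-level
# properties sit orthogonally to that classification.

class Polymer:
    chemistry = "unspecified"

class Polyamide(Polymer):
    chemistry = "polyamide"

class Nylon66(Polyamide):
    """A chemical family: grades below share its chemistry."""

class Grade(Nylon66):
    def __init__(self, name, **properties):
        self.name = name
        # Orthogonal, grade-level data (e.g. glass-filled variants)
        # that the chemical hierarchy alone cannot group consistently.
        self.properties = properties

g = Grade("NylonX-30GF", glass_fibre_pct=30, tensile_mpa=180)
print(g.chemistry, g.properties["glass_fibre_pct"])
```

Browsing walks the class hierarchy for shared data, while property-based search ranges over the orthogonal grade-level attributes.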
Dynamic and Secure Remote Access to Databases with Integration of Soft Access Policies
The amount of data being created and shared has grown greatly in recent
years, thanks in part to social media and the growth of smart devices.
Managing the storage and processing of this data can give a competitive edge
when used to create new services, to enhance targeted advertising, etc. To
achieve this, the data must be accessed and processed. When applications
that access this data are developed, tools such as Java Database Connectivity,
ADO.NET and Hibernate are typically used. However, while these tools aim to
bridge the gap between databases and the object-oriented programming
paradigm, they focus only on the connectivity issue. This leads to increased
development time as developers need to master the access policies to write
correct queries. Moreover, when used in database applications within non-controlled
environments, other issues emerge such as database credentials
theft; application authentication; authorization and auditing of large groups of
new users seeking access to data, potentially with vague requirements;
network eavesdropping for data and credential disclosure; impersonating
database servers for data modification; application tampering for unrestricted
database access and data disclosure; etc.
Therefore, an architecture capable of addressing these issues is necessary to
build a reliable set of access control solutions to expand and simplify the
application scenarios of access control systems. The objective, then, is to
secure the remote access to databases, since database applications may be
used in hard-to-control environments and physical access to the host
machines/network may not always be protected. Furthermore, the authorization
process should dynamically grant the appropriate permissions to users that
have not been explicitly authorized, in order to handle large groups seeking
access to data. This includes scenarios where the definition of the access requirements is
difficult due to their vagueness, usually requiring a security expert to authorize
each user individually. This is achieved by integrating and auditing soft access
policies based on fuzzy set theory in the access control decision-making
process. A proof-of-concept of this architecture is provided alongside a
functional and performance assessment.
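The soft-policy idea, a fuzzy degree of permission compared against a threshold, can be sketched as follows; the membership functions, attribute names, and threshold are invented for illustration and are not the thesis's actual policy language:

```python
# Hedged sketch of a "soft" access policy based on fuzzy set theory
# (thresholds and attributes are hypothetical).

def trapezoid(x, a, b, c, d):
    """Standard trapezoidal fuzzy membership function on [a, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def access_degree(seniority_years, failed_logins):
    """Combine fuzzy memberships with min (fuzzy AND) into a single
    degree of permission in [0, 1]."""
    trusted = trapezoid(seniority_years, 0, 2, 10, 40)
    safe = 1.0 - min(failed_logins / 5.0, 1.0)
    return min(trusted, safe)

def decide(degree, threshold=0.5):
    """Turn the fuzzy degree into a crisp grant/deny decision."""
    return "grant" if degree >= threshold else "deny"

d = access_degree(seniority_years=5, failed_logins=1)
print(round(d, 2), decide(d))   # -> 0.8 grant
```

A degree rather than a binary decision is what lets users who were never explicitly authorized be handled automatically: the security expert tunes membership functions and a threshold once, instead of authorizing each user individually.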