
    Topological Equivalence and Similarity in Multi-Representation Geographic Databases

    Geographic databases contain collections of spatial data representing a variety of views of the real world at a specific time. Depending on the resolution or scale of the spatial data, spatial objects may have different spatial dimensions, and they may be represented by point, linear, or polygonal features, or combinations of them. The diversity of data collected over the same area, often from different sources, raises the question of how to integrate the data and keep them consistent so that spatial queries return correct answers. This thesis is concerned with the development of a tool to check topological equivalence and similarity for spatial objects in multi-representation databases. The main question is what the components are of a model that identifies topological consistency, based on a set of possible transitions between the different types of spatial representations. This work develops a new formalism to consistently model spatial objects and the spatial relations between several objects, each represented at multiple levels of detail. It focuses on the topological consistency constraints that must hold among the different representations of an object; it is not concerned with generalization operations, that is, with how one representation level is derived from another. The result of this thesis is a computational tool to evaluate topological equivalence and similarity across multiple representations. The thesis proposes to organize a spatial scene (a set of spatial objects and their embeddings in space) directly as a relation-based model that uses a hierarchical graph representation. The focus of the relation-based model is on relevant object representations: only the highest-dimensional object representations are explicitly stored, while their parts are not represented in the graph.
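
    As a rough illustration of such a relation-based model, the Python sketch below stores only whole-object representations per level of detail and checks whether a coarse-level topological relation is an admissible transition of the fine-level one. All class names, relation names, and the transition table are invented for illustration; they are not the thesis's actual formalism.

        from dataclasses import dataclass, field

        @dataclass
        class Representation:
            level: int        # level of detail: 0 = coarsest
            dimension: int    # 0 = point, 1 = line, 2 = polygon

        @dataclass
        class SpatialObject:
            name: str
            # Only whole-object representations are stored; their parts are not.
            reps: dict = field(default_factory=dict)   # level -> Representation

        # Hypothetical transition table: when a scene is coarsened, which
        # topological relations may a given fine-level relation turn into?
        ADMISSIBLE = {
            "disjoint": {"disjoint", "meet"},
            "meet":     {"meet", "disjoint", "overlap"},
            "overlap":  {"overlap", "meet"},
        }

        def consistent(rel_fine, rel_coarse):
            """True if the coarse relation is an admissible transition of the fine one."""
            return rel_coarse in ADMISSIBLE.get(rel_fine, set())

        # Two objects that 'meet' in detail may become 'disjoint' when coarsened,
        # but 'overlap' collapsing straight to 'disjoint' is flagged as inconsistent.
        print(consistent("meet", "disjoint"))     # True
        print(consistent("overlap", "disjoint"))  # False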

    A framework for digital model checking

    Master's dissertation in the European Master in Building Information Modelling. Digital model checking (DMC) is a solution with the potential to become a key player in addressing the concerns of the AEC industry. Despite the research achievements on DMC, gaps remain before it can be applied to real-world problems. DMC, as an emerging research discipline, is still under development and not yet completely formalized. This means that there is still a need for enhanced system capabilities, updated processes, adjustments to the current project delivery documents, and proper standardization of DMC aspects. This dissertation proposes a diagnostic approach based on pre-defined principles for analysing digital model checking, together with a formal framework and an implementation plan. These principles are the digital information model (DIM), the rule set, and the checking platform. To set up the formal framework, a modularization approach was used, focused on “what things are”, “what is the logic behind extending the pre-existing concepts”, and “how it assists the DMC process”. These modules play a fundamental role, and they must be captured, tracked, and interconnected during the development of the framework. In expanding the principles, the modules were built on the basis that 1) DIMs are the wholeness of the information and should include existing physical systems, not only buildings; 2) verification rules are not sourced only from regulatory codes and standards, and other sources of rules should be taken into consideration; 3) the roles of the involved stakeholders, the native systems, and the project phases are not ignored; 4) the effectiveness of DIMs to integrate, exchange, identify, and verify their content is evaluated; and 5) existing classification systems that could aid the DMC process are highlighted. Moreover, DMC is a dependent activity with causes and effects in former and subsequent activities. Thus, this dissertation also proposes a DMC implementation plan that fits within the other project activities.
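
    To make the three principles concrete, here is a minimal Python sketch of how a checking platform might apply a rule set drawn from more than one source (regulatory and client-driven) to the elements of a DIM. The element kinds, rule texts, and thresholds are invented; this illustrates the idea, not the dissertation's framework.

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Element:
            kind: str
            properties: dict

        @dataclass
        class Rule:
            source: str        # where the rule comes from: regulatory, client, ...
            description: str
            check: Callable    # Element -> bool

        rules = [
            Rule("regulatory", "doors must be at least 0.9 m wide",
                 lambda e: e.kind != "door" or e.properties.get("width", 0) >= 0.9),
            Rule("client", "every space must carry a classification code",
                 lambda e: e.kind != "space" or "classification" in e.properties),
        ]

        model = [
            Element("door", {"width": 0.8}),
            Element("space", {"classification": "SL_20_15"}),
        ]

        # The checking platform reduces to running every rule over every element
        # and reporting failures with their source, so each finding can be traced
        # back to the requirement it came from.
        for element in model:
            for rule in rules:
                if not rule.check(element):
                    print(f"[{rule.source}] {element.kind}: {rule.description}")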

    A temporal versioned object-oriented data schema model

    This paper describes formally a data schema model that introduces temporal and versioning schema features in an object-oriented environment. In our model, the schema is time dependent, and the history of the changes that occur on its elements is kept in version hierarchies. A fundamental assumption behind our approach is that a new schema specification should not define a new database: previous schema definitions are considered alternative design specifications, and consequently, existing data can be accessed in a consistent way using any of the defined schemas.
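
    A minimal Python sketch of the core idea, with invented class names and integer timestamps: a schema change derives a new version in a hierarchy instead of defining a new database, and the same stored data remains accessible through any version.

        from dataclasses import dataclass, field

        @dataclass
        class SchemaVersion:
            name: str
            valid_from: int            # transaction time, simplified to an integer
            attributes: dict           # attribute name -> type name
            parent: object = None
            children: list = field(default_factory=list)

            def derive(self, name, valid_from, **changes):
                """Create a child version that adds or overrides attributes."""
                child = SchemaVersion(name, valid_from,
                                      {**self.attributes, **changes}, parent=self)
                self.children.append(child)
                return child

        v1 = SchemaVersion("Person.v1", valid_from=0,
                           attributes={"name": "str", "age": "int"})
        v2 = v1.derive("Person.v2", valid_from=10, email="str")

        # The same stored object remains readable through either version:
        record = {"name": "Ada", "age": 36, "email": "ada@example.org"}
        view_v1 = {k: record[k] for k in v1.attributes if k in record}
        print(view_v1)   # {'name': 'Ada', 'age': 36} -- access via the older schema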

    Hardware-Assisted Dependable Systems

    Unpredictable hardware faults and software bugs lead to application crashes, incorrect computations, unavailability of internet services, data losses, malfunctioning components, and consequently financial losses or even loss of human life. In particular, faults in microprocessors (CPUs) and memory corruption bugs are among the major unresolved issues of today. CPU faults may result in benign crashes and, more problematically, in silent data corruptions that can lead to catastrophic consequences, silently propagating from component to component and finally shutting down the whole system. Similarly, memory corruption bugs (memory-safety vulnerabilities) may result in a benign application crash but may also be exploited by a malicious hacker to gain control over the system or leak confidential data. Both these classes of errors are notoriously hard to detect and tolerate. The usual mitigation strategy is to apply ad-hoc local patches: checksums to protect specific computations against hardware faults and bug fixes to protect programs against known vulnerabilities. This strategy is unsatisfactory since it is prone to errors, requires significant manual effort, and protects only against anticipated faults. At the other extreme, Byzantine Fault Tolerance solutions defend against all kinds of hardware and software errors but are prohibitively expensive in terms of resources and performance overhead. In this thesis, we examine and propose five techniques to protect against hardware CPU faults and software memory-corruption bugs. All these techniques are hardware-assisted: they use recent advancements in CPU designs and modern CPU extensions. Three of the techniques target hardware CPU faults and rely on specific CPU features: ∆-encoding efficiently utilizes instruction-level parallelism of modern CPUs, Elzar re-purposes Intel AVX extensions, and HAFT builds on Intel TSX instructions. The remaining two target software bugs: SGXBounds detects vulnerabilities inside Intel SGX enclaves, and “MPX Explained” analyzes the recent Intel MPX extension to protect against buffer overflow bugs. Our techniques achieve three goals: transparency, practicality, and efficiency. All our systems are implemented as compiler passes that transparently harden unmodified applications against hardware faults and software bugs. They are practical since they rely on commodity CPUs and require no specialized hardware or operating-system support. Finally, they are efficient because they use hardware assistance in the form of CPU extensions to lower the performance overhead.
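
    To give a flavour of fault detection by redundant execution, the Python sketch below is a generic AN-encoding-style illustration, not ∆-encoding or any of the five systems: every value is computed twice, the shadow copy on encoded operands, and the two are compared before a result may leave the protected region, so a transient corruption of one copy surfaces as a detectable mismatch rather than a silent error.

        A = 58659   # arbitrary encoding constant, in the spirit of AN-codes

        def protected_add(x, y):
            original = x + y
            shadow = (x * A) + (y * A)     # the same addition on encoded operands
            if shadow != original * A:     # decode-and-compare at the boundary
                raise RuntimeError("replica mismatch: possible hardware fault")
            return original

        print(protected_add(2, 3))   # 5, computed twice and cross-checked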

    ROVER: A Framework for the Evolution of Relationships


    Protecting Systems From Exploits Using Language-Theoretic Security

    Any computer program processing input from the user or network must validate that input. Input-handling vulnerabilities occur when the software component responsible for filtering malicious input (the parser) does not perform validation adequately. Consequently, parsers are among the most targeted components, since they defend the rest of the program from malicious input. This thesis adopts the Language-Theoretic Security (LangSec) principle to understand what tools and research are needed to prevent exploits that target parsers. LangSec proposes specifying the syntactic structure of the input format as a formal grammar. We then build a recognizer for this formal grammar to validate any input before the rest of the program acts on it. To ensure that these recognizers faithfully represent the data format, programmers often rely on parser generator or parser combinator tools to build the parsers. This thesis propels several sub-fields of LangSec by proposing new techniques to find bugs in implementations, novel categorizations of vulnerabilities, and new parsing algorithms and tools to handle practical data formats. To this end, the thesis comprises five parts that tackle various tenets of LangSec. First, I categorize various input-handling vulnerabilities and exploits using two frameworks: the mismorphisms framework, which helps us reason about the root causes leading to various vulnerabilities, and a categorization framework built from various LangSec anti-patterns, such as parser differentials and insufficient input validation. We then built a catalog of more than 30 popular vulnerabilities to demonstrate the categorization frameworks. Second, I built parsers for various Internet-of-Things and power-grid network protocols and for the iccMAX file format using parser combinator libraries. The parsers I built for power-grid protocols were deployed and tested on power-grid substation networks as an intrusion detection tool. The parser I built for the iccMAX file format led to several corrections and modifications to the iccMAX specifications and reference implementations. Third, I present SPARTA, a novel tool I built that generates Rust code to type check Portable Document Format (PDF) files. The type checker I helped build strictly enforces the constraints in the PDF specification to find deviations. Our checker has contributed to at least four significant clarifications and corrections to the PDF 2.0 specification and to various open-source PDF tools. In addition to the checker, we also built a practical tool, PDFFixer, to dynamically patch type errors in PDF files. Fourth, I present ParseSmith, a tool for building verified parsers for real-world data formats. Most parsing tools available for data formats are insufficient for practical formats or have not been verified for correctness. I built a verified parsing tool in Dafny that builds on ideas from attribute grammars, data-dependent grammars, and parsing expression grammars to tackle constructs commonly seen in network formats. I prove that our parsers run in linear time and always terminate for well-formed grammars. Finally, I provide the earliest systematic comparison of various data description languages (DDLs) and their parser generation tools. DDLs are used to describe and parse commonly used data formats, such as image formats. I conducted an expert-elicitation qualitative study to derive metrics with which I compare the DDLs, and I also systematically compare the DDLs on the sample data descriptions available with them, checking for correctness and resilience.
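
    The LangSec recipe (specify the input language as a grammar, build a recognizer from it, reject input before the rest of the program acts on it) can be illustrated with a few hand-rolled parser combinators in Python. This toy sketch is not one of the tools built in the thesis; the grammar and names are invented.

        def char(c):
            def p(s, i):
                return i + 1 if i < len(s) and s[i] == c else None
            return p

        def digit(s, i):
            return i + 1 if i < len(s) and s[i].isdigit() else None

        def many1(p):
            def q(s, i):
                j = p(s, i)
                if j is None:
                    return None
                while True:
                    k = p(s, j)
                    if k is None:
                        return j
                    j = k
            return q

        def seq(*ps):
            def q(s, i):
                for p in ps:
                    i = p(s, i)
                    if i is None:
                        return None
                return i
            return q

        # Toy grammar: NUMBER ':' NUMBER, e.g. "80:443"
        port_pair = seq(many1(digit), char(":"), many1(digit))

        def recognize(s):
            return port_pair(s, 0) == len(s)   # accept only a full-input match

        assert recognize("80:443")
        assert not recognize("80:443; DROP TABLE")   # rejected before any processing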

    A Domain-Specific Language and Editor for Parallel Particle Methods

    Domain-specific languages (DSLs) are of increasing importance in scientific high-performance computing to reduce development costs, raise the level of abstraction, and thus ease scientific programming. However, designing and implementing DSLs is not an easy task, as it requires knowledge of the application domain and experience in language engineering and compilers. Consequently, many DSLs follow a weak approach using macros or text generators, which lack many of the features that make a DSL comfortable for programmers. Some of these features (e.g., syntax highlighting, type inference, error reporting, and code completion) are easily provided by language workbenches, which combine language engineering techniques and tools in a common ecosystem. In this paper, we present the Parallel Particle-Mesh Environment (PPME), a DSL and development environment for numerical simulations based on particle methods and hybrid particle-mesh methods. PPME uses the Meta Programming System (MPS), a projectional language workbench. PPME is the successor of the Parallel Particle-Mesh Language (PPML), a Fortran-based DSL that used conventional implementation strategies. We analyze and compare both languages and demonstrate how the programmer's experience can be improved using static analyses and projectional editing. Furthermore, we present an explicit domain model for particle abstractions and the first formal type system for particle methods.
    Comment: Submitted to ACM Transactions on Mathematical Software on Dec. 25, 201
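
    For readers unfamiliar with the domain, the plain-Python sketch below (illustrative values throughout; this is not PPME or PPML code) shows the kind of computation such DSLs abstract: a particle list, a pairwise interaction, and an explicit time step. The aim of a DSL like PPME is to let the scientist state the interaction once while lower-level concerns such as parallelization are handled by the framework.

        import math

        particles = [
            {"pos": [0.0, 0.0], "vel": [0.0, 0.0]},
            {"pos": [1.0, 0.0], "vel": [0.0, 0.0]},
        ]

        def step(particles, dt=0.01, eps=0.1):
            """One explicit time step with a toy softened inverse-square attraction."""
            for p in particles:
                force = [0.0, 0.0]
                for q in particles:
                    if q is p:
                        continue
                    dx = [q["pos"][k] - p["pos"][k] for k in range(2)]
                    r = math.hypot(dx[0], dx[1]) + eps     # softened distance
                    for k in range(2):
                        force[k] += dx[k] / r**3
                for k in range(2):
                    p["vel"][k] += dt * force[k]
            for p in particles:                            # move after all forces are known
                for k in range(2):
                    p["pos"][k] += dt * p["vel"][k]

        step(particles)
        print(particles[0]["pos"])   # the particles begin to drift toward each other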

    Acta Cybernetica: Volume 25, Number 2.
