
    Methodology and Software Prototype for Ontology-Enabled Traceability Mechanisms

    Due to the rapid advancement of technology, industrial-age systems are being replaced by information-based models through system integration, where hardware and software are connected by a variety of communication means. As engineering systems become progressively more complex, the challenge is to fully understand and implement the connectivity relationships among the various visualization models so that catastrophic and expensive failures of engineering systems can be avoided. To establish these connectivity relationships, this project inserts a new notion called "Design Concepts" into the traceability link between the already connected requirements and engineering objects; rule checking may be embedded into the design concepts. A software prototype of the Washington, D.C. Metro System has been built to illustrate the feasibility of connecting requirements, UML class diagrams, and an engineering model. The software makes use of listener-driven events, a scalable and efficient mechanism for establishing traceability links and responding to external user events.
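The listener-driven traceability idea can be sketched with a simple observer pattern: a requirement notifies every linked design object when it changes. All class and method names below are illustrative assumptions, not taken from the prototype itself.

```python
class Requirement:
    """A requirement that notifies linked design objects when it changes."""

    def __init__(self, text):
        self.text = text
        self.listeners = []          # design objects tracing to this requirement

    def add_listener(self, listener):
        self.listeners.append(listener)

    def update(self, new_text):
        self.text = new_text
        for listener in self.listeners:      # fire a listener-driven event
            listener.on_requirement_changed(self)


class DesignObject:
    """A design-side artifact that must be re-validated after changes."""

    def __init__(self, name):
        self.name = name
        self.dirty = False

    def on_requirement_changed(self, requirement):
        self.dirty = True            # flag this object for re-validation


req = Requirement("Platform shall accommodate 8-car trains.")
obj = DesignObject("StationPlatform")
req.add_listener(obj)
req.update("Platform shall accommodate 10-car trains.")
print(obj.dirty)  # True
```

Because each requirement only pushes events to its registered listeners, adding a new design object costs one registration, which is what makes the approach scale.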

    A Comprehensive Approach to WSN-Based ITS Applications: A Survey

    To perform sensing tasks, most current Intelligent Transportation Systems (ITS) rely on expensive sensors that offer only limited functionality. A more recent trend is to use Wireless Sensor Networks (WSN) for this purpose, which reduces the required investment and enables new collaborative and intelligent applications that further improve both driving safety and traffic efficiency. This paper surveys the application of WSNs to such ITS scenarios, tackling the main issues that arise when developing these systems. The paper is divided into sections addressing vehicle detection and classification, the selection of appropriate communication protocols, network architecture, topology, and other important design parameters. In addition, in line with the multiplicity of technologies involved in ITS, the survey considers WSNs not just as stand-alone systems but also as key components of heterogeneous systems, cooperating with the other technologies employed in vehicular scenarios.

    Methodology and System for Ontology-Enabled Traceability: Pilot Application to Design and Management of the Washington D.C. Metro System

    This report describes a new methodology and system for satisfying requirements, and an architectural framework for linking discipline-specific dependencies through interaction relationships at the meta-model (or ontology) level. In state-of-the-art traceability mechanisms, requirements are connected directly to design objects. Here, in contrast, we ask: What design concept (or family of design concepts) should be applied to satisfy this requirement? Answers to this question establish links between requirements and design concepts; it is then the implementation of these concepts that leads to the design itself. These ideas are prototyped through a Washington, D.C. Metro System requirements-to-design model mockup. The proposed methodology offers several benefits not possible with state-of-the-art procedures. First, procedures for design rule checking may be embedded into design concept nodes, creating a pathway for system validation and verification processes that can be executed early in the system lifecycle, where errors are cheapest and easiest to fix. Second, the proposed model provides a much better big-picture view of the relevant design concepts and how they fit together than is possible when domains are linked at the model level. Finally, the proposed procedures are automatically reusable across families of projects where the ontologies are applicable.
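A minimal sketch of the requirement → design-concept → design-object chain, with a rule-checking predicate embedded in the concept node. The class names and the platform-length rule are hypothetical illustrations, not the report's actual ontology.

```python
class DesignConcept:
    """An intermediate node between a requirement and the design objects
    that implement it; it carries an embedded rule check."""

    def __init__(self, name, rule):
        self.name = name
        self.rule = rule             # rule checking embedded in the concept node

    def check(self, design_object):
        # Executed early in the lifecycle, before detailed design is committed.
        return self.rule(design_object)


# Hypothetical requirement-to-concept link: a platform concept whose rule
# validates candidate design objects (the 180 m threshold is made up).
platform_concept = DesignConcept(
    "SidePlatform",
    rule=lambda obj: obj["length_m"] >= 180,
)

compliant = {"name": "MetroCenter", "length_m": 183}
too_short = {"name": "Smithsonian", "length_m": 150}
print(platform_concept.check(compliant))   # True
print(platform_concept.check(too_short))   # False
```

The point of the intermediate node is that the rule lives with the concept, so every project reusing the ontology inherits the same check automatically.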

    Fault and Defect Tolerant Computer Architectures: Reliable Computing With Unreliable Devices

    This research addresses the design of a reliable computer from unreliable device technologies. A system architecture is developed for a fault and defect tolerant (FDT) computer. Trade-offs between different techniques are studied, and yield and hardware cost models are developed. Fault and defect tolerant designs are created for the processor and the cache memory. Simulation results for the content-addressable memory (CAM)-based cache show 90% yield with device failure probabilities of 3 × 10⁻⁶, three orders of magnitude better than non-fault-tolerant caches of the same size. The entire processor achieves 70% yield with device failure probabilities exceeding 10⁻⁶. The required hardware redundancy is approximately 15 times that of a non-fault-tolerant design. While larger than current fault-tolerant designs, this architecture allows the use of devices much more likely to fail than silicon CMOS. As part of model development, an improved model is derived for NAND multiplexing; it is the first accurate model for small and medium amounts of redundancy. Previous models are extended to account for dependence between the inputs and to produce more accurate results.
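The flavor of these yield numbers can be reproduced with a generic k-out-of-n redundancy model: a module works if enough redundant copies survive, where each copy needs all of its devices to function. This is only a textbook binomial sketch, not the dissertation's actual yield model, and the device counts below are invented for illustration.

```python
from math import comb

def module_yield(p_fail, devices_per_copy, copies, required):
    """Probability that at least `required` of `copies` redundant copies work.
    Each copy is functional only if all of its devices work (independent
    failures with per-device probability `p_fail`); copies follow a
    binomial distribution."""
    p_copy = (1 - p_fail) ** devices_per_copy   # one copy fully functional
    return sum(
        comb(copies, k) * p_copy**k * (1 - p_copy) ** (copies - k)
        for k in range(required, copies + 1)
    )

# Without redundancy, a million-device block almost surely contains a defect:
print(module_yield(3e-6, 1_000_000, 1, 1))   # ≈ 0.05
# With spare copies (need any 10 of 16 blocks of 100k devices), yield recovers:
print(module_yield(3e-6, 100_000, 16, 10))   # ≈ 0.9
```

The contrast between the two calls illustrates why the architecture tolerates devices far less reliable than silicon CMOS at the cost of roughly an order of magnitude more hardware.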

    High-Quality Hypergraph Partitioning

    This dissertation focuses on computing high-quality solutions for the NP-hard balanced hypergraph partitioning problem: Given a hypergraph and an integer k, partition its vertex set into k disjoint blocks of bounded size, while minimizing an objective function over the hyperedges. Here, we consider the two most commonly used objectives: the cut-net metric and the connectivity metric. Since the problem is computationally intractable, heuristics are used in practice - the most prominent being the three-phase multi-level paradigm: During coarsening, the hypergraph is successively contracted to obtain a hierarchy of smaller instances. After applying an initial partitioning algorithm to the smallest hypergraph, contraction is undone and, at each level, refinement algorithms try to improve the current solution. With this work, we give a brief overview of the field and present several algorithmic improvements to the multi-level paradigm. Instead of using a logarithmic number of levels like traditional algorithms, we present two coarsening algorithms that create a hierarchy of (nearly) n levels, where n is the number of vertices. This makes consecutive levels as similar as possible and provides many opportunities for refinement algorithms to improve the partition. This approach is made feasible in practice by tailoring all algorithms and data structures to the n-level paradigm, and by developing lazy-evaluation techniques, caching mechanisms, and early stopping criteria to speed up the partitioning process. Furthermore, we propose a sparsification algorithm based on locality-sensitive hashing that improves the running time for hypergraphs with large hyperedges, and show that incorporating global information about the community structure into the coarsening process improves quality. Moreover, we present a portfolio-based initial partitioning approach, and propose three refinement algorithms. Two are based on the Fiduccia-Mattheyses (FM) heuristic, but perform a highly localized search at each level. While one is designed for two-way partitioning, the other is the first FM-style algorithm that can be efficiently employed in the multi-level setting to directly improve k-way partitions. The third algorithm uses max-flow computations on pairs of blocks to refine k-way partitions. Finally, we present the first memetic multi-level hypergraph partitioning algorithm for an extensive exploration of the global solution space. All contributions are made available through our open-source framework KaHyPar. In a comprehensive experimental study, we compare KaHyPar with hMETIS, PaToH, Mondriaan, Zoltan-AlgD, and HYPE on a wide range of hypergraphs from several application areas. Our results indicate that KaHyPar, already without the memetic component, computes better solutions than all competing algorithms for both the cut-net and the connectivity metric, while being faster than Zoltan-AlgD and equally fast as hMETIS. Moreover, KaHyPar compares favorably with the current best graph partitioning system KaFFPa - both in terms of solution quality and running time.
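The two objectives named above can be computed directly from their standard definitions: cut-net counts hyperedges spanning more than one block, while connectivity sums (λ − 1) over hyperedges, where λ is the number of blocks a hyperedge touches. The toy hypergraph below is invented for illustration.

```python
def cut_net(hyperedges, part):
    """Number of hyperedges whose vertices span more than one block."""
    return sum(1 for e in hyperedges if len({part[v] for v in e}) > 1)

def connectivity(hyperedges, part):
    """Sum over hyperedges of (lambda - 1), where lambda is the number of
    distinct blocks the hyperedge touches."""
    return sum(len({part[v] for v in e}) - 1 for e in hyperedges)

# Toy hypergraph: hyperedges as vertex sets, and a 3-way partition of {0..5}.
edges = [{0, 1, 2}, {2, 3}, {3, 4, 5}]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 2, 5: 0}

print(cut_net(edges, part))       # 2  ({2,3} and {3,4,5} are cut)
print(connectivity(edges, part))  # 3  ({3,4,5} spans three blocks: 2 + 1)
```

Note that connectivity is always at least cut-net, since every cut hyperedge contributes at least 1; the two coincide exactly when k = 2.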