
    Search-based system architecture development using a holistic modeling approach

    This dissertation presents an innovative approach to system architecting in which search algorithms explore the design trade space for good architecture alternatives. The approach is achieved by integrating model construction, alternative generation, simulation, and assessment processes into a coherent, automated framework. The framework is facilitated by a holistic modeling approach that combines the capabilities of Object Process Methodology (OPM), Colored Petri Net (CPN), and feature models. The resultant holistic model can not only capture the structural, behavioral, and dynamic aspects of a system, allowing simulation and strong analysis methods to be applied, but also specify the architectural design space. Both object-oriented analysis and design (OOA/D) and domain engineering were exploited to capture design variables and their domains and to define architecture generation operations. A fully realized framework (with genetic algorithms as the search algorithm) was developed. Both the proposed framework and its suggested implementation, including the proposed holistic modeling approach and architecture alternative generation operations, are generic; they target systems that can be specified using an object-oriented or process-oriented paradigm. The broad applicability of the proposed approach is demonstrated on two examples: the configuration of reconfigurable manufacturing systems (RMSs) under multi-objective optimization, and the architecture design of a manned lunar landing system for the Apollo program. The test results show that the proposed approach can cover a very large number of architecture alternatives and support the assessment of several performance measures. A set of quality results was obtained after running the optimization algorithm within the proposed framework. --Abstract, page iii
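    The search loop described above can be pictured as a standard genetic algorithm over feature-model selections. The following Python sketch is illustrative only: the design-variable domains, the dummy fitness function standing in for CPN simulation and assessment, and all names are assumptions, not taken from the dissertation.

```python
# Hypothetical sketch: genetic search over an architecture design space
# encoded as a fixed-length vector of feature selections.
import random

DESIGN_VARS = [3, 2, 4, 3]  # assumed domain size of each design variable

def random_architecture():
    return [random.randrange(n) for n in DESIGN_VARS]

def fitness(arch):
    # Placeholder for simulating the holistic model (e.g. a CPN run)
    # and scoring the resulting performance measures.
    return -sum(arch)  # dummy objective, illustration only

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(arch, rate=0.1):
    return [random.randrange(DESIGN_VARS[i]) if random.random() < rate else g
            for i, g in enumerate(arch)]

def search(pop_size=30, generations=50):
    pop = [random_architecture() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

print(search())
```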

    Real-time software methodologies: Are they suitable for developing Manufacturing control software?

    Computer-Integrated Manufacturing (CIM) systems may be classified as real-time systems. Hence, this article investigates the applicability of methodologies developed for specifying, designing, implementing, testing, and evolving real-time software to manufacturing control software.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/45553/1/10696_2005_Article_BF01358949.pd

    System-on-chip Computing and Interconnection Architectures for Telecommunications and Signal Processing

    This dissertation proposes novel architectures and design techniques targeting SoC building blocks for telecommunications and signal processing applications. Hardware implementation of Low-Density Parity-Check (LDPC) decoders is approached at both the algorithmic and the architectural level. LDPC codes are a promising coding scheme for future communication standards due to their outstanding error correction performance. This work proposes a methodology for analyzing the effects of finite-precision arithmetic on error correction performance and hardware complexity, and the methodology is employed throughout the co-design of the decoder. First, a low-complexity check node based on the P-output decoding principle is designed and characterized on a CMOS standard-cell library. Results demonstrate an implementation loss below 0.2 dB down to a BER of 10^{-8} and complexity savings of up to 59% with respect to other recent works. High-throughput and low-latency issues are addressed with modified single-phase decoding schedules. A new "memory-aware" schedule is proposed that requires as little as 20% of the memory of traditional two-phase flooding decoding; additionally, throughput is doubled and logic complexity is reduced by 12%. These advantages are traded off against error correction performance, making the solution attractive only for long codes, such as those adopted in the DVB-S2 standard. The "layered decoding" principle is extended to codes not specifically conceived for this technique. The proposed architectures exhibit complexity savings on the order of 40% in both area and power consumption, while the implementation loss is smaller than 0.05 dB.
    Most modern communication standards employ Orthogonal Frequency Division Multiplexing (OFDM) as part of their physical layer. The core of OFDM is the Fast Fourier Transform (FFT) and its inverse, which perform symbol (de)modulation. Requirements on throughput and energy efficiency call for hardware FFT implementations, while the ubiquity of the FFT suggests the design of parametric, re-configurable, and re-usable IP hardware macrocells. In this context, this thesis describes an FFT/IFFT core compiler particularly suited to the implementation of OFDM communication systems. The tool employs an accuracy-driven configuration engine that automatically profiles the internal arithmetic and generates a core with the minimum operand bit-width and thus minimum circuit complexity. The engine performs a closed-loop optimization over three internal arithmetic models (fixed-point, block floating-point, and convergent block floating-point), using the numerical accuracy budget given by the user as a reference point. The flexibility and re-usability of the proposed macrocell are illustrated through several case studies encompassing all current state-of-the-art OFDM communication standards (WLAN, WMAN, xDSL, DVB-T/H, DAB and UWB). Implementation results are presented for two deep sub-micron standard-cell libraries (65 and 90 nm) and for commercially available FPGA devices. Compared with other FFT core compilers, the proposed environment produces macrocells with lower circuit complexity and the same system-level performance (throughput, transform size, and numerical accuracy).
    The final part of this dissertation focuses on the Network-on-Chip (NoC) design paradigm, whose goal is building scalable communication infrastructures connecting hundreds of cores. A low-complexity link architecture for mesochronous on-chip communication is discussed. The link enables relaxed skew constraints in clock tree synthesis, frequency speed-up, power consumption reduction, and faster back-end turnarounds. The proposed architecture reaches a maximum clock frequency of 1 GHz on a 65 nm low-leakage CMOS standard-cell library. In a complex test case with a full-blown NoC infrastructure, the link overhead is only 3% of chip area and 0.5% of leakage power consumption. Finally, a new methodology, named metacoding, is proposed. Metacoding generates correct-by-construction, technology-independent RTL codebases for NoC building blocks. The RTL coding phase is abstracted and modeled with an Object Oriented framework integrated within a commercial tool for IP packaging (Synopsys CoreTools suite). Compared with traditional coding styles based on pre-processor directives, metacoding produces 65% smaller codebases and reduces the number of configurations to verify by up to three orders of magnitude.
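    The accuracy-driven configuration engine can be imagined as a closed-loop search for the smallest operand bit-width that meets a user-supplied accuracy budget. The Python sketch below is a simplified stand-in, not the actual compiler: it quantizes only the FFT inputs of a plain fixed-point model (the real engine profiles the internal arithmetic across three models), and the function names and the 60 dB budget are assumptions.

```python
# Illustrative sketch of an accuracy-driven bit-width search: grow the
# fixed-point word length until the simulated FFT output SNR meets the
# user's accuracy budget, then report the minimum width found.
import numpy as np

def fft_snr_db(n, bits, trials=200):
    """Estimate output SNR (dB) of an n-point FFT whose inputs are quantized
    to `bits` fractional bits, against a double-precision reference."""
    rng = np.random.default_rng(0)
    errs, sigs = [], []
    for _ in range(trials):
        x = rng.uniform(-1, 1, n) + 1j * rng.uniform(-1, 1, n)
        xq = np.round(x * 2**bits) / 2**bits      # input quantization only
        errs.append(np.mean(np.abs(np.fft.fft(x) - np.fft.fft(xq)) ** 2))
        sigs.append(np.mean(np.abs(np.fft.fft(x)) ** 2))
    return 10 * np.log10(np.mean(sigs) / np.mean(errs))

def minimum_bits(n, snr_budget_db, max_bits=24):
    # Smallest width whose simulated SNR meets the budget.
    for bits in range(4, max_bits + 1):
        if fft_snr_db(n, bits) >= snr_budget_db:
            return bits
    return max_bits

print(minimum_bits(1024, snr_budget_db=60.0))
```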

    Compiling Mechanical Nanocomputer Components


    Computational intelligence based complex adaptive system-of-systems architecture evolution strategy

    Dynamic planning for a system-of-systems (SoS) is a challenging endeavor. Large-scale organizations and operations constantly face challenges to incorporate new systems and upgrade existing systems over a period of time under threats, constrained budgets, and uncertainty. It is therefore necessary for program managers to be able to look at future scenarios and critically assess the impact of technology and stakeholder changes. Managers and engineers are always looking for options that signify affordable acquisition selections and lessen the cycle time for early acquisition and new technology addition. This research helps in analyzing sequential decisions in an evolving SoS architecture based on the wave model through three key features: meta-architecture generation, architecture assessment, and architecture implementation. Meta-architectures are generated using evolutionary algorithms and assessed using type II fuzzy nets. The approach can accommodate diverse stakeholder views, convert them to key performance parameters (KPPs), and use these for architecture assessment. On the other hand, such an architecture cannot be implemented without persuading the constituent systems to participate in the meta-architecture. To address this issue, a negotiation model is proposed that helps the SoS manager adapt his strategy based on system owners' behavior. This work helps in capturing the varied differences in the resources that systems require to prepare for participation. The viewpoints of multiple stakeholders are aggregated to assess the overall mission effectiveness of the overarching objective. An SAR SoS example problem illustrates the application of the method. A dynamic programming approach can also be used for generating meta-architectures based on the wave model. --Abstract, page iii
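    The assessment step can be illustrated with a much simpler stand-in for the type II fuzzy nets: a weighted aggregation of triangular memberships over stakeholder KPP targets. Everything in the Python sketch below (KPP names, targets, weights) is hypothetical.

```python
# Hypothetical sketch: aggregating key performance parameters (KPPs) into a
# single architecture score. A plain triangular membership stands in for the
# type II fuzzy assessment, for illustration only.
def triangular(x, low, peak, high):
    """Degree to which a measured KPP value x satisfies a stakeholder target."""
    if x <= low or x >= high:
        return 0.0
    return (x - low) / (peak - low) if x < peak else (high - x) / (high - peak)

def assess(kpp_values, targets, weights):
    # targets: {kpp: (low, peak, high)}; weights reflect stakeholder priorities
    score = sum(w * triangular(kpp_values[k], *targets[k])
                for k, w in weights.items())
    return score / sum(weights.values())

kpps    = {"coverage": 0.8, "latency": 120.0}            # invented values
targets = {"coverage": (0.5, 1.0, 1.2), "latency": (0.0, 60.0, 200.0)}
weights = {"coverage": 2.0, "latency": 1.0}
print(assess(kpps, targets, weights))   # ~0.59 for these numbers
```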

    Evaluating Resilience of Cyber-Physical-Social Systems

    Nowadays, protecting the network is not the only security concern. In cyber security, websites and servers are becoming more popular targets due to the ease with which they can be accessed compared to communication networks. Another threat in cyber-physical-social systems with human interactions is that they can be attacked and manipulated not only by technical hacking through networks, but also by manipulating people and stealing users’ credentials. Therefore, systems should be evaluated beyond cyber security, which means measuring their resilience as evidence that a system works properly under cyber-attacks or incidents. In that vein, cyber resilience is increasingly discussed and described as the capacity of a system to maintain state awareness for detecting cyber-attacks. All the tasks for making a system resilient should proactively maintain a safe level of operational normalcy through rapid system reconfiguration, so that attacks that would impact system performance are detected. In this work, we broadly study the new paradigm of cyber-physical-social systems and propose a uniform definition of it. To overcome the complexity of evaluating cyber resilience, especially in these inhomogeneous systems, we propose a framework that applies Attack Tree refinements and Hierarchical Timed Coloured Petri Nets to model intruder and defender behaviors and to evaluate the impact of each action on the behavior and performance of the system.
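    The Attack Tree side of the framework can be sketched with a plain AND/OR refinement in which each leaf carries a success probability. The Python toy below ignores the timing and hierarchy that the Hierarchical Timed Coloured Petri Nets contribute; the tree and its probabilities are invented for illustration.

```python
# Minimal attack-tree sketch, assuming a simple AND/OR refinement where each
# leaf carries an independent success probability.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gate: str = "LEAF"            # "LEAF", "AND", or "OR"
    prob: float = 0.0             # leaf success probability
    children: list = field(default_factory=list)

    def success(self):
        if self.gate == "LEAF":
            return self.prob
        ps = [c.success() for c in self.children]
        if self.gate == "AND":    # all sub-attacks must succeed
            out = 1.0
            for p in ps:
                out *= p
            return out
        fail = 1.0                # OR: succeeds if any sub-attack does
        for p in ps:
            fail *= 1.0 - p
        return 1.0 - fail

root = Node("steal credentials", "OR", children=[
    Node("phish user", prob=0.3),
    Node("compromise server", "AND", children=[
        Node("exploit web app", prob=0.4),
        Node("escalate privileges", prob=0.5),
    ]),
])
print(f"{root.success():.2f}")    # 1 - (1-0.3)*(1-0.2) = 0.44
```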

    Component-based records: a novel method to record transaction design work

    The growing pressure from globally competitive markets signals an inevitable challenge for companies: to rapidly design and develop successful new products. To continually improve design quality and efficiency, companies must consider how to speed up design processes, minimise human errors, avoid unnecessary iterations, and sustain the knowledge embedded in the design process. All of these issues strongly concern one topic: how to make and exploit records of design activities. Using process modelling ideas, this paper introduces a new method, called component-based records, in place of traditional design reports. The proposed method records the transaction elements of the actual design processes undertaken in a design episode, aiming to continually improve design quality and efficiency, reduce designers’ workload for routine tasks, and sustain the competitiveness of companies.
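    One plausible shape for such a record is a list of structured transactions, each capturing the inputs, outputs, and rationale of a single design step so that an episode can be replayed or audited. The field names in the following Python sketch are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a component-based record: each transaction logs one
# design step with its inputs, outputs, and rationale.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Transaction:
    actor: str
    action: str            # e.g. "select material", "run FEA check"
    inputs: dict
    outputs: dict
    rationale: str
    timestamp: str = ""

    def __post_init__(self):
        self.timestamp = self.timestamp or datetime.now(timezone.utc).isoformat()

episode = [
    Transaction("designer_A", "select material",
                {"candidates": ["Al-6061", "Ti-6Al-4V"]},
                {"choice": "Al-6061"},
                "meets stiffness target at lower cost"),
]
for t in episode:
    print(t.timestamp, t.actor, t.action, "->", t.outputs)
```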

    Model based test suite minimization using metaheuristics

    Software testing is one of the most widely used methods for quality assurance and fault detection. However, it is also one of the most expensive, tedious, and time-consuming activities in the software development life cycle. Code-based and specification-based testing have been practiced for almost four decades. Model-based testing (MBT) is a relatively new approach to software testing in which software models, as opposed to other artifacts (i.e. source code), are used as the primary source of test cases. Models are simplified representations of a software system and are cheaper to execute than the original or deployed system. The main objective of the research presented in this thesis is the development of a framework for improving the efficiency and effectiveness of test suites generated from UML models. It focuses on three activities: transformation of an Activity Diagram (AD) model into a Colored Petri Net (CPN) model, generation and evaluation of an AD-based test suite, and optimization of that test suite. The Unified Modeling Language (UML) is the de facto standard for software system analysis and design. UML models can be categorized into structural and behavioral models. The AD is a behavioral UML model and, since the major revision in UML 2.x, has a new Petri Net-like semantics. It has a wide application scope, including embedded, workflow, and web-service systems; for this reason the thesis concentrates on AD models. The informal semantics of UML in general, and of AD in particular, is a major challenge in the development of UML-based verification and validation tools. One solution to this challenge is to transform a UML model into an executable formal model. The thesis proposes a three-step transformation methodology for resolving ambiguities in an AD model and then transforming it into a CPN representation, a well-known formal language with extensive tool support. Test case generation is one of the most critical and labor-intensive activities in testing processes. The flow-oriented semantics of AD suits modeling of both sequential and concurrent systems. The thesis presents a novel technique to generate test cases from an AD using a stochastic algorithm. To determine whether the generated test suite is adequate, two adequacy analysis techniques based on structural coverage and mutation are proposed. In terms of structural coverage, two separate coverage criteria are proposed to evaluate the adequacy of the test suite from both the sequential and the concurrent perspective. Mutation analysis is a fault-based technique to determine whether the test suite is adequate for detecting particular types of faults. Four categories of mutation operators are defined to seed specific faults into the mutant model. Another focus of the thesis is improving test suite efficiency without compromising its effectiveness. One way of achieving this is identifying and removing redundant test cases. It is shown that test suite minimization by removing redundant test cases is a combinatorial optimization problem. An evolutionary-computation-based technique is developed to address the test suite minimization problem, and its performance is empirically compared with other well-known heuristic algorithms. Additionally, statistical analysis is performed to characterize the fitness landscape of test suite minimization problems. The proposed solution is extended to multi-objective minimization. As redundancy is contextual, different criteria and their combinations can significantly change the solution test suite; therefore, the last part of the thesis investigates multi-objective test suite minimization and optimization algorithms. The proposed framework is demonstrated and evaluated using prototype tools and case-study models. Empirical results show that the techniques developed within the framework are effective in model-based test suite generation and optimization.
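    The minimization problem can be sketched as a genetic algorithm over bitmasks of the original suite, rewarding masks that preserve requirement coverage while penalizing suite size. The coverage data and parameters in the Python sketch below are invented; the thesis's actual encoding, operators, and multi-objective extension are not reproduced here.

```python
# Hedged sketch of evolutionary test-suite minimization: individuals are
# bitmasks over the original suite; fitness keeps all covered requirements
# while preferring smaller suites.
import random

# coverage[i] = set of requirements (e.g. AD edges) test case i exercises
coverage = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {1, 3}]   # made-up data
ALL = set().union(*coverage)

def fitness(mask):
    covered = set().union(*(coverage[i] for i, b in enumerate(mask) if b), set())
    missing = len(ALL - covered)
    # Lost coverage dominates the size penalty, so infeasible masks rank last.
    return -(len(coverage) + 1) * missing - sum(mask)

def minimize(pop_size=20, generations=100):
    n = len(coverage)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            child[random.randrange(n)] ^= 1    # bit-flip mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

print(minimize())  # e.g. [1, 0, 1, 0, 0] covers {0,1,2,3} with two tests
```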