141 research outputs found

    Leveraging Ada 2012 and SPARK 2014 for assessing generated code from AADL models

    Modeling of Distributed Real-time Embedded systems using an Architecture Description Language provides the foundations for various levels of analysis (scheduling, reliability, consistency, etc.), but also allows for automatic code generation. A challenge is to demonstrate that the generated code matches the quality required for safety-critical systems. In the scope of the AADL, the Ocarina toolchain proposes code generation towards the Ada Ravenscar profile with restrictions for High-Integrity. It has been extensively used in the space domain as part of the TASTE project within the European Space Agency. In this paper, we illustrate how the combined use of Ada 2012 and SPARK 2014 significantly increases code quality and demonstrates the absence of run-time errors at both the runtime and generated-code levels.

    Certification of open-source software: a role for formal methods?

    Despite its huge success and increasing incorporation in complex, industrial-strength applications, open source software, by the very nature of its open, unconventional, distributed development model, is hard to assess and certify in an effective, sound and independent way. This makes its use and integration within safety- or security-critical systems a risk, and, simultaneously, an opportunity and a challenge for rigorous, mathematically based methods which aim at pushing software analysis and development to the level of a mature engineering discipline. This paper discusses such a challenge and proposes a number of ways in which open source development may benefit from the whole patrimony of formal methods. L. S. Barbosa's research was partially supported by the CROSS project, under contract PTDC/EIA-CCO/108995/2008.

    Transparent tests with even repetition of addresses for memory devices

    The urgency of the problem of memory testing in modern computing systems is shown. Mathematical models describing the faulty states of storage devices, and the methods used to detect the most complex of them on the basis of classical transparent march tests, are investigated. The concept of address sequences (pA) with an even repetition of addresses is introduced; these sequences form the basis of the basic element included in the structure of the new transparent march tests March_pA_1 and March_pA_2. Algorithms for generating such sequences and examples of their implementation are given. The maximum diagnostic ability of the new tests is shown for the simplest faults, such as stuck-at (SAF) and transition (TF) faults, as well as for complex pattern-sensitive faults (PNPSFk). The March_pA_1 and March_pA_2 tests have significantly lower time complexity than classical transparent tests, achieved through the lower time cost of obtaining the reference signature. New distance metrics are introduced to quantitatively compare the effectiveness of the applied pA address sequences in a single run of the March_pA_1 and March_pA_2 tests. These metrics are based on the distance D(A(j), pA), determined by the difference between the indices of the repeated addresses A(j) in the sequence pA. The properties of these new characteristics of pA sequences are investigated, and their applicability is evaluated for choosing the optimal test pA sequences that ensure the high efficiency of the new transparent tests. Examples of calculating the distance metrics are given, and the dependence of the effectiveness of the new tests on the numerical values of the metrics is shown. As in the case of classical transparent tests, multiple applications of the new March_pA_1 and March_pA_2 tests are considered. The characteristic V(pA) is introduced, numerically equal to the number of distinct values of the distance D(A(j), pA) over the addresses A(j) of the sequence pA. The validity of the analytical estimates is shown experimentally, and the high fault detection efficiency of single and multiple runs of the March_pA_1 and March_pA_2 tests is confirmed using coupling faults for p = 2 as an example.
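
    As a toy illustration of the metrics defined above (not an example from the paper), the sketch below computes the distances D(A(j), pA) and the characteristic V(pA) for a small address sequence in which every address occurs exactly twice (p = 2):

```python
# Illustrative sketch: the distance D(A(j), pA) is the difference
# between the indices of the two occurrences of address A(j) in the
# sequence pA, and V(pA) counts the distinct distance values.

def distances(pa):
    """Return D(A(j), pA) for each address in a sequence where
    every address occurs exactly twice (p = 2)."""
    first = {}
    d = {}
    for i, a in enumerate(pa):
        if a in first:
            d[a] = i - first[a]   # index difference of the repeated pair
        else:
            first[a] = i
    return d

pa = [0, 1, 2, 0, 3, 1, 2, 3]     # each address occurs twice
d = distances(pa)
v = len(set(d.values()))          # V(pA): number of distinct distances
print(d)                          # {0: 3, 1: 4, 2: 4, 3: 3}
print(v)                          # 2
```

    The abstract reports that the effectiveness of the tests depends on the numerical values of these metrics, so such characteristics can be used to select the pA sequences for a test run.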

    Learning The Differences Between Ontologies and Conceptual Schemas Through Ontology-Driven Information Systems.

    In the traditional systems modeling approach, the modeler is required to capture a user's view of some domain in a formal conceptual schema. The designer's conceptualization may or may not match the user's conceptualization. One of the reasons for these conflicts is the lack of an initial agreement among users and modelers concerning the concepts belonging to the domain. Such an agreement could be facilitated by means of an ontology. If the ontology is previously constructed and formalized so that it can be shared by the modeler and the user in the development process, such conflicts would be less likely to happen. Following up on that, a number of investigators have suggested that those working on information systems should make use of commonly held, formally defined ontologies that would constrain and direct the design, development, and use of information systems, thus avoiding the above-mentioned difficulties. Whether ontologies represent a significant advance over the more traditional conceptual schemas has been challenged by some researchers. We review and summarize some major themes of this complex discussion. While recognizing the commonalities and historical continuities between conceptual schemas and ontologies, we think that there is an important emerging distinction that should not be obscured and should guide future developments. In particular, we propose that the notions of conceptual schemas and ontologies be distinguished so as to play essentially different roles for the developers and users of information systems. We first suggest that ontologies and conceptual schemas belong to two different epistemic levels: they have different objects and are created with different objectives. Our proposal is that ontologies should deal with general assumptions concerning the explanatory invariants of a domain, those that provide a framework enabling understanding and explanation of data across all domains inviting explanation and understanding. Conceptual schemas, on the other hand, should address the relation between such general explanatory categories and the facts that exemplify them in a particular domain (e.g., the contents of the database). In contrast to ontologies, conceptual schemas would involve specification of the meaning of the explanatory categories for a particular domain as well as the consequent dimensions of possible variation among the relevant data of that domain. Accordingly, the conceptual schema makes possible both the intelligibility and the measurement of the facts of a particular domain. The proposed distinction between ontologies and conceptual schemas makes possible a natural decomposition of information systems in terms of two necessary but complementary epistemic functions: identification of an invariant background, and measurement of the object along dimensions of possible variation. Recognition of the suggested distinction represents, we think, a natural evolution in the field of modeling, and significant principled guidance for developers and users of information systems.

    Integration of Virtual Programming Lab in a process of teaching programming EduScrum based

    The teaching of programming is a key factor in technological evolution. The effective way to learn to program is by programming and hard training, so feedback is a crucial factor in the success and flow of the process. This work aims to analyse the potential use of VPL (Virtual Programming Lab) in the teaching of programming in higher education. It also intends to verify whether, with VPL, it is possible to make students' learning more effective and autonomous, while reducing the volume of assessment work for teachers. Experiments were carried out with VPL in the practical laboratory classes of an introductory programming course at a higher education institution. The results, supported by the responses to surveys, point to the validity of the model.

    Multi-agent system based active distribution networks

    This thesis presents a particular vision of the future power delivery system and its main requirements. An investigation of suitable concepts and technologies that create a roadmap toward the smart grid has been carried out; these should meet the requirements on sustainability, efficiency, flexibility, and intelligence. The so-called Active Distribution Network (ADN) is introduced as an important element of the future power delivery system. With an open architecture, the ADN is designed to integrate various types of networks, e.g., MicroGrids or Autonomous Networks, and different forms of operation, i.e., islanded or interconnected. By enabling an additional local control layer, these so-called cells are able to reconfigure, manage local faults, support voltage regulation, or manage power flow.

    Furthermore, the Multi-Agent System (MAS) concept is regarded as a potential technology to cope with the anticipated challenges of future grid operation. Analysis of the benefits and challenges of implementing MAS shows that it is a suitable technology for a complex, highly dynamic operation and an open architecture such as the ADN. By taking advantage of MAS technology, the ADN is expected to fully enable distributed monitoring and control functions. This MAS-based ADN focuses mainly on control strategies and communication topologies for distribution systems. The transition to the proposed concept does not require intensive physical changes to the existing infrastructure. The main point is that, inside the MAS-based ADN, loads and generators interact with each other and with the outside world. This infrastructure can be built up of several cells (local areas) that are able to operate autonomously through an additional agent-based control layer. The ADN adopts a MAS hierarchical control structure in which each agent handles three functional layers: management, coordination, and execution. In the operational structure, the ADN addresses two main functional parts: Distributed State Estimation (DSE), to analyze the network topology, compute the state estimate, and detect bad data; and Local Control Scheduling (LCS), to establish the control set points for voltage coordination and power flow management.

    Given the distributed context of the controls, an appropriate method for DSE is proposed. The method takes advantage of MAS technology to compute the local state variables iteratively from neighboring data measurements. Although classical Weighted Least Squares (WLS) is used at its core, the proposed agent-based algorithm drastically distributes the computational burden into state-estimation subtasks involving only two interacting buses and the interconnecting line between them. The accuracy and complexity of the proposed estimation are investigated through both off-line and on-line simulations. Distributed and parallel working of the processors significantly improves computation time. The estimation is also suitable for a meshed configuration of the ADN, with more than one interconnection between each pair of cells. Depending on the availability of a communication infrastructure, it can work either locally inside the cells or globally for the whole ADN.

    As part of the LCS, the voltage control function is investigated in both steady-state and dynamic environments. The autonomous voltage control within each network area (cell) can be deployed by a combination of active and reactive power support from distributed generation (DG).
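
    For context, the classical WLS estimator that the proposed DSE distributes can be stated compactly. The formulation below is the standard textbook one, with generic notation rather than the thesis's own symbols:

```latex
\hat{x} = \arg\min_{x}\, \bigl(z - h(x)\bigr)^{\top} R^{-1} \bigl(z - h(x)\bigr),
\qquad
x_{k+1} = x_k + \bigl(H_k^{\top} R^{-1} H_k\bigr)^{-1} H_k^{\top} R^{-1} \bigl(z - h(x_k)\bigr)
```

    Here z is the measurement vector, h the measurement model, R the measurement covariance, and H_k the Jacobian of h evaluated at x_k. In the distributed scheme described above, each agent would solve such a problem restricted to its pair of interacting buses and the line between them, exchanging boundary measurements with its neighbours.
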
    The coordinated voltage control defines the optimal tap setting of the on-load tap changer (OLTC) while comparing the amounts of control actions in each area. Based on sensitivity factors, these negotiations are fully supported in the distributed environment of the MAS platform. To verify the proposed method, both steady-state and dynamic simulations are developed. Simulation results show that the proposed function helps to integrate more DG while effectively mitigating voltage violations. The optimal solution can be reached within a small number of iterations, which opens the possibility of applying the proposed method as an on-line application.

    Furthermore, a distributed approach to the power flow management function is developed. By converting the power network into a graph representation, optimal power flow is cast as the well-known minimum cost flow problem. Two fundamental solutions for minimum cost flow are introduced: the Successive Shortest Path (SSP) algorithm and the Cost-Scaling Push-Relabel (CS-PR) algorithm. The SSP algorithm augments the power flow along the shortest path until the capacity of at least one edge is reached; after updating the flow, it finds another shortest path and augments again. The CS-PR algorithm approaches the problem differently, scaling costs and pushing as much flow as possible at each active node. Simulations on both meshed and radial test networks are developed to compare their performance under various network conditions. The results show that both methods allow generation and power flow controller devices to operate optimally. In the radial test network, CS-PR needs less computational effort, measured by the number of messages exchanged on the MAS platform, than SSP; in the meshed network, their performance is almost the same.

    Last but not least, this novel concept of the MAS-based ADN is verified in a laboratory environment. The lab set-up separates several local network areas by means of a three-inverter system. The MAS platform runs on different computers and is able to exchange data with the hardware components, i.e., the three-inverter system. In this set-up, a power router is realized by combining the three-inverter system with the MAS platform. Three control functions of the inverters (AC voltage control, DC bus voltage control, and PQ control) are developed in a Simulink diagram. By assigning suitable operation modes to the inverters, the set-up successfully demonstrates synchronizing a cell with, and disconnecting it from, the rest of the grid. On the MAS platform, a power routing strategy is executed to optimally manage power flow in the lab set-up. The results show that the proposed concept of the ADN with the power router interface works well and can be used to manage electrical networks with distributed generation and controllable loads, leading to active networks.
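
    As a rough illustration of the SSP scheme described above, the sketch below solves a minimum cost flow problem on a hypothetical toy network by repeatedly augmenting flow along the cheapest residual path; all names and network data are illustrative and not taken from the thesis:

```python
# Successive Shortest Path sketch for minimum cost flow.
# Each edge is (u, v, capacity, cost); a residual reverse edge with
# cost -cost is kept so flow can be rerouted in later iterations.

def min_cost_flow(n, edges, s, t, demand):
    graph = []                       # [to, residual cap, cost, rev index]
    adj = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        adj[u].append(len(graph)); graph.append([v, cap, cost, None])
        adj[v].append(len(graph)); graph.append([u, 0, -cost, None])
        graph[-2][3] = len(graph) - 1
        graph[-1][3] = len(graph) - 2
    total_cost = 0
    while demand > 0:
        # Bellman-Ford: cheapest residual path (handles negative costs)
        dist = [float("inf")] * n
        parent = [None] * n          # edge index used to reach each node
        dist[s] = 0
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for ei in adj[u]:
                    v, cap, cost, _ = graph[ei]
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = ei
        if dist[t] == float("inf"):
            raise ValueError("demand cannot be routed")
        # Bottleneck capacity along the path, then augment
        push, v = demand, t
        while v != s:
            ei = parent[v]
            push = min(push, graph[ei][1])
            v = graph[graph[ei][3]][0]
        v = t
        while v != s:
            ei = parent[v]
            graph[ei][1] -= push
            graph[graph[ei][3]][1] += push
            v = graph[graph[ei][3]][0]
        total_cost += push * dist[t]
        demand -= push
    return total_cost

# Toy network: route 4 units from node 0 to node 3.
edges = [(0, 1, 3, 1), (0, 2, 2, 2), (1, 3, 2, 1), (1, 2, 1, 1), (2, 3, 3, 1)]
print(min_cost_flow(4, edges, s=0, t=3, demand=4))   # minimum cost: 10
```

    CS-PR reaches the same optimum through cost scaling and local push/relabel operations instead of global shortest-path searches, which is consistent with the difference in message counts reported above.
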

    Architecture for real-time coordination of multiple autonomous mobile units

    Doctoral thesis in Electrical Engineering (Doutoramento em Engenharia Electrotécnica).

    Interest in using teams of mobile robots has been growing, due to their potential to cooperate for diverse purposes, such as rescue, de-mining, surveillance, or even games such as robotic soccer. These applications require a real-time middleware and wireless communication protocol that can support an efficient and timely fusion of the perception data from different robots as well as the development of coordinated behaviours. Coordinating several autonomous robots towards a common goal is currently a topic of high interest, found in many application domains. Despite the differences between these domains, the technical problem of building an infrastructure to support the integration of distributed perception and subsequent coordinated action is similar. This problem becomes harder with stronger system dynamics, e.g., when the robots move faster or interact with fast objects, leading to tighter real-time constraints.

    This work addressed computing architectures and wireless communication protocols to support efficient information sharing and coordination strategies, taking into account the real-time nature of robot activities. The thesis makes two main claims. Firstly, we claim that, despite the use of a wireless communication protocol that includes arbitration mechanisms, the self-organization of the team communications in a dynamic round that also accounts for variable team membership effectively reduces collisions within the team, independently of its current composition, significantly improving the quality of the communications. We validate this claim in terms of packet losses and communication latency, and we show how such self-organization of the communications can be achieved efficiently with the Reconfigurable and Adaptive TDMA protocol, which operates without clock synchronization.

    Secondly, we claim that the development of distributed perception, cooperation, and coordinated action for teams of mobile robots can be simplified by using a shared-memory middleware that replicates, in each cooperating robot, all necessary remote data: the Real-Time Database (RTDB) middleware. These remote data copies, which are updated in the background by the self-organizing communications protocol, are extended with age information automatically computed by the middleware and are locally accessible through fast primitives. We validate this claim by showing a parsimonious use of the communication medium, improved timing information with respect to the shared data, and the simplicity of use and effectiveness of the proposed middleware in several use cases, reinforced by a reasonable impact in the RoboCup Middle Size League.
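
    As a rough sketch of the age-extended shared data idea (class and method names here are assumptions for illustration; the actual RTDB API differs), each robot could hold a local replica in which every record carries the time it was last refreshed, so reads are fast and return the data's age:

```python
# Hypothetical sketch of an RTDB-style local replica: remote records
# are refreshed in the background by the communication layer, and
# local reads return the value together with its age.

import time

class LocalReplica:
    def __init__(self):
        self._items = {}              # key -> (value, refresh timestamp)

    def refresh(self, key, value):
        """Called when a remote update for `key` arrives (in the RTDB
        this happens in the background, not in application code)."""
        self._items[key] = (value, time.monotonic())

    def read(self, key):
        """Fast local read: returns (value, age in seconds)."""
        value, t = self._items[key]
        return value, time.monotonic() - t

replica = LocalReplica()
replica.refresh("ball_position", (1.2, -0.4))   # background update
value, age = replica.read("ball_position")      # local, non-blocking
```

    In the RTDB, the refresh step is performed by the self-organizing communications protocol, so application code only ever performs fast local reads on data whose freshness it can check.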