Safe and Secure Support for Public Safety Networks
As explained by Tanzi et al. in the first volume of this book, communicating and autonomous devices will surely have a role to play in future Public Safety Networks. The "communicating" feature reflects the need to deliver information to rescuers quickly. The "autonomous" characteristic reflects the fact that rescuers should not have to concern themselves with these objects: the devices should carry out their mission autonomously, so as not to delay the rescuers' intervention but rather to assist them efficiently and reliably.
An Approach for Supporting Ad-hoc Modifications in Distributed Workflow Management Systems
Supporting enterprise-wide or even cross-organizational business processes is a characteristic challenge for any workflow management system (WfMS). Scalability under high loads as well as the capability to dynamically modify running workflow (WF) instances (e.g., to cope with exceptional situations) are essential requirements in this context. If the latter, in particular, is not met, the WfMS will lack the flexibility needed to cover the wide range of process-oriented applications deployed in many organizations. Scalability and flexibility have, for the most part, been treated separately in the relevant literature thus far. Even though both are basic needs for a WfMS, the requirements associated with them are totally different. To achieve satisfactory scalability, on the one hand, the system must be designed such that a workflow instance can be controlled by several WF servers that are as independent from each other as possible. Dynamic WF modifications, on the other hand, necessitate a (logically) central control instance which knows the current, global state of a WF instance. For the first time, this paper presents methods which allow ad-hoc modifications (e.g., inserting, deleting, or shifting steps) to be performed in a distributed WfMS, i.e., a WfMS with partitioned WF execution graphs and distributed WF control. It is especially noteworthy that the system realizes the full functionality of the central case while, at the same time, achieving extremely favorable behavior with respect to communication costs.
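The kind of ad-hoc modification the abstract describes, inserting or deleting steps in a running workflow instance, can be illustrated with a minimal sketch. All names below (`Workflow`, `insert_step`, `delete_step`) are hypothetical and only mirror the idea of dynamically changing a WF execution graph while respecting its execution state; they do not reproduce the system from the paper.

```python
# Minimal sketch of ad-hoc modification of a running workflow instance.
# All class and method names are illustrative assumptions.

class Workflow:
    def __init__(self, steps):
        self.steps = list(steps)   # ordered step names
        self.position = 0          # index of the next step to execute

    def insert_step(self, new_step, before):
        """Insert new_step before an existing step, provided that
        step has not started yet (dynamic changes must respect state)."""
        idx = self.steps.index(before)
        if idx < self.position:
            raise ValueError(f"cannot insert before executed step {before!r}")
        self.steps.insert(idx, new_step)

    def delete_step(self, step):
        """Delete a step that has not been executed yet."""
        idx = self.steps.index(step)
        if idx < self.position:
            raise ValueError(f"cannot delete executed step {step!r}")
        del self.steps[idx]

    def run_next(self):
        step = self.steps[self.position]
        self.position += 1
        return step

wf = Workflow(["admit", "examine", "treat", "discharge"])
wf.run_next()                               # "admit" executes
wf.insert_step("lab_test", before="treat")  # ad-hoc change mid-execution
```

In a distributed WfMS, the check against `position` is exactly what becomes hard: no single server holds the global state, which is the tension the paper addresses.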
An MDE Approach for Automatic Code Generation from MARTE to OpenCL
Advanced engineering and scientific communities have used parallel programming to solve their large-scale, complex problems. Achieving high performance is the main advantage of this choice. However, since parallel programming requires a non-trivial distribution of tasks and data, developers find it hard to implement their applications effectively. Thus, in order to reduce design complexity, we propose an approach to generate code for the OpenCL API, an open standard for parallel programming of heterogeneous systems. The approach is based on Model Driven Engineering (MDE) and the Modeling and Analysis of Real-Time and Embedded Systems (MARTE) standard proposed by the Object Management Group (OMG). The aim is to provide resources that allow non-specialists in parallel programming to implement their applications. Moreover, concepts like reuse and platform independence are present: once we have designed an application and its execution-platform architecture, we can reuse the same project to add more functionality and/or change the target architecture. Consequently, this approach helps industries meet their time-to-market constraints. The resulting code, for both the host and the compute devices, consists of compilable source files that satisfy the specifications defined at design time.
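The core of such an MDE chain is model-to-text transformation: a platform-independent model is turned into compilable source files. The toy sketch below illustrates only that idea; the model schema, template, and kernel are invented for illustration, and a real MARTE-based chain involves far richer models and transformation languages.

```python
# Minimal sketch of model-to-text code generation: a tiny "model" of a
# data-parallel operation is rendered into OpenCL C kernel source.
# The model schema and template are illustrative assumptions only.

KERNEL_TEMPLATE = """__kernel void {name}(__global const float* a,
                     __global const float* b,
                     __global float* out) {{
    int i = get_global_id(0);
    out[i] = a[i] {op} b[i];
}}"""

def generate_kernel(model):
    """Emit OpenCL C source text from a simple model description."""
    return KERNEL_TEMPLATE.format(name=model["name"], op=model["op"])

source = generate_kernel({"name": "vec_add", "op": "+"})
print(source)
```

The generated string is a compilable OpenCL kernel; in the approach above, analogous host-side source files would be generated from the same model.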
Survey of Verification and Validation Techniques for Small Satellite Software Development
The purpose of this paper is to provide an overview of the current trends and practices in small-satellite software verification and validation. This document is not intended to promote a specific software assurance method. Rather, it seeks to present an unbiased survey of software assurance methods used to verify and validate small satellite software and to note the benefits and value of each approach. These methods include simulation and testing, verification and validation with model-based design, formal methods, and fault-tolerant software design with run-time monitoring. Although the literature reveals that simulation and testing has by far the longest legacy, model-based design methods are proving to be useful for software verification and validation. Some work in formal methods, though not widely used for any satellites, may offer new ways to improve small satellite software verification and validation. These methods need to be further advanced to deal with the state explosion problem and to become usable enough for small-satellite software engineers to apply them regularly to software verification. Lastly, it is explained how run-time monitoring, combined with fault-tolerant software design methods, provides an important means to detect and correct software errors that escape the verification process or that are produced after launch through the effects of ionizing radiation.
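The run-time monitoring mentioned at the end can be sketched as a set of invariant checks over telemetry, with a recovery hook fired on violation. The invariant names, thresholds, and recovery action below are illustrative assumptions, not drawn from any specific flight-software framework.

```python
# Sketch of a run-time monitor: telemetry samples are checked against
# named invariant predicates, and a handler runs on each violation.
# All names and thresholds here are illustrative assumptions.

def make_monitor(invariants, on_violation):
    """Return a checker for telemetry samples (dicts of readings)."""
    def check(sample):
        violated = [name for name, pred in invariants.items()
                    if not pred(sample)]
        for name in violated:
            on_violation(name, sample)
        return violated
    return check

log = []
monitor = make_monitor(
    invariants={
        "battery_ok":    lambda s: s["battery_v"] > 6.5,
        "temp_in_range": lambda s: -20.0 <= s["temp_c"] <= 60.0,
    },
    on_violation=lambda name, s: log.append(name),  # e.g. enter safe mode
)

monitor({"battery_v": 7.2, "temp_c": 25.0})   # nominal: no violations
monitor({"battery_v": 6.1, "temp_c": 25.0})   # battery invariant violated
```

Paired with fault-tolerant design (e.g., a safe-mode transition in the handler), this is the pattern for catching errors, including radiation-induced ones, that escape pre-launch verification.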
A scalable parallel finite element framework for growing geometries. Application to metal additive manufacturing
This work introduces an innovative parallel, fully-distributed finite element framework for growing geometries and its application to metal additive manufacturing. It is well known that virtual part design and qualification in additive manufacturing require highly accurate multiscale and multiphysics analyses. Only high performance computing tools are able to handle such complexity in time frames compatible with time-to-market. However, efficiency without loss of accuracy has rarely held the centre stage in the numerical community. Here, in contrast, the framework is designed to adequately exploit the resources of high-end distributed-memory machines. It is grounded on three building blocks: (1) hierarchical adaptive mesh refinement with octree-based meshes; (2) a parallel strategy to model the growth of the geometry; (3) state-of-the-art parallel iterative linear solvers. Computational experiments consider the heat transfer analysis at the part scale of the printing process by powder-bed technologies. After verification against a 3D benchmark, a strong-scaling analysis assesses performance and identifies major sources of parallel overhead. A third numerical example examines the efficiency and robustness of (2) in a curved 3D shape. Unprecedented parallelism and scalability were achieved in this work. Hence, this framework contributes to taking on higher complexity and/or accuracy, not only in part-scale simulations of metal or polymer additive manufacturing, but also in welding, sedimentation, atherosclerosis, or any other physical problem where the physical domain of interest grows in time.
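The growth strategy in building block (2) amounts to activating new cells of a background mesh as material is deposited layer by layer. The one-dimensional sketch below is an illustrative reduction of that idea only; the actual framework is 3D, octree-adaptive, and fully distributed.

```python
# 1D sketch of a "growing geometry": a fixed background grid whose
# cells are activated layer by layer as material is deposited.
# Purely illustrative; not the paper's parallel algorithm.

def grow(n_cells, layers_per_step, n_steps):
    active = [False] * n_cells
    deposited = 0
    history = []
    for _ in range(n_steps):
        for _ in range(layers_per_step):
            if deposited < n_cells:
                active[deposited] = True   # new layer joins the domain
                deposited += 1
        history.append(sum(active))        # active-cell count per step
    return history

# A domain of 10 cells, 2 layers deposited per time step, 4 steps:
print(grow(10, 2, 4))   # [2, 4, 6, 8]
```

At each step, the heat transfer problem would then be solved only on the currently active cells, which is why load balancing becomes a moving target as the domain grows.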
Product information management for complex modular security systems
A PIM system manages all the information that enables products to be marketed
through different channels. Its importance over a product's life cycle has grown
with the technical sophistication of products, whose information must be managed
internally and published externally. Systems such as ERP and CCMS should integrate
with a PIM system, which should act as the "backbone" of product information.
The main objective of this project is to create a solution for managing product
information for complex modular systems. The proposed solution includes the
creation of an ontology for part of the numerous systems available in the product
catalogue of one of the world's largest multinational organizations in the
engineering and technology sector. The process of creating the proposed solution
was based on the action-research methodology and was divided into five phases. In
the diagnosis phase, the current situation of the ERP and CCMS systems that manage
the online catalogue of marketed product systems was described and analysed; the
current product taxonomies were also surveyed and the proposal was drawn up. In
the action-planning phase, the work team, the Agile-inspired approach used to
develop the solution, the planning meetings, the work partners, and the tools to
be used, with their justification, were described. In the action-taking phase, the
process of creating the ontological solution and the final result were described,
including the construction of the new taxonomies and their validation by the
specialists; examples and graphical representations were produced using the
Protégé tool. In the evaluation phase, the ontological solution was tested,
confirming that the necessary requirements were satisfied by the structure. In the
learning-specification phase, the next steps for the implementation and future
management of the ontological model were proposed.
With this solution, the organization will be able to manage product information
and its data structure more efficiently. The solution is versatile enough to
manage individual products or complex modular systems and to improve communication
with the customer. Furthermore, the ontology has great potential if combined with
AI techniques. Some limitations of the project and proposals for future work were
also presented.
A synthesis of logic and bio-inspired techniques in the design of dependable systems
Much of the development of model-based design and dependability analysis in the design of dependable systems, including software intensive systems, can be attributed to the application of advances in formal logic and its application to fault forecasting and verification of systems. In parallel, work on bio-inspired technologies has shown potential for the evolutionary design of engineering systems via automated exploration of potentially large design spaces. We have not yet seen the emergence of a design paradigm that effectively combines these two techniques, schematically founded on the two pillars of formal logic and biology, from the early stages of, and throughout, the design lifecycle. Such a design paradigm would apply these techniques synergistically and systematically to enable optimal refinement of new designs which can be driven effectively by dependability requirements. The paper sketches such a model-centric paradigm for the design of dependable systems, presented in the scope of the HiP-HOPS tool and technique, that brings these technologies together to realise their combined potential benefits. The paper begins by identifying current challenges in model-based safety assessment and then overviews the use of meta-heuristics at various stages of the design lifecycle covering topics that span from allocation of dependability requirements, through dependability analysis, to multi-objective optimisation of system architectures and maintenance schedules
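The multi-objective optimisation of system architectures mentioned at the end hinges on Pareto selection between conflicting objectives such as cost and unavailability. The sketch below shows only that selection step; the candidate designs, costs, and failure probabilities are invented for illustration and do not come from HiP-HOPS.

```python
# Sketch of the selection step in multi-objective architecture
# optimisation: keep only Pareto-optimal candidates with respect to
# (cost, probability of failure). All candidate data is invented.

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and
    strictly better in at least one (lower is better in both)."""
    return (a[0] <= b[0] and a[1] <= b[1]) and (a[0] < b[0] or a[1] < b[1])

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# (cost in arbitrary units, probability of failure on demand)
designs = {
    "single channel":      (1.0, 1e-3),
    "duplex":              (2.1, 1e-5),
    "triple modular":      (3.4, 1e-7),
    "duplex, cheap parts": (1.8, 1e-4),
    "gold-plated single":  (2.5, 1e-3),   # dominated: costlier, no safer
}

front = pareto_front(list(designs.values()))
best_names = [n for n, v in designs.items() if v in front]
```

A meta-heuristic (e.g., a genetic algorithm, as typically used with HiP-HOPS) would generate candidate architectures and apply a filter like this each generation, with dependability analysis supplying the failure-probability objective.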