Local Reasoning about Parametric and Reconfigurable Component-based Systems
We introduce a logical framework for the specification and verification of component-based systems in which finitely many component instances are active, but the bound on their number is not known. Besides specifying and verifying parametric systems, we consider the aspect of dynamic reconfiguration, in which components can migrate at runtime on a physical map whose shape and size may change. We describe such parametric and reconfigurable architectures using resource logics, close in spirit to Separation Logic, which is used to reason about dynamic pointer structures. These logics support the principle of local reasoning, which is the key to writing modular specifications and building scalable verification algorithms that deal with large industrial-size systems.
Reuse and integration of specification logics: the hybridisation perspective
Hybridisation is a systematic process along which the characteristic features
of hybrid logic, both at the syntactic and the semantic levels, are developed on
top of an arbitrary logic framed as an institution. It also captures the construction
of first-order encodings of such hybridised institutions into theories in first-order
logic. The method was originally developed to build suitable logics for the specification
of reconfigurable software systems on top of whatever logic is used to describe
local requirements of each system’s configuration. Hybridisation has, however, a
broader scope, providing a fresh example of yet another development in combining
and reusing logics driven by a problem from Computer Science. This paper offers an
overview of this method, proposes some new extensions, namely the introduction of
full quantification leading to the specification of dynamic modalities, and exemplifies
its potential through a didactical application. It is discussed how hybridisation
can be successfully used in a formal specification course in which students progress
from equational to hybrid specifications in a uniform setting, integrating paradigms,
combining data and behaviour, and dealing appropriately with systems evolution and
reconfiguration.
This work is financed by the ERDF (European Regional Development Fund)
through the Operational Programme for Competitiveness and Internationalisation (COMPETE
2020 Programme), and by National Funds through the FCT (Portuguese Foundation for Science
and Technology) within project POCI-01-0145-FEDER-006961. M. Martins was further
supported by project UID/MAT/04106/2013. The research of A. Madeira and R. Neves was
carried out in the context of a post-doc and a Ph.D. grant with references SFRH/BPD/103004/2014
and SFRH/BD/52234/2013, respectively. L.S. Barbosa is also supported by SFRH/BSAB/113890/2015.
Adaptation of Manufacturing Systems in Dynamic Environment Based on Capability Description Method
Multi-Softcore Architecture on FPGA
To meet the high performance demands of embedded multimedia applications, embedded systems are integrating multiple processing units. However, they are mostly based on a custom-logic design methodology. Designing parallel multicore systems from available standard intellectual-property cores while maintaining high performance is also a challenging issue. Softcore processors and field-programmable gate arrays (FPGAs) are a cheap and fast option for developing and testing such systems. This paper describes an FPGA-based design methodology to implement a rapid prototype of parametric multicore systems. A study of the viability of building the SoC using the NIOS II soft-processor core from Altera is also presented. The NIOS II features a general-purpose RISC CPU architecture designed to address a wide range of applications. The performance of the implemented architecture is discussed, and several parallel applications are used to test the speedup and efficiency of the system. Experimental results demonstrate the performance of the proposed multicore system, which achieves better speedup than the GPU (29.5% faster for the FIR filter and 23.6% faster for matrix-matrix multiplication).
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these data and to make decisions pertaining to the proper
functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth in complexity that optical networks have faced in
recent years. This increase in complexity is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy; to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
Virtual manufacturing: prediction of work piece geometric quality by considering machine and set-up
Link to the publisher's version: http://www.tandfonline.com/doi/full/10.1080/0951192X.2011.569952#.U4yZIHeqP3U
In the context of concurrent engineering, the design of the parts, the production planning and the manufacturing facility must be considered simultaneously. The design and development cycle can thus be reduced, as manufacturing constraints are taken into account as early as possible. The design phase therefore takes the manufacturing constraints into account alongside the customer requirements; moreover, these constraints must not restrict the creativity of the design. Also, to facilitate the choice of the most suitable system for a specific process, Virtual Manufacturing is supplemented with developments in numerical computation (Altintas et al. 2005, Bianchi et al. 1996) in order to compare, at low cost, several solutions developed under different hypotheses without manufacturing prototypes. In this context, the authors aim to predict the workpiece geometry more accurately by considering machine defects and workpiece set-up, through the use of process simulation. A particular case study based on a 3-axis milling machine is used here to illustrate the authors' point of view. This study focuses on the following geometric defects: machine geometric errors, workpiece positioning errors due to the fixture system, and part accuracy.
Performance Assessment Strategies
Using engineering performance evaluations to explore design alternatives during the conceptual phase of architectural design helps to understand the relationships between form and performance, and is crucial for developing well-performing final designs. Computer-aided conceptual design has the potential to aid the design team in discovering and highlighting these relationships, especially by means of procedural and parametric geometry to support the generation of geometric designs, and building performance simulation tools to support performance assessments. However, current tools and methods for computer-aided conceptual design in architecture neither explicitly reveal nor allow backtracking of the relationships between the performance and the geometry of the design. They currently support post-engineering, rather than early design decisions and the design exploration process.
Focusing on large roofs, this research aims at developing a computational design approach to support designers in performance-driven explorations. The approach is meant to facilitate multidisciplinary integration and the learning process of the designer, and not to constrain the process in precompiled procedures or hard engineering formulations, nor to automate it by delegating design creativity to computational procedures. PAS (Performance Assessment Strategies) as a method is the main output of the research. It consists of a framework including guidelines and an extensible library of procedures for parametric modelling. It is structured in three parts.
Pre-PAS provides guidelines for defining a design strategy, toward the parameterization process. Model-PAS provides guidelines, procedures and scripts for building the parametric models. Explore-PAS supports the assessment of solutions based on numeric evaluations and performance simulations, until a suitable design solution is identified. PAS has been developed based on action research. Several case studies have focused on each step of PAS and on their interrelationships.
The relations between the knowledge available in pre-PAS and the challenges of exploring the solution space in explore-PAS have been highlighted. In order to facilitate the explore-PAS phase in the case of large solution spaces, the support of genetic algorithms has been investigated and the existing method ParaGen has been further developed. Final case studies have focused on the potential of ParaGen to identify well-performing solutions; to extract knowledge during explore-PAS; and to allow interventions by the designer as an alternative to generations driven solely by coded criteria.
Both the use of PAS and its recommended future developments are addressed in the thesis.