303 research outputs found

    Acta Cybernetica : Volume 22. Number 2.

    Computer Aided Verification

    The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented together with 13 tool papers and 2 case studies were carefully reviewed and selected from 258 submissions. The papers are organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems, runtime techniques; dynamical, hybrid, and reactive systems. Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.

    Cost estimation in initial development stages of products: an ontological approach

    Cost estimation in the early stages of a product is fraught with uncertainty. The conceptual design phase of product development is characterized by an absence of data, the most critical being cost data. The impact of costs in the initial phases of a project is low, but costs discovered only in later stages represent great risks. As there are no structured means of obtaining costs in the conceptual phase, reusing data from past projects is an alternative discussed in the literature. Knowledge management approaches can retrieve data that is unavailable in the current phase from successful earlier projects. The use of an ontology is discussed as an approach to generating knowledge from data stored in a database. The proposed solution estimates costs based on previous projects: a query describing the product's function and settings is formulated, and the ontological model searches the classes, instances, and properties in the database and generates a cost estimate. The costs of a previous project are reused to produce a new, agile cost estimate without the need to consult other industry sectors. This dissertation project follows the Design Science Research methodological framework, making partial deliveries up to the final artifact, an ontological model. The proposal has great potential in industry, considering that no existing tools address the initial phases with the same efficiency. Funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
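
    The query-then-retrieve flow described above can be sketched in a few lines. The example below is a minimal, hypothetical illustration using Python's rdflib: the ontology file name and the ex: vocabulary (ex:Project, ex:hasFunction, ex:hasCost) are invented for illustration and are not the dissertation's actual model.

    # Minimal sketch of querying a product ontology for past-project costs.
    # The file name and the ex: vocabulary are hypothetical, invented for
    # illustration; the dissertation's actual model will differ.
    from rdflib import Graph

    g = Graph()
    g.parse("past_projects.ttl", format="turtle")  # ontology + instance data

    # Describe the desired product function, then look up past projects
    # whose function matches and reuse their recorded costs.
    query = """
    PREFIX ex: <http://example.org/product#>
    SELECT ?project ?cost WHERE {
        ?project a ex:Project ;
                 ex:hasFunction ?f ;
                 ex:hasCost ?cost .
        ?f ex:functionLabel "transmit torque" .
    }
    """

    costs = [float(row.cost) for row in g.query(query)]
    if costs:
        # A simple aggregate over matching past projects serves as the estimate.
        print(f"Estimated cost: {sum(costs) / len(costs):.2f}")
    else:
        print("No matching past project found.")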

    A New Method for Efficient Parallel Solution of Large Linear Systems on a SIMD Processor.

    This dissertation proposes a new technique for the efficient parallel solution of very large linear systems of equations on a SIMD processor. The model problem used to investigate both the efficiency and the applicability of the technique had a regular structure with semi-bandwidth β and resulted from the approximation of a second-order, two-dimensional elliptic equation on a regular domain under Dirichlet and periodic boundary conditions. With only slight modifications, chiefly to properly account for the mathematical effects of varying bandwidths, the technique can be extended to the solution of any regular banded system. The computational platform was the MasPar MP-X (model 1208B), a massively parallel processor (hostname hurricane) housed in the Concurrent Computing Laboratory of the Physics/Astronomy department, Louisiana State University. The maximum bandwidth for which the problem fit the nyproc × nxproc machine array exactly was determined; this size, as well as smaller ones, was used in four experiments to evaluate the efficiency of the new technique. Four benchmark algorithms, two direct (Gauss elimination (GE) and orthogonal factorization) and two iterative (symmetric over-relaxation (SOR, ω = 2) and the conjugate gradient method (CG)), were used to test the new approach against three evaluation metrics: the deviation of the computed results from the exact solution, measured as average absolute error; CPU time; and megaflop rate. All the benchmarks except GE were implemented in parallel. In all evaluation categories the new approach outperformed the benchmarks, markedly so when N ≫ p, where p is the number of processors and N the problem size. At the maximum system size, the new method was about 2.19 times more accurate and about 1.7 times faster than the benchmarks. But when the system size was much smaller than the machine size, the new approach's performance deteriorated precipitously; in this circumstance it performed worse than GE, the serial code. Hence, the technique is recommended for solving linear systems with regular structure on array processors when the problem size is large relative to the number of processors.
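
    Of the iterative benchmarks mentioned above, the conjugate gradient method is standard enough to sketch. The following is a generic, serial NumPy version for a symmetric positive-definite system, shown only to illustrate the benchmark; it is not the dissertation's parallel MasPar implementation.

    # Textbook conjugate gradient for a symmetric positive-definite system
    # Ax = b; a serial NumPy sketch, not the parallel MasPar implementation
    # benchmarked in the dissertation.
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x            # initial residual
        p = r.copy()             # initial search direction
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)       # step length along p
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p   # next conjugate direction
            rs_old = rs_new
        return x

    # Small banded (tridiagonal) test system of the Poisson type.
    n = 100
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x = conjugate_gradient(A, b)
    print(np.linalg.norm(A @ x - b))  # residual norm, ~0 on convergence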

    Acta Cybernetica : Volume 21. Number 1.

    A high-performance open-source framework for multiphysics simulation and adjoint-based shape and topology optimization

    The first part of this thesis presents the advances made in the open-source software SU2 towards transforming it into a high-performance framework for the design and optimization of multiphysics problems. Through this work, and in collaboration with other authors, a tenfold performance improvement was achieved for some problems. More importantly, problems that had previously been impossible to solve in SU2 can now be used in numerical optimization with shape or topology variables. Furthermore, it is now far simpler to study new multiphysics applications and to develop new numerical schemes that take advantage of modern high-performance computing systems. In the second part of this thesis, these capabilities allowed the application of topology optimization to medium-scale fluid-structure interaction problems using high-fidelity models (nonlinear elasticity and the Reynolds-averaged Navier-Stokes equations), which had not been done before in the literature. This showed that topology optimization can be used to target aerodynamic objectives by tailoring the interaction between fluid and structure. However, it also made evident the limitations of density-based methods for this type of problem, in particular in reliably converging to discrete solutions. This was overcome with new strategies to both guarantee and accelerate (i.e. reduce the overall computational cost of) the convergence to discrete solutions in fluid-structure interaction problems.
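
    The difficulty of driving density-based designs toward discrete (0/1) solutions is commonly tackled with a smoothed Heaviside projection of the filtered densities, whose sharpness parameter is gradually increased during the optimization. The sketch below shows that generic projection; it is illustrative only and is neither SU2's implementation nor the specific strategy developed in the thesis.

    # Generic smoothed Heaviside projection used in density-based topology
    # optimization to push intermediate densities toward 0/1. Illustrative
    # only; not SU2's code nor the thesis's convergence strategy.
    import numpy as np

    def heaviside_projection(rho, beta, eta=0.5):
        """Project filtered densities rho in [0, 1] toward 0/1.
        beta sets the sharpness (continued upward during optimization);
        eta is the projection threshold."""
        num = np.tanh(beta * eta) + np.tanh(beta * (rho - eta))
        den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
        return num / den

    rho = np.linspace(0.0, 1.0, 5)
    for beta in (1.0, 8.0, 64.0):  # increasing beta sharpens the projection
        print(beta, np.round(heaviside_projection(rho, beta), 3))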