177 research outputs found

    New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs

    Full text link
    Thesis by compendium. The relevance of electronics to the safety of everyday devices has only been growing, as an ever larger share of their functionality is assigned to electronic components. This comes along with a constant need for higher performance to fulfill those functional requirements while keeping power consumption and cost low. In this scenario, industry is struggling to provide a technology that meets all the performance, power and price specifications, at the cost of an increased vulnerability to several types of known faults and the appearance of new ones. To cope with the new and growing faults in these systems, designers have been using traditional techniques from safety-critical applications, which in general offer suboptimal results. In fact, modern embedded architectures offer the possibility of optimizing dependability properties by enabling the interaction of the hardware, firmware and software levels in the process; however, this potential has not yet been fully realized. Advances at every level in that direction are much needed if flexible, robust, resilient and cost-effective fault tolerance is desired. The work presented here focuses on the hardware level, with the background consideration of a potential integration into a holistic approach. The efforts in this thesis have focused on several issues: (i) introducing additional fault models required for an adequate representation of the physical effects emerging in modern manufacturing technologies, (ii) providing tools and methods to efficiently inject both the proposed models and classical ones, (iii) analyzing the optimum method for assessing the robustness of systems by using extensive fault injection and later correlation with higher-level layers, in an effort to cut development time and cost, (iv) providing new detection methodologies to cope with the challenges captured by the proposed fault models, (v) proposing mitigation strategies aimed at tackling such new threat scenarios, and (vi) devising an automated methodology for deploying many fault tolerance mechanisms in a systematic, robust way. The outcomes of the thesis constitute a suite of tools and methods that help the designer of critical systems develop robust, validated, and on-time designs tailored to the application.
    Espinosa García, J. (2016). New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/73146
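    As a rough illustration of one of the classical fault models referenced in point (ii), the sketch below flips a single randomly chosen bit in a register-sized value. It is a deliberately simplified, software-level example, not the thesis's hardware-level injection tooling, and all names in it are hypothetical.

```cpp
// Simplified sketch of the classical single-bit-flip (single-event-upset) fault model.
#include <cstdint>
#include <iostream>
#include <random>

// Flip one randomly chosen bit of a 32-bit value, emulating a transient fault.
uint32_t inject_bit_flip(uint32_t value, std::mt19937& rng) {
    std::uniform_int_distribution<int> bit(0, 31);
    return value ^ (1u << bit(rng));
}

int main() {
    std::mt19937 rng(42);                 // fixed seed so the injection campaign is reproducible
    const uint32_t golden = 0x0000FF00u;  // fault-free reference value
    for (int run = 0; run < 3; ++run) {
        const uint32_t faulty = inject_bit_flip(golden, rng);
        std::cout << std::hex << golden << " -> " << faulty
                  << " (flipped mask " << (golden ^ faulty) << ")\n";
    }
}
```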

    The State of the Art of Automatic Programming

    Get PDF
    Automatic programming, or code generation, is a type of computer programming in which code is generated by tools that allow developers to write at a higher level of abstraction. Integrating such tools into the software development process is a good way to boost programmers' productivity, letting them focus on the task at hand rather than on implementation details. The existing literature on the subject typically examines a single approach or method; very few studies review the state of the art as a whole. This paper reviews the state of the art of automatic programming by surveying the existing literature on the topic using the systematic literature review method. It gives an overview of the approaches and algorithms in the area, examines issues and open questions in the field, and compares the state of the art with the state of practice. Of the 37 relevant studies, 19 addressed general definitions and subtopics of automatic programming, 30 presented specific algorithms or approaches, and 2 of the proposed techniques were applied in practice. Recently, the focus of automatic programming has shifted from program synthesis to inductive programming, driven by breakthroughs in artificial intelligence. The definition of the term and its subtopics is consistent among scholars; however, formulating correct specifications and providing sufficient information for automation remains an open research question.
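    As a toy illustration of the inductive programming direction mentioned above (not an approach taken from any of the surveyed studies), the sketch below enumerates a tiny space of candidate functions and keeps the one consistent with a set of input/output examples; every name in it is hypothetical.

```cpp
// Toy inductive programming: pick, from a small hypothesis space, a function
// consistent with all given input/output examples. Purely illustrative.
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
    // Candidate programs (the hypothesis space), each with a readable description.
    const std::vector<std::pair<std::string, std::function<int(int)>>> candidates = {
        {"x + 1", [](int x) { return x + 1; }},
        {"2 * x", [](int x) { return 2 * x; }},
        {"x * x", [](int x) { return x * x; }},
    };
    // Input/output examples the synthesized program must reproduce.
    const std::vector<std::pair<int, int>> examples = {{1, 1}, {2, 4}, {3, 9}};

    for (const auto& [name, f] : candidates) {
        bool consistent = true;
        for (const auto& [in, out] : examples) {
            if (f(in) != out) { consistent = false; break; }
        }
        if (consistent) {
            std::cout << "synthesized: f(x) = " << name << "\n";  // prints "x * x"
            break;
        }
    }
}
```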

    Unwoven Aspect Analysis

    Get PDF
    Various languages and tools supporting advanced separation of concerns (such as aspect-oriented programming) give a software developer the ability to separate functional and non-functional programmatic intentions. Once these separate pieces of the software have been specified, the tools automatically handle the interaction points between separate modules, relieving the developer of this chore and permitting more understandable, maintainable code. Many approaches have left traditional compiler analysis and optimization until after the composition has been performed; unfortunately, analyses performed after composition cannot make use of the logical separation present in the original program. Further, for modular systems that can be configured with different sets of features, testing under every possible combination of features may be necessary to avoid bugs in production software, yet doing so is time-consuming. To address this testing problem, we investigate a feature-aware compiler analysis that runs during composition and discovers features that are strongly independent of each other. When their independence can be established, the number of feature combinations that must be tested separately can be reduced. We develop this approach and discuss our implementation. We also look ahead to future programming languages in two ways: we implement solutions to problems that are conceptually aspect-oriented but for which current aspect languages and tools fall short, and we study these cases to consider what language designs might provide even more information to a compiler. We describe some features that such a future language might have, based on our observations of current language deficiencies and our experience with compilers for these languages.
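    As a back-of-the-envelope illustration of the combinatorial saving described above (under the simplifying assumption that independent feature groups can each be tested in isolation, which is not the thesis's exact model), the sketch below compares exhaustive testing of all feature combinations against testing each independent group separately; all names are hypothetical.

```cpp
// Compare the number of test configurations when all features may interact
// (2^n combinations) versus when an analysis has partitioned them into
// independent groups exercised separately (sum of 2^size per group).
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    // Suppose a feature-aware analysis split 12 boolean features into groups of 5, 4 and 3.
    const std::vector<int> independent_group_sizes = {5, 4, 3};

    int total_features = 0;
    std::uint64_t per_group_total = 0;
    for (int size : independent_group_sizes) {
        total_features += size;
        per_group_total += std::uint64_t{1} << size;  // exhaustive tests within one group
    }
    const std::uint64_t exhaustive = std::uint64_t{1} << total_features;

    std::cout << "exhaustive combinations: " << exhaustive << "\n";       // 4096
    std::cout << "with independence:       " << per_group_total << "\n";  // 32 + 16 + 8 = 56
}
```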

    ICSEA 2022: the seventeenth international conference on software engineering advances

    Get PDF
    The Seventeenth International Conference on Software Engineering Advances (ICSEA 2022), held between October 16th and October 20th, 2022, continued a series of events covering a broad spectrum of software-related topics. The conference covered fundamentals of designing, implementing, testing, validating and maintaining various kinds of software. Several tracks were proposed to treat the topics from theory to practice, in terms of methodologies, design, implementation, testing, use cases, tools, and lessons learned. The conference topics covered classical and advanced methodologies, open source, agile software, as well as software deployment, software economics and education. Other advanced topics relate to practical run-time concerns, such as run-time vulnerability checking, rejuvenation processes, updates, partial or temporary feature deprecation, software deployment and configuration, and on-line software updates. These aspects trigger implications related to patenting, licensing, engineering education, new ways for software adoption and improvement, and, ultimately, software knowledge management. Many advanced applications require robust, safe, and secure software: disaster recovery applications, vehicular systems, biomedical-related software, biometrics-related software, mission-critical software, e-health-related software, and crisis-situation software. These applications require appropriate software engineering techniques, metrics and formalisms, such as software reuse, suitable software quality metrics, composition and integration, consistency checking, model checking, provers and reasoning. The nature of research in software varies slightly with the specific discipline researchers work in, yet there is much common ground and room for sharing best practice, frameworks, tools, languages and methodologies. Despite the number of experts available, little work is done at the meta level, that is, examining how we go about our research and how this process can be improved. There are questions related to the choice of programming language, IDEs, and documentation styles and standards. Reuse can be of great benefit to research projects, yet reuse of prior research projects introduces special problems that need to be mitigated. The research environment is a mix of creativity and systematic approach, which leads to a creative tension that needs to be managed or at least monitored. Much of the coding in any university is undertaken by research students or young researchers. Issues of skills training, development and quality control can have significant effects on an entire department. In an industrial research setting, the environment is not quite that of industry as a whole, nor does it follow the pattern set by the university. The unique approaches and issues of industrial research may hold lessons for researchers in other domains. We take this opportunity to warmly thank all the members of the ICSEA 2022 technical program committee, as well as all the reviewers. The creation of such a high-quality conference program would not have been possible without their involvement. We also kindly thank all the authors who dedicated much of their time and effort to contribute to ICSEA 2022. We truly believe that, thanks to all these efforts, the final conference program consisted of top-quality contributions. We also thank the members of the ICSEA 2022 organizing committee for their help in handling the logistics of this event. We hope that ICSEA 2022 was a successful international forum for the exchange of ideas and results between academia and industry and for the promotion of progress in software engineering advances.

    Multiscale Universal Interface: A Concurrent Framework for Coupling Heterogeneous Solvers

    Full text link
    Concurrently coupled numerical simulations using heterogeneous solvers are powerful tools for modeling multiscale phenomena. However, major modifications to existing codes are often required to enable such simulations, posing significant difficulties in practice. In this paper we present a C++ library, the Multiscale Universal Interface (MUI), which is capable of facilitating the coupling effort for a wide range of multiscale simulations. The library adopts a header-only form with minimal external dependencies and hence can easily be dropped into existing codes. A data sampler concept is introduced, combined with a hybrid dynamic/static typing mechanism, to create an easily customizable framework for solver-independent data interpretation. The library integrates MPI MPMD support and an asynchronous communication protocol to handle inter-solver information exchange irrespective of the solvers' own MPI awareness. Template metaprogramming is heavily employed to simultaneously improve runtime performance and code flexibility. We validated the library by solving three different multiscale problems, which also serve to demonstrate the flexibility of the framework in handling heterogeneous models and solvers. In the first example, a Couette flow was simulated using two concurrently coupled Smoothed Particle Hydrodynamics (SPH) simulations of different spatial resolutions. In the second example, we coupled the deterministic SPH method with the stochastic Dissipative Particle Dynamics (DPD) method to study the effect of surface grafting on the hydrodynamic properties of the surface. In the third example, we considered conjugate heat transfer between a solid domain and a fluid domain by coupling the particle-based energy-conserving DPD (eDPD) method with the Finite Element Method (FEM). The library source code is freely available under the GPLv3 license at http://www.cfm.brown.edu/repo/release/MUI
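    The sketch below illustrates the general "data sampler" idea described in the abstract: one solver pushes scattered point samples of a named field, and another solver fetches that field at its own points through a pluggable sampler object that decides how the samples are interpreted. The names (CouplingInterface, NearestSampler) and the single-process setup are illustrative assumptions, not MUI's actual API.

```cpp
// Generic sampler-based coupling sketch (not the MUI interface itself).
#include <iostream>
#include <limits>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

struct Point { double x, y, z; };

// One possible sampler: return the value of the nearest pushed sample.
struct NearestSampler {
    double operator()(const std::vector<std::pair<Point, double>>& samples,
                      const Point& at) const {
        double best = std::numeric_limits<double>::infinity();
        double value = 0.0;
        for (const auto& s : samples) {
            const double dx = s.first.x - at.x, dy = s.first.y - at.y, dz = s.first.z - at.z;
            const double d2 = dx * dx + dy * dy + dz * dz;
            if (d2 < best) { best = d2; value = s.second; }
        }
        return value;
    }
};

// Minimal coupling interface: solvers push named fields and fetch them through a sampler.
class CouplingInterface {
public:
    void push(const std::string& field, const Point& p, double v) {
        fields_[field].emplace_back(p, v);
    }
    template <typename Sampler>
    double fetch(const std::string& field, const Point& at, const Sampler& sampler) const {
        return sampler(fields_.at(field), at);
    }
private:
    std::unordered_map<std::string, std::vector<std::pair<Point, double>>> fields_;
};

int main() {
    CouplingInterface iface;
    // Solver A (e.g. an SPH code) pushes temperature samples at its particle positions.
    iface.push("temperature", {0.0, 0.0, 0.0}, 300.0);
    iface.push("temperature", {1.0, 0.0, 0.0}, 310.0);
    // Solver B (e.g. an FEM code) fetches the field at its own quadrature point.
    std::cout << iface.fetch("temperature", {0.9, 0.0, 0.0}, NearestSampler{}) << "\n";  // 310
}
```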

    The 6th Conference of PhD Students in Computer Science

    Get PDF

    Opt: A Domain Specific Language for Non-linear Least Squares Optimization in Graphics and Imaging

    Full text link
    Many graphics and vision problems can be expressed as non-linear least squares optimizations of objective functions over visual data, such as images and meshes. The mathematical descriptions of these functions are extremely concise, but their implementation in real code is tedious, especially when optimized for real-time performance on modern GPUs in interactive applications. In this work, we propose a new language, Opt (available at http://optlang.org), for writing these objective functions over image- or graph-structured unknowns concisely and at a high level. Our compiler automatically transforms these specifications into state-of-the-art GPU solvers based on Gauss-Newton or Levenberg-Marquardt methods. Opt can generate different variations of the solver, so users can easily explore trade-offs in numerical precision, matrix-free methods, and solver approaches. In our results, we implement a variety of real-world graphics and vision applications. Their energy functions are expressible in tens of lines of code and produce highly optimized GPU solver implementations. These solvers have performance competitive with the best published hand-tuned, application-specific GPU solvers, and orders of magnitude beyond a general-purpose auto-generated solver.
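    For background (standard formulations, not equations taken from the Opt paper itself): writing the objective as F(x) = ½‖r(x)‖², with residual vector r and Jacobian J, the two solver families mentioned above iterate updates of the form

```latex
x_{k+1} = x_k - \left(J^\top J\right)^{-1} J^\top r(x_k)
    \qquad \text{(Gauss-Newton)}

x_{k+1} = x_k - \left(J^\top J + \lambda\,\operatorname{diag}(J^\top J)\right)^{-1} J^\top r(x_k)
    \qquad \text{(Levenberg-Marquardt)}
```

    where the damping parameter λ is adapted per iteration; some formulations use λI in place of λ diag(JᵀJ).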

    Self-Organizing Software Architectures

    Get PDF
    Looking at engineering productivity is a source of improvements to the state of software engineering. We present two approaches to improving productivity: bottom-up modeling and self-configuring software components. Productivity, measured as the ability to produce correctly working software features with limited resources, is improved by performing fewer wasteful activities and by concentrating on the activities required to build sustainable software development organizations. Bottom-up modeling is a way to combine improved productivity with agile software engineering. Instead of focusing on tools and up-front planning, the models used emerge as the product's requirements are unveiled during the project. The idea is to make the modeling formalisms strong enough to be employed in code generation and as runtime models. This brings the benefits of model-driven engineering to agile projects, where such benefits have been rare. Self-configuring components are a development of bottom-up modeling: the notion of a source model is extended to incorporate the software entities themselves. Using computational reflection and introspection, dependent components of the software can be automatically updated to reflect changes in their dependencies. This improves maintainability, making software changes faster. The thesis contains a number of case studies explaining ways of applying the presented techniques. In addition to constructing the case studies, an empirical validation with test subjects is presented to show the usefulness of the techniques.
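    The sketch below is a generic illustration of the "dependent components update automatically" idea, not the thesis's reflection-based mechanism (which relies on runtime introspection that C++ does not natively provide); all names in it are hypothetical.

```cpp
// Generic self-configuring dependency sketch: when a value changes, components
// that registered a dependency on it are re-configured automatically instead of
// being updated by hand. Illustrative only.
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

class ConfigRegistry {
public:
    // A dependent component registers a callback describing how it reacts to a change.
    void depends_on(const std::string& key, std::function<void(const std::string&)> update) {
        dependents_[key].push_back(std::move(update));
    }
    // Changing a value automatically re-configures everything that depends on it.
    void set(const std::string& key, const std::string& value) {
        values_[key] = value;
        for (const auto& update : dependents_[key]) update(value);
    }
private:
    std::unordered_map<std::string, std::string> values_;
    std::unordered_map<std::string, std::vector<std::function<void(const std::string&)>>> dependents_;
};

int main() {
    ConfigRegistry registry;
    // A logging component declares its dependency once; no manual propagation later.
    registry.depends_on("log.level", [](const std::string& level) {
        std::cout << "logger reconfigured to level " << level << "\n";
    });
    registry.set("log.level", "debug");  // dependents update automatically
}
```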