
    Software quality evaluation models applicable in health information and communications technologies: a review of the literature

    The use of Information and Communications Technologies (ICT) in healthcare has increased the need to consider quality criteria through standardised processes. The aim of this study was to analyse the software quality evaluation models applicable to healthcare from the perspective of ICT purchasers. Through a systematic literature review with the keywords software, product, quality, evaluation and health, we selected and analysed 20 original research papers published between 2005 and 2016 in health science and technology databases. The results showed four main topics: non-ISO models, software quality evaluation models based on ISO/IEC standards, studies analysing software quality evaluation models, and studies analysing ISO standards for software quality evaluation. The models provide cost-efficiency criteria for specific software and improve use outcomes. The ISO/IEC 25000 standard is shown to be the most suitable for evaluating the quality of ICTs for healthcare use from the perspective of institutional acquisition.

    Software Quality Measurement Based on ISO/IEC 25000: A Systematic Mapping

    Software quality is a long-standing topic of study and research in the history of software engineering. The discussion starts from what is to be measured (process or product), the evaluator's point of view, and how to determine the parameters for measuring software quality. From the product point of view, software quality can be measured using the ISO/IEC 25000 standard, also known as Software product Quality Requirements and Evaluation (SQuaRE). The ISO/IEC 25000 series comprises eight characteristics: Functional Suitability, Performance Efficiency, Compatibility, Usability, Reliability, Security, Maintainability and Portability. The quality measurement standard based on ISO/IEC 25000 was first released in 2005 and has been revised or updated several times, yet research using the ISO/IEC 25000 series remains relatively scarce compared with ISO/IEC 9126. This study performs a mapping study to chart the spread of research using the SQuaRE standard. The systematic mapping is expected to surface open problems in measuring software quality based on ISO/IEC 25000 that other researchers can take up. Keywords: systematic mapping, software quality, ISO/IEC 25000, SQuaRE
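    As a rough illustration of the product view described above, the Python sketch below computes a weighted overall score from per-characteristic assessments of the eight ISO/IEC 25000 product quality characteristics. The weights, scores, and function names are illustrative assumptions, not taken from the paper.

    # Minimal sketch of a weighted score over the eight ISO/IEC 25000
    # (SQuaRE) product quality characteristics. The weights and the
    # per-characteristic scores below are illustrative, not from the paper.

    CHARACTERISTICS = [
        "Functional Suitability", "Performance Efficiency", "Compatibility",
        "Usability", "Reliability", "Security", "Maintainability", "Portability",
    ]

    def quality_score(scores: dict[str, float], weights: dict[str, float]) -> float:
        """Weighted average of per-characteristic scores on a 0-1 scale."""
        missing = set(CHARACTERISTICS) - scores.keys()
        if missing:
            raise ValueError(f"unscored characteristics: {missing}")
        total_weight = sum(weights.get(c, 1.0) for c in CHARACTERISTICS)
        return sum(scores[c] * weights.get(c, 1.0) for c in CHARACTERISTICS) / total_weight

    if __name__ == "__main__":
        # Hypothetical assessment of one software product.
        scores = {c: 0.8 for c in CHARACTERISTICS}
        scores["Security"] = 0.6
        weights = {"Security": 2.0, "Reliability": 2.0}  # emphasize critical traits
        print(f"overall quality: {quality_score(scores, weights):.2f}")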

    Using Dependability Benchmarks to Support ISO/IEC SQuaRE

    The integration of Commercial-Off-The-Shelf (COTS) components in software has reduced time-to-market and production costs, but selecting the most suitable component among those available remains a challenging task. This selection process, typically named benchmarking, requires evaluating the behaviour of eligible components in operation and ranking them according to quality characteristics. Most existing benchmarks only provide measures characterising the behaviour of software systems in the absence of faults, ignoring the strong impact that both accidental and malicious faults have on software quality. However, since using COTS to build a system may cause dependability issues to emerge from the interaction between components, benchmarking the system in the presence of faults is essential. The recent ISO/IEC 25045 standard addresses this gap by considering accidental faults when assessing the recoverability capabilities of software systems. This paper proposes a dependability benchmarking approach to determine the impact that faults (noted as disturbances in the standard), either accidental or malicious, may have on the quality features exhibited by software components. As will be shown, the usefulness of the approach embraces all evaluator profiles (developers, acquirers and third-party evaluators) identified in the ISO/IEC 25000 "SQuaRE" standard. The feasibility of the proposal is illustrated through the benchmarking of three distinct software components, each implementing the OLSR protocol specification, competing for integration in a wireless mesh network. © 2011 IEEE.
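    To make the benchmarking idea above concrete, the following Python sketch runs each candidate component under the same workload with and without injected disturbances, then ranks candidates by the degradation of a quality measure. All names (run_workload, make_candidate, the throughput measure) are hypothetical placeholders under assumed conditions, not the paper's actual tooling or results.

    # Sketch of the dependability-benchmarking idea described above: exercise
    # each candidate component under the same workload, first fault-free and
    # then with injected disturbances, and rank candidates by the degradation
    # of a quality measure. Names and numbers are hypothetical placeholders.
    import random
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Result:
        name: str
        baseline: float   # measure in the absence of faults
        disturbed: float  # measure in the presence of disturbances

        @property
        def degradation(self) -> float:
            return (self.baseline - self.disturbed) / self.baseline

    def benchmark(candidates: dict[str, Callable[[bool], float]],
                  runs: int = 10) -> list[Result]:
        """Run each candidate with and without disturbances; average the measure."""
        results = []
        for name, run_workload in candidates.items():
            baseline = sum(run_workload(False) for _ in range(runs)) / runs
            disturbed = sum(run_workload(True) for _ in range(runs)) / runs
            results.append(Result(name, baseline, disturbed))
        # Rank: smaller degradation under faults means better dependability.
        return sorted(results, key=lambda r: r.degradation)

    if __name__ == "__main__":
        # Stand-ins for three competing implementations (e.g., of one protocol).
        def make_candidate(fault_penalty: float) -> Callable[[bool], float]:
            def run(inject: bool) -> float:
                throughput = random.gauss(100, 5)  # fault-free behaviour
                return throughput * (1 - fault_penalty) if inject else throughput
            return run

        candidates = {"impl-A": make_candidate(0.10),
                      "impl-B": make_candidate(0.30),
                      "impl-C": make_candidate(0.05)}
        for r in benchmark(candidates):
            print(f"{r.name}: degradation {r.degradation:.0%}")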

    Improving the process of analysis and comparison of results in dependability benchmarks for computer systems

    Thesis by compendium. Dependability benchmarks are designed to assess, through the quantitative characterization of performance and dependability attributes, the behavior of systems in the presence of faults. In this type of benchmark, where systems are assessed in the presence of perturbations, not being able to select the most suitable system may have serious consequences (economic, reputational, or even loss of lives). For that reason, dependability benchmarks are expected to meet certain properties, such as non-intrusiveness, representativeness, repeatability and reproducibility, that guarantee the robustness and accuracy of their processes.
    However, despite the importance of comparing systems or components, the field of dependability benchmarking has a standing problem with the analysis and comparison of results. While the main research focus has been on developing and improving experimental procedures to obtain the required measures in the presence of faults, the processes for analyzing and comparing results were mostly unattended. As a consequence, many works in this field analyze and compare results of different systems ambiguously, basing the analysis on argumentation, or do not report the analysis process at all. Under these circumstances, benchmark users find it difficult to use these benchmarks and to compare their results with those obtained by other users. Extending the application of these dependability benchmarks and cross-exploiting results across works is therefore currently impractical.
    This thesis has focused on developing a methodology to help dependability benchmark developers and users tackle the problems present in the analysis and comparison of results. Designed to guarantee the fulfillment of dependability benchmark properties, the methodology integrates the process of analyzing results into the procedural flow of a dependability benchmark. Inspired by procedures from the field of operational research, it provides evaluators with the means to make their analysis process explicit and more representative of the given context. The results obtained from applying this methodology to several case studies in different application domains show the contributions of this work to improving the process of analysis and comparison of results in dependability benchmarking for computer systems.
    Martínez Raga, M. (2018). Improving the process of analysis and comparison of results in dependability benchmarks for computer systems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/111945
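    The following Python sketch illustrates the kind of explicit, operational-research-style comparison step the thesis advocates: normalize each benchmark measure, apply weights declared up front, and rank systems by the resulting score, so the analysis can be reproduced rather than argued informally. The criteria, weights, and measured values are illustrative assumptions, not taken from the thesis.

    # Minimal sketch of an explicit result-comparison step: min-max normalize
    # each measure, apply declared weights, rank systems by weighted score.
    # Criteria, weights, and values below are illustrative, not from the thesis.

    MEASURES = {  # system -> (throughput ops/s, recovery time s, availability)
        "system-1": (950.0, 12.0, 0.999),
        "system-2": (870.0, 4.0, 0.9995),
        "system-3": (990.0, 30.0, 0.995),
    }
    WEIGHTS = (0.3, 0.3, 0.4)             # declared up front, part of the analysis
    HIGHER_IS_BETTER = (True, False, True)  # recovery time is a cost-type measure

    def normalize(values: list[float], higher_better: bool) -> list[float]:
        """Min-max normalization to [0, 1], flipped for cost-type measures."""
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0
        return [(v - lo) / span if higher_better else (hi - v) / span
                for v in values]

    def rank(measures, weights, directions):
        names = list(measures)
        columns = list(zip(*measures.values()))  # one column per criterion
        norm = [normalize(list(col), d) for col, d in zip(columns, directions)]
        scores = {n: sum(w * norm[j][i] for j, w in enumerate(weights))
                  for i, n in enumerate(names)}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    if __name__ == "__main__":
        for name, score in rank(MEASURES, WEIGHTS, HIGHER_IS_BETTER):
            print(f"{name}: {score:.3f}")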