    Informaticology: combining Computer Science, Data Science, and Fiction Science

    Motivated by an intention to remedy current complications with Dutch terminology concerning informatics, the term informaticology is positioned to denote an academic counterpart of informatics, where informatics is conceived of as a container for a coherent family of practical disciplines ranging from computer engineering and software engineering to network technology, data center management, information technology, and information management in a broad sense. Informaticology escapes the limitations of instrumental objectives and the perspective of usage that both restrict the scope of informatics. That is achieved by including fiction science in informaticology, ranking fiction science on equal terms with computer science and data science, and framing (the study of) game design, development, assessment, and distribution, ranging from serious gaming to entertainment gaming, as a chapter of fiction science. A suggestion for the scope of fiction science is specified in some detail. To illustrate the coherence of informaticology thus conceived, a potential application of fiction to the ontology of instruction sequences and to software quality assessment is sketched, thereby highlighting a possible role of fiction (science) within informaticology but outside gaming.

    The Effect of Security Education and Expertise on Security Assessments: the Case of Software Vulnerabilities

    In spite of the growing importance of software security and the industry demand for more cyber security expertise in the workforce, the effect of security education and experience on the ability to assess complex software security problems has only recently been investigated. As a proxy for the full range of software security skills, we considered the problem of assessing the severity of software vulnerabilities by means of a structured analysis methodology widely used in industry (i.e., the Common Vulnerability Scoring System (CVSS) v3), and designed a study to compare how accurately individuals with a background in information technology but different professional experience and education in cyber security are able to assess the severity of software vulnerabilities. Our results provide some structural insights into the complex relationship between the education or experience of assessors and the quality of their assessments. In particular, we find that individual characteristics matter more than professional experience or formal education; apparently it is the combination of skills one possesses (including actual knowledge of the system under study), rather than specialization or years of experience, that most influences assessment quality. Similarly, we find that the overall advantage conferred by professional expertise depends significantly on the composition of the individual security skills as well as on the available information. Comment: Presented at the Workshop on the Economics of Information Security (WEIS 2018), Innsbruck, Austria, June 2018.
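
    To make the assessment task used in this study concrete, the sketch below computes a CVSS v3.1 base score from the metric values published in the FIRST specification. The example vector is illustrative only and is not taken from the study's materials.

    ```python
    import math

    # Standard CVSS v3.1 metric values (FIRST specification).
    AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
    AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
    PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}     # Privileges Required, scope unchanged
    PR_CHANGED   = {"N": 0.85, "L": 0.68, "H": 0.50}     # Privileges Required, scope changed
    UI = {"N": 0.85, "R": 0.62}                          # User Interaction
    CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality/Integrity/Availability

    def roundup(x: float) -> float:
        """CVSS 'round up to one decimal place' rule."""
        return math.ceil(x * 10) / 10

    def base_score(av, ac, pr, ui, scope, c, i, a) -> float:
        iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
        if scope == "U":
            impact = 6.42 * iss
        else:  # scope changed
            impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
        pr_val = (PR_UNCHANGED if scope == "U" else PR_CHANGED)[pr]
        exploitability = 8.22 * AV[av] * AC[ac] * pr_val * UI[ui]
        if impact <= 0:
            return 0.0
        if scope == "U":
            return roundup(min(impact + exploitability, 10))
        return roundup(min(1.08 * (impact + exploitability), 10))

    # Example: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (Critical)
    print(base_score("N", "L", "N", "N", "U", "H", "H", "H"))
    ```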

    Quality assessment technique for ubiquitous software and middleware

    The new paradigm of computing or information systems is ubiquitous computing systems. The technology-oriented issues of ubiquitous computing systems have led researchers to pay much attention to feasibility studies of the technologies rather than to building quality assurance indices or guidelines. In this context, measuring quality is the key to developing high-quality ubiquitous computing products. For this reason, various quality models have been defined, adopted and enhanced over the years; for example, the recognised standard quality model (ISO/IEC 9126) is the result of a consensus on a software quality model with three levels: characteristics, sub-characteristics, and metrics. However, it is very unlikely that this scheme is directly applicable to ubiquitous computing environments, which are considerably different from conventional software; this has prompted considerable effort to reformulate existing methods and, especially, to elaborate new assessment techniques for ubiquitous computing environments. This paper selects appropriate quality characteristics for the ubiquitous computing environment, which can be used as the quality target for both ubiquitous computing product evaluation processes and development processes. Further, each of the quality characteristics has been expanded with evaluation questions and metrics, in some cases with measures. In addition, this quality model has been applied in an industrial setting of the ubiquitous computing environment. This application revealed that, while the approach is sound, some parts need further development in the future.
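
    As an illustration of the three-level structure the paper builds on (characteristics expanded with evaluation questions and metrics), the sketch below models that hierarchy. The characteristic names, questions, and scores are invented placeholders, and the aggregation by simple averaging is an assumption, not the paper's method.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Metric:
        question: str          # evaluation question attached to the metric
        score: float           # measured result, normalized to [0, 1]

    @dataclass
    class SubCharacteristic:
        name: str
        metrics: list[Metric] = field(default_factory=list)

        def score(self) -> float:
            # Assumed aggregation: unweighted mean of metric scores.
            return sum(m.score for m in self.metrics) / len(self.metrics)

    @dataclass
    class Characteristic:
        name: str
        subs: list[SubCharacteristic] = field(default_factory=list)

        def score(self) -> float:
            return sum(s.score() for s in self.subs) / len(self.subs)

    # Hypothetical slice of a quality model for a ubiquitous computing product.
    usability = Characteristic("usability", [
        SubCharacteristic("learnability", [
            Metric("Can a new user complete a core task unaided?", 0.8),
            Metric("Is context-sensitive help available on the device?", 0.6),
        ]),
    ])
    print(f"{usability.name}: {usability.score():.2f}")
    ```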

    A process based approach software certification model for agile and secure environment

    In today’s business environment, Agile and secure software processes are essential, since they bring high-quality, secure software to market faster and more cost-effectively. Unfortunately, some software practitioners do not follow the proper practices of either process when developing software. Various studies assess the quality of software processes; nevertheless, their focus is on conventional software processes. Furthermore, they do not consider weight values in the assessment, although each evaluation criterion might have a different importance. Consequently, software certification is needed to give conformance on the quality of Agile and secure software processes. Therefore, the objective of this thesis is to propose the Extended Software Process Assessment and Certification Model (ESPAC), which addresses both software processes and considers weight values during the assessment. The study was conducted in four phases: 1) a theoretical study to examine the factors and practices that influence the quality of Agile and secure software processes, as well as weight value allocation techniques; 2) an exploratory study, in which 114 software practitioners participated, to investigate their current practices; 3) development of an enhanced software process certification model which considers process, people, technology, project constraints and environment, provides a certification guideline, and utilizes the Analytic Hierarchy Process (AHP) for weight value allocation; and 4) verification of the Agile and secure software processes and the AHP through expert reviews, followed by validation of the satisfaction and practicality of the proposed model through a focus group discussion. The validation results show that the ESPAC Model earned software practitioners’ satisfaction and is practical to execute in a real environment. The contributions of this study straddle the research perspectives of Software Process Assessment and Certification and Multiple Criteria Decision Making, and practical perspectives, by providing software practitioners and assessors a mechanism to reveal the quality of a software process and by helping investors and customers make investment decisions.
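
    Since the model leans on the Analytic Hierarchy Process for weight allocation, a minimal sketch of the standard AHP computation follows: derive priority weights from a pairwise comparison matrix via its principal eigenvector, then check judgment consistency. The 3x3 matrix is invented for illustration and does not reproduce the thesis's criteria or judgments.

    ```python
    import numpy as np

    # Pairwise comparisons of three hypothetical criteria on Saaty's 1-9 scale.
    # A[i][j] = how much more important criterion i is than criterion j.
    A = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 3.0],
        [1/5, 1/3, 1.0],
    ])

    # Priority weights: normalized principal eigenvector of A.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()

    # Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
    n = A.shape[0]
    lambda_max = eigvals.real[k]
    ci = (lambda_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
    cr = ci / ri
    print("weights:", np.round(weights, 3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable
    ```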

    Assessment of Open-Source Software for High-Performance Computing

    High-quality software is a key component of various technology systems that are crucial to software producers, users, and society in general. Software application development today uses software from external sources to achieve implementation goals. Numerous methods, activities, and standards have been developed in order to realize quality software. Nevertheless, the pursuit of new methods for realizing and assuring quality in software is incessant. Researchers in the software engineering field seek methods that can keep pace with changing technology. Assessment of open-source software can be supported by a methodology that uses data from prior releases of a software product to predict the quality of a future release. The proposed methodology is validated using a case study of MPICH, an open-source software product from the field of high-performance computing. A quantitative model and a module-order model have been developed that can predict which modules are expected to have code-churn and the amount of code-churn in each module. Code-churn is defined as the amount of update activity applied to a software product in order to fix bugs. Further validation of the proposed methodology on other software, and the development of classification models for the quality factor code-churn, are recommended as future work.
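
    The sketch below illustrates the two-step idea the abstract describes: fit a quantitative model on prior-release module data to predict code-churn, then rank modules by the prediction (a module-order model) so effort can target the top of the list. The features, data, and least-squares fit are invented for illustration; the paper's MPICH metrics and modeling choices are not reproduced here.

    ```python
    import numpy as np

    # Rows: modules in release k. Columns: hypothetical prior-release features,
    # e.g. lines of code and churn observed in the previous release.
    X = np.array([[1200, 300], [450, 10], [3000, 900], [800, 120]], dtype=float)
    y = np.array([280, 15, 850, 100], dtype=float)  # churn observed in release k

    # Fit a least-squares linear model on release k (intercept column appended).
    X1 = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

    # Apply the fitted model to the metrics of the upcoming release k+1.
    X_next = np.array([[1400, 280], [500, 20], [2800, 850], [900, 90]], dtype=float)
    pred = np.hstack([X_next, np.ones((len(X_next), 1))]) @ coef

    # Module-order model: rank modules by predicted churn, highest first.
    for rank, m in enumerate(np.argsort(pred)[::-1], start=1):
        print(f"rank {rank}: module {m}, predicted churn {pred[m]:.0f}")
    ```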

    DEVELOPMENT OF A QUALITY MANAGEMENT ASSESSMENT TOOL TO EVALUATE SOFTWARE USING SOFTWARE QUALITY MANAGEMENT BEST PRACTICES

    Organizations are constantly in search of competitive advantages in today’s complex global marketplace through improvement of quality, better affordability, and quicker delivery of products and services. This is especially true for software as a product and service. Other things being equal, the quality of software will impact consumers, organizations, and nations. The quality and efficiency of the process utilized to create and deploy software can result in cost and schedule overruns, cancelled projects, loss of revenue, loss of market share, and loss of consumer confidence. Hence, it behooves us to constantly explore quality management strategies to deliver high-quality software quickly at an affordable price. This research identifies software quality management best practices derived from scholarly literature using bibliometric techniques in conjunction with a literature review, synthesizes these best practices into an assessment tool for industrial practitioners, refines the assessment tool based on academic expert review, further refines it based on a pilot test with industry experts, and undertakes industry expert validation. Key elements of this software quality assessment tool include issues dealing with people, organizational environment, process, and technology best practices. Additionally, weights were assigned to the people, organizational environment, process, and technology best practices based on their relative importance, to calculate an overall weighted score with which organizations can evaluate where they stand with respect to their peers in pursuing the business of producing quality software. This research indicates that people best practices carry 40% of the overall weight, organizational best practices carry 30%, process best practices carry 15%, and technology best practices carry 15%. The assessment tool that was developed will be valuable to organizations that seek to take advantage of rapid innovations in pursuing higher software quality. These organizations can use the assessment tool to implement best practices based on the latest management strategies, which can lead to improved software quality and other competitive advantages in the global marketplace. This research contributes to the academic literature on software quality by presenting a quality assessment tool based on software quality management best practices, contributes to the body of knowledge on software quality management, and expands the knowledge base on quality management practices. It also contributes to professional practice by incorporating software quality management best practices into a quality management assessment tool for evaluating software.
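
    The category weights reported above translate directly into a simple scoring rule, sketched below. The weights are those published in the study; the per-category scores are invented inputs an assessor would supply, here on an assumed 0-100 scale.

    ```python
    # Category weights reported by the study (people 40%, organizational
    # environment 30%, process 15%, technology 15%).
    WEIGHTS = {
        "people": 0.40,
        "organizational_environment": 0.30,
        "process": 0.15,
        "technology": 0.15,
    }

    def overall_score(category_scores: dict[str, float]) -> float:
        """Weighted sum of per-category assessment scores."""
        return sum(WEIGHTS[c] * category_scores[c] for c in WEIGHTS)

    # Hypothetical assessor inputs on a 0-100 scale.
    example = {
        "people": 72.0,
        "organizational_environment": 65.0,
        "process": 80.0,
        "technology": 58.0,
    }
    print(f"overall weighted score: {overall_score(example):.1f}")  # 69.0
    ```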

    TLAD 2011 Proceedings: 9th international workshop on teaching, learning and assessment of databases (TLAD)

    Get PDF
    This is the ninth in the series of highly successful international workshops on the Teaching, Learning and Assessment of Databases (TLAD 2011), which once again is held as a workshop of BNCOD 2011 - the 28th British National Conference on Databases. TLAD 2011 is held on the 11th July at Manchester University, just before BNCOD, and hopes to be just as successful as its predecessors. The teaching of databases is central to all Computing Science, Software Engineering, Information Systems and Information Technology courses, and this year, the workshop aims to continue the tradition of bringing together both database teachers and researchers, in order to share good learning, teaching and assessment practice and experience, and further the growing community amongst database academics. As well as attracting academics from the UK community, the workshop has also been successful in attracting academics from the wider international community, through serving on the programme committee, and attending and presenting papers. Due to the healthy number of high quality submissions this year, the workshop will present eight peer reviewed papers. Of these, six will be presented as full papers and two as short papers. These papers cover a number of themes, including: the teaching of data mining and data warehousing, databases and the cloud, and novel uses of technology in teaching and assessment. It is expected that these papers will stimulate discussion at the workshop itself and beyond. This year, the focus on providing a forum for discussion is enhanced through a panel discussion on assessment in database modules, with David Nelson (of the University of Sunderland), Al Monger (of Southampton Solent University) and Charles Boisvert (of Sheffield Hallam University) as the expert panel.
