6,030 research outputs found

    An Approach for the Empirical Validation of Software Complexity Measures

    Software metrics are widely accepted tools for controlling and assuring software quality. A large number of software metrics, covering a wide variety of attributes, can be found in the literature; however, most of them are not adopted in industry because they are seen as irrelevant to practical needs and lack support, and the major reason behind this is improper empirical validation. This paper identifies possible root causes of the improper empirical validation of software metrics and, alongside those root causes, proposes a practical model for the empirical validation of software metrics. The model is validated by applying it to recently proposed and well-known metrics.
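
    A minimal sketch of what empirical validation can look like in practice (an illustration only, not the validation model proposed in the paper): correlate a complexity measure with an external quality indicator such as defect counts. The module names and values below are invented.

```python
# Illustrative only: a common form of empirical validation is checking whether
# a complexity metric correlates with an external quality indicator.
from scipy.stats import spearmanr

# Hypothetical per-module measurements: (cyclomatic complexity, defects found)
modules = {
    "parser":    (24, 9),
    "scheduler": (17, 5),
    "logger":    (4, 0),
    "cache":     (11, 3),
    "api":       (29, 12),
}

complexity = [c for c, _ in modules.values()]
defects = [d for _, d in modules.values()]

# A strong, significant rank correlation is (weak) evidence that the metric
# tracks the quality attribute it claims to predict.
rho, p_value = spearmanr(complexity, defects)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```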

    Visualising Process Model Hierarchies

    A strategic approach to making sense of the “wicked” problem of ERM

    Purpose – The purpose of this paper is to provide an approach to viewing the “wicked” problem of electronic records management (ERM) using the Cynefin framework, a sense-making tool. It re-conceptualises the ERM challenge by understanding the nature of the people issues involved, which supports decision making about the most appropriate tactics for effecting positive change.
    Design/methodology/approach – Cynefin was used to synthesise qualitative data from an empirical research project that investigated strategies and tactics for improving ERM.
    Findings – ERM may be thought of as a dynamic, complex challenge but, viewed through the Cynefin framework, many issues are not complex; they are simple or complicated and can be addressed using best or good practice. The truly complex issues need a different approach, described as emergent practice. Cynefin provides a different lens through which to view, make sense of and re-perceive the ERM challenge, and offers a strategic approach to accelerating change.
    Research limitations/implications – Since Cynefin has been applied to one data set, the findings are transferable rather than generalisable. They, and/or the approach, can be used to test the propositions further.
    Practical implications – The resultant ERM framework provides a practical example for information and records managers to exploit, or to use as a starting point for exploring the situation in particular organisational contexts. It could also be used in other practical, teaching and/or research-related records contexts.
    Originality/value – This paper provides a new strategic approach to addressing the wicked problem of ERM, applicable to any organisational context.

    Autoencoders for strategic decision support

    In the majority of executive domains, a notion of normality underlies most strategic decisions; however, few data-driven tools that support strategic decision-making are available. We introduce and extend the use of autoencoders to provide strategically relevant, granular feedback. A first experiment indicates that experts are inconsistent in their decision making, highlighting the need for strategic decision support. Furthermore, using two large industry-provided human resources datasets, the proposed solution is evaluated in terms of ranking accuracy, synergy with human experts, and dimension-level feedback. This three-point scheme is validated using (a) synthetic data, (b) a data-quality perspective, (c) blind expert validation, and (d) transparent expert evaluation. Our study confirms several principal weaknesses of human decision-making and stresses the importance of synergy between a model and humans. Moreover, unsupervised learning, and in particular the autoencoder, is shown to be a valuable tool for strategic decision-making.
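
    The central mechanism here, scoring an observation by how well an autoencoder reconstructs it and reading the per-dimension error as granular feedback, can be sketched compactly. The example below is an assumption-laden illustration, not the paper's implementation: a tiny linear autoencoder trained on synthetic data standing in for the HR datasets.

```python
# Illustrative sketch: a small linear autoencoder trained by gradient descent;
# per-dimension reconstruction error serves as granular "normality" feedback.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an HR dataset: 200 samples, 6 numeric features,
# with one feature correlated to another so there is structure to learn.
X = rng.normal(size=(200, 6))
X[:, 3] = 0.7 * X[:, 0] + 0.3 * rng.normal(size=200)

n, d, k = X.shape[0], X.shape[1], 3        # k = bottleneck size
W1 = rng.normal(scale=0.1, size=(d, k))    # encoder weights
W2 = rng.normal(scale=0.1, size=(k, d))    # decoder weights
lr = 0.01

for _ in range(2000):
    H = X @ W1                             # encode
    X_hat = H @ W2                         # decode
    E = X_hat - X                          # reconstruction error
    grad_W2 = (H.T @ E) * (2 / n)          # gradients of mean squared error
    grad_W1 = (X.T @ (E @ W2.T)) * (2 / n)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

# Dimension-level feedback for a new observation: large deviations flag
# the specific attributes that make it look atypical.
x_new = rng.normal(size=(1, d))
recon = (x_new @ W1) @ W2
per_dim_error = np.abs(recon - x_new).ravel()
print("overall anomaly score:", round(float(per_dim_error.mean()), 3))
print("per-dimension deviations:", np.round(per_dim_error, 3))
```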

    A finder and representation system for knowledge carriers based on granular computing

    In one of his works, Aristotle states that “all human beings by their nature desire to know” [Kraut 1991]. This desire is initiated the day we are born and accompanies us for the rest of our lives. While at a young age our parents serve as one of the principal sources of knowledge, this changes over the course of time. Technological advances, and particularly the introduction of the Internet, have given us new possibilities to share and access knowledge from almost anywhere at any given time. Being able to access and share large collections of recorded knowledge is only one part of the equation; just as important is its internalization, which in many cases can prove difficult to accomplish. Hence, being able to request assistance from someone who holds the necessary knowledge is of great importance, as it can positively stimulate the internalization process. However, digitalization does not only provide a larger pool of knowledge sources to choose from, but also more people who can potentially be activated to provide personalized assistance with a given problem statement or question. While this is beneficial, it raises the issue that it is hard to keep track of who knows what. For this task, so-called Expert Finder Systems have been introduced, which are designed to identify and suggest the most suitable candidates to provide assistance. This Ph.D. thesis introduces a novel type of Expert Finder System that is capable of capturing the knowledge that users within a community hold, drawing on explicit and implicit data sources. This is accomplished with the use of granular computing, natural language processing, and a set of metrics introduced to measure and compare the suitability of candidates. Furthermore, the knowledge requirements of a problem statement or question are assessed in order to ensure that only the most suitable candidates are recommended to provide assistance.
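
    As a rough illustration of the expert-finding idea (and explicitly not the method developed in the thesis), candidates can be ranked by how similar a question is to the text each user has produced. The user names, profiles, and question below are hypothetical.

```python
# Illustrative assumption: rank candidate experts by the TF-IDF cosine
# similarity between a question and each user's accumulated text, a crude
# stand-in for a knowledge-profile match built from implicit data sources.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

user_profiles = {  # hypothetical text gathered per community member
    "alice": "kubernetes deployment helm charts container orchestration",
    "bob":   "sql query optimisation postgres indexing explain plans",
    "carol": "helm kubernetes ingress tls certificates cluster upgrades",
}

question = "how do I configure TLS for a kubernetes ingress with helm?"

names = list(user_profiles)
matrix = TfidfVectorizer().fit_transform(
    [user_profiles[n] for n in names] + [question]
)

# The last row is the question; score every candidate against it.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for name, score in sorted(zip(names, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")
```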

    Conceptual Foundations on Debiasing for Machine Learning-Based Software

    The deployment of machine learning (ML)-based software has raised serious concerns about the pervasive and harmful consequences it inflicts on users, business, and society through bias. While approaches to addressing bias are increasingly recognized and developed, our understanding of debiasing remains nascent. Research has yet to provide comprehensive coverage of this vast and growing field, much of which is not embedded in theoretical understanding. Conceptualizing and structuring the nature, effect, and implementation of debiasing instruments could provide necessary guidance for practitioners investing in debiasing efforts. We develop a taxonomy that classifies debiasing instrument characteristics into seven key dimensions. We evaluate and refine our taxonomy with nine experts and apply it to three actual debiasing instruments, drawing lessons for the design and choice of appropriate instruments. Bridging the gap between our conceptual understanding of debiasing for ML-based software and its organizational implementation, we discuss contributions and future research.