An Approach for the Empirical Validation of Software Complexity Measures
Software metrics are widely accepted tools for controlling and assuring software quality. A large number of software metrics covering a variety of attributes can be found in the literature; however, most of them are not adopted in industry because they are seen as irrelevant to practitioners' needs and as unsupported, and a major reason behind this is improper empirical validation. This paper identifies possible root causes for the improper empirical validation of software metrics and proposes a practical model for empirical validation that addresses them. The model is validated by applying it to recently proposed and well-known metrics.
Digitalisation and Business Model Innovation: Exploring the Microfoundations of Dynamic Consistency
The Industry 4.0 paradigm (I4.0), as the digitalisation of manufacturing firms, denotes the exploitation of real-time data originating from a ubiquitous interconnection of objects, machines and humans (via the internet) across the entire value network. I4.0 not only serves as a catalyst to improve value-adding activities or to design new product and service solutions but also, more fundamentally, enables manufacturing firms to innovate their established business models (BMs). Against this rapid socio-technological shift, manufacturers face the challenge of holistically innovating their BMs. This requires the individualisation of the value proposition alongside the flexibilisation of their value-creating and value-capturing activities, as well as a continuous adaptation and alignment of these activities with the firm’s organisational systems and its resource and competence base. Adopting the view of business model innovation (BMI) as a system of interdependent activities, the continuous alignment of activities across this system is called dynamic consistency. However, it is not clear what mechanisms denote the notion of dynamic consistency. This thesis operationalises the microfoundations of dynamic consistency in an I4.0-driven BMI by empirically investigating six European manufacturing firms. Following the design themes of BMI, it argues that the notion of dynamic consistency comprises three main aspects: (1) a value focus on data and software; (2) a flexi-directional interlinkage to facilitate the exchange of information and materials; (3) agile working ensembles governing changes to the activity system. Moreover, it proposes open-mindedness and integrity of behaviour as a cognitive foundation that facilitates changes to the activity system. Taken together, these microfoundations provide reasoning for manufacturing firms to transform their traditional make-and-sell BM into a sense-and-act BM, yielding higher profits and profitability.
The results demonstrate that the notion of BMI as an activity system must be complemented by the cognitive perspective of BMI to sufficiently operationalise the concept of dynamic consistency. This thesis is anticipated to be a starting point for further studies to achieve consistency during I4.0-driven BMI to generate superior and sustained value appropriation for manufacturing firms.
Ford Britain Trust, Queens' Colleg
A strategic approach to making sense of the “wicked” problem of ERM
Purpose – The purpose of this paper is to provide an approach to viewing the “wicked” problem of electronic records management (ERM), using the Cynefin framework, a sense-making tool. It re-conceptualises the ERM challenge by understanding the nature of the people issues. This supports decision making about the most appropriate tactics to adopt to effect positive change.
Design/methodology/approach – Cynefin was used to synthesise qualitative data from an empirical research project that investigated strategies and tactics for improving ERM.
Findings – ERM may be thought of as a dynamic, complex challenge but, viewed through the Cynefin framework, many issues are not complex; they are simple or complicated and can be addressed using best or good practice. The truly complex issues need a different approach, described as emergent practice. Cynefin provides a different lens through which to view, make sense of and re-perceive the ERM challenge and offers a strategic approach to accelerating change.
Research limitations/implications – Since Cynefin has been applied to one data set, the findings are transferable rather than generalisable. They, and/or the approach, can be used to further test the propositions.
Practical implications – The resultant ERM framework provides a practical example for information and records managers to exploit or use as a starting point to explore the situation in particular organisational contexts. It could also be used in other practical, teaching and/or research-related records contexts.
Originality/value – This paper provides a new strategic approach to addressing the wicked problem of ERM, which is applicable to any organisational context.
Autoencoders for strategic decision support
In the majority of executive domains, a notion of normality is involved in most strategic decisions. However, few data-driven tools that support strategic decision-making are available. We introduce and extend the use of autoencoders to provide strategically relevant granular feedback. A first experiment indicates that experts are inconsistent in their decision making, highlighting the need for strategic decision support. Furthermore, using two large industry-provided human resources datasets, the proposed solution is evaluated in terms of ranking accuracy, synergy with human experts, and dimension-level feedback. This three-point scheme is validated using (a) synthetic data, (b) the perspective of data quality, (c) blind expert validation, and (d) transparent expert evaluation. Our study confirms several principal weaknesses of human decision-making and stresses the importance of synergy between a model and humans. Moreover, unsupervised learning and in particular the autoencoder are shown to be valuable tools for strategic decision-making.
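The abstract's core mechanism, an autoencoder whose per-dimension reconstruction error serves as granular feedback, can be sketched in a few lines. The sketch below is not the paper's model: it uses a tiny linear autoencoder, synthetic data, and an injected anomaly, all assumptions for illustration.

```python
# Sketch: dimension-level feedback from an autoencoder's reconstruction error.
# A linear 4 -> 2 -> 4 autoencoder learns a notion of "normality" from synthetic
# profiles; per-attribute error on a new profile flags deviating attributes.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" profiles: 200 samples, 4 correlated attributes.
z = rng.normal(size=(200, 2))
X = z @ rng.normal(size=(2, 4)) + 0.05 * rng.normal(size=(200, 4))

# Tiny linear autoencoder trained with plain batch gradient descent.
W_enc = rng.normal(scale=0.1, size=(4, 2))
W_dec = rng.normal(scale=0.1, size=(2, 4))
lr = 0.01
for _ in range(2000):
    H = X @ W_enc                      # encode
    err = H @ W_dec - X                # reconstruction error
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# Dimension-level feedback: inject an anomaly into attribute 2 of one profile.
# The per-attribute squared error typically points at the deviating attribute.
x = X[0].copy()
x[2] += 5.0
residual = (x @ W_enc @ W_dec - x) ** 2
print("per-dimension error:", residual.round(3))
```

In the paper's setting the equivalent signal would let a domain expert see not just that a record is atypical, but along which dimensions it deviates from the learned norm.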
A finder and representation system for knowledge carriers based on granular computing
In one of his publications, Aristotle states that "all human beings by their nature desire to know" [Kraut 1991]. This desire is initiated the day we are born and accompanies us for the rest of our lives. While at a young age our parents serve as one of the principal sources of knowledge, this changes over the course of time. Technological advances, and particularly the introduction of the Internet, have given us new possibilities to share and access knowledge from almost anywhere at any given time. Being able to access and share large collections of written-down knowledge is only one part of the equation. Just as important is the internalization of it, which in many cases can prove difficult to accomplish. Hence, being able to request assistance from someone who holds the necessary knowledge is of great importance, as it can positively stimulate the internalization procedure. However, digitalization does not only provide a larger pool of knowledge sources to choose from but also more people who can potentially be activated in a bid to receive personalized assistance with a given problem statement or question. While this is beneficial, it raises the issue that it is hard to keep track of who knows what. For this task, so-called Expert Finder Systems have been introduced, which are designed to identify and suggest the most suitable candidates to provide assistance. Throughout this Ph.D. thesis, a novel type of Expert Finder System is introduced that is capable of capturing the knowledge that users within a community hold, from explicit and implicit data sources. This is accomplished with the use of granular computing, natural language processing and a set of metrics that have been introduced to measure and compare the suitability of candidates. Furthermore, the knowledge requirements of a problem statement or question are assessed in order to ensure that only the most suitable candidates are recommended to provide assistance.
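The thesis's granular-computing metrics are not given in the abstract. As a hypothetical, much-simplified stand-in, the core idea of matching a question's knowledge requirements against candidate profiles can be sketched as a term-coverage ranking; the profile data, names, and the `coverage` function are all invented for illustration.

```python
# Hypothetical sketch of an Expert Finder ranking step: score each candidate
# by how well their knowledge profile covers the terms of an incoming question.
def coverage(question_terms, profile):
    """Fraction of the question's distinct terms found in an expert's profile."""
    q = set(question_terms)
    return len(q & set(profile)) / len(q)

# Invented expert knowledge profiles (in practice these would be mined from
# explicit and implicit data sources, as the thesis describes).
profiles = {
    "alice": {"python", "nlp", "tokenisation", "embeddings"},
    "bob":   {"databases", "sql", "indexing"},
    "carol": {"nlp", "parsing", "python"},
}
question = ["python", "nlp", "parsing"]

ranking = sorted(profiles, key=lambda e: coverage(question, profiles[e]), reverse=True)
print(ranking)  # carol covers all three terms, alice two, bob none
```

The actual system replaces this flat term overlap with granular information structures and NLP-derived features, but the recommend-the-best-covering-candidate shape of the problem is the same.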
Understanding digital eco-innovation in municipalities: An institutional perspective
Municipalities consume over 67% of global energy and are responsible for over 70% of greenhouse gas (GHG) emissions. The Intergovernmental Panel on Climate Change warns that rapid adjustments need to happen at a global level, or the effects of climate change will be irreversible. The contribution of municipalities is therefore vital if GHG emissions are to be reduced. Our research is timely in its exploration of the ways in which municipalities institutionalise environmental sustainability practices in and through Green digital artefacts. Using mechanism-based institutional theory as a lens, the paper presents the findings of three contrasting case studies of large municipalities in the United Kingdom in their respective programmes to leverage the direct, enabling and systemic effects of Green ICT in order to reduce GHG emissions and achieve their eco-sustainability goals. The case sites are also regarded as exemplars for further research and practice on digital eco-innovation. The mechanism-based explanations illustrate how a social web of conditions and factors influences eco-sustainability outcomes. We conclude that digital technology-enabled, grassroots-based initiatives offer the best hope to begin the transition to sustainable climate change mitigation within municipalities. The contributions of our study are therefore both theoretical and practical.
Conceptual Foundations on Debiasing for Machine Learning-Based Software
The deployment of machine learning (ML)-based software has raised serious concerns about the pervasive and harmful consequences for users, business, and society inflicted through bias. While approaches to address bias are increasingly recognized and developed, our understanding of debiasing remains nascent. Research has yet to provide comprehensive coverage of this vast and growing field, much of which is not embedded in theoretical understanding. Conceptualizing and structuring the nature, effect, and implementation of debiasing instruments could provide necessary guidance for practitioners investing in debiasing efforts. We develop a taxonomy that classifies debiasing instrument characteristics into seven key dimensions. We evaluate and refine our taxonomy with nine experts and apply it to three actual debiasing instruments, drawing lessons for the design and choice of appropriate instruments. Bridging the gaps between our conceptual understanding of debiasing for ML-based software and its organizational implementation, we discuss contributions and future research.