9 research outputs found

    Scheduling research grant proposal evaluation meetings

    In many funding agencies a model is adopted whereby a fixed panel of evaluators evaluates the set of applications. This is followed by a general meeting at which each proposal is discussed by the evaluators assigned to it, with a view to agreeing a consensus score for that proposal. It is not uncommon for some experts to be unavailable for part of the meeting; constraints of this nature, among others, complicate the search for a solution. We report on a system developed to ensure the smooth running of such meetings.
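    The abstract does not describe the underlying algorithm, so the following is only a toy sketch of the core scheduling constraint: each proposal must be placed in a meeting slot in which every evaluator assigned to it is available. All names (proposals, slots, availability sets) are illustrative assumptions, and a real system would add many further constraints.

        # Toy greedy scheduler, not the paper's system: place each proposal in
        # a slot where all of its assigned evaluators are available, most
        # constrained proposals first. All data below is invented.
        from typing import Dict, List, Set

        def schedule(proposals: Dict[str, Set[str]],
                     availability: Dict[str, Set[int]],
                     slots: List[int]) -> Dict[str, int]:
            def feasible(p: str) -> List[int]:
                # Slots in which every evaluator assigned to proposal p is free.
                return [s for s in slots
                        if all(s in availability[e] for e in proposals[p])]

            used: Dict[int, int] = {}        # slot -> proposals already placed
            assignment: Dict[str, int] = {}
            for p in sorted(proposals, key=lambda p: len(feasible(p))):
                options = feasible(p)
                if not options:
                    raise ValueError(f"no feasible slot for proposal {p}")
                s = min(options, key=lambda s: used.get(s, 0))  # balance load
                assignment[p] = s
                used[s] = used.get(s, 0) + 1
            return assignment

        print(schedule({"P1": {"alice", "bob"}, "P2": {"bob", "carol"}},
                       {"alice": {1, 2}, "bob": {2, 3}, "carol": {3}},
                       slots=[1, 2, 3]))
        # -> {'P1': 2, 'P2': 3}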

    A discussion of three visualisation approaches to providing cognitive support in variability management

    Variability management in software-intensive systems can be a complex and cognitively challenging process. Configuring a Software Product Line with thousands of variation points in order to derive a specific product variant is an example of such a challenge. Each configurable feature can have numerous relationships with many other elements within the system, and these relationships can greatly impact the overall configuration process. Understanding the nature and impact of these relationships during configuration is key to the quality and efficiency of the configuration process. In this paper we present an overview of three visual approaches to this configuration that utilise information visualisation techniques and aspects of cognitive theory to provide stakeholder support. Using an industry example, we discuss and compare the approaches using a set of fundamental configuration tasks.

    Applying ant colony optimization metaheuristic to the DAG layering problem

    This paper presents the design and implementation of an Ant Colony Optimization-based algorithm for solving the DAG layering problem. The algorithm produces compact layerings by minimising their width and height. Importantly, it takes into account the contribution of dummy vertices to the width of the resulting layering.
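    The ACO search itself is not reproduced here, but the cost measure the abstract highlights can be sketched: given a candidate layering, an edge that spans more than one layer contributes a dummy vertex to each intermediate layer, and those dummies count towards layer width. The function below is an illustrative assumption of how such a cost might be evaluated, not the paper's implementation.

        # Illustrative cost evaluation for a given DAG layering: dummy vertices
        # are created where an edge spans more than one layer, and they count
        # towards the width of each intermediate layer. Assumes every edge
        # (u, v) goes from a lower layer to a strictly higher one.
        from collections import defaultdict
        from typing import Dict, List, Tuple

        def layering_cost(layer_of: Dict[str, int],
                          edges: List[Tuple[str, str]]) -> Tuple[int, int]:
            counts: Dict[int, int] = defaultdict(int)
            for layer in layer_of.values():            # real vertices
                counts[layer] += 1
            for u, v in edges:                         # dummies on long edges
                for layer in range(layer_of[u] + 1, layer_of[v]):
                    counts[layer] += 1
            width = max(counts.values())
            height = max(counts) - min(counts) + 1     # number of layers
            return width, height

        # a -> b -> d plus a long edge a -> d, which needs a dummy in layer 1
        print(layering_cost({"a": 0, "b": 1, "d": 2},
                            [("a", "b"), ("b", "d"), ("a", "d")]))
        # -> (2, 3): layer 1 holds b plus one dummy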

    Visualising variability relationships in software product lines

    Software Product Line Engineering is a development paradigm that focuses on identifying and managing the commonalities and variability of a set of software products, such that core assets can be developed and (re)used to derive individual product variants at minimal cost. In industrial product lines, where it is possible to have thousands of variation points, the scale of variability can become extremely difficult to manage. In this position paper we elaborate on our ideas for focussing the representation and visualisation on the variability relationships that exist between different product line elements, such as decisions, features and components, rather than on the elements they relate. Further, we provide a conceptual three-dimensional visualisation technique for managing these relationships in the context of specific stakeholder tasks.
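    As a purely illustrative sketch of this idea, variability relationships can be modelled as first-class records that link product line elements (decisions, features, components), rather than as attributes of the elements themselves. All identifiers below are invented for illustration.

        # Illustrative data model with relationships as first-class records
        # linking product line elements; all identifiers are invented.
        from dataclasses import dataclass
        from typing import List

        @dataclass(frozen=True)
        class Relationship:
            source: str   # e.g. a decision or feature id
            target: str   # e.g. a feature or component id
            kind: str     # "requires", "excludes", "implemented_by", ...

        RELS = [
            Relationship("decision:encryption", "feature:secure_channel", "requires"),
            Relationship("feature:secure_channel", "component:tls_lib", "implemented_by"),
            Relationship("feature:secure_channel", "feature:legacy_auth", "excludes"),
        ]

        def impact_of(element: str, rels: List[Relationship]) -> List[Relationship]:
            # Everything a stakeholder should see when configuring `element`.
            return [r for r in rels if element in (r.source, r.target)]

        for r in impact_of("feature:secure_channel", RELS):
            print(f"{r.source} --{r.kind}--> {r.target}")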

    Research tool to support feature configuration in software product lines

    Configuring a large Software Product Line can be a complex and cognitively challenging task. The numerous relationships that can exist between different system elements, such as features and their implementing artefacts, can make the process time-consuming and error-prone. Appropriate tool support is key to the efficiency of the process and the quality of the final product. We present our research prototype tool, which takes a considered approach to feature configuration using visualisation techniques and aspects of cognitive theory, and demonstrate how it supports fundamental feature configuration tasks.

    STEM rebalance: a novel approach for tackling imbalanced datasets using SMOTE, edited nearest neighbour, and mixup

    Imbalanced datasets in medical imaging are characterized by skewed class proportions and a scarcity of abnormal cases. When trained on such data, models tend to assign higher probabilities to normal cases, leading to biased performance. Common oversampling techniques such as SMOTE rely on local information and can introduce marginalization issues. This paper investigates the potential of using Mixup augmentation, which combines two training examples along with their corresponding labels to generate new data points as a generic vicinal distribution. To this end, we propose STEM, which combines SMOTEENN and Mixup at the instance level. This integration enables us to effectively leverage the entire distribution of minority classes, thereby mitigating both between-class and within-class imbalances. We focus on the breast cancer problem, where imbalanced datasets are prevalent. The results demonstrate the effectiveness of STEM, which achieves AUC values of 0.96 and 0.99 on the Digital Database for Screening Mammography and Wisconsin Breast Cancer (Diagnostics) datasets, respectively. Moreover, the method shows promising potential when applied with an ensemble of machine learning (ML) classifiers.
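    As a rough sketch of the idea (not the paper's implementation), the pipeline can be approximated with imbalanced-learn's SMOTEENN followed by a hand-rolled Mixup step; the exact instance-level coupling, the Beta parameter alpha, and the random pairing strategy below are assumptions.

        # Rough approximation of STEM: SMOTEENN rebalancing followed by Mixup
        # over random pairs. alpha and the pairing strategy are assumptions.
        import numpy as np
        from imblearn.combine import SMOTEENN   # pip install imbalanced-learn

        def mixup(X, y, alpha=0.2, seed=0):
            # Convex-combine random pairs of samples; labels become soft [0, 1].
            rng = np.random.default_rng(seed)
            lam = rng.beta(alpha, alpha, size=len(X))
            idx = rng.permutation(len(X))
            X_mix = lam[:, None] * X + (1 - lam[:, None]) * X[idx]
            y_mix = lam * y + (1 - lam) * y[idx]
            return X_mix, y_mix

        def stem_resample(X, y):
            X_bal, y_bal = SMOTEENN(random_state=0).fit_resample(X, y)
            return mixup(X_bal, y_bal.astype(float))

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (90, 4)),    # 90 "normal" cases
                       rng.normal(3, 1, (10, 4))])   # 10 "abnormal" cases
        y = np.hstack([np.zeros(90, int), np.ones(10, int)])
        X_new, y_new = stem_resample(X, y)
        print(X_new.shape, y_new[:3])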

    Visualisation techniques to support derivation tasks in software product line development

    Adopting a software product line approach allows companies to realise significant improvements in time-to-market, cost, productivity, and system quality. A fundamental problem in software product line engineering is the fact that a product line of industrial size can easily incorporate several thousand variation points. The scale and interdependencies can lead to variability management and product derivation tasks that are extremely complex to manage. This paper investigates visualisation techniques to support and improve the effectiveness of these tasks.

    A 3D visualisation to enhance cognition in software product line engineering

    Software Product Line (SPL) Engineering is a development paradigm in which core artefacts are developed and subsequently configured into different software products, depending on a particular customer's requirements. In industrial product lines, the scale of the configuration (variability management) task can become extremely complex and very difficult to manage. Visualisation is widely used in software engineering and has proven useful for amplifying cognition in data-intensive applications. Adopting this approach within software product line engineering can support stakeholders in essential work tasks by enhancing their understanding of large and complex product lines. In this paper we present our research into the application of visualisation techniques and cognitive theory to address SPL complexity and to enhance cognition in support of the SPL engineering processes. Specifically, we present a 3D visualisation approach to enhance stakeholder cognition and thus support variability management and decision making during feature configuration.
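    The abstract does not specify the 3D technique, so the following is only a toy illustration of the general idea: feature nodes placed in three dimensions (here, tree depth on the z axis) and coloured by configuration state. The layout, feature names, and states are all invented for illustration.

        # Toy 3D view, not the paper's technique: feature nodes placed in 3D
        # (tree depth on z) and coloured by an assumed configuration state.
        import matplotlib.pyplot as plt
        import numpy as np

        features = ["root", "net", "ui", "tls", "ipv6", "themes"]
        x     = np.array([0.0, -1.0, 1.0, -1.5, -0.5, 1.0])
        y     = np.array([0.0, 0.5, 0.5, 0.0, 1.0, 0.0])
        depth = np.array([0, 1, 1, 2, 2, 2])              # z axis: tree depth
        state = ["bound", "bound", "open", "open", "excluded", "open"]
        colour = {"bound": "green", "open": "grey", "excluded": "red"}

        ax = plt.figure().add_subplot(projection="3d")
        ax.scatter(x, y, depth, c=[colour[s] for s in state], s=80)
        for xi, yi, zi, name in zip(x, y, depth, features):
            ax.text(xi, yi, zi, name)
        ax.set_zlabel("tree depth")
        plt.show()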

    Interpretable solutions for breast cancer diagnosis with grammatical evolution and data augmentation

    Medical imaging diagnosis increasingly relies on Machine Learning (ML) models. The task is often hampered by severely imbalanced datasets, in which positive cases can be quite rare. The models' use is further compromised by their limited interpretability, which is becoming increasingly important. While post-hoc interpretability techniques such as SHAP and LIME have been used with some success on so-called black-box models, the use of inherently understandable models makes such endeavours more fruitful. This paper addresses these issues by demonstrating how a relatively new synthetic data generation technique, STEM, can be used to produce data to train models produced by Grammatical Evolution (GE) that are inherently understandable. STEM is a recently introduced combination of the Synthetic Minority Over-sampling Technique (SMOTE), Edited Nearest Neighbour (ENN), and Mixup; it has previously been used successfully to tackle both between-class and within-class imbalance. We test our technique on the Digital Database for Screening Mammography (DDSM) and the Wisconsin Breast Cancer (WBC) datasets, and compare Area Under the Curve (AUC) results with an ensemble of the top three performing classifiers from a set of eight standard ML classifiers with varying degrees of interpretability. We demonstrate that the GE-derived models achieve the best AUC while still maintaining interpretable solutions.
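    The GE machinery is not shown in the abstract, so the sketch below illustrates only the evaluation loop: rebalance the training split, fit an interpretable model, and score AUC on an untouched test split. A shallow decision tree stands in for the GE-evolved model, and scikit-learn's bundled copy of the WBC dataset is used; both are substitutions, not the paper's setup.

        # Evaluation-loop sketch only: a shallow decision tree stands in for
        # the GE-evolved model, and sklearn's bundled WBC data is used.
        from imblearn.combine import SMOTEENN
        from sklearn.datasets import load_breast_cancer
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_breast_cancer(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                                  random_state=0)

        # Rebalance only the training split; the test split stays untouched.
        X_bal, y_bal = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)

        model = DecisionTreeClassifier(max_depth=3, random_state=0)
        model.fit(X_bal, y_bal)
        print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))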