
    Measuring Process Modelling Success

    Process modelling has seen widespread acceptance, particularly on large IT-enabled Business Process Reengineering projects. It is applied, as a process design and management technique, across all life-cycle phases of a system. While there has been much research on aspects of process modelling, little attention has focused on post-hoc evaluation of process-modelling success. This paper addresses this gap and presents a process-modelling success measurement (PMS) framework, which includes the dimensions of process-model quality, model use, user satisfaction, and process-modelling impact. Measurement items for each dimension are also suggested.
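    The abstract names the framework's four dimensions but not its measurement items, so the sketch below only illustrates one way such a framework might be represented and scored in code; the dimension names come from the abstract, while the items and the averaging scheme are hypothetical placeholders, not the paper's instrument.

    # Minimal sketch of a PMS-style measurement structure (items are illustrative, not the paper's).
    pms_framework = {
        "process-model quality": ["rated correctness", "rated completeness"],
        "model use": ["frequency of use"],
        "user satisfaction": ["overall satisfaction rating"],
        "process-modelling impact": ["perceived project benefit"],
    }

    def dimension_scores(responses: dict[str, list[float]]) -> dict[str, float]:
        """Average the item ratings within each dimension (an assumed, common scoring choice)."""
        return {dim: sum(vals) / len(vals) for dim, vals in responses.items() if vals}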

    Understanding customers' holistic perception of switches in automotive human–machine interfaces

    For successful new product development, it is necessary to understand the customers' holistic experience of the product beyond traditional task-completion and acceptance measures. This paper describes research in which ninety-eight UK owners of luxury saloons assessed the feel of push-switches in five luxury saloon cars both in context (in-car) and out of context (on a bench). A combination of hedonic data (i.e. a measure of ‘liking’), qualitative data and semantic differential data was collected. It was found that customers are clearly able to differentiate between switches based on the degree of liking for the samples' perceived haptic qualities, and that the assessment environment had a statistically significant effect, although it was not universal. A factor analysis has shown that perceived characteristics of switch haptics can be explained by three independent factors defined as ‘Image’, ‘Build Quality’, and ‘Clickiness’. Preliminary steps have also been taken towards identifying whether existing theoretical frameworks for user experience may be applicable to automotive human–machine interfaces.
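    The abstract reports a three-factor solution but not the analysis itself, so the following is a minimal sketch of an exploratory factor analysis on a hypothetical ratings matrix; the data shape (98 respondents by 12 semantic-differential scales), the random placeholder values, and the use of scikit-learn's FactorAnalysis are all assumptions made for illustration.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Hypothetical data: 98 participants x 12 semantic-differential scales
    # (rows = respondents, columns = rated haptic attributes); values are random placeholders.
    rng = np.random.default_rng(0)
    ratings = rng.normal(size=(98, 12))

    # Extract three latent factors, analogous in spirit to 'Image', 'Build Quality', 'Clickiness'.
    fa = FactorAnalysis(n_components=3, random_state=0)
    fa.fit(ratings)

    # Loadings indicate how strongly each scale is associated with each factor.
    loadings = fa.components_.T  # shape: (12 scales, 3 factors)
    print(loadings.round(2))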

    A comparative evaluation of interactive segmentation algorithms

    In this paper we present a comparative evaluation of four popular interactive segmentation algorithms. The evaluation was carried out as a series of user experiments, in which participants were tasked with extracting 100 objects from a common dataset: 25 with each algorithm, within a time limit of 2 minutes per object. To facilitate the experiments, a “scribble-driven” segmentation tool was developed to enable interactive image segmentation by simply marking areas of foreground and background with the mouse. As the participants refined and improved their respective segmentations, the corresponding updated segmentation mask was stored along with the elapsed time. We then evaluated each recorded mask against a manually segmented ground truth, allowing us to gauge segmentation accuracy over time. Two benchmarks were used for the evaluation: the well-known Jaccard index for measuring object accuracy, and a new fuzzy metric, proposed in this paper, designed for measuring boundary accuracy. Analysis of the experimental results demonstrates the effectiveness of the suggested measures and provides valuable insights into the performance and characteristics of the evaluated algorithms.
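    The Jaccard index mentioned above is simply the ratio of the intersection to the union of the predicted and ground-truth object masks; the sketch below computes it for binary NumPy masks (the paper's fuzzy boundary metric is not reproduced here, and the variable names are illustrative).

    import numpy as np

    def jaccard_index(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
        """Object accuracy as |intersection| / |union| of two binary masks."""
        pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
        union = np.logical_or(pred, gt).sum()
        if union == 0:
            return 1.0  # both masks empty: treat as a perfect match
        return np.logical_and(pred, gt).sum() / union

    # Accuracy over time, from the masks stored during a session:
    # scores = [(elapsed, jaccard_index(mask, ground_truth)) for elapsed, mask in recorded_masks]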

    Quality model for semantic IS standards

    Semantic IS (Information Systems) standards are essential for achieving interoperability between organizations. However, a recent survey suggests that the full benefits of standards are not being achieved, due to quality issues. This paper presents a quality model for semantic IS standards that should support standards development organizations in assessing the quality of their standards. Although intended for semantic IS standards, the quality model's potential use is much broader, and it might be applicable to all kinds of standards.

    On Cognitive Preferences and the Plausibility of Rule-based Models

    It is conventional wisdom in machine learning and data mining that logical models such as rule sets are more interpretable than other models, and that among such rule-based models, simpler models are more interpretable than more complex ones. In this position paper, we question this latter assumption by focusing on one particular aspect of interpretability, namely the plausibility of models. Roughly speaking, we equate the plausibility of a model with the likeliness that a user accepts it as an explanation for a prediction. In particular, we argue that, all other things being equal, longer explanations may be more convincing than shorter ones, and that the predominant bias for shorter models, which is typically necessary for learning powerful discriminative models, may not be suitable when it comes to user acceptance of the learned models. To that end, we first recapitulate evidence for and against this postulate, and then report the results of an evaluation in a crowd-sourcing study based on about 3,000 judgments. The results do not reveal a strong preference for simple rules, whereas we can observe a weak preference for longer rules in some domains. We then relate these results to well-known cognitive biases such as the conjunction fallacy, the representativeness heuristic, or the recognition heuristic, and investigate their relation to rule length and plausibility.
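    The abstract does not describe the analysis in detail, so the following is only a rough sketch of how crowd-sourced plausibility judgments might be tallied per domain to look for a preference between shorter and longer rules; the record format, domain names, and counts are hypothetical.

    from collections import Counter

    # Hypothetical judgment records: each says which of two rules for the same
    # prediction a participant found more plausible ("short" or "long").
    judgments = [
        {"domain": "domain_a", "preferred": "long"},
        {"domain": "domain_a", "preferred": "short"},
        {"domain": "domain_b", "preferred": "long"},
        # ... roughly 3,000 such records in the study
    ]

    per_domain: dict[str, Counter] = {}
    for j in judgments:
        per_domain.setdefault(j["domain"], Counter())[j["preferred"]] += 1

    for domain, counts in per_domain.items():
        share_long = counts["long"] / sum(counts.values())
        print(f"{domain}: longer rule preferred in {share_long:.0%} of judgments")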

    Soft peer review: social software and distributed scientific evaluation

    The debate on the prospects of peer review in the Internet age and the increasing criticism leveled against the dominant role of impact factor indicators are calling for new measurable criteria to assess scientific quality. Usage-based metrics offer a new avenue to scientific quality assessment but face the same risks as first-generation search engines that used unreliable metrics (such as raw traffic data) to estimate content quality. In this article I analyze the contribution that social bookmarking systems can provide to the problem of usage-based metrics for scientific evaluation. I suggest that collaboratively aggregated metadata may help fill the gap between traditional citation-based criteria and raw usage factors. I submit that bottom-up, distributed evaluation models such as those afforded by social bookmarking will challenge more traditional quality assessment models in terms of coverage, efficiency and scalability. Services aggregating user-related quality indicators for online scientific content will come to occupy a key function in the scholarly communication system.