Structured Discussion and Early Failure Prediction in Feature Requests
Feature request management systems are popular tools for gathering and negotiating stakeholders' change requests during system evolution. While these systems encourage stakeholder participation in distributed software development, their lack of structure also raises challenges. We present a study of requirements defects and failures in large-scale feature request management systems, which we build upon to propose and evaluate two distinct solutions to key challenges in feature requests.

The discussion forums on which feature request management systems are based make it difficult for developers to understand stakeholders' real needs. We propose a tool-supported argumentation framework, DoArgue, that integrates into feature request management systems and allows stakeholders to annotate comments with positions on whether a suggested feature should be implemented. DoArgue aims to help stakeholders provide input into requirements activity that is more effective and more understandable to developers. A case study evaluation suggests that DoArgue captures the key concepts in discussions about implementing a feature and requires little additional effort to use; it could therefore be adopted to clarify complex requirements discussions in distributed settings.

Deciding how much upfront requirements analysis to perform on feature requests is another important challenge: too little may result in inadequate functionality being developed, costly changes, and wasted development effort; too much is a waste of time and resources. We propose an automated, tool-supported framework for predicting failures early in a feature request's life-cycle, when a decision is made on whether to implement it. A cost-benefit model assesses the value of conducting additional requirements analysis on the body of feature requests predicted to fail. An evaluation on six large-scale projects shows that the prediction models provide more value than the best baseline predictors for many failure types.
This suggests that failure prediction during requirements elicitation is a promising approach for localising and guiding requirements analysis, and for deciding how much of it to conduct.
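The cost-benefit reasoning above can be illustrated with a minimal sketch. This is not the paper's actual model; the function name and parameters (flag count, predictor precision, average failure cost, per-request analysis cost) are illustrative assumptions:

```python
def expected_value(n_flagged, precision, failure_cost, analysis_cost):
    """Net value of extra requirements analysis on feature requests flagged as likely to fail.

    n_flagged:     number of feature requests the predictor flags
    precision:     fraction of flagged requests that would truly fail
    failure_cost:  average cost avoided by catching a failure early
    analysis_cost: cost of additional requirements analysis per flagged request
    """
    benefit = n_flagged * precision * failure_cost  # failures caught early
    cost = n_flagged * analysis_cost                # effort spent on every flagged request
    return benefit - cost
```

Under these assumptions, analysis pays off only when the predictor's precision exceeds the ratio of analysis cost to failure cost; a weak predictor makes the extra analysis a net loss.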
Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud
With the advent of cloud computing, organizations are nowadays able to react
rapidly to changing demands for computational resources. Not only individual
applications can be hosted on virtual cloud infrastructures, but also complete
business processes. This allows the realization of so-called elastic processes,
i.e., processes which are carried out using elastic cloud resources. Despite
the manifold benefits of elastic processes, there is still a lack of solutions
supporting them.
In this paper, we identify the state of the art of elastic Business Process
Management with a focus on infrastructural challenges. We conceptualize an
architecture for an elastic Business Process Management System and discuss
existing work on scheduling, resource allocation, monitoring, decentralized
coordination, and state management for elastic processes. Furthermore, we
present two representative elastic Business Process Management Systems which
are intended to counter these challenges. Based on our findings, we identify
open issues and outline possible research directions for the realization of
elastic processes and elastic Business Process Management.

Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and
P. Hoenisch (2015). Elastic Business Process Management: State of the Art and
Open Challenges for BPM in the Cloud. Future Generation Computer Systems,
Volume NN, Number N, NN-NN. http://dx.doi.org/10.1016/j.future.2014.09.00
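The elasticity idea above — reacting rapidly to changing resource demand — can be sketched as a simple queue-driven scaling rule. This is an illustrative assumption, not a mechanism from the paper:

```python
import math

def required_instances(pending_tasks, tasks_per_instance, min_instances=1):
    # Elastic provisioning rule: acquire just enough cloud instances to
    # handle the current process-task queue, never dropping below a floor.
    if pending_tasks <= 0:
        return min_instances
    return max(min_instances, math.ceil(pending_tasks / tasks_per_instance))
```

An elastic Business Process Management System would re-evaluate such a rule continuously, releasing instances as the queue drains so that resource cost tracks demand.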
Managing Dynamic Enterprise and Urgent Workloads on Clouds Using Layered Queuing and Historical Performance Models
The automatic allocation of enterprise workload to resources can be enhanced by the ability to make what-if response-time predictions while different allocations are being considered. We experimentally investigate a historical and a layered queuing performance model and show how they can provide a good level of support for a dynamic-urgent cloud environment. Using this, we define, implement, and experimentally investigate the effectiveness of a prediction-based cloud workload and resource management algorithm. Based on these experimental analyses we: (i) comparatively evaluate the layered queuing and historical techniques; (ii) evaluate the effectiveness of the management algorithm in different operating scenarios; and (iii) provide guidance on using prediction-based workload and resource management.
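A minimal sketch of prediction-based allocation using a historical model, assuming the simplest possible forecast (the mean of past observations); the paper's actual historical and layered queuing models are more sophisticated:

```python
from statistics import mean

def predict_response_time(history):
    # Historical model: forecast the next response time on a resource as
    # the mean of previously observed response times; unknown resources
    # are treated as worst-case.
    return mean(history) if history else float("inf")

def choose_resource(histories):
    # What-if step: predict each candidate allocation's response time,
    # then place the workload on the resource with the lowest prediction.
    return min(histories, key=lambda name: predict_response_time(histories[name]))
```

For example, a resource that has recently averaged 1.25 s would be chosen over one averaging 2.5 s.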
Many-Task Computing and Blue Waters
This report discusses many-task computing (MTC) generically and in the
context of the proposed Blue Waters system, which is planned to be the largest
NSF-funded supercomputer when it begins production use in 2012. The aim of this
report is to inform the BW project about MTC, including understanding aspects
of MTC applications that can be used to characterize the domain and
understanding the implications of these aspects to middleware and policies.
Many MTC applications do not neatly fit the stereotypes of high-performance
computing (HPC) or high-throughput computing (HTC) applications. Like HTC
applications, by definition MTC applications are structured as graphs of
discrete tasks, with explicit input and output dependencies forming the graph
edges. However, MTC applications have significant features that distinguish
them from typical HTC applications. In particular, different engineering
constraints for hardware and software must be met in order to support these
applications. HTC applications have traditionally run on platforms such as
grids and clusters, through either workflow systems or parallel programming
systems. MTC applications, in contrast, will often demand a short time to
solution, may be communication intensive or data intensive, and may comprise
very short tasks. Therefore, hardware and software for MTC must be engineered
to support the additional communication and I/O and must minimize task dispatch
overheads. The hardware of large-scale HPC systems, with its high degree of
parallelism and support for intensive communication, is well suited for MTC
applications. However, HPC systems often lack a dynamic resource-provisioning
feature, are not ideal for task communication via the file system, and have an
I/O system that is not optimized for MTC-style applications. Hence, additional
software support is likely to be required to gain full benefit from the HPC
hardware.
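The claim that task dispatch overhead must be minimized for very short tasks can be made concrete with a sketch of dispatch efficiency (an illustrative metric, not one defined in the report):

```python
def dispatch_efficiency(task_seconds, dispatch_overhead_seconds):
    # Fraction of wall-clock time per task spent doing useful work
    # rather than waiting on the task dispatcher.
    return task_seconds / (task_seconds + dispatch_overhead_seconds)
```

With a one-second dispatch overhead, a one-second task wastes half its wall-clock time, and a 0.1-second task wastes over 90% of it — which is why MTC middleware must drive dispatch overhead far below typical task durations.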
Donor Retention in Online Crowdfunding Communities: A Case Study of DonorsChoose.org
Online crowdfunding platforms like DonorsChoose.org and Kickstarter allow
specific projects to get funded by targeted contributions from a large number
of people. Critical for the success of crowdfunding communities is recruitment
and continued engagement of donors. With donor attrition rates above 70%, a
significant challenge for online crowdfunding platforms as well as traditional
offline non-profit organizations is the problem of donor retention.
We present a large-scale study of millions of donors and donations on
DonorsChoose.org, a crowdfunding platform for education projects. Studying an
online crowdfunding platform allows for an unprecedented detailed view of how
people direct their donations. We explore various factors impacting donor
retention, which allows us to identify different groups of donors and quantify
their propensity to return for subsequent donations. We find that donors are
more likely to return if they had a positive interaction with the receiver of
the donation. We also show that this includes appropriate and timely
recognition of their support as well as detailed communication of their impact.
Finally, we discuss how our findings could inform steps to improve donor
retention in crowdfunding communities and non-profit organizations.

Comment: preprint version of WWW 2015 paper
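The retention figures above follow from a simple definition — the fraction of donors who return for at least one subsequent donation. A minimal sketch of that measurement (the function name and input shape are illustrative assumptions):

```python
from collections import Counter

def retention_rate(donor_ids):
    # donor_ids: one entry per donation, identifying the donor who made it.
    # A donor is "retained" if they appear more than once, i.e. returned
    # for at least one subsequent donation.
    counts = Counter(donor_ids)
    returning = sum(1 for n in counts.values() if n > 1)
    return returning / len(counts)
```

An attrition rate above 70%, as reported in the abstract, corresponds to a retention rate below 0.3 under this definition.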
Improving root cause analysis through the integration of PLM systems with cross supply chain maintenance data
The purpose of this paper is to demonstrate a system architecture for integrating Product Lifecycle Management (PLM) systems with cross supply chain maintenance information to support root-cause analysis. By integrating product data from PLM systems with warranty claims, vehicle diagnostics, and technical publications, engineers were able to improve root-cause analysis and close the information gaps. Data collection was achieved via in-depth semi-structured interviews and workshops with experts from the automotive sector. Unified Modelling Language (UML) diagrams were used to design the proposed system architecture. A user scenario is also presented to demonstrate the functionality of the system.