5,987 research outputs found

    Investigating Implemented Process Design: A Case Study on the Impact of Process-aware Information Systems on Core Job Dimensions

    Adequate process design particularly means that a process fulfills its stakeholders’ expectations. However, when designing process-aware information systems (PAIS), one stakeholder and their expectations are often neglected: the end user. Frequently, this results in end-user fears, which, in turn, lead to emotional resistance and a lack of user support during process and information system design. In order to overcome this vicious circle, it becomes necessary to better understand the impact of operationalized process design on the end users’ work profile. This paper presents the results of a case study at two Dutch companies. We investigate how employees perceive the impact of a newly introduced PAIS based on workflow management technology with respect to five core job dimensions: skill variety, task identity, task significance, autonomy, and feedback from the job.

    HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployable for a variety of computing tasks. There is growing interest among cloud providers in demonstrating the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned in performing physics workflows on a large-scale set of virtualized resources. In addition, we discuss the economics and operational efficiencies of executing workflows both in the cloud and on dedicated resources.
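
    As a rough, hypothetical illustration only (this is not the paper's provisioning code; the AMI ID, instance type, and bootstrap script are placeholders), requesting a batch of identical worker virtual machines from EC2 with boto3 might look like the following sketch:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Placeholder bootstrap script; a real facility would install and start
    # the experiment's pilot/worker agent so the VM joins the batch pool.
    user_data = "#!/bin/bash\n/opt/bootstrap/start_worker.sh\n"

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder worker-node image
        InstanceType="m4.xlarge",          # placeholder instance type
        MinCount=1,
        MaxCount=100,                      # scale out to many identical workers
        UserData=user_data,
    )

    instance_ids = [i["InstanceId"] for i in response["Instances"]]
    print(f"Requested {len(instance_ids)} worker VMs: {instance_ids}")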

    Open-source workflow approaches to passive acoustic monitoring of bats

    The work was funded by grants to PTM from Carlsberg Semper Ardens Research Projects and the Independent Research Fund Denmark. The affordability, storage and power capacity of compact modern recording hardware have turned passive acoustic monitoring (PAM) of animals and soundscapes into a non-invasive, cost-effective tool for research and ecological management, particularly useful for bats and toothed whales that orient and forage using ultrasonic echolocation. The use of PAM at large scales hinges on effective automated detectors and species classifiers, which, combined with distance sampling approaches, have enabled species abundance estimation of toothed whales. But standardized, user-friendly and open-access automated detection and classification workflows are in demand if this key conservation metric is to be realized for bats. We used the PAMGuard toolbox, including its new deep learning classification module, to test the performance of four open-source workflows for automated analyses of acoustic datasets from bats. Each workflow used a different initial detection algorithm followed by the same deep learning classification algorithm and was evaluated against the performance of an expert manual analyst. Workflow performance depended strongly on the signal-to-noise ratio and detection algorithm used: the full deep learning workflow had the best classification accuracy (≤67%) but was computationally too slow for practical large-scale bat PAM. Workflows using PAMGuard's detection module or triggers onboard an SM4BAT or AudioMoth accurately classified up to 47%, 59% and 34% of calls to species, respectively. Not all workflows included noise sampling, which is critical to estimating changes in detection probability over time, a vital parameter for abundance estimation. The workflow using PAMGuard's detection module was 40 times faster than the full deep learning workflow and missed as few calls (recall for both ~0.6), thus balancing computational speed and performance. We show that complete acoustic detection and classification workflows for bat PAM data can be efficiently automated using open-source software such as PAMGuard, and we exemplify how detection choices, whether pre- or post-deployment, hardware- or software-driven, affect the performance of deep learning classification and the downstream ecological information that can be extracted from acoustic recordings. In particular, understanding and quantifying detection/classification accuracy and the probability of detection are key to avoiding biases that may ultimately affect the quality of data for ecological management.
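
    As a generic, self-contained illustration of evaluating an automated detector/classifier against an expert manual analyst (the call intervals, matching rule and tolerance below are invented for the example and are not the study's data or method):

    # Score automated bat-call detections against an analyst's annotations.
    # A detection counts as correct if it overlaps an annotated call in time.
    def overlaps(det, truth, tol=0.01):
        """True if a detection (start, end) overlaps an annotated call, in seconds."""
        return det[0] <= truth[1] + tol and truth[0] <= det[1] + tol

    def precision_recall(detections, annotations):
        matched_det = sum(any(overlaps(d, a) for a in annotations) for d in detections)
        matched_ann = sum(any(overlaps(d, a) for d in detections) for a in annotations)
        precision = matched_det / len(detections) if detections else 0.0
        recall = matched_ann / len(annotations) if annotations else 0.0
        return precision, recall

    # Hypothetical example: detector output vs. analyst labels, (start, end) in seconds.
    auto = [(0.10, 0.14), (0.52, 0.55), (1.20, 1.23)]
    manual = [(0.10, 0.14), (0.51, 0.56), (2.00, 2.03)]
    print(precision_recall(auto, manual))   # (0.666..., 0.666...)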

    MOLNs: A cloud platform for interactive, reproducible and scalable spatial stochastic computational experiments in systems biology using PyURDME

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools, a complex software stack, and large, scalable compute and data analysis resources due to the high computational cost of Monte Carlo workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This creates a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for the development of sharable and reproducible distributed parallel computational experiments.
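
    For readers unfamiliar with the underlying technique, the following is a minimal, self-contained sketch of a spatial stochastic reaction-diffusion simulation: a 1-D reaction-diffusion master equation sampled with the Gillespie direct method. It does not use the PyURDME or MOLNs APIs, and all rates, lattice sizes and species are illustrative assumptions:

    import random

    n_vox = 20            # number of voxels on a 1-D lattice
    d_rate = 1.0          # diffusion jump rate per molecule per direction
    k_deg = 0.1           # degradation rate per molecule
    state = [0] * n_vox
    state[n_vox // 2] = 100   # start with 100 molecules of species A in the middle voxel

    t, t_end = 0.0, 10.0
    while t < t_end:
        # Propensities: diffusion jumps left/right out of each voxel, plus degradation.
        props = []
        for i, n in enumerate(state):
            props.append(("left", i, d_rate * n if i > 0 else 0.0))
            props.append(("right", i, d_rate * n if i < n_vox - 1 else 0.0))
            props.append(("deg", i, k_deg * n))
        total = sum(a for _, _, a in props)
        if total == 0.0:
            break                              # no molecules left
        t += random.expovariate(total)         # exponential waiting time to next event
        r, acc = random.random() * total, 0.0
        for kind, i, a in props:               # pick the next event proportionally to its propensity
            acc += a
            if a > 0.0 and acc >= r:
                state[i] -= 1
                if kind == "left":
                    state[i - 1] += 1
                elif kind == "right":
                    state[i + 1] += 1
                break

    print("final molecule counts per voxel:", state)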