58,917 research outputs found

    Individual Differences in the Experience of Cognitive Workload

    This study investigated the roles of four psychosocial variables – anxiety, conscientiousness, emotional intelligence, and Protestant work ethic – in subjective ratings of cognitive workload as measured by the Task Load Index (TLX), and the further connections between the four variables and TLX ratings of task performance. The four variables represented aspects of an underlying construct of elasticity versus rigidity in response to workload. Participants were 141 undergraduates who performed a vigilance task under different speeded conditions while working on a jigsaw puzzle for 90 minutes. Regression analysis showed that anxiety and emotional intelligence were the two variables most proximally related to TLX ratings. TLX ratings contributed to the prediction of performance on the puzzle, but not on the vigilance task. Severity error bias was evident in some of the ratings. Although working in pairs improved performance, it also resulted in higher ratings of temporal demand and perceived performance pressure.

    Crew workload strategies in advanced cockpits

    Many methods of measuring and predicting operator workload have been developed that provide useful information in the design, evaluation, and operation of complex systems and that aid in developing models of human attention and performance. However, the relationships between such measures, imposed task demands, and measures of performance remain complex and even contradictory. It appears that we have ignored an important factor: people do not passively translate task demands into performance. Rather, they actively manage their time, resources, and effort to achieve an acceptable level of performance while maintaining a comfortable level of workload. While such adaptive, creative, and strategic behaviors are the primary reason that human operators remain an essential component of all advanced man-machine systems, they also result in individual differences in the way people respond to the same task demands and in inconsistent relationships among measures. Finally, we are able to measure workload and performance, but interpreting such measures remains difficult; it is still not clear how much workload is too much or too little, nor what the consequences of suboptimal workload are for system performance and the mental, physical, and emotional well-being of the human operators. The rationale and philosophy of a program of research developed to address these issues are reviewed and contrasted with traditional methods of defining, measuring, and predicting human operator workload. Viewgraphs are given.

    Proactive cloud management for highly heterogeneous multi-cloud infrastructures

    Various studies in the literature have demonstrated that the cloud computing paradigm can help improve the availability and performance of applications subject to software anomalies. Indeed, the cloud resource-provisioning model enables users to rapidly acquire new processing resources, even distributed over different geographical regions, that can be promptly used in the case of, e.g., crashes or hangs of running machines, as well as to balance the load of overloaded machines. Nevertheless, managing a complex, geographically distributed cloud deployment can be a difficult and time-consuming task. The Autonomic Cloud Manager (ACM) Framework is an autonomic framework for supporting proactive management of applications deployed over multiple cloud regions. It uses machine learning models to predict failures of virtual machines and to proactively redirect load to healthy machines or cloud regions. In this paper, we study different policies for performing efficient proactive load balancing across cloud regions in order to mitigate the effect of software anomalies. These policies use predictions of the mean time to failure of virtual machines. We consider the case of heterogeneous cloud regions, i.e., regions with different amounts of resources, and we provide an experimental assessment of these policies in the context of the ACM Framework.
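    The abstract does not spell out the policies themselves. As a rough, hedged illustration of the general idea, the sketch below routes load across heterogeneous regions using their capacity and a predicted mean time to failure; the `Region` class, `pick_region` function, threshold, and weighting rule are all hypothetical assumptions for illustration, not the ACM Framework's API or its actual policies.

```python
# Illustrative sketch only: route load by capacity and predicted MTTF.
# All names and the weighting rule are assumptions, not the ACM Framework's API.
from dataclasses import dataclass
import random

@dataclass
class Region:
    name: str
    capacity: int            # resources in the region (regions are heterogeneous)
    predicted_mttf_h: float  # predicted mean time to failure of its VMs, in hours

def pick_region(regions, min_mttf_h=1.0):
    """Route a request to a region, favouring healthy, well-provisioned regions.

    Regions whose predicted MTTF falls below a threshold are treated as at risk
    and excluded; the rest are chosen with probability proportional to
    capacity * predicted MTTF, so larger, healthier regions absorb more load.
    """
    healthy = [r for r in regions if r.predicted_mttf_h >= min_mttf_h]
    candidates = healthy or regions  # fall back to all regions if none look healthy
    weights = [r.capacity * r.predicted_mttf_h for r in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

regions = [
    Region("eu-west", capacity=40, predicted_mttf_h=12.0),
    Region("us-east", capacity=80, predicted_mttf_h=0.5),   # predicted to fail soon
    Region("ap-south", capacity=20, predicted_mttf_h=30.0),
]
# us-east is excluded; load is split between eu-west and ap-south,
# weighted 480 vs 600 by capacity * MTTF.
print(pick_region(regions).name)
```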

    Towards Operator-less Data Centers Through Data-Driven, Predictive, Proactive Autonomics

    Continued reliance on human operators for managing data centers is a major impediment to their ever reaching extreme dimensions. Large computer systems in general, and data centers in particular, will ultimately be managed using predictive computational and executable models obtained through data-science tools, and at that point the intervention of humans will be limited to setting high-level goals and policies rather than performing low-level operations. Data-driven autonomics, where management and control are based on holistic predictive models that are built and updated using live data, opens one possible path towards limiting the role of operators in data centers. In this paper, we present a data-science study of a public Google dataset collected in a 12K-node cluster, with the goal of building and evaluating predictive models for node failures. Our results support the practicality of a data-driven approach by showing the effectiveness of predictive models based on data found in typical data center logs. We use BigQuery, the big data SQL platform from the Google Cloud suite, to process massive amounts of data and generate a rich feature set characterizing node state over time. We describe how an ensemble classifier can be built out of many Random Forest classifiers, each trained on these features, to predict whether nodes will fail in a future 24-hour window. Our evaluation reveals that if we limit false positive rates to 5%, we can achieve true positive rates between 27% and 88%, with precision varying between 50% and 72%. This level of performance allows us to recover a large fraction of jobs' executions (by redirecting them to other nodes when a failure of the present node is predicted) that would otherwise have been wasted due to failures. [...]
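    The abstract describes the modelling approach only at a high level. The sketch below illustrates the same general pattern with scikit-learn on synthetic data: several Random Forest classifiers are averaged into an ensemble, and the decision threshold is chosen so that roughly 5% of negatives on a validation set are flagged. The features, labels, and threshold-selection rule are illustrative assumptions, not the paper's BigQuery pipeline or its feature set.

```python
# Illustrative sketch (not the paper's pipeline): average several Random Forests
# and pick a score threshold that keeps the validation false positive rate near 5%.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))  # stand-in for per-node log features
# Synthetic label standing in for "node fails within the next 24 hours".
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.5, size=5000) > 2.0).astype(int)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Ensemble of Random Forests, each trained on a bootstrap sample of the training set.
forests = []
for seed in range(5):
    idx = rng.integers(0, len(X_tr), size=len(X_tr))
    rf = RandomForestClassifier(n_estimators=100, random_state=seed)
    forests.append(rf.fit(X_tr[idx], y_tr[idx]))

# Average the per-forest failure probabilities on the validation set.
scores = np.mean([rf.predict_proba(X_val)[:, 1] for rf in forests], axis=0)

# Threshold at the 95th percentile of negative-class scores, so about 5% of
# healthy nodes are flagged (an approximate way to cap the false positive rate).
threshold = np.quantile(scores[y_val == 0], 0.95)
pred = scores >= threshold
tpr = pred[y_val == 1].mean()
fpr = pred[y_val == 0].mean()
print(f"threshold={threshold:.3f}  TPR={tpr:.2f}  FPR={fpr:.2f}")
```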

    Cloud Index Tracking: Enabling Predictable Costs in Cloud Spot Markets

    Cloud spot markets rent VMs for a variable price that is typically much lower than the price of on-demand VMs, which makes them attractive for a wide range of large-scale applications. However, applications that run on spot VMs suffer from cost uncertainty, since spot prices fluctuate, in part, based on supply, demand, or both. The difficulty of predicting spot prices affects users and applications: the former cannot effectively plan their IT expenditures, while the latter cannot infer the availability and performance of spot VMs, which are a function of their variable price. To address this problem, we use properties of cloud infrastructure and workloads to show that prices become more stable and predictable as they are aggregated together. We leverage this observation to define an aggregate index price for spot VMs that serves as a reference for what users should expect to pay. We show that, even when the spot prices for individual VMs are volatile, the index price remains stable and predictable. We then introduce cloud index tracking: a migration policy that tracks the index price to ensure that applications running on spot VMs incur a predictable cost, by migrating to a new spot VM if the current VM's price significantly deviates from the index price.
    Comment: ACM Symposium on Cloud Computing 201
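    As a toy illustration of the tracking idea only: compute an index as an aggregate of a pool of spot prices and migrate when the current VM's price drifts too far above it. The function names, the simple mean as the aggregate, the 25% tolerance, and the example prices below are assumptions for illustration, not the paper's actual index definition or migration policy.

```python
# Illustrative sketch, not the paper's policy: mean-price index plus a
# deviation-triggered migration check.
from statistics import mean

def index_price(spot_prices):
    """Aggregate index over a pool of spot VM prices (here simply their mean)."""
    return mean(spot_prices)

def should_migrate(current_price, index, tolerance=0.25):
    """Migrate when the current VM's price exceeds the index by more than `tolerance`."""
    return current_price > index * (1.0 + tolerance)

# Hypothetical spot prices ($/hour) for one VM type across markets.
pool = {
    "m5.large/us-east-1a": 0.035,
    "m5.large/us-east-1b": 0.041,
    "m5.large/us-west-2a": 0.038,
}
idx = index_price(pool.values())
current = 0.052  # price of the VM the application currently runs on

if should_migrate(current, idx):
    # Pick the cheapest VM in the pool as the migration target.
    target = min(pool, key=pool.get)
    print(f"index={idx:.3f}, current={current:.3f} -> migrate to {target}")
```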