
    Can Charisma Be Taught? Tests of Two Interventions

    We tested whether we could teach individuals to behave more charismatically, and whether changes in charisma affected leader outcomes. In Study 1, a mixed-design field experiment, we randomly assigned 34 middle-level managers to a control or an experimental group. Three months later, we reassessed the managers using their coworker ratings (Time 1 raters = 343; Time 2 raters = 321). In Study 2, a within-subjects laboratory experiment, we videotaped 41 MBA participants giving a speech. We then taught them how to behave more charismatically, and they redelivered the speech 6 weeks later. Independent assessors (n = 135) rated the speeches. Results from the studies indicated that the training had significant effects on ratings of leader charisma (mean D = .62) and that charisma had significant effects on ratings of leader prototypicality and emergence.

    Great American Desert: Arid Lands, Federal Exploration, and the Construction of a Continental United States

    This dissertation examines how the Great American Desert of the pre-Civil War era ceased to be a desert and how the modern American desert became American. The project begins after the Louisiana Purchase with the advent of the Great American Desert, the historical geography that framed the Great Plains as an American Sahara and thus as a foreign land unfit for agricultural occupation. Modern historians and historical geographers have largely dismissed the Great American Desert as a geographic myth. This work takes a different approach. One of the central contentions here is that it is impossible to know precisely what nineteenth-century Americans meant when they used the word desert, because they used the word in a variety of ways that do not conform to modern usage. Sometimes they used it in reference to arid landscapes; other times they used it—without climatic specificity—to describe any tract of land deemed foreign, barren, waste, or unreclaimed (including forests and wetlands). All of which explains why roughly half of the conterminous United States—the Great Plains, eastern California, Oregon, and Washington, and much of everything in between—has, at one time or another, been mapped or described as desert. An environmental and cultural history of US territorial exploration and expansion from Lewis and Clark to the operations of the U.S. Geological Survey at the end of the nineteenth century, the larger arc of the study plots how the old territorial regime of desert as foreign wasteland eventually gave way to, or at least came to coincide with, a new territorial regime—one that not only framed deserts as arid lands, but converted deserts from foreign into domestic territory through expressions of affection for the desert West. The principal aim here is not to develop an operative definition of the word desert, or to determine whether or not nineteenth-century Americans actually believed the Great Plains were comparable to the Great Desert of North Africa, but rather to track changes in the socio-cultural meaning of deserts in American territorial discourse and how those changes in meaning informed the larger project of American continentalism.

    Resilience and fault tolerance in high-performance computing for numerical weather and climate prediction

    Progress in numerical weather and climate prediction accuracy greatly depends on the growth of the available computing power. As the number of cores in top computing facilities pushes into the millions, increased average frequency of hardware and software failures forces users to review their algorithms and systems in order to protect simulations from breakdown. This report surveys hardware, application-level and algorithm-level resilience approaches of particular relevance to time-critical numerical weather and climate prediction systems. A selection of applicable existing strategies is analysed, featuring interpolation-restart and compressed checkpointing for the numerical schemes, in-memory checkpointing, user-level failure mitigation and backup-based methods for the systems. Numerical examples showcase the performance of the techniques in addressing faults, with particular emphasis on iterative solvers for linear systems, a staple of atmospheric fluid flow solvers. The potential impact of these strategies is discussed in relation to current development of numerical weather prediction algorithms and systems towards the exascale. Trade-offs between performance, efficiency and effectiveness of resiliency strategies are analysed and some recommendations outlined for future developments
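    The in-memory checkpointing and restart pattern surveyed above can be illustrated with a toy iterative solver. The Python sketch below is not taken from the report; the solver (a plain Jacobi iteration), the fault-detection heuristic, and parameters such as checkpoint_every and inject_fault_at are illustrative assumptions meant only to show the pattern: keep a periodic copy of the iterate in memory, flag a (simulated) fault when the residual blows up, and roll back to the last checkpoint instead of restarting the whole solve.

        # Minimal sketch of in-memory checkpointing around a Jacobi solve of A x = b.
        # The fault is injected deliberately to exercise the detection/rollback path.
        import numpy as np

        def jacobi_with_checkpointing(A, b, tol=1e-10, max_iter=10_000,
                                      checkpoint_every=10, inject_fault_at=15):
            D = np.diag(A)                      # diagonal of A
            R = A - np.diagflat(D)              # off-diagonal remainder
            x = np.zeros_like(b)
            checkpoint = (0, x.copy())          # (iteration, iterate) kept in memory

            it = 0
            while it < max_iter:
                x_new = (b - R @ x) / D         # one Jacobi sweep

                if it == inject_fault_at:       # simulate a silent data corruption
                    x_new[len(x_new) // 2] = 1e30

                # Cheap sanity check: a residual blow-up flags a fault -> roll back.
                if (not np.isfinite(x_new).all()
                        or np.linalg.norm(A @ x_new - b) > 1e3 * np.linalg.norm(b)):
                    it, x = checkpoint
                    x = x.copy()
                    inject_fault_at = -1        # fault handled; do not re-inject
                    continue

                x = x_new
                it += 1
                if it % checkpoint_every == 0:
                    checkpoint = (it, x.copy()) # refresh the in-memory checkpoint
                if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
                    break
            return x, it

        if __name__ == "__main__":
            n = 100
            A = (np.diag(np.full(n, 4.0))
                 + np.diag(np.full(n - 1, -1.0), 1)
                 + np.diag(np.full(n - 1, -1.0), -1))
            b = np.ones(n)
            x, iters = jacobi_with_checkpointing(A, b)
            print(f"converged after {iters} sweeps, "
                  f"residual = {np.linalg.norm(A @ x - b):.2e}")

    In a real forecast system the checkpoint would hold the full model state (and an interpolation-restart scheme would reconstruct lost data rather than merely roll back), but the control flow is the same: detect, recover, continue.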

    Resiliency in numerical algorithm design for extreme scale simulations

    This work is based on the seminar titled ‘Resiliency in Numerical Algorithm Design for Extreme Scale Simulations’ held March 1–6, 2020, at Schloss Dagstuhl, which was attended by all the authors. Advanced supercomputing is characterized by very high computation speeds, achieved at the cost of an enormous amount of resources and correspondingly high costs. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10^23 floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation and (2) how do we best design the algorithms and software to meet these requirements? While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically.
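    The energy arithmetic quoted above, and the warning that the mean time between failures (MTBF) may drop below the recovery time, can be checked with a short back-of-the-envelope script. The sketch below is not part of the article; the electricity price, checkpoint write time, recovery time, and MTBF values are assumptions chosen for illustration, and the checkpoint interval uses the standard first-order Young/Daly estimate sqrt(2 * C * MTBF) rather than any method proposed by the seminar.

        # Back-of-the-envelope numbers for the 48 h / 20 MW exascale scenario,
        # plus a first-order model of checkpoint/restart efficiency.
        from math import sqrt

        power_mw = 20.0                # system power draw (from the abstract)
        runtime_h = 48.0               # job length (from the abstract)
        price_eur_per_kwh = 0.10       # assumed electricity price

        energy_kwh = power_mw * 1_000 * runtime_h    # 960,000 kWh ~ one million kWh
        cost_eur = energy_kwh * price_eur_per_kwh    # ~ 96,000 EUR ~ 100k EUR
        flops = 1e18 * runtime_h * 3600              # ~1.7e23 ops at a sustained exaflop/s
        print(f"energy ~ {energy_kwh:,.0f} kWh, cost ~ {cost_eur:,.0f} EUR, ops ~ {flops:.1e}")

        def optimal_interval(checkpoint_cost_s, mtbf_s):
            """Young's first-order checkpoint interval: sqrt(2 * C * MTBF)."""
            return sqrt(2.0 * checkpoint_cost_s * mtbf_s)

        def useful_work_fraction(interval_s, checkpoint_cost_s, recovery_s, mtbf_s):
            """Rough first-order model: lose C per interval for the checkpoint, and on
            average half an interval plus the recovery time whenever a failure strikes."""
            work = interval_s / (interval_s + checkpoint_cost_s)
            failure_overhead = (interval_s / 2.0 + recovery_s) / mtbf_s
            return max(0.0, work * (1.0 - failure_overhead))

        checkpoint_cost_s = 600.0      # assumed: 10 min to write a multi-petabyte checkpoint
        recovery_s = 900.0             # assumed: 15 min to restore and restart
        for mtbf_s in (24 * 3600.0, 4 * 3600.0, 1800.0, 600.0):
            t_opt = optimal_interval(checkpoint_cost_s, mtbf_s)
            eff = useful_work_fraction(t_opt, checkpoint_cost_s, recovery_s, mtbf_s)
            print(f"MTBF {mtbf_s / 3600:5.2f} h -> checkpoint every {t_opt / 60:6.1f} min, "
                  f"useful work ~ {eff:.0%}")

    With these assumptions the useful-work fraction collapses towards zero once the MTBF approaches the checkpoint and recovery cost, which is precisely the "no progress" scenario the abstract warns about.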