10 research outputs found

    Is Being Funny a Useful Policy? How Local Governments' Humorous Crisis Response Strategies and Crisis Responsibilities Influence Trust, Emotions, and Behavioral Intentions

    This study is the first to investigate how a local government's humorously framed social media response to a low-severity crisis influences people's trust in the local government and their crisis-related behavioral intentions, depending on the government's responsibility for the crisis. Based on situational crisis communication theory, we examined the mediating role of experienced positive and negative affect in people's responses to a local government's crisis communication strategy. In exploratory analyses, we also examined the predictive power and moderating roles of demographics, sense of humor, disposition to trust, and the specific crisis scenarios. A total of 517 people participated in an online experiment in which they were confronted with three randomly presented fictitious crisis scenarios in which the local government's crisis responsibility (high versus low) and the framing of its crisis response strategy (humorous versus rational Twitter posts) were systematically varied between subjects. First, the results mostly corroborate earlier findings about the degree of crisis responsibility (that is, when a government's crisis responsibility is high, people report less trust and weaker behavioral intentions) and about the mediating role of experienced affect. Second, we found that humorously framed strategies negatively influence trust and positive affect (but not behavioral intentions). In contrast to earlier findings, the crisis responsibility × framing interaction was not significant. Altogether, the results advise against using humor in crisis communication on social media, even in low-severity crises. Exploratory analyses indicate that further investigations should focus on specific crisis characteristics and potential moderators.

    Crowdsourcing hypothesis tests: Making transparent how design choices shape research results

    To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses and a lack of support for three hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.

    Feedback-Instrument zur Rettungskräfte-Entwicklung – Seminare und Tagesveranstaltungen (FIRE-ST)

    The Feedback Instrument for Rescue Force Development – Seminars and One-Day Events (FIRE-ST) measures the quality of rescue force training in the context of one- to three-day seminars and workshops. The instrument is an adapted version of the Feedback Instrument for Rescue Force Development (FIRE, Schulte & Thielsch, 2019), a questionnaire for the evaluation of multi-week leadership training courses for rescue forces. The FIRE-ST was validated with data from 247 seminar participants at the Institute of the North Rhine-Westphalia Fire Brigade (IdF NRW). Confirmatory factor analyses suggested three factors at the level of learning processes (trainer behavior, demand level, and structure) and two factors at the level of learning outcomes (competence acquisition and transfer). The scales show acceptable to good internal consistency, and there are clear indications of construct and criterion validity.

    Feedback-Instrument zur Rettungskräfte-Entwicklung – Einsatzübungen (FIRE-E)

    The Feedback Instrument for Rescue Force Development – Emergency Exercises (FIRE-E) measures the quality of rescue force training in simulated emergency exercises. The instrument is an additional module for the Feedback Instrument for Rescue Force Development (FIRE, Schulte & Thielsch, 2019; Schulte, Babiel, Messinger & Thielsch, 2019), a questionnaire for the evaluation of leadership training courses for rescue forces. The FIRE-E was validated with data from 375 course participants at the Institute of the North Rhine-Westphalia Fire Brigade (IdF NRW). A confirmatory factor analysis suggested one underlying factor. The scale shows good internal consistency, and there are clear indications of construct and criterion validity.

    Managing Pandemics—Demands, Resources, and Effective Behaviors Within Crisis Management Teams

    Pandemics, such as the COVID-19 crisis, are highly complex emergencies that can be handled neither by individuals nor by any single municipality, organization, or even country alone. Such situations require multidisciplinary crisis management teams (CMTs) at different administrative levels. However, most existing CMTs are trained for local and temporary emergencies rather than for international and long-lasting crises. Moreover, CMT members in a pandemic face additional demands due to unknown characteristics of the disease and a highly volatile environment. To support and ensure the effectiveness of CMTs, we need to understand how CMT members can successfully cope with these multiple demands. Connecting teamwork research with the job demands and resources approach as a starting framework, we conducted structured interviews and critical incident analyses with 144 members of various CMTs during the COVID-19 pandemic. Content analyses revealed both perceived demands and perceived resources in CMTs. Moreover, structuring work processes; open, precise, and regular communication; and anticipatory, goal-oriented, and fast problem solving were described as particularly effective behaviors in CMTs. We illustrate our findings in an integrated model and derive practical recommendations for the work and future training of CMTs.
