
    Exploring the Mind of the Interviewer: Findings from Research with Interviewers to Improve the Survey Process

    The interviewers’ task in the data collection process is a complex one, with many judgments and decisions being made from moment to moment as they ask questions and elicit answers from respondents (Japec, 2008). Many survey organizations train their interviewers to use standardized language and read questions verbatim. In practice, however, interviewers may need to use a conversational approach and probe respondents to get the answers needed. This research explores the process by which interviewers make such decisions in real time by conducting research with interviewers about their experiences collecting data. Using a cognitive interview approach, we asked interviewers about multiple aspects of the survey process, including how they handle asking and probing about sensitive or difficult-to-answer questions, how they decide whether to probe further or accept an answer as-is, and when they decide to use lead-ins to questions, such as apologizing or distancing themselves from the survey. We also had interviewers provide feedback on hypothetical vignettes (varying in their level of sensitivity and difficulty) that closely mimicked interviewer-respondent interactions they might experience in the field. We conducted a total of 27 semi-structured cognitive interviews with survey interviewers from a federal statistical agency. The interviewers had a wide range of experience interviewing at their agency, from under one year to over 15 years, and across multiple survey topics, including employment, health, housing, crime, and expenditures. Two researchers conducted the interviews, three in person and 24 by telephone, each lasting approximately 60 minutes. Major themes that emerged during the interviews were coded and analyzed by the researchers. For instance, we categorized the reasons respondents find questions sensitive or difficult to answer (e.g., invasive questions, recall problems, privacy concerns), and we identified and coded the types of question lead-ins interviewers reported using to address sensitive or difficult questions (e.g., distancing, apologizing, and repeating the question). We also provide qualitative analysis and descriptions of emergent probes and other techniques interviewers reported using to help with the survey process, such as reminding respondents of the confidentiality of their responses, the importance of their data, and the ability to skip a question, as well as how interviewers decide whether to probe further or accept a response. We also found evidence that interviewers sometimes experience sensitivity or discomfort themselves when asking respondents about sensitive topics, and we describe strategies they have identified to overcome those challenges. Finally, we will report on the interviewers’ reactions to hypothetical vignettes depicting interviewer-respondent interactions, provide analyses of how interviewers handle these situations, and present their ratings of how sensitive or difficult these survey questions would be for them to administer and for respondents to answer in the field. Learning directly from interviewers about how they think through an interview and what obstacles they face is a critical step toward understanding how to develop realistic data collection decisions and improve training and support for interviewers. We will discuss the results of these interviews and their implications for improving the survey process.

    Are your lights off? Using problem frames to diagnose system failures

    This paper reports on our experience of investigating the role of software systems in the power blackout that affected parts of the United States and Canada on 14 August 2003. Based on a detailed study of the official report on the blackout, our investigation has aimed to bring out requirements engineering lessons that can inform development practices for dependable software systems. Since the causes of failures are typically rooted in the complex structures of software systems and their world contexts, we have deployed and evaluated a framework that looks beyond the scope of the software and into its physical context, directing attention to places in the system structures where failures are likely to occur. We report that (i) Problem Frames were effective in diagnosing the causes of failures and documenting them in a schematic and accessible way, and (ii) errors in addressing the concerns of biddable domains, model-building problems, and monitoring problems contributed to the blackout.

    Information theory based detection against network behavior mimicking DDoS attacks

    DDoS is a spy-versus-spy game between attackers and detectors. Attackers mimic legitimate network traffic patterns to disable detection algorithms that rely on these features, and discriminating mimicking DDoS attacks from massive legitimate network access remains an open problem. We observed that zombies use controlled functions to pump attack packets toward the victim; therefore, the attack flows arriving at the victim always share some properties, e.g. packet distribution behaviors, that legitimate flows do not possess within a short time period. Based on this observation, once suspicious flows toward a server appear, we calculate the distance between the packet distribution behaviors of the suspicious flows. If the distance is less than a given threshold, it is a DDoS attack; otherwise, it is legitimate access. Our analysis and preliminary experiments indicate that the proposed method can discriminate mimicking flooding attacks from legitimate access efficiently and effectively.
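    As a rough illustration of the detection rule described above, the sketch below compares the packet-count distributions of suspicious flows and flags an attack when the flows are nearly identical in behavior. The time-slot bucketing, the Jensen-Shannon divergence, and the threshold value are illustrative assumptions, not necessarily the feature or metric used in the paper.

```python
"""Minimal sketch, assuming per-flow packet counts over shared time slots
and a Jensen-Shannon distance; these choices are illustrative only."""
import numpy as np

def packet_distribution(packet_counts):
    """Normalize per-slot packet counts into a probability distribution."""
    counts = np.asarray(packet_counts, dtype=float) + 1e-12  # avoid zeros
    return counts / counts.sum()

def js_divergence(p, q):
    """Jensen-Shannon divergence between two distributions (bits)."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def looks_like_mimicking_ddos(flows, threshold=0.05):
    """Flag an attack when all suspicious flows share the same distribution.

    `flows` is a list of per-flow packet-count vectors observed over the
    same time slots; a small pairwise distance means the flows follow the
    same (bot-controlled) pumping function, which independent legitimate
    clients rarely do within a short window.
    """
    dists = [packet_distribution(f) for f in flows]
    pairwise = [js_divergence(dists[i], dists[j])
                for i in range(len(dists)) for j in range(i + 1, len(dists))]
    return max(pairwise, default=0.0) < threshold

# Example: three bot flows driven by the same pumping function
bots = [[10, 20, 30, 20], [11, 19, 31, 21], [9, 21, 29, 19]]
print(looks_like_mimicking_ddos(bots))  # True for these illustrative numbers
```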

    Meta-Learning Dynamics Forecasting Using Task Inference

    Current deep learning models for dynamics forecasting struggle with generalization: they can only forecast within a specific domain and fail when applied to systems with different parameters, external forces, or boundary conditions. We propose a model-based meta-learning method called DyAd which can generalize across heterogeneous domains by partitioning them into different tasks. DyAd has two parts: an encoder, which infers the time-invariant hidden features of the task with weak supervision, and a forecaster, which learns the shared dynamics of the entire domain. The encoder adapts and controls the forecaster during inference using adaptive instance normalization and adaptive padding. Theoretically, we prove that the generalization error of such a procedure is related to the task relatedness in the source domain, as well as the domain differences between source and target. Experimentally, we demonstrate that our model outperforms state-of-the-art approaches on both turbulent flow and real-world ocean data forecasting tasks.
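    The encoder/forecaster split and the adaptive-instance-normalization conditioning can be sketched roughly as follows. The layer sizes, module names (TaskEncoder, AdaIN, Forecaster), and 2D/3D choices are illustrative assumptions for a minimal PyTorch example, not the DyAd reference implementation.

```python
"""Minimal sketch of task-conditioned forecasting via AdaIN, assuming
spatiotemporal inputs of shape (batch, channels, time, height, width)."""
import torch
import torch.nn as nn

class TaskEncoder(nn.Module):
    """Infers a time-invariant task embedding z from an input sequence."""
    def __init__(self, channels=2, z_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))                 # pool over time and space
        self.fc = nn.Linear(16, z_dim)

    def forward(self, x):                            # x: (B, C, T, H, W)
        return self.fc(self.conv(x).flatten(1))      # z: (B, z_dim)

class AdaIN(nn.Module):
    """Adaptive instance normalization: z controls per-channel scale/shift."""
    def __init__(self, z_dim, channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.affine = nn.Linear(z_dim, 2 * channels)

    def forward(self, h, z):
        gamma, beta = self.affine(z).chunk(2, dim=1)
        return self.norm(h) * (1 + gamma[..., None, None]) + beta[..., None, None]

class Forecaster(nn.Module):
    """Predicts the next frame; shared across tasks, adapted via AdaIN."""
    def __init__(self, channels=2, z_dim=32, hidden=32):
        super().__init__()
        self.inp = nn.Conv2d(channels, hidden, 3, padding=1)
        self.adain = AdaIN(z_dim, hidden)
        self.out = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, frame, z):
        h = torch.relu(self.inp(frame))
        return self.out(torch.relu(self.adain(h, z)))

# Usage: encode the task from a short history, then forecast the next frame.
history = torch.randn(4, 2, 8, 64, 64)               # (B, C, T, H, W)
z = TaskEncoder()(history)
next_frame = Forecaster()(history[:, :, -1], z)       # (B, C, H, W)
```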

    Improving Prototypical Part Networks with Reward Reweighing, Reselection, and Retraining

    In recent years, work has gone into developing deep interpretable methods for image classification that clearly attribute a model's output to specific features of the data. One such method is the prototypical part network (ProtoPNet), which attempts to classify images based on meaningful parts of the input. While this method produces interpretable classifications, it often learns to classify from spurious or inconsistent parts of the image. Hoping to remedy this, we take inspiration from recent developments in Reinforcement Learning from Human Feedback (RLHF) to fine-tune these prototypes. By collecting human annotations of prototype quality on a 1-5 scale on the CUB-200-2011 dataset, we construct a reward model that learns to identify non-spurious prototypes. In place of a full RL update, we propose the reweighted, reselected, and retrained prototypical part network (R3-ProtoPNet), which adds three steps to the ProtoPNet training loop. The first two steps are reward-based reweighting and reselection, which align prototypes with human feedback. The final step is retraining to realign the model's features with the updated prototypes. We find that R3-ProtoPNet improves the overall consistency and meaningfulness of the prototypes but lowers test predictive accuracy when used independently. When multiple R3-ProtoPNets are incorporated into an ensemble, we find an increase in test predictive performance while maintaining interpretability.
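    A rough sketch of how the reward-based reweighting and reselection steps might look is given below; the reward-model interface, the [0, 1] scoring scale, and the replacement rule are illustrative assumptions, not taken from the R3-ProtoPNet code. In the full loop, the retraining step would then fine-tune the network's features against the reselected prototypes.

```python
"""Minimal sketch, assuming prototype and patch embeddings live in the same
space and a reward model maps embeddings to quality scores in [0, 1]."""
import torch

def reweight_and_reselect(prototypes, patch_pool, reward_model, keep_threshold=0.5):
    """Score each prototype, down-weight low-reward ones, and replace those
    below the threshold with the highest-reward candidate patches.

    prototypes: (P, D) tensor of prototype vectors
    patch_pool: (N, D) tensor of candidate image-patch embeddings
    reward_model: callable mapping an embedding batch to scores in [0, 1]
    """
    with torch.no_grad():
        scores = reward_model(prototypes)             # (P,) predicted quality
        weights = scores.clamp(min=1e-3)              # reweighting factors

        patch_scores = reward_model(patch_pool)       # (N,) quality of patches
        best_patches = patch_pool[patch_scores.argsort(descending=True)]

        reselected = prototypes.clone()
        bad = (scores < keep_threshold).nonzero(as_tuple=True)[0]
        for rank, idx in enumerate(bad):
            # replace a spurious prototype with the next-best scoring patch
            reselected[idx] = best_patches[rank % len(best_patches)]
    return reselected, weights

# Toy reward model standing in for one trained on the 1-5 human ratings
reward_model = lambda x: torch.sigmoid(x.sum(dim=1))
protos, patches = torch.randn(10, 64), torch.randn(200, 64)
new_protos, proto_weights = reweight_and_reselect(protos, patches, reward_model)
```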