386 research outputs found

    Information for adaptation and response to flooding, multi-stakeholder perspectives in Nigeria

    Provision of data and information for disaster risk reduction is increasingly important to enable resilience. However, the focus of provision in many African countries is limited to national-scale risk assessment and meteorological data. The research aimed to consider the perspectives on availability of, and access to, information among different local urban actors who require reliable and specific information to make informed decisions. The research used workshop discussions and questionnaires to collect views from stakeholders in flood risk management in Nigerian cities about their current access to information and their requirements. The results confirmed that stakeholders and communities agree in recognising the importance of climate information. Findings identified issues surrounding communication between agencies, government and technical experts. The role of the media and business in filling the vacuum left by state provision of information was further highlighted, demonstrating the potential for public-private partnerships to support adaptation and response to flooding. However, significant differences in access between sub-groups were also revealed, such that some marginalised groups may be excluded from information. It follows that climate services, data and information provision need to be collaboratively designed in order to be more inclusive, meet user requirements and build community capacity.

    The Online Panels Benchmarking Study: a Total Survey Error comparison of findings from probability-based surveys and nonprobability online panel surveys in Australia

    The pervasiveness of the internet has led online research, and particularly online research undertaken via nonprobability online panels, to become the dominant mode of sampling and data collection used by the Australian market and social research industry. There are broad-based concerns that the rapid increase in the use of nonprobability online panels in Australia has not been accompanied by an informed debate about the advantages and disadvantages of probability and nonprobability surveys. The 2015 Australian Online Panels Benchmarking Study was undertaken to inform this debate; we report on the findings from a single national questionnaire administered across three different probability samples and five different nonprobability online panels. This study enables us to investigate whether Australian surveys using probability sampling methods produce results different from Australian online surveys relying on nonprobability sampling methods, where accuracy is measured relative to independent population benchmarks. In doing so, we build on similar international research in this area and discuss our findings as they relate to coverage error, nonresponse error, adjustment error and measurement error.
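
    The study's accuracy criterion, error relative to independent population benchmarks, can be made concrete with a small calculation. The Python sketch below is only an illustration, not the study's actual analysis: the benchmark items and percentage figures are hypothetical, and the summary shown (average absolute error across items) is simply one common way such comparisons are reported.

```python
def average_absolute_error(estimates, benchmarks):
    """Mean absolute difference (in percentage points) between survey
    estimates and independent population benchmarks for the same items."""
    if estimates.keys() != benchmarks.keys():
        raise ValueError("estimates and benchmarks must cover the same items")
    errors = [abs(estimates[item] - benchmarks[item]) for item in benchmarks]
    return sum(errors) / len(errors)

# Hypothetical figures (percentages) for three benchmark items
benchmarks = {"current_smoker": 14.5, "drivers_licence": 83.0, "volunteered_last_year": 31.0}
panel_estimates = {"current_smoker": 11.2, "drivers_licence": 87.5, "volunteered_last_year": 36.4}

print(f"Average absolute error: {average_absolute_error(panel_estimates, benchmarks):.1f} points")
```

    In a benchmarking study the same calculation would be repeated for each probability sample and each nonprobability panel, so all sources can be compared on a common accuracy metric.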

    Building a probability-based online panel: Life in Australia™

    Life in Australia™ was created to provide Australian researchers, policy makers, academics and businesses with access to a scientifically sampled cross-section of Australian resident adults at a lower cost than telephone surveys. Panellists were recruited using dual-frame landline and mobile random digit dialling. The majority of panellists choose to complete questionnaires online. Representation of the offline population is ensured by interviewing by telephone those panellists who cannot or will not complete questionnaires online. Surveys are conducted about once a month, covering a variety of topics, most with a public opinion or health focus. Full panel waves yield 2000 or more completed surveys. Panellists are offered a small incentive for completing surveys, which they can choose to donate to a charity instead. This paper describes how Life in Australia™ was built and maintained before the first panel refreshment in June 2018. We document the qualitative pretesting used to inform the development of recruitment and enrolment communications materials, and the pilot tests used to assess alternative recruitment approaches and their comparative effectiveness. The methods used for the main recruitment effort are detailed, together with various outcome rates. The operation of the panel after recruitment is also described. We assess the performance of the panel compared with other probability surveys and nonprobability online access panels, and against benchmarks from high-quality sources. Finally, we assess Life in Australia™ from a total survey error perspective.

    Within-Household Selection Methods: A Critical Review and Experimental Examination

    Probability samples are necessary for making statistical inferences to the general population (Baker et al. 2013). Some countries (e.g. Sweden) have population registers from which to randomly select samples of adults. The U.S. and many other countries, however, do not have population registers. Instead, researchers (i) select a probability sample of households from lists of areas, addresses, or telephone numbers and (ii) select an adult within these sampled households. The process by which individuals are selected from sampled households to obtain a probability-based sample of individuals is called within-household (or within-unit) selection (Gaziano 2005). Within-household selection aims to provide each member of a sampled household with a known, nonzero chance of being selected for the survey (Gaziano 2005; Lavrakas 2008). Thus, it helps to ensure that the sample represents the target population rather than only those most willing and available to participate and, as such, reduces total survey error (TSE). In interviewer-administered surveys, trained interviewers can implement a prespecified within-household selection procedure, making the selection process relatively straightforward. In self-administered surveys, within-household selection is more challenging because households must carry out the selection task themselves. This can lead to errors in the selection process or to nonresponse, resulting in too many or too few of certain types of people in the data (e.g. typically too many female, highly educated, older, and white respondents), and may also lead to biased estimates for other items. We expect the smallest biases in estimates for items that do not differ across household members (e.g. political views, household income) and the largest biases for items that do differ across household members (e.g. household division of labor). In this chapter, we review recent literature on within-household selection across survey modes, identify the methodological requirements of studying within-household selection methods experimentally, provide an example of an experiment designed to improve the quality of selecting an adult within a household in mail surveys, and summarize current implications for survey practice regarding within-household selection. We focus on selection of one adult out of all possible adults in a household; screening households for members who have particular characteristics has additional complications (e.g. Tourangeau et al. 2012; Brick et al. 2016; Brick et al. 2011), although designing experimental studies for screening follows the same principles.
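
    As a concrete illustration of the selection step described above, here is a minimal Python sketch (not taken from the chapter; the household roster and names are hypothetical) that randomly selects one adult from a sampled household and records the within-household selection weight, i.e. the number of eligible adults, so each adult retains a known, nonzero selection probability.

```python
import random

def select_adult(household_adults, rng=random):
    """Randomly select one adult from a sampled household.

    Returns the selected adult and the within-household selection weight
    (the number of eligible adults): each adult's chance of selection is
    1 / n_adults, and the weight compensates for household size at analysis.
    """
    if not household_adults:
        raise ValueError("household has no eligible adults")
    selected = rng.choice(household_adults)
    weight = len(household_adults)  # inverse of the selection probability
    return selected, weight

# Hypothetical three-adult household
adults = ["Ana", "Ben", "Chidi"]
person, weight = select_adult(adults)
print(f"Selected {person}; within-household selection weight = {weight}")
```

    In interviewer-administered modes an interviewer can perform this random draw directly; in mail surveys the household must apply a written rule (for example, a next- or last-birthday instruction) as a practical stand-in, which is where the selection errors and nonresponse discussed above can enter.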

    Current Knowledge and Considerations Regarding Survey Refusals: Executive Summary of the AAPOR Task Force Report on Survey Refusals

    The landscape of survey research has arguably changed more significantly in the past decade than at any other time in its relatively brief history. In that short time, landline telephone ownership has dropped from some 98 percent of all households to less than 60 percent; cell-phone interviewing went from a novelty to a mainstay; address-based designs quickly became an accepted method of sampling the general population; and surveys via Internet panels became ubiquitous in many sectors of social and market research, even as they continue to raise concerns given their lack of random selection. Amid these widespread changes, it is perhaps not surprising that the substantial increase in refusal rates has received comparatively little attention. As we will detail, it was not uncommon for a study conducted 20 years ago to encounter one refusal for every one or two completed interviews, while today three or more refusals for every completed interview is commonplace. This trend has led to several concerns that motivate this Task Force. As refusal rates have increased, refusal bias (as a component of nonresponse bias) has become an increased threat to the validity of survey results. Of practical concern are the efficacy and cost implications of enhanced efforts to avert initial refusals and to convert refusals that do occur. Finally, though no less significant, are the ethical concerns raised by the possibility that efforts to minimize refusals can be perceived as coercing or harassing potential respondents. Indeed, perhaps the most important goal of this document is to foster greater consideration by the reader of the rights of respondents in survey research.
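
    As a rough illustration of how the refusal-to-complete ratios quoted above translate into rates, the sketch below uses a deliberately simplified calculation (refusals as a share of all cases worked); the AAPOR Standard Definitions distinguish several refusal rates that also account for unknown-eligibility cases, so this is an assumption-laden approximation and the case counts are hypothetical.

```python
def simple_refusal_rate(completes, refusals, other_nonrespondents=0):
    """Illustrative, simplified refusal rate: refusals as a share of all
    cases worked. Not the full AAPOR REF1/REF2/REF3 definitions."""
    total = completes + refusals + other_nonrespondents
    return refusals / total

# Roughly the two eras described above, with hypothetical counts:
# ~20 years ago: about one refusal for every two completed interviews
print(f"Then: {simple_refusal_rate(completes=1000, refusals=500):.1%}")
# Today: three or more refusals for every completed interview
print(f"Now:  {simple_refusal_rate(completes=1000, refusals=3000):.1%}")
```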