
    The cultural, ethnic and linguistic classification of populations and neighbourhoods using personal names

    There are growing needs to understand the nature and detailed composition of ethnic groups in today's increasingly multicultural societies. Ethnicity classifications are often hotly contested, but still greater problems arise from the quality and availability of classifications, with knock-on consequences for our ability to meaningfully subdivide populations. Name analysis and classification has been proposed as one efficient method of achieving such subdivisions in the absence of ethnicity data, and may be especially pertinent to public health and demographic applications. However, previous approaches to name analysis have been designed to identify one or a small number of ethnic minorities, not complete populations. This working paper presents a new methodology to classify the UK population and neighbourhoods into groups of common origin using surnames and forenames. It proposes a new ontology of ethnicity that combines some of its multidimensional facets: language, religion, geographical region, and culture. It uses data collected at very fine temporal and spatial scales and made available, subject to safeguards, at the level of the individual. Individuals are classified into 185 independently assigned categories of Cultural, Ethnic and Linguistic (CEL) groups, based on the probable origins of names. We include a justification for the need to classify ethnicity, a proposed CEL taxonomy, a description of how the CEL classification was built and applied, a preliminary external validation, and some examples of current and potential applications.
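
    To illustrate the kind of lookup such a name-based classification relies on, the sketch below assigns a CEL group from hypothetical surname and forename reference tables; the table contents, weights and group labels are invented for illustration and are not the paper's actual dictionaries.

```python
# Illustrative sketch of name-based CEL assignment: look up surname and
# forename in (hypothetical) reference tables of probable origins and
# combine the evidence. Table contents and weights are made up.
SURNAME_CEL = {
    "patel":   {"INDIAN": 0.95, "OTHER": 0.05},
    "murphy":  {"IRISH": 0.90, "ENGLISH": 0.10},
    "sanchez": {"SPANISH": 0.85, "OTHER": 0.15},
}
FORENAME_CEL = {
    "siobhan": {"IRISH": 0.95, "OTHER": 0.05},
    "raj":     {"INDIAN": 0.90, "OTHER": 0.10},
}

def classify_person(forename: str, surname: str) -> str:
    """Return the most probable CEL group for a forename/surname pair."""
    scores: dict[str, float] = {}
    for table, name, weight in (
        (SURNAME_CEL, surname.lower(), 0.6),   # surnames weighted more heavily (assumption)
        (FORENAME_CEL, forename.lower(), 0.4),
    ):
        for group, prob in table.get(name, {}).items():
            scores[group] = scores.get(group, 0.0) + weight * prob
    return max(scores, key=scores.get) if scores else "UNCLASSIFIED"

print(classify_person("Siobhan", "Murphy"))  # -> IRISH
```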

    The UK geography of the E-Society: a national classification

    It is simplistic to think of the impacts of new information and communication technologies (ICTs) in terms of a single, or even small number of, 'digital divides'. As developments in what has been termed the 'e-society' reach wider and more generalised audiences, so it becomes appropriate to think of digital media as having wider-ranging but differentiated impacts upon consumer transactions, information gathering and citizen participation. This paper describes the development of a detailed, nationwide household classification based on levels of awareness of different ICTs; levels of use of ICTs; and their perceived impacts upon human capital formation and the quality of life. It discusses how geodemographic classification makes it possible to provide context for detailed case studies, and hence to identify how policy might best improve both the quality and degree of society's access to ICTs. The primary focus of the paper is methodological, but it also illustrates how the classification may be used to investigate a range of regional and subregional policy issues. The paper illustrates the potential contribution of bespoke classifications to evidence-based policy, and the likely benefits of combining the most appropriate methods, techniques, datasets and practices used in the public and private sectors.
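
    As a schematic illustration of how a household classification of this sort can be built, the sketch below clusters invented ICT indicators with k-means; the indicator names, sample data and number of clusters are assumptions, not the paper's specification.

```python
# Schematic sketch of deriving a household classification from ICT indicators
# by k-means clustering. The indicators and data below are illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: awareness score, hours of ICT use per week, perceived-impact score.
households = rng.random((1000, 3)) * [10, 40, 5]

X = StandardScaler().fit_transform(households)   # put indicators on a common scale
groups = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

print(np.bincount(groups))  # household counts per e-society group
```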

    Collective labor supply with children

    We extend the collective model of household behavior to allow for the existence of public consumption. We show how this model allows the analysis of the welfare consequences of policies aimed at changing the distribution of power within the household. Our setting provides a conceptual framework for addressing issues linked to the "targeting" of specific benefits or taxes. We also show that observation of the labor supplies and the household demand for the public good allows one to identify individual welfare and the decision process. This requires either a separability assumption or the presence of a distribution factor.
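
    A stylised statement of the kind of programme involved, with a Pareto weight that may depend on a distribution factor, is given below; the notation is illustrative rather than the authors' exact formulation.

```latex
% Stylised collective household programme with a public good (illustrative notation).
% Spouses a and b have utilities over private consumption c_i, leisure l_i and a
% public good K; mu is the Pareto weight, which may depend on a distribution factor z.
\begin{align*}
\max_{c_a,\, c_b,\, l_a,\, l_b,\, K} \quad
  & \mu(p, w_a, w_b, z)\, U^a(c_a, l_a, K) + \bigl(1 - \mu(p, w_a, w_b, z)\bigr)\, U^b(c_b, l_b, K) \\
\text{s.t.} \quad
  & c_a + c_b + p\,K = w_a (T - l_a) + w_b (T - l_b) + y .
\end{align*}
```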

    Understanding the gender and ethnicity attainment gap in UK higher education

    In recent years the success rates of different groups of students in higher education (HE) have come under considerable scrutiny, with gender and ethnicity identified as key attributes predicting differential achievement of ‘good degrees’. A review of previous studies highlights the need for research which looks beyond ‘the deficit model’ to explain the attainment gap. This research used a mixed-methods approach to explore the academic and social experiences of students, as well as lecturers’ views on student achievement, in one UK university. Findings suggest that there are significant differences in motivation and in confidence speaking English between the ethnic groups in this study, and a divergence in attendance and study time by gender, both of which may go some way towards explaining the gaps in attainment. In addition, male and BME students tended to over-estimate their likelihood of achieving a good degree outcome compared with other groups.

    Tools for Risk Analysis: Updating the 2006 WHO guidelines

    This chapter reviews developments since the WHO Guidelines for the safe use of wastewater in agriculture were published in 2006. The six main developments are: the recognition that the tolerable additional disease burden may be too stringent for many developing countries; the benefits of focusing on single-event infection risks as a measure of outbreak potential when evaluating risk acceptability; a more rigorous method for estimating annual risks; the availability of dose-response data for norovirus; the use of quantitative microbial risk assessment (QMRA) to estimate Ascaris infection risks; and a detailed evaluation of the pathogen reductions achieved by produce washing and disinfection. Application of these developments results in more realistic estimates of the pathogen reductions required for the safe use of wastewater in agriculture and consequently permits the use of simpler wastewater treatment processes.
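
    A minimal numerical sketch of two standard QMRA building blocks referred to here, an exponential dose-response model and the aggregation of independent exposure events into an annual risk, is shown below; the parameter values are placeholders, not guideline values.

```python
# Minimal QMRA sketch: per-event infection risk from an exponential
# dose-response model, then annual risk from repeated exposure events.
# The numerical values below are placeholders, not WHO guideline values.
import math

def p_infection_exponential(dose: float, r: float) -> float:
    """Exponential dose-response: P(inf) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def annual_risk(per_event_risk: float, events_per_year: int) -> float:
    """Annual infection risk assuming independent exposure events."""
    return 1.0 - (1.0 - per_event_risk) ** events_per_year

p_event = p_infection_exponential(dose=0.1, r=0.5)   # pathogens ingested per event (placeholder)
print(f"per-event risk: {p_event:.3e}")
print(f"annual risk (150 events): {annual_risk(p_event, 150):.3e}")
```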

    Collaborative Mapping of London Using Google Maps: The LondonProfiler

    This paper begins by reviewing the ways in which the innovation of Google Maps has transformed our ability to reference and view geographically referenced data. We describe the ways in which the GMap Creator tool, developed under the ESRC National Centre for e-Social Science programme, enables users to ‘mashup’ thematic choropleth maps using the Google API. We illustrate the application of GMap Creator using the example of www.londonprofiler.org, which makes it possible to view a range of health, education and other socioeconomic datasets against a backcloth of Google Maps data. Our conclusions address the ways in which Google Maps mashups developed using GMap Creator facilitate online exploratory cartographic visualisation in a range of areas of policy concern.
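
    A toy sketch of the data preparation behind such a mashup is shown below: area-level rates are written out as a coloured GeoJSON layer that a web-mapping front end (such as a Google Maps mashup) could render as a choropleth. The area names, coordinates, rates and colour scheme are invented.

```python
# Toy sketch: turn area-level rates into a coloured GeoJSON layer that a
# web map (e.g. a Google Maps mashup) could render as a choropleth.
# Area names, coordinates and rates are invented for illustration.
import json

areas = {
    "Camden":  {"rate": 12.4, "ring": [[-0.14, 51.55], [-0.12, 51.55], [-0.12, 51.53], [-0.14, 51.53], [-0.14, 51.55]]},
    "Hackney": {"rate": 18.9, "ring": [[-0.08, 51.56], [-0.05, 51.56], [-0.05, 51.54], [-0.08, 51.54], [-0.08, 51.56]]},
}

def colour(rate: float) -> str:
    """Crude three-class colour scheme for the choropleth."""
    return "#fee5d9" if rate < 10 else "#fb6a4a" if rate < 15 else "#a50f15"

layer = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Polygon", "coordinates": [info["ring"]]},
            "properties": {"name": name, "rate": info["rate"], "fill": colour(info["rate"])},
        }
        for name, info in areas.items()
    ],
}
print(json.dumps(layer, indent=2))
```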

    Optimizing the computation of overriding

    We introduce optimization techniques for reasoning in DLN, a recently introduced family of nonmonotonic description logics whose characterizing features appear well suited to modelling the applicative examples that naturally arise in biomedical domains and in semantic web access control policies. The optimizations are validated experimentally on large knowledge bases (KBs) with more than 30K axioms. Speedups exceed one order of magnitude and, for the first time, response times compatible with real-time reasoning are obtained with nonmonotonic KBs of this size.

    Luminosities of AGB Variables

    The prevailing evidence suggests that most large-amplitude AGB variables follow the period-luminosity (PL) relation that has been established for Miras in the LMC and in Galactic globular clusters. Hipparcos observations indicate that most Miras in the solar neighbourhood are consistent with such a relation. There are two groups of stars with luminosities that are apparently greater than the PL relation would predict: (1) in the LMC and SMC there are large-amplitude variables with long periods, P > 420 days, which are probably undergoing hot bottom burning, but which are very clearly more luminous than the PL relation (these are visually bright and are likely to be among the first stars discovered in more distant intermediate-age populations); (2) in the solar neighbourhood there are short-period, P < 235 days, red stars which are probably more luminous than the PL relation. Similar short-period red stars with high luminosities have not been identified in the Magellanic Clouds. Comment: 8 pages, 2 figures; to be published in Mass-Losing Pulsating Stars and their Circumstellar Matter, Y. Nakada & M. Honma (eds), Kluwer ASSL series.
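
    For reference, the PL relation discussed here is conventionally written as a linear relation between absolute magnitude (often in the K band) and the logarithm of the pulsation period; the slope and zero point below are left symbolic rather than quoting any particular calibration.

```latex
% General form of the Mira period-luminosity relation; the slope a and zero
% point b are calibration-dependent and are left symbolic here.
\begin{equation*}
  M_K = a \,\log_{10}\!\left(\frac{P}{\mathrm{days}}\right) + b .
\end{equation*}
```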

    Revealed cardinal preference

    I prove that, as long as we allow the marginal utility of money (lambda) to vary between purchases (similarly to the budget), the quasi-linear and the ordinal budget-constrained models rationalize the same data. However, we know that lambda is approximately constant. I provide a simple constructive proof of the necessary and sufficient condition for constant-lambda rationalization, which I argue should replace the Generalized Axiom of Revealed Preference in empirical studies of consumer behavior. 'Go Cardinals!' It is the minimal requirement of any scientific theory that it be consistent with the data it is trying to explain. In the case of (Hicksian) consumer theory it was revealed preference, introduced by Samuelson (1938, 1948), that provided an empirical test to satisfy this need. At that time most economic reasoning was done in terms of a competitive general equilibrium, a concept abstract enough that it could be built on ordinal preferences over baskets of goods, even if the extremely specialized ones of Arrow and Debreu. However, starting in the sixties, economics moved beyond the 'invisible hand' explanation of how even competitive markets operate. A seemingly unavoidable step of this 'revolution' was that, ever since, most economic research has been carried out in a partial equilibrium context. Now, the partial equilibrium approach does not mean that the rest of the markets are ignored, but rather that they are held constant. In other words, there is a special commodity, call it money, that reflects the trade-offs of moving purchasing power across markets. As a result, the basic building block of consumer behavior in partial equilibrium is no longer the consumer's preferences over goods, but rather her valuation of them in terms of money. This new paradigm necessitates a new theory of revealed preference.
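
    To make the constant-lambda notion concrete, a quasi-linear rationalization of observed prices and bundles can be stated as below; the notation is illustrative and not necessarily the paper's.

```latex
% Quasi-linear rationalization with a constant marginal utility of money lambda
% (illustrative notation): each observed bundle x^t must maximize consumer
% surplus at the observed prices p^t.
\begin{equation*}
  \exists\, u : \mathbb{R}^n_+ \to \mathbb{R},\ \lambda > 0
  \quad\text{such that}\quad
  u(x^t) - \lambda\, p^t \!\cdot\! x^t \;\ge\; u(x) - \lambda\, p^t \!\cdot\! x
  \qquad \forall x,\ \forall t .
\end{equation*}
```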