
    Combining quantitative and qualitative methods in assessing chronic poverty: the case of Rwanda

    This paper addresses the issue of chronic poverty in Rwanda, an issue which has not been specifically addressed in the policy debate despite the fact that it is likely to be widespread. In part this reflects a lack of available evidence, in that the conventional sources used to analyze chronic poverty are not available. We argue that a judicious combination of existing qualitative (a high-quality nationwide participatory poverty assessment) and quantitative sources (a household survey) makes it possible to identify and characterize a clearly distinct group of chronically poor households, whose characteristics differ from those of the poor as a whole.

    Lost & Found: Samuel Fuller’s Tigrero and Accidental Ethnography

    In 1954, Darryl Zanuck commissioned Samuel Fuller to journey to the Amazon and shoot footage of the Karaja tribe, around which the director would construct a screenplay based upon the life of Sasha Siemel, a big game hunter of note. Zanuck had optioned Siemel’s best-selling autobiography, Tigrero. Although John Wayne and Ava Gardner were soon attached to the project, executives at Fox would not sanction a shoot in such a dangerous location. The project was set aside and forgotten. Nearly 40 years after his visit to Brazil, Fuller returned to the Karaja tribe. Out of this experience came a documentary, Tigrero: A Film That Was Never Made (1994), by Finnish director Mika Kaurismaki. The result is equal parts documentary and ethnography, serving as a time capsule for the Karaja (many of whom recognized long-deceased relatives in the footage shown to them) and demonstrating just how much their culture had changed in four decades. This paper examines how the failed initial project eventually yielded a fascinating insight into the vicissitudes of Hollywood production as well as an accidental ethnographic study of the Karaja and their change over time.

    Learning in complex tasks: A comparison of cognitive load and dual space theories.

    Cognitive Load Theory (CLT) and Dual Space Theory (DST) offer differing accounts of learning in complex settings. CLT argues that reducing processing demands on working memory (i.e. reducing cognitive load) will facilitate learning. Conversely, DST suggests that learning is improved by encouraging learners to focus on task rules (rule space search) rather than task instances (instance space search). Despite these differences, CLT researchers have proposed that the theories are complementary, suggesting that rule space search is contingent on low cognitive load. Three studies were conducted to examine this proposal, with particular focus on the goal-free effect. Study 1 trained participants on a complex task under conditions of high or low rule space search with cognitive load held constant. Results indicated that the high rule space search group acquired greater knowledge despite equivalent cognitive load between the groups; however, these results may have been confounded by motivational differences. Study 2 manipulated rule space search and cognitive load in a 2 (goal type) x 2 (information level) between-subjects design. Manipulations were intended to create conditions where cognitive load and rule space search were both high or both low, contrary to their proposed dependence. Results, however, were mixed: whilst cognitive load and rule space search were unrelated in between-group comparisons, they were negatively related overall, consistent with CLT’s proposal. Study 3 refined the previous 2 x 2 design to clarify these findings. Results indicated that groups encouraged to search rule space did so independently of cognitive load, though results were not entirely consistent with either theory. Taken together, the results tentatively suggest that cognitive load does not influence rule space search in all situations. The theories may therefore be independent explanations of learning in complex settings.

    A review of in-situ loading conditions for mathematical modelling of asymmetric wind turbine blades

    This paper reviews generalized solutions to the classical beam moment equation for solving the deflexion and strain fields of composite wind turbine blades. A generalized moment functional is presented to effectively model the moment at any point on a blade/beam utilizing in-situ load cases. Models assume that the components are constructed from in-plane quasi-isotropic composite materials with an overall elastic modulus of 42 GPa. Exact solutions for the displacement and strains are derived for an aerofoil adjusted from that presented in the literature and compared with another defined by the Joukowski transform. Models without stiffening ribs resulted in deflexions of the blades which exceeded the generally acceptable design code criteria. Each of the models developed was rigorously validated via numerical (Runge-Kutta) solutions of the same differential equation used to derive the analytical models presented. The results obtained from the robust design codes, written in the open-source Computer Algebra System (CAS) Maxima, are shown to be congruent with simulations using the ANSYS commercial finite element (FE) codes as well as with experimental data. One major implication of the theoretical treatment is that these solutions can now be used in design codes to maximize the strength of analogous components, used in the aerospace and, most notably, renewable energy sectors, while significantly reducing their weight and hence cost. The most realistic in-situ loading conditions for a dynamic blade and a stationary blade are presented, which are shown to be unique to the blade’s optimal tip speed ratio, blade dimensions and wind speed.
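    As a minimal sketch of the kind of Runge-Kutta validation the abstract describes, the snippet below numerically integrates a cantilever-beam deflexion equation and compares the tip deflexion with the closed-form uniform-load result. Only the 42 GPa modulus comes from the abstract; the section property, blade length and distributed load are illustrative assumptions, and the paper's own in-situ moment functional is more general than the uniform load used here.

    ```python
    # Hedged sketch: Runge-Kutta check of a root-fixed cantilever deflexion model.
    # The uniform load, section property and length are assumptions for illustration,
    # not values taken from the paper.
    import numpy as np
    from scipy.integrate import solve_ivp

    E = 42e9     # elastic modulus, Pa (quasi-isotropic composite, per the abstract)
    I = 2.0e-4   # second moment of area, m^4 (assumed)
    L = 20.0     # blade length treated as a cantilever, m (assumed)
    q = 1.5e3    # uniform distributed load, N/m (assumed stand-in for in-situ loading)

    def moment(x):
        """Bending moment at station x for a root-fixed cantilever under uniform load."""
        return q * (L - x) ** 2 / 2.0

    def rhs(x, y):
        # y[0] = deflexion v, y[1] = slope v'; Euler-Bernoulli: v'' = M(x) / (E I)
        return [y[1], moment(x) / (E * I)]

    sol = solve_ivp(rhs, (0.0, L), [0.0, 0.0], method="RK45", rtol=1e-8, atol=1e-10)

    tip_numeric = sol.y[0, -1]
    tip_exact = q * L ** 4 / (8.0 * E * I)   # closed-form uniform-load cantilever result
    print(f"tip deflexion: RK45 = {tip_numeric:.6e} m, exact = {tip_exact:.6e} m")
    ```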

    A model study of enhanced oil recovery by flooding with aqueous surfactant solution and comparison with theory

    With the aim of elucidating the details of enhanced oil recovery by surfactant solution flooding, we have determined the detailed behavior of model systems consisting of a packed column of calcium carbonate particles as the porous rock, n-decane as the trapped oil, and aqueous solutions of the anionic surfactant sodium bis(2-ethylhexyl) sulfosuccinate (AOT). The AOT concentration was varied from zero to above the critical aggregation concentration (cac). The salt content of the aqueous solutions was varied to give systems of widely different, post-cac oil–water interfacial tensions. The systems were characterized in detail by measuring the permeability behavior of the packed columns, the adsorption isotherms of AOT from the water to the oil–water interface and to the water–calcium carbonate interface, and oil–water–calcium carbonate contact angles. Measurements of the percent oil recovery by pumping surfactant solutions into calcium carbonate-packed columns initially filled with oil were analyzed in terms of the characterization results. We show that the measured contact angles as a function of AOT concentration are in reasonable agreement with those calculated from values of the surface energy of the calcium carbonate–air surface plus the measured adsorption isotherms. Surfactant adsorption onto the calcium carbonate–water interface causes depletion of its aqueous-phase concentration, and we derive equations which enable the concentration of nonadsorbed surfactant within the packed column to be estimated from measured parameters. The percent oil recovery as a function of the surfactant concentration is determined solely by the oil–water–calcium carbonate contact angle for nonadsorbed surfactant concentrations less than the cac. For surfactant concentrations greater than the cac, additional oil removal occurs by a combination of solubilization and emulsification plus oil mobilization due to the low oil–water interfacial tension and a pumping pressure increase.
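    The abstract's equations for estimating the nonadsorbed surfactant concentration from measured parameters are not reproduced here; the sketch below is only a hedged stand-in that solves a simple aqueous-phase mass balance with an assumed Langmuir isotherm, with every numerical value chosen purely for illustration.

    ```python
    # Hedged sketch: estimate the free (non-adsorbed) surfactant concentration in a
    # packed column from a mass balance with a Langmuir isotherm. The isotherm form
    # and all parameter values are assumptions, not the paper's derived equations.
    from scipy.optimize import brentq

    V_aq = 0.050        # aqueous-phase volume in the column, L (assumed)
    A_solid = 500.0     # total calcium carbonate surface area, m^2 (assumed)
    gamma_max = 2.5e-6  # maximum surface excess, mol/m^2 (assumed)
    K = 3.0e3           # Langmuir adsorption constant, L/mol (assumed)

    def free_concentration(c_total):
        """Solve c_total*V = c_free*V + Gamma(c_free)*A for c_free (mol/L)."""
        def balance(c_free):
            adsorbed = gamma_max * K * c_free / (1.0 + K * c_free)  # Langmuir isotherm
            return c_total * V_aq - c_free * V_aq - adsorbed * A_solid
        # balance(0) > 0 and balance(c_total) < 0, so a root lies in (0, c_total)
        return brentq(balance, 0.0, c_total)

    for c_total in (1e-3, 5e-3, 2e-2):   # illustrative total AOT concentrations, mol/L
        print(f"c_total = {c_total:.1e} M -> c_free = {free_concentration(c_total):.2e} M")
    ```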

    A New Generation of Mixture-Model Cluster Analysis with Information Complexity and the Genetic EM Algorithm

    In this dissertation, we extend several relatively new developments in statistical model selection and data mining in order to improve one of the workhorse statistical tools: mixture modeling (Pearson, 1894). The traditional mixture model assumes the data come from several Gaussian populations; what remains, then, is to determine how many distributions there are, their population parameters, and the mixing proportions. However, real data often do not fit the restrictions of normality very well. It is likely that data from a single population exhibiting either asymmetrical or nonnormal tail behavior could be erroneously modeled as two populations, resulting in suboptimal decisions. To avoid these pitfalls, we develop the mixture model under a broader distributional assumption by fitting a group of multivariate elliptically-contoured distributions (Anderson and Fang, 1990; Fang et al., 1990). Special cases include the multivariate Gaussian and power exponential distributions, as well as the multivariate generalization of Student’s t. This gives us the flexibility to model nonnormal tail and peak behavior, though the symmetry restriction still exists. The literature has many examples of research generalizing the Gaussian mixture model to other distributions (Farrell and Mersereau, 2004; Hasselblad, 1966; John, 1970a), but our effort is more general. Further, we generalize the mixture model to be non-parametric by developing two types of kernel mixture model. First, we generalize the mixture model to use truly multivariate kernel density estimators (Wand and Jones, 1995). Additionally, we develop the power exponential product kernel mixture model, which allows the density to adjust to the shape of each dimension independently. Because kernel density estimators enforce no functional form, both of these methods can adapt to nonnormal asymmetric, kurtotic, and tail characteristics. Over the past two decades or so, evolutionary algorithms have grown in popularity, as they have provided encouraging results in a variety of optimization problems. Several authors have applied the genetic algorithm, a subset of evolutionary algorithms, to mixture modeling, including Bhuyan et al. (1991), Krishna and Murty (1999), and Wicker (2006). These procedures have the benefit that they bypass computational issues that plague the traditional methods. We extend these initialization and optimization methods by combining them with our updated mixture models. Additionally, we “borrow” results from robust estimation theory (Ledoit and Wolf, 2003; Shurygin, 1983; Thomaz, 2004) in order to data-adaptively regularize population covariance matrices. Numerical instability of the covariance matrix can be a significant problem for mixture modeling, since estimation is typically done on a relatively small subset of the observations. We likewise extend various information criteria (Akaike, 1973; Bozdogan, 1994b; Schwarz, 1978) to the elliptically-contoured and kernel mixture models. Information criteria guide model selection and estimation based on various approximations to the Kullback-Leibler divergence. Following Bozdogan (1994a), we use these tools to sequentially select the best mixture model, select the best subset of variables, and detect influential observations, all without making any subjective decisions. Over the course of this research, we developed a full-featured Matlab toolbox (M3) which implements all the new developments in mixture modeling presented in this dissertation. We show results on both simulated and real-world datasets.
    Keywords: mixture modeling, nonparametric estimation, subset selection, influence detection, evidence-based medical diagnostics, unsupervised classification, robust estimation.
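    As a hedged, simplified analogue of the criterion-driven model selection described above, the snippet below fits ordinary Gaussian mixtures for several candidate component counts and keeps the one with the lowest BIC. The dissertation itself uses elliptically-contoured and kernel mixtures, the genetic EM algorithm, information-complexity criteria and the Matlab M3 toolbox, none of which are reproduced here, and the simulated data are purely illustrative.

    ```python
    # Hedged sketch: pick the number of mixture components with an information
    # criterion (BIC here, as a stand-in for the dissertation's criteria).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Simulated two-population data (assumed, for illustration only)
    data = np.vstack([
        rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2)),
        rng.normal(loc=[4.0, 4.0], scale=0.5, size=(150, 2)),
    ])

    scores = {}
    for k in range(1, 6):                      # candidate numbers of components
        gm = GaussianMixture(n_components=k, covariance_type="full",
                             n_init=5, random_state=0).fit(data)
        scores[k] = gm.bic(data)               # lower BIC = preferred model

    best_k = min(scores, key=scores.get)
    print("BIC by k:", {k: round(v, 1) for k, v in scores.items()})
    print("selected number of components:", best_k)
    ```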

    Power without representation? The House of Lords and social policy

    In the past, the House of Lords has generally, and arguably for good reasons, been ignored in discussions of the making and scrutiny of welfare. However, it has always played some role in this field, particularly in the scrutiny and passage of legislation, and since the removal of hereditary Peers in 1999 some writers have argued that the House has become more assertive. This article examines the attitudes of Peers, including a comparison with the views of Members of Parliament, and draws a number of conclusions about the role of the upper House in relation to social policy.

    Curves of every genus with many points, II: Asymptotically good families

    We resolve a 1983 question of Serre by constructing curves with many points of every genus over every finite field. More precisely, we show that for every prime power q there is a positive constant c_q with the following property: for every non-negative integer g, there is a genus-g curve over F_q with at least c_q * g rational points over F_q. Moreover, we show that there exists a positive constant d such that for every q we can choose c_q = d * (log q). We show also that there is a constant c > 0 such that for every q and every n > 0, and for every sufficiently large g, there is a genus-g curve over F_q that has at least c*g/n rational points and whose Jacobian contains a subgroup of rational points isomorphic to (Z/nZ)^r for some r > c*g/n.
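    For readability, the two results stated in the abstract can be written in display form as follows (notation as in the abstract; nothing new is claimed).

    ```latex
    % The abstract's two main results, restated in display form.
    \begin{itemize}
      \item For every prime power $q$ there is a constant $c_q > 0$ such that for every
            integer $g \ge 0$ there exists a genus-$g$ curve $C$ over $\mathbf{F}_q$ with
            \[ \#C(\mathbf{F}_q) \ \ge\ c_q\, g, \]
            and one may take $c_q = d \log q$ for some absolute constant $d > 0$.
      \item There is a constant $c > 0$ such that for every $q$, every $n > 0$, and every
            sufficiently large $g$, there is a genus-$g$ curve $C$ over $\mathbf{F}_q$ with
            \[ \#C(\mathbf{F}_q) \ \ge\ \frac{c\,g}{n} \quad\text{and}\quad
               (\mathbf{Z}/n\mathbf{Z})^r \hookrightarrow \operatorname{Jac}(C)(\mathbf{F}_q)
               \ \text{for some } r > \frac{c\,g}{n}. \]
    \end{itemize}
    ```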