    Social Media, Gender and the Mediatisation of War: Exploring the German Armed Forces’ Visual Representation of the Afghanistan Operation on Facebook

    Studies on the mediatisation of war point to attempts by governments to regulate the visual perspective on their involvement in armed conflict – the most notable example being the practice of ‘embedded reporting’ in Iraq and Afghanistan. This paper focuses on a different strategy of visual meaning-making, namely the publication of images on social media by the armed forces themselves. Specifically, we argue that the mediatisation-of-war literature could profit from an increased engagement with feminist research, both within Critical Security/Critical Military Studies and within Science and Technology Studies, that highlights the close connection between masculinity, technology and control. The article examines the German military mission in Afghanistan as represented on the German armed forces’ official Facebook page. Germany constitutes an interesting, and largely neglected, case for the growing literature on the mediatisation of war: its strong antimilitarist political culture makes the representation of war particularly delicate. The paper examines specific representational patterns of Germany’s involvement in Afghanistan and discusses the implications which arise from what is placed inside the frame of visibility and what remains out of view.

    Light truck forecasts

    The recent dramatic increase in the number of light trucks (109% between 1963 and 1974) has prompted concern about the energy consequences of these vehicles' growing popularity. An estimate of the future number of light trucks is a reasonable first step in assessing their energy impact. The monograph contains forecasts based on two models and six scenarios. The coefficients for the models were derived by ordinary least squares regression of national-level time-series data. The first model is a two-stage model: the first stage estimates the combined number of light trucks and cars, and the second stage applies a shares submodel to determine the number of light trucks, as sketched below. The second model is a simultaneous-equation model. The two models track one another remarkably well, agreeing within about 2%. The scenarios were chosen to be consistent with those used in the Lindsey-Kaufman study Projection of Light Truck Population to Year 2025. Except under the most dismal economic scenario, the number of light trucks is expected to increase from the 1974 level of 0.09 light truck per person to about 0.12 light truck per person in 1995.
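
    The two-stage structure lends itself to a compact illustration. The sketch below is a minimal reconstruction under assumed drivers (per-capita income and real fuel price are hypothetical regressors, and all numbers are invented, not the monograph's actual specification): stage one regresses the combined car and light-truck stock on the drivers, and stage two applies a shares submodel to split out light trucks.

        import numpy as np

        def ols(X, y):
            """Ordinary least squares with an intercept; returns coefficients."""
            X1 = np.column_stack([np.ones(len(X)), X])
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            return beta

        def predict(beta, X):
            return np.column_stack([np.ones(len(X)), X]) @ beta

        # Hypothetical national-level time series, one row per year (1963-1974).
        years = np.arange(1963, 1975)
        income = np.linspace(10.0, 16.0, len(years))      # per-capita income proxy
        fuel_price = np.linspace(1.0, 1.4, len(years))    # real fuel price proxy
        X = np.column_stack([income, fuel_price])
        vehicles_pc = 0.40 + 0.030 * income - 0.05 * fuel_price  # cars + light trucks per person
        truck_share = 0.04 + 0.004 * income                      # light-truck share of that total

        # Stage 1: regress the combined car + light-truck stock on the drivers.
        beta_total = ols(X, vehicles_pc)
        # Stage 2: a shares submodel splits the combined stock into light trucks.
        beta_share = ols(X, truck_share)

        # Forecast under one hypothetical 1995 scenario.
        X_1995 = np.array([[20.0, 1.2]])
        trucks_pc = predict(beta_total, X_1995) * predict(beta_share, X_1995)
        print(f"light trucks per person, 1995 scenario: {trucks_pc[0]:.3f}")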

    Rigorous, systematic approach to automatic data editing and its statistical basis

    Automatic data editing is the computerized identification and optional correction of data errors. These techniques can provide error statistics that indicate the frequency of various types of data errors, diagnostic information that aids in identifying inadequacies in the data collection system, and a clean data base appropriate for use in further decision making, in modeling, and for inferential purposes. However, before these numerous benefits can be fully realized, certain research problems need to be resolved, and the linkage between statistical error analysis and extreme-value programming needs to be carefully determined. That linkage is provided here for the special case in which certain independence and symmetry conditions hold; also provided are rigorous proofs of results central to the functioning of the Boolean approach to automatic data editing of coded (categorical) data. In particular, sufficient collections of edits are defined, and it is shown that, for a fixed objective function, the solution to the fields-to-impute problem is obtainable simply from knowing which edits of the sufficient collection are failed; this solution does not depend on the particular sufficient collection of edits identified. Similarly, disjoint-sufficient collections of edits are defined, and it is shown that, if the objective function of the fields-to-impute problem is determined by what Freund and Hartley call the number of involvements in unsatisfied consistency checks, then the objective function is independent of the disjoint-sufficient collection of edits used.
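
    As a concrete illustration of the Boolean approach on coded data, the sketch below encodes edits as per-field failing value sets, identifies which edits a record fails, and scores fields by their number of involvements in unsatisfied consistency checks, in the spirit of the Freund-Hartley objective mentioned above. The record and edit rules are invented for exposition, not taken from the paper.

        # A record fails an edit when every field listed in the edit takes a
        # value in that edit's failing set.
        edits = [
            {"age_group": {"child"}, "marital": {"married", "divorced"}},  # children cannot be married/divorced
            {"age_group": {"child"}, "employment": {"retired"}},           # children cannot be retired
        ]
        record = {"age_group": "child", "marital": "married", "employment": "retired"}

        def failed_edits(record, edits):
            """Return the edits this record fails."""
            return [e for e in edits if all(record[f] in vals for f, vals in e.items())]

        failed = failed_edits(record, edits)

        # Freund-Hartley-style objective: count each field's involvements in
        # unsatisfied consistency checks; high-count fields are the natural
        # candidates to impute.
        involvements = {}
        for e in failed:
            for f in e:
                involvements[f] = involvements.get(f, 0) + 1

        print(involvements)  # {'age_group': 2, 'marital': 1, 'employment': 1}
        # Imputing 'age_group' alone can resolve both failed edits here.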

    Fourier-Motzkin elimination for mixed systems

    A simple extension of Fourier-Motzkin elimination is made to mixed systems, that is, systems consisting of equalities in conjunction with inequalities and strict inequalities. The principal observation is that inequalities combined with strict inequalities result in strict inequalities. Two applications are made to automatic data editing. First, a constructive method is provided to test for the existence of a linear objective function for the minimum weighted fields to impute (MWFI) problem with side constraints. If the linear objective function exists, it is determined; if it does not exist, the extension to a quadratic objective function is given. Next, for any fixed linear objective function, a solution algorithm based on extended Fourier-Motzkin elimination is given for the resultant MWFI problem and is illustrated with an example. The applications are believed to be significant in their own right: they provide solution techniques for difficult problems in the field of automatic data editing.
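
    The core elimination step, including the rule that combining an inequality with a strict inequality yields a strict inequality, fits in a few lines. The following is a minimal sketch; handling of equalities (by substitution, or by splitting into a pair of opposing inequalities) and the MWFI application are omitted, and integer coefficients are assumed for readability (general systems need rationals).

        # A constraint (coeffs, bound, strict) encodes
        #   coeffs . x <  bound   if strict is True,
        #   coeffs . x <= bound   otherwise.
        def eliminate(constraints, k):
            """Eliminate variable k by Fourier-Motzkin, preserving strictness."""
            pos, neg, out = [], [], []
            for c, b, strict in constraints:
                (pos if c[k] > 0 else neg if c[k] < 0 else out).append((c, b, strict))
            for cp, bp, sp in pos:
                for cn, bn, sn in neg:
                    ap, an = cp[k], -cn[k]  # positive scale factors
                    c = [an * cp[i] + ap * cn[i] for i in range(len(cp))]
                    b = an * bp + ap * bn
                    # Key rule: an inequality combined with a strict inequality
                    # yields a strict inequality.
                    out.append((c, b, sp or sn))
            return out

        # Example: x + y <= 4 and -x + y < 1; eliminate x (index 0).
        system = [([1, 1], 4, False),
                  ([-1, 1], 1, True)]
        for c, b, strict in eliminate(system, 0):
            print(c, "<" if strict else "<=", b)  # [0, 2] < 5, i.e. 2y < 5 (strict)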

    Refinements to the Boolean approach to automatic data editing

    Automatic data editing consists of three components: identification of erroneous records, identification of the most likely erroneous fields within an erroneous record (the fields to impute), and assignment of acceptable values to failing records. The types of data considered fall naturally into three categories: coded (categorical) data, continuous data, and mixed data (both coded and continuous). For coded data, a natural way to approach automatic data editing is the Boolean approach, first developed by Fellegi and Holt. For the fields-to-impute problem, central to the operation of the Fellegi-Holt approach is the explicit recognition of certain implied edits; Fellegi and Holt originally required a complete set of edits, and their algorithm to generate this complete set has occasionally had the distinct disadvantage of failing to converge within reasonable time. The primary result of this paper is an algorithm that significantly prunes the Fellegi-Holt edit generation process yet nonetheless generates a sufficient collection of implied edits adequate for the solution of the fields-to-impute problem.
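
    To make the notion of implied edits concrete, the sketch below shows a single Fellegi-Holt-style generation step for coded data: choose a generating field whose domain is covered by a set of edits, drop it, and intersect the remaining fields. The example edits are invented, and the paper's pruning strategy, which decides which combinations are worth generating, is not reproduced here.

        def implied_edit(edits, g, domain_g):
            """One generation step: combine `edits` over generating field g."""
            covered = set().union(*(e.get(g, set()) for e in edits))
            if covered != set(domain_g):
                return None  # g's domain is not covered: nothing is implied
            fields = {f for e in edits for f in e if f != g}
            new_edit = {}
            for f in fields:
                # An edit that does not mention f leaves it unrestricted (full
                # domain), so only the explicit restrictions are intersected.
                sets = [e[f] for e in edits if f in e]
                vals = set.intersection(*sets)
                if not vals:
                    return None  # empty intersection: the combination is vacuous
                new_edit[f] = vals
            return new_edit

        e1 = {"age_group": {"child"}, "marital": {"married"}}
        e2 = {"age_group": {"adult", "senior"}, "marital": {"married"}, "income": {"none"}}
        print(implied_edit([e1, e2], "age_group", {"child", "adult", "senior"}))
        # -> {'marital': {'married'}, 'income': {'none'}}: a married record with
        #    no income fails some original edit whatever its age_group, so this
        #    edit is implied by e1 and e2.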

    N-Learners Problem: Fusion of Concepts


    Self-Adaptive Genotype-Phenotype Maps: Neural Networks as a Meta-Representation

    In this work we investigate the use of feedforward neural networks for defining the genotype-phenotype maps of arbitrary continuous optimization problems. A study is carried out over the neural network parameter space, aimed at understanding its impact on the locality and redundancy of the representations thus defined. Driving this approach is the goal of placing problems' genetic representations under automated adaptation. We therefore conclude with a proof of concept, showing genotype-phenotype maps being successfully self-adapted concurrently with the evolution of solutions for hard real-world problems.
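
    A minimal version of such a map is easy to state: a small feedforward network decodes genotype vectors into phenotype vectors, and the network's weights form a meta-representation that an outer evolutionary loop can itself mutate and select. The sketch below uses illustrative layer sizes, tanh activations, and a unit-cube genotype space (assumptions for exposition, not the paper's configuration) and ends with a simple locality probe.

        import numpy as np

        rng = np.random.default_rng(0)

        class GPMap:
            """Feedforward net mapping genotypes in [0,1]^n_in to phenotypes."""
            def __init__(self, n_in, n_hidden, n_out):
                self.W1 = rng.normal(0.0, 1.0, (n_in, n_hidden))
                self.W2 = rng.normal(0.0, 1.0, (n_hidden, n_out))

            def __call__(self, genotype):
                h = np.tanh(genotype @ self.W1)
                return np.tanh(h @ self.W2)  # phenotype components in (-1, 1)

            def weights(self):
                # Flattened weights: the meta-representation an outer loop can
                # mutate and select while an inner loop evolves genotypes.
                return np.concatenate([self.W1.ravel(), self.W2.ravel()])

        gp = GPMap(n_in=8, n_hidden=16, n_out=4)
        genotype = rng.uniform(0.0, 1.0, 8)
        phenotype = gp(genotype)

        # Locality probe: does a small genotype step stay small in phenotype space?
        neighbor = np.clip(genotype + rng.normal(0.0, 0.01, 8), 0.0, 1.0)
        print(np.linalg.norm(gp(neighbor) - phenotype))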

    Fitness distributions and GA hardness

    Considerable research effort has been spent trying to formulate a good definition of GA hardness. Given an instance of a problem, the objective is to estimate the performance of a GA on it. Despite partial successes, current definitions remain unsatisfactory. In this paper we take some steps towards a new, more powerful way of assessing problem difficulty, based on the properties of a problem's fitness distribution. We present experimental results that strongly support this idea.
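
    The basic measurement behind this idea is straightforward: sample solutions uniformly at random and summarize the resulting distribution of fitness values. The sketch below does this for two standard illustrative functions, onemax and a deceptive trap, which are common textbook examples rather than necessarily the paper's test suite.

        import random
        import statistics

        def onemax(bits):                      # easy: fitness counts ones
            return sum(bits)

        def deceptive(bits):                   # trap: gradient points away from optimum
            ones = sum(bits)
            return len(bits) + 1 if ones == len(bits) else len(bits) - 1 - ones

        def fitness_distribution(f, n_bits, samples=10_000):
            vals = [f([random.randint(0, 1) for _ in range(n_bits)])
                    for _ in range(samples)]
            return statistics.mean(vals), statistics.stdev(vals), max(vals)

        random.seed(1)
        for f in (onemax, deceptive):
            mean, sd, best = fitness_distribution(f, 20)
            print(f"{f.__name__:9s} mean={mean:5.2f} sd={sd:4.2f} best sampled={best}")
        # The trap's upper tail comes from near-all-zeros strings, the opposite
        # corner from its optimum (21), so a GA following that tail is misled;
        # onemax's tail points straight at its optimum (20).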