
    Indicators: tools for informing, monitoring or controlling?

    Today, indicators are produced and used worldwide; across all levels and sectors of society; by public, private and civil society actors; for a variety of purposes, ranging from knowledge provision to administrative control. While the use of quantitative data as policy support, including in policy formulation, has a long history, recent decades have seen the rise of what some have called an ‘indicator industry’ (for example, Hezri and Hasan 2004), focused especially on the production of environmental and sustainability indicators, within a framework variously called ‘governance by numbers’ (Miller 2001; Lascoumes and Le Galès 2005; Jackson 2011), ‘management by numbers’ in public service (for example, Hood 2007) or ‘numbers discourse’ (Jackson 2011, p. 23). Indicators are generally expected to enhance the rationality of policymaking and public debate by providing a supposedly more objective, robust and reliable information base. Indicators can also operate as ‘boundary objects’ (for example, Turnhout 2009; Star 2010), catering to both technocratic and deliberative ideals by combining ‘hard facts’ and modelling with collective reasoning and ‘speculation’. Research and development work in the area has hitherto concentrated overwhelmingly on improving the technical quality of indicators, while the fate of indicators in policymaking and the associated sociopolitical aspects have attracted little attention. This chapter focuses on this neglected area of indicator research by providing an overview of the multiple types of existing indicators, as well as their use and influence in various venues of policymaking. Empirical examples are drawn mainly from the fields of environmental and sustainability indicators.

    Deep Boosting: Layered Feature Mining for General Image Classification

    Constructing effective representations is a critical but challenging problem in multimedia understanding. Traditional handcrafted features often rely on domain knowledge, limiting the performance of existing methods. This paper discusses a novel computational architecture for general image feature mining, which assembles primitive filters (i.e. Gabor wavelets) into compositional features in a layer-wise manner. In each layer, we produce a number of base classifiers (i.e. regression stumps) associated with the generated features, and discover informative compositions using the boosting algorithm. The output compositional features of each layer are treated as the base components for building up the next layer. Our framework is able to generate expressive image representations while inducing highly discriminative functions for image classification. Experiments are conducted on several public datasets, and we demonstrate superior performance over state-of-the-art approaches.
    Comment: 6 pages, 4 figures, ICME 201
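    The layered boosting idea in the abstract can be sketched in a few lines. This is a minimal illustration under assumptions of our own, not the paper's implementation: AdaBoost with decision stumps stands in for the base-classifier selection, and pairwise products of selected features stand in for the compositional step that feeds the next layer. All function names (`fit_stump`, `boost_layer`, `compose`) are hypothetical.

```python
import numpy as np

def fit_stump(X, y, w):
    """Find the best decision stump (feature, threshold, polarity) under sample weights w."""
    best = (0, 0.0, 1, np.inf)  # (feature, threshold, polarity, weighted error)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(s * (X[:, j] - t) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, s, err)
    return best

def boost_layer(X, y, n_rounds=5):
    """One boosting layer: select informative feature columns via AdaBoost with stumps."""
    w = np.full(len(y), 1.0 / len(y))
    selected = []
    for _ in range(n_rounds):
        j, t, s, err = fit_stump(X, y, w)
        err = max(err, 1e-10)                     # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(s * (X[:, j] - t) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)            # up-weight misclassified samples
        w /= w.sum()
        selected.append(j)
    return sorted(set(selected))

def compose(X, idx):
    """Next-layer compositional features: pairwise products of the selected columns."""
    cols = [X[:, a] * X[:, b] for i, a in enumerate(idx) for b in idx[i:]]
    return np.column_stack(cols)
```

    In this sketch each layer's selected compositions become the input matrix for the next call to `boost_layer`, mirroring the layer-wise mining the abstract describes.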

    Skill Rating by Bayesian Inference

    Systems engineering often involves computer modelling of the behaviour of proposed systems and their components. Where a component is human, fallibility must be modelled by a stochastic agent. The identification of a model of decision-making over quantifiable options is investigated using the game domain of chess. Bayesian methods are used to infer the distribution of players’ skill levels from the moves they play rather than from their competitive results. The approach is applied to large sets of games by players across a broad FIDE Elo range, and is in principle applicable to any scenario where high-value decisions are being made under pressure.
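    The inference described above can be sketched with a toy likelihood. This is an assumption-laden illustration, not the paper's model: move quality scores `q` stand in for engine evaluations, a softmax choice model P(move | skill) ∝ exp(s·q) stands in for the stochastic agent, and the posterior is computed over a discrete skill grid by Bayes' rule. The function name `skill_posterior` is hypothetical.

```python
import numpy as np

def skill_posterior(move_values, chosen, skills, prior=None):
    """Posterior over a discrete grid of skill levels, given observed moves.

    move_values: list of 1-D arrays, one per position, holding a quality
                 score for each legal move (e.g. an engine evaluation)
    chosen:      index of the move actually played in each position
    skills:      1-D grid of candidate skill parameters s >= 0
    Assumed choice model: P(move m | skill s) proportional to exp(s * q(m)).
    """
    skills = np.asarray(skills, dtype=float)
    log_post = (np.zeros(len(skills)) if prior is None
                else np.log(np.asarray(prior, dtype=float)))
    for q, m in zip(move_values, chosen):
        logits = np.outer(skills, q)               # shape (n_skills, n_moves)
        mx = logits.max(axis=1, keepdims=True)     # stabilized log-sum-exp
        log_norm = mx[:, 0] + np.log(np.exp(logits - mx).sum(axis=1))
        log_post += skills * q[m] - log_norm       # log-likelihood of played move
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()
```

    A player who consistently finds the engine-best move pushes posterior mass toward high skill values; consistent blunders push it toward low ones, independently of game results, which is the core of the abstract's move-based (rather than result-based) rating idea.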

    Living Innovation Laboratory Model Design and Implementation

    The Living Innovation Laboratory (LIL) is an open and recyclable way for multidisciplinary researchers to remotely control resources and co-develop user-centred projects. In the past few years, several papers on LIL have been published that attempt to discuss and define its model and architecture. Three characteristics of LIL are widely acknowledged: it is user-centred, co-creative and context-aware, which distinguishes it from test platforms and other innovation approaches. Its existing model consists of five phases: initialization, preparation, formation, development, and evaluation. Goal Net is a goal-oriented methodology for formalizing a process. In this thesis, Goal Net is adopted to derive a detailed and systematic methodology for LIL. The LIL Goal Net Model breaks the five phases of LIL into more detailed steps. Big data, crowdsourcing, crowdfunding and crowdtesting take place in suitable steps to realize UUI, MCC and PCA throughout the innovation process in LIL 2.0. It can serve as a guideline for any company or organization developing a project in the form of an LIL 2.0 project. To prove the feasibility of the LIL Goal Net Model, it was applied to two real cases: a Kinect game and an Internet product. Both were transformed to LIL 2.0 successfully, based on the LIL Goal Net methodology. The two projects were evaluated by phenomenography, a qualitative research method for studying human experiences and their relations in the hope of finding better ways to improve them. The phenomenographic study yielded positive evaluation results, showing that the new generation of LIL has advantages in terms of effectiveness and efficiency.
    Comment: This is a book draf