
    Human factors aspects of control room design: Guidelines and annotated bibliography

    A human factors analysis of the workstation design for the Earth Radiation Budget Satellite mission operations room is presented. The relevance of anthropometry, design rules, environmental design goals, and the social-psychological environment is discussed.

    Impact Evaluations and Development: NONIE Guidance on Impact Evaluation

    In international development, impact evaluation is principally concerned with the final results of interventions (programs, projects, policy measures, reforms) on the welfare of communities, households, and individuals, including taxpayers and voters. Impact evaluation is one tool within the larger toolkit of monitoring and evaluation (including broad program evaluations, process evaluations, ex ante studies, etc.).

    The Network of Networks for Impact Evaluation (NONIE) was established in 2006 to foster more and better impact evaluations by its membership -- the evaluation networks of bilateral and multilateral organizations focusing on development issues, as well as networks of developing-country evaluators. NONIE's member networks conduct a broad set of evaluations, examining issues such as project and strategy performance, institutional development, and aid effectiveness. By sharing methodological approaches and promoting learning by doing on impact evaluations, NONIE aims to promote the use of this more specific approach by its members within their larger portfolio of evaluations. This document, by Frans Leeuw and Jos Vaessen, has been developed to support this focus.

    For development practitioners, impact evaluations play a key role in the drive for better evidence on results and development effectiveness. They are particularly well suited to answering important questions about whether development interventions do or do not work, whether they make a difference, and how cost-effective they are. Consequently, they can help ensure that scarce resources are allocated where they can have the most developmental impact.

    Probabilistic learning for selective dissemination of information

    New methods and new systems are needed to filter, or selectively distribute, the increasing volume of electronic information being produced today. An effective information filtering system is one that provides exactly the information that fulfils the user's interests, with minimal effort from the user to describe them. Such a system must also adapt to the user's changing interests. In this paper we describe and evaluate a learning model for information filtering that is an adaptation of the generalized probabilistic model of information retrieval. The model is based on the concept of 'uncertainty sampling', a technique that allows for relevance feedback on both relevant and nonrelevant documents. The proposed learning model is the core of a prototype information filtering system called ProFile.
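
    The paper's own model is only summarized above; as a rough illustration of the uncertainty-sampling idea it describes, the hypothetical Python sketch below (scikit-learn, with invented names and a generic logistic model standing in for the paper's probabilistic model) selects for user feedback the documents whose predicted relevance is closest to 0.5:

```python
# Minimal sketch of uncertainty sampling for document filtering.
# Hypothetical: the model, features, and names are assumptions,
# not details taken from the paper.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def most_uncertain(labeled_docs, labels, unlabeled_docs, k=5):
    """Return indices of the k unlabeled documents the model is least sure about.

    `labels` must contain both relevant (1) and nonrelevant (0) examples,
    since uncertainty sampling exploits feedback on both kinds.
    """
    vectorizer = TfidfVectorizer()
    X_labeled = vectorizer.fit_transform(labeled_docs)
    X_unlabeled = vectorizer.transform(unlabeled_docs)

    model = LogisticRegression()
    model.fit(X_labeled, labels)

    # P(relevant) for each unseen document; uncertainty peaks near 0.5.
    p_relevant = model.predict_proba(X_unlabeled)[:, 1]
    return np.argsort(np.abs(p_relevant - 0.5))[:k]
```

    In a filtering loop, the selected documents would be shown to the user, their judgments added to the labeled pool, and the model refit before the next round.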

    An analytical inspection framework for evaluating the search tactics and user profiles supported by information seeking interfaces

    Searching is something we do every day, in both digital and physical environments. Whether we are searching for books in a library or information on the web, search is becoming increasingly important. For many years, however, the standard for search in software has been to provide a keyword search box that has, over time, been embellished with query suggestions, Boolean operators, and interactive feedback. More recent research has focused on designing search interfaces that better support exploration and learning. Consequently, the aim of this research has been to develop a framework that can reveal to designers how well their search interfaces support different styles of searching behaviour.

    The primary contribution of this research is a usability evaluation method, in the form of a lightweight analytical inspection framework, that can assess both search designs and fully implemented systems. The framework, called Sii, provides three types of analyses: 1) an analysis of the amount of support the different features of a design provide; 2) an analysis of the amount of support provided for 32 known search tactics; and 3) an analysis of the amount of support provided for 16 different searcher profiles, such as those who are finding, browsing, exploring, and learning. The design of the framework was validated by six independent judges, and its results correlated positively with the results of empirical user studies. Further, early investigations showed that Sii has a learning curve of around one and a half hours and that, when working from identical analysis results, different evaluators produce similar design revisions.

    For search experts building interfaces for their systems, Sii provides a Human-Computer Interaction evaluation method that addresses searcher needs rather than system optimisation. For Human-Computer Interaction experts designing novel interfaces that provide search functions, Sii offers the opportunity to assess designs using the knowledge and theories generated by the Information Seeking community. While the research reported here was conducted in controlled environments, future work is planned to investigate the use of Sii by independent practitioners on their own projects.
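
    The abstract does not spell out how Sii quantifies "amount of support"; purely as a hypothetical illustration of rolling feature-level support up into tactic-level scores, one might compute something like the following (the features, tactics, and weights are invented, not Sii's):

```python
# Hypothetical sketch: aggregate per-feature support into per-tactic
# scores, in the spirit of an analytical inspection framework.
# All features, tactics, and weights below are invented examples.

# How strongly each interface feature supports each search tactic (0..1).
FEATURE_SUPPORT = {
    "query_suggestions": {"refine_query": 1.0, "broaden_query": 0.5},
    "faceted_filters":   {"refine_query": 0.5, "compare_results": 1.0},
    "result_snippets":   {"evaluate_results": 1.0},
}

def tactic_scores(present_features):
    """Sum the support that the present features give each tactic."""
    scores = {}
    for feature in present_features:
        for tactic, weight in FEATURE_SUPPORT.get(feature, {}).items():
            scores[tactic] = scores.get(tactic, 0.0) + weight
    return scores

# A design with suggestions and facets, but no result snippets:
print(tactic_scores(["query_suggestions", "faceted_filters"]))
# {'refine_query': 1.5, 'broaden_query': 0.5, 'compare_results': 1.0}
```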

    A semantic framework for ontology usage analysis

    The Semantic Web envisions a Web where information is accessible to and processable by computers as well as humans. Ontologies are the cornerstones for realizing this vision: they capture domain knowledge by defining terms and the relationships between them, providing a formal representation of the domain with machine-understandable semantics. Ontologies are used for semantic annotation, data interoperability, and knowledge assimilation and dissemination.

    In the literature, different approaches have been proposed to build and evolve ontologies, but one more important concept needs to be considered in the ontology lifecycle: usage. Measuring the usage of ontologies helps us make effective and efficient use of the semantically annotated structured data published on the Web (formalized knowledge published on the Web), improve the state of ontology adoption and reusability, provide a usage-based feedback loop to the ontology maintenance process for pragmatic conceptual-model updates, and source information accurately and automatically for use in other areas of the ontology lifecycle. Ontology Usage Analysis is the area that evaluates, measures, and analyses the use of ontologies on the Web. In spite of its importance, however, no formal approach in the literature focuses on measuring the use of ontologies on the Web, in contrast to the approaches proposed for other concepts of the ontology lifecycle, such as ontology development, ontology evaluation, and ontology evolution. To address this gap, this thesis assesses, analyses, and represents the use of ontologies on the Web.

    To address the problem and realize the abovementioned benefits, an Ontology Usage Analysis Framework (OUSAF) is presented. The OUSAF framework implements a methodological approach comprising identification, investigation, representation, and utilization phases. These phases provide a complete solution for usage analysis by allowing users to identify key ontologies and then investigate, represent, and utilize usage-analysis results. Various computational components, with several methods, techniques, and metrics for each phase, are presented and evaluated using Semantic Web data crawled from the Web. To make ontology-usage-related information accessible to machines and humans, the U Ontology is presented, formalizing the conceptual model of the ontology usage domain. The evaluation of the framework, its solution components and methods, and the formalized conceptual model is presented, indicating the usefulness of the overall proposed solution.
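
    The thesis's actual metrics are not reproduced in this abstract; as a toy illustration of the kind of measurement Ontology Usage Analysis involves, the hypothetical sketch below (using the rdflib Python library; the crawl file name is invented) counts how often ontology terms appear as predicates or classes in crawled RDF data:

```python
# Toy sketch of measuring ontology term usage in crawled RDF data.
# Not OUSAF itself; the file name is illustrative only.
from collections import Counter
from rdflib import Graph, RDF

g = Graph()
g.parse("crawled_data.nt", format="nt")  # hypothetical crawl snapshot

usage = Counter()
for s, p, o in g:
    usage[str(p)] += 1          # each predicate occurrence is a term use
    if p == RDF.type:
        usage[str(o)] += 1      # rdf:type objects are uses of a class

# The most frequently used terms; grouping them by namespace would
# give a per-ontology usage profile.
for term, count in usage.most_common(10):
    print(count, term)
```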

    Integrating Big Data Into the Monitoring and Evaluation of Development Programmes

    This report provides guidelines for evaluators, evaluation and programme managers, policy makers, and funding agencies on how to take advantage of the rapidly emerging field of big data in the design and implementation of systems for monitoring and evaluating development programmes. The report is organized into two parts. Part I, Development evaluation in the age of big data, reviews the data revolution and discusses the promise and challenges it offers for strengthening development monitoring and evaluation. Part II, Guidelines for integrating big data into the monitoring and evaluation frameworks of development programmes, focuses on what a big-data-inclusive M&E system would look like. The report also includes guidelines for integrating big data into programme monitoring and evaluation.

    Computational model of negotiation skills in virtual artificial agents

    Negotiation skills are crucial for engaging in effective social interactions in formal and informal settings. Serious games, intelligent systems, and virtual agents can provide solid tools for reliably delivering one-to-one training and assessment. The aim of the present work is to fill the gap between the recent growing interest in soft skills and the lack of a robust, modern methodology for supporting their investigation. A computational model for the development of Enact, a 3D virtual intelligent platform for training and testing negotiation skills, is presented. The serious game allows users to interact with simulated peers in scenarios depicting daily-life situations and to receive a psychological assessment and adaptive training reflecting their negotiation abilities. To pursue this goal, this work went through different research stages, each with a unique methodology, results, and discussion described in its specific section. In the first phase, the platform was designed to operationalize the examined negotiation theory, then developed and assessed. Consistently with previous findings, the negotiation styles considered were found not to correlate with personality traits, coping strategies, or perceived self-efficacy. The serious game was widely tested for usability and underwent two development and release stages aimed at improving its accuracy, usability, and likeability. The variables measured by the platform were found in all cases to predict at least two of the negotiation styles considered. Concerning user feedback, the game was judged useful and more pleasant than the traditional test, and the perceived time spent on the game was significantly lower than the real time spent. In the second stage of this research, the game scenarios were used to collect a dataset of documents containing natural-language negotiations between users and the virtual agents. The dataset was used to assess the correlations between the use of personal pronouns and the negotiation styles. Results showed that more engaged styles generally used pronouns with a significantly higher frequency than less engaged styles. Styles with a high concern for self showed a higher frequency of singular personal pronouns, while styles with a high concern for others used significantly more relational pronouns. The corpus of documents was also used to perform multiclass classification of the negotiation styles using machine learning. Both linear (SVM) and non-linear (MNB, CNN) models performed reliably, with state-of-the-art accuracy.
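
    The abstract names the model families but not the pipeline; as a rough, hypothetical sketch of multiclass classification of negotiation styles from text (scikit-learn, with invented toy transcripts and style labels), one might compare the linear and naive Bayes models like this:

```python
# Hypothetical sketch: multiclass text classification of negotiation
# styles, comparing a linear SVM with multinomial naive Bayes.
# The transcripts and style labels are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "I think we can both get what we want here",
    "Take it or leave it, that is my final offer",
    "Whatever you prefer is fine with me",
    "Let us split the difference and move on",
] * 10  # toy stand-ins for real negotiation transcripts
styles = ["integrating", "dominating", "obliging", "compromising"] * 10

X_train, X_test, y_train, y_test = train_test_split(
    texts, styles, test_size=0.2, random_state=0)

for clf in (LinearSVC(), MultinomialNB()):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(X_train, y_train)
    print(type(clf).__name__, model.score(X_test, y_test))
```

    A CNN variant would follow the same train-and-score pattern, with a token-embedding front end in place of the TF-IDF features.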

    Identifying the science and technology dimensions of emerging public policy issues through horizon scanning

    Public policy requires public support, which in turn implies a need to enable the public not just to understand policy but also to be engaged in its development. Where complex science and technology issues are involved in policy making, this takes time, so it is important to identify emerging issues of this type and prepare engagement plans. In our horizon scanning exercise, we used a modified Delphi technique [1]. A wide group of people with interests in the science and policy interface (drawn from policy makers, policy advisers, practitioners, the private sector, and academics) elicited a long list of emergent policy issues in which science and technology would feature strongly and which would also necessitate public engagement as policies are developed. This was then refined to a short list of top priorities for policy makers. Thirty issues were identified within the broad areas of business and technology; energy and environment; government, politics and education; health, healthcare, population and aging; information, communication, infrastructure and transport; and public safety and national security.