16,982 research outputs found

    Self-Evaluation in Youth Media and Technology Programs: A Report to the Time Warner Foundation

    This 2003 report documents the self-evaluation practices, challenges, and concerns of the Time Warner Foundation's Community Grantees; reviews the resources available to youth media programs wishing to conduct program and outcome evaluations; and begins to identify useful directions for further exploration.

    People, Land, Arts, Culture and Engagement: Taking Stock of the Place Initiative

    This report serves as a point of entry into creative placemaking as defined and supported by the Tucson Pima Arts Council's PLACE Initiative. To assess how and to what degree the PLACE projects were helping to transform communities, TPAC was asked by the Kresge Foundation to undertake a comprehensive evaluation. This involved discussions with stakeholders about support mechanisms, professional development, investment, and the impact of the PLACE Initiative in Tucson, Arizona, and the Southwest region, as well as the gathering of qualitative and quantitative data to develop indicators and methods for evaluating the social impact of the arts in TPAC's grantmaking. The report documents one year of observations and research by the PLACE research team, outside researchers and reviewers, local and regional working groups, TPAC staff, and TPAC constituency. It considers data from the first four years of PLACE Initiative funding, including learning exchanges, focus groups, individual interviews, grantmaking, and all reporting. It is also informed by evaluation and assessment that occurred in the development of the PLACE Initiative, in particular Maribel Alvarez's Two-Way Mirror: Ethnography as a Way to Assess Civic Impact of Arts-Based Engagement in Tucson, Arizona (2009), and Mark Stern and Susan Seifert's Documenting Civic Engagement: A Plan for the Tucson Pima Arts Council (2009). Both of these publications were supported by Animating Democracy, a program of Americans for the Arts that promotes arts and culture as potent contributors to community, civic, and social change, and both describe how TPAC approaches evaluation strategies associated with the social impact of the arts in Tucson and Pima County. This report outlines the local context and historical antecedents of the PLACE Initiative in the region, with an emphasis on the concept of "belonging" as a primary characteristic of PLACE projects and policy. 
It describes the PLACE projects as well as the role of TPAC in creating and facilitating the Initiative. Based on the collective understanding of the research team, the impacts of the PLACE Initiative are organized into three main realms -- institutions, artists, and communities. These realms are further addressed in case studies from select grantees, whose narratives offer rich, detailed perspectives on PLACE projects in context, with all their successes, rewards, and challenges for artists, communities, and institutions. Lastly, the report offers preliminary research findings on PLACE by TPAC in collaboration with Dr. James Roebuck, codirector of the University of Arizona's ERAD (Evaluation Research and Development) Program.

    Framework for proximal personified interfaces


    Rockefeller Foundation - 1999 Annual Report

    Contains a statement of mission and vision, the president's message, program information, a grants list, financial statements, and a list of board members and staff.

    Human-Robot Interaction architecture for interactive and lively social robots

    International Mention in the doctoral degree. Society is experiencing a series of demographic changes that can result in an imbalance between the working-age and non-working-age populations. One of the solutions considered to mitigate this problem is the introduction of robots in multiple sectors, including the service sector. But for this to be a viable solution, robots need, among other capabilities, to be able to interact with humans successfully. In the context of applying social robots to elderly care, this thesis seeks to endow a social robot with the abilities required for natural human-robot interaction. The main objective is to contribute to the body of knowledge in the area of Human-Robot Interaction with a new, platform-independent, modular approach that focuses on giving roboticists the tools required to develop applications that involve interactions with humans. 
    In particular, this thesis focuses on three problems that need to be addressed: (i) modelling interactions between a robot and a user; (ii) endowing the robot with the expressive capabilities required for successful communication; and (iii) giving the robot a lively appearance. The approach to dialogue modelling presented in this thesis proposes to model dialogues as a sequence of atomic interaction units called Communicative Acts, or CAs. They can be parametrized at runtime to achieve different communicative goals, and are endowed with mechanisms oriented to resolving some of the uncertainties that arise during interaction. Two dimensions have been used to identify the required CAs: initiative (the robot or the user) and intention (either to retrieve information or to convey it). These basic CAs can be combined hierarchically to create more complex, reusable structures. This approach simplifies the creation of new interactions by allowing developers to focus exclusively on designing the flow of the dialogue, without having to re-implement functionalities that are common to all dialogues (such as error handling). The expressiveness of the robot is based on a library of predefined multimodal gestures, or expressions, modelled as state machines. The module managing the expressiveness receives requests for performing gestures, schedules their execution to avoid any conflicts that might arise, loads them, and ensures that their execution completes without problems. The proposed approach is also able to generate expressions at runtime from a list of unimodal actions (an utterance, the motion of a limb, etc.). One of the key features of the proposed expressiveness management approach is the integration of a series of modulation techniques that can be used to modify the robot's expressions at runtime. 
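    The Communicative Act scheme described above -- atomic units defined by initiative and intention, parametrized at runtime and composed hierarchically -- can be sketched roughly as follows. All class and parameter names here are illustrative assumptions, not the thesis's actual API:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Union

class Initiative(Enum):
    ROBOT = "robot"
    USER = "user"

class Intention(Enum):
    RETRIEVE = "retrieve"  # obtain information from the interlocutor
    CONVEY = "convey"      # provide information to the interlocutor

@dataclass
class CommunicativeAct:
    """Atomic interaction unit, parametrized at runtime."""
    initiative: Initiative
    intention: Intention
    params: dict = field(default_factory=dict)

    def run(self) -> str:
        # Placeholder execution: a real CA would drive speech synthesis,
        # recognition, and error handling. Here we only describe the act.
        return f"{self.initiative.value}:{self.intention.value}:{self.params.get('topic', '')}"

@dataclass
class CompositeAct:
    """Hierarchical combination of CAs into a reusable dialogue structure."""
    children: List[Union[CommunicativeAct, "CompositeAct"]]

    def run(self) -> str:
        return " -> ".join(child.run() for child in self.children)

# A reusable "ask, then confirm" structure built from basic CA types
ask_name = CommunicativeAct(Initiative.ROBOT, Intention.RETRIEVE, {"topic": "name"})
confirm = CommunicativeAct(Initiative.ROBOT, Intention.CONVEY, {"topic": "confirmation"})
dialogue = CompositeAct([ask_name, confirm])
print(dialogue.run())  # robot:retrieve:name -> robot:convey:confirmation
```

    The point of the hierarchy is that `dialogue` itself can be nested inside larger composites, so common structures are written once and reused across interactions.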
    This allows the robot to adapt its expressions to the particularities of a given situation (which also increases the variability of the robot's expressiveness), and to display different internal states with the same expressions. Considering that being recognized as a living being is a prerequisite for engaging in social encounters, the perception of a social robot as a living entity is a key requirement for fostering human-robot interactions. In this dissertation, two approaches are proposed. The first method generates actions for the different interfaces of the robot at certain intervals. The frequency and intensity of these actions are defined by a signal that represents the pulse of the robot, which can be adapted to the context of the interaction or the internal state of the robot. The second method enhances the robot's utterances by predicting the appropriate non-verbal expressions that should accompany them, according to the content of the robot's message as well as its communicative intention. A deep learning model receives the transcription of the robot's utterance, predicts which expressions should accompany it, and synchronizes them so that each selected gesture starts at the appropriate time. The model has been developed using a combination of a Long Short-Term Memory network-based encoder and a Conditional Random Field that generates the sequence of gestures combined with the robot's utterance. All the elements presented above form the core of a modular Human-Robot Interaction architecture that has been integrated into multiple platforms and tested under different conditions. Doctoral Programme in Electrical, Electronic and Automation Engineering, Universidad Carlos III de Madrid. Committee -- President: Fernando Torres Medina; Secretary: Concepción Alicia Monje Micharet; Member: Amirabdollahian Farshi
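    The first liveliness method -- idle actions whose frequency and intensity follow a pulse signal shaped by context or internal state -- might be sketched as below. The sinusoidal pulse, the `arousal` parameter, the threshold, and the action names are all illustrative assumptions, not the thesis's actual signal model:

```python
import math

def pulse(t: float, base_rate: float = 1.0, arousal: float = 0.5) -> float:
    """Pulse signal in [0, 1]; a higher arousal (internal state or
    interaction context) raises both the frequency and the amplitude
    of the robot's 'heartbeat'."""
    freq = base_rate * (1.0 + arousal)  # context speeds the pulse up
    amp = 0.5 + 0.5 * arousal           # and makes it stronger
    return amp * 0.5 * (1.0 + math.sin(2.0 * math.pi * freq * t))

def idle_actions(t: float, threshold: float = 0.6) -> list:
    """Trigger small idle actions (blink, head sway) when the pulse is
    high; each action's intensity follows the current pulse value."""
    p = pulse(t)
    actions = []
    if p > threshold:
        actions.append(("blink", round(p, 2)))
        actions.append(("head_sway", round(p / 2, 2)))
    return actions

# Sample the signal over a second of interaction time
for t in (0.0, 0.25, 0.5, 0.75):
    print(t, idle_actions(t))
```

    Because the pulse is just a function of time and an arousal-like parameter, adapting the robot's liveliness to a calmer or more excited state only requires changing that one input rather than re-authoring the idle behaviours.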

    Assessment @ Bond


    Reflections from Participants

    The Road Ahead: Public Dialogue on Science and Technology brings together some of the UK's leading thinkers and practitioners in science and society to ask where we have got to, how we have got here, why we are doing what we are doing, and what we should do next. The collection of essays aims to provide policy makers and dialogue deliverers with insights into how dialogue could be used in the future to strengthen the links between science and society. It is introduced by Professor Kathy Sykes, one of the UK's best-known science communicators and head of the Sciencewise-ERC Steering Group, and Jack Stilgoe, a DEMOS associate, who compiled the collection.