Pinching sweaters on your phone – iShoogle: multi-gesture touchscreen fabric simulator using natural on-fabric gestures to communicate textile qualities
The inability to touch fabrics online frustrates consumers, who are used to evaluating
physical textiles by engaging in complex, natural gestural interactions. When
customers interact with physical fabrics, they combine cross-modal information about
the fabric's look, sound and handle to build an impression of its physical qualities. But
whenever an interaction with a fabric is limited (e.g. when viewing clothes online)
there is a perceptual gap between the fabric qualities perceived digitally and the actual
fabric qualities that a person would perceive when interacting with the physical fabric.
The goal of this thesis was to create a fabric simulator that minimized this perceptual
gap, enabling accurate perception of the qualities of fabrics presented digitally.
We designed iShoogle, a multi-gesture touch-screen sound-enabled fabric simulator
that aimed to create an accurate representation of fabric qualities without the need for
touching the physical fabric swatch. iShoogle uses on-screen gestures (inspired by
natural on-fabric movements e.g. Crunching) to control pre-recorded videos and
audio of fabrics being deformed (e.g. being Crunched). iShoogle creates an illusion of
direct manipulation of both the video and the displayed fabric.
This thesis describes the results of nine studies leading towards the development and
evaluation of iShoogle. In the first three studies, we combined expert and non-expert
textile-descriptive words and grouped them into eight dimensions labelled with terms
Crisp, Hard, Soft, Textured, Flexible, Furry, Rough and Smooth. These terms were
used to rate fabric qualities throughout the thesis. We observed natural on-fabric
gestures during a fabric handling study (Study 4) and used the results to design
iShoogle's on-screen gestures. In Study 5 we examined iShoogle's performance and
speed in a fabric handling task and in Study 6 we investigated users' preferences for
sound playback interactivity. iShoogle's accuracy was then evaluated in the last three
studies by comparing participants’ ratings of textile qualities when using iShoogle
with ratings produced when handling physical swatches. We also described the
recording and processing techniques for the video and audio content that iShoogle
used. Finally, we described the iShoogle iPhone app that was released to the general
public. Our evaluation studies showed that iShoogle significantly improved the accuracy of
fabric perception in at least some cases. Further research could investigate which
fabric qualities and which fabrics are particularly suited to being represented with
iShoogle.
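The core interaction loop described above, in which a recognized on-screen gesture selects the matching pre-recorded video and audio of a fabric being deformed, can be sketched as a simple lookup. This is an illustrative sketch only; all gesture and clip names are invented, not iShoogle's actual assets or API.

```python
# Minimal sketch: map a recognized on-screen gesture to the pre-recorded
# video/audio pair of the fabric undergoing the matching deformation.
# All names below are illustrative, not iShoogle's actual assets.

GESTURE_CLIPS = {
    "crunch": {"video": "wool_crunch.mp4", "audio": "wool_crunch.wav"},
    "stroke": {"video": "wool_stroke.mp4", "audio": "wool_stroke.wav"},
    "pinch":  {"video": "wool_pinch.mp4",  "audio": "wool_pinch.wav"},
}

def clips_for_gesture(gesture: str) -> dict:
    """Return the video/audio pair to play for a recognized on-screen gesture."""
    try:
        return GESTURE_CLIPS[gesture]
    except KeyError:
        raise ValueError(f"unrecognized gesture: {gesture}")

print(clips_for_gesture("crunch")["audio"])  # wool_crunch.wav
```

Scrubbing through the selected clip in response to finger movement is what creates the illusion of direct manipulation; the lookup above only covers clip selection.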
INTERRACIAL VIOLENCE AND RACIALIZED NARRATIVES: DISCOVERING THE ROAD LESS TRAVELED
Abstract: This article questions the underlying assumptions and, therefore, the potential effectiveness of Anthony Alfieri's recent essay, "Defending Racial Violence." Alfieri's proposal, in the form of an enforceable rule, would likely wind up on a collision course with principles underlying the First Amendment to the U.S. Constitution. The article demonstrates the level of confusion that develops from rules that too easily or arbitrarily frustrate the legitimate interests of attorneys and clients in pursuing the best criminal defense. It also recommends providing carefully constructed, simulated exercises for classroom dialogue in ethics courses as a viable alternative method for introducing a race-conscious ethic to young lawyers that does not run afoul of basic constitutional freedoms. The article disagrees with Alfieri's conclusion that "defense lawyers find scarce opportunity to contest the dominant narratives embedded in laws, institutional practices, and legal relations, even when those narratives inscribe negative racial stereotypes." It concludes that the history and evolution of the entire system of criminal justice in this country dictates greater reliance upon mainstream prescriptions of neutrality rather than race-conscious rules, and affirms that on questions concerning injury to black America's social identity, critics like Alfieri usually fail to consider just how broad the range of race-based assumptions that ground representations of moral agency is.
Keywords: Racialized narratives. Criminal justice system. Race relations in the United States.
Authoring Multi-Actor Behaviors in Crowds With Diverse Personalities
Multi-actor simulation is critical to cinematic content creation, disaster and security simulation, and interactive entertainment. A key challenge is providing an appropriate interface for authoring high-fidelity virtual actors with feature-rich control mechanisms capable of complex interactions with the environment and other actors. In this chapter, we present work that addresses the problem of behavior authoring at three levels: Individual and group interactions are conducted in an event-centric manner using parameterized behavior trees, social crowd dynamics are captured using the OCEAN personality model, and a centralized automated planner is used to enforce global narrative constraints on the scale of the entire simulation. We demonstrate the benefits and limitations of each of these approaches and propose the need for a single unifying construct capable of authoring functional, purposeful, autonomous actors which conform to a global narrative in an interactive simulation.
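The combination of parameterized behavior trees with OCEAN personality traits can be sketched as follows. This is a toy illustration under invented names, not the chapter's actual system: a sequence node ticks child actions in order, and a personality trait scales a behavior parameter at runtime.

```python
# Toy sketch of a parameterized behavior tree whose parameters are modulated
# by an OCEAN personality trait. All names are invented for illustration.

SUCCESS, FAILURE = "success", "failure"

class Action:
    """Leaf node: runs a function on the actor and reports success/failure."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self, actor):
        return SUCCESS if self.fn(actor) else FAILURE

class Sequence:
    """Composite node: runs children in order; fails as soon as one fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, actor):
        for child in self.children:
            if child.tick(actor) == FAILURE:
                return FAILURE
        return SUCCESS

# OCEAN traits (openness, conscientiousness, extraversion, agreeableness,
# neuroticism) are values in [0, 1]; only extraversion is used here.
actor = {"ocean": {"extraversion": 0.9}, "greeted": False}

def approach(a):
    # More extraverted actors initiate an approach from further away.
    a["approach_radius"] = 1.0 + 2.0 * a["ocean"]["extraversion"]
    return True

def greet(a):
    a["greeted"] = True
    return True

tree = Sequence(Action("approach", approach), Action("greet", greet))
print(tree.tick(actor), actor["approach_radius"])  # success 2.8
```

An event-centric authoring layer would then instantiate such trees per event, binding the participating actors as parameters rather than hard-coding them.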
Simulation of nonverbal social interaction and small groups dynamics in virtual environments
How can the behaviour of humans who interact with other humans be simulated in virtual environments? This thesis investigates the issue by proposing a number of dedicated models, computer languages, software architectures, and specifications of computational components. It relies on a large knowledge base from the social sciences, which offers concepts, descriptions, and classifications that guided the research process. The simulation of nonverbal social interaction and group dynamics in virtual environments can be divided into two main research problems: (1) an action selection problem, where autonomous agents must be made capable of deciding when, with whom, and how they interact according to individual characteristics of themselves and others; and (2) a behavioural animation problem, where, on the basis of the selected interaction, 3D characters must realistically behave in their virtual environment and communicate nonverbally with others by automatically triggering appropriate actions such as facial expressions, gestures, and postural shifts. In order to introduce the problem of action selection in social environments, a high-level architecture for social agents, based on the sociological concepts of role, norm, and value, is first discussed. A model of action selection for members of small groups, based on proactive and reactive motivational components, is then presented. This model relies on a new tag-based language called Social Identity Markup Language (SIML), allowing the rich specification of agents' social identities and relationships. A complementary model controls the simulation of interpersonal relationship development within small groups. The interactions of these two models create a complex system exhibiting emergent properties for the generation of meaningful sequences of social interactions in the temporal dimension.
To address the issues related to the visualization of nonverbal interactions, results are presented of an evaluation experiment aimed at identifying the application requirements through an analysis of how real people interact nonverbally in virtual environments. Based on these results, a number of components for MPEG-4 body animation, AML (a tag-based language for the seamless integration and synchronization of facial animation, body animation, and speech), and a high-level interaction visualization service for the VHD++ platform are described. This service simulates the proxemic and kinesic aspects of nonverbal social interactions, and comprises such functionalities as parametric postures, adapters and observation behaviours, the social avoidance of collisions, intelligent approach behaviours, and the calculation of suitable interaction distances and angles.
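The last functionality mentioned, calculating suitable interaction distances and angles, can be illustrated with a toy proxemics computation. The zone thresholds below are illustrative values loosely inspired by Hall's proxemic zones, not the thesis's actual parameters, and the distance heuristic is invented for the sketch.

```python
import math

# Toy proxemics sketch: pick a target interpersonal distance for an
# interaction and the angle an agent must face to look at its partner.
# Zone upper bounds (metres) are illustrative, not the thesis's values.

ZONES = {"intimate": 0.45, "personal": 1.2, "social": 3.6}

def interaction_distance(relationship: str) -> float:
    """Crude target distance: half the upper bound of the chosen zone."""
    return ZONES[relationship] / 2

def facing_angle(agent_pos, partner_pos):
    """Angle (radians) from agent to partner, for orienting the 3D character."""
    dx = partner_pos[0] - agent_pos[0]
    dy = partner_pos[1] - agent_pos[1]
    return math.atan2(dy, dx)

print(interaction_distance("personal"), round(facing_angle((0, 0), (1, 1)), 3))  # 0.6 0.785
```

In a full system these values would feed the approach behaviour: the agent walks until the partner is at the target distance, then turns by the facing angle before triggering postures and gestures.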
Getting Pushy in Brazil: Using P.U.S.H. (“Present Up The Stairway To Heaven”) To Teach Presentation Skills In A Business English Context
In a business English (“English As A Second Language” or “ESL”) context, is there a relatively straightforward, effective, and fun way for students to improve their presentation skills? In this paper, I propose a modular system which I call “Present Up The Stairway To Heaven” (or P.U.S.H.), complete with exercises and simulations, designed to take students through successive steps towards better business presentations in English in approximately 24 hours of classroom instruction and practice.
Human-Robot Interaction architecture for interactive and lively social robots
Mención Internacional en el título de doctor (International Mention in the doctoral degree).
Society is experiencing a series of demographic changes that can result in an imbalance between
the active working and non-working age populations. One of the solutions considered to mitigate
this problem is the inclusion of robots in multiple sectors, including the service sector. But for
this to be a viable solution, among other features, robots need to be able to interact with humans
successfully. This thesis seeks to endow a social robot with the abilities required for natural
human-robot interaction. The main objective is to contribute to the body of knowledge in the area
of Human-Robot Interaction with a new, platform-independent, modular approach that focuses on
giving roboticists the tools required to develop applications that involve interactions with humans. In
particular, this thesis focuses on three problems that need to be addressed: (i) modelling interactions
between a robot and a user; (ii) endowing the robot with the expressive capabilities required for
successful communication; and (iii) endowing the robot with a lively appearance.
The approach to dialogue modelling presented in this thesis proposes to model dialogues as a
sequence of atomic interaction units, called Communicative Acts, or CAs. They can be parametrized
at runtime to achieve different communicative goals, and are endowed with mechanisms for handling
some of the uncertainties that arise during interaction. Two dimensions have been used to identify the
required CAs: initiative (the robot's or the user's) and intention (either to retrieve information or to
convey it). These basic CAs can be combined hierarchically to create more complex, reusable
structures. This approach simplifies the creation of new interactions, by allowing developers to focus
exclusively on designing the flow of the dialogue, without having to re-implement functionalities
that are common to all dialogues (like error handling, for example).
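The CA scheme described above, atomic units defined by initiative and intention that nest into reusable dialogue structures, can be sketched as follows. The class names and transcript format are illustrative, not the thesis's actual implementation.

```python
# Sketch of the Communicative Act (CA) scheme: each atomic CA is defined by
# initiative (robot/user) and intention (retrieve/convey), and CAs combine
# hierarchically into reusable dialogue structures. Names are illustrative.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CommunicativeAct:
    initiative: str   # "robot" or "user"
    intention: str    # "retrieve" (obtain information) or "convey" (provide it)
    content: str      # runtime parameter: what to ask for or what to say

@dataclass
class Dialogue:
    """Hierarchical combination of CAs and sub-dialogues, run in order."""
    steps: List = field(default_factory=list)

    def run(self):
        transcript = []
        for step in self.steps:
            if isinstance(step, Dialogue):
                transcript += step.run()   # recurse into a sub-dialogue
            else:
                transcript.append(f"{step.initiative}/{step.intention}: {step.content}")
        return transcript

greet = CommunicativeAct("robot", "convey", "greeting")
ask_name = CommunicativeAct("robot", "retrieve", "user name")
intro = Dialogue([greet, ask_name])  # reusable sub-dialogue
session = Dialogue([intro, CommunicativeAct("user", "convey", "answer")])
print(session.run())
```

The point of the hierarchy is reuse: `intro` can be dropped into any new dialogue, while error handling and other cross-cutting mechanisms would live inside the CA implementation rather than being rewritten per application.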
The expressiveness of the robot is based on the use of a library of predefined multimodal gestures,
or expressions, modelled as state machines. The module managing the expressiveness receives requests
for performing gestures, schedules their execution in order to avoid any conflict that might
arise, loads them, and ensures that their execution proceeds without problems. The proposed approach
is also able to generate expressions at runtime from a list of unimodal actions (an utterance,
the motion of a limb, etc.). One of the key features of the proposed expressiveness management
approach is the integration of a series of modulation techniques that can be used to modify the
robot's expressions at runtime. This allows the robot to adapt them to the particularities of a
given situation (which also increases the variability of the robot's expressiveness), and to display
different internal states with the same expressions.
Considering that being recognized as a living being is a requirement for engaging in social
encounters, the perception of a social robot as a living entity is a key requirement to foster
human-robot interactions. In this dissertation, two approaches have been proposed. The first
method generates actions for the different interfaces of the robot at certain intervals. The frequency
and intensity of these actions are defined by a signal that represents the pulse of the robot, which can
be adapted to the context of the interaction or the internal state of the robot. The second method
enhances the robot's utterances by predicting the appropriate non-verbal expressions that should
accompany them, according to the content of the robot's message, as well as its communicative
intention. A deep learning model receives the transcription of the robot's utterances, predicts
which expressions should accompany them, and synchronizes them, so each selected gesture starts at
the appropriate time. The model has been developed using a combination of a Long Short-Term
Memory network-based encoder and a Conditional Random Field for generating a sequence of
gestures that are combined with the robot’s utterance.
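In an encoder-plus-CRF model of this kind, the CRF's job at inference time is to pick the gesture sequence that best balances per-word scores against gesture-to-gesture transition scores, which is Viterbi decoding. The sketch below shows only that decoding step in pure Python; the LSTM encoder is omitted, and the emission scores, transition scores, and gesture labels are all made up for illustration.

```python
# CRF-style inference sketch: Viterbi decoding of a gesture sequence over
# per-word emission scores (which an LSTM encoder would produce) and pairwise
# transition scores. All scores and labels below are invented.

def viterbi(emissions, transitions, labels):
    """Best label sequence under emission + transition scores."""
    n = len(emissions)
    # best[t][lab] = (best score of any path ending in lab at step t, that path)
    best = [{lab: (emissions[0][lab], [lab]) for lab in labels}]
    for t in range(1, n):
        layer = {}
        for lab in labels:
            score, path = max(
                (best[t - 1][prev][0] + transitions[(prev, lab)] + emissions[t][lab],
                 best[t - 1][prev][1] + [lab])
                for prev in labels
            )
            layer[lab] = (score, path)
        best.append(layer)
    return max(best[-1].values())[1]

labels = ["beat", "point", "none"]
emissions = [  # one score dict per word of the utterance (illustrative)
    {"beat": 2.0, "point": 0.5, "none": 1.0},
    {"beat": 0.2, "point": 1.8, "none": 0.5},
]
# Toy transition scores that mildly favour changing gesture between words.
transitions = {(a, b): (0.5 if a != b else 0.0) for a in labels for b in labels}
print(viterbi(emissions, transitions, labels))  # ['beat', 'point']
```

The synchronization step described in the abstract would then align each decoded gesture's start time with its word in the speech output.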
All the elements presented above form the core of a modular Human-Robot Interaction
architecture that has been integrated into multiple platforms and tested under different conditions.
Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática, Universidad Carlos III de Madrid. Committee: Chair: Fernando Torres Medina; Secretary: Concepción Alicia Monje Micharet; Member: Amirabdollahian Farshi
The order of ordering: analysing customer-bartender service encounters in public bars
This thesis will explore how customers and bartenders accomplish the service encounter in a public house, or bar. Whilst there is a body of existing literature on service encounters, this mainly investigates customer satisfaction and ignores the mundane activities that comprise the service encounter itself. In an attempt to fill this gap, I will examine how the activities unfold sequentially by examining the spoken and embodied conduct of the participants over the course of the encounter. The data comprise audio- and video-recorded, dyadic and multi-party interactions between customer(s) and bartender(s), occurring at the bar counter. The data were analysed using conversation analysis (CA) to investigate the talk and embodied conduct of participants, as these unfold sequentially.
The first analytic chapter investigates how interactions between customers and bartenders are opened. The analysis reveals practices for communicating availability to enter into a service encounter, with customers found to do this primarily through embodied conduct, and bartenders primarily through spoken turns. The second analytic chapter investigates the role of objects in the ordering sequence. Specifically, the analysis reveals how the cash till and the seating tables in the bar are mobilized by participants to accomplish action. In the third analytic chapter, multi-party interactions are investigated, focusing on the organization of turn-taking when two or more customers interact with one or more bartenders. Here, customers are found to engage in activities where they align as a unit, with a lead speaker who interacts with the bartender on behalf of the party. In the final analytic chapter, the payment sequence of the service encounter is explored to investigate at what sequential position in the interaction payment, as an action, is oriented to. Analysis reveals that a wallet, purse, or bag may be displayed, and money or a payment card retrieved, in a variety of sequential slots, with each contributing differentially to the efficiency of the interaction. I also find that payment may be prematurely proffered due to the preference for efficiency.
Overall, the thesis makes innovative contributions to our understanding of customer and bartender practices for accomplishing core activities in what members come to recognize as a service encounter. It also contributes substantially to basic conversation analytic research on openings, which has traditionally been founded on telephone interactions, as well as the action of requesting. I enhance our knowledge of face-to-face opening practices by revealing that the canonical opening sequence (see Schegloff, 1968; 1979; 1986) is not present, at least in this context. From the findings, I also develop our understanding of how objects constrain, or further, progressivity in interaction, while arguing for the importance of analysing the participants' semiotic field in aggregate with talk and embodied conduct. The thesis also contributes to existing literature on multi-party interactions, identifying a new turn-taking practice with a directional flow that works effectively to accomplish ordering. Finally, I contribute to knowledge on the provision of payment, an under-researched yet prominent action in the service encounter. This thesis will show the applicability of CA to service providers; by analysing the talk and embodied conduct in aggregate, effective practices for accomplishing a successful service encounter are revealed.
BGSU Football Program November 02, 1974
Football program: Bowling Green State University vs. Ohio University, Homecoming, November 2, 1974.https://scholarworks.bgsu.edu/football_programs/1145/thumbnail.jp