
    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans and based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, from enabling computers to understand human behavior.

    Stateful SOA-conformant Services as Building Blocks for Interactive Software Systems

    Services implemented through information and communication technology need to provide value for customers, with whom they usually have non-trivial interaction. However, user interface and (Web) service specifications are often disconnected. The most widely used Web services are stateless; hence only trivial user interaction, with one-step input and output, can be embedded in such a service. Remembering state is a prerequisite for implementing non-trivial user interaction with a service. We present new stateful SOA-conformant services as building blocks for interactive software systems. This new kind of service has a unified high-level protocol both for (non-trivial) user interaction with a machine and for machine-machine communication. Services with the same protocol can substitute for each other (also dynamically at runtime), whether they are machine or user services. Using such services as building blocks, interactive software systems can be composed, even recursively. In fact, (graphical) user interfaces for non-trivial interaction can be generated automatically from such service specifications.
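
    To make the idea concrete, here is a minimal Java sketch (all names are hypothetical, not taken from the paper): a stateful service exposes the same step-wise protocol whether the counterpart is a person filling in a generated form or another machine service, so implementations with the same protocol can substitute for each other.

        import java.util.List;
        import java.util.Map;

        // Hypothetical unified protocol: a conversation with a service is a sequence of steps,
        // and the service keeps state between steps, so non-trivial dialogs become possible.
        interface StatefulService {
            InteractionStep start();                          // open a new conversation
            InteractionStep next(Map<String, String> input);  // submit input, receive the next step
            boolean finished();                               // true once the dialog is complete
        }

        // One step of the dialog: what the service asks for next. A UI generator can render
        // requiredFields as a form; a machine client fills them in programmatically.
        record InteractionStep(String prompt, List<String> requiredFields) {}

        // Example service behind the protocol; human-facing and machine-facing clients see the same interface.
        class OrderService implements StatefulService {
            private int step = 0;
            private final Map<String, String> state = new java.util.HashMap<>();

            public InteractionStep start() {
                step = 1;
                return new InteractionStep("Choose a product", List.of("productId"));
            }

            public InteractionStep next(Map<String, String> input) {
                state.putAll(input);   // remember earlier answers: the "stateful" part
                step++;
                return (step == 2)
                    ? new InteractionStep("Enter a delivery address", List.of("address"))
                    : new InteractionStep("Order for " + state.get("productId") + " confirmed", List.of());
            }

            public boolean finished() { return step >= 3; }
        }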

    Automatic design of multimodal presentations

    We describe our attempt to integrate multiple AI components such as planning, knowledge representation, natural language generation, and graphics generation into a functioning prototype called WIP that plans and coordinates multimodal presentations in which all material is generated by the system. WIP allows the generation of alternate presentations of the same content taking into account various contextual factors such as the user's degree of expertise and preferences for a particular output medium or mode. The current prototype of WIP generates multimodal explanations and instructions for assembling, using, maintaining or repairing physical devices. This paper introduces the task, the functionality and the architecture of the WIP system. We show that in WIP the design of a multimodal document is viewed as a non-monotonic process that includes various revisions of preliminary results, massive replanning and plan repairs, and many negotiations between design and realization components in order to achieve an optimal division of work between text and graphics. We describe how the plan-based approach to presentation design can be exploited so that graphics generation influences the production of text and vice versa. Finally, we discuss the generation of cross-modal expressions that establish referential relationships between text and graphics elements.
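
    The negotiation between design and realization components can be pictured with a toy mode-allocation sketch in Java (invented names, not the actual WIP architecture): a design component assigns each content item to the cheaper of two realizers and revises the assignment when realization fails.

        import java.util.List;

        // Invented sketch of plan-based mode allocation: a design component proposes an
        // assignment of content items to text or graphics and revises it when a
        // realization component reports that it cannot express an item well.
        class ModeAllocator {
            enum Mode { TEXT, GRAPHICS }

            // A realizer estimates how well it can express an item (lower cost = better) and may fail.
            interface Realizer {
                double cost(String contentItem);
                boolean realize(String contentItem);
            }

            void present(List<String> contentItems, Realizer text, Realizer graphics) {
                for (String item : contentItems) {
                    Mode preferred = text.cost(item) <= graphics.cost(item) ? Mode.TEXT : Mode.GRAPHICS;
                    Realizer first  = (preferred == Mode.TEXT) ? text : graphics;
                    Realizer second = (preferred == Mode.TEXT) ? graphics : text;
                    // Non-monotonic step: if the preferred mode fails, revise the plan and try the other mode.
                    if (!first.realize(item) && !second.realize(item)) {
                        System.out.println("Replanning needed for: " + item);
                    }
                }
            }
        }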

    Application of software mining to automatic user interface generation

    Many software projects spend a significant proportion of their time developing the user interface, so any degree of automation in this area has clear benefits. Research projects to date generally take one of three approaches: interactive graphical specification tools, model-based generation tools, or language-based tools. The first two have proven popular in industry but are labour-intensive and error-prone. The third is more automated but has practical problems which limit its usefulness. This paper proposes applying the emerging field of software mining to perform runtime inspection of an application's architecture and reduce the labour-intensive nature of interactive graphical specification tools and model-based generation tools. It also proposes that UI generation can be made more practical by delimiting useful bounds to the generation process. The paper concludes with a description of a prototype project that implements these ideas.
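
    One minimal reading of "runtime inspection", sketched in Java with hypothetical names (this is not the paper's prototype): reflect over a domain object's fields and derive one form field per property, leaving layout decisions and overrides to the interactive or model-based tools.

        import java.lang.reflect.Field;

        // Hypothetical sketch: inspect an object at runtime and derive simple form fields from it.
        // A real tool would also mine annotations, naming conventions and existing UI code.
        class FormSketcher {
            static void printForm(Object model) {
                for (Field f : model.getClass().getDeclaredFields()) {
                    String widget = (f.getType() == boolean.class) ? "checkbox" : "textbox";
                    System.out.printf("%s: %s (%s)%n", f.getName(), widget, f.getType().getSimpleName());
                }
            }

            static class Customer {        // example domain class to inspect
                String name;
                int age;
                boolean newsletter;
            }

            public static void main(String[] args) {
                printForm(new Customer()); // prints one generated field description per property
            }
        }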

    Desiderata for an Every Citizen Interface to the National Information Infrastructure: Challenges for NLP

    In this paper, I provide desiderata for an interface that would enable ordinary people to properly access the capabilities of the NII. I identify some of the technologies that will be needed to achieve these desiderata, and discuss current and future research directions that could lead to the development of such technologies. In particular, I focus on the ways in which theory and techniques from natural language processing could contribute to future interfaces to the NII. The evolving national information infrastructure (NII) has made available a vast array of on-line services and networked information resources in a variety of forms (text, speech, graphics, images, video). At the same time, advances in computing and telecommunications technology have made it possible for an increasing number of households to own (or lease or use) powerful personal computers that are connected to this resource. Accompanying this progress is the expectation that people will be able to more…

    Meetings and Meeting Modeling in Smart Environments

    In this paper we survey our research on smart meeting rooms and its relevance for augmented reality meeting support and for virtual reality generation of meetings, in real time or off-line. The research reported here forms part of the European 5th and 6th framework programme projects multi-modal meeting manager (M4) and augmented multi-party interaction (AMI). Both projects aim at building a smart meeting environment that is able to collect multimodal captures of the activities and discussions in a meeting room, with the aim of using this information as input to tools that allow real-time support, browsing, retrieval and summarization of meetings. Our aim is to research (semantic) representations of what takes place during meetings in order to allow generation, e.g. in virtual reality, of meeting activities (discussions, presentations, voting, etc.). Being able to do so also allows us to look at tools that provide support during a meeting and at tools that allow people who cannot be physically present to take part in a virtual way. This may lead to situations where the differences between real meeting participants, human-controlled virtual participants and (semi-)autonomous virtual participants disappear.
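
    As a toy illustration of such a (semantic) representation, assuming invented names and fields rather than the actual M4/AMI schemas: a time-stamped meeting event record that browsing, summarization or virtual-reality replay tools could query.

        import java.time.Instant;
        import java.util.List;

        // Invented sketch of a semantic meeting record: time-stamped activities that can drive
        // real-time support tools, browsing and summarization, or replay in virtual reality.
        record MeetingEvent(Instant start, Instant end, String participant, Activity activity, String content) {
            enum Activity { DISCUSSION, PRESENTATION, VOTE, QUESTION }
        }

        class MeetingLog {
            private final List<MeetingEvent> events = new java.util.ArrayList<>();

            void add(MeetingEvent e) { events.add(e); }

            // Example query a summarization or browsing tool might run.
            List<MeetingEvent> byActivity(MeetingEvent.Activity a) {
                return events.stream().filter(e -> e.activity() == a).toList();
            }
        }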

    MultiModal semantic representation
