37,652 research outputs found

    Affect and believability in game characters: a review of the use of affective computing in games

    Virtual agents are important in many digital environments. Designing a character that engages users deeply through interaction is an intricate task constrained by many requirements. One aspect that has gained more attention recently is the affective dimension of the agent. Several studies have addressed the possibility of developing an affect-aware system for a better user experience. Particularly in games, including emotional and social features in NPCs adds depth to the characters, enriches interaction possibilities, and, combined with a basic level of competence, creates a more appealing game. Design requirements for emotionally intelligent NPCs differ from those for general autonomous agents, the main goal being a stronger player-agent relationship rather than problem solving and goal assessment. Nevertheless, deploying an affective module into NPCs adds to the complexity of the architecture and its constraints. In addition, using such a composite NPC in games seems beyond current technology, despite some brave attempts. However, a MARPO-type modular architecture would seem a useful starting point for adding emotions.
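    The abstract gives no code, but the modular separation it argues for (an affect layer feeding a behavior layer, rather than a monolithic agent) can be sketched briefly. The sketch below is a minimal illustration with invented class names and a toy appraisal rule; it is not the MARPO architecture or any published affect model.

```python
from dataclasses import dataclass

@dataclass
class AffectState:
    """Simple valence/arousal emotional state for an NPC (illustrative only)."""
    valence: float = 0.0   # -1.0 (negative) .. +1.0 (positive)
    arousal: float = 0.0   #  0.0 (calm)     ..  1.0 (agitated)

class AffectModule:
    """Appraises game events and updates the NPC's emotional state."""
    def __init__(self, decay: float = 0.95):
        self.state = AffectState()
        self.decay = decay

    def appraise(self, event: str, intensity: float) -> None:
        # Toy appraisal rule: helpful acts raise valence, threats lower it and raise arousal.
        if event == "player_helped":
            self.state.valence = min(1.0, self.state.valence + intensity)
        elif event == "player_attacked":
            self.state.valence = max(-1.0, self.state.valence - intensity)
            self.state.arousal = min(1.0, self.state.arousal + intensity)

    def tick(self) -> None:
        # Emotions decay toward neutral over time.
        self.state.valence *= self.decay
        self.state.arousal *= self.decay

class BehaviorModule:
    """Selects the NPC's next action; the affect state biases the choice."""
    def choose_action(self, affect: AffectState) -> str:
        if affect.valence < -0.5 and affect.arousal > 0.5:
            return "retaliate_or_flee"
        if affect.valence > 0.5:
            return "offer_help"
        return "patrol"

# The game loop feeds events to the affect module, then asks the behavior
# module for an action biased by the current emotional state.
affect, behavior = AffectModule(), BehaviorModule()
affect.appraise("player_attacked", 0.8)
affect.tick()
print(behavior.choose_action(affect.state))  # -> "retaliate_or_flee"
```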

    Why (and How) Networks Should Run Themselves

    The proliferation of networked devices, systems, and applications that we depend on every day makes managing networks more important than ever. The increasing security, availability, and performance demands of these applications require that increasingly difficult network management problems be solved in real time, across a complex web of interacting protocols and systems. Alas, just as the importance of network management has increased, the network has grown so complex that it is seemingly unmanageable. In this new era, network management requires a fundamentally new approach. Instead of optimizations based on closed-form analysis of individual protocols, network operators need data-driven, machine-learning-based models of end-to-end and application performance, grounded in high-level policy goals and a holistic view of the underlying components. Instead of anomaly detection algorithms that operate on offline analysis of network traces, operators need classification and detection algorithms that can make real-time, closed-loop decisions. Networks should learn to drive themselves. This paper explores this concept, discussing how we might attain this ambitious goal by more closely coupling measurement with real-time control and by relying on learning for inference and prediction about a networked application or system, as opposed to closed-form analysis of individual protocols.
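    To make the measure-infer-act loop concrete, here is a minimal closed-loop sketch in Python: a running performance estimate stands in for the learned, data-driven model the paper calls for, and a route switch stands in for real-time control. All names, thresholds, and the simulated measurements are illustrative assumptions, not the paper's system.

```python
import random
from collections import deque

class PerformanceModel:
    """Running estimate of end-to-end latency; a stand-in for a learned,
    data-driven performance model (illustrative only)."""
    def __init__(self, window: int = 50):
        self.samples = deque(maxlen=window)

    def observe(self, latency_ms: float) -> None:
        self.samples.append(latency_ms)

    def predict(self) -> float:
        # A real system would use a trained model; here we use a moving average.
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

def control_loop(target_latency_ms: float = 50.0, steps: int = 100) -> None:
    """Closed loop: measure -> infer -> act, repeated in (near) real time."""
    model = PerformanceModel()
    route = "primary"
    for _ in range(steps):
        # Measure: collect a latency sample from the current route (simulated).
        latency = random.gauss(60 if route == "primary" else 35, 10)
        model.observe(latency)
        # Infer: predict performance from recent measurements.
        predicted = model.predict()
        # Act: reconfigure when the high-level policy goal is violated.
        if predicted > target_latency_ms and route == "primary":
            route = "backup"

control_loop()
```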

    A Personalized System for Conversational Recommendations

    Searching for and making decisions about information is becoming increasingly difficult as the amount of information and the number of choices increase. Recommendation systems help users find items of interest of a particular type, such as movies or restaurants, but are still somewhat awkward to use. Our solution is to take advantage of the complementary strengths of personalized recommendation systems and dialogue systems, creating personalized aides. We present a system -- the Adaptive Place Advisor -- that treats item selection as an interactive, conversational process, with the program inquiring about item attributes and the user responding. Individual, long-term user preferences are unobtrusively obtained in the course of normal recommendation dialogues and used to direct future conversations with the same user. We present a novel user model that influences both item search and the questions asked during a conversation. We demonstrate the effectiveness of our system in significantly reducing the time and number of interactions required to find a satisfactory item, as compared to a control group of users interacting with a non-adaptive version of the system.
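    A toy sketch of the conversational pattern the abstract describes: the program narrows candidates by asking about item attributes, and a long-term user model learned from past dialogues lets later sessions skip questions. The catalogue, attribute set, and class names are invented for illustration; the Adaptive Place Advisor's actual dialogue management and user model are richer than this.

```python
from collections import Counter

# Toy catalogue of items described by attributes (invented for illustration).
ITEMS = [
    {"name": "Luigi's", "cuisine": "italian", "price": "cheap"},
    {"name": "Sakura", "cuisine": "japanese", "price": "moderate"},
    {"name": "Trattoria", "cuisine": "italian", "price": "moderate"},
]

class UserModel:
    """Long-term preferences accumulated unobtrusively across dialogues."""
    def __init__(self):
        self.prefs = {"cuisine": Counter(), "price": Counter()}

    def record(self, attribute: str, value: str) -> None:
        self.prefs[attribute][value] += 1

    def default(self, attribute: str):
        # The most frequently chosen value, if any, lets us skip a question.
        common = self.prefs[attribute].most_common(1)
        return common[0][0] if common else None

def converse(user_model: UserModel, answers: dict) -> dict:
    """One recommendation dialogue: ask only about attributes the user model
    cannot already predict, then return the first item that matches."""
    candidates = ITEMS
    for attribute in ("cuisine", "price"):
        value = user_model.default(attribute) or answers[attribute]  # ask only if unknown
        user_model.record(attribute, value)
        candidates = [item for item in candidates if item[attribute] == value]
    return candidates[0] if candidates else {}

model = UserModel()
print(converse(model, {"cuisine": "italian", "price": "cheap"}))  # first session: both questions asked
print(converse(model, {"cuisine": "italian", "price": "cheap"}))  # later session: learned defaults skip both questions
```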

    The TEGRID Semantic Web Application

    Over the past several years there has been an increasing recognition of the shortcomings of message-passing data-processing systems that process data without understanding it, and of the vastly superior potential capabilities of information-centric systems that incorporate an internal information model with sufficient context to support a useful level of automatic reasoning. The key difference between a data-processing and an information-centric environment is the ability to embed in the information-centric software some understanding of the information being processed. The term information-centric refers to the representation of information in the computer, not to the way it is actually stored in a digital machine. This notion of understanding can be achieved in software through the representational medium of an ontological framework of objects with characteristics and interrelationships (i.e., an internal information model). How these objects, characteristics and relationships are actually stored at the lowest level of bits in the computer is immaterial to the ability of the computer to undertake reasoning tasks. The conversion of these bits into data and the transformation of data into information, knowledge and context takes place at higher levels, and is ultimately made possible by the skillful construction of a network of richly described objects and their relationships that represent those physical and conceptual aspects of the real world that the computer is required to reason about.

    In a distributed environment, such information-centric systems interoperate by exchanging ontology-based information instead of data expressed in standardized formats. The use of ontologies is designed to provide a context that enhances the ability of the software to reason about information received from outside sources. In the past, approaches to inter-system communication have relied on agreements to use pre-defined formats for data representation. Each participant in the communication then implemented translation from the communication format to its own internal data or information model. While relatively simple to construct, this approach led to distributed systems that are brittle, static, and resistant to change.

    It is the premise of the TEGRID (Taming the Electric Grid) proof-of-concept demonstration that, for large-scale ontology-based systems to be practical, we must allow for dynamic ontology definitions instead of static, pre-defined standards. The need for ontology models that can change after deployment can be most clearly seen when we consider providing information on the World Wide Web as a set of web services augmented with ontologies. In that case, we need to allow client programs to discover the ontologies of services at run-time, enabling opportunistic access to remote information. As clients incorporate new ontologies into their own internal information models, the clients build context that enables them to reason on the information they receive from other systems. The flexible information model of such systems allows them to evolve over time as new information needs and new information sources are found.
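    The run-time discovery pattern described above can be sketched with the Python rdflib library: a client fetches a service's ontology at run time and enumerates the classes it defines before merging them into its own information model. The service URL is hypothetical and rdflib is only a convenient stand-in; the abstract does not say how TEGRID itself is implemented.

```python
from rdflib import Graph
from rdflib.namespace import OWL, RDFS

def discover_ontology(ontology_url: str) -> Graph:
    """Fetch and parse an ontology published by a remote service at run time."""
    graph = Graph()
    graph.parse(ontology_url)  # rdflib guesses the serialization (RDF/XML, Turtle, ...)
    return graph

def list_classes(graph: Graph):
    """Enumerate the classes the newly discovered ontology defines, so a client
    can decide how to merge them into its own internal information model."""
    query = """
        SELECT ?cls ?label WHERE {
            ?cls a owl:Class .
            OPTIONAL { ?cls rdfs:label ?label }
        }
    """
    return [(str(row.cls), str(row.label) if row.label else None)
            for row in graph.query(query, initNs={"owl": OWL, "rdfs": RDFS})]

# Usage: the URL below is hypothetical; a TEGRID-style client would obtain it
# from a web-service description discovered at run time.
grid_ontology = discover_ontology("http://example.org/tegrid/grid-ontology.owl")
for iri, label in list_classes(grid_ontology):
    print(iri, label)
```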