Modular Digital Game System
This project created an Application Programming Interface (API) for a simulated modular digital game system. Each module is a triangle that displays colors at its center and edges, monitors an input along with signals from the surrounding modules, and communicates with a computer controller. The API allows users to develop game programs for the system; the simulation runs the game files and displays the results. The focus was on the practical coding and design of an instructional game system. The challenges in creating such a system provided a valuable learning environment in the areas of user interface design, system tool management and design, human-computer interaction, and the design of educational platforms.
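The module abstraction described above can be sketched as a minimal API. The class and method names below are hypothetical illustrations, not the project's actual interface; the edge-index convention is an assumption made for simplicity.

```python
# Hypothetical sketch of a triangular-module API: each module displays a
# center color and three edge colors, and reads signals from up to three
# edge-adjacent neighbors (assumes matching edge indices for simplicity).

class Module:
    def __init__(self, module_id):
        self.module_id = module_id
        self.center_color = "off"
        self.edge_colors = ["off", "off", "off"]   # one color per triangle edge
        self.neighbors = [None, None, None]        # edge-adjacent modules

    def set_center_color(self, color):
        self.center_color = color

    def set_edge_color(self, edge, color):
        self.edge_colors[edge] = color

    def read_neighbor_signals(self):
        # A module sees the color of each neighbor's facing edge.
        signals = []
        for edge, neighbor in enumerate(self.neighbors):
            signals.append(neighbor.edge_colors[edge] if neighbor else None)
        return signals

# A game program wires modules together and reacts to their signals.
a, b = Module("A"), Module("B")
a.neighbors[0] = b
b.set_edge_color(0, "red")
print(a.read_neighbor_signals())  # ['red', None, None]
```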
Brain-Computer Interfaces for HCI and Games
We study the research themes and the state of the art of brain-computer interaction. Brain-computer interface research has seen much progress in the medical domain, for example for prosthesis control or as biofeedback therapy for the treatment of neurological disorders. Here, however, we look at brain-computer interaction especially as it applies to research in Human-Computer Interaction (HCI). Through this workshop and continuing discussions, we aim to define research approaches and applications that apply to disabled and able-bodied users across a variety of real-world usage scenarios. Entertainment and game design is one of the application areas that will be considered.
Development of Sensory-Mode Interaction in Haptic System
This final report is an overview of the Final Year Project titled "Development of sensory-mode interaction in haptic system". A haptic device enables interaction between human and computer, responding to the force applied by the user's movements. The aim of this project is to design and develop a simple haptic device in order to analyze the concept of sensory-mode interaction using a strain gauge sensor. Haptic technology is currently applied widely in robotics, teleoperators, simulators, and video game controllers. However, most existing haptic devices are expensive, sophisticated, and require a high level of technology. Therefore, given the complexity of such systems, a simple haptic device was designed after a review of the related literature. The device also enables the user to obtain tactile feedback when exerting force on the interface. To perform the virtual measurement, a Graphical User Interface (GUI) was developed using LabVIEW software. The hardware device interacts directly with the computer via a communication board; whenever the user applies force to the device, the force value is transferred to the computer for further conversion and calculation. The user can acquire data, and the generated output value is displayed on the computer screen. In summary, the project produced a simple haptic device using a strain gauge sensor, with which the amount of force exerted by the user can be measured and monitored via the LabVIEW software.
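The conversion-and-calculation step described above can be illustrated with the standard strain-gauge signal chain. This is a generic sketch, not the project's actual calibration: the gauge factor, bridge excitation, material, and cross-section values below are all assumed example constants.

```python
# Illustrative strain-gauge force conversion: a Wheatstone quarter-bridge
# output voltage is converted to strain, then to stress via Hooke's law,
# then to force. All constants are assumed example values.

GAUGE_FACTOR = 2.0        # typical metallic foil gauge
V_EXCITATION = 5.0        # bridge excitation voltage (V)
YOUNG_MODULUS = 200e9     # steel member (Pa)
CROSS_SECTION = 1e-5      # loaded cross-section area (m^2)

def voltage_to_force(v_out):
    """Quarter-bridge, small strain: Vout/Vex = GF * strain / 4."""
    strain = 4.0 * v_out / (GAUGE_FACTOR * V_EXCITATION)
    stress = YOUNG_MODULUS * strain   # Hooke's law: sigma = E * epsilon
    return stress * CROSS_SECTION     # F = sigma * A (newtons)

# e.g. a 1 mV bridge output under the assumed constants
print(voltage_to_force(0.001))
```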
Gesture-based interface for image annotation
Dissertation presented to obtain the Master's degree in Informatics Engineering at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Given the complexity of visual information, multimedia content search presents more problems than textual search. This complexity stems from the difficulty of automatic image and video tagging, i.e., describing content with a set of keywords. Generally, this annotation is performed manually (e.g., Google Image) and the search is based on pre-defined keywords; however, this task is time-consuming and can be dull.
The objective of this dissertation project is to define and implement a game for annotating personal digital photos with a semi-automatic system. The game engine tags images automatically, and the player's role is to contribute correct annotations. The application is composed of the following main modules: a module for automatic image annotation, a module that manages the game's graphical interface (showing images and tags), a module for the game engine, and a module for human interaction. Interaction is performed with a pre-defined set of gestures captured by a web camera; these gestures are detected using computer vision techniques and interpreted as user actions. The dissertation also presents a detailed analysis of the application, its computational modules and design, as well as a series of usability tests.
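The engine/player division of labour described above can be sketched as a simple confirmation loop. The function and tag names below are hypothetical, and the gesture detector is reduced to a yes/no callback standing in for the computer-vision module.

```python
# Hypothetical sketch of the semi-automatic annotation loop: the engine
# proposes tags for a photo, the player confirms or rejects each one via
# a gesture (here, a boolean callback standing in for gesture detection),
# and only confirmed tags are stored as the final annotation.

def annotate(photo_id, proposed_tags, player_gesture):
    """player_gesture(tag) -> True (accept gesture) or False (reject)."""
    confirmed = [tag for tag in proposed_tags if player_gesture(tag)]
    return {photo_id: confirmed}

# Example: the automatic annotator proposed three tags; the player
# rejects the incorrect one with a "reject" gesture.
result = annotate("IMG_042", ["beach", "dog", "car"],
                  lambda tag: tag != "car")
print(result)  # {'IMG_042': ['beach', 'dog']}
```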
Embedding 3-D Gaze Points on a 3-D Visual Field: A Case of Transparency
The paper seeks to demonstrate the feasibility of embedding a 3-D gaze point in a 3-D visual field. The visual field takes the form of a game in which the user progresses from one level to the next by overcoming obstacles. A complex game interface sometimes makes it difficult for the player to progress to the next level, and developers likewise find it difficult to tune the game for an average player. The model serves as an analytical tool for game adaptation, and players can also track their responses to the game. Custom eye-tracking and 3-D object-tracking algorithms were developed to support the analysis. This work contributes to user interface design in the area of visual transparency: the development and testing of human-computer interaction applications is more easily investigated than ever, and embedding a 3-D gaze point in a 3-D visual field is part of that contribution. It could serve a number of applications, for instance medical applications including the diagnosis and treatment of long- and short-sightedness. Experiments were conducted on five different episodes of user attributes; the results show that fixation points and pupil changes are the two user attributes that contribute most significantly to the performance of the custom eye-tracking algorithm in this study. As the development of eye-movement algorithms advances, the user attributes that appeared least often are likely to prove redundant.
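The paper's own eye-tracking and object-tracking algorithms are custom, but the core geometric step of lifting a 2-D fixation into a 3-D scene is commonly done by unprojecting the screen-space gaze point along the view ray to the depth of the first intersected object. A minimal sketch, assuming an ideal pinhole camera with illustrative intrinsics:

```python
# Minimal unprojection sketch: map a screen-space gaze point (pixels)
# to a 3-D point on the view ray at a given scene depth, assuming an
# ideal pinhole camera. fx, fy, cx, cy are assumed example intrinsics.

def unproject_gaze(u, v, depth, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Return the 3-D point (camera coordinates) seen at pixel (u, v)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A gaze fixation at the image center lands on the optical axis.
print(unproject_gaze(320.0, 240.0, 2.0))  # (0.0, 0.0, 2.0)
```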
From Usability Testing and Text Analysis to User Response Criticism
The article builds a bridge between the fields of Human-Computer Interaction (HCI) and Digital Humanities (DH): HCI techniques are used to evaluate tools developed in DH projects, and the results of this evaluation are analysed via DH methods. Two case studies in interface and game design are presented, applying textual analysis to user responses via three systems: Textexture for visualising text as a network, TXM for corpus analysis, and TheySay for sentiment analysis. Although further experiments and deeper theoretical insight are intended, we expect that this kind of analysis, beyond its usability-oriented value, may inform humanistic interface design and the construction of user models, and inspire new paths of reflection on the user's self-projection in the digital space, at the intersection of digital hermeneutics, digital aesthetics, and the theory of literary response.
A grammatical specification of human-computer dialogue
The Seeheim model of human-computer interaction partitions an interactive application into a user interface, a dialogue controller, and the application itself. One formal technique for implementing the dialogue controller is based on context-free grammars and automata. In this work, we modify an off-the-shelf compiler generator (YACC) to generate the dialogue controller. The dialogue controller is then integrated into the popular X Window System to create an interactive-application generator. The actions of the user drive the automaton, which in turn controls the application.
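The idea of user actions driving an automaton that in turn controls the application can be illustrated with a tiny table-driven sketch, a hand-written stand-in for the kind of parser YACC would generate from a grammar. The states, action tokens, and callback names below are all hypothetical:

```python
# Toy dialogue automaton: user actions are input tokens, and a transition
# table (analogous to the tables a YACC-generated parser uses) selects the
# next dialogue state and the application callback to fire. All states,
# tokens, and callbacks here are illustrative.

TRANSITIONS = {
    ("idle",       "click_open"):   ("choosing",   "show_file_dialog"),
    ("choosing",   "select"):       ("confirming", "highlight_file"),
    ("confirming", "click_ok"):     ("idle",       "open_file"),
    ("confirming", "click_cancel"): ("idle",       "close_dialog"),
}

def run_dialogue(actions):
    """Feed a sequence of user actions through the automaton."""
    state, trace = "idle", []
    for action in actions:
        state, callback = TRANSITIONS[(state, action)]
        trace.append(callback)   # the callback is what drives the application
    return state, trace

state, trace = run_dialogue(["click_open", "select", "click_ok"])
print(state, trace)  # idle ['show_file_dialog', 'highlight_file', 'open_file']
```

A grammar specifies which action sequences are legal dialogues; illegal sequences (e.g. "click_ok" while idle) simply have no transition, which is where a real controller would report a dialogue error.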
The design-by-adaptation approach to universal access: learning from videogame technology
This paper proposes an alternative approach to the design of universally accessible interfaces to that provided by formal design frameworks applied ab initio to the development of new software. This approach, design-by-adaptation, involves the transfer of interface technology and/or design principles from one application domain to another, in situations where the recipient domain is similar to the host domain in terms of modelled systems, tasks and users. Using the example of interaction in 3D virtual environments, the paper explores how principles underlying the design of videogame interfaces may be applied to a broad family of visualization and analysis software which handles geographical data (virtual geographic environments, or VGEs). One of the motivations behind the current study is that VGE technology lags some way behind videogame technology in the modelling of 3D environments, and has a less-developed track record in providing the variety of interaction methods needed by users with varied levels of experience to undertake varied tasks in 3D virtual worlds. The current analysis extracted a set of interaction principles from videogames, which were used to devise a set of 3D task interfaces that have been implemented in a prototype VGE for formal evaluation.