Automatic generation of user interfaces using the set description language
We present a paradigm for automatically generating graphical user interfaces from a formal description of the data model, following the well-known model-view-controller paradigm. This approach provides complete separation between the data model and the interface description, freeing the programmer from the low-level aspects of interface programming and letting them concentrate on higher-level concerns. The interface, along with the data model, is described by means of a formal language, the Set Description Language. We also describe the infrastructure we implemented, based on this paradigm, to generate graphical user interfaces for generic applications. Moreover, it can adapt the user interface of a program to the needs derived from the type of data the user is managing at any given time.
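The core idea above — deriving the view mechanically from a declarative model description — can be illustrated with a small sketch. The field names, type vocabulary, and widget mapping below are purely illustrative assumptions, not the actual Set Description Language:

```python
# Illustrative sketch of model-driven UI generation (hypothetical
# schema format; not the actual Set Description Language).
FIELD_TO_WIDGET = {
    "string": "TextField",
    "int": "SpinBox",
    "bool": "CheckBox",
    "enum": "ComboBox",
}

def generate_ui(model: dict) -> list:
    """Derive a widget list (the 'view') from a data-model description
    (the 'model'); the programmer never lays out widgets by hand."""
    widgets = []
    for name, spec in model.items():
        kind = spec["type"]
        widget = {"label": name.capitalize(), "widget": FIELD_TO_WIDGET[kind]}
        if kind == "enum":
            widget["options"] = spec["values"]
        widgets.append(widget)
    return widgets

person = {
    "name": {"type": "string"},
    "age": {"type": "int"},
    "member": {"type": "bool"},
    "role": {"type": "enum", "values": ["admin", "user"]},
}
print(generate_ui(person))
```

Because the widget list is computed rather than hand-written, a change to the data model propagates to the interface with no extra UI code — the separation of concerns the abstract describes.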
GUI Library on SDL2 Platform Designed for Games
This thesis examines graphical user interfaces in computer games as a medium for communication between the user and the application. It then describes the design and implementation of a generic user interface built on the SDL2 platform and the OpenGL specification.
An Effective Generic Lasso Selection Tool for Multiselection
Multiselection is widely available in the graphical user interfaces of common applications, often through rectangular or row-wise selection tools. Lasso selection, though often provided in image-manipulation applications, is uncommon for everyday selection tasks, and would be a useful addition to many applications. This thesis presents an effective and generic implementation of lasso selection. Its effectiveness is achieved by making the computation incremental: only the elements affected by the extension of the selection path are inspected. The solution is generic and easily reused in new selection contexts.
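One way to realise the incremental computation the abstract describes is the classic triangle-fan decomposition: each extension of the lasso path sweeps exactly one new triangle, and only elements inside that triangle change selection state (even-odd rule). The sketch below is an assumption about the approach, not the thesis's actual code:

```python
# Hypothetical sketch of incremental lasso selection via a triangle fan.
# Only elements inside the newly swept triangle are re-examined, so the
# cost per path extension is independent of the total path length.

def point_in_triangle(p, a, b, c):
    """Sign-based containment test (boundary counts as inside)."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = min(d1, d2, d3) < 0
    has_pos = max(d1, d2, d3) > 0
    return not (has_neg and has_pos)

class IncrementalLasso:
    def __init__(self, elements):
        self.elements = elements  # element positions as (x, y) tuples
        self.path = []
        self.selected = set()

    def extend(self, point):
        self.path.append(point)
        if len(self.path) < 3:
            return
        # the new triangle is (path start, previous point, new point)
        a, b, c = self.path[0], self.path[-2], self.path[-1]
        for i, e in enumerate(self.elements):
            if point_in_triangle(e, a, b, c):
                self.selected ^= {i}  # even-odd rule: toggle membership
```

The XOR toggle makes the scheme robust even for self-intersecting lasso paths, since a point swept an even number of times ends up deselected.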
A toolkit of mechanism and context independent widgets
Most human-computer interfaces are designed to run on a static platform (e.g. a workstation with a monitor) in a static environment (e.g. an office). However, with mobile devices becoming ubiquitous and capable of running applications similar to those found on static devices, it is no longer valid to design static interfaces. This paper describes a user-interface architecture which allows interactors to be flexible about the way they are presented. This flexibility is defined by the different input and output mechanisms used. An interactor may use different mechanisms depending upon their suitability in the current context, user preference, and the resources available for presentation through each mechanism.
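The selection logic the architecture implies can be sketched in a few lines. The mechanism names and the shape of the context object below are illustrative assumptions, not the paper's actual API:

```python
# Illustrative sketch: an interactor picks an output mechanism from
# those available, honouring user preference where the current context
# permits it (hypothetical names; not the paper's toolkit).

def choose_mechanism(available, context, preference):
    # try the user's preferred mechanism first, if the context allows it
    if preference in available and context.get(preference, True):
        return preference
    # otherwise fall back to any mechanism the context still permits
    for m in available:
        if context.get(m, True):
            return m
    raise RuntimeError("no usable presentation mechanism")

available = ["screen", "speech"]
# in a noisy mobile context, speech output is ruled out
context = {"speech": False}
print(choose_mechanism(available, context, preference="speech"))  # -> screen
```

The point is that the interactor's behaviour stays fixed while its presentation is renegotiated per context, which is what makes the widgets mechanism- and context-independent.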
Semi-automated creation of converged iTV services: From macromedia director simulations to services ready for broadcast
While sound and video may capture viewers' attention, interaction can captivate them. This was not possible before the advent of Digital Television. In fact, what lies at the heart of the Digital Television revolution is this new type of interactive content, offered in the form of interactive Television (iTV) services. On top of that, the new world of converged networks has created a demand for a new type of converged services on a range of mobile terminals (Tablet PCs, PDAs, and mobile phones). This paper presents a new approach to service creation that allows for the semi-automatic translation of simulations and rapid prototypes, created in the accessible desktop multimedia authoring package Macromedia Director, into services ready for broadcast. This is achieved by a series of tools that de-skill and speed up the process of creating digital TV user interfaces (UIs) and applications for mobile terminals.
The benefits of rapid prototyping are essential to the production of these new types of services, and are therefore discussed in the first section of this paper.
The following sections present an overview of the operation of the content, service-creation, and management sub-systems, illustrating why these tools form an important and integral part of a system responsible for creating, delivering, and managing converged broadcast and telecommunications services.
The next section examines a number of candidate metadata languages for describing the user interface of iTV services, together with the schema language adopted in this project. A detailed description of the operation of the two tools offers insight into how they can be used to de-skill and speed up the process of creating digital TV user interfaces and applications for mobile terminals. Finally, representative broadcast-oriented and telecommunications-oriented converged service components are introduced, demonstrating how these tools have been used to generate different types of services.
Extending snBench to Support a Graphical Programming Interface for a Sensor Network Tasking Language (STEP)
The purpose of this project is the creation of a graphical "programming" interface for a sensor network tasking language called STEP. The graphical interface allows the user to specify a program execution graphically, from an extensible palette of functionalities, and to save the result as a properly formatted STEP file. The software can also load a file in STEP format and convert it into the corresponding graphical representation. During both phases a type-checker runs in the background to ensure that both the graphical representation and the STEP file are syntactically correct. This project was motivated by the Sensorium project at Boston University. In this technical report we present the basic features of the software and the process followed during its design and implementation. Finally, we describe the approach used to test and validate our software.
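The kind of background checking the report describes can be sketched as edge type-checking over a dataflow graph: a connection between two nodes is accepted only if the source's output type matches the destination's expected input type. Node names and types below are illustrative assumptions, not taken from snBench or STEP:

```python
# Hypothetical sketch of edge type-checking in a dataflow graph, in the
# spirit of the graphical STEP editor (illustrative names and types).

class Node:
    def __init__(self, name, in_types, out_type):
        self.name = name
        self.in_types = in_types   # expected type per input port
        self.out_type = out_type   # type produced on the output

def check_edge(src: Node, dst: Node, port: int) -> bool:
    """An edge is well-typed when the source's output type matches the
    destination's declared input type at the given port."""
    return dst.in_types[port] == src.out_type

sensor = Node("read_temp", [], "float")
thresh = Node("greater_than", ["float", "float"], "bool")
alert  = Node("alert", ["bool"], "unit")

assert check_edge(sensor, thresh, 0)      # float into float port: accepted
assert not check_edge(sensor, alert, 0)   # float into bool port: rejected
```

Running such a check on every edit is what lets the editor guarantee that both the diagram and the exported STEP file stay well-formed.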
Sketching-out virtual humans: A smart interface for human modelling and animation
In this paper, we present a fast and intuitive interface for sketching out 3D virtual humans and animation. The user first draws stick-figure key frames and chooses one for "fleshing out" with freehand body contours. The system automatically constructs a plausible 3D skin surface from the rendered figure, and maps it onto the posed stick figures to produce the 3D character animation. A "creative model-based method" is developed, which emulates a human perception process to generate 3D human bodies of various sizes, shapes, and fat distributions. In this approach, an anatomical 3D generic model has been created with three distinct layers: skeleton, fat tissue, and skin. It can be transformed sequentially through rigid morphing, fatness morphing, and surface fitting to match the original 2D sketch. An auto-beautification function is also offered to regularise 3D asymmetrical bodies produced from users' imperfect figure sketches. Our current system delivers character animation in various forms, including articulated figure animation, 3D mesh model animation, 2D contour figure animation, and even 2D NPR animation with personalised drawing styles. The system has been formally tested by various users on a Tablet PC. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.