Domain-Specific Modeling and Code Generation for Cross-Platform Multi-Device Mobile Apps
Mobile devices are nowadays the most common computing devices. This new computing landscape has brought intense competition among hardware and software providers, who continuously introduce increasingly powerful devices and innovative operating systems into the market. Consequently, cross-platform and multi-device development has become a priority for software companies that want to reach the widest possible audience. However, developing an application for several platforms implies high costs and technical complexity. Several frameworks currently allow cross-platform application development, but these approaches still require manual programming. My research proposes to face the challenge of the mobile revolution by exploiting abstraction, modeling and code generation, in the spirit of the modern paradigm of Model-Driven Engineering.
Refinement for user interface designs
Formal approaches to software development require that we correctly describe (or specify) systems so that we can prove properties about a proposed solution prior to building it. We must then follow a rigorous process to transform our specification into an implementation, to ensure that the properties we have proved are retained. Different transformation, or refinement, methods exist for different formal methods, but they all seek to guide the transformation in a way that preserves the desired properties of the system. Refinement methods also allow us to subsequently compare two systems to see whether a refinement relation exists between them. When we design and build the user interfaces of our systems we are similarly keen to ensure that they have certain properties before we build them. For example, do they satisfy the requirements of the user? Are they designed with known good design principles and usability considerations in mind? Are they correct in terms of the overall system specification? However, when we come to implement our interface designs we have no defined process to follow that ensures we maintain these properties as we transform the design into code. Instead, we rely on our judgement, the belief that we are doing the right thing, and subsequent user testing to ensure that our final solution remains usable and satisfactory. We suggest an alternative approach: defining a refinement process for user interfaces that allows us to maintain the same rigorous standards we apply to the rest of the system when we implement our user interface designs.
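The refinement relation mentioned above can be made concrete in miniature. The following sketch, with entirely illustrative state and event names, models two UI designs as finite transition systems and checks refinement as bounded trace inclusion: every interaction sequence the implementation permits must also be permitted by the specification. Real refinement calculi are far richer than this; the sketch only illustrates the comparison idea.

```python
def traces(transitions, start, depth):
    """All event sequences of length <= depth reachable from `start`."""
    result = {()}
    if depth == 0:
        return result
    for (src, event, dst) in transitions:
        if src == start:
            for tail in traces(transitions, dst, depth - 1):
                result.add((event,) + tail)
    return result

def refines(impl, spec, start, depth=4):
    """Trace inclusion: every behaviour of `impl` is allowed by `spec`."""
    return traces(impl, start, depth) <= traces(spec, start, depth)

# Hypothetical specification: from the main screen the user may search or quit.
spec = [("main", "search", "results"),
        ("main", "quit", "done"),
        ("results", "back", "main")]

# An implementation that adds an interaction the specification never
# allowed fails the check; identical behaviour passes it.
impl = spec + [("main", "history", "results")]

print(refines(spec, spec, "main"))   # True: identical behaviour
print(refines(impl, spec, "main"))   # False: extra "history" event
```

Under this notion, an implementation that only narrows the specified behaviour would still count as a refinement; stronger notions (e.g. failures refinement) would also constrain what the implementation must offer.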
How to exploit abstract user interfaces in MARIA
In model-based approaches, Abstract User Interfaces enable the specification of interactive applications in a modality-independent manner, and are often used for authoring multi-device interactive applications. In this paper we discuss two solutions for exploiting abstract UIs, using the MARIA language as the basis for comparison. The overall aim is to improve the efficiency of the model-based process, thus making it easier to adopt and apply.
Ifaces: Adaptative user interfaces for ambient intelligence
Proceedings of the IADIS International Conference on Interfaces and Human Computer Interaction, Amsterdam, The Netherlands, 25-27 July 2008. In this paper we present an ontology language to model an environment and its graphical user interface in the field of ambient intelligence. This language allows a simple definition of the environment and automatically produces its associated interaction interface. The interface dynamically readjusts to the characteristics of the environment and the available devices, and therefore adapts to the needs of the people who use it and to their resources. The system has been developed and tested in a real ambient intelligence environment. This work has been partly funded by HADA project number TIN2007-64718 and the UAM-Indra Chair in Ambient Intelligence.
Capturing the requirements for multiple user interfaces
Non-peer-reviewed. In this paper we describe MANTRA, a model-driven approach for the development of multiple consistent user interfaces for one application. The common requirements of all these user interfaces are captured in an abstract UI model (AUI), which is annotated with constraints on the dialogue flow.
We exemplify all further steps along a well-known application scenario in which a user queries train connections from a simple timetable service.
We consider in particular how the user interface
can be adapted on the AUI level by deriving and tailoring
dialogue structures which take into account
constraints imposed by front-end platforms or inexperienced
users. With this input we use model transformations
to derive concrete, platform-specific UI models
(CUI). These can be used to generate implementation
code for several UI platforms including GUI applications,
dynamic websites and mobile applications.
The user interfaces are integrated with a multi-tier application by referencing WSDL-based (Web Services Description Language) interface descriptions.
Finally, we discuss how our approach can be extended to include voice interfaces. This poses special challenges, as voice interfaces tend to be structurally different from visual platforms and have to be specified using speech-input grammars.
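The AUI-to-CUI transformation described above can be sketched in a few lines. This is not MANTRA's actual transformation: the abstract element types, platform names, and widget mappings below are invented for illustration, loosely following the timetable scenario. The core idea is that one abstract model plus a per-platform rule set yields several concrete, platform-specific UI models.

```python
# Hypothetical abstract UI model (AUI) for the timetable query scenario.
AUI = [
    {"id": "from",  "type": "input",   "label": "From station"},
    {"id": "to",    "type": "input",   "label": "To station"},
    {"id": "query", "type": "trigger", "label": "Search connections"},
]

# Illustrative per-platform mapping rules: abstract type -> concrete widget.
RULES = {
    "desktop_gui": {"input": "TextField", "trigger": "Button"},
    "mobile":      {"input": "EditText",  "trigger": "Button"},
    "web":         {"input": "text input", "trigger": "submit button"},
}

def to_cui(aui, platform):
    """Derive a concrete UI model (CUI) for one target platform."""
    rules = RULES[platform]
    return [{"id": e["id"], "widget": rules[e["type"]], "label": e["label"]}
            for e in aui]

for widget in to_cui(AUI, "mobile"):
    print(widget["id"], "->", widget["widget"])
```

A real model transformation would additionally apply the dialogue-flow constraints, e.g. splitting one abstract dialogue into several screens on a small-display platform.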
Extending an XML environment definition language for spoken dialogue and web-based interfaces
This is an electronic version of the paper presented at the workshop "Developing User Interfaces with XML: Advances on User Interface Description Languages", held during the International Working Conference on Advanced Visual Interfaces (AVI) in Gallipoli (Italy) in 2004. In this work we describe how we employ XML-compliant languages to define an intelligent environment. This language represents the environment, its entities and their relationships. The XML environment definition is transformed into a middleware layer that provides interaction with the environment. Additionally, this XML definition language has been extended to support two different user interfaces: a spoken dialogue interface is created by means of specific linguistic information, and GUI interaction information is converted into a web-based interface. This work has been sponsored by the Spanish Ministry of Science and Technology, project number TIC2000-046.
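To make the pipeline above tangible, here is a minimal sketch, assuming an invented XML schema (the `environment`/`entity`/`property` element names are not from the paper): an XML environment definition is parsed into simple middleware entity records, from which a web-form description is derived.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML environment definition: entities and their properties.
ENVIRONMENT = """
<environment>
  <entity id="lamp1" type="light">
    <property name="power" kind="boolean"/>
  </entity>
  <entity id="blind1" type="blind">
    <property name="position" kind="percentage"/>
  </entity>
</environment>
"""

def build_middleware(xml_text):
    """Parse the definition into entity records the middleware can expose."""
    root = ET.fromstring(xml_text)
    entities = {}
    for e in root.findall("entity"):
        props = {p.get("name"): p.get("kind") for p in e.findall("property")}
        entities[e.get("id")] = {"type": e.get("type"), "properties": props}
    return entities

def web_interface(entities):
    """Map property kinds to HTML form controls for a web-based interface."""
    control = {"boolean": "checkbox", "percentage": "range"}
    return [f'<input type="{control[k]}" name="{eid}.{p}"/>'
            for eid, ent in entities.items()
            for p, k in ent["properties"].items()]

mw = build_middleware(ENVIRONMENT)
for line in web_interface(mw):
    print(line)
```

A spoken dialogue interface would be derived from the same entity records, but driven by the linguistic annotations instead of widget mappings.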
FlexiXML: A UsiXML model animator
A considerable part of software development is devoted to the user interaction layer. Given the inherent complexity of developing this layer, it is important to enable analysis, as early as possible, of the concepts and ideas under development for a given interface. Model-based development provides a solution to this problem by facilitating the prototyping of interfaces from the models being developed. This article describes an approach to interface prototyping and presents the first version of the FlexiXML tool, which interprets and animates interfaces described in UsiXML.
Extending the MVC Design Pattern towards a Task-Oriented Development Approach for Pervasive Computing Applications
This paper addresses the implementation of pervasive Java Web applications using a development approach that is based on the Model-View-Controller (MVC) design pattern. We combine the MVC methodology with a hierarchical task-based state transition model in order to achieve the distinction between the task state and the view state of an application. More precisely, we propose to add a device-independent TaskStateBean and a device-specific ViewStateBean for each task state as an extension to the J2EE Service to Worker design pattern. Furthermore, we suggest representing the task state and view state transition models as finite state automata in two sets of XML files.
A Model-View-Controller Extension for Pervasive Multi-Client User Interfaces
This paper addresses the implementation of pervasive Java Web applications using a development approach that is based on the Model-View-Controller (MVC) design pattern. We combine the MVC methodology with a hierarchical task-based state transition model in order to achieve the distinction between the task state and the view state of an application. More precisely, we propose to add a device-independent TaskStateBean and a device-specific ViewStateBean for each task state as an extension to the J2EE Service to Worker design pattern. Furthermore, we suggest representing the task state and view state transition models as finite state automata in two sets of XML files. This paper shows that the distinction between an application's task state and view state is both intuitive and facilitates several otherwise complex tasks, such as changing devices 'on the fly'.
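The task-state/view-state split can be sketched in a few lines. This is not the paper's actual J2EE bean design; the automata, view names, and session class below are illustrative. One device-independent task automaton drives the dialogue, while each device has its own view mapping keyed by task state, so the device can be swapped mid-task without losing the user's place.

```python
TASK_TRANSITIONS = {             # device-independent task state automaton
    ("enter_query", "submit"): "show_results",
    ("show_results", "back"):  "enter_query",
}

VIEWS = {                        # device-specific view per task state
    "desktop": {"enter_query": "query_form.jsp",
                "show_results": "results_table.jsp"},
    "phone":   {"enter_query": "query_wizard_step1.jsp",
                "show_results": "results_list.jsp"},
}

class Session:
    """Holds the task state independently of the device rendering it."""
    def __init__(self, device, task_state="enter_query"):
        self.device, self.task_state = device, task_state

    def handle(self, event):
        self.task_state = TASK_TRANSITIONS[(self.task_state, event)]

    def view(self):
        return VIEWS[self.device][self.task_state]

s = Session("desktop")
s.handle("submit")
print(s.view())            # desktop rendering of the current task state
s.device = "phone"         # switch device mid-task: task state is preserved
print(s.view())
```

Because only the view mapping depends on the device, the "change devices on the fly" scenario reduces to reassigning the device while the task automaton's state carries over.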