Human factors aspects of control room design: Guidelines and annotated bibliography
A human factors analysis of the workstation design for the Earth Radiation Budget Satellite mission operation room is presented. The relevance of anthropometry, design rules, environmental design goals, and the social-psychological environment is discussed.
Factors Which Influence Key Entry Speed On Hard and Soft Keyboards: Experience, Eye Behaviors and Finger Movements
Soft keyboards have become ubiquitous, especially with the introduction of the iPad. This study aims to determine for experienced touch typists whether there are characteristics of soft QWERTY keyboards that can make them easier to use and why those characteristics provide an advantage. Two characteristics would appear to be of central importance. First, hard keyboards provide home row positioning information that is not as easily provided by soft keyboards. Second, hard keyboards also provide auditory and tactile feedback when a key is depressed, something not generally provided with soft keyboards.
In order to test the hypothesis that the absence of home row positioning and key strike feedback information can reduce expert touch typists’ speeds on soft keyboards, expert touch typists were run in two experiments. In Experiment 1, soft and hard keyboards in landscape and portrait mode were evaluated. The hard keyboards had the standard home row positioning and key strike feedback, whereas the soft keyboards had neither. If these are important elements in typing speed, then experienced hard keyboard typists should type less quickly when using soft keyboards than when using hard keyboards. Moreover, if reducing the footprint of the keyboard, from landscape to portrait, requires more eye movements, then typists using both hard and soft keyboards should be slower when using the portrait keyboard than when using the landscape keyboard. Perhaps not surprisingly, experienced hard keyboard touch typists did less well when entering information on soft keyboards without home row positioning information or auditory feedback. Moreover, both groups appeared to type more slowly on keyboards laid out in a portrait format than on keyboards laid out in a landscape format.
In summary, the results from Experiment 1 suggest that both home row positioning information and auditory key strike feedback should speed performance. In Experiment 2, an attempt was made to determine just how much of a gain in typing speed can be made for more experienced soft keyboard users if home row positioning information (tactile feedback), auditory feedback, or both are added. Participants were run in four conditions: auditory key strike feedback (with and without) was crossed with tactile home row positioning information (with and without). Participants were expert-level hard keyboard QWERTY touch typists who had at least five hours of typing experience with an iPad. Participants were given four passages to type, all of equal length and all balanced for letter frequency. Each participant typed one passage in each of the four conditions. The passage sequence was counterbalanced across participants. Typing speeds for each of the passages were measured and averaged across participants within conditions. A repeated measures analysis of variance was used to determine whether there was a main effect of positioning information or feedback.
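The 2×2 within-subjects design with counterbalanced passage order described above can be sketched in code. This is a hypothetical illustration, not the authors' actual materials: the condition labels and the simple cyclic Latin square used for counterbalancing are assumptions.

```java
import java.util.ArrayList;
import java.util.List;

public class Counterbalance {
    // The four conditions: auditory key strike feedback (with/without)
    // crossed with tactile home row positioning information (with/without).
    static List<String> conditions() {
        List<String> out = new ArrayList<>();
        for (String audio : new String[]{"audio+", "audio-"})
            for (String tactile : new String[]{"tactile+", "tactile-"})
                out.add(audio + "/" + tactile);
        return out;
    }

    // A simple cyclic Latin square: each condition index appears exactly
    // once per row (participant group) and once per column (passage position).
    static int[][] latinSquare(int n) {
        int[][] square = new int[n][n];
        for (int row = 0; row < n; row++)
            for (int col = 0; col < n; col++)
                square[row][col] = (row + col) % n;
        return square;
    }

    public static void main(String[] args) {
        List<String> conds = conditions();
        int[][] order = latinSquare(conds.size());
        for (int p = 0; p < order.length; p++) {
            StringBuilder line = new StringBuilder("Participant " + (p + 1) + ":");
            for (int idx : order[p]) line.append(" ").append(conds.get(idx));
            System.out.println(line);
        }
    }
}
```

A cyclic square balances position but not carryover order; a Williams design would be needed for full first-order carryover balance.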
In order to determine why home row positioning and key strike feedback alter performance, eye behaviors, movement times, and task completion times were calculated. If home row positioning information is important, soft keyboards without this information should elicit a larger number of glances directed at the keyboard. These glances help the typist determine either whether a finger is positioned over the correct home key (the launch key) or whether the key to be typed next (the target key) is in the expected position. If key strike feedback is important, soft keyboards without this information should produce longer movement times in cases where the typist does not glance at the keyboard. This follows because the typist will be slower to register that a finger has landed on a key.
Key press and key release times were recorded each time a character, number, or spacebar key was depressed or released. The finger movement time between any pair of keys i and j was derived from these key press and key release times: it was measured from the moment the finger left the launch key i until the moment the finger arrived at the target key j. Task completion time was defined as the difference between the first key press in a passage and the last key release. Finger movement times, inter-keystroke intervals, and task completion times were recorded using a program developed in Java 2 Standard Edition (J2SE). Eye movements were recorded with the aid of an ASL Mobile Eye tracker.
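The timing definitions above follow directly from per-key press and release timestamps. The sketch below is a minimal reconstruction under stated assumptions (millisecond timestamps, a simple KeyEvent holder); it is not the original J2SE program.

```java
import java.util.List;

public class KeystrokeTiming {
    // One logged keystroke: the key plus its press and release times (ms).
    static class KeyEvent {
        final char key;
        final long pressMs, releaseMs;
        KeyEvent(char key, long pressMs, long releaseMs) {
            this.key = key;
            this.pressMs = pressMs;
            this.releaseMs = releaseMs;
        }
    }

    // Finger movement time between launch key i and target key j:
    // from the moment the finger leaves i (its release) until the
    // moment it arrives at j (its press).
    static long movementTime(KeyEvent launch, KeyEvent target) {
        return target.pressMs - launch.releaseMs;
    }

    // Inter-keystroke interval: press of the target minus press of the launch.
    static long interKeystrokeInterval(KeyEvent launch, KeyEvent target) {
        return target.pressMs - launch.pressMs;
    }

    // Task completion time: first key press to last key release in a passage.
    static long taskCompletionTime(List<KeyEvent> passage) {
        long firstPress = Long.MAX_VALUE, lastRelease = Long.MIN_VALUE;
        for (KeyEvent e : passage) {
            firstPress = Math.min(firstPress, e.pressMs);
            lastRelease = Math.max(lastRelease, e.releaseMs);
        }
        return lastRelease - firstPress;
    }

    public static void main(String[] args) {
        KeyEvent f = new KeyEvent('f', 0, 80);    // launch key
        KeyEvent j = new KeyEvent('j', 200, 260); // target key
        System.out.println("movement time: " + movementTime(f, j) + " ms");
        System.out.println("task completion: " + taskCompletionTime(List.of(f, j)) + " ms");
    }
}
```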
Analyses of the finger movement times and task completion times in Experiment 2 indicated that participants were fastest when both positioning information and auditory feedback were included. When just finger movement times were considered, there was a significant effect of auditory feedback but not of positioning information. This was expected, given that the speed of finger movements is arguably largely a function of how quickly a typist perceives that a movement has been completed, something that auditory feedback, but not positioning information, provides. When just the task completion times were analyzed, positioning information had a significant effect. The effect of auditory feedback was only marginally significant. It was expected that both factors would be significant; perhaps statistical power was insufficient. Finally, when the eye movements were analyzed, the total scanning time was shortest when both positioning information and auditory feedback were available. The effects of both were statistically significant.
In summary, on the basis of the results from Experiment 1, it appeared likely that auditory feedback and positioning information accounted in part for the faster typing times of touch typists on hard keyboards as opposed to soft keyboards. In Experiment 2, this hypothesis was evaluated. Finger movement and task completion times were fastest when both auditory feedback and positioning information were present. Auditory feedback alone appeared to affect the finger movement times, whereas both auditory feedback and positioning information appeared to affect the task completion times; however, the effect of auditory feedback on task completion times was only marginal. Finally, it was clear that much of the reduction in task completion times occurred because the time the touch typists spent scanning the keyboard was shorter when both auditory feedback and positioning information were available.
It is recommended that future soft keyboards make both kinds of feedback available: auditory (through simulated key clicks) and tactile (through home row positioning information). The gains in typing speed with these additions were modest (about 10%), but considered over the entire population of users the impact could be considerable.
User interface design: for existing system monitoring application
The main purpose of the project was to apply elements of interface design in creating an application. Another purpose was to see how the Generis system of Enoro (the customer's internal system) integrates with the web in this particular application. The goal was to create a web interface for the existing System Monitoring application.
The application was built using the ASP.NET framework with the C# programming language, the Enoro Generis System, and user interface design elements. It allows Enoro's customers to monitor their servers and view the results via a web interface. Application development is still ongoing; the final product was not the aim of this study, but the thesis application demonstrates the concept of the final application well.
The application is important to both Enoro and its customers in terms of business value, system robustness monitoring, and immediate problem fixing. It also demonstrates how user interface design influences an application's use. The concept of the application itself is valuable to users, who can easily see what is going on in their systems regardless of their location and the platform they use.
Automating Software Development for Mobile Computing Platforms
Mobile devices such as smartphones and tablets have become ubiquitous in today's computing landscape. These devices have ushered in entirely new populations of users, and mobile operating systems are now outpacing more traditional desktop systems in terms of market share. The applications that run on these mobile devices (often referred to as "apps") have become a primary means of computing for millions of users and, as such, have garnered immense developer interest. These apps allow for unique, personal software experiences through touch-based UIs and a complex assortment of sensors. However, designing and implementing high quality mobile apps can be a difficult process, primarily due to challenges unique to mobile development, including change-prone APIs and platform fragmentation, to name a few. In this dissertation we develop techniques that aid developers in overcoming these challenges by automating and improving current software design and testing practices for mobile apps. More specifically, we first introduce a technique, called Gvt, that improves the quality of graphical user interfaces (GUIs) for mobile apps by automatically detecting instances where a GUI was not implemented to its intended specifications. Gvt does this by constructing hierarchical models of mobile GUIs from metadata associated with both graphical mock-ups (i.e., created by designers using photo-editing software) and running instances of the GUI from the corresponding implementation. Second, we develop an approach that completely automates prototyping of GUIs for mobile apps. This approach, called ReDraw, is able to transform an image of a mobile app GUI into runnable code by detecting discrete GUI components using computer vision techniques, classifying these components into proper functional categories (e.g., button, dropdown menu) using a Convolutional Neural Network (CNN), and assembling these components into realistic code.
Finally, we design a novel approach for automated testing of mobile apps, called CrashScope, that explores a given Android app using systematic input generation with the intrinsic goal of triggering crashes. The GUI-based input generation engine is driven by a combination of static and dynamic analyses that create a model of an app's GUI and targets common, empirically derived root causes of crashes in Android apps. We illustrate that the techniques presented in this dissertation represent significant advancements in mobile development processes through a series of empirical investigations, user studies, and industrial case studies that demonstrate the effectiveness of these approaches and the benefit they provide to developers.
Multimodal access to social media services
Integrated master's thesis. Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto, Microsoft Language Development Center. 201
Prototyping tools for hybrid interactions
In using the term 'hybrid interactions', we refer to interaction forms that comprise both tangible and intangible interactions as well as a close coupling of the physical or embodied representation with digital output. Until now, there has been no description of a formal design process for this emerging research domain, no description that can be followed during the creation of these types of interactions. As a result, designers face limitations in prototyping these systems.
In this thesis, we share our systematic approach to envisioning, prototyping, and iteratively developing these interaction forms by following an extended interaction design process. We share our experiences with process extensions in the form of toolkits, which we built for this research and utilized to aid designers in the development of hybrid interactive systems.
The proposed tools incorporate different characteristics and are intended to be used at different points in the design process. In Sketching with Objects, we describe a low-fidelity toolkit that is intended to be used in the very early phases of the process, such as ideation and user research. By introducing Paperbox, we present an implementation to be used in the mid-process phases for finding the appropriate mapping between physical representation and digital content during the creation of tangible user interfaces (TUI) atop interactive surfaces. In a follow-up project, we extended this toolkit to also be used in conjunction with capacitive sensing devices. To do this, we implemented Sketch-a-TUI. This approach allows designers to create TUIs on capacitive sensing devices rapidly and at low cost. To lower the barriers for designers using the toolkit, we created the Sketch-a-TUIApp, an application that allows even novice users (users without previous coding experience) to create early instantiations of TUIs.
In order to prototype intangible interactions, we used open software and hardware components and proposed an approach for investigating interactivity in correlation with intangible interaction forms at a higher fidelity. With our final design process extension, Lightbox, we assisted a design team in systematically developing a remote interaction system connected to a media façade covering a building.
All of the above-mentioned toolkits were explored both in real-life contexts and in projects with industrial partners. The evaluation was therefore mainly performed in the wild, which led to the adaptation of metrics suitable to the individual cases and contexts.
Design as a thing: how designers make up design as an object in human-centred design practices
Design and implementation of a high productivity user interface for a digital dermatoscope
Information technology offers great potential for healthcare applications. Modern medicine is increasingly taking advantage of digital imaging and computer-assisted diagnosis. Dermatology is no different. Digital dermatoscopy is emerging as the standard for diagnosis of cutaneous lesions. High quality digital images allow dermatologists to improve accuracy and to assess the evolution of lesions. However, state-of-the-art technology fails to support dermatologists in daily practice: the available systems on the market increase average visit time, and are expensive. Enabling a highly efficient use of the digital dermatoscope will shorten average visit time, and thus allow screening a higher portion of the population at risk with higher frequency.
Technical Workshop: Advanced Helicopter Cockpit Design
Information processing demands on both civilian and military aircrews have increased enormously as rotorcraft have come to be used for adverse weather, day/night, and remote area missions. Applied psychology, engineering, and operational research needs for future helicopter cockpit design criteria were identified. Three areas were addressed: (1) operational requirements, (2) advanced avionics, and (3) man-system integration.
Studying Serious Games for the Therapy of Children with Disabilities following a Co-Design Process
Therapy can be a long and tedious process where progress is usually not immediately
visible. This slow process can discourage younger patients, especially children who do not
understand exactly what they are doing. Serious Games can help in these situations since
they are games designed for a primary purpose other than pure entertainment. These
games can be helpful as therapy tools because they promote engagement on the side of
the patients, which in turn will make them feel more motivated to follow the therapeutic
programme.
In order to develop a game with a meaningful experience for users, beyond the fun
of playing it, which helps them in their therapy, experts in the area need to be involved
through close collaboration throughout the whole research process. Therefore, we de-
veloped a game suite for the therapy of children with disabilities following a co-design
process that included Cresce com Amor as the partner clinic. Cresce com Amor provided
therapy expertise to the research team, collaborating in several phases of the process.
Furthermore, by developing a classification system for serious games, based on the
International Classification of Functioning, Disability and Health (ICF), which matches
each game with body functions and therapy areas, we intend to support the classification
of serious games in order to make them more suitable for their ultimate purpose. An
in-house developed platform, called PLAY, supports the games by acting as a repository
for the data collected and giving the therapists an interface to interact with and adjust
the game parameters.
The games use different interaction methods, other than the usual keyboard and
mouse, to allow patients to seamlessly perform exercises that simulate the ones done
in current traditional therapy sessions. By using off-the-shelf controllers, such as the
balance board and dance mat, we can translate real-life movements more naturally into
character movements in the virtual space.