3,574 research outputs found
How Design Plays Strategic Roles in Internet Service Innovation: Lessons from Korean Companies
In order to survive in the highly competitive internet business, companies have to provide differentiated services that can satisfy users' rapidly changing tastes and needs. Designers have been increasingly committed to achieving user satisfaction by generating and visualizing innovative solutions in new internet service development. The roles of internet service design have expanded from a narrow focus on aesthetics into a more strategic aspect. This paper investigates methods of managing design in order to enhance companies' competitiveness in internet business. The main research processes are to: (1) explore the current state of internet service design in Korea through in-depth interviews with professional designers and survey questionnaires to 30 digital design agencies and 60 clients; (2) compare how design is managed between in-house design groups and digital design agencies through case studies of five Korean companies; and (3) develop a taxonomy characterizing four roles of designers in conjunction with the levels of their strategic contributions to internet service innovation: visualist, solution provider, concept generator, and service initiator. In addition, we demonstrate the growing contributions of the strategic use of design for innovating internet services, building robust brand equity, and increasing business performance.
Keywords:
Design Management; Internet Business; Internet Service Design; Digital Design; Digital Design Agency; In-House Design Group; Case Study
Translating Video Recordings of Mobile App Usages into Replayable Scenarios
Screen recordings of mobile applications are easy to obtain and capture a
wealth of information pertinent to software developers (e.g., bugs or feature
requests), making them a popular mechanism for crowdsourced app feedback. Thus,
these videos are becoming a common artifact that developers must manage. In
light of unique mobile development constraints, including swift release cycles
and rapidly evolving platforms, automated techniques for analyzing all types of
rich software artifacts provide benefit to mobile developers. Unfortunately,
automatically analyzing screen recordings presents serious challenges, due to
their graphical nature, compared to other types of (textual) artifacts. To
address these challenges, this paper introduces V2S, a lightweight, automated
approach for translating video recordings of Android app usages into replayable
scenarios. V2S is based primarily on computer vision techniques and adapts
recent solutions for object detection and image classification to detect and
classify user actions captured in a video, and convert these into a replayable
test scenario. We performed an extensive evaluation of V2S involving 175 videos
depicting 3,534 GUI-based actions collected from users exercising features and
reproducing bugs from over 80 popular Android apps. Our results illustrate that
V2S can accurately replay scenarios from screen recordings, and is capable of
reproducing 89% of our collected videos with minimal overhead. A case
study with three industrial partners illustrates the potential usefulness of
V2S from the viewpoint of developers.
Comment: In proceedings of the 42nd International Conference on Software Engineering (ICSE'20), 13 pages
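The replay step of an approach like V2S can be illustrated with a short sketch. This is a hypothetical example, not the paper's implementation: V2S detects and classifies actions with computer vision, while here we assume those actions already arrive as structured records and show only how they could be turned into standard `adb shell input` replay commands.

```python
def actions_to_adb(actions):
    """Convert detected GUI actions into replayable `adb shell input` commands.

    The action record format (dicts with "type" and coordinates) is an
    assumption for illustration, not V2S's actual data model.
    """
    cmds = []
    for a in actions:
        if a["type"] == "tap":
            cmds.append(f"adb shell input tap {a['x']} {a['y']}")
        elif a["type"] == "swipe":
            # duration in milliseconds controls swipe speed on the device
            cmds.append(
                f"adb shell input swipe {a['x1']} {a['y1']} "
                f"{a['x2']} {a['y2']} {a['ms']}"
            )
    return cmds

# Example: one tap followed by an upward swipe
script = actions_to_adb([
    {"type": "tap", "x": 540, "y": 960},
    {"type": "swipe", "x1": 100, "y1": 800, "x2": 100, "y2": 200, "ms": 300},
])
```

Emitting a command list rather than executing directly keeps the scenario inspectable and re-runnable, which matches the paper's goal of replayable artifacts.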
NITELIGHT: A Graphical Tool for Semantic Query Construction
Query formulation is a key aspect of information retrieval, contributing to both the efficiency and usability of many semantic applications. A number of query languages, such as SPARQL, have been developed for the Semantic Web; however, there are, as yet, few tools to support end users with respect to the creation and editing of semantic queries. In this paper we introduce a graphical tool for semantic query construction (NITELIGHT) that is based on the SPARQL query language specification. The tool supports end users by providing a set of graphical notations that represent semantic query language constructs. This language provides a visual query language counterpart to SPARQL that we call vSPARQL. NITELIGHT also provides an interactive graphical editing environment that combines ontology navigation capabilities with graphical query visualization techniques. This paper describes the functionality and user interaction features of the NITELIGHT tool based on our work to date. We also present details of the vSPARQL constructs used to support the graphical representation of SPARQL queries.
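The core idea of a graphical query builder like NITELIGHT is that visual nodes and edges map onto SPARQL triple patterns. The sketch below is a minimal, hypothetical illustration of that mapping (it is not NITELIGHT's code): a function assembles a valid SPARQL SELECT query from a list of triple patterns, the kind of serialization step such a tool performs when the user finishes editing the diagram.

```python
def build_sparql(variables, triples, prefixes=None):
    """Serialize triple patterns (e.g. from a visual graph) into a SPARQL query.

    variables: list of result variables, e.g. ["?name"]
    triples:   list of (subject, predicate, object) pattern strings
    prefixes:  optional {prefix: namespace URI} mapping
    """
    lines = [f"PREFIX {p}: <{uri}>" for p, uri in (prefixes or {}).items()]
    lines.append("SELECT " + " ".join(variables) + " WHERE {")
    lines.extend(f"  {s} {p} {o} ." for s, p, o in triples)
    lines.append("}")
    return "\n".join(lines)

# Example: "find the names of all persons" drawn as one node-edge-node pattern
query = build_sparql(
    ["?name"],
    [("?person", "foaf:name", "?name")],
    {"foaf": "http://xmlns.com/foaf/0.1/"},
)
```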
Developing a High-Speed Craft Route Monitor Window
High-speed navigation in littoral waters is an advanced maritime operation. Reliable, timely and consistent data provided by the integrated navigation systems increases the safety of navigation. The navigator's workload is high, as is the interaction between the navigator and the navigation system. Information from the graphical user interface in bridge displays must meet the demands of the high-speed navigator, and this article presents how eye tracking data was used to identify user requirements which, in combination with a human-centred design process, led to the development of an improved software application on essential navigation equipment. The Royal Norwegian Nav
Unblind Your Apps: Predicting Natural-Language Labels for Mobile GUI Components by Deep Learning
According to the World Health Organization (WHO), it is estimated that approximately 1.3 billion people live with some form of vision impairment globally, of whom 36 million are blind. Due to their disability, engaging this minority group in society is a challenging problem. The recent rise of smart mobile phones provides a new solution by enabling blind users' convenient access to information and services for understanding the world. Users with vision impairment can adopt the screen reader embedded in the mobile operating system to read the content of each screen within an app, and use gestures to interact with the phone. However, the prerequisite for using screen readers is that developers have to add natural-language labels to the image-based components when they are developing the app. Unfortunately, more than 77% of apps have missing-label issues, according to our analysis of 10,408 Android apps. Most of these issues are caused by developers' lack of awareness of and knowledge in considering this minority. And even if developers want to add labels to UI components, they may not come up with concise and clear descriptions, as most of them have no visual impairments themselves. To overcome these challenges, we develop a deep-learning based model, called LabelDroid, to automatically predict the labels of image-based buttons by learning from large-scale commercial apps in Google Play. The experimental results show that our model can make accurate predictions and the generated labels are of higher quality than those from real Android developers.
Comment: Accepted to 42nd International Conference on Software Engineering
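The missing-label problem the abstract quantifies can be detected statically: on Android, a screen reader falls back to silence when an image-based widget has no `android:contentDescription` attribute. The sketch below is a minimal illustration of that check (not the paper's analysis pipeline), scanning a layout XML string for unlabeled image widgets.

```python
import xml.etree.ElementTree as ET

# Standard Android resource namespace used for layout attributes
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def missing_labels(layout_xml):
    """Return ids of image-based widgets lacking a contentDescription."""
    root = ET.fromstring(layout_xml)
    missing = []
    for elem in root.iter():
        tag = elem.tag.split(".")[-1]  # handle fully qualified widget names
        if tag in ("ImageButton", "ImageView"):
            if not elem.get(ANDROID_NS + "contentDescription"):
                missing.append(elem.get(ANDROID_NS + "id", "<no id>"))
    return missing

# Hypothetical layout: one labeled and one unlabeled ImageButton
LAYOUT = """
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android">
  <ImageButton android:id="@+id/share" />
  <ImageButton android:id="@+id/send"
               android:contentDescription="Send message" />
</LinearLayout>
"""
```

A model like LabelDroid would then generate a natural-language description for each widget this check flags.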
GUI system for Elders/Patients in Intensive Care
In old age, some people need special care if they suffer from specific diseases, as they can have a stroke during their normal daily routine. Patients of any age who are unable to walk also need to be cared for personally, but for this they either have to be in hospital or someone such as a nurse has to be with them, which is costly in terms of money and manpower: a person is needed for 24x7 care of these people. To help in this respect, we propose a vision-based system which takes input from the patient and provides information to a specified person, who may not currently be in the patient's room. This reduces the need for manpower, and continuous monitoring is no longer required. The system uses MS Kinect for gesture detection for better accuracy, and it can easily be installed at home or in a hospital. The system provides a GUI for simple usage and gives visual and audio feedback to the user. It works on natural hand interaction, needs no training before use, and requires no glove or colour strip to be worn.
Comment: In proceedings of the 4th IEEE International Technology Management Conference, Chicago, IL USA, 12-15 June, 201
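The alerting step such a system performs, once a gesture has been recognized, can be sketched as a simple dispatch table. The gesture names and alert messages below are hypothetical placeholders, not taken from the paper; the Kinect-based recognition itself is assumed to have already happened.

```python
# Hypothetical mapping from recognized gestures to caregiver alerts
ALERTS = {
    "raise_hand": "Patient requests attention",
    "wave": "Patient requests water",
    "point_left": "Patient requests nurse",
}

def dispatch(gesture, notify):
    """Forward a recognized gesture as an alert via the given notify callback.

    Returns the alert message sent, or None if the gesture is unknown.
    """
    msg = ALERTS.get(gesture)
    if msg is not None:
        notify(msg)  # e.g. send SMS, play audio, update caregiver dashboard
    return msg
```

Keeping recognition and notification decoupled like this lets the same gesture vocabulary drive visual feedback on the GUI and remote alerts to the caregiver.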
P2P Mapper: From User Experiences to Pattern-Based Design
User experience is an umbrella term referring to a collection of information that covers the user’s behavior and interaction with a system. It is observed when the user is actively using a service or interacting with information, includes expectations and perceptions, and is influenced by user characteristics and application or service characteristics. User characteristics include knowledge, experience, personality and demographics. We propose a process and supporting software tool called Persona to Pattern (P2P) Mapper, which guides designers in modeling user experiences and identifying appropriate design patterns. The three-step process is: Persona Creation (a representative persona set is developed), Pattern Selection (behavioral patterns are identified, resulting in an ordered list of design patterns for each persona), and Pattern Composition (patterns are used to create a conceptual design). The tool supports the first two steps of the process by providing various automation algorithms for user grouping and pattern selection, combined with the benefit of rapid pattern and user information access. Persona and pattern formats are augmented with a set of discrete domain variables to facilitate automation and provide an alternative view on the information. Finally, the P2P Mapper is used in the redesign of two different Bioinformatics applications: a popular website and a visualization tool. The results of the studies demonstrate a significant improvement in the system usability of both applications.
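Because the abstract says personas and patterns are augmented with discrete domain variables to facilitate automation, the user-grouping step can be illustrated with a simple mismatch-count assignment. This is a hypothetical sketch of one plausible grouping algorithm, not the P2P Mapper's actual method; attribute names and persona labels are invented.

```python
def hamming(a, b):
    """Count mismatching positions between two equal-length attribute tuples."""
    return sum(x != y for x, y in zip(a, b))

def assign_personas(users, personas):
    """Assign each user to the persona with the fewest attribute mismatches.

    users, personas: dicts mapping names to tuples of discrete attribute
    values, e.g. (expertise, usage_frequency, primary_task).
    """
    return {
        name: min(personas, key=lambda p: hamming(attrs, personas[p]))
        for name, attrs in users.items()
    }

# Example with two invented personas and two survey respondents
personas = {
    "novice": ("low", "daily", "browse"),
    "expert": ("high", "hourly", "analyze"),
}
users = {
    "u1": ("low", "daily", "analyze"),
    "u2": ("high", "hourly", "analyze"),
}
```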
HandPainter – 3D sketching in VR with hand-based physical proxy
3D sketching in virtual reality (VR) enables users to create 3D virtual objects intuitively and immersively. However, previous studies showed that mid-air drawing may lead to inaccurate sketches. To address this issue, we propose to use one hand as a canvas proxy and the index finger of the other hand as a 3D pen. To this end, we first perform a formative study comparing two-handed interaction with tablet-pen interaction for VR sketching. Based on the findings of this study, we design HandPainter, a VR sketching system which focuses on the direct use of two hands for 3D sketching without requiring any tablet, pen, or VR controller. Our implementation is based on a pair of VR gloves, which provide hand tracking and gesture capture. We devise a set of intuitive gestures to control the various functionalities required during 3D sketching, such as canvas panning and drawing positioning. We show the effectiveness of HandPainter by presenting a number of sketching results and discussing the outcomes of a user study comparing it with mid-air drawing and tablet-based sketching tools.
Assessing Smartphone Ease of Use and Learning from the Perspective of Novice and Expert Users: Development and Illustration of Mobile Benchmark Tasks
Assessing usability of device types with novel function sets that are adopted by diverse user groups requires one to explore a variety of approaches. In this paper, we develop such an approach to assess usability of smartphone devices. Using a three-stage Delphi-method study, we identify sets of benchmark tasks that can be used to assess usability for various user types. These task sets enable one to evaluate smartphone platforms from two perspectives: ease of learning (for those unfamiliar with smartphone use) and ease of use (for experienced users). We then demonstrate an approach for using this task set by performing an exploratory study of both inexperienced smartphone users (using a convenience sample) and experienced users (using the keystroke model). Our exploration illustrates the methodology for using such a task set and, in so doing, reveals significant differences among the leading smartphone platforms between novice and expert users. As such, we provide some preliminary evidence that ease of use is indeed significantly different from ease of learning.
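The "keystroke model" used for expert users is presumably the keystroke-level model (KLM) of Card, Moran and Newell, which estimates an expert's task time by summing standard operator times. The sketch below uses the commonly cited operator values; the example operator sequence is an invented illustration, not one of the paper's benchmark tasks.

```python
# Standard KLM operator times in seconds (Card, Moran & Newell):
# K = keystroke/tap (average typist), P = point with finger/pointer,
# H = home hands to device, M = mental preparation
KLM_TIMES = {"K": 0.28, "P": 1.10, "H": 0.40, "M": 1.35}

def klm_estimate(ops):
    """Estimate expert task time from a KLM operator string, e.g. 'MHPKK'."""
    return sum(KLM_TIMES[op] for op in ops)

# Example: think (M), home to phone (H), point at icon (P), tap (K)
t = klm_estimate("MHPK")
```

Summing per-operator times like this is what makes KLM attractive for benchmarking experienced users: no user study is needed, only a decomposition of each benchmark task into operators.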