
    Understanding user experience of mobile video: Framework, measurement, and optimization

    Since users have become the focus of product and service design over the last decade, the term User eXperience (UX) has been used frequently in the field of Human-Computer Interaction (HCI). Research on UX facilitates a better understanding of the various aspects of the user's interaction with a product or service. Mobile video, as a new and promising service and research field, has attracted great attention. Because UX is significant to the success of mobile video (Jordan, 2002), many researchers have focused on this area, examining users' expectations, motivations, requirements, and usage contexts, and many influencing factors have been identified (Buchinger, Kriglstein, Brandt & Hlavacs, 2011; Buchinger, Kriglstein & Hlavacs, 2009). However, a general framework for a specific mobile video service that structures this large number of factors is still lacking. To measure the user experience of multimedia services such as mobile video, quality of experience (QoE) has recently become a prominent concept. In contrast to the traditionally used concept of quality of service (QoS), QoE not only involves objectively measuring the delivered service but also takes into account the user's needs and desires when using the service, emphasizing the user's overall acceptability of the service. Many QoE metrics can estimate the user-perceived quality or acceptability of mobile video, but they may not be accurate enough for overall UX prediction because of the complexity of UX. Only a few QoE frameworks have addressed broader aspects of UX for mobile multimedia applications, and these still need to be turned into practical measures. The challenge of optimizing UX remains adapting to resource constraints (e.g., network conditions, mobile device capabilities, and heterogeneous usage contexts) while meeting complex user requirements (e.g., usage purposes and personal preferences). In this chapter, we investigate the important existing UX frameworks, compare their similarities, and discuss features that fit the mobile video service. Based on previous research, we propose a simple UX framework for mobile video applications by mapping a variety of influencing factors of UX onto a typical mobile video delivery system. Each component and its factors are explored through comprehensive literature reviews. The proposed framework may benefit the user-centred design of mobile video by taking full consideration of UX influences, and may improve mobile video service quality by adjusting the values of certain factors to produce a positive user experience. It may also facilitate related research by locating important issues to study, clarifying research scopes, and setting up proper study procedures. We then review a large body of research on UX measurement, including QoE metrics and QoE frameworks for mobile multimedia. Finally, we discuss how to achieve an optimal quality of user experience by focusing on various aspects of the UX of mobile video. In the conclusion, we suggest some open issues for future study.
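
    As a rough illustration of the measurement idea discussed in this abstract, the sketch below combines objective QoS metrics with user- and context-related factors into a single QoE estimate. The factor names, weights, and scoring functions are hypothetical stand-ins, not the model proposed in the chapter.

    import math
    from dataclasses import dataclass

    @dataclass
    class QosMetrics:
        bitrate_kbps: float     # delivered video bitrate
        stall_ratio: float      # fraction of playback time spent rebuffering
        startup_delay_s: float  # initial buffering delay

    @dataclass
    class UserContext:
        screen_inches: float    # device capability factor
        purpose_weight: float   # e.g. 1.0 for entertainment, 0.8 for quick news clips

    def estimate_qoe(qos: QosMetrics, ctx: UserContext) -> float:
        """Return a 1..5 MOS-like estimate (illustrative only)."""
        # Objective part: diminishing returns on bitrate, penalties for stalls and startup delay.
        score = 1.0 + 2.5 * math.log10(1.0 + qos.bitrate_kbps / 250.0)
        score -= 4.0 * qos.stall_ratio + 0.1 * qos.startup_delay_s
        # Subjective part: small screens mask low bitrates; usage purpose scales expectations.
        score *= ctx.purpose_weight * (4.5 / ctx.screen_inches) ** 0.2
        return max(1.0, min(5.0, score))

    print(estimate_qoe(QosMetrics(800, 0.02, 1.5), UserContext(5.5, 1.0)))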

    Information Management to Mitigate Loss of Control Airline Accidents

    Loss of control in flight continues to be the leading contributor to airline accidents worldwide, and unreliable airspeed has been a contributing factor in many of these accidents. Airlines and the FAA have developed training programs for pilot recognition of these airspeed events, and many checklists have been designed to help pilots troubleshoot. In addition, new aircraft designs incorporate features to detect and respond to such situations. NASA has been using unreliable airspeed events while conducting research recommended by the Commercial Aviation Safety Team. Even after significant industry focus on unreliable airspeed, research and other evidence show that highly skilled and trained pilots can still be confused by the condition, and there is a lack of understanding of what the associated checklist(s) attempt to uncover. Common-mode failures of the analog sensors used for measuring airspeed continue to confound both humans and automation when determining which indicators are correct. This paper describes failures that have occurred in the past and where and how pilots may still struggle to determine reliable airspeed when confronted with conflicting information. Two latest-generation aircraft architectures will be discussed and contrasted. This information will be used to explain why simply adding more sensors within classic control theory will not solve the problem. Technology concepts are suggested for utilizing existing synoptic pages and a new synoptic page called the System Interactive Synoptic (SIS). The SIS details the flow of flight-critical data through the avionics system and how it is used by the automation. This new synoptic page, as well as existing synoptics, can be designed to be used in concert with a simplified electronic checklist (sECL) to significantly reduce the time needed to configure the flight deck avionics in the event of a system or sensor failure.
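
    The cross-check below is a minimal sketch of the kind of logic a synoptic such as the proposed SIS might surface: simple voting among air-data sources cannot expose a common-mode pitot failure, so the check also compares against an independent, GPS-derived estimate. The sensor names, thresholds, and wind term are illustrative assumptions, not the paper's design.

    import statistics

    def airspeed_miscompare(adc_speeds_kts, gps_groundspeed_kts, headwind_estimate_kts=0.0,
                            adc_tolerance_kts=10.0, independent_tolerance_kts=30.0):
        """Flag airspeed sources that disagree, using a non-pitot reference as a tiebreaker.

        Voting among the air-data computers alone cannot expose a common-mode pitot
        failure (all sources drift together), hence the GPS-derived cross-check.
        """
        median = statistics.median(adc_speeds_kts)
        suspect_adcs = [i for i, v in enumerate(adc_speeds_kts)
                        if abs(v - median) > adc_tolerance_kts]
        independent_estimate = gps_groundspeed_kts + headwind_estimate_kts
        possible_common_mode = abs(median - independent_estimate) > independent_tolerance_kts
        return {"suspect_adcs": suspect_adcs, "possible_common_mode": possible_common_mode}

    # Three ADCs agree with one another yet all diverge from the GPS-derived estimate:
    print(airspeed_miscompare([115.0, 118.0, 116.0], gps_groundspeed_kts=240.0))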

    Layered evaluation of interactive adaptive systems: framework and formative methods

    Peer reviewed. Postprint.

    Vision systems with the human in the loop

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, can adapt to their environment, and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories, which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems will be raised. Experiences from psychologically evaluated human-machine interactions will be reported, and the promising potential of psychologically based usability experiments will be stressed.
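
    A minimal sketch of adaptive content-based image retrieval of the kind referred to above: images are ranked by feature similarity and the query is refined from user relevance feedback (a Rocchio-style update). The feature vectors and parameters are illustrative assumptions, not the systems' actual retrieval code.

    import numpy as np

    def retrieve(query, database, k=3):
        """Rank images by cosine similarity of their feature vectors to the query."""
        def cos(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        ranked = sorted(database.items(), key=lambda kv: cos(query, kv[1]), reverse=True)
        return [name for name, _ in ranked[:k]]

    def refine_query(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.15):
        """Rocchio-style update: pull the query toward images the user marked relevant."""
        refined = alpha * query
        if relevant:
            refined = refined + beta * np.mean(relevant, axis=0)
        if irrelevant:
            refined = refined - gamma * np.mean(irrelevant, axis=0)
        return refined

    rng = np.random.default_rng(0)
    database = {f"img{i}": rng.random(8) for i in range(5)}  # stand-in feature vectors
    query = rng.random(8)
    print(retrieve(query, database))
    print(retrieve(refine_query(query, [database["img0"]], [database["img3"]]), database))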

    Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information; through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied to a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes, an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.
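
    The snippet below illustrates, in simplified form, how a driver's report might be wrapped as an observation for sharing on the Semantic Sensor Web. The field names follow the spirit of OGC Observations & Measurements (procedure, observedProperty, featureOfInterest, result) but are a simplified illustration, not the normative SWE encoding used in the paper.

    import json
    from datetime import datetime, timezone

    def human_observation(driver_id, phenomenon, value, lat, lon):
        """Wrap a driver's report as a simplified O&M-style observation record."""
        return {
            "procedure": f"human-sensor:{driver_id}",       # the person acts as the sensor
            "observedProperty": phenomenon,                  # e.g. "traffic-congestion"
            "phenomenonTime": datetime.now(timezone.utc).isoformat(),
            "featureOfInterest": {"type": "Point", "coordinates": [lon, lat]},
            "result": value,
        }

    obs = human_observation("driver-42", "traffic-congestion", "heavy", 40.4168, -3.7038)
    print(json.dumps(obs, indent=2))  # ready to publish to an SOS-like service or broker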

    Integration of an adaptive infotainment system in a vehicle and validation in real driving scenarios

    More services, functionalities, and interfaces are increasingly being incorporated into current vehicles and may overload the driver's capacity to perform primary driving tasks adequately. For this reason, a strategy for easing driver interaction with the infotainment system must be defined, and a good balance between road safety and driver experience must also be achieved. An adaptive Human Machine Interface (HMI) that manages the presentation of information and restricts drivers' interaction in accordance with the driving complexity was designed and evaluated. For this purpose, the driving complexity value employed as a reference was computed by a predictive model, and the adaptive interface was designed following a set of proposed HMI principles. The system was validated by performing acceptance and usability tests in real driving scenarios. Results showed that the system performs well in real driving scenarios, and positive feedback was received from participants endorsing the benefits of integrating this kind of system with regard to driving experience and road safety. Postprint (published version).
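
    A minimal sketch of the adaptation idea: a toy driving-complexity score gates which infotainment interactions remain available. The features, weights, thresholds, and feature sets are hypothetical stand-ins for the paper's predictive model and HMI principles.

    from dataclasses import dataclass

    @dataclass
    class DrivingState:
        speed_kmh: float
        steering_activity: float  # normalized 0..1 steering-wheel movement
        traffic_density: float    # normalized 0..1

    def complexity_score(state: DrivingState) -> float:
        """Toy linear stand-in for a predictive driving-complexity model (0..1)."""
        return min(1.0, 0.004 * state.speed_kmh
                        + 0.4 * state.steering_activity
                        + 0.4 * state.traffic_density)

    def allowed_interactions(score: float):
        """Progressively restrict infotainment interaction as complexity rises."""
        if score < 0.3:
            return {"navigation", "media_browse", "text_dictation", "phone"}
        if score < 0.7:
            return {"navigation", "phone"}   # hide browsing and dictation
        return {"navigation"}                # only essential guidance remains

    state = DrivingState(speed_kmh=110, steering_activity=0.5, traffic_density=0.6)
    print(allowed_interactions(complexity_score(state)))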

    Impact of Advanced Synoptics and Simplified Checklists During Aircraft Systems Failures

    Natural human capacities are becoming increasingly mismatched to the enormous data volumes, processing capabilities, and decision speeds demanded in today's aviation environment. Increasingly Autonomous Systems (IAS) are uniquely suited to solve this problem. NASA is conducting research and development of IAS: hardware and software systems, utilizing machine learning algorithms, seamlessly integrated with humans, whereby the task performance of the combined system is significantly greater than that of the individual components. IAS offer the potential for levels of performance and safety superior to either the human or automation alone. A human-in-the-loop test was conducted in NASA Langley's Integration Flight Deck B-737-800 simulator to evaluate advanced synoptic pages with simplified interactive electronic checklists as an IAS for routine air carrier flight operations and in response to aircraft system failures. Twelve U.S. airline crews flew various normal and non-normal procedures, and their actions and performance in response to failures were recorded. These data are fundamental to and critical for the design and development of future increasingly autonomous systems that can better support the human in the cockpit. Synoptic pages and electronic checklists significantly improved pilot responses to non-normal scenarios, but these aids and other intelligent assistants face barriers to implementation (e.g., certification cost) that must be overcome.
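
    As an illustration of how a simplified electronic checklist could be driven by system data surfaced on a synoptic page, the sketch below models checklist steps that the avionics can auto-confirm. The failure name, step wording, and auto-sensing flag are hypothetical, not the sECL design evaluated in the study.

    from dataclasses import dataclass, field

    @dataclass
    class ChecklistStep:
        text: str
        auto_sensed: bool = False  # the avionics can confirm this step from system data
        done: bool = False

    @dataclass
    class SimplifiedChecklist:
        failure: str               # fault title as shown on the synoptic page
        steps: list = field(default_factory=list)

        def next_open_step(self):
            """Auto-complete sensed steps and surface the first one still requiring the pilot."""
            for step in self.steps:
                if step.auto_sensed:
                    step.done = True
                if not step.done:
                    return step
            return None

    ecl = SimplifiedChecklist("HYD SYS B LOW PRESS", [
        ChecklistStep("Verify pressure below limit on synoptic", auto_sensed=True),
        ChecklistStep("Hydraulic pump B switch ... OFF"),
        ChecklistStep("Plan for alternate actuation of affected surfaces"),
    ])
    print(ecl.next_open_step().text)  # the sensed check closes itself; the pump step is shown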

    From Sensor to Observation Web with Environmental Enablers in the Future Internet

    This paper outlines the grand challenges in global sustainability research and the objectives of the FP7 Future Internet PPP program within the Digital Agenda for Europe. Large user communities are generating significant amounts of valuable environmental observations at local and regional scales using the devices and services of the Future Internet. These communities’ environmental observations represent a wealth of information which is currently hardly used, or used only in isolation, and is therefore in need of integration with other information sources. Indeed, this very integration will lead to a paradigm shift from a mere Sensor Web to an Observation Web with semantically enriched content emanating from sensors, environmental simulations and citizens. The paper also describes the research challenges in realizing the Observation Web and the associated environmental enablers for the Future Internet. Such an environmental enabler could, for instance, be an electronic sensing device, a web-service application, or even a social networking group affording or facilitating the capability of Future Internet applications to consume, produce, and use environmental observations in cross-domain applications. The term ‘envirofied’ Future Internet is coined to describe this overall target, which forms a cornerstone of work in the Environmental Usage Area within the Future Internet PPP program. Relevant trends described in the paper are the usage of ubiquitous sensors (anywhere), the provision and generation of information by citizens, and the convergence of real and virtual realities to convey understanding of environmental observations. The paper addresses the technical challenges in the Environmental Usage Area and the need for designing a multi-style service-oriented architecture. Key topics are the mapping of requirements to capabilities, and providing scalability and robustness while implementing context-aware information retrieval. Another essential research topic is handling data fusion and model-based computation, and the related propagation of information uncertainty. Approaches to security, standardization and harmonization, all essential for sustainable solutions, are summarized from the perspective of the Environmental Usage Area. The paper concludes with an overview of emerging, high-impact applications in the environmental areas concerning land ecosystems (biodiversity), air quality (atmospheric conditions) and water ecosystems (marine asset management).
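
    One of the research topics named above, data fusion with propagation of information uncertainty, can be illustrated with a simple inverse-variance weighting of sensor, citizen, and model observations of the same quantity. The values and variances below are made up for illustration and are not from the paper.

    def fuse_observations(observations):
        """Inverse-variance weighted fusion of observations of the same quantity.

        Each observation is a (value, variance) pair; less certain sources
        (e.g. citizen reports) get proportionally less weight, and the fused
        variance shows how uncertainty propagates into the combined result.
        """
        weights = [1.0 / var for _, var in observations]
        total = sum(weights)
        fused_value = sum(w * v for (v, _), w in zip(observations, weights)) / total
        fused_variance = 1.0 / total
        return fused_value, fused_variance

    station_no2 = (38.0, 4.0)   # calibrated monitoring station: low variance
    citizen_no2 = (45.0, 25.0)  # low-cost citizen device: high variance
    model_no2 = (41.0, 9.0)     # environmental simulation output
    print(fuse_observations([station_no2, citizen_no2, model_no2]))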