    Visual design of interface layouts for GIS-based emergency management system

    By reviewing the research status of GIS (Geographic Information System) and emergency management systems, and starting from the characteristics of GIS-based emergency management systems, this work examines the visual design of interface layouts, identifies the common problems that arise during design and operation, and proposes corresponding design strategies to improve the interface design of GIS emergency management systems and their human-machine operational performance

    E-Fibroid Patient Tracking System

    The objective of the e-Fibroid Patient Tracking System is to allow information on fibroid patients to be generated, updated, archived, routed and used for decision making and strategic information analysis, combining the benefits of a smart card that supports mobility in a pocket with ubiquitous access, which presents a new paradigm for medical information access systems. A smart card with local processing capabilities facilitates the development of active programs designed to manage complex fibroid patient medical records effectively and accurately. Essentially, the patient's information is augmented with active programs residing within the smart card to provide rich services such as record management facilities, security and authentication, and a clinical alert system. The intended users are administrative staff, doctors, specialists, hospitals, clinics and fibroid patients. The main interest lies in providing mobility of medical data and records, reducing the cost and redundancy of treatment and, most importantly, ensuring that fibroid patients obtain the necessary medication. The system provides better security against the misuse of patient data by implementing security mechanisms. The scope of the study covers a literature review on the effect of multimodal interfaces and smart cards in medical applications. The methodology used in developing the system follows four processes: planning, analysis, design and implementation. Performance and robustness, together with ease of use, providing available, accessible and manageable information on fibroids, are essential elements of the final system

    ICARE software components for rapidly developing multimodal interfaces


    Prosody and Kinesics Based Co-analysis Towards Continuous Gesture Recognition

    The aim of this study is to develop a multimodal co-analysis framework for continuous gesture recognition by exploiting the prosodic and kinesic manifestations of natural communication. Using this framework, a co-analysis pattern between correlating components is obtained. The co-analysis pattern is clustered using K-means clustering to determine how well the pattern distinguishes the gestures. Features that differentiate the proposed approach from other models are its lower susceptibility to idiosyncrasies, its scalability, and its simplicity. The experiment was performed on the Multimodal Annotated Gesture Corpus (MAGEC), which we created for the research community studying non-verbal communication, particularly gestures
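
    The clustering step described above can be illustrated with a minimal sketch: joint prosody-kinesics feature vectors are clustered with K-means, and the agreement between clusters and gesture labels indicates how well the co-analysis pattern separates the gestures. The feature names and synthetic data below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the MAGEC pipeline itself): cluster joint
# prosody-kinesics feature vectors with K-means and check how well
# the clusters separate the gesture classes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import adjusted_rand_score

def cluster_co_analysis(prosody, kinesics, gesture_labels, n_clusters=5):
    """prosody: (n, p) speech features; kinesics: (n, k) motion features."""
    # Fuse the correlating components into one co-analysis pattern per segment.
    features = StandardScaler().fit_transform(np.hstack([prosody, kinesics]))
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    # Agreement between clusters and gesture labels indicates how well
    # the co-analysis pattern distinguishes the gestures.
    return clusters, adjusted_rand_score(gesture_labels, clusters)

# Synthetic example (feature choices are illustrative assumptions):
rng = np.random.default_rng(0)
prosody = rng.normal(size=(200, 3))    # e.g. pitch, intensity, pause duration
kinesics = rng.normal(size=(200, 4))   # e.g. hand velocity, acceleration, extent
labels = rng.integers(0, 5, size=200)  # gesture class per segment
_, ari = cluster_co_analysis(prosody, kinesics, labels)
print(f"cluster/label agreement (ARI): {ari:.3f}")
```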

    Sensor Data Analysis for Advanced User Interfaces

    This work deals with the creation of a user interface based on multiple input signals, i.e. a multimodal interface. To this end, it first discusses the benefits of this approach to communicating with devices. It also gives an overview of the levels at which data fusion can be performed and of different approaches to structuring a system architecture for multimodal data processing. The central part is the design of the system itself: a distributed architecture using software agents to process the inputs was chosen for the resulting interface, and hybrid fusion based on a dialog-driven and unification strategy was selected as the data integration method. The result should be an interface for controlling a media center and interacting with other devices around the user.
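
    The unification side of such a hybrid fusion strategy can be sketched very simply: each software agent emits a partial semantic frame, and frames belonging to the same dialogue turn are unified into one command. The slot names and devices below are illustrative assumptions rather than details from the thesis.

```python
# Minimal sketch of frame unification for multimodal fusion: two agents
# contribute partial interpretations that are merged into one command.
from typing import Optional

def unify(frame_a: dict, frame_b: dict) -> Optional[dict]:
    """Merge two partial frames; fail (return None) on conflicting slot values."""
    merged = dict(frame_a)
    for slot, value in frame_b.items():
        if slot in merged and merged[slot] not in (None, value):
            return None          # conflicting interpretations, unification fails
        if value is not None:
            merged[slot] = value
    return merged

# Speech agent: "play ... there" -> action known, target unresolved.
speech_frame = {"action": "play", "target": None}
# Gesture agent: pointing at the media center screen.
gesture_frame = {"action": None, "target": "media_center"}

print(unify(speech_frame, gesture_frame))   # {'action': 'play', 'target': 'media_center'}
```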

    A real-time framework for natural multimodal interaction with large screen displays

    This paper presents a framework for designing a natural multimodal human computer interaction (HCI) system. The core of the proposed framework is a principled method for combining information derived from audio and visual cues. To achieve natural interaction, both audio and visual modalities are fused along with feedback through a large screen display. Careful design, along with due consideration of the various aspects of a system's interaction cycle and integration, has resulted in a successful system. The performance of the proposed framework has been validated through the development of several prototype systems as well as commercial applications for the retail and entertainment industries. To assess the impact of these multimodal systems (MMS), informal studies have been conducted. It was found that the system performed according to its specifications in 95% of the cases and that users showed ad-hoc proficiency, indicating natural acceptance of such systems
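
    As a rough illustration of combining audio and visual cues (not the authors' actual method), the sketch below performs simple confidence-weighted decision-level fusion of per-class scores produced by a speech recogniser and a vision-based pointing or gaze estimator; the scores and weights are hypothetical.

```python
# Not the paper's method: a rough sketch of confidence-weighted
# decision-level fusion of audio and visual cue scores.
import numpy as np

def fuse_cues(audio_scores, visual_scores, audio_conf=0.5, visual_conf=0.5):
    """audio_scores / visual_scores: per-class posteriors from each modality."""
    fused = (audio_conf * np.asarray(audio_scores, dtype=float)
             + visual_conf * np.asarray(visual_scores, dtype=float))
    return fused / fused.sum()      # renormalise to a probability distribution

# Three hypothetical user commands in front of the large screen display:
audio = [0.6, 0.3, 0.1]    # speech recogniser posteriors
visual = [0.2, 0.7, 0.1]   # pointing / gaze estimator posteriors
print(fuse_cues(audio, visual, audio_conf=0.4, visual_conf=0.6))
```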

    How to improve team collaboration in an office environment with the help of large screen displays

    Displays of various sizes and forms have found their way into the modern collaborative setting. The research community is interested in studying how such settings can be improved, but there is still a gap in defining which kinds of displays are best for which purposes and which cues are suitable for a particular display. It is interesting to learn how well an application built for a small display works on a larger one and how seamlessly such a system can switch context between displays. For example, a PowerPoint presentation designed for desktop screens is normally manipulated with a mouse; when it is displayed on a larger screen, can it adapt to the dynamics of the new system? How can screen designs be adapted to the location of the display, e.g. in a public, semi-public or private setting? Proximity to the display also affects how users tend to interact with it. There is an increasing need to understand such possibilities from the user’s perspective and to devise new technologies for the betterment of collaborative meetings. This research reflects on a modern collaborative setting involving multiple displays, lists the main pain points of using such systems, and suggests design guidelines to overcome them. The outcome that emerged from the study suggests that all collaborative settings should be built towards providing three main functions: Visibility, Flexibility and Involvement

    Context-aware gestural interaction in the smart environments of the ubiquitous computing era

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Technology is becoming pervasive and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have started to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interactions. Many issues are still open in this emerging domain and, in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of context information. The framework is conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity and the number of gestures in taxonomies, and thereby improve usability. In order to validate this framework, a proof-of-concept has been developed: a prototype implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests have been conducted to assess the gesture recognition accuracy and the usability of interfaces developed following the proposed framework. The results show that the method provides optimal gesture recognition from very different viewpoints, while the usability tests yielded high scores. Further investigation of the context information has been performed, tackling the problem of user status, which is understood here as human activity; a technique based on an innovative application of electromyography is proposed, and tests show that it achieves good activity recognition accuracy. The context is also treated as system status. In ubiquitous computing, the system can adopt different paradigms: wearable, environmental and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms
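
    The functional-gesture idea can be illustrated with a minimal sketch in which one physical gesture resolves to different commands depending on the current context, so the gesture taxonomy stays small and ambiguity is reduced. The gesture names, devices and dictionary-based context model below are simplifying assumptions, not the thesis framework.

```python
# Minimal sketch of functional gestures: the same gesture maps to
# different commands depending on the contextual focus of the user.
FUNCTIONAL_GESTURES = {
    # (gesture, focused device) -> command; entries are illustrative
    ("swipe_right", "tv"): "next_channel",
    ("swipe_right", "lights"): "increase_brightness",
    ("circle", "tv"): "volume_up",
    ("circle", "thermostat"): "raise_temperature",
}

def resolve(gesture: str, context: dict) -> str:
    """Map a recognised gesture to a command using the current context."""
    device = context.get("focused_device", "unknown")
    return FUNCTIONAL_GESTURES.get((gesture, device), "no_action")

# Same gesture, two contexts, two different commands:
print(resolve("swipe_right", {"focused_device": "tv"}))      # next_channel
print(resolve("swipe_right", {"focused_device": "lights"}))  # increase_brightness
```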

    Audio-coupled video content understanding of unconstrained video sequences

    Unconstrained video understanding is a difficult task. The main aim of this thesis is to recognise the nature of objects, activities and environment in a given video clip using both audio and video information. Traditionally, audio and video information has not been applied together for solving such a complex task, and for the first time we propose, develop, implement and test a new framework of multi-modal (audio and video) data analysis for context understanding and labelling of unconstrained videos. The framework relies on feature selection techniques and introduces a novel algorithm (PCFS) that is faster than the well-established SFFS algorithm. We use the framework to study the benefits of combining audio and video information in a number of different problems. We begin by developing two independent content recognition modules. The first is based on image sequence analysis alone and uses a range of colour, shape, texture and statistical features from image regions with a trained classifier to recognise the identity of the objects, activities and environment present. The second module uses audio information only and recognises activities and environment. Both approaches are preceded by detailed pre-processing to ensure that correct video segments containing both audio and video content are present, and that the developed system can be made robust to changes in camera movement, illumination, random object behaviour, etc. For both audio and video analysis, we use a hierarchical approach of multi-stage classification so that difficult classification tasks can be decomposed into simpler and smaller ones. When combining both modalities, we compare fusion techniques at different levels of integration and propose a novel algorithm that combines the advantages of both feature- and decision-level fusion. The analysis is evaluated on a large amount of test data comprising unconstrained videos collected for this work. Finally, we propose a decision correction algorithm which shows that further steps towards effectively combining multi-modal classification information with semantic knowledge generate the best possible results
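
    The hierarchical multi-stage classification idea can be sketched as a simple two-stage classifier: a coarse model first predicts, say, the environment, and a per-environment specialist then assigns the finer label. The classifiers and synthetic data below are placeholders, not the models used in the thesis.

```python
# Minimal sketch of hierarchical multi-stage classification: a coarse
# classifier splits the problem, then a per-class specialist assigns the
# finer label.  Classifiers and data are placeholders, not the thesis models.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class TwoStageClassifier:
    """Stage 1 predicts a coarse class; stage 2 delegates to a specialist."""
    def __init__(self):
        self.coarse = DecisionTreeClassifier(random_state=0)
        self.fine = {}                      # coarse label -> specialist model

    def fit(self, X, y_coarse, y_fine):
        self.coarse.fit(X, y_coarse)
        for label in np.unique(y_coarse):
            mask = y_coarse == label
            self.fine[label] = DecisionTreeClassifier(random_state=0).fit(X[mask], y_fine[mask])
        return self

    def predict(self, X):
        coarse_pred = self.coarse.predict(X)
        return np.array([self.fine[c].predict(x.reshape(1, -1))[0]
                         for c, x in zip(coarse_pred, X)])

# Synthetic example: coarse = environment (0/1), fine = activity within it.
X = np.vstack([np.random.rand(50, 4), np.random.rand(50, 4) + 2])
y_coarse = np.array([0] * 50 + [1] * 50)
y_fine = np.array([0, 1] * 25 + [2, 3] * 25)
print(TwoStageClassifier().fit(X, y_coarse, y_fine).predict(X[:3]))
```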