
    Using an essentiality and proficiency approach to improve the web browsing experience of visually impaired users

    Increased volumes of content exacerbate the Web accessibility issues faced by people with visual impairments. Essentiality & Proficiency is presented as one method of easing access to information on Websites by addressing the volume of content together with how it is presented. This research develops the concept of Essentiality for Web authors. A preliminary survey was conducted to understand the accessibility issues faced by people with visual impairments: structured interviews were conducted with twelve participants, and a further 26 participants responded to online questionnaires. In total there were 38 participants (both sexes), aged 18 to 54 years; 68% had visual impairments, three had motor issues, one had a hearing impairment, and two had cognitive impairments. The findings show that information overload on a page was the most prominent difficulty experienced when using the Web. The findings from the preliminary survey fed into an empirical study: four participants aged 21 to 54 years (both sexes) from the preliminary survey were presented with a technology demonstrator to check the feasibility of Essentiality & Proficiency in a real environment. Participants were able to identify and appreciate the reduced volume of information, which initiated the iterative development of the prototype tool. Microformatting is used in the Essentiality & Proficiency prototype tool so that the reformulated Web pages remain standards compliant. A formative evaluation of the prototype tool used an experimental design methodology: a convenience sample of nine participants (both sexes) with a range of visual impairments, aged 18 to 52, performed tasks on a computer under three essentiality conditions. At an alpha level of .05, the evaluation showed the Essentiality & Proficiency tool to offer some improvement in accessing information.
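    The idea of marking up content by essentiality and filtering a page down to its essential elements can be sketched as follows. This is an illustrative Python sketch, not the thesis's actual tool: the `data-essentiality` attribute and its 1-to-3 level scale are assumptions standing in for whatever microformat markup the authors used.

```python
from html.parser import HTMLParser

class EssentialityFilter(HTMLParser):
    """Rebuild a page keeping only elements whose (hypothetical)
    data-essentiality level is at or below a chosen threshold,
    where 1 = most essential content."""

    def __init__(self, max_level):
        super().__init__()
        self.max_level = max_level
        self.out = []
        self.skip_depth = 0  # > 0 while inside a filtered-out element

    def handle_starttag(self, tag, attrs):
        level = int(dict(attrs).get("data-essentiality", 1))
        if self.skip_depth or level > self.max_level:
            self.skip_depth += 1  # suppress this element and its children
            return
        self.out.append(f"<{tag}>")

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1
            return
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

page = ('<div data-essentiality="1">Train times'
        '<span data-essentiality="3">(advert)</span></div>')
f = EssentialityFilter(max_level=2)
f.feed(page)
print("".join(f.out))  # → <div>Train times</div>
```

    A screen-reader user at `max_level=1` would then hear only the essential content, while the untouched source page remains standards compliant for everyone else.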

    Recreating Daily life in Pompeii

    We propose an integrated Mixed Reality methodology for recreating ancient daily life that features realistic simulations of animated virtual human actors (clothes, body, skin, face) who augment real environments and re-enact staged storytelling dramas. We aim to go beyond traditional concepts of static cultural artifacts or rigid geometrical and 2D textual augmentations, and to allow for 3D, interactive, augmented, historical character-based event representations in a mobile and wearable setup. This is the main contribution of the described work, together with the proposed extensions to AR enabling technologies: a VR/AR character simulation kernel framework with real-time, clothed virtual humans that are dynamically superimposed on live camera input, animated and acting according to a predefined, historically correct scenario. We demonstrate such a real-time case study on the actual site of ancient Pompeii. The work presented has been supported by the Swiss Federal Office for Education and Science and the EU IST programme, within the EU IST LIFEPLUS 34545 and EU ICT INTERMEDIA 38417 projects.
    Magnenat-Thalmann, N.; Papagiannakis, G. (2010). Recreating Daily life in Pompeii. Virtual Archaeology Review, 1(2), 19-23. https://doi.org/10.4995/var.2010.4679

    Inviwo -- A Visualization System with Usage Abstraction Levels

    The complexity of today's visualization applications demands specific visualization systems tailored for the development of these applications. Frequently, such systems utilize levels of abstraction to improve the application development process, for instance by providing a data-flow network editor. Unfortunately, these abstractions result in several issues which need to be circumvented through an abstraction-centered system design. Often, a high level of abstraction hides low-level details, making it difficult to directly access the underlying computing platform, which would be important for achieving optimal performance. Therefore, we propose a layer structure developed for modern and sustainable visualization systems that allows developers to interact with all contained abstraction levels. We refer to these interaction capabilities as usage abstraction levels, since we target application developers with various levels of experience. We formulate the requirements for such a system, derive the desired architecture, and present how the concepts have been realized, by way of example, within the Inviwo visualization system. Furthermore, we address several specific challenges that arise during the realization of such a layered architecture, such as communication between different computing platforms, performance-centered encapsulation, and layer-independent development through cross-layer documentation and debugging capabilities.
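    The core idea of usage abstraction levels, that a high-level data-flow network and direct low-level access coexist rather than exclude each other, can be illustrated with a minimal sketch. This is not Inviwo's actual API (Inviwo is a C++ system); `Processor` and `Network` here are hypothetical stand-ins for the layered design the abstract describes.

```python
class Processor:
    """Low-level layer: a single operation on a raw data buffer,
    directly callable by developers who need that level of control."""

    def __init__(self, fn):
        self.fn = fn

    def __call__(self, buffer):
        return self.fn(buffer)

class Network:
    """High-level layer: chains processors into a data-flow pipeline,
    the kind of structure a network editor would let users compose."""

    def __init__(self, *processors):
        self.processors = processors

    def run(self, buffer):
        for p in self.processors:
            buffer = p(buffer)
        return buffer

# Two simple image-like operations on a flat 8-bit buffer.
invert = Processor(lambda buf: [255 - v for v in buf])
halve = Processor(lambda buf: [v // 2 for v in buf])

net = Network(invert, halve)
print(net.run([0, 128, 255]))  # high-level use via the network
print(invert([0]))             # same operation, bypassing the network
```

    The point is that the lower layer is not hidden behind the upper one: an experienced developer can call a processor (or, in a real system, touch the underlying compute platform) without leaving the framework.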

    Music Maker – A Camera-based Music Making Tool for Physical Rehabilitation

    The therapeutic effects of playing music are being recognized increasingly in the field of rehabilitation medicine. People with physical disabilities, however, often do not have the motor dexterity needed to play an instrument. We developed a camera-based human-computer interface called "Music Maker" to provide such people with a means to make music by performing therapeutic exercises. Music Maker uses computer vision techniques to convert the movements of a patient's body part, for example a finger, hand, or foot, into musical and visual feedback using the open software platform EyesWeb. It can be adjusted to a patient's particular therapeutic needs and provides quantitative tools for monitoring the recovery process and assessing therapeutic outcomes. We tested the potential of Music Maker as a rehabilitation tool with six subjects who responded to or created music in various movement exercises. In these proof-of-concept experiments, Music Maker performed reliably and showed its promise as a therapeutic device.
    National Science Foundation (IIS-0308213, IIS-039009, IIS-0093367, P200A01031, EIA-0202067 to M.B.); National Institutes of Health (DC-03663 to E.S.); Boston University (Dudley Allen Sargent Research Fund, to A.L.).
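    The movement-to-music mapping at the heart of such an interface can be sketched in a few lines. This is an illustrative sketch under stated assumptions, not the Music Maker/EyesWeb pipeline: it takes a binary motion mask (as a plain 2D list standing in for a camera difference image), finds the centroid of the moving region, and maps its vertical position to a MIDI note number.

```python
def centroid(mask):
    """Centroid (row, col) of nonzero cells in a 2D 0/1 motion mask,
    i.e. the average position of the pixels that moved."""
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v]
    if not pts:
        return None  # no motion detected this frame
    r = sum(p[0] for p in pts) / len(pts)
    c = sum(p[1] for p in pts) / len(pts)
    return r, c

def position_to_midi(row, n_rows, low=60, high=72):
    """Map vertical position to a MIDI note: raising the hand toward
    the top of the frame raises the pitch (range is one octave here)."""
    frac = 1.0 - row / max(n_rows - 1, 1)
    return round(low + frac * (high - low))

# A toy 4x4 "difference image": a small blob of moving pixels.
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]

r, c = centroid(mask)
note = position_to_midi(r, len(mask))
print(note)  # → 66, a pitch from the middle of the frame
```

    In a real setup, the mask would come from frame differencing or color tracking on live video, and the note would be sent to a synthesizer; the pitch range and the choice of axis would be tuned to the patient's exercise.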