
    The use of mobile phones as service-delivery devices in sign language machine translation system

    This thesis investigates the use of mobile phones as service-delivery devices in a sign language machine translation system. Four sign language visualization methods were evaluated on mobile phones; three of these were synthetic sign language visualization methods. Three factors were considered: the intelligibility of the sign language as rendered by each method, the power consumption, and the bandwidth usage associated with each method. The average intelligibility rate was 65%, with some methods achieving intelligibility rates of up to 92%. The average size was 162 KB and, on average, power consumption increased to 180% of the idle state across all methods. This research forms part of the Integration of Signed and Verbal Communication: South African Sign Language Recognition and Animation (SASL) project at the University of the Western Cape and serves as an integration platform for the group's research. To perform this research, a machine translation system that uses mobile phones as service-delivery devices was developed, along with a 3D avatar for mobile phones. It was concluded that mobile phones are suitable service-delivery platforms for sign language machine translation systems.

    SignSupport: a limited communication domain mobile aid for a Deaf patient at the pharmacy

    This paper discusses a prototype for a communication aid on a mobile phone to support a Deaf person visiting a public hospital pharmacy. The aim is to prevent problems of non-compliance with treatment due to poor communication between a Deaf patient and a pharmacist. We studied the communication exchange between pharmacists and Deaf patients in a pharmacy setting in order to extract the most relevant content exchanged between the two parties. A prototype was developed on a mobile phone and iteratively tested using role plays, questionnaires and focus groups with pharmacy students and Deaf participants. The prototype allows pharmacists to input text and make selections that provide detailed medical instructions in signed language to a Deaf patient. The prototype demonstrates the feasibility of encoding a limited communication flow on a mobile device, with carefully sequenced sign language videos that a Deaf patient can watch and understand in order to take medicine correctly.

    CGAMES'2009


    Mobile Sensing, Simulation and Machine-learning Techniques: Improving Observations in Public Health

    In an era where mobile phones equipped with numerous sensors have become an integral part of our lives and wearable devices such as activity trackers are very popular, studying and analyzing the data collected by these devices can give researchers and policy makers insights into ongoing illnesses, outbreaks and public health in general. In this regard, new machine learning techniques can be utilized for population screening and for informing centers of disease control and prevention about potential threats and outbreaks. Where big data streams are not available, however, investigating the feasibility of such new techniques in this domain is limited. To overcome this shortcoming, simulation models, even if grounded in small-size data, can represent a simple platform for more complicated systems and can then be utilized as safe and still precise environments for generating synthetic ground-truth big data. The objective of this thesis is to use an agent-based model (ABM), which depicts a city consisting of restaurants, consumers, and an inspector, to investigate the practicability of using smartphone data in a machine-learning component: a Hidden Markov Model trained on synthetic ground-truth data generated by the ABM to detect foodborne outbreaks and inform the inspector about them. To this end, we also compared the results of this arrangement with traditional outbreak detection methods, examining the method across different formations and scenarios. As another contribution, we analyzed smartphone data collected through a real-world experiment in which participants used an application named Ethica Data on their phones. This application, the first platform to turn smartphones into micro research labs, allows passive sensor monitoring and the sending of context-dependent surveys. The collected data was later analyzed to gain insights into the participants' food consumption patterns.
Our results indicate that Hidden Markov Models supplied with smartphone data provide accurate systems for foodborne outbreak detection. The results also support the applicability of smartphone data for obtaining information about foodborne diseases, while suggesting that Hidden Markov Models have some limitations in detecting the exact source of an outbreak.
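The outbreak-detection idea above can be illustrated with a minimal sketch. This is not the thesis's actual model: the two hidden states, the Poisson emission assumption, and every parameter value below are illustrative assumptions, standing in for a model that would be trained on the ABM's synthetic ground-truth data.

```python
import math

# Hypothetical two-state HMM: the hidden state is "baseline" vs "outbreak";
# the observation is the daily count of food-poisoning symptom reports
# collected from phones, with Poisson emissions. All numbers are assumed.
STATES = ("baseline", "outbreak")
START = {"baseline": 0.95, "outbreak": 0.05}          # initial probabilities
TRANS = {"baseline": {"baseline": 0.9, "outbreak": 0.1},
         "outbreak": {"baseline": 0.2, "outbreak": 0.8}}
RATES = {"baseline": 2.0, "outbreak": 8.0}            # mean reports per day

def log_poisson(k, lam):
    """Log-probability of observing k events under a Poisson(lam)."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def viterbi(counts):
    """Most likely hidden-state sequence for a series of daily counts."""
    v = {s: math.log(START[s]) + log_poisson(counts[0], RATES[s])
         for s in STATES}
    backpointers = []
    for k in counts[1:]:
        v_next, ptr = {}, {}
        for s in STATES:
            # Best predecessor state for s, scored in log space.
            prev = max(STATES, key=lambda p: v[p] + math.log(TRANS[p][s]))
            v_next[s] = (v[prev] + math.log(TRANS[prev][s])
                         + log_poisson(k, RATES[s]))
            ptr[s] = prev
        v = v_next
        backpointers.append(ptr)
    # Backtrack from the best final state.
    path = [max(v, key=v.get)]
    for ptr in reversed(backpointers):
        path.append(ptr[path[-1]])
    return list(reversed(path))

daily_reports = [1, 2, 3, 2, 9, 11, 8, 10, 2, 1]
print(viterbi(daily_reports))  # flags the four high-count days as "outbreak"
```

An inspector-facing system would raise an alert whenever the decoded state switches to "outbreak"; in the thesis the transition and emission parameters come from training on the ABM-generated data rather than being fixed by hand.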

    Earth as Interface: Exploring chemical senses with Multisensory HCI Design for Environmental Health Communication

    As environmental problems intensify, the chemical senses, that is, smell and taste, are the most relevant senses for evidencing them. The environmental exposure vectors that can reach human beings comprise air, food, soil and water [1]. Within this context, understanding the link between environmental exposures and health [2] is crucial to making informed choices, protecting the environment and adapting to new environmental conditions [3]. Smell and taste therefore lead to multi-sensorial experiences that convey multi-layered information about local and global events [4]. However, these senses are usually absent when those problems are represented in digital systems. The multisensory HCI design framework investigates the inclusion of the chemical senses in digital systems [5]. Ongoing efforts tackle the digitalization of smell and taste for digital delivery, transmission or substitution [6]. Although experiments have proved technological feasibility, dissemination depends on the development of relevant applications [7]. This thesis aims to fill those gaps by demonstrating how the chemical senses provide the means to link environment and health through scientific and geolocation narratives [8], [9], [10]. We present a multisensory HCI design process that accomplished the symbolic display of smell and taste and led us to a new multi-sensorial interaction system, presented herein. We describe the conceptualization, design and evaluation of Earthsensum, an exploratory case study project. Earthsensum offered 16 study participants environmental smell and taste experiences about real geolocations. These experiences were represented digitally using mobile virtual reality (MVR) and mobile augmented reality (MAR). These technologies bridge the real and digital worlds through digital representations in which we can reproduce the multi-sensorial experiences.
Our study findings showed that the proposed interaction system is intuitive and can lead not only to a better understanding of smell and taste perception but also of environmental problems. Participants' comprehension of the link between environmental exposures and health was successful, and they would recommend this system as an educational tool. Our conceptual design approach was validated and further developments were encouraged. In this thesis, we demonstrate how to apply multisensory HCI methodology to design with the chemical senses. We conclude that the presented symbolic representation model of smell and taste allows communicating these experiences on digital platforms. Due to their context-dependency, MVR and MAR platforms are adequate technologies for this purpose. Future developments intend to explore the conceptual approach further; in particular, they centre on using the system to induce behaviour change. This thesis opens up new application possibilities for digital chemical sense communication, multisensory HCI design and environmental health communication.

    Improving command selection in smart environments by exploiting spatial constancy

    With a steadily increasing number of digital devices, our environments are becoming increasingly smart: we can now use our tablets to control our TV, access our recipe database while cooking, and remotely turn lights on and off. Currently, this Human-Environment Interaction (HEI) is limited to in-place interfaces, where people have to walk up to a mounted set of switches and buttons, and navigation-based interaction, where people have to navigate on-screen menus, for example on a smartphone, tablet, or TV screen. Unfortunately, there are numerous scenarios in which neither of these two interaction paradigms provides fast and convenient access to digital artifacts and system commands. People, for example, might not want to touch an interaction device because their hands are dirty from cooking: they want device-free interaction. Or people might not want to look at a screen because it would interrupt their current task: they want system-feedback-free interaction. Currently, no interaction paradigm for smart environments affords these kinds of interactions. In my dissertation, I introduce room-based interaction to solve this problem of HEI. With room-based interaction, people associate digital artifacts and system commands with real-world objects in the environment and point toward these real-world proxy objects to select the associated digital artifact. The design of room-based interaction is informed by a theoretical analysis of navigation- and pointing-based selection techniques, in which I investigated the cognitive systems involved in executing a selection.
An evaluation of room-based interaction in three user studies and a comparison with existing HEI techniques revealed that room-based interaction solves many shortcomings of existing HEI techniques: the use of real-world proxy objects makes it easy for people to learn the interaction technique and to perform accurate pointing gestures, and it allows for system-feedback-free interaction; the use of the environment as a flat input space makes selections fast; and the use of mid-air full-arm pointing gestures allows for device-free interaction and increases awareness of others' interactions with the environment. Overall, I present an alternative selection paradigm for smart environments that is superior to existing techniques in many common HEI scenarios. This new paradigm can make HEI more user-friendly, broaden the use cases of smart environments, and increase their acceptance for the average user.
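The selection step of such a pointing technique can be sketched in a few lines. This is a minimal illustration, not the dissertation's implementation: the proxy objects, their room coordinates, and the 15-degree selection cone are all assumptions. The idea is simply that the proxy whose direction from the user makes the smallest angle with the pointing ray is selected.

```python
import math

# Hypothetical proxy objects and their (x, y, z) positions in the room,
# each standing in for a digital artifact or system command.
PROXIES = {
    "lamp":   (2.0, 0.5, 1.8),
    "stereo": (-1.5, 0.2, 0.9),
    "heater": (0.5, -2.0, 0.3),
}

def angle_between(u, v):
    """Angle in radians between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def select_proxy(user_pos, pointing_dir, max_angle=math.radians(15)):
    """Return the proxy the user points at, or None if no proxy
    falls inside the selection cone around the pointing ray."""
    best, best_angle = None, max_angle
    for name, pos in PROXIES.items():
        to_obj = tuple(p - u for p, u in zip(pos, user_pos))
        a = angle_between(pointing_dir, to_obj)
        if a < best_angle:
            best, best_angle = name, a
    return best

# User stands at the room origin (eye height 1.6 m) and points at the lamp.
print(select_proxy((0.0, 0.0, 1.6), (2.0, 0.5, 0.2)))  # -> lamp
```

Choosing the nearest proxy by angular rather than Euclidean distance is what makes the technique robust to imprecise full-arm gestures: accuracy depends only on pointing direction, not on how far away the proxy object is.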