
    Audification of Ultrasound for Human Echolocation

    Individuals with functional blindness often rely on assistive aids to complete tasks of daily living. One of these tasks, locomotion, poses considerable risk. The long white cane supports haptic exploration, but cannot detect obstacles that are not ground-based. Although devices have been developed to provide information above waist height, they do not offer auditory interfaces that are easy to learn; such devices should adapt to the user, not require adaptation by the user. Can obstacle avoidance be achieved through direct perception? This research presents an auditory interface designed with the user as the primary focus. An analysis of the tasks involved led to an interface that audifies ultrasound. Audification provides intuitive information that enables a perceptive response to environmental obstacles. A device was developed that produces Doppler-shift signals made audible through intentional aliasing. The system provides acoustic flow that is evident upon initiation of travel and has been shown to be effective for perceiving apertures and avoiding environmental obstacles. The orientation of the receivers on the device was also examined: distance perception and centreline accuracy were better when the receivers were oriented outward rather than forward. This novel user interface for visually impaired individuals also provides a tool for evaluating direct perception and acoustic flow in a manner that has not been studied before.
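The idea of making Doppler shifts audible through intentional aliasing can be illustrated with a minimal sketch. The carrier frequency, sample rate and walking speed below are illustrative assumptions, not values from the device described in the abstract: when an ultrasonic return is sampled at a rate that folds the carrier onto 0 Hz, only the Doppler shift survives in the audible band.

```python
# Minimal sketch: deliberate undersampling folds an ultrasonic Doppler
# return into the audible band. All parameters are illustrative
# assumptions, not values from the device described above.

C_SOUND = 343.0  # speed of sound in air, m/s

def doppler_shift(f_carrier_hz, speed_ms):
    """Round-trip Doppler shift for a reflector approached at speed_ms."""
    return 2.0 * speed_ms * f_carrier_hz / C_SOUND

def aliased_frequency(f_hz, fs_hz):
    """Apparent frequency after sampling a tone of f_hz at rate fs_hz."""
    return abs(f_hz - fs_hz * round(f_hz / fs_hz))

f0 = 40_000.0                    # hypothetical 40 kHz ultrasonic carrier
fs = 8_000.0                     # sample rate chosen so f0 aliases to 0 Hz
shift = doppler_shift(f0, 1.4)   # ~1.4 m/s walking speed -> ~327 Hz
echo = f0 + shift                # frequency of the returning echo

print(aliased_frequency(f0, fs))    # stationary carrier folds to DC (silent)
print(aliased_frequency(echo, fs))  # the Doppler shift lands in the audible range
```

With these assumed numbers the carrier disappears at DC while the approach-speed-dependent shift appears directly as an audible tone, which is the acoustic-flow cue the abstract describes.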

    Spatial Auditory Maps for Blind Travellers

    Empirical research shows that blind persons who have the ability and opportunity to access geographic map information tactually benefit in their mobility. Unfortunately, tangible maps are not available in large numbers, largely for economic reasons: tangible maps are expensive to build, duplicate and distribute. SAM, short for Spatial Auditory Map, is a prototype created to address the unavailability of tangible maps. SAM presents geographic information to a blind person encoded in sound. A blind person receives maps electronically and accesses them using a small, inexpensive digitizing tablet connected to a PC. The interface provides location-dependent sound as the user manipulates a stylus, plus a schematic visual representation for users with residual vision. An assessment of SAM with a group of blind participants suggests that blind users can learn unknown environments as complex as those represented by tactile maps, in the same amount of reading time. This research opens new avenues in visualization techniques, promotes alternative communication methods, and proposes a human-computer interaction framework for conveying map information to a blind person.
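The core of a SAM-style interface is the lookup from stylus position to location-dependent sound. A minimal sketch of that mapping, with entirely hypothetical features and sound names (nothing below is taken from the actual SAM prototype):

```python
# Minimal sketch of location-dependent sound lookup for an auditory map:
# the stylus position on the tablet selects whichever map feature lies
# underneath, and that feature's sound is played. Features, coordinates
# and sound names here are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    sound: str   # identifier of the audio clip to play
    x0: float    # bounding box on the tablet, in mm
    y0: float
    x1: float
    y1: float

    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

FEATURES = [
    Feature("Main Street", "road_tone.wav",  0, 40, 120,  50),
    Feature("City Park",   "birds.wav",     20, 60,  80, 100),
]

def sound_at(x, y):
    """Return the sound for the first feature under the stylus, or None."""
    for f in FEATURES:
        if f.contains(x, y):
            return f.sound
    return None  # silence between features

print(sound_at(60, 45))  # stylus over "Main Street"
```

In a real system the lookup would run on every stylus-move event and trigger audio playback; the first-match rule here is one simple way to resolve overlapping features.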

    Embodied Cognition In Auditory Display

    Presented at the 19th International Conference on Auditory Display (ICAD2013) on July 6-9, 2013 in Lodz, Poland. This paper makes a case for the use of an embodied cognition framework, based on embodied schemata and cross-domain mappings, in the design of auditory display. An overview of research relating auditory display to embodied cognition is provided to support such a framework. The paper then describes research efforts towards the development of this framework. By designing to support human cognitive competencies that are bound up with meaning making, the aim is to open the door to the creation of more meaningful and intuitive auditory displays.

    Auditory Displays for People with Visual Impairments during Travel

    People who are blind or visually impaired encounter numerous barriers when travelling, which affects their quality of life. Although specialised electronic travel aids have been a focus of research for many years, they are still rarely used by the target group. One reason is that the technology conveys the information users need only inadequately; another is that the interfaces rarely match users' needs. In this thesis we address these deficits and define the requirements for accessible travel in terms of information needs (what must be conveyed?) and non-functional requirements (how must it be conveyed?). We also propose several auditory displays that take the needs of people with visual impairments during travel into account. We design, implement and evaluate our interfaces following a user-centred approach, involving users and domain experts throughout the process. In a first step, we survey the information needs of people with disabilities in general, and of people with visual impairments in particular, when moving through buildings. We also compare the collected information with what can currently be mapped in OpenStreetMap (OSM), a free geographic database, and make suggestions for closing the gap. Our goal is to make it possible to map all the required information so that it can be used in solutions supporting independent travel. Having answered the question of which information is needed, we go on to answer the question of how it can be conveyed to users.
We define a collection of non-functional requirements, which we refine and assess in a survey with 22 mobility trainers. We then propose a grammar, that is, a structured way of conveying information, for navigation instructions during outdoor travel that takes road edges, the presence of sidewalks, and intersections into account, all important information for people with visual impairments. Our grammar can also convey landmarks, points of interest and obstacles, making the journey a more holistic and safer experience. We implement our grammar in an existing prototype and evaluate it with the target group. Indoors, descriptions of the surroundings have been shown to support the construction of mental maps, and thus to foster exploration and spontaneous decision-making better than navigation instructions do. We therefore define a grammar for conveying information about indoor surroundings to people with visual impairments. We evaluate the grammar in an online study with 8 users from the target group and show that users need structured sentences with a fixed word order. Finally, we implement the grammar as a proof of concept in an existing prototype app. Although speech output is the state of the art among output interfaces for people with visual impairments, it has drawbacks: it is inaccessible to people with reading difficulties and can be too slow for some users. We address this problem and investigate the use of sonification, in the form of auditory icons combined with parameter mapping, to convey information about objects and their location in the environment.
As an initial evaluation yielded positive results, we created, in a user-centred development process, a data set of short auditory icons for 40 everyday objects. We evaluate the data set with 16 blind people and show that the sounds are intuitive. Finally, in a user study with 5 participants, we compare speech output with non-speech sonification. We show that, in terms of usability, sonification is just as suitable as speech for conveying coarse information about objects in the environment. We conclude by listing advantages of speech and of sonification that can serve as a comparison and a decision aid. This thesis addresses the needs of people with visual impairments during travel with respect to the required information and interfaces. In a user-centred approach we propose several auditory interfaces based on speech and non-speech sonification. Through several user studies involving both users and experts, we design, implement and evaluate our interfaces. We show that electronic travel aids must be able to convey large amounts of information in a structured way, adapted to the context of use and to users' preferences and abilities.
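The combination of auditory icons with parameter mapping described above can be sketched as follows. The icon set, mapping ranges and parameter names are illustrative assumptions, not the design used in the thesis: the object's identity selects a short sound, while its location is mapped onto playback parameters such as gain and stereo pan.

```python
# Minimal sketch of auditory-icon + parameter-mapping sonification:
# an object's identity selects a short sound (auditory icon), while its
# location is mapped onto playback parameters. The icon names and the
# mapping ranges here are illustrative assumptions.

import math

AUDITORY_ICONS = {  # hypothetical icon set (2 of 40 everyday objects)
    "door":  "door_creak.wav",
    "chair": "chair_scrape.wav",
}

def sonify(obj, distance_m, azimuth_deg, max_distance_m=10.0):
    """Map object identity and location to an icon plus playback parameters."""
    proximity = max(0.0, 1.0 - distance_m / max_distance_m)
    return {
        "icon": AUDITORY_ICONS[obj],
        # nearer objects play louder ...
        "gain": round(proximity, 2),
        # ... and the object's bearing is rendered as stereo panning
        "pan": math.sin(math.radians(azimuth_deg)),  # -1 left .. +1 right
    }

print(sonify("door", 2.5, 90))  # a door, 2.5 m away, hard to the right
```

The point of the design is that identity and location arrive in one short, non-speech event, which is why such a scheme can compete with speech output for coarse information about the surroundings.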

    NoVA project final report


    Understanding and improving methods for exterior sound quality evaluation of hybrid and electric vehicles

    Electric and Hybrid Electric Vehicles [(H)EVs] are harder for pedestrians to hear when moving at speeds below 20 kph. Laws require (H)EVs to emit additional exterior sounds to alert pedestrians to the vehicle's approach and prevent potential collisions. These sounds also influence pedestrians' impression of the vehicle brand. Current methods for evaluating (H)EV exterior sounds focus on pedestrian safety but overlook the influence on vehicle brand, and do not balance experimental control and correct context with external and ecological validity. This research addresses the question: "How should (H)EV exterior sounds be evaluated?" It proposes an experimental methodology for evaluating (H)EV exterior sounds that assesses pedestrian safety and influence on the vehicle brand by measuring a listener's detection rate and sound quality evaluation of the (H)EV in a Virtual Environment (VE). The methodology was tested, improved and validated through three experimental studies, each building on the findings of the previous one. Study 1 examined the fidelity of the VE setup used for the experiments. The VE was immersive, with a sufficient degree of involvement/control, naturalness, resolution, and interface quality. The study also explored a new interactive way of evaluating (H)EV sounds in which participants freely navigate the VE and interact with vehicles more naturally. This interactivity increased the experiments' ecological validity but reduced reliability and quadrupled the experiment duration compared with using a predefined scenario (non-interactive mode); a predefined scenario is therefore preferred. Study 2 tested the non-interactive mode of the proposed methodology. Manipulating the target vehicle's manoeuvre by varying the vehicle's "arrival time", "approach direction" and "distance of travel" across the experimental conditions increased ecological validity, allowing participants to think, respond and pay attention much as a real-world pedestrian would.
These factors are neglected by existing methodologies, but were found to affect the participants' detection rate and impression of the vehicle brand. Participants sometimes detected the vehicle more than once because they confused it with real-world ambient sounds; in the real world, pedestrians continuously detect a vehicle in the presence of non-vehicular ambient sounds. Recommendations to better represent real-world vehicle detection in listening experiments therefore include an option to re-detect a vehicle and a subjective evaluation of the detectability of the vehicle sounds. The improved methodology adds detectability and recognisability of (H)EV sounds as measures, and the (H)EV's arrival time as an independent variable. The external validity of VEs is a much-debated yet unresolved topic. Study 3 tested the external validity of the improved methodology. The methodology accurately predicted participants' real-world evaluations of the detectability of (H)EV sounds, the rank order of the recognisability of (H)EV sounds, and their impressions of the vehicle brand. The vehicle's arrival time affected participants' detection rate and was reaffirmed as a key element of methodologies for vehicle-sound detection. The final methodological guidelines can help transportation researchers, automotive engineers and legislators determine how pedestrians will respond to new (H)EV sounds.

    Case study of information searching experiences of high school students with visual impairments in Taiwan


    Understanding and Enhancing Customer-Agent-Computer Interaction in Customer Service Settings

    Providing good customer service is crucial to many commercial organizations. The service can be provided through different means, such as e-commerce, call centres or face-to-face interaction. Although some service is provided through electronic or telephone-based interaction, it is common for the service to be provided by human agents. Many customer service interactions also involve a computer, for example an information system in which a travel agent finds suitable flights. This thesis seeks to understand the three channels of customer service interaction between the agent, customer and computer: Customer-Agent-Computer Interaction (CACI). A set of ethnographic studies was conducted at call centres to gain an initial understanding of CACI and to investigate the customer-computer channel. The findings revealed that CACI is more complicated than traditional Computer-Human Interaction (CHI) because a second person, the customer, is involved. For example, the agent provides considerable feedback about the computer to the customer, such as "I am waiting for the computer". Laboratory experiments were conducted to investigate the customer-computer channel by adding non-verbal auditory feedback about the computer directly for the customers. The findings showed only a small, non-significant difference in task completion time and subjective satisfaction, with indications of an improved flow of communication. Further experiments investigated how the two humans interact over two communication modes: face-to-face and telephone. Task completion time was significantly shorter via telephone. There was also a difference in communication style: face-to-face interaction involved more single activities, such as talking only, while the telephone condition involved more dual activities, such as talking while also searching. There was only a small difference in subjective satisfaction.
To investigate whether the findings from the laboratory experiments also held in a real situation, and to identify potential areas for improvement, a series of studies was conducted: observations and interviews at multiple travel agencies, one focus group, and a proof-of-concept study at one travel agency. The findings confirmed the results of the laboratory experiments. A number of potential interface improvements were also identified, such as a history mechanism and sharing part of the computer screen with the customer at the agent's discretion. The results of this thesis suggest that telephone interaction, although it contains fewer cues, is not necessarily an impoverished mode of communication: it is less time-consuming and more task-focused. Further, adding non-verbal auditory feedback did not enhance the interaction. The findings also suggest that customer service CACI is inherently different in nature and adds complications to traditional CHI issues.
