
    Which senses dominate at different stages of product experience?

    In the area of product design, sensory dominance can be defined as the relative importance of different sensory modalities for product experience. Since product experience is multisensory, it is useful to know which sensory modality plays a leading role in a particular experience, so that designers can concentrate on creating the most relevant product properties. It is often assumed that vision dominates the other senses. In the present study, we investigated the importance of different sensory modalities during various episodes of product usage. We asked 120 respondents to describe their experiences with consumer products in the following situations: while buying a product, and after the first week, the first month, and the first year of usage. The data suggest that the dominant modality depends on the period of product usage. At the moment of buying, vision is the most important modality, but at later stages other modalities become more important. The dominance of a particular modality may depend on its appropriateness for the particular task. During long-term usage, modality importance depends on product functions and the characteristics of the user-product interaction. We conclude that to create a long-lasting positive product experience, designers need to consider the user-product interaction at different stages of product usage and determine which sensory modality dominates product experience at each stage. Keywords: Sensory Dominance; User-Product Interaction; Product Design

    Exploring modality switching effects in negated sentences: further evidence for grounded representations

    Theories of embodied cognition (e.g., Perceptual Symbol Systems Theory; Barsalou, 1999, 2009) suggest that modality-specific simulations underlie the representation of concepts. Supporting evidence comes from modality switch costs: participants are slower to verify a property in one modality (e.g., auditory, BLENDER-loud) after verifying a property in a different modality (e.g., gustatory, CRANBERRIES-tart) compared to the same modality (e.g., LEAVES-rustling; Pecher et al., 2003). Similarly, modality switching costs lead to a modulation of the N400 effect in event-related potentials (ERPs; Collins et al., 2011; Hald et al., 2011). This effect of modality switching has also been shown to interact with the veracity of the sentence (Hald et al., 2011). The current ERP study further explores the role of modality match/mismatch on the processing of veracity as well as negation (sentences containing “not”). Our results indicate a modulation in the ERP based on modality and veracity, plus an interaction. The evidence supports the idea that modality-specific simulations occur during language processing, and furthermore suggests that these simulations alter the processing of negation.
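    The switch cost referenced above is a reaction-time difference: trials whose target property is in a different modality than the preceding trial's property are compared with trials where the modality repeats. The abstract does not include any analysis code, so the following is only a minimal sketch of how such a cost could be computed from trial-level data; the column names ('modality', 'rt', 'correct') are hypothetical.

```python
import pandas as pd

def modality_switch_cost(trials: pd.DataFrame) -> float:
    """Estimate the modality switch cost (in ms) from trials listed in
    presentation order. Assumed columns (hypothetical, not from the study):
    'modality' (e.g. 'auditory', 'gustatory'), 'rt' (ms), 'correct' (bool)."""
    trials = trials.reset_index(drop=True)
    # A trial counts as a switch when its modality differs from the previous trial's.
    switched = trials["modality"].ne(trials["modality"].shift())
    switched.iloc[0] = False  # the first trial has no predecessor
    # Use only trials where both the current and the preceding response were correct.
    usable = trials["correct"] & trials["correct"].shift(fill_value=False)
    switch_rt = trials.loc[usable & switched, "rt"].mean()
    repeat_rt = trials.loc[usable & ~switched, "rt"].mean()
    return switch_rt - repeat_rt
```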

    What the Future ‘Might’ Brings

    This paper concerns a puzzle about the interaction of epistemic modals and future tense. In cases of predictable forgetfulness, speakers cannot describe their future states of mind with epistemic modals under future tense, but promising theories of epistemic modals do not predict this. In §1, I outline the puzzle. In §2, I argue that it undermines a very general approach to epistemic modals that draws a tight connection between epistemic modality and evidence. In §3, I defend the assumption that tense can indeed scope over epistemic modals. In §4, I outline a new way of determining the domain of quantification of epistemic modals: epistemic modals quantify over the worlds compatible with the information accumulated within a certain interval. Information loss can change which interval is relevant for determining the domain. In §5, I defend the view from some objections. In §6, I explore the connections between my view of epistemic modality and circumstantial modality.
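    The proposal sketched in §4 can be glossed as a schematic truth condition. The formula below is a reconstruction of the idea that epistemic modals quantify over worlds compatible with the information accumulated within a contextually determined interval; the notation (Acc, i) is illustrative rather than the paper's own.

```latex
% Schematic truth condition for epistemic 'might' on the interval-based view.
% i(w,t): the interval relevant at world w and time t (information loss can
%         shift which interval is relevant);
% Acc(w,i): the worlds compatible with the information accumulated over i.
\[
  [\![ \text{might } \varphi ]\!]^{w,t} = 1
  \quad \text{iff} \quad
  \exists w' \in \mathrm{Acc}\bigl(w, i(w,t)\bigr) :
  [\![ \varphi ]\!]^{w',t} = 1
\]
```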

    Presentation modality influences behavioral measures of alerting, orienting, and executive control

    The Attention Network Test (ANT) uses visual stimuli to separately assess the attentional skills of alerting (improved performance following a warning cue), spatial orienting (an additional benefit when the warning cue also cues target location), and executive control (impaired performance when a target stimulus contains conflicting information). This study contrasted performance on auditory and visual versions of the ANT to determine whether the measures it obtains are influenced by presentation modality. Forty healthy volunteers completed both auditory and visual tests. Reaction-time measures of executive control were of a similar magnitude and significantly correlated, suggesting that executive control might be a supramodal resource. Measures of alerting were also comparable across tasks. In contrast, spatial-orienting benefits were obtained only in the visual task. Auditory spatial cues did not improve response times to auditory targets presented at the cued location. The different spatial-orienting measures could reflect either separate orienting resources for each perceptual modality, or an interaction between a supramodal orienting resource and modality-specific perceptual processing.
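    The abstract does not spell out how the three attention-network scores are derived, but in the classic visual ANT (Fan et al., 2002) they are computed as reaction-time differences between cue and flanker conditions. The sketch below shows that standard computation; it is not code from this study, and the condition names are illustrative.

```python
from statistics import mean

def ant_network_scores(rt_by_condition: dict[str, list[float]]) -> dict[str, float]:
    """Compute the three ANT network scores (ms) from per-condition reaction times.

    rt_by_condition maps condition names to lists of correct-trial RTs, e.g.
    'no_cue', 'double_cue', 'center_cue', 'spatial_cue', 'congruent',
    'incongruent' (illustrative names).
    """
    m = {condition: mean(rts) for condition, rts in rt_by_condition.items()}
    return {
        # Alerting: benefit of a temporal warning cue over no cue.
        "alerting": m["no_cue"] - m["double_cue"],
        # Orienting: extra benefit when the cue also indicates target location.
        "orienting": m["center_cue"] - m["spatial_cue"],
        # Executive control: cost of conflicting information in the target display.
        "executive_control": m["incongruent"] - m["congruent"],
    }
```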

    Surface electromyographic control of a novel phonemic interface for speech synthesis

    Many individuals with minimal movement capabilities use AAC to communicate. These individuals require both an interface with which to construct a message (e.g., a grid of letters) and an input modality with which to select targets. This study evaluated the interaction of two such systems: (a) an input modality using surface electromyography (sEMG) of spared facial musculature, and (b) an onscreen interface from which users select phonemic targets. These systems were evaluated in two experiments: (a) participants without motor impairments used the systems during a series of eight training sessions, and (b) one individual who uses AAC used the systems for two sessions. Both the phonemic interface and the electromyographic cursor show promise for future AAC applications. Funding: F31 DC014872, R01 DC002852, R01 DC007683 (NIDCD NIH HHS); T90 DA032484 (NIDA NIH HHS). Published version: https://www.ncbi.nlm.nih.gov/pubmed/?term=Surface+electromyographic+control+of+a+novel+phonemic+interface+for+speech+synthesis
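    The study pairs an sEMG input modality with an onscreen phonemic interface, but the abstract does not describe how muscle activity is translated into cursor movement. The sketch below is a purely hypothetical illustration of one way an sEMG-driven cursor could work (RMS amplitude per channel mapped to a 2-D velocity); the channel layout, threshold, and gain are assumptions, not details from the paper.

```python
import numpy as np

def semg_to_cursor_velocity(window: np.ndarray, gain: float = 0.5,
                            threshold: float = 0.05) -> tuple[float, float]:
    """Map a window of rectified sEMG samples to a 2-D cursor velocity.

    window: array of shape (n_samples, 4); channels are assumed to be ordered
    as (left, right, up, down) facial-muscle sites -- a hypothetical montage,
    not the one used in the study.
    Returns (vx, vy) in arbitrary screen units per update.
    """
    # Amplitude estimate per channel: root mean square over the window.
    rms = np.sqrt(np.mean(np.square(window), axis=0))
    # Suppress resting-level activity below a small threshold.
    rms = np.where(rms > threshold, rms - threshold, 0.0)
    left, right, up, down = rms
    # Opposing channels drive opposite directions of the onscreen cursor.
    vx = gain * (right - left)
    vy = gain * (up - down)
    return float(vx), float(vy)
```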

    Novel Multimodal Feedback Techniques for In-Car Mid-Air Gesture Interaction

    This paper presents an investigation into the effects of different feedback modalities on mid-air gesture interaction for infotainment systems in cars. Car crashes and near-crash events are most commonly caused by driver distraction. Mid-air interaction is a way of reducing driver distraction by reducing the visual demand of infotainment. Despite a range of available modalities, feedback in mid-air gesture systems is generally provided through visual displays. We conducted a simulated driving study to investigate how different types of multimodal feedback can support in-air gestures. The effects of different feedback modalities on eye-gaze behaviour and on the driving and gesturing tasks are considered. We found that feedback modality influenced gesturing behaviour. However, drivers corrected falsely executed gestures more often in non-visual conditions. Our findings show that non-visual feedback can reduce visual distraction significantly.