uC: Ubiquitous Collaboration Platform for Multimodal Team Interaction Support
A human-centered computing platform that improves teamwork and transforms the "human-computer interaction experience" for distributed teams is presented. The objective of this Ubiquitous Collaboration, or uC ("you see"), platform is to transform distributed teamwork (i.e., work occurring when teams of workers and learners are geographically dispersed and often interacting at different times). It achieves this goal through a multimodal team interaction interface realized through a reconfigurable open architecture. The approach taken is to integrate: (1) an intuitive speech- and video-centric multimodal interface to augment more conventional methods (e.g., mouse, stylus and touch), (2) an open and reconfigurable architecture supporting information gathering, and (3) a machine-intelligent approach to the analysis and management of heterogeneous live and stored sensor data to support collaboration. The system will transform how teams of people interact with computers by drawing on both the virtual and physical environment.
"Set phasors to stun": an algorithm to improve phase coherence on transients in multi-microphone recordings
Ever since the advent of multi-microphone recording, sound engineers have wrestled with the colouration of sound by phasing issues. For some this was anathema; for others this colouration was a crucial ingredient of the finished product. Traditionally, delicate microphone placement was essential, with subtle movements and tilts allowing the producer/engineer to determine, by perception alone, when a sound was "in phase". More recently, DAWs have allowed us to view multiple waveforms and manually nudge them into coherence, with visual feedback now supporting the aural, although this remains a manual process. This paper presents an algorithm that allows automatic correction of phase via a unique Max/MSP patch operating on multiple audio components simultaneously. With a single button push, the producer can now hear a stereo recording with maximum coherence and thus make an artistic judgment as to whether the "ideal" is in fact ideal, or whether naturally occurring phase colouration is preferable. In addition, the patch allows zoning in on spatially separated sound sources, e.g. tuning drum-kit overheads to phase-lock with the snare drum or hi-hat microphone. Audio examples will be played and the patch demonstrated in action. Limiting factors, contexts and applications will also be discussed.
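The abstract does not describe how the patch finds the offset between tracks; as an illustrative sketch only (not the authors' Max/MSP implementation), a common approach is to pick the lag that maximises the cross-correlation between a reference track and the track being aligned. The function name `align_phase` and its parameters are hypothetical:

```python
import numpy as np

def align_phase(reference: np.ndarray, track: np.ndarray, max_lag: int = 2048) -> np.ndarray:
    """Shift `track` so its transients line up with `reference`.

    Illustrative sketch: the lag maximising the cross-correlation
    within +/- max_lag samples is treated as the phase offset.
    """
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        shifted = np.roll(track, lag)          # circular shift as a stand-in for a delay line
        score = float(np.dot(reference, shifted))
        if score > best_score:
            best_score, best_lag = score, lag
    return np.roll(track, best_lag)
```

Applying this per microphone against, say, the snare-drum track would mirror the "zoning in" use case: each overhead is shifted until its correlation with the chosen reference peaks.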
A user perspective of quality of service in m-commerce
This is the post-print version of the article. The official published version can be accessed from the link below. Copyright © 2004 Springer Verlag.

In an m-commerce setting, the underlying communication system will have to provide a Quality of Service (QoS) in the presence of two competing factors: network bandwidth and, as the pressure grows to add value to the business-to-consumer (B2C) shopping experience by integrating multimedia applications, increasing data sizes. In this paper, developments in the area of QoS-dependent multimedia perceptual quality are reviewed and integrated with recent work focusing on QoS for e-commerce. Based on previously identified user perceptual tolerance to varying multimedia QoS, we show that enhancing the m-commerce B2C user experience with multimedia, far from being an idealised scenario, is in fact feasible if perceptual considerations are employed.
Towards disappearing user interfaces for ubiquitous computing: human enhancement from sixth sense to super senses
The electronic enhancement of human senses becomes possible when pervasive computers interact unnoticeably with humans in Ubiquitous Computing. Designing computer user interfaces towards "disappearing" forces interaction with humans through a content-driven rather than a menu-driven approach, thereby meeting the emerging requirement for a huge number of non-technical users to interface intuitively with billions of computers in the Internet of Things. Learning to use particular applications in Ubiquitous Computing is either too slow or sometimes impossible, so the design of user interfaces must be natural enough to facilitate intuitive human behaviours. Although humans from different racial, cultural and ethnic backgrounds share the same physiological sensory system, their perception of the same external stimuli can differ. A novel taxonomy for Disappearing User Interfaces (DUIs) to stimulate human senses and to capture human responses is proposed. Furthermore, applications of DUIs are reviewed, and DUIs combining sensor and data fusion to simulate a Sixth Sense are explored. The enhancement of human senses through DUIs and Context Awareness is discussed as groundwork enabling smarter wearable devices for interfacing with human emotional memories.
Conscious multisensory integration: Introducing a universal contextual field in biological and deep artificial neural networks
© 2020 The Authors. Published by Frontiers Media. This is an open access article available under a Creative Commons licence.
The published version can be accessed at the following link on the publisher's website: https://doi.org/10.3389/fncom.2020.00015

Conscious awareness plays a major role in human cognition and adaptive behaviour, though its function in multisensory integration is not yet fully understood; hence, questions remain: How does the brain integrate incoming multisensory signals with respect to different external environments? How are the roles of these multisensory signals defined so as to adhere to the anticipated behavioural constraints of the environment? This work articulates a novel theory of conscious multisensory integration that addresses these research challenges. Specifically, the well-established contextual field (CF) of pyramidal cells and coherent infomax theory [1][2] is split into two functionally distinctive integrated input fields: the local contextual field (LCF) and the universal contextual field (UCF). The LCF defines the modulatory sensory signal coming from other parts of the brain (in principle, from anywhere in space-time), and the UCF defines the outside environment and anticipated behaviour (based on past learning and reasoning). Both the LCF and UCF are integrated with the receptive field (RF) to develop a new class of contextually adaptive neuron (CAN), which adapts to changing environments. The proposed theory is evaluated using human contextual audio-visual (AV) speech modelling. Simulation results provide new insights into contextual modulation and the selective amplification/suppression of multisensory information. The central hypothesis reviewed here suggests that the pyramidal cell, in addition to the classical excitatory and inhibitory signals, receives LCF and UCF inputs. The UCF (as a steering force or tuner) plays a decisive role in precisely selecting whether to amplify or suppress the transmission of relevant or irrelevant feedforward signals without changing their content, e.g., which information is worth paying more attention to. This, as opposed to the unconditional excitatory and inhibitory activity in existing deep neural networks (DNNs), is called conditional amplification/suppression.
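The key claim, that context sets the gain of the feedforward signal without altering its content, can be illustrated with a toy neuron. This is an illustrative sketch only, not the paper's CAN model; the function `can_output` and its multiplicative gain rule are assumptions:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def can_output(rf: float, lcf: float, ucf: float) -> float:
    """Toy contextually-modulated neuron.

    The receptive-field drive `rf` carries the content; the local (lcf)
    and universal (ucf) contextual fields only set a multiplicative gain,
    so context amplifies or suppresses transmission without changing
    what is transmitted.
    """
    gain = sigmoid(lcf + ucf)  # context maps to a gain in (0, 1)
    return rf * gain           # sign and content of rf are preserved
```

Because the gain is multiplicative, supportive context pushes the output towards the raw drive while opposing context squashes it towards zero, in contrast to an additive excitatory/inhibitory term, which could change the sign of the output regardless of the input's content.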
- …