Spatially augmented audio delivery: applications of spatial sound awareness in sensor-equipped indoor environments
Current mainstream audio playback paradigms take no account of a user's physical location or orientation in the delivery of audio through headphones or speakers. Audio is thus presented as a static percept, even though the sound environment around us is naturally a dynamic 3D phenomenon, and this fails to take advantage of our innate psycho-acoustical perception of sound source locations around us.
Described in this paper is an operational platform which we have built to augment the sound from a generic set of wireless headphones. We do this in a way that overcomes the spatial awareness limitations of audio playback in indoor 3D environments which are both location-aware and sensor-equipped. This platform provides access to an audio-spatial presentation modality which by its nature lends itself to numerous cross-disciplinary applications. In the paper we present the platform and two demonstration applications.
An outdoor spatially-aware audio playback platform exemplified by a virtual zoo
Outlined in this short paper is a framework for the construction of outdoor location- and direction-aware audio applications, along with an example application to showcase the strengths of the framework and to demonstrate how it works. Although there has been previous work in this area concentrating on the spatial presentation of sound through wireless headphones, typically such sounds are presented as though originating from specific, defined spatial locations within a 3D environment. Allowing a user to move freely within this space while adjusting the sound dynamically, as we do here, further enhances the perceived reality of the virtual environment. Techniques to realise this are implemented by real-time adjustment of the two channels of audio presented to the headphones, using readings of the user's head orientation and location which in turn are made possible by sensors mounted upon the headphones.
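The real-time two-channel adjustment described above can be illustrated with a minimal constant-power panning sketch. The function name, angle conventions, and reduction to head yaw plus a single source bearing are our own illustrative assumptions; the platform's actual rendering pipeline is not specified here:

```python
import math

def stereo_gains(source_bearing_deg, head_yaw_deg):
    """Constant-power panning: map a source's bearing, relative to the
    listener's head yaw, onto left/right channel gains.

    Angles are in degrees; 0 = straight ahead, positive = to the right.
    """
    # Angle of the source relative to where the head is pointing,
    # normalised into [-180, 180).
    rel = (source_bearing_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    # Clamp to the frontal half-plane and map [-90, 90] -> [0, 1].
    pan = (max(-90.0, min(90.0, rel)) + 90.0) / 180.0
    # Constant-power law keeps overall loudness steady while panning.
    left = math.cos(pan * math.pi / 2.0)
    right = math.sin(pan * math.pi / 2.0)
    return left, right

# A source dead ahead is heard equally in both channels...
print(stereo_gains(0.0, 0.0))
# ...and moves into the right channel as the listener turns left.
print(stereo_gains(0.0, -90.0))
```

Re-evaluating these gains on every sensor reading is what makes the sound appear fixed in the world as the listener turns.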
Aside from proof-of-concept indoor applications, more user-responsive applications of spatial audio delivery have not been prototyped or explored. In this paper we present an audio-spatial presentation platform along with a primary demonstration application for an outdoor environment which we call a "virtual audio zoo". This application explores our techniques to further improve the realism of the audio-spatial environments we can create, and to assess what types of future application are possible.
Eye fixation related potentials in a target search task
Typically, BCIs (Brain Computer Interfaces) are found in rehabilitative or restorative applications, often allowing users a medium of communication that is otherwise unavailable through conventional means. Recently, however, there is growing interest in using BCIs to assist users in searching for images. A class of neural signals often leveraged in common BCI paradigms are ERPs (Event Related Potentials), which are present in the EEG (Electroencephalograph) signals of users in response to various sensory events. One such ERP is the P300, which is typically elicited in an oddball experiment where a subject's attention is orientated towards a deviant stimulus among a stream of presented images. It has been shown that these types of neural responses can be used to drive an image search or labeling task, where images can be ranked by examining the presence of such ERP signals in response to their display. To date, systems like these have been demonstrated when presenting sequences of images containing targets at up to 10 Hz; however, the target images in these tasks do not necessitate any kind of eye movement for their detection because the targets in the images are quite salient. In this paper we analyse the presence of discriminating signals when they are time-locked to eye fixations in a visual search task where detection of target images does require eye fixations.
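The fixation-locked analysis described above can be sketched in outline: cut epochs from the EEG time-locked to eye-fixation onsets rather than stimulus onsets, then score each epoch over the latency window where a P300-like deflection is expected. The function names, window lengths and single-channel simplification below are illustrative assumptions, not the paper's actual method:

```python
def fixation_locked_epochs(eeg, fs, fixation_samples, pre=0.2, post=0.8):
    """Cut fixed-length epochs from a single-channel EEG trace,
    time-locked to eye-fixation onsets (given as sample indices)
    rather than to stimulus onsets.
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for onset in fixation_samples:
        if onset - n_pre < 0 or onset + n_post > len(eeg):
            continue  # skip fixations too close to the recording edges
        epochs.append(eeg[onset - n_pre : onset + n_post])
    return epochs

def p300_window_score(epoch, fs, pre=0.2, lo=0.3, hi=0.6):
    """Crude per-epoch score: mean amplitude 300-600 ms after fixation
    onset, the latency range where a P300-like response is expected."""
    start = int((pre + lo) * fs)
    stop = int((pre + hi) * fs)
    window = epoch[start:stop]
    return sum(window) / len(window)
```

Ranking images by such per-epoch scores is the essence of the EEG-driven image search systems the abstract refers to; a real pipeline would replace the mean-amplitude score with a trained classifier across many channels.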
Optimising the number of channels in EEG-augmented image search
Recent proof-of-concept research has appeared showing the applicability of Brain Computer Interface (BCI) technology, in combination with the human visual system, to classifying images. The basic premise here is that images that arouse a participant's attention generate a detectable response in their brainwaves, measurable using an electroencephalograph (EEG). When a participant is given a target class of images to search for, each image belonging to that target class presented within a stream of images should elicit a distinctly detectable neural response. Previous work in this domain has primarily focused on validating the technique on proof-of-concept image sets that demonstrate desired properties, and on examining the capabilities of the technique at various image presentation speeds. In this paper we expand on this by examining the capability of the technique when using a reduced number of EEG channels, and the impact of this on detection accuracy.
A Second Crystal Polymorph of Anilinium Picrate
The crystal structure of a second monoclinic polymorph of anilinium picrate shows a three-dimensional hydrogen-bonded polymer with strong primary interspecies interactions involving the proximal phenolate and adjacent nitro group O-atom acceptors and separate anilinium H-atom donors in two cyclic R(6) associations. Other nitro-O...anilinium-H hydrogen bonds together with heteromolecular interactions are also present.
Adenosinium 3,5-dinitrosalicylate
The crystal structure of adenosinium 3,5-dinitrosalicylate, C10H14N5O4+·C7H3N2O7−, shows the presence of a primary chain structure formed through homomeric head-to-tail cyclic R22(10) hydrogen-bonding interactions between hydroxy O- and both purine and amine N-donor and acceptor groups of the furanose and purine moieties of the adenosinium species. These chain structures are related by crystallographic 21 symmetry. Secondary hetero-ionic hydrogen bonding involving the 3,5-dinitrosalicylate anion, including a cyclic R22(8) interaction between the carboxylate group and the protonated purine and amine groups of the adenosinium cation, is also present, together with heteromolecular π–π interactions, giving a three-dimensional hydrogen-bonded polymer structure.
Guanidinium 2-Carboxy-6-Nitrobenzoate Monohydrate: A Two-Dimensional Hydrogen-Bonded Network Structure
In the structure of the title compound, CH6N3+ . C8H4NO6- . H2O, obtained from the reaction of guanidine carbonate with 3-nitrophthalic acid, the 2-carboxylic acid group is deprotonated and participates in an asymmetric cyclic R2/1(6) hydrogen-bonding association with the guanidinium cation together with a bridging water molecule of solvation. A conjoint R2/1(7) facial association involving a nitro O-atom acceptor, together with a further five guanidinium N-H...O hydrogen bonds as well as a strong carboxyl-water interaction [2.528 (3) Å], gives a two-dimensional network structure.
An analysis of EEG signals present during target search
Recent proof-of-concept research has appeared highlighting the applicability of using Brain Computer Interface (BCI) technology to utilise a subject's visual system to classify images. This technique involves classifying a user's EEG (Electroencephalography) signals as they view images presented on a screen. The premise is that images (targets) that arouse a subject's attention generate distinct brain responses, and these brain responses can then be used to label the images. Research thus far in this domain has focused on examining the tasks and paradigms that can be used to elicit these neurologically informative signals from images, and the correlates of human perception that modulate them. While success has been shown in detecting these responses in high-speed presentation paradigms, there is still an open question as to what search tasks can ultimately benefit from using an EEG-based BCI system.
In this thesis we explore: (1) the neural signals present during visual search tasks that require eye movements, and how they inform us of the possibilities for BCI applications utilising eye tracking and EEG in combination with each other, (2) how temporal characteristics of eye movements can give an indication of the suitability of a search task to being augmented by an EEG-based BCI system, and (3) the characteristics of a number of paradigms that can be used to elicit informative neural responses to drive image search BCI applications.
In this thesis we demonstrate that EEG signals can be used in a discriminative manner to label images. In addition, we find that in certain instances, signals derived from sources such as eye movements can yield significantly more discriminative information.
Spatial Filtering Pipeline Evaluation of Cortically Coupled Computer Vision System for Rapid Serial Visual Presentation
Rapid Serial Visual Presentation (RSVP) is a paradigm that supports the application of cortically coupled computer vision to rapid image search. In RSVP, images are presented to participants in a rapid serial sequence which can evoke Event-Related Potentials (ERPs) detectable in their Electroencephalogram (EEG). The contemporary approach to this problem involves supervised spatial filtering techniques which are applied to enhance the discriminative information in the EEG data. In this paper we make two primary contributions to that field: 1) we propose a novel spatial filtering method which we call the Multiple Time Window LDA Beamformer (MTWLB) method; 2) we provide a comprehensive comparison of nine spatial filtering pipelines using three spatial filtering schemes, namely MTWLB, xDAWN and Common Spatial Patterns (CSP), and three linear classification methods, Linear Discriminant Analysis (LDA), Bayesian Linear Regression (BLR) and Logistic Regression (LR). Three pipelines without spatial filtering are used as a baseline comparison. The Area Under the Curve (AUC) is used as the evaluation metric in this paper. The results reveal that the MTWLB and xDAWN spatial filtering techniques enhance the classification performance of the pipeline but CSP does not. The results also support the conclusion that LR can be effective for RSVP-based BCI if discriminative features are available.
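As a concrete aside, the AUC metric used above has a simple rank-statistic interpretation (the Mann-Whitney U equivalence): it is the probability that a randomly chosen target epoch is scored higher than a randomly chosen non-target epoch. A minimal sketch of that computation, as our own illustration rather than the paper's evaluation code:

```python
def auc(scores_pos, scores_neg):
    """Area Under the ROC Curve via the Mann-Whitney U equivalence:
    the fraction of (target, non-target) score pairs in which the
    target epoch is ranked higher, counting ties as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half a correct ranking
    return wins / (len(scores_pos) * len(scores_neg))

# 8 of the 9 (target, non-target) pairs are ranked correctly.
print(auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2]))  # -> 0.8888888888888888
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which is why it is a natural metric for comparing classifier pipelines on the heavily class-imbalanced RSVP task.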
Bis(2-pyrimidinyl) disulfide dihydrate: a redetermination
The crystal structure of bis(2-pyrimidinyl) disulfide dihydrate, C8H6N4S2·2H2O, has been redetermined using CCD diffractometer data. This has allowed for a more precise location of the water H atoms and shows the water molecules forming unusual spiral hydrogen-bonded aqua columns, as well as giving inter-column crosslinks through the pyrimidine N-atom acceptors of the disulfide molecules. The structural chemistry of aromatic disulfides has also been reviewed.