    Spatially augmented audio delivery: applications of spatial sound awareness in sensor-equipped indoor environments

    Current mainstream audio playback paradigms take no account of a user's physical location or orientation when delivering audio through headphones or speakers. Audio is thus usually presented as a static perception, although it is naturally a dynamic 3D phenomenon, and this fails to take advantage of our innate psycho-acoustic perception of sound source locations around us. Described in this paper is an operational platform which we have built to augment the sound from a generic set of wireless headphones. We do this in a way that overcomes the spatial awareness limitation of audio playback in indoor 3D environments which are both location-aware and sensor-equipped. This platform provides access to an audio-spatial presentation modality which by its nature lends itself to numerous cross-disciplinary applications. In the paper we present the platform and two demonstration applications.
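    The core of such a platform is re-rendering a source's two output channels from the listener's pose. As a minimal sketch (the function names and the constant-power panning law are our illustrative choices, not the paper's implementation), the per-ear gains can be derived from the azimuth of the source relative to the head:

```python
import math

def stereo_gains(listener_pos, listener_yaw, source_pos):
    """Per-ear gains for a fixed source, given the listener's pose.

    listener_yaw is head orientation in radians (0 = facing +x, with
    y pointing left). Illustrative sketch, not the paper's implementation.
    """
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    # Azimuth of the source relative to where the head is facing;
    # positive means the source is to the listener's left.
    azimuth = math.atan2(dy, dx) - listener_yaw
    # Map azimuth to a pan position in [-1, 1] (-1 = hard left, 1 = hard right).
    pan = max(-1.0, min(1.0, -math.sin(azimuth)))
    # Constant-power panning keeps overall loudness steady while panning.
    theta = (pan + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)  # (left_gain, right_gain)
```

    Re-evaluating these gains on every sensor update is what turns a static stereo mix into a perception that tracks head movement.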

    An outdoor spatially-aware audio playback platform exemplified by a virtual zoo

    Outlined in this short paper is a framework for the construction of outdoor location- and direction-aware audio applications, along with an example application to showcase the strengths of the framework and to demonstrate how it works. Although there has been previous work in this area concentrating on the spatial presentation of sound through wireless headphones, typically such sounds are presented as though originating from specific, defined spatial locations within a 3D environment. Allowing a user to move freely within this space and adjusting the sound dynamically, as we do here, further enhances the perceived reality of the virtual environment. Techniques to realise this are implemented by real-time adjustment of the two audio channels presented to the headphones, using readings of the user's head orientation and location which are in turn made possible by sensors mounted upon the headphones. Aside from proof-of-concept indoor applications, more user-responsive applications of spatial audio delivery have not been prototyped or explored. In this paper we present an audio-spatial presentation platform along with a primary demonstration application for an outdoor environment which we call a "virtual audio zoo". This application explores our techniques to further improve the realism of the audio-spatial environments we can create, and to assess what types of future application are possible.
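    In an outdoor space, distance matters as much as direction: each virtual animal's loudness should fall off as the user walks away from it. A simple inverse-distance rolloff (parameter names and values here are illustrative assumptions, not taken from the paper) can modulate each source's gain per sensor update:

```python
import math

def source_gain(listener_pos, source_pos, ref_dist=1.0, rolloff=1.0):
    """Inverse-distance gain for an outdoor virtual sound source.

    Full gain inside ref_dist (metres), then a smooth rolloff beyond it.
    Hypothetical parameterisation for illustration only.
    """
    d = math.dist(listener_pos, source_pos)
    return ref_dist / (ref_dist + rolloff * max(d - ref_dist, 0.0))
```

    Combined with orientation-based panning, recomputing this gain for every source as GPS and head-tracker readings arrive gives each virtual enclosure a stable position in the physical space.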

    Eye fixation related potentials in a target search task

    Typically, BCIs (Brain Computer Interfaces) are found in rehabilitative or restorative applications, often allowing users a medium of communication that is otherwise unavailable through conventional means. Recently, however, there is growing interest in using BCI to assist users in searching for images. A class of neural signals often leveraged in common BCI paradigms are ERPs (Event Related Potentials), which are present in the EEG (Electroencephalogram) signals recorded from users in response to various sensory events. One such ERP is the P300, which is typically elicited in an oddball experiment where a subject's attention is oriented towards a deviant stimulus among a stream of presented images. It has been shown that these types of neural responses can be used to drive an image search or labelling task, where images can be ranked by examining the presence of such ERP signals in response to their display. To date, systems like these have been demonstrated when presenting sequences of images containing targets at up to 10 Hz; however, the target images in these tasks do not necessitate any kind of eye movement for their detection because the targets in the images are quite salient. In this paper we analyse the presence of discriminating signals when they are offset to the time of eye fixations in a visual search task where detection of target images does require eye fixations.
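    Analysing fixation-related potentials starts by cutting the continuous EEG into epochs time-locked to fixation onsets rather than stimulus onsets. A minimal sketch of that epoching step (function and parameter names are our own, not the authors'):

```python
import numpy as np

def epoch_around_fixations(eeg, fix_samples, sfreq, tmin=-0.1, tmax=0.5):
    """Cut fixed-length epochs from continuous EEG, locked to fixation onsets.

    eeg: (n_channels, n_samples) array; fix_samples: fixation-onset sample
    indices; sfreq: sampling rate in Hz. Epochs whose window falls outside
    the recording are dropped. Illustrative sketch, not the paper's code.
    """
    pre = int(round(-tmin * sfreq))
    post = int(round(tmax * sfreq))
    epochs = []
    for s in fix_samples:
        if s - pre >= 0 and s + post <= eeg.shape[1]:
            ep = eeg[:, s - pre : s + post].copy()
            # Baseline-correct using the pre-fixation interval.
            ep -= ep[:, :pre].mean(axis=1, keepdims=True)
            epochs.append(ep)
    return np.stack(epochs)  # (n_epochs, n_channels, n_times)
```

    Averaging or classifying such fixation-locked epochs is what allows discriminating responses to be detected even when targets require eye movements to find.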

    Optimising the number of channels in EEG-augmented image search

    Recent proof-of-concept research has appeared showing the applicability of Brain Computer Interface (BCI) technology, in combination with the human visual system, to classifying images. The basic premise is that images which arouse a participant's attention generate a detectable response in their brainwaves, measurable using an electroencephalograph (EEG). When a participant is given a target class of images to search for, each image belonging to that target class presented within a stream of images should elicit a distinctly detectable neural response. Previous work in this domain has primarily focused on validating the technique on proof-of-concept image sets that demonstrate desired properties, and on examining the capabilities of the technique at various image presentation speeds. In this paper we expand on this by examining the capability of the technique when using a reduced number of EEG channels, and the impact this has on detection accuracy.
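    The channel-reduction experiment can be sketched as a simple harness: train the same classifier on progressively smaller channel subsets and compare cross-validated accuracy. The setup below is illustrative (synthetic shapes, our own names, and plain LDA as a stand-in classifier), not the paper's pipeline:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def accuracy_vs_channels(X, y, channel_subsets, cv=5):
    """Score a classifier as EEG channels are progressively dropped.

    X: (n_trials, n_channels, n_times) epoched EEG; channel_subsets: list of
    channel-index lists to keep. Returns mean CV accuracy per subset.
    Schematic harness for the channel-reduction question, not the paper's code.
    """
    scores = []
    for keep in channel_subsets:
        # Flatten the kept channels x time into one feature vector per trial.
        Xk = X[:, keep, :].reshape(len(X), -1)
        clf = LinearDiscriminantAnalysis()
        scores.append(cross_val_score(clf, Xk, y, cv=cv).mean())
    return scores
```

    Plotting these scores against subset size shows how quickly detection accuracy degrades as electrodes are removed.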

    Spatial Filtering Pipeline Evaluation of Cortically Coupled Computer Vision System for Rapid Serial Visual Presentation

    Rapid Serial Visual Presentation (RSVP) is a paradigm that supports the application of cortically coupled computer vision to rapid image search. In RSVP, images are presented to participants in a rapid serial sequence which can evoke Event-Related Potentials (ERPs) detectable in their Electroencephalogram (EEG). The contemporary approach to this problem involves supervised spatial filtering techniques, applied for the purpose of enhancing the discriminative information in the EEG data. In this paper we make two primary contributions to that field: 1) we propose a novel spatial filtering method which we call the Multiple Time Window LDA Beamformer (MTWLB) method; 2) we provide a comprehensive comparison of nine spatial filtering pipelines, combining three spatial filtering schemes, namely MTWLB, xDAWN and Common Spatial Pattern (CSP), with three linear classification methods, namely Linear Discriminant Analysis (LDA), Bayesian Linear Regression (BLR) and Logistic Regression (LR). Three pipelines without spatial filtering are used as baselines. Area Under Curve (AUC) is used as the evaluation metric in this paper. The results reveal that the MTWLB and xDAWN spatial filtering techniques enhance the classification performance of the pipeline but CSP does not. The results also support the conclusion that LR can be effective for RSVP-based BCI if discriminative features are available.
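    The LDA-beamformer family of filters projects multichannel EEG onto a single virtual channel w = C^-1 (p1 - p0), where p1 and p0 are the class-mean spatial patterns and C is the channel covariance. A simplified one-window sketch of that idea (the MTWLB method fits separate filters per time window; this is not the paper's code):

```python
import numpy as np

def lda_spatial_filter(X, y):
    """One-component LDA-beamformer-style spatial filter.

    X: (n_trials, n_channels, n_times) epoched EEG; y: binary labels.
    The filter is w = C^-1 (p1 - p0), where p_k is the class-mean spatial
    pattern averaged over time and C is the pooled channel covariance.
    Simplified single-window sketch, not the paper's MTWLB method.
    """
    p1 = X[y == 1].mean(axis=(0, 2))
    p0 = X[y == 0].mean(axis=(0, 2))
    # Channel covariance pooled over all trials and time points.
    flat = X.transpose(1, 0, 2).reshape(X.shape[1], -1)
    C = np.cov(flat)
    # A small ridge term keeps the solve stable if C is near-singular.
    w = np.linalg.solve(C + 1e-6 * np.eye(len(C)), p1 - p0)
    return w  # apply as (X * w[None, :, None]).sum(axis=1)
```

    The filtered single-channel time courses then feed the downstream linear classifier, whose ranking quality can be summarised by AUC.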

    Character.


    Semiconductor filled microstructured optical fibres with single mode guidance

    Microstructured optical fibre (MOF) technology has generated new opportunities for the implementation of optical fibres with novel properties and functions [1]. It has been shown that silica MOFs make excellent 3D templates for semiconductor material deposition inside the capillary voids [2]. Recently a silicon MOF was designed and fabricated that had a high-refractive-index, micron-sized core, yet only supported two guided modes [3]. This structure was realised via the complete filling of a hollow-core photonic bandgap fibre (PBGF) with silicon, so that the original air-guiding PBGF was converted to a total-internal-reflection guiding fibre. Here, we extend the investigation by using a finite element method to model the optical properties of semiconductor-filled MOFs of similar structures, with the aim of achieving broadband single-mode guidance. Strategies to achieve single-mode guidance, both through the MOF template design and through the selective filling of the voids of the original PBGF with semiconductor materials of different indices (silicon, silicon nitride, germanium), are proposed and investigated numerically. In particular, by selectively filling MOF templates with cladding rods that have a slightly raised index over that of the core, index-guiding single-mode operation can be observed in high-index micron-sized cores. Small index differences are achievable by controlling the nitrogen content in SiNx, and an example of a single-mode semiconductor MOF is shown in Figure 1, where the confinement loss of the fundamental mode is ~10^6 times lower than that of the lowest-order cladding mode.
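    The reason small core-cladding index contrast matters can be seen from the standard step-index single-mode condition V < 2.405, where V is the normalized frequency. A quick check with illustrative numbers (our own, not taken from the paper):

```python
import math

def v_number(core_radius_um, wavelength_um, n_core, n_clad):
    """Normalized frequency V of a step-index fibre; V < 2.405 => single mode."""
    na = math.sqrt(n_core**2 - n_clad**2)  # numerical aperture
    return 2 * math.pi * core_radius_um / wavelength_um * na
```

    For example, a 1 um radius silicon core (n ~ 3.48) against a SiNx-tuned cladding index of 3.47 gives V ~ 1.1 at 1.55 um (single mode), whereas the same core against silica (n ~ 1.45) gives V ~ 13, i.e. heavily multimode, which is why the index contrast must be kept small.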

    Tax planning through the use of advance rulings


    Adolescents' and parents' views of Child and Adolescent Mental Health Services (CAMHS) in Ireland

    Aim: To explore adolescents' and parents' experiences of CAMHS in relation to accessibility, approachability, and appropriateness. Methods: Using a descriptive qualitative design, a combination of focus group and individual interviews were conducted with adolescents (n=15) and parents (n=32) from three mental health clinics. Data were transcribed verbatim and analysed using thematic analysis. Results: Accessing mental health services was a challenging experience due to knowledge deficits, lack of information and limited availability of specialist services. Participants desired more information, involvement in decision-making, individual and shared consultations, flexible scheduling of appointments, continuity with clinicians, school support and parent support groups. Participants seemed generally satisfied; however, adolescents felt less involved in decision-making than they would have liked. Frequent staff changes were problematic, as they disrupted continuity of care and hindered the formation of a trusting relationship. Implications for practice: Parents and adolescents expressed similar views of the positive and negative aspects of mental health services. Their need for more information-sharing and involvement in decision-making underlines the importance of collaborative practice. Clinician continuity contributed to trusting therapeutic relationships and was valued. These are key principles that, with attention, could lead to quality service provision for adolescents and families.

    Synthetic-Neuroscore: Using A Neuro-AI Interface for Evaluating Generative Adversarial Networks

    Generative adversarial networks (GANs) are increasingly attracting attention in computer vision, natural language processing, speech synthesis and similar domains. Arguably the most striking results have been in the area of image synthesis. However, evaluating the performance of GANs is still an open and challenging problem. Existing evaluation metrics primarily measure the dissimilarity between real and generated images using automated statistical methods. They often require large sample sizes for evaluation and do not directly reflect human perception of image quality. In this work, we describe an evaluation metric we call Neuroscore, for evaluating the performance of GANs, that more directly reflects psychoperceptual image quality through the utilization of brain signals. Our results show that Neuroscore has superior performance to current evaluation metrics in that: (1) it is more consistent with human judgment; (2) the evaluation process requires a much smaller number of samples; and (3) it is able to rank the quality of images on a per-GAN basis. A convolutional neural network (CNN) based neuro-AI interface is proposed to predict Neuroscore from GAN-generated images directly, without the need for neural responses. Importantly, we show that including neural responses during the training phase of the network can significantly improve the prediction capability of the proposed model. Materials related to this work are provided at https://github.com/villawang/Neuro-AI-Interface
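    Once a per-image quality score is available, ranking generators reduces to averaging predictions per model. A trivial sketch of that ranking step (the interface and names are hypothetical; in the paper the per-image scores come from the neuro-AI model):

```python
import numpy as np

def rank_models(scores_per_model):
    """Rank generative models by the mean of per-image quality scores.

    scores_per_model: dict mapping model name -> 1-D array of scores for
    that model's generated images. Returns model names, best first.
    Hypothetical interface for illustration only.
    """
    means = {m: float(np.mean(s)) for m, s in scores_per_model.items()}
    return sorted(means, key=means.get, reverse=True)
```

    Comparing such a ranking against human-judged orderings is one way a metric's consistency with human perception can be assessed.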