373 research outputs found

    Studying the integrated functional cognitive basis of sustained attention with a Primed Subjective-Illusory-Contour Attention Task

    Sustained attention plays an important role in everyday life, whether at work, in learning, or when affected by attention disorders. Studies of the neural correlates of attention commonly treat sustained attention as an isolated construct, measured with computerised continuous performance tests. However, in any ecological context, sustained attention interacts with other executive functions and depends on lower-level perceptual processing. Such interactions occur, for example, in inhibition of interference and in the processing of complex hierarchical stimuli, both of which are important for successful ecological attention. Motivated by the need for more studies on the neural correlates of higher cognition, I present an experiment investigating these interactions of attention in 17 healthy participants measured with high-resolution electroencephalography. Participants perform a novel 2-alternative forced-choice computerised performance test, the Primed Subjective Illusory Contour Attention Task (PSICAT), which presents gestalt-stimulus targets with distractor primes to induce interference inhibition during complex-percept processing. Using behavioural and brain-imaging analyses, I demonstrate the novel result that task-irrelevant incongruency can evoke stronger behavioural and neural responses than the task-relevant stimulus condition, a potentially important finding in attention disorder research. PSICAT is available as an open-source code repository at the following URL, allowing researchers to reuse and adapt it to their requirements.
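The PSICAT implementation itself lives in the linked repository; purely as an illustration of the paradigm described above, the sketch below shows how a balanced list of congruent and incongruent prime-target trials for a primed 2AFC task might be generated (all names and counts here are hypothetical, not taken from PSICAT).

```python
import random

# Hypothetical trial generator for a primed 2AFC task: each trial pairs a
# gestalt target (e.g. an illusory contour pointing left/right) with a prime
# that is either congruent or incongruent with the correct response.
TARGETS = ["left", "right"]           # the two forced-choice responses
CONGRUENCY = ["congruent", "incongruent"]

def make_trials(n_per_cell=40, seed=0):
    rng = random.Random(seed)
    trials = []
    for target in TARGETS:
        for congruency in CONGRUENCY:
            for _ in range(n_per_cell):
                prime = target if congruency == "congruent" else \
                        ("right" if target == "left" else "left")
                trials.append({"target": target,
                               "prime": prime,
                               "congruency": congruency})
    rng.shuffle(trials)               # randomise presentation order
    return trials

trials = make_trials()
print(len(trials), trials[0])
```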

    Visualisation of Articular Cartilage Microstructure

    This thesis developed image processing techniques enabling the detection and segregation of three-dimensional biological images into their component features, based upon the shape and relative size of the features detected. The work used articular cartilage images and separated the fibrous components from the cells and background noise. Measurement of individual components and their recombination into a composite image are possible. The developed software was used to analyse the development of hyaline cartilage in developing sheep embryos.
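As a rough illustration of the kind of component separation described above, the sketch below labels a binary 3D volume and splits its connected components by voxel count using scikit-image; the size threshold and the synthetic data are placeholders, and the thesis's own criteria also incorporate shape.

```python
import numpy as np
from skimage.measure import label, regionprops

# Illustrative only: split a binary 3D volume into "large" (e.g. fibre-like)
# and "small" (e.g. cell-like) components by voxel count; the threshold is a
# placeholder, not a value from the thesis.
def split_components(binary_volume, size_threshold=500):
    labels = label(binary_volume, connectivity=1)
    large = np.zeros_like(binary_volume, dtype=bool)
    small = np.zeros_like(binary_volume, dtype=bool)
    for region in regionprops(labels):
        mask = labels == region.label
        if region.area >= size_threshold:   # 'area' is the voxel count in 3D
            large |= mask
        else:
            small |= mask
    return large, small

volume = np.random.rand(32, 32, 32) > 0.7   # synthetic stand-in data
large, small = split_components(volume)
```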

    Methods for multi-spectral image fusion: identifying stable and repeatable information across the visible and infrared spectra

    Fusion of images captured from different viewpoints is a well-known challenge in computer vision with many established approaches and applications; however, if the observations are captured by sensors also separated by wavelength, this challenge is compounded significantly. This dissertation presents an investigation into the fusion of visible and thermal image information from two front-facing sensors mounted side-by-side. The primary focus of this work is the development of methods that enable us to map and overlay multi-spectral information; the goal is to establish a combined image in which each pixel contains both colour and thermal information. Pixel-level fusion of these distinct modalities is approached using computational stereo methods; the focus is on the viewpoint alignment and correspondence search/matching stages of processing. Frequency domain analysis is performed using a method called phase congruency. An extensive investigation of this method is carried out with two major objectives: to identify predictable relationships between the elements extracted from each modality, and to establish a stable representation of the common information captured by both sensors. Phase congruency is shown to be a stable edge detector and repeatable spatial similarity measure for multi-spectral information; this result forms the basis for the methods developed in the subsequent chapters of this work. The feasibility of automatic alignment with sparse feature-correspondence methods is investigated. It is found that conventional methods fail to match inter-spectrum correspondences, motivating the development of an edge orientation histogram (EOH) descriptor which incorporates elements of the phase congruency process. A cost function, which incorporates the outputs of the phase congruency process and the mutual information similarity measure, is developed for computational stereo correspondence matching. An evaluation of the proposed cost function shows it to be an effective similarity measure for multi-spectral information
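For context, the mutual information term used alongside phase congruency in the proposed cost function can be illustrated with the standard histogram-based estimate below; this is a generic sketch, not the dissertation's exact formulation.

```python
import numpy as np

# Standard histogram-based mutual information (MI) between two image patches,
# a common similarity measure for multi-spectral matching. Data are synthetic
# stand-ins for a visible/thermal patch pair.
def mutual_information(patch_a, patch_b, bins=32):
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)             # marginal of patch_a
    py = pxy.sum(axis=0, keepdims=True)             # marginal of patch_b
    nz = pxy > 0                                    # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

visible = np.random.rand(64, 64)
thermal = 0.5 * visible + 0.5 * np.random.rand(64, 64)  # loosely correlated
print(mutual_information(visible, thermal))
```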

    Bodies in the Brain


    Investigating the neural mechanisms underlying audio-visual perception using electroencephalography (EEG)

    Traditionally, research into how we perceive our external world has focused on the unisensory approach, examining how information is processed by one sense at a time. This produced a vast literature of results revealing how our brains process information from the different senses, from fields such as psychophysics, animal electrophysiology, and neuroimaging. However, we know from our own experiences that we use more than one sense at a time to understand our external world. Therefore, to fully understand perception, we must understand not only how the brain processes information from individual sensory modalities, but also how and when this information interacts and combines with information from other modalities. In short, we need to understand the phenomenon of multisensory perception. The work in this thesis describes three experiments aimed at providing new insights into this topic. Specifically, the three experiments presented here focused on examining when and where effects related to multisensory perception emerged in neural signals, and whether or not these effects could be related to behaviour in a time-resolved way and on a trial-by-trial basis. These experiments were carried out using a novel combination of psychophysics, high-density electroencephalography (EEG), and advanced computational methods (linear discriminant analysis and mutual information analysis). Experiment 1 (Chapter 3) investigated how behavioural and neural signals are modulated by the reliability of sensory information. Previous work has shown that subjects will weight sensory cues in proportion to their relative reliabilities; high-reliability cues are assigned a higher weight and have more influence on the final perceptual estimate, while low-reliability cues are assigned a lower weight and have less influence. Despite this widespread finding, it remains unclear when neural correlates of sensory reliability emerge during a trial, and whether or not modulations in neural signals due to reliability relate to modulations in behavioural reweighting. To investigate these questions, we used a combination of psychophysics, EEG-based neuroimaging, single-trial decoding, and regression modelling. Subjects performed an audio-visual rate discrimination task where the modality (auditory, visual, audio-visual), stimulus stream rate (8 to 14 Hz), visual reliability (high/low), and congruency in rate between audio-visual stimuli (± 2 Hz) were systematically manipulated. For the behavioural and EEG components (derived using linear discriminant analysis), a set of perceptual and neural weights were calculated for each time point. The behavioural results revealed that participants weighted sensory information based on reliability: as visual reliability decreased, auditory weighting increased. These modulations in perceptual weights emerged early after stimulus onset (48 ms). The EEG data revealed that neural correlates of sensory reliability and perceptual weighting were also evident in decoding signals, and that these occurred surprisingly early in the trial (84 ms). Finally, source localisation suggested that these correlates originated in early sensory (occipital/temporal) and parietal regions respectively. Overall, these results provide the first insights into the temporal dynamics underlying human cue weighting in the brain, and suggest that it is an early, dynamic, and distributed process.
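For reference, the reliability-weighted cue combination rule that this literature builds on can be sketched as follows; this is the textbook inverse-variance rule, not the specific model fitted in the thesis.

```python
import numpy as np

# Textbook reliability-weighted cue combination: each cue is weighted in
# proportion to its reliability (inverse variance), so the less reliable cue
# contributes less to the combined estimate.
def combine_cues(est_a, var_a, est_v, var_v):
    r_a, r_v = 1.0 / var_a, 1.0 / var_v      # reliabilities
    w_a = r_a / (r_a + r_v)                  # auditory weight
    w_v = r_v / (r_a + r_v)                  # visual weight
    combined = w_a * est_a + w_v * est_v
    combined_var = 1.0 / (r_a + r_v)
    return combined, combined_var, (w_a, w_v)

# Example: lowering visual reliability (raising its variance) shifts weight
# towards audition, mirroring the behavioural pattern described above.
print(combine_cues(est_a=10.0, var_a=1.0, est_v=12.0, var_v=4.0))
```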
Experiment 2 (Chapter 4) expanded on this work by investigating how oscillatory power was modulated by the reliability of sensory information. To this end, we used a time-frequency approach to analyse the data collected for the work in Chapter 3. Our results showed that significant effects in the theta and alpha bands over fronto-central regions occurred during the same early time windows as a shift in perceptual weighting (100 ms and 250 ms respectively). Specifically, we found that theta power (4–6 Hz) was lower and alpha power (10–12 Hz) was higher in audio-visual conditions where visual reliability was low, relative to conditions where visual reliability was high. These results suggest that changes in oscillatory power may underlie reliability-based cue weighting in the brain, and that these changes occur early during the sensory integration process. Finally, Experiment 3 (Chapter 5) moved away from examining reliability-based cue weighting and focused on investigating cases where spatially and temporally incongruent auditory and visual cues interact to affect behaviour. Such effects, known collectively as “cross-modal associations”, have been shown in past work to give observers preferred and non-preferred stimulus pairings. For example, subjects will frequently pair high-pitched tones with small objects and low-pitched tones with large objects. However, it is still unclear when and where these associations are reflected in neural signals, and whether they emerge at an early perceptual level or a later decisional level. To investigate these questions, we used a modified version of the implicit association test (IAT) to examine the modulation of behavioural and neural signals underlying an auditory pitch – visual size cross-modal association. Congruency was manipulated by assigning two stimuli (one auditory and one visual) to each of the left or right response keys and changing this assignment across blocks to create congruent (left key: high tone – small circle, right key: low tone – large circle) and incongruent (left key: low tone – small circle, right key: high tone – large circle) pairings of stimuli. On each trial, subjects were presented with only one of the four stimuli (auditory high tone, auditory low tone, visual small circle, visual large circle), and asked to report which had been presented as quickly and accurately as possible. The key assumption with such a design is that subjects should respond faster when associated (i.e. congruent) stimuli are assigned to the same response key than when two non-associated stimuli are. In line with this, our behavioural results demonstrated that subjects responded faster on blocks where congruent pairings of stimuli were assigned to the response keys (high pitch-small circle and low pitch-large circle) than on blocks where incongruent pairings were. The EEG results demonstrated that information about auditory pitch and visual size could be extracted from neural signals using two approaches to single-trial analysis (linear discriminant analysis and mutual information analysis) early during the trial (50 ms), with the strongest information contained over posterior and temporal electrodes for auditory trials, and posterior electrodes for visual trials. EEG components related to auditory pitch were significantly modulated by cross-modal congruency over temporal and frontal regions early in the trial (~100 ms), while EEG components related to visual size were modulated later (~220 ms) over frontal and temporal electrodes.
For the auditory trials, these EEG components were significantly predictive of single-trial reaction times, yet for the visual trials they were not. As a result, the data support an early and short-latency origin of cross-modal associations, and suggest that these may originate in a bottom-up manner during early sensory processing rather than from high-level inference processes. Importantly, the findings were consistent across both analysis methods, suggesting these effects are robust. To summarise, the results across all three experiments showed that it is possible to extract meaningful, single-trial information from the EEG signal and relate it to behaviour on a time-resolved basis. The work presented here therefore steps beyond previous studies to provide new insights into the temporal dynamics of audio-visual perception in the brain. All experiments, although employing different paradigms and investigating different processes, showed early neural correlates of audio-visual perception emerging across early sensory, parietal, and frontal regions. Together, these results support the prevailing modern view that the entire cortex is essentially multisensory and that multisensory effects can emerge at all stages of the perceptual process.
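As an illustration of the single-trial decoding approach used throughout these experiments, the sketch below trains a regularised LDA classifier on a synthetic trials-by-channels matrix at one time point; in the thesis this kind of analysis is applied at each time point of real EEG epochs.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Illustrative single-trial decoding: at one post-stimulus time point, train
# an LDA classifier on the spatial pattern of EEG amplitudes (trials x
# channels) to discriminate two conditions. Data here are synthetic.
rng = np.random.default_rng(0)
n_trials, n_channels = 200, 64
labels = rng.integers(0, 2, n_trials)              # e.g. high vs low tone
eeg = rng.standard_normal((n_trials, n_channels))
eeg[labels == 1, :8] += 0.5                        # weak class-dependent pattern

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, eeg, labels, cv=5)
print("decoding accuracy: %.2f" % scores.mean())
```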

    Curvilinear Structure Enhancement in Biomedical Images

    Curvilinear structures appear in many different contexts and at a variety of scales: axons and dendrites in the brain, blood vessels in the fundus, streets, rivers, or fractures in buildings, among others. The study of curvilinear structures is therefore essential to image processing in fields such as neuroscience, biology, and cartography. Image processing plays an important role in biomedical imaging, especially in aiding disease diagnosis, and image enhancement is an early step of image analysis. In this thesis, I focus on the research, development, implementation, and validation of newly established 2D and 3D curvilinear structure enhancement methods. The proposed methods are based on phase congruency, mathematical morphology, and tensor representation concepts. First, I introduce a 3D contrast-independent, phase congruency-based enhancement approach. The obtained results demonstrate that the proposed approach is robust against contrast variations in 3D biomedical images. Second, I propose a new mathematical morphology-based approach called the bowler-hat transform, which combines mathematical morphology with a local tensor representation of curvilinear structures in images. The bowler-hat transform is shown to give better results than comparison methods on challenging data such as retinal/fundus images, and is particularly successful at enhancing curvilinear structures at junctions. Finally, I extend the bowler-hat approach to 3D to demonstrate its applicability and reliability in three dimensions.
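As a rough sketch of the morphological idea behind this kind of curvilinear enhancement (not the exact bowler-hat formulation, which is given in the thesis), elongated structures survive grey-level openings with line-shaped structuring elements but not with a disc of comparable size, so their difference highlights vessel-like features:

```python
import numpy as np
from skimage.morphology import opening, disk
from skimage.draw import line as draw_line

# Build a binary line-shaped structuring element of a given length and angle.
def line_selem(length, angle_deg):
    r = (length - 1) // 2
    dr = int(round(r * np.sin(np.deg2rad(angle_deg))))
    dc = int(round(r * np.cos(np.deg2rad(angle_deg))))
    selem = np.zeros((2 * r + 1, 2 * r + 1), dtype=bool)
    rr, cc = draw_line(r - dr, r - dc, r + dr, r + dc)
    selem[rr, cc] = True
    return selem

# Generic morphological enhancement of elongated structures: take the maximum
# opening over line elements at several orientations and subtract the opening
# with a disc of comparable size.
def enhance(image, length=15, n_angles=12):
    line_max = np.max([opening(image, line_selem(length, a))
                       for a in np.linspace(0, 180, n_angles, endpoint=False)],
                      axis=0)
    return line_max - opening(image, disk(length // 2))

image = np.random.rand(128, 128)        # stand-in for a fundus/retina image
enhanced = enhance(image)
```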

    Investigating the Cognitive and Neural Mechanisms underlying Multisensory Perceptual Decision-Making in Humans

    On a day-to-day basis, we encounter situations that require the formation of decisions based on ambiguous and often incomplete sensory information. Perceptual decision-making defines the process by which sensory information is consolidated and accumulated towards one of multiple possible choice alternatives, which inform our behavioural responses. Perceptual decision-making can be understood both theoretically and neurologically as a process of stochastic sensory evidence accumulation towards some choice threshold. Once this threshold is exceeded, a response is facilitated, informing the overt actions undertaken. Considerable progress has been made towards understanding the cognitive and neural mechanisms underlying perceptual decision-making. Analyses of reaction times (RTs, typically on the order of milliseconds) and choice accuracy, which reflect decision-making behaviour, can be coupled with neuroimaging methodologies, notably electroencephalography (EEG) and functional Magnetic Resonance Imaging (fMRI), to identify spatiotemporal components representative of the neural signatures corresponding to such accumulation-to-bound decision formation on a single-trial basis. Taken together, these provide us with an experimental framework conceptualising the key computations underlying perceptual decision-making. Despite this, relatively little remains known about the enhancements or alterations to the process of perceptual decision-making from the integration of information across multiple sensory modalities. Consolidating the available sensory evidence requires processing information presented in more than one sensory modality, often near-simultaneously, to exploit the salient percepts for what we term multisensory (perceptual) decision-making. Specifically, multisensory integration must be considered within the perceptual decision-making framework in order to understand how information becomes stochastically accumulated to inform overt sensory-motor choice behaviours. Recently, substantial progress has been made through the application of behaviourally-informed and/or neurally-informed modelling approaches to benefit our understanding of multisensory decision-making. In particular, these approaches fit a number of model parameters to behavioural and/or neuroimaging datasets, in order to (a) dissect the constituent internal cognitive and neural processes underlying perceptual decision-making with both multisensory and unisensory information, and (b) mechanistically infer how multisensory enhancements arise from the integration of information across multiple sensory modalities to benefit perceptual decision formation. Nevertheless, the spatiotemporal locus of the neural and cognitive underpinnings of enhancements from multisensory integration remains subject to debate. In particular, our understanding of which brain regions are predictive of such enhancements, where they arise, and how they influence decision-making behaviours requires further exploration. The current thesis outlines empirical findings from three studies aimed at providing a more complete characterisation of multisensory perceptual decision-making, utilising EEG and accumulation-to-bound modelling methodologies to incorporate both behaviourally-informed and neurally-informed modelling approaches, and investigating where, when, and how perceptual improvements arise during multisensory perceptual decision-making.
Specifically, these modelling approaches sought to probe the modulatory influences of three factors: cross-modal associations formed from unisensory stimuli (Chapter 2), natural ageing (Chapter 3), and perceptual learning (Chapter 4), on the cognitive and neural mechanisms underlying observable benefits towards multisensory decision formation. Chapter 2 outlines secondary analyses that utilise a neurally-informed modelling approach to characterise the spatiotemporal dynamics of neural activity underlying auditory pitch-visual size cross-modal associations. In particular, how unisensory, auditory pitch-driven associations benefit perceptual decision formation was functionally probed. EEG measurements were recorded from participants during performance of an Implicit Association Test (IAT), a two-alternative forced-choice (2AFC) paradigm which presents one unisensory stimulus feature per trial for participants to categorise, but manipulates the stimulus feature-response key mappings of auditory pitch-visual size cross-modal associations from unisensory stimuli alone, thus overcoming the issue of mixed selectivity in recorded neural activity that is prevalent in previous cross-modal associative research, where multisensory stimuli were presented near-simultaneously. Categorisations were faster (i.e., lower RTs) when stimulus feature-response key mappings were associatively congruent, compared to associatively incongruent, between the two associative counterparts, demonstrating a behavioural benefit to perceptual decision formation. Multivariate Linear Discriminant Analysis (LDA) was used to characterise the spatiotemporal dynamics of EEG activity underpinning IAT performance, in which two EEG components were identified that discriminated neural activity underlying the benefits of associative congruency of stimulus feature-response key mappings. Application of a neurally-informed Hierarchical Drift Diffusion Model (HDDM) demonstrated early sensory processing benefits, with increases in the duration of non-decisional processes under incongruent stimulus feature-response key mappings, and late post-sensory alterations to decision dynamics, with congruent stimulus feature-response key mappings decreasing the quantity of evidence required to facilitate a decision. Hence, we found that the trial-by-trial variability in perceptual decision formation arising from unisensory-facilitated cross-modal associations could be predicted by neural activity within our neurally-informed modelling approach. Next, Chapter 3 outlines cognitive research investigating age-related impacts on the behavioural indices of multisensory perceptual decision-making (i.e., RTs and choice accuracy). Natural ageing has been demonstrated to diversely affect multisensory perceptual decision-making dynamics. However, the constituent cognitive processes affected remain unclear. In particular, a mechanistic account of why older adults may exhibit preserved multisensory integrative benefits, yet display generalised perceptual deficits, relative to younger adults remains elusive. To address this limitation, 212 participants performed an online variant of a well-established audiovisual object categorisation paradigm, whereby age-related differences in RTs and choice accuracy (binary responses) between audiovisual (AV), visual (V), and auditory (A) trial types could be assessed between Younger Adults (YAs; Mean ± Standard Deviation = 27.95 ± 5.82 years) and Older Adults (OAs; Mean ± Standard Deviation = 60.96 ± 10.35 years).
Hierarchical Drift Diffusion Modelling (HDDM) was fitted to participants’ RTs and binary responses in order to probe age-related impacts on the latent processes underlying multisensory decision formation. The behavioural results showed that whereas OAs were typically slower (i.e., ↑ RTs) and less accurate (i.e., ↓ choice accuracy) relative to YAs across all sensory trial types, they exhibited greater differences in RTs between AV and V trials (i.e., ↑ AV-V RT difference), with no significant effects on choice accuracy, indicating preserved benefits of multisensory integration towards perceptual decision formation. The HDDM provided parsimonious fits characterising these behavioural discrepancies between YAs and OAs. Notably, we found slower rates of sensory evidence accumulation (i.e., ↓ drift rates) for OAs across all sensory trial types, coupled with (1) higher rates of sensory evidence accumulation (i.e., ↑ drift rates) for OAs on AV versus V trial types irrespective of stimulus difficulty, (2) increased response caution (i.e., ↑ decision boundaries) on AV versus V trial types, and (3) decreased non-decisional processing duration (i.e., ↓ non-decision times) on AV versus V trial types for stimuli of increased difficulty. Our findings suggest that older adults trade off multisensory decision-making speed for accuracy to preserve enhancements towards perceptual decision formation relative to younger adults. Hence, they display an increased reliance on integrating multimodal information, through the principle of inverse effectiveness, as a compensatory mechanism for generalised cognitive slowing when processing unisensory information. Overall, our findings demonstrate how computational modelling can reconcile contrasting hypotheses of age-related changes in the processes underlying multisensory perceptual decision-making behaviour. Finally, Chapter 4 outlines research probing the influence of perceptual learning on multisensory perceptual decision-making. Views of unisensory perceptual learning imply that improvements in perceptual sensitivity may be due to enhancements in early sensory representations and/or modulations to post-sensory decision dynamics. We sought to assess whether these views could account for improvements in perceptual sensitivity for multisensory stimuli, or even amplifications of multisensory enhancements towards decision formation, by establishing where and when in the brain such effects may be observed. We recorded EEG activity from participants who completed the same audiovisual object categorisation paradigm (as outlined in Chapter 3) over three consecutive days. We used single-trial multivariate LDA to characterise the spatiotemporal trajectory of the decision dynamics underlying any observed multisensory benefits both (a) within and (b) between visual, auditory, and audiovisual trial types. While significant decreases in RTs and increases in choice accuracy were found over testing days, we did not find any significant effects of perceptual learning on multisensory or unisensory perceptual decision formation. Similarly, the EEG analysis did not find any neural components indicative of early or late modulatory effects of perceptual learning on brain activity, which we attribute to (1) the long duration of stimulus presentations (300 ms), and (2) a lack of sufficient statistical power for our LDA classifier to discriminate face-versus-car trial types.
We end this chapter with considerations for discerning multisensory benefits towards perceptual decision formation, and recommendations for altering our experimental design to observe the effects of perceptual learning as a decision neuromodulator. These findings add to the literature supporting the use of behaviourally-informed and/or neurally-informed modelling approaches for investigating multisensory perceptual decision-making. In particular, discussing the underlying cognitive and/or neural mechanisms that can be attributed to the benefits of multisensory integration towards perceptual decision formation, as well as the modulatory impact of the decision modulators in question, supports a theoretical reconciliation in which multisensory integrative benefits are not tied to specific spatiotemporal neural dynamics or cognitive processes.
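For readers unfamiliar with the accumulation-to-bound framework referred to throughout, the sketch below simulates a single trial of a generic drift-diffusion model; it is illustrative only and is not the hierarchical (HDDM) models fitted in this work.

```python
import numpy as np

# Minimal drift-diffusion simulation: evidence drifts stochastically towards
# one of two boundaries; the crossing time plus a non-decision time gives the
# predicted reaction time, and the sign of the crossed boundary gives the choice.
def simulate_ddm(drift, boundary, non_decision, dt=0.001, noise=1.0, seed=0):
    rng = np.random.default_rng(seed)
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = 1 if evidence > 0 else 0
    return choice, t + non_decision

# Higher drift rates (e.g. audiovisual versus visual-only trials) yield faster,
# more accurate simulated decisions.
print(simulate_ddm(drift=1.5, boundary=1.0, non_decision=0.3))
```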

    Image-Based Fracture Mechanics with Digital Image Correlation and Digital Volume Correlation

    Analysis that requires human judgement can introduce bias which may, as a result, increase uncertainty. Accurate detection of a crack and segmentation of the crack geometry are beneficial to any fracture experiment. Studies of crack behaviour, such as the effect of closure, residual stress in fatigue, or elastic-plastic fracture mechanics, require data on crack opening displacement. Furthermore, the crack path can give critical information about how the crack interacts with the microstructure and stress fields. Digital Image Correlation (DIC) and Digital Volume Correlation (DVC) have been widely accepted and routinely used to measure full-field displacements in many areas of solid mechanics, including fracture mechanics. However, current practice for the extraction of crack parameters from displacement fields usually relies on manual methods and is quite onerous, particularly for large amounts of data. This thesis introduces the novel application of Phase Congruency-based Crack Detection (PC-CD) to automatically detect and characterise cracks from displacement fields. Phase congruency is a powerful mathematical tool that highlights a discontinuity more efficiently than gradient-based methods. Phase congruency’s invariance to the magnitude of the discontinuity and its state-of-the-art de-noising method make it ideal for application to crack tip displacement fields. PC-CD’s accuracy is quantified and benchmarked using both theoretical and virtual displacement fields, and compared with conventional, manual computation methods such as Heaviside function fitting and gradient-based methods. It is demonstrated how PC-CD can be coupled with a new method, based on the conjoint use of displacement fields and finite element analysis, to extract the strain energy release rate of cracks automatically. The PC-CD method is extended to volume displacement fields (VPC-CD) and semi-autonomously extracts the crack surface, crack front, and opening displacement through the thickness. As a proof of concept, PC-CD and VPC-CD are applied to a range of fracture experiments varying in material and fracture behaviour: two ductile and one quasi-brittle for surface displacement measurements, and two quasi-brittle and one ductile for volume measurements. Using the novel PC-CD and VPC-CD analyses, the crack geometry is obtained fully automatically and without any user judgement or intervention. The geometrical parameters extracted by PC-CD and VPC-CD are validated experimentally against other tools such as optical microscope measurements, high-resolution fractography, and visual inspection.
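To make the idea of extracting a crack from a displacement field concrete, the sketch below locates a displacement discontinuity in a synthetic opening-mode field using a simple finite-difference jump; note that the thesis uses phase congruency (PC-CD) for this step rather than the gradient-style stand-in shown here.

```python
import numpy as np

# Simplified illustration of locating a crack as a displacement discontinuity
# in a DIC-style field: a crack appears as a jump in the crack-opening
# displacement component across the crack faces.
ny, nx = 100, 100
y, x = np.mgrid[0:ny, 0:nx]
v = 0.05 * np.sign(y - ny // 2) * (x > nx // 2)   # synthetic opening-mode field
v += 0.001 * np.random.randn(ny, nx)              # measurement noise

jump = np.abs(np.diff(v, axis=0))                 # row-wise displacement jump
crack_mask = jump > 0.05                          # threshold the discontinuity
crack_rows, crack_cols = np.nonzero(crack_mask)
print("crack pixels found:", crack_rows.size)
```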

    Data reduction algorithms to enable long-term monitoring from low-power miniaturised wireless EEG systems

    Objectives: The weight and volume of battery-powered wireless electroencephalography (EEG) systems are dominated by the batteries. Battery dimensions are in turn determined by the required energy capacity, which is derived from the system power consumption and the required monitoring time. Data reduction may be carried out to reduce the amount of data transmitted and thus proportionally reduce the power consumption of the wireless transmitter, which dominates system power consumption. This thesis presents two new data selection algorithms that, in addition to achieving data reduction, also select the EEG containing epileptic seizures and spikes that are important in diagnosis. Methods: The algorithms analyse short EEG sections during monitoring to determine the presence of candidate seizures or spikes. Phase information from different frequency components of the signal is used to detect spikes. For seizure detection, frequencies below 10 Hz are investigated for a relative increase in frequency and/or amplitude. Significant attention has also been given to metrics in order to accurately evaluate the performance of these algorithms for practical use in the proposed system. Additionally, signal processing techniques to emphasise seizures within the EEG and techniques to correct for broad-level amplitude variation in the EEG have been investigated. Results: The spike detection algorithm detected 80% of spikes whilst achieving 50% data reduction when tested on 992 spikes from 105 hours of 10-channel scalp EEG data obtained from 25 adults. The seizure detection algorithm identified 94% of seizures, selecting 80% of their duration for transmission and achieving 79% data reduction; it was tested on 34 seizures with a total duration of 4158 s in a database of over 168 hours of 16-channel scalp EEG obtained from 21 adults. These algorithms show great potential for longer monitoring times from miniaturised wireless EEG systems, which would improve the electroclinical diagnosis of patients.
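As a simplified illustration of the seizure-selection idea (sub-10 Hz amplitude increases flagging sections for transmission), the sketch below selects epochs whose low-frequency power exceeds a running baseline and reports the resulting data reduction; the thresholds, window lengths, and synthetic data are placeholders, not the thesis's values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Flag short EEG sections whose sub-10 Hz power rises well above a baseline,
# and transmit only the flagged sections; data reduction is the fraction of
# sections not transmitted.
fs = 256                                           # sampling rate (Hz)
b, a = butter(4, 10 / (fs / 2), btype="low")       # isolate the < 10 Hz band

def select_epochs(eeg, epoch_len=2 * fs, ratio=3.0):
    low = filtfilt(b, a, eeg)
    n_epochs = len(low) // epoch_len
    power = np.array([np.mean(low[i * epoch_len:(i + 1) * epoch_len] ** 2)
                      for i in range(n_epochs)])
    baseline = np.median(power)
    keep = power > ratio * baseline                # candidate seizure epochs
    data_reduction = 1.0 - keep.mean()             # fraction not transmitted
    return keep, data_reduction

eeg = np.random.randn(60 * fs)                     # one minute of synthetic EEG
keep, reduction = select_epochs(eeg)
print("epochs flagged:", int(keep.sum()), "data reduction: %.0f%%" % (100 * reduction))
```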