Non-linear effects of transcranial direct current stimulation as a function of individual baseline performance: Evidence from biparietal tDCS influence on lateralized attention bias
Transcranial direct current stimulation (tDCS) is a well-established technique for non-invasive brain stimulation (NIBS). However, the technique suffers from high variability in outcome, some of which is likely explained by the state of the brain at tDCS delivery, but for which explanatory, mechanistic models are lacking. Here, we tested the effects of bi-parietal tDCS on perceptual line bisection as a function of tDCS current strength (1 mA vs 2 mA) and individual baseline discrimination sensitivity (a measure associated with intrinsic uncertainty/signal-to-noise balance). Our main findings were threefold. We replicated a previous finding (Giglia et al., 2011) of a rightward shift in subjective midpoint after left-anode/right-cathode tDCS over parietal cortex (sham-controlled). We found this effect to be weak over our entire sample (n = 38), but substantial in a subset of participants when they were split according to tDCS intensity and baseline performance. This was due to a complex, nonlinear interaction between these two factors. Our data lend further support to the notion of state-dependency in NIBS, which suggests that outcome depends on the endogenous balance between task-informative ‘signal’ and task-uninformative ‘noise’ at baseline. The results highlight the strong influence of individual differences and variations in experimental parameters on tDCS outcome, and the importance of fostering knowledge of the factors influencing tDCS outcome across cognitive domains.
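The subjective midpoint and discrimination sensitivity mentioned above are typically obtained from a psychometric fit to line-bisection judgements. As an illustrative sketch (not the authors' analysis code; all data here are simulated), a logistic fit yields the point of subjective equality (PSE, the subjective midpoint) and a slope that indexes sensitivity:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    # Proportion of "rightward" judgements as a function of bisection offset.
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

rng = np.random.default_rng(0)
offsets = np.linspace(-1.0, 1.0, 11)      # transector offsets (arbitrary units)
true_pse, true_slope = 0.2, 6.0           # simulated rightward bias, moderate sensitivity
p = logistic(offsets, true_pse, true_slope)
responses = rng.binomial(40, p) / 40      # 40 simulated trials per offset

(pse, slope), _ = curve_fit(logistic, offsets, responses, p0=[0.0, 1.0])
print(f"subjective midpoint (PSE) = {pse:.2f}, slope (sensitivity) = {slope:.1f}")
```

A rightward shift of the PSE after stimulation, relative to sham, would correspond to the bias effect the abstract describes.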
Development Of Human Brain Network Architecture Underlying Executive Function
The transition from late childhood to adulthood is characterized by refinements in brain structure and function that support the dynamic control of attention and goal-directed behavior. One broad domain of cognition that undergoes particularly protracted development is executive function, which encompasses diverse cognitive processes including working memory, inhibitory control, and task switching. Delineating how white matter architecture develops to support specialized brain circuits underlying individual differences in executive function is critical for understanding sources of risk-taking behavior and mortality during adolescence. Moreover, neuropsychiatric disorders are increasingly understood as disorders of brain development, are marked by failures of executive function, and are linked to the disruption of evolving brain connectivity.
Network theory provides a parsimonious framework for modeling how anatomical white matter pathways support synchronized fluctuations in neural activity. However, only sparse data exists regarding how the maturation of white matter architecture during human brain development supports coordinated fluctuations in neural activity underlying higher-order cognitive ability. To address this gap, we capitalize on multi-modal neuroimaging and cognitive phenotyping data collected as part of the Philadelphia Neurodevelopmental Cohort (PNC), a large community-based study of brain development.
First, diffusion tractography methods were applied to characterize how the development of structural brain network topology supports domain-specific improvements in cognitive ability (n=882, ages 8-22 years old). Second, structural connectivity and task-based functional connectivity approaches were integrated to describe how the development of anatomical constraints on functional communication support individual differences in executive function (n=727, ages 8-23 years old). Finally, the systematic impact of head motion artifact on measures of structural connectivity were characterized (n=949, ages 8-22 years old), providing important guidelines for studying the development of structural brain network architecture.
Together, this body of work expands our understanding of how developing white matter connectivity in youth supports the emergence of functionally specialized circuits underlying executive processing. As diverse types of psychopathology are increasingly linked to atypical brain maturation, these findings could collectively lead to earlier diagnosis and personalized interventions for individuals at risk for developing mental disorders.
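As an illustrative aside (not code from this dissertation), the network-theory framing above can be made concrete: given a structural connectivity matrix, graph measures such as global efficiency summarize how well the white-matter topology supports communication. A minimal sketch on a simulated connectome:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(1)
n = 20
W = rng.random((n, n))
W = (W + W.T) / 2.0                      # symmetric connection weights
np.fill_diagonal(W, 0.0)

# Convert weights to distances: stronger connections mean shorter paths.
L = np.where(W > 0, 1.0 / W, 0.0)        # zero entries are treated as "no edge"
D = shortest_path(L, directed=False)     # pairwise shortest path lengths

# Global efficiency: mean inverse shortest path length over node pairs.
iu = np.triu_indices(n, k=1)
global_efficiency = (1.0 / D[iu]).mean()
print(f"global efficiency = {global_efficiency:.3f}")
```

Tracking such measures across age, as studies like this one do on tractography-derived matrices, is one way to quantify developmental refinement of network topology.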
Tangent functional connectomes uncover more unique phenotypic traits
Functional connectomes (FCs) contain pairwise estimations of functional
couplings based on the activity of pairs of brain regions. FCs are commonly
represented as correlation matrices that are symmetric positive definite (SPD)
lying on or inside the SPD manifold. Since the geometry on the SPD manifold is
non-Euclidean, the inter-related entries of FCs undermine the use of
Euclidean-based distances. By projecting FCs into a tangent space, we can
obtain tangent functional connectomes (tangent-FCs). Tangent-FCs have shown a
higher predictive power for behavior and cognition, but no studies have
evaluated the effect of such projections with respect to fingerprinting. We
hypothesize that tangent-FCs have a stronger fingerprint than regular FCs.
Fingerprinting was measured by identification rates (ID rates) on test-retest
FCs as well as on monozygotic and dizygotic twins. Our results showed that
identification rates are systematically higher when using tangent-FCs.
Specifically, we found: (i) Riemann and log-Euclidean matrix references
systematically led to higher ID rates. (ii) In tangent-FCs, main-diagonal
regularization prior to tangent-space projection was critical for ID rates when
using Euclidean distance, whereas it barely affected ID rates when using
correlation distance. (iii) ID rates were dependent on condition and fMRI scan
length. (iv) Parcellation granularity was key for ID rates in FCs, as well as
in tangent-FCs with fixed regularization, whereas optimal regularization of
tangent-FCs mostly removed this effect. (v) Correlation distance in tangent-FCs
outperformed any other configuration of distance on FCs or on tangent-FCs
across the fingerprint gradient (here sampled by assessing test-retest,
monozygotic and dizygotic twins). (vi) ID rates tended to be higher in task
scans compared to resting-state scans when accounting for fMRI scan length.
Comment: 29 pages, 10 figures, 2 tables
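The tangent-space projection the abstract refers to can be sketched as follows. This is illustrative only: it uses a simple Euclidean mean as the reference matrix (the paper also considers Riemann and log-Euclidean references), and the `eps` term stands in for the main-diagonal regularization discussed in finding (ii):

```python
import numpy as np
from scipy.linalg import eigh, logm

def spd_inv_sqrt(M):
    # Inverse square root of an SPD matrix via eigendecomposition.
    vals, vecs = eigh(M)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

def tangent_fc(C, C_ref, eps=1e-6):
    # Main-diagonal regularization, then whitening by the reference and matrix log.
    C = C + eps * np.eye(len(C))
    W = spd_inv_sqrt(C_ref)
    return logm(W @ C @ W).real          # symmetric tangent-space matrix

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 10))       # 200 time points, 10 regions (simulated)
C1 = np.corrcoef(X, rowvar=False)
C2 = np.corrcoef(X + 0.1 * rng.standard_normal(X.shape), rowvar=False)
C_ref = (C1 + C2) / 2                    # Euclidean mean reference (one of several choices)

T1, T2 = tangent_fc(C1, C_ref), tangent_fc(C2, C_ref)
print("tangent FC shape:", T1.shape)
```

Distances between such tangent-FCs (Euclidean or correlation, per the abstract) are then used for identification.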
No effects of 1 Hz offline TMS on performance in the stop-signal game
Stopping an already initiated action is crucial for human everyday behavior, and empirical evidence points toward the prefrontal cortex playing a key role in response inhibition. Two regions that have been consistently implicated in response inhibition are the right inferior frontal gyrus (IFG) and the more superior region of the dorsolateral prefrontal cortex (DLPFC). The present study investigated the effect of offline 1 Hz transcranial magnetic stimulation (TMS) over the right IFG and DLPFC on performance in a gamified stop-signal task (SSG). We hypothesized that perturbing each area would decrease performance in the SSG, albeit with a quantitative difference in the performance decrease after stimulation. After offline TMS, functional short-term reorganization is possible, and the domain-general area (i.e., the right DLPFC) might be able to compensate for the perturbation of the domain-specific area (i.e., the right IFG). Results showed that 1 Hz offline TMS over the right DLPFC and the right IFG at 110% intensity of the resting motor threshold had no effect on performance in the SSG. In fact, evidence in favor of the null hypothesis was found. One intriguing interpretation of this result is that within-network compensation was triggered, canceling out the potential TMS effects, as has been suggested in recent theorizing on TMS effects, although the presented results do not unambiguously identify such compensatory mechanisms. Future studies may provide further support for this hypothesis, which is especially important when studying reactive response inhibition in complex environments.
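For context, performance in stop-signal paradigms is commonly summarized by the stop-signal reaction time (SSRT); the standard integration method estimates it from the go-RT distribution and the stop-signal delays. A minimal sketch on simulated data (not the study's analysis code, and the gamified task may score performance differently):

```python
import numpy as np

def ssrt_integration(go_rts, ssds, stop_respond):
    # Integration method: SSRT = go-RT percentile at p(respond|stop) minus mean SSD.
    p_respond = np.mean(stop_respond)
    nth_rt = np.percentile(go_rts, p_respond * 100)
    return nth_rt - np.mean(ssds)

rng = np.random.default_rng(3)
go_rts = rng.normal(500, 80, 200)            # simulated go-trial reaction times (ms)
ssds = rng.choice([150, 200, 250, 300], 60)  # stop-signal delays (ms)
stop_respond = rng.random(60) < 0.5          # failed stops (~50%, as a staircase targets)

ssrt = ssrt_integration(go_rts, ssds, stop_respond)
print(f"SSRT = {ssrt:.0f} ms")
```

Under the race model, a TMS-induced inhibition deficit would show up as a lengthened SSRT relative to sham; the study found no such effect.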
Exploring variability in medical imaging
Although recent successes of deep learning and novel machine learning techniques have improved the performance of classification and (anomaly) detection in computer vision problems, applying these methods in medical imaging pipelines remains a very challenging task. One of the main reasons for this is the amount of variability that is encountered and encapsulated in human anatomy and subsequently reflected in medical images. This fundamental factor impacts most stages in modern medical imaging processing pipelines.
Variability of human anatomy makes it virtually impossible to build large datasets for each disease
with labels and annotation for fully supervised machine learning. An efficient way to cope with this is
to try and learn only from normal samples. Such data is much easier to collect. A case study of such
an automatic anomaly detection system based on normative learning is presented in this work. We
present a framework for detecting fetal cardiac anomalies during ultrasound screening using generative
models, which are trained only utilising normal/healthy subjects.
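The normative-learning idea (train only on normal samples, then flag inputs the model represents poorly) can be illustrated with a deliberately simple linear stand-in for the generative models used in the thesis; the data, the reconstruction-error score, and the 8-component choice here are all arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
# Simulated "healthy" data with low-dimensional structure (500 samples, 64 features).
normal = rng.standard_normal((500, 32)) @ rng.standard_normal((32, 64)) * 0.1

# Fit a linear normative model (PCA) on normal samples only.
mu = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
components = Vt[:8]                                    # keep the top-8 directions

def anomaly_score(x):
    # Reconstruction error under the normative model: high = unlike the training data.
    z = (x - mu) @ components.T
    recon = z @ components + mu
    return np.linalg.norm(x - recon)

s_healthy = anomaly_score(normal[0])
s_anomalous = anomaly_score(normal[0] + rng.standard_normal(64) * 2.0)
print("healthy:", round(s_healthy, 2), " anomalous:", round(s_anomalous, 2))
```

The same principle scales up to deep generative models over ultrasound frames: anomalies are whatever the model of normality cannot explain.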
However, despite significant improvements in automatic abnormality detection systems, clinical routine continues to rely exclusively on the contribution of overburdened medical experts to diagnose and localise abnormalities. Integrating human expert knowledge into the medical imaging processing pipeline entails uncertainty, which is mainly correlated with inter-observer variability. From the perspective of building an automated medical imaging system, it is still an open issue to what extent this kind of variability and the resulting uncertainty are introduced during the training of a model and how they affect the final performance of the task. Consequently, it is very important to explore the effect of inter-observer variability both on the reliable estimation of a model's uncertainty and on the model's performance in a specific machine learning task. A thorough investigation of this issue is presented in this work by leveraging automated estimates of machine learning model uncertainty, inter-observer variability, and segmentation task performance in lung CT scan images.
Finally, an overview of existing anomaly detection methods in medical imaging is presented. This state-of-the-art survey includes both conventional pattern recognition methods and deep learning based methods, and is one of the first literature surveys attempted in this specific research area.
Methods in machine learning for probabilistic modelling of environment, with applications in meteorology and geology
Earth scientists increasingly deal with ‘big data’. Where once we may have struggled to obtain a handful of relevant measurements, we now often have data being collected from multiple sources, on the ground, in the air, and from space. These observations are accumulating at a rate that far outpaces our ability to make sense of them using traditional methods with limited scalability (e.g., mental modelling, or trial-and-error improvement of process-based models). The revolution in machine learning offers a new paradigm for modelling the environment: rather than focusing on tweaking every aspect of models developed from the top down based largely on prior knowledge, we now have the capability to instead set up more abstract machine learning systems that can ‘do the tweaking for us’, learning models from the bottom up that can be considered optimal in terms of how well they agree with our (rapidly increasing number of) observations of reality, while still being guided by our prior beliefs.
In this thesis, with the help of spatial, temporal, and spatio-temporal examples in meteorology and geology, I present methods for probabilistic modelling of environmental variables using machine learning, and explore the considerations involved in developing and adopting these technologies, as well as the potential benefits they stand to bring, which include improved knowledge-acquisition and decision-making. In each application, the common theme is that we would like to learn predictive distributions for the variables of interest that are well-calibrated and as sharp as possible (i.e., to provide answers that are as precise as possible while remaining honest about their uncertainty). Achieving this requires the adoption of statistical approaches, but the volume and complexity of data available mean that scalability is an important factor; we can only realise the value of available data if it can be successfully incorporated into our models.
Engineering and Physical Sciences Research Council (EPSRC)
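The calibration-and-sharpness goal stated above has a standard diagnostic: probability integral transform (PIT) values, which are uniformly distributed when the predictive distributions are calibrated. A small sketch (simulated data; Gaussian predictive distributions assumed purely for illustration):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
truth = rng.normal(0.0, 1.0, 5000)          # observations drawn from N(0, 1)

# Two predictive models: one well-calibrated, one overconfident (too sharp).
pit_good = norm.cdf(truth, loc=0.0, scale=1.0)
pit_sharp = norm.cdf(truth, loc=0.0, scale=0.5)

# Calibrated forecasts give uniform PIT values (variance 1/12, about 0.083);
# overconfident forecasts pile PIT mass near 0 and 1, inflating the variance.
print("PIT variance, calibrated:   ", round(float(pit_good.var()), 3))
print("PIT variance, overconfident:", round(float(pit_sharp.var()), 3))
```

Among forecasters that pass this check, the sharpest (narrowest) predictive distributions are preferred, which is the "calibrated and as sharp as possible" criterion in the abstract.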
Lidar-based scene understanding for autonomous driving using deep learning
With over 1.35 million fatalities related to traffic accidents worldwide, autonomous driving was foreseen at the beginning of this century as a feasible solution to improve safety on our roads. It is also meant to disrupt our transportation paradigm, reducing congestion, pollution, and costs while increasing the accessibility, efficiency, and reliability of transportation for both people and goods. Although some advances have gradually been transferred into commercial vehicles in the form of Advanced Driving Assistance Systems (ADAS), such as adaptive cruise control, blind-spot detection, or automatic parking, the technology is far from mature. A full understanding of the scene is needed to allow vehicles to be aware of their surroundings, knowing the existing elements of the scene as well as their motion, intentions, and interactions.
In this PhD dissertation, we explore new approaches for understanding driving scenes from 3D LiDAR point clouds by using Deep Learning methods. To this end, in Part I we analyze the scene from a static perspective using independent frames to detect the neighboring vehicles. Next, in Part II we develop new ways for understanding the dynamics of the scene. Finally, in Part III we apply all the developed methods to accomplish higher level challenges such as segmenting moving obstacles while obtaining their rigid motion vector over the ground.
More specifically, in Chapter 2 we develop a 3D vehicle detection pipeline based on a multi-branch deep-learning architecture and propose a Front (FR-V) and a Bird’s Eye view (BE-V) as 2D representations of the 3D point cloud to serve as input for training our models. Later on, in Chapter 3 we apply and further test this method on two real use cases, for pre-filtering moving
obstacles while creating maps to better localize ourselves on subsequent days, as well as for vehicle tracking. From the dynamic perspective, in Chapter 4 we learn from the 3D point cloud a novel dynamic feature that resembles optical flow from RGB images. For that, we develop a new approach to leverage RGB optical flow as pseudo ground truth for training purposes while allowing the use of only 3D LiDAR data at inference time. Additionally, in Chapter 5 we explore the benefits of combining classification and regression learning problems to face the optical flow estimation task in a joint coarse-and-fine manner. Lastly, in Chapter 6 we gather the previous methods and demonstrate that with these independent tasks we can guide the learning of more challenging, higher-level problems such as segmentation and motion estimation of moving vehicles from our own moving perspective.
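The BE-V representation mentioned for Chapter 2 can be illustrated with a minimal occupancy-grid projection. This is a simplified stand-in: the thesis's actual input encoding, grid ranges, resolution, and extra channels (e.g., height or intensity) may differ:

```python
import numpy as np

def bev_grid(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
    # Project 3D LiDAR points (N, 3) onto a 2D bird's-eye-view occupancy grid.
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.float32)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid[ix[keep], iy[keep]] = 1.0    # mark occupied cells; real pipelines add channels
    return grid

rng = np.random.default_rng(7)
cloud = rng.uniform([-5, -25, -2], [45, 25, 2], size=(10000, 3))  # simulated scan
g = bev_grid(cloud)
print("BEV shape:", g.shape)   # (80, 80)
```

Such image-like 2D projections let standard convolutional architectures consume unordered point clouds, which is the design choice behind the FR-V/BE-V inputs.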