
    Data-driven simulations of wildfire spread at regional scales

    Current wildfire spread simulators lack the ability to provide accurate predictions of the active flame burning areas at regional scales, due to two main challenges: a modeling challenge associated with providing accurate mathematical representations of the multi-physics, multi-scale processes that drive the fire dynamics, and a data challenge associated with providing accurate estimates of the initial fire position and of the physical parameters required by the fire spread models. A promising approach to overcome these limitations is data assimilation, which aims at integrating available observations into the fire spread simulator, while accounting for their respective uncertainties, in order to infer a more accurate estimate of the fire front position and to produce a more reliable forecast of the wildfire behavior. The main objective of the present study is to design and evaluate algorithms for regional-scale wildfire spread simulations that can properly handle the variations in wildfire spread due to the significant spatial heterogeneity in the model inputs and to the temporal changes in the wildfire behavior. First, we developed a grid-based, spatialized parameter estimation approach in which the estimation targets are the spatially varying input model parameters. Then, we proposed an efficient and robust method to compute the discrepancy between the observed and simulated fire fronts, based on a front shape similarity measure inspired by image processing theory. The new method is demonstrated in the context of a Luenberger observer-based state estimation strategy. Finally, we developed a dual state-parameter estimation method in which model state and model parameters are estimated simultaneously, in order to retrieve more accurate physical values of the model parameters and to achieve better forecast performance in terms of fire front positions. All these efforts aim at designing algorithmic solutions to the difficulties associated with spatially varying environmental conditions and potentially complex fireline shapes and topologies, paving the way towards real-time monitoring and forecasting of wildfire dynamics at regional scales.
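
    As an illustration of the kind of front shape comparison mentioned above (a minimal, generic sketch based on the overlap of burning areas, not the exact similarity measure developed in the thesis), the discrepancy between an observed and a simulated fire front can be scored from binary burned-area masks:

```python
import numpy as np

def front_shape_discrepancy(observed: np.ndarray, simulated: np.ndarray) -> float:
    """Area-based discrepancy between two binary burning-area masks.

    Returns 0.0 when the masks coincide and 1.0 when they are disjoint
    (1 minus the intersection-over-union of the two burned regions).
    """
    obs = observed.astype(bool)
    sim = simulated.astype(bool)
    union = np.logical_or(obs, sim).sum()
    if union == 0:           # neither map contains any burning cells
        return 0.0
    inter = np.logical_and(obs, sim).sum()
    return 1.0 - inter / union

# Toy example: two overlapping elliptical fire fronts on a 200 x 200 grid
y, x = np.mgrid[0:200, 0:200]
observed  = ((x - 100) ** 2 / 40 ** 2 + (y - 100) ** 2 / 25 ** 2) <= 1.0
simulated = ((x - 110) ** 2 / 40 ** 2 + (y - 100) ** 2 / 25 ** 2) <= 1.0
print(front_shape_discrepancy(observed, simulated))
```

    A discrepancy of this type can serve as the misfit that the assimilation scheme drives towards zero when correcting the simulated front with observations.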

    Statistical causality in the EEG for the study of cognitive functions in healthy and pathological brains

    Understanding brain functions requires not only information about the spatial localization of neural activity, but also about the dynamic functional links between the involved groups of neurons, which do not work in an isolated way, but rather interact through ingoing and outgoing connections. The work carried out during the three years of the PhD course provides a methodological framework for the estimation of causal brain connectivity and its validation on simulated and real datasets (EEG and pseudo-EEG) at the scalp and source level. Important open issues, such as the selection of the best algorithms for source reconstruction and for time-varying estimates, were addressed. Moreover, after applying such approaches to real datasets recorded from healthy subjects and post-stroke patients, we extracted neurophysiological indices describing in a stable and reliable way the properties of the brain circuits underlying different cognitive states in humans (attention, memory). In more detail: I defined and implemented a toolbox (SEED-G toolbox) that provides a validation instrument for researchers working in the field of brain connectivity estimation. It may have strong implications, especially for methodological advancements. It allows testing the ability of different estimators under increasingly less ideal conditions: a low number of available samples and trials, high inter-trial variability (a very realistic situation when patients are involved in protocols) or, again, time-varying connectivity patterns to be estimated (where the wide-sense stationarity hypothesis fails). A first simulation study demonstrated the robustness and accuracy of the PDC with respect to inter-trial variability under a large range of conditions usually encountered in practice. The simulations carried out on the time-varying algorithms highlighted the performance of the existing methodologies under different conditions of data length and number of available trials. Moreover, the adaptation of the Kalman-based algorithm (GLKF) that I implemented, with the introduction of a preliminary estimation of the initial conditions of the algorithm, led to significantly better performance. Another simulation study identified a tool combining source localization approaches and brain connectivity estimation that provides accurate and reliable estimates, affected as little as possible by spurious links due to head volume conduction. The developed and tested methodologies were successfully applied to three real datasets. The first was recorded from a group of healthy subjects performing an attention task and allowed us to describe, at the scalp and source level, the brain circuits related to three important attention functions: alerting, orienting and executive control. The second EEG dataset came from a group of healthy subjects performing a memory task. Also in this case, the approaches under investigation allowed us to identify synthetic connectivity-based descriptors able to characterize the three main memory phases (encoding, storage and retrieval). For the last analysis, I recorded EEG data from a group of stroke patients performing the same memory task before and after one month of cognitive rehabilitation. The promising results of this preliminary study showed that the changes observed at the behavioural level can be followed by means of the introduced neurophysiological indices.
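
    The PDC mentioned above is a frequency-domain estimator of Granger-type causal influence computed from a fitted multivariate autoregressive (MVAR) model. A minimal sketch, assuming the MVAR coefficient matrices A_1..A_p have already been estimated from the multichannel EEG (the fitting step is omitted here), follows the standard definition of Baccalá and Sameshima:

```python
import numpy as np

def partial_directed_coherence(A: np.ndarray, freqs: np.ndarray, fs: float) -> np.ndarray:
    """Partial Directed Coherence from MVAR coefficients.

    A      -- array of shape (p, n, n): MVAR coefficient matrices A_1..A_p
              for an n-channel model of order p (assumed already estimated).
    freqs  -- frequencies (Hz) at which to evaluate the PDC.
    fs     -- sampling frequency (Hz).
    Returns an array of shape (len(freqs), n, n); entry [f, i, j] is the PDC
    from channel j to channel i at frequency f.
    """
    p, n, _ = A.shape
    pdc = np.zeros((len(freqs), n, n))
    for fi, f in enumerate(freqs):
        # Abar(f) = I - sum_r A_r * exp(-i 2 pi f r / fs)
        Abar = np.eye(n, dtype=complex)
        for r in range(1, p + 1):
            Abar -= A[r - 1] * np.exp(-2j * np.pi * f * r / fs)
        # Column-wise normalization: |Abar_ij| / sqrt(sum_k |Abar_kj|^2)
        col_norm = np.sqrt((np.abs(Abar) ** 2).sum(axis=0))
        pdc[fi] = np.abs(Abar) / col_norm[np.newaxis, :]
    return pdc

# Usage: a toy 2-channel, order-2 model in which channel 0 drives channel 1
A = np.zeros((2, 2, 2))
A[0] = [[0.5, 0.0], [0.4, 0.5]]    # lag-1 coefficients
A[1] = [[-0.1, 0.0], [0.0, -0.1]]  # lag-2 coefficients
print(partial_directed_coherence(A, np.linspace(1, 40, 5), fs=100.0)[0])
```

    The SEED-G toolbox and the time-varying (GLKF) estimators discussed in the abstract build on this type of estimator under less ideal conditions (few trials, non-stationarity).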

    Effective influences in neuronal networks : attentional modulation of effective influences underlying flexible processing and how to measure them

    Selective routing of information between brain areas is a key prerequisite for flexible adaptive behaviour. It makes it possible to focus on relevant information and to ignore potentially distracting influences. Selective attention is the psychological process that controls this preferential processing of relevant information. The neuronal network structures and dynamics, and the attentional mechanisms, by which this routing is enabled are not fully clarified. Based on previous experimental findings and theories, a network model is proposed which reproduces a range of results from the attention literature. It relies on shifting the phase relations between oscillating neuronal populations to modulate the effective influence of synapses. This network model might serve as a generic routing motif throughout the brain. The attentional modifications of activity in this network are investigated experimentally and found to employ two distinct channels to influence processing: facilitation of relevant information and independent suppression of distracting information. These findings are in agreement with the model and have not previously been reported at the level of neuronal populations. Furthermore, effective influence in dynamical systems is investigated more closely. Because of the lack of a theoretical underpinning for measuring influence in non-linear dynamical systems such as neuronal networks, unsuited measures are often applied to experimental data, which can lead to erroneous conclusions. Based on a central theorem in dynamical systems, a novel theory of effective influence is developed. Measures derived from this theory are shown to capture the time-dependent effective influence and the asymmetry of influences in model systems and experimental data. This new theory holds the potential to uncover previously concealed interactions in generic non-linear systems studied in a range of disciplines, such as neuroscience, ecology, economics and climatology.
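
    As a toy illustration of the routing principle described above (a generic sketch in the spirit of phase-dependent gating, not the thesis's actual network model), the effective influence of an oscillating sender on a receiver can be modulated simply by the phase relation between the sender's output and the receiver's oscillating gain:

```python
import numpy as np

def effective_transfer(phase_shift: float, n_cycles: int = 200, fs: int = 1000,
                       freq: float = 40.0) -> float:
    """Mean input actually passed to a receiver whose gain oscillates at `freq`,
    as a function of the phase offset between sender output and receiver gain."""
    t = np.arange(int(n_cycles * fs / freq)) / fs
    sender_output = 0.5 * (1.0 + np.sin(2 * np.pi * freq * t))                # rate, >= 0
    receiver_gain = 0.5 * (1.0 + np.sin(2 * np.pi * freq * t + phase_shift))  # gating, 0..1
    return float(np.mean(sender_output * receiver_gain))

# Aligned phases transmit strongly, anti-phase transmission is suppressed
print(effective_transfer(0.0), effective_transfer(np.pi))
```

    Shifting the phase relation thus changes how much of a synaptically identical input is effectively transmitted, which is the mechanism the model exploits for attentional routing.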

    K+ Channel's Equilibrium Preference Reveals the Origin of Its Conduction Selectivity and the Inactivated State of the Selectivity Filter

    K^+ channels are a class of membrane proteins that rapidly and selectively transport K^+ ions across lipid membranes. K^+ ions are concentrated inside the cell, and their efflux is responsible for the rapid repolarization during the action potential. Given that K^+ channels play important roles in cell physiology, their activities are tightly controlled through a variety of features, of which high ion selectivity and gating are the two most common. The work presented in this dissertation addresses two fundamental questions about them. First, what is the origin of the ion selectivity during conduction? Second, given that C-type inactivation is a gating mechanism that takes place in the selectivity filter, how does the C-type inactivated state of the filter differ from other functional states? Ion selectivity is achieved through a highly conserved region in the channel named the selectivity filter, which is a queue of four binding sites observed in crystal structures of all K^+ channels. These sites are selective for K^+ over the other abundant cation, Na^+, at equilibrium, so a model based on equilibrium selectivity was proposed to explain the conduction selectivity of K^+ channels. A recent study showed that eliminating sites from the filter of K^+-selective channels abolished their conduction selectivity, suggesting that these channels may have lost their equilibrium selectivity. To test this hypothesis, we measured the ion-binding preference of K^+ channels and of non-selective mutant channels. Unexpectedly, my results demonstrated that these channels retain strong K^+ selectivity at equilibrium, suggesting that the conduction selectivity is likely derived from a blocking mechanism created by interacting ions inside the filter. C-type inactivation reduces ion flow through the selectivity filter of K^+ channels following channel opening. Crystal structures of the open KcsA K^+ channel show a constricted selectivity filter that does not permit ion conduction, which was proposed by others to be the inactivated conformation. However, recent work using a semi-synthetic channel that is unable to adopt the constricted conformation but inactivates like wild-type channels challenges this idea. I measured the equilibrium ion-binding properties of channels in three different conformations to differentiate their apparent binding affinities. My results revealed that, from an energetic point of view, the inactivated filter is more similar to the conductive conformation than to the constricted conformation. In this dissertation, I primarily applied isothermal titration calorimetry to measure the equilibrium ion-binding preference of the selectivity filter, because it is a mechanism-free approach to detect the states of the channel in aqueous solution. These data provide further constraints on mechanistic models of ion selectivity and inactivation in K^+ channels, and allowed me to propose that the conduction selectivity of K^+ channels derives from ion interactions in the filter and that the inactivated filter resembles the conductive filter of the KcsA K^+ channel.
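
    For context, isothermal titration calorimetry data of this kind are commonly fitted with a single-site binding model (stated here as a generic reference, not necessarily the exact model used in this work), in which the bound-complex concentration and the cumulative heat follow

$$[\mathrm{CL}] = \frac{\left(K_d + [\mathrm{C}]_t + [\mathrm{L}]_t\right) - \sqrt{\left(K_d + [\mathrm{C}]_t + [\mathrm{L}]_t\right)^2 - 4\,[\mathrm{C}]_t\,[\mathrm{L}]_t}}{2}, \qquad Q = V_0\,\Delta H\,[\mathrm{CL}],$$

    where [C]_t and [L]_t are the total channel and ligand (ion) concentrations, K_d the dissociation constant, ΔH the molar binding enthalpy, and V_0 the calorimeter cell volume. Fitting the injection heats with such a model yields the apparent binding affinities used to compare the conductive, constricted and inactivated filter conformations.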

    Scan-based immersed isogeometric analysis

    Scan-based simulations inherently involve topologically complex three-dimensional geometries, represented by large data sets in formats that are not directly suitable for analysis. Consequently, performing high-fidelity scan-based simulations at practical computational costs is still very challenging. The main objective of this dissertation is to develop an efficient and robust scan-based simulation strategy by acquiring a profound understanding of three prominent challenges in scan-based IGA, viz.: i) balancing the accuracy and computational effort associated with numerical integration; ii) the preservation of topology in the spline-based segmentation procedure; and iii) the control of accuracy using error estimation and adaptivity techniques. In three-dimensional immersed isogeometric simulations, the computational effort associated with integration can be the critical component. A myriad of integration strategies has been proposed over the past years to ameliorate the difficulties associated with integration, but a general optimal integration framework that suits a broad class of engineering problems is not yet available. In this dissertation we provide a thorough investigation of the accuracy and computational effort of the octree integration technique. We quantify the contribution of the integration error using the theoretical basis provided by Strang’s first lemma. Based on this study we propose an error-estimate-based adaptive integration procedure for immersed IGA. To exploit the advantageous properties of IGA in a scan-based setting, it is important to extract a smooth geometry. This can be established by convolving the voxel data with B-splines, but doing so can induce problematic topological changes when features with a size similar to that of the voxels are encountered. This dissertation presents a topology-preserving segmentation procedure using truncated hierarchical (TH)B-splines. A moving-window-based topological anomaly detection algorithm is proposed to identify regions in which (TH)B-spline refinements must be performed. The criterion to identify topological anomalies is based on the Euler characteristic, giving it the capability to distinguish between topological and shape changes. A Fourier analysis is presented to explain the effectiveness of the developed procedure. An additional computational challenge in the context of immersed IGA is the construction of optimal approximations using locally refined splines. For scan-based volumetric domains, hierarchical splines are particularly suitable, as they optimally leverage the advantages offered by the availability of a geometrically simple background mesh. Although truncated hierarchical B-splines have been successfully applied in the context of IGA, their application in the immersed setting is largely unexplored. In this dissertation we propose a computational strategy for the application of error-estimation-based mesh adaptivity for stabilized immersed IGA. The conducted analyses and developed computational techniques for scan-based immersed IGA are interrelated, and together constitute a significant improvement in the efficiency and robustness of the analysis paradigm. In combination with other state-of-the-art developments regarding immersed FEM/IGA (e.g., iterative solution techniques, parallel computing), the research in this thesis opens the door to scan-based simulations with more sophisticated physical behavior, geometries of increased complexity, and larger scan-data sizes.
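
    The Euler-characteristic criterion mentioned above can be illustrated with a small sketch (a generic computation of the Euler characteristic of a voxelized domain, not the thesis's own code): for the cubical complex formed by the occupied voxels, the characteristic is V − E + F − C, and a change in this number between the voxel data and the smoothed segmentation signals a topological change rather than a mere shape change.

```python
import numpy as np

def euler_characteristic(voxels: np.ndarray) -> int:
    """Euler characteristic (vertices - edges + faces - cells) of the cubical
    complex formed by the occupied voxels of a 3D binary array."""
    verts, edges, faces = set(), set(), set()
    cells = 0
    for i, j, k in zip(*np.nonzero(voxels)):
        cells += 1
        # 8 vertices of the unit cube
        for di in (0, 1):
            for dj in (0, 1):
                for dk in (0, 1):
                    verts.add((i + di, j + dj, k + dk))
        # 12 edges: 4 parallel to each axis, keyed by axis and lower corner
        for dj in (0, 1):
            for dk in (0, 1):
                edges.add(('x', i, j + dj, k + dk))
        for di in (0, 1):
            for dk in (0, 1):
                edges.add(('y', i + di, j, k + dk))
        for di in (0, 1):
            for dj in (0, 1):
                edges.add(('z', i + di, j + dj, k))
        # 6 faces: 2 perpendicular to each axis, keyed by normal axis and lower corner
        for di in (0, 1):
            faces.add(('x', i + di, j, k))
        for dj in (0, 1):
            faces.add(('y', i, j + dj, k))
        for dk in (0, 1):
            faces.add(('z', i, j, k + dk))
    return len(verts) - len(edges) + len(faces) - cells

# Usage: a 3 x 3 x 1 plate with a hole in the middle (topologically an annulus)
plate = np.ones((3, 3, 1), dtype=bool)
plate[1, 1, 0] = False
print(euler_characteristic(plate))   # 0; the full plate without the hole gives 1
```

    Because the Euler characteristic is insensitive to smooth deformations but changes when features merge, split, or develop holes, comparing its value before and after the spline-based smoothing flags exactly the regions where topology-preserving refinement is needed.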

    Multi-scale active shape description in medical imaging

    Shape description in medical imaging has become an increasingly important research field in recent years. Fast and high-resolution image acquisition methods like Magnetic Resonance (MR) imaging produce very detailed cross-sectional images of the human body; shape description is then a post-processing operation which abstracts quantitative descriptions of anatomically relevant object shapes. This task is usually performed by clinicians and other experts by first segmenting the shapes of interest and then making volumetric and other quantitative measurements. The high demand on expert time and the inter- and intra-observer variability create a clinical need for automating this process. Furthermore, recent studies in clinical neurology on the correspondence between disease status and degree of shape deformation necessitate the use of more sophisticated, higher-level shape description techniques. In this work a new hierarchical tool for shape description has been developed, combining two recently developed and powerful techniques in image processing: differential invariants in scale-space, and active contour models. This tool enables quantitative and qualitative shape studies at multiple levels of image detail, exploiting the extra degree of freedom offered by image scale. Using scale-space continuity, the global object shape can be detected at a coarse level of image detail, and finer shape characteristics can be found at higher levels of detail, or scales. New methods for active shape evolution and focusing have been developed for the extraction of shapes at a large set of scales using an active contour model whose energy function is regularized with respect to scale and geometric differential image invariants. The resulting set of shapes is formulated as a multi-scale shape stack which is analysed and described at each scale level with a large set of shape descriptors in order to obtain and analyse shape changes across scales. This shape stack leads naturally to several questions regarding variable sampling and the appropriate levels of detail at which to investigate an image. The relationship between active contour sampling precision and scale-space is addressed. After a thorough review of modern shape description, multi-scale image processing and active contour model techniques, the novel framework for multi-scale active shape description is presented and tested on synthetic and medical images. An interesting result is the recovery of the fractal dimension of a known fractal boundary using this framework. The medical applications addressed are grey-matter deformations in patients with epilepsy, spinal cord atrophy in patients with Multiple Sclerosis, and cortical impairment in neonates. Extensions to non-linear scale-spaces, comparisons with binary curve and curvature evolution schemes, and relations to other hierarchical shape descriptors are discussed.
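
    As a minimal sketch of the scale-space idea underlying the shape stack (a generic Gaussian scale-space construction, not the thesis's actual shape-focusing algorithm), an image can be embedded in a one-parameter family of progressively smoothed versions, with coarse scales exposing only the global shape and fine scales retaining detailed boundary structure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image: np.ndarray, sigmas) -> list:
    """Stack of progressively smoothed versions of `image`.

    Each level is the convolution of the original image with a Gaussian of
    standard deviation sigma; larger sigmas keep only coarse shape information.
    """
    return [gaussian_filter(image.astype(float), sigma=s) for s in sigmas]

# Usage: a noisy binary disc analysed at four levels of detail
y, x = np.mgrid[0:128, 0:128]
disc = ((x - 64) ** 2 + (y - 64) ** 2 <= 40 ** 2).astype(float)
disc += 0.3 * np.random.default_rng(0).standard_normal(disc.shape)
stack = gaussian_scale_space(disc, sigmas=(1, 2, 4, 8))
print([round(level.std(), 3) for level in stack])  # variability decreases with scale
```

    In the framework described above, an active contour is extracted at the coarsest level and then tracked down the stack, so that the shape descriptors can be evaluated consistently across scales.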

    Network Dynamics of Visual Object Recognition

    Visual object recognition is the principal mechanism by which humans and many animals interpret their surroundings. Despite the complexity of the neural computation required, object recognition is achieved with such rapidity and accuracy that it appears to us almost effortless. Extensive human and non-human primate research has identified putative category-selective regions within higher-level visual cortex, which are thought to mediate object recognition. Despite decades of study, however, the functional organization and network dynamics within these regions remain poorly understood, due to a lack of appropriate animal models as well as the spatiotemporal limitations of current non-invasive human neuroimaging techniques (e.g. fMRI, scalp EEG). To better understand these issues, we leveraged the high spatiotemporal resolution of intracranial EEG (icEEG) recordings to study rapid, transient interactions between the disseminated cortical substrates within category-specific networks. Employing novel techniques for the topologically accurate and statistically robust analysis of grouped icEEG, we found that category-selective regions were spatially arranged with respect to cortical folding patterns, and relative to each other, to generate a hierarchical structuring of visual information within higher-level visual cortex. This may facilitate rapid visual categorization by enabling the extraction of different levels of object detail across multiple spatial scales. To characterize network interactions between distributed regions sharing the same category-selectivity, we evaluated feed-forward hierarchical and parallel distributed models of information flow during face perception via measurements of cortical activation, functional and structural connectivity, and transient disruption through electrical stimulation. We found that input from early visual cortex (EVC) to two face-selective regions – the occipital and fusiform face areas (OFA and FFA, respectively) – occurred in a parallel, distributed fashion: functional connectivity between the EVC and FFA began prior to the onset of subsequent re-entrant connectivity between the OFA and FFA. Furthermore, electrophysiological measures of structural connectivity revealed independent cortico-cortical connections between the EVC and both the OFA and FFA. Finally, direct disruption of the FFA, but not the OFA, impaired face perception. Given that the FFA is downstream of the OFA, these findings are incompatible with feed-forward, hierarchical models of visual processing and argue instead for the existence of parallel, distributed network interactions.
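
    As an illustration of one simple way such timing differences in functional connectivity can be quantified (a generic sliding-window correlation sketch, not the analysis pipeline used in the study), the onset latency of coupling between two recording sites can be estimated as the first window in which their correlation exceeds a threshold:

```python
import numpy as np

def connectivity_onset(x, y, fs, win=0.05, step=0.01, threshold=0.5):
    """Latency (s) of the first sliding window where |corr(x, y)| exceeds threshold."""
    n_win, n_step = int(win * fs), int(step * fs)
    for start in range(0, len(x) - n_win, n_step):
        xw, yw = x[start:start + n_win], y[start:start + n_win]
        if xw.std() > 0 and yw.std() > 0:
            r = np.corrcoef(xw, yw)[0, 1]
            if abs(r) >= threshold:
                return start / fs
    return None

# Toy example: a shared signal appears in both channels only after 0.2 s
fs = 1000.0
t = np.arange(int(0.6 * fs)) / fs
rng = np.random.default_rng(1)
shared = np.where(t >= 0.2, np.sin(2 * np.pi * 10 * t), 0.0)
ch1 = shared + 0.5 * rng.standard_normal(t.size)
ch2 = shared + 0.5 * rng.standard_normal(t.size)
print(connectivity_onset(ch1, ch2, fs))
```

    Comparing such onset latencies between the EVC–FFA and OFA–FFA channel pairs is the kind of evidence that distinguishes parallel from strictly feed-forward information flow, although the study itself relied on more robust connectivity estimators.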
