
    The genotype-phenotype relationship in multicellular pattern-generating models - the neglected role of pattern descriptors

    Background: A deep understanding of what causes the phenotypic variation arising from biological patterning processes cannot be claimed before we are able to recreate this variation with mathematical models capable of generating genotype-phenotype maps in a causally cohesive way. However, the concept of pattern in a multicellular context implies that what matters is not the state of every single cell, but certain emergent qualities of the total cell aggregate. Thus, in order to set up a genotype-phenotype map in such a spatiotemporal pattern setting, one is forced to establish new pattern descriptors and derive their relations to the parameters of the original model. A pattern descriptor is a variable that describes and quantifies a certain qualitative feature of the pattern, for example the degree to which certain macroscopic structures are present. There is today no general procedure for relating a set of patterns and their characteristic features to the functional relationships, parameter values and initial values of an original pattern-generating model. Here we present a new, generic approach for explorative analysis of complex patterning models which focuses on the essential pattern features and their relations to the model parameters. The approach is illustrated on an existing model for Delta-Notch lateral inhibition over a two-dimensional lattice. Results: By combining computer simulations according to a succession of statistical experimental designs, computer graphics, automatic image analysis, human sensory descriptive analysis and multivariate data modelling, we derive a pattern descriptor model of those macroscopic, emergent aspects of the patterns that we consider of interest.
The pattern descriptor model relates the values of the new, dedicated pattern descriptors to the parameter values of the original model, for example by predicting the parameter values leading to particular patterns, and provides insights that would have been hard to obtain by traditional methods. Conclusion: The results suggest that our approach may qualify as a general procedure for discovering and relating relevant features and characteristics of emergent patterns to the functional relationships, parameter values and initial values of an underlying pattern-generating mathematical model
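Where a pattern descriptor might come from can be sketched concretely. Below is an illustrative, Collier-style Delta-Notch lateral-inhibition simulation on a periodic 2-D lattice, together with one crude descriptor (the fraction of high-Delta cells). All parameter values, and the helpers simulate_delta_notch and descriptor_fraction_high, are invented for illustration and are not taken from the paper:

```python
import numpy as np

def simulate_delta_notch(n=20, steps=2000, dt=0.01, seed=0):
    """Toy Delta-Notch lateral inhibition on an n x n periodic lattice.

    Each cell's Notch is activated by the mean Delta of its four
    neighbours, and its own Delta is repressed by its Notch level.
    """
    rng = np.random.default_rng(seed)
    d = rng.uniform(0.9, 1.1, (n, n))   # Delta activity per cell
    nt = rng.uniform(0.9, 1.1, (n, n))  # Notch activity per cell
    for _ in range(steps):
        # mean Delta over the 4 nearest neighbours (periodic boundaries)
        dbar = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
                np.roll(d, 1, 1) + np.roll(d, -1, 1)) / 4.0
        nt += dt * (dbar**2 / (0.01 + dbar**2) - nt)  # Notch activation
        d += dt * (1.0 / (1.0 + 100.0 * nt**2) - d)   # Delta repression
    return d

def descriptor_fraction_high(d, thresh=0.5):
    """One crude pattern descriptor: the fraction of high-Delta cells."""
    return float((d > thresh).mean())

pattern = simulate_delta_notch()
frac = descriptor_fraction_high(pattern)
```

Sweeping the model parameters over a designed grid and regressing descriptors like frac on them is the kind of pattern descriptor modelling the abstract describes.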

    Design and construction of permanent magnetic gears


    Analysis of pattern dynamics for a nonlinear model of the human cortex via bifurcation theories

    This thesis examines the bifurcations, i.e., the emergent behaviours, of the Waikato cortical model under the influence of the gap-junction inhibitory diffusion D₂ (identified as the Turing bifurcation parameter) and the time-to-peak for the hyperpolarising GABA response γi (i.e., the inhibitory rate-constant, identified as the Hopf bifurcation parameter). The cortical model simplifies the entire cortex to a cylindrical macrocolumn (∼ 1 mm³) containing ∼ 10⁵ neurons (85% excitatory, 15% inhibitory) communicating via both chemical and electrical (gap-junction) synapses. Linear stability analysis of the model equations predicts the emergence of a Turing instability (in which separated areas of the cortex become activated) when gap-junction diffusivity is increased above a critical level. In addition, a Hopf bifurcation (oscillation) occurs when the inhibitory rate-constant is sufficiently small. Nonlinear interaction between these instabilities leads to spontaneous cortical patterns of neuronal activity evolving in space and time. Such a delicately balanced interplay between Turing and Hopf instabilities may be of direct relevance to clinically observed brain dynamics such as epileptic-seizure EEG spikes, deep-sleep slow-wave oscillations and cognitive gamma-waves. The relationship between the modelled brain patterns and the model equations can normally be inferred from the eigenvalue dispersion curve, i.e., linear stability analysis. However, we sometimes encountered mismatches between the linear stability analysis and the formed cortical patterns, which hampers identification of the type of instability corresponding to the emergent patterns. In this thesis, I investigate the pattern-forming mechanism of the Waikato cortical model to better understand the model nonlinearities.
I first study the pattern dynamics via analysis of a simple pattern-forming system, the Brusselator model, which has a model structure and bifurcation phenomena similar to those of the cortical model. I apply both linear and nonlinear perturbation methods to analyse the near-bifurcation behaviour of the Brusselator in order to precisely capture the dominant mode that contributes the most to the final formed patterns. My nonlinear analysis of the Brusselator model yields Ginzburg-Landau-type amplitude equations that describe the dynamics of the most unstable mode, i.e., the dominant mode, in the vicinity of a bifurcation point. The amplitude equations at a Turing point unfold three characteristic spatial structures: honeycomb Hπ, stripes, and reentrant honeycomb H₀. A codimension-2 Turing–Hopf point (CTHP) predicts three mixed instabilities: stable Turing–Hopf (TH), chaotic TH, and bistable TH. The amplitude equations precisely determine the bifurcation conditions for these instabilities and explain the pattern-competition mechanism once the bifurcation parameters cross the thresholds and drive the system into a nonlinear region where linear stability analysis may no longer be applicable. I then apply the bifurcation theories to the cortical model for its pattern predictions. Analogous to the Brusselator model, I find cortical Turing patterns in Hπ, stripe and H₀ spatial structures. Moreover, I develop the amplitude equations for the cortical model, with which I derive the envelope frequency for the beating-waves of a stable TH mode, and propose ideas regarding the emergence of the cortical chaotic mode. Apart from these pattern dynamics that the cortical model shares with the Brusselator system, the cortical model also exhibits “eye-blinking” TH patterns latticed in hexagons with localised oscillations.
Although we have not found biological significance for these model patterns, the developed bifurcation theories and the investigated pattern-forming mechanism may enrich our modelling strategies and help us to further improve model performance. In the last chapter of this thesis, I introduce a Turing–Hopf mechanism for anaesthetic slow-waves, and predict a coherence drop of such slow-waves with the induction of propofol anaesthesia. To test this hypothesis, I developed an EEG coherence analysing algorithm, EEG coherence, to automatically examine the clinical EEG recordings across multiple subjects. The results show significantly decreased coherence along the fronto-occipital axis, and increased coherence along the left- and right-temporal axis. As the Waikato cortical model is spatially homogeneous, i.e., there are no explicit front-to-back or right-to-left directions, it is unable to produce different coherence changes for different regions. It appears that the Waikato cortical model best represents the cortical dynamics of the frontal region. The theory of pattern dynamics suggests that a mode transition from wave–Turing–wave to Turing–wave–Turing introduces pattern coherence changes in both positive and negative directions. Thus, a further modelling improvement may be the introduction of a cortical bistable mode where Turing and wave modes coexist
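The eigenvalue dispersion curve that this analysis leans on can be computed in a few lines for the Brusselator. The sketch below assumes the standard Brusselator reaction terms (u' = a − (b+1)u + u²v, v' = bu − u²v) with diffusivities du and dv; the parameter values are illustrative choices that happen to sit in the Turing-unstable regime, not values from the thesis:

```python
import numpy as np

def brusselator_dispersion(a, b, du, dv, kmax=3.0, nk=301):
    """Growth rate Re(lambda_max) versus wavenumber k for the Brusselator,
    linearised about its homogeneous steady state (u*, v*) = (a, b/a)."""
    # Jacobian of the reaction terms at the steady state
    fu, fv = b - 1.0, a * a
    gu, gv = -b, -a * a
    ks = np.linspace(0.0, kmax, nk)
    growth = np.empty(nk)
    for i, k in enumerate(ks):
        # reaction Jacobian minus the diffusive damping for this wavenumber
        jk = np.array([[fu - du * k * k, fv],
                       [gu, gv - dv * k * k]])
        growth[i] = np.linalg.eigvals(jk).real.max()
    return ks, growth

ks, g = brusselator_dispersion(a=1.5, b=2.5, du=1.0, dv=10.0)
# g[0] < 0 (homogeneous state stable) while g.max() > 0 for a band of
# k > 0: the signature of a Turing instability in the dispersion curve
```

Plotting g against ks reproduces the dispersion curve from which the type of instability is normally read off; the mismatches discussed above arise when the nonlinear terms dropped in this linearisation become important.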

    Development of radiation transport techniques for modelling a high-resolution multi-energy photon emission tomography system

    “Nondestructive characterization techniques such as gamma tomography represent powerful tools for the analysis and quantification of physical defects and radionuclide concentrations within nuclear fuel forms. Gamma emission tomography, in particular, has the ability to utilize the inherent radiation within spent nuclear fuel to provide users with information about the migration and concentration of fission and activation products within the fuel form. Idaho National Laboratory is interested in using this technology to analyze new nuclear fuel forms for potential use in next generation nuclear reactors. In this work, two aspects of the system are analyzed. The first is a semi-analytic radiation transport methodology, used in conjunction with a parallel beam collimator, developed to facilitate the acquisition of data from Monte-Carlo modeling of a small submersible gamma tomography system, with a focus on emission information. The second is a pinhole collimator designed to optimize count rates, diameter, and acceptance angle to increase the sampling of the fuel forms and decrease data acquisition time. Utilizing the semi-analytic technique, computational savings of 10⁷–10¹¹ can be achieved with a degradation in accuracy of 18–45% compared to a standard isotropic uniform Monte-Carlo N-Particle transport simulation. However, this loss in accuracy can be minimized by increasing the parallel beam collimator’s aspect ratio so that it tends towards a degenerate cylinder. The semi-analytic technique is also compared to inbuilt acceleration techniques. The pinhole collimator design yields count rates on the order of 100s-1000s, which represents a 10¹–10² increase in actual count rates over the entirety of the photon spectrum”--Abstract, page iv
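The count-rate trade-off behind a pinhole design can be illustrated with a textbook solid-angle estimate: a wider pinhole or a shorter source-to-aperture distance raises geometric efficiency (and so count rate). The functions and numbers below are hypothetical back-of-the-envelope illustrations, not the dissertation's design calculations:

```python
import math

def pinhole_geometric_efficiency(diameter_cm, distance_cm):
    """Fraction of isotropically emitted photons from a point source
    that pass through a small circular aperture (far-field limit:
    aperture area over the 4*pi*r^2 sphere surface)."""
    aperture_area = math.pi * (diameter_cm / 2.0) ** 2
    return aperture_area / (4.0 * math.pi * distance_cm ** 2)

def expected_count_rate(activity_bq, diameter_cm, distance_cm,
                        detector_efficiency=0.3):
    """counts/s = emissions/s x geometric efficiency x intrinsic efficiency."""
    return (activity_bq
            * pinhole_geometric_efficiency(diameter_cm, distance_cm)
            * detector_efficiency)

# e.g. a 37 kBq point source viewed through a 2 mm pinhole at 10 cm
rate = expected_count_rate(3.7e4, diameter_cm=0.2, distance_cm=10.0)
```

Because efficiency scales with the aperture diameter squared, optimizing diameter against acceptance angle, as the work describes, directly trades count rate against spatial resolution.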

    Ergonomic Models of Anthropometry, Human Biomechanics and Operator-Equipment Interfaces

    The Committee on Human Factors was established in October 1980 by the Commission on Behavioral and Social Sciences and Education of the National Research Council. The committee is sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Institute for the Behavioral and Social Sciences, the National Aeronautics and Space Administration, and the National Science Foundation. The workshop discussed the following: anthropometric models; biomechanical models; human-machine interface models; and research recommendations. A 17-page bibliography is included

    Evaluating Risks from Antibacterial Medication Therapy

    ABSTRACT: Evaluating Risks from Antibacterial Medication Therapy Using an Observational Primary Care Database. Sharon B. Meropol; Joshua P. Metlay. Virtually everyone in the U.S. is exposed to antibacterial drugs at some point in their lives. With such nearly universal public exposure, it is important to understand the benefits and risks related to these medications. Most information on antibacterial drug-associated adverse events comes from spontaneous reports. Without an unexposed control group, it is impossible to know the real risks for treated vs. untreated patients. We used an electronic medical record database to select a cohort of office visits for non-bacterial acute respiratory tract infections (excluding patients with pneumonia, sinusitis, or acute exacerbations of chronic bronchitis), and compared outcomes of antibacterial drug-exposed vs. -unexposed patients. By limiting our assessment to visits with acute nonspecific respiratory infections, we promoted comparability between exposed and unexposed patients. To further control for confounding by indication and practice, we explored methods to promote further comparability between exposure groups. Our rare outcome presented an additional analytic challenge. Antibacterial drug prescribing for acute nonspecific respiratory infections decreased over the study period, but, in contrast to the U.S., broad spectrum antibacterial prescribing remained low. Conditional fixed effects linear regression provided stable estimates of exposure effects on rare outcomes; results were similar to those using more traditional methods for binary outcomes. Patients with acute nonspecific respiratory infections treated with antibacterial drugs were not at increased risk of severe adverse events compared to untreated patients. Patients with acute nonspecific respiratory infections exposed to antibacterials had a small decreased risk of pneumonia hospitalizations vs. unexposed patients.
This very small measurable benefit of antibacterial drug therapy for acute nonspecific respiratory infections at the patient level must be weighed against the public health risk of emerging antibacterial resistance. Our data provide valuable point estimates of risks and benefits that can be used to inform future decision analysis and guideline recommendations for patients with acute nonspecific respiratory infections. Ultimately, improved point-of-care diagnostic testing may help direct antibacterial drugs to the subset of patients most likely to derive benefit
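The conditional fixed effects linear regression mentioned above can be sketched as within-group demeaning: subtracting each practice's mean outcome and mean exposure removes all time-invariant practice-level confounding before the exposure effect is estimated. The helper fixed_effects_ols and the toy data are invented for illustration; the study's actual models are far richer:

```python
import numpy as np

def fixed_effects_ols(y, x, groups):
    """Fixed effects OLS for a single exposure: demean y and x within
    each group, then regress the demeaned outcome on the demeaned
    exposure (a linear probability model when y is binary)."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    groups = np.asarray(groups)
    yd, xd = y.copy(), x.copy()
    for grp in np.unique(groups):
        m = groups == grp
        yd[m] -= y[m].mean()   # remove the group's baseline outcome rate
        xd[m] -= x[m].mean()   # remove the group's baseline exposure rate
    return float((xd * yd).sum() / (xd * xd).sum())

# toy data: two practices ("a", "b") with different baseline risks
y = [0, 0, 1, 0, 1, 1]             # rare binary outcome
x = [0, 1, 1, 0, 0, 1]             # antibacterial exposure (0/1)
g = ["a", "a", "a", "b", "b", "b"]
beta = fixed_effects_ols(y, x, g)  # within-practice exposure effect
```

Because the estimate uses only within-practice contrasts, practices that always or never prescribe contribute nothing, which is why this conditional approach stays stable even when the outcome is rare.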

    2-D reconstruction of atmospheric concentration peaks from horizontal long path DOAS tomographic measurements: parametrisation and geometry within a discrete approach

    In this study, we theoretically investigate the reconstruction of 2-D cross sections through Gaussian concentration distributions, e.g. emission plumes, from long path DOAS measurements along a limited number of light paths. This is done systematically with respect to the extent of the (up to four) peaks and for six different measurement setups with 2-4 telescopes and 36 light paths each. We distinguish between cases with and without additional background concentrations. Our approach parametrises the unknown distribution by local piecewise constant or linear functions on a regular grid and solves the resulting discrete, linear system by a least squares minimum norm principle. We show that the linear parametrisation not only allows better representation of the distributions in terms of discretisation errors, but also better inversion of the system. We calculate area integrals of the concentration field (i.e. total emission rates for non-vanishing perpendicular wind speed components) and show that reconstruction errors and reconstructed area integrals within the peaks for narrow distributions crucially depend on the resolution of the reconstruction grid. A recently suggested grid translation method for the piecewise constant basis functions, combining reconstructions from several shifted grids, is modified for the linear basis functions and shown to reduce overall reconstruction errors, but not the uncertainty of concentration integrals. We suggest a procedure to subtract additional background concentration fields before inversion. We find large differences in reconstruction quality between the geometries and conclude that, in general, for a constant number of light paths, increasing the number of telescopes leads to better reconstruction results. It appears that geometries that give better results for negligible measurement errors, and parts of the geometry that are better resolved, are also less sensitive to increasing measurement errors
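The discrete approach described above, piecewise constant basis functions on a regular grid plus a least squares minimum norm inversion, can be sketched end to end. The grid size, path geometry and sampling density below are arbitrary toy choices; np.linalg.lstsq conveniently returns the minimum-norm least squares solution of the underdetermined system:

```python
import numpy as np

def path_matrix(paths, n, cell=1.0):
    """Geometry matrix A with A[i, j] = length of light path i inside
    grid cell j (piecewise-constant basis on an n x n grid), estimated
    by dense sampling along each straight path."""
    a = np.zeros((len(paths), n * n))
    for i, (p0, p1) in enumerate(paths):
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        length = np.linalg.norm(p1 - p0)
        ts = np.linspace(0.0, 1.0, 400)
        pts = p0 + ts[:, None] * (p1 - p0)          # sample points on path
        cells = np.clip((pts // cell).astype(int), 0, n - 1)
        for r, c in cells:
            a[i, r * n + c] += length / len(ts)     # path length per cell
    return a

n = 10
yy, xx = np.mgrid[0:n, 0:n]
truth = np.exp(-((xx - 4.5) ** 2 + (yy - 4.5) ** 2) / 8.0)  # Gaussian peak

rng = np.random.default_rng(1)
paths = [((0.0, rng.uniform(0, n)), (float(n), rng.uniform(0, n)))
         for _ in range(20)]                  # 20 light paths, 100 unknowns
a = path_matrix(paths, n)
meas = a @ truth.ravel()                      # simulated DOAS path integrals

# minimum-norm least squares solution of the rank-deficient system
recon, *_ = np.linalg.lstsq(a, meas, rcond=None)
recon = recon.reshape(n, n)
```

With 20 measurements and 100 unknowns the system is strongly underdetermined, which is exactly why the choice of basis functions, grid resolution and path geometry studied in the paper dominates the reconstruction quality.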