
    The trajectory of counterfactual simulation in development

    Young children often struggle to answer the question “what would have happened?”, particularly in cases where the adult-like “correct” answer has the same outcome as the event that actually occurred. Previous work has assumed that children fail because they cannot engage in accurate counterfactual simulations: they have trouble considering what to change and what to keep fixed when comparing counterfactual alternatives to reality. However, most developmental studies of counterfactual reasoning have relied on binary yes/no responses to counterfactual questions about complex narratives, and so have only been able to document when these failures occur, not why and how. Here, we investigate counterfactual reasoning in a domain in which specific counterfactual possibilities are very concrete: simple collision interactions. In Experiment 1, we show that 5- to 10-year-old children (recruited from schools and museums in Connecticut) succeed in making predictions but struggle to answer binary counterfactual questions. In Experiment 2, we use a multiple-choice method that allows children to select a specific counterfactual possibility. We find evidence that 4- to 6-year-old children (recruited online from across the United States) do conduct counterfactual simulations, but that the counterfactual possibilities younger children consider differ from adult-like reasoning in systematic ways. Experiment 3 provides further evidence that young children engage in simulation rather than using a simpler visual matching strategy. Together, these experiments show that developmental change in counterfactual reasoning is not simply a matter of whether children engage in counterfactual simulation but also of how they do so. (PsycInfo Database Record (c) 2021 APA, all rights reserved)

    Quantification of depth of anesthesia by nonlinear time series analysis of brain electrical activity

    We investigate several quantifiers of the electroencephalogram (EEG) signal with respect to their ability to indicate depth of anesthesia. For 17 patients anesthetized with sevoflurane, three established measures (two spectral and one based on the bispectrum), as well as a phase-space-based nonlinear correlation index, were computed from consecutive EEG epochs. In the absence of an independent way to determine anesthesia depth, the reference standard was derived from measured blood plasma concentrations of the anesthetic via a pharmacokinetic/pharmacodynamic model for the estimated effective brain concentration of sevoflurane. In most patients, the highest correlation is observed for the nonlinear correlation index D*. In contrast to the spectral measures, D* is found to decrease monotonically with increasing (estimated) depth of anesthesia, even when a "burst-suppression" pattern occurs in the EEG. The findings show the potential for applications of concepts derived from the theory of nonlinear dynamics, even if little can be assumed about the process under investigation.
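
    The abstract does not define D* in detail, so the following is only a generic illustration of the kind of phase-space computation involved: a Grassberger-Procaccia-style correlation index estimated from a time-delay embedding of a single epoch. The embedding dimension, delay, radii, and the white-noise "epoch" are all arbitrary assumptions here, not the authors' method.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist

    def delay_embed(x, dim=5, tau=4):
        """Time-delay embedding of a 1-D signal into dim-dimensional phase space."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    # Illustrative epoch: white noise standing in for a real EEG segment.
    rng = np.random.default_rng(0)
    epoch = rng.standard_normal(2000)
    points = delay_embed(epoch)

    # Correlation sums C(r) = fraction of point pairs closer than r; the slope
    # of log C(r) vs. log r gives a correlation-dimension-like index.
    d = pdist(points)
    radii = np.quantile(d, [0.02, 0.05, 0.1, 0.2, 0.4])
    c = np.array([(d < r).mean() for r in radii])
    slope = np.polyfit(np.log(radii), np.log(c), 1)[0]
    print(f"correlation index estimate: {slope:.2f}")
    ```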

    Acute effect of meal glycemic index and glycemic load on blood glucose and insulin responses in humans

    OBJECTIVE: Foods with contrasting glycemic index (GI), when incorporated into a meal, are able to differentially modify glycemia and insulinemia. However, little is known about whether this depends on the size of the meal. The purposes of this study were: i) to determine whether the differential impact on blood glucose and insulin responses induced by contrasting GI foods is similar when the foods are provided in meals of different sizes; and ii) to determine the relationship between the total meal glycemic load and the observed serum glucose and insulin responses. METHODS: Twelve obese women (BMI 33.7 ± 2.4 kg/m²) were recruited. Subjects received 4 different meals in random order. Two meals had a low glycemic index (40–43%) and two had a high glycemic index (86–91%). Both meal types were given as two meal sizes, with energy supply corresponding to 23% and 49% of predicted basal metabolic rate. Thus, meals with three different glycemic loads (95, 45–48 and 22 g) were administered. Blood samples were taken before and after each meal to determine glucose, free fatty acid, insulin and glucagon concentrations over a 5-h period. RESULTS: An almost 2-fold higher serum glucose and insulin incremental area under the curve (AUC) over 2 h was observed for the high- versus low-glycemic-index meals of the same size (p < 0.05); however, the serum glucose difference was not significant for the small meals (p = 0.38). Calculated meal glycemic load was associated with the 2-h and 5-h serum glucose (r = 0.58, p < 0.01) and insulin (r = 0.54, p < 0.01) incremental and total AUC. In fact, when comparing the two meals with similar glycemic load but differing carbohydrate amount and type, very similar serum glucose and insulin responses were found. No differences were observed in the serum free fatty acid and glucagon profiles in response to meal glycemic index. CONCLUSION: This study showed that foods of contrasting glycemic index induced a proportionally comparable difference in serum insulin response when provided in both small and large meals. The same was true for the serum glucose response, but only in large meals. Glycemic load was useful in predicting the acute impact on blood glucose and insulin responses within the context of mixed meals.
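
    As background for the glycemic-load figures quoted above, both quantities reduce to simple arithmetic. The sketch below computes a meal's glycemic load (GI as a percentage times grams of available carbohydrate, divided by 100) and an incremental AUC by the trapezoidal rule above the pre-meal baseline; the glucose profile is made up, and clipping below-baseline excursions to zero is one common convention, not necessarily the one used in the study.

    ```python
    import numpy as np

    def glycemic_load(gi_percent, carb_g):
        """Meal glycemic load: GI (as a %) times grams of available carbohydrate / 100."""
        return gi_percent * carb_g / 100.0

    def incremental_auc(t_min, glucose, baseline=None):
        """Trapezoidal area above the pre-meal baseline, with excursions
        below baseline clipped to zero (one common iAUC convention)."""
        g = np.asarray(glucose, float)
        t = np.asarray(t_min, float)
        base = g[0] if baseline is None else baseline
        excess = np.clip(g - base, 0.0, None)
        return np.sum((excess[1:] + excess[:-1]) / 2.0 * np.diff(t))

    # Hypothetical 2-h glucose profile (mg/dL), sampled every 30 min.
    t = [0, 30, 60, 90, 120]
    g = [90, 140, 125, 105, 95]
    print(glycemic_load(gi_percent=88, carb_g=108))  # ~95 g, like the large high-GI meal
    print(incremental_auc(t, g))                     # in mg/dL x min
    ```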

    Probing quantum gravity using photons from a flare of the active galactic nucleus Markarian 501 observed by the MAGIC telescope

    We analyze the timing of photons observed by the MAGIC telescope during a flare of the active galactic nucleus Mkn 501 for a possible correlation with energy, as suggested by some models of quantum gravity (QG), which predict a vacuum refractive index $\simeq 1 + (E/M_{\mathrm{QG}n})^n$, $n = 1, 2$. Parametrizing the delay between gamma-rays of different energies as $\Delta t = \pm\tau_l E$ or $\Delta t = \pm\tau_q E^2$, we find $\tau_l = (0.030 \pm 0.012)\,\mathrm{s/GeV}$ at the 2.5-sigma level, and $\tau_q = (3.71 \pm 2.57) \times 10^{-6}\,\mathrm{s/GeV^2}$, respectively. We use these results to establish lower limits $M_{\mathrm{QG}1} > 0.21 \times 10^{18}\,\mathrm{GeV}$ and $M_{\mathrm{QG}2} > 0.26 \times 10^{11}\,\mathrm{GeV}$ at the 95% C.L. Monte Carlo studies confirm the MAGIC sensitivity to propagation effects at these levels. Thermal plasma effects in the source are negligible, but we cannot exclude the importance of some other source effect.
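
    As a rough plausibility check on the quoted limit (not the paper's likelihood analysis), the leading-order linear model implies $\Delta t \approx (E/M_{\mathrm{QG}1})(D/c)$ over a source distance $D$, so an upper bound on $\tau_l$ translates into a lower bound on $M_{\mathrm{QG}1}$. In the sketch below, the distance to Mkn 501 is an assumed round number and cosmological corrections are ignored.

    ```python
    # Order-of-magnitude check: a vacuum refractive index 1 + E/M_QG1 gives
    # dt ~ (E / M_QG1) * (D / c), so a bound tau_l on dt/dE implies
    # M_QG1 >~ (D / c) / tau_l (with M in GeV when tau_l is in s/GeV).

    C = 2.998e8                    # speed of light, m/s
    MPC = 3.086e22                 # metres per megaparsec
    D = 140 * MPC                  # rough distance to Mkn 501 (z ~ 0.034), assumed

    tau_l = 0.030 + 2 * 0.012      # s/GeV, ~95% upper bound on the linear delay
    m_qg1 = (D / C) / tau_l        # GeV
    print(f"M_QG1 >~ {m_qg1:.2e} GeV")  # same order as the published 0.21e18 GeV
    ```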

    Haplotype-based quantitative trait mapping using a clustering algorithm

    BACKGROUND: With the availability of large-scale, high-density single-nucleotide polymorphism (SNP) markers, substantial effort has been made to identify disease-causing genes using linkage disequilibrium (LD) mapping by haplotype analysis of unrelated individuals. In addition to complex diseases, many continuously distributed quantitative traits are of primary clinical and health significance. However, the development of association mapping methods for quantitative traits using unrelated individuals has received relatively little attention. RESULTS: We recently developed an association mapping method for complex diseases by mining the sharing of haplotype segments (i.e., phased genotype pairs) in affected individuals that are rarely present in normal individuals. In this paper, we extend our previous work to address the problem of quantitative trait mapping from unrelated individuals. The method is non-parametric in nature, and statistical significance can be obtained by a permutation test. It can also be incorporated into the one-way ANCOVA (analysis of covariance) framework so that other factors and covariates can easily be accommodated. The effectiveness of the approach is demonstrated by extensive experimental studies using both simulated and real data sets. The results show that our haplotype-based approach is more robust than two statistical methods based on single markers: a single-SNP association test (SSA) and the Mann-Whitney U-test (MWU). The algorithm has been incorporated into our existing software package, HapMiner, which is available from our website. CONCLUSION: For QTL (quantitative trait loci) fine mapping, identifying QTNs (quantitative trait nucleotides) with realistic effects (each QTN contributing less than 10% of the total variance of the trait) requires large sample sizes (≥ 500) for all the methods. The overall performance of HapMiner is better than that of the other two methods, and its effectiveness further depends on factors such as recombination rates and the density of typed SNPs. Haplotype-based methods may provide higher power than single-SNP methods when using tag SNPs selected from a small number of samples or from other sources (such as HapMap data). Rank-based statistics usually have much lower power, as shown in our study.
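
    HapMiner's test statistic is built on haplotype-segment sharing, which the abstract does not fully specify; the sketch below illustrates only the generic permutation-test step it mentions, with a plain mean-difference statistic between carriers and non-carriers of a candidate segment standing in for the paper's measure.

    ```python
    import numpy as np

    def permutation_pvalue(trait, carrier_mask, n_perm=10_000, seed=0):
        """Two-sided permutation p-value for the mean-trait difference between
        carriers of a candidate haplotype segment and non-carriers."""
        rng = np.random.default_rng(seed)
        trait = np.asarray(trait, float)
        obs = trait[carrier_mask].mean() - trait[~carrier_mask].mean()
        count = 0
        for _ in range(n_perm):
            perm = rng.permutation(trait)        # shuffle trait values
            diff = perm[carrier_mask].mean() - perm[~carrier_mask].mean()
            if abs(diff) >= abs(obs):
                count += 1
        return (count + 1) / (n_perm + 1)        # add-one to avoid p = 0

    # Toy example: 200 individuals, carriers shifted by half a standard deviation.
    rng = np.random.default_rng(1)
    carriers = rng.random(200) < 0.3
    y = rng.standard_normal(200) + 0.5 * carriers
    print(permutation_pvalue(y, carriers))
    ```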

    Influence of Statistical Estimators of Mutual Information and Data Heterogeneity on the Inference of Gene Regulatory Networks

    The inference of gene regulatory networks from gene expression data is a difficult problem because the performance of the inference algorithms depends on a multitude of different factors. In this paper we study two of these. First, we investigate the influence of discrete mutual information (MI) estimators on the global and local network inference performance of the C3NET algorithm. More precisely, we study different MI estimators (Empirical, Miller-Madow, Shrink and Schürmann-Grassberger) in combination with discretization methods (equal-frequency, equal-width and global equal-width discretization). We observe the best global and local inference performance of C3NET for the Miller-Madow estimator with an equal-width discretization. Second, our numerical analysis can be considered a systems approach because we simulate gene expression data from an underlying gene regulatory network, instead of making a distributional assumption and sampling from it. We demonstrate that the latter approach, despite its popularity as the traditional way of studying MI estimators, is in fact not supported by simulated or biological expression data because of their heterogeneity. Hence, our study provides guidance for the efficient design of simulation studies in the context of network inference, supporting a systems approach.
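
    For concreteness, a minimal version of the winning combination named above (Miller-Madow entropy correction with equal-width discretization) can be written directly; the bin count and toy data are assumptions, and this is the estimator only, not the C3NET inference algorithm.

    ```python
    import numpy as np

    def mutual_information_mm(x, y, bins=10):
        """Discrete MI with the Miller-Madow bias correction, after
        equal-width discretization of both variables."""
        c_xy, _, _ = np.histogram2d(x, y, bins=bins)   # equal-width 2-D binning
        n = c_xy.sum()
        p_xy = c_xy / n
        p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)

        def h(p):  # empirical ("plug-in") entropy in nats
            p = p[p > 0].ravel()
            return -np.sum(p * np.log(p))

        def mm(p):  # Miller-Madow: add (K - 1) / (2n), K = occupied bins
            return h(p) + (np.count_nonzero(p) - 1) / (2 * n)

        return mm(p_x) + mm(p_y) - mm(p_xy)

    # Correlated toy data: the estimate should come out clearly above zero.
    rng = np.random.default_rng(0)
    a = rng.standard_normal(1000)
    b = a + 0.5 * rng.standard_normal(1000)
    print(mutual_information_mm(a, b))
    ```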

    Texture classification of proteins using support vector machines and bio-inspired metaheuristics

    6th International Joint Conference, BIOSTEC 2013, Barcelona, Spain, February 11-14, 2013. [Abstract] In this paper, a novel method for classifying two-dimensional polyacrylamide gel electrophoresis images is presented. The method uses textural features obtained through a feature selection process, for which we compare Genetic Algorithms and Particle Swarm Optimization. The selected features, among which the most decisive and representative ones appear to be those related to the second-order co-occurrence matrix, are then used as inputs for a Support Vector Machine. The accuracy of the proposed method is around 94%, statistically better performance than classification based on the entire feature set. This classification step can be very useful for discarding over-segmented areas after a protein segmentation or identification process.
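
    A minimal sketch of the pipeline described, using scikit-image's gray-level co-occurrence matrix and scikit-learn's SVM; the GA/PSO feature-selection stage is omitted and random patches stand in for real gel images, so this shows only the feature-extraction and classification steps, not the authors' full method.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.svm import SVC

    def glcm_features(image_u8, distances=[1, 2], angles=[0, np.pi / 2]):
        """Second-order co-occurrence (GLCM) features for one grayscale image."""
        glcm = graycomatrix(image_u8, distances=distances, angles=angles,
                            levels=256, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

    # Toy training run on random "gel patches" with random labels.
    rng = np.random.default_rng(0)
    X = np.array([glcm_features(rng.integers(0, 256, (64, 64), dtype=np.uint8))
                  for _ in range(40)])
    y = rng.integers(0, 2, 40)            # 2 classes, e.g. protein vs. artefact
    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.score(X, y))
    ```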

    Inferring gene regression networks with model trees

    BACKGROUND: Novel strategies are required to handle the huge amount of data produced by microarray technologies. To infer gene regulatory networks, the first step is to find direct regulatory relationships between genes by building so-called gene co-expression networks. These are typically generated using correlation statistics as pairwise similarity measures. Correlation-based methods are very useful for determining whether two genes have a strong global similarity, but they do not detect local similarities. RESULTS: We propose model trees as a method to identify gene interaction networks. While correlation-based methods analyze each pair of genes separately, our approach generates a single regression tree for each gene from the remaining genes. Finally, a graph of all the relationships among output and input genes is built, taking into account whether each pair of genes is statistically significant; for this we apply a statistical procedure to control the false discovery rate. The performance of our approach, named REGNET, is experimentally tested on two well-known data sets: a Saccharomyces cerevisiae data set and an E. coli data set. First, the biological coherence of the results is tested. Second, the E. coli transcriptional network (in the RegulonDB database) is used as a control to compare the results to those of a correlation-based method. This experiment shows that REGNET performs more accurately at detecting true gene associations than Pearson and Spearman zeroth- and first-order correlation-based methods. CONCLUSIONS: REGNET generates gene association networks from gene expression data and differs from correlation-based methods in that the relationship between one gene and the others is calculated simultaneously. Model trees are very useful for estimating the numerical values of target genes with linear regression functions. They are often more precise than linear regression models because they can fit different linear regressions to separate areas of the search space, favoring the inference of localized similarities over a more global similarity. Furthermore, the experimental results show the good performance of REGNET.
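
    A rough sketch of the per-gene regression idea, with two loudly flagged substitutions: an ordinary CART regression tree stands in for REGNET's model trees (which fit linear models in the leaves), and the FDR-controlled significance filter is omitted, so every split feature is kept as a candidate edge.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def infer_edges(expr, gene_names, max_depth=3):
        """For each target gene, fit a regression tree on the remaining genes
        and record which predictor genes the tree actually splits on."""
        edges = set()
        for j, target in enumerate(gene_names):
            X = np.delete(expr, j, axis=1)           # all genes except the target
            tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, expr[:, j])
            others = [g for g in gene_names if g != target]
            used = {others[f] for f in tree.tree_.feature if f >= 0}
            edges.update((src, target) for src in used)
        return edges

    # Toy expression matrix: 100 samples x 5 genes, with g1 driving g0.
    rng = np.random.default_rng(0)
    expr = rng.standard_normal((100, 5))
    expr[:, 0] = 2 * expr[:, 1] + 0.1 * rng.standard_normal(100)
    print(infer_edges(expr, [f"g{i}" for i in range(5)]))
    ```

    Without the significance filter this toy version will keep spurious edges from noise splits; REGNET's FDR step exists precisely to prune those.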

    Influence of the Time Scale on the Construction of Financial Networks

    BACKGROUND: In this paper we investigate the definition and formation of financial networks. Specifically, we study the influence of the time scale on their construction. METHODOLOGY/PRINCIPAL FINDINGS: For our analysis we use correlation-based networks obtained from the daily closing prices of stock market data. More precisely, we use the stocks that currently comprise the Dow Jones Industrial Average (DJIA) and estimate financial networks in which nodes correspond to stocks and edges correspond to non-vanishing correlation coefficients; that is, an edge is included in the network only if the corresponding correlation coefficient is statistically significantly different from zero. This construction procedure results in unweighted, undirected networks. By separating the time series of stock prices into non-overlapping intervals, we obtain one network per interval. The length of these intervals corresponds to the time scale of the data, whose influence on the construction of the networks is the subject of this paper. CONCLUSIONS/SIGNIFICANCE: Numerical analysis of four different measures as a function of the time scale used to construct the networks allows us to gain insight into the intrinsic time scale of the stock market with respect to a meaningful graph-theoretical analysis.
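
    A minimal sketch of the construction procedure for a single non-overlapping interval, under the assumption that significance is judged by a two-sided t-test on the Pearson correlation of log returns; the paper's exact significance procedure may differ.

    ```python
    import numpy as np
    from scipy import stats

    def correlation_network(prices, alpha=0.05):
        """Unweighted, undirected network from one window of daily closes:
        an edge links two stocks iff their log-return correlation differs
        significantly from zero (two-sided t-test at level alpha)."""
        returns = np.diff(np.log(prices), axis=0)
        n, k = returns.shape
        corr = np.corrcoef(returns, rowvar=False)
        # t statistic for a Pearson r with n - 2 degrees of freedom
        t = corr * np.sqrt((n - 2) / (1 - corr**2 + 1e-12))
        p = 2 * stats.t.sf(np.abs(t), df=n - 2)
        return (p < alpha) & ~np.eye(k, dtype=bool)   # boolean adjacency matrix

    # Toy interval: 3 stocks sharing a common factor plus 2 independent ones.
    rng = np.random.default_rng(0)
    base = rng.standard_normal((250, 1))
    noise = rng.standard_normal((250, 5))
    common = np.hstack([base, base, base, np.zeros((250, 2))])
    prices = np.exp(np.cumsum(0.01 * (noise + common), axis=0))
    print(correlation_network(prices).astype(int))
    ```

    Repeating this per non-overlapping interval yields one network per window, whose length sets the time scale studied in the paper.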