632 research outputs found

    Preferences for explanation generality develop early in biology but not physics

    One of the core functions of explanation is to support prediction and generalization. However, some explanations license a broader range of predictions than others. For instance, an explanation about biology could be presented as applying to a specific case (e.g., “this bear”) or more generally across “all animals.” The current study investigated how 5- to 7-year-olds (N=36), 11- to 13-year-olds (N=34), and adults (N=79) evaluate explanations at varying levels of generality in biology and physics. Findings revealed that even the youngest children preferred general explanations in biology. However, only older children and adults preferred explanation generality in physics. Findings are discussed in light of differences in our intuitions about biological and physical principles.

    The trajectory of counterfactual simulation in development

    Young children often struggle to answer the question “what would have happened?”, particularly in cases where the adult-like “correct” answer has the same outcome as the event that actually occurred. Previous work has assumed that children fail because they cannot engage in accurate counterfactual simulations: they have trouble considering what to change and what to keep fixed when comparing counterfactual alternatives to reality. However, most developmental studies on counterfactual reasoning have relied on binary yes/no responses to counterfactual questions about complex narratives, and so have only been able to document when these failures occur but not why and how. Here, we investigate counterfactual reasoning in a domain in which specific counterfactual possibilities are very concrete: simple collision interactions. In Experiment 1, we show that 5- to 10-year-old children (recruited from schools and museums in Connecticut) succeed in making predictions but struggle to answer binary counterfactual questions. In Experiment 2, we use a multiple-choice method to allow children to select a specific counterfactual possibility. We find evidence that 4- to 6-year-old children (recruited online from across the United States) do conduct counterfactual simulations, but the counterfactual possibilities younger children consider differ from adult-like reasoning in systematic ways. Experiment 3 provides further evidence that young children engage in simulation rather than using a simpler visual matching strategy. Together, these experiments show that the developmental changes in counterfactual reasoning are not simply a matter of whether children engage in counterfactual simulation but also how they do so.

    Measures of Model Performance Based On the Log Accuracy Ratio

    Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature; we highlight the benefits and drawbacks of MAPE and of proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples. Peer reviewed.
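    The two derived metrics lend themselves to a compact implementation. Below is a minimal sketch, assuming the standard definitions of the accuracy ratio Q = y_pred / y_obs, the median symmetric accuracy MSA = 100 (exp(median |ln Q|) − 1), and the symmetric signed percentage bias SSPB = 100 sign(M) (exp|M| − 1) with M = median(ln Q); function and variable names are illustrative, not from the paper:

    ```python
    import math

    def log_accuracy_ratios(predicted, observed):
        # ln(Q_i) = ln(y_pred_i / y_obs_i); requires strictly positive values
        return [math.log(p / o) for p, o in zip(predicted, observed)]

    def median(xs):
        s = sorted(xs)
        n, mid = len(s), len(s) // 2
        return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

    def median_symmetric_accuracy(predicted, observed):
        # MSA = 100 * (exp(median(|ln Q|)) - 1), expressed in percent
        q = log_accuracy_ratios(predicted, observed)
        return 100.0 * (math.exp(median([abs(x) for x in q])) - 1.0)

    def symmetric_signed_percentage_bias(predicted, observed):
        # SSPB = 100 * sign(M) * (exp(|M|) - 1), M = median(ln Q)
        m = median(log_accuracy_ratios(predicted, observed))
        return 100.0 * math.copysign(1.0, m) * (math.exp(abs(m)) - 1.0)
    ```

    Note the symmetry that motivates these metrics: over-prediction by a factor of 2 and under-prediction by a factor of 2 both give MSA = 100%, whereas MAPE penalizes the two cases unequally.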

    Boundaries of Siegel Disks: Numerical Studies of their Dynamics and Regularity

    Siegel disks are domains around fixed points of holomorphic maps in which the maps are locally linearizable (i.e., become a rotation under an appropriate change of coordinates which is analytic in a neighborhood of the origin). The dynamical behavior of the iterates of the map on the boundary of the Siegel disk exhibits strong scaling properties which have been intensively studied in the physical and mathematical literature. In the cases we study, the boundary of the Siegel disk is a Jordan curve containing a critical point of the map (we consider critical maps of different orders), and there exists a natural parametrization which transforms the dynamics on the boundary into a rotation. We compute numerically this parametrization and use methods of harmonic analysis to compute the global Hölder regularity of the parametrization for different maps and rotation numbers. We obtain that the regularity of the boundaries and the scaling exponents are universal numbers in the sense of renormalization theory (i.e., they do not depend on the map when the map ranges in an open set), and only depend on the order of the critical point of the map in the boundary of the Siegel disk and the tail of the continued fraction expansion of the rotation number. We also discuss some possible relations between the regularity of the parametrization of the boundaries and the corresponding scaling exponents. (C) 2008 American Institute of Physics.

    Quantification of depth of anesthesia by nonlinear time series analysis of brain electrical activity

    We investigate several quantifiers of the electroencephalogram (EEG) signal with respect to their ability to indicate depth of anesthesia. For 17 patients anesthetized with Sevoflurane, three established measures (two spectral and one based on the bispectrum), as well as a phase space based nonlinear correlation index, were computed from consecutive EEG epochs. In the absence of an independent way to determine anesthesia depth, the standard was derived from measured blood plasma concentrations of the anesthetic via a pharmacokinetic/pharmacodynamic model for the estimated effective brain concentration of Sevoflurane. In most patients, the highest correlation is observed for the nonlinear correlation index D*. In contrast to spectral measures, D* is found to decrease monotonically with increasing (estimated) depth of anesthesia, even when a "burst-suppression" pattern occurs in the EEG. The findings show the potential for applications of concepts derived from the theory of nonlinear dynamics, even if little can be assumed about the process under investigation. Comment: 7 pages, 5 figures.

    The impact of liquidity on the capital structure: a case study of Croatian firms

    Background: Previous studies have shown that in some countries, liquid assets increased leverage, while in other countries liquid firms were more frequently financed by their own capital and therefore were less leveraged. Objectives: The aim of this paper is to investigate the impact of liquidity on the capital structure of Croatian firms. Methods/Approach: The Pearson correlation coefficient is applied to test the relationships between liquidity ratios and debt ratios, between the share of retained earnings to capital and liquidity ratios, and between the structure of current assets and leverage. Results: A survey has been conducted on a sample of 1058 Croatian firms. There are statistically significant correlations between liquidity ratios and leverage ratios. Also, there are statistically significant correlations between leverage ratios and the structure of current assets. The relationship between liquidity ratios and the short-term leverage is stronger than between liquidity ratios and the long-term leverage. Conclusions: The more liquid assets firms have, the less they are leveraged. Long-term leveraged firms are more liquid. Increasing inventory levels leads to an increase in leverage. Furthermore, increasing the cash in current assets leads to a reduction in the short-term and the long-term leverage.
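    Since the analysis rests on Pearson correlations between liquidity and leverage ratios, here is a minimal sketch of that computation; the firm-level figures below are fabricated for illustration only and are not the study's data:

    ```python
    import math

    def pearson_r(x, y):
        # Pearson product-moment correlation between two equal-length samples
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical firm-level data: current ratio vs. short-term debt ratio.
    # A negative r would match the paper's finding that more liquid firms
    # carry less short-term leverage.
    current_ratio = [0.8, 1.1, 1.5, 2.0, 2.6, 3.1]
    short_term_debt = [0.55, 0.48, 0.40, 0.33, 0.25, 0.20]
    r = pearson_r(current_ratio, short_term_debt)
    ```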

    Search based algorithms for test sequence generation in functional testing

    Published in Information and Software Technology (DOI: 10.1016/j.infsof.2014.07.014). Context: The generation of dynamic test sequences from a formal specification complements traditional testing methods in order to find errors in the source code. Objective: In this paper we extend one specific combinatorial test approach, the Classification Tree Method (CTM), with transition information to generate test sequences. Although we use CTM, this extension is also possible for any combinatorial testing method. Method: The generation of minimal test sequences that fulfill the demanded coverage criteria is an NP-hard problem. Therefore, search-based approaches are required to find such (near) optimal test sequences. Results: The experimental analysis compares the search-based technique with a greedy algorithm on a set of 12 hierarchical concurrent models of programs extracted from the literature. Our proposed search-based approaches (GTSG and ACOts) are able to generate test sequences by finding the shortest valid path to achieve full class (state) and transition coverage. Conclusion: The extended classification tree is useful for generating test sequences. Moreover, the experimental analysis reveals that our search-based approaches are better than the greedy deterministic approach, especially in the most complex instances. All presented algorithms are integrated into a professional tool for functional testing. Funding: Spanish Ministry of Economy and Competitiveness and FEDER under contract TIN2011-28194 and fellowship BES-2012-055967; project 8.06/5.47.4142 in collaboration with the VSB-Tech. Univ. of Ostrava, Universidad de Málaga, Andalucía Tech., and EU Grant ICT-257574 (FITTEST project).
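    A greedy baseline for transition coverage of the kind the search-based approaches are compared against can be sketched generically. This is an illustrative reconstruction over a toy state machine, not the paper's GTSG or ACOts implementations: repeatedly append the shortest walk that traverses a not-yet-covered transition, until every transition is covered.

    ```python
    from collections import deque

    def greedy_transition_cover(transitions, start):
        """Greedy baseline: from the current end state, BFS for the nearest
        uncovered transition, append that walk, and repeat until all
        transitions in the model are covered."""
        adj = {}
        for s, t in transitions:
            adj.setdefault(s, []).append(t)
        uncovered = set(transitions)
        seq = [start]
        while uncovered:
            frontier = deque([(seq[-1], [])])
            seen = {seq[-1]}
            found = None
            while frontier:
                state, path = frontier.popleft()
                for nxt in adj.get(state, []):
                    if (state, nxt) in uncovered:
                        found = path + [nxt]  # walk reaching an uncovered edge
                        break
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, path + [nxt]))
                if found:
                    break
            if found is None:
                raise ValueError("some transitions unreachable from current state")
            # every edge traversed along the walk counts as covered
            for a, b in zip([seq[-1]] + found, found):
                uncovered.discard((a, b))
            seq.extend(found)
        return seq
    ```

    Such a deterministic greedy strategy yields valid but not necessarily minimal sequences, which is why metaheuristic search pays off on the harder instances.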

    Probing quantum gravity using photons from a flare of the active galactic nucleus Markarian 501 observed by the MAGIC telescope

    We analyze the timing of photons observed by the MAGIC telescope during a flare of the active galactic nucleus Mkn 501 for a possible correlation with energy, as suggested by some models of quantum gravity (QG), which predict a vacuum refractive index \simeq 1 + (E/M_{QGn})^n, n = 1,2. Parametrizing the delay between gamma-rays of different energies as \Delta t = \pm\tau_l E or \Delta t = \pm\tau_q E^2, we find \tau_l = (0.030\pm0.012) s/GeV at the 2.5-sigma level, and \tau_q = (3.71\pm2.57)x10^{-6} s/GeV^2, respectively. We use these results to establish lower limits M_{QG1} > 0.21x10^{18} GeV and M_{QG2} > 0.26x10^{11} GeV at the 95% C.L. Monte Carlo studies confirm the MAGIC sensitivity to propagation effects at these levels. Thermal plasma effects in the source are negligible, but we cannot exclude the importance of some other source effect. Comment: 12 pages, 3 figures, Phys. Lett. B, reflects published version.

    Acute effect of meal glycemic index and glycemic load on blood glucose and insulin responses in humans

    OBJECTIVE: Foods with contrasting glycemic index (GI), when incorporated into a meal, are able to differentially modify glycemia and insulinemia. However, little is known about whether this effect depends on the size of the meal. The purposes of this study were: i) to determine whether the differential impact on blood glucose and insulin responses induced by contrasting GI foods is similar when the foods are provided in meals of different sizes; and ii) to determine the relationship between the total meal glycemic load and the observed serum glucose and insulin responses. METHODS: Twelve obese women (BMI 33.7 ± 2.4 kg/m(2)) were recruited. Subjects received 4 different meals in random order. Two meals had a low glycemic index (40–43%) and two had a high glycemic index (86–91%). Both meal types were given as two meal sizes, with energy supply corresponding to 23% and 49% of predicted basal metabolic rate. Thus, meals with three different glycemic loads (95, 45–48 and 22 g) were administered. Blood samples were taken before and after each meal to determine glucose, free-fatty acid, insulin and glucagon concentrations over a 5-h period. RESULTS: An almost 2-fold higher serum glucose and insulin incremental area under the curve (AUC) over 2 h was observed for same-sized high- versus low-glycemic index meals (p < 0.05); however, for the serum glucose response in small meals this was not significant (p = 0.38). Calculated meal glycemic load was associated with 2 and 5 h serum glucose (r = 0.58, p < 0.01) and insulin (r = 0.54, p < 0.01) incremental and total AUC. In fact, when comparing the two meals with similar glycemic load but differing carbohydrate amount and type, very similar serum glucose and insulin responses were found. No differences were observed in serum free-fatty acid and glucagon profiles in response to meal glycemic index.
CONCLUSION: This study showed that foods of contrasting glycemic index induced a proportionally comparable difference in serum insulin response when provided in both small and large meals. The same was true for the serum glucose response, but only in large meals. Glycemic load was useful in predicting the acute impact on blood glucose and insulin responses within the context of mixed meals.
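    The incremental AUC underlying the glucose and insulin response measures can be sketched as follows, assuming the common convention of a trapezoidal rule with below-baseline increments clipped to zero (the abstract does not spell out the exact AUC procedure used):

    ```python
    def incremental_auc(times, values):
        """Incremental area under the curve above the fasting baseline
        (the value at t = 0), computed with the trapezoidal rule; segments
        below baseline contribute zero, one common convention for
        glycemic-response iAUC."""
        baseline = values[0]
        auc = 0.0
        pairs = list(zip(times, values))
        for (t0, v0), (t1, v1) in zip(pairs, pairs[1:]):
            inc0 = max(v0 - baseline, 0.0)
            inc1 = max(v1 - baseline, 0.0)
            auc += 0.5 * (inc0 + inc1) * (t1 - t0)
        return auc
    ```

    For example, a glucose trace that rises from a baseline of 5 mmol/L to 7 mmol/L at 1 h and returns to 5 mmol/L at 2 h has an incremental AUC of 2.0 mmol·h/L.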