
    Measures of Model Performance Based On the Log Accuracy Ratio

    Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review the existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature; we highlight the benefits and drawbacks of MAPE and of proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples. Peer reviewed.
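
    As a quick illustration of the quantities discussed above, the sketch below computes MAPE alongside the two log-accuracy-ratio metrics, assuming the definitions median symmetric accuracy = 100·(exp(median|Q|) − 1) and symmetric signed percentage bias = 100·sgn(median Q)·(exp(|median Q|) − 1), where Q = ln(predicted/observed); the function names and example data are illustrative.

```python
import numpy as np

def mape(observed, predicted):
    """Mean absolute percentage error, shown for comparison with the log-ratio metrics."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((predicted - observed) / observed))

def median_symmetric_accuracy(observed, predicted):
    """100 * (exp(median(|ln(pred/obs)|)) - 1); assumes strictly positive data."""
    q = np.log(np.asarray(predicted, float) / np.asarray(observed, float))
    return 100.0 * (np.exp(np.median(np.abs(q))) - 1.0)

def symmetric_signed_percentage_bias(observed, predicted):
    """100 * sign(M) * (exp(|M|) - 1), where M is the median log accuracy ratio."""
    q = np.log(np.asarray(predicted, float) / np.asarray(observed, float))
    m = np.median(q)
    return 100.0 * np.sign(m) * (np.exp(np.abs(m)) - 1.0)

# Illustrative example: a model that over-predicts a positive quantity by 20%.
obs = np.array([1.0, 2.0, 5.0, 10.0])
pred = 1.2 * obs
print(mape(obs, pred))                               # 20.0
print(median_symmetric_accuracy(obs, pred))          # ~20.0
print(symmetric_signed_percentage_bias(obs, pred))   # ~+20.0 (over-prediction)
```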

    Reduction of the size of datasets by using evolutionary feature selection: the case of noise in a modern city

    Smart city initiatives have emerged to mitigate the negative effects of the very fast growth of urban areas. Most of the population in our cities is exposed to high levels of noise that generate discomfort and various health problems. These issues may be mitigated by applying different smart city solutions, some of which require highly accurate noise information to provide the best possible quality of service. In this study, we have designed a machine learning approach based on genetic algorithms to analyze noise data captured on a university campus. This method reduces the amount of data required to classify the noise by addressing a feature selection optimization problem. The experimental results show that our approach improved the accuracy by 20% (achieving an accuracy of 87% with a reduction of up to 85% of the original dataset). Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. This research has been partially funded by the Spanish MINECO and FEDER projects TIN2016-81766-REDT (http://cirti.es) and TIN2017-88213-R (http://6city.lcc.uma.es).
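
    A hedged sketch of the general idea (wrapper-style feature selection with a genetic algorithm over binary feature masks); the classifier, fitness penalty, population size and operators below are illustrative assumptions, not the configuration used in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for the noise dataset: 300 samples, 40 candidate features.
X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a classifier trained on the selected features,
    lightly penalised by how many features (i.e. how much data) are kept."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(DecisionTreeClassifier(random_state=0),
                          X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.002 * mask.sum()

pop = rng.integers(0, 2, size=(20, X.shape[1]))            # 20 random feature masks
for generation in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                 # keep the 10 best masks
    children = []
    while len(children) < 10:
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])                   # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05                # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", int(best.sum()), "of", X.shape[1])
```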

    The trajectory of counterfactual simulation in development

    Young children often struggle to answer the question “what would have happened?”, particularly in cases where the adult-like “correct” answer has the same outcome as the event that actually occurred. Previous work has assumed that children fail because they cannot engage in accurate counterfactual simulations: they have trouble considering what to change and what to keep fixed when comparing counterfactual alternatives to reality. However, most developmental studies on counterfactual reasoning have relied on binary yes/no responses to counterfactual questions about complex narratives, and so have only been able to document when these failures occur but not why and how. Here, we investigate counterfactual reasoning in a domain in which specific counterfactual possibilities are very concrete: simple collision interactions. In Experiment 1, we show that 5- to 10-year-old children (recruited from schools and museums in Connecticut) succeed in making predictions but struggle to answer binary counterfactual questions. In Experiment 2, we use a multiple-choice method to allow children to select a specific counterfactual possibility. We find evidence that 4- to 6-year-old children (recruited online from across the United States) do conduct counterfactual simulations, but the counterfactual possibilities younger children consider differ from adult-like reasoning in systematic ways. Experiment 3 provides further evidence that young children engage in simulation rather than using a simpler visual matching strategy. Together, these experiments show that the developmental changes in counterfactual reasoning are not simply a matter of whether children engage in counterfactual simulation but also how they do so. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

    The Critical Coupling Likelihood Method: A new approach for seamless integration of environmental and operating conditions of gravitational wave detectors into gravitational wave searches

    Any search effort for gravitational waves (GW) using interferometric detectors like LIGO needs to be able to identify if and when noise is coupling into the detector's output signal. The Critical Coupling Likelihood (CCL) method has been developed to characterize potential noise coupling and, in the future, to aid GW search efforts. By testing two hypotheses about pairs of channels, CCL is able to distinguish undesirable coupled instrumental noise from potential GW candidates. Our preliminary results show that CCL can associate up to ~80% of observed artifacts with SNR ≥ 8 with local noise sources, while reducing the duty cycle of the instrument by ≲15%. An approach like CCL will become increasingly important as GW research moves into the Advanced LIGO era, going from the first GW detection to GW astronomy.
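
    The abstract does not give the form of the CCL statistic, so the sketch below only illustrates the kind of channel-pair reasoning involved: a simple Poisson coincidence test asking whether glitches in the detector output line up with an auxiliary channel more often than chance would predict. All function names and parameters are illustrative assumptions, not part of CCL itself.

```python
import numpy as np
from scipy.stats import poisson

def coincidence_test(gw_times, aux_times, window=0.1, duration=3600.0):
    """Count detector-output glitches within +/-window seconds of an auxiliary-channel
    glitch and compare against the Poisson expectation for accidental coincidences."""
    aux = np.sort(np.asarray(aux_times, float))
    gw = np.asarray(gw_times, float)
    idx = np.searchsorted(aux, gw)
    nearest = np.minimum(np.abs(gw - aux[np.clip(idx, 0, len(aux) - 1)]),
                         np.abs(gw - aux[np.clip(idx - 1, 0, len(aux) - 1)]))
    observed = int(np.sum(nearest <= window))
    expected = len(gw) * len(aux) * 2.0 * window / duration
    p_value = poisson.sf(observed - 1, expected)     # P(>= observed | purely accidental)
    return observed, expected, p_value

# Toy example: 40 auxiliary glitches, 30 of which couple into the detector output.
rng = np.random.default_rng(1)
aux = rng.uniform(0.0, 3600.0, 40)
gw = np.concatenate([aux[:30] + rng.normal(0.0, 0.02, 30), rng.uniform(0.0, 3600.0, 10)])
print(coincidence_test(gw, aux))
```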

    Boundaries of Siegel Disks: Numerical Studies of their Dynamics and Regularity

    Siegel disks are domains around fixed points of holomorphic maps in which the maps are locally linearizable (i.e., become a rotation under an appropriate change of coordinates which is analytic in a neighborhood of the origin). The dynamical behavior of the iterates of the map on the boundary of the Siegel disk exhibits strong scaling properties which have been intensively studied in the physical and mathematical literature. In the cases we study, the boundary of the Siegel disk is a Jordan curve containing a critical point of the map (we consider critical maps of different orders), and there exists a natural parametrization which transforms the dynamics on the boundary into a rotation. We compute this parametrization numerically and use methods of harmonic analysis to compute the global Hölder regularity of the parametrization for different maps and rotation numbers. We obtain that the regularity of the boundaries and the scaling exponents are universal numbers in the sense of renormalization theory (i.e., they do not depend on the map when the map ranges in an open set), and depend only on the order of the critical point of the map in the boundary of the Siegel disk and the tail of the continued fraction expansion of the rotation number. We also discuss some possible relations between the regularity of the parametrization of the boundaries and the corresponding scaling exponents. (C) 2008 American Institute of Physics.
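
    A minimal sketch (not the paper's numerical method): for the quadratic map f(z) = e^{2πiθ}z + z² with golden-mean rotation number θ, the critical point lies on the Siegel disk boundary, so iterating it traces the Jordan curve whose regularity is studied here. The map, rotation number and iteration count are illustrative choices.

```python
import numpy as np

theta = (np.sqrt(5.0) - 1.0) / 2.0        # golden-mean rotation number
lam = np.exp(2j * np.pi * theta)

def f(z):
    """Quadratic map with a Siegel fixed point at the origin."""
    return lam * z + z * z

z = -lam / 2.0                            # critical point of f, i.e. where f'(z) = 0
orbit = np.empty(10000, dtype=complex)
for n in range(orbit.size):               # the forward critical orbit traces the boundary
    orbit[n] = z
    z = f(z)

# The paper parametrizes this curve and studies its Hölder regularity; here we only
# report the extent of the sampled boundary as a sanity check.
print(orbit.real.min(), orbit.real.max(), orbit.imag.min(), orbit.imag.max())
```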

    A Study of Archiving Strategies in Multi-Objective PSO for Molecular Docking

    Molecular docking is a complex optimization problem aimed at predicting the position of a ligand molecule in the active site of a receptor with the lowest binding energy. This problem can be formulated as a bi-objective optimization problem by minimizing the binding energy and the Root Mean Square Deviation (RMSD) difference in the coordinates of ligands. In this context, the SMPSO multi-objective swarm-intelligence algorithm has shown remarkable performance. SMPSO is characterized by having an external archive used to store the non-dominated solutions, which also serves as the basis of the leader selection strategy. In this paper, we analyze several SMPSO variants based on different archiving strategies on a benchmark of molecular docking instances. Our study reveals that SMPSOhv, which uses a hypervolume-contribution-based archive, shows the overall best performance. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
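
    A hedged sketch of the archiving idea behind SMPSOhv: keep mutually non-dominated (energy, RMSD)-like objective pairs and, when the archive is full, discard the member with the smallest exclusive hypervolume contribution. The reference point, archive size and bookkeeping below are illustrative assumptions, not SMPSOhv's exact implementation.

```python
import numpy as np

def dominates(a, b):
    """True if solution a Pareto-dominates b (both objectives minimised)."""
    return bool(np.all(a <= b) and np.any(a < b))

def hv_contributions(points, ref):
    """Exclusive hypervolume contribution of each point (two minimised objectives)."""
    order = np.argsort(points[:, 0])
    p = points[order]
    contrib = np.empty(len(p))
    for i in range(len(p)):
        right = p[i + 1, 0] if i + 1 < len(p) else ref[0]
        upper = p[i - 1, 1] if i > 0 else ref[1]
        contrib[i] = (right - p[i, 0]) * (upper - p[i, 1])
    out = np.empty(len(p))
    out[order] = contrib
    return out

def add_to_archive(archive, candidate, max_size=10, ref=(1.0, 1.0)):
    """Insert a candidate, drop solutions it dominates, and if the archive overflows
    remove the member with the smallest hypervolume contribution."""
    if any(dominates(a, candidate) for a in archive):
        return archive
    archive = [a for a in archive if not dominates(candidate, a)] + [candidate]
    if len(archive) > max_size:
        worst = int(np.argmin(hv_contributions(np.array(archive), np.array(ref))))
        archive = [a for i, a in enumerate(archive) if i != worst]
    return archive

# Toy usage with random (binding-energy, RMSD)-like objective pairs scaled to [0, 1).
rng = np.random.default_rng(0)
archive = []
for _ in range(200):
    archive = add_to_archive(archive, rng.random(2))
print(np.array(archive))
```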

    Footrot and interdigital dermatitis in sheep: Farmer satisfaction with current management, their ideal management and sources used to adopt new strategies

    The aims of this research were to identify the management practices that sheep farmers currently use to treat and prevent footrot in sheep, to establish whether farmers consider these successful management tools, and to find out how they would ideally like to manage footrot in their flocks. Over 90% of lameness in sheep in the UK is caused by Dichelobacter nodosus, which presents clinically as interdigital dermatitis (ID) alone or with separation of hoof horn (FR). A questionnaire was sent to 265 farmers to investigate their current management and their satisfaction with current management of the spectrum of clinical presentations of footrot. Farmers were also asked their ideal management of footrot and their interest in, and sources of information for, change. Approximately 160 farmers responded. Farmers satisfied with current management reported a prevalence of lameness ≤5%. These farmers caught and treated lame sheep within 3 days of first seeing them lame, and treated sheep with FR and ID with parenteral antibacterials. Farmers dissatisfied with their management reported a prevalence of lameness >5%. These farmers practised routine foot trimming, footbathing and vaccination against footrot. Whilst 89% of farmers said they were satisfied with their management of FR, over 34% were interested in changing management. Farmers identified veterinarians as the most influential source of new information. Farmers reported that ideally they would control FR by culling/isolating lame sheep, sourcing replacements from non-lame parents, trimming feet less, using antibacterial treatments less and using vaccination more. Footbathing was a commonly used management practice that was linked with dissatisfaction, yet it was also listed highly as an ideal management. Consequently, some of the ideal managements are in agreement with our understanding of disease control (culling and isolation, sourcing healthy replacements), but others are in contrast with our current knowledge of management and with farmers' self-reported satisfaction with their management of footrot (less use of antibacterial treatment, more footbathing and vaccination). One explanation for this is the theory of cognitive dissonance, where belief follows behaviour, i.e. farmers report that they believe an ideal which is what they are currently doing, even if the management is sub-optimal.

    Quantification of depth of anesthesia by nonlinear time series analysis of brain electrical activity

    We investigate several quantifiers of the electroencephalogram (EEG) signal with respect to their ability to indicate depth of anesthesia. For 17 patients anesthetized with sevoflurane, three established measures (two spectral and one based on the bispectrum), as well as a phase-space-based nonlinear correlation index, were computed from consecutive EEG epochs. In the absence of an independent way to determine anesthesia depth, the standard was derived from measured blood plasma concentrations of the anesthetic via a pharmacokinetic/pharmacodynamic model for the estimated effective brain concentration of sevoflurane. In most patients, the highest correlation is observed for the nonlinear correlation index D*. In contrast to spectral measures, D* is found to decrease monotonically with increasing (estimated) depth of anesthesia, even when a "burst-suppression" pattern occurs in the EEG. The findings show the potential for applications of concepts derived from the theory of nonlinear dynamics, even if little can be assumed about the process under investigation.
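
    The abstract does not define D*; as an illustration of the phase-space family of measures it belongs to, the sketch below computes a Grassberger–Procaccia-style correlation sum on a delay-embedded signal. The embedding dimension, delay, radius and the toy epochs are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.spatial.distance import pdist

def delay_embed(x, dim=5, tau=4):
    """Time-delay embedding of a 1-D signal into dim-dimensional phase space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def correlation_sum(x, radius, dim=5, tau=4):
    """Fraction of phase-space point pairs closer than `radius` (Euclidean norm)."""
    emb = delay_embed(np.asarray(x, float), dim, tau)
    return float(np.mean(pdist(emb) < radius))

# Toy usage: a noisier "awake-like" epoch versus a smoother "deep-anesthesia-like" epoch.
rng = np.random.default_rng(0)
t = np.arange(2000) / 250.0                         # 8 s sampled at 250 Hz
awake = np.sin(2 * np.pi * 10 * t) + 0.8 * rng.standard_normal(t.size)
deep = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(t.size)
for name, epoch in [("awake-like", awake), ("deep-like", deep)]:
    z = (epoch - epoch.mean()) / epoch.std()
    print(name, correlation_sum(z, radius=0.5))
```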

    A Comparison of Qualifications-Based Selection and Best Value Procurement for Construction Manager/General Contractor Highway Construction

    Faster project delivery and the infusion of contractor knowledge into design are the primary drivers for choosing construction manager/general contractor (CM/GC) project delivery. This paper focuses on the use of qualifications-based selection (QBS) and best-value (BV) procurement approaches, how and why agencies use each, and their associated opportunities and obstacles. Data for this study were obtained from a majority of federally funded CM/GC projects completed between 2005 and 2015. The findings are that the characteristics and performance of BV and QBS projects show no statistically significant difference. The choice of BV or QBS coincides with the agency's stage of CM/GC organizational development and the influence of non-agency stakeholders on the CM/GC process. When agencies and the local industry are new to CM/GC, they were found to use BV, as it is closer to the traditional procurement culture and is perceived to result in a fair market project price. Alternatively, agencies and local industry partners with an established history of using CM/GC were found to choose QBS. The low level of design at the time of procurement means that assumptions relating to risk, production rates, materials sources, etc. may be too preliminary to secure a reliable price. The use of BV procurement was found to pose a risk to innovation and to increase negotiation effort. Qualitative trends from the project data, interviews and literature point to agencies using QBS for the majority of CM/GC projects and BV on CM/GC projects with lower complexity or more highly developed designs at the time of selection.

    Acute effect of meal glycemic index and glycemic load on blood glucose and insulin responses in humans

    OBJECTIVE: Foods with contrasting glycemic index, when incorporated into a meal, are able to differentially modify glycemia and insulinemia. However, little is known about whether this is dependent on the size of the meal. The purposes of this study were: i) to determine whether the differential impact on blood glucose and insulin responses induced by contrasting GI foods is similar when the foods are provided in meals of different sizes; and ii) to determine the relationship between the total meal glycemic load and the observed serum glucose and insulin responses. METHODS: Twelve obese women (BMI 33.7 ± 2.4 kg/m²) were recruited. Subjects received 4 different meals in random order. Two meals had a low glycemic index (40–43%) and two had a high glycemic index (86–91%). Both meal types were given in two meal sizes, with energy supply corresponding to 23% and 49% of predicted basal metabolic rate. Thus, meals with three different glycemic loads (95, 45–48 and 22 g) were administered. Blood samples were taken before and after each meal to determine glucose, free fatty acid, insulin and glucagon concentrations over a 5-h period. RESULTS: An almost 2-fold higher serum glucose and insulin incremental area under the curve (AUC) over 2 h was observed for the high- versus low-glycemic index meals of the same size (p < 0.05); however, for the serum glucose response in small meals this difference was not significant (p = 0.38). Calculated meal glycemic load was associated with the 2 h and 5 h serum glucose (r = 0.58, p < 0.01) and insulin (r = 0.54, p < 0.01) incremental and total AUC. In fact, when comparing the two meals with similar glycemic load but differing carbohydrate amount and type, very similar serum glucose and insulin responses were found. No differences were observed in the serum free fatty acid and glucagon profiles in response to meal glycemic index. CONCLUSION: This study showed that foods of contrasting glycemic index induced a proportionally comparable difference in serum insulin response when provided in both small and large meals. The same was true for the serum glucose response, but only in large meals. Glycemic load was useful in predicting the acute impact on blood glucose and insulin responses within the context of mixed meals.
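
    A minimal sketch of the two calculations underlying these results, assuming the usual conventions: meal glycemic load GL = GI[%] × available carbohydrate [g] / 100, and incremental AUC as the trapezoidal area of excursions above the fasting baseline (negative excursions set to zero). The numbers in the example are illustrative, not study data.

```python
import numpy as np

def glycemic_load(gi_percent, carbohydrate_g):
    """GL = GI[%] x available carbohydrate [g] / 100."""
    return gi_percent * carbohydrate_g / 100.0

def incremental_auc(times_min, conc, baseline=None):
    """Trapezoidal area of the concentration curve above the (fasting) baseline."""
    conc = np.asarray(conc, float)
    base = conc[0] if baseline is None else baseline
    above = np.clip(conc - base, 0.0, None)          # ignore dips below baseline
    return np.trapz(above, np.asarray(times_min, float))

# Example: a high-GI large meal (GI ~ 90%, ~105 g carbohydrate gives GL ~ 95).
print(glycemic_load(90, 105))                        # 94.5

# Toy 2-h serum glucose curve sampled every 30 min (mmol/L).
t = [0, 30, 60, 90, 120]
glucose = [5.0, 7.8, 7.0, 6.0, 5.2]
print(incremental_auc(t, glucose))                   # mmol/L x min above baseline
```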