
    Educational approaches to improving knowledge and attitude towards dental hygiene among elementary school children

    The selection of appropriate dental health education methods is beneficial in promoting dental health. This study aimed to determine the difference in effect between the role-playing method and the storytelling method on knowledge of and attitudes towards oral hygiene among elementary school students. The subjects were 112 fifth-grade students, divided into two treatment groups: 56 students at SD Negeri Tegalrejo I taught with the storytelling method and 56 students at SD Negeri Tegalrejo II taught with the role-playing method. The measuring instrument was a questionnaire. Because the data were not normally distributed, the analysis used the Mann-Whitney test and the Wilcoxon signed-rank test. The results showed a significant increase over time in knowledge and attitudes across the three assessments. For the knowledge variable, the mean rank of the delta values between the pre-test and post-test 2 was 51.29 with the role-playing method and 61.71 with the storytelling method; for the attitude variable, the corresponding mean ranks were 49.93 and 63.07. The delta analyses from pre-test to post-test 1 and from pre-test to post-test 2 showed that the storytelling group experienced a greater increase in knowledge and attitudes than the role-playing group (p < 0.05). Education delivered through storytelling thus improved students' knowledge of and attitudes towards oral hygiene more than education delivered through role-playing.
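    As a hedged illustration of the kind of analysis the abstract describes, the sketch below runs a Mann-Whitney U test on pre/post knowledge-score deltas for two teaching-method groups. All score values are invented for demonstration; only the group sizes (56 per group) and the choice of test come from the abstract.

```python
# Illustrative sketch: Mann-Whitney U test on knowledge-score deltas
# (post-test 2 minus pre-test) for two teaching methods. The delta
# scores below are invented; they are NOT the study's data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Hypothetical delta scores for 56 students per group.
delta_storytelling = rng.integers(2, 9, size=56)  # larger invented gains
delta_roleplay = rng.integers(0, 6, size=56)      # smaller invented gains

# Two-sided Mann-Whitney U test, appropriate for non-normal deltas.
u_stat, p_value = mannwhitneyu(delta_storytelling, delta_roleplay,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
```

    With clearly separated groups of this size, the test reports a small p-value, mirroring the abstract's p < 0.05 finding.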

    Advances in Extreme Learning Machines

    Nowadays, due to advances in technology, data is generated at an incredible pace, resulting in data sets of ever-increasing size and dimensionality. It is therefore important to have efficient computational methods and machine learning algorithms that can handle such large data sets and analyze them in reasonable time. One approach that has gained popularity in recent years is the Extreme Learning Machine (ELM), the name given to neural networks that employ randomization in their hidden layer and that can be trained efficiently. This dissertation introduces several machine learning methods based on ELMs aimed at dealing with the challenges that modern data sets pose. The contributions follow three main directions. Firstly, ensemble approaches based on the ELM are developed that adapt to context and can scale to large data. Due to their stochastic nature, different ELMs tend to make different mistakes when modeling data. This independence of their errors makes them good candidates for combination in an ensemble model, which averages out these errors and results in a more accurate model. Adaptivity to a changing environment is introduced by adapting the linear combination of the models based on the accuracy of the individual models over time. Scalability is achieved by exploiting the modularity of the ensemble model and evaluating the models in parallel on multiple processor cores and graphics processing units. Secondly, the dissertation develops variable selection approaches based on the ELM and the Delta Test that result in more accurate and efficient models. Scalability of variable selection using the Delta Test is again achieved by accelerating it on the GPU. Furthermore, a new variable selection method based on the ELM is introduced and shown to be a competitive alternative to other variable selection methods. In addition to these explicit variable selection methods, a new weight scheme based on binary/ternary weights is developed for the ELM. This weight scheme is shown to perform implicit variable selection, and it increases robustness and accuracy at no additional computational cost. Finally, the dissertation develops training algorithms for the ELM that allow a flexible trade-off between accuracy and computational time. The Compressive ELM is introduced, which allows the ELM to be trained in a reduced feature space; by selecting the dimension of the feature space, the practitioner can trade accuracy for speed as required. Overall, the resulting collection of proposed methods provides an efficient, accurate, and flexible framework for solving large-scale supervised learning problems. The proposed methods are not limited to the particular types of ELMs and contexts in which they have been tested and can easily be incorporated in new contexts and models.
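    The core ELM idea described above, a random untrained hidden layer followed by a closed-form least-squares fit of the output weights, can be sketched in a few lines of NumPy. The layer size, activation, and toy regression target here are illustrative choices, not taken from the dissertation.

```python
# Minimal Extreme Learning Machine sketch: random hidden layer,
# least-squares output weights. Sizes and the toy target are invented.
import numpy as np

def elm_fit(X, y, n_hidden=100, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy problem: learn y = sin(3x) from noisy samples.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(500)
W, b, beta = elm_fit(X, y)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
print(f"training MSE: {mse:.4f}")
```

    Because only the linear output layer is trained, fitting reduces to a single least-squares solve, which is what makes ELM training fast and also what makes ensembles of independently randomized ELMs cheap to build.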

    Evaluation of Variability Concepts for Simulink in the Automotive Domain

    Modeling variability in Matlab/Simulink is becoming more and more important. We took the two variability modeling concepts already included in Matlab/Simulink, together with a third concept of our own, and evaluated them to find out which is best suited for modeling variability in the automotive domain. We conducted a controlled experiment with developers at Volkswagen AG to determine which concept developers prefer and whether their preference aligns with measurable performance factors. We found that all existing concepts are viable approaches and that the delta approach is both the preferred concept and the objectively most efficient one, which makes Delta-Simulink a good solution for modeling variability in the automotive domain. Comment: 10 pages, 7 figures, 6 tables. Proceedings of the 48th Hawaii International Conference on System Sciences (HICSS), pp. 5373-5382, Kauai, Hawaii, USA, IEEE Computer Society, 2015.

    Variable Point Sources in Sloan Digital Sky Survey Stripe 82. I. Project Description and Initial Catalog (0 h < R.A. < 4 h)

    We report the first results of a study of variable point sources identified using multi-color time-series photometry from Sloan Digital Sky Survey (SDSS) Stripe 82 over a span of nearly 10 years (1998-2007). We construct a light-curve catalog of 221,842 point sources in the R.A. 0-4 h half of Stripe 82, limited to r = 22.0, that have at least 10 detections in the ugriz bands and color errors of < 0.2 mag. These objects are then classified by color and by cross-matching them to existing SDSS catalogs of interesting objects. We use inhomogeneous ensemble differential photometry techniques to greatly improve our sensitivity to variability. Robust variable identification methods are used to extract 6520 variable candidates from this dataset, giving an overall variable fraction of ~2.9% at the 0.05 mag variability level. A search for periodic variables yields 30 eclipsing/ellipsoidal binary candidates, 55 RR Lyrae, and 16 Delta Scuti variables. We also identify 2704 variable quasars matched to the SDSS Quasar catalog (Schneider et al. 2007), as well as an additional 2403 quasar candidates identified by their non-stellar colors and variability properties. Finally, a sample of 11,328 point sources that appear to be non-variable at the limits of our sensitivity is also discussed. (Abridged.) Comment: 67 pages, 27 figures. Accepted for publication in ApJS. Catalog available at http://shrike.pha.jhu.edu/stripe82-variable
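    A common building block for the kind of variability selection described above is a reduced chi-squared test: flag a light curve as a variable candidate when its scatter about the weighted mean magnitude exceeds what the photometric errors allow. The sketch below is a generic illustration of that idea; the threshold and the toy light curves are invented, not the paper's actual cuts.

```python
# Generic variability cut: reduced chi-squared about the weighted mean.
# Threshold and toy light curves are illustrative assumptions.
import numpy as np

def is_variable(mags, errs, chi2_threshold=3.0):
    w = 1.0 / errs**2
    mean_mag = np.sum(w * mags) / np.sum(w)  # error-weighted mean magnitude
    chi2_red = np.sum(((mags - mean_mag) / errs) ** 2) / (len(mags) - 1)
    return chi2_red > chi2_threshold

rng = np.random.default_rng(0)
errs = np.full(30, 0.02)                          # 30 epochs, 0.02 mag errors
quiet = 20.0 + rng.normal(0, 0.02, 30)            # scatter consistent with errors
pulsator = 20.0 + 0.3 * np.sin(np.linspace(0, 12, 30)) \
           + rng.normal(0, 0.02, 30)              # 0.3 mag sinusoidal signal

print(is_variable(quiet, errs), is_variable(pulsator, errs))
```

    The quiet source's scatter matches its errors, so it stays below the threshold, while the 0.3 mag pulsator is flagged easily.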

    Who invests in home equity to exempt wealth from bankruptcy? : [This draft: May 2013]

    Homestead exemptions to personal bankruptcy allow households to retain their home equity up to a limit determined at the state level. Households that may experience bankruptcy thus have an incentive to bias their portfolios towards home equity. Using US household data for the period 1996 to 2006, we find that household demand for real estate is relatively high if the marginal investment in home equity is covered by the exemption. The home equity bias is more pronounced for younger households that face more financial uncertainty and therefore have a higher ex ante probability of bankruptcy.

    Can we disregard the whole model? Omnibus non-inferiority testing for $R^2$ in multivariable linear regression and $\hat{\eta}^2$ in ANOVA

    Determining a lack of association between an outcome variable and a number of different explanatory variables is frequently necessary in order to disregard a proposed model (i.e., to confirm the lack of an association between an outcome and predictors). Despite this, the literature rarely offers information about, or technical recommendations concerning, the appropriate statistical methodology for this task. This paper introduces non-inferiority tests for ANOVA and linear regression analyses that correspond to the standard, widely used $F$-tests for $\hat{\eta}^2$ and $R^2$, respectively. A simulation study is conducted to examine the type I error rates and statistical power of the tests, and a comparison is made with an alternative Bayesian testing approach. The results indicate that the proposed non-inferiority test is a potentially useful tool for 'testing the null.' Comment: 30 pages, 6 figures
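    One standard way to build such a test, sketched below, inverts the usual logic of the $F$-test: under the hypothesis that the population $R^2$ equals a pre-specified margin, the observed $F$ statistic follows (approximately) a noncentral $F$ distribution, and an improbably small $F$ supports the conclusion that the association is below the margin. The margin, data, and the particular noncentrality approximation ($\lambda = n\,\Delta_0 / (1 - \Delta_0)$) are illustrative assumptions here, not necessarily the paper's exact procedure.

```python
# Hedged sketch of a non-inferiority test for R^2: reject "rho^2 >= margin"
# when the observed F statistic is small under a noncentral F reference
# distribution. Margin and inputs are illustrative assumptions.
import numpy as np
from scipy.stats import ncf

def r2_noninferiority_p(r2, n, k, margin):
    dfd = n - k - 1
    f_obs = (r2 / k) / ((1.0 - r2) / dfd)   # usual F statistic for R^2
    lam = n * margin / (1.0 - margin)       # noncentrality under rho^2 = margin
    return ncf.cdf(f_obs, k, dfd, lam)      # small p -> association below margin

# Example: tiny observed R^2 = 0.01 against a margin of 0.10.
p = r2_noninferiority_p(r2=0.01, n=200, k=3, margin=0.10)
print(f"p = {p:.4f}")
```

    Note the direction of the rejection region: unlike the ordinary $F$-test, small observed $F$ values (not large ones) count as evidence, here evidence that the model explains negligibly little variance.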

    Scaling limits of a model for selection at two scales

    The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval $[0,1]$ with dependence on a single parameter, $\lambda$. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on $\lambda$ and the behavior of the initial data around $1$. The second scaling leads to a measure-valued Fleming-Viot process, an infinite-dimensional stochastic process that is frequently associated with population genetics. Comment: 23 pages, 1 figure
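    The opposing-scales phenomenon can be illustrated with a toy simulation (this is NOT the paper's ball-and-urn process; the dynamics and all parameters below are invented for illustration): within each host the fast strain's frequency x grows logistically, while between hosts the probability of transmitting a host's strain mix decreases with x.

```python
# Toy two-scale selection sketch: within-host growth favors the fast strain,
# between-host resampling penalizes it. Entirely illustrative dynamics.
import numpy as np

def step(x, s_within=0.1, c_between=0.3, rng=None):
    rng = rng or np.random.default_rng()
    x = x + s_within * x * (1 - x)           # within-host: fast strain gains
    fitness = 1.0 - c_between * x            # between-host: fast strain costly
    probs = fitness / fitness.sum()
    # Resample the host population in proportion to transmission fitness.
    return x[rng.choice(len(x), size=len(x), p=probs)]

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=1000)             # 1000 hosts, random frequencies
for _ in range(200):
    x = step(x, rng=rng)
print(f"mean fast-strain frequency after 200 generations: {x.mean():.3f}")
```

    Varying the within-host gain against the between-host cost shifts which scale of selection dominates the long-run frequency distribution, the tension that the abstract's parameter $\lambda$ governs in the rigorous scaling limit.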