
    Validating Synthetic Health Datasets for Longitudinal Clustering

    Get PDF
    This paper appeared at the Australasian Workshop on Health Informatics and Knowledge Management (HIKM 2013), Adelaide, Australia. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 142. K. Gray and A. Koronios, Eds. Reproduction for academic, not-for-profit purposes permitted provided this text is included. Clustering methods partition datasets into subgroups with some homogeneous properties, with information about the number and particular characteristics of each subgroup unknown a priori. The problem of predicting the number of clusters and the quality of each cluster might be overcome by using cluster validation methods. This paper presents such an approach, incorporating quantitative methods for comparison between original and synthetic versions of longitudinal health datasets. The use of the methods is demonstrated by using two different clustering algorithms, K-means and Latent Class Analysis, to perform clustering on synthetic data derived from the 45 and Up Study baseline data, from NSW in Australia.
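
    As an illustration of the kind of quantitative comparison described above, the sketch below clusters an original dataset and a perturbed synthetic stand-in with K-means and compares the two partitions. The toy data, the choice of k = 3, and the silhouette/adjusted-Rand scores are assumptions for illustration, not the paper's exact validation protocol (which also uses Latent Class Analysis).

```python
# Illustrative sketch only: compares cluster structure found in an "original"
# and a "synthetic" dataset. All data and parameters here are invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, silhouette_score

rng = np.random.default_rng(0)
original = rng.normal(loc=[[0, 0]] * 200 + [[4, 4]] * 200 + [[8, 0]] * 200, scale=1.0)
synthetic = original + rng.normal(scale=0.3, size=original.shape)  # stand-in synthetic copy

km_orig = KMeans(n_clusters=3, n_init=10, random_state=0).fit(original)
km_syn = KMeans(n_clusters=3, n_init=10, random_state=0).fit(synthetic)

# Internal validity of each clustering.
print("silhouette original :", silhouette_score(original, km_orig.labels_))
print("silhouette synthetic:", silhouette_score(synthetic, km_syn.labels_))

# Agreement between the two partitions when both models label the original data.
labels_from_syn = km_syn.predict(original)
print("adjusted Rand index :", adjusted_rand_score(km_orig.labels_, labels_from_syn))
```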

    Constructing a Synthetic Longitudinal Health Dataset for Data Mining

    Get PDF
    Published version reproduced here with permission from the publisher. The traditional approach to epidemiological research is to analyse data in an explicit statistical fashion, attempting to answer a question or test a hypothesis. However, increasing experience in the application of data mining and exploratory data analysis methods suggests that valuable information can be obtained from large datasets using these less constrained approaches. Available data mining techniques, such as clustering, have mainly been applied to cross-sectional, point-in-time data. However, health datasets often include repeated observations for individuals, and so researchers are interested in following their health trajectories. This requires methods for analysis of multiple-points-over-time, or longitudinal, data. Here, we describe an approach to construct a synthetic longitudinal version of a major population health dataset in which clusters merge and split over time, to investigate the utility of clustering for discovering time-sequence-based patterns.
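
    A minimal sketch of the central idea, clusters that merge over repeated observation waves, is given below; the group sizes, drift schedule, and noise levels are invented for illustration and are not the construction used for the actual population health dataset.

```python
# Toy synthetic longitudinal data: two groups of individuals whose means
# drift together across waves, so the clusters merge over time.
import numpy as np

rng = np.random.default_rng(1)
n_per_group, n_waves = 100, 5

records = []
for wave in range(n_waves):
    # Group A stays near 0; group B starts at 6 and drifts toward group A.
    mean_a = 0.0
    mean_b = 6.0 * (1 - wave / (n_waves - 1))  # 6, 4.5, 3, 1.5, 0 -> clusters merge
    a = rng.normal(mean_a, 1.0, n_per_group)
    b = rng.normal(mean_b, 1.0, n_per_group)
    for i, value in enumerate(np.concatenate([a, b])):
        records.append({"id": i, "wave": wave, "score": float(value)})

print(records[0], records[-1])
```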

    Spirality: A Novel Way to Measure Spiral Arm Pitch Angle

    Full text link
    We present the MATLAB code Spirality, a novel method for measuring spiral arm pitch angles by fitting galaxy images to spiral templates of known pitch. Computation time is typically on the order of 2 minutes per galaxy, assuming at least 8 GB of working memory. We tested the code using 117 synthetic spiral images with known pitches, varying both the spiral properties and the input parameters. The code yielded correct results for all synthetic spirals with galaxy-like properties. We also compared the code's results to two-dimensional Fast Fourier Transform (2DFFT) measurements for the sample of nearby galaxies defined by DMS PPak. Spirality's error bars overlapped 2DFFT's error bars for 26 of the 30 galaxies. The two methods' agreement correlates strongly with galaxy radius in pixels and also with i-band magnitude, but not with redshift, a result that is consistent with at least some galaxies' spiral structure being fully formed by z=1.2, beyond which there are few galaxies in our sample. The Spirality code package also includes GenSpiral, which produces FITS images of synthetic spirals, and SpiralArmCount, which uses a one-dimensional Fast Fourier Transform to count the spiral arms of a galaxy after its pitch is determined. The code package is freely available online; see Comments for URL. Comment: 19 pages, 9 figures, 3 tables. The code package is available at http://dafix.uark.edu/~doug/SpiralityCode
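
    The template fitting rests on the logarithmic-spiral relation r = r0*exp(theta*tan(phi)), where phi is the pitch angle. The sketch below (Python rather than the MATLAB of Spirality, and not the image-based template fit itself) generates one noisy arm with a known pitch and recovers it from a straight-line fit of ln(r) against theta.

```python
# Not the Spirality algorithm itself: just the underlying log-spiral relation
# that such templates are built from. We synthesise one noisy arm with a known
# pitch and recover the pitch by linear regression in (theta, ln r) space.
import numpy as np

true_pitch_deg = 20.0
theta = np.linspace(0, 4 * np.pi, 400)
r = 1.0 * np.exp(theta * np.tan(np.radians(true_pitch_deg)))
r *= np.exp(np.random.default_rng(2).normal(scale=0.02, size=theta.size))  # noise

slope, _ = np.polyfit(theta, np.log(r), 1)  # ln r = ln r0 + theta * tan(pitch)
print("recovered pitch [deg]:", np.degrees(np.arctan(slope)))  # ~20
```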

    Computational bounds on polynomial differential equations

    Get PDF
    In this paper we study from a computational perspective some properties of the solutions of polynomial ordinary differential equations. We consider elementary (in the sense of Analysis) discrete-time dynamical systems satisfying certain criteria of robustness. We show that those systems can be simulated with elementary and robust continuous-time dynamical systems which can be expanded into fully polynomial ordinary differential equations with coefficients in Q[π]. This sets a computational lower bound on polynomial ODEs, since the former class is large enough to include the dynamics of arbitrary Turing machines. We also apply the previous methods to show that the problem of determining whether the maximal interval of definition of an initial-value problem defined with polynomial ODEs is bounded or not is in general undecidable, even if the parameters of the system are computable and comparable and if the degree of the corresponding polynomial is at most 56. Combined with earlier results on the computability of solutions of polynomial ODEs, one can conclude that there is, from a computational point of view, a close connection between these systems and Turing machines.
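
    The expansion of elementary dynamics into fully polynomial ODEs can be illustrated on a textbook case: x' = sin(x) becomes a polynomial system once the auxiliary variables y = sin(x) and z = cos(x) are added. The sketch below checks this numerically; it is only an illustration of the polynomialisation idea, not the Turing-machine simulation constructed in the paper.

```python
# Polynomialisation sketch: the non-polynomial ODE x' = sin(x) is rewritten as
# the polynomial system x' = y, y' = z*y, z' = -y**2 with y = sin(x), z = cos(x).
import numpy as np
from scipy.integrate import solve_ivp

def original(t, state):          # x' = sin(x)
    return [np.sin(state[0])]

def polynomial(t, state):        # x' = y, y' = z*y, z' = -y**2
    x, y, z = state
    return [y, z * y, -y * y]

x0 = 1.0
t_span, t_eval = (0.0, 5.0), np.linspace(0.0, 5.0, 50)
sol_orig = solve_ivp(original, t_span, [x0], t_eval=t_eval, rtol=1e-9, atol=1e-9)
sol_poly = solve_ivp(polynomial, t_span, [x0, np.sin(x0), np.cos(x0)],
                     t_eval=t_eval, rtol=1e-9, atol=1e-9)

# The x component of both solutions should agree to solver tolerance.
print("max |difference| in x(t):", np.max(np.abs(sol_orig.y[0] - sol_poly.y[0])))
```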

    Boundedness of the domain of definition is undecidable for polynomial ODEs

    Get PDF
    Consider the initial-value problem with computable parameters dx/dt = p(t, x), x(t0) = x0, where p : R^(n+1) → R^n is a vector of polynomials and (t0, x0) ∈ R^(n+1). We show that the problem of determining whether the maximal interval of definition of this initial-value problem is bounded or not is in general undecidable.
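
    A concrete (and decidable-by-inspection) instance helps fix the notion of a bounded maximal interval of definition: for dx/dt = x^2, x(0) = 1, the solution x(t) = 1/(1 - t) exists only for t < 1, while dx/dt = -x is defined for all t. The sketch below merely observes this numerically; the undecidability result concerns the general problem, not such simple cases.

```python
# Crude numerical witness of a bounded maximal interval of definition:
# dx/dt = x**2 blows up at t = 1, so the solver typically aborts just before
# t = 1 (status -1), while dx/dt = -x integrates across the whole interval.
from scipy.integrate import solve_ivp

blow_up = solve_ivp(lambda t, x: x**2, (0.0, 2.0), [1.0], rtol=1e-9, atol=1e-9)
global_ = solve_ivp(lambda t, x: -x, (0.0, 2.0), [1.0], rtol=1e-9, atol=1e-9)

print("dx/dt = x^2 integrated up to t =", blow_up.t[-1], "status:", blow_up.status)
print("dx/dt = -x  integrated up to t =", global_.t[-1], "status:", global_.status)
```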

    Pengaruh Model Pembelajaran Talking Stick terhadap Keaktifan Belajar Siswa [The Effect of the Talking Stick Learning Model on Students' Learning Activeness]

    Get PDF
    A fun, active, and meaningful learning environment needs to be created for students by applying an active learning model, in this case the talking stick learning model. This research is quasi-experimental and aimed to determine whether the talking stick learning model has an effect on students' learning activeness. It was carried out in class VIII of SMP Negeri 5 Mataram in the 2015/2016 academic year, from March to April 2016. Samples were drawn by cluster random sampling, yielding class VIII I (30 students) as the experimental class and class VIII E (30 students) as the control class. Students' learning activeness was measured with a learning activeness questionnaire. Data analysis showed a computed t value of 8.28 against a t-table value of 2.000 (df = 58), so t-test > t-table (8.28 > 2.000). This means that the talking stick learning model has an effect on students' learning activeness.
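
    The reported decision rule can be checked independently of the original questionnaire data: for df = 58 and a two-sided significance level of 0.05, the critical t value is approximately 2.00, so an observed statistic of 8.28 leads to rejecting the null hypothesis. A short sketch of this check:

```python
# Verifies the reported critical value and decision rule; the observed t
# statistic is taken from the abstract, no original data is reproduced here.
from scipy import stats

t_observed, df, alpha = 8.28, 58, 0.05
t_critical = stats.t.ppf(1 - alpha / 2, df)
print("critical t:", round(t_critical, 3))            # ~2.002
print("reject H0 :", abs(t_observed) > t_critical)    # True
```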

    Computable randomness is about more than probabilities

    Get PDF
    We introduce a notion of computable randomness for infinite sequences that generalises the classical version in two important ways. First, our definition of computable randomness is associated with imprecise probability models, in the sense that we consider lower expectations (or sets of probabilities) instead of classical 'precise' probabilities. Secondly, instead of binary sequences, we consider sequences whose elements take values in some finite sample space. Interestingly, we find that every sequence is computably random with respect to at least one lower expectation, and that lower expectations that are more informative have fewer computably random sequences. This leads to the intriguing question whether every sequence is computably random with respect to a unique most informative lower expectation. We study this question in some detail and provide a partial answer
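
    For readers unfamiliar with imprecise probability models, the sketch below computes a lower expectation on a finite sample space as the lower envelope of ordinary expectations over a set of probability mass functions; the three-outcome space and the particular credal set are assumptions for illustration only, not taken from the paper.

```python
# A lower expectation as the lower envelope of expectations over a credal set
# (a set of probability mass functions) on a finite sample space.
import numpy as np

outcomes = ["a", "b", "c"]
credal_set = np.array([          # each row is one admissible pmf over {a, b, c}
    [0.2, 0.3, 0.5],
    [0.4, 0.4, 0.2],
    [0.1, 0.6, 0.3],
])

def lower_expectation(f):
    """Lower expectation of a gamble f (one payoff per outcome)."""
    return float(np.min(credal_set @ f))

gamble = np.array([1.0, -1.0, 0.5])
print("lower expectation :", lower_expectation(gamble))
print("upper expectation :", -lower_expectation(-gamble))  # conjugacy
```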

    Comparison of in-situ delay monitors for use in Adaptive Voltage Scaling

    Get PDF
    In Adaptive Voltage Scaling (AVS) the supply voltage of digital circuits is tuned according to the circuit's actual operating condition, which enables dynamic compensation for PVTA variations. By exploiting the excessive safety margins added in state-of-the-art worst-case designs, considerable power savings are achieved. In our approach, the operating condition of the circuit is monitored by in-situ delay monitors. This paper presents different designs to implement in-situ delay monitors capable of detecting late but still non-erroneous transitions, called Pre-Errors. The developed Pre-Error monitors are integrated in a 16-bit multiplier test circuit, and the resulting Pre-Error AVS system is modeled by a Markov chain in order to determine the power saving potential of each Pre-Error detection approach.
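
    The abstract does not give the Markov chain itself, so the sketch below is only a toy model in the same spirit: states are voltage levels, the transitions stand in for how often Pre-Errors are flagged at each level, and the stationary distribution yields an expected relative power figure. All transition probabilities and power numbers are hypothetical.

```python
# Toy Markov-chain model of an AVS controller; numbers are invented.
import numpy as np

# P[i, j]: probability of moving from voltage level i to level j per cycle.
P = np.array([
    [0.90, 0.10, 0.00],   # nominal voltage
    [0.05, 0.90, 0.05],   # reduced voltage
    [0.00, 0.15, 0.85],   # strongly reduced voltage (Pre-Errors more frequent)
])
power = np.array([1.00, 0.85, 0.72])   # relative power per state (hypothetical)

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()

print("stationary distribution:", np.round(pi, 3))
print("expected relative power:", round(float(pi @ power), 3))
```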

    Determining the best drought tolerance indices using Artificial Neural Network (ANN): Insight into application of intelligent agriculture in agronomy and plant breeding

    Get PDF
    In the present study, the efficiency of the artificial neural network (ANN) method for identifying the best drought tolerance indices was investigated. For this purpose, 25 durum genotypes were evaluated under rainfed and supplemental irrigation environments during two consecutive cropping seasons (2011–2013). The results of a combined analysis of variance (ANOVA) revealed that year, environment, genotype and their interaction effects were significant for grain yield. Mean grain yield of the genotypes ranged from 184.93 g plot⁻¹ under the rainfed environment to 659.32 g plot⁻¹ under the irrigated environment. Based on the ANN results, yield stability index (YSI), harmonic mean (HM) and stress susceptibility index (SSI) were identified as the best indices to predict drought-tolerant genotypes. However, mean productivity (MP), followed by geometric mean productivity (GMP) and HM, were found to be accurate indices for screening drought-tolerant genotypes. In general, genotypes G9, G12, G21, G23 and G24 were identified as the more desirable genotypes for cultivation in drought-prone environments. Importantly, these results provide evidence that the ANN method can play an important role in the selection of drought-tolerant genotypes and could also be useful in other biological contexts.
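
    The indices named above have standard textbook definitions in terms of yield under irrigation (Yp) and under rainfed stress (Ys); the sketch below computes them for a few placeholder genotypes. The yield values are hypothetical and the ANN ranking step itself is not reproduced here.

```python
# Standard formulas for the drought tolerance indices named in the abstract,
# computed from hypothetical yields; the study's data are not reproduced.
import numpy as np

Yp = np.array([650.0, 700.0, 610.0, 680.0])   # irrigated yield, g per plot (hypothetical)
Ys = np.array([190.0, 240.0, 150.0, 210.0])   # rainfed yield, g per plot (hypothetical)

SI  = 1 - Ys.mean() / Yp.mean()               # stress intensity, used by SSI
YSI = Ys / Yp                                 # yield stability index
SSI = (1 - Ys / Yp) / SI                      # stress susceptibility index
MP  = (Yp + Ys) / 2                           # mean productivity
GMP = np.sqrt(Yp * Ys)                        # geometric mean productivity
HM  = 2 * Yp * Ys / (Yp + Ys)                 # harmonic mean

for name, idx in [("YSI", YSI), ("SSI", SSI), ("MP", MP), ("GMP", GMP), ("HM", HM)]:
    print(name, np.round(idx, 2))
```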

    Ressources numériques musicales en bibliothèque : Compte-rendu des activités 2015-2016 du groupe de travail de l'ACIM [Digital music resources in libraries: report on the 2015-2016 activities of the ACIM working group]

    Get PDF
    Attentive to the development of digital music in libraries, ACIM set up a working group in 2015 to help evaluate and develop the offering of digital music resources for libraries. The working group set itself the following initial objectives: the drafting and distribution of a questionnaire survey to collect feedback on experience with these digital music resources; the organisation of meetings with new providers in order to evolve and enrich the offering; and the drafting of an analysis grid to better understand the nature and content of existing resources.