
    The Effect of Therapeutic Horseback Riding on Balance and Self-Efficacy in Children with Developmental Disabilities

    The prevalence of developmental disabilities in children in the United States is a serious problem. Because children with developmental disabilities often show decreased self-efficacy and balance, researchers have studied the effects of interventions in this population. The purpose of this study is to determine the effect of a 10-week therapeutic horseback riding (THR) program on balance and task-specific self-efficacy in children with physical disabilities aged 6 to 18 years. Bandura's social cognitive theory and the Physical Stress Theory will guide this quasi-experimental study. A pre-test/post-test design will be implemented over a 12-week span at three riding centers in the Midwest United States. Data collection will begin after approval from the university review board and after signed informed consent and assent are obtained. Data will be collected through surveys and analyzed with same-sample (paired) t-tests.
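    A minimal sketch of the planned analysis, assuming hypothetical pre- and post-intervention balance scores: the same-sample (paired) t-test is the test named in the abstract, but the data, sample size, and effect size below are invented for illustration only.

        # Paired (same-sample) t-test on hypothetical pre/post balance scores.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        pre_scores = rng.normal(loc=45.0, scale=5.0, size=20)                # hypothetical pre-intervention scores
        post_scores = pre_scores + rng.normal(loc=2.0, scale=3.0, size=20)   # hypothetical post-intervention scores

        t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
        print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")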

    Pre-endoscopy SARS-CoV-2 testing strategy during COVID-19 pandemic: the care must go on

    Background: In response to the COVID-19 pandemic, endoscopic societies initially recommended a reduction of endoscopic procedures; in particular, non-urgent endoscopies were to be postponed. However, this might lead to unnecessary delays in diagnosing gastrointestinal conditions. Methods: We retrospectively analysed the gastrointestinal endoscopies performed at the Central Endoscopy Unit of Saarland University Medical Center during the seven weeks from 23 March to 10 May 2020 and present our real-world single-centre experience with an individualized rtPCR-based pre-endoscopy SARS-CoV-2 testing strategy. We also present our experience with this strategy in 2021. Results: Altogether, 359 gastrointestinal endoscopies were performed in the initial period. The testing strategy enabled us to conservatively handle the reduction of the endoscopy programme (a 44% reduction compared with 2019) during the first wave of the COVID-19 pandemic. The results of COVID-19 rtPCR from nasopharyngeal swabs were available for 89% of patients prior to endoscopy. Apart from six patients with known COVID-19, all other tested patients were negative. The frequencies of endoscopic therapies and clinically significant findings did not differ between patients with or without SARS-CoV-2 tests. In 2021, by applying the rtPCR-based pre-endoscopy SARS-CoV-2 testing strategy, we were able to perform all requested endoscopic procedures without restriction (>5000 procedures), regardless of subsequent waves of COVID-19. Only two outpatients (1893 outpatient procedures) tested positive in 2021. Conclusion: A structured pre-endoscopy SARS-CoV-2 testing strategy is feasible in the clinical routine of an endoscopy unit. rtPCR-based pre-endoscopy SARS-CoV-2 testing safely allowed unrestricted continuation of endoscopic procedures even in the presence of high COVID-19 incidence rates. Given the low frequency of positive tests, the absolute effect of pre-endoscopy testing on viral transmission may be low when FFP-2 masks are used regularly.

    A Kernel Method for the Two-sample Problem

    We propose a framework for analyzing and comparing distributions, allowing us to design statistical tests to determine whether two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS). We present two tests based on large deviation bounds for the test statistic, while a third is based on the asymptotic distribution of this statistic. The test statistic can be computed in quadratic time, although efficient linear-time approximations are available. Several classical metrics on distributions are recovered when the function space used to compute the difference in expectations is allowed to be more general (e.g. a Banach space). We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.
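    A hedged sketch of the quadratic-time statistic described above (the maximum mean discrepancy with a Gaussian RKHS kernel); the kernel choice, bandwidth, and sample sizes are illustrative assumptions, not the paper's experimental setup.

        # Unbiased quadratic-time estimate of the squared maximum mean discrepancy (MMD).
        import numpy as np

        def gaussian_kernel(A, B, sigma=1.0):
            # k(a, b) = exp(-||a - b||^2 / (2 sigma^2))
            d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
            return np.exp(-d2 / (2 * sigma**2))

        def mmd2_unbiased(X, Y, sigma=1.0):
            # O((m + n)^2) unbiased estimator: within-sample terms exclude the diagonal.
            Kxx = gaussian_kernel(X, X, sigma); np.fill_diagonal(Kxx, 0.0)
            Kyy = gaussian_kernel(Y, Y, sigma); np.fill_diagonal(Kyy, 0.0)
            Kxy = gaussian_kernel(X, Y, sigma)
            m, n = len(X), len(Y)
            return Kxx.sum() / (m * (m - 1)) + Kyy.sum() / (n * (n - 1)) - 2 * Kxy.mean()

        rng = np.random.default_rng(0)
        X = rng.normal(0.0, 1.0, size=(200, 5))
        Y = rng.normal(0.5, 1.0, size=(200, 5))   # shifted mean, so the distributions differ
        print(mmd2_unbiased(X, Y))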

    A framework for space-efficient string kernels

    String kernels are typically used to compare genome-scale sequences whose length makes alignment impractical, yet their computation is based on data structures that are either space-inefficient or incur large slowdowns. We show that a number of exact string kernels, like the k-mer kernel, the substring kernels, a number of length-weighted kernels, the minimal absent words kernel, and kernels with Markovian corrections, can all be computed in O(nd) time and in o(n) bits of space in addition to the input, using just a rangeDistinct data structure on the Burrows-Wheeler transform of the input strings, which takes O(d) time per element in its output. The same bounds hold for a number of measures of compositional complexity based on multiple values of k, like the k-mer profile and the k-th order empirical entropy, and for calibrating the value of k using the data.
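    For reference, a direct and deliberately space-inefficient k-mer (spectrum) kernel on two short strings; the paper's contribution is computing such kernels in o(n) extra bits via the Burrows-Wheeler transform and a rangeDistinct structure, which this sketch does not attempt.

        # Cosine-normalised inner product of k-mer count vectors (naive spectrum kernel).
        from collections import Counter
        import math

        def kmer_counts(s, k):
            return Counter(s[i:i + k] for i in range(len(s) - k + 1))

        def kmer_kernel(s, t, k=3):
            cs, ct = kmer_counts(s, k), kmer_counts(t, k)
            dot = sum(cs[w] * ct[w] for w in cs.keys() & ct.keys())
            norm = math.sqrt(sum(v * v for v in cs.values())) * math.sqrt(sum(v * v for v in ct.values()))
            return dot / norm if norm else 0.0

        print(kmer_kernel("ACGTACGTGACG", "ACGTTTGACGAC", k=3))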

    Statistical Mechanical Development of a Sparse Bayesian Classifier

    The demand for extracting rules from high-dimensional real-world data is increasing in various fields. However, the possible redundancy of such data sometimes makes it difficult to obtain good generalization ability for novel samples. To resolve this problem, we provide a scheme that reduces the effective dimensions of data by pruning redundant components for bicategorical classification based on the Bayesian framework. First, the potential of the proposed method is confirmed in ideal situations using the replica method. Unfortunately, performing the scheme exactly is computationally difficult, so we next develop a tractable approximation algorithm, which turns out to offer nearly optimal performance in ideal cases when the system size is large. Finally, the efficacy of the developed classifier is experimentally examined for a real-world problem of colon cancer classification, which shows that the developed method can be practically useful. Comment: 13 pages, 6 figures
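    The paper's Bayesian pruning scheme is not reproduced here; as a rough stand-in, the sketch below uses L1-penalised logistic regression, a common alternative way of reducing the effective dimension of a bicategorical classification problem, on synthetic data with only a few informative components.

        # Sparse bicategorical classification by driving redundant weights to zero (L1 penalty).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n, d, d_informative = 200, 500, 10
        X = rng.normal(size=(n, d))
        w_true = np.zeros(d); w_true[:d_informative] = rng.normal(size=d_informative)
        y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(int)

        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
        print("non-zero weights:", np.count_nonzero(clf.coef_))   # far fewer than d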

    On landmark selection and sampling in high-dimensional data analysis

    In recent years, the spectral analysis of appropriately defined kernel matrices has emerged as a principled way to extract the low-dimensional structure often prevalent in high-dimensional data. Here we provide an introduction to spectral methods for linear and nonlinear dimension reduction, emphasizing ways to overcome the computational limitations currently faced by practitioners with massive datasets. In particular, a data subsampling or landmark selection process is often employed to construct a kernel based on partial information, followed by an approximate spectral analysis termed the Nystrom extension. We provide a quantitative framework to analyse this procedure, and use it to demonstrate algorithmic performance bounds on a range of practical approaches designed to optimize the landmark selection process. We compare the practical implications of these bounds by way of real-world examples drawn from the field of computer vision, in which low-dimensional manifold structure is shown to emerge from high-dimensional video data streams. Comment: 18 pages, 6 figures, submitted for publication
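    A minimal sketch of the Nystrom extension discussed above, with plain uniform landmark sampling rather than one of the optimised selection schemes analysed in the paper; the kernel, bandwidth, and dimensions are illustrative assumptions.

        # Approximate a kernel eigen-embedding of n points from m << n landmarks.
        import numpy as np

        def rbf_kernel(A, B, sigma=1.0):
            d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
            return np.exp(-d2 / (2 * sigma**2))

        def nystrom_embedding(X, n_landmarks=50, n_components=10, sigma=1.0, seed=0):
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(X), size=n_landmarks, replace=False)   # landmark selection (uniform here)
            L = X[idx]
            K_mm = rbf_kernel(L, L, sigma)                 # m x m landmark kernel
            K_nm = rbf_kernel(X, L, sigma)                 # n x m cross kernel
            evals, evecs = np.linalg.eigh(K_mm)
            evals = evals[::-1][:n_components]             # leading eigenvalues
            evecs = evecs[:, ::-1][:, :n_components]       # leading eigenvectors
            # Nystrom extension: features whose inner products approximate the full kernel matrix.
            return K_nm @ evecs / np.sqrt(np.maximum(evals, 1e-12))

        X = np.random.default_rng(1).normal(size=(1000, 20))
        print(nystrom_embedding(X).shape)   # (1000, 10)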

    Hilbert Space Representations of Probability Distributions

    Many problems in unsupervised learning require the analysis of features of probability distributions. At the most fundamental level, we might wish to determine whether two distributions are the same, based on samples from each; this is known as the two-sample or homogeneity problem. We use kernel methods to address this problem, by mapping probability distributions to elements in a reproducing kernel Hilbert space (RKHS). Given a sufficiently rich RKHS, these representations are unique: thus comparing feature space representations allows us to compare distributions without ambiguity. Applications include testing whether cancer subtypes are distinguishable on the basis of DNA microarray data, and whether low-frequency oscillations measured at an electrode in the cortex have a different distribution during a neural spike. A more difficult problem is to discover whether two random variables drawn from a joint distribution are independent. It turns out that any dependence between pairs of random variables can be encoded in a cross-covariance operator between appropriate RKHS representations of the variables, and we may test independence by looking at a norm of this operator. We demonstrate this independence test by establishing dependence between an English text and its French translation, as opposed to French text on the same topic but otherwise unrelated. Finally, we show that this operator norm is itself a difference in feature means.
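    A hedged sketch of the independence-testing idea: an empirical Hilbert-Schmidt norm of the cross-covariance operator (HSIC) with Gaussian kernels on synthetic data. The kernel choices and normalisation are assumptions and differ in detail from the thesis.

        # Empirical HSIC: larger values indicate dependence between the two samples.
        import numpy as np

        def gaussian_kernel(A, sigma=1.0):
            d2 = np.sum(A**2, 1)[:, None] + np.sum(A**2, 1)[None, :] - 2 * A @ A.T
            return np.exp(-d2 / (2 * sigma**2))

        def hsic(X, Y, sigma=1.0):
            n = len(X)
            H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
            K, L = gaussian_kernel(X, sigma), gaussian_kernel(Y, sigma)
            return np.trace(K @ H @ L @ H) / n**2        # biased empirical HSIC estimate

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 1))
        Y_dep = X + 0.3 * rng.normal(size=(300, 1))      # dependent pair
        Y_ind = rng.normal(size=(300, 1))                # independent pair
        print(hsic(X, Y_dep), hsic(X, Y_ind))            # dependent pair yields the larger value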

    The devices, experimental scaffolds, and biomaterials ontology (DEB): a tool for mapping, annotation, and analysis of biomaterials' data

    The size and complexity of the biomaterials literature make systematic data analysis an excruciating manual task. A practical solution is creating databases and information resources. Implant design and biomaterials research can greatly benefit from an open database for systematic data retrieval. Ontologies are pivotal to knowledge base creation, serving to represent and organize domain knowledge. To name but two examples, GO, the Gene Ontology, and ChEBI, the Chemical Entities of Biological Interest ontology, together with their associated databases, are central resources for their respective research communities. The creation of the devices, experimental scaffolds, and biomaterials ontology (DEB), an open resource for organizing information about biomaterials, their design, manufacture, and biological testing, is described. It is developed using text analysis for identifying ontology terms from a biomaterials gold standard corpus, systematically curated to represent the domain's lexicon. The topics covered are validated by members of the biomaterials research community. The ontology may be used for searching terms, performing annotations for machine learning applications, standardized metadata indexing, and other cross-disciplinary data exploitation. The input of the biomaterials community to this effort to create data-driven open-access research tools is encouraged and welcomed. Preprint

    Knot selection in sparse Gaussian processes with a variational objective function

    Sparse, knot-based Gaussian processes have enjoyed considerable success as scalable approximations of full Gaussian processes. Certain sparse models can be derived through specific variational approximations to the true posterior, and knots can be selected to minimize the Kullback-Leibler divergence between the approximate and true posterior. While this has been a successful approach, simultaneous optimization of knots can be slow due to the number of parameters being optimized. Furthermore, few methods have been proposed for selecting the number of knots, and no experimental results exist in the literature. We propose a one-at-a-time knot selection algorithm based on Bayesian optimization to select the number and locations of knots. On three benchmark datasets, we show that this method performs competitively with simultaneous optimization of the knots, but at a fraction of the computational cost.
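    The sketch below illustrates one-at-a-time knot selection for sparse GP regression, but with exhaustive greedy search over candidate knots and a collapsed variational (Titsias-style) bound rather than the paper's Bayesian-optimisation procedure; the kernel, noise level, and data are assumptions, and the bound is formed densely for clarity rather than via the O(nm^2) Woodbury/Cholesky identities a practical implementation would use.

        # Greedy one-at-a-time knot selection maximizing a collapsed variational lower bound.
        import numpy as np
        from scipy.stats import multivariate_normal

        def rbf(A, B, lengthscale=1.0):
            d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
            return np.exp(-0.5 * d2 / lengthscale**2)

        def elbo(X, y, Z, noise=0.1):
            # Collapsed bound: log N(y | 0, Q_nn + s^2 I) - tr(K_nn - Q_nn) / (2 s^2)
            Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
            Knm = rbf(X, Z)
            Qnn = Knm @ np.linalg.solve(Kmm, Knm.T)
            cov = Qnn + noise**2 * np.eye(len(X))
            trace_term = np.trace(rbf(X, X) - Qnn) / (2 * noise**2)
            return multivariate_normal.logpdf(y, mean=np.zeros(len(X)), cov=cov) - trace_term

        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(100, 1))
        y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=100)

        knots, candidates = [], list(range(len(X)))
        for _ in range(8):                                 # add 8 knots, one at a time
            best = max(candidates, key=lambda i: elbo(X, y, X[knots + [i]]))
            knots.append(best); candidates.remove(best)
        print("selected knot locations:", np.sort(X[knots, 0]).round(2))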