Analysis of healthcare service utilization after transport-related injuries by a mixture of hidden Markov models
© 2018 Esmaili et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Background Transport injuries commonly result in significant disease burden, leading to physical disability, mental health deterioration and reduced quality of life. Analyzing the patterns of healthcare service utilization after transport injuries can provide insight into the health of the affected parties, allow improved health system resource planning, and provide a baseline against which any future system-level interventions can be evaluated. Therefore, this research aims to use time series of service utilization provided by a compensation agency to identify groups of claimants with similar utilization patterns, describe such patterns, and characterize the groups in terms of demographics, accident type and injury type. Methods To achieve this aim, we have proposed an analytical framework that utilizes latent variables to describe the utilization patterns over time and group the claimants into clusters based on their service utilization time series. To perform the clustering without dismissing the temporal dimension of the time series, we have used a well-established statistical approach known as the mixture of hidden Markov models (MHMM). Following the clustering, we have applied multinomial logistic regression to provide a description of the clusters against demographic, injury and accident covariates. Results We have tested our model with data on psychology service utilization from one of the main compensation agencies for transport accidents in Australia, and found that three clear clusters of service utilization can be identified in the data.
These three clusters correspond to claimants who have tended to use the services 1) only briefly after the accident; 2) for an intermediate period of time and in moderate amounts; and 3) for a sustained period of time, and intensely. The size of these clusters is approximately 67%, 27% and 6% of the number of claimants, respectively. The multinomial logistic regression analysis showed that claimants who were 30 to 60 years old at the time of the accident, were witnesses, and who suffered a soft tissue injury were more likely to be part of the intermediate cluster than the majority cluster. Conversely, claimants who suffered more severe injuries such as a brain or head injury or a non-limb fracture, and who started their service utilization later, were more likely to be part of the sustained cluster.
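As an illustration of the clustering step, here is a minimal plain-Python sketch: each cluster is represented by a discrete two-state HMM ("active" vs. "inactive" service use), and a claimant's usage-indicator sequence is assigned to the cluster whose HMM scores it highest. All parameters below are hand-set for illustration only; the paper fits them from data via EM and uses a proper mixture formulation with soft responsibilities rather than this hard assignment.

```python
import math

# Hand-set illustrative parameters (NOT fitted, unlike the paper's MHMM).
# States: 0 = "active user", 1 = "inactive"; observations: 1 = service
# used in the period, 0 = not used. All clusters share PI and B and
# differ only in how quickly they leave the active state.
PI = [1.0, 0.0]                       # claimants start as active users
B = [[0.2, 0.8],                      # active state emits "use" w.p. 0.8
     [0.9, 0.1]]                      # inactive state emits "use" w.p. 0.1
MODELS = {
    "brief":        (PI, [[0.4, 0.6], [0.05, 0.95]], B),
    "intermediate": (PI, [[0.7, 0.3], [0.1, 0.9]], B),
    "sustained":    (PI, [[0.95, 0.05], [0.3, 0.7]], B),
}

def hmm_loglik(obs, pi, A, B):
    """Log-likelihood of a short observation sequence under a discrete
    HMM, computed with the (unscaled) forward algorithm."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return math.log(sum(alpha))

def assign_cluster(obs, models):
    """Hard-assign a usage sequence to the best-scoring cluster HMM."""
    return max(models, key=lambda name: hmm_loglik(obs, *models[name]))
```

Under these toy parameters, a sequence of sustained use is scored highest by the "sustained" model, while a sequence with only a couple of early uses falls into the "brief" cluster.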
Comparing Support Vector Machines with Gaussian Kernels to Radial Basis Function Classifiers
The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights and threshold so as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering and the weights are found using error backpropagation. We consider three machines, namely a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US postal service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well-founded, but also superior in a practical application.
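The Gaussian-kernel decision function at the heart of this comparison can be sketched in a few lines of plain Python. The support vectors, coefficients and bias below are hypothetical stand-ins for what SV training would produce; note that in the SV-RBF correspondence the support vectors play the role of the RBF centers.

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

def sv_decision(x, support_vectors, coeffs, bias, sigma=1.0):
    """SV decision function f(x) = sum_i c_i * k(sv_i, x) + bias, where
    c_i = alpha_i * y_i would come from training (illustrative here)."""
    return sum(c * gaussian_kernel(sv, x, sigma)
               for sv, c in zip(support_vectors, coeffs)) + bias
```

With two toy "support vectors" at [0, 0] (positive) and [2, 2] (negative), sign(f) separates points near one center from points near the other.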
Tornado Detection with Support Vector Machines
Abstract. The National Weather Service (NWS) Mesocyclone Detection Algorithms (MDA) use empirical rules to process velocity data from the Weather Surveillance Radar 1988 Doppler (WSR-88D). In this study Support Vector Machines (SVMs) are applied to mesocyclone detection. Comparisons with other classification methods such as neural networks and radial basis function networks show that SVMs are more effective in mesocyclone/tornado detection.
A Bayesian Approach to Inverse Quantum Statistics
A nonparametric Bayesian approach is developed to determine quantum
potentials from empirical data for quantum systems at finite temperature. The
approach combines the likelihood model of quantum mechanics with a priori
information over potentials, implemented in the form of stochastic processes. Its
specific advantages are the possibilities to deal with heterogeneous data and
to express a priori information explicitly, i.e., directly in terms of the
potential of interest. A numerical solution in maximum a posteriori
approximation was feasible for one-dimensional problems. Using correct a
priori information turned out to be essential.
Comment: 4 pages, 6 figures, revtex
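In symbols, the setup described above amounts to the standard Bayesian decomposition (notation ours, for illustration: $v$ the potential of interest, $D$ the empirical data, $p_0$ the stochastic-process prior):

```latex
p(v \mid D) \propto p(D \mid v)\, p_0(v),
\qquad
v_{\mathrm{MAP}} = \operatorname*{arg\,max}_{v}
  \bigl[ \log p(D \mid v) + \log p_0(v) \bigr]
```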
Structured Sparsity: Discrete and Convex approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: While the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many solutions proposed nowadays do not leverage
the true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated {\em structured} sparsity models, which describe the
interdependency between the nonzero components of a signal, allowing one to
increase the interpretability of the results and leading to better recovery
performance. In order to better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications.
Comment: 30 pages, 18 figures
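The discrete side of these sparsity models can be illustrated with iterative hard thresholding, a standard first-order method for the plain s-sparse model. This is a minimal pure-Python sketch under our own toy setup: the structured models discussed in the chapter would replace the top-s projection H_s with a projection onto the chosen support structure, and the step size here is illustrative.

```python
def iht(A, y, s, n_iters=100, step=1.0):
    """Iterative hard thresholding for y ~ A x with x s-sparse:
    x <- H_s(x + step * A^T (y - A x)), where H_s keeps the s
    largest-magnitude entries (the *discrete* sparsity projection)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(n_iters):
        # residual r = y - A x and gradient step g = A^T r
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [x[j] + step * g[j] for j in range(n)]
        # hard-threshold: zero out all but the s largest-magnitude entries
        keep = set(sorted(range(n), key=lambda j: -abs(x[j]))[:s])
        x = [x[j] if j in keep else 0.0 for j in range(n)]
    return x
```

For an orthonormal A the first gradient step is already exact, so the iteration recovers a 1-sparse signal immediately; for general A, convergence rests on restricted-isometry-type conditions analyzed in the CS literature.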
Assessment of the proportion of neonates and children in low and middle income countries with access to a healthcare facility: A systematic review
Background Comprehensive antenatal, perinatal and early postnatal care has the potential to significantly reduce the 3.58 million neonatal deaths that occur annually worldwide. This paper systematically reviews data on the proportion of neonates and children < 5 years of age that have access to health facilities in low and middle income countries. Gaps in available data by WHO region are identified, and an agenda for future research and advocacy is proposed. Methods For this paper, "utilization" was used as a proxy for "access" to a healthcare facility, and the term "facility" was used for any clinic or hospital outside of a person's home staffed by a "medical professional". A systematic literature search was conducted for published studies of children up to 5 years of age, including the neonatal age group, with an illness or illness symptoms in which health facility utilization was quantified. In addition, information from available Demographic and Health Surveys (DHS) was extracted. Results The initial broad search yielded 2,239 articles, of which 14 presented relevant data. In the community-based neonatal studies conducted in the Southeast Asia region with the goal of enhancing care-seeking for neonates with sepsis, 10-48% of sick neonates in the studies' control arms utilized a healthcare facility. Data from cross-sectional surveys involving young children indicate that 12 to 86% utilized healthcare facilities when sick. From the DHS surveys, a global median of 58.1% of infants < 6 months were taken to a facility for symptoms of ARI. Conclusions There is a scarcity of data regarding access to facility-based care for sick neonates and young children in many areas of the world; it was not possible to generalize an overall number of neonates or young children that utilize a healthcare facility when showing signs and symptoms of illness. The estimate ranges were broad, and there was a paucity of data from some regions. It is imperative that researchers, advocates, and policy makers join together to better understand the factors affecting healthcare utilization/access for newborns in different settings and the barriers that prevent children from being taken to a facility in a timely manner.
On the Bounds of Function Approximations
Within machine learning, the subfield of Neural Architecture Search (NAS) has
recently garnered research attention due to its ability to improve upon
human-designed models. However, the computational requirements for finding an
exact solution to this problem are often intractable, and the design of the
search space still requires manual intervention. In this paper we attempt to
establish a formalized framework from which we can better understand the
computational bounds of NAS in relation to its search space. For this, we first
reformulate the function approximation problem in terms of sequences of
functions, and we call it the Function Approximation (FA) problem; then we show
that it is computationally infeasible to devise a procedure that solves FA for
all functions to zero error, regardless of the search space. We also show that
this error will be minimal if a specific class of functions is present in the
search space. Subsequently, we show that machine learning as a mathematical
problem is a solution strategy for FA, albeit not an effective one, and further
describe a stronger version of this approach: the Approximate Architectural
Search Problem (a-ASP), which is the mathematical equivalent of NAS. We
leverage the framework from this paper and results from the literature to
describe the conditions under which a-ASP can potentially solve FA as well as
an exhaustive search, but in polynomial time.
Comment: Accepted as a full paper at ICANN 2019. The final, authenticated
publication will be available at https://doi.org/10.1007/978-3-030-30487-4_3
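The exhaustive-search baseline against which a-ASP is compared can be sketched in a few lines of Python. The primitive set and the mean-squared-error measure below are our own illustrative assumptions, not the paper's formal definitions: candidate "architectures" are compositions of unary primitives, ranked by their approximation error on sample points.

```python
import itertools

# Hypothetical primitive functions forming a tiny search space.
PRIMITIVES = {
    "relu": lambda t: max(t, 0.0),
    "square": lambda t: t * t,
    "negate": lambda t: -t,
    "identity": lambda t: t,
}

def compose(names):
    """Build a function by applying the named primitives left to right."""
    def f(t):
        for name in names:
            t = PRIMITIVES[name](t)
        return t
    return f

def approximation_error(f, target, xs):
    """Mean squared error of candidate f against the target function."""
    return sum((f(x) - target(x)) ** 2 for x in xs) / len(xs)

def exhaustive_search(target, xs, max_depth=2):
    """Enumerate all compositions up to max_depth and return the one
    minimizing approximation error: the brute-force baseline that
    a-ASP aims to match in polynomial time."""
    best, best_err = None, float("inf")
    for depth in range(1, max_depth + 1):
        for names in itertools.product(PRIMITIVES, repeat=depth):
            err = approximation_error(compose(names), target, xs)
            if err < best_err:
                best, best_err = names, err
    return best, best_err
```

If the target function's class is present in the search space (here, t -> t*t via the "square" primitive), exhaustive search drives the error to zero, echoing the paper's condition for minimal error.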