Incidence of Gunshot Wounds: Before and After Implementation of a Shall Issue Conceal Carry Law
Introduction. This study examined the incidence of gunshot wounds before and after enactment of a conceal carry (CC) law in a predominantly rural state.
Methods. A retrospective review was conducted of all patients who were admitted with a gunshot injury to a Level I trauma center. Patient data collected included demographics, injury details, hospital course, and discharge destination.
Results. Among the 238 patients included, 44.6% (n = 107) were admitted during the pre-CC period and 55.4% (n = 131) during the post-CC period. No demographic differences were noted between the two periods except for an increase in uninsured patients (43.0% vs 61.1%, p = 0.020). Compared to pre-CC patients, post-CC patients experienced a trend toward increased abdominal injury (11.2% vs 20.6%, p = 0.051) and increased vascular injuries (11.2% vs 22.1%, p = 0.026), while lower extremity injuries decreased significantly (38.3% vs 26.0%, p = 0.041). Positive focused assessment with sonography in trauma (FAST) exams (2.2% vs 16.8%, p < 0.001), intensive care unit admission (26.2% vs 42.0%, p = 0.011), and need for ventilator support (11.2% vs 22.1%, p = 0.026) all increased during the post-CC period. In-hospital mortality more than doubled (8.4% vs 18.3%, p = 0.028) across the pre- and post-CC periods.
Conclusion. Implementation of a CC law was not associated with a decrease in the overall number of penetrating injuries or a decrease in mortality.
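The group comparisons above rest on standard contingency-table tests. As an illustration only — the paper does not publish its analysis code, and the death counts below (9 and 24) are back-calculated from the reported percentages and group sizes — a Pearson chi-square test on the mortality figures reproduces a p-value near the reported 0.028:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square test (no continuity correction) for the 2x2
    table [[a, b], [c, d]]; returns (statistic, p-value), df = 1."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n
        stat += (obs - exp) ** 2 / exp
    # For df = 1 the chi-square survival function reduces to erfc.
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Mortality: 8.4% of 107 pre-CC patients (~9 deaths) vs 18.3% of 131
# post-CC patients (~24 deaths); counts are back-calculated estimates.
stat, p = chi_square_2x2(9, 107 - 9, 24, 131 - 24)
```

With these reconstructed counts the uncorrected test gives p ≈ 0.028; with a Yates continuity correction it rises to roughly 0.044, which suggests (but does not confirm) that the study used the uncorrected test.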
Smart City Analytics: Ensemble-Learned Prediction of Citizen Home Care
We present an ensemble learning method that predicts large increases in the
hours of home care received by citizens. The method is supervised, and uses
different ensembles of either linear (logistic regression) or non-linear
(random forests) classifiers. Experiments with data available from 2013 to 2017
for every citizen in Copenhagen receiving home care (27,775 citizens) show that
prediction can achieve state-of-the-art performance as reported in similar
health related domains (AUC=0.715). We further find that competitive results
can be obtained by using limited information for training, which is very useful
when full records are not accessible or available. Smart city analytics does
not necessarily require full city records.
To our knowledge this preliminary study is the first to predict large
increases in home care for smart city analytics.
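The abstract names the ingredients (logistic regression or random forest base learners, ensembling, AUC) but not the implementation. The sketch below shows the general recipe only, on toy data with a from-scratch logistic learner — it is not the paper's classifiers, features, or the Copenhagen records:

```python
import math
import random

def train_logistic(X, y, lr=0.5, epochs=300):
    """Plain SGD logistic regression; returns weights (bias last)."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w[:-1], xi)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
            g = p - yi                     # gradient of the log-loss
            for j, xj in enumerate(xi):
                w[j] -= lr * g * xj
            w[-1] -= lr * g
    return w

def predict_proba(w, xi):
    z = sum(wj * xj for wj, xj in zip(w[:-1], xi)) + w[-1]
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

def bagged_ensemble(X, y, n_models=5, seed=0):
    """Train n_models learners on bootstrap resamples and average
    their predicted probabilities (soft voting)."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        sample = [rng.randrange(len(X)) for _ in range(len(X))]
        models.append(train_logistic([X[i] for i in sample],
                                     [y[i] for i in sample]))
    return lambda xi: sum(predict_proba(w, xi) for w in models) / n_models

def auc(y_true, scores):
    """AUC as the Mann-Whitney probability that a random positive
    outscores a random negative; ties count half."""
    pos = [s for s, t in zip(scores, y_true) if t == 1]
    neg = [s for s, t in zip(scores, y_true) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: label is 1 when the two features sum past 1 (training AUC only).
rng = random.Random(42)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [1 if x1 + x2 > 1.0 else 0 for x1, x2 in X]
model = bagged_ensemble(X, y)
score = auc(y, [model(xi) for xi in X])
```

The "limited information" finding in the abstract corresponds to shrinking the feature vectors fed to such an ensemble; the soft-voting structure is unchanged.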
Convergence in Income Inequality: Further Evidence from the Club Clustering Methodology across States in the U.S.
This paper contributes to the sparse literature on inequality convergence by empirically testing convergence across states in the U.S. The sample period encompasses a series of different periods that the existing literature discusses -- the Great Depression (1929–1944), the Great Compression (1945–1979), the Great Divergence (1980–present), the Great Moderation (1982–2007), and the Great Recession (2007–2009). This paper implements the relatively new method of panel convergence testing recommended by Phillips and Sul (2007). This method examines the club convergence hypothesis, which argues that certain countries, states, sectors, or regions belong to a club that moves from disequilibrium positions to its club-specific steady-state position. We find strong support for convergence through the late 1970s and early 1980s, and then evidence of divergence. The divergence, however, moves the dispersion of inequality measures across states only a fraction of the way back to their levels in the early part of the twentieth century.
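For readers new to the method, the core of the Phillips and Sul (2007) procedure is the "log t" regression. The sketch below restates the standard published form from memory (our summary in generic notation, not this paper's own exposition): each state's log income measure is expressed relative to the cross-sectional average, and the decay of the cross-sectional variance of these relative paths is tested.

```latex
% Relative transition paths and their cross-sectional variance:
h_{it} = \frac{\log y_{it}}{\tfrac{1}{N}\sum_{j=1}^{N}\log y_{jt}},
\qquad
H_t = \frac{1}{N}\sum_{i=1}^{N}\left(h_{it}-1\right)^2 .

% The "log t" regression, run over the last fraction of the sample
% (t = \lfloor rT \rfloor, \dots, T; r = 0.3 is a common choice):
\log\!\left(\frac{H_1}{H_t}\right) - 2\log(\log t)
  = a + b\,\log t + u_t .
```

The null of overall convergence is rejected at the 5% level when the one-sided t-statistic on b (computed with HAC standard errors) falls below -1.65; convergence clubs are then identified by applying the test recursively to subgroups of states.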
Sequence Modelling For Analysing Student Interaction with Educational Systems
The analysis of log data generated by online educational systems is an
important task for improving the systems, and furthering our knowledge of how
students learn. This paper uses previously unseen log data from Edulab, the
largest provider of digital learning for mathematics in Denmark, to analyse the
sessions of its users, where 1.08 million student sessions are extracted from a
subset of their data. We propose to model students as a distribution of
different underlying student behaviours, where the sequence of actions from
each session belongs to an underlying student behaviour. We model student
behaviour as Markov chains, such that a student is modelled as a distribution
of Markov chains, which are estimated using a modified k-means clustering
algorithm. The resulting Markov chains are readily interpretable, and in a
qualitative analysis, around 125,000 student sessions are identified as
exhibiting unproductive student behaviour. Based on our results, this student
representation is promising, especially for educational systems offering many
different learning usages, and offers an alternative to common approaches like
modelling student behaviour as a single Markov chain often done in the
literature.
Comment: The 10th International Conference on Educational Data Mining 201
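The abstract does not spell out the modified k-means over Markov chains, so the sketch below is our illustrative reconstruction of the general recipe (all names and the toy data are assumptions): estimate one transition matrix per cluster, assign each session to the chain under which its action sequence is most likely, and re-estimate from the pooled counts.

```python
import math

def transition_counts(seqs, n_actions):
    """Pooled transition counts over a set of action sequences."""
    C = [[0.0] * n_actions for _ in range(n_actions)]
    for seq in seqs:
        for a, b in zip(seq, seq[1:]):
            C[a][b] += 1
    return C

def normalise(C, smooth=1.0):
    """Row-normalise counts into a transition matrix (Laplace smoothing)."""
    n = len(C)
    return [[(C[i][j] + smooth) / (sum(C[i]) + smooth * n)
             for j in range(n)] for i in range(n)]

def log_lik(seq, P):
    """Log-likelihood of an action sequence under a Markov chain P."""
    return sum(math.log(P[a][b]) for a, b in zip(seq, seq[1:]))

def cluster_sessions(sessions, n_actions, k, iters=10):
    """k-means variant: each centroid is a Markov chain; sessions are
    assigned to the chain under which their sequence is most likely."""
    # Deterministic init: seed centroids from evenly spaced sessions.
    step = max(1, len(sessions) // k)
    chains = [normalise(transition_counts([sessions[i * step]], n_actions))
              for i in range(k)]
    assign = [0] * len(sessions)
    for _ in range(iters):
        assign = [max(range(k), key=lambda c: log_lik(s, chains[c]))
                  for s in sessions]
        chains = [normalise(transition_counts(
                      [s for s, a in zip(sessions, assign) if a == c],
                      n_actions)) for c in range(k)]
    return assign, chains

# Toy sessions over 2 actions: "alternators" (0,1,0,1,...) vs "repeaters".
alternators = [[0, 1] * 10 for _ in range(5)]
repeaters = [[0] * 20 for _ in range(5)]
assign, chains = cluster_sessions(alternators + repeaters, 2, 2)
```

A student is then represented as the empirical distribution of their sessions' cluster assignments, which is the "distribution of Markov chains" the abstract describes.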
Neural Speed Reading with Structural-Jump-LSTM
Recurrent neural networks (RNNs) can model natural language by sequentially
'reading' input tokens and outputting a distributed representation of each
token. Due to the sequential nature of RNNs, inference time is linearly
dependent on the input length, and all inputs are read regardless of their
importance. Efforts to speed up this inference, known as 'neural speed
reading', either ignore or skim over part of the input. We present
Structural-Jump-LSTM: the first neural speed reading model to both skip and
jump text during inference. The model consists of a standard LSTM and two
agents: one capable of skipping single words when reading, and one capable of
exploiting punctuation structure (sub-sentence separators (,:), sentence end
symbols (.!?), or end of text markers) to jump ahead after reading a word. A
comprehensive experimental evaluation of our model against all five
state-of-the-art neural reading models shows that Structural-Jump-LSTM achieves
the best overall floating point operations (FLOP) reduction (hence is faster),
while keeping the same accuracy or even improving it compared to a vanilla LSTM
that reads the whole text.
Comment: 10 page
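The jump mechanics can be made concrete without the LSTM or the trained agents. The helper below is an illustration written from the abstract's description (not the authors' code): it implements only the jump targets the jump agent chooses among after reading a word.

```python
# Jump targets per the description: after reading a word, the jump agent
# may resume after the next sub-sentence separator (, :), after the next
# sentence-end symbol (. ! ?), or at the end of the text.
SUB_SENTENCE = {",", ":"}
SENTENCE_END = {".", "!", "?"}

def jump_target(tokens, pos, action):
    """Return the next reading position after acting at `pos`.
    action: 'none' (keep reading), 'sub' (next , or :),
            'sent' (next . ! ?), or 'end' (end of text)."""
    if action == "none":
        return pos + 1
    if action == "end":
        return len(tokens)
    targets = SUB_SENTENCE if action == "sub" else SENTENCE_END
    for i in range(pos + 1, len(tokens)):
        if tokens[i] in targets:
            return i + 1          # resume right after the separator
    return len(tokens)            # no separator left: jump to the end

tokens = "the model , once trained , reads fast . it skips too .".split()
```

The separate skip agent simply drops the LSTM state update for the current word; in both cases the FLOP savings come from performing fewer recurrent updates, which is what the paper's FLOP-reduction comparison measures.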
Jobs, Welfare and Austerity: how the destruction of industrial Britain casts a shadow over the present-day public finances
In this short paper we aim to explain how the loss of Britain’s industrial base sets the context for present-day public finances. In doing so, we draw in particular on our own research at CRESR over the last three decades. Individual components of this research provide pieces of the jigsaw, but by combining all the pieces and drawing on wider ideas in economics to fill in some of the gaps the overall picture becomes clear.
In brief, our argument is that the destruction of industrial jobs, which was so marked in the 1980s and early 90s but has continued on and off ever since, fuelled spending on welfare benefits which in turn has compounded the budgetary problems of successive governments. And with the present government set on welfare reform, the places that bore the brunt of job destruction some years ago are now generally facing the biggest reductions in household incomes. There is a continuous thread linking what happened to British industry in the 1980s, via the Treasury’s budgetary calculations, to what is today happening on the ground in so many hard-pressed communities.
In particular, we demonstrate these links by deploying local data. This has been the distinctive contribution of our research (and of CRESR more generally) and its value is that it provides not just a level of detail that would otherwise be missing but, more importantly, it sheds light on the underlying processes at work. The Treasury knows it has a problem balancing public finances, and that the government spends an awful lot on working-age welfare benefits. But it never seems to ask exactly which towns and cities draw so heavily on benefits, or why these communities have become so dependent on welfare spending.
Modelling Sequential Music Track Skips using a Multi-RNN Approach
Modelling sequential music skips provides streaming companies the ability to
better understand the needs of the user base, resulting in a better user
experience by reducing the need to manually skip certain music tracks. This
paper describes the solution of the University of Copenhagen DIKU-IR team in
the 'Spotify Sequential Skip Prediction Challenge', where the task was to
predict the skip behaviour of the second half in a music listening session
conditioned on the first half. We model this task using a Multi-RNN approach
consisting of two distinct stacked recurrent neural networks, where one network
focuses on encoding the first half of the session and the other network focuses
on utilizing the encoding to make sequential skip predictions. The encoder
network is initialized by a learned session-wide music encoding, and both of
them utilize a learned track embedding. Our final model consists of a majority
voted ensemble of individually trained models, and ranked 2nd out of 45
participating teams in the competition with a mean average accuracy of 0.641
and an accuracy on the first skip prediction of 0.807. Our code is released at
https://github.com/Varyn/WSDM-challenge-2019-spotify.
Comment: 4 page
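The two headline numbers are mean average accuracy and first-prediction accuracy. The sketch below covers the pieces that can be reproduced from the abstract alone: our reading of the challenge's position-weighted metric (an assumption, not taken from the released code) and a positionwise majority vote like the one used to ensemble the trained models.

```python
def average_accuracy(pred, truth):
    """Average accuracy for one session's second half:
    AA = (1/T) * sum_i A(i) * L(i), where A(i) is the accuracy over the
    first i predictions and L(i) is 1 iff prediction i is correct, so
    early mistakes cost more (our reading of the challenge metric)."""
    T = len(pred)
    correct = 0
    total = 0.0
    for i, (p, t) in enumerate(zip(pred, truth), start=1):
        if p == t:
            correct += 1
            total += correct / i        # A(i) * L(i)
    return total / T

def majority_vote(predictions):
    """Combine per-model skip predictions (lists of 0/1) positionwise."""
    return [int(sum(votes) * 2 > len(votes)) for votes in zip(*predictions)]
```

The reported mean average accuracy of 0.641 is this per-session quantity averaged over all test sessions; the 0.807 figure is the plain accuracy of the first prediction in each session.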