Languages of games and play: A systematic mapping study
Digital games are a powerful means for creating enticing, beautiful, educational, and often highly addictive interactive experiences that impact the lives of billions of players worldwide. We explore what informs the design and construction of good games in order to learn how to speed up game development. In particular, we study to what extent languages, notations, patterns, and tools can offer experts the theoretical foundations, systematic techniques, and practical solutions they need to raise their productivity and improve the quality of games and play. Despite the growing number of publications on this topic, there is currently no overview of the state of the art that relates research areas, goals, and applications. As a result, efforts and successes are often one-off, lessons learned go overlooked, language reuse remains minimal, and opportunities for collaboration and synergy are lost. We present a systematic map that identifies relevant publications and gives an overview of research areas and publication venues. In addition, we categorize research perspectives along common objectives, techniques, and approaches, illustrated by summaries of selected languages. Finally, we distill challenges and opportunities for future research and development.
Synthesizing electronic health records for predictive models in low-middle-income countries (LMICs)
The spread of machine learning models, coupled with the growing adoption of electronic health records (EHRs), has opened the door to developing clinical decision support systems. However, despite the great promise of machine learning for healthcare in low-middle-income countries (LMICs), many data-specific limitations, such as small dataset sizes and irregular sampling, hinder progress in such applications. Recently, deep generative models have been proposed to generate realistic-looking synthetic data, including EHRs, by learning the underlying data distribution without compromising patient privacy. In this study, we first use a deep generative model to generate synthetic data based on a small dataset (364 patients) from an LMIC setting. Next, we use the synthetic data to build models that predict the onset of hospital-acquired infections based on minimal information collected at patient ICU admission. The diagnostic model trained on the synthetic data outperformed models trained on the original data and on data oversampled using techniques such as SMOTE. We also experiment with varying the size of the synthetic data and observe the impact on the performance and interpretability of the models. Our results show the promise of deep generative models in enabling healthcare data owners to develop and validate models that serve their needs and applications, despite limitations in dataset size.
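For context, the SMOTE baseline that the synthetic-data models are compared against can be sketched in a few lines of pure Python. The helper name `smote_oversample` and its parameters are illustrative only, not the implementation used in the study:

```python
import random

def smote_oversample(minority, n_new, k=3, seed=0):
    """Toy SMOTE: synthesize n_new minority-class samples by interpolating
    between a random minority sample and one of its k nearest neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x by squared Euclidean distance
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic
```

Unlike a deep generative model, SMOTE only interpolates along line segments between existing minority samples, which is one reason a learned generative model can produce more varied synthetic records.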
A review of high impact weather for aviation meteorology
This review paper summarizes current knowledge available for aviation operations related to meteorology and provides suggestions for necessary improvements in the measurement and prediction of weather-related parameters, new physical methods for numerical weather prediction (NWP), and next-generation integrated systems. Severe weather can disrupt aviation operations on the ground or in-flight. The most important parameters related to aviation meteorology are wind and turbulence, fog and visibility, aerosol/ash loading, ceiling, rain and snow amounts and rates, icing, ice microphysical parameters, convection and precipitation intensity, microbursts, hail, and lightning. Measurements of these parameters are functions of sensor response times and measurement thresholds in extreme weather conditions. In addition, airport environments can also play an important role, leading to intensification of extreme weather conditions or high impact weather events, e.g., anthropogenic ice fog. To observe meteorological parameters, new remote sensing platforms, namely wind LIDAR, sodars, radars, and geostationary satellites, as well as in situ instruments at the surface and in the atmosphere and sensors mounted on aircraft and Unmanned Aerial Vehicles, are becoming more common. At smaller time and space scales (e.g., < 1 km), meteorological forecasts from NWP models need to be continuously improved for accurate physical parameterizations. Aviation weather forecasts also need to be developed to provide detailed information that represents both deterministic and statistical approaches. In this review, we present available resources and issues for aviation meteorology, evaluate them for required improvements related to measurements, nowcasting, forecasting, and climate change, and emphasize future challenges.
Semi-Supervised Named Entity Recognition: Learning to Recognize 100 Entity Types with Little Supervision
Named Entity Recognition (NER) aims to extract and classify rigid designators in text, such as proper names, biological species, and temporal expressions. There has been growing interest in this field of research since the early 1990s. In this thesis, we document a trend moving away from handcrafted rules and towards machine learning approaches. Still, recent machine learning approaches suffer from limited availability of annotated data, which is a serious shortcoming in building and maintaining large-scale NER systems.

In this thesis, we present an NER system built with very little supervision. Human supervision is limited to listing a few examples of each named entity (NE) type. First, we introduce a proof-of-concept semi-supervised system that can recognize four NE types. Then, we expand its capacities by improving key technologies, and we apply the system to an entire hierarchy comprised of 100 NE types.

Our work makes the following contributions: the creation of a proof-of-concept semi-supervised NER system; the demonstration of an innovative noise-filtering technique for generating NE lists; the validation of a strategy for learning disambiguation rules using automatically identified, unambiguous NEs; and finally, the development of an acronym detection algorithm, thus solving a rare but very difficult problem in alias resolution.

We believe semi-supervised learning techniques are about to break new ground in the machine learning community. In this thesis, we show that limited supervision can build complete NER systems. On standard evaluation corpora, we report performances comparable to baseline supervised systems on the task of annotating NEs in text.
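The acronym detection problem mentioned in the contributions above can be illustrated with a minimal initial-letter heuristic. This sketch is far simpler than the thesis's algorithm, and the function name is hypothetical:

```python
def matches_acronym(acronym, phrase):
    # Naive heuristic: the acronym's letters must match the initials of the
    # phrase's words, in order. A real alias-resolution algorithm must also
    # handle stop words, inner letters, and punctuation variants.
    words = phrase.split()
    letters = acronym.rstrip(".").upper()
    return len(words) == len(letters) and all(
        word[0].upper() == letter for word, letter in zip(words, letters)
    )
```

Even this toy version shows why the problem is hard: many candidate expansions in surrounding text pass or fail such surface checks ambiguously, so disambiguation evidence is needed on top.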
Computationally bounded rationality from three perspectives: precomputation, regret tradeoffs, and lifelong learning
What does it mean for a computer program to be optimal? Many fields in optimal decision making, from game theory to Bayesian decision theory, define optimal solutions which can be computationally intractable to implement or find. This is problematic, because it means that sometimes these solutions are not physically realizable. To address this problem, bounded rationality studies what it means to behave optimally subject to constraints on processing time, memory and knowledge. This thesis contributes three new models for studying bounded rationality in different contexts.
The first model considers games like chess. We suppose each player can spend some time before the game precomputing (memorizing) strong moves from an oracle, but has limited memory to remember these moves. We show how to analytically quantify how randomly optimal strategies play in equilibrium, and give polynomial-time algorithms for computing a best response and an ε-Nash equilibrium. We use the best response algorithm to empirically evaluate the chess playing program Stockfish.
The second model takes place in the setting of adversarial online learning. Here, we imagine an algorithm receives new problems online, and is given a computational budget to run B problem solvers for each problem. We show how to trade off the budget B for a strengthening of the algorithm’s regret guarantee in both the full and semi-bandit feedback settings. We then show how this tradeoff implies new results for Online Submodular Function Maximization (OSFM) (Streeter and Golovin, 2008) and Linear Programming. We use these observations to derive and benchmark a new algorithm for OSFM.
The third model approaches bounded rationality from the perspective of lifelong learning (Chen and Liu, 2018). Instead of modelling the final solution, lifelong learning models how a computationally bounded agent can accumulate knowledge over time and attempt to solve tractable subproblems it encounters. We develop models for incrementally accumulating and learning knowledge in a domain-agnostic setting, and use these models to give an abstract framework for a lifelong reinforcement learner. The framework takes a step towards retaining analytical performance guarantees while still being able to use black-box techniques, such as neural networks, which may perform well in practice.
AI in Learning: Designing the Future
AI (Artificial Intelligence) is predicted to radically change teaching and learning in both schools and industry, causing radical disruption of work. AI can support well-being initiatives and lifelong learning, but educational institutions and companies need to take the changing technology into account. Moving towards AI supported by digital tools requires a dramatic shift in the concept of learning, of expertise, and of the businesses built on them. Based on the latest research on AI and how it is changing learning and education, this book focuses on the enormous opportunities to expand educational settings with AI for learning in and beyond the traditional classroom. This open access book also introduces ethical challenges related to learning and education, while connecting human learning and machine learning. This book will be of use to a variety of readers, including researchers, AI users, companies, and policy makers.
Testing the Potential of Deep Learning in Earthquake Forecasting
Reliable earthquake forecasting methods have long been sought after, and the rise of modern data science techniques raises a new question: does deep learning have the potential to learn this pattern? In this study, we leverage the large number of earthquakes reported via good seismic station coverage in the subduction zone of Japan. We pose earthquake forecasting as a classification problem and train a deep learning network to decide whether a time series of length greater than 2 years will end in an earthquake of magnitude greater than 5 on the following day or not. Our method is based on spatiotemporal b-value data, on which we train an autoencoder to learn the normal seismic behaviour. We then take the pixel-by-pixel reconstruction error as input for a convolutional dilated network classifier, whose model output could serve for earthquake forecasting. We develop a special progressive training method for this model to mimic real-life use. The trained network is then evaluated over the actual data series of Japan from 2002 to 2020 to simulate a real-life application scenario. The overall accuracy of the model is 72.3 percent. The accuracy of this classification is significantly above the baseline and can likely be improved with more data in the future.
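The core of the pipeline described above, turning an autoencoder's reconstruction of a b-value map into an anomaly signal, can be sketched as follows. A simple mean-error score stands in for the convolutional dilated network classifier, and all names here are illustrative, not the study's code:

```python
def reconstruction_error(observed, reconstructed):
    """Pixel-by-pixel squared error between the observed b-value map
    (a 2-D grid) and the autoencoder's reconstruction of it."""
    return [
        [(o - r) ** 2 for o, r in zip(row_obs, row_rec)]
        for row_obs, row_rec in zip(observed, reconstructed)
    ]

def anomaly_score(error_map):
    # Mean error over the map: cells the autoencoder reconstructs poorly
    # mark seismicity that deviates from the learned "normal" behaviour.
    cells = [e for row in error_map for e in row]
    return sum(cells) / len(cells)
```

In the study the full error map, not a scalar summary, is fed to the downstream classifier, so spatial patterns of anomalous cells are preserved.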