Scoring dynamics across professional team sports: tempo, balance and predictability
Despite growing interest in quantifying and modeling the scoring dynamics
within professional sports games, relatively little is known about what patterns
or principles, if any, cut across different sports. Using a comprehensive data
set of scoring events in nearly a dozen consecutive seasons of college and
professional (American) football, professional hockey, and professional
basketball, we identify several common patterns in scoring dynamics. Across
these sports, scoring tempo---when scoring events occur---closely follows a
common Poisson process, with a sport-specific rate. Similarly, scoring
balance---how often a team wins an event---follows a common Bernoulli process,
with a parameter that effectively varies with the size of the lead. Combining
these processes within a generative model of gameplay, we find they both
reproduce the observed dynamics in all four sports and accurately predict game
outcomes. These results demonstrate common dynamical patterns underlying
within-game scoring dynamics across professional team sports, and suggest
specific mechanisms for driving them. We close with a brief discussion of the
implications of our results for several popular hypotheses about sports
dynamics.
Comment: 18 pages, 8 figures, 4 tables, 2 appendices
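The generative model described in this abstract combines a Poisson process for scoring tempo with a lead-dependent Bernoulli process for scoring balance. A minimal sketch of such a model is below; the rate, game length, and lead-penalty parameters are illustrative assumptions, not the paper's fitted values.

```python
import random

def simulate_game(rate_per_min=0.5, minutes=48, seed=0):
    """Sketch of a Poisson-tempo / Bernoulli-balance scoring model.

    Scoring events arrive as a Poisson process (exponential inter-event
    times); each event is won by team A via a Bernoulli trial whose bias
    shrinks with A's lead -- a simple stand-in for the lead-size effect
    described in the abstract. All parameter values are hypothetical."""
    rng = random.Random(seed)
    score = {"A": 0, "B": 0}
    t = 0.0
    while True:
        t += rng.expovariate(rate_per_min)  # next scoring event
        if t > minutes:
            break
        lead = score["A"] - score["B"]
        # Bernoulli balance: the trailing team is slightly favored
        p_a = min(max(0.5 - 0.02 * lead, 0.05), 0.95)
        winner = "A" if rng.random() < p_a else "B"
        score[winner] += 1
    return score

final_score = simulate_game()
```

Repeating the simulation over many seeds would reproduce the kind of within-game dynamics and outcome distributions the abstract describes.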
Mesoscopic description of hippocampal replay and metastability in spiking neural networks with short-term plasticity
Bottom-up models of functionally relevant patterns of neural activity provide
an explicit link between neuronal dynamics and computation. A prime example of
functional activity pattern is hippocampal replay, which is critical for memory
consolidation. The switching between replay events and a low-activity state in
neural recordings suggests metastable neural circuit dynamics. As metastability
has been attributed to noise and/or slow fatigue mechanisms, we propose a
concise mesoscopic model which accounts for both. Crucially, our model is
bottom-up: it is analytically derived from the dynamics of finite-size networks
of Linear-Nonlinear Poisson neurons with short-term synaptic depression. As
such, noise is explicitly linked to spike noise and network size, and fatigue
is explicitly linked to synaptic dynamics. To derive the mesoscopic model, we
first consider a homogeneous spiking neural network and follow the temporal
coarse-graining approach of Gillespie ("chemical Langevin equation"), which can
be naturally interpreted as a stochastic neural mass model. The Langevin
equation is computationally inexpensive to simulate and enables a thorough
study of metastable dynamics in classical setups (population spikes and Up-Down
states dynamics) by means of phase-plane analysis. This stochastic neural mass
model is the basic component of our mesoscopic model for replay. We show that
our model faithfully captures the stochastic nature of individual replayed
trajectories. Moreover, compared to the deterministic Romani-Tsodyks model of
place cell dynamics, it exhibits a higher level of variability in the content,
direction, and timing of replay events, which is compatible with biological
evidence and could be functionally desirable. This variability is the product
of a new dynamical regime where metastability emerges from a complex interplay
between finite-size fluctuations and local fatigue.
Comment: 43 pages, 8 figures
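The stochastic neural mass model outlined above can be sketched with an Euler-Maruyama integration of a population rate equation plus short-term depression, where finite-size noise scales with the population rate over the network size. The transfer function and all parameters below are illustrative assumptions, not the paper's derived values.

```python
import math
import random

def simulate_population(N=500, T=2.0, dt=1e-3, seed=1):
    """Minimal Euler-Maruyama sketch of a stochastic neural mass model
    with short-term synaptic depression, in the spirit of the chemical
    Langevin equation mentioned in the abstract. Hypothetical parameters."""
    rng = random.Random(seed)
    h, x = 0.0, 1.0                      # input potential, synaptic resource
    tau_h, tau_x, U, w = 0.02, 0.5, 0.2, 8.0
    f = lambda v: 50.0 / (1.0 + math.exp(-(v - 1.0)))  # rate function (Hz)
    trace = []
    for _ in range(int(T / dt)):
        r = f(h)
        # finite-size fluctuations scale like sqrt(rate / N)
        noise = math.sqrt(r / N) * rng.gauss(0.0, 1.0)
        h += (-h + w * U * x * r) / tau_h * dt + noise * math.sqrt(dt)
        # short-term depression: resource recovers slowly, depletes with rate
        x = min(max(x + ((1.0 - x) / tau_x - U * x * r) * dt, 0.0), 1.0)
        trace.append(r)
    return trace

rates = simulate_population()
```

Varying `N` changes the noise amplitude, which is one way such a model exposes the interplay between finite-size fluctuations and fatigue described above.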
Proceedings of MathSport International 2017 Conference
Proceedings of MathSport International 2017 Conference, held in the Botanical Garden of the University of Padua, June 26-28, 2017.
MathSport International organizes biennial conferences dedicated to all topics where mathematics and sport meet.
Topics include: performance measures, optimization of sports performance, statistics and probability models, mathematical and physical models in sports, competitive strategies, statistics and probability match outcome models, optimal tournament design and scheduling, decision support systems, analysis of rules and adjudication, econometrics in sport, analysis of sporting technologies, financial valuation in sport, e-sports (gaming), betting and sports
Learning high-speed flight in the wild
Quadrotors are agile. Unlike most other machines, they can traverse extremely complex environments at high speeds. To date, only expert human pilots have been able to fully exploit their capabilities. Autonomous operation with onboard sensing and computation has been limited to low speeds. State-of-the-art methods generally separate the navigation problem into subtasks: sensing, mapping, and planning. Although this approach has proven successful at low speeds, the separation it builds upon can be problematic for high-speed navigation in cluttered environments. The subtasks are executed sequentially, leading to increased processing latency and a compounding of errors through the pipeline. Here, we propose an end-to-end approach that can autonomously fly quadrotors through complex natural and human-made environments at high speeds with purely onboard sensing and computation. The key principle is to directly map noisy sensory observations to collision-free trajectories in a receding-horizon fashion. This direct mapping drastically reduces processing latency and increases robustness to noisy and incomplete perception. The sensorimotor mapping is performed by a convolutional network that is trained exclusively in simulation via privileged learning: imitating an expert with access to privileged information. By simulating realistic sensor noise, our approach achieves zero-shot transfer from simulation to challenging real-world environments that were never experienced during training: dense forests, snow-covered terrain, derailed trains, and collapsed buildings. Our work demonstrates that end-to-end policies trained in simulation enable high-speed autonomous flight through challenging environments, outperforming traditional obstacle-avoidance pipelines.
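The receding-horizon principle described above can be sketched as a simple control loop: the policy maps each (noisy) observation directly to a short trajectory, only the first segment is executed, and the loop replans. The callables `policy`, `get_observation`, and `execute` below are hypothetical stand-ins for the trained network, onboard sensing, and the flight controller.

```python
def receding_horizon_fly(policy, get_observation, execute, steps=100, horizon=10):
    """Sketch of a receding-horizon sensorimotor loop (illustrative, not
    the paper's actual system): map observation -> short trajectory,
    execute only the first segment, then replan on fresh sensor data."""
    for _ in range(steps):
        obs = get_observation()            # e.g. depth image + state + goal
        trajectory = policy(obs, horizon)  # short collision-free trajectory
        execute(trajectory[0])             # act on the first segment only
```

Replanning at every step is what keeps latency low and lets the controller absorb perception noise, as the abstract argues.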
Learning Scheduling Algorithms for Data Processing Clusters
Efficiently scheduling data processing jobs on distributed compute clusters
requires complex algorithms. Current systems, however, use simple generalized
heuristics and ignore workload characteristics, since developing and tuning a
scheduling policy for each workload is infeasible. In this paper, we show that
modern machine learning techniques can generate highly-efficient policies
automatically. Decima uses reinforcement learning (RL) and neural networks to
learn workload-specific scheduling algorithms without any human instruction
beyond a high-level objective such as minimizing average job completion time.
Off-the-shelf RL techniques, however, cannot handle the complexity and scale of
the scheduling problem. To build Decima, we had to develop new representations
for jobs' dependency graphs, design scalable RL models, and invent RL training
methods for dealing with continuous stochastic job arrivals. Our prototype
integration with Spark on a 25-node cluster shows that Decima improves the
average job completion time over hand-tuned scheduling heuristics by at least
21%, achieving up to a 2x improvement during periods of high cluster load.
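The kind of learned scheduling policy Decima trains can be sketched as follows; this is an illustrative stand-in, not Decima's actual architecture. A score function, which in Decima is a neural network over the jobs' dependency graphs, rates each runnable stage, and the policy samples a stage from the softmax over the scores; `score_fn` is a hypothetical placeholder for that trained model.

```python
import math
import random

def schedule_step(runnable_stages, score_fn, rng=None):
    """Sample the next stage to run from a softmax over learned scores
    (illustrative sketch of an RL scheduling policy, not Decima itself)."""
    rng = rng or random.Random(0)
    scores = [score_fn(s) for s in runnable_stages]
    m = max(scores)                        # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scores]
    r = rng.random() * sum(weights)
    acc = 0.0
    for stage, w in zip(runnable_stages, weights):
        acc += w
        if r <= acc:
            return stage
    return runnable_stages[-1]
```

During RL training, the sampled choices would be reinforced against an objective such as average job completion time, which is the high-level signal the abstract mentions.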
Towards Scalable, Private and Practical Deep Learning
Deep Learning (DL) models have drastically improved the performance of Artificial Intelligence (AI) tasks such as image recognition, word prediction, and translation, among many others, on which traditional Machine Learning (ML) models fall short. However, DL models are costly to design, train, and deploy due to their computing and memory demands. Designing DL models usually requires extensive expertise and significant manual tuning effort. Even with the latest accelerators such as the Graphics Processing Unit (GPU) and Tensor Processing Unit (TPU), training DL models can take a prohibitively long time; training large DL models in a distributed manner is therefore the norm. Massive amounts of data are made available thanks to the prevalence of mobile and internet-of-things (IoT) devices. However, regulations such as HIPAA and GDPR limit the access and transmission of personal data to protect security and privacy. Therefore, enabling DL model training in a decentralized but private fashion is urgent and critical. Deploying trained DL models in a real-world environment usually requires meeting Quality of Service (QoS) standards, which makes adaptability of DL models an important yet challenging matter. In this dissertation, we aim to address the above challenges to make a step towards scalable, private, and practical deep learning. To simplify DL model design, we propose Efficient Progressive Neural-Architecture Search (EPNAS) and FedCust to automatically design model architectures and tune hyperparameters, respectively. To provide efficient and robust distributed training while preserving privacy, we design LEASGD, TiFL, and HDFL. We further conduct a study on the security aspects of distributed learning, focusing on how data heterogeneity affects backdoor attacks and how to mitigate such threats. Finally, we use super resolution (SR) as an example application to explore model adaptability for cross-platform deployment and dynamic runtime environments.
Specifically, we propose the DySR and AdaSR frameworks, which enable SR models to meet QoS by dynamically adapting to available resources instantly and seamlessly without excessive memory overheads.