Irrationality is needed to compute with signal machines with only three speeds
Space-time diagrams of signal machines on finite configurations are composed of interconnected line segments in the Euclidean plane. As the system runs, a network emerges. If segments extend in only one or two directions, the dynamics are finite and simplistic. With four directions, it is known that fractal generation, accumulation, and any Turing computation are possible. This communication deals with the three-direction/three-speed case. If there is no irrational ratio (between initial distances between signals or between speeds), then the network follows a mesh, preventing accumulation and forcing cyclic behavior. With an irrational ratio (here, the golden ratio) between initial distances, it becomes possible to provoke an accumulation that generates infinitely many interacting signals in a bounded portion of the Euclidean plane. This behavior is then controlled and used to simulate a Turing machine, yielding a 25-state, 3-speed Turing-universal signal machine.
Abstract geometrical computation 7: geometrical accumulations and computably enumerable real numbers
Critical Market Crashes
This review is a partial synthesis of the book ``Why Stock Markets Crash''
(Princeton University Press, January 2003), which presents a general theory of
financial crashes and of stock market instabilities that the author and his
co-workers have developed over the past seven years. The study of the frequency
author have developed over the past seven years. The study of the frequency
distribution of drawdowns, or runs of successive losses, shows that large
financial crashes are ``outliers'': they form a class of their own as can be
seen from their statistical signatures. If large financial crashes are
``outliers'', they are special and thus require a special explanation, a
specific model, a theory of their own. In addition, their special properties
may perhaps be used for their prediction. The main mechanisms leading to
positive feedbacks, i.e., self-reinforcement, such as imitative behavior and
herding among investors, are reviewed, with many references provided to the
relevant literature outside the confines of physics. Positive feedbacks provide
the fuel for the development of speculative bubbles, preparing the instability
for a major crash. We present several detailed mathematical models of
speculative bubbles and crashes. The most important message is the discovery of
robust and universal signatures of the approach to crashes. These precursory
patterns have been documented for essentially all crashes on developed as well
as emerging stock markets, on currency markets, on company stocks, and so on.
The concept of an ``anti-bubble'' is also summarized, with two forward
predictions: one for the Japanese stock market starting in 1999 and one for the
US stock market, still running. We conclude by presenting our view of the
organization of financial markets.
Comment: LaTeX, 89 pages and 38 figures; in press in Physics Reports
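The abstract's central statistical object, the drawdown, is operationally simple: a run of successive losses, measured peak to trough. As a minimal illustration (our own sketch, not code from the book), the runs can be extracted from a price series like this:

```python
# Hypothetical sketch (not from the book): extract drawdowns, i.e.
# runs of successive losses, from a price series, measuring each run
# as the total peak-to-trough decline.
def drawdowns(prices):
    """Return the magnitude of each run of successive losses."""
    runs = []
    start = None                                # index of the local peak
    for i in range(1, len(prices)):
        if prices[i] < prices[i - 1]:           # a losing step
            if start is None:
                start = i - 1                   # run begins at the peak
        elif start is not None:                 # run has just ended
            runs.append(prices[start] - prices[i - 1])
            start = None
    if start is not None:                       # series ends mid-run
        runs.append(prices[start] - prices[-1])
    return runs
```

The "outlier" claim in the abstract concerns the tail of the frequency distribution of exactly these run magnitudes: the largest drawdowns deviate from the distribution followed by the bulk.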
Scheduling, Characterization and Prediction of HPC Workloads for Distributed Computing Environments
As High Performance Computing (HPC) has grown considerably and is expected to grow even more, effective resource management for distributed computing systems is more strongly motivated than ever. As computational workloads grow in quantity, it becomes more crucial to apply efficient resource management and workload scheduling to use resources efficiently while keeping computational performance reasonably good. The problem of efficiently scheduling workloads on resources while meeting performance standards is hard. Additionally, non-clairvoyance of job dimensions makes resource management even harder in real-world scenarios. Our research methodology investigates the scheduling problem as it arises in HPC and addresses the challenges of deploying such scheduling in real-world scenarios using state-of-the-art machine learning and data science techniques. To this end, this Ph.D. dissertation makes the following core contributions: a) We perform a theoretical analysis of space-sharing, non-preemptive scheduling: we studied this scheduling problem and proposed scheduling algorithms with polynomial computation time. We also proved constant upper bounds for the performance of these algorithms. b) We studied the sensitivity of scheduling algorithms to the accuracy of runtime estimates and devised a meta-learning approach to estimate prediction accuracy for newly submitted jobs to the HPC system. c) We studied the runtime prediction problem for HPC applications. For this purpose, we studied the distribution of available public workloads and proposed two different solutions that can predict multi-modal distributions: switching state-space models and mixture density networks. d) We studied the effectiveness of recent recurrent neural network models for CPU usage trace prediction, for individual VM traces as well as aggregate CPU usage traces.
In this dissertation, we explore solutions to improve the performance of scheduling workloads on distributed systems. We begin by looking at the problem from the theoretical perspective. Modeling the problem mathematically, we first propose a scheduling algorithm that finds a constant approximation of the optimal solution in polynomial time. We prove that the performance of the algorithm (average completion time) is a constant-factor approximation of the performance of optimal scheduling. We next look at the problem in real-world scenarios. Considering High-Performance Computing (HPC) workload environments as the closest real-world equivalent of our mathematical model, we explore the problem of predicting application runtime. We propose an algorithm to handle the uncertainties that exist in the real world and demonstrate its effectiveness in terms of response time and resource utilization. After looking at the uncertainty problem, we focus on improving the accuracy of existing prediction approaches for HPC application runtime. We propose two solutions, one based on Kalman filters and one based on mixture density networks. We showcase the effectiveness of our prediction approaches by comparing them against previous approaches in terms of prediction accuracy and impact on scheduling performance. In the end, we focus on predicting resource usage for individual applications during their execution. We explore the application of recurrent neural networks to predicting resource usage of applications deployed on individual virtual machines. To validate our proposed models and solutions, we performed extensive trace-driven simulation and measured the effectiveness of our approaches.
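To make the space-sharing, non-preemptive setting concrete, the following is a minimal greedy list-scheduling sketch of our own (names, structure, and the first-come-first-served policy are our assumptions, not the dissertation's algorithm): rigid jobs each require a fixed number of processors for a fixed runtime, and each job starts at the earliest time enough processors are free.

```python
import heapq

# Hypothetical sketch (not the dissertation's algorithm): greedy,
# non-preemptive FCFS scheduling of rigid jobs on m processors.
# Each job is (width, runtime): it occupies `width` processors for
# `runtime` time units, with no preemption.
def schedule(jobs, m):
    """Return a list of (job_index, start_time) in submission order."""
    free = m
    running = []                       # min-heap of (finish_time, width)
    now = 0
    starts = []
    for idx, (width, runtime) in enumerate(jobs):
        # Wait for running jobs to finish until enough processors free up.
        while free < width:
            finish, w = heapq.heappop(running)
            now = max(now, finish)
            free += w
        starts.append((idx, now))
        heapq.heappush(running, (now + runtime, width))
        free -= width
    return starts
```

With runtimes known in advance this runs in O(n log n) time; the non-clairvoyance discussed above is precisely what makes the `runtime` field unavailable at submission and motivates the prediction work.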
25 Years of Self-Organized Criticality: Solar and Astrophysics
Shortly after the seminal paper {\sl "Self-Organized Criticality: An
explanation of 1/f noise"} by Bak, Tang, and Wiesenfeld (1987), the idea has
been applied to solar physics, in {\sl "Avalanches and the Distribution of
Solar Flares"} by Lu and Hamilton (1991). In the following years, an inspiring
cross-fertilization from complexity theory to solar and astrophysics took
place, where the SOC concept was initially applied to solar flares, stellar
flares, and magnetospheric substorms, and later extended to the radiation belt,
the heliosphere, lunar craters, the asteroid belt, the Saturn ring, pulsar
glitches, soft X-ray repeaters, blazars, black-hole objects, cosmic rays, and
boson clouds. The application of SOC concepts has been performed by numerical
cellular automaton simulations, by analytical calculations of statistical
(power-law-like) distributions based on physical scaling laws, and by
observational tests of theoretically predicted size distributions and waiting
time distributions. Attempts have been undertaken to import physical models
into the numerical SOC toy models, such as the discretization of
magnetohydrodynamic (MHD) processes. The novel applications also stimulated
vigorous debates about the discrimination between SOC models, SOC-like, and
non-SOC processes, such as phase transitions, turbulence, random-walk
diffusion, percolation, branching processes, network theory, chaos theory,
fractality, multi-scale, and other complexity phenomena. We review SOC studies
from the last 25 years and highlight new trends, open questions, and future
challenges, as discussed during two recent ISSI workshops on this theme.
Comment: 139 pages, 28 figures; review based on the ISSI workshops
"Self-Organized Criticality and Turbulence" (2012, 2013, Bern, Switzerland)
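The cellular automaton simulations mentioned above trace back to the Bak-Tang-Wiesenfeld sandpile model cited at the start of the abstract. As a minimal sketch (a standard textbook formulation, not any specific code from the reviewed studies): grains are dropped on a grid, a cell topples when it holds 4 grains, sending one grain to each neighbor, and the avalanche size is the number of topplings triggered by a single drop.

```python
# Minimal Bak-Tang-Wiesenfeld sandpile sketch: cells topple at 4
# grains, one grain goes to each of the four neighbors, and grains
# dropped over the edge of the grid are lost (open boundaries).
def add_grain(grid, r, c):
    """Drop one grain at (r, c); return the avalanche size
    (total number of topplings)."""
    n = len(grid)
    grid[r][c] += 1
    size = 0
    unstable = [(r, c)] if grid[r][c] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:             # may have been processed already
            continue
        grid[i][j] -= 4
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return size
```

Driven slowly (one grain at a time), the grid self-organizes to a critical state in which the avalanche sizes follow the power-law-like distributions that the observational SOC tests described above compare against.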
Machine learning in solar physics
The application of machine learning in solar physics has the potential to
greatly enhance our understanding of the complex processes that take place in
the atmosphere of the Sun. By using techniques such as deep learning, we are
now in the position to analyze large amounts of data from solar observations
and identify patterns and trends that may not have been apparent using
traditional methods. This can help us improve our understanding of explosive
events such as solar flares, which can strongly affect the Earth's environment;
predicting such hazardous events is crucial for our technological society.
Machine learning can also improve our understanding of the inner workings of
the Sun itself by allowing us to go deeper into the data
and to propose more complex models to explain them. Additionally, the use of
machine learning can help to automate the analysis of solar data, reducing the
need for manual labor and increasing the efficiency of research in this field.
Comment: 100 pages, 13 figures, 286 references; accepted for publication as a
Living Review in Solar Physics (LRSP)
Pattern Recognition
A wealth of advanced pattern recognition algorithms is emerging from the interdisciplinary area between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible by rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while the understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology, and applications of pattern recognition.