    Low-temperature thermochronology and thermokinematic modeling of deformation, exhumation, and development of topography in the central Southern Alps, New Zealand

    Apatite and zircon (U-Th)/He and fission track ages were obtained from ridge transects across the central Southern Alps, New Zealand. Interpretation of local profiles is difficult because relationships between ages and topography or local faults are complex and the data contain large uncertainties, with poor reproducibility between sample duplicates. The data do, however, form regional patterns consistent with theoretical systematics and corroborating previous observations: young Neogene ages occur immediately southeast of the Alpine Fault (the main plate boundary structure on which rocks are exhumed); partially reset ages occur in the central Southern Alps; and older Mesozoic ages occur further toward the southeast. Zircon apparent ages are older than apatite apparent ages for the equivalent method. Three-dimensional thermokinematic modeling of plate convergence incorporates advection of the upper Pacific plate along a low-angle detachment and then up an Alpine Fault ramp, adopting a generally accepted tectonic scenario for the Southern Alps. The modeling incorporates heat flow, evolving topography, and the detailed kinetics of different thermochronometric systems, and it explains both complex local variations and regional patterns. Inclusion of the effects of radiation damage on He diffusion in detrital apatite is shown to have dramatic effects on results. Geometric and velocity parameters are tuned to fit model ages to observed data. The best fit is achieved at 9 mm a⁻¹ plate convergence, with Pacific plate delamination on a gentle 10° SE-dipping detachment and more rapid uplift on a 45–60° dipping Alpine Fault ramp from 15 km depth. Thermokinematic modeling suggests that dip-slip motion on reverse faults within the Southern Alps should be highest ∼22 km from the Alpine Fault and much lower toward the southeast.
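
    The parameter tuning described above can be pictured as a search that minimizes the misfit between modeled and observed cooling ages. The Python sketch below assumes a placeholder forward model predict_ages(convergence, ramp_dip) standing in for the 3-D thermokinematic code; the function names and the chi-square misfit are illustrative assumptions, not the paper's implementation.

```python
# Illustrative parameter search for a thermokinematic model. The forward model
# predict_ages(convergence, ramp_dip) is a hypothetical placeholder returning one
# modeled age per sample; it is not part of the paper.
import itertools
import numpy as np

def misfit(model_ages, observed_ages, errors):
    """Chi-square misfit between modeled and observed thermochronometric ages."""
    return np.sum(((model_ages - observed_ages) / errors) ** 2)

def tune_parameters(predict_ages, observed_ages, errors,
                    convergence_rates, ramp_dips):
    """Grid search over plate-convergence rate and Alpine Fault ramp dip."""
    best = None
    for v, dip in itertools.product(convergence_rates, ramp_dips):
        m = misfit(predict_ages(v, dip), observed_ages, errors)
        if best is None or m < best[0]:
            best = (m, v, dip)
    return best  # (misfit, convergence rate, ramp dip)
```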

    Improved Reinforcement Learning with Curriculum

    Humans tend to learn complex abstract concepts faster if examples are presented in a structured manner. For instance, when learning how to play a board game, usually one of the first concepts learned is how the game ends, i.e. the actions that lead to a terminal state (win, lose, or draw). The advantage of learning end-games first is that once the actions which lead to a terminal state are understood, it becomes possible to incrementally learn the consequences of actions that are further away from a terminal state - we call this an end-game-first curriculum. Currently the state-of-the-art machine learning player for general board games, AlphaZero by Google DeepMind, does not employ a structured training curriculum; instead it learns from the entire game at all times. By employing an end-game-first training curriculum to train an AlphaZero-inspired player, we empirically show that the rate of learning of an artificial player can be improved during the early stages of training when compared to a player not using a training curriculum.
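
    As a rough illustration of an end-game-first curriculum, the Python sketch below starts self-play from positions close to a terminal state and gradually widens the allowed distance from the end of the game as training proceeds. The game and learner interfaces (random_position, self_play, train_on) are hypothetical placeholders, not AlphaZero's actual API.

```python
# Minimal sketch of an end-game-first curriculum for self-play training.
# Assumed interfaces (not from the paper): game.random_position(max_moves_to_end),
# learner.self_play(state), and learner.train_on(episodes).

def end_game_first_curriculum(game, learner, total_iters=1000,
                              start_horizon=1, max_horizon=50):
    """Widen the window of starting positions used for self-play over time,
    beginning with positions only a few moves from a terminal state."""
    for it in range(total_iters):
        # Linearly grow the allowed distance from a terminal state.
        horizon = start_horizon + int((max_horizon - start_horizon) * it / total_iters)
        # Sample a starting position at most `horizon` moves from the end of a game.
        start_state = game.random_position(max_moves_to_end=horizon)
        episode = learner.self_play(start_state)   # play out and record the episode
        learner.train_on([episode])                # standard policy/value update
    return learner
```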

    Flows and stochastic Taylor series in Ito calculus

    For stochastic systems driven by continuous semimartingales an explicit formula for the logarithm of the Ito flow map is given. A similar formula is also obtained for solutions of linear matrix-valued SDEs driven by arbitrary semimartingales. The computation relies on the lift to quasi-shuffle algebras of formulas involving products of Ito integrals of semimartingales. Whereas the Chen-Strichartz formula computing the logarithm of the Stratonovich flow map is classically expanded as a formal sum indexed by permutations, the analogous formula in Ito calculus is naturally indexed by surjections. This reflects the change of algebraic background involved in the transition between the two integration theories.
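
    The algebraic contrast mentioned above comes down to the product rule: Stratonovich integrals of continuous semimartingales multiply like ordinary integrals (a shuffle product), while the Ito product rule carries an extra quadratic covariation term, which is what brings quasi-shuffle algebras into play. The identities below are standard textbook formulas shown for illustration, not the paper's main result.

```latex
% Stratonovich product rule (shuffle-type), for continuous semimartingales
% with X_0 = Y_0 = 0:
\[ X_t Y_t = \int_0^t X_s \circ dY_s + \int_0^t Y_s \circ dX_s \]
% Ito product rule: the extra quadratic covariation term [X, Y]_t is what
% replaces the shuffle algebra by a quasi-shuffle algebra:
\[ X_t Y_t = \int_0^t X_{s^-}\, dY_s + \int_0^t Y_{s^-}\, dX_s + [X, Y]_t \]
```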

    An Empirical Examination of the Fisher Effect in Australia

    This paper analyzes the Fisher effect in Australia. Initial testing indicates that both interest rates and inflation contain unit roots. Furthermore, there are indications that the variables have non-standard error processes. To overcome problems associated with this and derive the correct small sample distributions of test statistics we make use of Monte Carlo simulations. These tests indicate that while a long-run Fisher effect seems to exist there is no evidence of a short-run Fisher effect. This suggests that, while short-run changes in interest rates reflect changes in monetary policy, longer-run levels indicate inflationary expectations. Thus, the longer-run level of interest rates should not be used to characterize the stance of monetary policy.
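
    As a sketch of how small-sample distributions of test statistics can be derived by Monte Carlo, the Python snippet below simulates critical values for a simple Dickey-Fuller t-statistic under the unit-root null. The test form, sample size, and replication count are illustrative assumptions rather than the paper's exact specification.

```python
# Illustrative Monte Carlo for small-sample critical values of a Dickey-Fuller
# t-statistic under the unit-root null (not the paper's exact setup).
import numpy as np

def df_tstat(y):
    """t-statistic for rho in: delta y_t = rho * y_{t-1} + e_t (no constant)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = resid @ resid / (len(dy) - 1)
    return rho / np.sqrt(s2 / (ylag @ ylag))

def simulate_critical_values(T=100, reps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(reps):
        y = np.cumsum(rng.standard_normal(T))   # random walk: unit-root null
        stats.append(df_tstat(y))
    return np.percentile(stats, [1, 5, 10])     # left-tail critical values

print(simulate_critical_values())
```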

    Detecting Overfitting of Deep Generative Networks via Latent Recovery

    State-of-the-art deep generative networks are capable of producing images with such incredible realism that they can be suspected of memorizing training images. This is why it is not uncommon to include visualizations of training-set nearest neighbors, to suggest that generated images are not simply memorized. We demonstrate that this is not sufficient and motivate the need to study memorization/overfitting of deep generators with more scrutiny. This paper addresses the question by (i) showing how simple losses are highly effective at reconstructing images for deep generators and (ii) analyzing the statistics of reconstruction errors when reconstructing training and validation images, which is the standard way to analyze overfitting in machine learning. Using this methodology, this paper shows that overfitting is not detectable in the pure GAN models proposed in the literature, in contrast with those using hybrid adversarial losses, which are amongst the most widely applied generative methods. The paper also shows that standard GAN evaluation metrics fail to capture memorization for some deep generators. Finally, the paper shows how off-the-shelf GAN generators can be successfully applied to face inpainting and face super-resolution using the proposed reconstruction method, without hybrid adversarial losses.
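
    A minimal sketch of latent recovery is shown below: a latent code is optimized by gradient descent so that the generator's output matches a target image, and the resulting reconstruction errors can then be compared between training and validation images. The generator interface, pixel-wise MSE loss, and optimizer settings are assumptions for illustration, not the paper's exact choices.

```python
# Sketch of latent recovery: optimize a latent code z so that G(z) matches a target.
# G is any pretrained generator mapping z -> image; loss and optimizer are illustrative.
import torch

def recover_latent(G, target, z_dim=512, steps=500, lr=0.05, device="cpu"):
    target = target.to(device)
    z = torch.randn(1, z_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = G(z)                                  # image for the current code
        loss = torch.mean((recon - target) ** 2)      # simple pixel-wise MSE
        loss.backward()
        opt.step()
    with torch.no_grad():
        final_err = torch.mean((G(z) - target) ** 2).item()
    return z.detach(), final_err   # recovered code and its reconstruction error
```

    Comparing the distribution of final_err over training images against validation images is the overfitting check described in the abstract: a generator that has memorized its training set reconstructs training images markedly better than held-out ones.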