    Tropical Cyclone Intensity Estimation Using Multi-Dimensional Convolutional Neural Networks from Geostationary Satellite Data

    For a long time, researchers have sought a way to estimate tropical cyclone (TC) intensity in real time. Because there is no standardized method for estimating TC intensity, and the most widely used method is a manual algorithm based on satellite cloud imagery, estimates carry a bias that varies with the TC center and shape. In this study, we adopted convolutional neural networks (CNNs), a state-of-the-art approach to analyzing image patterns, to estimate TC intensity by mimicking human cloud-pattern recognition. Both two-dimensional CNNs (2D-CNN) and three-dimensional CNNs (3D-CNN) were used to analyze the relationship between multi-spectral geostationary satellite images and TC intensity. Our best-optimized model produced a root mean squared error (RMSE) of 8.32 kts, performing roughly 35% better than the existing CNN-based model that uses a single-channel image. Moreover, we analyzed the characteristics of multi-spectral satellite TC images as a function of intensity using heat maps, one of the visualization techniques for CNNs. The heat maps show that the stronger the TC, the greater the influence of the TC center in the lower atmosphere. This is consistent with results from the existing TC initialization method based on numerical simulations with dynamical TC models. Our study suggests that a deep learning approach can be used to interpret the behavioral characteristics of TCs.
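
    As a concrete illustration of the kind of model this abstract describes, the sketch below implements a small 2D-CNN that regresses a scalar intensity from multi-channel satellite imagery. The channel count, image size, and layer widths are assumptions for illustration only; the paper's actual architecture is not reproduced here.

    ```python
    # Minimal 2D-CNN intensity regressor (PyTorch). Shapes and widths are
    # illustrative assumptions, not the architecture from the paper.
    import torch
    import torch.nn as nn

    class TCIntensityCNN(nn.Module):
        """Regress TC intensity (kts) from multi-spectral satellite images."""
        def __init__(self, in_channels: int = 4):  # e.g. 4 spectral bands (assumed)
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),   # global average pooling
            )
            self.head = nn.Linear(64, 1)   # single scalar output: intensity in kts

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(x).flatten(1))

    model = TCIntensityCNN()
    images = torch.randn(8, 4, 128, 128)   # a batch of multi-spectral images
    mse = nn.MSELoss()(model(images).squeeze(1), torch.randn(8))
    rmse = mse.sqrt()                      # RMSE, the paper's reported metric
    ```

    A 3D-CNN variant would replace Conv2d/MaxPool2d with Conv3d/MaxPool3d to treat the spectral (or temporal) dimension as a third axis.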

    Deep Learning as a Parton Shower

    We make the connection between certain deep learning architectures and the renormalisation group explicit in the context of QCD by using a deep learning network to construct a toy parton shower model. The model aims to describe proton-proton collisions at the Large Hadron Collider. A convolutional autoencoder learns a set of kernels that efficiently encode the behaviour of fully showered QCD collision events. The network is structured recursively so as to ensure self-similarity, and the number of trained network parameters is low. Randomness is introduced via a novel custom masking layer, which also preserves existing parton splittings by using layer-skipping connections. By applying a shower merging procedure, the network can be evaluated on unshowered events produced by a matrix element calculation. The trained network behaves as a parton shower that qualitatively reproduces jet-based observables. Comment: 26 pages, 13 figures
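
    The two architectural ingredients highlighted above, a convolutional autoencoder and a stochastic masking layer whose skip connection preserves existing structure, can be sketched as follows. This is a loose illustration of the idea only; the recursive structure and shower merging of the actual model are not reproduced.

    ```python
    # Sketch of a convolutional autoencoder with a random masking layer and a
    # layer-skipping connection (PyTorch). Purely illustrative, not the paper's model.
    import torch
    import torch.nn as nn

    class MaskedBlock(nn.Module):
        """Randomly gate new activations; the skip connection keeps the input intact."""
        def __init__(self, channels: int, keep_prob: float = 0.8):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels, 3, padding=1)
            self.keep_prob = keep_prob

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            mask = (torch.rand_like(x) < self.keep_prob).float()  # random mask
            return x + mask * torch.relu(self.conv(x))            # skip preserves x

    class ShowerAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encode = nn.Sequential(
                nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU())
            self.masked = MaskedBlock(8)
            self.decode = nn.Sequential(
                nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), nn.Sigmoid())

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.decode(self.masked(self.encode(x)))

    out = ShowerAutoencoder()(torch.rand(2, 1, 32, 32))  # output keeps input shape
    ```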

    One-Class Classification: Taxonomy of Study and Review of Techniques

    One-class classification (OCC) algorithms aim to build classification models when the negative class is absent, poorly sampled, or not well defined. This unique situation constrains the learning of efficient classifiers, since the class boundary must be defined with knowledge of the positive class alone. The OCC problem has been considered and applied under many research themes, such as outlier/novelty detection and concept learning. In this paper, we present a unified view of the general OCC problem through a taxonomy based on the availability of training data, the algorithms used, and the application domains. We further delve into each category of the proposed taxonomy and present a comprehensive literature review of OCC algorithms, techniques and methodologies, with a focus on their significance, limitations and applications. We conclude by discussing some open research problems in the field of OCC and presenting our vision for future research. Comment: 24 pages + 11 pages of references, 8 figures
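
    A minimal, runnable example of one widely used OCC technique covered by surveys of this kind is the one-class SVM, which learns a boundary around the positive class alone. The data here are synthetic and purely illustrative.

    ```python
    # One-class SVM trained only on positive-class samples (scikit-learn).
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    positives = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # only training data
    novelties = rng.normal(loc=5.0, scale=1.0, size=(10, 2))   # unseen at training

    clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(positives)
    print(clf.predict(positives[:5]))  # mostly +1: inside the learned boundary
    print(clf.predict(novelties))      # mostly -1: flagged as novel/outlying
    ```

    The `nu` parameter upper-bounds the fraction of training points treated as outliers, which is how the method copes with an absent negative class.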

    An Efficient Pseudospectral Method for the Computation of the Self-force on a Charged Particle: Circular Geodesics around a Schwarzschild Black Hole

    The description of the inspiral of a stellar-mass compact object into a massive black hole sitting at a galactic centre is a problem of major relevance for the future space-based gravitational-wave observatory LISA (Laser Interferometer Space Antenna), as the signals from these systems will be buried in the data stream and accurate gravitational-wave templates will be needed to extract them. The main difficulty in describing these systems lies in estimating the gravitational effects of the stellar-mass compact object on its own trajectory around the massive black hole, which can be modeled as the action of a local force, the self-force. In this paper, we present a new time-domain numerical method for the computation of the self-force in a simplified model consisting of a charged scalar particle orbiting a nonrotating black hole. We use a multi-domain framework in which the particle is located at the interface between two domains, so that the presence of the particle and its physical effects appear only through appropriate boundary conditions. In this way we completely eliminate the small length scale associated with the need to resolve the particle. This technique also avoids the problems associated with the impact of the solution's low differentiability on the accuracy of the numerical computations. The spatial discretization of the field equations is done using the pseudospectral collocation method, and the time evolution, based on the method of lines, uses a Runge-Kutta solver. We show how this special framework can provide very efficient and accurate computations in the time domain, which makes the technique amenable to the intensive computations required in the astrophysically relevant scenarios for LISA. Comment: 15 pages, 9 figures, Revtex 4. Minor changes to match published version
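
    The numerical backbone described above, pseudospectral (Chebyshev) collocation in space plus Runge-Kutta time stepping via the method of lines, can be illustrated on a toy 1D advection equation. This single-domain sketch omits the paper's multi-domain setup and particle boundary conditions; the differentiation matrix follows the standard construction (e.g. Trefethen, "Spectral Methods in MATLAB").

    ```python
    # Method of lines for u_t + u_x = 0 with Chebyshev pseudospectral collocation
    # in space and classical RK4 in time. A single-domain toy, not the paper's scheme.
    import numpy as np

    def cheb(N):
        """Chebyshev differentiation matrix D and points x on [-1, 1]."""
        x = np.cos(np.pi * np.arange(N + 1) / N)          # x[0] = 1, x[N] = -1
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        dX = x[:, None] - x[None, :]
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))                       # rows sum to zero
        return D, x

    N = 32
    D, x = cheb(N)
    u = np.exp(-10.0 * (x - 0.3) ** 2)                    # initial Gaussian pulse

    def rhs(u):
        du = -D @ u
        du[-1] = 0.0       # hold the inflow boundary (x = -1) fixed
        return du

    dt = 2.0 / N**2        # small step: Chebyshev points cluster near boundaries
    for _ in range(2000):  # classical RK4 time integration
        k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
        u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    ```

    In the paper's multi-domain setting, one such grid per domain would be coupled through interface conditions at the particle's location instead of the simple boundary pinning used here.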

    High-temporal resolution fluvial sediment source fingerprinting with uncertainty: a Bayesian approach

    This contribution addresses two developing areas of sediment fingerprinting research: first, how to improve the temporal resolution of source apportionment estimates whilst minimizing analytical costs and, second, how to consistently quantify all perceived uncertainties associated with the sediment mixing model procedure. The first matter is tackled by using direct X-ray fluorescence spectroscopy (XRFS) and diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) analyses of suspended particulate matter (SPM)-covered filter papers in conjunction with automatic water samplers. This method enables SPM geochemistry to be quickly, accurately, inexpensively and non-destructively monitored at high temporal resolution throughout the progression of numerous precipitation events. We then employed a Bayesian mixing model procedure to provide full characterization of spatial geochemical variability, instrument precision and residual error, yielding a realistic and coherent assessment of the uncertainties associated with source apportionment estimates. Applying these methods to SPM data from the River Wensum catchment, UK, we have been able to apportion, with uncertainty, sediment contributions from eroding arable topsoils, damaged road verges, and combined subsurface channel bank and agricultural field drain sources at 60- and 120-minute resolution for the duration of five precipitation events. The results presented here demonstrate how combining Bayesian mixing models with the direct spectroscopic analysis of SPM-covered filter papers can produce high-temporal-resolution source apportionment estimates that can assist with the appropriate targeting of sediment pollution mitigation measures at the catchment level.
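
    To make the mixing model idea concrete, the sketch below apportions a two-tracer mixture among three sources under a flat Dirichlet prior, using simple importance sampling. All tracer signatures, the measured mixture, and the pooled uncertainty are invented numbers for illustration; the paper's actual Bayesian procedure is richer than this.

    ```python
    # Toy Bayesian source-apportionment mixing model via importance sampling (NumPy).
    # Source signatures and uncertainties are illustrative, not data from the paper.
    import numpy as np

    rng = np.random.default_rng(1)
    # Mean signatures of two geochemical tracers for three sources:
    # arable topsoil, road verge, and subsurface (channel bank / field drain).
    sources = np.array([[12.0, 3.0],
                        [ 8.0, 9.0],
                        [ 2.0, 5.0]])
    mixture = np.array([7.5, 5.5])  # tracer values measured on an SPM filter paper
    sigma = 0.8                     # pooled source/instrument/residual sd (assumed)

    # Draw candidate proportions from a flat Dirichlet prior and weight each
    # draw by the Gaussian likelihood of reproducing the observed mixture.
    props = rng.dirichlet(np.ones(3), size=200_000)
    log_w = -0.5 * np.sum(((props @ sources - mixture) / sigma) ** 2, axis=1)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    post_mean = w @ props                                   # posterior mean mix
    resampled = props[rng.choice(len(props), size=5000, p=w)]
    lo, hi = np.percentile(resampled, [2.5, 97.5], axis=0)  # 95% credible bounds
    print("posterior mean:", post_mean.round(2))
    print("95% intervals:", np.c_[lo, hi].round(2))
    ```

    Each source's posterior interval directly expresses the apportionment uncertainty, which is the property the abstract emphasizes.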