
    Advances in Modelling of Rainfall Fields

    Rainfall is the main input for hydrological models, from rainfall–runoff models to the forecasting of landslides triggered by precipitation, and understanding it is clearly essential for effective water resource management as well. Improving the modelling of rainfall fields is therefore key both to building efficient early warning systems and to analysing future scenarios for the occurrence and magnitude of all rainfall-induced phenomena. The aim of this Special Issue was hence to provide a collection of innovative contributions to rainfall modelling, focusing on hydrological scales in a context of climate change. We believe that the latest research outcomes presented in this Special Issue can shed novel insight on the hydrological cycle and all the phenomena that are a direct consequence of rainfall. Moreover, the proposed papers constitute a valid base of knowledge for improving specific key aspects of rainfall modelling, chiefly how climate change modifies properties such as the magnitude, frequency, duration, and spatial extent of different types of rainfall fields. A further goal is to provide practitioners with useful tools for quantifying important design metrics in transient hydrological contexts (quantiles of assigned frequency, hazard functions, intensity–duration–frequency curves, etc.).
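    As a concrete illustration of the design metrics mentioned above, intensity–duration–frequency relations are often summarized by empirical power-law formulas. The sketch below uses the common form i = a·T^m / (d + b)^n; the coefficients a, m, b, n are purely illustrative placeholders, not fitted values from any study in this Issue.

```python
def idf_intensity(duration_min, return_period_yr,
                  a=30.0, m=0.25, b=10.0, n=0.75):
    """Empirical IDF intensity in mm/h.

    duration_min     -- rainfall duration d (minutes)
    return_period_yr -- return period T (years)
    a, m, b, n       -- hypothetical regional coefficients (illustrative only)
    """
    return a * return_period_yr**m / (duration_min + b)**n

# Intensity decreases with duration and increases with return period:
i_short = idf_intensity(15, 50)    # short, rare storm
i_long = idf_intensity(120, 50)    # long storm, same return period
```

Fitting such a curve to annual-maximum rainfall series is one common way practitioners derive the design quantiles the editorial refers to.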

    Hybrid hidden Markov LSTM for short-term traffic flow prediction

    Deep learning (DL) methods have outperformed parametric models such as the historical average, ARIMA, and its variants in predicting traffic variables over short and near-short horizons, which are critical for traffic management. Specifically, recurrent neural networks (RNNs) and their variants (e.g. long short-term memory, LSTM) are designed to retain long-term temporal correlations and are therefore suitable for modeling sequences. Multi-regime models, by contrast, assume that the traffic system evolves through multiple states with distinct characteristics (say, free flow and congestion), and separate models are hence trained to characterize the traffic dynamics within each regime. For instance, a Markov-switching model with a hidden Markov model (HMM) for regime identification is capable of capturing complex dynamic patterns and non-stationarity. Interestingly, both HMMs and LSTMs model an observation sequence through a set of latent, or hidden, state variables. In an LSTM, the latent variable is computed deterministically from the current observation and the previous latent variable, whereas in an HMM the latent variables form a Markov chain. Inspired by research in natural language processing, a hybrid hidden Markov-LSTM model capable of learning complementary features in traffic data is proposed for traffic flow prediction. Results indicate significant performance gains of the hybrid architecture over conventional methods such as Markov-switching ARIMA and LSTM.
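    The structural contrast between the two latent-variable models can be sketched in a few lines: the LSTM hidden state is a deterministic function of the previous state and the current observation, while the HMM state is sampled from a transition matrix. This is an illustrative toy (a single-unit, single-gate "LSTM" and a two-regime chain with made-up probabilities), not the paper's hybrid architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# HMM: the latent state is a Markov chain -- the next state is *sampled*
# from a transition matrix (two hypothetical regimes: 0 = free flow,
# 1 = congestion; probabilities are made up for illustration).
A = np.array([[0.9, 0.1],
              [0.3, 0.7]])

def hmm_step(state):
    """Stochastic latent update: sample the next regime given the current one."""
    return rng.choice(2, p=A[state])

# "LSTM" (reduced to one unit with no gates, for contrast only): the latent
# state is a *deterministic* function of the observation and previous state.
def lstm_step(h_prev, x, W=0.5, U=0.5, b=0.0):
    return np.tanh(W * x + U * h_prev + b)

h, s = 0.0, 0
for x in [1.0, 0.8, 0.2]:       # a short toy observation sequence
    h = lstm_step(h, x)         # same inputs always give the same h
    s = hmm_step(s)             # s is a random draw from regime s's row of A
```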

    The Skellam Distribution revisited: estimating the unobserved incoming and outgoing ICU COVID-19 patients on a regional level in Germany

    With the beginning of the COVID-19 pandemic, we became aware of the need for comprehensive data collection and its provision to scientists and experts for proper data analyses. In Germany, the Robert Koch Institute (RKI) has tried to keep up with this demand for data on COVID-19, but relevant data that are needed to understand the whole picture of the pandemic were (and still are) missing. In this paper, we take a closer look at the severity of the course of COVID-19 in Germany, for which the ideal information would be the number of patients admitted to ICUs. This information was (and still is) not available. Instead, the current occupancy of ICUs at the district level was reported daily. We demonstrate how this information can be used to predict the numbers of incoming as well as released COVID-19 patients using a stochastic version of the Expectation-Maximisation algorithm (SEM). This, in turn, allows for estimating the influence of district-specific and age-specific infection rates, as well as further covariates including spatial effects, on the number of incoming patients. The paper demonstrates that even if relevant data are not recorded or provided officially, statistical modelling allows for reconstructing them. This includes the quantification of uncertainty, which naturally results from the application of the SEM algorithm.
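    The identity underlying the paper's title can be illustrated with a toy simulation: if daily admissions and discharges are Poisson, the observable net change in occupancy follows a Skellam distribution, and its first two moments identify both unobserved rates. The sketch below uses plain moment matching rather than the paper's SEM algorithm, and all rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily rates (patients/day). Only the net occupancy change
# D = incoming - outgoing is observed; D ~ Skellam(lam_in, lam_out) when
# both counts are Poisson.
lam_in, lam_out = 6.0, 4.0
days = 100_000

incoming = rng.poisson(lam_in, days)   # unobserved admissions
outgoing = rng.poisson(lam_out, days)  # unobserved discharges
net_change = incoming - outgoing       # observed occupancy change

# Skellam moment identities recover the two rates:
#   E[D] = lam_in - lam_out,   Var[D] = lam_in + lam_out
mean_d, var_d = net_change.mean(), net_change.var()
lam_in_hat = (var_d + mean_d) / 2
lam_out_hat = (var_d - mean_d) / 2
```

The SEM approach of the paper goes further, attaching covariates and spatial effects to these rates, but the moment sketch shows why the occupancy differences alone carry information about both flows.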

    Fractional Calculus and the Future of Science

    Newton foresaw the limitations of geometry’s description of planetary behavior and developed fluxions (differentials) as the new language for celestial mechanics and as the way to implement his laws of mechanics. Two hundred years later, Mandelbrot introduced the notion of fractals into the scientific lexicon of geometry, dynamics, and statistics, and in so doing suggested ways to see beyond the limitations of Newton’s laws. Mandelbrot’s mathematical essays suggest how fractals may lead to the understanding of turbulence and viscoelasticity, and ultimately to the end of the dominance of Newton’s macroscopic world view. Fractional Calculus and the Future of Science examines the nexus of these two game-changing contributions to our scientific understanding of the world. It addresses how non-integer-order differential equations replace Newton’s laws to describe the many guises of complexity, most of which lie beyond Newton’s experience and many of which had eluded even Mandelbrot’s powerful intuition. The book’s authors look behind the mathematics and examine what must be true about a phenomenon’s behavior to justify the replacement of an integer-order derivative with a noninteger-order (fractional) one. This window into the future of specific scientific disciplines through the lens of the fractional calculus suggests how what is seen entails a difference in scientific thinking and understanding.
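    The replacement of integer-order derivatives that the book examines can be made concrete with a standard definition; one common choice (given here as an illustrative example, not necessarily the operator the book favors) is the Caputo fractional derivative of order 0 < α < 1:

```latex
{}^{C}\!D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{f'(\tau)}{(t-\tau)^{\alpha}}\,\mathrm{d}\tau , \qquad 0 < \alpha < 1,
```

    which recovers the ordinary first derivative as α → 1 and, through its power-law kernel, builds long-term memory into the dynamics, precisely the history-dependent behavior that integer-order laws cannot capture.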

    Digital Forensics AI: on Practicality, Optimality, and Interpretability of Digital Evidence Mining Techniques

    Digital forensics as a field has progressed alongside technological advancements over the years, just as digital devices have grown more robust and sophisticated. However, criminals and attackers have devised means of exploiting the vulnerabilities or sophistication of these devices to carry out malicious activities in unprecedented ways, believing that electronic crimes can be committed without identities being revealed or trails being established. Several applications of artificial intelligence (AI) have demonstrated interesting and promising solutions to seemingly intractable societal challenges. This thesis aims to advance the application of AI techniques in digital forensic investigation. Our approach involves experimenting with a complex case scenario in which suspects corresponded by e-mail and suspiciously deleted certain communications, presumably to conceal evidence. The purpose is to demonstrate the efficacy of Artificial Neural Networks (ANNs) in learning and detecting communication patterns over time, and then predicting the possibility of missing communication(s) along with potential topics of discussion. To do this, we developed a novel approach and included other existing models. The accuracy of our results is evaluated, and their performance on previously unseen data is measured. Second, we propose the term “Digital Forensics AI” (DFAI) to formalize the application of AI in digital forensics, highlighting the instruments that facilitate the best evidential outcomes and the presentation mechanisms that are adaptable to the probabilistic output of AI models. Finally, we support the application of AI in digital forensics by recommending methodologies and approaches for bridging trust gaps through the development of interpretable models that facilitate the admissibility of digital evidence in legal proceedings.

    Adaptive Imaging with a Cylindrical, Time-Encoded Imaging System

    Most imaging systems for terrestrial nuclear imaging are static in the sense that the design of the system and the data-acquisition protocol are defined prior to the experiment. Often, these systems are designed for general use and not optimized for any specific task. The core concept of adaptive imaging is to modify the imaging system during a measurement based on the data collected so far, enabling scenario-specific adaptation that leads to better performance on a given task. This dissertation presents the first adaptive, cylindrical, time-encoded imaging (c-TEI) system and evaluates its performance on tasks relevant to nuclear non-proliferation and international safeguards. We explore two methods of adaptation of a c-TEI system, adaptive detector movements and adaptive mask movements, and apply them to three tasks: improving angular resolution, detecting a weak source in the vicinity of a strong source, and reconstructing complex source scenes. The results indicate that adaptive imaging significantly improves performance in each case. For the MATADOR imager, we find that adaptive detector movements improve the angular resolution of a point source by 20% and the angular resolution of two point sources by up to 50%. For the problem of detecting a weak source in the vicinity of a strong source, we find that adaptive mask movements achieve the same detection performance as a comparable non-adaptive system in 20%-40% less time, depending on the relative position of the weak source. Additionally, we developed an adaptive detection algorithm that doubles the probability of detecting the weak source at a 5% false-alarm rate. Finally, we applied adaptive imaging concepts to reconstruct complex arrangements of special nuclear material at Idaho National Laboratory. We find that combining data from multiple detector positions improves the image uniformity of extended sources by 38% and reduces the background noise by 50%.
    We also demonstrate 2D (azimuthal and radial) imaging in a crowded source scene. These promising experimental results highlight the potential for adaptive imaging using a c-TEI system and motivate further research toward specific, real-world applications. PhD dissertation, Nuclear Engineering & Radiological Sciences, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163009/1/nirpshah_1.pd

    Extreme-value statistics of stochastic transport processes

    We derive exact expressions for the finite-time statistics of extrema (maximum and minimum) of the spatial displacement and the fluctuating entropy flow of biased random walks. Our approach captures key features of extreme events in molecular-motor motion along linear filaments. For one-dimensional biased random walks, we derive exact results that tighten bounds for entropy-production extrema obtained with martingale theory and reveal a symmetry between the distributions of the maxima and minima of entropy production. Furthermore, we show that the relaxation spectrum of the full generating function, and hence of any moment, of the finite-time extrema distributions can be written in terms of the Marchenko-Pastur distribution of random-matrix theory. Using this result, we obtain efficient estimates for the extreme-value statistics of stochastic transport processes from the eigenvalue distributions of suitable Wishart and Laguerre random matrices. We confirm our results with numerical simulations of stochastic models of molecular motors.
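    The finite-time extrema studied here are easy to probe numerically. The following minimal sketch (direct simulation with made-up parameters, not the paper's exact expressions or random-matrix estimates) samples the running maximum of a one-dimensional biased random walk, the quantity whose distribution the paper characterizes analytically.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameters of a 1D biased random walk (+1 with prob. p,
# -1 otherwise), a minimal model of a molecular motor stepping on a filament.
p = 0.6          # bias toward positive displacement
n_steps = 200    # finite observation time
n_walks = 50_000 # number of sampled trajectories

steps = rng.choice([1, -1], size=(n_walks, n_steps), p=[p, 1 - p])
paths = steps.cumsum(axis=1)

# Finite-time maximum of each trajectory (the walk starts at 0, so the
# running maximum is never negative).
maxima = np.maximum(paths.max(axis=1), 0)
mean_max = maxima.mean()
```

Histogramming `maxima` gives a Monte Carlo estimate of the finite-time maximum distribution against which exact results of this kind can be checked.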