
    Convective Weather Forecast Quality Metrics for Air Traffic Management Decision-Making

    Since numerical weather prediction models are unable to accurately forecast the severity and location of storm cells several hours into the future when compared with observation data, there has been growing interest in probabilistic descriptions of convective weather. The classical approach for generating uncertainty bounds consists of integrating the state equations and covariance propagation equations forward in time. This step is readily recognized as the process update step of the Kalman Filter algorithm. The second well-known method, the Monte Carlo method, consists of generating output samples by driving the forecast algorithm with input samples drawn from distributions. The statistical properties of the distributions of the output samples are then used to define the uncertainty bounds of the output variables. This method is computationally expensive for a complex model compared to the covariance propagation method; its main advantage is that a complex non-linear model can be easily handled. Recently, a few different methods for probabilistic forecasting have appeared in the literature. A method for computing the probability of convection in a region using forecast data is described in Ref. 5. Probability at a grid location is computed as the fraction of grid points, within a box of specified dimensions around the grid location, with forecast convective precipitation exceeding a specified threshold. The main limitation of this method is that the results depend on the chosen dimensions of the box. The examples presented in Ref. 5 show that this process is equivalent to low-pass filtering the forecast data with a finite-support spatial filter. References 6 and 7 describe a technique for computing percentage coverage within a 92 x 92 square-kilometer box and assigning the value to the center 4 x 4 square-kilometer box. This technique is the same as that described in Ref. 5.
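The coverage computation of Refs. 5 through 7 can be sketched as a box filter over a thresholded grid. This is not the papers' implementation; it is a minimal sketch assuming the forecast is a 2D NumPy array of precipitation values, with hypothetical parameter names `threshold` and `box`:

```python
import numpy as np

def coverage_probability(forecast, threshold, box):
    """Fraction of grid points, within a (box x box) window centered on each
    grid location, whose forecast precipitation exceeds `threshold`.
    Equivalent to low-pass filtering the thresholded field with a
    finite-support spatial (box) filter, as noted for Ref. 5."""
    exceed = (forecast >= threshold).astype(float)
    half = box // 2
    # Zero-pad so windows at the grid edges are well-defined.
    padded = np.pad(exceed, half, mode="constant")
    rows, cols = exceed.shape
    out = np.empty_like(exceed)
    for i in range(rows):
        for j in range(cols):
            out[i, j] = padded[i:i + box, j:j + box].mean()
    return out
```

As the abstract notes, the result depends on the chosen box dimensions: a larger `box` smooths the field more aggressively and spreads coverage farther from the storm cells.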
Characterizing the forecast, following the process described in Refs. 5 through 7, in terms of percentage coverage or confidence level is notionally sounder than characterizing it in terms of probabilities, because the probability of the forecast being correct can only be determined using actual observations; Refs. 5 through 7 use only the forecast data and not the observations. Methods for computing the probability of detection, the false alarm ratio, and several forecast quality metrics (Skill Scores) using both the forecast and observation data are given in Ref. 2. This paper extends the statistical verification method of Ref. 2 to determine co-occurrence probabilities. The method consists of computing the probability that a severe weather cell (grid location) is detected in the observation data in the neighborhood of the severe weather cell in the forecast data. Probabilities that the observation data, at the grid location and in its neighborhood, show higher or lower severity than indicated in the forecast data are also examined. The method proposed in Refs. 5 through 7 is used for computing the probability that a certain number of cells in the neighborhood of severe weather cells in the forecast data appear as severe weather cells in the observation data. Finally, the probability of the existence of gaps in the observation data in the neighborhood of severe weather cells in the forecast data is computed. Gaps are defined as openings between severe weather cells through which an aircraft can safely fly to its intended destination. The rest of the paper is organized as follows. Section II summarizes the statistical verification method described in Ref. 2. The extension of this method for computing the co-occurrence probabilities is discussed in Section III. Numerical examples using NCWF forecast data and NCWD observation data are presented in Section III to elucidate the characteristics of the co-occurrence probabilities.
This section also discusses the procedure for computing the probabilities that the severity of convection in the observation data will be higher or lower in the neighborhood of grid locations compared to that indicated at the grid locations in the forecast data. The probability of coverage of neighborhood grid cells is also described via examples in this section. Section IV discusses the gap detection algorithm and presents a numerical example to illustrate the method. The locations of the detected gaps in the observation data are used along with the locations of convective weather cells in the forecast data to determine the probability of existence of gaps in the neighborhood of these cells. Finally, the paper is concluded in Section V.
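The co-occurrence probability described above can be sketched as follows. This is a hedged illustration, not the paper's code: it assumes forecast and observation are co-registered 2D NumPy grids of severity levels, and the names `level` and `radius` are hypothetical parameters standing in for the severity threshold and neighborhood size:

```python
import numpy as np

def neighborhood_detection_probability(forecast, observed, level, radius):
    """Estimate the probability that a severe weather cell in the forecast
    grid has at least one severe cell in the observation grid within
    `radius` grid cells of the same location."""
    rows, cols = forecast.shape
    hits = 0
    total = 0
    # Loop over every forecast grid location at or above the severity level.
    for i, j in zip(*np.where(forecast >= level)):
        total += 1
        r0, r1 = max(0, i - radius), min(rows, i + radius + 1)
        c0, c1 = max(0, j - radius), min(cols, j + radius + 1)
        if (observed[r0:r1, c0:c1] >= level).any():
            hits += 1
    return hits / total if total else float("nan")
```

Varying `radius` traces out how quickly the detection probability grows with neighborhood size, which is the kind of characteristic the paper's numerical examples with NCWF/NCWD data examine.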

    Nucleotide Discrimination with DNA Immobilized in the MspA Nanopore

    Nanopore sequencing has the potential to become a fast and low-cost DNA sequencing platform. An ionic current passing through a small pore would directly map the sequence of single-stranded DNA (ssDNA) driven through the constriction. The pore protein MspA, derived from Mycobacterium smegmatis, has a short and narrow channel constriction ideally suited for nanopore sequencing. To study MspA's ability to resolve nucleotides, we held ssDNA within the pore using a biotin-NeutrAvidin complex. We show that homopolymers of adenine, cytosine, thymine, and guanine in MspA exhibit much larger current differences than in α-hemolysin. Additionally, methylated cytosine is distinguishable from unmethylated cytosine. We establish that single nucleotide substitutions within homopolymer ssDNA can be detected when held in MspA's constriction. Using genomic single nucleotide polymorphisms, we demonstrate that single nucleotides within random DNA can be identified. Our results indicate that MspA has the high signal-to-noise ratio and the single nucleotide sensitivity desired for nanopore sequencing devices.

    Measuring Single-Molecule DNA Hybridization by Active Control of DNA in a Nanopore

    We present a novel application of active voltage control of DNA captured in a nanopore to regulate the amount of time the DNA is available to molecules in the bulk phase that bind to the DNA. In this work, the control method is used to measure hybridization between a single molecule of DNA captured in a nanopore and complementary oligonucleotides in the bulk phase. We examine the effect of oligonucleotide length on hybridization, and the effect of DNA length heterogeneity on the measurements. Using a mathematical model, we are able to deduce the binding rate of complementary oligonucleotides, even when DNA samples in experiments are affected by heterogeneity in length. We analyze the lifetime distribution of DNA duplexes that are formed in the bulk phase and then pulled against the pore by reversing the voltage. The lifetime distribution reveals several dissociation modes. It remains to be resolved whether these dissociation modes are due to DNA heterogeneity or correspond to different states of duplex DNA. The control method is unique in its ability to detect single-molecule complex assembly in the bulk phase, free from external force and with a broad (millisecond-to-second) temporal range.