83 research outputs found

    A Hybrid Approach of Traffic Flow Prediction Using Wavelet Transform and Fuzzy Logic

    The rapid development of urban areas and the growing size of vehicle fleets are causing severe traffic congestion. According to the TomTom Traffic Index (2016), most of the larger cities in Canada rank between 30th and 100th among the most traffic-congested cities in the world. A recent study by the CAA (Canadian Automobile Association) concludes that congestion costs drivers 11.5 million hours and 22 million litres of fuel each year, amounting to billions of dollars in lost revenue. Although active research on transportation management has been under way for four decades, the statistics show a continuing demand for traffic flow prediction methods with improved accuracy. This research presents a hybrid approach that applies a wavelet transform to a time-frequency (traffic count/hour) signal to locate sharp variation points in the traffic flow. The data between consecutive sharp variation points form segments with similar trends. These segments are used to construct fuzzy membership sets by categorizing the processed data together with other recorded information such as time, season, and weather. When real-time data are compared with the historical data using fuzzy IF-THEN rules, a matched dataset provides a reliable basis for traffic prediction. In addition to the proposed method, this work includes experimental results demonstrating improved accuracy for long-term traffic flow prediction.
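
    As a rough illustration of the first stage of this approach, the Python sketch below (not taken from the paper) flags sharp variation points in an hourly traffic-count signal by thresholding single-level Haar wavelet detail coefficients. The synthetic data, the choice of the Haar wavelet, and the 3-MAD threshold rule are illustrative assumptions; in the proposed method, the segments between such points would then feed the construction of the fuzzy membership sets.

```python
# A minimal sketch (not the authors' code) of wavelet-based detection of
# sharp variation points in an hourly traffic-count signal.
import numpy as np
import pywt  # PyWavelets

# Synthetic hourly traffic counts: two regimes with a sharp transition.
rng = np.random.default_rng(0)
counts = np.concatenate([
    rng.poisson(200, 48),   # two quiet days
    rng.poisson(900, 48),   # two congested days
]).astype(float)

# Single-level Haar DWT: detail coefficients respond to abrupt changes.
approx, detail = pywt.dwt(counts, "haar")

# Flag positions where the detail magnitude is an outlier (assumed rule:
# more than 3 median absolute deviations from the median).
mad = np.median(np.abs(detail - np.median(detail)))
spikes = np.where(np.abs(detail - np.median(detail)) > 3 * mad)[0]

# Each detail coefficient spans two samples of the original signal.
change_points = sorted({2 * i for i in spikes})
print("sharp variation points near hours:", change_points)
```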

    A survey of measurement-based spectrum occupancy modeling for cognitive radios

    Spectrum occupancy models are very useful in cognitive radio design. They can be used to increase spectrum sensing accuracy for more reliable operation, to remove the need for spectrum sensing for higher resource-usage efficiency, or to select channels for better opportunistic access, among other applications. In this survey, various spectrum occupancy models derived from measurement campaigns conducted around the world are investigated. These models extract different statistical properties of spectrum occupancy from the measured data. In addition to these models, spectrum occupancy prediction is also discussed, where autoregressive and/or moving-average models are used to predict the channel status at future time instants. After comparing these different methods and models, several challenges identified by the survey are summarized.
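
    As a concrete sketch of the prediction approach mentioned last (not drawn from any particular campaign in the survey), the Python code below fits an autoregressive model to a binary busy/idle occupancy series by least squares and thresholds the one-step-ahead prediction to obtain the next channel status. The model order, the synthetic persistence probabilities, and the 0.5 decision threshold are illustrative assumptions.

```python
# A minimal sketch of AR-based channel occupancy prediction.
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of AR(p) coefficients to series x."""
    rows = [x[i:i + p] for i in range(len(x) - p)]
    X, y = np.array(rows), x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Synthetic busy(1)/idle(0) history with strong persistence.
rng = np.random.default_rng(1)
occ = np.zeros(500)
for t in range(1, 500):
    stay = 0.9 if occ[t - 1] else 0.8   # assumed sojourn behaviour
    occ[t] = occ[t - 1] if rng.random() < stay else 1 - occ[t - 1]

p = 4
coef = fit_ar(occ, p)
pred = occ[-p:] @ coef                  # one-step-ahead soft prediction
status = int(pred > 0.5)                # hard busy/idle decision
print(f"predicted next status: {'busy' if status else 'idle'} ({pred:.2f})")
```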

    Computational analysis of nucleosome positioning datasets

    Chromatin is a complex of DNA and histone proteins that constitutes the elemental material of eukaryotic chromosomes. The basic repeating sub-unit of chromatin, the nucleosome core particle, comprises approximately 146 base pairs (bp) of DNA wrapped around an octamer of core histones. Core particles are joined together by variable lengths of linker DNA to form chains of nucleosomes that are folded into higher-order structures. The specific distribution of nucleosomes along the DNA fibre is known to influence this folding process. Furthermore, on a local level, the positioning of nucleosomes can control access to DNA sequence motifs, and thus plays a fundamental role in regulating gene expression. Despite considerable experimental effort, neither the folding process nor the mechanisms for gene regulation are currently well understood.

    Monomer extension (ME) is an established in vitro experimental technique which maps the positions adopted by reconstituted core histone octamers on a defined DNA sequence. It provides quantitative positioning information, at high resolution, over long continuous stretches of DNA sequence. This technique has been employed to map several genes: globin genes (8 kbp), the beta-lactoglobulin gene (10 kbp) and various imprinting genes (4 kbp).

    This study explores and analyses this unique dataset, utilising computational and stochastic techniques, to gain insight into the potential influence of nucleosome positioning on the structure and function of chromatin. The first section of this thesis expands upon prior analyses, explores general features of the dataset using common bioinformatics tools, and attempts to relate the quantitative positioning information from ME to data from other commonly used competitive reconstitution protocols. Finally, evidence of a correlation between the in vitro ME dataset and in vivo nucleosome positions for the beta-lactoglobulin gene region is presented.

    The second section presents the development of a novel method for the analysis of ME maps using Monte Carlo simulation methods. The goal was to use the ME datasets to simulate a higher-order chromatin fibre, taking advantage of the long-range and quantitative nature of the ME datasets.

    The Monte Carlo simulations have allowed new insights to be gleaned from the datasets. Analysis of the beta-lactoglobulin positioning map indicates the potential for discrete disruption of nucleosomal organisation, at specific physiological nucleosome densities, over regions found to have unusual chromatin structure in vivo. This suggests a correspondence between the quantitative histone octamer positioning information in vitro and the positioning of nucleosomes in vivo. Further, the simulations demonstrate that histone density-dependent changes in nucleosomal organisation, in both the beta-lactoglobulin and globin positioning maps, often occur in regions involved in gene regulation. This implies that irregular chromatin structures may form over certain biologically significant regions. Taken together, these studies lend weight to the hypothesis that nucleosome positioning information encoded within DNA plays a fundamental role in directing chromatin structure in vivo.
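
    The abstract does not describe the simulation scheme itself, so the Python sketch below only illustrates the general flavour of such a Monte Carlo: histone octamers with a hard-core footprint are placed along a sequence, with start sites drawn in proportion to a positioning map and overlapping placements rejected. The 147 bp footprint, the target density, and the synthetic map standing in for an ME profile are all illustrative assumptions.

```python
# A minimal sketch of map-weighted Monte Carlo nucleosome placement.
import numpy as np

FOOTPRINT = 147          # bp occluded per core particle (assumed)
L = 10_000               # bp of mapped sequence
TARGET = 40              # nucleosomes to place (sets the density)

rng = np.random.default_rng(2)
affinity = rng.gamma(2.0, 1.0, L - FOOTPRINT)   # stand-in for an ME map
affinity /= affinity.sum()

occupied = np.zeros(L, dtype=bool)
starts = []
while len(starts) < TARGET:
    s = rng.choice(len(affinity), p=affinity)   # map-weighted start site
    if not occupied[s:s + FOOTPRINT].any():     # hard-core exclusion
        occupied[s:s + FOOTPRINT] = True
        starts.append(s)

starts.sort()
linkers = np.diff(starts) - FOOTPRINT
print("mean linker length (bp):", linkers.mean().round(1))
```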

    Application of learning algorithms to traffic management in integrated services networks.

    SIGLE. Available from British Library Document Supply Centre, DSC:DXN027131 / BLDSC (British Library Document Supply Centre), GB, United Kingdom.

    Machine Learning Solutions for Transportation Networks

    This thesis brings together a collection of novel models and methods that result from a new look at practical problems in transportation through the prism of newly available sensor data. There are four main contributions. First, we design a generative probabilistic graphical model to describe multivariate continuous densities such as observed traffic patterns. The model implements a multivariate normal distribution with covariance constrained in a natural way, using a number of parameters that is only linear (as opposed to quadratic) in the dimensionality of the data, which means that learning these models requires less data. The primary use of such a model is to support inferences, for instance of data missing due to sensor malfunctions. Second, we build a model of traffic flow inspired by macroscopic flow models. Unlike traditional models of this kind, ours deals with uncertainty of measurement and the unobservability of certain important quantities, and incorporates on-the-fly observations more easily. Because the model does not admit efficient exact inference, we develop a particle filter. The model delivers better medium- and long-term predictions than general-purpose time series models. Moreover, having a predictive distribution over traffic state enables the application of powerful decision-making machinery to the traffic domain. Third, two new optimization algorithms for the common task of vehicle routing are designed, using the traffic flow model as their probabilistic underpinning. Their benefits include suitability to highly volatile environments and the fact that optimization criteria other than the classical minimal expected time are easily incorporated. Finally, we present a new method for detecting accidents and other adverse events. Data collected from highways enables us to bring supervised learning approaches to incident detection, and we show that a support vector machine learner can outperform manually calibrated solutions. A major hurdle to the performance of supervised learners is the quality of the data, which contains systematic biases varying from site to site. We build a dynamic Bayesian network framework that learns and rectifies these biases, leading to improved supervised detector performance with little need for manually tagged data. The realignment method applies generally to virtually all forms of labeled sequential data.
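
    To make the final contribution concrete, here is a minimal Python sketch (synthetic data, not the thesis's actual configuration) of SVM-based incident detection: short windows of loop-detector measurements (speed, volume, occupancy) are classified as incident versus normal using scikit-learn. The feature choice and class balance are illustrative assumptions.

```python
# A minimal sketch of SVM incident detection on synthetic detector data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 2000
normal = rng.normal([95, 1400, 0.12], [8, 150, 0.03], (n, 3))
incident = rng.normal([45, 700, 0.35], [12, 200, 0.08], (n // 10, 3))
X = np.vstack([normal, incident])             # speed, volume, occupancy
y = np.r_[np.zeros(n), np.ones(n // 10)]      # 1 = incident window

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```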