
    On Training Traffic Predictors via Broad Learning Structures: A Benchmark Study

    A fast architecture for real-time (i.e., minute-based) training of a traffic predictor is studied, based on the so-called broad learning system (BLS) paradigm. The study uses various traffic datasets from the California Department of Transportation and employs a variety of standard algorithms (LASSO regression, shallow and deep neural networks, stacked autoencoders, and convolutional and recurrent neural networks) for comparison purposes; all algorithms are implemented in MATLAB on the same computing platform. The study demonstrates a BLS training process two to three orders of magnitude faster (tens of seconds versus tens to hundreds of thousands of seconds), allowing unprecedented real-time capabilities. Additional comparisons with the extreme learning machine architecture, a learning algorithm sharing some features with BLS, confirm the speed advantage of least-squares training over gradient-based training.
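The speed advantage described above comes from BLS fitting only its output weights, in one shot, by regularised least squares rather than by iterative gradient descent. A minimal NumPy sketch of that training step, with node counts, activation, and regularisation chosen here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def bls_train(X, Y, n_feature=40, n_enhance=60, reg=1e-3):
    """One-shot least-squares training of a BLS-style network (illustrative sketch)."""
    # Random mapped-feature nodes (weights stay fixed after random draw)
    We = rng.standard_normal((X.shape[1], n_feature))
    be = rng.standard_normal(n_feature)
    Z = np.tanh(X @ We + be)
    # Random enhancement nodes, computed from the feature nodes
    Wh = rng.standard_normal((n_feature, n_enhance))
    bh = rng.standard_normal(n_enhance)
    H = np.tanh(Z @ Wh + bh)
    # Only the output weights W are learned: a ridge-regularised linear system
    A = np.hstack([Z, H])
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return We, be, Wh, bh, W

def bls_predict(model, X):
    We, be, Wh, bh, W = model
    Z = np.tanh(X @ We + be)
    H = np.tanh(Z @ Wh + bh)
    return np.hstack([Z, H]) @ W
```

Because the learned parameters solve a single linear system, training cost is essentially one matrix factorisation, which is what makes minute-scale retraining on fresh traffic data feasible.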

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is reviewed, covering model-based approaches, `programmed' AI, and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to `learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise.
Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other to increase detection rates and lower false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation, and adaptation are more readily facilitated.
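As a concrete illustration of the rule-based correlation the report favours, the sketch below applies a single hand-written rule (repeated failed logins from one source within a sliding time window) to a stream of temporally distributed events. The event schema, event type, and thresholds are assumptions made for illustration; the rule's rigidity also exemplifies the generalisation weakness the report identifies:

```python
from collections import defaultdict

def correlate(events, threshold=3, window=60):
    """Flag any source emitting `threshold` 'login_fail' events within `window` seconds.
    Events are (timestamp, source, kind) tuples; the schema is a hypothetical example."""
    recent = defaultdict(list)   # source -> timestamps of recent failures
    alerts = []
    for t, src, kind in sorted(events):
        if kind != "login_fail":          # the rule only watches one event type
            continue
        # Keep only failures still inside the sliding window, then add this one
        recent[src] = [u for u in recent[src] if t - u < window] + [t]
        if len(recent[src]) >= threshold:
            alerts.append((src, t))
    return alerts
```

A misuse not expressible as such a rule goes undetected (a false negative), which is exactly why hybrid systems pair this mechanism with a learned model of normal behaviour.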

    Currency security and forensics: a survey

    By definition, the word currency refers to an agreed medium of exchange; a nation’s currency is the formal medium enforced by its elected governing entity. Throughout history, issuers have faced one common threat: counterfeiting. Despite technological advancements, the elimination of counterfeit production remains a distant prospect. Scientific determination of authenticity requires a deep understanding of the raw materials and manufacturing processes involved. This survey serves as a synthesis of the current literature to understand the technology and the mechanics involved in currency manufacture and security, whilst identifying gaps in that literature. Ultimately, a robust currency is desired.

    Modeling travel demand and crashes at macroscopic and microscopic levels

    Accurate travel demand / Annual Average Daily Traffic (AADT) and crash predictions help planners to plan, propose, and prioritize infrastructure projects for future improvements. Existing methods are based on demographic characteristics, socio-economic characteristics, and on-network characteristics (including traffic volume). A few methods have considered land use characteristics, but only alongside other predictor variables, and a strong correlation exists between land use characteristics and those other variables. No past research has attempted to directly evaluate the effect and influence of land use characteristics on travel demand/AADT and crashes at both the area and link level. These land use characteristics may be easy to capture and may have better predictive capabilities than other variables. The primary focus of this research is to develop macroscopic and microscopic models to estimate travel demand and crashes with an emphasis on land use characteristics. The proposed methodology involves the development of macroscopic (area-level) and microscopic (link-level) models by incorporating scientific principles and statistical and artificial intelligence techniques. The microscopic models help evaluate link-level performance, whereas the macroscopic models help evaluate the overall performance of an area. The method for developing the macroscopic models differs from that for the microscopic models: areas of land use characteristics were considered in developing the macroscopic models, whereas the principle of demographic gravitation is incorporated in developing the microscopic models. Statistical and back-propagation neural network (BPNN) techniques are used in developing the models. The results obtained indicate that the statistical and neural network models yielded significantly lower errors. Overall, the BPNN models yielded better results in estimating travel demand and crashes than any other approach considered in this research.
The neural network approach is particularly suitable for its better predictive capability, whereas the statistical models could be used for mathematical formulation or for understanding the role of explanatory variables in estimating AADT. The results obtained also indicate that land use characteristics have better predictive capabilities than the other variables considered in this research. The outcomes can be used in safety-conscious planning, land use decisions, long-range transportation plans, prioritization of projects (short term and long term), and proactive application of safety treatments.
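The back-propagation technique the BPNN models rely on can be sketched as a one-hidden-layer network trained by full-batch gradient descent. The predictors below are synthetic stand-ins for land-use areas, and the layer size, learning rate, and epoch count are illustrative assumptions, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_bpnn(X, y, hidden=8, lr=0.05, epochs=2000):
    """Tiny one-hidden-layer back-propagation network (illustrative sketch)."""
    W1 = rng.standard_normal((X.shape[1], hidden)) * 0.5
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, 1)) * 0.5
    b2 = np.zeros(1)
    n = len(X)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)              # forward pass
        pred = H @ W2 + b2
        err = pred - y[:, None]               # mean-squared-error residual
        gW2 = H.T @ err / n                   # backward pass: output layer
        gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H ** 2)      # back-propagate through tanh
        gW1 = X.T @ dH / n
        gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1        # gradient-descent updates
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel()
```

The trade-off noted above is visible here: the trained network predicts well but its weights have no direct interpretation, whereas a statistical model's coefficients expose the role of each explanatory variable.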

    Analysis and review of the possibility of using the generative model as a compression technique in DNA data storage: review and future research agenda

    The amount of data in the world keeps growing, and existing storage technology faces severe challenges: global data volume is expected to reach 175 ZB by 2025. Data storage in DNA is an alternative technology with great potential for information storage, mainly of digital data. One of the stages of storing information on DNA is synthesis. This synthesis process is very costly, so it is necessary to integrate compression techniques for digital data to minimize the costs incurred. One of the models used in compression techniques is the generative model. This paper aims to see whether compression using a generative model can be integrated into DNA data storage methods. To this end, we have conducted a Systematic Literature Review using the PRISMA method for selecting papers. We took the papers from four leading databases and other additional databases. Out of 2,440 papers, we finally settled on 34 primary papers for detailed analysis. This systematic literature review (SLR) presents and categorizes the literature according to the research questions, namely: discussing machine learning methods applied in DNA storage, identifying compression techniques for DNA storage, establishing the role of deep learning in the compression process for DNA storage, examining how generative models are associated with deep learning, examining how generative models are applied in the compression process, and establishing how a latent space can be formed. The study highlights open problems that need to be solved and provides an identified research direction.
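The pipeline the review examines (compress the digital data, then synthesise it as DNA) can be sketched end to end. In this sketch zlib merely stands in for the generative-model compressor under discussion, and the two-bits-per-nucleotide mapping is one simple, commonly used transcoding choice, not a scheme from the reviewed papers:

```python
import zlib

BASES = "ACGT"  # two bits per nucleotide: 00->A, 01->C, 10->G, 11->T

def to_dna(data: bytes) -> str:
    """Compress, then transcode each bit pair to a nucleotide.
    zlib is a placeholder for a learned (generative-model) compressor."""
    payload = zlib.compress(data)
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in payload
                   for shift in (6, 4, 2, 0))

def from_dna(strand: str) -> bytes:
    """Invert the transcoding, then decompress back to the original bytes."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for ch in strand[i:i + 4]:
            byte = (byte << 2) | BASES.index(ch)
        out.append(byte)
    return zlib.decompress(bytes(out))
```

Since each byte of the compressed payload costs four nucleotides of synthesis, every byte the compressor removes directly reduces synthesis cost, which is the economic motivation the review starts from.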