
    A Hybrid Approach of Traffic Flow Prediction Using Wavelet Transform and Fuzzy Logic

    The rapid development of urban areas and the increasing size of vehicle fleets are causing severe traffic congestion. According to traffic index data (TomTom Traffic Index 2016), most of the larger cities in Canada rank between 30th and 100th among the most traffic-congested cities in the world. A recent research study by the CAA (Canadian Automobile Association) concludes that traffic congestion costs drivers 11.5 million hours and 22 million litres of fuel each year, amounting to billions of dollars in lost revenue. Although active research on improving transportation management has been going on for four decades, statistical data show a demand for new methods that predict traffic flow with improved accuracy. This research presents a hybrid approach that applies a wavelet transform to a time-frequency (traffic count/hour) signal to determine sharp variation points of traffic flow. The data between sharp variation points reveal segments with similar trends. These segments are used to construct fuzzy membership sets by categorizing the processed data together with other recorded information such as time, season, and weather. When real-time data are compared with the historical data using fuzzy IF-THEN rules, a matched dataset represents a reliable source of information for traffic prediction. In addition to the proposed new method, this research also includes experimental results that demonstrate the improvement in accuracy for long-term traffic flow prediction.
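
    A minimal sketch of the wavelet step described above, assuming PyWavelets as the transform library; the k-sigma thresholding rule and all names (find_sharp_variation_points, hourly_counts) are illustrative choices, not taken from the paper.

```python
# Sketch: locate sharp variation points in an hourly traffic-count series
# from the detail coefficients of a discrete wavelet transform.
# Library choice (PyWavelets) and thresholding rule are assumptions.
import numpy as np
import pywt

def find_sharp_variation_points(hourly_counts, wavelet="db4", level=2, k=1.5):
    """Return approximate indices where traffic flow changes sharply."""
    coeffs = pywt.wavedec(hourly_counts, wavelet, level=level)
    detail = coeffs[1]                      # coarsest detail coefficients
    threshold = k * np.std(detail)          # simple k-sigma rule (assumed)
    spikes = np.where(np.abs(detail) > threshold)[0]
    # Map coefficient positions back to (approximate) sample indices.
    scale = len(hourly_counts) / len(detail)
    return (spikes * scale).astype(int)

# Example: two days of synthetic counts with an abrupt congestion onset at hour 30.
counts = np.concatenate([np.full(30, 400.0), np.full(18, 1200.0)])
print(find_sharp_variation_points(counts))
```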

    AN APPROACH OF TRAFFIC FLOW PREDICTION USING ARIMA MODEL WITH FUZZY WAVELET TRANSFORM

    It is essential for intelligent transportation systems to produce accurate forecasts of traffic flow in both the short and long term. However, traffic volume count datasets are non-stationary time series that are inherently noisy, so the accuracy of predictions carried out on such unrefined data is reduced by the random components. A prior study shows that Box-Jenkins Autoregressive Integrated Moving Average (ARIMA) models require a noise-free dataset for model construction. Therefore, this study proposes to overcome the noise issue with a hybrid approach that combines the ARIMA model with a fuzzy wavelet transform. In this approach, fuzzy rules are developed to categorize traffic datasets according to influencing factors such as the time of day, the season of the year, and weather conditions. Because the ARIMA model requires a linear input series, the discrete wavelet transform is applied to separate the linear and nonlinear parts of the time series and to denoise the traffic data.
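
    As a rough illustration of the hybrid idea, the sketch below denoises a synthetic hourly count series with a discrete wavelet transform and then fits an ARIMA model to the smoothed series. The libraries (PyWavelets, statsmodels), the soft-threshold rule, and the (2, 1, 2) order are assumptions for illustration; the fuzzy categorization step is omitted.

```python
# Sketch: wavelet denoising followed by ARIMA forecasting on traffic counts.
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA

def wavelet_denoise(series, wavelet="db4", level=2):
    coeffs = pywt.wavedec(series, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest details
    thresh = sigma * np.sqrt(2 * np.log(len(series)))     # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(series)]

rng = np.random.default_rng(0)
hours = np.arange(24 * 14)                                # two weeks of hourly counts
counts = 500 + 300 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 60, hours.size)

smooth = wavelet_denoise(counts)
fit = ARIMA(smooth, order=(2, 1, 2)).fit()
print(fit.forecast(steps=24))                             # next-day hourly forecast
```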

    Landslide displacement forecasting using deep learning and monitoring data across selected sites

    Accurate early warning systems for landslides are a reliable risk-reduction strategy that may significantly reduce fatalities and economic losses. Several machine learning methods have been examined for this purpose, highlighting the remarkable prediction capabilities of deep learning (DL) models. The long short-term memory (LSTM) and gated recurrent unit (GRU) algorithms are the only DL models studied in existing comparisons. However, several other DL algorithms are suitable for time series forecasting tasks. In this paper, we assess, compare, and describe seven DL methods for forecasting future landslide displacement: multi-layer perceptron (MLP), LSTM, GRU, 1D convolutional neural network (1D CNN), 2xLSTM, bidirectional LSTM (bi-LSTM), and an architecture composed of a 1D CNN and an LSTM (Conv-LSTM). The investigation focuses on four landslides with different geographic locations, geological settings, time step dimensions, and measurement instruments. Two landslides are located in an artificial reservoir context, while the displacement of the other two is influenced only by rainfall. The results reveal that the MLP, GRU, and LSTM models can make reliable predictions in all four scenarios, while the Conv-LSTM model outperforms the others on the Baishuihe landslide, where the movement is highly seasonal. No evident performance differences were found for landslides inside artificial reservoirs rather than outside. Furthermore, the research shows that MLP is better suited to forecasting the highest displacement peaks, while LSTM and GRU are better suited to modelling lower displacement peaks. We believe the findings of this research will serve as a precious aid when implementing a DL-based landslide early warning system (LEWS).
    Funding and data acknowledgements: the project "SUPPORTO SCIENTIFICO PER L'OTTIMIZZAZIONE, IMPLEMENTAZIONE E GESTIONE DEL SISTEMA DI MONITORAGGIO CON AGGIORNAMENTO DELLE SOGLIE DI ALLERTAMENTO DEL FENOMENO FRANOSO DI SANT'ANDREA – PERAROLO DI CADORE (BL)"; the Spanish grants "SARAI, PID2020-116540RB-C21, MCIN/AEI/10.13039/501100011033" and "RISKCOAST"; InSAR displacement data of the El Arrecife landslide; the Geohazard Exploitation Platform (GEP) of the European Space Agency; NoR Projects Sponsorship (Project ID: 63737).
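
    For readers unfamiliar with the setup, the sketch below shows how one of the compared architectures (a single LSTM) can be trained on sliding windows of a displacement series. Keras is an assumed framework choice, and the window size, layer width, and training settings are illustrative rather than the paper's configuration.

```python
# Sketch: one-step-ahead landslide displacement forecasting with a single LSTM.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

def make_windows(series, window=12):
    """Turn a 1-D displacement series into (samples, window, 1) inputs and next-step targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y

displacement = np.cumsum(np.random.default_rng(0).random(300))   # placeholder monotonic record
X, y = make_windows(displacement)

model = Sequential([
    Input(shape=(X.shape[1], 1)),
    LSTM(64),
    Dense(1),                                 # one-step-ahead displacement
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=32, verbose=0)
next_step = model.predict(X[-1:])             # forecast of the next displacement value
```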

    Contributions to improve the technologies supporting unmanned aircraft operations

    International Mention in the doctoral degree.
    Unmanned Aerial Vehicles (UAVs), in their smaller versions known as drones, are becoming increasingly important in today's societies. The systems that make them up present a multitude of challenges, of which error can be considered the common denominator. Perception of the environment relies on sensors that have errors, and the models that interpret the information and/or define behaviors are approximations of the world and therefore also have errors. Explaining error allows the limits of deterministic models to be extended to address real-world problems. The performance of the technologies embedded in drones depends on our ability to understand, model, and control the error of the systems that integrate them, as well as on new technologies that may emerge. Flight controllers integrate various subsystems that are generally dependent on other systems. One example is the guidance system, which provides the engine propulsion controller with the information necessary to accomplish a desired mission. For this purpose, the flight controller includes a guidance control law that reacts to the information perceived by the perception and navigation systems. The error of any of the subsystems propagates through the ecosystem of the controller, so the study of each of them is essential. Among the strategies for controlling error are state-space estimators, where the Kalman filter has been a great ally of engineers since its appearance in the 1960s. Kalman filters are at the heart of information fusion systems: they minimize the error covariance of the system and allow the measured states to be filtered and estimated in the absence of observations. State Space Models (SSMs) are developed from a set of hypotheses for modeling the world, among them that the models must be linear and Markovian and that their error must be Gaussian. In general, systems are not linear, so linearizations are performed on models that are already approximations of the world. In other cases, the noise to be controlled is not Gaussian but is approximated by that distribution in order to deal with it. Furthermore, many systems are not Markovian, i.e., their states do not depend only on the previous state; there are other dependencies that state-space models cannot handle. This thesis presents a collection of studies in which error is formulated and reduced. First, the error in a computer vision-based precision landing system is studied; then estimation and filtering problems are addressed from a deep learning approach; finally, classification with deep learning over trajectories is studied. The first case of the collection studies the consequences of error propagation in a machine vision-based precision landing system and proposes a set of strategies to reduce the impact on the guidance system and, ultimately, the error. The next two studies approach the estimation and filtering problem from the deep learning perspective, where error is a function to be minimized by learning. The last case of the collection deals with a trajectory classification problem with real data.
    This work completes the two main fields in deep learning, regression and classification, where the error is considered as a probability function of class membership.
    I would like to thank the Ministry of Science and Innovation for granting me the funding with reference PRE2018-086793, associated with the project TEC2017-88048-C2-2-R, which provided me the opportunity to carry out all my PhD activities, including completing an international research internship.
    Programa de Doctorado en Ciencia y Tecnología Informática, Universidad Carlos III de Madrid. Examining committee: Chair, Antonio Berlanga de Jesús; Secretary, Daniel Arias Medina; Member, Alejandro Martínez Cav
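
    As a concrete reference for the state-space estimation discussed above, the sketch below implements a textbook linear Kalman filter for a one-dimensional constant-velocity target observed with noisy position measurements. The matrices and noise levels are illustrative assumptions, not models taken from the thesis.

```python
# Sketch: linear Kalman filter for a 1-D constant-velocity model with noisy position measurements.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # only position is observed
Q = 1e-3 * np.eye(2)                     # process noise covariance (assumed)
R = np.array([[0.25]])                   # measurement noise covariance (assumed)

x = np.zeros((2, 1))                     # state estimate
P = np.eye(2)                            # estimate covariance

def kalman_step(x, P, z):
    # Predict.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with measurement z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain minimizes error covariance
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(1)
for t in range(100):
    true_pos = 2.0 * t * dt                          # target moving at 2 m/s
    z = np.array([[true_pos + rng.normal(0, 0.5)]])
    x, P = kalman_step(x, P, z)
print(x.ravel())                                     # filtered position and velocity
```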

    Theory and applications of artificial neural networks

    In this thesis some fundamental theoretical problems concerning artificial neural networks and their application in communication and control systems are discussed. We consider the convergence properties of the Back-Propagation algorithm, which is widely used for training artificial neural networks, and propose two stepsize variation techniques to accelerate convergence. Simulation results demonstrate significant improvement over conventional Back-Propagation algorithms. We also discuss the relationship between the generalization performance of artificial neural networks and their structure and representation strategy. It is shown that the structure of the network, which represents a priori knowledge of the environment, has a strong influence on generalization performance. A theorem about the number of hidden units and the capacity of self-association MLP (Multi-Layer Perceptron) type networks is also given in the thesis. In the application part of the thesis, we discuss the feasibility of using artificial neural networks for nonlinear system identification. Some advantages and disadvantages of this approach are analyzed. The thesis continues with a study of artificial neural networks applied to communication channel equalization and to the problem of call access control in broadband ATM (Asynchronous Transfer Mode) communication networks. A final chapter provides overall conclusions and suggestions for further work.
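
    To make the idea of stepsize variation concrete, the sketch below applies the classic "bold driver" heuristic to gradient training of a linear model: the learning rate grows while the loss decreases and shrinks after an increase. This is a common textbook rule shown for illustration, not one of the two techniques proposed in the thesis.

```python
# Sketch: gradient descent on a least-squares problem with a bold-driver adaptive stepsize.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + rng.normal(0, 0.1, size=200)

w = np.zeros(3)
lr = 0.01
prev_loss = np.inf
for epoch in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)           # gradient of mean squared error
    w_new = w - lr * grad
    loss = np.mean((X @ w_new - y) ** 2)
    if loss < prev_loss:
        w, prev_loss, lr = w_new, loss, lr * 1.05   # accept step, grow stepsize
    else:
        lr *= 0.5                                   # reject step, shrink stepsize
print(w, prev_loss)
```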

    Optics for AI and AI for Optics

    Artificial intelligence is deeply involved in our daily lives by reinforcing the digital transformation of modern economies and infrastructure. It relies on powerful computing clusters, which face power-consumption bottlenecks in both data transmission and intensive computing. Meanwhile, optics (especially optical communications, which underpin today's telecommunications) is penetrating short-reach connections down to the chip level, thus meeting AI technology and creating numerous opportunities. This book is about the marriage of optics and AI and how each field can benefit from the other. Optics facilitates on-chip neural networks through fast optical computing and energy-efficient interconnects and communications. In turn, AI provides efficient tools to address the challenges of today's optical communication networks, which behave in an increasingly complex manner. The book collects contributions from pioneering researchers in both academia and industry to discuss the challenges and solutions in each of the respective fields.

    Enhancing Short-Term Berry Yield Prediction for Small Growers Using a Novel Hybrid Machine Learning Model

    This study presents a novel hybrid model that combines two different algorithms to increase the accuracy of short-term berry yield prediction using only previous yield data. The model integrates autoregressive integrated moving average (ARIMA) modelling with Kalman filter refinement and machine learning techniques, specifically support vector regression (SVR) and nonlinear autoregressive (NAR) neural networks, to improve prediction accuracy by correcting the errors generated by the system. To enhance the prediction performance of the ARIMA model, an innovative method is introduced that reduces randomness and incorporates only observed variables and system errors into the state-space system. The results indicate that the proposed hybrid models exhibit greater accuracy in predicting weekly production, with goodness-of-fit values above 0.95 and lower root mean square error (RMSE) and mean absolute error (MAE) values than non-hybrid models. The study highlights several implications, including the potential for small growers to use digital strategies that offer crop forecasts to increase sales and promote loyalty in relationships with large food retail chains. Additionally, accurate yield forecasting can help berry growers plan their production schedules and optimize resource use, leading to increased efficiency and profitability. The proposed model may serve as a valuable information source for European food retailers, enabling growers to form strategic alliances with their customers.
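
    The sketch below illustrates the error-correction principle behind such hybrids: an ARIMA model is fitted to weekly yields, an SVR learns the structure remaining in its residuals, and the two forecasts are added. The libraries, lag count, and ARIMA order are assumptions, and the Kalman filter refinement and NAR component of the paper's model are omitted.

```python
# Sketch: ARIMA forecast corrected by an SVR model of the ARIMA residuals.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.svm import SVR

rng = np.random.default_rng(0)
weeks = np.arange(120)
yields = 50 + 20 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 3, weeks.size)

arima = ARIMA(yields, order=(1, 1, 1)).fit()
resid = arima.resid

# SVR learns the structure left in the ARIMA residuals from lagged residuals.
lags = 4
X = np.stack([resid[i:i + lags] for i in range(len(resid) - lags)])
y = resid[lags:]
svr = SVR(kernel="rbf", C=10.0).fit(X, y)

arima_next = arima.forecast(steps=1)[0]
resid_next = svr.predict(resid[-lags:].reshape(1, -1))[0]
print(arima_next + resid_next)      # corrected one-week-ahead yield forecast
```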