13,884 research outputs found

    A novel Big Data analytics and intelligent technique to predict driver's intent

    The modern age offers great potential for automatically predicting a driver's intent, thanks to the increasing miniaturization of computing technologies, rapid advancements in communication technologies, and the continuous connectivity of heterogeneous smart objects. Inside the cabin and engine of modern cars, dedicated computer systems need the ability to exploit the wealth of information generated by heterogeneous data sources with different contextual and conceptual representations. Processing and utilizing this diverse and voluminous data involves many challenges concerning the design of the computational technique used to perform this task. In this paper, we investigate the various data sources available in the car and its surrounding environment that can be used as inputs to predict driver intent and behavior. As part of investigating these potential data sources, we conducted experiments on the e-calendars of a large number of employees and reviewed a number of available geo-referencing systems. Through a statistical analysis and by computing location recognition accuracy, we explored in detail the potential of calendar location data for detecting the driver's intentions. To exploit the numerous diverse data inputs available in modern vehicles, we investigate the suitability of different Computational Intelligence (CI) techniques and propose a novel fuzzy computational modelling methodology. Finally, we outline the impact of applying advanced CI and Big Data analytics techniques in modern vehicles on the driver and society in general, and discuss ethical and legal issues arising from the deployment of intelligent self-learning cars.
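
    For illustration only, a minimal sketch of how a single fuzzy rule could turn calendar-derived inputs into a driver-intent score; the inputs (a calendar location match score and minutes until the next appointment), the membership functions, and the rule are hypothetical assumptions, not the paper's proposed methodology:

```python
# Hypothetical sketch: one Mamdani-style fuzzy rule for driver-intent scoring.
# Inputs, membership functions and the rule are illustrative assumptions only.

def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def intent_to_drive_to_meeting(calendar_match, minutes_to_event):
    """Degree of belief (0..1) that the driver is heading to a calendar event."""
    # Fuzzify the crisp inputs.
    match_high = tri(calendar_match, 0.5, 1.0, 1.5)      # "location match is high"
    event_soon = tri(minutes_to_event, 0.0, 15.0, 60.0)  # "event is soon"
    # Rule: IF match is high AND event is soon THEN intent is high (AND = min).
    return min(match_high, event_soon)

if __name__ == "__main__":
    print(intent_to_drive_to_meeting(calendar_match=0.9, minutes_to_event=20))
```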

    Large-scale climatic teleconnection for predicting extreme hydro-climatic events in southern Japan

    Coordinator: Sameh Kantoush; Principal Investigator: Vahid Nouran

    Predicting a Containership's Arrival Punctuality in Liner Operations by Using a Fuzzy Rule-Based Bayesian Network (FRBBN)

    One of the biggest concerns in liner operations is the punctuality of containerships, and managing the time factor has become a crucial issue in today's liner shipping operations. A 2015 statistic showed that containerships reached an overall on-time performance of only 73%. Vessel punctuality is affected by many factors, such as port and vessel conditions and the knock-on effects of delays. This paper therefore develops a model for analyzing and predicting the arrival punctuality of a liner vessel at ports of call under uncertain environments by using a hybrid decision-making technique, the Fuzzy Rule-Based Bayesian Network (FRBBN). To ensure the practicability of the model, two container vessels were tested using the proposed approach. The results show that the differences between the predicted and real arrival times are only 4.2% and 6.6%, which can be considered reasonable. The model is capable of helping liner shipping operators (LSOs) predict the arrival punctuality of their vessels at a particular port of call. © 2017 The Korean Association of Shipping and Logistics, Inc.
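
    For illustration, a minimal sketch of the Bayesian-network side of such a model; the network structure, node states, and conditional probabilities below are hypothetical assumptions, and in the FRBBN approach the conditional probability table would be derived from a fuzzy rule base rather than set by hand:

```python
# Hypothetical sketch: a discrete node "OnTime" conditioned on two parent nodes.
# All states and probabilities are illustrative assumptions, not the paper's data.

from itertools import product

# Prior beliefs over the parent nodes (hypothetical).
p_congestion = {"low": 0.6, "high": 0.4}
p_weather    = {"good": 0.7, "bad": 0.3}

# P(OnTime = yes | congestion, weather), hypothetical CPT.
p_ontime_given = {
    ("low", "good"):  0.95,
    ("low", "bad"):   0.75,
    ("high", "good"): 0.70,
    ("high", "bad"):  0.40,
}

# Marginalize out the parents to obtain P(OnTime = yes).
p_ontime = sum(
    p_congestion[c] * p_weather[w] * p_ontime_given[(c, w)]
    for c, w in product(p_congestion, p_weather)
)
print(f"P(vessel arrives on time) = {p_ontime:.3f}")
```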

    Multilayered feed forward Artificial Neural Network model to predict the average summer-monsoon rainfall in India

    In the present research, the possibility of predicting the average summer-monsoon rainfall over India is analyzed using Artificial Neural Network models. In formulating the Artificial Neural Network based predictive models, three-layered networks are constructed with sigmoid non-linearity. The models under study differ in the number of hidden neurons. After a thorough training and test procedure, the neural net with three nodes in the hidden layer is found to be the best predictive model. (Comment: 19 pages, 1 table, 3 figures)
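
    A minimal numpy sketch of the network family described above (a feed-forward net with sigmoid non-linearity and three hidden nodes); the synthetic data and the plain gradient-descent training loop are assumptions for the example, not the paper's actual predictors or training procedure:

```python
# Hypothetical sketch: three-layer feed-forward network, sigmoid activations,
# three hidden nodes. Synthetic data and training loop are illustrative only.

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 4))                 # 50 samples, 4 hypothetical predictors
y = X.mean(axis=1, keepdims=True)       # synthetic target in [0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Input(4) -> hidden(3) -> output(1)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

lr = 0.5
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)            # hidden activations
    out = sigmoid(h @ W2 + b2)          # network output
    err = out - y
    d_out = err * out * (1 - out)       # backprop for squared-error loss
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

print("final MSE:", float(np.mean((out - y) ** 2)))
```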

    Valuing information from mesoscale forecasts

    The development of meso-gamma-scale numerical weather prediction (NWP) models requires a substantial investment in research, development and computational resources. Traditional objective verification of deterministic model output fails to demonstrate the added value of the high-resolution forecasts made by such models. It is generally accepted from subjective verification that these models nevertheless have predictive potential for small-scale weather phenomena and extreme weather events. This has prompted an extensive body of research into new verification techniques and scores aimed at developing mesoscale performance measures that objectively demonstrate the return on investment in meso-gamma NWP. In this article it is argued that the evaluation of the information in mesoscale forecasts should be essentially connected to the method used to extract this information from the direct model output (DMO). This could be an evaluation by a forecaster but, given the probabilistic nature of small-scale weather, is more likely a form of statistical post-processing. Using model output statistics (MOS) and traditional verification scores, the potential of this approach is demonstrated both on an educational abstraction and on a real-world example. The MOS approach in this article incorporates concepts from fuzzy verification; it objectively weighs different forecast quality measures and as such is an essential extension of fuzzy methods.
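
    For illustration, a minimal sketch of the MOS idea on synthetic data: fit a linear correction from direct model output to observations over a training period, then compare a traditional verification score (here RMSE) for raw versus corrected forecasts; the data, the built-in model bias, and the choice of score are assumptions for the example:

```python
# Hypothetical sketch: linear MOS correction plus RMSE verification.
# The synthetic observations and biased "model output" are illustrative only.

import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(15.0, 5.0, 200)                 # observed temperature (deg C)
dmo = 0.8 * obs + 3.0 + rng.normal(0, 1.5, 200)  # biased, noisy direct model output

train, test = slice(0, 150), slice(150, None)

# Fit obs ~ a * dmo + b on the training period (least squares).
a, b = np.polyfit(dmo[train], obs[train], deg=1)
mos = a * dmo[test] + b

def rmse(forecast, observed):
    return float(np.sqrt(np.mean((forecast - observed) ** 2)))

print("RMSE raw DMO :", rmse(dmo[test], obs[test]))
print("RMSE with MOS:", rmse(mos, obs[test]))
```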

    A systematic review of data quality issues in knowledge discovery tasks

    A large volume of data is accumulating because organizations continuously capture data to support better decision-making. The most fundamental challenge is to explore these large volumes of data and extract useful knowledge for future actions through knowledge discovery tasks; nevertheless, much of this data is of poor quality. We present a systematic review of data quality issues in knowledge discovery tasks and a case study applied to the agricultural disease known as coffee rust.