    Port throughput influence factors based on neighborhood rough sets: an exploratory study

    Purpose: The purpose of this paper is to devise an efficient method for analysing the importance of port throughput influence factors. Design/methodology/approach: Neighborhood rough sets are applied to the factor-selection problem. First, a throughput index system is established. Then an attribute reduction model is built using an updated numerical-attribute reduction algorithm based on neighborhood rough sets, and the algorithm is optimized for efficiency. Finally, the approach is validated empirically on Guangzhou Port throughput and influencing-factor data for the years 2000 to 2013. Findings: With the model and algorithm, port enterprises can identify the importance of port throughput factors, which supports their decisions. Research limitations: The empirical data cover only the years 2000 to 2013, so the sample is small. Practical implications: The results support port business investment, decision-making and risk control, and also assist throughput forecasting by port enterprises and other researchers. Originality/value: The paper establishes a throughput index system and optimizes the reduction algorithm for efficiency.
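
    A minimal sketch of the kind of computation the abstract describes: a neighborhood-rough-set dependency measure and a forward-greedy attribute reduction. The function names, the neighborhood radius delta, and the stopping rule are illustrative assumptions, not the paper's actual algorithm.

        # Illustrative neighborhood-rough-set attribute reduction.
        # Assumes X is a NumPy array of numeric attributes scaled to [0, 1]
        # and y holds the decision classes; delta is a hypothetical radius.
        import numpy as np

        def neighborhood_dependency(X, y, attrs, delta=0.15):
            """gamma_B(D): fraction of samples whose delta-neighborhood,
            measured over the attribute subset `attrs`, is pure in y."""
            sub = X[:, attrs]
            pos = 0
            for i in range(len(sub)):
                dist = np.linalg.norm(sub - sub[i], axis=1)
                if np.all(y[dist <= delta] == y[i]):
                    pos += 1
            return pos / len(sub)

        def greedy_reduct(X, y, delta=0.15):
            """Greedily add the attribute with the largest marginal gain
            in dependency; stop when no attribute improves it."""
            remaining = list(range(X.shape[1]))
            chosen, best = [], 0.0
            while remaining:
                gamma, a = max((neighborhood_dependency(X, y, chosen + [a], delta), a)
                               for a in remaining)
                if gamma <= best:
                    break
                chosen.append(a)
                remaining.remove(a)
                best = gamma
            return chosen, best

    The marginal gain each attribute contributes (its significance) is what ranks the influence factors by importance.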

    Knowledge Discovery and Monotonicity

    The monotonicity property is ubiquitous in our lives and appears in different roles: as domain knowledge, as a requirement, as a property that reduces the complexity of a problem, and so on. It is present in various domains: economics, mathematics, languages, operations research and many others, and it has therefore been widely studied in mathematics. In knowledge discovery, monotonicity can be treated as available background information that facilitates and guides the knowledge extraction process; while some sub-areas have already developed methods for taking this information into account, in most methodologies it has not been extensively studied or has not been addressed at all. This thesis is a contribution to a change in that direction. It focuses on the monotonicity property in knowledge discovery, specifically in classification, attribute reduction, function decomposition, frequent pattern generation and missing-value handling. Four specific problems are addressed within four methodologies: rough set theory, monotone decision trees, function decomposition and frequent pattern generation. In the first three parts, monotonicity is both domain knowledge and a requirement on the outcome of the classification process, and the three methodologies are extended to deal with monotone data so that the outcome is guaranteed to satisfy the monotonicity requirement. In the last part, monotonicity is a property that helps reduce the computation involved in frequent pattern generation; here the focus is on two of the best algorithms and their comparison, both theoretically and experimentally.

    About the author: Viara Popova was born in Bourgas, Bulgaria, in 1972. She received her secondary education at the Mathematics High School "Nikola Obreshkov" in Bourgas. In 1996 she completed her higher education at Sofia University, Faculty of Mathematics and Informatics, graduating with a major in Informatics and a specialization in Information Technologies in Education. She then joined the Department of Information Technologies, first as an associated member and from 1997 as an assistant professor. In 1999 she became a PhD student at Erasmus University Rotterdam, Faculty of Economics, Department of Computer Science. In 2004 she joined the Artificial Intelligence Group within the Department of Computer Science, Faculty of Sciences, at Vrije Universiteit Amsterdam as a postdoc researcher.
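
    As a concrete illustration of the monotonicity requirement discussed above: a labelled dataset is monotone when any instance that dominates another on every attribute also receives a label at least as high. The check below is a toy sketch, not code from the thesis.

        # Toy monotonicity check: X[i] >= X[j] componentwise must imply
        # y[i] >= y[j]. O(n^2) pairwise scan; illustrative only.
        import numpy as np

        def is_monotone(X, y):
            for i in range(len(X)):
                for j in range(len(X)):
                    if np.all(X[i] >= X[j]) and y[i] < y[j]:
                        return False  # i dominates j but has a smaller label
            return True

    The same kind of ordering property is what prunes the search in frequent pattern generation: support is anti-monotone, so no superset of an infrequent itemset can be frequent, and whole branches of the search space can be skipped.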

    Advances in Data Mining Knowledge Discovery and Applications

    Advances in Data Mining Knowledge Discovery and Applications aims to help data miners, researchers, scholars, and PhD students who wish to apply data mining techniques. The book's primary contribution is to highlight frontier fields and implementations of knowledge discovery and data mining. Although the same approaches and techniques may recur, in general they can serve different fields and areas of expertise. Data mining draws on statistics, machine learning, data management and databases, pattern recognition, artificial intelligence, and other areas, and most of these areas are covered here through different data mining applications. The eighteen chapters are organized in two parts: Knowledge Discovery and Data Mining Applications.

    Uncertainty Management of Intelligent Feature Selection in Wireless Sensor Networks

    Wireless sensor networks (WSN) are envisioned to revolutionize the paradigm of monitoring complex real-world systems at very high resolution. However, the deployment of large numbers of unattended sensor nodes in hostile environments, frequent changes in environment dynamics, and severe resource constraints introduce uncertainties and limit the potential use of WSN in complex real-world applications. Although uncertainty management in Artificial Intelligence (AI) is well developed and well investigated, its implications in wireless sensor environments are inadequately addressed. This dissertation addresses uncertainty management issues of spatio-temporal patterns generated from sensor data. It provides a framework for characterizing spatio-temporal patterns in WSN. Using rough set theory and temporal reasoning, a novel formalism has been developed to characterize and quantify the uncertainties in predicting spatio-temporal patterns from sensor data. This research also uncovers the trade-offs among the uncertainty measures, which can be used to develop a multi-objective optimization model for real-time decision-making in sensor data aggregation and sampling.
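
    One way rough set theory quantifies such uncertainty is through the accuracy of approximation: the ratio of a target set's lower approximation to its upper approximation. The sketch below is a hedged illustration of that classical measure, not the dissertation's formalism; `indiscernible` is a hypothetical function returning the equivalence class of an element.

        # Rough-set accuracy of approximation: |lower| / |upper|.
        # A value of 1.0 means the target set is crisp (no uncertainty);
        # smaller values indicate a wider boundary region.
        def rough_accuracy(universe, target, indiscernible):
            lower, upper = set(), set()
            for x in universe:
                cls = indiscernible(x)   # equivalence class of x (a set)
                if cls <= target:        # class entirely inside the target
                    lower |= cls
                if cls & target:         # class overlaps the target
                    upper |= cls
            return len(lower) / len(upper) if upper else 1.0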

    STAIRS 2014: Proceedings of the 7th European Starting AI Researcher Symposium

    Recent Trends in Computational Intelligence

    Traditional models struggle to cope with complexity, noise, and changing environments, whereas Computational Intelligence (CI) offers solutions to complicated problems, including inverse problems. The main feature of CI is adaptability, spanning the fields of machine learning and computational neuroscience. CI also comprises biologically inspired techniques such as swarm intelligence, as part of evolutionary computation, and extends to wider areas such as image processing, data collection, and natural language processing. This book discusses the use of CI for the optimal solving of various applications, demonstrating its wide reach and relevance. Combining optimization methods with data mining strategies makes for a strong and reliable prediction tool for handling real-life applications.

    A prescriptive framework for recommending decision attributes of infrastructure disaster recovery problems

    This paper proposes a framework to systematically evaluate and select attributes of decision models used in disaster risk management. To that end, we formalized the attribute selection process as a sequential screening-utility problem by formulating a prescriptive decision model. The aim is to assist decision-makers in producing a ranked list of attributes and selecting a set among them. We developed an evaluation process consisting of ten criteria in three sequential stages, using a combination of three decision rules alongside mathematically integrated compensatory and non-compensatory aggregation techniques. We implemented the framework in the context of a disaster-resilient transportation network to investigate its performance and outcomes. Results show that the framework acted as an inclusive, systematic decision-aiding mechanism and promoted creative and collaborative decision-making. Preliminary investigations suggest the framework can successfully evaluate and select a tenable set of attributes, though further analyses are required to assess the performance of the produced attributes. The properties of the resulting attributes and the feedback of the users suggest the quality of the outcomes compared to retrospective attributes selected in an unaided process. Researchers and practitioners can use the framework to conduct a systematic problem-structuring phase of decision analysis and select an equitable set of decision attributes.
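
    A hedged sketch of the screening-utility idea described above: a non-compensatory cutoff rule first screens out candidate attributes that fail any criterion's minimum, then a compensatory weighted sum ranks the survivors. The criteria, weights, and cutoffs here are invented for illustration; the paper's actual framework uses ten criteria across three stages and three decision rules.

        # Illustrative two-stage screening-utility selection.
        def screen_and_rank(candidates, weights, cutoffs):
            """candidates: {name: {criterion: score}}.
            Stage 1 (non-compensatory): drop anything below a cutoff.
            Stage 2 (compensatory): rank survivors by weighted sum."""
            survivors = {
                name: crit for name, crit in candidates.items()
                if all(crit[c] >= t for c, t in cutoffs.items())
            }
            ranked = sorted(
                ((sum(weights[c] * crit[c] for c in weights), name)
                 for name, crit in survivors.items()),
                reverse=True,
            )
            return [(name, utility) for utility, name in ranked]

        # Example: three hypothetical candidate attributes, two criteria.
        ranked = screen_and_rank(
            {"travel_time": {"relevance": 0.9, "measurability": 0.7},
             "cost":        {"relevance": 0.8, "measurability": 0.9},
             "aesthetics":  {"relevance": 0.3, "measurability": 0.6}},
            weights={"relevance": 0.6, "measurability": 0.4},
            cutoffs={"relevance": 0.5},
        )
        # -> cost and travel_time are ranked; aesthetics is screened out.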