
    The posterity of Zadeh's 50-year-old paper: A retrospective in 101 Easy Pieces – and a Few More

    This article was commissioned by the 22nd IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) to celebrate the 50th anniversary of Lotfi Zadeh's seminal 1965 paper on fuzzy sets. In addition to Lotfi's original paper, this note itemizes 100 citations of books and papers deemed "important (significant, seminal, etc.)" by 20 of the 21 living IEEE CIS Fuzzy Systems pioneers. Each of the 20 contributors supplied 5 citations, and Lotfi's paper makes the overall list a tidy 101, as in "Fuzzy Sets 101". This note is not a survey in any real sense of the word, but the contributors did offer short remarks to indicate the reason for inclusion (e.g., historical, topical, seminal) of each citation. Citation statistics are easy to find and notoriously erroneous, so we refrain from reporting them - almost. The exception is that, according to Google Scholar on April 9, 2015, Lotfi's 1965 paper had been cited 55,479 times.

    Data Science: Measuring Uncertainties

    With the increase in data processing and storage capacity, a large amount of data is available, but data without analysis has little value. The demand for data analysis is therefore increasing daily, and the consequence is the appearance of a large number of jobs and published articles. Data science has emerged as a multidisciplinary field to support data-driven activities, integrating and developing ideas, methods, and processes to extract information from data. It includes methods built from several knowledge areas: Statistics, Computer Science, Mathematics, Physics, Information Science, and Engineering; this mixture of areas has given rise to what we call Data Science. New problems that generate large volumes of data are appearing rapidly, and current and future challenges require greater care in creating solutions suited to the rationale of each type of problem. Labels such as Big Data, Data Science, Machine Learning, Statistical Learning, and Artificial Intelligence demand more sophistication in their foundations and in how they are applied, which highlights the importance of building the foundations of Data Science. This book is dedicated to solutions and discussions of measuring uncertainties in data analysis problems.

    Variable modeling of fuzzy phenomena with industrial applications

    Includes abstract. Includes bibliographical references (leaves 98-100)

    Fuzzy Mathematics

    This book provides a timely overview of topics in fuzzy mathematics, laying the foundation for further research and applications in a broad range of areas. It contains breakthrough analysis of how results from the many variations and extensions of fuzzy set theory can be obtained from known results of traditional fuzzy set theory. The book contains not only theoretical results but also a wide range of applications in areas such as decision analysis, optimal allocation in possibilistic and mixed models, pattern classification, credibility measures, algorithms for modeling uncertain data, and numerical methods for solving fuzzy linear systems. The book offers an excellent reference for advanced undergraduate and graduate students in applied and theoretical fuzzy mathematics, and researchers and referees in fuzzy set theory will find it of great value.
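    As a minimal illustration of the traditional fuzzy set theory the book builds on, a fuzzy set assigns each element of a universe a membership degree in [0, 1], and the standard union, intersection, and complement act pointwise. The sketch below uses a hypothetical temperature example; the sets and degrees are illustrative assumptions, not taken from the book.

        # A minimal sketch of traditional fuzzy set operations (illustrative only).
        # A fuzzy set over a finite universe is a map element -> degree in [0, 1].

        def fuzzy_union(a: dict, b: dict) -> dict:
            """Standard fuzzy union: pointwise maximum of membership degrees."""
            return {x: max(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

        def fuzzy_intersection(a: dict, b: dict) -> dict:
            """Standard fuzzy intersection: pointwise minimum of membership degrees."""
            return {x: min(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

        def fuzzy_complement(a: dict) -> dict:
            """Standard fuzzy complement: 1 minus the membership degree."""
            return {x: 1.0 - mu for x, mu in a.items()}

        # Hypothetical example: "warm" and "hot" as fuzzy sets over temperatures (deg C).
        warm = {15: 0.2, 20: 0.7, 25: 1.0, 30: 0.6}
        hot = {20: 0.1, 25: 0.4, 30: 0.9, 35: 1.0}
        print(fuzzy_union(warm, hot))         # max-based union
        print(fuzzy_intersection(warm, hot))  # min-based intersection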

    Software defect prediction using maximal information coefficient and fast correlation-based filter feature selection

    Software quality assurance aims to ensure that developed applications are failure-free. Some modern systems are intricate due to the complexity of their information processes. Software fault prediction is an important quality assurance activity, since a mechanism that correctly predicts the defect proneness of modules and classifies them accordingly saves resources, time and developers' efforts. In this study, a model that selects relevant features for use in defect prediction was proposed. The literature review revealed that process metrics, which are based on historic source code over time, are better predictors of defects in versioning systems. These metrics are extracted from the source-code module and include, for example, the number of additions and deletions in the source code, the number of distinct committers and the number of modified lines. In this research, defect prediction was conducted on open source software (OSS) of software product line(s) (SPL), hence process metrics were chosen. Data sets used in defect prediction may contain non-significant and redundant attributes that affect the accuracy of machine-learning algorithms, so to improve the prediction accuracy of classification models, only features that are significant to the defect prediction process are utilised. In machine learning, feature selection techniques are applied to identify the relevant data; feature selection is a pre-processing step that helps to reduce the dimensionality of the data, and such techniques include information-theoretic methods based on the entropy concept. This study experimentally evaluated the efficiency of these feature selection techniques and found that software defect prediction using significant attributes improves prediction accuracy. A novel MICFastCR model was developed, which uses the Maximal Information Coefficient (MIC) to select significant attributes and the Fast Correlation-Based Filter (FCBF) to eliminate redundant attributes; machine learning algorithms were then run to predict software defects. The MICFastCR model achieved the highest prediction accuracy as reported by various performance measures. School of Computing, Ph. D. (Computer Science).
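    A minimal sketch of the filter pipeline described above, with symmetrical uncertainty (the entropy-based measure underlying FCBF) serving as both the relevance and the redundancy score; in the thesis's MICFastCR model the relevance score comes from the Maximal Information Coefficient instead. The greedy redundancy pass is a simplified rendering of FCBF, and the feature matrix, labels, and threshold are hypothetical.

        import numpy as np

        def entropy(x):
            """Shannon entropy (bits) of a discrete 1-D array."""
            _, counts = np.unique(x, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        def mutual_information(x, y):
            """I(X;Y) = H(X) + H(Y) - H(X,Y) for discrete arrays."""
            xy = np.stack([x, y], axis=1)
            _, counts = np.unique(xy, axis=0, return_counts=True)
            p = counts / counts.sum()
            h_xy = -np.sum(p * np.log2(p))
            return entropy(x) + entropy(y) - h_xy

        def symmetrical_uncertainty(x, y):
            """SU(X,Y) = 2*I(X;Y) / (H(X) + H(Y)), normalized to [0, 1]."""
            denom = entropy(x) + entropy(y)
            return 2.0 * mutual_information(x, y) / denom if denom > 0 else 0.0

        def fcbf(X, y, threshold=0.1):
            """Simplified FCBF: rank features by SU with the class, then greedily
            drop any feature more correlated with an already-kept feature than
            with the class (the redundancy criterion)."""
            su_class = [symmetrical_uncertainty(X[:, j], y) for j in range(X.shape[1])]
            ranked = sorted((j for j in range(X.shape[1]) if su_class[j] >= threshold),
                            key=lambda j: -su_class[j])
            selected = []
            for j in ranked:
                if all(symmetrical_uncertainty(X[:, j], X[:, k]) < su_class[j]
                       for k in selected):
                    selected.append(j)
            return selected

        # Hypothetical discretized process-metric matrix (rows: modules) and labels.
        rng = np.random.default_rng(0)
        X = rng.integers(0, 3, size=(200, 6))
        y = (X[:, 0] + rng.integers(0, 2, size=200) > 1).astype(int)
        print("selected feature indices:", fcbf(X, y))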

    Data Mining

    Data mining is a branch of computer science used to automatically extract meaningful, useful knowledge and previously unknown, hidden, interesting patterns from large amounts of data to support the decision-making process. This book presents recent theoretical and practical advances in the field of data mining. It discusses a number of data mining methods, including classification, clustering, and association rule mining, and brings together many successful data mining studies in areas such as health, banking, education, software engineering, animal science, and the environment.
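    As one concrete instance of the association rule mining the book covers, the support and confidence of a rule A => B can be computed directly from transaction counts, as in the sketch below; the market-basket data and thresholds are hypothetical.

        from itertools import combinations

        # Hypothetical market-basket transactions.
        transactions = [
            {"bread", "milk"},
            {"bread", "butter", "milk"},
            {"butter", "milk"},
            {"bread", "butter"},
            {"bread", "butter", "milk"},
        ]

        def support(itemset):
            """Fraction of transactions that contain every item in `itemset`."""
            return sum(itemset <= t for t in transactions) / len(transactions)

        def confidence(antecedent, consequent):
            """Estimated P(consequent | antecedent)."""
            return support(antecedent | consequent) / support(antecedent)

        # Report one-to-one rules that clear minimum support and confidence.
        items = sorted(set().union(*transactions))
        for a, b in combinations(items, 2):
            for ant, con in (({a}, {b}), ({b}, {a})):
                if support(ant | con) >= 0.4 and confidence(ant, con) >= 0.7:
                    print(ant, "=>", con,
                          f"support={support(ant | con):.2f}",
                          f"confidence={confidence(ant, con):.2f}")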

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, and management. Since data are required to build machine learning networks, sensors are one of the most important technologies, and machine learning networks can in turn contribute to improved sensor performance and the creation of new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, camera calibration for intelligent vehicles, error detection, color prior models, compressive sensing, wildfire risk assessment, shelf auditing, forest-growing stem volume estimation, road management, image denoising, and touchscreens.

    An architecture for a network traffic anomaly detection system based on entropy analysis

    With the steady increase in reliance on computer networks in all aspects of life, computers and other connected devices have become more vulnerable to attacks, which exposes them to many major threats, especially in recent years. Different systems exist to protect networks from these threats, such as firewalls, antivirus programs, and data encryption, but it is still hard to provide complete protection for networks and their systems against attacks that grow more sophisticated with time. That is why intrusion detection systems (IDS) are needed on a large scale as a second line of defense for computer and network systems, alongside other network security techniques. The main objective of intrusion detection systems is to monitor network traffic and detect internal and external attacks. Intrusion detection systems are an important focus of study today, because most protection systems, no matter how good they are, can fail due to the emergence of new (previously unknown) types of intrusion. Most existing techniques detect network intrusions by collecting information about known types of attacks (so-called signature-based IDS) and using it to recognize any attempted attack on data or resources. The major problem of this approach is its inability to detect previously unknown attacks, even when these attacks differ only slightly from known ones (so-called zero-day attacks); it is also powerless to detect encryption-related attacks. Detecting abnormalities with respect to conventional behavior (anomaly-based IDS), on the other hand, overcomes these limitations, and many scientific studies have aimed to build modern, smart systems that detect both known and unknown intrusions. In this research, an architecture that applies a new technique for IDS, using an anomaly-based detection method founded on entropy, is introduced. Network behavior analysis relies on profiling legitimate network behavior in order to efficiently detect anomalous traffic deviations that indicate security threats. Entropy-based detection techniques are attractive due to their simplicity and applicability to real-time network traffic, with no need to train the system with labelled data. Although the NetFlow protocol provides only a basic set of information about network communications, it is very useful for identifying zero-day attacks and suspicious behavior in traffic structure; the challenge associated with this limited information, combined with the simplicity of the entropy-based approach, is to provide an efficient and sensitive mechanism to detect a wide range of anomalies, including those of small intensity. However, a recent study of generic entropy-based anomaly detection reported its vulnerability to deception, whereby spoofed data are introduced to mask an abnormality. Furthermore, the majority of approaches to the further classification of anomalies rely on machine learning, which brings additional complexity. These shortcomings and limitations open up a space for exploring new techniques and methodologies for detecting anomalies in network traffic in order to isolate security threats, which is the main subject of the research in this thesis.
    This research addresses all these issues by providing a systematic methodology whose main novelty lies in anomaly detection and classification based on the entropy of flow count and behavior features extracted from the basic data provided by the NetFlow protocol. Two new approaches are proposed. Firstly, an effective protection mechanism against entropy deception is derived from studying changes in several entropy types, such as the Shannon, Rényi, and Tsallis entropies, and from measuring the number of distinct elements in a feature distribution as a new detection metric; the suggested method improves the reliability of entropy approaches. Secondly, an anomaly classification technique is added to the existing entropy-based anomaly detection system. Entropy-based anomaly classification methods were presented and effectively confirmed by tests based on a multivariate analysis of the entropy changes of several features as well as aggregation by complex feature combinations. Through an analysis of the most prominent security attacks, generalized network traffic behavior models were developed to describe various communication patterns, and, based on a multivariate analysis of the entropy changes caused by anomalies in each of the modelled classes, anomaly classification rules were proposed and verified through experiments. The concept of behavior features is generalized, while the proposed data partitioning provides greater efficiency in real-time anomaly detection. The practicality of the proposed architecture for implementing an effective anomaly detection and classification system in a general real-world network environment is demonstrated using experimental data.
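    A minimal sketch of the detection idea described above: compute the Shannon, Rényi, and Tsallis entropies plus the distinct-element count over the per-window distribution of a flow feature (destination ports here) and flag windows whose metric vector deviates from a learned baseline. The window data, entropy parameters, and deviation threshold are illustrative assumptions rather than the thesis's tuned values.

        import numpy as np

        def distribution(values):
            """Empirical probability distribution of a feature within one window."""
            _, counts = np.unique(values, return_counts=True)
            return counts / counts.sum()

        def shannon(p):
            return -np.sum(p * np.log2(p))

        def renyi(p, alpha=2.0):
            # Rényi entropy of order alpha (alpha != 1).
            return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

        def tsallis(p, q=2.0):
            # Tsallis entropy of order q (q != 1).
            return (1.0 - np.sum(p ** q)) / (q - 1.0)

        def window_metrics(values):
            """Entropy vector plus distinct-element count for one window."""
            p = distribution(values)
            return np.array([shannon(p), renyi(p), tsallis(p), len(p)])

        def detect(window, baseline_mean, baseline_std, k=3.0):
            """Flag the window if any metric deviates more than k baseline stds."""
            z = np.abs(window_metrics(window) - baseline_mean) / baseline_std
            return bool(np.any(z > k))

        # Hypothetical destination-port data: normal windows vs. a scan-like window.
        rng = np.random.default_rng(1)
        normal_windows = [rng.choice([80, 443, 53, 22], size=500, p=[.5, .3, .15, .05])
                          for _ in range(50)]
        history = np.array([window_metrics(w) for w in normal_windows])
        mean, std = history.mean(axis=0), history.std(axis=0) + 1e-9

        scan = rng.integers(1, 65536, size=500)  # port scan: near-uniform ports
        print(detect(normal_windows[0], mean, std))  # expected False
        print(detect(scan, mean, std))               # expected True

    Tracking several entropy types together with the distinct-element count mirrors the deception-resistance argument above: padding a distribution with spoofed records to restore one measure tends to disturb the others.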

    Algebraic Structures of Neutrosophic Triplets, Neutrosophic Duplets, or Neutrosophic Multisets

    Neutrosophy (1995) is a new branch of philosophy that studies triads of the form (<A>, <neutA>, <antiA>), where <A> is an entity (i.e., element, concept, idea, theory, logical proposition, etc.), <antiA> is the opposite of <A>, while <neutA> is the neutral (or indeterminate) between them, i.e., neither <A> nor <antiA>. Based on neutrosophy, the neutrosophic triplets were founded; they have a similar form (x, neut(x), anti(x)) and satisfy several axioms, for each element x in a given set. This collective book presents original research papers by many neutrosophic researchers from around the world that report on the state of the art and recent advances in neutrosophic triplets, neutrosophic duplets, neutrosophic multisets and their algebraic structures, which were defined as recently as 2016 but have already gained the interest of researchers worldwide. Connections between classical algebraic structures and neutrosophic triplet / duplet / multiset structures are also studied, as are numerous neutrosophic applications in various fields, such as multi-criteria decision making, image segmentation, medical diagnosis, fault diagnosis, clustering data, neutrosophic probability, human resource management, strategic planning, forecasting models, multi-granulation, supplier selection problems, typhoon disaster evaluation, skin lesion detection, and mining algorithms for big data analysis.
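    As a small concrete illustration of the triplet definition above, the sketch below brute-forces neutrosophic triplets in the commutative groupoid (Z_n, * mod n); the choice of this structure and of n = 6 is an illustrative assumption. In (Z_6, * mod 6), for instance, (2, 4, 2) is a triplet, since 2*4 = 8 = 2 (mod 6) and 2*2 = 4 = neut(2).

        from itertools import product

        def neutrosophic_triplets(n, identity=1):
            """Enumerate neutrosophic triplets (x, neut(x), anti(x)) in (Z_n, * mod n).
            neut(x) must satisfy x*neut(x) == x while differing from the classical
            identity element, and anti(x) must satisfy x*anti(x) == neut(x)."""
            triplets = []
            for x, neut in product(range(n), repeat=2):
                if neut != identity and (x * neut) % n == x:
                    triplets.extend((x, neut, anti) for anti in range(n)
                                    if (x * anti) % n == neut)
            return triplets

        # Hypothetical run over Z_6: prints triplets such as (2, 4, 2) and (4, 4, 4).
        print([t for t in neutrosophic_triplets(6) if t[0] in (2, 4)])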

    Uncertain Multi-Criteria Optimization Problems

    Most real-world search and optimization problems naturally involve multiple criteria as objectives. Generally, symmetry, asymmetry, and anti-symmetry are basic characteristics of the binary relationships used when modeling optimization problems, and the notion of symmetry has appeared in many articles about the uncertainty theories employed in multi-criteria problems. Different solutions may produce trade-offs (conflicting scenarios) among different objectives: a better solution with respect to one objective may compromise others. Many factors need to be considered to address such problems in multidisciplinary research, which is critical for the overall sustainability of human development and activity. In recent decades, decision-making theory has accordingly been the subject of intense research activity due to its wide applications in different areas, and it has become an important means of providing real-time solutions to uncertainty problems. Theories available in the existing literature, such as probability theory, fuzzy set theory, type-2 fuzzy set theory, rough set theory, and uncertainty theory, deal with such uncertainties. Nevertheless, the uncertain multi-criteria characteristics of such problems have not yet been explored in depth, and much remains to be achieved in this direction. Hence, different mathematical models of real-life multi-criteria optimization problems can be developed in various uncertain frameworks with special emphasis on optimization problems.
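    A minimal sketch of the trade-off notion described above, assuming all objectives are minimized and using hypothetical (cost, risk) vectors: one solution Pareto-dominates another if it is no worse in every objective and strictly better in at least one, and the non-dominated set is exactly where the conflicting scenarios among criteria live.

        def dominates(a, b):
            """True if objective vector `a` Pareto-dominates `b` (all minimized):
            a is no worse in every objective and strictly better in at least one."""
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        def pareto_front(solutions):
            """Return the non-dominated subset of a list of objective vectors."""
            return [s for s in solutions
                    if not any(dominates(t, s) for t in solutions if t is not s)]

        # Hypothetical (cost, risk) pairs for candidate decisions.
        candidates = [(4, 2), (3, 5), (5, 1), (6, 6), (3, 4)]
        print(pareto_front(candidates))  # trade-offs: lower cost vs. lower risk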