388 research outputs found

    Deep Learning based Densenet Convolution Neural Network for Community Detection in Online Social Networks

    Online Social Networks (OSNs) have become increasingly popular, attracting hundreds of millions of users in recent years. A community in a social network is a virtual group of users with shared interests and activities who want to communicate with one another. The growth of OSNs and their user bases has also increased the need to identify communities. Community structure is an important topological property of an OSN and plays an essential role in various dynamic processes, including the diffusion of information within the network. Most networks exhibit a community structure, and community detection is one of the most continually addressed research issues. However, traditional techniques do not adequately discover communities based on user interests and, as a result, cannot detect active communities. To tackle this issue, this paper presents a Densenet Convolution Neural Network (DnetCNN) approach for community detection. Initially, we gather a dataset from the Kaggle repository and preprocess it to remove inconsistent and missing values. A User Behavior Impact Rate (UBIR) technique is then applied to identify user URL accesses, key terms and page accesses. After that, a Web Crawling Prone Factor Rate (WCPFR) technique is used to find malicious activity using random forest and decision tree methods. Furthermore, a Spider Web Cluster Community based Feature Selection (SWC2FS) algorithm is used to choose the finest attributes in the dataset. Based on these attributes, community groups are found using the Densenet Convolution Neural Network (DnetCNN) approach. The experimental results show better performance than other methods
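
    The abstract gives no architectural details for the DnetCNN classifier, so any code can only be illustrative. The sketch below shows just the core DenseNet idea the name refers to, a dense block in which every layer receives the concatenation of all preceding feature maps; the layer sizes, the use of 1-D convolutions over user-attribute vectors, and the number of community classes are assumptions, not details from the paper.

```python
# Minimal sketch of a DenseNet-style 1-D classifier (assumed layout, not the
# paper's exact DnetCNN): each layer in the dense block is fed the
# concatenation of all previous feature maps.
import torch
import torch.nn as nn

class DenseBlock1d(nn.Module):
    def __init__(self, in_channels, growth_rate=16, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm1d(in_channels + i * growth_rate),
                nn.ReLU(),
                nn.Conv1d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # dense connectivity
            features.append(out)
        return torch.cat(features, dim=1)

class DnetCNNSketch(nn.Module):
    """Hypothetical classifier: user-attribute sequence -> community label."""
    def __init__(self, in_channels=1, num_communities=5):
        super().__init__()
        self.block = DenseBlock1d(in_channels)
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(in_channels + 3 * 16, num_communities)

    def forward(self, x):                  # x: (batch, channels, attributes)
        h = self.pool(self.block(x)).squeeze(-1)
        return self.fc(h)

model = DnetCNNSketch()
logits = model(torch.randn(8, 1, 64))      # 8 users, 64 attributes each
print(logits.shape)                        # torch.Size([8, 5])
```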

    A novel granular approach for detecting dynamic online communities in social network

    Research into community discovery in complex networks continues to surge because of its challenging aspects. Dynamicity and overlap are among the common characteristics of these networks and are the main focus of this paper. In this research, we attempt to approximate the granular, human-inspired viewpoint of such networks, which is especially helpful when making decisions with partial knowledge. In line with the principle of granular computing, in which precision is avoided, we define micro- and macrogranules at two levels, nodes and communities, respectively. The proposed algorithm takes microgranules as input and outputs meaningful communities in the form of rough macrocommunities. For this purpose, the microgranules are drawn toward each other based on a new rough similarity measure defined in this paper. As a result, the structure of communities is revealed and adapted over time, according to the interactions observed in the network, and the number of communities is extracted automatically. The proposed model can deal with both gradual and sharp changes in the network. The algorithm is evaluated on multiple dynamic datasets, and the results confirm the superiority of the proposed algorithm across various measures and scenarios
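
    The rough similarity measure itself is not given in the abstract, so it is not reproduced here; the toy snippet below only illustrates the generic rough-set reading of a macrocommunity, in which a node belongs to the lower approximation when its whole neighbourhood lies inside the community and to the upper approximation when any neighbour does. The graph and the candidate community are made-up placeholders.

```python
# Toy illustration of a rough (lower/upper approximation) view of a community;
# the paper's actual rough similarity measure is not reproduced here.
graph = {                      # adjacency lists of a small undirected network
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e"},  "e": {"d", "f"}, "f": {"e"},
}
community = {"a", "b", "c", "d"}          # candidate macrocommunity (assumed)

lower = {v for v in community if graph[v] <= community}               # core members
upper = {v for v in graph if graph[v] & community or v in community}  # possible members
boundary = upper - lower                                              # uncertain members

print("lower:", sorted(lower))        # ['a', 'b', 'c']
print("boundary:", sorted(boundary))  # ['d', 'e']
```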

    Mitigating the effect of covariates in face recognition

    Current face recognition systems capture faces of cooperative individuals in a controlled environment as part of the face recognition process. It is therefore possible to control lighting, pose, background, and quality of images. However, in a real world application, we have to deal with both ideal and imperfect data. Performance of current face recognition systems degrades in such non-ideal and challenging cases. This research focuses on designing algorithms to mitigate the effect of covariates in face recognition. To address the challenge of facial aging, an age transformation algorithm is proposed that registers two face images and minimizes the aging variations. Unlike the conventional method, the gallery face image is transformed with respect to the probe face image and facial features are extracted from the registered gallery and probe face images. The variations due to disguises cause changes in visual perception, alter actual data, make pertinent facial information disappear, mask features to varying degrees, or introduce extraneous artifacts in the face image. To recognize face images with variations due to age progression and disguises, a granular face verification approach is designed which uses a dynamic feed-forward neural architecture to extract 2D log polar Gabor phase features at different granularity levels. The granular levels provide non-disjoint spatial information which is combined using the proposed likelihood ratio based Support Vector Machine match score fusion algorithm. The face verification algorithm is validated using five face databases, including the Notre Dame face database, the FG-Net face database and three disguise face databases. The information in visible spectrum images is compromised due to improper illumination, whereas infrared images provide invariance to illumination and expression. A multispectral face image fusion algorithm is proposed to address the variations in illumination. The Support Vector Machine based image fusion algorithm learns the properties of the multispectral face images at different resolution and granularity levels to determine optimal information and combines them to generate a fused image. Experiments on the Equinox and Notre Dame multispectral face databases show that the proposed algorithm outperforms existing algorithms. We next propose a face mosaicing algorithm to address the challenge due to pose variations. The mosaicing algorithm generates a composite face image during enrollment using the evidence provided by frontal and semi-profile face images of an individual. Face mosaicing obviates the need to store multiple face templates representing multiple poses of a user's face image. Experiments conducted on three different databases indicate that face mosaicing offers significant benefits by accounting for the pose variations that are commonly observed in face images. Finally, the concept of online learning is introduced to address the problem of classifier re-training and update. A learning scheme for the Support Vector Machine is designed to train the classifier in online mode. This enables the classifier to update the decision hyperplane in order to account for newly enrolled subjects. On a heterogeneous near infrared face database, a case study using Principal Component Analysis and C2 feature algorithms shows that the proposed online classifier significantly improves verification performance both in terms of accuracy and computational time
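
    The likelihood-ratio-based fusion above is specific to the thesis and its features are not detailed here, so the following is only a generic illustration of SVM match-score fusion with scikit-learn: scores from two hypothetical matchers (for example, two granularity levels) are stacked and a classifier separates genuine from impostor pairs. All scores are synthetic.

```python
# Generic match-score fusion sketch (not the thesis' exact likelihood-ratio
# formulation): combine scores from two matchers with an SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic scores: column 0 = matcher A, column 1 = matcher B.
genuine = rng.normal(loc=[0.7, 0.8], scale=0.1, size=(200, 2))
impostor = rng.normal(loc=[0.3, 0.4], scale=0.1, size=(200, 2))
X = np.vstack([genuine, impostor])
y = np.concatenate([np.ones(200), np.zeros(200)])    # 1 = genuine pair

fusion = SVC(kernel="rbf", probability=True).fit(X, y)
# Fused decision for a new pair of match scores:
print(fusion.predict_proba([[0.65, 0.75]])[0, 1])     # probability of genuine
```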

    Memristor Platforms for Pattern Recognition: Memristor Theory, Systems and Applications

    In the last decade a large scientific community has focused on the study of the memristor. The memristor is considered by many to be the best alternative to CMOS technology, which is gradually showing its flaws. Transistor technology has developed fast from both a research and an industrial point of view, reducing the size of its elements to the nano-scale. It has become possible to build increasingly complex machinery and to communicate with that machinery thanks to the development of programming languages based on combinations of Boolean operands. However, as shown by Moore's law, the steep curve of CMOS implementation and development is gradually reaching a plateau. There is a clear need to study new elements that can combine the efficiency of transistors while increasing the complexity of the operations they support. Memristors can be described as non-linear resistors capable of retaining memory of the resistance state they reached. Since their first theoretical treatment by Professor Leon O. Chua in 1971, different research groups have devoted their expertise to studying both the fabrication and the implementation of this promising technology. In the following thesis a complete study of memristors and memristive elements is presented. The road map of this study starts from a deep understanding of the physics that governs memristors, focusing on the HP model by Dr. Stanley Williams. Other devices such as phase change memories (PCMs) and memristive biosensors made with Si nanowires have been studied, and emulators and equivalent circuits have been developed to describe their complex dynamics. This part sets the first milestone of a pathway that passes through more complex implementations such as neuromorphic systems and memristor-based neural networks, proving their computing efficiency. Finally, a patented memristor-based technology is presented, demonstrating its efficacy for clinical applications. The system has been designed to detect and automatically assess chronic wounds, a syndrome that affects roughly 2% of the world population, through a Cellular Automaton that analyzes and processes digital images of ulcers. Thanks to its precision in measuring the lesions, the proposed solution promises not only to increase healing rates, but also to prevent the worsening of wounds that often leads to amputation and death
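
    For reference, the linear-drift HP memristor model mentioned above can be simulated in a few lines; the device parameters below (R_on, R_off, D, dopant mobility) are illustrative textbook-scale values, not those of any specific device in the thesis.

```python
# Minimal simulation of the linear-drift HP memristor model:
#   M(w) = R_on * (w / D) + R_off * (1 - w / D),   dw/dt = mu_v * (R_on / D) * i(t)
import numpy as np

R_on, R_off = 100.0, 16e3      # on/off resistance (ohm), illustrative values
D, mu_v = 10e-9, 1e-14         # device length (m), dopant mobility (m^2 V^-1 s^-1)
w = 0.1 * D                    # initial width of the doped region

dt = 1e-4
t = np.arange(0.0, 2.0, dt)
v = np.sin(2 * np.pi * 1.0 * t)           # 1 V, 1 Hz sinusoidal drive

current = []
for vk in v:
    M = R_on * (w / D) + R_off * (1 - w / D)             # instantaneous memristance
    i = vk / M
    w = np.clip(w + mu_v * (R_on / D) * i * dt, 0.0, D)  # bounded state update
    current.append(i)

# Plotting current against v traces the characteristic pinched hysteresis loop.
```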

    Uncertainty and Interpretability Studies in Soft Computing with an Application to Complex Manufacturing Systems

    In systems modelling and control theory, the benefits of applying neural networks have been extensively studied, particularly in manufacturing processes such as the prediction of mechanical properties of heat-treated steels. However, modern industrial processes usually involve large amounts of data and a range of non-linear effects and interactions that might hinder model interpretation. For example, in steel manufacturing the understanding of the complex mechanisms by which the heat treatment process generates the mechanical properties is vital. This knowledge is not available via numerical models, so an experienced metallurgist estimates the model parameters to obtain the required properties. This human knowledge and perception can sometimes be imprecise, leading to cognitive uncertainty such as vagueness and ambiguity when making decisions. In system classification, this may translate into a system deficiency - for example, small changes in input attributes may result in a sudden and inappropriate change of class assignment. In order to address this issue, practitioners and researchers have developed systems that are functionally equivalent to fuzzy systems and neural networks. Such systems provide a morphology that mimics the human ability of reasoning via the qualitative aspects of fuzzy information rather than by its quantitative analysis. Furthermore, these models are able to learn from data sets and to describe the associated interactions and non-linearities in the data. However, in a like manner to neural networks, a neural fuzzy system may suffer from a loss of interpretability and transparency when making decisions, mainly due to the application of adaptive approaches for its parameter identification. Since the Radial Basis Function Neural Network (RBF-NN) can be treated as a fuzzy inference engine, this thesis presents several methodologies that quantify different types of uncertainty and their influence on the interpretability and transparency of the RBF-NN during its parameter identification. In particular, three kinds of uncertainty source in relation to the RBF-NN are studied, namely: entropy, fuzziness and ambiguity. First, a methodology based on Granular Computing (GrC), neutrosophic sets and the RBF-NN is presented. The objective of this methodology is to quantify the hesitation produced during granular compression at the low level of interpretability of the RBF-NN via the use of neutrosophic sets. This study also aims to enhance the distinguishability and hence the transparency of the initial fuzzy partition. The effectiveness of the proposed methodology is tested against a real case study for the prediction of the properties of heat-treated steels. Secondly, a new Interval Type-2 Radial Basis Function Neural Network (IT2-RBF-NN) is introduced as a new modelling framework. The IT2-RBF-NN takes advantage of the functional equivalence between FLSs of type-1 and the RBF-NN to construct an Interval Type-2 Fuzzy Logic System (IT2-FLS) that is able to deal with linguistic uncertainty and perceptions in the RBF-NN rule base. This gives rise to different combinations when optimising the IT2-RBF-NN parameters. Finally, a twofold study for uncertainty assessment at the high level of interpretability of the RBF-NN is provided. On the one hand, the first study proposes a new methodology to quantify the (a) fuzziness and (b) ambiguity at each RU, and during the formation of the rule base, via the use of neutrosophic set theory. The aim of this methodology is to calculate the fuzziness associated with each rule and then the ambiguity related to each normalised consequence of the fuzzy rules, which result respectively from the overlapping and from the choice among one-to-many decisions. On the other hand, a second study proposes a new methodology to quantify the entropy and the fuzziness that arise from the redundancy phenomenon during parameter identification. To conclude this work, the experimental results obtained by applying the proposed methodologies to two well-known benchmark data sets and to the prediction of mechanical properties of heat-treated steels led to the publication of three articles in two peer-reviewed journals and one international conference
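
    As background for the discussion above, a Radial Basis Function network can be written very compactly: Gaussian receptive units play the role of the fuzzy-partition (antecedent) layer and a linear readout plays the role of the consequents. The plain type-1 sketch below uses k-means centres, a shared width and a least-squares readout; it does not include the neutrosophic or interval type-2 extensions proposed in the thesis.

```python
# Plain type-1 RBF-NN sketch: Gaussian receptive units + linear least-squares
# readout. Centres are picked with k-means; the width is a single shared value.
import numpy as np
from sklearn.cluster import KMeans

def fit_rbf(X, y, n_units=8, sigma=1.0):
    centres = KMeans(n_clusters=n_units, n_init=10, random_state=0).fit(X).cluster_centers_
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) ** 2
                 / (2 * sigma ** 2))                     # firing strengths
    weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # consequent weights
    return centres, weights

def predict_rbf(X, centres, weights, sigma=1.0):
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) ** 2
                 / (2 * sigma ** 2))
    return Phi @ weights

# Toy regression: y = sin(x0) + 0.5 * x1 on synthetic data.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
centres, weights = fit_rbf(X, y)
print(np.mean((predict_rbf(X, centres, weights) - y) ** 2))   # training MSE
```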

    Evolving Clustering Algorithms And Their Application For Condition Monitoring, Diagnostics, & Prognostics

    Applications of Condition-Based Maintenance (CBM) technology require effective yet generic data-driven methods capable of carrying out diagnostics and prognostics tasks without detailed domain knowledge and human intervention. With the widespread deployment of CBM, improved system availability, operational safety, and enhanced logistics and supply chain performance could be achieved at a lower cost. This dissertation focuses on the development of a Mutual Information based Recursive Gustafson-Kessel-Like (MIRGKL) clustering algorithm which operates recursively to identify the underlying model structure and parameters from streaming data. Inspired by the Evolving Gustafson-Kessel-like Clustering (eGKL) algorithm, we applied the notion of mutual information to the well-known Mahalanobis distance as the governing similarity measure throughout. This is also a special case of the Kullback-Leibler (KL) divergence in which between-cluster shape information (governed by the determinant and trace of the covariance matrix) is omitted, and it is only applicable to normally distributed data. In the cluster assignment and consolidation process, we proposed the use of the Chi-square statistic with the provision of different probability thresholds. Due to the symmetry and boundedness brought in by the mutual information formulation, we have shown with real-world data that the algorithm's performance is less sensitive to the choice of probability threshold within the same range, which makes system tuning a simpler task in practice. The improvement demonstrated by the proposed algorithm therefore has implications for generic data-driven methods for diagnostics, prognostics, function approximation and knowledge extraction from streaming data. The work in this dissertation demonstrates MIRGKL's effectiveness in clustering and knowledge representation and shows promising results in diagnostics and prognostics applications
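
    The chi-square-based assignment mentioned above rests on the fact that, for Gaussian data, the squared Mahalanobis distance of a point to a cluster follows a chi-square distribution with d degrees of freedom. The snippet below is a generic illustration of that gating rule, not the full MIRGKL recursion; the cluster statistics and the probability threshold are made up.

```python
# Generic chi-square gate for cluster assignment: accept a point into a cluster
# if its squared Mahalanobis distance is below the chi2 quantile (d dof).
import numpy as np
from scipy.stats import chi2

mean = np.array([0.0, 0.0])                 # illustrative cluster statistics
cov = np.array([[1.0, 0.3], [0.3, 2.0]])
cov_inv = np.linalg.inv(cov)

def belongs(x, prob=0.95):
    d2 = (x - mean) @ cov_inv @ (x - mean)  # squared Mahalanobis distance
    return d2 <= chi2.ppf(prob, df=len(mean))

print(belongs(np.array([0.5, 1.0])))        # True: close to the cluster
print(belongs(np.array([5.0, 5.0])))        # False: outside the 95% gate
```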

    Initial Condition Estimation in Flux Tube Simulations using Machine Learning

    Space weather has become an essential field of study, as solar flares, coronal mass ejections, and other phenomena can severely impact life on Earth as we know it. The solar wind is threaded by magnetic flux tubes that extend from the solar atmosphere to distances beyond the solar system boundary. As these flux tubes cross the Earth's orbit, it is essential to understand and predict the effects of solar phenomena at 1 AU, but the physical parameters linked to the solar wind formation and acceleration processes are not directly observable. Some existing models, such as MULTI-VP, try to fill this gap by predicting the dynamical and thermal properties of the background solar wind from chosen magnetograms using a coronal field reconstruction method. However, these models take a long time to run, and their performance improves with good initial guesses for the simulation's initial conditions. To address this problem, we propose using various machine learning techniques to obtain good initial guesses that can reduce MULTI-VP's computation time

    A methodology for automatic parameter-tuning and center selection in density-peak clustering methods

    The density-peak clustering algorithm, which we refer to as DPC, is a novel and efficient density-based clustering approach. The method has the advantage of allowing non-convex clusters, and clusters of variable size and density, to be recovered, but it also has some limitations, such as the visual location of centers and the tuning of parameters. This paper describes an optimization-based methodology for automatic parameter/center selection applicable both to DPC and to other algorithms derived from it. The objective function is an internal/external cluster validity index, and the decision variables are the parameterization of the algorithm and the choice of centers. The internal validation measures lead to an automatic parameter-tuning process, and the external validation measures lead to the so-called optimal rules, which are a tool to bound from above the performance of a given algorithm over the set of parameterizations. A numerical experiment with real data was performed for DPC and for the fuzzy weighted k-nearest neighbor variant (FKNN-DPC), which validates the automatic parameter-tuning methodology and demonstrates its efficiency compared to the state of the art
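
    The decision quantities DPC relies on are straightforward to compute: for each point, the local density rho (number of neighbours within a cutoff d_c) and delta, the distance to the nearest point of higher density; cluster centers are the points for which both are large. The sketch below computes these two quantities on synthetic data; the cutoff and the number of centers, which the paper's methodology selects automatically, are simply fixed by hand here.

```python
# Core quantities of density-peak clustering: rho = local density within a
# cutoff d_c, delta = distance to the nearest point of higher density.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
d_c = 0.5                                   # cutoff distance, fixed by hand here
rho = (dist < d_c).sum(axis=1) - 1          # exclude the point itself

delta = np.zeros(len(X))
for i in range(len(X)):
    denser = np.where(rho > rho[i])[0]
    delta[i] = dist[i, denser].min() if len(denser) else dist[i].max()

# Candidate centers: large rho * delta (here we simply take the top 2).
centers = np.argsort(rho * delta)[-2:]
print(X[centers])                           # roughly one point near each blob
```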