85 research outputs found

    Fuzzy rough and evolutionary approaches to instance selection


    A Detailed Study of the Distributed Rough Set Based Locality Sensitive Hashing Feature Selection Technique

    In the context of big data, granular computing has recently been implemented through several mathematical tools, especially Rough Set Theory (RST). Feature selection, a key topic of RST, has been investigated to adapt the related granular concepts of RST to large amounts of data, leading to the development of a distributed RST version. However, despite its scalability, the distributed RST version faces a key challenge: partitioning the feature search space in the distributed environment while guaranteeing data dependency. Therefore, in this manuscript, we propose a new distributed RST version based on Locality Sensitive Hashing (LSH), named LSH-dRST, for big data feature selection. LSH-dRST uses LSH to hash similar features into the same bucket and maps the generated buckets into partitions, enabling a more efficient splitting of the universe. More precisely, in this paper, we perform a detailed analysis of the performance of LSH-dRST by comparing it to the standard distributed RST version, which is based on a random partitioning of the universe. We demonstrate that LSH-dRST is scalable when dealing with large amounts of data, and that it partitions the high-dimensional feature search space more reliably, hence better preserving data dependency in the distributed environment and ensuring a lower computational cost. This work is part of a project that has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 702527.
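    To make the bucketing idea concrete, below is a minimal Python sketch of hashing features into buckets and mapping buckets to partitions. It assumes a signed-random-projection (SimHash-style) LSH over feature columns; the function names, hash family, and round-robin bucket-to-partition rule are illustrative assumptions, not the LSH-dRST implementation.

```python
# Sketch: LSH bucketing of features, then bucket-to-partition mapping.
# Illustrative assumption only; not the authors' LSH-dRST code.
import numpy as np
from collections import defaultdict

def lsh_bucket_features(X, n_hyperplanes=8, seed=0):
    """Hash each feature (column of X) to a bucket via signed random projections,
    so that similar features tend to collide in the same bucket."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    # Each feature is viewed as a vector over the objects of the universe.
    hyperplanes = rng.normal(size=(n_hyperplanes, n_samples))
    signs = (hyperplanes @ X) > 0              # shape: (n_hyperplanes, n_features)
    buckets = defaultdict(list)
    for j in range(n_features):
        buckets[tuple(signs[:, j])].append(j)  # bucket signature -> feature indices
    return buckets

def buckets_to_partitions(buckets, n_partitions):
    """Assign whole buckets to partitions so similar features stay together."""
    partitions = [[] for _ in range(n_partitions)]
    for i, feature_ids in enumerate(buckets.values()):
        partitions[i % n_partitions].extend(feature_ids)
    return partitions

# Toy usage: 100 objects described by 50 features, split into 4 partitions.
X = np.random.default_rng(1).normal(size=(100, 50))
partitions = buckets_to_partitions(lsh_bucket_features(X), n_partitions=4)
```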

    DTI-SNNFRA: Drug-Target interaction prediction by shared nearest neighbors and fuzzy-rough approximation

    In-silico prediction of repurposable drugs is an effective drug discovery strategy that supplements de novo drug discovery. Reduced development time, lower cost, and the absence of severe side effects are significant advantages of drug repositioning. Recent advanced artificial intelligence (AI) approaches have enormously boosted drug repurposing in terms of throughput and accuracy. However, the growing numbers of drugs and targets, and the massive interactions between them, produce imbalanced data that may not be suitable as direct input to a classification model. Here, we propose DTI-SNNFRA, a framework for predicting drug-target interactions (DTI) based on shared nearest neighbours (SNN) and fuzzy-rough approximation (FRA). It uses sampling techniques to collectively reduce the vast search space covering the available drugs, targets, and the millions of interactions between them. DTI-SNNFRA operates in two stages: first, it uses SNN followed by partitioning clustering to sample the search space; next, it computes the degree of fuzzy-rough approximation and selects a suitable degree threshold to undersample negative samples from all possible drug-target interaction pairs obtained in the first stage. Finally, classification is performed using the positive and the selected negative samples. We have evaluated the efficacy of DTI-SNNFRA using the Area Under the ROC Curve (AUC), Geometric Mean, and F1 Score. The model performs exceptionally well, with a high ROC-AUC score of 0.95. The predicted drug-target interactions are validated against an existing drug-target database (Connectivity Map (CMap)).
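    To illustrate the fuzzy-rough undersampling step, here is a minimal Python sketch. It assumes drug-target pairs are encoded as feature vectors scaled to [0, 1], a mean-absolute-difference fuzzy similarity, and the Kleene-Dienes implicator for the lower approximation; the encoding, threshold value, and function names are assumptions for illustration, not the DTI-SNNFRA pipeline, and the SNN clustering stage is omitted.

```python
# Sketch: fuzzy-rough positive-region membership used to select reliable
# negative drug-target pairs. Illustrative assumption only; not the
# published DTI-SNNFRA implementation.
import numpy as np

def fuzzy_similarity(a, b):
    """Simple [0, 1] similarity between two pair-feature vectors
    (assumes features are scaled to [0, 1])."""
    return 1.0 - float(np.mean(np.abs(a - b)))

def positive_region_membership(x, labeled_X, labeled_y):
    """Fuzzy-rough lower-approximation degree of x w.r.t. the positive
    (interacting) class, with the Kleene-Dienes implicator:
    min over labeled y of max(1 - R(x, y), A(y))."""
    degrees = []
    for feats, label in zip(labeled_X, labeled_y):
        r = fuzzy_similarity(x, feats)
        a = 1.0 if label == 1 else 0.0
        degrees.append(max(1.0 - r, a))
    return min(degrees)

def select_reliable_negatives(candidates, labeled_X, labeled_y, threshold=0.3):
    """Keep candidate pairs whose positive-region membership falls below the
    (assumed) threshold, i.e. pairs unlikely to be true interactions."""
    return [x for x in candidates
            if positive_region_membership(x, labeled_X, labeled_y) < threshold]

# Toy usage with random pair features.
rng = np.random.default_rng(0)
labeled_X = rng.random((20, 10))           # known drug-target pair features
labeled_y = rng.integers(0, 2, size=20)    # 1 = interacting, 0 = non-interacting
candidates = rng.random((100, 10))         # unlabeled candidate pairs
negatives = select_reliable_negatives(candidates, labeled_X, labeled_y)
```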