
    Granular Support Vector Machines Based on Granular Computing, Soft Computing and Statistical Learning

    With the emergence of biomedical informatics, Web intelligence, and E-business, new challenges are arising for knowledge discovery and data mining modeling problems. In this dissertation work, a framework named Granular Support Vector Machines (GSVM) is proposed to systematically and formally combine statistical learning theory, granular computing theory and soft computing theory to address challenging predictive data modeling problems effectively and/or efficiently, with a specific focus on binary classification problems. In general, GSVM works in three steps. Step 1 is granulation, which builds a sequence of information granules from the original dataset or from the original feature space. Step 2 is modeling Support Vector Machines (SVM) in some of these information granules when necessary. Finally, step 3 is aggregation, which consolidates the information in these granules at a suitable level of abstraction. A good granulation method that finds suitable granules is crucial for modeling a good GSVM. Under this framework, many different granulation algorithms, including the GSVM-CMW (cumulative margin width) algorithm, the GSVM-AR (association rule mining) algorithm, a family of GSVM-RFE (recursive feature elimination) algorithms, the GSVM-DC (data cleaning) algorithm and the GSVM-RU (repetitive undersampling) algorithm, are designed for binary classification problems with different characteristics. Empirical studies in the biomedical domain and many other application domains demonstrate that the framework is promising. As a preliminary step, this dissertation work will be extended in the future to build a Granular Computing based Predictive Data Modeling framework (GrC-PDM) with which we can create hybrid adaptive intelligent data mining systems for high-quality prediction.
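    The three-step workflow can be illustrated with a short sketch. The k-means granulation and the per-granule routing used for aggregation below are illustrative placeholders, not the GSVM-CMW, GSVM-AR, GSVM-RFE, GSVM-DC or GSVM-RU algorithms from the dissertation.

```python
# Minimal sketch of the 3-step GSVM workflow on a binary classification task.
# Granulation via k-means and per-granule routing are illustrative stand-ins,
# not the specific granulation algorithms named above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def gsvm_fit(X, y, n_granules=4):
    # Step 1: granulation -- partition the data into information granules.
    granulator = KMeans(n_clusters=n_granules, n_init=10, random_state=0).fit(X)
    models = {}
    for g in range(n_granules):
        idx = granulator.labels_ == g
        # Step 2: model an SVM only where the granule contains both classes.
        if len(np.unique(y[idx])) == 2:
            models[g] = SVC(kernel="rbf").fit(X[idx], y[idx])
        else:
            models[g] = int(y[idx][0])  # single-class granule: constant label
    return granulator, models

def gsvm_predict(granulator, models, X):
    # Step 3: aggregation -- route each sample to its granule's model.
    granules = granulator.predict(X)
    preds = np.empty(len(X), dtype=int)
    for i, g in enumerate(granules):
        m = models[g]
        preds[i] = m if isinstance(m, int) else m.predict(X[i:i + 1])[0]
    return preds
```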

    A fuzzy-based reliability system for JXTA-overlay P2P platform considering data download speed, peer congestion situation, number of interactions and packet loss parameters

    In this paper, we propose and evaluate a new fuzzy-based reliability system for Peer-to-Peer (P2P) communications in the JXTA-Overlay platform, considering the peer congestion situation as a new parameter. In our system, we considered four input parameters: Data Download Speed (DDS), Peer Congestion Situation (PCS), Number of Interactions (NI) and Packet Loss (PL) to decide the Peer Reliability (PR). We evaluate the proposed system by computer simulations. The simulation results show that the proposed system performs well and can choose reliable peers to connect to in the JXTA-Overlay platform.
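    As an illustration of this style of fuzzy inference, the sketch below maps normalized DDS, PCS, NI and PL values to a peer reliability score. The membership functions and the two rules are assumptions made for demonstration; they are not the rule base of the proposed system.

```python
# Illustrative fuzzy inference for peer reliability (PR) from DDS, PCS, NI, PL.
# Membership functions and rules are assumed for demonstration only.
def tri(x, a, b, c):
    """Triangular membership function peaking at b on [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def peer_reliability(dds, pcs, ni, pl):
    # Fuzzify each (normalized, 0..1) input into "low" and "high" sets.
    low  = {"dds": tri(dds, -0.5, 0.0, 1.0), "pcs": tri(pcs, -0.5, 0.0, 1.0),
            "ni":  tri(ni,  -0.5, 0.0, 1.0), "pl":  tri(pl,  -0.5, 0.0, 1.0)}
    high = {k: 1.0 - v for k, v in low.items()}
    # Two assumed rules: fast download, many interactions, low loss and low
    # congestion -> reliable; high loss or high congestion -> unreliable.
    reliable   = min(high["dds"], high["ni"], low["pl"], low["pcs"])
    unreliable = max(high["pl"], high["pcs"])
    # Defuzzify with a weighted average of the two rule outputs (1.0 and 0.0).
    total = reliable + unreliable
    return reliable / total if total > 0 else 0.5

print(peer_reliability(dds=0.9, pcs=0.1, ni=0.8, pl=0.05))
```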

    Programming agent-based demographic models with cross-state and message-exchange dependencies: A study with speculative PDES and automatic load-sharing

    Agent-based modeling and simulation is a versatile and promising methodology to capture complex interactions among entities and their surrounding environment. A great advantage is its ability to model phenomena at a macro scale by exploiting simpler descriptions at a micro level. It has been proven effective in many fields, and it is rapidly becoming a de facto standard in the study of population dynamics. In this article we study programmability and performance aspects of the latest-generation ROOT-Sim speculative PDES environment for multi/many-core shared-memory architectures. ROOT-Sim transparently offers a programming model where interactions can be based on both explicit message passing and in-place state accesses. We introduce programming guidelines for systematic exploitation of these facilities in agent-based simulations, and we study the effects on performance of an innovative load-sharing policy targeting these types of dependencies. An experimental assessment with synthetic and real-world applications is provided to validate our proposal.
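    As a toy illustration of the two interaction styles mentioned above, the sequential sketch below lets demographic "region" agents interact both by explicit message exchange and by direct cross-state access. It is only a conceptual stand-in: it is not ROOT-Sim's C API, and it performs no speculative processing or load-sharing.

```python
# Toy illustration (not the ROOT-Sim API) of the two dependency types above:
# explicit message exchange vs. in-place cross-state access between agents.
import random

class Region:
    def __init__(self, name):
        self.name = name
        self.population = 100
        self.inbox = []            # explicit message-exchange dependency

    def step(self, neighbors):
        # Message exchange: notify a random neighbor about migrating people.
        migrants = random.randint(0, 5)
        self.population -= migrants
        random.choice(neighbors).inbox.append(migrants)
        # Cross-state access: read (and adjust) a neighbor's state directly,
        # without an intermediate message.
        densest = max(neighbors, key=lambda n: n.population)
        if densest.population > self.population + 20:
            densest.population -= 1
            self.population += 1

    def deliver(self):
        self.population += sum(self.inbox)
        self.inbox.clear()

regions = [Region(f"region-{i}") for i in range(4)]
for _ in range(10):                      # sequential stand-in for the PDES loop
    for r in regions:
        r.step([n for n in regions if n is not r])
    for r in regions:
        r.deliver()
print({r.name: r.population for r in regions})
```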

    A fuzzy-based reliability system for JXTA-overlay P2P platform considering sustained communication time as a new parameter

    In this paper, we propose and evaluate a new fuzzy-based reliability system for Peer-to-Peer (P2P) communications in the JXTA-Overlay platform, considering the sustained communication time as a new parameter. In our system, we considered four input parameters: Data Download Speed (DDS), Local Score (LS), Number of Interactions (NI) and Sustained Communication Time (SCT) to decide the Peer Reliability (PR). We evaluate the proposed system by computer simulations. The simulation results show that the proposed system performs well and can choose reliable peers to connect to in the JXTA-Overlay platform.

    A comparison study for two fuzzy-based systems: improving reliability and security of JXTA-overlay P2P platform

    The reliability of peers is very important for safe communication in peer-to-peer (P2P) systems. The reliability of a peer can be evaluated based on its reputation and its interactions with other peers in providing different services. However, deciding peer reliability requires many parameters, which makes the problem NP-hard. In this paper, we present two fuzzy-based systems (called FBRS1 and FBRS2) to improve the reliability of the JXTA-Overlay P2P platform. In FBRS1, we considered three input parameters: number of interactions (NI), security (S) and packet loss (PL) to decide the peer reliability (PR). In FBRS2, we considered four input parameters: NI, S, PL and local score to decide the PR. We compare the proposed systems by computer simulations. Comparing the complexity of FBRS1 and FBRS2, FBRS2 is more complex than FBRS1. However, it also considers the local score, which makes it more reliable than FBRS1.

    Effects of sustained communication time on reliability of JXTA-Overlay P2P platform: a comparison study for two fuzzy-based systems

    In P2P systems, each peer has to obtain information about other peers and propagate that information through neighboring peers. Thus, it is important for each peer to have some number of neighbor peers. Moreover, it is even more important that each peer has reliable neighbor peers; in reality, a peer might be faulty or might send obsolete or even incorrect information to other peers. We have implemented a P2P platform called JXTA-Overlay, which defines a set of protocols that standardize how different devices may communicate and collaborate with each other. JXTA-Overlay provides a set of basic functionalities, or primitives, intended to be as complete as possible to satisfy the needs of most JXTA-based applications. In this paper, we present two fuzzy-based systems (called FPRS1 and FPRS2) to improve the reliability of the JXTA-Overlay P2P platform, and we make a comparison study between the two fuzzy-based reliability systems. Comparing the complexity of FPRS1 and FPRS2, FPRS2 is more complex than FPRS1. However, it also considers the sustained communication time, which makes the platform more reliable.

    A synthetic sample of short-cadence solar-like oscillators for TESS

    NASA's Transiting Exoplanet Survey Satellite (TESS) has begun a two-year survey of most of the sky, which will include lightcurves for thousands of solar-like oscillators sampled at a cadence of two minutes. To prepare for this steady stream of data, we present a mock catalogue of lightcurves, designed to realistically mimic the properties of the TESS sample. In the process, we also present the first public release of the asteroFLAG Artificial Dataset Generator, which simulates lightcurves of solar-like oscillators based on input mode properties. The targets are drawn from a simulation of the Milky Way's populations and are selected in the same way as TESS's true Asteroseismic Target List. The lightcurves are produced by combining stellar models, pulsation calculations and semi-empirical models of solar-like oscillators. We describe the details of the catalogue and provide several examples. We provide pristine lightcurves to which noise can be added easily. This mock catalogue will be valuable in testing asteroseismology pipelines for TESS, and our methods can be applied in preparation and planning for other observatories and observing campaigns. Comment: 14 pages, 6 figures, accepted for publication in ApJS. Archives containing the mock catalogue are available at https://doi.org/10.5281/zenodo.1470155 and the pipeline to produce it at https://github.com/warrickball/s4tess . The first public release of the asteroFLAG Artificial Dataset Generator v3 (AADG3) is described at https://warrickball.github.io/AADG3
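    A minimal sketch of how such a synthetic lightcurve can be assembled is shown below: a handful of stochastically excited modes plus white noise, sampled at the two-minute cadence. The mode frequencies, amplitudes, phase-wander scale and noise level are placeholder values, not output of AADG3 or entries from the mock catalogue.

```python
# Minimal sketch of a synthetic solar-like oscillator lightcurve: a sum of
# stochastically excited modes under a Gaussian envelope plus white noise,
# sampled at TESS's 2-minute cadence. All parameters are placeholders.
import numpy as np

cadence = 120.0                          # seconds (TESS short cadence)
n_samples = 20000
t = np.arange(n_samples) * cadence

rng = np.random.default_rng(0)
nu_max, dnu = 3090e-6, 135e-6            # Hz, roughly solar values (assumed)
flux = np.zeros(n_samples)
for k in range(-3, 4):                   # a handful of radial-like modes
    nu = nu_max + k * dnu
    amp = 3.0 * np.exp(-0.5 * (k / 2.0) ** 2)   # ppm, Gaussian mode envelope
    # Crude stochastic excitation: a slowly wandering random phase.
    phase = np.cumsum(rng.normal(0, 0.02, n_samples))
    flux += amp * np.sin(2 * np.pi * nu * t + phase)
flux += rng.normal(0, 50.0, n_samples)   # white noise in ppm

np.savetxt("mock_lightcurve.txt", np.column_stack([t, flux]))
```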

    MOT meets AHA!

    MOT (My Online Teacher) is a web-based authoring environment for adaptive hypermedia systems (AHS). MOT is now being further developed according to the LAOS five-layer adaptation model for adaptive hypermedia and adaptive web material, containing a domain, goal, user, adaptation and presentation model. The adaptation itself follows the LAG three-layer granularity structure, comprising direct adaptation techniques and rules, an adaptation language and adaptation strategies. In this paper we briefly describe the theoretical basis of MOT, i.e., LAOS and LAG, and then give some information about the current state of MOT. The purpose of this paper is to show how we plan the joint design and development of MOT and the well-known system AHA! (Adaptive Hypermedia Architecture), developed at the Technical University of Eindhoven since 1996. We aim especially at integration with AHA! 2.0. Although AHA! 2.0 represents progress when compared to the previous versions, many adaptive features that are described by LAOS and the adaptation granulation model, and that are being implemented in MOT, are not yet (directly) available. Therefore, AHA! can benefit from MOT. On the other hand, AHA! offers a running platform for the adaptation engine, which can benefit MOT in return.
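    As a hypothetical example of the kind of direct adaptation rule covered by the lowest LAG layer, the sketch below shows or substitutes content fragments based on a user-model overlay. It is an illustration only, not LAG syntax and not the MOT or AHA! rule engine.

```python
# Hypothetical direct adaptation rule: show a fragment only if the user model
# says its prerequisite is sufficiently known, otherwise substitute an
# explanation. Illustrative sketch, not LAG syntax or the MOT/AHA! engine.
def select_fragments(fragments, user_model, threshold=0.5):
    """Return the fragment ids to present, given a knowledge overlay."""
    shown = []
    for frag in fragments:
        prereq = frag.get("requires")
        if prereq is None or user_model.get(prereq, 0.0) >= threshold:
            shown.append(frag["id"])
        else:
            # Adaptation: record the gap so a strategy layer could insert
            # an explanation of the missing prerequisite instead.
            shown.append(f"explain:{prereq}")
    return shown

fragments = [{"id": "neural-nets-advanced", "requires": "neural-nets-intro"},
             {"id": "course-overview", "requires": None}]
print(select_fragments(fragments, {"neural-nets-intro": 0.2}))
```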