
    Cooperative Evolution for Quality Service Provisioning in the Internet of Things Paradigm

    To facilitate the automation process in the Internet of Things, the research issue of distinguishing prospective services from many "similar" services, and identifying needed services with respect to Quality of Service (QoS) criteria, becomes very important. To this end, we propose heuristic optimization as a robust and efficient approach for solving complex real-world problems. Accordingly, this paper devises a cooperative evolution approach for service composition under QoS constraints. A series of effective strategies is presented for this problem, including an enhanced local-best-first strategy and a global-best strategy that introduces perturbations. Simulation traces collected from real measurements are used to evaluate the proposed algorithms under different service composition scales; the results indicate that the proposed cooperative evolution approach conducts a highly efficient search with stability and rapid convergence. The proposed algorithm also strikes a well-designed trade-off between population diversity and selection pressure when service compositions occur at a large scale.
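    The QoS-aware composition task described above — picking one concrete service per abstract task so that the aggregate QoS is best — can be sketched with a toy evolutionary loop. The candidate data, the additive/multiplicative QoS aggregation, and the simple elitist mutation scheme below are illustrative assumptions, not the paper's actual strategies:

    ```python
    import random

    # Hypothetical candidate services per abstract task, each characterised
    # by (response time in ms, availability). Toy data, not from the paper.
    candidates = [
        [(120, 0.99), (80, 0.95), (200, 0.999)],   # task 0
        [(50, 0.97), (60, 0.98)],                  # task 1
        [(300, 0.90), (150, 0.96), (100, 0.93)],   # task 2
    ]

    def fitness(plan):
        """Aggregate QoS of a composition (lower is better): total response
        time plus a penalty for lost end-to-end availability."""
        total_rt = sum(candidates[t][c][0] for t, c in enumerate(plan))
        avail = 1.0
        for t, c in enumerate(plan):
            avail *= candidates[t][c][1]
        return total_rt + 1000 * (1 - avail)

    def evolve(generations=100, pop_size=10, seed=1):
        """Tiny elitist evolutionary search: keep the best plan and refill
        the worst half of the population with mutated copies of it."""
        rng = random.Random(seed)
        pop = [[rng.randrange(len(c)) for c in candidates]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            best = pop[0]
            for i in range(pop_size // 2, pop_size):
                child = best[:]
                t = rng.randrange(len(candidates))
                child[t] = rng.randrange(len(candidates[t]))
                pop[i] = child
        return min(pop, key=fitness)

    best_plan = evolve()
    ```

    The paper's local-best-first and perturbed global-best strategies would replace the naive "mutate the champion" step here with more diverse exploration.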

    Neuro-fuzzy resource forecast in site suitability assessment for wind and solar energy: a mini review

    Site suitability problems in renewable energy studies have taken a new turn since the advent of geographical information systems (GIS). GIS has been used for site suitability analysis for renewable energy due to its prowess in processing and analyzing attributes with geospatial components. Multi-criteria decision making (MCDM) tools are further used to rank criteria in order of their influence on the study. Once the most appropriate sites are located, intelligent resource forecasting becomes necessary for strategic and operational planning, if the viability of the investment is to be enhanced and resource variability better understood. One such intelligent model is the adaptive neuro-fuzzy inference system (ANFIS) and its variants. This study presents a mini-review of GIS-based MCDM facility location problems in wind and solar resource site suitability analysis, and of resource forecasting using ANFIS-based models. We further present a framework for the integration of the two concepts in wind and solar energy studies. Various MCDM techniques for decision making are presented with their strengths and weaknesses. Country-specific studies applying GIS-based methods to site suitability are presented along with the criteria they considered; similarly, country-specific studies in ANFIS-based resource forecasting for wind and solar energy are also presented. From our findings, there has been no technically validated range of values for spatial criteria, and the analytic hierarchy process (AHP) has been commonly used for criteria ranking, leaving other techniques less explored. Also, hybrid ANFIS models are more effective than standalone ANFIS models in resource forecasting, and ANFIS optimized with population-based models has been used most often. Finally, we present a roadmap for integrating GIS-MCDM site suitability studies with ANFIS-based modeling for improved strategic and operational planning.
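    To make the criteria-ranking step concrete: AHP derives criterion weights from a pairwise comparison matrix on the Saaty 1–9 scale. The row geometric mean below is one standard approximation of the priority vector; the matrix values and the three criteria are hypothetical, not taken from any reviewed study:

    ```python
    import math

    # Hypothetical pairwise comparisons for three site-suitability criteria:
    # wind speed, terrain slope, distance to grid. pairwise[i][j] states how
    # much more important criterion i is than criterion j (Saaty scale).
    pairwise = [
        [1.0,   3.0, 5.0],
        [1 / 3, 1.0, 2.0],
        [1 / 5, 1 / 2, 1.0],
    ]

    def ahp_weights(m):
        """Approximate the AHP priority vector via the row geometric mean,
        normalised so the weights sum to 1."""
        n = len(m)
        gm = [math.prod(row) ** (1.0 / n) for row in m]
        total = sum(gm)
        return [g / total for g in gm]

    weights = ahp_weights(pairwise)
    # Weights sum to 1 and rank: wind speed > slope > grid distance.
    ```

    A full AHP study would also check the consistency ratio of the matrix before trusting these weights.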

    IMPROVED SUPPORT VECTOR MACHINE PERFORMANCE USING PARTICLE SWARM OPTIMIZATION IN CREDIT RISK CLASSIFICATION

    In classification using a Support Vector Machine (SVM), each kernel has parameters that affect the classification accuracy. This study examines improving SVM performance by selecting parameters with Particle Swarm Optimization (PSO) for credit risk classification, and compares the results with an SVM using random parameter selection. Classification performance is evaluated by applying the SVM to the German Credit benchmark data set and to a private credit data set issued by a local bank in North Sumatra. Although it requires a longer execution time to achieve optimal accuracy, the SVM+PSO combination is more effective and systematic than trial-and-error techniques in finding SVM parameter values, and thus produces better accuracy. In general, the test results show that the RBF kernel produces higher accuracy and F1-scores than the linear and polynomial kernels. SVM classification optimized with PSO produces better accuracy than SVM without optimization, i.e., with randomly determined parameters. Credit data classification accuracy increased to 92.31%.
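    The hyper-parameter search the study describes can be sketched in isolation. The minimal global-best PSO below optimizes a toy two-dimensional objective standing in for cross-validated error over parameters such as (C, gamma); the inertia and acceleration coefficients are common textbook defaults, not the study's settings:

    ```python
    import random

    def pso(objective, bounds, n_particles=15, iters=60, seed=0):
        """Minimal particle swarm optimiser with a global-best topology.
        `objective` plays the role of cross-validated classification error."""
        rng = random.Random(seed)
        dim = len(bounds)
        pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
               for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                    # personal bests
        pbest_val = [objective(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
        w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                        bounds[d][0]), bounds[d][1])
                val = objective(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # Toy stand-in objective with its minimum at (2, -1):
    best, err = pso(lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2,
                    bounds=[(-5, 5), (-5, 5)])
    ```

    In the actual study, the objective would train and cross-validate an SVM at each candidate parameter vector, which is where the longer execution time comes from.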

    District metered area design through multicriteria and multiobjective optimization

    The design of district metered areas (DMA) in potable water supply systems is of paramount importance for water utilities to properly manage their systems. Alongside the main objective, namely delivering quality water to consumers, the benefits include leakage reduction and prompt reaction in cases of natural or malicious contamination events. Given the structure of a water distribution network (WDN), graph theory is the basis for DMA design, and clustering algorithms can be applied to perform the partitioning. However, such sectorization entails a number of network modifications (installing cut-off valves and metering and control devices) involving costs and operation changes, which have to be carefully studied and optimized. Given the complexity of WDNs, optimization is usually performed using metaheuristic algorithms. In turn, optimization may be single- or multi-objective. In the latter case, a large number of solutions, frequently forming the Pareto front, may be produced. The decision maker eventually has to choose one among them, which may be a tough task. Multicriteria decision methods may be applied to support this last step of the decision-making process. In this paper, DMA design is addressed by (i) proposing a modified k-means algorithm for partitioning, (ii) using multiobjective particle swarm optimization to suitably place partitioning devices, (iii) using the fuzzy analytic hierarchy process (FAHP) to weight the four objective functions considered, and (iv) using the technique for order of preference by similarity to ideal solution (TOPSIS) to rank the Pareto solutions and support the decision. This joint approach is applied to a well-known WDN from the literature, and the results are discussed.
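    The partitioning step (i) can be illustrated with plain Lloyd's k-means on node coordinates. Note that the paper proposes a modified k-means that additionally accounts for the network's structure; this sketch omits that modification and uses hypothetical node positions:

    ```python
    import random

    def kmeans(points, k, iters=100, seed=0):
        """Plain Lloyd's k-means on 2-D node coordinates: alternate
        nearest-center assignment and centroid recomputation."""
        rng = random.Random(seed)
        centers = rng.sample(points, k)
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:
                j = min(range(k),
                        key=lambda c: (p[0] - centers[c][0]) ** 2
                                      + (p[1] - centers[c][1]) ** 2)
                clusters[j].append(p)
            new_centers = [
                (sum(p[0] for p in cl) / len(cl),
                 sum(p[1] for p in cl) / len(cl)) if cl else centers[j]
                for j, cl in enumerate(clusters)
            ]
            if new_centers == centers:   # converged
                break
            centers = new_centers
        labels = [min(range(k),
                      key=lambda c: (p[0] - centers[c][0]) ** 2
                                    + (p[1] - centers[c][1]) ** 2)
                  for p in points]
        return centers, labels

    # Hypothetical WDN junction coordinates forming two spatial groups:
    nodes = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
    centers, labels = kmeans(nodes, 2)
    ```

    A network-aware variant would cluster over pipe-graph distances rather than Euclidean distance, so that each DMA stays hydraulically connected.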

    A Survey on Soft Subspace Clustering

    Subspace clustering (SC) is a promising clustering technology for identifying clusters based on their associations with subspaces in high-dimensional spaces. SC can be classified into hard subspace clustering (HSC) and soft subspace clustering (SSC). While HSC algorithms have been extensively studied and are well accepted by the scientific community, SSC algorithms are relatively new but have been gaining attention in recent years due to their better adaptability. In this paper, a comprehensive survey of existing SSC algorithms and recent developments is presented. The SSC algorithms are classified systematically into three main categories, namely conventional SSC (CSSC), independent SSC (ISSC) and extended SSC (XSSC). The characteristics of these algorithms are highlighted, and the potential future development of SSC is also discussed.
    Comment: This paper has been published in Information Sciences Journal in 201

    Attribute Identification and Predictive Customisation Using Fuzzy Clustering and Genetic Search for Industry 4.0 Environments

    Today's factories involve more services and customisation. A paradigm shift towards "Industry 4.0" (i4) aims at realising mass customisation at a mass production cost. However, there is a lack of tools for customer informatics. This paper addresses this issue and develops a predictive analytics framework integrating big data analysis and business informatics, using Computational Intelligence (CI). In particular, fuzzy c-means clustering is used for pattern recognition and for managing relevant big data that captures potential customer needs and wants, improving productivity at the design stage of customised mass production. The selection of patterns from big data is performed using a genetic algorithm combined with fuzzy c-means, which helps with clustering and the selection of optimal attributes. The case study shows that fuzzy c-means is able to assign new clusters as knowledge of customer needs and wants grows. The dataset has three types of entities: specifications of various characteristics, an assigned insurance risk rating, and normalised losses in use compared with other cars. The fuzzy c-means tool offers a number of features suitable for smart design in an i4 environment.
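    The clustering engine the framework relies on alternates two updates: fuzzily weighted centroids and inverse-distance memberships. The sketch below uses the standard fuzzy c-means update rules with fuzzifier m = 2 on hypothetical 2-D data; it is a minimal illustration, not the paper's GA-coupled pipeline:

    ```python
    import random

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def fuzzy_cmeans(points, c, m=2.0, iters=50, seed=0):
        """Minimal fuzzy c-means: u[i][j] is the degree to which point i
        belongs to cluster j; memberships per point always sum to 1."""
        rng = random.Random(seed)
        u = []
        for _ in points:
            row = [rng.random() for _ in range(c)]
            s = sum(row)
            u.append([v / s for v in row])
        dim = len(points[0])
        centers = []
        for _ in range(iters):
            # Centroid update: fuzzily weighted mean of all points.
            centers = []
            for j in range(c):
                w = [u[i][j] ** m for i in range(len(points))]
                tw = sum(w)
                centers.append(tuple(
                    sum(w[i] * points[i][d] for i in range(len(points))) / tw
                    for d in range(dim)))
            # Membership update: inverse-distance ratios, exponent 2/(m-1)
            # (dist2 is squared, so the exponent halves to 1/(m-1)).
            for i, p in enumerate(points):
                dists = [max(dist2(p, cj), 1e-12) for cj in centers]
                for j in range(c):
                    u[i][j] = 1.0 / sum(
                        (dists[j] / dists[k]) ** (1.0 / (m - 1))
                        for k in range(c))
        return centers, u

    # Hypothetical customer-attribute vectors forming two groups:
    points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
    centers, u = fuzzy_cmeans(points, 2)
    ```

    In the paper's framework, a genetic algorithm would additionally search over which attributes feed this clustering.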

    Starfish Search

    Starfish Search is a swarm optimization algorithm in the same vein as Particle Swarm Optimization and the Firefly Algorithm. The algorithm attempts to find globally optimal solutions to optimization problems by dispersing agents into the search space. Each agent consists of many nodes that represent candidate solutions to the problem being solved. An agent's nodes are arranged in a parent-child hierarchy, similar to a tree structure, which facilitates passing information to a root node. With this structure, it becomes possible to determine the likely direction in which an optimum lies. Using a form of linear regression, the fitness values and positions of each node in an agent are used to evaluate a vector known as the Local Gradient. This vector points along the slope of the search space, and its magnitude represents the steepness of that slope. In this way, an agent has an understanding of the local area and can make intelligent decisions about which direction to search for additional candidate solutions. With this information, agents can also execute behaviors based on the type of topology encountered; these behaviors can be tailored to individual problems and situations to help agents solve the problem correctly. Starfish Search has been applied to problems such as search-space optimization, k-nearest-neighbors classification, and k-means clustering. By tailoring fitness functions and behavior execution, evidence has been gathered to support the algorithm's use over traditional techniques. This paper dives into the details of the algorithm's implementation, calculations, and behaviors, as well as explaining the tests and the evidence gathered to support the use of Starfish Search.
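    One plausible reading of the Local Gradient computation — the abstract does not give the exact formula — is a least-squares plane fit over an agent's node positions and fitness values: the fitted slope points along the search-space gradient and its magnitude reflects the steepness. The 2-D sketch below solves the normal equations directly:

    ```python
    def solve(A, b):
        """Solve the linear system A x = b by Gaussian elimination
        with partial pivoting (A is small, e.g. 3x3)."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n]
                    - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
        return x

    def local_gradient(nodes, fitness):
        """Least-squares fit of the plane f ~ a + g1*x + g2*y over an
        agent's nodes; (g1, g2) approximates the local slope vector."""
        ata = [[0.0] * 3 for _ in range(3)]
        atf = [0.0] * 3
        for (x, y), f in zip(nodes, fitness):
            row = [1.0, x, y]
            for i in range(3):
                for j in range(3):
                    ata[i][j] += row[i] * row[j]
                atf[i] += row[i] * f
        a, gx, gy = solve(ata, atf)
        return gx, gy

    # Example: node fitness sampled from the plane f = 3 + 2x - y,
    # so the recovered gradient should be (2, -1).
    nodes = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1)]
    fit = [3.0, 5.0, 2.0, 4.0, 6.0]
    gx, gy = local_gradient(nodes, fit)
    ```

    An agent could then step its root node along -(gx, gy) when minimizing; the hierarchy and topology behaviors described in the paper sit on top of this basic estimate.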