Detection and Filtering of Collaborative Malicious Users in Reputation System using Quality Repository Approach
Online reputation systems are gaining popularity because they help a user
assess the quality of a product or service before buying. Nonetheless, such
systems are not immune to attack, and dealing with malicious ratings in
reputation systems has been recognized as an important but difficult task.
The problem is especially challenging when the number of honest ratings is
relatively small and unfair ratings form the majority of the rated values.
In this paper, we propose a new method to find malicious users in online
reputation systems using a Quality Repository Approach (QRA). We concentrate
on anomaly detection in both the rating values and the users who submit
them. QRA is efficient at detecting malicious ratings and aggregating true
ones. The proposed reputation system has been evaluated through simulations,
which show that the QRA-based system significantly reduces the impact of
unfair ratings and improves trust in the reputation score, with a lower
false positive rate than other methods used for the purpose.
Comment: 14 pages, 5 figures, 5 tables, submitted to ICACCI 2013, Mysore, India
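The abstract leaves QRA's internals unspecified; as a minimal illustration of anomaly detection in rating values (an assumption-laden sketch, not the paper's actual QRA), one could flag raters whose ratings deviate strongly from per-item robust aggregates:

import numpy as np

def flag_suspect_raters(R, z_thresh=3.0):
    # Illustrative only, not the paper's QRA. R is a users x items rating
    # matrix with np.nan marking items a user has not rated.
    med = np.nanmedian(R, axis=0)                       # per-item robust center
    mad = np.nanmedian(np.abs(R - med), axis=0) + 1e-9  # per-item robust spread
    z = np.abs(R - med) / (1.4826 * mad)                # MAD-based z-scores
    return np.nanmean(z, axis=1) > z_thresh             # suspicious raters

Aggregating only the ratings of unflagged users then yields a reputation score that is less sensitive to collusive unfair ratings.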
Understanding hypervolume behavior theoretically for benchmarking in evolutionary multi/many-objective optimization
Hypervolume (HV) is one of the most commonly used metrics for evaluating the Pareto front (PF) approximations generated by multiobjective evolutionary algorithms. Even so, HV results from a complex interplay among the PF shape, the number of objectives, and user-specified reference points, which, if not well understood, may lead to misinformed inferences about benchmarking performance. To understand this behavior, some previous studies have investigated such interactions empirically. In this letter, a new and unconventional approach is taken to gain further insight into HV behavior. The key idea is to develop theoretical formulas for certain linear (equilateral simplex) and quadratic (orthant) PFs in two specific orientations: 1) regular and 2) inverted. These PFs represent a large number of problems in the existing DTLZ and WFG suites commonly used for benchmarking. Numerical experiments are presented to demonstrate the utility of the proposed work in benchmarking and in understanding the contributions of different regions of the PFs, such as corners and edges, as well as in explaining the contrast between HV behaviors for regular versus inverted PFs. This letter provides a foundation and a computationally fast means to undertake parametric studies to understand various aspects of HV.
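As a concrete instance of the kind of closed form such an analysis yields (a worked example under stated assumptions, not a formula quoted from the letter): for the bi-objective linear front f1 + f2 = 1 on [0, 1]^2 under minimization, with reference point (r, r) and r >= 1, the dominated region has hypervolume r^2 - 1/2. A short exact 2-D HV routine confirms this as the front is sampled more densely:

import numpy as np

def hv_2d(points, ref):
    # Exact hypervolume of a bi-objective (minimization) point set w.r.t. ref,
    # computed by sweeping the non-dominated staircase left to right.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

r = 1.5
front = [(t, 1.0 - t) for t in np.linspace(0.0, 1.0, 2001)]
print(hv_2d(front, (r, r)), r**2 - 0.5)  # ~1.7498 vs. the limit 1.75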
Investigating the equivalence between PBI and AASF scalarization for multi-objective optimization
Scalarization refers to a generic class of methods that combine multiple conflicting objectives into one in order to find a Pareto-optimal solution to the original problem. The augmented achievement scalarizing function (AASF) is one such method, used widely in the multi-criterion decision-making (MCDM) field. In the evolutionary multi-objective optimization (EMO) literature, scalarization methods such as penalty boundary intersection (PBI) are commonly used to compare similar solutions within a population. Both AASF and PBI require a reference point and a reference direction for their calculation. In this paper, we aim to analytically derive and understand the commonalities between these two metrics and gain insight into the limitations of their standard parametric forms. We show that, for bi-objective problems, it is possible to find an equivalent modified AASF formulation for a given PBI parameter and vice versa. Numerical experiments are presented to validate the theory developed. We further discuss the challenges in extending this result to more objectives and show that limited equivalence can still be achieved along symmetric reference vectors. The study connects two philosophies of solving multi-objective optimization problems, provides a means to gain a deeper understanding of both measures, and expands their parametric ranges to provide more flexibility in controlling the search behavior of EMO algorithms.
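For reference, common parametric forms of the two scalarizations (as standardly defined in the literature; the paper's modified AASF may differ, and weight conventions vary across authors) for an objective vector f, reference point z, and reference direction w are

d_1 = \frac{\lvert (f - z)^\top w \rvert}{\lVert w \rVert}, \qquad d_2 = \Bigl\lVert f - \bigl(z + d_1 \tfrac{w}{\lVert w \rVert}\bigr) \Bigr\rVert, \qquad \mathrm{PBI}(f) = d_1 + \theta\, d_2,

\mathrm{AASF}(f) = \max_i \, w_i (f_i - z_i) + \rho \sum_j w_j (f_j - z_j),

where \theta and \rho are the user-set parameters between which the equivalence is sought.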
Identifying Stochastically Non-dominated Solutions Using Evolutionary Computation
We consider the problem of finding a solution that is robust to disturbances of its decision variables, and explain why this should be framed as the problem of identifying all stochastically non-dominated solutions. We then show how this can be formulated as an unconventional multi-objective optimization problem and solved using evolutionary computation. Because evaluating stochastic dominance in a black-box setting is computationally very expensive, we also propose more efficient algorithm variants that utilize surrogate models and re-use historical data. Empirical results on several test problems demonstrate that the algorithm indeed finds the stochastically non-dominated solutions, and that the proposed efficiency enhancements drastically cut the number of required function evaluations while maintaining good solution quality.
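A minimal sketch of the dominance test involved (an empirical first-order stochastic dominance check on sampled disturbances; the paper's exact test and sampling scheme are not specified here):

import numpy as np

def stochastically_dominates(sample_a, sample_b):
    # First-order stochastic dominance for minimization: a dominates b if
    # the empirical CDF of a's disturbed objective values lies on or above
    # b's everywhere, and strictly above somewhere.
    grid = np.union1d(sample_a, sample_b)
    cdf_a = np.searchsorted(np.sort(sample_a), grid, side="right") / len(sample_a)
    cdf_b = np.searchsorted(np.sort(sample_b), grid, side="right") / len(sample_b)
    return bool(np.all(cdf_a >= cdf_b) and np.any(cdf_a > cdf_b))

Here sample_a and sample_b would hold the objective values of two candidate solutions re-evaluated under sampled decision-variable disturbances.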
Feasibility-ratio based sequencing for computationally efficient constrained optimization
Real-world optimization problems typically involve constraints that must be satisfied for a design to be viable. Such constraints may represent physical limits on induced behavior, such as stress, displacement, or temperature, or other practical limits such as cost, geometry, and manufacturability. Constraint handling is therefore an active area of research, and a number of approaches have been proposed in the literature for use with evolutionary algorithms on constrained optimization problems. Most approaches assume that the objective and constraints are evaluated simultaneously and counted together as one evaluation as far as the computational budget is concerned. In practical applications, however, the constraints and the objective may be independently evaluable (e.g., using different analyses). It is therefore worth exploring whether one can forego evaluating certain constraints or the objective to save computational cost, without misleading the search through insufficient information. Towards this end, the main focus of this study is a partial-evaluation paradigm. We propose feasibility-ratio based sequencing for the partial evaluation of solutions during the evolutionary search, with a lexicographic ranking approach used to order the partially evaluated solutions. To balance evaluation cost against information gain near constraint boundaries, a dynamic feasibility-control technique is incorporated. A number of variants are systematically constructed to demonstrate the efficacy of partial evaluation and the proposed sequencing approach. An extensive numerical study on a range of constrained problems confirms the competence of the approach.
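A minimal sketch of the sequencing idea (function and variable names are assumptions for illustration, not the paper's implementation):

def partial_evaluate(x, constraints, feasibility_ratio):
    # Evaluate constraints in ascending order of feasibility ratio, i.e.
    # most-often-violated first, and stop at the first violation so the
    # remaining evaluations are never paid for.
    order = sorted(range(len(constraints)), key=lambda i: feasibility_ratio[i])
    record = {}
    for i in order:
        g = constraints[i](x)   # one independent constraint evaluation
        record[i] = g
        if g > 0.0:             # convention: g(x) <= 0 means satisfied
            break
    return record               # partial record, used for lexicographic ranking

Here feasibility_ratio[i] would track the fraction of previously evaluated solutions that satisfied constraint i, updated as the search proceeds.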
Evolutionary Algorithm Embedded with Bump-Hunting for Constrained Design Optimization
Real-world design optimization problems commonly entail constraints that must be satisfied for the design to be viable. Mathematically, the constraints divide the search space into feasible (all constraints satisfied) and infeasible (at least one constraint violated) regions. The presence of multiple constraints, constricted and/or disconnected feasible regions, and non-linearity and multi-modality of the underlying functions can significantly slow the convergence of evolutionary algorithms (EAs). Since each design evaluation incurs some time/computational cost, it is of significant interest to improve the rate of convergence so as to obtain competitive solutions with relatively few design evaluations. In this study, we propose to accomplish this using two mechanisms: (a) a more intensive search that identifies promising regions through "bump-hunting," and (b) infeasibility-driven ranking that exploits the fact that optimal solutions are likely to be located on constraint boundaries. Numerical experiments are conducted on a range of mathematical benchmarks and empirically formulated engineering problems, as well as a simulation-based wind turbine design optimization problem. The proposed approach shows up to 53.48% improvement in median objective values and up to 69.23% reduction in the cost of identifying a feasible solution compared with a baseline EA.
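"Bump-hunting" is commonly done in the PRIM sense of iteratively peeling a box toward regions of high promise; a minimal sketch under that assumption (not necessarily the paper's exact mechanism), maximizing the mean of a promise measure y over sampled designs X:

import numpy as np

def prim_peel(X, y, alpha=0.1, min_support=20):
    # PRIM-style peeling: repeatedly trim the alpha-fraction face of the box
    # whose removal most increases the mean of y over the remaining samples.
    idx = np.arange(len(X))
    box = np.stack([X.min(axis=0), X.max(axis=0)]).astype(float)
    while len(idx) > min_support:
        best = None
        for d in range(X.shape[1]):
            for side, q in ((0, alpha), (1, 1.0 - alpha)):
                cut = np.quantile(X[idx, d], q)
                keep = idx[X[idx, d] >= cut] if side == 0 else idx[X[idx, d] <= cut]
                if len(keep) and (best is None or y[keep].mean() > best[0]):
                    best = (y[keep].mean(), keep, d, side, cut)
        _, idx, d, side, cut = best
        box[side, d] = cut      # tighten one bound of the promising box
    return box, idx             # the "bump" and the samples inside it

The returned box can then seed an intensified search, e.g., by biasing offspring generation toward it.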
Investigating the use of sequencing and infeasibility driven strategies for constrained optimization
Real-world optimization problems involve constraints that must be satisfied for a design to be viable. The constraints are a manifestation of physical limits such as allowable strength and geometric compatibility, or of other practical considerations such as the cost and time required for manufacturing. Constraint handling is thus an important area in the domain of optimization, and rich literature exists on the subject. Within population-based stochastic optimization methods, constraint handling is typically implemented through a ranking process in which feasible solutions are considered better than infeasible ones. Recent studies have suggested that preserving infeasible solutions can be advantageous to the evolutionary search. Such studies have typically considered a paradigm in which all objectives and constraints are evaluated simultaneously. In many practical scenarios, however, it is possible to evaluate them independently, which opens opportunities for computational savings through sequencing the constraint evaluations and using partial evaluation (i.e., evaluating only some of the constraints). In this paper, we systematically construct and study the performance of these strategies by combining them in different ways (eight variants in total). Numerical experiments compare the performance of these strategies under different evaluation and ranking scenarios. The study clarifies the advantages that can be gained by appropriately combining these strategies for cases where the objective and constraint(s) incur individual evaluation costs and can be computed independently.
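The conventional ranking referred to (feasible considered better than infeasible) is typically Deb's feasibility rule; a minimal sketch, assuming g_i(x) <= 0 denotes satisfaction:

def deb_key(f, g):
    # Deb's rule: feasible solutions are ordered by objective value; all
    # infeasible ones rank after them, ordered by total constraint violation.
    cv = sum(max(0.0, gi) for gi in g)
    return (cv > 0.0, cv if cv > 0.0 else f)

# population.sort(key=lambda s: deb_key(s.f, s.g))

Infeasibility-driven strategies deliberately relax this rule so that some infeasible solutions near constraint boundaries are retained in the population.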
Comparing Expected Improvement and Kriging Believer for Expensive Bilevel Optimization
Bilevel optimization refers to a specialized class of problems in which one optimization task is nested as a constraint within another. Such problems emerge in a range of real-world scenarios involving hierarchical decision-making, and significant literature exists on classical and evolutionary approaches to solving them. However, computationally expensive bilevel optimization problems remain relatively under-explored. Since each evaluation incurs a significant computational cost, one can only perform a limited number of function evaluations during the search. Surrogate-assisted strategies provide a promising way forward for such classes of problems. Of particular interest to this study are steady-state strategies, which carefully pre-select a promising solution for true evaluation based on a surrogate model. The main aim of this paper is to compare two widely adopted steady-state infill strategies, Kriging believer (KB) and expected improvement (EI), through systematic experiments within a nested optimization framework. Our experiments on a set of benchmark problems reveal some interesting and counter-intuitive observations. We discuss some of the underlying reasons and believe that the findings will inform further research on understanding and improving search strategies for expensive bilevel optimization.
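For context, the two infill rules have standard forms: KB, in the believer sense, commonly ranks candidates by the surrogate's predicted mean alone (it "believes" the model), while EI has the closed form sketched below (the standard expression for minimization with a Gaussian process surrogate; not code from the paper):

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    # Closed-form EI at a candidate with GP posterior mean mu and standard
    # deviation sigma, relative to the best objective value found so far.
    sigma = np.maximum(sigma, 1e-12)   # guard against zero predicted variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

Mean-based pre-selection is cheaper and more exploitative, whereas EI explicitly rewards predictive uncertainty, which is one source of the contrasting behaviors such comparisons surface.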
Investigating Normalization Bounds for Hypervolume-Based Infill Criterion for Expensive Multiobjective Optimization
When solving expensive multi-objective optimization problems, there may be stringent limits on the number of allowed function evaluations. Surrogate models are commonly used for such problems, with calls to surrogates made in lieu of calls to the true objective functions. The surrogates can also be used to identify infill points for evaluation, i.e., solutions that maximize certain performance criteria. One such infill criterion is the maximization of predicted hypervolume, which is the focus of this study. In particular, we investigate whether a better estimate of the normalization bounds can improve the performance of the surrogate-assisted optimization algorithm. Towards this end, we propose a strategy to identify a better ideal point than the one present in the current archive. Numerical experiments are conducted on a range of problems to test the efficacy of the proposed method. The approach outperforms conventional forms of normalization in some cases while providing comparable results in others. We provide critical insights on the search behavior and relate them to the underlying properties of the test problems.
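A minimal sketch of the normalization in question (variable names are assumptions; the archive-based bounds shown are the conventional choice the study aims to improve upon):

import numpy as np

def normalize(F, ideal, nadir):
    # Map an archive of objective vectors F (n x m) to [0, 1]^m
    # given estimated ideal and nadir bounds.
    return (F - ideal) / np.maximum(nadir - ideal, 1e-12)

# Conventional archive-based bounds; the proposed strategy instead seeks a
# better ideal-point estimate than the archive minimum:
# ideal, nadir = F.min(axis=0), F.max(axis=0)

The predicted hypervolume used as the infill criterion is computed in this normalized space, so the choice of bounds directly shapes which candidate is evaluated next.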
Partial Evaluation Strategies for Expensive Evolutionary Constrained Optimization
Constrained optimization problems (COPs) are frequently encountered in real-world design applications. For some COPs, evaluating the objective(s) and/or constraint(s) may involve significant computational, temporal, or financial cost. Such problems are referred to as expensive COPs (ECOPs). Surrogate modeling has been widely used in conjunction with optimization methods for such problems, wherein the search is partially driven by an approximate function instead of true expensive evaluations. However, for any true evaluation, nearly all existing methods compute all objective and constraint values together as one batch. Such full-evaluation approaches may be inefficient when the objective and constraint(s) can be evaluated independently of each other. In this article, we propose and study a constraint-handling strategy for ECOPs using partial evaluations. The constraints are evaluated in a sequence determined by their likelihood of being violated, and the evaluation is aborted once a constraint violation is encountered. Modified ranking strategies are introduced to effectively rank the solutions using the limited information thus obtained, while saving significant function evaluations. The proposed algorithm is compared with a number of its variants to systematically establish the utility of its key components. Numerical experiments and benchmarking are conducted on a range of mathematical and engineering design problems to demonstrate the efficacy of the approach compared with state-of-the-art evolutionary optimization approaches.
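One way to rank solutions from such aborted evaluations (an illustrative choice, not necessarily the paper's modified ranking) is a lexicographic key over the shared evaluation sequence, treating constraints never reached as unknown and pessimistically worst:

def partial_key(violations, order):
    # violations: {constraint index -> max(0, g_i(x))} for the constraints
    # actually evaluated before the abort; 'order' is the shared sequence.
    return tuple(violations.get(i, float("inf")) for i in order)

# population.sort(key=lambda s: partial_key(s.violations, order))

Under this key, a fully feasible solution (all zeros) outranks every partially evaluated one, and solutions that progress further through the sequence before violating outrank those that fail earlier.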