
    Comparing metaheuristic algorithms for error detection in Java programs

    Chicano, F., Ferreira, M., & Alba, E. (2011). Comparing Metaheuristic Algorithms for Error Detection in Java Programs. In Proceedings of Search Based Software Engineering, Szeged, Hungary, September 10-12, 2011, pp. 82–96.

    Model checking is a fully automatic technique for checking properties of concurrent software in which the states of a concurrent system are explored explicitly or implicitly. Its main drawback is high memory consumption, which limits the size of the programs that can be checked. In recent years, some researchers have focused on applying guided, non-complete stochastic techniques to the search of the state space of such concurrent programs. In this paper, we compare five metaheuristic algorithms for this problem: Simulated Annealing, Ant Colony Optimization, Particle Swarm Optimization, and two variants of Genetic Algorithm. To the best of our knowledge, this is the first time that Simulated Annealing has been applied to the problem. The comparison uses a benchmark of 17 Java concurrent programs, and we also compare the results of these algorithms with those of deterministic algorithms.

    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. This research has been partially funded by the Spanish Ministry of Science and Innovation and FEDER under contract TIN2008-06491-C04-01 (the M∗ project) and by the Andalusian Government under contract P07-TIC-03044 (DIRICOM project).
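    To make the abstract's setup concrete, the following is a toy, hypothetical sketch (not the authors' implementation) of a fitness-guided stochastic search over thread schedules: simulated annealing drives a racy two-thread counter program toward a lost-update error, using the final counter value as the objective to minimize.

```python
# Toy, hypothetical sketch of a guided stochastic search for concurrency errors
# (illustrative only; not the paper's implementation). A candidate solution is a
# thread schedule; smaller fitness means closer to a lost-update error state.
import math
import random

STEPS = 8  # scheduling decisions per candidate (2 threads x 2 increments x 2 steps)

def run_schedule(schedule):
    """Simulate two threads doing read-increment-write on a shared counter."""
    shared = 0
    local = [None, None]          # per-thread register
    pc = [0, 0]                   # per-thread program counter: 0=read, 1=write
    done = [0, 0]                 # increments completed per thread
    def step(tid):
        nonlocal shared
        if pc[tid] == 0:          # read shared value into the register
            local[tid] = shared
            pc[tid] = 1
        else:                     # write back register + 1 (the racy update)
            shared = local[tid] + 1
            pc[tid] = 0
            done[tid] += 1
    for tid in schedule:
        if done[tid] < 2:
            step(tid)
    for tid in (0, 1):            # finish pending work so every run is complete
        while done[tid] < 2:
            step(tid)
    return shared

def fitness(schedule):
    """Final counter value; 4 is the race-free result, lower means a lost update."""
    return run_schedule(schedule)

def simulated_annealing(iterations=2000, t0=2.0, cooling=0.995):
    current = [random.randint(0, 1) for _ in range(STEPS)]
    best, temp = list(current), t0
    for _ in range(iterations):
        neighbor = current[:]
        neighbor[random.randrange(STEPS)] ^= 1   # flip one scheduling choice
        delta = fitness(neighbor) - fitness(current)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = neighbor
        if fitness(current) < fitness(best):
            best = list(current)
        temp *= cooling
    return best, fitness(best)

if __name__ == "__main__":
    schedule, value = simulated_annealing()
    print("best schedule:", schedule, "final counter:", value, "(4 = no error)")
```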

    Improved decision support for engine-in-the-loop experimental design optimization

    Experimental optimization with hardware in the loop is a common procedure in engineering and has been the subject of intense development, particularly when it is applied to relatively complex combinatorial systems that are not completely understood, or where accurate modelling is not possible owing to the dimensions of the search space. A common source of difficulty is the level of noise associated with experimental measurements, a combination of limited instrument precision and extraneous factors. When a series of experiments is conducted to search for a combination of input parameters that results in a minimum or maximum response, this noise can make the underlying shape of the function being optimized very difficult to discern, or even lose it entirely. A common methodology for supporting experimental search for optimal or suboptimal values is to use one of the many gradient descent methods. However, even sophisticated and proven methodologies, such as simulated annealing, can be significantly challenged in the presence of noise, since approximating the gradient at any point becomes highly unreliable: experiments that should be rejected are often accepted as a result of random noise, and vice versa. This is also true for other sampling techniques, including tabu search and evolutionary algorithms. After the general introduction, this paper is divided into two main sections (sections 2 and 3), which are followed by the conclusion. Section 2 introduces a decision support methodology based upon response surfaces, which supplements experimental management based on a variable neighbourhood search and is shown to be highly effective in directing experiments in the presence of a significant signal-to-noise ratio and complex combinatorial functions. The methodology is developed on a three-dimensional surface with multiple local minima, a large basin of attraction, and a high signal-to-noise ratio. In section 3, the methodology is applied to an automotive combinatorial search in the laboratory, on a real-time engine-in-the-loop application. In this application, it is desired to find the maximum power output of an experimental single-cylinder spark ignition engine operating under a quasi-constant-volume regime; under this regime, the piston is slowed at top dead centre to achieve combustion in close to constant-volume conditions. As part of the further development of the engine to incorporate a linear generator and investigate free-piston operation, it is necessary to perform a series of experiments with combinatorial parameters. The objective is to identify the maximum power point in the least number of experiments in order to minimize costs; this test programme provides peak power data in order to achieve an optimal electrical machine design. The decision support methodology is combined with standard optimization and search methods, namely gradient descent and simulated annealing, in order to study the possible reductions in experimental iterations. It is shown that the decision support methodology significantly reduces the number of experiments necessary to find the maximum power solution and thus offers a potentially significant cost saving to hardware-in-the-loop experimentation.
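    As a loose illustration of the response-surface idea described above (a hypothetical sketch, not the paper's decision-support method), the following fits a quadratic surface to noisy measurements of a toy objective and queries the fitted surface, rather than the raw data, to suggest the next experiment.

```python
# Hypothetical sketch of a response-surface aid for noisy experimental search
# (illustrative only; not the paper's decision-support methodology).
import numpy as np

rng = np.random.default_rng(0)

def noisy_measurement(x, y):
    """Toy objective with local structure plus measurement noise."""
    true = (x - 1.0) ** 2 + (y + 0.5) ** 2 + 0.3 * np.sin(3 * x)
    return true + rng.normal(scale=0.5)

# 1) Run a small batch of "experiments" on a coarse grid of input settings.
xs, ys = np.meshgrid(np.linspace(-2, 2, 7), np.linspace(-2, 2, 7))
samples = np.array([[x, y, noisy_measurement(x, y)]
                    for x, y in zip(xs.ravel(), ys.ravel())])

# 2) Fit a quadratic response surface z ~ a + bx + cy + dx^2 + ey^2 + fxy
#    by least squares, smoothing out the measurement noise.
X = np.column_stack([np.ones(len(samples)), samples[:, 0], samples[:, 1],
                     samples[:, 0] ** 2, samples[:, 1] ** 2,
                     samples[:, 0] * samples[:, 1]])
coef, *_ = np.linalg.lstsq(X, samples[:, 2], rcond=None)

# 3) Query the fitted surface (not the noisy data) for the next candidate point.
grid = np.linspace(-2, 2, 201)
gx, gy = np.meshgrid(grid, grid)
surface = (coef[0] + coef[1] * gx + coef[2] * gy
           + coef[3] * gx ** 2 + coef[4] * gy ** 2 + coef[5] * gx * gy)
i, j = np.unravel_index(np.argmin(surface), surface.shape)
print("next experiment suggested near x=%.2f, y=%.2f" % (gx[i, j], gy[i, j]))
```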

    Optimization for Decision Making II

    In the current context of the electronic governance of society, both administrations and citizens demand greater participation from all the actors involved in decision-making processes relative to the governance of society. This book presents the collective works published in the recent Special Issue (SI) entitled “Optimization for Decision Making II”. These works respond to the new challenges raised: the decision-making process can be carried out by applying different methods and tools and by pursuing different objectives. In real-life problems, the formulation of decision-making problems and the application of optimization techniques to support decisions are particularly complex, and a wide range of optimization techniques and methodologies are used to minimize risks, improve the quality of decisions or, in general, solve problems. In addition, a sensitivity or robustness analysis should be carried out to validate and analyse the influence of uncertainty on decision-making. This book brings together, in a coherent manner, a collection of inter- and multi-disciplinary works applied to the optimization of decision making.

    Comparison of metaheuristic strategies for peakbin selection in proteomic mass spectrometry data

    Mass spectrometry (MS) data provide a promising strategy for biomarker discovery, and for this purpose the detection of relevant peakbins in MS data is currently under intense research. Data from mass spectrometry are challenging to analyze because of their high dimensionality and the generally low number of available samples. To tackle this problem, the scientific community is becoming increasingly interested in applying feature subset selection techniques based on specialized machine learning algorithms. In this paper, we present a performance comparison of several metaheuristics: best first (BF), genetic algorithm (GA), scatter search (SS) and variable neighborhood search (VNS). All of these algorithms, except for GA, are applied here for the first time to the detection of relevant peakbins in MS data. Each metaheuristic search is embedded in two different schemes, filter and wrapper, coupled with Naive Bayes and SVM classifiers.
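    As an illustration of the wrapper scheme mentioned above, the following is a minimal sketch of wrapper-style feature (peakbin) selection with a Naive Bayes classifier in scikit-learn. The greedy forward search is a simple stand-in for the metaheuristics compared in the paper, and the synthetic data only mimics the high-dimensional, low-sample setting.

```python
# Minimal sketch of wrapper-style feature (peakbin) selection with a Naive
# Bayes classifier. The greedy forward search below is a simple stand-in for
# the metaheuristic searches (GA, scatter search, VNS, ...) compared in the
# paper; it is illustrative, not the authors' implementation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic high-dimensional, low-sample data, loosely mimicking MS peakbins.
X, y = make_classification(n_samples=80, n_features=200, n_informative=10,
                           random_state=0)

def wrapper_score(feature_idx):
    """Subset fitness = cross-validated accuracy of the wrapped classifier."""
    return cross_val_score(GaussianNB(), X[:, feature_idx], y, cv=5).mean()

selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
for _ in range(15):                      # budget of 15 forward steps
    score, feat = max((wrapper_score(selected + [f]), f) for f in remaining)
    if score <= best_score:              # stop when no candidate improves the subset
        break
    best_score, selected = score, selected + [feat]
    remaining.remove(feat)

print("selected peakbins:", selected, "CV accuracy: %.3f" % best_score)
```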

    Fast energy-aware OLSR routing in VANETs by means of a parallel evolutionary algorithm

    Open access policy taken from: https://v2.sherpa.ac.uk/id/publication/17174

    This work tackles the problem of reducing the power consumption of the OLSR routing protocol in vehicular networks. Nowadays, energy-aware and green communication protocols are important research topics, especially when deploying wireless mobile networks. This article introduces a fast automatic methodology for finding energy-efficient OLSR configurations by using a parallel evolutionary algorithm. The experimental analysis demonstrates that significant improvements over the standard configuration can be attained in terms of power consumption, with no noteworthy loss in QoS.

    Intelligent Intrusion Detection System using Enhanced Arithmetic Optimization Algorithm with Deep Learning Model

    The widespread interoperability and interconnectivity of computing systems has become indispensable to our day-to-day activities, and the resulting vulnerabilities make cyber-security systems necessary for protecting communication exchanges. Secure transmission requires security measures that combat current threats and that continue to evolve against emerging risks. Although firewalls were devised to secure networks, they cannot detect intrusions in real time. Destructive cyber-attacks therefore pose severe security challenges, requiring reliable and adaptable intrusion detection systems (IDS) that can monitor unauthorized access, policy violations, and malicious activity in practice. Conventional machine learning (ML) techniques have been shown to identify data patterns and detect cyber-attacks successfully, and deep learning (DL) methods are currently useful for designing accurate and effective IDS. In this context, this study develops an intelligent IDS using an enhanced arithmetic optimization algorithm with deep learning (IIDS-EAOADL). The presented IIDS-EAOADL model performs a data standardization process to normalize the input data. An equilibrium optimizer based feature selection (EOFS) approach is then developed to select an optimal subset of features. For intrusion detection, a deep wavelet autoencoder (DWAE) classifier is applied, and since proper tuning of the DWAE parameters is highly important, the EAOA algorithm is used to tune them. To validate the IIDS-EAOADL technique, a widespread simulation analysis is carried out on a benchmark dataset. The experimental outcomes demonstrate the improvements of the IIDS-EAOADL model over other existing techniques.
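    The following is a hypothetical stand-in for the general pipeline pattern the abstract describes (standardize, then score records by reconstruction error): a plain scikit-learn MLP autoencoder replaces the paper's deep wavelet autoencoder, and neither the EOFS feature selection nor the EAOA tuning step is shown.

```python
# Hypothetical stand-in for the kind of IDS pipeline sketched in the abstract:
# standardize the inputs, then flag records whose autoencoder reconstruction
# error is unusually large. A plain MLP autoencoder replaces the paper's deep
# wavelet autoencoder, and no metaheuristic tuning is included here.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
normal_traffic = rng.normal(0.0, 1.0, size=(500, 20))   # benign records
attack_traffic = rng.normal(4.0, 1.5, size=(20, 20))    # anomalous records

# 1) Data standardization, fitted on (presumed) benign traffic only.
scaler = StandardScaler().fit(normal_traffic)
X_train = scaler.transform(normal_traffic)

# 2) Train an autoencoder: inputs reconstructed through a narrow hidden layer.
autoencoder = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
autoencoder.fit(X_train, X_train)

# 3) Score new records by reconstruction error; large error -> likely intrusion.
def anomaly_score(records):
    X = scaler.transform(records)
    return np.mean((autoencoder.predict(X) - X) ** 2, axis=1)

threshold = np.percentile(anomaly_score(normal_traffic), 99)
print("flagged attacks:", int(np.sum(anomaly_score(attack_traffic) > threshold)),
      "of", len(attack_traffic))
```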

    An Efficient Ant Colony Optimization Framework for HPC Environments

    Funded for open access publication: Universidade da Coruña/CISUG.

    Combinatorial optimization problems arise in many disciplines, both in the basic sciences and in applied fields such as engineering and economics. One of the most popular combinatorial optimization methods is the Ant Colony Optimization (ACO) metaheuristic. Its parallel nature makes it especially attractive for implementation and execution in High Performance Computing (HPC) environments. Here we present a novel parallel ACO strategy making use of efficient asynchronous decentralized cooperative mechanisms. This strategy seeks to fulfill two objectives: (i) acceleration of the computations by performing the ants’ solution construction in parallel; (ii) convergence improvement through the stimulation of diversification in the search and cooperation between different colonies. The two main features of the proposal, decentralization and desynchronization, enable a more effective and efficient response in environments where resources are highly coupled. Examples of such infrastructures include traditional HPC clusters as well as newer distributed environments, such as cloud infrastructures or even local computer networks. The proposal has been evaluated using the popular Traveling Salesman Problem (TSP), a well-known NP-hard problem widely used in the literature to test combinatorial optimization methods. An exhaustive evaluation was carried out using three medium and large size instances from the TSPLIB library, and the experiments show encouraging results, with superlinear speedups compared to the sequential algorithm (e.g. speedups of 18 with 16 cores) and very good scalability (experiments were performed with up to 384 cores, improving execution time even at that scale).

    This work was supported by the Ministry of Science and Innovation of Spain (PID2019-104184RB-I00 / AEI / 10.13039/501100011033), and by Xunta de Galicia and FEDER funds of the EU (Centro de Investigación de Galicia accreditation 2019–2022, ref. ED431G 2019/01; Consolidation Program of Competitive Reference Groups, ref. ED431C 2021/30). JRB acknowledges funding from the Ministry of Science and Innovation of Spain MCIN / AEI / 10.13039/501100011033 through grant PID2020-117271RB-C22 (BIODYNAMICS), and from MCIN / AEI / 10.13039/501100011033 and “ERDF A way of making Europe” through grant DPI2017-82896-C2-2-R (SYNBIOCONTROL). The authors also acknowledge the Galician Supercomputing Center (CESGA) for access to its facilities. Funding for open access charge: Universidade da Coruña/CISUG. Xunta de Galicia; ED431G 2019/01. Xunta de Galicia; ED431C 2021/30.
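    To make the ants' parallelizable construction step concrete, the following is a compact, sequential ACO sketch for a small random TSP instance. It illustrates the general metaheuristic only; it is not the decentralized, asynchronous framework presented in the paper, and the instance is synthetic rather than from TSPLIB.

```python
# Compact Ant Colony Optimization sketch for a small random TSP instance,
# illustrating the per-ant solution-construction step that the paper
# parallelizes. Sequential and simplified; not the proposed framework.
import numpy as np

rng = np.random.default_rng(2)
n = 20
coords = rng.random((n, 2))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
np.fill_diagonal(dist, np.inf)

alpha, beta, rho, n_ants, n_iters = 1.0, 3.0, 0.5, 20, 100
pheromone = np.ones((n, n))
heuristic = 1.0 / dist                      # prefer short edges

def construct_tour():
    """One ant builds a tour city by city; this step is independent per ant."""
    tour = [int(rng.integers(n))]
    unvisited = set(range(n)) - {tour[0]}
    while unvisited:
        i, cand = tour[-1], np.array(sorted(unvisited))
        weights = pheromone[i, cand] ** alpha * heuristic[i, cand] ** beta
        nxt = int(rng.choice(cand, p=weights / weights.sum()))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(tour):
    return sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))

best_tour, best_len = None, np.inf
for _ in range(n_iters):
    tours = [construct_tour() for _ in range(n_ants)]   # the parallelizable part
    pheromone *= (1.0 - rho)                            # evaporation
    for tour in tours:
        length = tour_length(tour)
        if length < best_len:
            best_tour, best_len = tour, length
        for k in range(n):                              # pheromone deposit
            a, b = tour[k], tour[(k + 1) % n]
            pheromone[a, b] += 1.0 / length
            pheromone[b, a] += 1.0 / length

print("best tour length found: %.3f" % best_len)
```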

    Optimization of Airfield Parking and Fuel Asset Dispersal to Maximize Survivability and Mission Capability Level

    While the US focus for the majority of the past two decades has been on combating insurgency and promoting stability in Southwest Asia, strategic focus is beginning to shift toward concerns of conflict with a near-peer state. Such conflict brings with it the risk of ballistic missile attack on air bases. With 26 conflicts worldwide in the past 100 years including attacks on air bases, new doctrine and modeling capacity are needed to enable the Department of Defense to continue using vulnerable bases during conflicts involving ballistic missiles. Several models have been developed to date for Air Force strategic planning, but they have limited applicability at the tactical level or for civil engineers. This thesis presents the development of a novel model capable of identifying base layout characteristics for aprons and fuel depots that maximize dispersal and minimize the impact on sortie generation times during normal operations. The model is implemented using multi-objective genetic algorithms to identify solutions that provide optimal tradeoffs between competing objectives, and it is assessed using an application example. These capabilities are expected to assist military engineers in laying out parking plans and fuel depots that ensure maximum resilience with minimal impact to the user, enabling continued sortie generation in a contested region.
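    The following is a small, hypothetical illustration of the multi-objective tradeoff idea: a generic Pareto filter that keeps only non-dominated candidate layouts when one objective (dispersal) is maximized and the other (sortie-generation impact) is minimized. It is not the thesis' genetic-algorithm model, and the scores are made up.

```python
# Hypothetical illustration of the multi-objective tradeoff described above:
# keep only the non-dominated (Pareto-optimal) candidate layouts when dispersal
# is to be maximized and sortie-generation impact minimized. A generic Pareto
# filter, not the thesis' genetic-algorithm model; scores are synthetic.
import random

random.seed(3)
# Each candidate layout is scored as (dispersal, sortie_impact_minutes).
layouts = [(random.uniform(0, 10), random.uniform(0, 60)) for _ in range(50)]

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

pareto_front = [p for p in layouts
                if not any(dominates(q, p) for q in layouts if q is not p)]

for dispersal, impact in sorted(pareto_front):
    print("dispersal %.2f  sortie impact %.1f min" % (dispersal, impact))
```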

    The probability of default in internal ratings based (IRB) models in Basel II: an application of the rough sets methodology

    The new Capital Accord of June 2004 (Basel II) opens the way for, and encourages, credit entities to implement their own models for measuring financial risks. In this paper we focus on internal ratings based (IRB) models for the assessment of credit risk, and specifically on the approach to one of their components: the probability of default (PD). The traditional methods used for modelling credit risk, such as discriminant analysis and logit and probit models, rest on a series of statistical restrictions; the rough sets methodology is presented as an alternative to these classical statistical methods that overcomes their limitations. We apply the rough sets methodology to a database of 106 companies applying for credit, with the aim of obtaining the ratios that best discriminate between healthy and failed companies, together with a series of decision rules that help to detect potentially defaulting operations, as a first step in modelling the probability of default. Finally, we compare the results obtained with those of classic discriminant analysis and conclude that, in our case, the rough sets methodology gives better classification results.

    Junta de Andalucía P06-SEJ-0153
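    The following is a tiny, self-contained sketch of the core rough-set machinery the abstract relies on: indiscernibility classes over discretized financial ratios, and the lower and upper approximations of the "failed" concept from which certain and possible decision rules are read off. The ratio names and data are invented for illustration; this is not the paper's dataset or rule set.

```python
# Tiny, self-contained sketch of rough-set approximations: indiscernibility
# classes over discretized financial ratios, and the lower/upper approximations
# of the "failed" concept. Data and ratio names are invented for illustration.
from collections import defaultdict

# Each company: discretized condition attributes plus the decision "failed"/"healthy".
companies = [
    ({"liquidity": "low",  "leverage": "high"}, "failed"),
    ({"liquidity": "low",  "leverage": "high"}, "failed"),
    ({"liquidity": "high", "leverage": "low"},  "healthy"),
    ({"liquidity": "high", "leverage": "high"}, "healthy"),
    ({"liquidity": "high", "leverage": "high"}, "failed"),   # inconsistent case
    ({"liquidity": "low",  "leverage": "low"},  "healthy"),
]

# Indiscernibility classes: companies sharing the same attribute values.
classes = defaultdict(list)
for idx, (attrs, _label) in enumerate(companies):
    classes[tuple(sorted(attrs.items()))].append(idx)

failed = {i for i, (_, label) in enumerate(companies) if label == "failed"}

lower = set()   # certainly failed: the whole indiscernibility class is failed
upper = set()   # possibly failed: the class overlaps the failed concept
for members in classes.values():
    if set(members) <= failed:
        lower.update(members)
    if set(members) & failed:
        upper.update(members)

print("lower approximation (certain rules):", sorted(lower))
print("upper approximation (possible):     ", sorted(upper))
print("boundary region (ambiguous cases):  ", sorted(upper - lower))
```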