    Vibration-based damage localisation: Impulse response identification and model updating methods

    Structural health monitoring has attracted growing interest over recent decades. As the technology has matured and monitoring systems are deployed commercially, the development of more powerful and precise methods is the logical next step in this field. Vibration sensor networks with few measurement points, combined with the utilisation of ambient vibration sources, are especially attractive for practical applications, as this approach promises to be cost-effective while requiring minimal modification of the monitored structures. Since efficient methods for damage detection have already been developed for such sensor networks, the research focus is shifting towards extracting more information from the measurement data, in particular towards the localisation and quantification of damage. Two main concepts have produced promising results for damage localisation. The first approach involves a mechanical model of the structure, which is used in a model updating scheme to find the damaged areas of the structure. The second is a purely data-driven approach, which relies on residuals of vibration estimations to find regions where damage is probable. While much research has been conducted following these two concepts, different approaches are rarely compared directly on the same data sets. This thesis therefore presents advanced methods for vibration-based damage localisation using model updating as well as a data-driven method, and provides a direct comparison using the same vibration measurement data. The model updating approach presented in this thesis relies on multiobjective optimisation. Hence, the applied numerical optimisation algorithms are presented first. On this basis, the model updating parameterisation and objective function formulation are developed. The data-driven approach employs residuals from vibration estimations obtained using multiple-input finite impulse response filters. Both approaches are then verified using a simulated cantilever beam considering multiple damage scenarios. Finally, experimentally obtained data from an outdoor girder mast structure are used to validate the approaches. In summary, this thesis provides an assessment of model updating and residual-based damage localisation by means of verification and validation cases. The residual-based method is found to exhibit numerical performance sufficient for real-time applications while providing high sensitivity towards damage; the localisation accuracy, however, is superior using the model updating method.
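    The residual-based idea lends itself to a compact illustration. The sketch below is a hypothetical illustration under simplifying assumptions, not the thesis's implementation (all names, dimensions, and the least-squares fitting choice are invented): a multiple-input finite impulse response filter is fitted on healthy-state data to predict one sensor from the others, and elevated prediction residuals on new data then point to probable damage near that sensor.

```python
import numpy as np

def fir_design_matrix(inputs, order):
    """Stack `order` lagged samples of every input channel so that row t
    contains the past samples used to predict the target at time t + order."""
    n_in, n = inputs.shape
    rows = n - order
    return np.hstack([
        np.column_stack([inputs[i, k:k + rows] for k in range(order)])
        for i in range(n_in)
    ])

def fit_fir(inputs, target, order=32):
    """Least-squares fit of a multiple-input FIR filter predicting
    `target` (shape (n,)) from `inputs` (shape (n_sensors, n))."""
    X = fir_design_matrix(inputs, order)
    coeffs, *_ = np.linalg.lstsq(X, target[order:], rcond=None)
    return coeffs

def residual_rms(inputs, target, coeffs, order=32):
    """RMS of the one-step prediction residual; a rise relative to the
    healthy-state baseline flags probable damage near the target sensor."""
    r = target[order:] - fir_design_matrix(inputs, order) @ coeffs
    return float(np.sqrt(np.mean(r ** 2)))
```

    A baseline residual RMS computed on healthy-state data provides the reference against which residuals from later measurements are compared, sensor by sensor.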

    Design and Evaluation of a Traffic Safety System based on Vehicular Networks for the Next Generation of Intelligent Vehicles

    The integration of telecommunications technologies into the automotive sector will allow vehicles to exchange information through Vehicular Networks, opening up numerous possibilities. This thesis focuses on improving road safety and reducing accident rates by means of Intelligent Transportation Systems (ITS). The first step is to achieve efficient dissemination of warning messages about potentially dangerous situations. We developed a framework for simulating the exchange of messages between vehicles, which we used to propose efficient dissemination schemes. We also show that the street layout has a strong influence on the efficiency of the dissemination process. Our dissemination algorithms are part of a broader architecture (e-NOTIFY) capable of detecting traffic accidents and notifying the emergency services. The development and evaluation of a prototype demonstrated the feasibility of the system and how it could help reduce the number of road casualties.
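    At the core of such dissemination schemes is a local rebroadcast decision made by each receiving vehicle. The sketch below shows one common rule of this kind, a distance-based rebroadcast with a freshness limit; the thresholds, field names, and the rule itself are illustrative assumptions rather than the scheme proposed in the thesis.

```python
import time

SEEN = {}            # message id -> farthest forwarder distance heard so far
MIN_GAIN_M = 50.0    # rebroadcast only if we extend coverage by >= 50 m
TTL_S = 5.0          # drop warnings older than 5 seconds

def should_rebroadcast(msg_id, sent_at, dist_from_source_m, now=None):
    """Decide whether this vehicle forwards a received warning message."""
    now = time.time() if now is None else now
    if now - sent_at > TTL_S:
        return False                 # stale warning: suppress
    farthest = SEEN.get(msg_id, 0.0)
    if dist_from_source_m < farthest + MIN_GAIN_M:
        return False                 # a nearer forwarder already covers us
    SEEN[msg_id] = dist_from_source_m
    return True                      # we meaningfully extend coverage
```

    Rules like this one curb the broadcast storm of naive flooding; how well they do so depends on the street layout, which is why the thesis evaluates dissemination schemes against different road topologies.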

    Advances in Evolutionary Algorithms

    With the recent trends towards massive data sets and significant computational power, combined with advances in evolutionary algorithms, evolutionary computation is becoming much more relevant to practice. The aim of the book is to present recent improvements, innovative ideas, and concepts from one part of the vast field of evolutionary algorithms.

    Simulation of automated negotiation

    Automated negotiation is argued to improve negotiation outcomes by replacing humans and to enable coordination in autonomous agent systems. As operative systems do not yet exist, scholars rely on simulations to evaluate potential systems for automated negotiation. This dissertation reviews the state-of-the-art literature on the simulation of automated negotiation along its main components: negotiation problem, interaction protocol, and software agents. Deficiencies of existing approaches concerning practical application in an open environment such as the Internet, where automated negotiation proceeds fast, with changing opponents, and for various negotiation problems, are identified. To address these deficiencies, we develop and simulate automated negotiation systems consisting of software agents that follow generic offer generation and concession strategies, and interaction protocols that allow these agents to interrupt their strategy to avoid exploitation and unfavorable agreements. Outcomes of simulation runs are compared across systems and to human negotiation along various outcome dimensions (proportion of agreements, dyadic and individual performance, and fairness) for various negotiation problems derived from negotiation experiments with human subjects. Though there are trade-offs between the different outcome dimensions, the best-performing systems consist of software agents that systematically propose offers of monotonically decreasing utility and make first concession steps if the opponent reciprocated previous concessions, together with an interaction protocol that enables agents to reject unfavorable offers, without immediately aborting the negotiation, in order to elicit new offers from the opponent. These systems performed very well in all outcome dimensions when compared with other systems and were the only ones that outperformed negotiation between humans in all dimensions.
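    A toy sketch of this monotone-concession behaviour on a single-issue split of a unit pie follows; it is a deliberately simplified stand-in for the multi-issue problems simulated in the dissertation, and the step size and reservation floors are invented.

```python
def negotiate(step=0.05, floor=0.45, max_rounds=100):
    """Single-issue split of a unit pie between agents A and B. Each agent
    monotonically lowers its demanded share and concedes again only if the
    opponent's demand dropped in the previous round."""
    demand = {"A": 1.0, "B": 1.0}
    before_last_round = dict(demand)
    for round_no in range(max_rounds):
        if demand["A"] + demand["B"] <= 1.0:
            return demand["A"], demand["B"]      # compatible demands: deal
        snapshot = dict(demand)                  # demands entering this round
        for me, opp in (("A", "B"), ("B", "A")):
            opp_conceded = (round_no == 0
                            or snapshot[opp] < before_last_round[opp])
            if opp_conceded and demand[me] - step >= floor:
                demand[me] -= step               # monotone concession
        before_last_round = snapshot
    return None                                  # impasse: no agreement
```

    With floors of 0.45 the two agents meet at an even split; with floors above 0.5 the demands stall and the run ends in impasse, mirroring the trade-off between agreement frequency and individual performance discussed above.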

    Optimisation of new product development in the pharmaceutical industry using a multicriteria genetic algorithm

    New Product Development (NPD) constitutes a challenging problem in the pharmaceutical industry due to the characteristics of the development pipeline: the presence of uncertainty, the high capital costs involved, the interdependency between projects, the limited availability of resources, the overwhelming number of decisions arising from the length of the time horizon (about 10 years), and the combinatorial nature of a portfolio. Formally, the NPD problem can be stated as follows: select a set of R&D projects from a pool of candidate projects in order to satisfy several criteria (economic profitability, time to market) while coping with the uncertain nature of the projects. More precisely, the recurrent key issues are to determine which projects to develop once target molecules have been identified, their order, and the level of resources to assign. In this context, the proposed approach combines discrete-event stochastic simulation (a Monte Carlo approach) with a multiobjective genetic algorithm (NSGA-II, Non-dominated Sorting Genetic Algorithm II) to optimize the highly combinatorial portfolio management problem. An object-oriented model previously developed for batch plant scheduling and design, well suited to reuse of both structure and operating logic, is extended to embed the case of new product management. Two case studies illustrate and validate the approach. The simulation study shows that three performance evaluation criteria must be considered for decision making: the Net Present Value (NPV) of a sequence, its associated risk (defined as the number of positive occurrences of NPV among the samples), and the time to market. These criteria are used in the multiobjective formulation of the optimization problem. In that context, Genetic Algorithms (GAs) are particularly attractive for this kind of problem, due to their ability to lead directly to the Pareto front and to account for the combinatorial aspect. NSGA-II has been adapted to the treated case to take into account both the number of products in a sequence and the drug release order. An analysis performed for a representative case study on the different pairs of criteria, for both bi- and tricriteria optimization, shows the optimization strategy to be efficient and particularly elitist in detecting the sequences worth considering by decision makers: only a few sequences are detected. Among these sequences, large portfolios cause resource queues and launch delays and are eliminated by the bicriteria optimization strategy, whereas small portfolios, which reduce queuing and time to launch, appear as good candidates. Time proves to be an important criterion to optimize simultaneously with the NPV and risk criteria, highlighting the value of tricriteria optimization. Finally, the order in which drugs are released into the pipeline is a major decision variable, as in shop scheduling problems.
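    The simulation/optimization coupling can be pictured with a small sketch: each candidate launch sequence is scored by Monte Carlo sampling of uncertain project outcomes, and a Pareto filter (the first front of an NSGA-II-style ranking) retains the non-dominated sequences. The cost, payoff, and success-probability fields below are invented placeholders; the thesis couples a full discrete-event simulator with NSGA-II rather than this toy model.

```python
import random

def evaluate(sequence, projects, n_samples=500):
    """Score one launch sequence: (expected NPV, risk = P(NPV < 0),
    total completion time). Uncertainty enters through per-project
    success draws; discounting is omitted for brevity."""
    npvs = []
    t_total = sum(projects[p]["duration"] for p in sequence)
    for _ in range(n_samples):
        npv = 0.0
        for p in sequence:
            npv -= projects[p]["cost"]
            if random.random() < projects[p]["p_success"]:
                npv += projects[p]["payoff"]
        npvs.append(npv)
    exp_npv = sum(npvs) / n_samples
    risk = sum(v < 0 for v in npvs) / n_samples
    return exp_npv, risk, t_total

def dominates(a, b):
    """Pareto dominance for (npv, risk, time): npv is maximised,
    risk and time are minimised."""
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return no_worse and better

def pareto_front(scored):
    """Keep the non-dominated (sequence, objectives) pairs, i.e. the
    first front of an NSGA-II-style ranking."""
    return [(s, f) for s, f in scored
            if not any(dominates(g, f) for _, g in scored)]
```

    Scoring a pool of candidate sequences with evaluate() and passing the results to pareto_front() yields the kind of short list of non-dominated sequences described above.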

    Economics of Water Quality Protection from Nonpoint Sources: Theory and Practice

    Water quality is a major environmental issue. Pollution from nonpoint sources is the single largest remaining source of water quality impairments in the United States. Agriculture is a major source of several nonpoint-source pollutants, including nutrients, sediment, pesticides, and salts. Agricultural nonpoint pollution reduction policies can be designed to induce producers to change their production practices in ways that improve the environmental and related economic consequences of production. The information necessary to design economically efficient pollution control policies is almost always lacking. Instead, policies can be designed to achieve specific environmental or other similarly related goals at least cost, given transaction costs and any other political, legal, or informational constraints that may exist. This report outlines the economic characteristics of five instruments that can be used to reduce agricultural nonpoint source pollution (economic incentives, standards, education, liability, and research) and discusses empirical research related to the use of these instruments.
    Keywords: water quality, nonpoint-source pollution, economic incentives, standards, education, liability, research, Environmental Economics and Policy

    Resource allocation optimization problems in the public sector

    This dissertation consists of three distinct, although conceptually related, public sector topics: the Transportation Security Administration (TSA), U.S. Customs and Border Protection (CBP), and the Georgia Trauma Care Network Commission (GTCNC). The topics are unified in their mathematical modeling and mixed-integer programming solution strategies. In Chapter 2, we discuss strategies for solving large-scale integer programs, including column generation and the particle swarm optimization (PSO) heuristic. In order to solve problems with an exponential number of decision variables, we employ Dantzig-Wolfe decomposition to take advantage of the special subproblem structures encountered in resource allocation problems. In each of the resource allocation problems presented, we concentrate on selecting an optimal portfolio of improvement measures. In most cases, the number of potential investment portfolios is too large to be expressed explicitly or stored on a computer. We use column generation to solve these problems to optimality, but are hindered by the solution time and large CPU requirements. We explore utilizing multi-swarm particle swarm optimization to solve the decomposition heuristically, and we explore integrating multi-swarm PSO into the column generation framework to solve the pricing problem for entering columns of negative reduced cost. In Chapter 3, we present a TSA problem to allocate security measures across all federally funded airports nationwide. This project establishes a quantitative construct for enterprise risk assessment and optimal resource allocation to achieve the best aviation security. We first analyze and model the various aviation transportation risks and establish their interdependencies. The mixed-integer program determines how best to invest any additional security measures for the best overall risk protection and return on investment. Our analysis involves cascading and interdependency modeling of the multi-tier risk taxonomy and overlaying security measures. The model selects optimal security measure allocations for each airport with the objectives of minimizing the probability of false clears, maximizing the probability of threat detection, and maximizing the risk posture (ability to mitigate risks) in aviation security. The risk assessment and optimal resource allocation construct are generalizable and are applied to the CBP problem. In Chapter 4, we optimize security measure investments to achieve the most cost-effective deterrence and detection capabilities for the CBP. A large-scale resource allocation integer program was successfully modeled that rapidly returns good Pareto optimal results. The model incorporates the utility of each measure and the probability of success, along with multiple objectives. To the best of our knowledge, our work presents the first mathematical model that optimizes security strategies for the CBP and is the first to introduce a utility factor to emphasize deterrence and detection impact. The model accommodates different resources, constraints, and various types of objectives. In Chapter 5, we analyze the emergency trauma network problem, first by simulation. The simulation offers a framework of resource allocation for trauma systems and possible ways to evaluate the impact of the investments on the overall performance of the trauma system. The simulation works as an effective proof of concept to demonstrate that improvements to patient well-being can be measured and that alternative solutions can be analyzed.
    We then explore three different formulations to model the emergency trauma network as a mixed-integer program. The first model is a Multi-Region, Multi-Depot, Multi-Trip Vehicle Routing Problem with Time Windows, a known expansion of the vehicle routing problem that has been extended here to model the Georgia trauma network. We then adapt an Ambulance Routing Problem (ARP) to the aforementioned VRP; to our knowledge, no ARP of this magnitude, built as an extension of a VRP, has been studied. One of the primary differences is that many ARPs are constructed for disaster scenarios rather than day-to-day emergency trauma operations. The new ARP also implements more constraints based on trauma level limitations for patients and hospitals. Lastly, the Resource Allocation ARP is constructed to reflect the investment decisions presented in the simulation.
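    The PSO heuristic discussed in Chapter 2 can be pictured with a minimal sketch: a binary particle swarm selecting a knapsack-style portfolio of improvement measures under a budget. The values, weights, budget, and parameter settings below are invented placeholders, and this single-swarm version is a simplification of the multi-swarm variant used in the dissertation.

```python
import math
import random

def binary_pso(values, weights, budget, n_particles=30, iters=200):
    """Binary PSO for a knapsack-style portfolio: maximise the total value
    of selected measures subject to a budget on total weight."""
    n = len(values)

    def score(x):
        w = sum(wi for wi, xi in zip(weights, x) if xi)
        if w > budget:
            return float("-inf")           # infeasible portfolio
        return sum(vi for vi, xi in zip(values, x) if xi)

    X = [[random.random() < 0.5 for _ in range(n)] for _ in range(n_particles)]
    V = [[0.0] * n for _ in range(n_particles)]
    pbest = [list(x) for x in X]
    gbest = max(pbest, key=score)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                # Standard velocity update: inertia 0.7, cognitive and
                # social weights of 1.5 each.
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * r1 * (pbest[i][d] - X[i][d])
                           + 1.5 * r2 * (gbest[d] - X[i][d]))
                # Sigmoid turns the velocity into a bit-set probability.
                X[i][d] = random.random() < 1.0 / (1.0 + math.exp(-V[i][d]))
            if score(X[i]) > score(pbest[i]):
                pbest[i] = list(X[i])
        gbest = max(pbest + [gbest], key=score)
    return gbest, score(gbest)
```

    In a column generation setting, a swarm like this would search the pricing subproblem for portfolios (columns) of negative reduced cost instead of raw knapsack value; the multi-swarm variant runs several such swarms to preserve diversity.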

    14th Conference on DATA ANALYSIS METHODS for Software Systems

    DAMSS-2023 is the 14th International Conference on Data Analysis Methods for Software Systems, held annually in Druskininkai, Lithuania, at the same venue and time of year. The exception was 2020, when the world was gripped by the COVID-19 pandemic and the movement of people was severely restricted. After a year's break, the conference was back on track, and the next conference succeeded in its primary goal of lively scientific communication. The conference focuses on live interaction among participants. To make this communication more efficient, most of the presentations are poster presentations, a format that has proven highly effective; several oral sessions are held as well. The history of the conference dates back to 2009, when 16 papers were presented. It began as a workshop and has evolved into a well-known conference. The idea of such a workshop originated at the Institute of Mathematics and Informatics, now the Institute of Data Science and Digital Technologies of Vilnius University. The Lithuanian Academy of Sciences and the Lithuanian Computer Society supported this idea, which gained enthusiastic acceptance from both the Lithuanian and international scientific communities. This year's conference features 84 presentations, with 137 registered participants from 11 countries. The conference serves as a gathering point for researchers from six Lithuanian universities, making it the main annual meeting for Lithuanian computer scientists. The primary aim of the conference is to showcase research conducted at Lithuanian and foreign universities in the fields of data science and software engineering. The annual organization of the conference facilitates the rapid exchange of new ideas within the scientific community. Seven IT companies supported the conference this year, indicating the relevance of the conference topics to the business sector. In addition, the conference is supported by the Lithuanian Research Council and the National Science and Technology Council (Taiwan, R.O.C.). The conference covers a wide range of topics, including Applied Mathematics, Artificial Intelligence, Big Data, Bioinformatics, Blockchain Technologies, Business Rules, Software Engineering, Cybersecurity, Data Science, Deep Learning, High-Performance Computing, Data Visualization, Machine Learning, Medical Informatics, Modelling Educational Data, Ontological Engineering, Optimization, Quantum Computing, and Signal Processing. This book provides an overview of all presentations from the DAMSS-2023 conference.