A novel metaheuristic algorithm: Dynamic Virtual Bats Algorithm for global optimization
A novel nature-inspired algorithm called the Dynamic Virtual Bats Algorithm (DVBA)
is presented in this thesis. DVBA is inspired by a bat's ability to manipulate the frequency
and wavelength of the sound waves it emits when hunting. A role-based search has been
developed to improve the diversification and intensification capabilities of the standard Bat
Algorithm (BA). Although DVBA, like BA, is inspired by bats, it is conceptually very
different from BA. BA requires a large population; DVBA, in contrast, employs
just two bats to handle the "exploration versus exploitation" conflict, which is known as a
real challenge for all optimization algorithms.
First, we study bats' echolocation ability; next, the best-known bat-inspired
algorithm and its modified versions are analyzed. The contributions of this thesis begin
with studying and imitating bats' hunting strategies from different perspectives. In DVBA, there are only two bats: an explorer bat and an exploiter bat. While the explorer bat explores the
search space, the exploiter bat intensively searches the region with the highest
probability of containing the desired target. Depending on their locations, the bats exchange
roles dynamically.
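The explorer/exploiter role exchange can be illustrated with a minimal sketch. This is not the thesis's actual update equations (which manipulate wavelength and frequency of simulated sound waves); the objective function, step sizes, and greedy acceptance rule below are all illustrative assumptions.

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares (minimum 0 at the origin)."""
    return sum(v * v for v in x)

def dvba_sketch(f, dim=2, bounds=(-5.0, 5.0), iters=200, seed=1):
    """Two bats search; each iteration the fitter bat takes the exploiter
    role (narrow 'sound wave', small steps) and the other the explorer
    role (wide wave, large steps), mimicking DVBA's dynamic role exchange."""
    rng = random.Random(seed)
    lo, hi = bounds
    bats = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(2)]
    best = min(bats, key=f)[:]
    for _ in range(iters):
        bats.sort(key=f)                      # bats[0] becomes the exploiter
        for role, bat in enumerate(bats):
            scale = 0.1 if role == 0 else 1.0  # narrow vs. wide search wave
            cand = [min(hi, max(lo, v + rng.gauss(0.0, scale))) for v in bat]
            if f(cand) < f(bat):               # greedy move
                bat[:] = cand
            if f(bat) < f(best):
                best = bat[:]
    return best
```

With only two search points, the exploiter refines the current best region while the explorer keeps sampling broadly, which is the diversification/intensification split the abstract describes.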
The performance of DVBA is extensively evaluated on a suite of 30 bound-constrained
optimization problems from the Congress on Evolutionary Computation (CEC) 2014 and
compared with 4 classical optimization algorithms, 4 state-of-the-art modified bat
algorithms, and 5 algorithms from a special session at CEC 2014. In addition, DVBA
is tested on a supply-chain cost problem to assess its performance on a complicated real-world
problem. The experimental results demonstrate that the proposed DVBA outperforms, or
is comparable to, its competitors in terms of the quality of the final solution and its convergence
rate.
Epoka Universit
Blind source separation for clutter and noise suppression in ultrasound imaging: a review for different applications
Blind source separation (BSS) refers to a number of signal processing techniques that decompose a signal into several 'source' signals. In recent years, BSS has been increasingly employed for the suppression of clutter and noise in ultrasonic imaging. In particular, its ability to separate sources based on measures of independence rather than their temporal or spatial frequency content makes BSS a powerful filtering tool for data in which the desired and undesired signals overlap in the spectral domain. The purpose of this work was to review the existing BSS methods and their potential in ultrasound imaging. Furthermore, we tested and compared the effectiveness of these techniques in the fields of contrast-ultrasound super-resolution, contrast quantification, and speckle tracking. For all applications, this was done in silico, in vitro, and in vivo. We found that the critical step in BSS filtering is the identification of components containing the desired signal, and we highlighted the value of a priori domain knowledge in defining effective criteria for signal component selection.
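A simple instance of the component-selection problem described above is SVD-based clutter filtering, one of the decomposition families such reviews cover. The sketch below is a toy, not the paper's method: the selection criterion (drop the most energetic components, assumed to be slowly varying tissue clutter) and the toy data are assumptions for illustration.

```python
import numpy as np

def svd_clutter_filter(casorati, n_clutter=1):
    """Suppress the n_clutter most energetic singular components.

    casorati : (n_pixels, n_frames) matrix. Bright, slowly varying tissue
    clutter tends to dominate the largest singular values, while blood or
    contrast-agent signal lives in lower-energy components.
    """
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    s_f = s.copy()
    s_f[:n_clutter] = 0.0            # zero out the assumed clutter subspace
    return (U * s_f) @ Vt

# toy demo: strong rank-1 "clutter" plus weak random "signal"
rng = np.random.default_rng(0)
clutter = 10.0 * np.outer(rng.standard_normal(64), np.ones(32))
signal = rng.standard_normal((64, 32))
filtered = svd_clutter_filter(clutter + signal, n_clutter=1)
```

The hard part in practice, as the review stresses, is choosing which components to discard; the fixed `n_clutter=1` here works only because the toy clutter is exactly rank one.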
Model updating in structural dynamics: advanced parametrization, optimal regularization, and symmetry considerations
Numerical models are pervasive tools in science and engineering for simulation, design, and assessment of physical systems. In structural engineering, finite element (FE) models are extensively used to predict responses and estimate risk for built structures. While FE models attempt to exactly replicate the physics of their corresponding structures, discrepancies always exist between measured and model output responses. Discrepancies are related to aleatoric uncertainties, such as measurement noise, and epistemic uncertainties, such as modeling errors. Epistemic uncertainties indicate that the FE model may not fully represent the built structure, greatly limiting its utility for simulation and structural assessment. Model updating is used to reduce error between measurement and model-output responses through adjustment of uncertain FE model parameters, typically using data from structural vibration studies. However, the model updating problem is often ill-posed with more unknown parameters than available data, such that parameters cannot be uniquely inferred from the data.
This dissertation focuses on two approaches to remedy ill-posedness in FE model updating: parametrization and regularization. Parametrization produces a reduced set of updating parameters to estimate, thereby improving posedness. An ideal parametrization should incorporate model uncertainties, effectively reduce errors, and use as few parameters as possible. This is a challenging task since a large number of candidate parametrizations are available in any model updating problem. To ameliorate this, three new parametrization techniques are proposed: improved parameter clustering with residual-based weighting, singular vector decomposition-based parametrization, and incremental reparametrization. All of these methods utilize local system sensitivity information, providing effective reduced-order parametrizations which incorporate FE model uncertainties.
The other focus of this dissertation is regularization, which improves posedness by providing additional constraints on the updating problem, such as a minimum-norm parameter solution constraint. Optimal regularization is proposed for use in model updating to provide an optimal balance between residual reduction and parameter change minimization. This approach links computationally-efficient deterministic model updating with asymptotic Bayesian inference to provide regularization based on maximal model evidence. Estimates are also provided for uncertainties and model evidence, along with an interesting measure of parameter efficiency.
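The residual-reduction versus parameter-change trade-off described above can be made concrete with a single Tikhonov-regularized updating step. This sketch is generic, not the dissertation's algorithm: the dissertation selects the regularization weight via model evidence, whereas here `lam` is simply fixed as an assumption, and `S` and `r` are a made-up ill-posed toy problem.

```python
import numpy as np

def regularized_update(S, r, lam):
    """One linearized model-updating step with Tikhonov regularization:
    minimize ||r - S @ dtheta||^2 + lam * ||dtheta||^2, where S is the
    sensitivity matrix and r the measured-minus-model residual."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ r)

# ill-posed toy problem: 3 unknown parameters, only 2 measurements
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
r = np.array([1.0, 1.0])
dtheta = regularized_update(S, r, lam=0.1)
```

Without the `lam * np.eye(n)` term the normal matrix is singular here; the penalty selects a small-norm parameter change among the infinitely many that fit the data, which is exactly the posedness repair the text describes.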
Advances in Artificial Intelligence: Models, Optimization, and Machine Learning
The present book contains all the articles accepted and published in the Special Issue “Advances in Artificial Intelligence: Models, Optimization, and Machine Learning” of the MDPI Mathematics journal, which covers a wide range of topics connected to the theory and applications of artificial intelligence and its subfields. These topics include, among others, deep learning and classic machine learning algorithms, neural modelling, architectures and learning algorithms, biologically inspired optimization algorithms, algorithms for autonomous driving, probabilistic models and Bayesian reasoning, intelligent agents and multiagent systems. We hope that the scientific results presented in this book will serve as valuable sources of documentation and inspiration for anyone willing to pursue research in artificial intelligence, machine learning and their widespread applications.
Modelling of interactions between rail service and travel demand: a passenger-oriented analysis
The proposed research is situated in the field of the design, management and optimisation of railway network operations. Rail transport has in its favour several specific features which make it a key factor in public transport management, above all in high-density contexts. Indeed, such a system is environmentally friendly (reduced pollutant emissions), high-performing (high travel speeds and short headways), competitive (low unit costs per seat-km or carried passenger-km) and highly adaptable to intermodality. However, it is highly vulnerable to breakdowns. This occurs because a faulty convoy cannot be easily overtaken and, sometimes, cannot be easily removed from the line, especially in the case of isolated systems (i.e. systems which are not integrated into an effective network) or when a breakdown occurs on open tracks. Thus,
re-establishing ordinary operational conditions may require excessive amounts of time and, as a consequence, an inevitable increase in inconvenience (user generalised cost) for passengers, who might decide to abandon the system or, if already on board, to exclude the railway system from their choice set in the future. It follows that developing appropriate techniques and decision support tools for optimising rail system management, both in ordinary and disruption conditions, would enable a clear shift in the modal split in favour of public transport and, therefore, encourage an important reduction in the externalities caused by the use of private transport, such as air and noise pollution, traffic congestion and accidents, bringing clear benefits to the quality of life for both transport users and non-users (i.e. individuals who are not system users).
Managing to model such a complex context, based on numerous interactions among the various components (i.e. infrastructure, signalling system, rolling stock and timetables), is no mean feat. Moreover, in many cases a fundamental element, the inclusion of travel demand features in the simulation of railway operations, is neglected. Railway transport, just as any other transport system, is not an end in itself: its task is to move people and goods around, and, therefore, a realistic and accurate cost-benefit analysis cannot ignore the features of the flows involved. In particular, incorporating travel demand into the analysis framework has a two-sided effect.
Primarily, it introduces elements such as convoy capacity constraints and the assessment of dwell times as flow-dependent factors, which bring the simulation as close as possible to reality. Specifically, the former allows one to account for the possibility that, due to overcrowded conditions, not all passengers can board the first arriving train, but only a part of them, with a consequent increase in waiting times. Due consideration of this factor is fundamental because, were it to be repeated, it would further contribute to passengers' discontent. The estimation of dwell times on the basis of flows, meanwhile, becomes fundamental in the planning phase. In fact, estimating dwell times as fixed values, ideally equal for all runs and all stations, can induce differences between actual and planned operations, with a subsequent deterioration in system performance. Thus, neglecting these aspects, above all in crowded contexts, would distort the simulation, both in terms of costs and benefits.
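The two flow-dependent elements just described, capacity-constrained boarding and flow-dependent dwell times, can be sketched in a few lines. The linear dwell-time model and all numeric parameters below are illustrative assumptions, not values from the thesis.

```python
def station_stop(waiting, alighting, on_board, capacity,
                 t_base=20.0, t_per_pax=0.5):
    """One station stop with a capacity constraint and a flow-dependent
    dwell time. Returns (boarded, left_behind, new_on_board, dwell_s).

    Passengers board only up to the residual capacity of the convoy;
    those left behind wait for the next run. Dwell time grows linearly
    with the number of passengers exchanged, rather than being a fixed
    timetable constant."""
    on_board -= alighting                 # alighting frees capacity first
    free = capacity - on_board
    boarded = min(waiting, free)          # capacity-constrained boarding
    left_behind = waiting - boarded
    on_board += boarded
    dwell_s = t_base + t_per_pax * (boarded + alighting)
    return boarded, left_behind, on_board, dwell_s
```

In a crowded scenario (e.g. 120 waiting, 30 alighting, 180 on board, capacity 250), 20 passengers are left on the platform and the dwell time stretches well beyond its base value, which is precisely the feedback a fixed-dwell-time simulation misses.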
The second aspect, on the other hand, concerns the correct assessment of the effects of the strategies put in place, both in planning phases (strategic decisions such as the realisation of new infrastructure, the improvement of the current signalling system or the purchase of new rolling stock) and in operational phases (operational decisions such as the definition of intervention strategies for addressing disruption conditions). In fact, in the management of failures, current operational procedures are based on hypothetical times for re-establishing ordinary conditions, estimated by the train driver or by the staff of the operations centre, who generally tend to minimise the impact exclusively from the company's point of view (minimisation of operational costs) rather than from the standpoint of passengers. Additionally, in the definition of intervention strategies, passenger flow and its variation in time (different temporal intervals) and space (different points in the railway network) are rarely considered. It is therefore clear that the proposed re-examination of dispatching and rescheduling tasks from a passenger-orientated perspective should be accompanied by the development of estimation and forecasting techniques for travel demand, aimed at correctly taking into account the peculiarities of the railway system, as well as by the generation of ad-hoc tools designed to simulate the behaviour of passengers in the various phases of the trip (turnstile access, transfer from the turnstiles to the platform, waiting on the platform, boarding and alighting, etc.).
The final workstream of the present study concerns the analysis of the energy problems associated with rail transport. This is closely linked to what has been described so far. Indeed, in order to implement proper energy-saving policies, it is above all necessary to obtain a reliable estimate of the operational times involved (recovery times, inversion times, buffer times, etc.). Moreover, as the adoption of eco-driving strategies increases passenger travel times, with everything that this involves, it is important to investigate the trade-off between energy efficiency and the increase in user generalised costs.
Within this framework, the present study aims to provide a Decision Support System (DSS) for all phases of planning and management of rail transport systems, from timetabling to dispatching and rescheduling, also considering space-time travel demand variability as well as the definition of suitable energy-saving policies, all from a passenger-orientated perspective.
Updating structural wind turbine blade models via invertible neural networks
Wind turbine rotor blades are large and complex composite structures that are exposed to exceptionally high loads, both extreme and fatigue loads. These can result in damage causing severe downtime or repair costs. It is thus of utmost importance that the blades are carefully designed, including uncertainty analyses, in order to produce safe, reliable, and cost-efficient wind turbines.
An accurate reliability assessment should start already during the design and manufacturing phases. Recent developments in digitalization give rise to the concept of a digital twin, which replicates a product and its properties in a digital environment. Model updating is a technique that helps to adapt the digital twin according to the measured characteristics of the real structure. Current model updating techniques are most often based on heuristic optimization algorithms, which are computationally expensive, can only deal with a relatively small parameter space, or do not estimate the uncertainty of the computed results.
The objective of this thesis is to present a computationally efficient model updating method that recovers parameter deviations. The method accounts for uncertainties while handling a high-fidelity rotor blade model. A validated, fully parameterized model generator is used to perform physics-informed training of a conditional invertible neural network. This network ultimately represents a surrogate of the inverse physical model, which can then be used to recover model parameters from the structural responses of the blade. All presented generic model updating applications show excellent results, predicting the posterior distribution of the significant model parameters accurately.
Bundesministerium für Wirtschaft und Klimaschutz/Energietechnologien (BMWi)/0324032C, 0324335B/E
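The building block that makes such networks invertible is the affine coupling layer (as in RealNVP-style invertible networks): half of the input parameterizes an exactly invertible scale-and-shift of the other half. The sketch below is a single untrained block with random weights, purely to show the closed-form inverse; the thesis's conditional network additionally conditions these transforms on measured structural responses and stacks many such blocks.

```python
import numpy as np

def coupling_forward(x, w, b):
    """One affine coupling block: x1 passes through unchanged and
    parameterizes an invertible scale-and-shift of x2."""
    h = len(x) // 2
    x1, x2 = x[:h], x[h:]
    s = np.tanh(w @ x1)              # log-scale, conditioned on x1
    t = b @ x1                       # shift, conditioned on x1
    return np.concatenate([x1, x2 * np.exp(s) + t])

def coupling_inverse(y, w, b):
    """Exact inverse: y1 equals x1, so s and t can be recomputed."""
    h = len(y) // 2
    y1, y2 = y[:h], y[h:]
    s = np.tanh(w @ y1)
    t = b @ y1
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])
```

Because the inverse is available in closed form (no iterative solve), evaluating the trained network backwards yields parameter estimates, and repeated backward passes with latent samples yield the posterior distribution mentioned in the abstract.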
Sensors Fault Diagnosis Trends and Applications
Fault diagnosis has always been a concern for industry. In general, diagnosis in complex systems requires the acquisition of information from sensors and the processing and extraction of the features required for the classification or identification of faults. Fault diagnosis of sensors themselves is therefore clearly important, as faulty information from a sensor may lead to misleading conclusions about the whole system. As engineering systems grow in size and complexity, it becomes more and more important to diagnose faulty behavior before it can lead to total failure. In light of the above issues, this book is dedicated to trends and applications in modern sensor fault diagnosis.
Adaptive Coded Modulation Classification and Spectrum Sensing for Cognitive Radio Systems. Adaptive Coded Modulation Techniques for Cognitive Radio Using Kalman Filter and Interacting Multiple Model Methods
The current and future trends of modern wireless communication systems place heavy demands on fast data transmission in order to satisfy end users' requirements anytime, anywhere. Such demands are obvious in recent applications such as smartphones, long term evolution (LTE), fourth- and fifth-generation (4G & 5G) networks, and worldwide interoperability for microwave access (WiMAX) platforms, where robust coding and modulation are essential, especially for streaming online video, social media and gaming. This has resulted in extreme exhaustion of the frequency spectrum, a scarce natural resource, due to stagnation in current spectrum management policies. Since its advent in the late 1990s, cognitive radio (CR) has been conceived as an enabling technology aiming at the efficient utilisation of the frequency spectrum, which can lead to dynamic spectrum access (DSA) management. This is mainly attributed to its internal capabilities, inherited from the concept of software defined radio (SDR), to sniff its surroundings, learn, and adapt its operational parameters accordingly. CR systems (CRs) commonly comprise one or all of the following core engines that characterise their architectures: adaptive coded modulation (ACM), automatic modulation classification (AMC) and spectrum sensing (SS).
Motivated by the above challenges, this programme of research is primarily aimed at the design and development of new paradigms to improve the adaptability of CRs and thereby achieve the desired signal-processing tasks at the physical layer of the above core engines. Approximate modelling of Rayleigh and finite-state Markov channels (FSMC) is approached with a new concept borrowed from econometric studies. Insightful channel estimation using a Kalman filter (KF) augmented with an interacting multiple model (IMM) is then examined for the purpose of robust adaptability, applied for the first time in wireless communication systems. This new IMM-KF combination is employed in the feedback channel between the wireless transmitter and receiver to adjust the transmitted power, via a water-filling (WF) technique, as well as the constellation pattern and rate in the ACM algorithm. The AMC also benefits from the IMM-KF integration, boosting performance against conventional parametric estimation methods such as the maximum likelihood estimate (MLE) for channel interrogation; the parameters estimated by both are inserted into the ML classification algorithm. Expectation-maximisation (EM) is applied to estimate unknown transmitted modulation sequences and channel parameters in tandem. Finally, the non-parametric multitaper method (MTM) is thoroughly examined for spectrum estimation (SE) and SS, relying on the Neyman-Pearson (NP) detection principle for the hypothesis test, to allow licensed primary users (PUs) to coexist with opportunistic unlicensed secondary users (SUs) in the same frequency bands of interest without harmful effects. The performance of the newly suggested paradigms has been simulated and assessed under various transmission settings and reveals substantial improvements.
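The Kalman-filter side of the channel-tracking idea can be illustrated with a scalar example. This is a generic textbook sketch, not the thesis's IMM-KF design: the first-order Gauss-Markov fading model and all noise variances below are assumptions, and the IMM bank (which mixes several such filters with different dynamics) is omitted.

```python
import random

def kalman_track(observations, a=0.95, q=0.05, r=0.5):
    """Scalar Kalman filter tracking a first-order Gauss-Markov (AR(1))
    fading-channel gain h[k] = a*h[k-1] + w[k], w ~ N(0, q), from noisy
    observations y[k] = h[k] + v[k], v ~ N(0, r). Returns the filtered
    channel estimates, which an ACM loop could use to pick power/rate."""
    x, p = 0.0, 1.0                       # state estimate and its variance
    estimates = []
    for y in observations:
        x, p = a * x, a * a * p + q       # predict
        k = p / (p + r)                   # Kalman gain
        x = x + k * (y - x)               # update with the measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

Feeding these filtered gains back to the transmitter, rather than the raw noisy observations, is what allows the ACM engine to adapt power and constellation to the channel state with less jitter.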