901 research outputs found

    Controllable and tolerable generalized eigenvectors of interval max-plus matrices

    By max-plus algebra we mean the set of reals ℝ equipped with the operations a ⊕ b = max{a, b} and a ⊗ b = a + b for a, b ∈ ℝ. A vector x is said to be a generalized eigenvector of max-plus matrices A, B ∈ ℝ(m, n) if A ⊗ x = λ ⊗ B ⊗ x for some λ ∈ ℝ. The investigation of the properties of generalized eigenvectors is important for applications. In practice, the entries of vector and matrix inputs are usually not exact numbers and are better treated as values lying in intervals. In this paper the properties of matrices and vectors with inexact (interval) entries are studied, and complete solutions of the controllable, the tolerable and the strong generalized eigenproblem in max-plus algebra are presented. As a consequence of the obtained results, efficient algorithms for checking the equivalent conditions are introduced.
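The defining relation is easy to check numerically. The sketch below is only an illustration of the definitions, not the paper's algorithms, and the function names are mine:

```python
import numpy as np

def maxplus_prod(A, x):
    # (A ⊗ x)_i = max_j (A[i, j] + x[j]): addition replaces multiplication
    # and max replaces summation.
    return (A + x).max(axis=1)

def is_generalized_eigenvector(A, B, x, lam, tol=1e-9):
    # Check A ⊗ x = λ ⊗ B ⊗ x; in max-plus, λ ⊗ y is the entrywise shift λ + y.
    return np.allclose(maxplus_prod(A, x), lam + maxplus_prod(B, x), atol=tol)
```

With B = A, every vector is a generalized eigenvector for λ = 0, since both sides coincide.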

    Data Normalization in Decision Making Processes

    With the fast growth of data-rich systems, dealing with complex decision problems is unavoidable. Normalization is a crucial step in most multi-criteria decision making (MCDM) models, used to produce comparable and dimensionless data from heterogeneous data. Further, MCDM requires data to be numerical and comparable so that they can be aggregated into a single score per alternative, thus providing a ranking. Several normalization techniques are available, but their performance depends on a number of characteristics of the problem at hand, i.e., different normalization techniques may produce different rankings of the alternatives. It is therefore a challenge to select a suitable normalization technique that provides an appropriate mapping from the source data to a common scale. There are some attempts in the literature to address the subject of normalization in MCDM, but there is still a lack of assessment frameworks for evaluating normalization techniques. Hence, the main contribution and objective of this study is to develop an assessment framework for analysing the effects of normalization techniques on the ranking of alternatives in MCDM methods, and to recommend the most appropriate technique for specific decision problems. The proposed assessment framework consists of four steps: (i) determining the data types; (ii) choosing potential candidate normalization techniques; (iii) analysing and evaluating the techniques; and (iv) selecting the best normalization technique. To validate the efficiency and robustness of the proposed framework, six normalization techniques (Max, Max-Min, Sum, Vector, Logarithmic, and Fuzzification) are selected from the linear, semi-linear, and non-linear categories, and tested with four well-known MCDM methods (TOPSIS, SAW, AHP, and ELECTRE) drawn from the scoring, comparative, and ranking families.
Designing the proposed assessment framework led to a conceptual model that enables an automatic decision-making process, besides recommending the most appropriate normalization technique for MCDM problems. Furthermore, the role of normalization techniques in dynamic multi-criteria decision making (DMCDM) in collaborative networks is explored, specifically for problems of selecting suppliers, business partners, resources, etc. To validate and test the utility and applicability of the assessment framework, a number of case studies are discussed, and benchmarking and testimonies from experts are used. An evaluation of the developed work by the research community is also presented. The validation process demonstrated that the proposed assessment framework increases the accuracy of results in MCDM problems.
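Most of the candidate techniques reduce to simple closed forms. A minimal sketch of the first five (assuming a benefit criterion with positive entries; the function and its names are mine, not the thesis's tool):

```python
import numpy as np

def normalize(col, method):
    # Column-wise normalization of one criterion vector (benefit criterion,
    # positive entries assumed).
    col = np.asarray(col, dtype=float)
    if method == "max":                # linear: x / max(x)
        return col / col.max()
    if method == "max-min":            # linear: (x - min) / (max - min)
        return (col - col.min()) / (col.max() - col.min())
    if method == "sum":                # linear: x / sum(x)
        return col / col.sum()
    if method == "vector":             # x / ||x||_2
        return col / np.sqrt((col ** 2).sum())
    if method == "log":                # non-linear: ln(x) / sum(ln(x)), x > 1
        return np.log(col) / np.log(col).sum()
    raise ValueError(f"unknown method: {method}")
```

Different methods map the same column onto different scales, which is precisely why the ranking produced downstream by TOPSIS, SAW, AHP or ELECTRE can change with the choice of technique.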

    Control of chaos in nonlinear circuits and systems

    Nonlinear circuits and systems, such as electronic circuits (Chapter 5), power converters (Chapter 6), human brains (Chapter 7), phase-locked loops (Chapter 8), sigma-delta modulators (Chapter 9), etc., are found almost everywhere. Understanding the nonlinear behaviours of these circuits and systems, as well as their control, is important for real practical engineering applications. Control theories for linear circuits and systems are well developed and almost complete. However, different nonlinear circuits and systems can exhibit very different behaviours; hence it is difficult to unify a general control theory for nonlinear circuits and systems, and control theories for them remain very limited. The objective of this book is to review state-of-the-art chaos control methods for some common nonlinear circuits and systems, such as those listed above, and to stimulate further research and development in chaos control for nonlinear circuits and systems. This book consists of three parts. The first part consists of reviews of general chaos control methods. In particular, a time-delayed approach written by H. Huang and G. Feng is reviewed in Chapter 1. A master-slave synchronization problem for chaotic Lur'e systems is considered. Delay-independent and delay-dependent synchronization criteria are derived based on the H∞ performance. The design of the time-delayed feedback controller can be accomplished by means of the feasibility of linear matrix inequalities. In Chapter 2, a fuzzy-model-based approach written by H.K. Lam and F.H.F. Leung is reviewed. The synchronization of chaotic systems subject to parameter uncertainties is considered. A chaotic system is first represented by a fuzzy model. A switching controller is then employed to synchronize the systems. Stability conditions in terms of linear matrix inequalities are derived based on Lyapunov stability theory.
The tracking performance and parameter design of the controller are formulated as a generalized eigenvalue minimization problem, which is solved numerically via convex programming techniques. In Chapter 3, a sliding mode control approach written by Y. Feng and X. Yu is reviewed. Three kinds of sliding mode control methods, traditional sliding mode control, terminal sliding mode control and non-singular terminal sliding mode control, are employed for the control of a chaotic system to realize two different control objectives, namely to force the system states to converge to zero or to track desired trajectories. Observer-based chaos synchronization for chaotic systems with a single nonlinearity and with multiple nonlinearities is also presented. In Chapter 4, an optimal control approach written by C.Z. Wu, C.M. Liu, K.L. Teo and Q.X. Shao is reviewed. Nonparametric regression with jump points is considered. The rough locations of all possible jump points are identified using existing kernel methods. A smooth spline function is used to approximate each segment of the regression function. A time-scaling transformation is derived so as to map the undecided jump points to fixed points. The approximation problem is formulated as an optimization problem and solved via existing optimization tools. The second part of the book consists of reviews of chaos controls for continuous-time systems. In particular, chaos controls for Chua's circuits written by L.A.B. Tôrres, L.A. Aguirre, R.M. Palhares and E.M.A.M. Mendes are discussed in Chapter 5. An inductorless Chua's circuit realization is presented, and practical issues such as data analysis, mathematical modelling and dynamical characterization are discussed. The tradeoff among the control objective, the control energy and the model complexity is derived. In Chapter 6, chaos controls for pulse-width-modulation current-mode single-phase H-bridge inverters written by B. Robert, M. Feki and H.H.C. Iu are discussed. A time-delayed feedback controller is used in conjunction with the proportional controller, in its simple form as well as in its extended form, to stabilize the desired periodic orbit for larger values of the proportional controller gain. This method is very robust and easy to implement. In Chapter 7, chaos controls for epileptiform bursting in the brain written by M.W. Slutzky, P. Cvitanovic and D.J. Mogul are discussed. Chaos analysis and chaos control algorithms for manipulating seizure-like behaviour in a brain slice model are discussed. The techniques provide a nonlinear control pathway for terminating or potentially preventing epileptic seizures in the whole brain. The third part of the book consists of reviews of chaos controls for discrete-time systems. In particular, chaos controls for phase-locked loops written by A.M. Harb and B.A. Harb are discussed in Chapter 8. A nonlinear controller based on the theory of backstepping is designed so that the phase-locked loop will not fall out of lock and will not exhibit Hopf bifurcation or chaotic behaviour. In Chapter 9, chaos controls for sigma-delta modulators written by B.W.K. Ling, C.Y.F. Ho and J.D. Reiss are discussed. A fuzzy impulsive control approach is employed for the control of sigma-delta modulators. The local stability criterion and the condition for the occurrence of limit cycle behaviours are derived. Based on the derived conditions, a fuzzy impulsive control law is formulated so that the occurrence of limit cycle behaviours, the effect of audio clicks, and the distance between the state vectors and an invariant set are minimized, supposing that the invariant set is nonempty. The state vectors can be bounded within any arbitrary nonempty region no matter what the input step size, the initial condition and the filter parameters are.
The editors are much indebted to the editor of the World Scientific Series on Nonlinear Science, Prof. Leon Chua, and to Senior Editor Miss Lakshmi Narayan, for their help and congenial processing of the edition.

    Gene Expression Data Analysis Using Fuzzy Logic

    DNA microarray technology allows for the parallel analysis of the expression of genes in an organism. The wealth of spatio-temporal data provided by the technology allows us to attempt to reverse-engineer the genetic network. Fuzzy logic has been proposed as a method of analyzing the relationships between genes as well as their corresponding proteins. Combinations of genes are entered into a fuzzy model of gene interaction and evaluated on the basis of how well the combination fits the model. Those combinations of genes that fit the model are likely to be related. However, current analysis algorithms are slow and computationally complex, sensitive to noise in gene expression data, and only tested and validated on simple models of gene interaction. This thesis proposes improvements to the fuzzy gene modeling method by reducing the computation time, altering the model to make it more robust with respect to noise, and generalizing the model to accommodate any combination of genes and model of gene interaction. The improved algorithm achieves a speed-up of 15-50%, significant resistance to noise, and a degree of generality that enables the analysis of large gene complexes.
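As a toy illustration of the general approach (not the thesis's improved algorithm; the triangular membership shapes and the scoring rule below are my assumptions), normalized expression values in [0, 1] can be fuzzified into low/medium/high states, and an activator-target gene pair scored by how closely the target's fuzzy state tracks the activator's:

```python
def fuzzify(x):
    # Triangular low/medium/high memberships on a normalized
    # expression level x in [0, 1]; the three memberships sum to 1.
    low = max(0.0, 1.0 - 2.0 * x)
    high = max(0.0, 2.0 * x - 1.0)
    med = 1.0 - low - high
    return low, med, high

def activator_fit(act, tgt):
    # Score the rule "target follows activator" across samples:
    # 1.0 means the two fuzzy states coincide in every sample.
    err = 0.0
    for a, t in zip(act, tgt):
        fa, ft = fuzzify(a), fuzzify(t)
        err += sum(abs(m - n) for m, n in zip(fa, ft)) / 2.0
    return 1.0 - err / len(act)
```

Gene pairs whose fit score is high under some interaction model are candidate regulatory relationships; pairs that fit no model are likely unrelated.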

    Fuzzy Sets, Fuzzy Logic and Their Applications 2020

    The present book contains the 24 articles accepted and published in the Special Issue "Fuzzy Sets, Fuzzy Logic and Their Applications, 2020" of the MDPI Mathematics journal, which covers a wide range of topics connected to the theory and applications of fuzzy sets, systems of fuzzy logic, and their extensions/generalizations. These topics include, among others, elements from fuzzy graphs; fuzzy numbers; fuzzy equations; fuzzy linear spaces; intuitionistic fuzzy sets; soft sets; type-2 fuzzy sets; bipolar fuzzy sets; plithogenic sets; fuzzy decision making; fuzzy governance; fuzzy models in the mathematics of finance; and a philosophical treatise on the connection of scientific reasoning with fuzzy logic. It is hoped that the book will be interesting and useful for those working in the area of fuzzy sets, fuzzy systems and fuzzy logic, as well as for those with the proper mathematical background who are willing to become familiar with recent advances in fuzzy mathematics, which has become prevalent in almost all sectors of human life and activity.

    ECG Classification with an Adaptive Neuro-Fuzzy Inference System

    Heart signals allow for a comprehensive analysis of the heart. Electrocardiography (ECG or EKG) uses electrodes to measure the electrical activity of the heart. Extracting ECG signals is a non-invasive process that opens the door to new possibilities for the application of advanced signal processing and data analysis techniques in the diagnosis of heart diseases. With the help of today's large databases of ECG signals, a computationally intelligent system can learn and take the place of a cardiologist. Various abnormalities in a patient's heart, indicating various heart diseases, can be detected through an Adaptive Neuro-Fuzzy Inference System (ANFIS) preprocessed by subtractive clustering. Six types of heartbeats are classified: normal sinus rhythm, premature ventricular contraction (PVC), atrial premature contraction (APC), left bundle branch block (LBBB), right bundle branch block (RBBB), and paced beats. The goal is to detect important characteristics of an ECG signal to determine whether the patient's heartbeat is normal or irregular. The results from three trials indicate an average accuracy of 98.10%, average sensitivity of 94.99%, and average specificity of 98.87%. These results are comparable to two artificial neural network (ANN) algorithms, gradient descent and Levenberg-Marquardt, as well as to ANFIS preprocessed by grid partitioning.
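The reported figures are standard confusion-matrix rates. As a reminder of what each one measures (a generic sketch, not the thesis's evaluation code; for the six-class problem the counts would be taken per class, one-vs-rest, and averaged):

```python
def metrics(tp, fp, tn, fn):
    # tp/fp/tn/fn: true/false positives and negatives over the test beats.
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # fraction classified correctly
    sensitivity = tp / (tp + fn)                 # abnormal beats actually caught
    specificity = tn / (tn + fp)                 # normal beats left alone
    return accuracy, sensitivity, specificity
```

High specificity with lower sensitivity, as reported above, means the classifier rarely flags a normal beat but misses a small fraction of abnormal ones.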

    Time-Delay Systems

    Time delay is very often encountered in various technical systems, such as electric, pneumatic and hydraulic networks, chemical processes, long transmission lines, robotics, etc. The existence of pure time lag, regardless of whether it is present in the control or/and the state, may cause an undesirable system transient response, or even instability. Consequently, the problems of controllability, observability, robustness, optimization, adaptive control, pole placement and, particularly, stability and robust stabilization for this class of systems have been among the main interests of many scientists and researchers during the last five decades.

    Dynamics under Uncertainty: Modeling Simulation and Complexity

    The dynamics of systems have proven to be very powerful tools for understanding the behavior of different natural phenomena throughout the last two centuries. However, the attributes of natural systems are observed to deviate from their classical states due to the effect of different types of uncertainty. Randomness and impreciseness are the two major sources of uncertainty in natural systems: randomness is modeled by various stochastic processes, while impreciseness can be modeled by fuzzy sets, rough sets, Dempster–Shafer theory, etc.

    Advances in robust clustering methods with applications

    Robust methods in statistics are mainly concerned with deviations from model assumptions. As already pointed out in Huber (1981) and in Huber & Ronchetti (2009), "these assumptions are not exactly true since they are just a mathematically convenient rationalization of an often fuzzy knowledge or belief". For that reason "a minor error in the mathematical model should cause only a small error in the final conclusions". Nevertheless it is well known that many classical statistical procedures are "excessively sensitive to seemingly minor deviations from the assumptions". All statistical methods based on the minimization of the average square loss may suffer from a lack of robustness. Illustrative examples of how outliers' influence may completely alter the final results in the regression analysis and linear model context are provided in Atkinson & Riani (2012). A presentation of robust counterparts of classical multivariate tools is provided in Farcomeni & Greco (2015). The whole dissertation is focused on robust clustering models, and the outline of the thesis is as follows. Chapter 1 is focused on robust methods. Robust methods are aimed at increasing efficiency when contamination appears in the sample. Thus a general definition of this (quite general) concept is required. To do so, we give a brief account of some kinds of contamination that can be encountered in real data applications. Secondly, we introduce the "spurious outliers model" (Gallegos & Ritter 2009a), which is the cornerstone of robust model-based clustering. This model is aimed at formalizing clustering problems when one has to deal with contaminated samples. The assumption standing behind the spurious outliers model is that two different random mechanisms generate the data: one is assumed to generate the "clean" part, while the other one generates the contamination. This idea is actually very common within robust models, like the "Tukey-Huber model" introduced in Subsection 1.2.2.
Outlier recognition, especially in the multivariate case, plays a key role and is not straightforward as the dimensionality of the data increases. An overview of the most widely used (robust) methods for outlier detection is provided in Section 1.3. Finally, in Section 1.4, we provide a non-technical review of the classical tools introduced in the robust statistics literature for evaluating the robustness properties of a methodology. Chapter 2 is focused on model-based clustering methods and their robustness properties. Cluster analysis, "the art of finding groups in the data" (Kaufman & Rousseeuw 1990), is one of the most widely used tools within the unsupervised learning context. A very popular method is the k-means algorithm (MacQueen et al. 1967), which is based on minimizing the Euclidean distance of each observation from the estimated cluster centroids and is therefore affected by a lack of robustness. Indeed, even a single outlying observation may completely alter the centroid estimates and simultaneously provoke a bias in the standard error estimates. Cluster contours may be inflated and the "real" underlying clusterwise structure may be completely hidden. A first attempt at robustifying the k-means algorithm appeared in Cuesta-Albertos et al. (1997), where a trimming step is inserted in the algorithm in order to avoid the outliers' excessive influence. It should be noted that the k-means algorithm is efficient for detecting spherical homoscedastic clusters; whenever more flexible shapes are desired, the procedure becomes inefficient. In order to overcome this problem, Gaussian model-based clustering methods should be adopted instead of the k-means algorithm. An example, among the other proposals described in Chapter 2, is the TCLUST methodology (García-Escudero et al. 2008), which is the cornerstone of the thesis.
This methodology is based on two main ingredients: trimming a fixed proportion of observations and imposing a constraint on the estimates of the scatter matrices. As explained in Chapter 2, trimming is used to protect the results from the outliers' influence, while the constraint is needed because spurious maximizers may completely spoil the solution. Chapters 3 and 4 are mainly focused on extending the TCLUST methodology. In particular, in Chapter 3, we introduce a new contribution (compare Dotto et al. 2015 and Dotto et al. 2016b), based on the TCLUST approach, called reweighted TCLUST, or RTCLUST for the sake of brevity. The idea standing behind this method is to reweight the observations initially flagged as outlying. This is helpful both to gain efficiency in the parameter estimation process and to provide a reliable estimate of the true contamination level. Indeed, as TCLUST is based on trimming a fixed proportion of observations, a proper choice of the trimming level is required, and this choice, especially in applications, can be cumbersome. As clarified later on, the RTCLUST methodology allows the user to overcome this problem: the user is only required to impose a high preventive trimming level. The procedure, by iterating through a sequence of decreasing trimming levels, reinserts the discarded observations at each step and provides more precise parameter estimates and a final estimate of the true contamination level. The theoretical properties of the methodology are studied in Section 3.6 and proved in Appendix A.1, while Section 3.7 contains a simulation study aimed at evaluating the properties of the methodology and its advantages with respect to some other robust (reweighted and single-step) procedures. Chapter 4 contains an extension of the TCLUST method to fuzzy linear clustering (Dotto et al. 2016a). This contribution can be viewed as the extension of Fritz et al.
(2013a) to linear clustering problems or, equivalently, as the extension of García-Escudero, Gordaliza, Mayo-Iscar & San Martín (2010) to the fuzzy clustering framework. Fuzzy clustering is also useful for dealing with contamination. Fuzziness is introduced to deal with overlapping between clusters and with the presence of bridge points, to be defined in Section 1.1. Indeed, bridge points may arise in the case of overlapping clusters and may completely alter the estimated cluster parameters (i.e. the coefficients of the linear model in each cluster). By introducing fuzziness, such observations are suitably down-weighted and the clusterwise structure can be correctly detected. On the other hand, robustness against gross outliers, as in the TCLUST methodology, is guaranteed by trimming a fixed proportion of observations. Additionally, a simulation study aimed at comparing the proposed methodology with other proposals (both robust and non-robust) is provided in Section 4.4. Chapter 5 is entirely dedicated to real data applications of the proposed contributions. In particular, the RTCLUST method is applied to two different datasets. The first one is the "Swiss Bank Note" dataset, a well-known benchmark dataset for clustering models; the second is a dataset collected by the Gallup Organization which is, to our knowledge, an original dataset on which no other existing proposals have been applied yet. Section 5.3 contains an application of our fuzzy linear clustering proposal to allometry data. In our opinion this dataset, already considered in the robust linear clustering proposal of García-Escudero, Gordaliza, Mayo-Iscar & San Martín (2010), is particularly useful for showing the advantages of the proposed methodology. Indeed, allometric quantities are often linked by a linear relationship but, at the same time, there may be overlap between different groups, and outliers may often appear due to errors in data registration.
Finally, Chapter 6 contains the concluding remarks and further directions of research. In particular, we wish to mention an ongoing work (Dotto & Farcomeni, in preparation) in which we consider the possibility of implementing robust parsimonious Gaussian clustering models. Within the chapter, the algorithm is briefly described and some illustrative examples are provided. The potential advantages of this proposal are the following. First of all, by considering the parsimonious models introduced in Celeux & Govaert (1995), the user is able to impose the shape of the detected clusters, which often plays a key role in applications. Secondly, by constraining the shape of the detected clusters, the constraint on the eigenvalue ratio can be avoided. This leads to the removal of a tuning parameter of the procedure and, at the same time, allows the user to obtain affine equivariant estimators. Finally, since trimming a fixed proportion of observations is still allowed, the procedure is also formally robust.
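The trimming idea that runs through the thesis can be illustrated in a few lines. The sketch below is a bare-bones trimmed k-means in the spirit of Cuesta-Albertos et al. (1997), not the TCLUST or RTCLUST implementations; the initialization is simplified to the first k points for the sake of the example. At every iteration the ceil(αn) points farthest from their nearest centroid are discarded before the centroids are updated:

```python
import numpy as np

def trimmed_kmeans(X, k, alpha=0.1, n_iter=50):
    # Trim the ceil(alpha * n) worst-fitting points at each iteration so
    # that gross outliers are never averaged into the centroids
    # (naive initialization: the first k points, for illustration only).
    X = np.asarray(X, dtype=float)
    n = len(X)
    keep = n - int(np.ceil(alpha * n))
    centers = X[:k].copy()
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        kept = np.argsort(d.min(axis=1))[:keep]   # drop the alpha fraction
        for j in range(k):
            pts = X[kept][labels[kept] == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers, labels, kept
```

A single gross outlier would drag a plain k-means centroid arbitrarily far; here it is simply flagged and excluded from the centroid update, which is the behaviour the reweighting step of RTCLUST then refines.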