Clustering Improves the Goemans–Williamson Approximation for the Max-Cut Problem
MAX-CUT is a well-studied NP-hard combinatorial optimization problem. It can be formulated as an integer quadratic programming problem and admits a simple relaxation obtained by replacing the integer "spin" variables x_i by unit vectors v_i. The Goemans–Williamson rounding algorithm assigns each solution vector of the relaxed quadratic program to an integer spin depending on the sign of the scalar product v_i · r with a random vector r. Here, we investigate whether better graph cuts can be obtained by instead using a more sophisticated clustering algorithm. We answer this question affirmatively: different initializations of k-means and k-medoids clustering produce better cuts for the graph instances of the best-known benchmark for MAX-CUT. In particular, we found a strong correlation between cluster quality and cut weight during the evolution of the clustering algorithms. Finally, since in general the maximum cut weight of a graph is not known beforehand, we derived instance-specific lower bounds for the approximation ratio, which indicate how close a solution is to the global optimum for a particular instance. For the graphs in our benchmark, these instance-specific lower bounds significantly exceed the Goemans–Williamson guarantee.
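The two rounding schemes the abstract contrasts can be sketched on a toy instance. Everything below is illustrative, not from the paper: the graph is a 4-cycle, the "relaxation solution" is a hand-built rank-1 embedding rather than an actual SDP solve, and the 2-means rounding uses a simple farthest-point initialization (one of many possible initializations).

```python
import numpy as np

def cut_weight(W, spins):
    """Total weight of edges crossing the cut given by +/-1 spins."""
    n = len(spins)
    return sum(W[i][j] for i in range(n) for j in range(i + 1, n)
               if spins[i] != spins[j])

def gw_round(V, rng):
    """Goemans-Williamson rounding: spin = sign of the projection onto
    a random vector r."""
    r = rng.standard_normal(V.shape[1])
    return np.sign(V @ r).astype(int)

def kmeans2_round(V, iters=20):
    """Alternative rounding: 2-means clustering of the relaxation
    vectors; the two clusters become the two sides of the cut."""
    # Farthest-point initialization of the two centroids.
    c = np.stack([V[0], V[np.linalg.norm(V - V[0], axis=1).argmax()]])
    for _ in range(iters):
        lab = np.linalg.norm(V[:, None, :] - c[None, :, :], axis=2).argmin(axis=1)
        for k in (0, 1):
            if (lab == k).any():
                c[k] = V[lab == k].mean(axis=0)
    return np.where(lab == 0, 1, -1)

# Toy instance: the 4-cycle, whose maximum cut has weight 4.
W = np.array([[0., 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
# Stand-in for the solved relaxation (here the rank-1 optimum itself,
# so both roundings should recover the optimal cut).
V = np.array([[1., 0], [-1, 0], [1, 0], [-1, 0]])
gw_cut = cut_weight(W, gw_round(V, np.random.default_rng(0)))
km_cut = cut_weight(W, kmeans2_round(V))
print(gw_cut, km_cut)
```

On harder instances the relaxation vectors spread out over the sphere, and that is where the paper reports the clustering-based rounding pulling ahead of the single random hyperplane.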
A Bayesian Argumentation Framework for Distributed Fault Diagnosis in Telecommunication Networks
Traditionally, fault diagnosis in telecommunication network management is carried out by humans supported by software systems. The phenomenal growth of telecommunication networks has nonetheless triggered interest in more autonomous approaches, capable of coping with emerging challenges such as the need to diagnose the root causes of faults under uncertainty, in geographically distributed environments, and with restrictions on data privacy. In this paper, we present a framework for distributed fault diagnosis under uncertainty based on argumentation for multi-agent systems. In our approach, agents collaborate to reach conclusions by arguing in unpredictable scenarios. The observations collected from the network are used to infer possible fault root causes using Bayesian networks as causal models for the diagnosis process. Hypotheses about those root causes are then discussed by the agents in an argumentative dialogue to reach a reliable conclusion. During that dialogue, the agents handle the uncertainty of the diagnosis process while preserving data privacy among themselves. The proposed approach is compared against existing alternatives using benchmark multi-domain datasets. Moreover, we include data collected from a previous fault diagnosis system that ran in a telecommunication network for one and a half years. The results show that the proposed approach is suitable for the motivating scenario.
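The Bayesian-network step can be illustrated with a minimal single-agent, single-observation sketch: inferring a posterior over fault root causes from one noisy alarm via Bayes' rule. The causes, the alarm, and all probabilities below are hypothetical, not taken from the paper.

```python
# Prior belief over root causes (hypothetical numbers).
priors = {"link_failure": 0.05, "overload": 0.15, "healthy": 0.80}
# P(alarm observed | root cause), e.g. estimated from historical tickets.
likelihood = {"link_failure": 0.95, "overload": 0.60, "healthy": 0.02}

def posterior(alarm_observed: bool):
    """Posterior over root causes given one binary alarm observation."""
    unnorm = {c: priors[c] * (likelihood[c] if alarm_observed
                              else 1.0 - likelihood[c])
              for c in priors}
    z = sum(unnorm.values())           # normalizing constant
    return {c: p / z for c, p in unnorm.items()}

post = posterior(True)
best = max(post, key=post.get)
print(best, round(post[best], 3))
```

In the paper's setting each agent would run this kind of inference on its own causal model and data, and the resulting hypotheses (rather than the raw observations) would be exchanged in the argumentative dialogue, which is what preserves data privacy.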
On the Nature and Types of Anomalies: A Review
Anomalies are occurrences in a dataset that are in some way unusual and do
not fit the general patterns. The concept of the anomaly is generally
ill-defined and perceived as vague and domain-dependent. Moreover, despite some
250 years of publications on the topic, no comprehensive and concrete overviews
of the different types of anomalies have hitherto been published. By means of
an extensive literature review this study therefore offers the first
theoretically principled and domain-independent typology of data anomalies, and
presents a full overview of anomaly types and subtypes. To concretely define
the concept of the anomaly and its different manifestations, the typology
employs five dimensions: data type, cardinality of relationship, anomaly level,
data structure and data distribution. These fundamental and data-centric
dimensions naturally yield 3 broad groups, 9 basic types and 61 subtypes of
anomalies. The typology facilitates the evaluation of the functional
capabilities of anomaly detection algorithms, contributes to explainable data
science, and provides insights into relevant topics such as local versus global
anomalies.
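The local-versus-global distinction mentioned above can be made concrete with a toy numeric example (all data illustrative): a value can sit well inside the global range of a dataset yet be clearly anomalous relative to its own subgroup.

```python
# Two subgroups of measurements; 25.0 lies between the groups' ranges,
# so globally it looks unremarkable, but within group B it is an outlier.
group_a = [10.0, 10.2, 9.9, 10.1, 10.0]
group_b = [50.0, 50.3, 49.8, 50.1, 25.0]

def z_scores(xs):
    """Standard scores using the population standard deviation."""
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / sd for x in xs]

global_z = z_scores(group_a + group_b)[-1]  # 25.0 scored against everything
local_z = z_scores(group_b)[-1]             # 25.0 scored against its own group
print(round(abs(global_z), 2), round(abs(local_z), 2))
```

The global z-score of 25.0 is small (it sits near the overall mean), while its local z-score is large; detectors that only model the global distribution miss exactly this kind of anomaly.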
Decision rules construction: algorithm based on EAV model
In this paper, an approach for decision rules construction is proposed. It is studied both from the point of view of the supervised machine learning task, i.e., classification, and from the point of view of knowledge representation. The generated rules provide classification results comparable to the dynamic programming approach for optimization of decision rules relative to length or support. However, the proposed algorithm is based on a transformation of the decision table into entity–attribute–value (EAV) format. Additionally, a standard deviation function over the average attribute values in particular decision classes was introduced. It allows selecting, from the whole set of attributes, only those which provide the highest degree of information about the decision. Construction of decision rules is performed by partitioning the decision table into corresponding subtables. In contrast to the dynamic programming approach, not all attributes need to be taken into account, but only those with the highest values of standard deviation per decision class. Consequently, the proposed solution is more time-efficient because of its lower computational complexity. In the experimental study, the support and length of the decision rules were computed and compared with the values for optimal rules. The classification error for data sets from the UCI Machine Learning Repository was also obtained and compared with that of the dynamic programming approach. The performed experiments show that the constructed rules are not far from the optimal ones, and the classification results are comparable to those obtained with the dynamic programming extension.
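Two of the steps described above can be sketched as follows: flattening a decision table into EAV triples, then ranking attributes by the standard deviation of their per-class mean values so that only the most informative ones are kept. The toy table and helper names are illustrative, not from the paper.

```python
from statistics import mean, pstdev

# Toy decision table: (attribute values, decision class).
rows = [
    ({"temp": 39.0, "age": 30}, "sick"),
    ({"temp": 38.5, "age": 62}, "sick"),
    ({"temp": 36.6, "age": 35}, "healthy"),
    ({"temp": 36.8, "age": 58}, "healthy"),
]

# Step 1: decision table -> EAV triples (entity id, attribute, value).
eav = [(i, a, v) for i, (attrs, _) in enumerate(rows)
       for a, v in attrs.items()]

# Step 2: for each attribute, the standard deviation of its average
# value across decision classes; a large spread means the attribute
# separates the classes well.
def class_mean_std(attr):
    classes = sorted({d for _, d in rows})
    means = [mean(attrs[attr] for attrs, d in rows if d == c)
             for c in classes]
    return pstdev(means)

scores = {a: class_mean_std(a) for a in rows[0][0]}
best_attr = max(scores, key=scores.get)
print(best_attr)
```

Here "temp" has very different class means (roughly 38.75 vs. 36.7) while "age" does not, so the selection step would keep "temp" and a rule such as "temp >= 38 -> sick" could be induced from the corresponding subtable.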
Concept of a Robust & Training-free Probabilistic System for Real-time Intention Analysis in Teams
This thesis addresses the analysis of team intentions in smart environments (SE). Its central claim is that the development and integration of explicit models of user tasks can make an important contribution to the development of mobile and ubiquitous software systems. The thesis collects descriptions of human behaviour both in group situations and in problem-solving situations. It examines how SE projects model a user's activities, and provides a team-intention model for deriving and selecting planned team activities from observations of multiple users through noisy and heterogeneous sensors. To this end, an approach based on hierarchical dynamic Bayesian networks is adopted.
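The core inference step behind such a model can be sketched as one time slice of a dynamic Bayesian network: Bayesian filtering of a hidden team intention from noisy sensor observations. The intention states, sensor readings, and all probabilities below are hypothetical stand-ins for the thesis's hierarchical model.

```python
intentions = ["presentation", "discussion"]
# P(next intention | current intention): intentions tend to persist.
transition = {"presentation": {"presentation": 0.9, "discussion": 0.1},
              "discussion":   {"presentation": 0.2, "discussion": 0.8}}
# P(sensor reading | intention), e.g. derived from microphone activity.
emission = {"presentation": {"one_speaker": 0.8, "many_speakers": 0.2},
            "discussion":   {"one_speaker": 0.3, "many_speakers": 0.7}}

def filter_step(belief, observation):
    """One predict-then-update step of a Bayes filter."""
    predicted = {j: sum(belief[i] * transition[i][j] for i in intentions)
                 for j in intentions}
    unnorm = {j: predicted[j] * emission[j][observation] for j in intentions}
    z = sum(unnorm.values())
    return {j: p / z for j, p in unnorm.items()}

belief = {"presentation": 0.5, "discussion": 0.5}  # uninformed start
for obs in ["many_speakers", "many_speakers"]:
    belief = filter_step(belief, obs)
print(max(belief, key=belief.get))
```

After two "many speakers" observations the belief has shifted firmly toward "discussion"; the training-free property in the title corresponds to specifying such tables from explicit task models rather than learning them from data.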
Computational Methods for Medical and Cyber Security
Over the past decade, computational methods, including machine learning (ML) and deep learning (DL), have grown exponentially in their development of solutions across domains, especially medicine, cybersecurity, finance, and education. While these applications of machine learning algorithms have proven beneficial in various fields, many shortcomings have also been highlighted, such as the lack of benchmark datasets, the inability to learn from small datasets, the cost of architectures, adversarial attacks, and imbalanced datasets. On the other hand, new and emerging algorithms, such as deep learning, one-shot learning, continuous learning, and generative adversarial networks, have successfully solved various tasks in these fields. Therefore, applying these new methods to life-critical missions is crucial, as is measuring the success of these less-traditional algorithms when used in such fields.
Artificial Immune Systems: Principle, Algorithms and Applications
The present thesis makes an in-depth study of adaptive identification, digital channel equalization, the functional link artificial neural network (FLANN) and Artificial Immune Systems (AIS). Two learning algorithms, CPSO and IPSO, are also developed in this thesis. These new algorithms are employed to train the weights of a low-complexity FLANN structure by minimizing the squared-error cost function of the hybrid model. The new models are applied to the adaptive identification of complex nonlinear dynamic plants and the equalization of nonlinear digital channels. An investigation has also been made into the identification of complex Hammerstein models.
To validate the performance of these new models, a simulation study is carried out using benchmark complex plants and nonlinear channels. The simulation results are compared with those obtained with FLANN-GA, FLANN-PSO and MLP-BP based hybrid approaches. Improved identification and equalization performance of the proposed method has been observed in all cases.
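The overall scheme can be sketched as follows: a FLANN expands each input with fixed nonlinear basis functions and learns only a linear weight layer, and a particle swarm optimizer tunes those weights by minimizing the squared identification error. The plant, the trigonometric expansion, and the plain global-best PSO below are illustrative stand-ins for the thesis's CPSO/IPSO variants.

```python
import numpy as np

rng = np.random.default_rng(1)

def expand(x):
    """Trigonometric functional expansion of a scalar input."""
    return np.stack([np.ones_like(x), x,
                     np.sin(np.pi * x), np.cos(np.pi * x)], axis=1)

def plant(x):
    """Unknown nonlinear plant to be identified (illustrative)."""
    return 0.6 * x + 0.3 * np.sin(np.pi * x)

x = rng.uniform(-1, 1, 64)
X, y = expand(x), plant(x)

def mse(w):
    """Squared-error cost of FLANN weights w on the training data."""
    return np.mean((X @ w - y) ** 2)

# Basic global-best PSO over the 4 FLANN weights.
n_part, dim = 30, X.shape[1]
pos = rng.uniform(-1, 1, (n_part, dim))
vel = np.zeros((n_part, dim))
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((n_part, dim)), rng.random((n_part, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
print(round(mse(gbest), 6))
```

Because the expansion is fixed, the FLANN keeps the training problem low-dimensional (four weights here), which is what makes derivative-free optimizers such as PSO practical for it.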
Security of Cyber-Physical Systems
Cyber-physical system (CPS) innovations, in conjunction with their sibling computational and technological advancements, have positively impacted our society, leading to new horizons of service excellence in a variety of application fields. With the rapid increase in the use of CPSs in safety-critical infrastructures, their safety and security are top priorities for next-generation designs. The potential consequences of CPS insecurity are far-reaching enough that CPS security has become one of the core elements of the CPS research agenda. Faults, failures, and cyber-physical attacks alter the dynamics of CPSs and cause instability and the malfunction of normal operations. This reprint discusses existing vulnerabilities and focuses on detection, prevention, and compensation techniques to improve the security of safety-critical systems.