
    Transfer Learning for Improving Model Predictions in Highly Configurable Software

    Modern software systems are built to be used in dynamic environments, using configuration capabilities to adapt to changes and external uncertainties. In a self-adaptation context, we are often interested in reasoning about the performance of the systems under different configurations. Usually, we learn a black-box model based on real measurements to predict the performance of the system given a specific configuration. However, as modern systems become more complex, there are many configuration parameters that may interact, and we end up learning an exponentially large configuration space. Naturally, this does not scale when relying on real measurements in the actual changing environment. We propose a different solution: instead of taking the measurements from the real system, we learn the model using samples from other sources, such as simulators that approximate the performance of the real system at low cost. We define a cost model that transforms the traditional view of model learning into a multi-objective problem that takes into account not only model accuracy but also measurement effort. We evaluate our cost-aware transfer learning solution using real-world configurable software including (i) a robotic system, (ii) three different stream processing applications, and (iii) a NoSQL database system. The experimental results demonstrate that our approach can achieve (a) high prediction accuracy as well as (b) high model reliability. Comment: To be published in the proceedings of the 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS'17).
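    The multi-objective view described above can be sketched in a few lines: fit a cheap model on simulator samples, correct it with a handful of real measurements, and score the result with a cost that combines prediction error and measurement effort. This is a minimal illustration, not the paper's implementation; the functions, weights, and synthetic systems below are assumptions.

```python
# Minimal sketch of cost-aware transfer learning: many cheap simulator
# samples plus a few expensive real measurements. All names and the
# synthetic "systems" are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def real_system(config):
    # Stand-in for an expensive measurement of the real system.
    return np.sin(3 * config) + 0.1 * rng.normal()

def simulator(config):
    # Cheap but biased approximation of the real system.
    return np.sin(3 * config) + 0.3 * config

sim_X = rng.uniform(0, 1, size=(50, 1))
sim_y = np.array([simulator(x[0]) for x in sim_X])
real_X = rng.uniform(0, 1, size=(5, 1))
real_y = np.array([real_system(x[0]) for x in real_X])

# Transfer step: learn the simulator response, then model the
# simulator-to-real discrepancy from the few real measurements.
source_model = GaussianProcessRegressor().fit(sim_X, sim_y)
residual = real_y - source_model.predict(real_X)
correction = GaussianProcessRegressor().fit(real_X, residual)

def predict(config):
    x = np.atleast_2d(config)
    return source_model.predict(x) + correction.predict(x)

def total_cost(pred_error, n_real, cost_per_measurement=10.0):
    # Multi-objective view: accuracy and measurement effort in one score.
    return pred_error + cost_per_measurement * n_real

test_X = rng.uniform(0, 1, size=(20, 1))
test_y = np.array([real_system(x[0]) for x in test_X])
err = np.mean((predict(test_X) - test_y) ** 2)
print("MSE:", err, "cost:", total_cost(err, len(real_X)))
```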

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to take decisions pertaining to the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity faced by optical networks in the last few years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) that are enabled by the use of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper, we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude the paper by proposing possible new research directions.
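    As a toy illustration of the kind of supervised-learning task surveyed here, the sketch below trains a classifier to predict from per-lightpath parameters (path length, symbol rate, modulation order) whether signal quality is acceptable. The features, labels, and data are synthetic assumptions, not drawn from the paper.

```python
# Illustrative only: a toy ML-for-optical-networks classification task.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
# Features: path length (km), symbol rate (GBd), modulation order (bits/symbol).
X = np.column_stack([
    rng.uniform(50, 2000, n),   # length_km
    rng.choice([32, 64], n),    # symbol_rate
    rng.choice([2, 4, 6], n),   # bits_per_symbol
])
# Synthetic label: long paths with dense modulation tend to fail the quality check.
ok = (X[:, 0] * X[:, 2] < 5000).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, ok, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```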

    Risk analysis and decision making for autonomous underwater vehicles

    Risk analysis for autonomous underwater vehicles (AUVs) is essential to enable AUVs to explore extreme and dynamic environments. This research aims to augment existing risk analysis methods for AUVs, and it proposes a suite of methods to quantify mission risks and to support the implementation of safety-based decision-making strategies for AUVs in harsh marine environments. The research first provides a systematic review of past progress in risk analysis research for AUV operations. The review addresses key questions about fundamental concepts and evolving methods in the domain of risk analysis for AUVs, and it highlights future research trends to bridge existing gaps. Building on the state of the art, a copula-based approach is proposed for predicting the risk of AUV loss in underwater environments. The developed copula Bayesian network (CBN) handles non-linear dependencies among environmental variables and inherent technical failures of AUVs, and therefore achieves accurate risk estimation for vehicle loss given various environmental observations. Furthermore, path planning for AUVs is an effective decision-making strategy for mitigating risks and ensuring safer routing. A further study presents an offboard risk-based path planning approach for AUVs, considering a challenging environment that incorporates oil spill scenarios. The proposed global Risk-A* planner combines a Bayesian-based risk model for probabilistic risk reasoning with an A*-based algorithm for path searching. However, global path planning designed for static environments cannot handle the unpredictable situations that may emerge, and real-time replanned solutions are required to account for dynamic environmental observations. Therefore, a hybrid risk-aware decision-making strategy is investigated for AUVs that combines static global planning with dynamic local re-planning. A dynamic risk analysis model based on system theoretic process analysis (STPA) and BNs is applied to generate a real-time risk map of target mission areas. The dynamic window algorithm (DWA) is used for local path planning to avoid moving obstacles. The proposed hybrid risk-aware decision-making architecture is essential for the real-life implementation of AUVs, leading eventually to a real-time adaptive path planning process onboard the AUV.
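    The core of a risk-aware planner like the Risk-A* described above is an A* search whose edge cost blends travel distance with a per-cell risk value taken from a risk map. The following is a minimal sketch under that assumption; the grid, weights, and heuristic are illustrative, not the thesis's actual model.

```python
# Minimal risk-weighted A* on a grid: step cost = distance + weighted risk.
import heapq

def risk_astar(risk, start, goal, risk_weight=5.0):
    rows, cols = len(risk), len(risk[0])
    def h(a):  # admissible Manhattan-distance heuristic
        return abs(a[0] - goal[0]) + abs(a[1] - goal[1])
    frontier = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            # Step cost = unit distance + weighted risk of entering the cell.
            ng = g + 1.0 + risk_weight * risk[nxt[0]][nxt[1]]
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None, float("inf")

# Toy risk map: a high-risk band (e.g., an oil-spill region) in the middle.
grid = [[0.0] * 6 for _ in range(6)]
for c in range(1, 5):
    grid[3][c] = 0.9
path, cost = risk_astar(grid, (0, 0), (5, 5))
print(path, cost)
```

    Raising risk_weight makes the planner detour further around the high-risk band, trading path length for safety, which is the essence of risk-based path planning.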

    A Mechanism Design Approach to Bandwidth Allocation in Tactical Data Networks

    The defense sector is undergoing a phase of rapid technological advancement in the pursuit of its goal of information superiority. This goal depends on a large network of complex interconnected systems - sensors, weapons, soldiers - linked through a maze of heterogeneous networks. The sheer scale and size of these networks prompt behaviors that go beyond conglomerations of systems or 'system-of-systems'. The lack of a central locus and the disjointed, competing interests among large clusters of systems make this characteristic of an Ultra Large Scale (ULS) system. These traits of ULS systems challenge and undermine the fundamental assumptions of today's software and system engineering approaches. In the absence of a centralized controller, it is likely that system users will behave opportunistically to meet their local mission requirements rather than the objectives of the system as a whole. In these settings, methods and tools based on economics and game theory (like mechanism design) are likely to play an important role in achieving globally optimal behavior when the participants behave selfishly. Against this background, this thesis explores the potential of using computational mechanisms to govern the behavior of ultra-large-scale systems and achieve an optimal allocation of constrained computational resources. Our research focuses on improving the quality and accuracy of the common operating picture through the efficient allocation of bandwidth in tactical data networks among self-interested actors, who may resort to strategic behavior dictated by self-interest. This research problem presents the kind of challenges we anticipate when we have to deal with ULS systems and, by addressing this problem, we hope to develop a methodology which will be applicable to ULS systems of the future. We build upon previous work that investigates the application of auction-based mechanism design to dynamic, performance-critical, and resource-constrained systems of interest to the defense community. In this thesis, we consider a scenario where a number of military platforms have been tasked with the goal of detecting and tracking targets. The sensors onboard a military platform have a partial and inaccurate view of the operating picture and need to make use of data transmitted from neighboring sensors in order to improve the accuracy of their own measurements. The communication takes place over tactical data networks with scarce bandwidth. The problem is compounded by the possibility that the local goals of military platforms might not be aligned with the global system goal. Such a scenario might occur in multi-flag, multi-platform military exercises, where the military commanders of each platform are more concerned with the well-being of their own platform than of others. Therefore there is a need to design a mechanism that efficiently allocates the flow of data within the network to ensure that the resulting global performance maximizes the information gain of the entire system, despite the self-interested actions of the individual actors. We propose a two-stage mechanism based on modified strictly proper scoring rules, with unknown costs, whereby multiple sensor platforms can provide estimates of limited precision and the center does not have to rely on knowledge of the actual outcome when calculating payments. In particular, our work emphasizes the importance of applying robust optimization techniques to deal with uncertainty in the operating environment. We apply our robust-optimization-based scoring rules algorithm to an agent-based model of the combat tactical data network and analyze the results obtained. Through this work we hope to demonstrate how mechanism design, perched at the intersection of game theory and microeconomics, is aptly suited to address one set of challenges of the ULS system paradigm: challenges not amenable to traditional systems engineering approaches.
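    The incentive at the heart of such a mechanism can be illustrated with the classic quadratic (Brier) scoring rule: an agent's expected payment is maximized exactly when it reports its true belief. The thesis's two-stage, outcome-free variant is more involved; this sketch shows only the basic strict properness.

```python
# Why strictly proper scoring rules elicit truthful reports: under the
# quadratic (Brier) rule, expected payment peaks at the true belief.
import numpy as np

def brier_payment(report, outcome):
    # Quadratic scoring rule for a binary event, scaled to [0, 1].
    return 1.0 - (outcome - report) ** 2

def expected_payment(report, true_prob):
    return (true_prob * brier_payment(report, 1)
            + (1 - true_prob) * brier_payment(report, 0))

true_prob = 0.7
reports = np.linspace(0, 1, 101)
payoffs = [expected_payment(r, true_prob) for r in reports]
best = reports[int(np.argmax(payoffs))]
print("belief:", true_prob, "best report:", best)  # best report equals the belief
```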

    The University Defence Research Collaboration In Signal Processing

    This chapter describes the development of algorithms for the automatic detection of anomalies from multi-dimensional, undersampled and incomplete datasets. The challenge in this work is to identify and classify behaviours as normal or abnormal, safe or threatening, from an irregular and often heterogeneous sensor network. Many defence and civilian applications can be modelled as complex networks of interconnected nodes with unknown or uncertain spatio-temporal relations. The behaviour of such heterogeneous networks can exhibit dynamic properties, reflecting evolution in both network structure (new nodes appearing and existing nodes disappearing) and inter-node relations. The UDRC work has addressed not only the detection of anomalies, but also the identification of their nature and their statistical characteristics. Normal patterns and changes in behaviour have been incorporated to provide an acceptable balance between true positive rate, false positive rate, performance and computational cost. Data quality measures have been used to ensure the models of normality are not corrupted by unreliable and ambiguous data. The context for the activity of each node in complex networks offers an even more efficient anomaly detection mechanism. This has allowed the development of efficient approaches which not only detect anomalies but also go on to classify their behaviour.
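    As an illustrative sketch (not the UDRC algorithms themselves), a simple model of normality for multi-dimensional data can be built from the mean and covariance of baseline observations, with a score threshold chosen to trade true-positive rate against false-positive rate.

```python
# Toy model-of-normality anomaly detector using Mahalanobis distance.
import numpy as np

rng = np.random.default_rng(2)
normal = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=500)
anomalies = rng.uniform(-6, 6, size=(20, 2))

# Model of normality: mean and covariance of the baseline data.
mu = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def score(x):
    d = x - mu
    return np.sqrt(np.einsum("...i,ij,...j->...", d, cov_inv, d))  # Mahalanobis distance

# Threshold at the 99th percentile of normal scores (~1% false-positive rate).
threshold = np.percentile(score(normal), 99)
print("flagged anomalies:", int((score(anomalies) > threshold).sum()), "of", len(anomalies))
```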