
    Survey of Autonomic Computing and Experiments on JMX-based Autonomic Features

    Autonomic Computing (AC) aims to solve the problem of managing the rapidly growing complexity of Information Technology systems by creating self-managing systems. In this thesis, we survey the progress of the AC field and study the requirements, models, and architectures of AC. The commonly recognized AC requirements are four properties: self-configuring, self-healing, self-optimizing, and self-protecting. The recommended software architecture is the MAPE-K model, containing four modules (monitor, analyze, plan, and execute) together with a knowledge repository. In the modern software marketplace, Java Management Extensions (JMX) facilitates one of the AC requirements: monitoring. Using JMX, we implemented a package that assists programming for AC features including socket management, logging, and recovery of distributed computation. In our experiments, we not only exercised powerful Java capabilities that are unknown to many educators, but also demonstrated the feasibility of teaching AC in senior computer science courses.
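    As an illustration of the monitoring function mentioned above, the following minimal sketch uses the standard javax.management API to expose a socket metric as an MBean; the SocketManagerDemo class and its getOpenSocketCount attribute are hypothetical examples for illustration, not the package developed in the thesis.

    import java.lang.management.ManagementFactory;
    import java.util.concurrent.atomic.AtomicInteger;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class SocketManagerDemo {
        // JMX naming convention: a standard MBean's management interface
        // must be named <ImplementationClass>MBean.
        public interface SocketManagerMBean {
            int getOpenSocketCount();
        }

        public static class SocketManager implements SocketManagerMBean {
            private final AtomicInteger open = new AtomicInteger();
            public void opened() { open.incrementAndGet(); }
            public void closed() { open.decrementAndGet(); }
            @Override public int getOpenSocketCount() { return open.get(); }
        }

        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            SocketManager mgr = new SocketManager();
            // Once registered, the attribute is visible to any JMX client,
            // e.g. jconsole attached to this JVM.
            server.registerMBean(mgr, new ObjectName("demo:type=SocketManager"));
            mgr.opened();
            System.out.println("Open sockets: " + mgr.getOpenSocketCount());
        }
    }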

    A Machine Learning Enhanced Scheme for Intelligent Network Management

    Versatile networking services strongly influence daily life, while the number and diversity of services make network systems highly complex. Network scale and complexity grow with increasing infrastructure, network functions, network slices, and the evolution of the underlying architecture. The conventional approach of maintaining such large, complex platforms through manual administration makes effective and insightful management troublesome. A feasible and promising alternative is to extract insightful information from the large volumes of network data these systems produce. The goal of this thesis is to use learning-based algorithms from the machine learning community to discover valuable knowledge in that data, directly supporting intelligent management and maintenance. The thesis focuses on two schemes: network anomaly detection with root-cause localization, and critical traffic resource control and optimization. First, network data carries informative messages, but its heterogeneity and complexity make diagnosis challenging. For unstructured logs, abstract, formatted log templates are extracted to regularize log records. An in-depth analysis framework based on heterogeneous data is proposed to detect faults and anomalies: it employs representation learning to map unstructured data into numerical features and fuses the extracted features for network anomaly and fault detection, using word2vec-based embeddings for semantic expression. Next, fault and anomaly detection only reveals that an event occurred without identifying its cause, so fault localization is introduced to narrow down the source of systematic anomalies. The extracted features are formed into an anomaly degree, coupled with an importance-ranking method to highlight the locations of anomalies in the network; two ranking modes, instantiated by PageRank and by operation errors, jointly highlight likely problem locations. Beyond fault and anomaly detection, network traffic engineering manages communication and computation resources to optimize the efficiency of data transfer. Especially when traffic is constrained by communication conditions, a proactive path-planning scheme supports efficient traffic-control actions, so a learning-based traffic-planning algorithm is proposed, based on a sequence-to-sequence model, to discover reasonable hidden paths from abundant traffic history over a Software Defined Network architecture. Finally, traffic engineering based purely on empirical data is likely to yield stale, sub-optimal solutions, or even worse outcomes, so a resilient mechanism is required to adapt network flows to a dynamic environment based on context. A reinforcement learning-based scheme is therefore put forward for dynamic data forwarding that takes network resource status into account, and it shows a promising performance improvement. In summary, the proposed anomaly-processing framework strengthens analysis and diagnosis for network administrators by combining fault detection and root-cause localization, while the learning-based traffic engineering improves network flow management from experience data and points toward flexible traffic adjustment in ever-changing environments.
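    To make the ranking idea concrete, here is a minimal power-iteration PageRank sketch over a hypothetical service-dependency graph; the graph, damping factor, and iteration count are illustrative assumptions, and the thesis's coupling with anomaly degrees and operation errors is omitted.

    import java.util.Arrays;

    public class PageRankSketch {
        // Power-iteration PageRank. adj[i][j] = 1 if component i
        // depends on (links to) component j.
        static double[] pageRank(int[][] adj, double damping, int iters) {
            int n = adj.length;
            double[] rank = new double[n];
            Arrays.fill(rank, 1.0 / n);
            for (int it = 0; it < iters; it++) {
                double[] next = new double[n];
                Arrays.fill(next, (1.0 - damping) / n);
                for (int i = 0; i < n; i++) {
                    int outDeg = Arrays.stream(adj[i]).sum();
                    if (outDeg == 0) continue; // dangling node: skipped for brevity
                    for (int j = 0; j < n; j++) {
                        if (adj[i][j] == 1) next[j] += damping * rank[i] / outDeg;
                    }
                }
                rank = next;
            }
            return rank;
        }

        public static void main(String[] args) {
            // Hypothetical 4-component graph: 0 and 1 depend on 2; 2 depends on 3,
            // so 2 and 3 accumulate rank and surface as likely fault locations.
            int[][] adj = { {0,0,1,0}, {0,0,1,0}, {0,0,0,1}, {0,0,0,0} };
            System.out.println(Arrays.toString(pageRank(adj, 0.85, 50)));
        }
    }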

    Flexible multi-layer virtual machine design for virtual laboratory in distributed systems and grids.

    We propose a flexible Multi-layer Virtual Machine (MVM) design intended to improve efficiency in distributed and grid computing and to overcome known problems in traditional virtual machine architectures and those used in distributed and grid systems. This thesis presents a novel approach to building a virtual laboratory to support e-science by adapting MVMs within distributed systems and grids, providing enhanced flexibility and reconfigurability by raising the level of abstraction. The MVM consists of three layers: the OS-level VM, queue VMs, and component VMs. Together, a group of MVMs provides the virtualized resources, virtualized networks, and reconfigurable components layer for virtual laboratories. We demonstrate how our reconfigurable virtual machine allows software designers and developers to reuse parallel communication patterns. In our framework, virtual machines can be created on demand, and their applications can be distributed at the source-code level, then compiled and instantiated at runtime. (Abstract shortened by UMI.) Source: Masters Abstracts International, Volume: 44-03, page: 1405. Thesis (M.Sc.)--University of Windsor (Canada), 2005.
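    As a rough illustration of the kind of reusable parallel communication pattern the framework targets, the sketch below implements a generic task farm (master/worker) on a thread pool; the FarmPattern class and its API are hypothetical examples and are unrelated to the MVM implementation itself.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.function.Function;

    public class FarmPattern {
        // Reusable task-farm pattern: the master submits one task per input,
        // workers pull tasks from a shared pool, results come back in order.
        static <T, R> List<R> farm(List<T> inputs, Function<T, R> worker,
                                   int nWorkers) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(nWorkers);
            try {
                List<Callable<R>> tasks = new ArrayList<>();
                for (T in : inputs) tasks.add(() -> worker.apply(in));
                List<R> results = new ArrayList<>();
                for (Future<R> f : pool.invokeAll(tasks)) results.add(f.get());
                return results;
            } finally {
                pool.shutdown();
            }
        }

        public static void main(String[] args) throws Exception {
            // The same farm skeleton is reused with any worker function.
            System.out.println(farm(List.of(1, 2, 3, 4), x -> x * x, 2)); // [1, 4, 9, 16]
        }
    }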

    Big data analytics towards predictive maintenance at the INFN-CNAF computing centre

    High Energy Physics (HEP) has long been among the pioneers in managing and processing enormous scientific datasets and in operating some of the largest data centres for scientific applications. HEP developed a computational Grid for computing at the Large Hadron Collider (LHC) at CERN in Geneva, which currently coordinates daily computing operations on more than 800k processors across 170 computing centres, managing half an Exabyte of data on disk distributed over 5 continents. In the next phases of LHC, especially in view of Run 4, the amount of data handled by the computing centres will grow considerably. In this context, the HEP Software Foundation has produced a Community White Paper (CWP) charting the evolution of modern software and computing models in preparation for the so-called High Luminosity phase of LHC. This work identified Big Data Analytics techniques as having enormous potential for tackling HEP's future challenges. One line of development concerns so-called Operation Intelligence, i.e. the pursuit of a higher level of automation within workflows. Such approaches could enable the transition from a reactive maintenance model to a more advanced predictive, or even prescriptive, one. This thesis presents work done in collaboration with the INFN-CNAF computing centre to introduce a system for ingesting, organizing, and processing the centre's logs on a unified Big Data Analytics platform, in order to prototype a predictive maintenance model for the centre. The thesis contributes to this project with the development of a log-message clustering algorithm based on similarity measures between textual fields, to overcome the limits posed by the verbosity and heterogeneity of the logs collected from the various services operating 24/7 at the centre.
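    A minimal sketch of the kind of similarity-based log clustering described above: it computes a normalized Levenshtein distance between message fields and greedily groups messages under a threshold. The greedy strategy, the 0.3 threshold, and the sample messages are assumptions for illustration, not the thesis's actual algorithm.

    import java.util.ArrayList;
    import java.util.List;

    public class LogClusterSketch {
        // Classic dynamic-programming Levenshtein edit distance.
        static int levenshtein(String a, String b) {
            int[][] d = new int[a.length() + 1][b.length() + 1];
            for (int i = 0; i <= a.length(); i++) d[i][0] = i;
            for (int j = 0; j <= b.length(); j++) d[0][j] = j;
            for (int i = 1; i <= a.length(); i++)
                for (int j = 1; j <= b.length(); j++)
                    d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                            d[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1));
            return d[a.length()][b.length()];
        }

        // Greedy single pass: a message joins the first cluster whose
        // representative is within the normalized distance threshold,
        // otherwise it starts a new cluster.
        static List<List<String>> cluster(List<String> logs, double threshold) {
            List<List<String>> clusters = new ArrayList<>();
            for (String msg : logs) {
                boolean placed = false;
                for (List<String> c : clusters) {
                    String rep = c.get(0);
                    double norm = (double) levenshtein(msg, rep)
                            / Math.max(msg.length(), rep.length());
                    if (norm <= threshold) { c.add(msg); placed = true; break; }
                }
                if (!placed) clusters.add(new ArrayList<>(List.of(msg)));
            }
            return clusters;
        }

        public static void main(String[] args) {
            List<String> logs = List.of(
                    "disk /dev/sda1 failure", "disk /dev/sdb2 failure", "service restarted");
            System.out.println(cluster(logs, 0.3)); // the two disk messages group together
        }
    }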

    Adaptive sampling-based profiling techniques for optimizing the distributed JVM runtime

    Extending the standard Java virtual machine (JVM) for cluster-awareness is a transparent approach to scaling out multithreaded Java applications. While this clustering solution has gained momentum in recent years, efficient runtime support for fine-grained object sharing over the distributed JVM remains a challenge. System efficiency is strongly connected to the global object-sharing profile that determines the overall communication cost. Once the sharing or correlation between threads is known, access locality can be optimized by collocating highly correlated threads via dynamic thread migration. Although correlation-tracking techniques have been studied in some page-based software DSM systems, they would entail prohibitively high overhead and low accuracy if ported to fine-grained object-based systems. In this paper, we propose a lightweight sampling-based profiling technique for tracking inter-thread sharing. To preserve locality across migrations, we also propose a stack-sampling mechanism for profiling the set of objects that are tightly coupled with a migrant thread. Sampling rates in both techniques can vary adaptively to strike a balance between preciseness and overhead. Such adaptive techniques are particularly useful for applications whose sharing patterns change dynamically. The profiling results can be exploited for effective thread-to-core placement and dynamic load balancing in a distributed object-sharing environment. We present the design and preliminary performance results of our distributed JVM with the profiling implemented. Experimental results show that the profiling obtains over 95% accurate global sharing profiles at a cost of only a few percent of execution-time increase for fine- to medium-grained applications. © 2010 IEEE. The 24th IEEE International Symposium on Parallel & Distributed Processing (IPDPS 2010), Atlanta, GA, 19-23 April 2010. In Proceedings of the 24th IPDPS, 2010, p. 1-1.
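    The adaptive trade-off between preciseness and overhead could look roughly like the sketch below, where a sampling probability is halved when measured overhead exceeds a budget and doubled when the observed sharing profile drifts; the thresholds, bounds, and the stability signal are invented for illustration and are not the paper's actual policy.

    import java.util.concurrent.ThreadLocalRandom;

    public class AdaptiveSampler {
        // Current sampling probability, kept between a floor and a ceiling.
        private double rate = 0.05;
        private static final double MIN_RATE = 0.001, MAX_RATE = 0.2;

        // Decide whether to record this object-access event.
        boolean shouldSample() {
            return ThreadLocalRandom.current().nextDouble() < rate;
        }

        // Called periodically with the measured profiling overhead (fraction of
        // execution time) and an estimate of profile stability: lower the rate
        // when overhead grows, raise it when the sharing pattern is drifting.
        void adapt(double overhead, double stability) {
            if (overhead > 0.02) rate = Math.max(MIN_RATE, rate * 0.5);
            else if (stability < 0.9) rate = Math.min(MAX_RATE, rate * 2.0);
        }

        public static void main(String[] args) {
            AdaptiveSampler s = new AdaptiveSampler();
            s.adapt(0.05, 0.95); // overhead above budget: the rate halves
            System.out.println(s.shouldSample());
        }
    }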

    Microservice Transition and its Granularity Problem: A Systematic Mapping Study

    Microservices have gained wide recognition and acceptance in software industries as an emerging architectural style for autonomic, scalable, and more reliable computing. The transition to microservices has been highly motivated by the need for better alignment of technical design decisions with the improving value potential of architectures. Despite microservices' popularity, research still lacks a disciplined understanding of the transition and consensus on the principles and activities underlying "micro-ing" architectures. In this paper, we report on a systematic mapping study that consolidates various views, approaches, and activities that commonly assist in the transition to microservices. The study aims to provide a better understanding of the transition; it also contributes a working definition of the transition and the technical activities underlying it. We term the transition and the technical activities leading to microservice architectures microservitization. We then shed light on a fundamental problem of microservitization: microservice granularity and reasoning about its adaptation as a first-class entity. This study reviews the state of the art and practice related to reasoning about microservice granularity, covering the modelling approaches, aspects considered, guidelines, and processes used. Finally, it identifies opportunities for future research and development related to reasoning about microservice granularity. Comment: 36 pages including references, 6 figures, and 3 tables.

    Towards reliable logging in the internet of things networks

    The Internet of Things (IoT) is one of the most rapidly developing technologies, and its low cost and usability make it applicable to various critical disciplines. As components of such critical infrastructure, these networks have to be dependable and deliver the best possible outcomes. Keeping track of network events is one method for enhancing network reliability, as network event logging supports essential processes such as debugging, checkpointing, auditing, root-cause analysis, and forensics. However, logging in IoT networks is not a simple task: IoT devices are positioned in remote places with unstable connectivity and inadequate security protocols, making them vulnerable to environmental faults and security breaches. This thesis investigates the problem of reliable logging in IoT networks. We concentrate on the problem in the presence of Byzantine behaviour and on the integration of logging middleware into the network stack. To address these concerns, we propose a technique for distributed logging that spreads loggers around the network. We define the logger selection problem and the collection problem, and show that only the probabilistic weak variant of the problem is solvable. We examine the performance of the Collector algorithm in several MAC setups. We then explore the notion of auditability in IoT, showing how safety specifications can be enforced through analogies with fair exchange. Next, we review our findings and their place in the existing body of knowledge. We also discuss the limitations we faced when investigating this problem, and we conclude the thesis with opportunities for future work.
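    To convey why only a probabilistic guarantee is achievable, here is a naive sketch of randomized, fully local logger selection: each node volunteers with probability p, giving roughly p * n loggers in expectation but no deterministic coverage guarantee. The selectLoggers function and its parameters are hypothetical and far simpler than the thesis's Collector algorithm.

    import java.util.List;
    import java.util.Random;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;

    public class LoggerSelection {
        // Each node decides locally, with no coordination, whether to act as
        // a logger; in expectation about p * nNodes loggers are selected, but
        // any particular run may select more, fewer, or poorly placed ones.
        static List<Integer> selectLoggers(int nNodes, double p, long seed) {
            Random rnd = new Random(seed);
            return IntStream.range(0, nNodes)
                    .boxed()
                    .filter(id -> rnd.nextDouble() < p)
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            // Hypothetical 20-node network with a 25% volunteering probability.
            System.out.println(selectLoggers(20, 0.25, 42L));
        }
    }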