
    Experimental Analysis of Energy Efficiency of Server Infrastructure in University Datacenters

    With the increasing number of user applications, the growing amount of user-generated data, and the need for more intensive data processing, the question of energy efficiency arises in modern data centers. IT equipment requires the permanent maintenance of appropriate climatic conditions; significant investments are therefore needed in cooling systems and in ensuring a constant supply of electricity. In this paper, an experimental analysis is performed concerning the economic and environmental aspects of server virtualization, including the business value of virtualization. The analysis was conducted concurrently on a traditional architecture and on a virtual ecosystem. The findings show considerable advantages of the virtual and cloud ecosystem in the form of optimal provisioning and use of physical machine resources. The authors analysed the power utilization of a number of physical servers as opposed to the same number of virtual servers; the results present an assessment of cumulative energy utilization and load on the physical units under optimal client-request workloads during the working week. The paper presents the idea of low electric energy consumption through the green datacenter concept, as a contribution to the advancement of IT technologies at Singidunum University that is also applicable to other modern university datacenters.
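
    As a rough illustration of the kind of comparison the paper performs, the sketch below contrasts the cumulative weekly energy of dedicated physical servers with the same workloads consolidated onto fewer virtualised hosts; all wattages, server counts and hours are invented placeholders, not the paper's measurements.

        # Hypothetical comparison of cumulative weekly energy use: dedicated
        # physical servers vs. the same workloads consolidated onto fewer
        # virtualised hosts. All figures are illustrative placeholders.

        HOURS_PER_WEEK = 5 * 8  # working week assumed in the measurement scenario

        def weekly_energy_kwh(n_servers: int, avg_watts: float) -> float:
            """Cumulative energy over a working week, in kWh."""
            return n_servers * avg_watts * HOURS_PER_WEEK / 1000.0

        physical = weekly_energy_kwh(n_servers=10, avg_watts=350)  # one workload per box
        virtual = weekly_energy_kwh(n_servers=2, avg_watts=450)    # ten VMs on two hosts

        print(f"physical: {physical:.1f} kWh, virtual: {virtual:.1f} kWh")
        print(f"saving: {100 * (physical - virtual) / physical:.0f}%")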

    Dead-zone logic in autonomic systems

    Published in Evolving and Adaptive Intelligent Systems, IEEE Conference 2014 (EAIS 2014). Dead-zone logic is a mechanism that prevents autonomic managers from taking unnecessary, inefficient and ineffective control actions when the system is sufficiently close to its target state. It provides a natural and powerful framework for achieving dependable self-management in autonomic systems by enabling autonomic managers to smartly carry out a change (or adapt) only when it is safe and efficient to do so, within a particular (defined) safety margin. This paper explores and evaluates the performance impact of dead-zone logic in trustworthy autonomic computing. Using two case-example scenarios, dynamic temperature target tracking and autonomic datacentre resource request and allocation management, we present empirical analyses that demonstrate the effectiveness of dead-zone logic in achieving stability, dependability and trustworthiness in adaptive systems. Results show that dead-zone logic can significantly enhance the trustability of autonomic systems.
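
    The core idea lends itself to a compact sketch: act only when the measured value drifts outside a safety margin around the target. A minimal illustration follows, assuming the temperature-tracking scenario; the controller interface, 21.0-degree target and 2.0-degree margin are illustrative assumptions, not the paper's implementation.

        # Minimal dead-zone controller: adapt only when the measured value
        # drifts outside a safety margin around the target. The target and
        # margin values below are illustrative assumptions.

        def dead_zone_controller(target: float, margin: float):
            def decide(measured: float) -> float:
                error = measured - target
                if abs(error) <= margin:
                    return 0.0       # inside the dead zone: take no action
                return -error        # outside: correct back towards the target
            return decide

        adjust = dead_zone_controller(target=21.0, margin=2.0)
        for reading in (20.2, 21.5, 24.1, 18.3):
            print(reading, "->", adjust(reading))

    Suppressing actions inside the margin is precisely what avoids the oscillation and wasted adaptations that undermine trust in self-managing systems.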

    Improving quality of service in application clusters

    Quality of service (QoS) requirements, which include availability, integrity, performance and responsiveness, are increasingly demanded by science and engineering applications. Rising computational demands and data mining present a new challenge in the IT world: as our needs for processing, research and analysis increase, performance and reliability degrade exponentially. In this paper we present a software system that manages quality of service for Unix-based distributed application clusters. Our approach is synthetic and involves intelligent agents that use static and dynamic ontologies to monitor, diagnose and correct faults at run time over a private network. Finally, we provide experimental results from our pilot implementation in a production environment.
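
    A minimal sketch of the monitor-diagnose-correct cycle such agents perform is given below; the specific check, remedy and interval are invented placeholders rather than the paper's ontology-driven logic.

        import shutil
        import subprocess
        import time

        # One health check and one corrective action per monitored property;
        # both are invented placeholders for the agents' ontology-driven logic.
        CHECKS = {
            "disk_free": lambda: shutil.disk_usage("/").free > (1 << 30),  # > 1 GiB free
        }
        REMEDIES = {
            "disk_free": lambda: subprocess.run(
                ["logrotate", "-f", "/etc/logrotate.conf"], check=False),
        }

        def agent_loop(interval_s: float = 30.0) -> None:
            """Monitor, diagnose and correct faults at run time."""
            while True:
                for name, check in CHECKS.items():
                    if not check():          # diagnosis: the property is violated
                        REMEDIES[name]()     # correction attempted automatically
                time.sleep(interval_s)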

    From business continuity to design of critical infrastructures: ensuring the proper resilience level to datacentres

    In recent years, companies that run business-critical applications have increased their focus on their support infrastructures. Indeed, it is clearly useless to pursue higher system reliability when the underlying infrastructure is vulnerable. The aim of this paper is to explore the value of business continuity within the scope of the design of resilient systems. The publication of the fifth revision of the ANSI/TIA/EIA-942 standard provides operations managers and risk managers with a framework to plan and design resilient infrastructures. It is shown how to use this standard to analyse the gap between the current and the desired resilience level of a system, and to suggest the proper steps to close that gap according to the business continuity requirements. The approach was applied to the power system infrastructure of a primary Italian Application Service Provider delivering 24/7 mission-critical services to its customers.
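
    The gap analysis step can be pictured with a small sketch: compare the assessed rating of each subsystem against the rating the business continuity requirements demand. The subsystems and ratings below are illustrative assumptions, not the standard's actual requirement tables.

        # Assessed rating per subsystem vs. the rating demanded by business
        # continuity requirements; values are illustrative only.
        CURRENT = {"power": 2, "cooling": 3, "telecom": 2}
        DESIRED_RATING = 3

        gaps = {k: DESIRED_RATING - v for k, v in CURRENT.items() if v < DESIRED_RATING}
        for subsystem, shortfall in gaps.items():
            print(f"{subsystem}: upgrade by {shortfall} rating level(s)")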

    Trustworthy autonomic architecture (TAArch): Implementation and empirical investigation

    This paper presents a new architecture for trustworthy autonomic systems. This trustworthy autonomic architecture differs from the traditional autonomic computing architecture in that it includes mechanisms and instrumentation to explicitly support run-time self-validation and trustworthiness. The state of practice does not lend itself robustly enough to support trustworthiness and system dependability: even when a system's decisions are validated within a logical boundary set for the system, erratic behaviour or inconsistency may still emerge, for example at a different logical level or on a different time scale. A more thorough and holistic approach, with a higher level of check, is therefore required to convincingly address dependability and trustworthiness concerns. Validation alone does not always guarantee trustworthiness, as each individual decision can be correct (validated) while the overall system remains inconsistent and thus undependable. A robust approach requires that validation and trustworthiness be designed in and integral at the architectural level, not treated as add-ons, since they cannot be reliably retro-fitted to systems. This paper analyses the current state of practice in autonomic architecture, presents a different architectural approach for trustworthy autonomic systems, and uses a datacentre scenario as the basis for empirical analysis of behaviour and performance. Results show that the proposed trustworthy autonomic architecture offers significant performance improvements over existing architectures and can be relied upon to operate (or manage) at almost all levels of datacentre scale and complexity.
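
    The paper's central argument, per-decision validation plus a higher-level consistency check, can be sketched as follows; the class name, thresholds and windowed oscillation test are illustrative assumptions, not the TAArch implementation.

        from collections import deque

        class TrustworthyManager:
            """Validate each decision, then check consistency across decisions."""

            def __init__(self, low: float, high: float, window: int = 10):
                self.low, self.high = low, high
                self.history = deque(maxlen=window)

            def _valid(self, allocation: float) -> bool:
                # per-decision validation within the logical boundary
                return self.low <= allocation <= self.high

            def _consistent(self, allocation: float) -> bool:
                # higher-level check: individually valid decisions must not
                # oscillate wildly across the recent window
                self.history.append(allocation)
                if len(self.history) < self.history.maxlen:
                    return True
                spread = max(self.history) - min(self.history)
                return spread < (self.high - self.low) / 2

            def enact(self, allocation: float) -> bool:
                return self._valid(allocation) and self._consistent(allocation)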

    Strain-Release-Driven Friedel–Crafts Spirocyclization of Azabicyclo[1.1.0]butanes

    The identification of spiro N-heterocycles as scaffolds that display structural novelty, three-dimensionality and beneficial physicochemical properties, and that enable the controlled spatial disposition of substituents, has led to a surge of interest in utilizing these compounds in drug discovery programs. Herein, we report the strain-release-driven Friedel–Crafts spirocyclization of azabicyclo[1.1.0]butane-tethered (hetero)aryls for the synthesis of a unique library of azetidine spiro-tetralins. The reaction was discovered to proceed through an unexpected interrupted Friedel–Crafts mechanism, generating a highly complex azabicyclo[2.1.1]hexane scaffold. This dearomatized intermediate, formed exclusively as a single diastereomer, can subsequently be converted to the Friedel–Crafts product upon electrophilic activation of the tertiary amine, or trapped as a Diels–Alder adduct in one pot. The rapid assembly of molecular complexity demonstrated in these reactions highlights the potential of the strain-release-driven spirocyclization strategy for the synthesis of medicinally relevant scaffolds.

    Distributed, multi-level network anomaly detection for datacentre networks

    Over the past decade, numerous systems have been proposed to detect and subsequently prevent or mitigate security vulnerabilities. However, many existing intrusion or anomaly detection solutions are limited to a subset of the traffic due to scalability issues, and hence fail to operate at line rate on large, high-speed datacentre networks. In this paper, we present a two-level solution for anomaly detection that leverages independent execution and message-passing semantics. We employ these constructs within a network-wide distributed anomaly detection framework that allows for greater detection accuracy and bandwidth savings through attack-path reconstruction. Experimental results, using real operational traffic traces and known network attacks generated through the Pytbull IDS evaluation framework, show that our approach is capable of detecting anomalies in a timely manner while allowing reconstruction of the attack path, further enabling the composition of advanced mitigation strategies. The resulting system shows high detection accuracy compared to similar techniques, at least 20% better at detecting anomalies, and enables full path reconstruction even at small-to-moderate attack traffic intensities (as a fraction of the total traffic), saving up to 75% of bandwidth through early attack detection.
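
    A toy sketch of the two-level structure, local detectors passing alert messages to a global correlator that reconstructs the attack path, is shown below; the threshold, identifiers and data structures are assumptions for illustration only.

        import queue

        alerts: "queue.Queue[dict]" = queue.Queue()  # message-passing channel

        def local_detector(switch_id: str, flow_rates: dict) -> None:
            """First level: lightweight per-switch threshold detection."""
            for flow, pps in flow_rates.items():
                if pps > 10_000:  # illustrative per-flow packet-rate threshold
                    alerts.put({"switch": switch_id, "flow": flow, "pps": pps})

        def global_correlator() -> dict:
            """Second level: correlate alerts into a per-flow attack path."""
            path: dict = {}
            while not alerts.empty():
                alert = alerts.get()
                path.setdefault(alert["flow"], []).append(alert["switch"])
            return path

        local_detector("s1", {"f1": 12_000})
        local_detector("s2", {"f1": 15_000, "f2": 100})
        print(global_correlator())  # {'f1': ['s1', 's2']}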

    Experimental Evaluation of SDN-Controlled, Joint Consolidation of Policies and Virtual Machines

    Middleboxes (MBs) are ubiquitous in modern data centres (DCs) due to their crucial role in implementing network security, management and optimisation. To meet network policies' requirements that traffic correctly traverse an ordered sequence of MBs, network administrators rely on static policy-based routing or VLAN stitching to steer traffic flows. However, dynamic virtual server migration in virtualised environments has greatly challenged such static traffic steering. In this paper, we design and implement Sync, an efficient and synergistic scheme that jointly consolidates network policies and virtual machines (VMs), in a readily deployable Mininet environment. We present the architecture of the Sync framework and open-source its code. We also extensively evaluate Sync over diverse workloads and policies. Our results show that in an emulated DC of 686 servers, 10k VMs, 8k policies and 100k flows, Sync processes a group of 900 VMs in 634 seconds and a group of 10 VMs in 4 seconds.
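
    A toy sketch of the joint placement idea, scoring candidate hosts for a VM by the traversal cost of the ordered MB chain its policy requires, appears below; the topology, costs and names are invented for illustration and do not reflect Sync's actual algorithm.

        # Ordered MB sequence required by a policy, current MB placement, and
        # a toy hop-cost matrix between two hosts; all invented for illustration.
        POLICY = {"vm42": ["firewall", "ids", "lb"]}
        MB_HOST = {"firewall": "h1", "ids": "h2", "lb": "h2"}
        HOP_COST = {("h1", "h1"): 0, ("h1", "h2"): 1,
                    ("h2", "h1"): 1, ("h2", "h2"): 0}

        def traversal_cost(vm_host: str, chain: list) -> int:
            """Cost of steering a VM's traffic through its ordered MB chain."""
            cost, here = 0, vm_host
            for mb in chain:
                nxt = MB_HOST[mb]
                cost += HOP_COST[(here, nxt)]
                here = nxt
            return cost

        best = min(["h1", "h2"], key=lambda h: traversal_cost(h, POLICY["vm42"]))
        print("place vm42 on", best)  # the host minimising policy traversal cost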
    • …