
    Reducing the number of miscreant tasks executions in a multi-use cluster

    Exploiting computational resources within an organisation for more than their primary task offers great benefits: it makes better use of capital expenditure and provides a pool of computational power. This can be achieved through the deployment of a cycle-stealing distributed system, where tasks execute during the idle time on computers. However, if a task has not completed when a computer returns to its primary function, the task will be preempted, wasting time (and energy), and is often reallocated to a new resource in an attempt to complete. This is exacerbated when tasks are incapable of completing, due to excessive execution time or faulty hardware/software, leading to a situation where tasks are perpetually reallocated between computers, wasting yet more time and energy. In this work we investigate techniques to increase the chance of ‘good’ tasks completing whilst curtailing the execution of ‘bad’ tasks. We demonstrate, through simulation, that we could have reduced the energy consumption of our cycle-stealing system by approximately 50%.
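One simple way to curtail ‘bad’ tasks is to cap how many times a preempted task may be reallocated before it is classed as miscreant and removed. The sketch below is an illustration of that idea, not necessarily the policy evaluated in the paper; the threshold value and field names are assumptions.

```python
# Illustrative sketch (not the paper's exact policy): cap the number of
# reallocations a preempted task may receive, bounding the energy wasted
# on tasks that can never finish.

MAX_REALLOCATIONS = 5  # assumed threshold; a real system would tune this

def on_preemption(task, queue):
    """Decide the fate of a task evicted when its host returns to primary use."""
    task["reallocations"] += 1
    if task["reallocations"] > MAX_REALLOCATIONS:
        task["state"] = "killed"   # curtail a likely 'bad' task
    else:
        task["state"] = "queued"   # give a likely 'good' task another chance
        queue.append(task)
    return task["state"]
```

Each preemption increments the counter; only once the cap is exceeded is the task abandoned, trading a small risk of killing a slow ‘good’ task against the unbounded energy cost of a perpetually reallocated ‘bad’ one.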


    Operating policies for energy efficient large scale computing

    PhD thesis. Energy costs now dominate IT infrastructure total cost of ownership, with datacentre operators predicted to spend more on energy than on hardware infrastructure over the next five years. With Western European datacentre power consumption estimated at 56 TWh/year in 2007 and projected to double by 2020, improving the energy efficiency of IT operations is imperative. The issue is further compounded by social and political factors and by the strict environmental legislation governing organisations. One example of such large IT systems is high-throughput cycle-stealing distributed systems such as HTCondor and BOINC, which allow organisations to leverage spare capacity on existing infrastructure to undertake valuable computation. As a consequence of increased scrutiny of the energy impact of these systems, aggressive power management policies are often employed to reduce the energy impact of institutional clusters, but in doing so these policies severely restrict the computational resources available to high-throughput systems. These policies are often configured to quickly transition servers and end-user cluster machines into low-power states after only short idle periods, further compounding the issue of reliability. In this thesis, we evaluate operating policies for energy efficiency in large-scale computing environments by means of trace-driven discrete event simulation, leveraging real-world workload traces collected within Newcastle University. The major contributions of this thesis are as follows: i) evaluation of novel energy-efficient management policies for a decentralised peer-to-peer (P2P) BitTorrent environment; ii) introduction of a novel simulation environment for evaluating the energy efficiency of large-scale high-throughput computing systems, and a generalisable model of energy consumption in high-throughput computing systems; iii) proposal and evaluation of resource allocation strategies for energy consumption in high-throughput computing systems for a real workload; iv) proposal and evaluation, for a real workload, of mechanisms to reduce wasted task execution within high-throughput computing systems to reduce energy consumption; v) evaluation of the impact of fault tolerance mechanisms on energy consumption.
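A generalisable energy model of the kind the thesis describes can be pictured as residence-time-weighted power draw: total energy is the sum, over power states, of each state's draw multiplied by the time spent in it. A minimal sketch, with assumed wattages (the figures below are illustrative, not taken from the thesis):

```python
# Minimal per-machine energy model: energy = sum over power states of
# (state power draw) x (time spent in that state). Wattages are assumptions.

POWER_W = {"active": 120.0, "idle": 60.0, "sleep": 4.0}  # assumed draws in watts

def energy_kwh(durations_s):
    """Energy in kWh given seconds spent in each power state."""
    joules = sum(POWER_W[state] * t for state, t in durations_s.items())
    return joules / 3.6e6  # 1 kWh = 3.6 MJ

# e.g. one hour active plus one hour idle:
# energy_kwh({"active": 3600, "idle": 3600}) == 0.18
```

Under such a model, an operating policy changes only the per-state durations, which is what makes trace-driven comparison of policies tractable.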

    Trace-Driven Simulation for Energy Consumption in High Throughput Computing Systems

    High Throughput Computing (HTC) is a powerful paradigm allowing vast quantities of independent work to be performed simultaneously. However, until recently little evaluation had been performed of the energy impact of HTC. Many organisations now seek to minimise energy consumption across their IT infrastructure, though it is unclear how this will affect the usability of HTC systems. We present here HTC-Sim, a simulation system which allows the evaluation of different energy reduction policies across an HTC system comprising a collection of computational resources dedicated to HTC work and resources provided through cycle scavenging -- a Desktop Grid. We demonstrate that our simulation software scales linearly with increasing HTC workload.
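The trace-driven approach can be sketched as a minimal discrete event loop (an illustration of the technique, not HTC-Sim's actual code): events from a workload trace are replayed in timestamp order and each one advances the simulated clock.

```python
import heapq

# Minimal trace-driven discrete event simulation loop: trace events are
# replayed in timestamp order; a real simulator's handlers would update
# machine power states, task queues, and energy counters at each step.

def simulate(trace):
    """trace: iterable of (timestamp, kind) tuples."""
    events = list(trace)
    heapq.heapify(events)          # priority queue ordered by timestamp
    clock, handled = 0.0, []
    while events:
        clock, kind = heapq.heappop(events)  # next event in time order
        handled.append((clock, kind))        # placeholder for a real handler
    return clock, handled
```

Because the loop's cost is one heap operation per trace event, the linear scaling with workload size reported in the abstract is what this structure would lead one to expect.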

    Energy-efficient checkpointing in high-throughput cycle-stealing distributed systems

    Checkpointing is a fault-tolerance mechanism commonly used in High Throughput Computing (HTC) environments to allow the execution of long-running computational tasks on compute resources subject to hardware or software failures, as well as interruptions from resource owners and more important tasks. Until recently many researchers have focused on the performance gains achieved through checkpointing, but now, with growing scrutiny of the energy consumption of IT infrastructures, it is increasingly important to understand the energy impact of checkpointing within an HTC environment. In this paper we demonstrate, through trace-driven simulation of real-world datasets, that existing checkpointing strategies are inadequate at maintaining an acceptable level of energy consumption whilst maintaining the performance gains expected of checkpointing. Furthermore, we identify factors important in deciding whether to exploit checkpointing within an HTC environment, and propose novel strategies to curtail the energy consumption of checkpointing approaches whilst maintaining the performance benefits.
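The basic tradeoff behind any checkpointing strategy is that checkpointing too often pays needless overhead while checkpointing too rarely loses work to failures. The classic first-order approximation for the interval that balances the two is Young's formula; the paper's energy-aware strategies go beyond this, but it illustrates the underlying tension:

```python
import math

# Young's (1974) first-order approximation for the checkpoint interval that
# minimises expected lost work: sqrt(2 * checkpoint cost * mean time between
# failures). Shown only to illustrate the overhead-vs-lost-work tradeoff.

def young_interval(checkpoint_cost_s, mtbf_s):
    """Approximately optimal time between checkpoints, in seconds."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# e.g. a 60 s checkpoint with a 24 h MTBF gives an interval of roughly
# 3220 s, i.e. a checkpoint about every 54 minutes
```

Note that this minimises lost *time*; an energy-aware strategy must additionally weigh the power drawn while writing checkpoints against the energy of re-executing lost work, which is where interval choices like this start to shift.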

    Using honeypots to trace back amplification DDoS attacks

    In today’s interconnected world, Denial-of-Service attacks can cause great harm by simply rendering a target system or service inaccessible. Amongst the most powerful and widespread DoS attacks are amplification attacks, in which thousands of vulnerable servers are tricked into reflecting and amplifying attack traffic. However, as these attacks inherently rely on IP spoofing, the true attack source is hidden. Consequently, going after the offenders behind these attacks has so far been deemed impractical. This thesis presents a line of work that enables practical attack traceback supported by honeypot reflectors. To this end, we investigate the tradeoffs between applicability, required a priori knowledge, and traceback granularity in three settings. First, we show how spoofed attack packets and non-spoofed scan packets can be linked using honeypot-induced fingerprints, which allows attributing attacks launched from the same infrastructures as scans. Second, we present a classifier-based approach to trace back attacks launched from booter services after collecting ground-truth data through self-attacks. Third, we propose to use BGP poisoning to locate the attacking network without prior knowledge, even when attack and scan infrastructures are disjoint. Finally, as all of our approaches rely on honeypot reflectors, we introduce an automated end-to-end pipeline to systematically find amplification vulnerabilities and synthesize corresponding honeypots.
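The leverage an amplification attack gains is commonly quantified as the bandwidth amplification factor (BAF): the ratio of response payload bytes a reflector returns to the request payload bytes that triggered it. A minimal sketch (the byte counts in the example are illustrative, not measurements from this thesis):

```python
# Bandwidth amplification factor: how many bytes of reflected response the
# victim receives per byte of spoofed request sent by the attacker.

def bandwidth_amplification_factor(request_bytes, response_bytes):
    """BAF > 1 means the reflector amplifies the attacker's traffic."""
    return response_bytes / request_bytes

# e.g. a hypothetical 64-byte query answered with a 3200-byte response:
# bandwidth_amplification_factor(64, 3200) == 50.0
```

Honeypot reflectors observe both sides of this exchange, which is what makes them a natural vantage point for the traceback techniques the thesis develops.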

    Empirical Studies Based on Honeypots for Characterizing Attackers' Behavior

    The cybersecurity community has made substantial efforts to understand and mitigate security flaws in information systems. Oftentimes when a compromise is discovered, it is difficult to identify the actions performed by an attacker. In this study, we explore the compromise phase, i.e., when an attacker exploits the host he/she gained access to using a vulnerability exposed by an information system. More specifically, we look at the main actions performed during the compromise and the factors deterring the attackers from exploiting the compromised systems. Because of the lack of security datasets on compromised systems, we need to deploy systems to more adequately study attackers and the different techniques they employ to compromise computers. Security researchers employ target computers, called honeypots, that are not used by normal or authorized users. In this study we first describe the distributed honeypot network architecture deployed at the University of Maryland and the different honeypot-based experiments enabling the data collection required to conduct the studies on attackers' behavior. In the first experiment we explore the attackers' skill levels and the purpose of the malicious software installed on the honeypots. We determined the relative skill levels of the attackers and classified the different software installed. We then focused on the crimes committed by the attackers, i.e., the attacks launched from the honeypots by the attackers. We defined the different computer crimes observed (e.g., brute-force attacks and denial of service attacks) and their characteristics (whether they were coordinated and/or destructive). We looked at the impact of computer resource restrictions on the crimes and then at the deterrent effect of warning and surveillance. Lastly, we used different metrics related to the attack sessions to investigate the impact of surveillance on the attackers based on their country of origin. 
    During attacks, we found that attackers mainly installed IRC-based bot tools and sometimes shared their honeypot access. From the analysis of crimes, it appears that deterrence does not work; we also showed that attackers seem to favor certain computer resources. Lastly, we observed that the presence of surveillance had no significant impact on the attack sessions; however, surveillance altered the behavior originating from a few countries.

    Analyzing and Defending Against Evolving Web Threats

    The browser has evolved from a simple program that displays static web pages into a continuously-changing platform that is shaping the Internet as we know it today. The fierce competition among browser vendors has led to the introduction of a plethora of features in the past few years. At the same time, it remains the de facto way to access the Internet for billions of users. Because of such rapid evolution and wide popularity, the browser has attracted attackers, who pose new threats to unsuspecting Internet surfers. In this dissertation, I present my work on securing the browser against current and emerging threats. First, I discuss my work on honeyclients, which are tools that identify malicious pages that compromise the browser, and how one can evade such systems. Then, I describe a new system that I built, called Revolver, that automatically tracks the evolution of JavaScript and is capable of identifying evasive web-based malware by finding similarities in JavaScript samples with different classifications. Finally, I present Hulk, a system that automatically analyzes and classifies browser extensions.
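Revolver's core intuition, that two JavaScript samples sharing most of their code but carrying different classifications hint at an evasion, can be illustrated with a simple token-sequence similarity check. This sketch is an illustration of the idea only, not Revolver's actual algorithm, and the sample snippets are hypothetical:

```python
import difflib
import re

# Illustrative token-level similarity between two JavaScript samples:
# a high ratio despite differing classifications would flag a candidate
# evasion for closer inspection.

def tokens(js_source):
    """Crude lexer: identifiers as whole tokens, other non-space chars singly."""
    return re.findall(r"[A-Za-z_$][\w$]*|\S", js_source)

def similarity(a, b):
    """Shared-token-sequence ratio in [0, 1] between two samples."""
    return difflib.SequenceMatcher(None, tokens(a), tokens(b)).ratio()

flagged   = "var x = unescape(payload); eval(x);"   # hypothetical sample
unflagged = "var y = unescape(payload); eval(y);"   # renamed-variable twin
# similarity(flagged, unflagged) is close to 1.0, hinting at a shared ancestor
```

A production system would of course compare normalised abstract-syntax-tree sequences rather than raw tokens, so that renaming alone cannot hide the relationship at all.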

    Battle for the Ruhr: The German Army's Final Defeat in the West

    This chronicle describes the events concerning the retreat of Field Marshal Walter Model’s Army Group B from the invasion of Normandy in June 1944 to its ultimate destruction in the Ruhr Pocket in April 1945. The author focuses on the German perspective of the Second World War, and describes the experiences of former Wehrmacht soldiers, Volkssturm conscripts, Hitler Youth members, and civilians as they witnessed the collapse of the Third Reich. The study encompasses events in northwest Germany, primarily in the lower Rhineland and the Ruhr Valley, as Model’s army group was encircled and destroyed by Allied forces. Detailed accounts of the Ardennes Offensive in Belgium, the penetration of the German frontier, and the subsequent capture of the Ludendorff Bridge reveal how the impending defeat was experienced by the German defenders. An examination of the subsequent courts martial and executions of German officers by Nazi officials following the loss of the bridge at Remagen exhibits the ruthlessness and the perverse system of justice within the Third Reich. The Allied crossings of the Rhine through Operations Varsity and Plunder are described by Wehrmacht survivors, local inhabitants and refugees, many of whom had fled the Ruhr industrial area to escape the Allied strategic bombing campaign only to find themselves caught in the vicious battles on the east bank of the river as Eisenhower prepared to launch the final drive toward Berlin. Interviews with former German soldiers portray the experience of becoming prisoners of the United States Army during the closing weeks of the conflict. Accounts from witnesses reveal how local inhabitants endured marauding bands of liberated prisoners and former slave laborers while experiencing defeat, anarchy and eventual occupation by the Allies. The narrative details the dissolution of Army Group B as American forces destroyed the remnants of the Wehrmacht units trapped within the pocket. 
    Walter Model’s final days are portrayed through interviews with the sole surviving staff officer who accompanied him as they evaded Allied forces, until the field marshal chose suicide over surrender and ended his life in a forest south of Duisburg, Germany on 21 April 1945.

    Technologies on the stand: Legal and ethical questions in neuroscience and robotics
