
    A Literature Survey on Resource Management Techniques, Issues and Challenges in Cloud Computing

    Cloud computing is a large-scale distributed computing paradigm that provides on-demand services to clients. Cloud clients use web browsers, mobile apps, thin clients, or terminal emulators to request and control their cloud resources anytime and anywhere over the network. As more companies shift their data to the cloud and more people become aware of the advantages of cloud storage, the growing number of cloud computing infrastructures and the large volumes of data they hold make management increasingly complex for cloud providers. We survey state-of-the-art resource management techniques for IaaS (Infrastructure as a Service) in cloud computing and then put forward the major issues in deploying cloud infrastructure that must be addressed to avoid poor service delivery.

    Analysis and Mitigation of Remote Side-Channel and Fault Attacks on the Electrical Level

    The ongoing miniaturization of integrated circuits is reaching physical limits; single-atom transistors, for example, represent a possible lower bound on feature sizes. Moreover, manufacturing the latest generations of microchips is nowadays financially feasible only for large, multinational companies. As a result of this development, miniaturization is no longer the driving force for further increasing the performance of electronic components. Instead, classical computer architectures with general-purpose processors are evolving into heterogeneous systems with high parallelism and specialized accelerators. In these heterogeneous systems, however, protecting private data against attackers becomes increasingly difficult. New types of hardware components, new kinds of applications, and a generally increased complexity are some of the factors that make security in such systems a challenge. Cryptographic algorithms are often only truly secure under certain assumptions about the attacker. For example, it is often assumed that the attacker can only access the inputs and outputs of a module, while internal signals and intermediate values remain hidden. In real implementations, however, side-channel and fault attacks reveal the limits of this so-called black-box model. While in side-channel attacks the attacker exploits data-dependent measurable quantities such as power consumption or electromagnetic radiation, fault attacks actively interfere with the computation and use the faulty output values to recover the secret data. This class of implementation attacks was originally considered only in the context of a local attacker with physical access to the target device. However, attacks based on measuring the timing of certain memory accesses have already shown that the threat also exists from remote attackers. This work addresses the threat of remote side-channel and fault attacks, which is closely tied to the trend toward more heterogeneous systems. One example of novel hardware in heterogeneous computing are Field-Programmable Gate Arrays (FPGAs), which allow almost arbitrary circuits to be realized in programmable logic. These logic chips are already deployed as accelerators both in the cloud and in end devices. However, it has been shown how the flexibility of these accelerators can be exploited to implement sensors that estimate the supply voltage. In addition, a particular way of activating large amounts of logic can disturb computations in other circuits, enabling fault attacks. This work further analyzes this threat, for example by extending existing attacks, and develops strategies to protect against it.

    Sustainable manufacturing in the fourth industrial revolution: a big data application proposal in the textile industry

    Purpose: To design an industrial production model focused on Industry 4.0 (Big Data) and decision-making analysis for small and medium-sized enterprises (SMEs) in the clothing sector, improving procedures, jobs and related costs within the study organization, and to develop a sustainable manufacturing proposal for the industrial textile sector centered on Big Data (data entry, transformation, loading and analysis) in organizational decision making, in pursuit of time and cost optimization and mitigation of the related environmental impact. Design/methodology/approach: This applied research puts forward a value proposition focused on the planning, design and structuring of a Big Data industrial model, specifically for the apparel manufacturing sector, to support structured and automated decision making. The methodological approach is: (1) formulation of Big Data-oriented production strategies for the textile sector; (2) definition of the production model and configuration of the operational system; (3) data science and industrial analysis; (4) production model development (Power BI); and (5) model validation. The methodological design of the investigation comprises: (1) presentation of the case study, including the situational analysis of the company, formulation of the problem and a proposed solution for the data set analyzed; (2) presentation of a Big Data solution proposal covering the identification of the industrial ecosystem, integration with the company's information systems, and the approach to real-time data science; (3) presentation of the proposed model for structured SQL databases, covering the loading and transformation of the information relevant to this study; (4) information processing, editing the data in the M language of the Power BI software and building the model; (5) presentation of the related databases, integrated through the foreign key of the master table and the transactional tables; and (6) data analysis and presentation of the dashboard, covering the design, construction and analysis of the study variables and the formulation of solution scenarios for sound organizational decision making. Findings: The results obtained show an improvement in operational efficiency derived from the value-added proposal. Research limitations/implications: Studies applying Big Data technology to organizational decision making in the textile and manufacturing sector are currently limited. At the local level there are few cases of Big Data implementation in the textile sector, as a consequence of the lack of projects and of financing for value propositions. Another limiting factor in this research is the absence of highly relevant digital information for study and analysis, which lengthens the time needed to enter and place data in real-time information systems. Finally, there is no organizational data culture with processes and procedures for data registration and its transformation into clean data. Originality/value: This research integrates a Big Data application proposal for sustainable manufacturing in the textile sector with sound organizational decision making. To verify originality, a project search and a systematic literature review were carried out in the main online search engines, and similarity percentages were checked with online tools such as Turnitin and plag.es to ensure the transparency of this study. Peer Reviewed
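    As a rough illustration of the load-transform-relate workflow described above (the study itself uses SQL and Power BI's M language; the table, column and file names below are invented for the sketch, not taken from the paper), a minimal Python/pandas equivalent might look like this:

```python
# Minimal ETL sketch, assuming a SQL source with a master table of garments
# and a transactional table of production orders linked by a foreign key.
# All names (textile.db, garment_master, garment_id, ...) are illustrative.
import sqlite3
import pandas as pd

conn = sqlite3.connect("textile.db")  # hypothetical database file

# Extract: load the master and transactional tables.
garments = pd.read_sql("SELECT * FROM garment_master", conn)
orders = pd.read_sql("SELECT * FROM production_orders", conn)

# Transform: clean the data and join on the foreign key garment_id.
orders["produced_units"] = orders["produced_units"].fillna(0)
merged = orders.merge(garments, on="garment_id", how="left")

# Load/analyze: aggregate cost and lead time per garment line for a dashboard.
kpis = (merged.groupby("garment_line")
              .agg(total_cost=("unit_cost", "sum"),
                   avg_lead_time_days=("lead_time_days", "mean")))
print(kpis)
```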

    Exceeding Conservative Limits: A Consolidated Analysis on Modern Hardware Margins

    Modern large-scale computing systems (data centers, supercomputers, cloud and edge setups and high-end cyber-physical systems) employ heterogeneous architectures that consist of multicore CPUs, general-purpose many-core GPUs, and programmable FPGAs. The effective utilization of these architectures poses several challenges, among which a primary one is power consumption. Voltage reduction is one of the most efficient methods to reduce the power consumption of a chip. With the rapid adoption of hardware accelerators (i.e., GPUs and FPGAs) in large datacenters and other large-scale computing infrastructures, a comprehensive evaluation of the safe voltage reduction level for each chip can be employed to reduce total power efficiently. We present a survey of recent studies on voltage margin reduction at the system level for modern CPUs, GPUs and FPGAs. The pessimistic voltage guardbands inserted by silicon vendors can be exploited in all of these devices for significant power savings. On average, voltage reduction can reach 12% in multicore CPUs, 20% in manycore GPUs and 39% in FPGAs. Comment: Accepted for publication in IEEE Transactions on Device and Materials Reliability.
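    As a back-of-the-envelope illustration (an assumption of this note, not a result of the survey): if dynamic power scales roughly with the square of the supply voltage at a fixed frequency, the reported average margins translate into power savings as follows:

```python
# Rough illustration: dynamic power scales roughly with V^2 at fixed frequency,
# so P_new / P_old ~= (1 - reduction)^2. Static power and frequency effects
# are ignored, so these numbers are only indicative.
margins = {"multicore CPUs": 0.12, "manycore GPUs": 0.20, "FPGAs": 0.39}

for device, dv in margins.items():
    dynamic_power_ratio = (1.0 - dv) ** 2
    savings = 1.0 - dynamic_power_ratio
    print(f"{device}: {dv:.0%} voltage reduction -> ~{savings:.0%} dynamic power savings")
```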

    TRRespass: Exploiting the Many Sides of Target Row Refresh

    After a plethora of high-profile RowHammer attacks, CPU and DRAM vendors scrambled to deliver what was meant to be the definitive hardware solution against the RowHammer problem: Target Row Refresh (TRR). A common belief among practitioners is that, for the latest generation of DDR4 systems that are protected by TRR, RowHammer is no longer an issue in practice. However, in reality, very little is known about TRR. In this paper, we demystify the inner workings of TRR and debunk its security guarantees. We show that what is advertised as a single mitigation mechanism is actually a series of different solutions coalesced under the umbrella term TRR. We inspect and disclose, via a deep analysis, different existing TRR solutions and demonstrate that modern implementations operate entirely inside DRAM chips. Despite the difficulties of analyzing in-DRAM mitigations, we describe novel techniques for gaining insights into the operation of these mitigation mechanisms. These insights allow us to build TRRespass, a scalable black-box RowHammer fuzzer. TRRespass shows that even the latest generation DDR4 chips with in-DRAM TRR, immune to all known RowHammer attacks, are often still vulnerable to new TRR-aware variants of RowHammer that we develop. In particular, TRRespass finds that, on modern DDR4 modules, RowHammer is still possible when many aggressor rows are used (as many as 19 in some cases), with a method we generally refer to as Many-sided RowHammer. Overall, our analysis shows that 13 out of the 42 modules from all three major DRAM vendors are vulnerable to our TRR-aware RowHammer access patterns, and thus one can still mount existing state-of-the-art RowHammer attacks. In addition to DDR4, we also experiment with LPDDR4 chips and show that they are susceptible to RowHammer bit flips too. Our results provide concrete evidence that the pursuit of better RowHammer mitigations must continue. Comment: 16 pages, 16 figures, in proceedings of IEEE S&P 2020.
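    A toy sketch of what "many-sided" aggressor layouts look like is given below; it only enumerates candidate row patterns in the spirit of a black-box fuzzer, while the actual hammering (high-rate uncached DRAM accesses and physical-to-row address mapping) is deliberately omitted, and all parameters are illustrative assumptions rather than the paper's implementation:

```python
# Toy sketch of generating many-sided RowHammer aggressor patterns.
# Actually hammering DRAM requires repeated uncached accesses to physical
# rows (e.g., clflush + memory barriers in native code), which is not shown.
import random

def many_sided_pattern(base_row: int, n_sided: int, stride: int = 2):
    """Place n_sided aggressor rows spaced by `stride`; the rows in between
    are the potential victims whose bits may flip."""
    aggressors = [base_row + i * stride for i in range(n_sided)]
    victims = [r for r in range(aggressors[0] + 1, aggressors[-1])
               if r not in aggressors]
    return aggressors, victims

def fuzz_patterns(num_patterns: int, max_sides: int = 19, bank_rows: int = 65536):
    """Randomly sample candidate patterns, up to the 19-sided layouts
    reported as effective on some DDR4 modules."""
    for _ in range(num_patterns):
        n = random.randint(2, max_sides)
        base = random.randrange(0, bank_rows - 2 * n)
        yield many_sided_pattern(base, n)

for aggressors, victims in fuzz_patterns(3):
    print(f"{len(aggressors)}-sided: aggressors={aggressors[:4]}..., victims={len(victims)}")
```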

    Oceanids C2: An Integrated Command, Control, and Data Infrastructure for the Over-the-Horizon Operation of Marine Autonomous Systems

    Long-range Marine Autonomous Systems (MAS), operating beyond the visual line-of-sight of a human pilot or research ship, are creating unprecedented opportunities for oceanographic data collection. Able to operate for up to months at a time, periodically communicating with a remote pilot via satellite, long-range MAS vehicles significantly reduce the need for an expensive research ship presence within the operating area. Heterogeneous fleets of MAS vehicles, operating simultaneously in an area for an extended period of time, are becoming increasingly popular due to their ability to provide an improved composite picture of the marine environment. However, at present, the expansion of the size and complexity of these multi-vehicle operations is limited by a number of factors: (1) custom control-interfaces require pilots to be trained in the use of each individual vehicle, with limited cross-platform standardization; (2) the data produced by each vehicle are typically in a custom vehicle-specific format, making the automated ingestion of observational data for near-real-time analysis and assimilation into operational ocean models very difficult; (3) the majority of MAS vehicles do not provide machine-to-machine interfaces, limiting the development and usage of common piloting tools, multi-vehicle operating strategies, autonomous control algorithms and automated data delivery. In this paper, we describe a novel piloting and data management system (C2) which provides a unified web-based infrastructure for the operation of long-range MAS vehicles within the UK's National Marine Equipment Pool. The system automates the archiving, standardization and delivery of near-real-time science data and associated metadata from the vehicles to end-users and Global Data Assembly Centers mid-mission. Through the use and promotion of standard data formats and machine interfaces throughout the C2 system, we seek to enable future opportunities to collaborate with both the marine science and robotics communities to maximize the delivery of high-quality oceanographic data for world-leading science

    Advances in Grid Computing

    This book approaches grid computing with a perspective on the latest achievements in the field, providing an insight into current research trends and advances, and presenting a wide range of innovative research papers. The topics covered include resource and data management, grid architectures and development, and grid-enabled applications. New ideas employing heuristic methods from swarm intelligence or genetic algorithms and quantum encryption are considered in order to explain two main aspects of grid computing: resource management and data management. The book also addresses aspects of grid computing that concern architecture and development, and includes a diverse range of applications, including a possible human grid computing system, simulation of fusion reactions, ubiquitous healthcare service provisioning and complex water systems.

    Achieving Reliable and Sustainable Next-Generation Memories

    Conventional memory technology scaling has introduced reliability challenges due to dysfunctional, improperly formed cells and crosstalk from increased cell proximity. Furthermore, as the manufacturing effort becomes increasingly complex due to these deeply scaled technologies, holistic sustainability is negatively impacted. The development of new memory technologies can help overcome the capacitor scaling limitations of DRAM. However, these technologies have their own reliability concerns, such as limited write endurance in the case of Phase Change Memories (PCM). Moreover, emerging system requirements, such as in-memory encryption to protect sensitive or private data and operation in harsh environments, create additional challenges that must be addressed in the context of reliability and sustainability. This dissertation provides new multifactor and ultimately unified solutions to address many of these concerns in the same system. In particular, my contributions toward mitigating these issues are as follows. I present GreenChip and GreenAsic, which together provide the first tools to holistically evaluate new computer architecture, chip, and memory design concepts for sustainability. These tools provide detailed estimates of manufacturing- and operational-phase metrics for different computing workloads and deployment scenarios. Using GreenChip, I examined existing DRAM reliability techniques in the context of their holistic sustainability impact, including my own technique to mitigate bitline crosstalk. For PCM, I provided a new reliability technique with no additional storage overhead that substantially increases the lifetime of an encrypted memory system. To provide bit-level error correction, I developed compact linked-list- and Bloom-filter-based bit-level fault map structures that provide unprecedented levels of error tabulation, combined with novel error correction and lifetime extension approaches based on these maps, at less area than traditional ECC. In particular, FaME can correct N faults using N bits when utilizing a bit-level fault map. For operation in harsh environments, I created a triple modular redundancy (TMR) pointer-based fault map, HOTH, which specifically protects cells shown to be weak to radiation. Finally, to combine the analyses of holistic sustainability and memory lifetime, I created the LARS technique, which adjusts the GreenChip indifference analysis to account for the additional sustainability benefit provided by increased reliability and lifetime.
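    The general idea of a Bloom-filter-based bit-level fault map can be sketched as follows (a generic sketch, not the dissertation's actual structure; the sizes, hashing scheme and interface are assumptions made for illustration):

```python
# Generic sketch of a Bloom-filter fault map: known-faulty bit locations are
# inserted, and reads/writes can cheaply ask "might this bit be faulty?".
# False positives are possible (healthy cells treated as faulty); false
# negatives are not. Sizes and hash choice are illustrative only.
import hashlib

class BloomFaultMap:
    def __init__(self, num_bits: int = 1 << 16, num_hashes: int = 4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, addr: int, bit: int):
        # Derive num_hashes independent positions from (address, bit index).
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{addr}:{bit}:{i}".encode()).digest()
            yield int.from_bytes(digest[:8], "little") % self.num_bits

    def mark_faulty(self, addr: int, bit: int) -> None:
        for pos in self._positions(addr, bit):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def maybe_faulty(self, addr: int, bit: int) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(addr, bit))

fault_map = BloomFaultMap()
fault_map.mark_faulty(addr=0x1F40, bit=3)   # record a worn-out cell
print(fault_map.maybe_faulty(0x1F40, 3))    # True
print(fault_map.maybe_faulty(0x1F40, 4))    # almost certainly False
```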