
    AUGURES: profit-aware web infrastructure management

    Over the last decade, advances in technology, together with the increasing use of the Internet for everyday tasks, have caused profound changes for end-users as well as for businesses and technology providers. The widespread adoption of high-speed, ubiquitous Internet access is also changing the way users interact with Web applications and their expectations in terms of Quality-of-Service (QoS) and User eXperience (UX). Recently, Cloud computing has been rapidly adopted to host and manage Web applications, due to its inherent cost effectiveness and on-demand scaling of infrastructures. However, system administrators still need to make manual decisions about the parameters that affect the business results of their applications, i.e., setting QoS targets and defining metrics for scaling the number of servers during the day. Therefore, understanding the workload and user behavior (the demand) poses new challenges for capacity planning and scalability (the supply), and ultimately for the success of a Web site. This thesis contributes to the current state of the art in Web infrastructure management by providing: i) a methodology for predicting Web session revenue; ii) a methodology for determining the effect of high response times on sales; and iii) a policy for profit-aware resource management that relates server capacity to QoS and sales. The approach leverages Machine Learning (ML) techniques on custom, real-life datasets from an Ecommerce retailer featuring popular Web applications. The experimentation shows how user behavior and server performance models can be built from offline information to determine how demand and supply relate as resources are consumed. This produces economic metrics that feed profit-aware policies, allowing the self-configuration of Cloud infrastructures to an optimal number of servers under a variety of conditions. At the same time, the thesis provides several insights for improving autonomic infrastructure management and the profitability of Ecommerce applications.
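    To make the profit-aware policy concrete, here is a minimal sketch of the kind of decision it makes: choose the server count that maximizes predicted revenue minus infrastructure cost. The latency and revenue models below are illustrative stand-ins for the ML models the thesis learns offline; the capacity, value, and penalty constants are assumptions, not figures from the thesis.

# Hypothetical profit-aware scaling sketch: pick the server count with the
# highest predicted profit. Model forms and constants are illustrative only.

def predicted_response_time(requests_per_sec: float, servers: int) -> float:
    """Toy M/M/1-style latency model per server (assumed, not the thesis's model)."""
    capacity_per_server = 100.0              # req/s one server can sustain (assumed)
    load = requests_per_sec / servers
    if load >= capacity_per_server:
        return float("inf")                  # saturated: effectively unbounded latency
    return 1.0 / (capacity_per_server - load)

def predicted_revenue(requests_per_sec: float, response_time: float) -> float:
    """Revenue drops as response time grows (stand-in for the learned sales model)."""
    base_value_per_request = 0.05            # expected sales value per request (assumed)
    conversion_penalty = max(0.0, 1.0 - 2.0 * response_time)
    return requests_per_sec * base_value_per_request * conversion_penalty

def optimal_servers(requests_per_sec: float, cost_per_server: float,
                    max_servers: int = 50) -> int:
    """Return the server count with the highest predicted profit."""
    best_n, best_profit = 1, float("-inf")
    for n in range(1, max_servers + 1):
        rt = predicted_response_time(requests_per_sec, n)
        profit = predicted_revenue(requests_per_sec, rt) - n * cost_per_server
        if profit > best_profit:
            best_n, best_profit = n, profit
    return best_n

print(optimal_servers(requests_per_sec=450.0, cost_per_server=1.2))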

    Extended resource management using client classification and economic enhancements

    Commercialization of Grid resources will become more and more important as utility computing and the deployment of Grids gain momentum. This makes it necessary to base Grid components not only on technical aspects, but also to include economic aspects in their design. This paper presents a framework that links technical and economic aspects in the management of computational resources. Economic enhancements such as dynamic pricing and client classification are introduced on top of a technical resource management environment, resulting in a proposed architecture for an Economically Enhanced Resource Manager (EERM). The introduced approach is evaluated against various economic design criteria and example scenarios.
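    As a rough illustration of the two economic enhancements the paper names, the sketch below prices a resource request from current utilization (scarcity) and the client's class. The class names, multipliers, and pricing formula are assumptions for illustration, not the EERM's actual policy.

# Hypothetical dynamic pricing with client classification. All constants are
# illustrative assumptions, not taken from the EERM design.

CLASS_MULTIPLIER = {"gold": 0.9, "silver": 1.0, "standard": 1.1}  # assumed discounts/surcharges

def dynamic_price(base_price: float, utilization: float, client_class: str) -> float:
    """Price a resource request: scarcity raises the price, preferred clients pay less."""
    scarcity_factor = 1.0 + utilization      # e.g. 60% utilization -> +60% price
    return base_price * scarcity_factor * CLASS_MULTIPLIER[client_class]

# A 'gold' client requesting capacity while the Grid is 60% utilized:
print(dynamic_price(base_price=10.0, utilization=0.6, client_class="gold"))  # 14.4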

    Fourth ERCIM workshop on e-mobility


    Revista Economica


    Mitigating the Effects of Partial Resource Failures for Cloud Providers

    Competition for users on a global market is fierce, forcing enterprises to provide better, faster services while offering them more cheaply. At the same time, users choose to remain oblivious of the infrastructure behind the service, demanding only that it works. Cloud service failures and inefficient management of such failures can result in significant financial cost and loss of reputation for providers, and can drive key customers away. Yet failure situations can never be completely avoided. To mitigate their effects, we present a decision model that helps providers decide which jobs to keep running and which to cancel in order to minimize the loss of revenue and key customers during partial resource failures. The evaluation of the model and its extension shows its ability to significantly improve revenue. Furthermore, the model can also help to reduce the number of cancelled jobs.
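    The underlying decision problem can be read as a 0/1 knapsack: after a partial failure, only some capacity survives, and the provider keeps the subset of jobs that retains the most value. The greedy heuristic below (value density with a bonus for key customers) is an assumed stand-in for the paper's actual decision model; job data and the bonus are illustrative.

# Minimal sketch: choose which jobs to keep under reduced capacity.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    resources: int        # capacity units the job occupies
    revenue: float        # revenue lost if the job is cancelled
    key_customer: bool    # cancelling these also risks losing the customer

def jobs_to_keep(jobs: list[Job], remaining_capacity: int,
                 key_bonus: float = 50.0) -> list[Job]:
    """Greedy: keep the jobs with the highest value per resource unit."""
    def value(j: Job) -> float:
        return j.revenue + (key_bonus if j.key_customer else 0.0)
    kept, used = [], 0
    for job in sorted(jobs, key=lambda j: value(j) / j.resources, reverse=True):
        if used + job.resources <= remaining_capacity:
            kept.append(job)
            used += job.resources
    return kept

jobs = [Job("render", 4, 120.0, False), Job("etl", 2, 30.0, True),
        Job("batch", 3, 40.0, False)]
print([j.name for j in jobs_to_keep(jobs, remaining_capacity=6)])  # ['etl', 'render']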

    Evaluating the GasDay Security Policy Through Penetration Testing and Application of the NIST Cybersecurity Framework

    This thesis explores cybersecurity from the perspective of the Marquette University GasDay lab. We analyze three different areas of cybersecurity in three independent chapters. Our goal is to improve the cybersecurity capabilities of GasDay, Marquette University, and the natural gas industry. We present network penetration testing as a process of attempting to gain access to GasDay resources without prior knowledge of any valid credentials. We discuss our method of identifying potential targets using industry-standard reconnaissance methods. We outline the process of attempting to gain access to these targets using automated tools and manual exploit creation. We propose solutions for the targets that were successfully exploited and recommendations for the others. Next, we discuss GasDay Web and techniques to validate the security of a web-based GasDay software product. We use a form of penetration testing specifically targeted at a website. We demonstrate several vulnerabilities that can cripple the availability of the website and give recommendations to mitigate them. We then present the results of inspecting GasDay Web code to uncover vulnerabilities undetectable by automated tools, and we suggest fixes. We discuss recommendations on how vulnerabilities can be mitigated or detected in the future. Finally, we apply the NIST Cybersecurity Framework to GasDay. We present the Department of Energy recommendations for the natural gas industry. Using these recommendations and the NIST Framework, we evaluate the overall cybersecurity maturity of the GasDay lab. We present several cost-effective, easy-to-implement recommendations for improving GasDay's maturity levels. We identify several items missing from a cybersecurity plan and propose methods to implement them. The results of this thesis show that cybersecurity at a research lab is difficult. We demonstrate that, even as a part of Marquette University, GasDay cannot rely on the university for cybersecurity. We show that the primary obstacle is a lack of information, both about cybersecurity and about the assets GasDay controls. We make recommendations on how these items can be effectively created and managed.
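    To illustrate the reconnaissance step the abstract describes, here is a minimal sketch of a TCP connect scan that checks which ports on a host accept connections. Real engagements use dedicated tools (e.g. nmap) and, crucially, written authorization; the host and port list here are placeholders.

# Minimal TCP connect scan sketch (illustrative; scan only hosts you are
# authorized to test).

import socket

def open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                found.append(port)
    return found

print(open_ports("127.0.0.1", [22, 80, 443, 8080]))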

    Understanding and Improving Continuous Experimentation : From A/B Testing to Continuous Software Optimization

    Controlled experiments (i.e., A/B tests) are used by many companies with user-intensive products to improve their software with user data. Some companies adopt an experiment-driven approach to software development with continuous experimentation (CE). With CE, every user-affecting software change is evaluated in an experiment, and specialized roles seek out opportunities to experiment with functionality. The goal of the thesis is to describe current practice and support CE in industry. The main contributions are threefold. First, a review of the CE literature on infrastructure and processes, the problem-solution pairs applied in industry practice, and the benefits and challenges of the practice. Second, a multi-case study with 12 companies to analyze how experimentation is used and why some companies fail to fully realize the benefits of CE. A theory of Factors Affecting Continuous Experimentation (FACE) is constructed to realize this goal. Finally, a toolkit called Constraint Oriented Multi-variate Bandit Optimization (COMBO) is developed to support automated experimentation with many variables simultaneously, live in a production environment. The research in the thesis is conducted under the design science paradigm using empirical research methods, with simulation experiments of tool proposals and a multi-case study on company usage of CE. Other research methods include systematic literature review and theory building. From FACE we derive three factors that explain CE utility: (1) investments in data infrastructure, (2) user problem complexity, and (3) incentive structures for experimentation. Guidelines are provided on how to strive towards state-of-the-art CE based on company factors. All three factors are relevant for companies wanting to use CE, in particular for those wanting to apply algorithms such as those in COMBO to support personalization of software to users' context in a process of continuous optimization.
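    COMBO itself implements constraint-oriented multi-variate bandit optimization; the sketch below shows only the core bandit idea behind such continuous optimization: an epsilon-greedy agent that shifts live traffic toward the software variant with the best observed reward (e.g. conversion). Variant names and reward rates are simulated assumptions, not COMBO's algorithm.

# Epsilon-greedy bandit sketch for continuous software optimization.

import random

class EpsilonGreedyBandit:
    def __init__(self, variants: list[str], epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = {v: 0 for v in variants}
        self.means = {v: 0.0 for v in variants}

    def choose(self) -> str:
        """Explore a random variant with prob. epsilon, else exploit the best."""
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))
        return max(self.means, key=self.means.get)

    def update(self, variant: str, reward: float) -> None:
        """Incrementally update the running mean reward for a variant."""
        self.counts[variant] += 1
        self.means[variant] += (reward - self.means[variant]) / self.counts[variant]

bandit = EpsilonGreedyBandit(["A", "B", "C"])
true_rates = {"A": 0.05, "B": 0.11, "C": 0.08}   # hidden conversion rates (simulated)
for _ in range(10_000):
    v = bandit.choose()
    bandit.update(v, 1.0 if random.random() < true_rates[v] else 0.0)
print(max(bandit.means, key=bandit.means.get))    # almost surely "B"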

    Automating Security Risk and Requirements Management for Cyber-Physical Systems

    Cyber-Physical Systems enable various modern use cases and business models such as connected vehicles, the Smart (power) Grid, or the Industrial Internet of Things. Their key characteristics, complexity, heterogeneity, and longevity make the long-term protection of these systems a demanding but indispensable task. In the physical world, the laws of physics provide a constant scope for risks and their treatment. In cyberspace, on the other hand, there is no such constant to counteract the erosion of security features. As a result, existing security risks can constantly change and new ones can arise. To prevent damage caused by malicious acts, it is necessary to identify high and unknown risks early and counter them appropriately. Considering the numerous dynamic security-relevant factors requires a new level of automation in the management of security risks and requirements, one that goes beyond the current state of the art. Only in this way can an appropriate, comprehensive, and consistent level of security be achieved in the long term. This work addresses the pressing lack of an automation methodology for security risk assessment as well as for the generation and management of security requirements for Cyber-Physical Systems. The presented framework accordingly comprises three components: (1) a model-based security risk assessment methodology, (2) methods to unify, deduce, and manage security requirements, and (3) a set of tools and procedures to detect and respond to security-relevant situations. The need for protection and the appropriate rigor are determined and evaluated by the security risk assessment, using graphs and security-specific modeling. Based on the model and the assessed risks, well-founded security requirements for protecting the overall system and its functionality are systematically derived and formulated in a uniform, machine-readable structure. This machine-readable structure makes it possible to propagate security requirements automatically along the supply chain. Furthermore, it enables the efficient reconciliation of present capabilities with external security requirements from regulations, processes, and business partners. Despite all measures taken, a residual risk of compromise always remains, which requires an appropriate response. This residual risk is addressed by tools and processes that improve both the local and the large-scale detection, classification, and correlation of incidents. Integrating the findings from such incidents into the model often leads to updated assessments and new requirements, and improves further analyses. Finally, the presented framework is demonstrated on a recent application example from the automotive domain.
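    To make the "uniform, machine-readable structure" tangible, the sketch below shows one possible schema for a security requirement that could be serialized and propagated along a supply chain. The field names, identifiers, and example values are assumptions for illustration, not the thesis's actual format.

# Hypothetical machine-readable security requirement (illustrative schema).

import json
from dataclasses import dataclass, asdict

@dataclass
class SecurityRequirement:
    req_id: str            # stable identifier for traceability
    target: str            # system element the requirement protects
    protection_goal: str   # confidentiality / integrity / availability ...
    statement: str         # human-readable obligation
    rigor: str             # stringency derived from the risk assessment
    source_risk: str       # link back to the assessed risk that motivated it

req = SecurityRequirement(
    req_id="REQ-0042",
    target="telematics-gateway",
    protection_goal="integrity",
    statement="Firmware updates shall be verified with a vendor signature.",
    rigor="high",
    source_risk="RISK-0017",
)
print(json.dumps(asdict(req), indent=2))   # serialized form for exchange with partners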