532 research outputs found

    A New Policy for the Service Request Assignment Problem with Multiple Severity Level, Due Date and SLA Penalty Service Requests

    We study the problem of assigning service requests of multiple severity levels to agents in an agent pool. Each severity level is associated with a due date and a penalty, which is incurred if the service request is not resolved by the due date. Motivated by Van Mieghem (2003), who shows the asymptotic optimality of the Generalized Longest Queue policy for minimizing due-date-dependent expected delay costs when there is a single agent, we develop a class of index-based policies that generalizes the Priority First-Come-First-Serve, Weighted Shortest Expected Processing Time, and Generalized Longest Queue policies. In a simulation study of the assignment system of a large technology firm, the index-based policy shows an improvement of 0-20% over the Priority First-Come-First-Serve policy, depending on the load conditions.
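
    As a rough illustration of the index-based assignment policy described above, the following Python sketch computes a per-request index that blends the three policies it generalizes: waiting time (Priority First-Come-First-Serve), penalty over expected processing time (Weighted Shortest Expected Processing Time), and per-severity backlog (Generalized Longest Queue). The field names and coefficients are invented for the example and are not the paper's actual formulation.

        import time
        from dataclasses import dataclass, field

        @dataclass
        class Request:
            severity: int        # 1 = most severe
            due: float           # due date (epoch seconds)
            penalty: float       # SLA penalty if resolved after `due`
            exp_proc: float      # expected processing time (seconds)
            arrived: float = field(default_factory=time.time)

        def index(req, now, backlog, a=1.0, b=1.0, c=1.0):
            """Illustrative index; a, b, c are assumed tuning knobs.
            a -> FCFS term, b -> WSEPT term, c -> GLQ term."""
            wait = now - req.arrived
            return (a * wait
                    + b * req.penalty / req.exp_proc
                    + c * backlog[req.severity])

        def next_assignment(pending, now):
            """Pick the pending request with the largest index for a free agent."""
            backlog = {}
            for r in pending:
                backlog[r.severity] = backlog.get(r.severity, 0) + 1
            return max(pending, key=lambda r: index(r, now, backlog))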

    Security in Cloud Computing: Evaluation and Integration

    Over the past decade, the Cloud Computing paradigm has revolutionized the way we envision IT services. It has provided an opportunity to respond to the ever-increasing computing needs of users by introducing the notion of service and data outsourcing. Cloud consumers usually have online, on-demand access to a large and distributed IT infrastructure providing a plethora of services. They can dynamically configure and scale Cloud resources according to the requirements of their applications without becoming part of the Cloud infrastructure, which allows them to reduce their IT investment costs and achieve optimal resource utilization. However, the migration of services to the Cloud increases the vulnerability to existing IT security threats and creates new ones that are intrinsic to the Cloud Computing architecture; hence the need for a thorough assessment of Cloud security risks during the process of service selection and deployment. Recently, the impact of effective management of service security satisfaction has been taken more seriously by Cloud Service Providers (CSPs) and stakeholders. Nevertheless, the successful integration of the security element into Cloud resource management operations requires not only methodical research but also meticulous modeling of Cloud security requirements. To this end, we address throughout this thesis the challenges of security evaluation and integration in independent and interconnected Cloud Computing environments. We are interested in providing Cloud consumers with a set of methods that allow them to optimize the security of their services, and CSPs with a set of strategies that enable them to provide security-aware Cloud-based service hosting. The originality of this thesis lies in two aspects: 1) the innovative description of Cloud applications' security requirements, which paves the way for an effective quantification and evaluation of the security of Cloud infrastructures; and 2) the design of rigorous mathematical models that integrate the security factor into the traditional problems of application deployment, resource provisioning, and workload management within current Cloud Computing infrastructures. The work in this thesis is carried out in three phases.
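
    The thesis's mathematical models are not reproduced in the abstract, but the flavor of security-aware deployment can be sketched as follows: score each candidate host by how well its security controls satisfy an application's weighted requirements, then place the application on the cheapest host above a threshold. All requirement names, control values, weights, and the threshold below are invented for illustration.

        # Toy security-aware placement; all data and weights are hypothetical.
        APP_REQUIREMENTS = {"encryption_at_rest": 0.9, "network_isolation": 0.7}

        HOSTS = {
            "host-a": {"controls": {"encryption_at_rest": 1.0, "network_isolation": 0.5}, "cost": 3.0},
            "host-b": {"controls": {"encryption_at_rest": 0.8, "network_isolation": 0.9}, "cost": 4.0},
        }

        def security_score(requirements, controls):
            # Weighted degree to which a host's controls meet the app's requirements.
            met = sum(w * controls.get(req, 0.0) for req, w in requirements.items())
            return met / sum(requirements.values())

        def best_host(requirements, hosts, min_score=0.7):
            # Among hosts above the security threshold, pick the cheapest.
            feasible = [(name, spec) for name, spec in hosts.items()
                        if security_score(requirements, spec["controls"]) >= min_score]
            return min(feasible, key=lambda f: f[1]["cost"])[0] if feasible else None

        print(best_host(APP_REQUIREMENTS, HOSTS))  # host-a: cheapest feasible host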

    SLA management of non-computational services.

    The rise of computational services in the last 15 years has brought proposals of a number of techniques to automate and support their enactment. One key element in service provision is the Service Level Agreement (SLA), where the requirements of the service customer are matched with the performance levels of the service provider to define service level guarantees and the related responsibilities. Proposals from computational domains aim to automate the different stages of the SLA lifecycle, such as the negotiation of the terms that will form the SLA, the deployment of services based on the SLA artifact, or the management of computational resources to meet SLA goals at runtime. However, traditional non-computational services, that is, services that are not performed by computational resources, such as logistics or software development services, are still supported by ad-hoc mechanisms, so existing solutions for the management of their SLAs cannot be reused across services. This management is usually performed manually (e.g., reviewing the goals of an SLA in a transport service), so their evaluation is error-prone and delayed with respect to the service execution (e.g., until the SLA has finished), meaning preemptive actions to avoid SLA violations cannot be taken or are expensive to perform. Furthermore, these SLAs are sometimes described on a long-term basis (frame agreements), with related SLAs appearing for a shorter term (specific agreements), and the analysis of the consistency between them is complex to perform at runtime. In this dissertation, we aim to partially automate the management of SLAs of non-computational services. On the one hand, we suggest that existing models for computational services can be extended to non-computational services, enabling the description of the service operation and its guarantees. On the other hand, we provide a design for operations to partially support the SLA lifecycle, based on the previous models; these operations mainly focus on the deployment and fulfillment stages of the SLA. The contributions of this dissertation are therefore three. First, (A) we provide a model to describe Service Level Agreements of non-computational services, as an extension of iAgree, an existing model for SLAs of computational services. Second, (B) we support the SLA lifecycle with the design of the aforementioned operations (SLA-based service configuration and SLA monitoring) and implement a reference architecture for such operations. Lastly, (C) we provide a model for frame and specific agreements that relates their terms and formalizes the analysis operations between them. Other related operations of the service lifecycle, such as the management of resources to improve service performance or the use of novel techniques (such as machine learning) to predict SLA fulfillment, are out of the scope of this thesis but are planned as future lines of extension. This dissertation has been based on real SLAs from different domains, such as Transport & Logistics, public Cloud providers, and IT Maintenance outsourcing, which have been used to validate the proposal. Furthermore, the contributions have been applied in the context of real IT Maintenance outsourcing projects.
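
    The frame-versus-specific agreement analysis mentioned above can be pictured with a small sketch: every guarantee in a specific agreement must be at least as strict as the corresponding term in its frame agreement. The term names and the "lower value is stricter" convention are assumptions for illustration, not iAgree's actual syntax.

        # Hypothetical consistency check between a frame and a specific agreement.
        FRAME = {"max_delivery_hours": 48, "max_incident_response_hours": 8}
        SPECIFIC = {"max_delivery_hours": 24, "max_incident_response_hours": 12}

        def conflicting_terms(frame, specific):
            """Return the specific-agreement terms laxer than the frame's limits."""
            return [term for term, limit in specific.items()
                    if term in frame and limit > frame[term]]

        print(conflicting_terms(FRAME, SPECIFIC))  # ['max_incident_response_hours']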

    Framework of Six Sigma implementation analysis on SMEs in Malaysia for information technology services, products and processes

    For the past two decades, the majority of Malaysia's IT companies have widely adopted a Quality Assurance (QA) approach as a basis for self-improvement and internal assessment in IT project management. Quality Control (QC) is a comprehensive top-down observation approach used to fulfill requirements for quality outputs, focusing on the evaluation of process outputs. However, in the Malaysian context, QC and the combination of QA and QC as quality improvement approaches have not received significant attention. This research study explores the possibility of integrating QC and QA+QC approaches through the Six Sigma quality management standard to provide tangible and measurable business results through continuous process improvement that boosts customer satisfaction. The research project adopted an exploratory case study approach on three Malaysian IT companies in the business areas of IT Process, IT Service, and IT Product. Semi-structured interviews, online surveys, self-administered questionnaires, job observations, document analysis, and on-the-job training are among the methods employed in these case studies. The collected data and viewpoints, along with findings from an extensive literature review, were used to benchmark quality improvement initiatives and best practices and to develop a Six Sigma framework for SMEs in the Malaysian IT industry. The project contributed to both the theory and practice of implementing and integrating Six Sigma in IT products, services, and processes. The newly developed framework proved capable of supporting a general start-up decision by demonstrating how a company with or without a formal QIM can integrate and implement Six Sigma practices to close the variation gap between QA and QC. The framework also accommodates companies with an existing QIM that want to migrate without dropping it, by integrating a new QIM that addresses most weaknesses of the current one while retaining most of its strengths in current business routines. The framework further explores how Six Sigma can be extended to include secondary external factors that are critical to successful QIM implementation. A vital segment emphasizes Six Sigma as a QA+QC approach in IT processes; the ability to properly manage IT processes results in overall performance improvement for IT Products and IT Services. The developed Six Sigma implementation framework can serve as a baseline for SMEs to better manage, control, and track business performance and product quality, while creating clearer insights and unbiased views of Six Sigma implementation in the IT industry to drive towards operational excellence.
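
    For readers unfamiliar with the arithmetic behind Six Sigma, the standard computation converts observed defects into defects per million opportunities (DPMO) and then into a sigma level via the conventional 1.5-sigma shift; the figures below are made up, and the framework itself covers far more than this calculation.

        from statistics import NormalDist

        def dpmo(defects, units, opportunities_per_unit):
            # Defects per million opportunities.
            return defects / (units * opportunities_per_unit) * 1_000_000

        def sigma_level(dpmo_value):
            # Long-term yield converted to short-term sigma with the 1.5 shift.
            return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

        d = dpmo(defects=120, units=10_000, opportunities_per_unit=5)
        print(round(d), round(sigma_level(d), 2))  # 2400 4.32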

    Service Quality Assessment for Cloud-based Distributed Data Services

    The issue of less-than-100% reliability and trustworthiness of third-party-controlled cloud components (e.g., IaaS and SaaS components from different vendors) may lead to laxity in the QoS guarantees offered by a service-support system S to various applications. An example of S is a replicated data service that handles customer queries with fault-tolerance and performance goals. QoS laxity (i.e., SLA violations) may be inadvertent: say, due to the inability of system designers to model the impact of sub-system behaviors on the deliverable QoS. Sometimes, QoS laxity may even be intentional: say, to reap revenue-oriented benefits by cheating on resource allocations and/or excessive statistical sharing of system resources (e.g., VM cycles, number of servers). Our goal is to assess how well the internal mechanisms of S are geared to offer a required level of service to the applications. We use computational models of S to determine the optimal feasible resource schedules and verify how close the actual system behavior is to a model-computed 'gold standard'. Our QoS assessment methods allow comparing different service vendors (possibly with different business policies) in terms of canonical properties such as elasticity, linearity, isolation, and fairness (analogous to a comparative rating of restaurants). Case studies of cloud-based distributed applications are described to illustrate our QoS assessment methods. The specific systems studied in the thesis are: i) replicated data services where the servers may be hosted on multiple data centers for fault-tolerance and performance reasons; and ii) content delivery networks for geographically distributed clients where the content data caches may reside on different data centers. The methods studied in the thesis are useful in various contexts of QoS management and self-configuration in large-scale cloud-based distributed systems that are inherently complex due to their size, diversity, and environment dynamicity.
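
    Of the canonical properties listed above, fairness is the easiest to make concrete: Jain's fairness index over per-tenant allocations is one standard choice (the thesis's own metrics may be defined differently; this sketch is illustrative only).

        def jains_index(allocations):
            """1.0 = perfectly fair; 1/n = one tenant receives everything."""
            n = len(allocations)
            total = sum(allocations)
            return total * total / (n * sum(x * x for x in allocations))

        print(jains_index([10, 10, 10, 10]))  # 1.0
        print(jains_index([40, 0, 0, 0]))     # 0.25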

    Machine Learning-based Orchestration Solutions for Future Slicing-Enabled Mobile Networks

    The fifth generation of mobile networks (5G) will incorporate novel technologies such as network programmability and virtualization, enabled by the Software-Defined Networking (SDN) and Network Function Virtualization (NFV) paradigms, which have recently attracted major interest from both academic and industrial stakeholders. Building on these concepts, Network Slicing has emerged as the main driver of a novel business model where mobile operators may open, i.e., "slice", their infrastructure to new business players and offer independent, isolated, and self-contained sets of network functions and physical/virtual resources tailored to specific service requirements. While Network Slicing has the potential to increase the revenue sources of service providers, it involves a number of technical challenges that must be carefully addressed. End-to-end (E2E) network slices encompass time and spectrum resources in the radio access network (RAN), transport resources on the fronthaul/backhaul links, and computing and storage resources at core and edge data centers. Additionally, the heterogeneity of vertical service requirements (e.g., high throughput, low latency, high reliability) exacerbates the need for novel orchestration solutions able to manage end-to-end network slice resources across different domains while satisfying stringent service level agreements and specific traffic requirements. An end-to-end network slicing orchestration solution shall i) admit network slice requests such that the overall system revenues are maximized, ii) provide the required resources across different network domains to fulfill the Service Level Agreements (SLAs), and iii) dynamically adapt the resource allocation based on the real-time traffic load, end-users' mobility, and instantaneous wireless channel statistics. Certainly, a mobile network represents a fast-changing scenario characterized by complex spatio-temporal relationships connecting end-users' traffic demand with social activities and the economy. Legacy models that aim to provide dynamic resource allocation based on traditional traffic demand forecasting techniques fail to capture these important aspects. To close this gap, machine learning-aided solutions are quickly arising as promising technologies to sustain, in a scalable manner, the set of operations required in the network slicing context. How to implement such resource allocation schemes among slices, while making the most efficient use of the networking resources composing the mobile infrastructure, are the key problems underlying the network slicing paradigm that will be addressed in this thesis.
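
    As a minimal sketch of the admission-control requirement i) above, the following greedy heuristic accepts slice requests in order of revenue per unit of resource until capacity is exhausted; it stands in for, and is far simpler than, the machine learning-based orchestration the thesis develops.

        def admit_slices(requests, capacity):
            """requests: list of (slice_id, revenue, resource_demand) tuples."""
            accepted, used = [], 0
            # Highest revenue per unit of demanded resource first.
            for sid, revenue, demand in sorted(requests, key=lambda r: r[1] / r[2], reverse=True):
                if used + demand <= capacity:
                    accepted.append(sid)
                    used += demand
            return accepted

        requests = [("eMBB-1", 100, 50), ("URLLC-1", 80, 20), ("mMTC-1", 30, 40)]
        print(admit_slices(requests, capacity=80))  # ['URLLC-1', 'eMBB-1']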

    Quantifying cloud performance and dependability: Taxonomy, metric design, and emerging challenges

    In only a decade, cloud computing has emerged from a pursuit of service-driven information and communication technology (ICT) into a significant fraction of the ICT market. Responding to the growth of the market, many alternative cloud services and their underlying systems are currently vying for the attention of cloud users and providers. To make informed choices between competing cloud service providers, to permit cost-benefit analysis of cloud-based systems, and to enable system DevOps to evaluate and tune the performance of these complex ecosystems, appropriate performance metrics, benchmarks, tools, and methodologies are necessary. This requires re-examining old system properties and considering new ones, possibly leading to the redesign of classic benchmarking metrics such as expressing performance as throughput and latency (response time). In this work, we address these requirements by focusing on four system properties: (i) elasticity of the cloud service, to accommodate large variations in the amount of service requested, (ii) performance isolation between the tenants of shared cloud systems and the resulting performance variability, (iii) availability of cloud services and systems, and (iv) the operational risk of running a production system in a cloud environment. Focusing on key metrics for each of these properties, we review the state of the art, then select or propose new metrics together with measurement approaches. We see the presented metrics as a foundation for upcoming industry-standard cloud benchmarks.
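
    For property (i), one common way to quantify elasticity is the average under- and over-provisioning of resources relative to demand across measurement intervals; the sketch below is in that spirit, though the paper's exact metric definitions may differ in detail.

        def provisioning_accuracy(demand, supplied):
            """demand, supplied: equal-length per-interval resource counts."""
            n = len(demand)
            under = sum(max(d - s, 0) for d, s in zip(demand, supplied)) / n
            over = sum(max(s - d, 0) for d, s in zip(demand, supplied)) / n
            return under, over  # both 0.0 for a perfectly elastic system

        demand = [2, 4, 8, 6, 3]
        supplied = [2, 3, 8, 8, 4]
        print(provisioning_accuracy(demand, supplied))  # (0.2, 0.6)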

    Business-driven IT Management

    Business-driven IT management (BDIM) aims at ensuring successful alignment of business and IT through a thorough understanding of the impact of IT on business results, and vice versa. In this dissertation, we review the state of the art of BDIM research and position our intended contribution within the BDIM research space along the dimensions of decision support (as opposed to automation) and its application to IT service management processes. Within these research dimensions, we advance the state of the art by 1) contributing a decision-theoretical framework for BDIM and 2) presenting two novel BDIM solutions in the IT service management space. First, we present a simpler BDIM solution for prioritizing incidents, which can be used as a template for creating BDIM solutions in other IT service management processes. Then, we present a more comprehensive solution for optimizing the business-related performance of an IT support organization in dealing with incidents. Our decision-theoretical framework and models for BDIM bring the concepts of business impact and risk to the fore, and are able to cope with both monetizable and intangible aspects of business impact. We start from a constructive and quantitative re-definition of some terms that are widely used in IT service management but for which a rigorous definition was never given: business impact, cost, benefit, risk, and urgency. On top of that, we build a coherent methodology for linking IT-level metrics with business-level metrics and make progress toward solving the business-IT alignment problem. Our methodology uses a constructive and quantitative definition of alignment with business objectives, taken as the likelihood, to the best of one's knowledge, that such objectives will be met. That definition is used as the basis for building an engine for business impact calculation that is in fact an alignment computation engine. We show a sample BDIM solution for incident prioritization that is built using the decision-theoretical framework, methodology, and tools developed. We show how this sample solution could be used as a blueprint to build BDIM solutions for decision support in other IT service management processes, such as change management. However, the full power of BDIM is best understood by studying the second, fully fledged BDIM application presented in this thesis. While incident management is used as the scenario for this second application as well, its main contribution is to provide a solution for business-driven organizational redesign aimed at optimizing the performance of an IT support organization. The solution is quite rich, and features components that orchestrate advanced techniques in visualization, simulation, data mining, and operations research. We show that the techniques we use, in particular the simulation of an IT organization enacting the incident management process, bring considerable benefits both when performance is measured in terms of traditional IT metrics (mean time to resolution of incidents) and even more so when business impact metrics are brought into the picture, thereby providing a justification for investing time and effort in creating BDIM solutions. In terms of impact, the work presented in this thesis has produced about twenty conference and journal publications and has so far resulted in three patent applications. Moreover, this work has greatly influenced the design and implementation of the Business Impact Optimization module of HP DecisionCenter™, a leading commercial software product for IT optimization, whose core has been redesigned to work as described here.
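
    The core prioritization idea, ranking incidents by expected business impact rather than by technical severity alone, can be sketched in a few lines; the fields and figures are invented, and the dissertation's actual models are far richer, covering intangible impact as well.

        from dataclasses import dataclass

        @dataclass
        class Incident:
            ident: str
            violation_prob: float  # likelihood a business objective is missed
            impact_cost: float     # monetized cost if it is missed

        def prioritize(incidents):
            # Largest expected business impact first.
            return sorted(incidents,
                          key=lambda i: i.violation_prob * i.impact_cost,
                          reverse=True)

        queue = [Incident("INC-1", 0.2, 50_000), Incident("INC-2", 0.9, 8_000)]
        print([i.ident for i in prioritize(queue)])  # ['INC-1', 'INC-2']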

    An Approach to Guide Users Towards Less Revealing Internet Browsers

    When browsing the Internet, HTTP headers enable both clients and servers to send extra data in their requests or responses, such as the User-Agent string. This string contains information related to the sender's device, browser, and operating system. Previous research has shown that numerous privacy and security risks result from exposing sensitive information in the User-Agent string. For example, it enables device and browser fingerprinting and user tracking and identification. Our large-scale analysis of thousands of User-Agent strings shows that browsers differ tremendously in the amount of information they include in their User-Agent strings. As such, our work aims at guiding users towards less revealing browsers. To do so, we propose assigning an exposure score to browsers based on the information they expose and on vulnerability records. Our contribution in this work is thus twofold: first, we provide a full implementation that is ready to be deployed and used; second, we conduct a user study to identify the effectiveness and limitations of our proposed approach. Our implementation is based on more than 52 thousand unique browsers. Our performance and validation analysis shows that our solution is accurate and efficient. The source code and data set are publicly available, and the solution has been deployed.
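
    The paper's exposure score is not specified in the abstract; a plausible toy version counts how many identifying fields a User-Agent string reveals and adds a penalty per known vulnerability of the revealed version. The field patterns, weights, and vulnerability input below are assumptions for illustration.

        import re

        FIELD_PATTERNS = {
            "os": re.compile(r"Windows NT [\d.]+|Mac OS X [\d_]+|Android [\d.]+"),
            "browser_version": re.compile(r"(Chrome|Firefox|Safari|Edg)/[\d.]+"),
            "device_model": re.compile(r";\s*[A-Z][\w-]+ Build/"),
        }

        def exposure_score(user_agent, vuln_count=0, vuln_weight=0.1):
            # Fraction of identifying fields revealed, plus a vulnerability penalty.
            revealed = sum(1 for p in FIELD_PATTERNS.values() if p.search(user_agent))
            return revealed / len(FIELD_PATTERNS) + vuln_weight * vuln_count

        ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36")
        print(exposure_score(ua, vuln_count=3))  # ~0.97: two fields revealed + 3 vulns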