10 research outputs found

    A Hierarchical framework of Cloud Computing using Heart beat as Biometric

    Get PDF
    Cloud computing lets users create cloud storage spaces in which files can be stored, uploaded, and downloaded. This paper focuses on the security mechanism to be used in such a system. Until now, biometric security methods such as face recognition and fingerprint recognition have been widely used; this project instead discusses a combination of two or more biometrics. We fuse ECG and palm print to achieve a multimodal system, taking the heartbeat of the human being as the main biometric. We also discuss several algorithms and methods relevant to this topic. The two signals we consider are the electrocardiogram (ECG) and the phonocardiogram (PCG); when these two signals are combined, the multimodal system can operate. The purpose of the ECG is to record the electrical signal of the heart and store it in the database for user authentication. The purpose of the PCG is to record the sound made by the heart, that is, the sound of the heartbeat. There are some complications in this model, because it is not easy to demonstrate in real time: all templates have to be stored in the database before a user can authenticate. Considering all this, the basic challenge is time dependency, because authentication has to be completed in a timely manner
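
The matching step this abstract describes (comparing a live ECG and PCG capture against templates stored in the database) can be illustrated as score-level fusion. The similarity measure, fusion weights, and acceptance threshold below are illustrative assumptions, not the algorithm the paper specifies:

```python
import numpy as np

def similarity(sample, template):
    """Pearson correlation between a live signal and a stored template."""
    s = (sample - sample.mean()) / (sample.std() + 1e-9)
    t = (template - template.mean()) / (template.std() + 1e-9)
    return float(np.dot(s, t) / len(s))

def authenticate(ecg_sample, pcg_sample, ecg_template, pcg_template,
                 w_ecg=0.6, w_pcg=0.4, threshold=0.7):
    """Score-level fusion: weighted sum of the per-modality similarities."""
    score = w_ecg * similarity(ecg_sample, ecg_template) \
          + w_pcg * similarity(pcg_sample, pcg_template)
    return score >= threshold, score

# Synthetic demo: the enrolled user presents signals close to their templates.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
ecg_template = np.sin(2 * np.pi * 1.2 * t)   # stand-in for an ECG beat shape
pcg_template = np.sin(2 * np.pi * 40 * t)    # stand-in for a heart-sound trace
ok, score = authenticate(ecg_template + 0.05 * rng.standard_normal(500),
                         pcg_template + 0.05 * rng.standard_normal(500),
                         ecg_template, pcg_template)
```

Raising the threshold trades false accepts for false rejects; a deployed system would tune the weights and threshold on enrollment data rather than use the fixed values assumed here.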

    Efficient optimal policy and resource allocation to provide qos services in multi-cloud

    Get PDF
    ABSTRACT: We propose a novel Service Level Agreement (SLA) framework for cloud computing, in which a price control parameter is used to satisfy the QoS needs of all classes in the market. The framework uses reinforcement learning (RL) to derive a VM hiring policy that can adapt to changes in the system to guarantee the QoS for all user classes. These changes include service cost, system capacity, and the demand for service. In existing solutions, when the CP leases more VMs to one class of users, the QoS is degraded for the other classes because of an insufficient number of VMs. In contrast, our approach integrates computing-resource adaptation with service admission control based on the RL model. To the best of our knowledge, this study is the first attempt to facilitate this integration to enhance the CP's profit and avoid SLA violations

    Adaptive Resource Allocation and Provisioning in Multi-Service Cloud Environments

    Get PDF
    In the current cloud business environment, the cloud provider (CP) can provide a means for offering the required quality of service (QoS) for multiple classes of clients. We consider the cloud market where various resources such as CPUs, memory, and storage in the form of Virtual Machine (VM) instances can be provisioned and then leased to clients with QoS guarantees. Unlike existing works, we propose a novel Service Level Agreement (SLA) framework for cloud computing, in which a price control parameter is used to meet QoS demands for all classes in the market. The framework uses reinforcement learning (RL) to derive a VM hiring policy that can adapt to changes in the system to guarantee the QoS for all client classes. These changes include: service cost, system capacity, and the demand for service. In existing solutions, when the CP leases more VMs to a class of clients, the QoS is degraded for other classes due to an inadequate number of VMs. However, our approach integrates computing resources adaptation with service admission control based on the RL model. To the best of our knowledge, this study is the first attempt that facilitates this integration to enhance the CP's profit and avoid SLA violation. Numerical analysis stresses the ability of our approach to avoid SLA violation while maximizing the CP’s profit under varying cloud environment conditions
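
The RL-derived VM hiring policy can be sketched as tabular Q-learning over a toy admission-control MDP. The state space, rewards, and release model below are invented for illustration and are far simpler than the paper's framework:

```python
import random

# State: number of VMs currently leased, out of CAPACITY.
# Action: 0 = reject the incoming request, 1 = admit it (lease one more VM).
CAPACITY = 5
REVENUE = 1.0        # reward for serving a request within capacity
SLA_PENALTY = -5.0   # penalty when admitting beyond capacity degrades QoS

def step(state, action):
    if action == 0:
        return state, 0.0
    if state < CAPACITY:
        return state + 1, REVENUE
    return state, SLA_PENALTY        # overbooked: SLA violation

# Tabular Q-learning over the toy MDP.
Q = {(s, a): 0.0 for s in range(CAPACITY + 1) for a in (0, 1)}
alpha, gamma, epsilon = 0.1, 0.9, 0.2
random.seed(1)
for episode in range(2000):
    state = 0
    for _ in range(20):              # 20 request arrivals per episode
        action = random.choice((0, 1)) if random.random() < epsilon \
                 else max((0, 1), key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        Q[(state, action)] += alpha * (
            reward + gamma * max(Q[(nxt, 0)], Q[(nxt, 1)]) - Q[(state, action)])
        state = nxt
        if state > 0 and random.random() < 0.3:
            state -= 1               # a leased VM is occasionally released

policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in range(CAPACITY + 1)}
```

With these rewards the sketch learns to admit requests while free VMs remain and to reject them at capacity, which is the admission-control behavior the abstract describes.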

    Adaptive Quality-Level Monitoring in Service-Oriented Architectures

    Get PDF
    Formulation of an adaptive approach, and implementation of a mechanism for monitoring and verifying quality-level agreements, in service-oriented architectures

    Business-driven IT Management

    Get PDF
    Business-driven IT management (BDIM) aims at ensuring successful alignment of business and IT through a thorough understanding of the impact of IT on business results, and vice versa. In this dissertation, we review the state of the art of BDIM research and position our intended contribution within the BDIM research space along the dimensions of decision support (as opposed to automation) and its application to IT service management processes. Within these research dimensions, we advance the state of the art by 1) contributing a decision-theoretical framework for BDIM and 2) presenting two novel BDIM solutions in the IT service management space. First we present a simpler BDIM solution for prioritizing incidents, which can be used as a template for creating BDIM solutions in other IT service management processes. Then, we present a more comprehensive solution for optimizing the business-related performance of an IT support organization in dealing with incidents. Our decision-theoretical framework and models for BDIM bring the concepts of business impact and risk to the fore, and are able to cope with both monetizable and intangible aspects of business impact. We start from a constructive and quantitative re-definition of some terms that are widely used in IT service management but for which a rigorous definition was never given: business impact, cost, benefit, risk and urgency. On top of that, we build a coherent methodology for linking IT-level metrics with business-level metrics and make progress toward solving the business-IT alignment problem. Our methodology uses a constructive and quantitative definition of alignment with business objectives, taken as the likelihood – to the best of one’s knowledge – that such objectives will be met. That definition is used as the basis for building an engine for business impact calculation that is, in fact, an alignment computation engine. 
We show a sample BDIM solution for incident prioritization that is built using the decision-theoretical framework, the methodology and the tools developed. We show how the sample BDIM solution could be used as a blueprint to build BDIM solutions for decision support in other IT service management processes, such as change management. However, the full power of BDIM is best understood by studying the second, fully fledged BDIM application presented in this thesis. While incident management is used as the scenario for this second application as well, its main contribution is a solution for business-driven organizational redesign to optimize the performance of an IT support organization. The solution is quite rich, and features components that orchestrate advanced techniques in visualization, simulation, data mining and operations research. We show that the techniques we use – in particular the simulation of an IT organization enacting the incident management process – bring considerable benefits both when performance is measured in terms of traditional IT metrics (mean time to resolution of incidents), and even more so when business impact metrics are brought into the picture, thereby providing a justification for investing time and effort in creating BDIM solutions. In terms of impact, the work presented in this thesis produced about twenty conference and journal publications, and has so far resulted in three patent applications. Moreover, this work has greatly influenced the design and implementation of the Business Impact Optimization module of HP DecisionCenter™: a leading commercial software product for IT optimization, whose core has been re-designed to work as described here
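
The idea of an impact calculation engine driving incident prioritization can be reduced to a few lines: score each incident by the likelihood that a business objective will be missed times the monetized loss, then sort. The incident records and figures below are invented, not taken from the dissertation:

```python
# Illustrative incidents: each carries an estimated probability that a
# business objective is missed if the incident stays open, and the loss then.
incidents = [
    {"id": "INC-1", "service": "payroll",  "p_objective_missed": 0.10, "loss_if_missed": 50000},
    {"id": "INC-2", "service": "webshop",  "p_objective_missed": 0.60, "loss_if_missed": 20000},
    {"id": "INC-3", "service": "intranet", "p_objective_missed": 0.90, "loss_if_missed": 1000},
]

def expected_impact(inc):
    """Expected monetized business impact = P(objective missed) * loss."""
    return inc["p_objective_missed"] * inc["loss_if_missed"]

# Work the queue in order of expected business impact, not raw severity.
queue = sorted(incidents, key=expected_impact, reverse=True)
```

Note how a moderately likely outage on a high-revenue service outranks a near-certain failure of a low-value one, which is the kind of re-ordering that pure urgency-based prioritization misses.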

    Self-adaptive SLA-driven capacity management for internet services

    No full text
    Abstract — This work considers the problem of hosting multiple third-party Internet services in a cost-effective manner so as to maximize a provider’s business objective. For this purpose, we present a dynamic capacity management framework based on an optimization model, which links a cost model based on SLA contracts with an analytical queuing-based performance model, in an attempt to adapt the platform to changing capacity needs in real time. In addition, we propose a two-level SLA specification for different operation modes, namely, normal and surge, which allows for per-use service accounting with respect to throughput and response-time tail-distribution requirements. The cost model proposed is based on penalties, incurred by the provider due to SLA violation, and rewards, received when the service level expectations are exceeded. Finally, we evaluate approximations for predicting the performance of the hosted services under two different scheduling disciplines, namely FCFS and processor sharing. Through simulation, we assess the effectiveness of the proposed approach as well as the level of accuracy resulting from the performance model approximations
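
As a minimal instance of the analytical queuing-based performance model, an M/M/1 queue under FCFS has an exponentially distributed response time with rate (mu - lambda), which yields both the mean and the tail needed to check a two-level SLA. The rates and SLA targets below are illustrative assumptions, not values from the paper:

```python
import math

def mm1_fcfs(lmbda, mu, sla_resp, sla_quantile):
    """M/M/1 under FCFS: the response time is exponential with rate (mu - lmbda).
    Returns the mean response time, P(response > sla_resp), and whether the
    SLA (sla_quantile of responses under sla_resp) is met."""
    assert lmbda < mu, "queue must be stable"
    mean_resp = 1.0 / (mu - lmbda)
    tail = math.exp(-(mu - lmbda) * sla_resp)
    meets_sla = tail <= 1.0 - sla_quantile
    return mean_resp, tail, meets_sla

# Service rate 10 req/s, arrivals 8 req/s; SLA: 95% of responses under 1 s.
mean_resp, tail, ok = mm1_fcfs(lmbda=8.0, mu=10.0, sla_resp=1.0, sla_quantile=0.95)
```

Here the mean looks healthy (0.5 s) yet the tail violates the 95th-percentile target, which is why the paper's SLA specification constrains the tail distribution and not just throughput.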

    Extra Functional Properties Evaluation of Self-managed Software Systems with Formal Methods

    Get PDF
    Many of today's software applications are bound to operate in dynamic contexts. These may manifest themselves as changes in the application's execution environment, changes in the application's requirements, changes in the workload received by the application, or changes in any element the application can perceive and be affected by. Moreover, these dynamic contexts are not restricted to one particular application domain; they appear in many, such as embedded systems, service-oriented architectures, high-performance computing clusters, mobile devices, and networking software. These characteristics discourage engineers from developing software that cannot change its execution in some way to accommodate the context in which it is running at each moment. Therefore, for the software to satisfy its requirements at all times, it must include mechanisms for changing its execution configuration. Furthermore, because context changes are frequent and affect multiple devices of the application, manual human intervention to change the software's configuration is not a feasible solution. To face these challenges, the Software Engineering community has proposed new paradigms that enable the development of software that copes with changing contexts automatically, for example Autonomic Computing and Self-* Software. In such approaches, the software itself manages the mechanisms for changing its execution configuration, requiring no human intervention. An essential aspect of self-adaptive software (one of the most general terms for Self-* Software) is planning its changes, or adaptations. 
Adaptation plans determine both how the software will adapt and the appropriate moments to execute those adaptations. There is a broad set of situations for which self-adaptation is a solution. One of them is keeping the system satisfying its extra-functional requirements, such as quality of service (QoS) and energy consumption. This thesis investigates that situation using formal methods. One contribution of this thesis is a proposal for grounding in a software architecture those systems that are self-adaptive with respect to their QoS and energy consumption. To this end, this part of the research is guided by a three-layer reference architecture for self-adaptive systems. The benefit of using a reference architecture is that it readily exposes the new challenges in designing this kind of system. Naturally, adaptation planning is one of the activities considered in the architecture. Another contribution of the thesis is a set of methods for creating adaptation plans. Formal methods play an essential role in this activity, since they enable the study of the extra-functional properties of systems in different configurations. The formal method used for these analyses is Markovian Petri nets. Once the adaptation plan has been created, we investigated the use of formal methods for evaluating the QoS and energy consumption of self-adaptive systems. The thesis thereby contributes to the QoS analysis community the analysis of a new and particularly complex class of software systems. 
Carrying out this analysis requires modeling the dynamic changes of the execution context, for which a variety of formal methods have been used, such as Markov-modulated Poisson processes to estimate the parameters of the variations in the workload received by the application, and hidden Markov models to predict the state of the execution environment. These models have been used together with Petri nets to evaluate self-adaptive systems and obtain results on their QoS and energy consumption. The preceding research brought to light the fact that a system's adaptability is not as easily quantifiable a property as QoS properties (for example, response time) or energy consumption. Consequently, we investigated in that direction and, as a result, another contribution of this thesis is a set of metrics for quantifying the adaptability of service-based systems. Achieving the above contributions required intensive use of models and model transformations, a task for which the best practices of the Model-Driven Engineering (MDE) research field have been followed. The MDE research of this thesis has contributed: an increase in the modeling power of a previously proposed software modeling language, and transformation methods from two software modeling languages to stochastic Petri nets
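
A Markov-modulated Poisson process of the kind mentioned for workload modeling can be simulated in a few lines: a hidden two-state Markov chain switches the Poisson arrival rate between a calm and a bursty regime. The rates and transition matrix are invented for illustration:

```python
import numpy as np

RATES = np.array([2.0, 20.0])   # arrivals/step in state 0 (calm) and 1 (burst)
P = np.array([[0.9, 0.1],       # per-step transition matrix of the hidden chain
              [0.3, 0.7]])

def simulate(steps, seed=0):
    """Per-step arrival counts of a two-state MMPP."""
    rng = np.random.default_rng(seed)
    state, counts = 0, []
    for _ in range(steps):
        counts.append(rng.poisson(RATES[state]))  # rate set by the hidden state
        state = rng.choice(2, p=P[state])         # hidden chain moves on
    return np.array(counts)

counts = simulate(5000)
```

The resulting counts are overdispersed relative to a plain Poisson process (the variance exceeds the mean), which is the property that makes MMPPs a fit for bursty workloads.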

    Determining the Cost of Business Continuity Management - A Case Study of IT Service Continuity Management Activity Cost Analysis

    Get PDF
    This single-organisation case study discusses the cost of business continuity management in IT services. Information technology (IT) expenses can amount to a substantial part of operational costs in a company, and IT leaders tend to aim for thorough IT cost management to meet financial targets. Thus, information security activities such as business continuity management (BCM) rank among the most important concerns for IT leaders. Despite the concerns of IT management, senior management appears hesitant to spend as much on BCM as IT management would hope for. Senior management may struggle with the question of how to justify spending on an activity that proves its usefulness only when a rare event occurs. The challenge of measuring the costs of sociotechnical activities was the inspiration for this work: to find out whether the cost of BCM could be explained better to help decision making. Two main paradigms emerged from the literature: BCM activities in the context of organisational routines, and IT cost and information security cost classifications. The theoretical assumption was that the relationship between IT costs and BCM activities emulates activity-based costing (ABC) theory, whose premise is a cause-and-effect relationship between activities and costs. The key question is “How to determine the cost of BCM activities in IT services?” To find out, I used a comprehensive archival data set from a case company and designed a retrospective quantitative model to analyse the association between BCM activities and IT costs. By employing a causal-comparative method and multiple linear regression analysis, I compared distinct groups of IT services to determine how much of the variation in IT costs could be explained by BCM activities. In addition, I measured the relative effect of each independent variable on the total cost of BCM. 
As both statistical and practical significance tests were supported, several interesting associations were observed between BCM activities and IT costs, namely human, technology and organisational resources, as well as IT service designs. The research presents two theoretical contributions and one empirical contribution to theory. The first and primary contribution is the BCM activity cost model; this is the final product answering the main research question of determining the cost of BCM in IT services. The second contribution is the total-cost-of-BCM framework, which contributes to the broader academic discussion of information system (IS) cost taxonomies in IT services and information security. The third contribution is empirical confirmation of how to observe unknown cost effects by multiple regression analysis. Learnings from this research can benefit IS researchers focused on the economic aspects of IS and IT. The research also introduces three practical contributions. The first concerns the observation of overall BCM cost effects on IT services. Although the results of a single case study cannot be generalised directly to every organisation, the information herein may aid companies in evaluating BCM impact on their budgets. The second practical contribution concerns the challenges of measuring activity costs that are difficult to observe directly. Within the limitations of this research, nothing suggests that the BCM activity cost model could not be productised and integrated into other cost appraisal tools in a company, or applied in other IT service management areas. The last important practical contribution is the definitions of the BCM activity cost variables. Confirming the cost association between theoretical and empirical BCM frameworks can help BCM professionals promote the BCM process
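
The causal-comparative setup described above (regressing IT cost on BCM activity variables and reading off each variable's contribution and the explained variation) can be sketched with ordinary least squares on synthetic data. The variables and cost figures below are invented, not the case company's data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
recovery_tests = rng.integers(0, 10, n)   # BCM activity: recovery exercises/yr
redundancy     = rng.integers(0, 2, n)    # BCM design: redundant deployment?
baseline       = rng.normal(100, 5, n)    # non-BCM cost component

# Synthetic "true" model: each test adds ~3 cost units, redundancy ~25.
cost = baseline + 3.0 * recovery_tests + 25.0 * redundancy + rng.normal(0, 2, n)

# Multiple linear regression via least squares.
X = np.column_stack([np.ones(n), recovery_tests, redundancy])
coef, *_ = np.linalg.lstsq(X, cost, rcond=None)
intercept, beta_tests, beta_redundancy = coef

# R^2: share of IT-cost variation explained by the BCM activity variables.
resid = cost - X @ coef
r2 = 1 - resid.var() / cost.var()
```

The fitted coefficients recover the per-activity cost effects hidden in the noisy totals, which is the sense in which regression can "observe" activity costs that are not directly measurable.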

    Architecture-Level Software Performance Models for Online Performance Prediction

    Get PDF
    Proactive performance and resource management of modern IT infrastructures requires the ability to predict at run time how the performance of running services would be affected if the workload or the system changes. In this thesis, modeling and prediction facilities that enable online performance prediction during system operation are presented. Analyses of the impact of reconfigurations and workload trends can be conducted at the model level, without executing expensive performance tests