
    Open semantic service networks

    Online service marketplaces will soon be part of the economy, scaling the provision of specialized multi-party services through automation and standardization. Current research, such as the *-USDL service description language family, is already defining the basic building blocks to model the next generation of business services. Nonetheless, these developments do not aim to interconnect services via service relationships. Without the concept of relationship, marketplaces will be seen as mere functional silos containing service descriptions. Yet, in real economies, all services are related and connected. Therefore, to address this gap we introduce the concept of the open semantic service network (OSSN), concerned with the establishment of rich relationships between services. These networks will provide valuable knowledge on the global service economy, which can be exploited for many socio-economic and scientific purposes such as service network analysis, management, and control.
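
    The notion of an open semantic service network can be pictured as a graph whose nodes are service descriptions and whose edges carry typed relationships. The following minimal Python sketch illustrates that idea only; the service names and relationship types are assumptions for illustration, not part of *-USDL or the OSSN model itself.

```python
# Minimal, illustrative sketch of a service network with typed relationships.
# Service names and relationship types are hypothetical examples.
from collections import defaultdict

class ServiceNetwork:
    def __init__(self):
        # adjacency: source service -> list of (relationship type, target service)
        self.edges = defaultdict(list)

    def relate(self, source, relation, target):
        """Record a typed relationship between two service descriptions."""
        self.edges[source].append((relation, target))

    def related(self, source):
        """Return all services directly related to `source`."""
        return list(self.edges[source])

net = ServiceNetwork()
net.relate("PaymentService", "depends_on", "FraudCheckService")
net.relate("ShippingService", "composes", "TrackingService")
print(net.related("PaymentService"))  # [('depends_on', 'FraudCheckService')]
```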

    Gaussian-based Probabilistic Deep Supervision Network for Noise-Resistant QoS Prediction

    Quality of Service (QoS) prediction is an essential task in recommendation systems, where accurately predicting unknown QoS values can improve user satisfaction. However, existing QoS prediction techniques may perform poorly in the presence of noisy data, such as fake location information or virtual gateways. In this paper, we propose the Probabilistic Deep Supervision Network (PDS-Net), a novel framework for QoS prediction that addresses this issue. PDS-Net utilizes a Gaussian-based probabilistic space to supervise intermediate layers and learns probability spaces for both known features and true labels. Moreover, PDS-Net employs a condition-based multitasking loss function to identify objects with noisy data and applies supervision directly to deep features sampled from the probability space by optimizing the Kullback-Leibler distance between the probability space of these objects and the real-label probability space. Thus, PDS-Net effectively reduces errors resulting from the propagation of corrupted data, leading to more accurate QoS predictions. Experimental evaluations on two real-world QoS datasets demonstrate that the proposed PDS-Net outperforms state-of-the-art baselines, validating the effectiveness of our approach.
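
    As a rough illustration of the kind of probabilistic supervision described above, the sketch below computes the closed-form Kullback-Leibler divergence between two diagonal Gaussians, which could serve as a loss term aligning a feature distribution with a label distribution. It is a minimal sketch, not the authors' PDS-Net implementation, and all names and values are assumptions.

```python
import numpy as np

def kl_diag_gaussians(mu_p, sigma_p, mu_q, sigma_q):
    """Closed-form KL(N(mu_p, sigma_p^2) || N(mu_q, sigma_q^2)) for diagonal
    Gaussians, summed over dimensions."""
    mu_p, sigma_p = np.asarray(mu_p, float), np.asarray(sigma_p, float)
    mu_q, sigma_q = np.asarray(mu_q, float), np.asarray(sigma_q, float)
    return np.sum(
        np.log(sigma_q / sigma_p)
        + (sigma_p**2 + (mu_p - mu_q)**2) / (2.0 * sigma_q**2)
        - 0.5
    )

# Toy usage: penalise the gap between a predicted feature distribution
# and the distribution learned from the true labels.
loss = kl_diag_gaussians(mu_p=[0.1, -0.2], sigma_p=[1.0, 0.8],
                         mu_q=[0.0,  0.0], sigma_q=[1.0, 1.0])
print(round(loss, 4))
```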

    Context Aware Computing for The Internet of Things: A Survey

    As we are moving towards the Internet of Things (IoT), the number of sensors deployed around the world is growing at a rapid pace. Market research has shown significant growth in sensor deployments over the past decade and has predicted a significant increase in the growth rate in the future. These sensors continuously generate enormous amounts of data. However, in order to add value to raw sensor data we need to understand it. Collection, modelling, reasoning, and distribution of context in relation to sensor data play a critical role in this challenge. Context-aware computing has proven to be successful in understanding sensor data. In this paper, we survey context awareness from an IoT perspective. We present the necessary background by introducing the IoT paradigm and context-aware fundamentals at the beginning. Then we provide an in-depth analysis of the context life cycle. We evaluate a subset of 50 projects, which represent the majority of research and commercial solutions proposed in the field of context-aware computing over the last decade (2001-2011), based on our own taxonomy. Finally, based on our evaluation, we highlight the lessons to be learnt from the past and some possible directions for future research. The survey addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and the IoT. Our goal is not only to analyse, compare and consolidate past research work but also to appreciate their findings and discuss their applicability towards the IoT. Comment: IEEE Communications Surveys & Tutorials Journal, 201
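
    The survey analyses context handling as a life cycle of roughly four stages (acquisition, modelling, reasoning, and distribution). The short sketch below shows how such a pipeline might be wired together; the stage functions, sensor fields, and temperature rule are illustrative assumptions rather than the survey's own taxonomy.

```python
# Illustrative context life-cycle pipeline: acquire -> model -> reason -> distribute.
# Field names and the temperature rule are made-up examples.
def acquire():
    # In practice this would read from real sensors.
    return {"sensor_id": "s-42", "temperature_c": 31.5}

def model(raw):
    # Attach context (units, location) to the raw reading.
    return {**raw, "unit": "celsius", "location": "greenhouse-1"}

def reason(ctx):
    # Derive higher-level context from the modelled data.
    ctx["state"] = "too_hot" if ctx["temperature_c"] > 30 else "ok"
    return ctx

def distribute(ctx):
    # Hand the enriched context to interested consumers.
    print(f"{ctx['location']}: {ctx['state']} ({ctx['temperature_c']} C)")

distribute(reason(model(acquire())))
```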

    An Adaptability-Driven Economic Model for Service Profitability

    Ph.D. (Doctor of Philosophy)

    Security and trust in cloud computing and IoT through applying obfuscation, diversification, and trusted computing technologies

    Cloud computing and the Internet of Things (IoT) are widely deployed and commonly used technologies nowadays. The advanced services offered by cloud computing have made it a highly demanded technology, and enterprises increasingly rely on the cloud to deliver services to their customers. The prevalent use of the cloud means that more data is stored outside the organization’s premises, which raises concerns about the security and privacy of the stored and processed data and highlights the significance of effective practices to secure the cloud infrastructure. The number of IoT devices is growing rapidly, and the technology is employed in a wide range of sectors, including smart healthcare, industrial automation, and smart environments. These devices collect and exchange a great deal of information, some of which may contain critical and personal data of their users. Hence, it is highly significant to protect the data collected and shared over the network; nevertheless, studies indicate that attacks on these devices are increasing, while a high percentage of IoT devices lack proper security measures to protect the devices, the data, and the privacy of the users. In this dissertation, we study the security of cloud computing and the IoT and propose software-based security approaches, supported by hardware-based technologies, to provide robust measures for enhancing the security of these environments. To achieve this goal, we use obfuscation and diversification as the software security techniques: code obfuscation protects the software from malicious reverse engineering, and diversification mitigates the risk of large-scale exploits. As hardware-based security solutions we study trusted computing and Trusted Execution Environments (TEE). The Trusted Platform Module (TPM) provides security and trust through a hardware root of trust and assures the integrity of a platform. We also study Intel SGX, a TEE solution that guarantees the integrity and confidentiality of the code and data loaded into its protected container, the enclave. More precisely, by obfuscating and diversifying the operating systems and APIs of IoT devices, we secure them at the application level, and by obfuscating and diversifying the communication protocols, we protect the exchange of data between them at the network level. To secure cloud computing, we apply obfuscation and diversification techniques to the cloud computing software on the client side. For an enhanced level of security, we employ the hardware-based solutions TPM and SGX, which, in addition to security, ensure layered trust in various layers from the hardware to the application. As a result of this PhD research, the dissertation addresses a number of security risks targeting the IoT and cloud computing through the delivered publications, and presents a brief outlook on future research directions.

    Blueprint model and language for engineering cloud applications

    The research presented in this thesis is positioned within the domain of engineering Cloud Service-Based Applications (CSBAs). Its contribution is twofold: (1) a uniform specification language, the Blueprint Specification Language (BSL), for specifying cloud services across several cloud vendors, and (2) a set of associated techniques, the Blueprint Manipulation Techniques (BMTs), for publishing, querying, and composing cloud service specifications, with the aim of supporting the flexible design and configuration of a CSBA.
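
    The BSL itself is not reproduced here, but the general idea of a vendor-neutral blueprint (a declarative description of the services an application offers and requires, which tools can publish, query, and compose) can be sketched with plain data structures. Every field name and the matching rule below are assumptions for illustration, not the thesis's actual syntax.

```python
# Hypothetical, simplified stand-in for a cloud service blueprint; the real
# Blueprint Specification Language (BSL) defines its own syntax and fields.
blueprint = {
    "name": "web-shop-frontend",
    "offers": [{"type": "http-endpoint", "port": 443}],
    "requires": [
        {"type": "relational-database", "min_storage_gb": 50},
        {"type": "object-storage"},
    ],
}

def satisfies(offer, requirement):
    """Very naive matching of a requirement against an offered capability."""
    return offer.get("type") == requirement.get("type")

catalogue = [{"provider": "vendor-a", "type": "relational-database", "min_storage_gb": 100}]
for req in blueprint["requires"]:
    matches = [o for o in catalogue if satisfies(o, req)]
    print(req["type"], "->", [m["provider"] for m in matches] or "no match")
```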

    Molecular phylogenetic analysis: design and implementation of scalable and reliable algorithms and verification of phylogenetic properties

    El tĂ©rmino bioinformĂĄtica tiene muchas acepciones, una gran parte referentes a la bioinformĂĄtica molecular: el conjunto de mĂ©todos matemĂĄticos, estadĂ­sticos y computacionales que tienen como objetivo dar soluciĂłn a problemas biolĂłgicos, haciendo uso exclusivamente de las secuencias de ADN, ARN y proteĂ­nas y su informaciĂłn asociada. La filogenĂ©tica es el ĂĄrea de la bioinformĂĄtica encargada del estudio de la relaciĂłn evolutiva entre organismos de la misma o distintas especies. Al igual que sucedĂ­a con la definiciĂłn anterior, los trabajos realizados a lo largo de esta tesis se centran en la filogenĂ©tica molecular: la rama de la filogenĂ©tica que analiza las mutaciones hereditarias en secuencias biolĂłgicas (principalmente ADN) para establecer dicha relaciĂłn evolutiva. El resultado de este anĂĄlisis se plasma en un ĂĄrbol evolutivo o filogenia. Una filogenia suele representarse como un ĂĄrbol con raĂ­z, normalmente binario, en el que las hojas simbolizan los organismos existentes actualmente y, la raĂ­z, su ancestro comĂșn. Cada nodo interno representa una mutaciĂłn que ha dado lugar a una divisiĂłn en la clasificaciĂłn de los descendientes. Las filogenias se construyen mediante procesos de inferencia en base a la informaciĂłn disponible, que pertenece mayoritariamente a organismos existentes hoy en dĂ­a. La complejidad de este problema se ha visto reflejada en la clasificaciĂłn de la mayorĂ­a de mĂ©todos propuestos para su soluciĂłn como NP-duros [1-3].El caso real de aplicaciĂłn de esta tesis ha sido el ADN mitocondrial. Este tipo de secuencias biolĂłgicas es relevante debido a que tiene un alto Ă­ndice de mutaciĂłn, por lo que incluso filogenias de organismos muy cercanos evolutivamente proporcionan datos significativos para la comunidad biolĂłgica. AdemĂĄs, varias mutaciones del ADN mitocondrial humano se han relacionado directamente con enfermedad y patogenias, la mayorĂ­a mortales en individuos no natos o de corta edad. En la actualidad hay mĂĄs de 30000 secuencias disponibles de ADN mitocondrial humano, lo que, ademĂĄs de su utilidad cientĂ­fica, ha permitido el anĂĄlisis de rendimiento de nuestras contribuciones para datos masivos (Big Data). La reciente incorporaciĂłn de la bioinformĂĄtica en la categorĂ­a Big Data viene respaldada por la mejora de las tĂ©cnicas de digitalizaciĂłn de secuencias biolĂłgicas que sucediĂł a principios del siglo 21 [4]. Este cambio aumentĂł drĂĄsticamente el nĂșmero de secuencias disponibles. Por ejemplo, el nĂșmero de secuencias de ADN mitocondrial humano pasĂł de duplicarse cada cuatro años, a hacerlo en menos de dos. Por ello, un gran nĂșmero de mĂ©todos y herramientas usados hasta entonces han quedado obsoletos al no ser capaces de procesar eficientemente estos nuevos volĂșmenes de datos.Este es motivo por el que todas las aportaciones de esta tesis han sido desarrolladas para poder tratar grandes volĂșmenes de datos. La contribuciĂłn principal de esta tesis es un framework que permite diseñar y ejecutar automĂĄticamente flujos de trabajo para la inferencia filogenĂ©tica: PhyloFlow [5-7]. Su creaciĂłn fue promovida por el hecho de que la mayorĂ­a de sistemas de inferencia filogenĂ©tica existentes tienen un flujo de trabajo fijo y no se pueden modificar ni las herramientas software que los componen ni sus parĂĄmetros. Esta decisiĂłn puede afectar negativamente a la precisiĂłn del resultado si el flujo del sistema o alguno de sus componentes no estĂĄ adaptado a la informaciĂłn biolĂłgica que se va a utilizar como entrada. 
Por ello, PhyloFlow incorpora un proceso de configuraciĂłn que permite seleccionar tanto cada uno de los procesos que formarĂĄn parte del sistema final, como las herramientas y mĂ©todos especĂ­ficos y sus parĂĄmetros. Se han incluido consejos y opciones por defecto durante el proceso de configuraciĂłn para facilitar su uso, sobre todo a usuarios nĂłveles. AdemĂĄs, nuestro framework permite la ejecuciĂłn desatendida de los sistemas filogenĂ©ticos generados, tanto en ordenadores de sobremesa como en plataformas hardware (clusters, computaciĂłn en la nube, etc.). Finalmente, se han evaluado las capacidades de PhyloFlow tanto en la reproducciĂłn de sistemas de inferencia filogenĂ©tica publicados anteriormente como en la creaciĂłn de sistemas orientados a problemas intensivos como el de inferencia del ADN mitocondrial humano. Los resultados muestran que nuestro framework no solo es capaz de realizar los retos planteados, sino que, en el caso de la replicaciĂłn de sistemas, la posibilidad de configurar cada elemento que los componen mejora ampliamente su aplicabilidad.Durante la implementaciĂłn de PhyloFlow descubrimos varias carencias importantes en algunas bibliotecas software actuales que dificultaron la integraciĂłn y gestiĂłn de las herramientas filogenĂ©ticas. Por este motivo se decidiĂł crear la primera biblioteca software en Python para estudios de filogenĂ©tica molecular: MEvoLib [8]. Esta biblioteca ha sido diseñada para proveer una sola interfaz para los conjuntos de herramientas software orientados al mismo proceso, como el multialineamiento o la inferencia de filogenias. MEvoLib incluye ademĂĄs configuraciones por defecto y mĂ©todos que hacen uso de conocimiento biolĂłgico especĂ­fico para mejorar su precisiĂłn, adaptĂĄndose a las necesidades de cada tipo de usuario. Como Ășltima caracterĂ­stica relevante, se ha incorporado un proceso de conversiĂłn de formatos para los ficheros de entrada y salida de cada interfaz, de forma que, si la herramienta seleccionada no soporta dicho formato, este es adaptado automĂĄticamente. Esta propiedad facilita el uso e integraciĂłn de MEvoLib en scripts y herramientas software.El estudio del caso de aplicaciĂłn de PhyloFlow al ADN mitocondrial humano ha expuesto los elevados costes tanto computacionales como econĂłmicos asociados a la inferencia de grandes filogenias. Por ello, sistemas como PhyloTree [9], que infiere un tipo especial de filogenias de ADN mitocondrial humano, recalculan sus resultados con una frecuencia mĂĄxima anual. Sin embargo, como ya hemos comentado anteriormente, las tĂ©cnicas de secuenciaciĂłn actuales permiten la incorporaciĂłn de cientos o incluso miles de secuencias biolĂłgicas nuevas cada mes. Este desfase entre productor y consumidor hace que dichas filogenias queden desactualizadas en unos pocos meses. Para solucionar este problema hemos diseñado un nuevo algoritmo que permite la actualizaciĂłn de una filogenia mediante la incorporaciĂłn iterativa de nuevas secuencias: PHYSER [10]. AdemĂĄs, la propia informaciĂłn evolutiva se utiliza para detectar posibles mutaciones introducidas artificialmente por el proceso de secuenciaciĂłn, inexistentes en la secuencia original. Las pruebas realizadas con ADN mitocondrial han probado su eficacia y eficiencia, con un coste temporal por secuencia inferior a los 20 segundos.El desarrollo de nuevas herramientas para el anĂĄlisis de filogenias tambiĂ©n ha sido una parte importante de esta tesis. 
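
    PHYSER's exact procedure is given in [10]; as a rough, hypothetical illustration of the general idea of updating a phylogeny by iteratively placing new sequences, the sketch below simply attaches each new sequence next to its closest existing leaf by Hamming distance.

```python
# Conceptual sketch only: place each new sequence next to its nearest leaf.
# PHYSER [10] uses the phylogeny itself and more refined criteria.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def place_sequences(leaves, new_seqs):
    """leaves: dict name -> sequence; new_seqs: dict name -> sequence.
    Returns a list of (new name, attached-to leaf name) placements."""
    placements = []
    for name, seq in new_seqs.items():
        closest = min(leaves, key=lambda leaf: hamming(leaves[leaf], seq))
        placements.append((name, closest))
        leaves[name] = seq  # the new sequence becomes a leaf for later additions
    return placements

leaves = {"L1": "ACCTGA", "L2": "ACGTGA", "L3": "TCGTGC"}
print(place_sequences(leaves, {"N1": "ACGTGC"}))  # [('N1', 'L2')]
```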
    The development of new tools for the analysis of phylogenies has also been an important part of this thesis. Specifically, two main contributions have been made in this respect: PhyloViewer [11] and a tool for conservation analysis [12]. PhyloViewer is a viewer for extensive phylogenies, that is, phylogenies with at least a thousand leaves. It provides a novel interface that displays the selected node and its child nodes, together with all the information associated with each of them (identifier, biological sequence, ...). This design decision avoids the usual “blur” produced by most visualisation tools when they display such phylogenies whole on screen. It has also been developed on a client-server architecture, so the phylogeny is processed only once, by the server, which significantly reduces loading and access times on the client side. The main contribution of our conservation-analysis tool, in turn, is the parallelisation of the classic methods applied in this field, reaching speed-ups close to the theoretical maximum without loss of precision. This was possible because those methods were implemented from scratch, incorporating instruction-level parallelism, instead of parallelising existing implementations. As a result, our tool generates a report with the conclusions of the conservation analysis performed; the user can provide a conservation threshold so that the report highlights only the positions that do not satisfy it, and two report types with different levels of detail are available, both designed to be understandable and useful to users (a minimal sketch of the per-column idea is given after the reference list below).
    Finally, a predictor of pathogenic mutations in mitochondrial DNA, built on support vector machines (SVM), has been designed and implemented: Mitoclass.1 [13]. It is the first predictor for this type of biological sequence; indeed, it was necessary to create the first repository of known pathogenic mutations, mdmv.1, in order to train and evaluate it. Mitoclass.1 has been shown to improve the classification of mutations compared with the best-known and most widely used predictors, all of them oriented towards pathogenicity in nuclear DNA. This success lies in the novel combination of properties evaluated for each mutation during classification, and in the use of SVM over other alternatives, which were tested and discarded because of their lower predictive power for our application case.
    REFERENCES
    [1] L. Wang and T. Jiang, “On the complexity of multiple sequence alignment,” Journal of Computational Biology, vol. 1, no. 4, pp. 337–348, 1994.
    [2] W. H. E. Day, D. S. Johnson, and D. Sankoff, “The computational complexity of inferring rooted phylogenies by parsimony,” Mathematical Biosciences, vol. 81, no. 1, pp. 33–42, 1986.
    [3] S. Roch, “A short proof that phylogenetic tree reconstruction by maximum likelihood is hard,” IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), vol. 3, no. 1, p. 92, 2006.
    [4] E. R. Mardis, “The impact of next-generation sequencing technology on genetics,” Trends in Genetics, vol. 24, no. 3, pp. 133–141, 2008.
    [5] J. Álvarez-Jarreta, G. de Miguel Casado, and E. Mayordomo, “PhyloFlow: A Fully Customizable and Automatic Workflow for Phylogeny Estimation,” in ECCB 2014, 2014.
    [6] J. Álvarez-Jarreta, G. de Miguel Casado, and E. Mayordomo, “PhyloFlow: A Fully Customizable and Automatic Workflow for Phylogenetic Reconstruction,” in IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 1–7, IEEE, 2014.
    [7] J. Álvarez, R. Blanco, and E. Mayordomo, “Workflows with Model Selection: A Multilocus Approach to Phylogenetic Analysis,” in 5th International Conference on Practical Applications of Computational Biology & Bioinformatics (PACBB 2011), vol. 93 of Advances in Intelligent and Soft Computing, pp. 39–47, Springer Berlin Heidelberg, 2011.
    [8] J. Álvarez-Jarreta and E. Ruiz-Pesini, “MEvoLib v1.0: the First Molecular Evolution Library for Python,” BMC Bioinformatics, vol. 17, no. 436, pp. 1–8, 2016.
    [9] M. van Oven and M. Kayser, “Updated comprehensive phylogenetic tree of global human mitochondrial DNA variation,” Human Mutation, vol. 30, no. 2, pp. E386–E394, 2009.
    [10] J. Álvarez-Jarreta, E. Mayordomo, and E. Ruiz-Pesini, “PHYSER: An Algorithm to Detect Sequencing Errors from Phylogenetic Information,” in 6th International Conference on Practical Applications of Computational Biology & Bioinformatics (PACBB 2012), pp. 105–112, 2012.
    [11] J. Álvarez-Jarreta and G. de Miguel Casado, “PhyloViewer: A Phylogenetic Tree Viewer for Extense Phylogenies,” in ECCB 2014, 2014.
    [12] F. Merino-Casallo, J. Álvarez-Jarreta, and E. Mayordomo, “Conservation in mitochondrial DNA: Parallelized estimation and alignment influence,” in 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM 2015), pp. 1434–1440, IEEE, 2015.
    [13] A. MartĂ­n-Navarro, A. Gaudioso-SimĂłn, J. Álvarez-Jarreta, J. Montoya, E. Mayordomo, and E. Ruiz-Pesini, “Machine learning classifier for identification of damaging missense mutations exclusive to human mitochondrial DNA-encoded polypeptides,” BMC Bioinformatics, vol. 18, no. 158, pp. 1–11, 2017.
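
    Returning to the conservation-analysis tool described above [12]: the classic per-column measures it parallelises can be approximated by a Shannon-entropy score per alignment column. The sketch below is a minimal illustration of that idea using process-level parallelism, not the tool's instruction-level implementation, and the toy alignment is invented.

```python
# Minimal per-column conservation score (Shannon entropy) over an alignment,
# evaluated in parallel with a process pool. Illustrative only; the tool
# described above parallelises classic estimators at the instruction level.
import math
from collections import Counter
from multiprocessing import Pool

ALIGNMENT = ["ACGTA", "ACGTT", "ACGCA", "ACGTA"]  # toy aligned sequences

def column_entropy(column):
    counts = Counter(column)
    total = len(column)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def conservation_scores(alignment):
    columns = ["".join(col) for col in zip(*alignment)]
    with Pool() as pool:
        return pool.map(column_entropy, columns)

if __name__ == "__main__":
    # Lower entropy means a more conserved position.
    print([round(s, 3) for s in conservation_scores(ALIGNMENT)])
```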

    Adaptive monitoring and control framework in Application Service Management environment

    The economics of data centres and cloud computing services have pushed hardware and software requirements to the limits, leaving only a very small performance margin before systems reach saturation. For Application Service Management (ASM), this carries the growing risk of impacting the execution times of various processes. In order to deliver a stable service at times of great demand for computational power, enterprise data centres and cloud providers must implement fast and robust control mechanisms that are capable of adapting to changing operating conditions while satisfying service-level agreements. In ASM practice, there are normally two methods for dealing with increased load: increasing computational power or shedding load. The first approach typically involves allocating additional machines, which must be available, waiting idle, to deal with high-demand situations. The second approach is implemented by terminating incoming actions that are less important under the new activity demand patterns, throttling, or rescheduling jobs. Although most modern cloud platforms and operating systems do not allow adaptive or automatic termination of processes, tasks or actions, it is common practice for administrators to manually end or stop tasks or actions at any level of the system, such as a node, function, or process, or to kill a long-running session executing on a database server. In this context, adaptive control of action termination remains a significantly underutilised subject of Application Service Management and deserves further consideration. For example, this approach may be eminently suitable for systems with strict execution-time Service Level Agreements, such as real-time systems, or systems running under hard pressure on power supplies, under variable priority, or under constraints set by the green computing paradigm. Along this line of work, the thesis investigates the potential of dimension relevance and metric-signal decomposition as methods that would enable more efficient action termination. These methods are integrated into adaptive control emulators and actuators powered by neural networks, which adjust the operation of the system towards better conditions in environments with goals defined from both system-performance and economic perspectives. The behaviour of the proposed control framework is evaluated using complex load and service-agreement scenarios for systems compatible with the requirements of on-premises and elastic compute cloud deployments, serverless computing, and microservices architectures.
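
    As a rough illustration of adaptive action termination under an execution-time SLA, consider the toy policy below: when the monitored latency breaches the SLA threshold, the lowest-priority running actions are shed first. It is a simplified stand-in for the emulator/actuator framework described above; the thresholds, action fields, and decision rule are invented for the example.

```python
# Toy adaptive-termination policy: when the monitored latency metric breaches
# the SLA threshold, terminate the lowest-priority running actions first.
# Thresholds, fields, and the decision rule are illustrative assumptions.
SLA_LATENCY_MS = 200

running_actions = [
    {"id": "report-batch", "priority": 1, "est_cost_ms": 120},
    {"id": "checkout",     "priority": 9, "est_cost_ms": 40},
    {"id": "log-rotate",   "priority": 2, "est_cost_ms": 60},
]

def actions_to_terminate(actions, observed_latency_ms):
    """Pick low-priority actions to shed until the estimated saving covers the breach."""
    breach = observed_latency_ms - SLA_LATENCY_MS
    if breach <= 0:
        return []
    to_kill, saved = [], 0
    for action in sorted(actions, key=lambda a: a["priority"]):
        if saved >= breach:
            break
        to_kill.append(action["id"])
        saved += action["est_cost_ms"]
    return to_kill

print(actions_to_terminate(running_actions, observed_latency_ms=320))
# ['report-batch'] -- low-priority work is shed until the breach is covered
```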

    Gamification Analytics: Support for Monitoring and Adapting Gamification Designs

    Inspired by the engaging effects in video games, gamification aims at motivating people to show desired behaviors in a variety of contexts. During the last years, gamification has influenced the design of many software applications in the consumer as well as the enterprise domain. In some cases, even whole businesses, such as Foursquare, owe their success to well-designed gamification mechanisms in their product. Gamification has also attracted the interest of academics from fields such as human-computer interaction, marketing, psychology, and software engineering. Scientific contributions comprise psychological theories and models to better understand the mechanisms behind successful gamification, case studies that measure the psychological and behavioral outcomes of gamification, methodologies for gamification projects, and technical concepts for platforms that support implementing gamification in an efficient manner. Given a new project, gamification experts can leverage the existing body of knowledge to reuse previous, or derive new, gamification ideas. However, there is no one-size-fits-all approach for creating engaging gamification designs. Gamification success always depends on a wide variety of factors defined by the characteristics of the audience, the gamified application, and the chosen gamification design. In contrast to researchers, gamification experts in industry rarely have the necessary skills and resources to assess the success of their gamification design systematically. Therefore, it is essential to provide them with suitable support mechanisms that help to assess and improve gamification designs continuously. Providing suitable and efficient gamification analytics support is the ultimate goal of this thesis. This work presents a study with gamification experts that identifies relevant requirements in the context of gamification analytics. Given the identified requirements and earlier work in the analytics domain, this thesis then derives a set of gamification analytics-related activities and uses them to extend an existing process model for gamification projects. The resulting model can be used by experts to plan and execute their gamification projects with analytics in mind. Next, this work identifies existing tools and assesses their applicability in gamification projects. The results can help experts make objective technology decisions; however, they also show that most tools have significant gaps with respect to the identified user requirements. Consequently, a technical concept for a suitable realization of gamification analytics is derived. It describes a loosely coupled analytics service that helps gamification experts to seamlessly collect and analyze gamification-related data while minimizing dependencies on IT experts. The concept is evaluated successfully via the implementation of a prototype and its application in two real-world gamification projects. The results show that the presented gamification analytics concept is technically feasible, applicable to actual projects, and valuable for the systematic monitoring of gamification success.
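
    A loosely coupled analytics service of the kind described boils down to collecting gamification events and aggregating them into indicators the expert cares about. The sketch below shows the shape of such a pipeline; the event schema and the "badges per active user" metric are assumptions for illustration, not the thesis's concept or prototype.

```python
# Illustrative gamification-analytics sketch: collect events, then compute a
# simple engagement indicator. The event fields and metric are assumptions.
from collections import defaultdict

events = []  # in a real service this would be a durable store behind an API

def track(user_id, event_type):
    """Record a single gamification-related event."""
    events.append({"user": user_id, "type": event_type})

def badges_per_active_user():
    badges = defaultdict(int)
    active_users = set()
    for e in events:
        active_users.add(e["user"])
        if e["type"] == "badge_awarded":
            badges[e["user"]] += 1
    return sum(badges.values()) / len(active_users) if active_users else 0.0

track("alice", "login")
track("alice", "badge_awarded")
track("bob", "login")
print(badges_per_active_user())  # 0.5
```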
    • 

    corecore