
    Empowering Machine Learning Development with Service-Oriented Computing Principles

    Despite the software industry’s successful use of Service-Oriented Computing (SOC) to streamline software development, machine learning (ML) development has yet to fully integrate these practices. This disparity can be attributed to multiple factors, such as the unique challenges inherent to ML development and the absence of a unified framework for incorporating services into this process. In this paper, we shed light on the disparities between service-oriented computing and machine learning development. To bridge this gap, we propose “Everything as a Module” (XaaM), a framework designed to encapsulate every ML artifact, including models, code, data, and configurations, as an individual module. We also propose a set of additional steps for empowering machine learning development with service-oriented computing, via an architecture that facilitates efficient management and orchestration of complex ML systems. By leveraging the best practices of service-oriented computing, we believe that machine learning development can achieve a higher level of maturity, improve the efficiency of the development process, and ultimately facilitate the more effective creation of machine learning applications.
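The “Everything as a Module” idea can be illustrated with a minimal sketch in Python. The `Module` and `Registry` names, fields, and interface below are assumptions invented for the example, not the paper’s actual API; the point is only that heterogeneous ML artifacts (a config, a model, data) sit behind one uniform module interface that a registry can manage.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class Module:
    """A uniform wrapper ("module") around an ML artifact, in the XaaM spirit."""
    name: str
    kind: str                               # e.g. "model", "code", "data", "config"
    payload: Any                            # the wrapped artifact itself
    metadata: Dict[str, str] = field(default_factory=dict)

class Registry:
    """Minimal registry that manages and looks up modules by name."""
    def __init__(self) -> None:
        self._modules: Dict[str, Module] = {}

    def register(self, module: Module) -> None:
        self._modules[module.name] = module

    def get(self, name: str) -> Module:
        return self._modules[name]

# Heterogeneous ML artifacts behind the same interface.
registry = Registry()
registry.register(Module("scaler-config", "config", {"mean": 0.0, "std": 1.0}))
registry.register(Module("double", "model", lambda x: 2 * x))

print(registry.get("double").payload(21))  # 42
```

A real system would add versioning, typed interfaces, and remote invocation, but the uniform wrapper is the core of the encapsulation argument.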

    Cloud provider independence using DevOps methodologies with Infrastructure-as-Code

    When choosing cloud computing infrastructure for IT needs, there is a risk of becoming dependent on, and locked into, a specific cloud provider, from which it becomes difficult to switch should an organization later decide to move all of its infrastructure to a different provider. There is widespread information available on how to migrate existing infrastructure to the cloud; nevertheless, common cloud solutions and providers offer no clear path or framework to support their tenants in migrating off the cloud to another provider or to a cloud infrastructure with similar service levels, should they decide to do so. Under these circumstances it becomes difficult to switch cloud providers, not just because of the technical complexity of recreating the entire infrastructure from scratch and moving the related data, but also because of the cost involved. One possible solution is to use languages for defining infrastructure as code (“Infrastructure-as-Code”), combined with DevOps methodologies and technologies, to create a mechanism that helps streamline the migration process between different cloud infrastructures, especially if taken into account from the beginning of a project. A well-structured DevOps methodology combined with Infrastructure-as-Code may allow more integrated control over cloud resources, as those can be defined and controlled with specific languages and submitted to automation processes. Such definitions must take into account what the chosen cloud infrastructure’s APIs currently provide to support those operations, always seeking to guarantee the tenant a higher degree of control over its infrastructure and better preparation of the steps necessary to recreate or migrate that infrastructure should the need arise, thereby integrating cloud resources as part of a development model.
The objective of this dissertation is to create a conceptual reference framework that identifies different forms of migration of IT infrastructure while always contemplating greater provider independence through such mechanisms, as well as to identify possible constraints or obstacles to this approach. Such a framework can be referenced from the beginning of a development project if foreseeable changes of infrastructure or provider are a possibility in the future, taking into account what the APIs provide in order to make such transitions easier.
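The provider-independence mechanism described above can be sketched in miniature. This is a toy illustration in Python rather than a real IaC language such as Terraform; the `ProviderA`/`ProviderB` adapters and the resource schema are invented for the example. The idea is that declarative resource definitions are kept separate from provider-specific APIs, so migration means swapping the adapter, not rewriting the definitions.

```python
from abc import ABC, abstractmethod
from typing import Dict, List

# Declarative resource definitions (the "code" in Infrastructure-as-Code).
RESOURCES: List[Dict[str, str]] = [
    {"type": "vm", "name": "web-1", "size": "small"},
    {"type": "bucket", "name": "assets"},
]

class CloudProvider(ABC):
    """Provider adapter: every cloud implements the same operations."""
    @abstractmethod
    def create(self, resource: Dict[str, str]) -> str: ...

class ProviderA(CloudProvider):
    def create(self, resource: Dict[str, str]) -> str:
        return f"A:{resource['type']}/{resource['name']}"

class ProviderB(CloudProvider):
    def create(self, resource: Dict[str, str]) -> str:
        return f"B:{resource['type']}/{resource['name']}"

def deploy(provider: CloudProvider) -> List[str]:
    # The same definitions drive any provider, which eases migration.
    return [provider.create(r) for r in RESOURCES]

print(deploy(ProviderA()))
print(deploy(ProviderB()))  # same infrastructure, different provider
```

In practice the adapters would call real provider APIs, and the constraints noted above (what each API actually supports) determine how leaky this abstraction is.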

    A framework for SLA-centric service-based Utility Computing

    Service-oriented Utility Computing paves the way towards the realization of service markets, which promise metered services through negotiable Service Level Agreements (SLAs). A market does not necessarily imply a simple buyer-seller relationship; rather, it is the culmination of a complex chain of stakeholders with a hierarchical integration of value along each link in the chain. In service value chains, services corresponding to different partners are aggregated in a producer-consumer manner, resulting in hierarchical structures of added value. SLAs are contracts between service providers and service consumers which ensure the expected Quality of Service (QoS) to the different stakeholders at various levels in this hierarchy. This thesis addresses the challenge of realizing an SLA-centric infrastructure to enable service markets for Utility Computing. Service Level Agreements play a pivotal role throughout the life cycle of service aggregation. The activities of service selection and service negotiation, followed by the hierarchical aggregation and validation of services in the service value chain, require SLAs as an enabling technology. This research aims at an SLA-centric framework in which the requirement-driven selection of services, flexible SLA negotiation, hierarchical SLA aggregation and validation, and related issues such as privacy, trust and security have been formalized, and in which prototypes of the service selection model and the validation model have been implemented. The formal model for user-driven service selection uses Branch and Bound and heuristic algorithms for its implementation. The formal model is then extended to SLA negotiation of configurable services of varying granularity, in order to balance the interests of service consumers and service providers.
The possibility of service aggregation opens new business opportunities in the evolving landscape of the IT-based service economy. An SLA, as a unit of business relationships, helps establish innovative topologies for business networks. One example is the composition of computational services to construct services of bigger granularity, making room for business models based on service aggregation, composite service provision and reselling. This research introduces and formalizes the notions of SLA Choreography and hierarchical SLA aggregation in connection with the underlying service choreography, to realize SLA-centric service value chains and business networks. SLA Choreography and aggregation pose new challenges regarding description, management, maintenance, validation, trust, privacy and security. The aggregation and validation models for SLA Choreography introduce concepts such as: SLA Views, to protect the privacy of stakeholders; a hybrid trust model, to foster business among unknown partners; and a PKI security mechanism coupled with a rule-based validation system, to enable distributed queries across heterogeneous boundaries. A distributed, rule-based, hierarchical SLA validation system is designed to demonstrate the practical significance of these notions.
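The abstract states that user-driven service selection is implemented with Branch and Bound and heuristic algorithms. As a rough illustration of the Branch and Bound part only (the candidate services, the latency metric, and the budget below are invented for the example), one can minimize total cost over one service choice per task, pruning branches that exceed a QoS budget or cannot beat the best cost found so far:

```python
import math
from typing import List, Tuple

# Candidate services per task as (cost, latency) pairs -- invented numbers.
CANDIDATES: List[List[Tuple[float, float]]] = [
    [(5, 10), (8, 4)],   # task 0
    [(3, 6), (6, 2)],    # task 1
]
LATENCY_BUDGET = 12.0

def select() -> Tuple[float, Tuple[int, ...]]:
    """Pick one service per task, minimizing cost under a latency budget."""
    best = {"cost": math.inf, "choice": None}

    def branch(i: int, cost: float, latency: float, chosen: Tuple[int, ...]) -> None:
        # Bound: drop branches that violate the budget or cannot improve the best.
        if latency > LATENCY_BUDGET or cost >= best["cost"]:
            return
        if i == len(CANDIDATES):
            best["cost"], best["choice"] = cost, chosen
            return
        # Branch: try each candidate service for task i.
        for j, (c, l) in enumerate(CANDIDATES[i]):
            branch(i + 1, cost + c, latency + l, chosen + (j,))

    branch(0, 0.0, 0.0, ())
    return best["cost"], best["choice"]

print(select())  # (11.0, (0, 1)): cheapest selection within the budget
```

The thesis’ actual models negotiate over richer SLA terms; the pruning rule above is the generic mechanism that makes Branch and Bound tractable.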

    Development of a real-time business intelligence (BI) framework based on hex-elementization of data points for accurate business decision-making

    The desire to use business intelligence (BI) to enhance the efficiency and effectiveness of business decisions is neither new nor revolutionary. The promise of BI is to provide the ability to capture interrelationships from data and information to guide action towards a business goal. Although BI has been around since the 1960s, businesses still cannot get competitive information in the form they want, when they want and how they want. Business decisions are already full of challenges. The challenges in business decision-making include the use of a vast amount of data, adopting new technologies, and making decisions on a real-time basis. To address these challenges, businesses spend valuable time and resources on data, technologies and business processes. Integration of data in decision-making is crucial for modern businesses. This research aims to propose and validate a framework for the organic integration of data into business decision-making. The proposed framework enables efficient business decisions in real time. The core of this research is to understand and modularise a pre-established set of data points into intelligent and granular “hex-elements” (stated simply, a hex-element is a data point with six properties). These intelligent hex-elements use their six properties to build semi-automatic relationships between large-volume, high-velocity data points in a dynamic, automated and integrated manner. The proposed business intelligence framework is called “Hex-Elementization” (or “Hex-E” for short). The evolution of technology presents ongoing challenges to BI. These challenges emanate from the nature of the underlying new-age data, characterised by large volume, high velocity and wide variety. Efficient and effective analysis of such data depends on the business context and the corresponding technical capabilities of the organisation.
Technologies like Big Data, the Internet of Things (IoT), Artificial Intelligence (AI) and Machine Learning (ML) play a key role in capitalising on the variety, volume and veracity of data. Extricating the “value” from data in its various forms, depth and scale requires synchronizing technologies with analytics and business processes. Transforming data into useful and actionable intelligence is the discipline of data scientists. Data scientists and data analysts use sophisticated tools to crunch data into information which, in turn, is converted into intelligence. The transformation of data into information and its final consumption as actionable business intelligence is an end-to-end journey. This end-to-end transformation of data to intelligence is complex, time-consuming and resource-intensive. This research explores approaches to ease the challenges of the end-to-end transformation of data into intelligence. It presents Hex-E as a simplified and semi-automated framework to integrate, unify, correlate and coalesce data from diverse sources and disparate formats into intelligence, helping businesses make accurate and timely decisions.
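The abstract defines a hex-element only as “a data point with six properties”. A minimal sketch, assuming six illustrative property names that the thesis does not enumerate here, might look like the following, with a crude relationship score that counts the non-value properties two elements share:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class HexElement:
    """A data point with six properties (property names are hypothetical;
    the abstract does not enumerate them)."""
    value: object
    source: str
    timestamp: str
    dtype: str
    context: str
    quality: str

def related(a: HexElement, b: HexElement) -> int:
    """Count shared properties (excluding the raw value) as a naive
    relationship score between two hex-elements."""
    pa, pb = asdict(a), asdict(b)
    return sum(pa[k] == pb[k] for k in pa if k != "value")

e1 = HexElement(21.5, "sensor-1", "2024-01-01T00:00", "float", "temperature", "high")
e2 = HexElement(22.0, "sensor-2", "2024-01-01T00:00", "float", "temperature", "high")
print(related(e1, e2))  # shares timestamp, dtype, context, quality -> 4
```

The framework’s “semi-automatic relationships” would presumably be richer than this equality count, but the sketch shows how per-point properties make relationship-building mechanical.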

    A new MDA-SOA based framework for intercloud interoperability

    Cloud computing has been one of the most important topics in Information Technology; it aims to assure scalable and reliable on-demand services over the Internet. Expanding the application scope of cloud services requires cooperation between clouds from different providers that have heterogeneous functionalities. Such collaboration between different cloud vendors can provide better Quality of Service (QoS) at a lower price. However, current cloud systems have been developed without concern for seamless cloud interconnection, and in practice they do not support the intercloud interoperability needed to enable collaboration between cloud service providers. Hence, this PhD work addresses interoperability between cloud providers as a challenging research objective. This thesis proposes a new framework that supports inter-cloud interoperability in a heterogeneous cloud computing environment, with the goal of dispatching the workload to the most effective clouds available at runtime. Analysing the different methodologies that have been applied to various interoperability problem scenarios leads us to adopt Model Driven Architecture (MDA) and Service Oriented Architecture (SOA) methods as appropriate approaches for our inter-cloud framework. Moreover, since distributing operations in a cloud-based environment is an NP-complete problem, a Genetic Algorithm (GA) based job scheduler is proposed as part of the interoperability framework, offering workload migration with the best performance at the least cost. A new Agent-Based Simulation (ABS) approach is proposed to model the inter-cloud environment with three types of agents: Cloud Subscriber agents, Cloud Provider agents, and Job agents.
This ABS model is used to evaluate the proposed framework. Funded by Fundação para a Ciência e a Tecnologia (FCT) (grant SFRH / BD / 33965 / 2009) and the EC 7th Framework Programme under grant agreement n° 604674 FITMAN (http://www.fitman-fi.eu).
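A GA-based job scheduler of the kind described can be sketched in miniature. The cost matrix, population size, and genetic operators below are invented for illustration; the thesis’ scheduler optimizes workload migration for performance and cost across real clouds.

```python
import random

random.seed(0)  # deterministic demo run

# COST[c][j]: cost of running job j on cloud c (illustrative numbers).
COST = [
    [4, 2, 7],   # cloud 0
    [3, 5, 1],   # cloud 1
]
N_JOBS, N_CLOUDS = 3, 2

def fitness(assignment):
    """Total cost of a schedule; lower is better."""
    return sum(COST[c][j] for j, c in enumerate(assignment))

def evolve(pop_size=20, generations=50):
    # Individuals map each job to a cloud, e.g. [1, 0, 1].
    pop = [[random.randrange(N_CLOUDS) for _ in range(N_JOBS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]            # selection: keep fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_JOBS)     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:             # occasional mutation
                child[random.randrange(N_JOBS)] = random.randrange(N_CLOUDS)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))  # the optimum for this matrix is cost 6
```

For this tiny search space the GA is overkill, but the same structure (selection, crossover, mutation over job-to-cloud assignments) scales to the NP-complete instances the thesis targets.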

    Will operators survive in a constantly evolving world? Technical considerations leading to disruption scenarios

    The telecommunications industry is going through a difficult phase because of profound technological changes, mainly driven by the development of the Internet. These changes have a major impact on the telecommunications industry as a whole and, consequently, on the future deployment of new networks, platforms and services. The evolution of the Internet has a particularly strong impact on telecommunications operators (Telcos). In fact, the telecommunications industry is on the verge of major changes due to many factors, such as the gradual commoditization of connectivity, the dominance of web services companies (Webcos) in the services domain, and the growing importance of software-based solutions and the flexibility they introduce (compared to the static systems of telecom operators). This thesis develops, proposes and compares plausible future scenarios based on solutions and approaches that will be technologically feasible and viable. The identified scenarios cover a wide range of possibilities: 1) Traditional Telco; 2) Telco as Bit Carrier; 3) Telco as Platform Provider; 4) Telco as Service Provider; 5) Telco Disappearance. For each scenario, a viable platform (from the point of view of telecom operators) is described, highlighting the enabled service portfolio and its potential benefits.

    Internet of Things and the Law: Legal Strategies for Consumer-Centric Smart Technologies

    Internet of Things and the Law: Legal Strategies for Consumer-Centric Smart Technologies is the most comprehensive and up-to-date analysis of the legal issues in the Internet of Things (IoT). For decades, the decreasing importance of tangible wealth and power – and the increasing significance of their disembodied counterparts – has been the subject of much legal research. For some time now, legal scholars have grappled with how laws drafted for tangible property and predigital ‘offline’ technologies can cope with dematerialisation, digitalisation, and the internet. As dematerialisation continues, this book aims to illuminate the opposite movement: rematerialisation, namely, the return of data, knowledge, and power within a physical ‘smart’ world. This development frames the book’s central question: can the law steer rematerialisation in a human-centric and socially just direction? To answer it, the book focuses on the IoT, the sociotechnological phenomenon that is primarily responsible for this shift. After a thorough analysis of how existing laws can be interpreted to empower IoT end users, Noto La Diega leaves us with the fundamental question of what happens when the law fails us and concludes with a call for collective resistance against ‘smart’ capitalism.


    The Future of Financial Systems in the Digital Age

    This book is open access, which means that you have free and unlimited access. The increasing capacity of digital networks and computing power, together with the resulting connectivity and availability of “big data”, are impacting financial systems worldwide with rapidly advancing deep-learning algorithms and distributed ledger technologies. They transform the structure and performance of financial markets, the service proposition of financial products, the organization of payment systems, the business models of banks, insurance companies and other financial service providers, as well as the design of money supply regimes and central banking. This book, The Future of Financial Systems in the Digital Age: Perspectives from Europe and Japan, brings together leading scholars, policymakers, and regulators from Japan and Europe, all with a profound and long professional background in the field of finance, to analyze the digital transformation of the financial system. The authors analyze the impact of digitalization on the financial system from different perspectives, such as transaction costs, and with regard to specific topics like the potential of digital and blockchain-based currency systems, the role of algorithmic trading, obstacles in the use of cashless payments, the challenges of regulatory oversight, and the transformation of banking business models. The collection of chapters offers insights from Japanese and European discourses, approaches, and experiences on a topic otherwise dominated by studies about developments in the USA and China.