39 research outputs found

    Strategic and operational services for workload management in the cloud

    In hosting environments such as Infrastructure as a Service (IaaS) clouds, desirable application performance is typically guaranteed through the use of Service Level Agreements (SLAs), which specify minimal fractions of resource capacities that must be allocated by a service provider for unencumbered use by customers to ensure proper operation of their workloads. Most IaaS offerings are presented to customers as fixed-size, fixed-price SLAs that do not match the needs of specific applications well. Furthermore, arbitrary colocation of applications with different SLAs may result in inefficient utilization of hosts' resources and, in turn, economically undesirable customer behavior. In this thesis, we propose the design and architecture of a Colocation as a Service (CaaS) framework: a set of strategic and operational services that allow the efficient colocation of customer workloads. CaaS strategic services provide customers the means to specify their application workloads using an SLA language that gives them the opportunity and incentive to take advantage of any tolerances they may have regarding the scheduling of their workloads. CaaS operational services provide the information necessary for, and carry out, the reconfigurations mandated by the strategic services. We recognize that there may be multiple, functionally equivalent ways to express an SLA. To that end, we present a service that allows the provably-safe transformation of SLAs from one form to another for the purpose of achieving more efficient colocation. Our CaaS framework could be incorporated into an IaaS offering by providers, or implemented as a value-added proposition by IaaS resellers. To establish the practicality of such offerings, we present a prototype implementation of our proposed CaaS framework. (Major Advisor: Azer Bestavros)
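
    As an illustration of the kind of SLA flexibility the strategic services exploit, the sketch below models an SLA as a periodic resource allocation (quota, period) and checks one simple sufficient condition for a customer-safe rewrite. The (quota, period) form and the safety condition are simplifying assumptions for exposition, not the thesis's formal transformation calculus.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SLA:
    """Periodic resource SLA: `quota` capacity units in every `period` time units."""
    quota: float
    period: float

    @property
    def utilization(self) -> float:
        return self.quota / self.period

def is_safe_transformation(original: SLA, candidate: SLA) -> bool:
    """One simple sufficient condition for a customer-safe rewrite:
    the candidate allocates at least the same utilization over an equal
    or shorter accounting period, so no window of the original schedule
    can receive less service than before."""
    return (candidate.utilization >= original.utilization
            and candidate.period <= original.period)

# Example: rewriting a coarse SLA into a finer-grained, equivalent one
# so the provider can pack it with complementary workloads on one host.
coarse = SLA(quota=40, period=100)  # 40% of a host, accounted every 100 ms
fine = SLA(quota=4, period=10)      # same 40%, accounted every 10 ms
assert is_safe_transformation(coarse, fine)
```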

    Supply Chain (micro)TMS development

    Project work presented as a partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. The rise of technology across many verticals has pushed companies toward digitalization. Although "XPTO" is a well-known player in retail, with success in the domestic e-commerce market, the company pursued a strategy of continuous innovation to drive business growth and strengthen its position as a premium brand. It decided to adopt cloud-based solutions to gain the advantages of a microservices architecture: optimized logistics and supply chain management, faster workflows and maximized service efficiency. An agile organization is not achieved purely by shifting the focus away from traditional, functionally or technologically oriented structures. The new way of organizing teams must reflect agile principles and the right segregation of roles, which will be the most immediate and visible disruption and cutover from the traditional way of managing IT. In this project, we aim to use an agile framework to develop, in house, a cloud microservice (micro)TMS solution that addresses the immediate needs imposed by the market, in order to use it as a competitive advantage.

    Optimisation of the enactment of fine-grained distributed data-intensive workflows

    The emergence of data-intensive science as the fourth science paradigm has posed a data-deluge challenge for enacting scientific workflows. The scientific community is facing an imminent flood of data from the next generation of experiments and simulations, besides dealing with the heterogeneity and complexity of data, applications and execution environments. New scientific workflows involve execution on distributed and heterogeneous computing resources across organisational and geographical boundaries, processing gigabytes of live data streams and petabytes of archived and simulation data, in various formats and from multiple sources. Managing the enactment of such workflows not only requires larger storage space and faster machines, but also the capability to support the scalability and diversity of the users, applications, data, computing resources and enactment technologies. We argue that the enactment process can be made efficient using optimisation techniques in an appropriate architecture. This architecture should support the creation of diversified applications and their enactment on diversified execution environments, with a standard interface, i.e. a workflow language. The workflow language should be both human-readable and suitable for communication between the enactment environments. The data-streaming model central to this architecture provides a scalable approach to large-scale data exploitation: data flow between computational elements in the scientific workflow is implemented as streams. To cope with the exploratory nature of scientific workflows, the architecture should support fast workflow prototyping and the re-use of workflows and workflow components. Above all, the enactment process should be easily repeated and automated. In this thesis, we present a candidate data-intensive architecture that includes an intermediate workflow language, named DISPEL. We create a new fine-grained measurement framework to capture performance-related data during enactments, and design a performance database to organise the data systematically. We propose a new enactment strategy to demonstrate that optimisation of data-streaming workflows can be automated by exploiting performance data gathered during previous enactments.
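
    The data-streaming model described above can be illustrated with ordinary Python generators: each processing element consumes and produces a stream, so composing elements never materialises intermediate datasets. This is a minimal illustrative sketch, not DISPEL itself; all names are invented.

```python
from typing import Iterator

# Three toy processing elements connected by streams. Each consumes an
# iterator and yields results incrementally, so no stage needs to hold
# the whole dataset in memory -- the essence of the data-streaming model.

def read_source(n: int) -> Iterator[float]:
    """Stand-in for a live instrument or an archive reader."""
    for i in range(n):
        yield float(i)

def transform(stream: Iterator[float]) -> Iterator[float]:
    """A per-record computation, e.g. calibration or unit conversion."""
    for record in stream:
        yield record * 2.0

def sink(stream: Iterator[float]) -> float:
    """A terminal element that folds the stream into a single result."""
    total = 0.0
    for record in stream:
        total += record
    return total

# Composing elements mirrors wiring processing elements in a workflow
# language: the pipeline below never holds more than one record at a time.
print(sink(transform(read_source(1_000_000))))
```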

    A modern teaching environment for process automation

    The emergence of new technological trends such as Open Platform Communications Unified Architecture (OPC UA), Industrial Ethernet, cloud computing and fifth-generation wireless networks (5G) has enabled the implementation of Cyber-Physical Systems (CPS) with flexible, configurable, scalable and interoperable business models. This provides new opportunities for process automation systems. On the other hand, the constant pressure on industries for cost- and material-efficient processes demands a new automation paradigm built on the latest tools and technologies, which should be taken into account while teaching future automation engineers. In this thesis, a modern teaching environment for process automation is designed, implemented and described. This work explains the connection, configuration and testing of three mini-plants: the Multiple Heat Exchanger, the Three-Tank System and the Mixing Tank. In addition, OPC UA communication between the server and its clients has been tested. The plants are part of a state-of-the-art architecture that provides ABB 800xA with access to cloud services via OPC UA over a 5G test wireless network. This new paradigm changes the old automation hierarchy and enables cross-layer communication in the old architecture. The teaching environment prepares students for future automation challenges with the latest tools and merges data analytics, cloud computing and wireless network studies with process automation. It also offers the chance to test these future trends together in a unique process automation setup.
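
    A minimal sketch of the kind of OPC UA client test described above, using the community `opcua` (FreeOpcUa) Python package. The endpoint URL and node identifier are placeholders, not the lab's actual addresses.

```python
# Connect to an OPC UA server and read one process variable.
from opcua import Client

client = Client("opc.tcp://192.168.0.10:4840")  # hypothetical server endpoint
try:
    client.connect()
    # Read a process variable, e.g. a tank level, published by the server.
    level_node = client.get_node("ns=2;i=1001")  # hypothetical node id
    print("Tank level:", level_node.get_value())
finally:
    client.disconnect()
```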

    Acquisition and Declarative Analytical Processing of Spatio-Temporal Observation Data

    A generic framework for spatio-temporal observation data acquisition and declarative analytical processing has been designed and implemented in this Thesis. The main contributions of this Thesis may be summarized as follows: 1) generalization of a data acquisition and dissemination server, with great applicability in many scientific and industrial domains, providing flexibility in the incorporation of different technologies for data acquisition, data persistence and data dissemination; 2) definition of a new hybrid logical-functional paradigm to formalize a novel data model for the integrated management of entity and sampled data; 3) definition of a novel spatio-temporal declarative data analysis language for the previous data model; 4) definition of a data warehouse data model supporting observation data semantics, including application of the above language to the declarative definition of observation processes executed during observation data load; and 5) a column-oriented, parallel and distributed implementation of the spatial analysis declarative language. The huge amount of data to be processed forces the exploitation of current multi-core hardware architectures and multi-node cluster infrastructures.
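
    As a rough illustration of declarative analysis over entity and sampled observation data, the sketch below expresses a spatio-temporal aggregation over a toy observation table with pandas. The schema and the query are invented examples, not the Thesis's language or data model.

```python
import pandas as pd

# Sampled observations: which entity was measured, where, when, and the value.
observations = pd.DataFrame({
    "entity":   ["buoy_1", "buoy_1", "buoy_2", "buoy_2"],
    "time":     pd.to_datetime(["2020-01-01 00:00", "2020-01-01 01:00",
                                "2020-01-01 00:00", "2020-01-01 01:00"]),
    "lat":      [42.1, 42.1, 42.3, 42.3],
    "lon":      [-8.7, -8.7, -8.9, -8.9],
    "sea_temp": [13.2, 13.4, 12.8, 12.9],
})

# A declarative-style spatio-temporal query: hourly mean temperature per entity.
hourly = (observations
          .groupby(["entity", pd.Grouper(key="time", freq="1h")])["sea_temp"]
          .mean())
print(hourly)
```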

    Planetary Science Informatics and Data Analytics Conference: April 24–26, 2018, St. Louis, Missouri

    The PSIDA conference provides a forum to discuss approaches, challenges, and applications of informatics and data analytics technologies and capabilities in planetary science. Institutional support: NASA Planetary Data System Geosciences, Lunar and Planetary Institute. Chairs: Tom Stein, Washington University, St. Louis, USA; Dan Crichton, Jet Propulsion Laboratory, Pasadena, USA. Program Committee: Alphan Altinok, Jet Propulsion Laboratory, Pasadena, USA … [and 8 others]. PARTIAL CONTENTS: ESA Planetary Science Archive Architecture and Data Management -- SPICE for ESA Planetary Missions -- VESPA: Enlarging the Virtual Observatory to Planetary Science -- SeaBIRD: A Flexible and Intuitive Planetary Datamining Infrastructure -- Model-Driven Development for PDS4 Software and Services -- The Need for a Planetary Spatial Data Clearinghouse -- The Relationship Between Planetary Spatial Data Infrastructure and the Planetary Data System -- Update on the NASA-USGS Planetary Spatial Data Infrastructure Inter-Agency Agreement -- MoonDB: A Data System for Analytical Data of Lunar Samples -- Large-Scale Numerical Simulations of Planetary Interiors -- Scalable Data Processing with the LROC Processing Pipelines -- PACKMAN-Net: A Distributed, Open-Access, and Scalable Network of User-Friendly Space Weather Stations

    Timing Predictability in Future Multi-Core Avionics Systems


    Evidence-based Accountability Audits for Cloud Computing

    Cloud computing is known for its on-demand service provisioning and has now become mainstream. Many businesses as well as individuals use cloud services on a daily basis. The variety of services is large, ranging from the provision of computing resources to services such as productivity suites and social networks. The nature of these services varies heavily in terms of what kind of information is outsourced to the cloud provider. Often, that data is sensitive, for instance when personally identifiable information (PII) is shared by an individual. Businesses that move (parts of) their processes to the cloud are likewise participating in a major paradigm shift from keeping data on-premise to transferring it to a third-party provider. However, many new challenges come along with this trend, and they are closely tied to the loss of control over data. When moving to the cloud, customers give up direct control over where their data is stored geographically, who has access to it, and how it is shared and processed. Because of this loss of control, cloud customers have to trust that cloud providers treat their data in an appropriate and responsible way. Cloud audits can be used to check how data has been processed in the cloud (i.e., by whom, for what purpose) and whether or not this happened in compliance with agreed-upon privacy and data storage, usage and maintenance (i.e., data handling) policies. This way, cloud customers can regain some of the control they gave up by moving to the cloud. In this thesis, accountability audits are presented as a way to strengthen trust in cloud computing by providing assurance that data in the cloud is processed according to data handling and privacy policies. In cloud accountability audits, various distributed evidence sources need to be considered. The research presented in this thesis discusses the use of heterogeneous evidence sources on all cloud layers. This way, a complete picture of the actual data handling practices, based on hard facts, can be presented to the cloud consumer. Furthermore, this strengthens the transparency of data processing in the cloud, which can lead to improved trust in cloud providers if they choose to adopt these mechanisms to assure their customers that their data is being handled according to their expectations. The system presented in this thesis enables continuous auditing of a cloud provider's adherence to data handling policies in an automated way that shortens audit intervals and that is based on evidence produced by cloud subsystems. An important aspect of many cloud offerings is the combination of multiple distinct cloud services offered by independent providers, with data frequently exchanged between those providers. This also includes trans-border flows of data, where one provider may be required to adhere to stricter data protection requirements than the others. The system presented in this thesis addresses such scenarios by enabling the collection of evidence at the providers involved and evaluating it during audits. Securing evidence quickly becomes a challenge in the system design when information needed for the audit is deemed sensitive or confidential. Securing the evidence at rest as well as in transit is therefore of utmost importance, in order not to introduce a new liability by building an insecure data heap. This research presents the identification of security and privacy protection requirements, alongside proposed solutions that enable the development of an architecture for secure, automated, policy-driven and evidence-based accountability audits.
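
    A minimal sketch, assuming a simple shared-key design, of how evidence records might be made tamper-evident and checked against a policy during an automated audit run. The record fields, key handling and policy check are illustrative assumptions, not the thesis's actual architecture.

```python
import hmac, hashlib, json

KEY = b"audit-demo-key"  # illustrative; in practice, per-source keys in a secure store

def seal(record: dict) -> dict:
    """Attach an HMAC so later tampering with stored evidence is detectable."""
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record,
            "mac": hmac.new(KEY, payload, hashlib.sha256).hexdigest()}

def verify(sealed: dict) -> bool:
    """Recompute the HMAC over the record and compare in constant time."""
    payload = json.dumps(sealed["record"], sort_keys=True).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["mac"])

# Example: one evidence item emitted by a (hypothetical) storage subsystem,
# checked against a simple data-location policy during an audit.
evidence = seal({"event": "object_stored", "object": "invoice-42",
                 "region": "eu-west", "actor": "svc-backup"})
policy_ok = evidence["record"]["region"].startswith("eu")  # "EU-only" policy
print("evidence intact:", verify(evidence), "| policy compliant:", policy_ok)
```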