1,296 research outputs found

    An infrastructure service recommendation system for cloud applications with real-time QoS requirement constraints

    The proliferation of cloud computing has revolutionized the hosting and delivery of Internet-based application services. However, with new cloud services and capabilities launched almost every month by both large providers (e.g., Amazon Web Services and Microsoft Azure) and smaller ones (e.g., Rackspace and Ninefold), decision makers (e.g., application developers and chief information officers) are likely to be overwhelmed by the choices available. The decision-making problem is further complicated by heterogeneous service configurations and application provisioning QoS constraints. To address this challenge, in our previous work we developed a semiautomated, extensible, ontology-based approach to infrastructure service discovery and selection based only on design-time constraints (e.g., renting cost, data center location, service features). In this paper, we extend our approach to include real-time (run-time) QoS (end-to-end message latency and end-to-end message throughput) in the decision-making process. Hosting next-generation applications in the domains of online interactive gaming, large-scale sensor analytics, and real-time mobile applications on cloud services necessitates optimizing such real-time QoS constraints to meet service-level agreements. To this end, we present a real-time QoS-aware multicriteria decision-making technique that builds on the well-known analytic hierarchy process (AHP) method. The proposed technique is applicable to selecting Infrastructure as a Service (IaaS) cloud offers, and it allows users to define multiple design-time and real-time QoS constraints or requirements. These requirements are then matched against our knowledge base to compute the best-fit combinations of cloud services at the IaaS layer. We conducted extensive experiments to demonstrate the feasibility of our approach.
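    As a rough, non-authoritative sketch of the AHP building block the abstract refers to (the paper's actual criteria, weighting, and knowledge-base matching are not shown here), the core step derives criterion weights from a pairwise-comparison matrix and ranks candidate IaaS offers by weighted score. The criteria names, comparison values, and offer scores below are hypothetical.

```python
import numpy as np

# Hypothetical pairwise comparisons over three criteria
# (cost, end-to-end latency, end-to-end throughput) on Saaty's 1-9 scale.
pairwise = np.array([
    [1.0, 3.0, 5.0],   # cost vs. (cost, latency, throughput)
    [1/3, 1.0, 2.0],   # latency
    [1/5, 1/2, 1.0],   # throughput
])

# AHP criterion weights: normalized principal eigenvector of the matrix.
eigvals, eigvecs = np.linalg.eig(pairwise)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = w / w.sum()

# Made-up per-offer scores, normalized to [0, 1] with higher meaning better.
offers = {
    "offer_a": np.array([0.6, 0.8, 0.7]),
    "offer_b": np.array([0.9, 0.5, 0.6]),
}

# Rank IaaS offers by weighted score; the best-fit offer comes first.
ranking = sorted(offers, key=lambda o: float(offers[o] @ weights), reverse=True)
print(ranking)
```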

    An effective scheme for QoS estimation via alternating direction method-based matrix factorization

    Accurately estimating unknown quality-of-service (QoS) data based on historical records of Web-service invocations is vital for automatic service selection. This work presents an effective scheme for addressing this issue via alternating direction method-based matrix factorization. Its main idea consists of a) adopting the principle of the alternating direction method to decompose the task of building a matrix factorization-based QoS estimator into small subtasks, where each one trains a subset of the desired parameters based on the latest status of the whole parameter set; b) building an ensemble of diversified single models with a sophisticated diversifying and aggregating mechanism; and c) parallelizing the construction process of the ensemble to drastically reduce the time cost. Experimental results on two industrial QoS datasets demonstrate that, with the proposed scheme, more accurate QoS estimates can be achieved than by its peers in comparable computing time, with the help of its practical parallelization. This work was supported in part by the FDCT (Fundo para o Desenvolvimento das Ciências e da Tecnologia) under Grant 119/2014/A3; in part by the National Natural Science Foundation of China under Grants 61370150 and 61433014; in part by the Young Scientist Foundation of Chongqing under Grant cstc2014kjrc-qnrc40005; in part by the Chongqing Research Program of Basic Research and Frontier Technology under Grant cstc2015jcyjB0244; in part by the Postdoctoral Science Funded Project of Chongqing under Grant Xm2014043; in part by the Fundamental Research Funds for the Central Universities under Grant 106112015CDJXY180005; and in part by the Specialized Research Fund for the Doctoral Program of Higher Education under Grant 20120191120030.
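    As an illustrative sketch only (not the authors' exact alternating direction scheme, and omitting the ensemble and parallelization layers described above), a low-rank model of the user-by-service QoS matrix can be fitted with alternating regularized least-squares updates over the observed entries; the matrix values below are invented.

```python
import numpy as np

def mf_alternating(R, mask, rank=2, reg=0.1, iters=50):
    """Estimate missing QoS values via a low-rank factorization R ~ U @ V.T,
    updating U and V in alternation over the observed entries only."""
    m, n = R.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        for i in range(m):                      # update user factors
            cols = mask[i]
            A = V[cols].T @ V[cols] + reg * np.eye(rank)
            U[i] = np.linalg.solve(A, V[cols].T @ R[i, cols])
        for j in range(n):                      # update service factors
            rows = mask[:, j]
            A = U[rows].T @ U[rows] + reg * np.eye(rank)
            V[j] = np.linalg.solve(A, U[rows].T @ R[rows, j])
    return U @ V.T                              # dense estimate of all QoS values

# Hypothetical user-by-service response-time matrix; zeros mark missing entries.
R = np.array([[0.3, 0.0, 1.2],
              [0.4, 0.9, 0.0],
              [0.0, 0.8, 1.1]])
print(mf_alternating(R, mask=(R > 0)))
```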

    Quality of service based data-aware scheduling

    Distributed supercomputers have been widely used for solving complex computational problems and modeling complex phenomena such as black holes, the environment, supply-chain economics, etc. In this work we analyze the use of these distributed supercomputers for time-sensitive data-driven applications. We present the scheduling challenges involved in running deadline-sensitive applications on shared distributed supercomputers running large parallel jobs and introduce a "data-aware" scheduling paradigm that overcomes these challenges by making use of Quality of Service classes for running applications on shared resources. We evaluate the new data-aware scheduling paradigm using an event-driven hurricane simulation framework which attempts to run various simulations modeling storm surge, wave height, etc. in a timely fashion to be used by first responders and emergency officials. We further generalize the work and demonstrate with examples how data-aware computing can be used in other applications with similar requirements.
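    A toy sketch of the QoS-class-driven, deadline-aware ordering described above might look like the following; the job names, QoS classes, deadlines, and runtime estimates are invented, and the real scheduler must additionally coexist with large parallel jobs on shared resources.

```python
import heapq

def schedule(jobs):
    """Yield job names ordered by QoS class first, then by least deadline
    slack (deadline minus estimated runtime), so urgent, tight jobs run first."""
    heap = [(j["qos_class"], j["deadline"] - j["runtime_est"], j["name"]) for j in jobs]
    heapq.heapify(heap)
    while heap:
        _, _, name = heapq.heappop(heap)
        yield name

# Hypothetical hurricane-simulation jobs with different urgency levels.
jobs = [
    {"name": "storm_surge",   "qos_class": 0, "deadline": 3600,  "runtime_est": 3000},
    {"name": "wave_height",   "qos_class": 1, "deadline": 7200,  "runtime_est": 1800},
    {"name": "archive_rerun", "qos_class": 2, "deadline": 86400, "runtime_est": 4000},
]
print(list(schedule(jobs)))  # storm_surge, wave_height, archive_rerun
```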

    Trustee: A Trust Management System for Fog-enabled Cyber Physical Systems

    In this paper, we propose a lightweight trust management system (TMS) for fog-enabled cyber-physical systems (Fog-CPS). Trust computation is based on multi-factor and multi-dimensional parameters and is formulated as a statistical regression problem, which is solved by employing a random forest regression model. Additionally, as Fog-CPS systems may be deployed in open and unprotected environments, the CPS devices and fog nodes are vulnerable to numerous attacks, namely collusion, self-promotion, bad-mouthing, ballot-stuffing, and opportunistic service. Compromised entities can impact the accuracy of the trust computation model by increasing or decreasing the trust of other nodes. These challenges are addressed by designing a generic trust credibility model that counteracts the compromise of both CPS devices and fog nodes. The credibility of each newly computed trust value is evaluated and subsequently adjusted by correlating it with a standard deviation threshold. The standard deviation is quantified by computing the trust in two configurations of hostile environments and comparing it with the trust value in a legitimate/normal environment. Our results demonstrate that the credibility model successfully counteracts the malicious behaviour of all Fog-CPS entities, i.e., CPS devices and fog nodes. The multi-factor trust assessment and credibility evaluation enable accurate and precise trust computation and guarantee a dependable Fog-CPS system.
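    As a simplified, assumption-laden sketch of the formulation above (trust as a regression problem plus a standard-deviation-based credibility check), the following uses scikit-learn's RandomForestRegressor on synthetic multi-factor features; the feature semantics, threshold, and data are hypothetical rather than the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic multi-factor trust evidence per interaction, e.g.
# (packet delivery ratio, response timeliness, recommendation score).
rng = np.random.default_rng(0)
X_train = rng.random((200, 3))
y_train = X_train.mean(axis=1) + rng.normal(0.0, 0.05, 200)  # made-up trust labels

# Trust computation cast as regression, per the abstract's formulation.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def credible_trust(features, prev_trust, sigma_threshold=0.15):
    """Accept the newly computed trust only if it stays within a
    standard-deviation threshold of the previous value; otherwise keep the
    old value, treating the jump as possible bad-mouthing or ballot-stuffing."""
    new_trust = float(model.predict(features.reshape(1, -1))[0])
    return prev_trust if abs(new_trust - prev_trust) > sigma_threshold else new_trust

print(credible_trust(np.array([0.7, 0.6, 0.8]), prev_trust=0.65))
```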