    An infrastructure service recommendation system for cloud applications with real-time QoS requirement constraints

    The proliferation of cloud computing has revolutionized the hosting and delivery of Internet-based application services. However, with new cloud services and capabilities launched almost every month by both large companies (e.g., Amazon Web Services and Microsoft Azure) and small ones (e.g., Rackspace and Ninefold), decision makers (e.g., application developers and chief information officers) are likely to be overwhelmed by the choices available. The decision-making problem is further complicated by heterogeneous service configurations and application provisioning QoS constraints. To address this challenge, in our previous work we developed a semiautomated, extensible, ontology-based approach to infrastructure service discovery and selection based only on design-time constraints (e.g., renting cost, data center location, and service features). In this paper, we extend our approach to include real-time (run-time) QoS (end-to-end message latency and end-to-end message throughput) in the decision-making process. Hosting next-generation applications in the domains of online interactive gaming, large-scale sensor analytics, and real-time mobile applications on cloud services necessitates optimizing such real-time QoS constraints to meet service-level agreements. To this end, we present a real-time QoS-aware multicriteria decision-making technique that builds on the well-known analytic hierarchy process (AHP) method. The proposed technique is applicable to selecting Infrastructure as a Service (IaaS) cloud offers, and it allows users to define multiple design-time and real-time QoS constraints or requirements. These requirements are then matched against our knowledge base to compute the best-fit combinations of cloud services at the IaaS layer. We conducted extensive experiments to demonstrate the feasibility of our approach.
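
    To make the AHP step concrete, below is a minimal sketch of how criterion weights can be derived from pairwise judgments and used to rank candidate IaaS offers. The criteria, judgment values, and offer scores are hypothetical illustrations; the paper's actual knowledge base and matching rules are not reproduced here.

```python
# Minimal AHP-style scoring sketch (hypothetical data throughout).
import numpy as np

# Pairwise comparison of criteria on Saaty's 1-9 scale (illustrative
# judgments): cost vs. latency vs. throughput.
criteria = ["cost", "latency", "throughput"]
pairwise = np.array([
    [1.0, 1/3, 1/2],   # cost judged less important than latency
    [3.0, 1.0, 2.0],   # latency dominates for real-time workloads
    [2.0, 1/2, 1.0],
])

# Approximate the principal eigenvector with the geometric-mean method.
weights = np.prod(pairwise, axis=1) ** (1.0 / len(criteria))
weights /= weights.sum()

# Hypothetical normalized scores per offer (higher is better), e.g.
# inverted cost, inverted measured latency, measured throughput.
offers = {
    "offer_a": np.array([0.8, 0.4, 0.6]),
    "offer_b": np.array([0.5, 0.9, 0.7]),
}

# Rank offers by the weighted sum of criterion scores.
for name, scores in sorted(offers.items(),
                           key=lambda kv: -(weights @ kv[1])):
    print(f"{name}: {weights @ scores:.3f}")
```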

    Broadbanding Brunswick: High-speed broadband and household media ecologies

    New research from the University of Melbourne and Swinburne University has found that 82% of households in the NBN first-release site of Brunswick, Victoria, think the NBN is a good idea. The study, Broadbanding Brunswick: High-speed Broadband and Household Media Ecologies, examines the take-up, use and implications of high-speed broadband for some of its earliest adopters. It looks at how the adoption of high-speed broadband influences household consumption patterns and use of telecoms. The survey of 282 Brunswick households found there had been a significant uptake of the NBN during the course of the research: in 2011, 20% of households were connected to the NBN, and by 2012 that number had risen to 34%. Families, home owners, higher income earners and teleworkers were most likely to adopt the NBN. Many NBN users reported paying less for their monthly internet bills, with 49% paying about the same. In many cases those paying more (37%) had elected to do so. The report, Broadbanding Brunswick: High-speed Broadband and Household Media Ecologies, is available for download as a PDF (2.5 MB) and as a Word 2007 document (5 MB).

    A Case Study In Software Adaptation

    We attach a feedback-control-loop infrastructure to an existing target system to continually monitor and dynamically adapt its activities and performance. (This approach could also be applied to 'new' systems, as an alternative to 'building in' adaptation facilities, but we do not address that here.) Our infrastructure consists of multiple layers with the objectives of (1) probing, measuring and reporting activity and state during the execution of the target system among its components and connectors; (2) gauging, analyzing and interpreting the reported events; and (3), whenever necessary, feeding back onto the probes and gauges, to focus them (e.g., drill deeper), or onto the running target system, to direct its automatic adjustment and reconfiguration. We report on our successful experience using this approach in the dynamic adaptation of a large-scale commercial application that requires both coarse- and fine-grained modifications.
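
    As a rough illustration of the probe/gauge/feedback layering described above, the sketch below simulates one monitoring-and-adaptation loop in Python. The function names, the latency metric, and the 120 ms threshold are invented for illustration and are not the authors' implementation.

```python
# Minimal probe -> gauge -> feedback loop sketch (all names hypothetical).
import random
import time

def probe_latency() -> float:
    """Probe layer: report a raw measurement from the running target
    (simulated here with Gaussian noise, in milliseconds)."""
    return random.gauss(100.0, 30.0)

def gauge(samples: list[float]) -> float:
    """Gauge layer: interpret raw events, here as a moving average."""
    return sum(samples) / len(samples)

def adapt(avg_ms: float) -> None:
    """Feedback layer: adjust the target (or refocus the probes) when
    the interpreted measurement crosses a threshold."""
    if avg_ms > 120.0:
        print(f"avg {avg_ms:.1f} ms high: reconfiguring target ...")
    else:
        print(f"avg {avg_ms:.1f} ms ok")

window: list[float] = []
for _ in range(10):          # one monitoring cycle per iteration
    window.append(probe_latency())
    window = window[-5:]     # keep a sliding window of 5 samples
    adapt(gauge(window))
    time.sleep(0.1)
```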

    Adaptive monitoring: A systematic mapping

    Context: Adaptive monitoring is a method used in a variety of domains for responding to changing conditions. It has been applied in different ways, from customizing monitoring systems to re-composing them, across different application domains. However, to the best of our knowledge, no studies analyze how adaptive monitoring differs or resembles across the existing approaches. Objective: To characterize the current state of the art on adaptive monitoring, specifically to: (a) identify the main concepts in the adaptive monitoring topic; (b) determine the demographic characteristics of the studies published on this topic; (c) identify how adaptive monitoring is conducted and evaluated by the different approaches; (d) identify patterns in the approaches supporting adaptive monitoring. Method: We conducted a systematic mapping study of adaptive monitoring approaches following recommended practices. We applied automatic search and snowballing sampling on different sources and used rigorous selection criteria to retrieve the final set of papers. Moreover, we used an existing qualitative analysis method to extract relevant data from the studies. Finally, we applied data mining techniques to identify patterns in the solutions. Results: We evaluated 110 studies organized into 81 approaches that support adaptive monitoring. By analyzing them, we have: (1) surveyed related terms and definitions of adaptive monitoring and proposed a generic one; (2) visualized the studies' demographic data and arranged the studies into approaches; (3) characterized the approaches' main contributions; (4) determined how the approaches conduct the adaptation process and evaluate their solutions. Conclusions: This cross-domain overview of the current state of the art on adaptive monitoring may serve as a solid and comprehensive baseline for researchers and practitioners in the field. In particular, it may help in identifying research opportunities, for instance the need for generic and flexible software engineering solutions that support adaptive monitoring in a variety of systems.
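
    As a hedged illustration of the kind of pattern mining the Method above mentions, the sketch below counts feature co-occurrences across coded approaches. The approach identifiers and feature labels are hypothetical; the study's actual coding scheme and mining technique are not reproduced here.

```python
# Toy co-occurrence mining over coded approaches (hypothetical data).
from collections import Counter
from itertools import combinations

# Each approach is coded with the monitoring features it supports.
approaches = {
    "A1": {"rule-based adaptation", "runtime reconfiguration"},
    "A2": {"rule-based adaptation", "probe insertion"},
    "A3": {"rule-based adaptation", "runtime reconfiguration"},
    "A4": {"machine learning", "runtime reconfiguration"},
}

# Count how often pairs of features co-occur in the same approach.
pair_counts = Counter()
for features in approaches.values():
    pair_counts.update(combinations(sorted(features), 2))

# Report the most frequent patterns.
for (f1, f2), n in pair_counts.most_common(3):
    print(f"{f1} + {f2}: {n} approaches")
```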

    Scribing the writer: implications of the social construction of writer identity for pedagogy and paradigms of written composition

    A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy by Published Works. A reflexive analysis of five peer-reviewed published papers reveals how socio-cultural and political discourses and individual agency compete to shape the identity of the learner-writer. It is posited that although hegemonic political discourses construct 'schooling literacy' (Meek 1988), which frames the socio-cultural contexts in which texts, authors, teachers and learners develop, the socio-cultural standpoint of the individual makes possible the conscious construction of counter-discourses. Writer identity is integral to the compositional process. However, writer identity is mediated, on the one hand, by dominant discourses of literacy that inform current pedagogies of writing (Paper One) and, on the other, by socio-cultural narratives that shape identity (Paper Three). A synthesis of Gramsci's notion of cultural hegemony and Bronfenbrenner's ecological systems theory is used to explain the constraining function of dominant discourses in literacy education. These works largely fall within a qualitative paradigm, although a mixed-method approach was adopted for the data collection of Papers Four and Five. The methods these papers had in common were the use of surveys and documentary analysis of reflective journals. A semi-structured interview with a focus group was the third method used to collect data for Paper Five. Individual semi-structured interviews were used to collect partial life histories for Paper Two, and textual analysis of pupils' narrative writing was the main method used for Paper One. Paper Three involved a rhizotextual auto-ethnographic analysis of original poetry. Findings suggest that pedagogies which minimise or negate the identity of the writer are counter-productive in facilitating writer efficacy. It is suggested that the teaching of writing should be premised on approaches that encourage the writer to draw upon personal, inherited and secondary narratives. In this conceptualisation of writing, the writer is simultaneously composing and exploring aspects of self. However, the self is not a fixed entity, and writing is viewed as a process by which identity emerges through reflexive engagement with the compositional process. The corollary is that a pedagogy of writing needs to embrace the identity of the writer, whilst also allowing space for the writer's 'becoming'.

    Using Process Technology to Control and Coordinate Software Adaptation

    We have developed an infrastructure for end-to-end run-time monitoring, behavior/performance analysis, and dynamic adaptation of distributed software. This infrastructure is primarily targeted at pre-existing systems and thus operates outside the target application, without making assumptions about the target's implementation, internal communication/computation mechanisms, source code availability, etc. This paper assumes the existence of the monitoring and analysis components, presented elsewhere, and focuses on the mechanisms used to control and coordinate possibly complex repairs/reconfigurations to the target system. These mechanisms require lower-level effectors somehow attached to the target system, so we briefly sketch one such facility (elaborated elsewhere). Our main contribution is the model, architecture, and implementation of Workflakes, the decentralized process engine we use to tailor, control, and coordinate a cohort of such effectors. We have validated the Workflakes approach with case studies in several application domains. Due to space restrictions we concentrate primarily on one case study, briefly discuss a second, and only sketch others.
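
    In the spirit of a process engine coordinating a cohort of effectors, the sketch below runs a staged repair plan whose tasks within a stage execute in parallel, with a barrier between stages. The plan structure and effector interface are invented for illustration and do not reflect Workflakes' actual design.

```python
# Toy staged-repair coordinator sketch (hypothetical task structure).
from concurrent.futures import ThreadPoolExecutor

def effector(component: str, action: str) -> str:
    """Stand-in for a lower-level effector attached to the target
    system (e.g., one that restarts or reconfigures a component)."""
    return f"{action} applied to {component}"

# A repair 'process': ordered stages, parallel tasks within a stage.
repair_plan = [
    [("cache", "flush"), ("pool", "drain")],   # stage 1: in parallel
    [("service", "restart")],                  # stage 2: after stage 1
]

with ThreadPoolExecutor() as pool:
    for stage in repair_plan:
        # Dispatch all tasks in the stage; map() acts as a barrier
        # because it yields results before the next stage begins.
        for result in pool.map(lambda t: effector(*t), stage):
            print(result)
```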