
    Computing at massive scale: Scalability and dependability challenges

    Large-scale Cloud systems and big data analytics frameworks are now widely used for practical services and applications. However, with the increase in data volume, the heterogeneity of workloads and resources, and the dynamic nature of massive user requests, the uncertainty and complexity of resource management and service provisioning increase dramatically, often resulting in poor resource utilization, weakened system dependability, and user-perceived performance degradation. In this paper we report our latest understanding of the current and future challenges in this area, and discuss both existing and potential solutions to the problems, especially those concerning system efficiency, scalability and dependability. We first introduce a data-driven analysis methodology for characterizing resource and workload patterns and tracing performance bottlenecks in a massive-scale distributed computing environment. We then examine several fundamental challenges and the solutions we are developing to tackle them, including incremental but decentralized resource scheduling, incremental messaging communication, rapid system failover, and request handling parallelism. We integrate these solutions with our data analysis methodology in order to establish an engineering approach that facilitates the optimization, tuning and verification of massive-scale distributed systems. We aim to develop and offer innovative methods and mechanisms for future computing platforms that will provide strong support for new big data and IoE (Internet of Everything) applications.
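    The data-driven analysis step described above can be illustrated with a minimal sketch (the trace schema, machine names and saturation threshold here are hypothetical, not the paper's actual methodology): aggregate raw utilization samples per machine and flag the saturated ones as candidate bottlenecks.

```python
from collections import defaultdict

# Hypothetical trace records: (machine_id, cpu_utilization in [0, 1]).
# In a real cluster these would come from a monitoring trace of the
# kind the paper's methodology characterizes.
trace = [
    ("m1", 0.92), ("m1", 0.88), ("m2", 0.15),
    ("m2", 0.20), ("m3", 0.55), ("m3", 0.60),
]

def mean_utilization(samples):
    """Average CPU utilization per machine from raw trace samples."""
    per_machine = defaultdict(list)
    for machine, cpu in samples:
        per_machine[machine].append(cpu)
    return {m: sum(v) / len(v) for m, v in per_machine.items()}

def bottlenecks(samples, threshold=0.8):
    """Machines whose mean utilization exceeds a saturation threshold."""
    return sorted(m for m, u in mean_utilization(samples).items()
                  if u >= threshold)

print(bottlenecks(trace))  # ['m1']
```

    A real methodology would of course fold in many more signals (memory, I/O, request latency) and track them over time, but the pattern of reducing a raw trace to per-resource summaries before hunting for bottlenecks is the same.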

    The Dag-Brucken ASRS Case Study

    In 1996 an agreement was made between a well-known beverage manufacturer, Super-Cola Taiwan (SCT), and a small Australian electrical engineering company, Dag-Brücken ASRS Pty Ltd (DB), to provide an automated storage and retrieval system (ASRS) facility as part of SCT's production facilities in Asia. Recognising the potential of their innovative and technically advanced design, DB was awarded a State Premier's Export Award and was a finalist in that year's National Export Awards. The case tracks the development and subsequent implementation of the SCT ASRS project, setting out to highlight how the lack of appropriate IT development processes contributed to the ultimate failure of the project and the subsequent winding up of DB only one year after being honoured with these prestigious awards. The case provides compelling evidence of the types of project management incompetency that, from the literature, appear to contribute to the high failure rate in IT projects. For confidentiality reasons, the names of the principal parties are changed, but the case covers actual events documented by one of the project team members as part of his postgraduate studies, providing an example of the special mode of evidence collection that Yin (1994) calls 'participant-observation'.

    A gentle transition from Java programming to Web Services using XML-RPC

    Exposing students to leading-edge vocational areas of relevance such as Web Services can be difficult. We show a lightweight approach by embedding a key component of Web Services within a Level 3 BSc module in Distributed Computing. We present a ready-to-use collection of lecture slides and student activities based on XML-RPC. In addition we show that this material addresses the central topics in the context of web services as identified by Draganova (2003).
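    The module described above transitions students from Java to XML-RPC; as a compact illustration of the same protocol, here is a sketch using Python's standard-library `xmlrpc` modules (chosen for brevity only; the function name `add` and the loopback setup are illustrative, not the module's actual teaching material).

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Bind to an OS-assigned free port on the loopback interface.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(lambda a, b: a + b, "add")

# Serve requests on a background thread so the client can run below.
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client proxy marshals the call name and arguments into an XML
# payload and POSTs it over HTTP; the response is unmarshalled back
# into a native value.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)
server.shutdown()
print(result)  # 5
```

    The appeal for teaching is exactly this symmetry: the remote call reads like a local method call, while the XML-over-HTTP plumbing stays out of sight.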

    Localizing State-Dependent Faults Using Associated Sequence Mining

    In this thesis we developed a new fault localization process to locate faults in object-oriented software. The process is built upon the "Encapsulation" principle and aims to locate state-dependent discrepancies in the software's behavior. We experimented with the proposed process on 50 seeded faults in 8 subject programs, and were able to locate the faulty class in 100% of the cases when objects with constant states were taken into consideration, while we missed 24% of the faults when these objects were not considered. We also developed a customized data mining technique, "Associated sequence mining", to be used in the localization process; experiments showed that it provided only a slight enhancement to the results of the process. The customization provided at least a 17% improvement in time performance, and it is generic enough to be applicable in other domains. In addition, we developed an extensive taxonomy of object-oriented software faults based on UML models. We used the taxonomy to make decisions regarding the localization process. It provides an aid for understanding the nature of software faults, and will help enhance the different tasks related to software quality assurance. The main contributions of the thesis were based on preliminary experimentation with the usability of the classification algorithms implemented in WEKA for software fault localization, which led to the conclusion that both the fault type and the mechanism implemented in the analysis algorithm significantly affect the results of the localization.
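    The thesis's associated-sequence-mining algorithm is its own contribution and is not reproduced here; as a generic illustration of the broader idea of ranking classes by how strongly they associate with failing executions, here is a Tarantula-style suspiciousness score over per-run class coverage (the class names, runs and scoring formula are all hypothetical stand-ins, not the thesis's technique).

```python
# Hypothetical coverage data: each run lists the classes it exercised
# and whether the run passed. Classes exercised mostly by failing runs
# receive a higher suspiciousness score.
runs = [
    (["Account", "Ledger"], True),
    (["Account", "Cache"], True),
    (["Ledger", "Cache"], False),
    (["Ledger"], False),
]

def suspiciousness(runs):
    """Rank classes by the Tarantula metric: f/(f+p), where f and p are
    the fractions of failing and passing runs that touch the class."""
    failed = sum(1 for _, ok in runs if not ok)
    passed = len(runs) - failed
    classes = {c for covered, _ in runs for c in covered}
    scores = {}
    for c in classes:
        f = sum(1 for covered, ok in runs if c in covered and not ok) / failed
        p = sum(1 for covered, ok in runs if c in covered and ok) / passed
        scores[c] = f / (f + p) if f + p else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(suspiciousness(runs)[0][0])  # Ledger
```

    A sequence-mining approach, as in the thesis, goes further by considering the order of method calls rather than flat coverage, which is what makes state-dependent faults reachable.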

    Monitoring and management of information systems in a distributed environment

    As organizations' information systems grow more complex, distributed systems become more common, and integration between systems increasingly popular, management of the whole meets new challenges. At the same time, different management concepts need information about system behaviour in order to follow how agreed goals are accomplished and how agreed service levels are met. In this thesis we present a three-tier framework for distributed system management and consider its suitability for enterprise-level architecture. A properly implemented monitoring system not only makes it possible to identify a system's current state but also supports the information needs of management concepts. Thorough monitoring of distributed systems generates a massive amount of data. A data warehouse solution is considered as a way to minimize monitoring overhead and to provide a centralized location where the data can be accessed for further analysis. Finally, we address different management concepts and show what they can gain from a properly implemented monitoring system.
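    The centralized-warehouse idea above can be sketched in a few lines (a toy illustration only: the host names, metric schema and in-memory SQLite store are stand-ins for a real warehouse, and nothing here is the thesis's actual framework): agents record metric samples into one shared store, and management-side queries run against that store instead of polling the production systems.

```python
import sqlite3
import time

# One shared store standing in for the data warehouse; monitored
# hosts insert samples, management tools query them later.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE metrics (host TEXT, metric TEXT, value REAL, ts REAL)")

def record(host, metric, value):
    """A monitoring agent pushes one sample into the central store."""
    db.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
               (host, metric, value, time.time()))

record("app1", "cpu", 0.4)
record("app1", "cpu", 0.6)
record("db1", "cpu", 0.91)

# Management-side query: average load per host, e.g. for following
# up on agreed service levels without touching the hosts themselves.
rows = db.execute(
    "SELECT host, AVG(value) FROM metrics WHERE metric='cpu' GROUP BY host"
).fetchall()
print(dict(rows))
```

    This separation is the point of the warehouse tier: the analysis load lands on the store, not on the systems being monitored.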

    Wireless Sensor Network: At a Glance
