
    Quality of Service for Information Access

    Information is available in many forms, from different sources, and in distributed locations; access to information is supported by networks of varying performance; and the cost of accessing and transporting the information varies for both the source and the transport route. Users, who vary in their preferences, in the background knowledge required to interpret the information, and in their motivation for accessing it, gather information to perform many different tasks. This position paper outlines some of these variations in information provision and access, explores their impact on the user's task performance, and considers the possibilities they open up for adapting the user interface for the presentation of information.

    A proposed model to analyse risk and return for a large computing system adoption

    This thesis presents Organisational Sustainability Modelling (OSM), a new method for systematically modelling and analysing risk and return for the adoption of large systems such as Cloud Computing. Return includes improvements in technical efficiency, profitability and service. Risk includes controlled risk (risk-control rate) and uncontrolled risk (beta), although uncontrolled risk cannot be evaluated directly. Three OSM metrics, actual return value, expected return value and risk-control rate, are used to calculate uncontrolled risk. The OSM data collection process, in which hundreds of datasets (rows of data containing the three OSM metrics in each row) are used as inputs, is explained. Outputs including standard error, mean squared error, the Durbin-Watson statistic, p-value and R-squared value are calculated. Visualisation is used to illustrate the quality and accuracy of the data analysis. The metrics, process and interpretation of the data analysis are presented, and the rationale is explained in the review of the OSM method.

    Three case studies are used to illustrate the validity of OSM:
    • The National Health Service (NHS), a technical application concerned with backing up data files, focuses on improvement in efficiency.
    • Vodafone/Apple, a cost application, focuses on profitability.
    • The iSolutions Group, University of Southampton, focuses on service improvement using user feedback.

    The NHS case study is explained in detail. The expected execution time calculated by OSM to complete all backup activity in Cloud-based systems matches the actual execution time to within 0.01%. The Cloud system shows improved efficiency in both sets of comparisons. All three case studies confirm that there are benefits to the adoption of a large computer system such as the Cloud. Together these demonstrations answer the two research questions of this thesis:
    1. How do you model and analyse risk and return on adoption of large computing systems systematically and coherently?
    2. Can the same method be used in risk mitigation of system adoption?

    Limitations of this study, a reproducibility case, comparisons with similar approaches, research contributions and future work are also presented.
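    The analysis described in the abstract is regression-based, so a minimal sketch of that kind of computation is given below. It is not the OSM implementation: the column names, synthetic data and model form are assumptions made purely to show how standard error, mean squared error, Durbin-Watson, p-values and R-squared could be produced from rows containing the three OSM metrics.

```python
# Minimal sketch (not the author's OSM code): regress actual return value on
# expected return value and risk-control rate, then report the diagnostics
# the abstract lists. All data and column names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
n = 300  # "hundreds of datasets", each row holding the three OSM metrics
data = pd.DataFrame({
    "expected_return": rng.normal(1.0, 0.2, n),
    "risk_control_rate": rng.uniform(0.5, 1.0, n),
})
data["actual_return"] = (0.8 * data["expected_return"]
                         + 0.3 * data["risk_control_rate"]
                         + rng.normal(0, 0.05, n))

X = sm.add_constant(data[["expected_return", "risk_control_rate"]])
model = sm.OLS(data["actual_return"], X).fit()

print(model.summary())                          # standard errors, p-values, R-squared
print("MSE:", model.mse_resid)                  # mean squared error of residuals
print("Durbin-Watson:", durbin_watson(model.resid))
```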

    Managing Distributed Cloud Applications and Infrastructure

    The emergence of the Internet of Things (IoT), combined with greater heterogeneity not only within cloud computing architectures but across the cloud-to-edge continuum, is introducing new challenges for managing applications and infrastructure across this continuum. The scale and complexity are such that it is no longer realistic for IT teams to manually foresee potential issues and manage the dynamism and dependencies across an increasingly inter-dependent chain of service provision. This Open Access Pivot explores these challenges and offers a solution for the intelligent and reliable management of physical infrastructure and the optimal placement of applications for the provision of services on distributed clouds. The book provides a conceptual reference model for reliable capacity provisioning for distributed clouds and discusses how data analytics and machine learning, application and infrastructure optimisation, and simulation can deliver quality-of-service requirements cost-efficiently in this complex feature space. These are illustrated through a series of case studies in cloud computing, telecommunications, big data analytics, and smart cities.
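    The book's reference model is not reproduced here; as a toy illustration of the application-placement problem it addresses, the sketch below greedily assigns applications to cloud or edge nodes subject to capacity and latency constraints. The node and application attributes are invented for illustration.

```python
# Toy illustration (not the book's reference model) of placing applications
# on cloud/edge nodes under capacity and latency constraints.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_free: float        # available CPU cores
    latency_ms: float      # latency from this node to the service's users

@dataclass
class App:
    name: str
    cpu_need: float
    max_latency_ms: float  # quality-of-service bound

def place(apps, nodes):
    """Greedy placement: pick the feasible node with the most spare CPU."""
    placement = {}
    for app in sorted(apps, key=lambda a: a.cpu_need, reverse=True):
        feasible = [n for n in nodes
                    if n.cpu_free >= app.cpu_need
                    and n.latency_ms <= app.max_latency_ms]
        if not feasible:
            placement[app.name] = None       # would trigger re-provisioning
            continue
        best = max(feasible, key=lambda n: n.cpu_free)
        best.cpu_free -= app.cpu_need
        placement[app.name] = best.name
    return placement

nodes = [Node("edge-1", 4, 5), Node("region-a", 32, 40)]
apps = [App("video-analytics", 8, 60), App("sensor-gateway", 2, 10)]
print(place(apps, nodes))
```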

    Visualisation of Large-Scale Call-Centre Data

    The contact centre industry employs 4% of the entire United Kingdom and United States working population and generates gigabytes of operational data that require analysis to provide insight and to improve efficiency. This thesis is the result of a collaboration with QPC Limited, who provide data collection and analysis products for call centres. They provided a large data set featuring almost 5 million calls to be analysed. This thesis utilises novel visualisation techniques to create tools for the exploration of the large, complex call centre data set and to facilitate unique observations into the data.

    A survey of information visualisation books is presented, providing a thorough background of the field. Following this, a feature-rich application that visualises large call centre data sets using scatterplots that support millions of points is presented. The application utilises both CPU and GPU acceleration for processing and filtering and is exhibited with millions of call events.

    This is expanded upon with the use of glyphs to depict agent behaviour in a call centre. A technique is developed to cluster overlapping glyphs into a single parent glyph dependent on zoom level and a customisable distance metric. This hierarchical glyph represents the mean value of all child agent glyphs, removing overlap and reducing visual clutter. A novel technique for visualising individually tailored glyphs using a Graphics Processing Unit is also presented, and is demonstrated rendering over 100,000 glyphs at interactive frame rates. An open-source code example is provided for reproducibility.

    Finally, a novel interaction and layout method is introduced for improving the scalability of chord diagrams to visualise call transfers. An exploration of sketch-based methods for showing multiple links and direction is made, and a sketch-based brushing technique for filtering is proposed. Feedback from domain experts in the call centre industry is reported for all applications developed.
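    As a rough sketch of the glyph-clustering idea described above (not the thesis code), the example below merges glyphs that fall within a zoom-dependent distance threshold into a parent glyph holding the mean of its children. The function name, threshold rule and data are assumptions made for illustration.

```python
# Sketch of zoom-dependent glyph clustering: glyphs closer than a threshold
# that shrinks as the user zooms in are merged into a parent glyph whose
# position and value are the mean of its children.
import numpy as np

def cluster_glyphs(positions, values, zoom, base_radius=50.0):
    """positions: (n, 2) screen coords; values: (n,) per-agent metric.
    Returns arrays of parent positions and parent values."""
    threshold = base_radius / zoom          # zooming in -> smaller threshold
    remaining = list(range(len(positions)))
    parents_pos, parents_val = [], []
    while remaining:
        seed = remaining.pop(0)
        members = [seed]
        for i in remaining[:]:
            if np.linalg.norm(positions[i] - positions[seed]) < threshold:
                members.append(i)
                remaining.remove(i)
        parents_pos.append(positions[members].mean(axis=0))
        parents_val.append(values[members].mean())
    return np.array(parents_pos), np.array(parents_val)

rng = np.random.default_rng(1)
pos = rng.uniform(0, 800, size=(1000, 2))   # 1,000 agent glyphs on screen
val = rng.uniform(0, 1, size=1000)          # e.g. mean call-handling time
for zoom in (1.0, 4.0):
    p, v = cluster_glyphs(pos, val, zoom)
    print(f"zoom {zoom}: {len(p)} parent glyphs")
```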

    Wireless Communication Networks for Gas Turbine Engine Testing

    A new trend in the field of aeronautical engine health monitoring is the implementation of wireless sensor networks (WSNs) for data acquisition and condition monitoring, to partially replace heavy and complex wiring harnesses, which limit the versatility of the monitoring process and create practical deployment issues. Using wireless technologies instead of fixed wiring creates opportunities for reduced cabling, faster sensor and network deployment, increased data acquisition flexibility and reduced cable maintenance costs. However, embedding wireless technology into an aero engine (even in the ground testing application considered here) presents some very significant challenges, e.g. a harsh operating environment with complex RF propagation, high sensor density and high data rates. In this paper we discuss the results of the Wireless Data Acquisition in Gas Turbine Engine Testing (WIDAGATE) project, which aimed to design and simulate such a network to estimate network performance and de-risk the wireless techniques before deployment.
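    The WIDAGATE simulations themselves are not reproduced here; the sketch below shows the kind of link-budget estimate commonly used when de-risking a wireless deployment, computing free-space path loss and the resulting link margin. All radio parameters are illustrative assumptions, not project values.

```python
# Generic link-budget sketch (not the WIDAGATE simulator): estimate the
# margin of a 2.4 GHz sensor link from free-space path loss.
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

tx_power_dbm = 0.0         # low-power sensor node
tx_gain_dbi = 2.0
rx_gain_dbi = 2.0
rx_sensitivity_dbm = -95.0
fade_margin_db = 20.0      # allowance for the harsh, reflective engine bay

for d in (1, 5, 10):       # metres between sensor and gateway
    loss = fspl_db(d, 2.4e9)
    rx_power = tx_power_dbm + tx_gain_dbi + rx_gain_dbi - loss
    margin = rx_power - rx_sensitivity_dbm - fade_margin_db
    print(f"{d} m: path loss {loss:.1f} dB, margin {margin:.1f} dB")
```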

    Predictive Maintenance use case employing Survival Analysis in a telecommunication company

    Internship report presented as a partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics.

    Driven by the digital revolution, telecommunications companies need to adopt innovative technologies and services to remain competitive. In this context, the company is investing in its first Predictive Maintenance solution: the intelligent anticipation of device failures from sensor data. This solution makes it possible to anticipate failures and plan maintenance measures that extend equipment life, reduce downtime, deliver cost savings, and avoid negative feedback, consequently improving service quality. This project explores Survival Analysis as a fault-prediction tool and follows the six phases of a data science project under the CRISP-DM methodology. To apply a Survival Analysis technique (e.g. Kaplan-Meier), it is crucial to identify two key events in the equipment's historical data (e.g. set-top boxes, STBs): the beginning of the anomalous event and the exact moment of the fault event. Several techniques, including statistical smoothing models and anomaly detection models, were analysed and compared in detail for detecting the onset of device malfunction. The best results were obtained with a statistical technique, the simple moving average (SMA), where an anomalous event is flagged when the one-day smoothed average reaches 50 degrees. This yields an acceptable anticipation period of 38 days for a future maintenance intervention. Employing this Predictive Maintenance solution reduces the current emergency interventions by 71%, so the company saves money compared with making no prediction at all. Moreover, a visualisation tool was developed to demonstrate and explore the solution, employing the different models to detect the beginning of the anomalous event. Consequently, all the goals proposed by the company were accomplished.
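    A hedged sketch of the two steps described above follows: flagging the start of an anomalous period when a one-day moving average crosses 50 degrees, then fitting a Kaplan-Meier survival curve. The data, column names and censoring flags are synthetic and illustrative, not the company's implementation.

```python
# Sketch of the report's two steps: SMA threshold detection of the anomalous
# event, then Kaplan-Meier estimation of time-to-failure. Data is synthetic.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter

# Synthetic hourly temperature readings for one device.
rng = np.random.default_rng(2)
idx = pd.date_range("2024-01-01", periods=24 * 60, freq="h")
temps = pd.Series(40 + np.linspace(0, 15, len(idx)) + rng.normal(0, 2, len(idx)),
                  index=idx)

sma = temps.rolling("24h").mean()          # one-day simple moving average
anomalous = sma[sma >= 50]                 # threshold used in the report
if not anomalous.empty:
    print("anomalous event starts:", anomalous.index[0])

# Kaplan-Meier over (synthetic) durations from anomaly onset to failure,
# with right-censoring for devices that have not failed yet.
durations = rng.integers(5, 60, size=200)          # days
event_observed = rng.integers(0, 2, size=200)      # 1 = failed, 0 = censored
kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=event_observed)
print("median survival time:", kmf.median_survival_time_)
```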

    Performance Measurement in the Product Development Process

    The intention of the programme was to evaluate Product Development (PD) strategies within the automotive industry and to identify areas in which improvements could be made in PD project performance that would also provide a business opportunity for the author's employer, RLE INTERNATIONAL (RLE). The research is principally concerned with the automotive industry but also has broader applications within similar industries. The research was undertaken via three projects.

    Project 1 involved a study of the structure, drivers and trends within the automotive industry. The aim was to assess the implications for PD in the automotive industry and identify significant issues where opportunities for improvement existed. The outcome was a portrayal of an industry under extreme competitive pressure, waiting for something to change but without a clear future state. What was apparent was that the competitive pressures, and thus the need to deliver more products without significantly increased resources, were not going to abate in the near future. PD has to 'deliver more with less', but a definition of success, and its associated measures in terms of the PD process, is difficult to frame.

    Project 2 therefore focused on performance measurement of the PD process by assessing four internationally diverse development projects carried out by the author's employer with four discrete customers. The projects all differed in their content and were carried out in different countries: the USA, Germany, India and Sweden. Whilst customer-specific and cultural aspects of the projects differed, the significant issue identified by the research was common across all of them. Traditional Key Performance Indicators (KPIs) of cost, time and scope were used but failed to predict issues in project delivery. The key finding was that if project information did not flow as originally planned, then resources were wasted, resulting in time and cost over-runs.

    Project 3 researched alternative solutions to the issue of monitoring information flow and proposes a specific method of indicating the likelihood of success in a project by identifying new PD measurement techniques for use within the automotive PD process. This new measurement criterion of information flow provides a predictive tool that significantly enhances the project control process, and the method of information flow tracking developed is new to the automotive PD profession. It was trialled on an existing project and was shown to identify specific issues with the Work-in-Progress (WIP) not found by traditional project management methods. The resulting indication of issues gave the organisation's management a substantially different insight into, and understanding of, project performance at a given point in time, and therefore enabled immediate changes in resource allocation to improve project performance. The implementation of these changes as a result of adopting information flow monitoring resulted in significantly improved project KPI performance.

    The contribution of this new PD management method has the potential to significantly impact the competitiveness of any company involved in the design and development process. Its benefits include improved understanding of project performance indicators, powerful predictive attributes resulting in better utilisation of company resources, and reductions in both project costs and lead times.
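    The thesis's information-flow tracking method is not described in enough detail here to reproduce, so the sketch below is only a toy illustration of one way an information-flow indicator might be computed: comparing planned against released information deliverables per week and flagging weeks where flow falls behind. All figures and thresholds are invented.

```python
# Toy illustration (not the thesis's method) of tracking information flow:
# compare deliverables planned vs actually released each week and flag
# weeks where work-in-progress builds up.
import pandas as pd

flow = pd.DataFrame({
    "week":     [1, 2, 3, 4, 5],
    "planned":  [10, 10, 12, 12, 14],   # information deliverables due
    "released": [10, 8, 6, 9, 14],      # deliverables actually released
})
flow["wip_backlog"] = (flow["planned"] - flow["released"]).cumsum()
flow["flow_ratio"] = flow["released"] / flow["planned"]

# Flag weeks where less than 80% of the planned information flowed on time.
at_risk = flow[flow["flow_ratio"] < 0.8]
print(flow)
print("Weeks needing intervention:", at_risk["week"].tolist())
```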
