
    Cloud based testing of business applications and web services

    This paper deals with testing applications built on the principles of cloud computing, and aims to describe the options for testing business software in clouds (cloud testing). It identifies the needs that cloud testing tools must address, including multi-layer testing, service level agreement (SLA) based testing, large-scale simulation, and on-demand test environments. In a cloud-based model, ICT services are distributed and accessed over networks such as an intranet or the internet; large data centers deliver resources on demand as a service, eliminating the need for investments in specific hardware, software, or data center infrastructure. Businesses can apply these technologies in the context of intellectual capital management to lower costs and increase competitiveness and earnings. Based on a comparison of testing tools and techniques, the paper further investigates future trends in the research and development of cloud-based testing tools. Such a comparison and classification of cloud testing tools covers a new area and has not been attempted before.
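
    A minimal sketch of what SLA-based testing could look like in practice, assuming a hypothetical service endpoint and made-up SLA thresholds (the URL, latency ceiling, and availability floor below are illustrative, not taken from the paper):

```python
import time
import statistics
import urllib.request

# Hypothetical SLA parameters -- illustrative values, not from the paper.
SLA_MAX_LATENCY_MS = 500      # agreed 95th-percentile response-time ceiling
SLA_MIN_SUCCESS_RATE = 0.99   # agreed fraction of successful requests
ENDPOINT = "http://example.com/api/orders"  # placeholder endpoint

def probe(url: str) -> tuple[bool, float]:
    """Issue one request and return (succeeded, latency in ms)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            ok = 200 <= resp.status < 300
    except OSError:
        ok = False
    return ok, (time.perf_counter() - start) * 1000

def sla_test(url: str, samples: int = 100) -> None:
    """Probe the service repeatedly and assert the SLA holds."""
    results = [probe(url) for _ in range(samples)]
    latencies = [ms for ok, ms in results if ok]
    success_rate = sum(ok for ok, _ in results) / samples
    # 19 cut points with n=20; the last one is the 95th percentile.
    p95 = statistics.quantiles(latencies, n=20)[-1] if latencies else float("inf")
    assert success_rate >= SLA_MIN_SUCCESS_RATE, f"availability {success_rate:.2%} below SLA"
    assert p95 <= SLA_MAX_LATENCY_MS, f"p95 latency {p95:.0f} ms exceeds SLA"

if __name__ == "__main__":
    sla_test(ENDPOINT)
```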

    Two ways to Grid: the contribution of Open Grid Services Architecture (OGSA) mechanisms to service-centric and resource-centric lifecycles

    Service Oriented Architectures (SOAs) support service lifecycle tasks, including Development, Deployment, Discovery and Use. We observe that there are two disparate ways to use Grid SOAs such as the Open Grid Services Architecture (OGSA) as exemplified in the Globus Toolkit (GT3/4). One is the traditional enterprise SOA use, where end-user services are developed, deployed, and resourced behind firewalls for use by external consumers: a service-centric (or ‘first-order’) approach. The other supports end-user development, deployment, and resourcing of applications across organizations via the use of execution and resource management services: a resource-centric (or ‘second-order’) approach. We analyze and compare the two approaches using a combination of empirical experiments and an architectural evaluation methodology (scenarios, mechanisms, and quality attributes) to reveal common and distinct strengths and weaknesses. The impact of potential improvements (which are likely to be manifested by GT4) is estimated, and opportunities for alternative architectures and technologies are explored. We conclude by investigating whether the two approaches can be converged or combined, and whether they are compatible on shared resources.
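
    To make the distinction concrete, here is an illustrative sketch contrasting the two lifecycles. This is not Globus Toolkit code; the endpoints, payloads, and function names are hypothetical. In the first-order case the consumer invokes a service the provider has already deployed; in the second-order case the consumer hands its own application to a generic execution-management service:

```python
import json
import urllib.request

def call_first_order(service_url: str, payload: dict) -> dict:
    """Service-centric ('first-order') use: invoke an end-user service that
    the provider developed, deployed, and resourced behind its firewall."""
    req = urllib.request.Request(
        service_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def run_second_order(exec_service_url: str, app_bundle: bytes) -> dict:
    """Resource-centric ('second-order') use: submit our *own* application to
    an execution-management service, which deploys and resources it for us
    on shared infrastructure and returns a job handle to poll."""
    req = urllib.request.Request(
        exec_service_url,
        data=app_bundle,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

# First order: consume a finished service (hypothetical URL).
# result = call_first_order("https://provider.example/quote", {"symbol": "ACME"})
# Second order: stage our own code through an execution service (hypothetical URL).
# job = run_second_order("https://grid.example/exec/submit",
#                        open("app.tar.gz", "rb").read())
```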

    Performance of Network and Service Monitoring Frameworks

    The efficiency and performance of management systems are becoming a hot research topic within the networks and services management community. This concern is due to the new challenges of large-scale managed systems, where the management plane is integrated within the functional plane and where management activities have to carry accurate and up-to-date information. We defined a set of primary and secondary metrics to measure the performance of a management approach. Secondary metrics are derived from the primary ones and mainly quantify the efficiency, the scalability, and the impact of management activities. To validate our proposals, we designed and developed a benchmarking platform dedicated to measuring the performance of a JMX manager-agent based management system. The second part of our work deals with the collection of measurement data sets from our JMX benchmarking platform. We mainly studied the effect of both the load and the number of agents on scalability, the impact of management activities on the user-perceived performance of a managed server, and the delays of JMX operations when carrying variable values. Our findings show that most of these delays follow a Weibull statistical distribution. We used this statistical model to study the behavior of a monitoring algorithm proposed in the literature under a heavy-tailed delay distribution. In this case, the view of the managed system on the manager side becomes noisy and out of date.
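
    As a rough illustration of the paper's finding, one can sample operation delays from a Weibull distribution and observe how often a polling manager's view of the agent becomes stale. The shape and scale parameters below are made up for illustration; the paper fits its own parameters from JMX measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Weibull parameters: shape < 1 gives a heavy right tail,
# so a few operations take far longer than the typical case.
shape, scale_ms = 0.7, 50.0

# Simulate delays for 10,000 JMX-style get-attribute operations.
delays_ms = scale_ms * rng.weibull(shape, size=10_000)

poll_interval_ms = 100.0  # hypothetical: manager polls the agent every 100 ms
stale = delays_ms > poll_interval_ms  # reply arrives after the next poll fires

print(f"mean delay:   {delays_ms.mean():6.1f} ms")
print(f"median delay: {np.median(delays_ms):6.1f} ms")
print(f"stale views:  {stale.mean():6.1%} of polls")
```

    With heavy-tailed delays the mean sits well above the median, and a non-trivial fraction of replies arrive after the next poll, which is one way the manager's view becomes noisy and out of date.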

    iTeleScope: Intelligent Video Telemetry and Classification in Real-Time using Software Defined Networking

    Video continues to dominate network traffic, yet operators today have poor visibility into the number, duration, and resolutions of the video streams traversing their domain. Current approaches are inaccurate, expensive, or unscalable, as they rely on statistical sampling, middle-box hardware, or packet inspection software. We present iTeleScope, the first intelligent, inexpensive, and scalable SDN-based solution for identifying and classifying video flows in real time. Our solution is novel in combining dynamic flow rules with telemetry and machine learning, and is built on commodity OpenFlow switches and open-source software. We develop a fully functional system, train it in the lab using multiple machine learning algorithms, and validate its performance, showing over 95% accuracy in identifying and classifying video streams from many providers including YouTube and Netflix. Lastly, we conduct tests to demonstrate its scalability to tens of thousands of concurrent streams, and deploy it live on a campus network serving several hundred real users. Our system gives operators of enterprise and carrier networks unprecedented fine-grained real-time visibility of video streaming performance at very low cost.
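
    A hedged sketch of the general recipe (flow-level telemetry features fed to an off-the-shelf classifier), not the paper's actual pipeline: the feature set and training rows below are synthetic placeholders standing in for counters an OpenFlow switch could report per flow:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row describes one flow over a measurement window, using features that
# per-flow switch counters can yield:
# [mean packet size (bytes), byte rate (B/s), packet rate (pkt/s), duration (s)].
# Values are synthetic placeholders, not the paper's dataset.
X_train = np.array([
    [1400.0, 4.0e5, 300.0, 120.0],   # long, steady high-rate flow -> video-like
    [1350.0, 3.5e5, 260.0, 300.0],
    [ 200.0, 1.0e3,   5.0,   2.0],   # short, low-rate flow -> non-video
    [ 400.0, 5.0e3,  12.0,   8.0],
])
y_train = np.array([1, 1, 0, 0])     # 1 = video stream, 0 = other traffic

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Classify a new flow from its latest telemetry snapshot.
new_flow = np.array([[1380.0, 3.8e5, 280.0, 60.0]])
print("video" if clf.predict(new_flow)[0] == 1 else "other")
```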