1,399 research outputs found

    QoS control of E-business systems through performance modelling and estimation

    E-business systems provide the infrastructure whereby parties interact electronically via business transactions. At peak loads, these systems face large volumes of transactions and concurrent users, yet they are expected to maintain adequate performance levels. Over-provisioning is an expensive solution. A good alternative is adaptation of the system, managing and controlling its resources. We address these concerns by presenting a model that allows fast evaluation of performance metrics in terms of measurable or controllable parameters. The model can be used to (a) predict the performance of a system under given or assumed loading conditions and (b) choose the optimal configuration set-up of certain controllable parameters with respect to specified performance measures. Firstly, we analyze the characteristics of E-business systems. This analysis leads to the analytical model, which is sufficiently general to capture the behaviour of a large class of commonly encountered architectures. We propose an approximate solution which is numerically efficient and fast. By means of simulation, we show that its accuracy is acceptable over a wide range of system configurations and load levels. We further evaluate the approximate solution by comparing it to a real-life E-business system. A J2EE application of non-trivial size and complexity is deployed on a 2-tier system composed of the JBoss application server and a database server. We implement an infrastructure, fully integrated into the application server, capable of monitoring the E-business system and controlling its configuration parameters. Finally, we use this infrastructure to quantify both the static parameters of the model and the observed performance. The latter are then compared with the metrics predicted by the model, showing that the approximate solution is almost exact in predicting performance and that it assesses the optimal system configuration very accurately.
    EThOS - Electronic Theses Online Service, GB, United Kingdom
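
    The thesis's analytical model is not reproduced in this listing, but the workflow it enables — fast evaluation of a performance metric over controllable parameters, then selection of the best configuration — can be sketched with a textbook M/M/c queue standing in for the real model. The arrival rate, per-worker service rate, and response-time target below are invented for illustration.

        from math import factorial

        def mm_c_response_time(lam, mu, c):
            # Mean response time of an M/M/c queue (Erlang-C formula); a
            # hypothetical stand-in for the thesis's approximate multi-tier model.
            a = lam / mu                              # offered load in Erlangs
            rho = a / c                               # per-server utilisation
            if rho >= 1.0:
                return float("inf")                   # unstable configuration
            partial = sum(a**k / factorial(k) for k in range(c))
            p_wait = (a**c / factorial(c)) / ((1 - rho) * partial + a**c / factorial(c))
            return 1.0 / mu + p_wait / (c * mu - lam)

        # Choose the smallest worker-pool size meeting a 0.5 s target under an
        # assumed load of 80 req/s, with each worker serving 10 req/s.
        best_c = next(c for c in range(1, 64) if mm_c_response_time(80.0, 10.0, c) <= 0.5)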

    A System Architecture for Software-Defined Industrial Internet of Things

    Wireless sensor networks have been a driving force of the Industrial Internet of Things (IIoT) advancement in the process control and manufacturing industry. The emergence of IIoT opens great potential for ubiquitous field-device connectivity and manageability, with an integrated and standardized architecture from low-level device operations to high-level data-centric application interactions. This technological development requires software definability in the key architectural elements of IIoT, including wireless field devices, IIoT gateways, network infrastructure, and IIoT sensor cloud services. In this paper, a novel software-defined IIoT (SD-IIoT) is proposed to solve essential challenges in a holistic IIoT system, such as reliability, security, timeliness, scalability, and quality of service (QoS). A new IIoT system architecture is proposed based on the latest networking technologies such as WirelessHART, WebSocket, the IETF Constrained Application Protocol (CoAP), and software-defined networking (SDN). A new scheme based on CoAP and SDN is proposed to solve the QoS issues. Computer experiments in a case study demonstrate the effectiveness of the proposed system architecture.
    Comment: To be published by IEEE ICUWB-201
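
    The paper's CoAP+SDN QoS scheme is not detailed in the abstract; the sketch below only illustrates the general idea under stated assumptions: a gateway classifies CoAP traffic (the class names are invented here) and expresses the result as a simplified SDN flow rule steering each class into a priority queue. A real deployment would push OpenFlow rules through a controller rather than build dicts.

        # Assumed traffic classes and queue priorities (illustrative only).
        PRIORITY = {"alarm": 7, "control": 5, "telemetry": 1}

        def flow_rule(src_ip, dst_ip, coap_class):
            # Match CoAP traffic (UDP port 5683) between two hosts and steer it
            # into a priority queue according to its class.
            prio = PRIORITY.get(coap_class, 0)
            return {
                "match": {"ipv4_src": src_ip, "ipv4_dst": dst_ip, "udp_dst": 5683},
                "priority": prio,
                "actions": [{"set_queue": prio}, {"output": "normal"}],
            }

        rule = flow_rule("10.0.0.7", "10.0.0.1", "alarm")   # alarms get the top queue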

    Final report on the evaluation of RRM/CRRM algorithms

    Public deliverable of the EVEREST project. This deliverable provides a definition and a complete evaluation of the RRM/CRRM algorithms selected in D11 and D15, evolved and refined in an iterative process. The evaluation is carried out by means of simulations using the simulators provided in D07 and D14.
    Preprint

    From Social Data Mining to Forecasting Socio-Economic Crisis

    Socio-economic data mining has a great potential in terms of gaining a better understanding of problems that our economy and society are facing, such as financial instability, shortages of resources, or conflicts. Without large-scale data mining, progress in these areas seems hard or impossible. Therefore, a suitable, distributed data mining infrastructure and research centers should be built in Europe. It also appears appropriate to build a network of Crisis Observatories. They can be imagined as laboratories devoted to the gathering and processing of enormous volumes of data on both natural systems, such as the Earth and its ecosystem, and human techno-socio-economic systems, so as to gain early warnings of impending events. Reality mining provides the chance to adapt more quickly and more accurately to changing situations. Further opportunities arise from individually customized services, which however should be provided in a privacy-respecting way. This requires the development of novel ICT (such as a self-organizing Web), but most likely new legal regulations and suitable institutions as well. As long as such regulations are lacking on a world-wide scale, it is in the public interest that scientists explore what can be done with the huge data available. Big data do have the potential to change or even threaten democratic societies. The same applies to sudden and large-scale failures of ICT systems. Therefore, dealing with data must be done with a large degree of responsibility and care. Self-interests of individuals, companies or institutions have limits where the public interest is affected, and public interest is not a sufficient justification to violate human rights of individuals. Privacy is a high good, as confidentiality is, and damaging it would have serious side effects for society.
    Comment: 65 pages, 1 figure, Visioneer White Paper, see http://www.visioneer.ethz.c

    A Credit-based Home Access Point (CHAP) to Improve Application Quality on IEEE 802.11 Networks

    Increasing availability of high-speed Internet and wireless access points has allowed home users to connect not only their computers but various other devices to the Internet. Every device running different applications requires unique Quality of Service (QoS). It has been shown that delay-sensitive applications, such as VoIP, remote login and online game sessions, suffer increased latency in the presence of throughput-sensitive applications such as FTP and P2P. Currently, there is no mechanism at the wireless AP to mitigate these effects except explicitly classifying the traffic based on port numbers or host IP addresses. We propose CHAP, a credit-based queue management technique, to eliminate the explicit configuration process and dynamically adjust the priority of all the flows from different devices to match their QoS requirements and wireless conditions, improving application quality in home networks. An analytical model is used to analyze the interaction between flows and credits and the resulting queueing delays for packets. CHAP is evaluated using Network Simulator (NS2) under a wide range of conditions against First-In-First-Out (FIFO) and Strict Priority Queue (SPQ) scheduling algorithms. CHAP improves the quality of an online game, a VoIP session, a video streaming session, and a Web browsing activity by 20%, 3%, 93%, and 51%, respectively, compared to FIFO in the presence of an FTP download. CHAP provides these improvements, similar to SPQ, without an explicit classification of flows or a pre-configured scheduling policy. A Linux implementation of CHAP is used to evaluate its performance in a real residential network against FIFO. CHAP reduces the web response time by up to 85% compared to FIFO in the presence of a bulk file download. Our contributions include an analytic model for credit-based queue management, and the simulation and implementation of CHAP, which provides QoS with minimal configuration at the AP.
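
    CHAP's exact credit rules live in the paper's analytical model; the following is only a minimal sketch of the general credit-based idea, with invented accrual and charging rules: every flow earns credits at a rate reflecting its needs, an idle delay-sensitive flow therefore accumulates priority, and transmitting charges credits in proportion to packet size.

        from collections import deque

        class Flow:
            def __init__(self, name, credit_rate):
                self.name, self.credit_rate = name, credit_rate
                self.credits, self.queue = 0.0, deque()

        class CreditScheduler:
            def __init__(self, flows):
                self.flows = flows

            def tick(self):
                # Flows accrue credits each tick; a delay-sensitive flow that has
                # been idle builds up credits and wins the next contention.
                for f in self.flows:
                    f.credits += f.credit_rate

            def dequeue(self):
                backlogged = [f for f in self.flows if f.queue]
                if not backlogged:
                    return None
                f = max(backlogged, key=lambda fl: fl.credits)
                pkt = f.queue.popleft()
                f.credits -= len(pkt)         # sending charges credits by packet size
                return f.name, pkt

        voip, ftp = Flow("voip", 50.0), Flow("ftp", 5.0)   # assumed credit rates
        sched = CreditScheduler([voip, ftp])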

    A decentralized control and optimization framework for autonomic performance management of web-server systems

    Web-based services such as online banking and e-commerce are often hosted on distributed computing systems comprising heterogeneous and networked servers in a data-center setting. To operate such systems efficiently while satisfying stringent quality-of-service (QoS) requirements, multiple performance-related parameters must be dynamically tuned to track changing operating conditions. For example, the workload to be processed may be time varying and hardware/software resources may fail during system operation. To cope with their growing scale and complexity, such computing systems must become largely autonomic, capable of being managed with minimal human intervention. This study develops a distributed cooperative-control framework using concepts from optimal control theory and hybrid dynamical systems to adaptively manage the performance of computer clusters operating in dynamic and uncertain environments. As case studies, we focus on power management and dynamic resource provisioning problems in such clusters. First, we apply the control framework to minimize the power consumed by a server cluster under a time-varying workload. The overall power-management problem is decomposed into smaller sub-problems and solved in cooperative fashion by individual controllers on each server. This approach allows for the scalable control of large computing systems. The control framework also adapts to controller failures and allows for the dynamic addition and removal of controllers during system operation. We validate the proposed approach using a discrete-event simulator with real-world workload traces, and our results indicate that the controllers achieve a 55% reduction in power consumption when compared to an uncontrolled system in which each server operates at its maximum frequency at all times. We then develop a distributed resource provisioning framework to achieve differentiated QoS among multiple online services using concepts from hybrid control. We use a discrete hybrid automaton to model the operation of the computing cluster. The resource provisioning problem combining both QoS control and power management is then solved using a decentralized model predictive controller to maximize the operating profits generated by the cluster according to a specified service level agreement. Simulation results indicate that the controller generates 27% additional profit when compared to an uncontrolled system.
    Ph.D., Electrical Engineering -- Drexel University, 200
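
    The dissertation's controllers solve optimal-control problems; as a much-simplified sketch of the cooperative decomposition, each server below independently picks the lowest DVFS frequency whose service rate still covers its share of the workload. The frequency steps, cubic power model and parameter values are assumptions made for the example.

        FREQS = [1.0, 1.5, 2.0, 2.5, 3.0]            # assumed DVFS steps (GHz)

        def local_controller(load_share, service_per_ghz=40.0, k=10.0):
            # Pick the cheapest feasible frequency; saturate if overloaded.
            feasible = [f for f in FREQS if f * service_per_ghz >= load_share]
            f = min(feasible) if feasible else max(FREQS)
            return f, k * f**3                        # frequency and power (p = k*f^3)

        # Cooperative decomposition: a 300 req/s workload split across 4 servers,
        # each solving its own sub-problem with no central coordinator.
        settings = [local_controller(300.0 / 4) for _ in range(4)]
        total_power = sum(p for _, p in settings)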

    Performance evaluation of AAL2 over IP in the UMTS access network Iub interface

    Bibliography: leaves 84-86.
    In this study, we proposed to retain AAL2 and lay it over IP (AAL2/IP). The IP-based Iub interface is therefore designed to tunnel AAL2 channels from the Node B to the RNC. Currently, IP routes packets on a best-effort basis, which does not guarantee QoS. To provide QoS, MPLS integrated with DiffServ is proposed to support different QoS levels for different classes of service and to fast-forward the IP packets within the Iub interface. To evaluate the performance of AAL2/IP in the Iub interface, a test-bed was created.
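
    As a rough illustration of the tunnelling idea (not the thesis's actual implementation), the sketch below packs a minimal AAL2 CPS packet header — CID, LI, UUI and HEC fields per ITU-T I.363.2, with the HEC left as zero — into a UDP datagram whose DSCP marking requests DiffServ expedited forwarding. The destination port and the class-to-DSCP mapping are assumptions.

        import socket

        DSCP_EF = 46   # Expedited Forwarding, assumed for voice-bearing channels

        def aal2_cps_packet(cid, uui, payload):
            # Header layout: CID(8 bits) | LI(6) | UUI(5) | HEC(5); HEC left as 0.
            li = len(payload) - 1                     # LI encodes length minus one
            header = (cid << 16) | (li << 10) | (uui << 5)
            return header.to_bytes(3, "big") + payload

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)  # DiffServ mark
        sock.sendto(aal2_cps_packet(cid=8, uui=0, payload=b"\x00" * 20),
                    ("192.0.2.1", 9000))              # example address and port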

    Ubik: efficient cache sharing with strict QoS for latency-critical workloads

    Chip-multiprocessors (CMPs) must often execute workload mixes with different performance requirements. On one hand, user-facing, latency-critical applications (e.g., web search) need low tail (i.e., worst-case) latencies, often in the millisecond range, and have inherently low utilization. On the other hand, compute-intensive batch applications (e.g., MapReduce) only need high long-term average performance. In current CMPs, latency-critical and batch applications cannot run concurrently due to interference on shared resources. Unfortunately, prior work on quality of service (QoS) in CMPs has focused on guaranteeing average performance, not tail latency. In this work, we analyze several latency-critical workloads, and show that guaranteeing average performance is insufficient to maintain low tail latency, because microarchitectural resources with state, such as caches or cores, exert inertia on instantaneous workload performance. Last-level caches impart the highest inertia, as workloads take tens of milliseconds to warm them up. When left unmanaged, or when managed with conventional QoS frameworks, shared last-level caches degrade tail latency significantly. Instead, we propose Ubik, a dynamic partitioning technique that predicts and exploits the transient behavior of latency-critical workloads to maintain their tail latency while maximizing the cache space available to batch applications. Using extensive simulations, we show that, while conventional QoS frameworks degrade tail latency by up to 2.3x, Ubik simultaneously maintains the tail latency of latency-critical workloads and significantly improves the performance of batch applications.
    United States. Defense Advanced Research Projects Agency (Power Efficiency Revolution For Embedded Computing Technologies, Contract HR0011-13-2-0005)
    National Science Foundation (U.S.) (Grant CCF-1318384)
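
    Ubik's prediction of transient cache behaviour is the paper's core contribution and is not reproduced here; the toy sketch below only captures the surrounding policy, under invented numbers: while the latency-critical workload is still warming up the last-level cache, it receives a boosted way allocation, and once warm the spare ways are donated to the batch workload.

        TOTAL_WAYS = 16                    # assumed last-level cache associativity
        STEADY_LC_WAYS, BOOST_LC_WAYS = 6, 12

        def partition(lc_warmup_remaining_ms):
            # Boost the latency-critical (LC) partition during its warm-up
            # transient; shrink it to the steady allocation afterwards.
            lc = BOOST_LC_WAYS if lc_warmup_remaining_ms > 0 else STEADY_LC_WAYS
            return {"latency_critical": lc, "batch": TOTAL_WAYS - lc}

        print(partition(lc_warmup_remaining_ms=30))   # warming up: LC holds 12 ways
        print(partition(lc_warmup_remaining_ms=0))    # steady state: LC back to 6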

    Discrete Event Simulations

    Considered by many authors as a technique for modelling stochastic, dynamic and discretely evolving systems, discrete event simulation (DES) has gained widespread acceptance among practitioners who want to represent and improve complex systems. Since DES is applied in widely differing areas, this book reflects many different points of view about DES: each author describes how it is understood and applied within their context of work, providing an extensive understanding of what DES is. It can be said that the name of the book itself reflects the plurality of these points of view. The book embraces a number of topics covering theory, methods and applications in a wide range of sectors and problem areas, categorised into five groups. Beyond this variety of viewpoints, one further thing stands out about this book: its richness in actual data and analysis based on actual data. While most academic works lack application cases, roughly half of the chapters included in this book deal with actual problems or are at least based on actual data. The editor therefore firmly believes that this book will be interesting for both beginners and practitioners in the area of DES.
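
    For readers new to the technique, the essence of discrete event simulation — a clock that jumps between scheduled events rather than ticking uniformly — fits in a few lines. This minimal event-loop kernel is a generic illustration, not taken from any chapter of the book.

        import heapq, itertools

        class Simulator:
            def __init__(self):
                self.now = 0.0
                self._events = []
                self._ids = itertools.count()   # tie-breaker for simultaneous events

            def schedule(self, delay, action):
                heapq.heappush(self._events, (self.now + delay, next(self._ids), action))

            def run(self, until):
                # Advance the clock directly to each next event time.
                while self._events and self._events[0][0] <= until:
                    self.now, _, action = heapq.heappop(self._events)
                    action()

        sim = Simulator()
        def arrival():                           # a source emitting an arrival every 2.0
            print(f"t={sim.now:.1f}: arrival")
            sim.schedule(2.0, arrival)
        sim.schedule(0.0, arrival)
        sim.run(until=6.0)                       # prints arrivals at t=0, 2, 4, 6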

    What's That Noise? Or, a Case Against Digital Privacy as a Matter of Regulation and Control

    Digital privacy is typically understood as the restriction of access to personal information and user data. This assumes regulation and control on the part of governments and corporations, realized through various laws and policies. However, there exists another realm bearing on digital privacy. This realm involves a wider network of actors carrying out practices and techniques beyond merely governmental and corporate means: users who engage and manipulate digital privacy software that is created by coders, as well as the software itself for the ways in which it mediates the relationship between users and coders. The dissertation argues that by focusing attention on this other realm of coders, users and software interacting with one another, we as analysts develop alternative understandings of digital privacy, specifically by attending to each actor's noisemaking: the deliberate (or even incidental) process of obfuscating, interrupting, precluding, confusing or misleading access to digital information. The dissertation analyzes how each of these three actors engages in noisemaking across three different types of encrypted Internet systems: The Onion Router web browser; the WhatsApp instant messaging service; the SpiderOak One file hosting service. These relatively taken-for-granted actors instruct the academy that digital privacy is less about regulating and controlling information than about surrendering control over information management and security. The dissertation demonstrates that digital privacy thus ought to be understood as a reflection of the variegated, contingent and incidental nature of social and political forces unfolding at the edge of, and even beyond, the purview of governments and corporations.