
    A Framework for Developing Real-Time OLAP algorithm using Multi-core processing and GPU: Heterogeneous Computing

    The rapidly growing volume of stored data has pushed researchers to look for methods that exploit it optimally, and most of these methods face a response-time problem caused by the sheer size of the data. Most solutions propose materialization as the preferred remedy; however, materialization alone cannot deliver real-time answers. In this paper we propose a framework that identifies the barriers to achieving real-time OLAP answers, which are widely used in decision support systems and data warehouses, together with the solutions suggested for overcoming them.
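
    The abstract gives no code; as a rough illustration of the multi-core side of such a framework, the sketch below fans a partitioned fact table out to worker processes and merges the partial aggregates. The column layout, the partial_rollup helper, and the use of Python's multiprocessing are assumptions for illustration, not the paper's implementation.

        # Illustrative multi-core roll-up over a partitioned fact table
        # (not the paper's algorithm; data layout is an assumption).
        from collections import Counter
        from multiprocessing import Pool

        def partial_rollup(rows):
            """Aggregate amounts per region within one partition."""
            acc = Counter()
            for region, amount in rows:
                acc[region] += amount
            return acc

        def parallel_rollup(partitions, workers=4):
            """Fan partitions out to worker processes, then merge the partial cubes."""
            with Pool(workers) as pool:
                partials = pool.map(partial_rollup, partitions)
            total = Counter()
            for p in partials:
                total.update(p)
            return total

        if __name__ == "__main__":
            data = [[("EU", 10.0), ("US", 5.0)], [("EU", 2.5), ("APAC", 7.0)]]
            print(parallel_rollup(data))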

    FDMC: Framework for Decision Making in Cloud for Efficient Resource Management

    Effective resource management is one of the critical success factors for the virtualization process in cloud computing in the presence of dynamic user demands. A review of existing research on resource management in the cloud shows that there is still considerable scope for improvement: existing techniques do not fully exploit the capabilities of virtual machines when performing resource allocation. This paper presents FDMC, a Framework for Decision Making in Cloud, which gives VMs a better capability to perform resource allocation. The contribution of FDMC is a joint operation of VMs that ensures faster task processing and can therefore withstand increasing traffic. The study outcome was compared with existing systems and shows that FDMC performs better in terms of task allocation time, amount of cores wasted, amount of storage wasted, and communication cost.
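
    FDMC's decision logic is not spelled out in the abstract; the following is a hypothetical greedy allocation sketch in the spirit of its goals (minimizing wasted cores and storage). The VM fields, the waste metric, and the allocate helper are illustrative assumptions, not the FDMC algorithm.

        # Hypothetical greedy allocator aiming at low core/storage waste;
        # not the actual FDMC decision procedure.
        from dataclasses import dataclass

        @dataclass
        class VM:
            name: str
            free_cores: int
            free_storage_gb: int

        def allocate(task_cores, task_storage_gb, vms):
            """Pick the feasible VM that leaves the least unused capacity behind."""
            feasible = [vm for vm in vms
                        if vm.free_cores >= task_cores and vm.free_storage_gb >= task_storage_gb]
            if not feasible:
                return None
            best = min(feasible, key=lambda vm: (vm.free_cores - task_cores)
                                                + (vm.free_storage_gb - task_storage_gb))
            best.free_cores -= task_cores
            best.free_storage_gb -= task_storage_gb
            return best

        vms = [VM("vm1", 8, 100), VM("vm2", 4, 40)]
        print(allocate(2, 30, vms).name)  # -> vm2 (tighter fit, less waste)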

    Supply chain management: An opportunity for metaheuristics

    In today’s highly competitive and global marketplace, the pressure on organizations to find new ways to create and deliver value to customers grows ever stronger. In the last two decades, logistics and the supply chain have moved to center stage. There has been a growing recognition that it is through effective management of the logistics function and the supply chain that the goals of cost reduction and service enhancement can be achieved. The key to success in Supply Chain Management (SCM) is a heavy emphasis on integration of activities, cooperation, coordination, and information sharing throughout the entire supply chain, from suppliers to customers. Responding to the challenge of integration requires sophisticated decision support systems based on powerful mathematical models and solution techniques, together with advances in information and communication technologies. Industry and academia have become increasingly interested in SCM as a way to respond to the problems and issues posed by changes in logistics and the supply chain. We present a brief discussion of the important issues in SCM. We then argue that metaheuristics can play an important role in solving complex supply chain problems that derive from the importance of designing and managing the entire supply chain as a single entity. We focus especially on Iterated Local Search, Tabu Search, and Scatter Search as methods with, but not limited to, great potential for SCM-related problems, and we briefly present some successful applications.
    Keywords: supply chain management, metaheuristics, iterated local search, tabu search, scatter search
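
    As a concrete illustration of one of the metaheuristics named above, the sketch below is a textbook iterated local search skeleton; the pairwise-swap neighbourhood, the perturbation strength, and the toy cost function are assumptions for illustration, not taken from the paper.

        # Textbook iterated local search (ILS) skeleton; the neighbourhood and
        # cost function are placeholders for a concrete supply-chain problem.
        import random

        def local_search(solution, cost):
            """Improve the solution by accepting cost-reducing pairwise swaps."""
            improved = True
            while improved:
                improved = False
                for i in range(len(solution)):
                    for j in range(i + 1, len(solution)):
                        cand = solution[:]
                        cand[i], cand[j] = cand[j], cand[i]
                        if cost(cand) < cost(solution):
                            solution, improved = cand, True
            return solution

        def perturb(solution, strength=3):
            """Random swaps to escape the current local optimum."""
            cand = solution[:]
            for _ in range(strength):
                i, j = random.sample(range(len(cand)), 2)
                cand[i], cand[j] = cand[j], cand[i]
            return cand

        def iterated_local_search(initial, cost, iterations=100):
            best = local_search(initial, cost)
            for _ in range(iterations):
                candidate = local_search(perturb(best), cost)
                if cost(candidate) < cost(best):
                    best = candidate
            return best

        # Toy usage: order four deliveries against illustrative due positions.
        due = [4, 1, 3, 2]
        cost = lambda order: sum(abs(pos - due[d]) for pos, d in enumerate(order))
        print(iterated_local_search(list(range(len(due))), cost))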

    Dependence-driven techniques in system design

    Burstiness in workloads is often found in multi-tier architectures, storage systems, and communication networks. This feature is extremely important in system design because it can significantly degrade system performance and availability. This dissertation focuses on how to use knowledge of burstiness to develop new techniques and tools for performance prediction, scheduling, and resource allocation under bursty workload conditions.

    For multi-tier enterprise systems, burstiness in the service times is catastrophic for performance. Via detailed experimentation, we identify the cause of performance degradation as the persistent bottleneck switch among the various servers. This results in unstable behavior that cannot be captured by existing capacity planning models. Beyond identifying the cause and effects of bottleneck switch in multi-tier systems, we also propose modifications to the classic TPC-W benchmark to emulate bursty arrivals in multi-tier systems.

    This dissertation also demonstrates how burstiness can be used to improve system performance. Two dependence-driven scheduling policies, SWAP and ALoC, are developed. These general scheduling policies counteract burstiness in workloads and maintain high availability by delaying selected requests that contribute to burstiness. Extensive experiments show that both SWAP and ALoC achieve good estimates of service times based on the knowledge of burstiness in the service process. As a result, SWAP successfully approximates shortest-job-first (SJF) scheduling without requiring a priori information on job service times. ALoC adaptively controls system load by indefinitely delaying only a small fraction of the incoming requests.

    The knowledge of burstiness can also be used to forecast the length of idle intervals in storage systems. In practice, background activities are scheduled during system idle times. The scheduling of background jobs is crucial for the performance degradation of foreground jobs and the utilization of idle times. New background scheduling schemes are designed to determine when and for how long idle times can be used for serving background jobs, without violating predefined performance targets of foreground jobs. Extensive trace-driven simulation results illustrate that the proposed schemes are effective and robust over a wide range of system conditions. Furthermore, if there is burstiness within idle times, then maintenance features like disk scrubbing and intra-disk data redundancy can be successfully scheduled as background activities during idle times.
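
    As a rough illustration of the idea behind SWAP (deferring requests that arrive during bursts of long service times so that likely short jobs run first), here is a simplified, hypothetical queue sketch; the window size, burst detector, and class interface are assumptions, not the dissertation's actual policy or estimator.

        # Simplified burstiness-aware queue in the spirit of SWAP; thresholds
        # and the burst detector are illustrative assumptions.
        from collections import deque

        class BurstAwareQueue:
            def __init__(self, window=20, factor=2.0):
                self.fast, self.deferred = deque(), deque()
                self.history = deque(maxlen=window)  # recent observed service times
                self.factor = factor

            def in_burst(self):
                """Flag a burst when the last few service times far exceed the window mean."""
                if len(self.history) < self.history.maxlen:
                    return False
                recent = list(self.history)[-5:]
                overall = sum(self.history) / len(self.history)
                return sum(recent) / len(recent) > self.factor * overall

            def submit(self, request):
                (self.deferred if self.in_burst() else self.fast).append(request)

            def next_request(self):
                if self.fast:
                    return self.fast.popleft()
                return self.deferred.popleft() if self.deferred else None

            def record_service_time(self, seconds):
                self.history.append(seconds)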

    Workload Interleaving with Performance Guarantees in Data Centers

    In the era of global, large-scale data centers residing in clouds, many applications and users share the same pool of resources to reduce energy and operating costs and to improve availability and reliability. Along with these benefits, resource sharing also introduces performance challenges: when multiple workloads access the same resources concurrently, contention may occur and introduce delays in the performance of individual workloads. Providing performance isolation to individual workloads requires effective management methodologies. The challenge in deriving such methodologies lies in finding accurate, robust, and compact metrics and models to drive algorithms that can meet different performance objectives while achieving efficient utilization of resources. This dissertation proposes a set of methodologies aimed at solving the performance isolation problem of workload interleaving in data centers, focusing on both storage and computing components.

    At the storage node level, we focus on methodologies for better interleaving user traffic with background workloads, such as tasks for improving reliability, availability, and power savings. More specifically, we develop a scheduling policy for background workloads based on the statistical characteristics of the system's busy periods, and a methodology that quantitatively estimates the performance impact of power savings.

    At the storage cluster level, we consider methodologies for efficiently conducting work consolidation and scheduling asynchronous updates without violating user performance targets. More specifically, we develop a framework that estimates beforehand the benefits and overheads of each option in order to automate the process of reaching intelligent consolidation decisions while achieving faster eventual consistency.

    At the computing node level, we focus on improving workload interleaving on off-the-shelf servers, as they are the basic building blocks of large-scale data centers. We develop priority scheduling middleware that employs different policies to schedule background tasks based on the instantaneous resource requirements of the high-priority applications running on the server node.

    Finally, at the computing cluster level, we investigate popular computing frameworks for large-scale data-intensive distributed processing, such as MapReduce and its Hadoop implementation. We develop a new Hadoop scheduler called DyScale that exploits capabilities offered by heterogeneous cores in order to achieve a variety of performance objectives.
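
    As an illustration of the priority-middleware idea at the computing node level, the sketch below runs background tasks only while measured CPU utilization leaves headroom for the high-priority application; the psutil-based probe and the 70% threshold are assumptions, not the dissertation's middleware.

        # Hedged sketch: drain background work only when the foreground leaves
        # CPU headroom. Threshold and probe are illustrative choices.
        import psutil  # third-party; pip install psutil

        CPU_HEADROOM_THRESHOLD = 70.0  # percent; assumed cut-off

        def run_background_tasks(tasks, poll_interval=1.0):
            """Run 'tasks' (a list of callables) one at a time while CPU usage is low."""
            while tasks:
                if psutil.cpu_percent(interval=poll_interval) < CPU_HEADROOM_THRESHOLD:
                    task = tasks.pop(0)
                    task()  # run one unit of background work
                # else: foreground is busy; back off and re-check on the next poll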

    Fog computing scheduling algorithm for smart city

    With the growing number of smart devices across the globe, the number of users on the Internet keeps increasing. The main aim of the fog computing (FC) paradigm is to connect a huge number of smart objects (billions of objects), which can make a bright future for smart cities. Because of these large deployments, smart devices are expected to generate huge amounts of data and forward that data through the Internet. FC also refers to an edge computing framework that mitigates this issue by pushing knowledge discovery and data analysis to the edge. Thus, FC approaches can work together with the Internet of Things (IoT) world to build a sustainable infrastructure for smart cities. In this paper, we propose a scheduling algorithm, namely a weighted round-robin (WRR) scheduling algorithm, to execute tasks from one fog node (FN) to another fog node and on to the cloud. Firstly, a fog simulator is used with the emergent concept of FC to design an IoT infrastructure for smart cities. Then, the spanning tree protocol (STP) is used for data collection and routing. Further, 5G networks are proposed to establish fast transmission and communication between users. Finally, the performance of our proposed system is evaluated in terms of response time, latency, and amount of data used.
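
    As a minimal illustration of the WRR dispatching idea, the sketch below expands per-node weights into a repeating schedule and assigns incoming tasks to fog nodes, with the cloud as one more target; the node names, weights, and task labels are illustrative assumptions, not the paper's configuration.

        # Minimal weighted round-robin (WRR) dispatcher across fog nodes.
        from itertools import cycle

        def build_wrr_cycle(nodes):
            """nodes: {name: weight}; yields node names in proportion to their weight."""
            schedule = [name for name, weight in nodes.items() for _ in range(weight)]
            return cycle(schedule)

        fog_nodes = {"fog-1": 3, "fog-2": 2, "cloud": 1}  # higher weight = more tasks
        dispatcher = build_wrr_cycle(fog_nodes)

        tasks = [f"sensor-batch-{i}" for i in range(8)]
        for task in tasks:
            node = next(dispatcher)
            print(f"{task} -> {node}")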

    Market and price decision enhancement services for farmers in Uganda

