    On Optimal Update Policies and Cluster Sizes for 2-Tier Distributed Systems

    We analyze a generic model for 2-tier distributed systems, exploring the existence of optimal cluster sizes from an information-management perspective, such that the overall cost of updating and searching information is minimized by adopting a judiciously lazy update policy. We assume neither fully centralized coordination nor full decentralization, and since this is initial work, we argue only that such optimal policies exist rather than showing how system participants may discover them. We put our work in perspective using two examples from diverse domains of distributed systems: wireless cellular networks, which rely on centralized coordination, and peer-to-peer systems that use clusters (like Kazaa).
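    The update/search trade-off the abstract describes can be illustrated with a toy cost model (a hypothetical sketch with made-up parameters, not the paper's actual formulation): update cost grows with cluster size, since every member must eventually be refreshed, while search cost shrinks with it, and a laziness factor discounts the update term.

```python
# Toy cost model for the update-vs-search trade-off. The parameters
# (u, s, laziness) and the linear/inverse cost shapes are assumptions
# for illustration, not the paper's model.

def total_cost(n, N=10_000, u=1.0, s=5.0, laziness=0.5):
    update_cost = laziness * u * n   # refresh all n members of a cluster
    search_cost = s * (N / n)        # probe fewer, larger clusters
    return update_cost + search_cost

# Brute-force the optimal cluster size under these illustrative costs.
best_n = min(range(1, 10_001), key=total_cost)
print(f"optimal cluster size ~ {best_n}, cost = {total_cost(best_n):.1f}")
```

    Under this toy model the optimum lands near sqrt(s*N / (laziness*u)), which is the flavor of existence result the abstract argues for.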

    Cache Serializability: Reducing Inconsistency in Edge Transactions

    Read-only caches are widely used in cloud infrastructures to reduce access latency and load on backend databases. Operators view coherent caches as impractical at genuinely large scale, and many client-facing caches are updated asynchronously by best-effort pipelines. Existing solutions that support cache consistency are inapplicable to this scenario, since they require a round trip to the database on every cache transaction. Existing incoherent cache technologies are oblivious to transactional data access, even if the backend database supports transactions. We propose T-Cache, a novel caching policy for read-only transactions in which inconsistency is tolerable (won't cause safety violations) but undesirable (has a cost). T-Cache improves cache consistency despite asynchronous and unreliable communication between the cache and the database. We define cache-serializability, a variant of serializability that is suitable for incoherent caches, and prove that with unbounded resources T-Cache implements this new specification. With limited resources, T-Cache allows the system manager to choose a trade-off between performance and consistency. Our evaluation shows that T-Cache detects many inconsistencies with only nominal overhead. We use synthetic workloads to demonstrate the efficacy of T-Cache when data accesses are clustered, and its adaptive reaction to workload changes. With workloads based on real-world topologies, T-Cache detects 43-70% of the inconsistencies and increases the rate of consistent transactions by 33-58%.
    Comment: Ittay Eyal, Ken Birman, Robbert van Renesse, "Cache Serializability: Reducing Inconsistency in Edge Transactions," Distributed Computing Systems (ICDCS), IEEE 35th International Conference on, June 29 2015 - July 2 2015
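    As a rough illustration of the consistency check the abstract motivates, the sketch below flags a read-only transaction whose cached reads carry conflicting version dependencies. All names, the dependency encoding, and the check itself are assumptions for illustration; the actual T-Cache protocol tracks dependencies differently and operates within bounded resources.

```python
# Minimal sketch of version-based inconsistency detection for read-only
# transactions over an incoherent cache. Illustrative only; not the
# real T-Cache protocol.

from dataclasses import dataclass, field

@dataclass
class CachedItem:
    value: object
    version: int                              # backend version when cached
    deps: dict = field(default_factory=dict)  # key -> version it was consistent with

class TCacheSketch:
    def __init__(self):
        self.store = {}

    def put(self, key, value, version, deps=None):
        # 'deps' records versions of related keys at caching time,
        # delivered best-effort by the asynchronous update pipeline.
        self.store[key] = CachedItem(value, version, deps or {})

    def read_transaction(self, keys):
        """Return (values, consistent) for a read-only transaction."""
        items = {k: self.store[k] for k in keys}
        for item in items.values():
            for dep_key, dep_ver in item.deps.items():
                if dep_key in items and items[dep_key].version != dep_ver:
                    # Two cached reads reflect different database states.
                    return {k: i.value for k, i in items.items()}, False
        return {k: i.value for k, i in items.items()}, True

cache = TCacheSketch()
cache.put("account:A", 100, version=7, deps={"account:B": 3})
cache.put("account:B", 50, version=4)  # newer than the version A depends on
values, ok = cache.read_transaction(["account:A", "account:B"])
print(values, "consistent" if ok else "inconsistent")  # -> inconsistent
```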

    DEPAS: A Decentralized Probabilistic Algorithm for Auto-Scaling

    The dynamic provisioning of virtualized resources offered by cloud computing infrastructures allows applications deployed in a cloud environment to automatically increase and decrease the amount of resources they use. This capability is called auto-scaling, and its main purpose is to automatically adjust the scale of the system running the application so that the varying workload is satisfied with minimum resource usage. The need for auto-scaling is particularly important during workload peaks, when applications may need to scale up to extremely large systems. Both the research community and the main cloud providers have already developed auto-scaling solutions. However, most research solutions are centralized and not suitable for managing large-scale systems; moreover, cloud providers' solutions are bound to the limitations of a specific provider in terms of resource prices, availability, reliability, and connectivity. In this paper we propose DEPAS, a decentralized probabilistic auto-scaling algorithm integrated into a P2P architecture that is cloud-provider independent, thus allowing the auto-scaling of services over multiple cloud infrastructures at the same time. Our simulations, which are based on real service traces, show that our approach is capable of: (i) keeping the overall utilization of all the instantiated cloud resources in a target range, and (ii) maintaining service response times close to those obtained with optimal centralized auto-scaling approaches.
    Comment: Submitted to Springer Computing
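    The core idea of a decentralized probabilistic auto-scaler can be sketched as follows: each node independently decides, with a probability sized from its utilization estimate, whether to spawn or remove capacity, so that the expected aggregate action steers utilization back into the target range. The thresholds, probability formulas, and parameter names below are illustrative assumptions, not DEPAS's exact rules.

```python
# Sketch of a decentralized probabilistic scaling decision in the
# spirit of DEPAS. Each node runs this independently; no coordinator.

import random

TARGET_LOW, TARGET_HIGH = 0.5, 0.8   # assumed target utilization range

def scaling_decision(estimated_utilization, target=0.65):
    """Return +1 (spawn a new instance), -1 (shut self down), or 0.

    The probability is sized so that, in expectation over all nodes
    acting independently, the aggregate capacity change moves system
    utilization toward the target.
    """
    if estimated_utilization > TARGET_HIGH:
        # Fraction of extra capacity needed, spread probabilistically.
        p_add = estimated_utilization / target - 1.0
        return +1 if random.random() < p_add else 0
    if estimated_utilization < TARGET_LOW:
        p_remove = 1.0 - estimated_utilization / target
        return -1 if random.random() < p_remove else 0
    return 0

# One epoch of a simulated 100-node system at 90% average utilization:
decisions = [scaling_decision(0.9) for _ in range(100)]
print("spawned:", decisions.count(+1), "removed:", decisions.count(-1))
```

    With the numbers above, roughly 38 of 100 nodes would spawn an instance, bringing expected utilization from 0.9 down to about the 0.65 target without any node talking to a central controller.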

    Enhancing the Supply Chain Performance by Integrating Simulated and Physical Agents into Organizational Information Systems

    As the business environment becomes more complicated, organizations must be able to respond to business changes and adjust quickly to maintain their competitive advantage. This study proposes an integrated agent system, called SPA, which coordinates simulated and physical agents to provide an efficient way for organizations to meet the challenges of managing supply chains. In the integrated framework, physical agents coordinate with other organizations' physical agents to form workable business processes and detect variations occurring in the outside world, whereas simulated agents model and analyze what-if scenarios to support the physical agents' decision making. This study uses a supply chain that produces digital still cameras as an example to demonstrate how the SPA works. In this example, the individual information systems of the involved companies are equipped with the SPA, and the entire supply chain is modeled as a hierarchical object-oriented Petri net. The SPA applies a modified AGNES data-clustering technique and the moving-average approach to help each firm generalize customers' past demand patterns and forecast their future demands. The amplitude of forecasting errors caused by bullwhip effects is used as a metric to evaluate the degree to which the SPA affects supply chain performance. The experimental results show that the SPA benefits the entire supply chain by reducing bullwhip effects and forecasting errors in a dynamic environment.
    Keywords: Supply Chain Performance Enhancement; Bullwhip Effects; Simulated Agents; Physical Agents; Dynamic Customer Demand Pattern Discovery
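    The forecasting and evaluation steps mentioned above can be sketched with a simple moving average and a variance-ratio bullwhip metric (a common formulation; the paper describes its metric only as the amplitude of forecasting errors). The demand numbers, window size, and replenishment rule below are invented for illustration, and the modified-AGNES clustering and Petri-net model are omitted.

```python
# Moving-average demand forecasting plus a bullwhip measure, computed
# as the ratio of order variance to end-customer demand variance.
# All numbers are made up for illustration.

from statistics import pvariance

def moving_average_forecast(history, window=4):
    """Forecast next-period demand as the mean of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def bullwhip_ratio(demands, orders):
    """Variance amplification of orders relative to customer demand (>1 = bullwhip)."""
    return pvariance(orders) / pvariance(demands)

demand = [100, 120, 90, 110, 130, 95, 105, 125]
orders = []
for t in range(4, len(demand)):
    forecast = moving_average_forecast(demand[:t])
    # Naive replenishment: order the forecast plus an overreaction to the
    # gap between the latest observed demand and the forecast.
    orders.append(forecast + (demand[t] - forecast) * 1.5)

print("bullwhip ratio:", round(bullwhip_ratio(demand[4:], orders), 2))
```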

    Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey

    Wireless sensor networks (WSNs) consist of autonomous and resource-limited devices. The devices cooperate to monitor one or more physical phenomena within an area of interest. WSNs operate as stochastic systems because of randomness in the monitored environments. For long service time and low maintenance cost, WSNs require adaptive and robust methods to address data exchange, topology formation, resource and power optimization, sensing coverage and object detection, and security challenges. In these problems, sensor nodes must make optimized decisions from a set of accessible strategies to achieve design goals. This survey reviews numerous applications of the Markov decision process (MDP) framework, a powerful decision-making tool for developing adaptive algorithms and protocols for WSNs. Furthermore, various solution methods are discussed and compared to serve as a guide for using MDPs in WSNs.
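    As a concrete taste of how MDPs apply to WSN decisions, the sketch below runs value iteration for a toy sensor node choosing between sensing and sleeping under an energy budget. All states, rewards, and transition probabilities are invented for illustration, not taken from the survey.

```python
# Value iteration for a toy sensor-node MDP: trade detection reward
# against battery drain. States/rewards/transitions are invented.

STATES = ["high_battery", "low_battery"]
ACTIONS = ["sense", "sleep"]
GAMMA = 0.95

# P[s][a] = list of (next_state, probability); R[s][a] = expected reward.
P = {
    "high_battery": {"sense": [("high_battery", 0.7), ("low_battery", 0.3)],
                     "sleep": [("high_battery", 1.0)]},
    "low_battery":  {"sense": [("low_battery", 1.0)],
                     "sleep": [("high_battery", 0.5), ("low_battery", 0.5)]},
}
R = {
    "high_battery": {"sense": 2.0, "sleep": 0.0},
    "low_battery":  {"sense": 0.5, "sleep": 0.1},  # sleeping lets the node recharge
}

# Bellman backups until (near) convergence.
V = {s: 0.0 for s in STATES}
for _ in range(500):
    V = {s: max(R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])
                for a in ACTIONS)
         for s in STATES}

# Greedy policy extraction from the converged value function.
policy = {s: max(ACTIONS,
                 key=lambda a, s=s: R[s][a] + GAMMA *
                     sum(p * V[s2] for s2, p in P[s][a]))
          for s in STATES}
print(policy)
```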