1,388 research outputs found

    Supporting disconnection operations through cooperative hoarding

    Mobile clients often need to operate while disconnected from the network due to limited battery life and network coverage. Hoarding supports this by fetching frequently accessed data into clients' local caches prior to disconnection. Existing work on hoarding has focused on improving data accessibility for individual mobile clients. However, due to storage limitations, mobile clients may not be able to hoard every data object they need, leading to cache misses and disruption to clients' operations. In this paper, a new concept called cooperative hoarding is introduced to reduce the risk of cache misses for mobile clients. Cooperative hoarding takes advantage of group mobility behaviour, combined with peer cooperation in ad-hoc mode, to improve hoarding performance. Two cooperative hoarding approaches are proposed that take into account the access frequencies, connection probabilities, and cache sizes of mobile clients so that hoarding can be performed cooperatively. Simulation results show that the proposed methods significantly improve the cache hit ratio and provide better support for disconnected operation than existing schemes.
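
    As an illustrative sketch only (the paper's two algorithms are not reproduced here), cooperative hoarding can be pictured as a greedy assignment in Python: objects are ranked by access frequency and placed on the group member that still has cache space and the highest probability of staying reachable, so that a single hoarded replica can serve the whole group. All names and numbers below are assumptions for illustration.

        def cooperative_hoard(objects, clients):
            """objects: {obj_id: access_frequency};
            clients: dicts with 'id', 'cache_size' (slots) and
            'conn_prob' (probability of staying reachable in ad-hoc mode)."""
            plan = {c['id']: [] for c in clients}
            # Hoard the most frequently accessed objects first.
            for obj_id, freq in sorted(objects.items(), key=lambda kv: -kv[1]):
                # Prefer the peer most likely to remain connected that still
                # has free cache space, so one copy can serve the group.
                candidates = [c for c in clients
                              if len(plan[c['id']]) < c['cache_size']]
                if not candidates:
                    break
                best = max(candidates, key=lambda c: c['conn_prob'])
                plan[best['id']].append(obj_id)
            return plan

        clients = [{'id': 'A', 'cache_size': 2, 'conn_prob': 0.9},
                   {'id': 'B', 'cache_size': 2, 'conn_prob': 0.6}]
        print(cooperative_hoard({'x': 10, 'y': 7, 'z': 3, 'w': 1}, clients))
        # -> {'A': ['x', 'y'], 'B': ['z', 'w']}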

    Towards an Improved Hoarding Procedure in a Mobile Environment

    Frequent disconnection is a critical issue in wireless network communication, causing excessive delays in data delivery. In this paper, we formulate a management mechanism based on computational optimization to achieve efficient and fast computation, reducing the delay inherent in the hoarding process. The simulation results are evaluated with respect to hoard size and delivery time. Keywords: Hoarding Procedure, Mobile Computing Environment, Computational Optimization.
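
    The abstract does not specify the optimization mechanism, so the following is only one plausible framing, not the paper's method: hoard selection as a 0/1 knapsack, maximizing expected hit value under a hoard-size budget, solved by dynamic programming. Item names and values are invented.

        def select_hoard(items, budget):
            """items: list of (name, size, value); budget: total hoard size.
            Returns (best_total_value, chosen_item_names)."""
            # dp[b] = best (value, names) using at most b units of storage.
            dp = [(0, [])] * (budget + 1)
            for name, size, value in items:
                # Iterate budgets downward so each item is used at most once.
                for b in range(budget, size - 1, -1):
                    cand = dp[b - size][0] + value
                    if cand > dp[b][0]:
                        dp[b] = (cand, dp[b - size][1] + [name])
            return dp[budget]

        items = [('mail', 3, 8), ('maps', 5, 10), ('docs', 2, 4)]
        print(select_hoard(items, 7))   # -> (14, ['maps', 'docs'])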

    A new splitting-based displacement prediction approach for location-based services

    In location-based services (LBSs), service is provided based on users' locations through location determination and mobility realization. Several location prediction models have been proposed to enhance the relevance of the information retrieved by users of mobile information systems, but none of them has studied the relationship between prediction accuracy and the performance of the model in terms of the resource consumption and constraints of mobile devices. Most current location prediction research focuses on generalized location models in which the geographic extent is divided into regular-shaped cells. These models are not suitable for LBSs whose objective is to compute and present on-road services. One technique that deals with the inner structure of cells is the Prediction Location Model (PLM), but it suffers from high memory usage and poor accuracy. The main goal of this paper is to propose a new path prediction technique for location-based services. The new approach is more efficient than PLM with respect to measures such as location prediction accuracy and memory usage.
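
    The abstract does not spell out the splitting algorithm, so the sketch below is only a minimal baseline for on-road path prediction: a first-order transition table over road segments that predicts the most frequently observed next segment. A splitting-based approach would additionally refine how the road network is subdivided.

        from collections import defaultdict

        class PathPredictor:
            def __init__(self):
                # counts[current_segment][next_segment] = times observed
                self.counts = defaultdict(lambda: defaultdict(int))

            def observe(self, trajectory):
                # Record each consecutive segment-to-segment transition.
                for cur, nxt in zip(trajectory, trajectory[1:]):
                    self.counts[cur][nxt] += 1

            def predict(self, segment):
                # Most frequent successor, or None if the segment is unseen.
                nxt = self.counts.get(segment)
                return max(nxt, key=nxt.get) if nxt else None

        p = PathPredictor()
        p.observe(['s1', 's2', 's3'])
        p.observe(['s1', 's2', 's4'])
        p.observe(['s2', 's4'])
        print(p.predict('s2'))   # 's4': seen twice after 's2', vs 's3' once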

    Model-driven dual caching For nomadic service-oriented architecture clients

    Mobile devices have evolved over the years from resource-constrained devices that supported only the most basic tasks to powerful handheld computing devices. However, the most significant step in the evolution of mobile devices was the introduction of wireless connectivity, which enabled them to host applications that require internet connectivity, such as email, web browsers and, perhaps most importantly, smart/rich clients. Being able to host smart clients allows the users of mobile devices to seamlessly access the Information Technology (IT) resources of their organizations. One increasingly popular way of enabling access to IT resources is by using Web Services (WS). This trend has been aided by the ready availability of WS packages/tools, most notably the efforts of the Apache group and Integrated Development Environment (IDE) vendors. But the widespread use of WS raises the question of whether, and how, users of mobile devices such as laptops or PDAs can participate in WS. Unlike their “wired” counterparts (desktop computers and servers), they rely on a wireless network that is characterized by low bandwidth and unreliable connectivity. The aim of this thesis is to enable mobile devices to host Web Services consumers. It introduces a Model-Driven Dual Caching (MDDC) approach to overcome problems arising from temporary loss of connectivity and fluctuations in bandwidth.
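
    A hedged sketch of the dual-caching idea for a nomadic WS consumer (names such as fetch_remote and is_connected are hypothetical, not from the thesis): reads fall back to a local response cache while disconnected, and writes are queued locally and replayed once connectivity returns.

        class DualCachingClient:
            def __init__(self, fetch_remote, is_connected):
                self.fetch_remote = fetch_remote   # callable: request -> response
                self.is_connected = is_connected   # callable: () -> bool
                self.response_cache = {}           # request -> last response
                self.outbox = []                   # writes queued while offline

            def call(self, request, is_write=False):
                if self.is_connected():
                    self.flush()                   # replay queued writes first
                    response = self.fetch_remote(request)
                    self.response_cache[request] = response
                    return response
                if is_write:
                    self.outbox.append(request)    # defer until reconnected
                    return 'queued'
                # Disconnected read: serve the cached response if we have one.
                return self.response_cache.get(request, 'unavailable')

            def flush(self):
                while self.outbox:
                    self.fetch_remote(self.outbox.pop(0))

        net_up = [False]
        client = DualCachingClient(lambda r: 'resp(' + r + ')',
                                   lambda: net_up[0])
        print(client.call('getOrders'))   # 'unavailable' (offline, no cache)
        net_up[0] = True
        print(client.call('getOrders'))   # 'resp(getOrders)', now cached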

    User-activity aware strategies for mobile information access

    Information access suffers tremendously in wireless networks because of the low correlation between content transferred across low-bandwidth wireless links and the actual data used to serve user requests. As a result, conventional content access mechanisms face problems such as unnecessary bandwidth consumption and large response times, and users experience significant performance degradation. In this dissertation, we analyze the cause of these problems and find that the major reason for inefficient information access in wireless networks is the absence of user-activity awareness in current mechanisms. To solve these problems, we propose three user-activity aware strategies for mobile information access. Through simulations and implementations, we show that our strategies can outperform conventional information access schemes in terms of bandwidth consumption and user-perceived response times. Ph.D. Committee Chair: Raghupathy Sivakumar; Committee Members: Chuanyi Ji, George Riley, Magnus Egerstedt, Umakishore Ramachandra.
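
    The dissertation's three strategies are not named in the abstract; as a generic illustration of user-activity awareness only, the sketch below transfers just the content whose tag overlap with the user's current activity clears a relevance threshold, rather than shipping every object over the wireless link. Tags and the threshold are invented.

        def relevance(item_tags, activity_tags):
            # Jaccard overlap between content tags and the active task's tags.
            inter = len(item_tags & activity_tags)
            union = len(item_tags | activity_tags) or 1
            return inter / union

        def select_for_transfer(items, activity_tags, threshold=0.3):
            """items: {name: set_of_tags}. Returns names worth sending now."""
            return [name for name, tags in items.items()
                    if relevance(tags, activity_tags) >= threshold]

        items = {'report.pdf': {'sales', 'q3'},
                 'video.mp4': {'training'},
                 'forecast.xls': {'sales', 'q4'}}
        print(select_for_transfer(items, {'sales', 'q3'}))
        # -> ['report.pdf', 'forecast.xls']; 'video.mp4' stays on the server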

    An actor-model based bottom-up simulation - An experiment on Indian demonetisation initiative

    The dominance of cash-based transactions and the relentless growth of a shadow economy triggered a fiscal intervention by the Indian government, in which 86% of the total cash in circulation was pulled out in a sudden announcement on November 8, 2016. This disruptive initiative resulted in prolonged cash shortages, financial inconvenience, and a crisis for a cross-section of the country's population. Overall, the initiative has faced considerable criticism for being poorly thought through and inadequately planned. We claim that these adverse conditions could have been anticipated well in advance with an appropriate experimental setup. We further claim that the efficacy of possible courses of action for managing critical situations, and the probable consequences of those courses of action, could have been estimated in a laboratory setting. This paper justifies our claims with an experimental setup relying on what-if analysis using an actor-based bottom-up simulation approach.
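
    As a toy, uncalibrated illustration of the actor-based bottom-up style (every figure except the 86% withdrawal is invented), the sketch below lets household agents attempt daily cash expenses, invalidates 86% of cash at a set step, and re-issues cash gradually, so the emerging shortage becomes observable in the aggregate.

        import random

        random.seed(7)

        class Household:
            def __init__(self):
                self.cash = random.uniform(50, 200)

            def step(self):
                expense = random.uniform(5, 20)
                if self.cash >= expense:
                    self.cash -= expense
                    return True        # transaction succeeded
                return False           # unmet demand: a cash shortage

        agents = [Household() for _ in range(1000)]
        for day in range(30):
            if day == 10:              # demonetisation announcement
                for a in agents:
                    a.cash *= 0.14     # 86% of cash pulled out overnight
            if day > 10:
                for a in agents:
                    a.cash += 5        # slow re-issuance through banks
            failed = sum(not a.step() for a in agents)
            if day in (9, 10, 15, 25):
                print('day %2d: %d of %d households short of cash'
                      % (day, failed, len(agents)))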

    Contention management for distributed data replication

    PhD Thesis. Optimistic replication schemes provide distributed applications with access to shared data at lower latencies and greater availability. This is achieved by allowing clients to replicate shared data and execute actions locally. A consequence of this scheme is that it raises issues regarding shared data consistency. An action executed by a client may result in conflicting shared data and, as a consequence, may conflict with subsequent actions that depend on it. This requires a client to roll back to the action that caused the conflicting data and to execute some exception handling. This can be achieved by relying on the application layer to either ignore or handle shared data inconsistencies when they are discovered during the reconciliation phase of an optimistic protocol. Inconsistency of shared data has an impact on the causality relationship across client actions. In protocol design, it is desirable to preserve the property of causality between different actions occurring across a distributed application. Without application-level knowledge, we must assume that an action causes all subsequent actions at the same client. With application knowledge, we can significantly ease the protocol burden of provisioning causal ordering, as we can identify which actions do not cause other actions (even if they precede them). This, in turn, enables a client to roll back to past actions and change them without having to alter subsequent actions. Unfortunately, increased instances of application-level causal relations between actions lead to significant protocol overhead. Therefore, minimizing the rollback associated with conflicting actions while preserving causality is desirable, as it lowers exception handling in the application layer. In this thesis, we present a framework that utilizes causality to create a scheduler that can inform a contention management scheme to reduce the rollback associated with conflicting access to shared data. Our framework uses a backoff contention management scheme to preserve causality for those optimistic replication systems with high causality requirements, without the need for application-layer knowledge. We present experiments which demonstrate that our framework reduces clients' rollback and, more importantly, that the overall throughput of the system improves when the contention management scheme is used with applications that require causality to be preserved across all actions.
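
    A minimal sketch of the backoff idea, as an illustration of the general technique rather than the thesis framework: when a client's action conflicts with another client's hold on a shared object, it backs off exponentially and retries before resorting to rollback, reducing the exception handling pushed up to the application layer. All names here are assumptions.

        import random
        import time

        class ContentionManager:
            def __init__(self):
                self.locks = {}                    # object -> owning client

            def try_action(self, client, obj, max_tries=5):
                delay = 0.001
                for _ in range(max_tries):
                    owner = self.locks.get(obj)
                    if owner is None or owner == client:
                        self.locks[obj] = client   # action may proceed
                        return True
                    # Conflict: back off with jitter instead of rolling
                    # back immediately, hoping the owner finishes first.
                    time.sleep(delay * random.random())
                    delay *= 2                     # exponential backoff
                return False                       # caller must roll back

            def release(self, client, obj):
                if self.locks.get(obj) == client:
                    del self.locks[obj]

        cm = ContentionManager()
        print(cm.try_action('c1', 'x'))   # True: uncontended
        print(cm.try_action('c2', 'x'))   # False after backoff: c1 holds x
        cm.release('c1', 'x')
        print(cm.try_action('c2', 'x'))   # True now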

    Liquidity, Contagion and Financial Crisis

    We develop a theoretical model in which a redistribution of bank capital (e.g., due to reckless trading and/or faulty risk management) leads to a “freeze” of the interbank market. The fire-sale market plays a central role in spreading the crisis to the real economy. In a crisis, credit rationing and liquidity hoarding appear simultaneously; endogenous levels of collateral (or margin requirements) are affected by both low fire-sale prices and high lending rates. Relative to previous analyses, this dual channel generates stronger price and output effects. The main focus is on the policy analysis. We show that i) non-discriminating equity injections are more effective than liquidity injections, but in both cases the welfare effect is an order of magnitude lower than the price effect; ii) a discriminating policy that bails out only distressed banks is feasible but will be limited by incentive-compatibility constraints; iii) a restriction on international capital flows has an ambiguous effect on welfare. Keywords: Debt Deflation, Bailout, Liquidity Injection.