9 research outputs found

    Dynamic generation of personalized hybrid recommender systems


    A Persistent Publish/Subscribe System for Mobile Edge Computing

    In recent times, we have seen incredible growth in users adopting mobile devices and wearables, and while the hardware capabilities of these devices have greatly increased year after year, mobile communications remain a bottleneck for most applications. This is partly caused by companies’ cloud infrastructure, which effectively acts as a large-scale communication hub where all kinds of platforms compete for the servers’ processing power and channel throughput. Additionally, the wireless technologies used in mobile environments are unreliable, slow, and congestion-prone by nature when compared to their wired counterparts. To reduce the back-and-forth mobile communication overhead, the “Edge” paradigm was recently introduced with the aim of bringing cloud services closer to customers, by providing an intermediate layer between the end devices and the actual cloud infrastructure, resulting in faster response times. Publish/subscribe systems, such as Thyme, have also been proposed and proven effective for data dissemination in edge networks, thanks to the loosely coupled nature and scalability of their interactions. Nonetheless, relying solely on P2P interactions is not feasible in every scenario, due to the range limitations of wireless protocols. In this thesis we propose and develop Thyme-Infrastructure, an extension to the Thyme framework that uses available stationary nodes within the edge infrastructure not only to improve the performance of mobile clients within a BSS, by offloading a portion of the requests to the infrastructure, but also to connect multiple clusters of users within the same venue, with the goal of creating a persistent, global, end-to-end storage network. Our experimental results, in both simulated and real-world scenarios, show response times adequate for interactive usage and low energy consumption, allowing the application to be used in a variety of events without excessive battery drainage.
    In fact, when compared to the previous version of Thyme, our framework generally improved on all of these metrics. On top of that, we evaluated our system’s latencies against a full-fledged cloud solution and verified that our proposal yielded a considerable speedup across the board.
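    The loosely coupled publish/subscribe model that Thyme builds on can be illustrated with a minimal in-memory sketch. This is a hypothetical illustration, not Thyme’s actual API: the `Broker` class and its retained-message behavior (late subscribers receive previously published data, echoing the thesis’ goal of a persistent storage network) are assumptions for exposition only.

    ```python
    from collections import defaultdict

    class Broker:
        """Minimal in-memory topic-based publish/subscribe broker.

        Illustrative only: real edge systems such as Thyme add distribution
        across stationary and mobile nodes, request offloading, and
        range-limited P2P delivery on top of this basic interaction model.
        """

        def __init__(self):
            self._subscribers = defaultdict(list)  # topic -> list of callbacks
            self._retained = defaultdict(list)     # topic -> past messages

        def subscribe(self, topic, callback):
            # Deliver previously retained messages so late joiners still see
            # past data (a "persistent" subscription).
            self._subscribers[topic].append(callback)
            for message in self._retained[topic]:
                callback(message)

        def publish(self, topic, message):
            # Retain the message, then notify every current subscriber.
            self._retained[topic].append(message)
            for callback in self._subscribers[topic]:
                callback(message)
    ```

    Because publishers and subscribers only share a topic name, neither side needs to know the other exists — the loose coupling that makes the model scale well at the edge.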

    Improving Transaction Acceptance of Incoherent Updates Using Dynamic Merging In a Relational Database

    Title from PDF of title page, viewed on March 23, 2016. Thesis advisor: Vijay Kumar. Vita. Includes bibliographical references (pages 205-206). Thesis (M.S.)--School of Computing and Engineering, University of Missouri--Kansas City, 2015.
    Despite its tenure, mobile computing continues to move to the forefront of technology and business. This ever-expanding field holds no shortage of opportunity for either party. Its benefits and demand are abundant, but it is not without challenges. Maintaining both data consistency and availability is one of the most challenging prospects for mobile computing. These difficulties are exacerbated by the unique ability of mobile platforms to disconnect for extended periods of time while continuing to function normally. Data collected and modified in such a state poses considerable risk of being abandoned, as no static algorithm exists to determine that it is consistent when integrated back into the server. This thesis proposes a mechanism to improve transaction acceptance without sacrificing consistency of the related data on both the client and the server. Particular consideration is given to honoring data that a client may produce or modify while in a disconnected state. The underlying framework leverages merging strategies to resolve data conflicts using a custom tiered dynamic merge granularity. The merge process is aided by a custom lock-promotion scheme applied in the application layer at the server. The improved incoherence resolution process is then examined for its impact on the fate of such transactions and the related bandwidth utilization.
    Contents: Introduction -- Related work -- Approach -- Implementation -- Evaluation -- Conclusion -- Appendix A. Client API documentation -- Appendix B. Client code -- Appendix C. Server code -- Appendix D. Property performance data
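    The idea of accepting a disconnected client’s updates via merging, rather than rejecting the whole transaction, can be sketched with a field-level three-way merge. This is an assumed illustration of the general technique, not the thesis’ actual algorithm or API: the function name, row representation, and conflict-handling policy are hypothetical.

    ```python
    def merge_rows(base, server, client):
        """Field-level three-way merge of a database row.

        base:   the row as the client last saw it before disconnecting
        server: the current row on the server
        client: the row the reconnecting client wants to commit

        A field changed by only one side is accepted; a field changed by
        both sides is a conflict that a tiered scheme would escalate to a
        coarser granularity (row, table) or reject. Hypothetical sketch.
        """
        merged, conflicts = {}, []
        for field in base:
            s, c = server[field], client[field]
            if s == c:
                merged[field] = s          # both sides agree
            elif s == base[field]:
                merged[field] = c          # only the client changed it
            elif c == base[field]:
                merged[field] = s          # only the server changed it
            else:
                conflicts.append(field)    # both changed it: conflict
                merged[field] = s          # keep server value pending resolution
        return merged, conflicts
    ```

    Compared with rejecting any transaction whose base row has changed, this field granularity lets non-overlapping updates from a long-disconnected client be accepted, which is the acceptance improvement the abstract describes.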

    Counterintuitive Characteristics of Optimal Distributed LRU Caching Over Unreliable Channels


    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It presents the most prominent reliability concerns from today’s point of view and recapitulates the community’s progress so far. Unlike books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across levels, from the physical level all the way up to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors. The book provides readers with the latest insights into novel, cross-layer methods and models for the dependability of embedded systems; describes cross-layer approaches that leverage reliability through techniques proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts of self-organization for achieving error resiliency in complex, future many-core systems.

    Workload Modeling for Computer Systems Performance Evaluation


    Fish4Knowledge: Collecting and Analyzing Massive Coral Reef Fish Video Data

    This book gives a start-to-finish overview of the whole Fish4Knowledge project in 18 short chapters, each describing one aspect of the project. The Fish4Knowledge project explored the possibilities of big video data, in this case from undersea video. Recording and analyzing 90 thousand hours of video from ten camera locations, the project gives a three-year view of fish abundance in several tropical coral reefs off the coast of Taiwan. The research system built a remote recording network, over 100 TB of storage, supercomputer processing, video target detection and