
    Re-engineering jake2 to work on a grid using the GridGain Middleware

    With the advent of Massively Multiplayer Online Games (MMOGs), game engineers and designers have faced many questions that need answering, for example: "how can a large number of clients play simultaneously on the same server?", "how can a good quality of service (QoS) be guaranteed to a great number of clients?", "how many resources will be necessary?", and "how can these resources be used optimally?". A possible answer to these questions lies in grid computing. Given its parallel and distributed nature, grid computing allows for more scalability as the number of players grows, guarantees shorter communication times between clients and servers, and allows for better resource management and usage (e.g., memory, CPU, core load balancing) than the traditional serial computing model. However, the main focus of this thesis is not grid computing itself. Instead, this thesis describes the re-engineering of an existing multiplayer computer game, Jake2, transforming it into an MMOG, which is then run on a grid.
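    A common way to realize this kind of scalability is to partition the game world spatially and route each player's state to the grid node that owns their region. The Java sketch below illustrates that idea; the class and method names are hypothetical, and it is a generic illustration rather than the thesis' actual design or the GridGain API.

```java
// Minimal sketch (hypothetical names): spatial partitioning of an MMOG
// world across grid nodes, one generic approach to distributing a game
// like Jake2; not the thesis' actual design or the GridGain API.
public class WorldPartitioner {
    private final int nodes;          // number of grid nodes available
    private final double regionSize;  // side length of a square region

    public WorldPartitioner(int nodes, double regionSize) {
        this.nodes = nodes;
        this.regionSize = regionSize;
    }

    /** Map a player's (x, y) position to the grid node owning that region. */
    public int nodeFor(double x, double y) {
        int rx = (int) Math.floor(x / regionSize);
        int ry = (int) Math.floor(y / regionSize);
        return Math.floorMod(31 * rx + ry, nodes); // hash region onto nodes
    }

    public static void main(String[] args) {
        WorldPartitioner p = new WorldPartitioner(4, 256.0);
        System.out.println("Player at (300, 90) -> node " + p.nodeFor(300, 90));
    }
}
```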

    Cache Serializability: Reducing Inconsistency in Edge Transactions

    Read-only caches are widely used in cloud infrastructures to reduce access latency and load on backend databases. Operators view coherent caches as impractical at genuinely large scale, and many client-facing caches are updated in an asynchronous manner with best-effort pipelines. Existing solutions that support cache consistency are inapplicable to this scenario, since they require a round trip to the database on every cache transaction. Existing incoherent cache technologies are oblivious to transactional data access, even if the backend database supports transactions. We propose T-Cache, a novel caching policy for read-only transactions in which inconsistency is tolerable (won't cause safety violations) but undesirable (has a cost). T-Cache improves cache consistency despite asynchronous and unreliable communication between the cache and the database. We define cache-serializability, a variant of serializability that is suitable for incoherent caches, and prove that with unbounded resources T-Cache implements this new specification. With limited resources, T-Cache allows the system manager to choose a trade-off between performance and consistency. Our evaluation shows that T-Cache detects many inconsistencies with only nominal overhead. We use synthetic workloads to demonstrate the efficacy of T-Cache when data accesses are clustered, and its adaptive reaction to workload changes. With workloads based on real-world topologies, T-Cache detects 43-70% of the inconsistencies and increases the rate of consistent transactions by 33-58%.
    Comment: Ittay Eyal, Ken Birman, Robbert van Renesse, "Cache Serializability: Reducing Inconsistency in Edge Transactions," Distributed Computing Systems (ICDCS), IEEE 35th International Conference on, June 29 - July 2, 2015
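    The notion of cache-serializability can be pictured with a simple consistency check. The Java sketch below, with hypothetical names, illustrates the general idea of deciding whether a read-only transaction observed a mutually consistent set of cached values; it is not the paper's actual T-Cache algorithm. Each cached entry carries the interval of backend versions during which its value was current, and a set of reads is consistent iff those intervals share a common point.

```java
// Minimal sketch (illustrative, not the paper's exact algorithm): detect
// whether a read-only transaction saw a consistent snapshot from an
// incoherent cache. Each entry is valid for backend versions in [from, to);
// the reads are mutually consistent iff the intervals intersect.
import java.util.List;

public class SnapshotChecker {
    /** A cached value valid for backend versions in [from, to). */
    public record Entry(String key, String value, long from, long to) {}

    /** True iff all reads could have been observed at one backend version. */
    public static boolean consistent(List<Entry> reads) {
        long lo = Long.MIN_VALUE, hi = Long.MAX_VALUE;
        for (Entry e : reads) {
            lo = Math.max(lo, e.from());
            hi = Math.min(hi, e.to());
        }
        return lo < hi; // non-empty intersection of validity intervals
    }

    public static void main(String[] args) {
        var t1 = List.of(new Entry("x", "1", 10, 20), new Entry("y", "2", 15, 25));
        var t2 = List.of(new Entry("x", "1", 10, 20), new Entry("y", "9", 30, 40));
        System.out.println(consistent(t1)); // true: versions 15..19 overlap
        System.out.println(consistent(t2)); // false: inconsistent snapshot
    }
}
```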

    Three applications for mobile epidemic algorithms

    This paper presents a framework for the pervasive sharing of data using wireless networks. 'FarCry' uses the mobility of users to carry files between separated networks. Through a mix of ad-hoc and infrastructure-based wireless networking, files are transferred between users without their direct involvement. As users move to different locations, files are then transmitted on to other users, spreading and sharing information. We examine three applications of this framework, each of which exploits the physically proximate nature of social gatherings. As people group together in, for example, business meetings and cafés, this can be taken as an indication of similar interests, e.g. in the same presentation or in a type of music. MediaNet affords sharing of media files between strangers or friends, MeetingNet shares business documents in meetings, and NewsNet shares RSS feeds between mobile users. NewsNet also develops the use of pre-emptive caching: collecting information from others not for oneself, but for predicted later sharing with others. We offer observations on developing this system for a mobile, multi-user, multi-device environment.
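    The core mechanism such a framework relies on can be sketched as epidemic anti-entropy: when two devices meet on an ad-hoc network, each pulls the files it is missing, so data spreads as users move between physically separate networks. The Java sketch below uses hypothetical names and is a generic illustration, not FarCry's actual implementation.

```java
// Minimal sketch (hypothetical names, not FarCry's implementation):
// epidemic anti-entropy exchange between two mobile devices. On each
// encounter, a node copies whatever files the peer has that it lacks.
import java.util.HashMap;
import java.util.Map;

public class EpidemicNode {
    private final String name;
    private final Map<String, byte[]> files = new HashMap<>();

    public EpidemicNode(String name) { this.name = name; }

    public void store(String fileId, byte[] data) { files.put(fileId, data); }

    /** On encounter: pull every file the peer has that we lack. */
    public void syncWith(EpidemicNode peer) {
        for (Map.Entry<String, byte[]> e : peer.files.entrySet()) {
            files.putIfAbsent(e.getKey(), e.getValue());
        }
    }

    public static void main(String[] args) {
        EpidemicNode alice = new EpidemicNode("alice");
        EpidemicNode bob = new EpidemicNode("bob");
        alice.store("talk-slides.pdf", new byte[]{1, 2, 3});
        bob.syncWith(alice);                    // meeting: slides hop to bob
        System.out.println(bob.files.keySet()); // [talk-slides.pdf]
    }
}
```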

    Entropy/IP: Uncovering Structure in IPv6 Addresses

    In this paper, we introduce Entropy/IP: a system that discovers Internet address structure based on analyses of a subset of IPv6 addresses known to be active, i.e., training data, gleaned by readily available passive and active means. The system is completely automated and employs a combination of information-theoretic and machine learning techniques to probabilistically model IPv6 addresses. We present results showing that our system is effective in exposing structural characteristics of portions of the IPv6 Internet address space populated by active client, service, and router addresses. In addition to visualizing the address structure for exploration, the system uses its models to generate candidate target addresses for scanning. For each of 15 evaluated datasets, we train on 1K addresses and generate 1M candidates for scanning. We achieve some success in 14 datasets, finding up to 40% of the generated addresses to be active. In 11 of these datasets, we find active network identifiers (e.g., /64 prefixes or 'subnets') not seen in training. Thus, we provide the first evidence that it is practical to discover subnets and hosts by scanning probabilistically selected areas of the IPv6 address space not known to contain active hosts a priori.
    Comment: Paper presented at the ACM IMC 2016 in Santa Monica, USA (https://dl.acm.org/citation.cfm?id=2987445). Live Demo site available at http://www.entropy-ip.com
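    As a rough illustration of the information-theoretic side of such a system (not the authors' code), the Java sketch below computes the per-nibble Shannon entropy across a set of fully expanded IPv6 addresses: positions with low entropy expose fixed structure, while high-entropy positions look random.

```java
// Minimal sketch of an Entropy/IP-style first step: Shannon entropy of
// each hex nibble position across fully expanded IPv6 addresses (32 hex
// characters each). Toy input; the real system trains on thousands of
// observed addresses.
import java.util.HashMap;
import java.util.Map;

public class NibbleEntropy {
    /** Entropy (bits) per nibble position; addresses are 32 hex chars. */
    public static double[] entropies(String[] addrs) {
        double[] h = new double[32];
        for (int pos = 0; pos < 32; pos++) {
            Map<Character, Integer> counts = new HashMap<>();
            for (String a : addrs) {
                counts.merge(a.charAt(pos), 1, Integer::sum);
            }
            for (int c : counts.values()) {
                double p = (double) c / addrs.length;
                h[pos] -= p * Math.log(p) / Math.log(2);
            }
        }
        return h;
    }

    public static void main(String[] args) {
        String[] addrs = {
            "20010db8000000000000000000000001",
            "20010db8000000000000000000000002",
            "20010db80000000000000000000000ab",
        };
        double[] h = entropies(addrs);
        // Prefix nibbles are fixed (entropy 0); the tail varies.
        System.out.printf("nibble 0: %.2f bits, nibble 31: %.2f bits%n", h[0], h[31]);
    }
}
```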

    FogLearn: Leveraging Fog-based Machine Learning for Smart System Big Data Analytics

    Big data analytics on the cloud is one of the emerging areas for data processing and analysis. Fog computing is a paradigm in which fog devices help to reduce latency and increase throughput by assisting at the edge of the network, close to the client. This paper discusses the emergence of fog computing for mining analytics in big data from geospatial and medical health applications. It proposes and develops a fog-computing-based framework, FogLearn, for the application of K-means clustering to Ganga River basin management and to real-world feature data for detecting patients suffering from diabetes mellitus. The proposed architecture employs machine learning on a deep learning framework for the analysis of pathological feature data obtained from smart watches worn by patients with diabetes, and of geographical parameters from the River Ganga basin geospatial database. The results show that fog computing holds immense promise for the analysis of medical and geospatial big data.
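    As a concrete picture of the clustering step (a minimal generic sketch, not FogLearn's implementation), the Java code below runs 1-D K-means over scalar readings such as a pathological feature streamed from a smart watch; a fog node could run this near the data source.

```java
// Minimal 1-D K-means sketch (illustrative, not FogLearn's code):
// cluster scalar readings into k groups by alternating assignment
// and centroid-update steps.
import java.util.Arrays;

public class KMeans1D {
    public static double[] cluster(double[] data, int k, int iters) {
        double[] centroids = Arrays.copyOf(data, k); // naive initialization
        int[] assign = new int[data.length];
        for (int it = 0; it < iters; it++) {
            // Assignment step: nearest centroid for each point.
            for (int i = 0; i < data.length; i++) {
                int best = 0;
                for (int c = 1; c < k; c++) {
                    if (Math.abs(data[i] - centroids[c])
                            < Math.abs(data[i] - centroids[best])) best = c;
                }
                assign[i] = best;
            }
            // Update step: move each centroid to the mean of its points.
            double[] sum = new double[k];
            int[] n = new int[k];
            for (int i = 0; i < data.length; i++) {
                sum[assign[i]] += data[i];
                n[assign[i]]++;
            }
            for (int c = 0; c < k; c++) if (n[c] > 0) centroids[c] = sum[c] / n[c];
        }
        return centroids;
    }

    public static void main(String[] args) {
        double[] readings = {4.9, 5.1, 5.0, 9.8, 10.2, 10.0}; // toy values
        System.out.println(Arrays.toString(cluster(readings, 2, 10)));
    }
}
```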

    New Method of Measuring TCP Performance of IP Network using Bio-computing

    The performance of an Internet Protocol (IP) network can be measured via the Transmission Control Protocol (TCP), because TCP guarantees that data sent from one end of a connection actually reaches the other end, and in the same order it was sent; otherwise an error is reported. Several methods exist for measuring TCP performance, among them genetic algorithms, neural networks, and data mining; all of these have weaknesses and cannot arrive at a correct measure of TCP performance. This paper proposes a new method of measuring the TCP performance of a real-time IP network using Bio-computing, in particular molecular calculation, because it provides sound results and can exploit all the facilities of phylogenetic analysis. The new method is applied in real time to the Biological Kurdish Messenger (BIOKM) model, designed to measure TCP performance under two protocols: the File Transfer Protocol (FTP) and the Internet Relay Chat Daemon (IRCD). This application gives results very close to the TCP performance obtained from Little's law using the same model (BIOKM): the difference in the utilization percentage (busy time, or traffic intensity) and in the idle time between the new Bio-computing-based method and Little's law was (nearly) 0.13%.
    KEYWORDS: Bio-computing, TCP performance, Phylogenetic tree, Hybridized Model (Normalized), FTP, IRCD
    Comment: 17 pages, 10 figures, 5 tables
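    For reference, the Little's law baseline that the comparison relies on is straightforward to compute: the mean number of items in a system is L = λW (arrival rate times mean residence time), and server utilization is ρ = λ × mean service time. The Java sketch below uses illustrative values, not the BIOKM measurements.

```java
// Minimal Little's-law sketch with illustrative numbers (not the BIOKM
// measurements): L = lambda * W, utilization rho = lambda * service time.
public class LittlesLaw {
    public static void main(String[] args) {
        double lambda = 120.0;      // arrivals (TCP segments) per second
        double w = 0.008;           // mean time a segment spends in system (s)
        double serviceTime = 0.005; // mean service time per segment (s)

        double l = lambda * w;             // mean segments in the system
        double rho = lambda * serviceTime; // fraction of time the server is busy
        System.out.printf("L = %.2f segments, utilization = %.1f%%, idle = %.1f%%%n",
                l, rho * 100, (1 - rho) * 100);
    }
}
```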