
    Heuristics for Client Assignment and Load Balancing Problems in Online Games

    Massively multiplayer online games (MMOGs) have been very popular over the past decade. The infrastructure necessary to support a large number of players simultaneously playing these games raises interesting problems. Since the computations involved in solving those problems must be done while the game is being played, they should not be so expensive that they cause any noticeable slowdown, as this would lead to a poor player perception of the game. Many of the problems in MMOGs are NP-hard or NP-complete, so we must develop heuristics that solve them without degrading the player experience through excessive computation. In this dissertation, we focus on a few of the problems encountered in MMOGs – the Client Assignment Problem (CAP) and both centralized and distributed load balancing – and develop heuristics for each. For the CAP we investigate how best to assign players to servers while meeting several conditions for satisfactory play, while in load balancing we investigate how best to distribute load among game servers subject to several criteria. In particular, we develop three heuristics: a heuristic for a variant of the CAP called Offline CAP-Z, a heuristic for centralized load balancing called BreakpointLB, and a heuristic for distributed load balancing called PLGR. We develop a simulator of MMOG operations and implement our heuristics to measure their performance against heuristics adapted from the literature. We find that in many cases we produce better results than those adapted heuristics, showing promise for use in production environments. Further, we believe these ideas could be adapted to the numerous other problems in MMOGs, and they merit further consideration and augmentation in future research.
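    The abstract does not spell out how CAP-Z assigns clients, but the general shape of a client-assignment heuristic can be sketched as a greedy pass: each client is assigned to the lowest-latency server that still has capacity. All names and the capacity model here are hypothetical, for illustration only.

```python
# Hypothetical sketch of a greedy client-assignment heuristic in the spirit
# of the CAP described above; the actual CAP-Z algorithm is not specified in
# the abstract. Each client is placed on the lowest-latency server that
# still has room.

def assign_clients(clients, servers, latency, capacity):
    """clients/servers: lists of ids; latency[(c, s)]: measured latency from
    client c to server s; capacity[s]: max clients per server.
    Returns a dict {client: server}."""
    load = {s: 0 for s in servers}
    assignment = {}
    for c in clients:
        # Consider servers in order of increasing latency for this client.
        for s in sorted(servers, key=lambda s: latency[(c, s)]):
            if load[s] < capacity[s]:
                assignment[c] = s
                load[s] += 1
                break
        else:
            raise RuntimeError(f"no server has room for client {c}")
    return assignment
```

A production heuristic would also weigh criteria the abstract mentions, such as keeping interacting players on the same server, rather than latency alone.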

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected by a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements. This includes user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a broad view of the options and factors to consider when evaluating traffic control mechanisms. We discuss various characteristics of datacenter traffic control including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks that connect geographically dispersed datacenters, which have been receiving increasing attention recently and pose interesting and novel research problems. (Accepted for publication in IEEE Communications Surveys and Tutorials.)

    Clustering System and Clustering Support Vector Machine for Local Protein Structure Prediction

    A protein's tertiary structure plays a very important role in determining its possible functional sites and its chemical interactions with related proteins. Experimental methods to determine protein structure are time-consuming and expensive. As a result, the gap between known protein sequences and known structures has widened substantially with the rise of high-throughput sequencing techniques. The limitations of experimental methods motivate us to develop computational algorithms for protein structure prediction. In this work, a clustering system is used to predict local protein structure. First, recurring sequence clusters are explored with an improved K-means clustering algorithm, and these carefully constructed sequence clusters are used to predict local protein structure. After obtaining the sequence clusters and motifs, we study how sequence variation within a cluster influences its structural similarity. Analysis of this relationship shows that sequence clusters with tight sequence variation have high structural similarity, while clusters with wide sequence variation have poor structural similarity. Based on this knowledge, the established clustering system is used to predict the tertiary structure of local sequence segments. Test results indicate that the highest-quality clusters give highly reliable predictions and high-quality clusters give reliable predictions. To improve the performance of the clustering system for local protein structure prediction, a novel computational model called Clustering Support Vector Machines (CSVMs) is proposed. In our previous work, the sequence-to-structure relationship was explored with the conventional K-means algorithm, which may not capture a nonlinear sequence-to-structure relationship effectively. We therefore consider using Support Vector Machines (SVMs) to capture this nonlinear relationship. However, SVMs do not scale well to huge datasets containing millions of samples. We therefore propose CSVMs: taking advantage of both the theory of granular computing and advanced statistical learning methodology, a CSVM is built specifically for each information granule partitioned intelligently by the clustering algorithm. Compared with the clustering system introduced previously, our experimental results show that accuracy for local structure prediction improves noticeably when CSVMs are applied.
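    The clustering step above rests on standard K-means. A minimal sketch of that step follows; real inputs would be position-specific frequency profiles of sequence segments, which are abstracted here as plain numeric feature vectors (the improved initialization the abstract mentions is not reproduced).

```python
# Minimal K-means sketch illustrating the clustering step described above.
# Points stand in for encoded sequence-segment profiles; this is plain
# Lloyd's algorithm, not the paper's improved variant.
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers from the data
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        # Recompute each center as the mean of its assigned points.
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters
```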

    The Centripetal Network: How the Internet Holds Itself Together, and the Forces Tearing It Apart

    Two forces are in tension as the Internet evolves. One pushes toward interconnected common platforms; the other pulls toward fragmentation and proprietary alternatives. Their interplay drives many of the contentious issues in cyberlaw, intellectual property, and telecommunications policy, including the fight over network neutrality for broadband providers, debates over global Internet governance, and battles over copyright online. These are more than just conflicts between incumbents and innovators, or between openness and deregulation. Their roots lie in the fundamental dynamics of interconnected networks. Fortunately, there is an interdisciplinary literature on network properties, albeit one virtually unknown to legal scholars. The emerging field of network formation theory explains the pressures threatening to pull the Internet apart, and suggests responses. The Internet as we know it is surprisingly fragile. To continue the extraordinary outpouring of creativity and innovation that the Internet fosters, policy-makers must protect its composite structure against both fragmentation and excessive concentration of power. This paper, the first to apply network formation models to Internet law, shows how the Internet pulls itself together as a coherent whole. This very process, however, creates and magnifies imbalances that encourage balkanization. By understanding how networks behave, governments and other legal decision-makers can avoid unintended consequences and target their actions appropriately. A network-theoretic perspective holds great promise to inform the law and policy of the information economy.

    Classification systems in the hospitals

    The aim of this thesis was to study the issue of classification systems in hospitals, which primarily track the economic aspects of hospitalization, and then to design and implement a web interface in Caché Server Pages (CSP) that provides access to the CLINICOM database. Using the web interface, it is possible to classify patients into MDC (Major Diagnostic Category) classes, archive data, calculate DRG, and perform other selected tasks. The web application could be used by the technical and administrative staff of hospitals and clinics as a simple tool in their work, or as a teaching aid in biomedical programmes covering health information and classification systems.

    DECENTRALIZED AND SCALABLE RESOURCE MANAGEMENT FOR DESKTOP GRIDS

    The recent growth of the Internet and of the CPU power of personal computers and workstations enables desktop grid computing to achieve tremendous computing power at low cost, through opportunistic sharing of resources. However, traditional server-client Grid architectures have inherent problems with robustness, reliability, and scalability. Researchers have therefore recently turned to Peer-to-Peer (P2P) algorithms in an attempt to address these issues. I have designed and evaluated a set of protocols that implement a scalable P2P desktop grid computing system for executing Grid applications on widely distributed sets of resources. Such infrastructure must be decentralized, robust, highly available, and scalable, while effectively mapping application instances to available resources throughout the system (called matchmaking). First, I address the problem of efficiently matching jobs to available system resources by employing a customized Content-Addressable Network (CAN) in which each resource type corresponds to a distinct dimension. With this approach, incoming jobs are matched with system nodes through proximity in an N-dimensional resource space. Second, I provide comprehensive load balancing mechanisms that can greatly improve overall system throughput and response time without using any centralized control or global information about the system. Finally, to remove hot spots in which a small number of nodes process a disproportionate amount of system maintenance work, I have designed a set of optimizations that minimize overall system overheads and distribute them fairly among available nodes. My ultimate goal is to ensure that no node in the system becomes much more heavily loaded than others, whether from executing jobs or from system maintenance tasks, because every node in our system is a peer: no node acts as a pure server or a pure client. Through extensive experiments, I show that the resulting P2P desktop grid computing system is scalable and effective, efficiently matching jobs with any combination of resource requirements while balancing load among multiple candidate nodes.
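    The matchmaking idea above can be sketched outside the CAN overlay: each node occupies a point in an N-dimensional resource space (one dimension per resource type), and a job is routed to a nearby node that meets its minimum requirements. The real system performs this routing within the CAN itself; the linear scan and all resource names below are simplifications for illustration.

```python
# Hedged sketch of N-dimensional resource-space matchmaking as described
# above. Nodes and jobs are points in resource space; eligibility requires
# meeting every minimum, and ties are broken by Euclidean proximity.
import math

def matchmake(job_req, nodes):
    """job_req: dict of minimum resources, e.g. {"cpu": 2, "mem_gb": 4}.
    nodes: {node_id: {"cpu": ..., "mem_gb": ...}}. Returns the id of the
    eligible node closest to the job's point, or None if none qualifies."""
    best, best_dist = None, math.inf
    for node_id, res in nodes.items():
        # A node is eligible only if it meets every minimum requirement.
        if all(res[r] >= need for r, need in job_req.items()):
            # Proximity: Euclidean distance between job and node points,
            # so jobs land on nodes that are sufficient but not oversized.
            d = math.sqrt(sum((res[r] - need) ** 2 for r, need in job_req.items()))
            if d < best_dist:
                best, best_dist = node_id, d
    return best
```

Preferring the nearest eligible node (rather than the largest) keeps well-provisioned nodes free for demanding jobs, which is one way such a system can balance load without central control.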

    Position-relative identities in the internet of things: An evolutionary GHT approach

    The Internet of Things (IoT) will result in the deployment of many billions of wireless embedded systems, creating interactive pervasive environments. It is envisaged that devices will cooperate to provide greater system knowledge than the sum of their parts. In an emergency, the flow of data across the IoT may be disrupted, giving rise to a requirement for machine-to-machine interaction within the remaining ubiquitous environment. Geographic hash tables (GHTs) provide an efficient mechanism to support fault-tolerant rendezvous communication between devices. However, current approaches rely on devices either being equipped with GPS or being manually assigned an identity. This is unrealistic when the majority of these systems will be located inside buildings and will be too numerous to configure manually. Additionally, when a GHT is used as a distributed data store, imbalance in the topology can lead to storage and routing overhead. This causes an unfair workload, exhausting limited power supplies, and results in poor data redundancy. To deal with these issues, we propose an approach that assigns balanced, graph-based layout identities through the application of multifitness genetic algorithms. Our simulations show that our multifitness evolution technique improves on the initial graph-based layout, providing devices with improved balance and reachability metrics.
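    The GHT rendezvous mechanism referenced above can be sketched briefly: a data key is hashed to a point in the layout space, and producers and consumers both rendezvous at the device whose identity coordinate is closest to that point. The 2-D space, hash construction, and device names below are illustrative assumptions, standing in for the position-relative identities the paper assigns.

```python
# Illustrative sketch of GHT-style rendezvous: hash a key to a point in the
# identity space, then store/retrieve at the device closest to that point.
# Coordinates here are placeholders for evolved position-relative identities.
import hashlib

def key_to_point(key, width=100, height=100):
    # Derive a deterministic 2-D point from the key's SHA-256 digest.
    digest = hashlib.sha256(key.encode()).digest()
    return (digest[0] % width, digest[1] % height)

def rendezvous_node(key, devices):
    """devices: {device_id: (x, y)}. Returns the device whose identity
    coordinate is closest (squared Euclidean distance) to the key's point."""
    px, py = key_to_point(key)
    return min(devices,
               key=lambda d: (devices[d][0] - px) ** 2 + (devices[d][1] - py) ** 2)
```

Because both sides compute the same hash, readers and writers agree on the storage device without any lookup service, which is what makes the scheme fault-tolerant; balanced identities then spread these rendezvous points evenly across devices.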

    Critical Team Composition Issues for Long-Distance and Long-Duration Space Exploration: A Literature Review, an Operational Assessment, and Recommendations for Practice and Research

    Prevailing team effectiveness models suggest that teams are best positioned for success when certain enabling conditions are in place (Hackman, 1987; Hackman, 2012; Mathieu, Maynard, Rapp, & Gilson, 2008; Wageman, Hackman, & Lehman, 2005). Team composition, or the configuration of member attributes, is an enabling structure key to fostering competent teamwork (Hackman, 2002; Wageman et al., 2005). A vast body of research supports the importance of team composition in team design (Bell, 2007). For example, team composition is empirically linked to outcomes such as cooperation (Eby & Dobbins, 1997), social integration (Harrison, Price, Gavin, & Florey, 2002), shared cognition (Fisher, Bell, Dierdorff, & Belohlav, 2012), information sharing (Randall, Resick, & DeChurch, 2011), adaptability (LePine, 2005), and team performance (e.g., Bell, 2007). As such, NASA has identified team composition as a potentially powerful means for mitigating the risk of performance decrements due to inadequate crew cooperation, coordination, communication, and psychosocial adaptation in future space exploration missions. Much of what is known about effective team composition is drawn from research conducted in conventional workplaces (e.g., corporate offices, production plants). Quantitative reviews of the team composition literature (e.g., Bell, 2007; Bell, Villado, Lukasik, Belau, & Briggs, 2011) are based primarily on traditional teams. Less is known about how composition affects teams operating in extreme environments such as those that will be experienced by crews of future space exploration missions. For example, long-distance and long-duration space exploration (LDSE) crews are expected to live and work in isolated and confined environments (ICEs) for up to 30 months. Crews will also experience communication time delays from mission control, which will require crews to work more autonomously (see Appendix A for more detailed information regarding the LDSE context). Given the unique context within which LDSE crews will operate, NASA identified both a gap in knowledge related to the effective composition of autonomous LDSE crews and the need to identify psychological and psychosocial factors, measures, and combinations thereof that can be used to compose highly effective crews (Team Gap 8). As an initial step to address Team Gap 8, we conducted a focused literature review and operational assessment related to team composition issues for LDSE. The objectives of our research were to: (1) identify critical team composition issues and their effects on team functioning in LDSE-analogous environments, with a focus on key composition factors most likely to have the strongest influence on team performance and well-being, and (2) identify and evaluate methods used to compose teams, with a focus on methods used in analogous environments. The remainder of the report includes the following components: (a) literature review methodology, (b) review of team composition theory and research, (c) methods for composing teams, (d) operational assessment results, and (e) recommendations.

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Labour reallocation during transition: the case of Poland

    This paper analyses the reallocation of labour during the transition period, which is argued not only to ease the transition from a planned to a market-oriented economy, but also to be fundamental to the successful integration of Poland into the European Union. Labour force survey data is used to gauge the overall level of reallocation during the period 1994-1998, a period in which the transition process is considered to be well and truly under way. The results obtained illustrate the inherent immobility prevailing in the Polish labour market during this period and would appear to suggest the presence of relatively significant structural rigidities in the labour market. It is argued that mobility rates of this magnitude are likely to result in considerable strains being placed on the Polish economy when it enters the European Union and could, over the medium term, result in relatively high levels of unemployment. Unless mobility is stimulated, European accession is therefore likely to be a socially costly process. The microeconometric analysis of the determinants of individual mobility presented in the second part of the paper offers a first step towards identifying the demographic, economic, and social attributes which either aid or inhibit effective labour reallocation. The results obtained highlight a number of important differences in mobility behaviour across age, gender, educational attainment, occupational grouping, and labour market experience, which will need to be taken into account in the formulation of active labour market policies to stimulate individual mobility.