248 research outputs found

    To satisfy impatient Web surfers is hard

    Get PDF
Prefetching is a basic mechanism for faster data access and efficient computing. An important issue in prefetching is the trade-off between the amount of network resources wasted by prefetching and the gain in time. For instance, on the Web, browsers may download documents in advance while a Web surfer is browsing. Since the surfer follows hyperlinks in an unpredictable way, the choice of the Web pages to be prefetched must be computed online. The question is then to determine the minimum amount of resources used by prefetching that ensures that every document accessed by the Web surfer has previously been loaded into the cache. We model this problem as a two-player game similar to Cops and Robber games in graphs. Let k ≥ 1 be any integer. The first player, a fugitive, starts on a marked vertex of a (di)graph G. The second player, an observer, marks at most k vertices, then the fugitive moves along one edge/arc of G to a new vertex, then the observer marks at most k vertices, and so on. The fugitive wins if it enters an unmarked vertex, and the observer wins otherwise. The surveillance number of a (di)graph is the minimum k such that the observer, marking at most k vertices at each step, wins against any strategy of the fugitive. We also consider the connected variant of this game, i.e., when a vertex can be marked only if it is adjacent to an already marked vertex. We study the computational complexity of the game. All our results hold for both variants, connected and unrestricted. We show that deciding whether the surveillance number of a chordal graph is at most 2 is NP-hard. We also prove that deciding whether the surveillance number of a DAG is at most 4 is PSPACE-complete. Moreover, we show that computing the surveillance number is NP-hard in split graphs. On the other hand, we provide polynomial-time algorithms computing the surveillance numbers of trees and interval graphs. Moreover, in the case of trees, we establish a combinatorial characterization of the surveillance number.
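On small graphs, the game described above can be decided by exhaustive search. The sketch below is only an illustration of the game's rules, not the paper's algorithm, and all function names are ours. It relies on the observation that extra marks never hurt the observer, so the observer may as well mark as many vertices as the budget allows each turn; the marked set then grows strictly, which guarantees termination.

```python
from itertools import combinations

def observer_wins(adj, v0, k):
    """Decide whether an observer marking at most k vertices per turn wins
    the surveillance game on the (di)graph given as an adjacency dict
    (vertex -> list of (out-)neighbors), with the fugitive starting on the
    initially marked vertex v0. Brute force; only for small graphs."""
    vertices = frozenset(adj)
    memo = {}

    def wins(pos, marked):
        key = (pos, marked)
        if key in memo:
            return memo[key]
        unmarked = vertices - marked
        if not unmarked:
            # Everything is marked: the fugitive can never escape.
            memo[key] = True
            return True
        # W.l.o.g. the observer marks exactly min(k, |unmarked|) vertices:
        # a larger marked set can only shrink the fugitive's options.
        t = min(k, len(unmarked))
        for extra in combinations(sorted(unmarked), t):
            new_marked = marked | frozenset(extra)
            # The observer wins with this choice if every possible fugitive
            # move lands on a marked vertex from which the observer still wins.
            if all(w in new_marked and wins(w, new_marked) for w in adj[pos]):
                memo[key] = True
                return True
        memo[key] = False
        return False

    return wins(v0, frozenset([v0]))

def surveillance_number(adj, v0):
    """Smallest budget k with which the observer wins from v0."""
    k = 1
    while not observer_wins(adj, v0, k):
        k += 1
    return k
```

For instance, on a star whose center is the starting vertex, the fugitive can jump to any leaf on its first move, so the observer must mark all leaves in its first turn, matching the intuition that high-degree start vertices are expensive.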

    Connected Surveillance Game

    Get PDF
The surveillance game [Fomin et al., 2012] models the problem of web-page prefetching as a pursuit-evasion game played on a graph. This two-player game is played turn by turn. The first player, called the observer, can mark a fixed number of vertices at each turn. The second one controls a surfer that stands at vertices of the graph and can slide along edges. The surfer starts at some initially marked vertex of the graph; its objective is to reach an unmarked node before all nodes of the graph are marked. The surveillance number sn(G) of a graph G is the minimum number of nodes that the observer has to mark at each turn to ensure a win against any surfer in G. Fomin et al. also defined the connected surveillance game, where the observer must ensure that the marked nodes always induce a connected subgraph. They asked what the cost of connectivity is, i.e., whether there is a constant c > 0 such that the ratio between the connected surveillance number csn(G) and sn(G) is at most c for any graph G. It is straightforward to show that csn(G) ≤ ∆ · sn(G) for any graph G with maximum degree ∆. Moreover, it has been shown that there are graphs G for which csn(G) = sn(G) + 1. In this paper, we investigate the cost of connectivity. We first provide new non-trivial upper and lower bounds for the cost of connectivity in the surveillance game. More precisely, we present a family of graphs G such that csn(G) > sn(G) + 1. Moreover, we prove that csn(G) ≤ √(n · sn(G)) for any n-node graph G. While the gap between these bounds remains huge, it seems difficult to reduce it. We then define the online surveillance game, where the observer has no a priori knowledge of the graph topology and discovers it little by little. This variant, which better fits the prefetching motivation, is a restriction of the connected variant. Unfortunately, we show that no algorithm for solving the online surveillance game has a competitive ratio better than Ω(∆). That is, while interesting, this variant does not help to obtain better upper bounds for the connected variant. We finally answer an open question of [Fomin et al., 2012] by proving that deciding whether the surveillance number of a digraph with maximum degree 6 is at most 2 is NP-hard.

    PDF Text Searching System

    Get PDF
This project develops a simple PDF text-searching system capable of searching and processing the information in text files on a user's PC and in local networks. The main purpose of the project is to assist users in finding PDF documents and files within their local drives: the appropriate documents can be found by entering the desired search terms (keywords) in the PDF Text Searching System. Two objectives have been set for this project. The first is to study and gain a better understanding of the software that will be used to develop the PDF text-searching system; the second is to develop a PDF text-searching system capable of searching and processing the information in text files on a user's PC and in local networks. For the methodology, the Rapid Application Development (RAD) approach has been employed. It was chosen because it is effective and suitable for a short-duration project: it is designed for developers and users to work together intensively toward their goal. By using the RAD methodology, the project was completed within the time allocated. The results and discussion part covers all the outcomes of the project, based on the surveys and questionnaires conducted. The findings determine whether the proposed system is acceptable and meets the users' needs. To provide better services, some suggestions for future enhancement are made, which can make the current system more efficient and effective.
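The core matching step of such a system can be sketched as follows, assuming the PDF content has already been extracted to plain-text files; the extraction stage and any indexing the real system might use are not shown, and all names here are illustrative:

```python
import os

def search_keywords(root, keywords):
    """Walk a directory tree and return the paths of extracted-text files
    that contain every given keyword (case-insensitive).

    This is only a sketch of the matching step: a full system would first
    convert each PDF to text and would likely build an index rather than
    re-reading every file per query."""
    needles = [kw.lower() for kw in keywords]
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith(".txt"):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    text = f.read().lower()
            except OSError:
                continue  # unreadable file: skip rather than abort the search
            if all(n in text for n in needles):
                hits.append(path)
    return hits
```

A query such as `search_keywords("/home/user/docs", ["rapid", "methodology"])` would then return the files matching all entered keywords, mirroring the keyword-entry workflow the abstract describes.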

    The Inkwell

    Get PDF

    Satisfaire un internaute impatient est difficile

    Get PDF
Consider a Web surfer who goes from one Web page to another by following the links encountered. To keep the surfer from growing (im)patient, it is important to try to download documents before the surfer reaches them. However, the cost of such prefetching must not exceed the time it saves. Thus, the bandwidth used for prefetching must be minimized while ensuring that the impatient surfer never waits. We model this problem as a Cops-and-Robber-type game in graphs. In particular, given a graph G representing the Web graph and a starting Web page v_0 ∈ V(G), we define the control index of G, ic(G, v_0) ∈ ℕ, which models the minimum download speed sufficient for the surfer starting from v_0 never to wait, whatever he does. We consider the problem of deciding whether ic(G, v_0) ≤ k and prove several complexity results. In particular, deciding whether ic(G, v_0) ≤ 2 is NP-hard if G is chordal, and deciding whether ic(G, v_0) ≤ 4 is PSPACE-complete if G is a directed acyclic graph. We give an exact exponential algorithm computing ic(G, v_0) in time O*(2^n) in any n-vertex graph. We then show that the problem is polynomial for trees and interval graphs. Finally, we give a combinatorial characterization of the control index: for any graph G and v_0 ∈ V(G), ic(G, v_0) ≥ max_S ⌈(|N[S]| − 1)/|S|⌉, where the maximum is over sets S with v_0 ∈ S ⊆ V such that S induces a connected subgraph, and N[S] is the set of vertices in S or neighbors of a vertex of S. Moreover, equality holds for trees.
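The closing characterization lends itself to a direct brute-force check on small graphs. The sketch below is illustrative only (the function names are ours): it computes the lower bound max_S ⌈(|N[S]| − 1)/|S|⌉ over connected sets S containing v_0, which, per the abstract, equals the control index when the graph is a tree.

```python
from itertools import combinations
from math import ceil

def is_connected(adj, S):
    """Check that the vertex set S induces a connected subgraph (DFS)."""
    start = next(iter(S))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w in S and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == S

def control_index_bound(adj, v0):
    """Brute-force max over connected S containing v0 of ceil((|N[S]|-1)/|S|).

    This is a lower bound on the control index in general, and equals it
    on trees by the characterization above. Exponential in |V|."""
    vertices = list(adj)
    best = 0
    for r in range(1, len(vertices) + 1):
        for subset in combinations(vertices, r):
            S = set(subset)
            if v0 not in S or not is_connected(adj, S):
                continue
            closed = S | {w for v in S for w in adj[v]}  # N[S]
            best = max(best, ceil((len(closed) - 1) / len(S)))
    return best
```

On a star with the center as v_0, taking S to be the center alone already gives ⌈(deg(v_0))/1⌉, recovering the intuition that the answer is at least the number of pages one click away.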

    Analysis of prefetching methods from a graph-theoretical perspective

    Get PDF
It is important to highlight the role Content Distribution Networks (CDNs) play in rapidly growing Internet topologies. They are responsible for serving the lion's share of Internet content to end users by replicating it from the origin server and placing it on a caching server closer to them. Probably the biggest issues CDNs have to deal with revolve around deciding which content gets prefetched, in which surrogate/caching server it is placed, and how to allocate storage to each server efficiently. We focus on the content selection/prefetching problem, extending the work done by Sidiropoulos et al. (World Wide Web Journal, vol. 11, 2008, pp. 39-70). Specifically, we try to determine how their clustering algorithm can work in specific environments in comparison with an approach used to solve the surveillance game in graphs, as discussed by Fomin et al. (Proc. 6th Int'l Conf. on FUN with Algorithms, 2012, pp. 166-176) and Giroire et al. (Theoretical Computer Science, vol. 584, 2015, pp. 131-143). Along the way, we provide another definition for cluster cohesion that accounts for edge cases. Finally, we define an original problem, which consists of partitioning a graph into a predefined number of disjoint clusters of optimal average cohesion.

    Fundamental Design Considerations For Creating Web Pages

    Get PDF
Fundamental Design Considerations for Creating Web Pages is a graduate review paper written to increase awareness of the proper use of design principles and page layout in designing school web sites. It was written in response to the increase of poorly designed web sites that are difficult to read and understand because of improper text and text sizes, distracting backgrounds, and mismatched color combinations. Information is given on the design considerations and background knowledge needed to create a web site. The use of tables and frames is compared, along with the four design elements of proximity, repetition, contrast, and alignment. The reader is also given information about readability and navigation within a web site.
