20 research outputs found

    A Prototype for an e-Recruitment Platform using Semantic Web Technologies

    No full text
    Nowadays, the continuous demand for qualified candidates in the IT domain has driven the adoption of e-Recruitment tools, which are increasingly used at the expense of traditional methods. This study focuses on the use of ontologies in developing a job recommender system that automatically matches job offers with candidates' profiles, and vice versa. To design the IT e-Recruitment ontology, we gathered a list of all the features a platform of this kind should provide, for both the job seeker and the recruiter. Based on the selected requirements, we developed an ontology that offers all the means necessary to implement such a job recommender system, designed to connect people with job opportunities and vice versa. The second part of the paper proposes a Java-based architecture to implement the e-Recruitment platform. To convert the users' input into RDF descriptions, the RDF2Go and RDFBeans APIs are employed. Storing and retrieving the data relies on the Jena Framework, which provides dedicated interfaces for accessing the Fuseki2 server over HTTP. Finally, the prototype of the e-Recruitment platform is presented, together with its core functionalities.
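    A hedged illustration: the abstract names the Jena Framework and the Fuseki2 server but shows no code, so the minimal Java sketch below shows what storing a candidate profile in a remote Fuseki dataset and querying it back over HTTP could look like via Jena's RDFConnection API. The dataset URL, namespace, and property names are invented for illustration and do not come from the paper.

    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdfconnection.RDFConnection;
    import org.apache.jena.rdfconnection.RDFConnectionFactory;

    public class RecruitmentStoreSketch {
        // Hypothetical Fuseki dataset and vocabulary; the paper does not specify them.
        static final String FUSEKI = "http://localhost:3030/recruitment";
        static final String NS = "http://example.org/recruitment#";

        public static void main(String[] args) {
            // Build a small RDF description of a candidate profile in memory.
            Model model = ModelFactory.createDefaultModel();
            model.createResource(NS + "candidate42")
                 .addProperty(model.createProperty(NS, "hasSkill"), "Java")
                 .addProperty(model.createProperty(NS, "seeksRole"), "Backend Developer");

            // Push the model to Fuseki over HTTP, then query it back with SPARQL.
            try (RDFConnection conn = RDFConnectionFactory.connect(FUSEKI)) {
                conn.load(model);
                try (QueryExecution qe = conn.query(
                        "SELECT ?c WHERE { ?c <" + NS + "hasSkill> \"Java\" }")) {
                    qe.execSelect().forEachRemaining(row ->
                            System.out.println("Match: " + row.getResource("c")));
                }
            }
        }
    }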

    Reducing Maximum Stretch in Compact Routing

    No full text
    It is important in communication networks to use routes that are as short as possible (i.e., have low stretch) while keeping routing tables small. Recent advances in compact routing show that a stretch of 3 can be achieved while maintaining sublinear (in the size of the network) space at each node [14]. It is also known that no routing scheme can achieve stretch less than 3 with sublinear space for arbitrary networks. In contrast, simulations on real-life networks have indicated that stretch less than 3 can indeed be obtained using sublinear-sized routing tables [6]. In this paper, we further investigate the space-stretch tradeoffs for compact routing by analyzing a specific class of graphs and by presenting an efficient algorithm that (approximately) finds the optimum space-stretch tradeoff for any given network. We first study a popular model of random graphs, known as Bernoulli or Erdős–Rényi random graphs, and prove that stretch less than 3 can be obtained in conjunction with sublinear routing tables. In particular, stretch 2 can be obtained using routing tables that grow roughly as n^{3/4}, where n is the number of nodes in the network. Compact routing schemes often involve the selection of landmarks. We present a simple greedy scheme for landmark selection that takes a desired stretch s and a budget L on the number of landmarks as input, and produces a set of at most O(L log n) landmarks that achieve stretch s. Our scheme produces routing tables that use at most an O(log n) factor more space than the optimum scheme achieving stretch s with L landmarks. This may be a valuable tool for obtaining near-optimum stretch-space tradeoffs for specific graphs. We simulate this greedy scheme (and other heuristics) on multiple classes of random graphs as well as on Internet-like graphs.
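    The greedy landmark-selection scheme is only described at a high level. Below is a minimal Java sketch of one natural reading: treat each candidate landmark as covering the nodes whose routes through it would meet the stretch bound s, then pick landmarks greedily by marginal coverage, the classic set-cover heuristic that accounts for the O(log n) factor. The coverage predicate and the all-pairs distance matrix input are assumptions for illustration, not the paper's exact formulation.

    import java.util.ArrayList;
    import java.util.BitSet;
    import java.util.List;

    public class GreedyLandmarks {
        // Greedy landmark selection sketch. dist[u][v] holds shortest-path
        // distances; a landmark l "covers" node u if routing every destination
        // v via l keeps dist[u][l] + dist[l][v] <= s * dist[u][v]. We repeatedly
        // pick the landmark covering the most uncovered nodes (set-cover greedy).
        static List<Integer> select(int[][] dist, double s, int maxLandmarks) {
            int n = dist.length;
            BitSet covered = new BitSet(n);
            List<Integer> landmarks = new ArrayList<>();
            while (covered.cardinality() < n && landmarks.size() < maxLandmarks) {
                int best = -1, bestGain = -1;
                for (int l = 0; l < n; l++) {
                    int gain = 0;
                    for (int u = covered.nextClearBit(0); u < n;
                             u = covered.nextClearBit(u + 1)) {
                        if (covers(dist, l, u, s)) gain++;
                    }
                    if (gain > bestGain) { bestGain = gain; best = l; }
                }
                if (bestGain <= 0) break;  // no remaining node is coverable at stretch s
                landmarks.add(best);
                for (int u = covered.nextClearBit(0); u < n;
                         u = covered.nextClearBit(u + 1)) {
                    if (covers(dist, best, u, s)) covered.set(u);
                }
            }
            return landmarks;
        }

        // Assumed coverage test: all of u's routes via l stay within stretch s.
        static boolean covers(int[][] dist, int l, int u, double s) {
            for (int v = 0; v < dist.length; v++) {
                if (v != u && dist[u][l] + dist[l][v] > s * dist[u][v]) return false;
            }
            return true;
        }
    }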

    Scale Free Aggregation in Sensor Networks

    No full text
    Sensor networks are distributed data collection systems, frequently used for monitoring environments in which “nearby” data has a high degree of correlation. This induces opportunities for data aggregation, which are crucial given the severe energy constraints of the sensors; it is therefore very desirable to exploit data correlations in order to avoid transmitting redundant information. In our model, we formalize a notion of correlation that can vary according to a parameter k. We then relate the expected collision time of “nearby” walks on the grid to the optimum cost of scale-free aggregation. We also propose a very simple randomized algorithm for routing information on a grid of sensors that satisfies the appropriate collision-time condition. Thus, we prove that this simple scheme is a constant-factor approximation (in expectation) to the optimum aggregation tree simultaneously for all correlation parameters k. The key contribution in our randomized analysis is to bound the average expected collision time of non-homogeneous random walks on the grid, i.e., walks in which the next-hop probability depends on the current position.
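    The abstract does not spell out the randomized routing rule. One natural position-dependent walk on a grid, sketched below in Java, forwards a reading from (x, y) toward a sink at the origin, stepping left with probability x/(x+y) and down with probability y/(x+y), so the next-hop distribution depends on the current position and walks started at nearby nodes tend to collide early, letting their data be aggregated along a shared path suffix. Treat this rule as an illustrative assumption, not the paper's exact algorithm.

    import java.util.Random;

    public class GridWalkRouting {
        // Route from (x, y) to a sink at (0, 0); returns the hop count.
        static int pathLength(int x, int y, Random rng) {
            int hops = 0;
            while (x > 0 || y > 0) {
                // Non-homogeneous step: the bias depends on the current position.
                if (y == 0 || (x > 0 && rng.nextDouble() < (double) x / (x + y))) {
                    x--;  // step left, toward the sink's column
                } else {
                    y--;  // step down, toward the sink's row
                }
                hops++;
            }
            return hops;
        }

        public static void main(String[] args) {
            Random rng = new Random(7);
            System.out.println("hops from (5, 8): " + pathLength(5, 8, rng));
        }
    }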

    Routers with Very Small Buffers

    No full text
    Internet routers require buffers to hold packets during times of congestion. The buffers need to be fast, and so ideally they should be small enough to use fast memory technologies such as SRAM or all-optical buffering. Unfortunately, a widely used rule of thumb says we need a bandwidth-delay product of buffering at each router so as not to lose link utilization. This can be prohibitively large. In a recent paper, Appenzeller et al. challenged this rule of thumb and showed that for a backbone network, the buffer size can be divided by √N without sacrificing throughput, where N is the number of flows sharing the bottleneck. In this paper, we explore how buffers in the backbone can be reduced even further, to as little as a few dozen packets, if we are willing to sacrifice a small amount of link capacity. We argue that if the TCP sources are not overly bursty, then fewer than twenty packet buffers are sufficient for high throughput. Specifically, we argue that O(log W) buffers are sufficient, where W is the window size of each flow. We support our claim with analysis and a variety of simulations. The change we need to make to TCP is minimal: each sender just needs to pace packet injections from its window.
    ∗ This work was supported under DARPA/MTO DOD-N award no. W911NF-04-0001/KK4118 (LASOR PROJECT).
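    To make the three buffer-sizing rules in the abstract concrete, the short Java sketch below compares them for one hypothetical backbone link; the link rate, RTT, packet size, flow count, and window size are made-up example numbers, not figures from the paper, and the O(log W) rule hides a constant the abstract does not give.

    public class BufferSizing {
        public static void main(String[] args) {
            // Hypothetical backbone link parameters (illustrative only).
            double rateBps = 10e9;    // 10 Gb/s link
            double rttSec = 0.25;     // 250 ms round-trip time
            double pktBits = 12_000;  // 1500-byte packets
            int flows = 10_000;       // N flows sharing the bottleneck
            int window = 64;          // W, packets in a flow's window

            // Classic rule of thumb: one bandwidth-delay product of buffering.
            double bdpPkts = rateBps * rttSec / pktBits;

            // Appenzeller et al.: divide the bandwidth-delay product by sqrt(N).
            double stanfordPkts = bdpPkts / Math.sqrt(flows);

            // This paper: O(log W) packets suffice for paced, non-bursty sources;
            // we print the bare log2(W) just to show the scale.
            double tinyPkts = Math.log(window) / Math.log(2);

            System.out.printf("BDP rule:      %,.0f packets%n", bdpPkts);
            System.out.printf("BDP/sqrt(N):   %,.0f packets%n", stanfordPkts);
            System.out.printf("O(log W) rule: ~%.0f packets (times a constant)%n", tinyPkts);
        }
    }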
