7 research outputs found

    Matching demand and offer in on-line provision: A longitudinal study of monster.com

    When considering the jobs market, changes and recurring trends in employers' demand for skilled employees have a tremendous impact on the evolution of website content. On-line job-site adverts, academic institutions and professional development “standard bodies” all share those needs as the common driver of their content's evolution. This paper aims, on the one hand, to discuss and analyse how the current needs and requirements (“demand”) for IT skills in the UK job market drive the content of different types of websites, analysing in turn whether and how this demand changes. On the other hand, it studies what UK higher education institutions offer to fulfill this demand. The results of analysing the evolution of the largest on-line job centre (www.monster.com) and the websites of selected UK academic institutions demonstrate that what is requested by UK industry is often not clearly offered by UK institutions. Given the prominence of monster.com in the global economy, these results could provide a meaningful starting point to support curriculum development in the UK, as much as worldwide.
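
    A minimal sketch of the demand/offer comparison the abstract describes: counting skill terms in job-advert text and in course-description text, then reporting terms demanded but not offered. The skill vocabulary and sample texts below are illustrative assumptions, not data from the study.

        from collections import Counter
        import re

        # Illustrative skill vocabulary; the study derived its terms from monster.com adverts.
        SKILLS = ["java", "sql", "php", "linux", "xml"]

        def skill_counts(text: str) -> Counter:
            """Count occurrences of each skill term in lowercased free text."""
            tokens = re.findall(r"[a-z0-9.+#]+", text.lower())
            return Counter(t for t in tokens if t in SKILLS)

        adverts = "Seeking Java and SQL developers; Linux experience a plus"  # hypothetical demand side
        modules = "Modules cover Java programming and XML processing"         # hypothetical offer side
        demand, offer = skill_counts(adverts), skill_counts(modules)

        # Skills requested by industry but absent from the institutional offer.
        print([s for s in SKILLS if demand[s] > 0 and offer[s] == 0])   # ['sql', 'linux']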

    Maestro: An Extensible General-Purpose Data Gathering and Classification Platform

    Researchers who want to gather and classify data on a specific topic are currently forced to chain together several tools in a tedious process, given the lack of software that can collect data from multiple sources for later analysis and classification. Our study addresses these issues by designing a novel software platform, named Maestro, that automatically gathers, classifies, and provides topic-specific datasets through a dynamic set of configurable components (plugins). Extensibility is Maestro’s main feature: new plugins can be added incrementally by the core team or by other developers without changing the core source code. To evaluate this proposal and support the discussion, a simple working example with images of the former U.S. president Donald Trump and his facial expressions is shown.
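
    The abstract's central claim is extensibility through incrementally added plugins. A minimal sketch of that pattern, with hypothetical class and method names (the paper's actual plugin API is not reproduced here):

        from abc import ABC, abstractmethod

        class GathererPlugin(ABC):
            """Interface that every data-source plugin must implement."""
            @abstractmethod
            def gather(self, topic: str) -> list[str]: ...

        class Maestro:
            """Core that reaches data sources only through registered plugins."""
            def __init__(self) -> None:
                self._plugins: list[GathererPlugin] = []

            def register(self, plugin: GathererPlugin) -> None:
                # New sources are added incrementally, without touching the core's code.
                self._plugins.append(plugin)

            def gather_all(self, topic: str) -> list[str]:
                return [item for p in self._plugins for item in p.gather(topic)]

        class StubImageSource(GathererPlugin):   # hypothetical example plugin
            def gather(self, topic: str) -> list[str]:
                return [f"image-url-for-{topic}"]    # a real plugin would query an API here

        core = Maestro()
        core.register(StubImageSource())
        print(core.gather_all("facial-expressions"))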

    Information monitoring based on web resources

    The paper summarizes a system for monitoring Web resources based on a defined query. An experiment compares the results returned by the proposed system with those provided by the Google Search and Google Alerts services. The results indicate that the system could be a solid base for the development and testing of pattern-detection and information-retrieval mechanisms, while providing more data than the Google solutions. Drawbacks of the system and further development plans are also presented.
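
    A minimal sketch of the monitoring loop such a system implies: periodically fetching a fixed set of resources and reporting those that newly match the query. The URLs, the query, the interval, and the substring-matching rule are all assumptions for illustration.

        import time
        import urllib.request

        RESOURCES = ["https://example.com/news"]   # hypothetical monitored URLs
        QUERY = "data breach"                      # hypothetical query

        def fetch(url: str) -> str:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read().decode("utf-8", errors="replace")

        matched: set[str] = set()
        while True:                                # a monitoring daemon runs indefinitely
            for url in RESOURCES:
                try:
                    page = fetch(url)
                except OSError:
                    continue                       # skip unreachable resources this round
                if QUERY in page.lower() and url not in matched:
                    matched.add(url)
                    print("match:", url)
            time.sleep(3600)                       # re-check hourly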

    Cross-language program analysis for dynamic web applications

    Web applications have become one of the most important and prevalent types of software. In modern web applications, the display of any web page is usually an interplay of multiple languages and involves code execution at different locations (the server side, the database side, and the client side). These characteristics make web applications hard to write and maintain. Much of the existing research and tool support deals with a single language and is therefore limited in addressing those challenges. To fill this gap, this dissertation aims to develop an infrastructure for cross-language program analysis of dynamic web applications, supporting the creation of reliable and robust web applications with higher quality and lower costs. To reach that goal, we have developed the following research components. First, to understand the client-side code that is embedded in the server-side code, we develop an output-oriented symbolic execution engine that approximates all possible outputs of a server-side program. Second, we use variability-aware parsing, a technique recently developed for parsing conditional code in software product lines, to parse those outputs into a compact tree representation (called VarDOM) that represents all possible DOM variants of a web application. Third, we leverage the VarDOM to extract semantic information from the server-side code. Specifically, we develop novel concepts, techniques, and tools (1) to build call graphs for embedded client code in different languages, (2) to compute cross-language program slices, and (3) to compute a novel test coverage criterion, called output coverage, that aids testers in creating effective test suites for detecting output-related bugs. The results have been demonstrated in a wide range of applications for web programs, such as IDE services, fault localization, bug detection, and testing.
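
    To make the first component concrete, here is a toy illustration of output-oriented symbolic execution: rather than following one concrete run, both arms of every branch are explored and the set of all possible outputs is collected. The miniature "server program" encoding below is invented for illustration; the dissertation targets real server-side languages such as PHP.

        # A toy server-side program: plain strings emit output, and the tuple
        # ("if", then_branch, else_branch) models a branch on unknown input.
        PROGRAM = [
            "<html><body>",
            ("if", ["<p>Welcome back!</p>"], ["<a href='/login'>Log in</a>"]),
            "</body></html>",
        ]

        def possible_outputs(stmts) -> set[str]:
            """Approximate every output the program can produce, over all branch choices."""
            outputs = {""}
            for stmt in stmts:
                if isinstance(stmt, str):
                    outputs = {o + stmt for o in outputs}
                else:                              # a branch: take the union of both arms
                    _, then_b, else_b = stmt
                    arms = possible_outputs(then_b) | possible_outputs(else_b)
                    outputs = {o + a for o in outputs for a in arms}
            return outputs

        for page in sorted(possible_outputs(PROGRAM)):
            print(page)                            # two DOM variants, one per branch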

    Web crawlers compared

    Tools for assessing the quality and reliability of Web applications rely on the ability to download the target of the analysis. This is achieved through Web crawlers, which can automatically navigate a Web site and perform appropriate actions (such as downloading) during the visit. The most important performance indicators for a Web crawler are its completeness and robustness, which measure, respectively, its ability to visit a Web site entirely and without errors. The variety of implementation languages and technologies used for Web site development makes these two indicators hard to maximize. We conducted an evaluation study in which we tested several of the available Web crawlers.
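
    The two indicators reduce to simple ratios under the plain reading of the abstract (these formulas are that plain reading, not the study's exact instrumentation), shown below with hypothetical numbers for one crawler on a 100-page reference site.

        def completeness(visited: set[str], site_pages: set[str]) -> float:
            """Fraction of the site's pages the crawler actually reached."""
            return len(visited & site_pages) / len(site_pages)

        def robustness(visited: set[str], errored: set[str]) -> float:
            """Fraction of visited pages downloaded without error."""
            return 1 - len(errored & visited) / len(visited)

        site = {f"/page{i}" for i in range(100)}     # hypothetical reference site
        visited = {f"/page{i}" for i in range(90)}   # crawler reached 90 of 100 pages
        errored = {"/page3", "/page7"}               # 2 of the visits failed
        print(completeness(visited, site))           # 0.9
        print(robustness(visited, errored))          # ~0.978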