
    Citation Counts and Evaluation of Researchers in the Internet Age

    Bibliometric measures derived from citation counts are increasingly being used as a research evaluation tool. Their strengths and weaknesses have been widely analyzed in the literature and are often the subject of vigorous debate. We believe there are a few fundamental issues related to the impact of the web that do not receive the attention they deserve. We focus on the evaluation of researchers, but several of our arguments also apply to the evaluation of research institutions, journals, and conferences. Comment: 4 pages, 2 figures, 3 tables

    Unveiling evolutionary algorithm representation with DU maps

    Evolutionary algorithms (EAs) have proven to be effective in tackling problems in many different domains. However, users are often required to spend significant effort fine-tuning the EA parameters to make the algorithm work. In principle, visualization tools may be of great help in this laborious task, but current visualization tools are either EA-specific, and hence hardly available to all users, or too general to convey detailed information. In this work, we study the Diversity and Usage map (DU map), a compact visualization for analyzing a key component of every EA: the representation of solutions. In a single heat map, the DU map visualizes, for entire runs, how diverse the genotype is across the population and to what degree each gene in the genotype contributes to the solution. We demonstrate the generality of the DU map concept by applying it to six EAs that use different representations (bit and integer strings, trees, ensembles of trees, and neural networks). We present the results of an online user study about the usability of the DU map, which confirm the suitability of the proposed tool and provide important insights into our design choices. By providing a visualization tool that can be easily tailored by specifying the diversity (D) and usage (U) functions, the DU map aims to be a powerful analysis tool for EA practitioners, making EAs more transparent and hence lowering the barrier to their use.
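    The abstract does not give the exact D and U definitions; the following is a minimal sketch of how one row of a DU-style map could be computed for a bit-string representation, using per-gene Shannon entropy as an illustrative diversity function and a caller-supplied usage vector (all names are hypothetical, not the paper's actual code):

```python
import math

def gene_diversity(population, gene_idx):
    # Shannon entropy (bits) of the values at one gene position across the
    # population; used here as an illustrative diversity (D) function.
    counts = {}
    for genotype in population:
        v = genotype[gene_idx]
        counts[v] = counts.get(v, 0) + 1
    n = len(population)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def du_row(population, usage):
    # One row of a DU-style map: a (diversity, usage) pair per gene for the
    # current generation; `usage` is a caller-supplied per-gene usage (U) vector.
    return [(gene_diversity(population, g), usage[g]) for g in range(len(population[0]))]

# Toy population of bit-string genotypes: gene 0 is fixed, genes 1-2 vary.
pop = [[0, 1, 1], [0, 0, 1], [0, 1, 0], [0, 1, 1]]
row = du_row(pop, usage=[1.0, 0.75, 0.5])  # gene 0 has zero diversity
```

    Stacking one such row per generation yields the heat map; swapping in other D and U functions would adapt the map to other representations.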

    Crowded Environment Navigation with NEAT: Impact of Perception Resolution on Controller Optimization

    Crowd navigation with autonomous systems is a topic which has seen a rapid increase in interest recently. While it appears natural to humans, reaching a target can prove difficult or impossible for a mobile robot because of the safety issues related to collisions with people. In this work we propose an approach to control a robot in a crowded environment; the method employs an Artificial Neural Network (ANN) trained with the NeuroEvolution of Augmenting Topologies (NEAT) method. Models for the kinematics, perception, and cognition of the robot are presented. In particular, perception is based on a raycasting model which is tailored to the ANN. An in-depth analysis of a number of parameters of the environment and the robot is performed and a comparative analysis is presented; finally, the performance of the controller trained with NEAT is compared to that of a human driver who takes over from the controller. Results show that the intelligent controller is able to perform on par with the human, within the simulated environment.
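    As an illustration of the kind of raycasting perception model mentioned above (the paper's actual model is not specified in the abstract), the following sketch casts a fan of rays from the robot and returns, per ray, the distance to the nearest circular obstacle; all names and parameters are assumptions:

```python
import math

def ray_distances(pos, heading, obstacles, n_rays=5, fov=math.pi, max_range=10.0):
    # Cast n_rays rays (n_rays >= 2) evenly across a field of view `fov` centred
    # on `heading`; return per-ray distance to the nearest circular obstacle
    # (x, y, radius), clipped to max_range.
    px, py = pos
    readings = []
    for i in range(n_rays):
        angle = heading - fov / 2 + fov * i / (n_rays - 1)
        dx, dy = math.cos(angle), math.sin(angle)
        best = max_range
        for ox, oy, r in obstacles:
            t = (ox - px) * dx + (oy - py) * dy  # projection of centre onto the ray
            if t <= 0:
                continue  # obstacle centre is behind the ray origin
            closest2 = (px + t * dx - ox) ** 2 + (py + t * dy - oy) ** 2
            if closest2 <= r * r:
                best = min(best, max(t - math.sqrt(r * r - closest2), 0.0))
        readings.append(best)
    return readings
```

    A vector of readings like this could form the ANN's input layer, one input per ray, which is why the resolution (number of rays) directly shapes the controller optimization.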

    Bibliometric Evaluation of Researchers in the Internet Age

    Research evaluation, which is an increasingly pressing issue, invariably relies on citation counts. In this contribution we highlight two concerns that the research community needs to pay attention to. One, in the world of search-engine-facilitated research, factors such as ease of web discovery, ease of access, and content relevance, rather than quality, influence what gets read and cited. Two, research evaluation based on citation counts works against many types of high-quality work. We also elaborate on the implications of these points by examining a recent nation-wide evaluation of researchers performed in Italy. We focus on our discipline (computer science), but we believe that our observations are relevant to a broad audience.

    On the Effects of Learning Set Corruption in Anomaly-based Detection of Web Defacements

    Anomaly detection is a commonly used approach for constructing intrusion detection systems. A key requirement is that the data used for building the resource profile are indeed attack-free, but this issue is often skipped or taken for granted. In this work we consider the problem of corruption in the learning data with respect to a specific detection system, i.e., a web site integrity checker. We used corrupted learning sets and observed their impact on performance (in terms of false positives and false negatives). This analysis enabled us to gain important insights into this rather unexplored issue. Based on this analysis we also present a procedure for detecting whether a learning set is corrupted. We evaluated the performance of our proposal and obtained very good results up to a corruption rate close to 50%. Our experiments are based on collections of real data and consider three different flavors of anomaly detection.

    Detection of Hidden Fraudulent URLs within Trusted Sites using Lexical Features

    Internet security threats often involve the fraudulent modification of a web site, frequently with the addition of new pages at URLs where no page should exist. Detecting the existence of such hidden URLs is very difficult because they do not appear during normal navigation and are usually not indexed by search engines. Most importantly, drive-by attacks leading users to hidden URLs, for example to phish credentials, may fool even tech-savvy users, because such hidden URLs are increasingly hosted within trusted sites, thereby rendering HTTPS authentication ineffective. In this work, we propose an approach for detecting such URLs based only on their lexical features, which allows alerting the user before the page is actually fetched. We assess our proposal on a dataset composed of thousands of URLs, with promising results.
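    As a hedged sketch of what lexical features of a URL might look like (the paper's actual feature set is not given in the abstract), the following computes a few common string-level features without ever fetching the page; the feature names are illustrative:

```python
import math
from urllib.parse import urlparse

def lexical_features(url):
    # Illustrative string-level features computed from the URL alone,
    # i.e., without fetching the page it points to.
    counts = {}
    for ch in url:
        counts[ch] = counts.get(ch, 0) + 1
    n = len(url)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {
        "length": n,
        "path_depth": urlparse(url).path.count("/"),  # slashes in the path
        "digit_ratio": sum(ch.isdigit() for ch in url) / max(n, 1),
        "hyphens": url.count("-"),
        "entropy": entropy,  # character-level Shannon entropy (bits)
    }
```

    Feature vectors of this kind could then feed a classifier trained on known benign and fraudulent URLs.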

    A Look at Hidden Web Pages in Italian Public Administrations

    Preventing illegitimate modifications to web sites offering a public service is a fundamental requirement of any e-government initiative. Unfortunately, attacks on web sites resulting in the creation of fraudulent content by hackers are ubiquitous. In this work we attempted to assess the ability of Italian public administrations to remain in full control of their web sites. We examined several thousand sites, including all local governments and universities, and found that approximately 1.16% of the analyzed sites serve content that admittedly is not supposed to be there. Although this content does not constitute an immediate threat to citizens, the result does not seem very encouraging, also because our methodology yields very conservative estimates. We believe that our analysis allows gaining useful insights into this novel and peculiar threat.

    GOMGE: Gene-Pool Optimal Mixing on Grammatical Evolution

    Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) is a recent Evolutionary Algorithm (EA) in which the interactions among parts of the solution (i.e., the linkage) are learned and exploited in a novel variation operator. We present GOMGE, the extension of GOMEA to Grammatical Evolution (GE), a popular EA based on an indirect representation which may be applied to any problem whose solutions can be described using a context-free grammar (CFG). GE is a general approach that does not require the user to tune the internals of the EA to fit the problem at hand: there is hence the opportunity to benefit from the potential of GOMEA to automatically learn and exploit the linkage. We apply the proposed approach to three variants of GE differing in the representation (original GE, SGE, and WHGE) and incorporate in GOMGE two specific improvements aimed at coping with the high degeneracy of those representations. We experimentally assess GOMGE and show that, when coupled with WHGE and SGE, it is clearly beneficial to both effectiveness and efficiency, whereas it delivers mixed results with the original GE. Medvet, Eric; Bartoli, Alberto; De Lorenzo, Andrea; Tarlao, Fabiano

    A Framework for Large-Scale Detection of Web Site Defacements

    Web site defacement, the process of introducing unauthorized modifications to a web site, is a very common form of attack. In this paper we describe and experimentally evaluate a framework that may constitute the basis for a defacement detection service capable of monitoring thousands of remote web sites systematically and automatically. In our framework an organization may join the service by simply providing the URLs of the resources to be monitored along with the contact point of an administrator. The monitored organization may thus take advantage of the service with just a few mouse clicks, without installing any software locally or changing its own daily operational processes. Our approach is based on anomaly detection and allows monitoring the integrity of many remote web resources automatically while remaining fully decoupled from them, in particular, without requiring any prior knowledge about those resources. We evaluated our approach on a selection of dynamic resources and a set of publicly available defacements. The results are very satisfactory: all attacks were detected while keeping false positives to a minimum. We also assessed the performance and scalability of our proposal and found that it may indeed constitute the basis for actually deploying the proposed service on a large scale.
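    The core idea of anomaly-based integrity monitoring can be illustrated with a minimal sketch: build a statistical profile of a numeric reading of the monitored page (page size is used here purely as an example) from observations assumed attack-free, then flag readings that deviate too far. The threshold and all names are illustrative, not the paper's actual method:

```python
import statistics

def build_profile(readings):
    # Profile a numeric feature of a monitored resource (e.g., page size in
    # bytes) from observations assumed to be attack-free.
    return statistics.mean(readings), statistics.stdev(readings)

def is_anomalous(value, profile, k=3.0):
    # Flag a new reading that deviates more than k standard deviations
    # from the learned profile.
    mu, sigma = profile
    return abs(value - mu) > k * sigma

# Attack-free observations of a page's size over several checks.
sizes = [10120, 10180, 10150, 10140, 10160]
profile = build_profile(sizes)
```

    A real deployment would combine many such sensors per page, which is what lets the service stay fully decoupled from the monitored sites.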