
    SpreadCluster: Recovering Versioned Spreadsheets through Similarity-Based Clustering

    Version information plays an important role in spreadsheet understanding, maintenance and quality improvement. However, end users rarely use version control tools to document spreadsheet version information. As a result, this version information is missing, and different versions of a spreadsheet coexist as individual, similar spreadsheets. Existing approaches try to recover spreadsheet version information by clustering these similar spreadsheets based on spreadsheet filenames or related email conversations. However, the applicability and accuracy of existing clustering approaches are limited because the necessary information (e.g., filenames and email conversations) is often missing. We inspected the versioned spreadsheets in VEnron, which is extracted from the Enron Corporation. In VEnron, the different versions of a spreadsheet are clustered into an evolution group. We observed that the versioned spreadsheets in each evolution group exhibit certain common features (e.g., similar table headers and worksheet names). Based on this observation, we proposed an automatic clustering algorithm, SpreadCluster. SpreadCluster learns the criteria of features from the versioned spreadsheets in VEnron, and then automatically clusters spreadsheets with similar features into the same evolution group. We applied SpreadCluster to all spreadsheets in the Enron corpus. The evaluation results show that SpreadCluster can cluster spreadsheets with higher precision and recall than the filename-based approach used by VEnron. Based on the clustering results of SpreadCluster, we further created a new versioned spreadsheet corpus, VEnron2, which is much larger than VEnron. We also applied SpreadCluster to two other spreadsheet corpora, FUSE and EUSES. The results show that SpreadCluster can cluster the versioned spreadsheets in these two corpora with high precision. Comment: 12 pages, MSR 201
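
    The recovery idea sketched in this abstract (grouping spreadsheets whose table headers and worksheet names are similar into one evolution group) can be illustrated with a small sketch. The Jaccard features, equal weighting and fixed threshold below are illustrative assumptions; SpreadCluster itself learns its feature criteria from VEnron.

```python
# Illustrative sketch of similarity-based clustering of spreadsheets into
# evolution groups. The feature extraction and the fixed threshold are
# simplifying assumptions; SpreadCluster learns its criteria from VEnron.

def jaccard(a, b):
    """Jaccard similarity of two feature sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def similarity(s1, s2):
    """Average similarity over table-header and worksheet-name features."""
    return 0.5 * (jaccard(s1["headers"], s2["headers"]) +
                  jaccard(s1["sheet_names"], s2["sheet_names"]))

def cluster(spreadsheets, threshold=0.8):
    """Greedy single-link clustering: a spreadsheet joins the first
    evolution group containing a sufficiently similar member."""
    groups = []
    for s in spreadsheets:
        for group in groups:
            if any(similarity(s, member) >= threshold for member in group):
                group.append(s)
                break
        else:
            groups.append([s])
    return groups

# Example: two versions of a budget sheet and an unrelated report.
sheets = [
    {"name": "budget_v1.xls", "headers": {"Quarter", "Revenue", "Cost"},
     "sheet_names": {"2001", "Summary"}},
    {"name": "budget_v2.xls", "headers": {"Quarter", "Revenue", "Cost", "Margin"},
     "sheet_names": {"2001", "Summary"}},
    {"name": "hr_report.xls", "headers": {"Employee", "Department"},
     "sheet_names": {"Staff"}},
]
print([[s["name"] for s in g] for g in cluster(sheets)])
# [['budget_v1.xls', 'budget_v2.xls'], ['hr_report.xls']]
```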

    Embedding Web-based Statistical Translation Models in Cross-Language Information Retrieval

    Although more and more language pairs are covered by machine translation services, many pairs still lack translation resources. Cross-language information retrieval (CLIR) is an application that needs translation functionality of a relatively low level of sophistication, since current models for information retrieval (IR) are still based on a bag of words. The Web provides a vast resource for the automatic construction of parallel corpora, which can be used to train statistical translation models automatically. The resulting translation models can be embedded in several ways in a retrieval model. In this paper, we investigate the problem of automatically mining parallel texts from the Web and different ways of integrating the translation models into the retrieval process. Our experiments on standard test collections for CLIR show that the Web-based translation models can surpass commercial MT systems in CLIR tasks. These results open the perspective of constructing a fully automatic query translation device for CLIR at very low cost. Comment: 37 page
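
    One common way to embed a statistical translation model in a bag-of-words retrieval model is to expand the query with weighted translations of its terms. The toy translation table and the weighted term-overlap scoring below are illustrative assumptions, not the specific integration methods evaluated in the paper.

```python
# Sketch: embedding a statistical translation table in bag-of-words CLIR.
# The translation probabilities and the scoring rule are toy assumptions.
from collections import Counter

# P(target term | source term), e.g. estimated from Web-mined parallel texts.
translation_table = {
    "voiture": {"car": 0.7, "vehicle": 0.3},
    "rapide":  {"fast": 0.6, "quick": 0.4},
}

def translate_query(source_terms):
    """Expand a source-language query into weighted target-language terms."""
    weights = Counter()
    for term in source_terms:
        for target, prob in translation_table.get(term, {}).items():
            weights[target] += prob
    return weights

def score(document_terms, query_weights):
    """Simple weighted term-overlap score over a bag-of-words document."""
    doc = Counter(document_terms)
    return sum(w * doc[t] for t, w in query_weights.items())

query = translate_query(["voiture", "rapide"])
docs = {
    "d1": "a fast car for sale".split(),
    "d2": "slow bicycle repair shop".split(),
}
ranking = sorted(docs, key=lambda d: score(docs[d], query), reverse=True)
print(ranking)  # ['d1', 'd2']
```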

    File Naming in Digital Media Research: Examples from the Humanities and Social Sciences

    This paper identifies organizational challenges faced by Social Science and Humanities (SSH) scholars when dealing with digital data and media, and suggests improved file naming practices in order to maximize organization, making files easier to find, more usable, and more easily shared. We argue that such skills are not formally discussed in the literature, and therefore many scholars do not recognize the problem until they cannot locate a specific file or are sharing files with colleagues. We asked SSH scholars to share their file naming strategies (or lack thereof), and we use these narrative anecdotes to discuss common problems and suggest possible solutions for their general file naming needs.
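
    As a concrete illustration of the kind of structured file naming practice advocated here, the sketch below assembles a name from project, date, description and version components. The specific convention and helper are assumptions made for illustration, not a scheme prescribed by the paper.

```python
# Illustrative helper for a structured file naming convention
# (project_YYYY-MM-DD_description_vNN.ext). The scheme itself is an
# assumption for illustration, not one prescribed by the paper.
import re
from datetime import date

def make_filename(project, description, version, ext, when=None):
    when = when or date.today()
    # Normalise free text: lowercase, spaces to hyphens, drop odd characters.
    slug = re.sub(r"[^a-z0-9-]", "", description.lower().replace(" ", "-"))
    return f"{project}_{when.isoformat()}_{slug}_v{version:02d}.{ext}"

print(make_filename("interviews", "pilot transcript", 3, "docx",
                    when=date(2014, 6, 2)))
# interviews_2014-06-02_pilot-transcript_v03.docx
```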

    Tracking English and Translated Arabic News using GHSOM


    A MapReduce Algorithm for Finding Hotspots of Topics from Time Stamped Documents

    Hotspots of a word or topic are time periods with a burst of activity in a time-stamped document set. Identifying and analyzing hotspots of topics has been an important area of research. Finding hotspots of topics requires processing the contents of documents, which is often time consuming. In this thesis, we explore MapReduce-style algorithms for computing hotspots of topics. MapReduce is a distributed parallel programming model and an associated implementation for processing and analyzing large datasets. The user specifies a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real-world tasks are expressible in this model, and this thesis explores the feasibility of implementing the hotspot algorithm using MapReduce. We design map and reduce functions appropriate for the preprocessing of documents and for the hotspot computation. We implement the functions in Hadoop (a MapReduce framework from the Apache Software Foundation) and conduct several experiments to assess the benefits of the MapReduce-style implementation versus a simple sequential implementation.
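
    A minimal sketch of the map/reduce decomposition described above: the map function emits (term, time-bucket) counts from time-stamped documents, and the reduce step aggregates the counts and flags bursty buckets. The monthly bucketing and the burst test (a bucket count above 1.5 times the term's mean) are illustrative assumptions, not the hotspot algorithm developed in the thesis, and the key grouping normally done by the MapReduce framework is simulated inside the reducer.

```python
# Sketch of a MapReduce-style hotspot computation over time-stamped documents.
# The burst criterion (bucket count > 1.5x the term's mean) is an assumption
# used for illustration, not the algorithm developed in the thesis.
from collections import defaultdict

def map_phase(doc_id, timestamp, text):
    """Emit ((term, month-bucket), 1) for each term occurrence."""
    bucket = timestamp[:7]  # e.g. "2001-10"
    for term in text.lower().split():
        yield (term, bucket), 1

def reduce_phase(pairs):
    """Sum counts per (term, bucket), then flag bursty buckets per term."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    per_term = defaultdict(dict)
    for (term, bucket), n in counts.items():
        per_term[term][bucket] = n
    hotspots = {}
    for term, buckets in per_term.items():
        mean = sum(buckets.values()) / len(buckets)
        hot = [b for b, n in buckets.items() if n > 1.5 * mean]
        if hot:
            hotspots[term] = sorted(hot)
    return hotspots

docs = [
    ("d1", "2001-09-15", "merger talks merger"),
    ("d2", "2001-10-01", "merger merger merger merger announced"),
    ("d3", "2001-11-20", "merger quarterly report"),
]
pairs = [kv for d in docs for kv in map_phase(*d)]
print(reduce_phase(pairs))  # {'merger': ['2001-10']}
```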

    Peer to Peer Information Retrieval: An Overview

    Peer-to-peer technology is widely used for file sharing. In the past decade a number of prototype peer-to-peer information retrieval systems have been developed. Unfortunately, none of these have seen widespread real-world adoption and thus, in contrast with file sharing, information retrieval is still dominated by centralised solutions. In this paper we provide an overview of the key challenges for peer-to-peer information retrieval and the work done so far. We want to stimulate and inspire further research to overcome these challenges. This will open the door to the development and large-scale deployment of real-world peer-to-peer information retrieval systems that rival existing centralised client-server solutions in terms of scalability, performance, user satisfaction and freedom.

    Design of a Controlled Language for Critical Infrastructures Protection

    We describe a project for the construction of a controlled language for critical infrastructures protection (CIP). This project originates from the need to coordinate and categorize communications on CIP at the European level. These communications can be physically represented by official documents, reports on incidents, informal communications and plain e-mail. We explore the application of traditional library science tools for the construction of controlled languages in order to achieve our goal. Our starting point is an analogous effort undertaken during the sixties in the field of nuclear science, known as the Euratom Thesaurus. JRC.G.6-Security technology assessmen
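
    A controlled vocabulary of the Euratom-thesaurus kind is essentially a set of preferred terms connected by broader/narrower/related links. The tiny structure below illustrates one way such a CIP thesaurus could be represented; the entries and the resolve helper are invented examples, not terms or tooling from the project.

```python
# Illustrative representation of a small controlled vocabulary for CIP,
# with preferred terms, non-preferred synonyms (USE), broader/narrower terms
# (BT/NT) and related terms (RT), in the style of a library-science thesaurus.
# The entries are invented examples, not terms from the actual project.
from dataclasses import dataclass, field

@dataclass
class Term:
    preferred: str
    use_for: set = field(default_factory=set)   # non-preferred synonyms
    broader: set = field(default_factory=set)   # BT
    narrower: set = field(default_factory=set)  # NT
    related: set = field(default_factory=set)   # RT

thesaurus = {
    "critical infrastructure": Term(
        "critical infrastructure",
        narrower={"power grid", "water supply"},
        related={"incident report"}),
    "power grid": Term(
        "power grid",
        use_for={"electricity network"},
        broader={"critical infrastructure"}),
}

def resolve(term):
    """Map a free-text term to its preferred form, if the vocabulary knows it."""
    for entry in thesaurus.values():
        if term == entry.preferred or term in entry.use_for:
            return entry.preferred
    return None

print(resolve("electricity network"))  # -> "power grid"
```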

    Forensic investigation of cooperative storage cloud service: Symform as a case study

    Researchers envisioned Storage as a Service (StaaS) as an effective solution to the distributed management of digital data. Cooperative storage cloud forensics is a relatively new and under-explored area of research. Using Symform as a case study, we seek to determine the data remnants from the use of cooperative cloud storage services. In particular, we consider both mobile devices and personal computers running various popular operating systems, namely Windows 8.1, Mac OS X Mavericks 10.9.5, Ubuntu 14.04.1 LTS, iOS 7.1.2, and Android KitKat 4.4.4. Potential artefacts recovered during the research include data relating to the installation and uninstallation of the cloud applications, log-in to and log-out from the Symform account using the client application, and file synchronization, as well as their timestamp information. This research contributes to an in-depth understanding of the types of terrestrial artefacts that are likely to remain after the use of a cooperative storage cloud on client devices.

    "Stuff goes into the computer and doesn't come out": a cross-tool study of personal information management

    This paper reports a study of Personal Information Management (PIM), which advances research in two ways: (1) rather than focusing on one tool, we collected cross-tool data relating to file, email and web bookmark usage for each participant, and (2) we collected longitudinal data for a subset of the participants. We found that individuals employ a rich variety of strategies both within and across PIM tools, and we present new strategy classifications that reflect this behaviour. We discuss synergies and differences between tools that may be useful in guiding the design of tool integration. Our longitudinal data provides insight into how PIM behaviour evolves over time, and suggests how the supporting nature of PIM tools discourages users from reflecting on their strategies. We discuss how the promotion of some reflection by tools and organizations may benefit users.