
    Introduction to the special issue on cross-language algorithms and applications

    With the increasingly global nature of our everyday interactions, the need for multilingual technologies to support efficient and effective information access and communication cannot be overemphasized. Computational modeling of language has been the focus of Natural Language Processing, a subdiscipline of Artificial Intelligence. One of the current challenges for this discipline is to design methodologies and algorithms that are cross-language in order to create multilingual technologies rapidly. The goal of this JAIR special issue on Cross-Language Algorithms and Applications (CLAA) is to present leading research in this area, with emphasis on developing unifying themes that could lead to the development of the science of multi- and cross-lingualism. In this introduction, we provide the reader with the motivation for this special issue and summarize the contributions of the papers that have been included. The selected papers cover a broad range of cross-lingual technologies including machine translation, domain and language adaptation for sentiment analysis, cross-language lexical resources, dependency parsing, information retrieval and knowledge representation. We anticipate that this special issue will serve as an invaluable resource for researchers interested in topics of cross-lingual natural language processing.

    The Semantic Web: Apotheosis of annotation, but what are its semantics?

    This article discusses what kind of entity the proposed Semantic Web (SW) is, principally by reference to the relationship of natural language structure to knowledge representation (KR). There are three distinct views on this issue. The first is that the SW is basically a renaming of the traditional AI KR task, with all its problems and challenges. The second view is that the SW will be, at a minimum, the World Wide Web with its constituent documents annotated so as to yield their content, or meaning structure, more directly. This view makes natural language processing central as the procedural bridge from texts to KR, usually via some form of automated information extraction. The third view is that the SW is about trusted databases as the foundation of a system of Web processes and services. There is also a fourth view, which is much more difficult to define and discuss: if the SW just keeps moving as an engineering development and is lucky, then real problems won't arise. This article is part of a special issue called Semantic Web Update.

    Slum Tourism: Developments in a Young Field of Interdisciplinary Tourism Research

    This paper introduces the Special Issue on slum tourism with a reflection on the state of the art in this new area of tourism research. After a review of the literature, we discuss the breadth of research that was presented at the conference 'Destination Slum', the first international conference on slum tourism. Identifying various dimensions, as well as similarities and differences, in slum tourism in different parts of the world, we contend that slum tourism has evolved from being practised at only a limited number of places into a truly global phenomenon that is now performed on five continents. Equally, the variety of services and the ways in which tourists visit slums have increased. The widening scope and diversity of slum tourism is clearly reflected in the variety of papers presented at the conference and in this Special Issue. Whilst academic discussion on the theme is evolving rapidly, slum tourism is still a relatively young area of research. Most papers at the conference and, indeed, most slum tourism research as a whole appear to remain focused on understanding issues of representation, often concentrating on the slum tourists rather than on the tourism itself. Aspects such as the position of local people remain underexposed, as does empirical work on the actual practice of slum tourism. To address these issues, we set out a research agenda in the final part of the article with potential avenues for future research to further the knowledge on slum tourism. © 2012 Copyright Taylor and Francis Group, LLC

    The Thesis: texts and machines

    This opening chapter focuses on how research knowledge is represented in the dissertation as a textual format. It sets the dissertation in two contexts. Borg discusses its historical formation within the technologies of the pen and the typewriter; Boyd Davis analyses the changes produced by digital technologies, offering counter-arguments to the claim that the predominantly textual thesis is a poor representation of research knowledge. He advances evidence-based arguments, using a synthesis of recent technological developments, for the additional functionality that text has acquired as a result of being digital and being connected via international networks, contrasting this with the relatively poor forms of access available even now using pictures, moving images and other non-textual forms. The chapter argues that the dissertation is inherently contingent, changing and changeable. While supervisors may expect their students to produce a dissertation that resembles the one they wrote themselves, changes both in the available technologies and in the kinds of knowledge the dissertation is expected to represent are having a significant effect on its form as well as its content. Boyd Davis is co-editor of the book in which this chapter is published, which has its origins in an ESRC-funded seminar series, ‘New Forms of Doctorate’ (2008–10), that he co-devised and co-chaired. The work grew out of Boyd Davis’s questioning of methods and formats for research knowledge in his introduction to, and editing of, a special issue of Digital Creativity, entitled Creative Evaluation, in 2009. This followed a peer-reviewed symposium on evaluative techniques within creative work, supported by the Design Research Society and the British Computer Society, which he devised and chaired. Related work on forms of knowledge in interactive media appears in an article with Faiola and Edwards of Indiana University–Purdue University, Indianapolis, for New Media and Society (2010).

    Simple Versus Complex Forecasting: The Evidence

    This article introduces this JBR Special Issue on simple versus complex methods in forecasting. Simplicity in forecasting requires that (1) method, (2) representation of cumulative knowledge, (3) relationships in models, and (4) relationships among models, forecasts, and decisions are all sufficiently uncomplicated as to be easily understood by decision-makers. Our review of studies comparing simple and complex methods – including those in this special issue – found 97 comparisons in 32 papers. None of the papers provides a balance of evidence that complexity improves forecast accuracy. Complexity increases forecast error by 27 percent on average in the 25 papers with quantitative comparisons. The finding is consistent with prior research to identify valid forecasting methods: all 22 previously identified evidence-based forecasting procedures are simple. Nevertheless, complexity remains popular among researchers, forecasters, and clients. Some evidence suggests that the popularity of complexity may be due to incentives: (1) researchers are rewarded for publishing in highly ranked journals, which favor complexity; (2) forecasters can use complex methods to provide forecasts that support decision-makers' plans; and (3) forecasters' clients may be reassured by the incomprehensibility. Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures. They can rate the simplicity of forecasters' procedures using the questionnaire at simple-forecasting.com.
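    The simple-versus-complex contrast the abstract describes can be illustrated with a small, hypothetical sketch (not from the paper): a naive one-step-ahead forecast that repeats the last observation is compared with a slightly more complex two-point trend extrapolation on an invented noisy series. On data like this, the extra complexity amplifies noise and tends to raise forecast error.

    ```python
    # Hypothetical illustration; the series, function names, and methods
    # are invented for this sketch and do not come from the article.
    import random

    random.seed(0)

    # Synthetic series: slow trend plus Gaussian noise.
    series = [100 + 0.5 * t + random.gauss(0, 5) for t in range(60)]

    def naive_forecast(history):
        """Simple: next value equals the last observed value."""
        return history[-1]

    def trend_forecast(history):
        """More complex: linearly extrapolate the last two observations."""
        return history[-1] + (history[-1] - history[-2])

    def mae(forecast_fn, series, start=2):
        """Mean absolute error of rolling one-step-ahead forecasts."""
        errors = []
        for t in range(start, len(series)):
            pred = forecast_fn(series[:t])
            errors.append(abs(series[t] - pred))
        return sum(errors) / len(errors)

    mae_simple = mae(naive_forecast, series)
    mae_complex = mae(trend_forecast, series)
    print(f"naive MAE: {mae_simple:.2f}")
    print(f"trend MAE: {mae_complex:.2f}")
    ```

    When noise dominates the trend, as here, the two-point extrapolation roughly triples the error variance of the naive method, echoing the article's finding that added complexity often hurts accuracy.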