12 research outputs found

    Usability evaluation model for mobile e-book applications

    Evaluations of mobile e-book applications are limited and do not address all the important usability measurements. Hence, this study aimed to identify the characteristics that affect user satisfaction with the usability of mobile e-book applications. Five characteristics with a significant effect on user satisfaction were identified: readability, effectiveness, accessibility, efficiency, and navigation. A usability evaluation was conducted on three mobile e-book applications: Adobe Acrobat Reader, Ebook Reader, and Amazon Kindle. Thirty students from Universiti Utara Malaysia evaluated the applications, and their satisfaction was measured using a questionnaire. The outcomes showed that the five characteristics (i.e., readability, effectiveness, accessibility, efficiency, and navigation) have a significant positive relationship with user satisfaction. This provides insights into the main characteristics that increase user satisfaction. The study also produced a task scenario and a satisfaction questionnaire that help in evaluating mobile e-book applications.
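    The reported positive relationship between each characteristic and satisfaction is the kind of result typically checked with a correlation over questionnaire scores. A minimal sketch in Python, using hypothetical Likert-scale data and a hand-rolled Pearson helper (neither the scores nor the helper come from the study):

```python
from statistics import mean

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-participant mean scores on a 1-5 Likert scale.
readability  = [4, 5, 3, 4, 5, 2, 4]
satisfaction = [4, 5, 3, 4, 4, 2, 5]
r = pearson(readability, satisfaction)  # a positive r supports the reported relationship
```

    In practice such a study would report one coefficient (and a significance test) per characteristic, repeating the computation for effectiveness, accessibility, efficiency, and navigation.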

    Touching Annotations: A Visual Metaphor for Navigation of Annotation in Digital Documents.

    Direct-touch manipulation interactions with technology are now commonplace, and significant interest is building around their use in the culture and heritage domain. Such interactions can give people the opportunity to explore materials and artefacts in ways that would otherwise be unavailable. These materials are often heavily annotated and can be linked to a large array of related digital content, enriching the experience for the user. Research has addressed how to present digital documents and their related annotations, but at present it is unclear what the optimal interaction approach to navigating these annotations on a touch display might be. In this paper we investigate two alternative approaches to supporting the navigation of annotations in digitised documents on a touch interface. Through a controlled study we demonstrate that, whilst the navigation paradigm shows a significant interaction with the type of annotation task performed, there is no discernible advantage to using a natural visual metaphor for annotation in this context. This suggests that the design of digital document annotation navigation tools should account for the context and the navigation tasks being considered.

    Introduzindo o livro digital no curso de Editoração: uma busca epistêmica [Introducing the digital book in the Publishing course: an epistemic search]

    E-books have been putting pressure on the market. Recognizing that the design and production of digital artifacts are strategic for future publishing activity, in 2012 the Publishing undergraduate course of the University of São Paulo (USP) included the study and production of e-books in its curriculum by creating two new disciplines in transdisciplinary fields. The research presented in this paper delineated the approach of both disciplines: one focused on production, the other on the project through interaction design. These disciplines address the second and third generations of e-books. They allow editors to extend their vast cultural universe into more transdisciplinary areas, developing skills and knowledge that enable them to join teams that develop interactive digital publishing products. The following sections present this experience.

    Investigating the effect of priming on reading performance on electronic devices

    Reading is an activity needed almost everywhere in daily life. Through reading we are not only able to extract meaning from a text, but also to extend our knowledge of the world and to foster other cognitive abilities. In the age of information technology, reading behaviour has been subject to change. With more information made accessible through the internet and e-reading devices, the time spent reading increases. Further, it is reported that reading on computers and other electronic devices tends to be shallower, and abilities like skimming a document to get the gist of its content become more important. In this thesis we investigate the use of text visualizations to facilitate reading activities on electronic devices, with a special focus on reading comprehension. We make use of the psychological "priming effect" by presenting the reader with a visualization of the text's content before the actual reading activity, giving them the opportunity to become familiar with the information contained in the text prior to reading. To create those visualizations we developed a first prototype, which is capable of extracting keywords from a text and visualizing them. Additionally, we present important design aspects of text visualizations that were discovered through a preliminary study. The presented concepts were evaluated in a user study and are considered a starting point for future research. With the contributions of this work we aim to support readers in their reading activities. Facilitated reading could help to lower the hurdle to read more and therefore foster the gain of knowledge.
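    The keyword-extraction step of such a prototype can be approximated with a simple frequency ranking. A hedged sketch in Python (the stopword list, length cutoff, and sample text are illustrative assumptions, not the thesis prototype):

```python
import re
from collections import Counter

# Tiny illustrative stopword list; a real prototype would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "on", "for", "with", "be", "may"}

def extract_keywords(text, k=5):
    # Tokenize, drop stopwords and very short tokens, rank by frequency.
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [w for w, _ in counts.most_common(k)]

sample = ("Reading on electronic devices tends to be shallower; "
          "visualizations of a text's keywords shown before reading "
          "may prime the reader and support comprehension.")
top = extract_keywords(sample, 3)
```

    The ranked keywords would then feed the visualization shown to the reader before the text itself, which is where the priming effect is meant to arise.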

    Credibility analysis of textual claims with explainable evidence

    Despite being a vast resource of valuable information, the Web has been polluted by the spread of false claims. Increasing hoaxes, fake news, and misleading information on the Web have given rise to many fact-checking websites that manually assess doubtful claims. However, the rapid speed and large scale of misinformation spread have become the bottleneck for manual verification. This calls for credibility assessment tools that can automate the verification process. Prior works in this domain make strong assumptions about the structure of the claims and the communities where they are made. Most importantly, the black-box techniques proposed in prior works lack the ability to explain why a certain statement is deemed credible or not. To address these limitations, this dissertation proposes a general framework for automated credibility assessment that makes no assumptions about the structure or origin of the claims. Specifically, we propose a feature-based model, which automatically retrieves relevant articles about a given claim and assesses its credibility by capturing the mutual interaction between the language style of the relevant articles, their stance towards the claim, and the trustworthiness of the underlying web sources. We further enhance our credibility assessment approach with a neural-network-based model. Unlike the feature-based model, this model does not rely on feature engineering and external lexicons. Both our models make their assessments interpretable by extracting explainable evidence from judiciously selected web sources. We utilize our models to develop a Web interface, CredEye, which enables users to automatically assess the credibility of a textual claim and to inspect the assessment by browsing through judiciously and automatically selected evidence snippets. In addition, we study the problem of stance classification and propose a neural-network-based model for predicting the stance of diverse user perspectives regarding controversial claims. Given a controversial claim and a user comment, our stance classification model predicts whether the user comment supports or opposes the claim.
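    The interaction of language style, stance, and source trust can be illustrated with a toy linear aggregation over retrieved evidence articles. This is only a sketch of the general idea; the field names, weights, and scores below are invented for illustration and are not the dissertation's model:

```python
def credibility_score(articles, weights=(0.4, 0.3, 0.3)):
    """Toy credibility aggregation over evidence articles.

    Each article is a dict with three scores in [0, 1]:
      'objectivity' - language-style score (objective vs. sensational),
      'support'     - stance towards the claim (1 = supports, 0 = refutes),
      'trust'       - trustworthiness of the source.
    The weights are illustrative, not learned.
    """
    w_obj, w_sup, w_tru = weights
    per_article = [
        w_obj * a["objectivity"] + w_sup * a["support"] + w_tru * a["trust"]
        for a in articles
    ]
    return sum(per_article) / len(per_article)

evidence = [
    {"objectivity": 0.9, "support": 0.8, "trust": 0.7},
    {"objectivity": 0.6, "support": 0.2, "trust": 0.9},
]
score = credibility_score(evidence)  # aggregate credibility in [0, 1]
```

    The dissertation's actual models learn how these signals interact rather than fixing weights, and additionally return the evidence snippets that drove the decision.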

    HEBE: Highly Engaging eBook Experiences

    Although more and more books are made available in electronic format and technology is increasingly present in children's everyday life, the potential of the electronic book (eBook) medium has thus far been only partially exploited. With the Highly Engaging eBook Experiences (HEBE) project we studied how to design and evaluate eBooks for children with the goal of making the reading experience more engaging. The project began with an investigation of the many facets that characterize children's reading experience in order to understand how it could be enhanced by electronic books. In a later stage an intergenerational design team used different techniques of Cooperative Inquiry to explore a range of design ideas. Then, based on those ideas, we developed a prototype of an enhanced eBook and elaborated a shortlist of design recommendations intended to help designers create more engaging eBooks. The research project ended with a stage of evaluation in which children's user experience with the eBook prototype was assessed. We took inspiration from Csikszentmihalyi's Flow theory to define a benchmark for evaluating the reading experience. Then, by means of the Experience Sampling Method (ESM), we collected data on the reading experience of two groups of children, one of which read an eBook enhanced following our design recommendations while the other read a basic version of the same eBook. Following a mixed-method approach, we used quantitative analysis to verify whether participants who read the enhanced eBook had a better reading experience, and qualitative analysis to understand why. The results of the evaluation showed that an eBook designed following our design recommendations may have a positive effect on children's reading experience by making it more engaging.
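    A two-group quantitative comparison of ESM ratings like the one described is commonly done with an independent-samples test. A minimal Python sketch using Welch's t statistic and hypothetical engagement ratings (the rating scale and values are invented, not HEBE data):

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    # Welch's t statistic for two independent samples (unequal variances allowed).
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / na + vb / nb)

# Hypothetical 1-7 engagement ratings sampled during reading sessions.
enhanced = [6, 7, 5, 6, 7, 6]  # group reading the enhanced eBook
basic    = [4, 5, 4, 6, 5, 4]  # group reading the basic version
t = welch_t(enhanced, basic)   # positive t favours the enhanced group
```

    A full analysis would also compute degrees of freedom and a p-value (e.g. with scipy.stats.ttest_ind), and the qualitative strand would then explain any difference found.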

    Wiktionary: The Metalexicographic and the Natural Language Processing Perspective

    Dictionaries are the main reference works for our understanding of language. They are used by humans and likewise by computational methods. So far, the compilation of dictionaries has almost exclusively been the profession of expert lexicographers. The ease of collaboration on the Web and the rising initiatives of collecting open-licensed knowledge, such as in Wikipedia, have given rise to a new type of dictionary that is voluntarily created by large communities of Web users. This collaborative construction approach presents a new paradigm for lexicography that poses new research questions to dictionary research on the one hand and provides a very valuable knowledge source for natural language processing applications on the other. The subject of our research is Wiktionary, which is currently the largest collaboratively constructed dictionary project. In the first part of this thesis, we study Wiktionary from the metalexicographic perspective. Metalexicography is the scientific study of lexicography, including the analysis and criticism of dictionaries and lexicographic processes. To this end, we discuss three contributions related to this area of research: (i) We first provide a detailed analysis of Wiktionary and its various language editions and dictionary structures. (ii) We then analyze the collaborative construction process of Wiktionary. Our results show that the traditional phases of the lexicographic process do not apply well to Wiktionary, which is why we propose a novel process description that is based on the frequent and continual revision and discussion of the dictionary articles and the lexicographic instructions. (iii) We perform a large-scale quantitative comparison of Wiktionary and a number of other dictionaries regarding the covered languages, lexical entries, word senses, pragmatic labels, lexical relations, and translations.
    We conclude the metalexicographic perspective by finding that the collaborative Wiktionary is not an appropriate replacement for expert-built dictionaries due to its inconsistencies, quality flaws, one-size-fits-all approach, and strong dependence on expert-built dictionaries. However, Wiktionary's rapid and continual growth, its high coverage of languages, newly coined words, domain-specific vocabulary and non-standard language varieties, as well as its kind of evidence based on the authors' intuition, provide promising opportunities for both lexicography and natural language processing. In particular, we find that Wiktionary and expert-built wordnets and thesauri contain largely complementary entries. In the second part of the thesis, we study Wiktionary from the natural language processing perspective with the aim of making its linguistic knowledge available for computational applications. Such applications require vast amounts of structured data of high quality. Expert-built resources have been found to suffer from insufficient coverage and high construction and maintenance cost, whereas fully automatic extraction from corpora or the Web often yields resources of limited quality. Collaboratively built encyclopedias present a viable solution, but do not cover well the linguistically oriented knowledge found in dictionaries. That is why we propose extracting linguistic knowledge from Wiktionary, which we achieve through the following three main contributions: (i) We propose the novel multilingual ontology OntoWiktionary, which is created by extracting and harmonizing the weakly structured dictionary articles in Wiktionary. A particular challenge in this process is the ambiguity of semantic relations and translations, which we resolve by automatic word sense disambiguation methods. (ii) We automatically align Wiktionary with WordNet 3.0 at the word sense level.
    The largely complementary information from the two dictionaries yields an aligned resource with higher coverage and an enriched representation of word senses. (iii) We represent Wiktionary according to the ISO standard Lexical Markup Framework, which we adapt to the peculiarities of collaborative dictionaries. This standardized representation is of great importance for fostering the interoperability of resources and hence the dissemination of Wiktionary-based research. To this end, our work presents a foundational step towards the large-scale integrated resource UBY, which facilitates unified access to a number of standardized dictionaries by means of a shared web interface for human users and an application programming interface for natural language processing applications. A user can, in particular, switch between and combine information from Wiktionary and other dictionaries without completely changing the software. Our final resource and the accompanying datasets and software are publicly available and can be employed for multiple different natural language processing applications. It particularly fills the gap between the small expert-built wordnets and the large amount of encyclopedic knowledge in Wikipedia. We provide a survey of previous works utilizing Wiktionary, and we exemplify the usefulness of our work in two case studies on measuring verb similarity and detecting cross-lingual marketing blunders, which make use of our Wiktionary-based resource and the results of our metalexicographic study. We conclude the thesis by emphasizing the usefulness of collaborative dictionaries when combined with expert-built resources, which bears much unused potential.
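    Word-sense alignment between two dictionaries is often bootstrapped from gloss similarity, in the spirit of Lesk's overlap measure. A hedged sketch in Python (sense identifiers, glosses, stopword list, and threshold are all invented for illustration; the thesis uses more sophisticated similarity measures):

```python
STOPWORDS = frozenset({"a", "an", "the", "of", "that", "and"})

def gloss_overlap(gloss_a, gloss_b):
    # Count content words shared by two sense glosses.
    ta = {w for w in gloss_a.lower().split() if w not in STOPWORDS}
    tb = {w for w in gloss_b.lower().split() if w not in STOPWORDS}
    return len(ta & tb)

def align(wiktionary_senses, wordnet_senses, threshold=2):
    # Pair each Wiktionary sense with its best-overlapping WordNet sense,
    # keeping only pairs whose overlap clears the threshold.
    pairs = []
    for wkt_id, wkt_gloss in wiktionary_senses.items():
        best_id, best_gloss = max(
            wordnet_senses.items(),
            key=lambda item: gloss_overlap(wkt_gloss, item[1]),
        )
        if gloss_overlap(wkt_gloss, best_gloss) >= threshold:
            pairs.append((wkt_id, best_id))
    return pairs

wkt = {"bank/1": "a financial institution that accepts deposits"}
wn = {
    "bank.n.01": "sloping land beside a body of water",
    "bank.n.02": "a financial institution that accepts deposits and lends money",
}
pairs = align(wkt, wn)  # the monetary senses share the most gloss words
```

    A threshold keeps weakly overlapping senses unaligned, which matters because the two resources are largely complementary and many senses have no counterpart.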