    Role of images on World Wide Web readability

    As the Internet and World Wide Web have grown, they have brought many benefits. Anyone with access to a computer can find large amounts of information quickly and easily; electronic devices can store and retrieve vast amounts of data in seconds; products and services once available only in person can be obtained without leaving home; and documents can be translated from English to Urdu, or converted from text to speech, almost instantly, easing communication across cultures and abilities. As technology improves, web developers and website visitors expect more animation, colour, and interactivity, and as computers process images and other graphics ever faster, developers use them more and more. For sighted users, colour, pictures, animation, and images can aid comprehension and readability and improve the Web experience. Images can also benefit people who have difficulty reading or whose first language is not used on the website. But not all images help readers understand the text they accompany; images that are purely decorative, or chosen arbitrarily by the site's creators, do not. Several factors can also reduce the readability of graphical content, such as low image resolution, a poor aspect ratio, a poor colour combination within the image, or a small font size, and the WCAG provides guidelines for each of these problems, recommending alternative text, appropriate colour combinations, adequate contrast, and higher resolution. One of the biggest remaining problems is that images unrelated to the text of a web page can make that text harder to read, whereas relevant images can make the page easier to read. A method is proposed to measure the relevance of the images on a website from the point of view of web readability. It combines several ways of extracting information from images, using the Cloud Vision API and Optical Character Recognition (OCR), with extraction of the website's text, and applies data preprocessing techniques to the extracted content. A Natural Language Processing (NLP) technique is then used to determine how the images and text on a web page relate to each other. The tool was applied to the images of fifty educational websites to assess their relevance. Results show that images unrelated to a page's content, and low-quality images, receive lower relevancy scores. A user study evaluated the hypothesis that relevant images enhance web readability through two assessments: an evaluation by 1024 end users of the pages and a heuristic evaluation by 32 accessibility experts. The study used questions about what users know, how they feel, and what they can do. The results support the idea that images relevant to a page make it easier to read. This method will help web designers make pages more readable by examining only the essential parts of a page rather than relying on their own judgment.

    Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Chair: José Luis López Cuadrado. Secretary: Divakar Yadav. Committee member: Arti Jai
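
    A minimal sketch of the relevancy computation described above, assuming Python with pytesseract standing in for the OCR step (the method also uses the Cloud Vision API for image labels, not shown) and TF-IDF cosine similarity as one plausible NLP relevancy measure; the function name and scoring choice are illustrative, not the thesis's exact pipeline:

```python
# Hedged sketch: OCR an image, then compare its text to the page text.
# pytesseract and TF-IDF cosine similarity are illustrative choices.
import pytesseract
from PIL import Image
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def image_relevancy(image_path: str, page_text: str) -> float:
    """Return a rough [0, 1] relevancy score between an image and page text."""
    image_text = pytesseract.image_to_string(Image.open(image_path))
    if not image_text.strip():
        return 0.0  # nothing textual recovered from the image
    tfidf = TfidfVectorizer(stop_words="english")
    vectors = tfidf.fit_transform([image_text, page_text])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])
```

    Under this scoring, an image whose extracted text shares little vocabulary with the page scores near zero, consistent with the reported finding that irrelevant and low-quality images receive lower relevancy scores.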

    Computational Intelligence and Human–Computer Interaction: Modern Methods and Applications

    The present book contains all of the articles that were accepted and published in the Special Issue of MDPI’s journal Mathematics titled "Computational Intelligence and Human–Computer Interaction: Modern Methods and Applications". This Special Issue covered a wide range of topics connected to the theory and application of different computational intelligence techniques in the domain of human–computer interaction, such as automatic speech recognition, speech processing and analysis, virtual reality, emotion-aware applications, digital storytelling, natural language processing, smart cars and devices, and online learning. We hope that this book will be interesting and useful for those working in various areas of artificial intelligence, human–computer interaction, and software engineering, as well as for those who are interested in how these domains are connected in real-life situations.

    Fan-Fiction and AO3 Free-Form Tagging Practice: Innovating Open-Source Tools for Tag Network Visualization and Analysis

    This thesis focuses on obtaining, representing, analyzing, and visualizing tag data from the Archive of Our Own (AO3) fan fiction archive, with emphasis on the "Additional Tags" category. The work was undertaken in four components. First, network nomenclature was defined and formal network representations were developed for the AO3 Additional Tag structure. Second, computational techniques were developed to scrape data from AO3, both in its current state and in its archival state (from the Wayback Machine), and to store it in a format suitable for subsequent processing. Third, techniques were developed to use the scraped data to computationally instantiate and visualize the network structures developed in the first component. Finally, in the fourth component, a set of network analysis techniques was developed and used to extract quantitative characteristics of the networks, identify relevant changes over time, and identify co-creation actions between different users in the AO3 community.
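
    As one concrete reading of the network representation and analysis components, here is a minimal sketch in Python with networkx, building a tag co-occurrence graph in which nodes are Additional Tags and weighted edges link tags that appear on the same work; the toy data and variable names are illustrative assumptions, not the thesis's actual tooling:

```python
# Hedged sketch: a weighted tag co-occurrence network from scraped tag lists.
import itertools
import networkx as nx

works = [  # each inner list: the Additional Tags on one scraped work
    ["Fluff", "Alternate Universe", "Slow Burn"],
    ["Fluff", "Slow Burn"],
    ["Angst", "Alternate Universe"],
]

G = nx.Graph()
for tags in works:
    for a, b in itertools.combinations(sorted(set(tags)), 2):
        # accumulate co-occurrence counts as edge weights
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# simple quantitative characteristics, in the spirit of the fourth component
print(nx.degree_centrality(G))
print(G["Fluff"]["Slow Burn"]["weight"])  # co-occurs on two works -> 2
```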

    The Proceedings of the 23rd Annual International Conference on Digital Government Research (DGO2022): Intelligent Technologies, Governments and Citizens, June 15-17, 2022

    The theme of the 23rd Annual International Conference on Digital Government Research is "Intelligent Technologies, Governments and Citizens". Data and computational algorithms make systems smarter, but they should also result in smarter government and smarter citizens. How intelligence and smartness affect public values such as fairness, inclusion, equity, transparency, privacy, security, and trust is not yet well understood. These technologies provide immense opportunities, and they should be used in the light of public values. Society and technology co-evolve, and we are looking for new ways to balance the two. The conference aims to advance research and practice in this field. The keynotes, presentations, posters, and workshops show that the conference theme is well chosen and more timely than ever. The challenges posed by new technology have underscored the need to grasp its potential. Digital government brings into focus the realization of public values to improve our society at all levels of government. The conference again shows the importance of the digital government society, which brings together scholars in this field. Dg.o 2022 is fully online, connecting scholars and practitioners around the globe and facilitating global conversations and exchanges through digital technologies. It is primarily a live conference, with keynotes, research paper presentations, workshops, panels, and posters, providing engaging exchange throughout its entire duration.

    Data Science and Knowledge Discovery

    Data Science (DS) is gaining significant importance in decision processes because it blends several areas, including computer science, machine learning, mathematics and statistics, domain and business knowledge, software development, and traditional research. In business, DS applies scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured and unstructured data to support decision-making. After the data are collected, the crucial step is discovering the knowledge in them: Knowledge Discovery (KD) tasks create knowledge from structured and unstructured sources (e.g., text, data, and images). The output needs to be in a readable and interpretable format, representing knowledge in a manner that facilitates inferencing. KD is applied in several areas, such as education, health, accounting, energy, and public administration. This book includes fourteen excellent articles that discuss this trending topic and present innovative solutions showing the importance of Data Science and Knowledge Discovery to researchers, managers, industry, society, and other communities. The chapters address topics such as data mining, deep learning, data visualization and analytics, semantic data, geospatial and spatio-temporal data, data augmentation, and text mining.
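
    As a small illustration of a KD task on unstructured text, the sketch below clusters a handful of documents and prints each cluster's top terms in a readable form; the data, cluster count, and method (TF-IDF plus k-means) are made up for the example rather than drawn from any chapter of the book:

```python
# Hedged sketch: discover readable "topics" from raw text documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "patient blood pressure treatment hospital",
    "hospital treatment patient recovery",
    "invoice tax accounting audit",
    "audit tax revenue accounting",
]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = tfidf.get_feature_names_out()
for c in range(2):
    top = km.cluster_centers_[c].argsort()[::-1][:3]  # three strongest terms
    print(f"cluster {c}:", ", ".join(terms[i] for i in top))
```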

    Proceedings of the 19th Sound and Music Computing Conference

    Proceedings of the 19th Sound and Music Computing Conference - June 5-12, 2022 - Saint-Étienne (France). https://smc22.grame.f

    AXMEDIS 2008

    The AXMEDIS International Conference series explores all subjects and topics related to cross-media and digital-media content production, processing, management, standards, representation, sharing, protection, and rights management, addressing the latest developments and future trends of these technologies and their applications, impacts, and exploitation. The AXMEDIS events offer venues for exchanging concepts, requirements, prototypes, research ideas, and findings that can contribute to academic research and also benefit business and industrial communities. In the Internet and digital era, cross-media production and distribution are key developments and innovations, fostered by emergent technologies, that ensure better value for money while optimising productivity and market coverage.

    LearnFCA: A Fuzzy FCA and Probability Based Approach for Learning and Classification

    Formal Concept Analysis (FCA) is a mathematical theory based on lattice and order theory, used for data analysis and knowledge representation. Over the past several years, many extensions of FCA have been proposed and applied in domains including data mining, machine learning, knowledge management, the semantic web, software development, chemistry, biology, medicine, data analytics, and ontology engineering. This thesis reviews the state of the art of FCA theory and the various extensions that have been developed and studied in recent years. We discuss their historical roots and reproduce the original definitions and derivations with illustrative examples. Further, we provide a literature review of FCA's applications and the various approaches adopted by researchers in data analysis and knowledge management, with emphasis on learning and classification problems. We propose LearnFCA, a novel approach based on fuzzy FCA and probability theory for learning and classification problems. LearnFCA uses an enhanced version of a fuzzy lattice, developed to store class labels and probability vectors, that can classify instances with encoded and unlabelled features. We evaluate LearnFCA on encodings from three datasets (MNIST, Omniglot, and cancer images) with interesting results and varying degrees of success. Adviser: Jitender Deogu
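
    To make the fuzzy FCA ingredient concrete, here is a minimal sketch of the fuzzy derivation operators over a [0, 1]-valued formal context using the Gödel implication; the toy context and function names are illustrative assumptions, not code from the thesis:

```python
# Hedged sketch: fuzzy derivation operators of fuzzy FCA with numpy.
import numpy as np

def godel_implies(a, b):
    """Goedel implication on [0, 1]: a -> b is 1 if a <= b, else b."""
    return np.where(a <= b, 1.0, b)

def up(A, I):
    """Fuzzy set of attributes shared by a fuzzy set of objects A:
    B(m) = min_g (A(g) -> I(g, m))."""
    return godel_implies(A[:, None], I).min(axis=0)

def down(B, I):
    """Fuzzy set of objects having a fuzzy set of attributes B:
    A(g) = min_m (B(m) -> I(g, m))."""
    return godel_implies(B[None, :], I).min(axis=1)

# Toy fuzzy context: 3 objects x 2 attributes, membership degrees in [0, 1].
I = np.array([[1.0, 0.3],
              [0.8, 0.9],
              [0.2, 1.0]])

A = np.array([1.0, 1.0, 0.0])  # crisp set {g1, g2}
B = up(A, I)                   # attributes g1 and g2 share, to degree B(m)
A_closed = down(B, I)          # closure of A: the extent of a fuzzy concept
print(B, A_closed)             # [0.8 0.3] [1.  1.  0.2]
```

    A lattice of such fuzzy concepts, extended with class labels and probability vectors as the thesis describes, is the structure LearnFCA classifies against.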