
    Semantic technologies: from niche to the mainstream of Web 3? A comprehensive framework for web information modelling and semantic annotation

    Context: Web information technologies developed and applied in the last decade have considerably changed the way web applications operate and have revolutionised information management and knowledge discovery. Social technologies, user-generated classification schemes and formal semantics have a far-reaching sphere of influence. They promote collective intelligence, support interoperability, enhance sustainability and instigate innovation. Contribution: The research carried out and the consequent publications follow the various paradigms of semantic technologies, assess each approach, evaluate its efficiency, identify the challenges involved and propose a comprehensive framework for web information modelling and semantic annotation, which is the thesis’ original contribution to knowledge. The proposed framework assists web information modelling, facilitates semantic annotation and information retrieval, enables system interoperability and enhances information quality. Implications: Semantic technologies coupled with social media and end-user involvement can exert innovative influence with wide organisational implications that can benefit a considerable range of industries. The scalable and sustainable business models of social computing and the collective intelligence of organisational social media can be resourcefully paired with internal research and knowledge from interoperable information repositories, back-end databases and legacy systems. Semantically enriched information assets can free human resources so that they can better serve business development, support innovation and increase productivity.
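    As a concrete illustration of the kind of formal semantic annotation the framework concerns, here is a minimal sketch using the rdflib Python library; the vocabulary, URIs, and property names are illustrative assumptions, not the thesis’ actual framework.

```python
# Minimal semantic-annotation sketch with rdflib (illustrative only).
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

# Hypothetical vocabulary and resource URIs, invented for this example.
EX = Namespace("http://example.org/vocab/")

g = Graph()
g.bind("ex", EX)

page = URIRef("http://example.org/pages/42")
g.add((page, RDF.type, EX.WebDocument))            # typed web resource
g.add((page, RDFS.label, Literal("Sample page")))  # human-readable label
g.add((page, EX.annotatedWith, EX.SemanticWeb))    # topic annotation

# Serialised triples are what downstream retrieval/interoperability consume.
print(g.serialize(format="turtle"))
```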

    Tradition and Technology: A Design-Based Prototype of an Online Ginan Semantization Tool

    The heritage of ginans of the Nizari Ismaili community comprises over 1,000 individual hymn-like poems of varying lengths and languages. The ginans were originally composed to spread the teachings of the Satpanth Ismaili faith and served as scriptural texts that guided the normative understanding of the community in South Asia. The emotive melodies of the ginans continue to enchant the members of the community in the diaspora who do not necessarily understand the language of the ginans. The language of the ginans is mixed and borrows vocabulary from Indo-Aryan and Perso-Arabic dialects. With deliberate and purposeful use of information technology, the online tool developed in this study blends Western best practices of language learning with the traditional transmission methods and materials of the Ismaili community. This study is based on the premise that for the teachings of the ginans to survive in the Euro-American diaspora, successive generations must learn and understand the vocabulary of the ginans. The process through which humans learn and master vocabulary is called semantization: the learning and understanding of the various senses and uses of words in a language. To this end, a sample ginan corpus was chosen and semantically analyzed to develop an online ginan lexicon. This lexicon was then used to enrich ginan texts with online glosses to facilitate semantization of ginan vocabulary. The design-based research methodology for prototyping the tool comprised two design iterations of analysis, design, and review. In the first iteration, the initial design of the prototype was based on a multidisciplinary literature review and an in-depth semantic analysis of ginan materials. The initial design was then reviewed by community ginan experts and teachers to inform the next design iteration. In the second design iteration, the initial design was enhanced into a functional prototype by adding features based on the expert suggestions as well as the needs of community learners, gathered by surveying a convenience sample of 515 community members across the globe. The analysis of the survey data revealed that over 90% of the survey participants preferred English materials for learning and understanding the language of the ginans. In addition, having online access to ginan materials was expressed as a dire need for the community to engage with the ginans. The development and dissemination of curriculum-based educational programs and supporting resources for the ginans emerged as the most urgent and unmet expectations of the community. The study also confirmed that the wide availability of an online ginan learning tool, such as the one designed in this study, is highly desirable for English-speaking community members who want to learn and understand the tradition and teachings of ginans. However, such a tool is only a part of the solution for fostering sustainable community engagement for the preservation of ginans. To ensure that the tradition is carried forward by future generations with compassion and understanding, community institutions must make ginans an educational priority and ensure that educational resources for ginans are widely available to community members.
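    As an illustration of the gloss-enrichment step described above, here is a minimal sketch assuming a hypothetical mini-lexicon mapping ginan terms to English glosses; the terms, glosses, and inline markup are invented for illustration and are not the study's actual lexicon.

```python
import re

# Hypothetical mini-lexicon: ginan terms mapped to English glosses.
LEXICON = {
    "satpanth": "the true path",
    "pir": "spiritual guide",
}

def gloss(text: str, lexicon: dict[str, str]) -> str:
    """Wrap each known term with an inline English gloss."""
    def replace(match: re.Match) -> str:
        word = match.group(0)
        entry = lexicon.get(word.lower())
        return f"{word} [{entry}]" if entry else word
    return re.sub(r"[A-Za-z]+", replace, text)

print(gloss("The pir taught the Satpanth.", LEXICON))
# -> The pir [spiritual guide] taught the Satpanth [the true path].
```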

    Reimagining the Journal Editorial Process: An AI-Augmented Versus an AI-Driven Future

    The editorial process at our leading information systems journals has been pivotal in shaping and growing our field. But this process has grown long in the tooth and increasingly frustrates and challenges its various stakeholders: editors, reviewers, and authors. The sudden and explosive spread of AI tools, including advances in language models, makes them a tempting fit in our efforts to ease and advance the editorial process. But we must carefully consider how the goals and methods of AI tools fit with the core purpose of the editorial process. We present a thought experiment exploring the implications of two distinct futures for the information systems powering today’s journal editorial process: an AI-augmented and an AI-driven one. The AI-augmented scenario envisions systems providing algorithmic predictions and recommendations to enhance human decision-making, offering enhanced efficiency while maintaining human judgment and accountability. However, it also requires debate over algorithm transparency, appropriate machine learning methods, and data privacy and security. The AI-driven scenario, meanwhile, imagines a fully autonomous and iterative AI. While potentially even more efficient, this future risks failing to align with academic values and norms, perpetuating data biases, and neglecting the important social bonds and community practices embedded in and strengthened by the human-led editorial process. We consider and contrast the two scenarios in terms of their usefulness and dangers to authors, reviewers, editors, and publishers. We conclude by cautioning against the lure of an AI-driven, metric-focused approach, advocating instead for a future where AI serves as a tool to augment human capacity and strengthen the quality of academic discourse. But more broadly, this thought experiment allows us to distill what the editorial process is about: the building of a premier research community instead of chasing metrics and efficiency. It is up to us to guard these values.
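    To make the AI-augmented scenario concrete, here is a minimal sketch of a recommendation step in which the system only ranks candidate reviewers and a human editor retains the final decision; the data, reviewer names, and TF-IDF approach are illustrative assumptions, not a description of any journal's actual system.

```python
# AI-augmented (not AI-driven): the system ranks candidates,
# a human editor makes the final assignment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical data: a submission abstract and reviewer expertise blurbs.
submission = "Language models for peer review triage in IS journals."
reviewers = {
    "R1": "Machine learning and natural language processing.",
    "R2": "Qualitative field studies of IT governance.",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([submission, *reviewers.values()])
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Present ranked suggestions only; the editor decides.
for name, score in sorted(zip(reviewers, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```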

    Role of images on World Wide Web readability

    As the Internet and World Wide Web have grown, they have brought many benefits. Anyone with access to a computer can find a wealth of information quickly and easily. Electronic devices can store and retrieve vast amounts of data in seconds. People no longer have to leave their houses to obtain products and services that were once available only in person. Documents can be converted from English to Urdu, or from text to speech, almost instantly, making it easier for people from different cultures and with different abilities to communicate. As technology improves, web developers and website visitors want more animation, colour, and interactivity, and as computers get faster at processing images and other graphics, web developers use them more and more. For users who can perceive them, colour, pictures, animation, and images can aid comprehension and readability and improve the Web experience. People who have trouble reading, or whose first language is not used on a website, can also benefit from pictures. But not all images help people understand and read the text they accompany; images used purely for decoration, or chosen arbitrarily by a website's creators, do not. Moreover, several factors can affect how easy graphical content is to read, such as low image resolution, a poor aspect ratio, a poor colour combination within the image itself, or a small font size, and the WCAG provides rules for each of these problems. The rules recommend alternative text, an appropriate combination of colours, sufficient contrast, and higher resolution. One of the biggest remaining problems is that images unrelated to the text of a web page can make that text harder to read, whereas relevant pictures can make the page easier to read. This thesis proposes a method to determine how relevant the images on a website are from the point of view of web readability. The method combines several ways of extracting information from images, using the Cloud Vision API and Optical Character Recognition (OCR), with text read from the website itself, in order to measure the relevancy between them. Data preprocessing techniques were applied to the extracted information, and Natural Language Processing (NLP) techniques were used to determine how the images and text on a web page relate to each other. The tool was applied to the images of fifty educational websites to assess their relevance. Results show that images unrelated to a page's content, and low-quality images, yield lower relevancy scores. A user study evaluated the hypothesis that relevant images can enhance web readability, based on two evaluations: an evaluation by the 1024 end users of the page and a heuristic evaluation by 32 accessibility experts. The user study included questions about what users know, how they feel, and what they can do. The results support the idea that images relevant to a page make it easier to read. This method will help web designers make pages easier to read by examining only the essential parts of a page rather than relying on their own judgment.
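    A minimal sketch of the relevancy-scoring step: it assumes the image-derived text (labels and OCR output, which the thesis obtains via the Cloud Vision API and OCR) has already been extracted, and uses TF-IDF with cosine similarity as a stand-in for the thesis' NLP pipeline; the sample strings are invented.

```python
# Relevance scoring between a page's text and text derived from its images.
# Image labels/OCR output are assumed to have been extracted upstream.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

page_text = "Introduction to photosynthesis in plants, with diagrams."
image_text = "photosynthesis diagram chloroplast leaf"  # labels + OCR words

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([page_text, image_text])
relevancy = cosine_similarity(matrix[0:1], matrix[1:2])[0, 0]

print(f"relevancy score: {relevancy:.2f}")  # low scores flag unrelated images
```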

    Framing Professional Learning Analytics as Reframing Oneself

    Central to imagining the future of technology-enhanced professional learning is the question of how data are gathered, analyzed, and fed back to stakeholders. The field of learning analytics (LA) has emerged over the last decade at the intersection of data science, learning sciences, human-centered and instructional design, and organizational change, and so could in principle inform how data can be gathered and analyzed in ways that support professional learning. However, in contrast to the formal education settings where most LA research has been conducted, much work-integrated learning is experiential, social, situated, and practice-bound. Supporting such learning exposes a significant weakness in LA research, and to make sense of this gap, this article proposes an adaptation of the Knowledge-Agency Window framework, which draws attention to how different forms of professional learning are located along the dimensions of learner agency and knowledge creation. Specifically, we argue that the concept of “reframing oneself” holds particular relevance for informal, work-integrated learning. To illustrate how this insight translates into LA design for professionals, three examples are provided: first, analyzing personal and team skills profiles (skills analytics); second, making sense of challenging workplace experiences (reflective writing analytics); and third, reflecting on one's orientation to learning (dispositional analytics). We foreground professional agency as a key requirement for such techniques to be used effectively and ethically.
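    As a minimal illustration of the first example, skills analytics, the sketch below compares a hypothetical personal skills profile against a team average; the skill names and the 0-5 scale are invented for illustration, not taken from the article.

```python
# Comparing a personal skills profile against a team average (0-5 scale).
personal = {"data analysis": 4, "facilitation": 2, "writing": 3}
team_avg = {"data analysis": 3.5, "facilitation": 3.8, "writing": 3.0}

# Positive gap: room to grow relative to the team; fed back to the learner,
# who retains the agency to decide what (if anything) to do about it.
gaps = {skill: team_avg[skill] - level for skill, level in personal.items()}
for skill, gap in sorted(gaps.items(), key=lambda x: -x[1]):
    print(f"{skill}: gap {gap:+.1f}")
```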

    Social Learning Systems: The Design of Evolutionary, Highly Scalable, Socially Curated Knowledge Systems

    In recent times, great strides have been made towards the advancement of automated reasoning and knowledge management applications, along with their associated methodologies. The introduction of the World Wide Web piqued academicians’ interest in harnessing the power of linked, online documents for the purpose of developing machine learning corpora, providing dynamic knowledge bases for question answering systems, fueling automated entity extraction applications, and performing graph analytic evaluations, such as uncovering the inherent structural semantics of linked pages. Even more recently, substantial attention in the wider computer science and information systems disciplines has been focused on the evolving study of social computing phenomena, primarily those associated with the use, development, and analysis of online social networks (OSNs). This work followed an independent effort to develop an evolutionary knowledge management system, and outlines a model for integrating the wisdom of the crowd into the process of collecting, analyzing, and curating data for dynamic knowledge systems. Throughout, we examine how relational data modeling, automated reasoning, crowdsourcing, and social curation techniques have been exploited to extend the utility of web-based, transactional knowledge management systems, creating a new breed of knowledge-based system in the process: the Social Learning System (SLS). The key questions this work explores by way of elucidating the SLS model include 1) how Web and OSN mining techniques can be unified to conform to a versatile, structured, and computationally efficient ontological framework, and 2) how large-scale knowledge projects may incorporate tiered collaborative editing systems in an effort to elicit knowledge contributions and curation activities from a diverse, participatory audience.
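    A minimal sketch of the tiered collaborative editing idea: contributor tiers and a simple routing rule decide whether an edit is published directly or queued for curation. The tier names and the rule are hypothetical assumptions for illustration, not the SLS design itself.

```python
from dataclasses import dataclass
from enum import IntEnum

class Tier(IntEnum):
    """Hypothetical contributor tiers; higher tiers carry more trust."""
    NOVICE = 1
    CONTRIBUTOR = 2
    CURATOR = 3

@dataclass
class Edit:
    author_tier: Tier
    content: str

def route(edit: Edit) -> str:
    """Publish trusted edits directly; queue the rest for curation."""
    if edit.author_tier >= Tier.CURATOR:
        return "published"
    return "queued for curator review"

print(route(Edit(Tier.NOVICE, "Add entry on OSN mining")))    # queued
print(route(Edit(Tier.CURATOR, "Merge duplicate concepts")))  # published
```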

    An Investigation of the Public Health Informatics Research and Practice in the Past Fifteen Years from 2000 to 2014: A Scoping Review in MEDLINE

    Objective: To examine the extent and nature of existing Public Health Informatics (PHI) studies published in MEDLINE over the past 15 years. Methods: This thesis adopted the scientific scoping review methodology recommended by Arksey and O’Malley in 2005. It proceeded through five main stages: Stage I - identifying the research question; Stage II - identifying relevant studies; Stage III - study selection; Stage IV - charting the data; and Stage V - collating, summarizing, and reporting the results. Each methodological stage was carried out in collaboration with the academic supervisor, and final results and conclusions were set forth. Results: The study captured a total of 486 PHI-focused articles in MEDLINE. A majority of these came from the USA, followed by the UK, Australia, and Canada; only about one fifth of the articles were from the rest of the world. About 60% of the articles addressed infectious disease monitoring, outbreak detection, and bio-terrorism surveillance; about 10% addressed chronic disease monitoring, whereas public health policy, systems, and research represented 40% of the total. The most frequently used information technologies were electronic registries, websites, and GIS; in contrast, mass media and mobile phones were among the least used. Conclusion: Despite the research and discussion of the past 15 years (starting from 2000), PHI systems require further improvement in the application of modern public health technologies, such as wireless devices, wearable devices, remote sensors, and remote/cloud computing, across the various domains of public health; these were scarcely discussed or used in the available literature.