
    Applying digital content management to support localisation

    The retrieval and presentation of digital content such as that on the World Wide Web (WWW) is a substantial area of research. While recent years have seen huge expansion in the size of web-based archives that can be searched efficiently by commercial search engines, the presentation of potentially relevant content is still limited to ranked document lists represented by simple text snippets or image keyframe surrogates. There is expanding interest in techniques to personalise the presentation of content in order to improve the richness and effectiveness of the user experience. One of the most significant challenges to achieving this is the increasingly multilingual nature of this data, and the need to provide suitably localised responses to users based on this content. The Digital Content Management (DCM) track of the Centre for Next Generation Localisation (CNGL) seeks to develop technologies that support advanced personalised access and presentation of information by combining elements from the existing research areas of Adaptive Hypermedia and Information Retrieval. The combination of these technologies is intended to produce significant improvements in the way users access information. We review their key features and introduce early ideas for how they can support localisation and localised content, before concluding with some impressions of future directions in DCM.

    User-centred design of flexible hypermedia for a mobile guide: Reflections on the HyperAudio experience

    A user-centred design approach involves end-users from the very beginning. Considering users at the early stages compels designers to think in terms of utility and usability and helps build the system around what is actually needed. This paper discusses the case of HyperAudio, a context-sensitive, adaptive and mobile guide to museums developed in the late 90s. User requirements were collected via a survey to understand visitors' profiles and visit styles in Natural Science museums. The knowledge acquired supported the specification of system requirements, helping to define the user model, the data structure and the adaptive behaviour of the system. User requirements guided the design decisions on what could be implemented using simple adaptable triggers and what instead needed more sophisticated adaptive techniques, a fundamental choice when all the computation must be done on a PDA. Graphical and interactive environments for developing and testing complex adaptive systems are discussed as a further step towards an iterative design that places user interaction at its centre. The paper discusses how such an environment allows designers and developers to experiment with different system behaviours and to test them extensively under realistic conditions by simulating the actual context evolving over time. The understanding gained in HyperAudio is then considered in the light of the developments that followed that first experience: our findings still seem valid despite the time that has passed.
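    A minimal sketch of the distinction drawn above between simple adaptable triggers and richer adaptive behaviour; the rule, labels and thresholds are illustrative assumptions, not HyperAudio's actual code.

```python
# Illustrative sketch only (not HyperAudio's implementation): a simple adaptable
# trigger reacts directly to the sensed context, cheap enough to evaluate on a
# PDA, while adaptive behaviour also consults the user model built from the
# visitor-survey data (profile and visit style).

def select_presentation(context: dict, user_model: dict) -> str:
    # Simple trigger: react directly to the visitor's location.
    if context.get("nearby_exhibit") == "dinosaur_hall":
        presentation = "dinosaur_intro"
    else:
        presentation = "default_tour"

    # Adaptive refinement: pick a longer variant for thorough visitors.
    if user_model.get("visit_style") == "thorough":
        return presentation + "_long"
    return presentation + "_short"
```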

    A Web-based Multilingual Intelligent Tutor System based on Jackson's Learning Styles Profiler and Expert Systems

    Intelligent Tutoring Systems (ITSs) are nowadays regarded as a way to improve education quality through new technologies in this area. One problem is that the language of an ITS may differ from the learner's, forcing the learner to learn the system's language. This paper tries to remove this necessity by including an automatic translator component, such as the Google Translate API, in the system architecture. The system carries out a pre-test and a post-test, using an expert system and the Jackson model, before and after training a concept, and constantly updates the learner model to record all changes in the learning process. The result is an e-learning system that is web-based, intelligent, adaptive, multilingual and remotely accessible, where tutors and learners can use different languages. It is also applicable Every Time and Every Where (ETEW). Furthermore, it teaches concepts using the best available method, in any language and at low cost.
    Comment: 12 pages, 2 figures, IAENG Transactions on Electrical Engineering Volume 1 - Special Issue of the International MultiConference of Engineers and Computer Scientists 2012. arXiv admin note: substantial text overlap with arXiv:1304.404
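    The abstract above describes a tutoring loop wrapped in machine translation. The sketch below illustrates that flow under stated assumptions: translate() stands in for an MT backend such as the Google Translate API, assess() for the expert-system/Jackson-model assessor, and all names are hypothetical rather than the authors' implementation.

```python
# Hypothetical sketch of the translation-wrapped pre-test / train / post-test loop.

def translate(text: str, source_lang: str, target_lang: str) -> str:
    """Stand-in for a machine-translation service (e.g. the Google Translate API)."""
    if source_lang == target_lang:
        return text
    raise NotImplementedError("plug in a real translation backend")


class LearnerModel:
    """Records assessment scores so the system can adapt as learning progresses."""
    def __init__(self, language: str):
        self.language = language
        self.history: dict[str, list[float]] = {}

    def update(self, concept: str, score: float) -> None:
        self.history.setdefault(concept, []).append(score)


def teach_concept(concept: str, material: str, question: str,
                  system_lang: str, learner: LearnerModel, assess) -> None:
    # Pre-test in the learner's own language.
    learner.update(concept, assess(translate(question, system_lang, learner.language)))
    # Deliver the training material, translated for the learner.
    print(translate(material, system_lang, learner.language))
    # Post-test, so the learner model reflects the effect of the training.
    learner.update(concept, assess(translate(question, system_lang, learner.language)))
```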

    Fact Checking in Community Forums

    Community Question Answering (cQA) forums are very popular nowadays, as they represent effective means for communities around particular topics to share information. Unfortunately, this information is not always factual. Thus, here we explore a new dimension in the context of cQA, which has been ignored so far: checking the veracity of answers to particular questions in cQA forums. As this is a new problem, we create a specialized dataset for it. We further propose a novel multi-faceted model, which captures information from the answer content (what is said and how), from the author profile (who says it), from the rest of the community forum (where it is said), and from external authoritative sources of information (external support). Evaluation results show a MAP value of 86.54, which is 21 points absolute above the baseline.
    Comment: AAAI-2018; Fact-Checking; Veracity; Community Question Answering; Neural Networks; Distributed Representation
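    As a concrete illustration of the multi-faceted idea, the hedged sketch below concatenates one feature vector per facet (content, author, forum context, external support) and trains a simple classifier; the paper's actual model is a neural network over distributed representations, and the feature extractors here are assumed placeholders.

```python
# Minimal sketch: combine the four facets into one feature vector per answer
# and fit an off-the-shelf classifier. The input format is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

def answer_features(answer: dict) -> np.ndarray:
    return np.concatenate([
        answer["content_vec"],   # what is said and how
        answer["author_vec"],    # who says it
        answer["forum_vec"],     # where it is said
        answer["external_vec"],  # support from external authoritative sources
    ])

def train_veracity_classifier(answers: list[dict], labels: list[int]) -> LogisticRegression:
    X = np.vstack([answer_features(a) for a in answers])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```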

    Language learning and technology

    By and large, languages, whether first, second or foreign, remain among the most important core subjects at every educational level. In the early stages, their inclusion in the curriculum is intricately connected with (pre-)literacy practices, but also serves as a main driver for the successful integration of minority students learning a second language. In addition, the attainment of a certain level in a foreign language by the end of compulsory education is a common goal in most educational systems around the globe. Arguably, the key drivers of success in learning a language range from the motivational to the attitudinal, but ultimately they also have to do with the amount of target-language use, access to quality input, and especially language teachers' readiness to incorporate the latest educational trends, educational technologies amongst them, effectively in the language classroom.

    Artificial Intelligence-Enabled Intelligent Assistant for Personalized and Adaptive Learning in Higher Education

    This paper presents a novel framework, Artificial Intelligence-Enabled Intelligent Assistant (AIIA), for personalized and adaptive learning in higher education. The AIIA system leverages advanced AI and Natural Language Processing (NLP) techniques to create an interactive and engaging learning platform. This platform is engineered to reduce cognitive load on learners by providing easy access to information, facilitating knowledge assessment, and delivering personalized learning support tailored to individual needs and learning styles. The AIIA's capabilities include understanding and responding to student inquiries, generating quizzes and flashcards, and offering personalized learning pathways. The research findings have the potential to significantly impact the design, implementation, and evaluation of AI-enabled Virtual Teaching Assistants (VTAs) in higher education, informing the development of innovative educational tools that can enhance student learning outcomes, engagement, and satisfaction. The paper presents the methodology, system architecture, intelligent services, and integration with Learning Management Systems (LMSs) while discussing the challenges, limitations, and future directions for the development of AI-enabled intelligent assistants in education.
    Comment: 29 pages, 10 figures, 9659 words
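    To make one of the listed capabilities concrete, here is a hedged sketch of a quiz-generation service of the kind the paper describes; generate_text() is a hypothetical stand-in for whatever language-model backend AIIA uses, and the prompt and parsing are assumptions, not the authors' design.

```python
# Hypothetical sketch of a language-model-backed quiz-generation service.
from dataclasses import dataclass

@dataclass
class QuizItem:
    question: str
    answer: str

def generate_text(prompt: str) -> str:
    """Placeholder for a call to the underlying language model."""
    raise NotImplementedError("connect a language-model backend here")

def generate_quiz(course_material: str, n_items: int = 5) -> list[QuizItem]:
    prompt = (
        f"Write {n_items} short quiz questions, each followed by its answer, "
        f"based only on the following material:\n{course_material}"
    )
    raw = generate_text(prompt)
    items = []
    for block in raw.split("\n\n"):
        lines = [line for line in block.splitlines() if line.strip()]
        if len(lines) >= 2:
            items.append(QuizItem(question=lines[0], answer=lines[1]))
    return items
```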

    Adapting the automatic assessment of free-text answers to the students

    In this paper, we present the first approach in the field of Computer Assisted Assessment (CAA) of students' free-text answers that models student profiles. This approach has been implemented in a new version of Atenea, a system able to automatically assess students' short answers. The system has been improved so that it can now take into account students' preferences and personal features not only to adapt the assessment process but also to personalize the appearance of the interface. In particular, it can now accept students' answers written in either Spanish or English, by means of Machine Translation. Moreover, we have observed that Atenea's performance does not decrease drastically when combined with automatic translation, provided that the translation does not greatly reduce the variability of the vocabulary.
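    A simplified sketch of the pipeline described above, under assumptions: translate() is a placeholder for the Machine Translation step, and the bag-of-words cosine similarity merely stands in for Atenea's actual free-text scoring; all names are illustrative.

```python
# Hedged sketch: optionally translate the student's answer, then score it
# against reference answers with a simple vocabulary-overlap measure.
from collections import Counter
from math import sqrt

def translate(text: str, source: str, target: str) -> str:
    if source == target:
        return text
    raise NotImplementedError("plug in a machine-translation service")

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def score_answer(student_answer: str, references: list[str], answer_lang: str) -> float:
    answer_en = translate(student_answer, answer_lang, "en")
    bag = Counter(answer_en.lower().split())
    return max(cosine(bag, Counter(ref.lower().split())) for ref in references)
```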

    Visual Question Answering: A Survey

    Visual Question Answering (VQA) is an emerging field at the intersection of computer vision and natural language processing that aims to enable machines to understand the content of images and answer natural language questions about them. Recently, there has been increasing interest in integrating Semantic Web technologies into VQA systems to enhance their performance and scalability. In this context, knowledge graphs, which represent structured knowledge in the form of entities and their relationships, have shown great potential in providing rich semantic information for VQA. This paper provides a high-level overview of the state-of-the-art research on VQA using Semantic Web technologies, including knowledge-graph-based VQA, medical VQA with semantic segmentation, and multi-modal fusion with recurrent neural networks. The paper also highlights the challenges and future directions in this area, such as improving the accuracy of knowledge-graph-based VQA, addressing the semantic gap between image content and natural language, and designing more effective multi-modal fusion strategies. Overall, this paper emphasizes the importance and potential of using Semantic Web technologies in VQA and encourages further research in this exciting area.
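    As an illustration of the multi-modal fusion approach mentioned in the survey, the sketch below encodes the question with a recurrent network, concatenates it with precomputed image features, and classifies over a fixed answer vocabulary; the dimensions and names are assumptions, not taken from any specific system surveyed.

```python
# Minimal sketch of RNN-based question encoding fused with image features.
import torch
import torch.nn as nn

class SimpleVQAFusion(nn.Module):
    def __init__(self, vocab_size: int, num_answers: int,
                 embed_dim: int = 300, hidden_dim: int = 512, img_dim: int = 2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim + img_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_answers),
        )

    def forward(self, question_tokens: torch.Tensor, image_features: torch.Tensor):
        # question_tokens: (batch, seq_len) token ids; image_features: (batch, img_dim)
        _, hidden = self.rnn(self.embed(question_tokens))   # hidden: (1, batch, hidden_dim)
        fused = torch.cat([hidden.squeeze(0), image_features], dim=1)
        return self.classifier(fused)                       # logits over the answer set
```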

    Multimedia question answering

    Ph.D. (Doctor of Philosophy)

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.