
    A software based mentor system

    This thesis describes the architecture, implementation issues and evaluation of Mentor - an educational support system designed to mentor students in their university studies. Students can ask (by typing) natural language questions and Mentor uses several educational paradigms to present information from its Knowledge Base, or from data-mined online Web sites, in response. Typically the questions focus on the student’s assignments or on their preparation for examinations. Mentor is also pro-active in that it prompts the student with questions such as "Have you started your assignment yet?". If the student responds and enters into a dialogue with Mentor then, based upon the student’s questions and answers, it guides them through a Directed Learning Path planned by the lecturer, specific to that assessment. The objectives of the research were to determine whether such a system could be designed, developed and applied in a large-scale, real-world environment, and whether the resulting system was beneficial to the students using it. The study was significant in that it provided an analysis of the design and implementation of the system as well as a detailed evaluation of its use. This research integrated the Computer Science disciplines of network communication, natural language parsing, user interface design and software agents, together with pedagogies from the Computer Aided Instruction and Intelligent Tutoring System fields of Education. Collectively, these disciplines provide the foundation for the two main thesis research areas of Dialogue Management and Tutorial Dialogue Systems. The development and analysis of the Mentor System required the design and implementation of an easy-to-use text-based interface as well as a hyper- and multi-media graphical user interface, a client-server system, and a dialogue management system based on an extensible kernel. The multi-user Java-based client-server system used Perl-5 Regular Expression pattern matching for Natural Language Parsing along with a state-based Dialogue Manager and a Knowledge Base marked up using the XML-based Virtual Human Markup Language. The kernel was also used in other Dialogue Management applications, such as computer-generated Talking Heads. The system also enabled users to easily program their own knowledge into the Knowledge Base, as well as to program new information retrieval or management tasks, so that the system could grow with the user. The overall framework that integrated and managed the above components into a usable system employed educational pedagogies that supported the student’s learning process. The thesis outlines the learning paradigms used in, and summarises the evaluation of, three course-based Case Studies of university students’ perception of the system, examining how effective and useful it was and whether students benefited from using it. The thesis demonstrates that Mentor met its objectives and was very successful in helping students with their university studies. As one participant indicated: ‘I couldn’t have done without it.’
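
    The abstract describes the Mentor kernel only at the architectural level (Perl-5-style regular-expression matching feeding a state-based Dialogue Manager over an XML Knowledge Base). As a rough illustration of how patterns and dialogue states can be paired in such a kernel, the Python sketch below uses invented states, patterns and replies; the real system was implemented in Java.

```python
import re

# Illustrative sketch of a state-based dialogue manager that matches typed
# student questions against regex patterns, in the spirit of the Mentor
# kernel described above. All pattern, state and reply strings here are
# invented for illustration; the real system was Java with Perl-5 regexes.

RULES = {
    "idle": [
        (re.compile(r"\b(assignment|assessment)\b", re.I),
         "assignment_help",
         "Have you started your assignment yet?"),
        (re.compile(r"\b(exam|examination|revision)\b", re.I),
         "exam_prep",
         "Which topic would you like to revise first?"),
    ],
    "assignment_help": [
        (re.compile(r"\b(yes|started)\b", re.I),
         "directed_path",
         "Great - let's work through the next step of the Directed Learning Path."),
        (re.compile(r"\b(no|not yet)\b", re.I),
         "directed_path",
         "No problem - start with the assignment brief in the Knowledge Base."),
    ],
}


def respond(state: str, utterance: str) -> tuple[str, str]:
    """Return (next_state, reply) for a typed utterance in the given state."""
    for pattern, next_state, reply in RULES.get(state, []):
        if pattern.search(utterance):
            return next_state, reply
    return state, "Could you rephrase that?"


if __name__ == "__main__":
    state = "idle"
    for text in ["How do I start my assignment?", "Not yet"]:
        state, reply = respond(state, text)
        print(f"[{state}] {reply}")
```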

    Adventures in software engineering: plugging HCI & accessibility gaps with open source solutions

    There has been a great deal of research undertaken in the field of Human-Computer Interfaces (HCI), input devices, and output modalities in recent years. From touch-based and voice control input mechanisms such as those found on modern smart devices, to touch-free input through video-stream/image analysis (including depth streams and skeletal mapping), and the inclusion of gaze tracking, head tracking, virtual reality and beyond - the availability and variety of these I/O (Input/Output) mechanisms has increased tremendously and progressed both into our living rooms and into our lives in general. With regard to modern desktop computers and videogame consoles, many of these technologies are at present at a relatively immature stage of development - their use often limited to simple adjuncts to the staple input mechanisms of mouse, keyboard, or joystick/joypad. In effect, we have these new input devices - but we're not quite sure how best to use them yet; that is, where their various strengths and weaknesses lie, and how or if they can be used to conveniently and reliably drive or augment applications in our everyday lives. In addition, much of this technology is provided by proprietary hardware and software, offering limited options for customisation or adaptation to better meet the needs of specific users. Therefore, this project investigated the development of open source software solutions to address various aspects of innovative user I/O in a flexible manner. Towards this end, a number of original software applications have been developed which incorporate functionality aimed at enhancing the current state of the art in these areas, and that software has been made freely available for use by any who may find it beneficial.

    WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM

    Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments demonstrate that the proposed models outperform state-of-the-art models. The experiments also show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves overall accuracies of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches accuracies of 98.54%, 94.25%, and 95.09% across those same environments.
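
    The paper names the ABiLSTM and CNN-ABiLSTM models, but the abstract gives no layer detail. The PyTorch sketch below shows one plausible attention-based BiLSTM classifier for CSI windows; the input dimensions, hidden size and attention scoring are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn


class ABiLSTM(nn.Module):
    """Attention-based BiLSTM for CSI time-series classification (a sketch;
    layer sizes and the attention scoring are assumed, not taken from the
    paper)."""

    def __init__(self, n_subcarriers: int = 90, hidden: int = 128, n_classes: int = 12):
        super().__init__()
        self.lstm = nn.LSTM(n_subcarriers, hidden, batch_first=True, bidirectional=True)
        self.att_score = nn.Linear(2 * hidden, 1)   # scalar attention score per time step
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, n_subcarriers)
        h, _ = self.lstm(x)                    # (batch, time, 2*hidden)
        alpha = torch.softmax(self.att_score(h), dim=1)   # attention weights over time
        context = (alpha * h).sum(dim=1)       # weighted sum over time steps
        return self.classifier(context)        # (batch, n_classes)


# Example: a batch of 8 CSI windows, 200 time steps, 90 subcarrier amplitudes.
if __name__ == "__main__":
    model = ABiLSTM()
    logits = model(torch.randn(8, 200, 90))
    print(logits.shape)   # torch.Size([8, 12])
```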

    Mining a Small Medical Data Set by Integrating the Decision Tree and t-test

    Although several researchers have used statistical methods to prove that aspiration followed by the injection of 95% ethanol left in situ (retention) is an effective treatment for ovarian endometriomas, very few discuss the different conditions that could generate different recovery rates for the patients. Therefore, this study adopts statistical methods and decision tree techniques together to analyze the postoperative status of ovarian endometriosis patients under different conditions. Since our collected data set is small, containing only 212 records, we use all of these data as the training data. Therefore, instead of using the resulting tree to generate rules directly, we first use the value at each node as a cut point to generate all possible rules from the tree. Then, using the t-test, we verify these rules to discover useful descriptive rules. Experimental results show that our approach can find new and interesting knowledge about recurrent ovarian endometriomas under different conditions.
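
    The core idea above is to take the value at every internal tree node as a cut point, enumerate candidate rules, and keep only those that a t-test supports. A minimal sketch of that workflow, using scikit-learn and SciPy on synthetic stand-in data (the features and outcome are invented; the real study used 212 patient records):

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data: 212 records with invented features and a binary
# outcome (1 = recurrence). The real study's clinical variables differ.
rng = np.random.default_rng(0)
X = rng.normal(size=(212, 3))
y = (X[:, 1] + 0.5 * rng.normal(size=212) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every internal node contributes one candidate cut point (feature, threshold).
t = tree.tree_
cut_points = [(t.feature[i], t.threshold[i])
              for i in range(t.node_count) if t.children_left[i] != -1]

# Verify each candidate rule: does splitting at the cut point separate
# outcomes significantly (two-sample t-test on the outcome)?
for feat, thr in cut_points:
    below, above = y[X[:, feat] <= thr], y[X[:, feat] > thr]
    if len(below) > 1 and len(above) > 1:
        stat, p = ttest_ind(below, above, equal_var=False)
        if p < 0.05:
            print(f"feature {feat} <= {thr:.2f}: p = {p:.4f} (candidate rule kept)")
```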

    Contributions to presence-based systems for deploying ubiquitous communication services

    Next-Generation Networks (NGNs) will converge the existing fixed and wireless networks. These networks rely on the IMS (IP Multimedia Subsystem), introduced by the 3GPP. The presence service originated in instant messaging applications. A user's presence information consists of any context that applications need in order to handle and adapt the user's communications. The presence service is crucial in the IMS for deploying ubiquitous services. SIMPLE is the standard protocol for handling presence and instant messages. This protocol disseminates users' presence information through subscriptions, notifications and publications. SIMPLE generates a great deal of signaling traffic by constantly disseminating presence information and maintaining subscriptions, which may overload network servers. This issue is even more harmful to the IMS because of its centralized servers. A key factor in the success of NGNs is to provide users with always-on services that are seamlessly part of their daily life. Personalizing these services according to users' needs is essential to their success. To this end, presence information is considered a crucial tool for user-based personalization. This thesis can be briefly summarized through the following contributions: We propose filtering and controlling the rate of presence publications so as to reduce the information sent over access links. We probabilistically model presence information through Markov chains, and analyze the efficiency of controlling the rate of publications modeled by a particular Markov chain. The reported results show that this technique effectively reduces presence overload. We mathematically study the amount of presence traffic exchanged between domains, and analyze the efficiency of several strategies for reducing this traffic. We propose a strategy, which we call Common Subscribe (CS), for reducing the presence traffic exchanged between federated domains. We compare the traffic generated by this strategy with that generated by other optimizations. The reported results show that CS is the most efficient at reducing presence traffic. We analyze the load, in number of messages, that several inter-domain traffic optimizations place on the centralized IMS servers. Our proposed strategy, CS, combined with an RLS (i.e., a SIMPLE optimization), is the only optimization that reduces the IMS load; the others increase it. We estimate the efficiency of the RLS, concluding that the RLS is not efficient under certain circumstances, in which this optimization is discouraged. We propose a queuing system for optimizing presence traffic in both the network core and the access link, capable of adapting the publication and notification rates based on quality conditions (e.g., maximum delay). We probabilistically model this system, and validate it in different scenarios. We propose, and implement a prototype of, a fully distributed platform for handling user presence information. This approach allows integrating Internet Services, such as HTTP or VoIP, and optimizing these services in an easy, user-personalized way. We have developed SECE (Sense Everything, Control Everything), a platform for users to create rules that handle their communications and Internet Services proactively. SECE interacts with multiple third-party services to obtain as much user context as possible. We have developed a natural-English-like formal language for SECE rules.
We have enhanced SECE to discover web services automatically through the Web Ontology Language (OWL). SECE allows composing web services automatically based on real-world events, which is a significant contribution to the Semantic Web. The research presented in this thesis has been published in 3 book chapters, 4 international journals (3 of them indexed in JCR), 10 international conference papers, 1 demonstration at an international conference, and 1 national conference.
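
    One of the contributions above is modelling presence information with Markov chains and controlling the rate of presence publications to cut traffic on the access link. The toy simulation below only illustrates the kind of saving being measured: the presence states, transition matrix and minimum publication interval are all invented, and the thesis develops far more detailed Markov and queuing models.

```python
import numpy as np

# Toy simulation of rate-controlled presence publications. The presence
# states, transition probabilities and minimum publication interval are
# invented for illustration only.
STATES = ["available", "busy", "away", "offline"]
P = np.array([[0.90, 0.05, 0.04, 0.01],
              [0.10, 0.85, 0.04, 0.01],
              [0.05, 0.05, 0.88, 0.02],
              [0.02, 0.01, 0.02, 0.95]])   # row-stochastic transition matrix

rng = np.random.default_rng(1)
MIN_INTERVAL = 5                 # rate limiter: at most one PUBLISH every 5 ticks
state, last_pub = 0, -MIN_INTERVAL
naive_pubs = controlled_pubs = 0

for tick in range(10_000):
    new_state = rng.choice(4, p=P[state])
    if new_state != state:
        naive_pubs += 1                        # naive client publishes on every change
        if tick - last_pub >= MIN_INTERVAL:    # rate-controlled client waits out the interval
            controlled_pubs += 1
            last_pub = tick
    state = new_state

print(f"naive publications:        {naive_pubs}")
print(f"rate-limited publications: {controlled_pubs}")
```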

    Deep Neural Attention for Misinformation and Deception Detection

    PhD thesis in Information Technology. At present, the influence of social media on society is so great that, for many, life seems to have no meaning without it. This over-reliance on social media gives anarchic elements an opportunity to take undue advantage. Online misinformation and deception are vivid examples of this phenomenon. Misinformation and fake news spread faster and wider than true news [32]. The need of the hour is to identify and curb the spread of misinformation and misleading content automatically and as early as possible. Several machine learning models have been proposed by researchers to detect and prevent misinformation and deceptive content. However, these prior works suffer from some limitations. First, they either use feature-engineering-heavy methods or intricate deep neural architectures, which are not transparent in terms of their internal workings and decision making. Second, they do not incorporate and learn the available auxiliary and latent cues and patterns, which can be very useful in forming adequate context for the misinformation. Third, most of the former methods perform poorly on early detection accuracy measures because of their reliance on features that are usually absent at the initial stage of news or social media posts on social networks. In this dissertation, we propose deep neural attention based solutions to overcome these limitations. For instance, we propose a claim verification model which learns embeddings for latent aspects such as the author and subject of the claim and the domain of the external evidence document. This enables the model to learn important additional context beyond the textual content. In addition, we propose an algorithm to extract evidential snippets from external evidence documents, which serve as explanations of the model’s decisions. Next, we improve this model by using an improved claim-driven attention mechanism, and we also generate a topically diverse and non-redundant multi-document fact-checking summary for the claims, which helps to further interpret the model’s decision making. Subsequently, we introduce a novel method to learn influence and affinity relationships among the social media users present on the propagation paths of news items. By modeling the complex influence relationships among users, in addition to textual content, we learn significant patterns pertaining to the diffusion of a news item on the social network. The evaluation shows that the proposed model outperforms other related methods in early detection performance, with significant gains. Next, we propose a synthetic-headline-generation based headline incongruence detection model, which uses word-to-word mutual attention based deep semantic matching between the original and a synthetic news headline to detect incongruence. Further, we investigate and define a new task of incongruence detection in the presence of important cardinal values in the headline. For this new task, we propose a part-of-speech pattern driven attention based method, which learns the requisite context for cardinal values.
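
    The headline-incongruence model described above relies on word-to-word mutual attention between the original and a synthetic headline. The PyTorch sketch below implements a generic bilinear mutual-attention matcher for two token sequences; the embedding size, scoring function, pooling and classifier head are assumptions, not the dissertation's architecture.

```python
import torch
import torch.nn as nn


class MutualAttentionMatcher(nn.Module):
    """Generic word-to-word mutual attention matcher for two token sequences
    (e.g. an original and a synthetic headline). A sketch: the bilinear
    scoring, mean pooling and classifier head are assumed."""

    def __init__(self, dim: int = 300):
        super().__init__()
        self.bilinear = nn.Parameter(torch.randn(dim, dim) * 0.01)
        self.classifier = nn.Linear(2 * dim, 2)   # congruent vs. incongruent

    def forward(self, a, b):                  # a: (B, La, d), b: (B, Lb, d)
        scores = a @ self.bilinear @ b.transpose(1, 2)           # (B, La, Lb)
        a2b = torch.softmax(scores, dim=2) @ b                   # attend a over b
        b2a = torch.softmax(scores, dim=1).transpose(1, 2) @ a   # attend b over a
        rep = torch.cat([a2b.mean(dim=1), b2a.mean(dim=1)], dim=-1)
        return self.classifier(rep)


if __name__ == "__main__":
    matcher = MutualAttentionMatcher()
    # 4 headline pairs, 12 vs. 15 tokens, 300-dim embeddings (all random here).
    logits = matcher(torch.randn(4, 12, 300), torch.randn(4, 15, 300))
    print(logits.shape)    # torch.Size([4, 2])
```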

    Exploring students’ iterative practices when learning with physical computing kits through data visualisations

    Physical computing kits allow the practical implementation of open-ended, hands-on, interactive learning experiences in the classroom. In the process of engaging with physical computing kits, students formulate and implement self-constructed goals using an iterative approach. However, the openness and diversity of such learning contexts often make them challenging to design and support. The field of learning analytics has the potential to support project-based learning, using continuous real-time data traces arising from student interactions. Data visualisations, specifically, can provide reflective opportunities for teachers to analyse students’ actions and act on this evidence. However, to date, there has been little data visualisation research targeted at learning with physical computing kits. This thesis reports on the design and evaluation of a suite of data visualisations focussed on students’ iterative design process when using physical computing kits in authentic classroom settings. The areas of iterative design, appropriation theory, process-driven learning analytics and data visualisation inform the analysis and interpretation of trace data collected from students’ interactions. The contribution of the thesis is three-fold. First, a model for examining students’ trace data in keeping with social processes, such as appropriation, is presented. Secondly, insights into the iterative design process of students engaging in open-ended projects, as they emerge from our data visualisations across multiple groups of students, are produced. Thirdly, an evaluation of the role and potential of using data visualisations in the classroom is conducted with ten teachers. Implications for the design and support of open-ended project-based learning experiences with physical computing kits using trace data and data visualisation are discussed based on the teachers’ feedback. The thesis represents a first step towards the design of context-aligned, process-oriented data visualisations that provide evidence-based reflective opportunities to support students’ iterative design behaviours in this learning setting.
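
    The visualisations described above are built from interaction trace data. Purely as an indication of what a per-group timeline of iteration events might look like, the matplotlib sketch below uses invented event types, timestamps and group names; it does not reproduce the thesis's actual visualisation suite.

```python
import matplotlib.pyplot as plt

# Invented trace data (minutes into session, event type) per group; real
# traces would come from logged interactions with the physical computing kits.
traces = {
    "Group A": [(2, "edit"), (5, "run"), (6, "edit"), (9, "run"), (15, "run")],
    "Group B": [(3, "edit"), (12, "run"), (14, "edit"), (15, "run")],
}
markers = {"edit": "o", "run": "s"}

fig, ax = plt.subplots(figsize=(8, 2.5))
for row, (group, events) in enumerate(traces.items()):
    for minute, kind in events:
        ax.scatter(minute, row, marker=markers[kind], color="tab:blue")

ax.set_yticks(range(len(traces)))
ax.set_yticklabels(traces.keys())
ax.set_xlabel("minutes into session")
ax.set_title("Iteration events per group (circle = edit, square = run)")
fig.tight_layout()
fig.savefig("iteration_timeline.png")
```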

    A Haptic Study to Inclusively Aid Teaching and Learning in the Discipline of Design

    Designers are known to use a blend of manual and virtual processes to produce design prototype solutions. For modern designers, computer-aided design (CAD) tools are an essential requirement for developing design concept solutions. CAD, together with augmented reality (AR) systems, has altered the face of design practice, as witnessed by the way a designer can now change the shape, form, color, pattern and texture of a 3D product concept at the click of a button in minutes, rather than laboring over a physical model in the studio for hours. However, CAD can often limit a designer’s experience of being ‘hands-on’ with materials and processes. The rise of machine haptic (MH) tools offers great potential for designers to feel more ‘hands-on’ with virtual modeling processes. Through the use of MH, product designers are able to control, virtually sculpt, and manipulate virtual 3D objects on-screen. Design practitioners are well placed to make use of haptics to augment 3D concept creation, which is traditionally a highly tactile process. For similar reasons, non-sighted and visually impaired (NS, VI) communities could also benefit from using MH tools to increase touch-based interactions, thereby creating better access for NS, VI designers. In spite of this, the use of MH within the design industry (specifically product design), or by the non-sighted community, is still in its infancy. Therefore the full benefit of haptics as an aid to non-sighted designers has not yet been fully realised. This thesis empirically investigates the use of multimodal MH as a step closer to improving the virtual hands-on process, for the benefit of NS, VI and fully sighted (FS) Designer-Makers. This thesis comprises four experiments, embedded within four case studies (CS1-4). Case studies 1 and 2 worked with self-employed NS, VI Art Makers at Henshaws College for the Blind and Visually Impaired. These studies examined the effects of haptics on NS, VI users’ evaluations of experience. Case studies 3 and 4, featuring experiments 3 and 4, were designed to examine the effects of haptics on distance learning design students at the Open University. The empirical results from all four case studies showed that NS, VI users were able to navigate and perceive virtual objects via the force feedback from the haptically rendered objects on-screen. Moreover, they were assisted by the multimodal MH assistance as a whole, which in CS2 appeared to offer better assistance to NS than to FS participants. In CS3 and CS4, MH and multimodal assistance afforded equal assistance to NS, VI and FS participants, but haptics were not as successful in bettering the times recorded in the manual (M) conditions. However, the collision data between the M and MH conditions showed little statistical difference. The thesis shows that multimodal MH systems, specifically used in kinesthetic mode, have enabled humans (non-disabled and disabled) to credibly judge objects within the virtual realm. It also shows that multimodal augmented tooling can improve interaction and afford better access to the graphical user interface for a wider body of users.

    Engaging older adults with age-related macular degeneration in the design and evaluation of mobile assistive technologies

    Ongoing advances in technology are undoubtedly increasing the scope for enhancing and supporting older adults’ daily living. The digital divide between older and younger adults, however, raises concerns about the suitability of technological solutions for older adults, especially for those with impairments. Taking older adults with Age-Related Macular Degeneration (AMD) – a progressive and degenerative disease of the eye – as a case study, the research reported in this dissertation considers how best to engage older adults in the design and evaluation of mobile assistive technologies in order to achieve sympathetic design of such technologies. Recognising the importance of good nutrition and the challenges involved in designing for people with AMD, this research followed a participatory and user-centred design (UCD) approach to develop a proof-of-concept diet diary application for people with AMD. Findings from the initial knowledge elicitation activities contribute to the growing debate surrounding how older adults’ participation is initiated, planned and managed. Reflections on the application of the participatory design method highlighted a number of key strategies that can be applied to maintain empathic participatory design rapport with older adults and, subsequently, led to the formulation of participatory design guidelines for effectively engaging older adults in design activities. Taking a novel approach, the final evaluation study addressed the gap in knowledge on how to bring closure to the participatory process in as positive a way as possible, cognisant of the potential negative effect that withdrawal of the participatory process may have on individuals. Based on the results of this study, we ascertain that (a) sympathetic design of technology with older adults will maximise technology acceptance and shows strong indicators for effecting behaviour change; and (b) being involved in the design and development of such technologies has the capacity to significantly improve the quality of life of older adults (with AMD).