567 research outputs found

    Collaborative learning with affective artificial study companions in a virtual learning environment

    This research has been carried out in conjunction with Chapeltown and Harehills Assisted Learning Computer School (CHALCS) and local schools. CHALCS is an 'out-of-hours' school in a deprived inner-city community where unemployment is high and many children are failing to meet their educational potential. As the name implies, CHALCS provides students with access to computers to support their learning. CHALCS relies on many volunteer tutors, and specialist tutors are in short supply. This is especially true for subjects such as Advanced Level Physics, which have low numbers of students. This research aimed to investigate the feasibility of providing online study skills support to pupils at CHALCS and a local school. Research suggests that collaborative learning that prompts students to explain and justify their understanding can encourage deeper learning. As a potentially effective way of motivating deeper learning from hypertext course notes in a Virtual Learning Environment (VLE), this research investigates the feasibility of designing an artificial agent capable of collaborating with the learner to jointly construct summary notes. Hypertext course notes covering a portion of the Advanced Level Physics curriculum were designed and uploaded into a WebCT-based VLE. A specialist tutor validated the content of the course notes before the ease of use of the VLE was tested with target students. A study was then conducted to develop a model of the kinds of help students required in writing summary notes from the course notes. Based on the derived process model of summarisation and an analysis of the content structure of the course notes, strategies for summarising the text were devised. An Animated Pedagogical Agent was designed incorporating these strategies. Two versions of the agent with opposing 'Affectations' (giving the appearance of different characters) were evaluated with users. It was therefore possible to test which artificial 'character' students preferred. From the evaluation study, conclusions are drawn concerning the effect of the two opposite characterisations on student perceptions of the agent and the degree to which it was helpful as a learning companion. Some recommendations for future work are then made.

    Enhancing dynamic symbolic execution via loop summarisation, segmented memory and pending constraints

    Software has become ubiquitous and its impact is still increasing. The more software is created, the more bugs get introduced into it. With software’s increasing omnipresence, these bugs have a high probability of negative impact on everyday life. There are many efforts aimed at improving software correctness, among which is symbolic execution, a program analysis technique that aims to systematically explore all program paths. In this thesis we present three techniques for enhancing symbolic execution. We first present a counterexample-guided inductive synthesis approach to summarising a class of loops, called memoryless loops, using standard library functions. Our approach can summarise two thirds of the memoryless loops we gathered from a set of open-source programs. These loop summaries can be used to: 1) enhance symbolic execution, 2) optimise native code and 3) refactor code. We then propose a technique that avoids expensive forking by using a segmented memory model. In this model, we split memory into segments using pointer alias analysis, so that each symbolic pointer refers to objects in a single segment. This results in a memory model where forking due to symbolic pointer dereferences is reduced. We evaluate our segmented memory model on benchmarks such as SQLite, m4 and make and observe significant decreases in execution time and memory usage. Finally, we present pending constraints, which can enhance the scalability of symbolic execution by aggressively prioritising execution paths that are already known to be feasible, either via cached solver solutions or seeds. The execution of other paths is deferred until no paths are known to be feasible without using the constraint solver. We evaluate our technique on nine applications, including SQLite3, make and tcpdump, and show it can achieve higher coverage for both seeded and non-seeded exploration.
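    As a concrete illustration of the first contribution, the sketch below shows the kind of 'memoryless' loop such a technique targets, alongside a hand-written summary of it as a single C standard-library call; the function names and the character-scanning example are illustrative assumptions, and the counterexample-guided synthesis procedure itself is not shown.

```c
#include <stdio.h>
#include <string.h>

/* A "memoryless" loop: each iteration depends only on the current
 * element, not on state accumulated across earlier iterations. */
static const char *find_colon_loop(const char *s) {
    while (*s != '\0') {
        if (*s == ':')
            return s;   /* found the separator */
        s++;
    }
    return NULL;        /* not found */
}

/* The same behaviour summarised as one standard-library call; a
 * symbolic executor can reason about strchr's known semantics
 * instead of unrolling the loop path by path. */
static const char *find_colon_summary(const char *s) {
    return strchr(s, ':');
}

int main(void) {
    const char *input = "key:value";
    printf("%s\n", find_colon_loop(input));     /* prints ":value" */
    printf("%s\n", find_colon_summary(input));  /* prints ":value" */
    return 0;
}
```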

    Review and classification of trajectory summarisation algorithms: From compression to segmentation

    With the continuous development and cost reduction of positioning and tracking technologies, a large number of trajectories are being exploited in multiple domains for knowledge extraction. A trajectory is formed by a large number of measurements, many of which are unnecessary to describe the actual trajectory of the vehicle, or are even harmful due to sensor noise. This not only consumes large amounts of memory, but also makes the knowledge extraction process more difficult. Trajectory summarisation techniques can solve this problem, generating a smaller and more manageable representation and even semantic segments. In this comprehensive review, we explain and classify techniques for the summarisation of trajectories according to their search strategy and point evaluation criteria, describing connections with the line simplification problem. We also explain several concepts specific to the trajectory summarisation problem. Finally, we outline recent trends and best practices to guide research into future summarisation algorithms. The author(s) disclosed receipt of the following financial support for the research, authorship and/or publication of this article: this work was funded by public research projects of the Spanish Ministry of Economy and Competitiveness (MINECO), reference TEC2017-88048-C2-2-
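    The connection to line simplification can be made concrete with the classic Douglas-Peucker algorithm, sketched below in C for 2-D points using a perpendicular-distance criterion; the point representation, tolerance and sample trajectory are illustrative assumptions rather than details taken from the review.

```c
#include <math.h>
#include <stdio.h>

typedef struct { double x, y; } Point;

/* Perpendicular distance from point p to the segment a-b. */
static double perp_dist(Point p, Point a, Point b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = sqrt(dx * dx + dy * dy);
    if (len == 0.0)
        return hypot(p.x - a.x, p.y - a.y);
    return fabs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

/* Recursive Douglas-Peucker: mark the points worth keeping. */
static void simplify(const Point *pts, int lo, int hi, double eps, int *keep) {
    double dmax = 0.0;
    int idx = -1;
    for (int i = lo + 1; i < hi; i++) {
        double d = perp_dist(pts[i], pts[lo], pts[hi]);
        if (d > dmax) { dmax = d; idx = i; }
    }
    if (idx != -1 && dmax > eps) {   /* farthest point is significant */
        keep[idx] = 1;
        simplify(pts, lo, idx, eps, keep);
        simplify(pts, idx, hi, eps, keep);
    }
}

int main(void) {
    Point traj[] = { {0,0}, {1,0.1}, {2,-0.1}, {3,5}, {4,6}, {5,7}, {6,8.1}, {7,9} };
    int n = (int)(sizeof traj / sizeof traj[0]);
    int keep[8] = {0};
    keep[0] = keep[n - 1] = 1;       /* endpoints are always kept */
    simplify(traj, 0, n - 1, 1.0, keep);
    for (int i = 0; i < n; i++)
        if (keep[i])
            printf("(%g, %g)\n", traj[i].x, traj[i].y);
    return 0;
}
```

    Keeping only the points that deviate from the current chord by more than a tolerance illustrates the compression end of the spectrum the review covers; the perpendicular-distance test is one example of the point evaluation criteria used to classify summarisation techniques.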

    Using social semantic knowledge to improve annotations in personal photo collections

    Instituto Politécnico de Lisboa (IPL) and Instituto Superior de Engenharia de Lisboa (ISEL); support granted under grant SPRH/PROTEC/67580/2010, which partially supported this work.

    Advances in automatic terminology processing: methodology and applications in focus

    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy. The information and knowledge era, in which we are living, creates challenges in many fields, and terminology is not an exception. The challenges include an exponential growth in the number of specialised documents that are available, in which terms are presented, and the number of newly introduced concepts and terms, which are already beyond our (manual) capacity. A promising solution to this ‘information overload’ would be to employ automatic or semi-automatic procedures to enable individuals and/or small groups to efficiently build high-quality terminologies from their own resources which closely reflect their individual objectives and viewpoints. Automatic terminology processing (ATP) techniques have already proved to be quite reliable, and can save human time in terminology processing. However, they are not without weaknesses, one of which is that these techniques often consider terms to be independent lexical units satisfying some criteria, when terms are, in fact, integral parts of a coherent system (a terminology). This observation is supported by the discussion of the notion of terms and terminology and the review of existing approaches in ATP presented in this thesis. In order to overcome the aforementioned weakness, we propose a novel methodology in ATP which is able to extract a terminology as a whole. The proposed methodology is based on knowledge patterns automatically extracted from glossaries, which we consider to be valuable but overlooked resources. These automatically identified knowledge patterns are used to extract terms, their relations and descriptions from corpora. The extracted information can facilitate the construction of a terminology as a coherent system. The study also aims to discuss applications of ATP, and describes an experiment in which ATP is integrated into a new NLP application: multiple-choice test item generation. The successful integration of the system shows that ATP is a viable technology, and should be exploited more by other NLP applications.
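    To make the idea of knowledge patterns more concrete, the sketch below matches a single hand-written pattern of the form '<term> is defined as <description>' using POSIX regular expressions in C; the pattern, the example sentence and the output format are illustrative assumptions, since the thesis derives such patterns automatically from glossaries.

```c
#include <regex.h>
#include <stdio.h>

int main(void) {
    /* One illustrative knowledge pattern: "<term> is defined as <description>." */
    const char *pattern = "([A-Za-z ]+) is defined as ([^.]+)\\.";
    const char *sentence = "A lexeme is defined as a unit of lexical meaning.";

    regex_t re;
    regmatch_t m[3];                 /* whole match plus two capture groups */
    if (regcomp(&re, pattern, REG_EXTENDED) != 0) {
        fprintf(stderr, "bad pattern\n");
        return 1;
    }
    if (regexec(&re, sentence, 3, m, 0) == 0) {
        printf("term:        %.*s\n",
               (int)(m[1].rm_eo - m[1].rm_so), sentence + m[1].rm_so);
        printf("description: %.*s\n",
               (int)(m[2].rm_eo - m[2].rm_so), sentence + m[2].rm_so);
    }
    regfree(&re);
    return 0;
}
```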
    • …