
    Learner models in online personalized educational experiences: an infrastructure and some experiments

    Technologies are changing the world around us, and education is not immune from their influence: the field of teaching and learning supported by Information and Communication Technologies (ICTs), also known as Technology Enhanced Learning (TEL), has expanded enormously in recent years. This wide adoption has been driven by the massive diffusion of broadband connections and by the pervasive demand for education that accompanies the evolution of science and technology, pushing the usage of online education (distance and blended methodologies for educational experiences) to previously unexpected rates. Alongside their well-known potential, digital educational tools come with a number of downsides, such as possible disengagement on the part of the learner, the absence of the social pressures that normally exist in a classroom environment, the difficulty or even inability of learners to self-regulate and, last but not least, the weakening of the stimulus to actively participate and cooperate with lecturers and peers. These difficulties affect both the teaching process and the outcomes of the educational experience (i.e. the learning process), posing a serious limit and calling into question the broader applicability of TEL solutions. To overcome these issues, tools are needed to support the learning process. In the literature, one known approach is to rely on a user profile that collects data while the learner uses the eLearning platform or tool; the resulting profile can then be used to adapt the behaviour of the system and the contents proposed to the learner. On top of this model, several studies have stressed the positive effects of disclosing the model itself to the learner for inspection; such a disclosed model is known as an Open Learner Model (OLM). The idea of opening learners' profiles, and eventually integrating them with external online resources, is not new, and its ultimate goal is to create global, long-run indicators of the learner's profile. The representation of the learner model also plays a role, moving from the traditional textual, analytic/extensive representation towards graphical indicators that summarise and present one or more characteristics of the model in a way that is more effective and natural for the user. Relying on the same learner models, and exploiting different aggregation and representation capabilities, it is possible both to support the learner's self-reflection and to foster the tutoring process, allowing proper supervision by the tutor/teacher; both objectives can be reached through graphical representations of the relevant information, presented in different ways. Furthermore, with such an open approach to the learner model, personalisation and adaptation acquire a central role in the TEL experience, overcoming the earlier limitation that the system could not show and explain to the learner the reasons for its interventions. As a consequence, the introduction of different tools, platforms, widgets and devices into the learning process, together with adaptation based on learner profiles, can create a personal space for a potentially fruitful usage of the rich and widespread amount of resources available to the learner.
This work analysed how a learner model can be represented visually to the system's users, exploring the effects and performance for learners and teachers. Subsequently, it concentrated on investigating how the adoption of adaptive and social visualisations of the OLM affects the student experience within a TEL context. The motivation was twofold: on one side, to show that mixing data from heterogeneous, previously unrelated data sources can yield meaningful didactic interpretations; on the other, to measure the perceived impact of introducing adaptivity (and social aspects) into the graphical visualisations produced by such a tool. To achieve these objectives, the present work merged user data from learning platforms into a learner profile. This was accomplished through the creation of a tool, named GVIS, that elaborates on the user actions collected in remote-teaching platforms. A number of test cases were performed and analysed, adopting the developed tool as the provider that extracts, aggregates and represents the data for the learners' model. The impact of the GVIS tool was then estimated with self-evaluation questionnaires, the analysis of log files and knowledge quiz results. Dimensions such as perceived usefulness, impact on motivation and commitment, the cognitive overload generated, and the impact of social data disclosure were taken into account. The main result was that applying the tool in TEL experiences affects the behaviour of online learners when it provides them with indicators about their activities, especially when enhanced with social capabilities. The effects appear to be amplified when the widget usage is kept as simple as possible. On the learner side, the results suggest that learners appreciate the tool and recognise its value: its introduction as part of the online learning experience can act as a positive pressure factor, enhanced by the peer-comparison functionality. This functionality can also reinforce student engagement and positive commitment to the educational experience by transmitting a sense of community and stimulating healthy competition between learners. On the teacher/tutor side, teachers seemed better supported by the presentation of compact, intuitive and just-in-time information (i.e. actions that have an educational interpretation or impact) about the monitored user or group. This gave them a clearer picture of how the class was performing and enabled them to address performance issues by adapting the resources and the teaching (and learning) approach accordingly. Although a drawback was identified regarding cognitive overload, the data collected showed that users generally considered this kind of support useful. There are also indications that further analyses could usefully explore the effects that the availability and usage of such a tool introduces into teaching practices.
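
The abstract gives no implementation detail, but the core mechanism it describes (extract, aggregate and represent user actions as activity indicators, with peer comparison) can be illustrated. Below is a minimal Python sketch under assumed data shapes; `LogEvent`, the action names and the aggregation are hypothetical, not taken from GVIS.

```python
# Illustrative sketch (not the actual GVIS implementation): aggregating raw
# platform log events into per-learner indicators and a peer-comparison view,
# the kind of data an Open Learner Model widget could visualise.
from collections import Counter
from dataclasses import dataclass

@dataclass
class LogEvent:
    learner_id: str
    action: str   # e.g. "view_resource", "post_forum", "submit_quiz"

def learner_indicators(events):
    """Count educationally relevant actions per learner."""
    per_learner = {}
    for e in events:
        per_learner.setdefault(e.learner_id, Counter())[e.action] += 1
    return per_learner

def peer_comparison(per_learner, action):
    """Pair each learner's count for an action with the class average."""
    counts = {lid: c.get(action, 0) for lid, c in per_learner.items()}
    avg = sum(counts.values()) / len(counts) if counts else 0.0
    return {lid: (n, avg) for lid, n in counts.items()}

events = [
    LogEvent("alice", "post_forum"), LogEvent("alice", "view_resource"),
    LogEvent("bob", "view_resource"), LogEvent("bob", "view_resource"),
]
indicators = learner_indicators(events)
print(peer_comparison(indicators, "view_resource"))
# {'alice': (1, 1.5), 'bob': (2, 1.5)}
```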

    Cloudarmor: Supporting Reputation-Based Trust Management for Cloud Services

    Cloud services have become predominant in the current technological era. Given the rich set of features they provide, consumers want to access cloud services while preserving the privacy of their data, and protecting cloud services in such an environment becomes a significant problem. Research has therefore begun on systems that let users access cloud services without losing the privacy of their data. Trust management combined with an identity model makes sense in this case: the identity model maintains the authentication and authorization of the components involved in the system, while the trust-based model provides a dynamic way of identifying issues and attacks and taking appropriate actions. A trust-management-based system, however, introduces a new set of challenges, such as reputation-based attacks, availability of components, and misleading trust feedback; collusion attacks and Sybil attacks form a significant part of these challenges. This paper aims to solve these problems by introducing a credibility model on top of a new trust management model that addresses these use cases and also provides reliability and availability.
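
To make the idea of a credibility model concrete, here is a minimal sketch of credibility-weighted feedback aggregation. The weighting scheme and numbers are assumptions for illustration only; CloudArmor's actual credibility model is more elaborate.

```python
# Minimal sketch of credibility-weighted trust aggregation. Feedback from
# low-credibility sources (e.g. freshly created identities, a common Sybil
# pattern) is discounted so it cannot easily inflate a service's reputation.

def trust_score(feedbacks):
    """Aggregate (rating, credibility) pairs into a single trust score.

    rating      -- consumer feedback in [0, 1]
    credibility -- weight in [0, 1] discounting unreliable feedback sources
    """
    weighted = sum(r * c for r, c in feedbacks)
    total = sum(c for _, c in feedbacks)
    return weighted / total if total else 0.0

# A burst of perfect ratings from low-credibility identities (a simple
# collusion pattern) barely moves the score:
honest = [(0.6, 0.9), (0.7, 0.8)]
colluders = [(1.0, 0.05)] * 10
print(round(trust_score(honest), 3))             # 0.647
print(round(trust_score(honest + colluders), 3)) # 0.727 (unweighted mean: ~0.942)
```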

    A Task-Centered Visualization Design Environment and a Method for Measuring the Complexity of Visualization Designs

    Recent years have seen growing interest in the emerging area of computer security visualization, which is about developing visualization methods to help solve computer security problems. In this thesis, we first present a method for measuring the complexity of information visualization designs. Complexity is measured in terms of visual integration, the number of separable dimensions for each visual unit, the complexity of interpreting the visual attributes, the number of visual units, and the efficiency of visual search. This method is designed to help fellow developers quickly evaluate multiple design choices, and potentially enables computers to automatically measure the complexity of visualization designs. We also analyze the design space of network security visualization. Our main contribution is a new taxonomy that consists of three dimensions: data, visualizations, and tasks. Each dimension is further divided into hierarchical layers, and for each layer we have identified key parameters for making major design choices. This taxonomy provides a comprehensive framework that can guide network security visualization developers in systematically exploring the design space and making informed design decisions. It can also help developers or users systematically evaluate existing network security visualization techniques and systems, and it helps developers identify gaps in the design space and create new techniques. The taxonomy showed that most existing computer security visualization programs are data-centered. However, some studies have shown that task-centered visualization is perhaps more effective. To test this hypothesis, we propose a task-centered visualization design framework, in which tasks are explicitly identified and organized and visualizations are constructed for specific tasks and their related data parameters. The centerpiece of this framework is a task tree that dynamically links the raw data with automatically generated visualizations. The task tree serves as a high-level interaction technique that allows users to conduct problem solving naturally at the task level, while still giving end users flexible control over visualization construction. This work is currently being extended by building a prototype visualization system based on a Task-centered Visualization Design Architecture.
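
As a rough illustration of the kind of factor-based scoring the abstract describes, the toy sketch below combines the five named factors into a single score. The encoding of each factor and the weights are assumptions made for illustration, not the thesis's actual method.

```python
# Toy sketch of a factor-based complexity score for a visualization design.
# The five factors mirror those named in the abstract; the formula and
# weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class VisDesign:
    visual_units: int            # distinct glyphs/marks on screen
    separable_dims: int          # separable visual dimensions per unit
    attr_interpretation: float   # 1 (direct, e.g. length) .. 3 (learned mapping)
    integration_steps: int       # cross-view lookups needed to read one fact
    search_efficiency: float     # 1 = pre-attentive "pop-out" .. 0 = serial scan

def complexity(d: VisDesign) -> float:
    """Higher = harder to read. A crude weighted sum over the five factors."""
    return (0.01 * d.visual_units
            + 1.0 * d.separable_dims
            + 1.5 * d.attr_interpretation
            + 2.0 * d.integration_steps
            + 2.0 * (1.0 - d.search_efficiency))

scatter = VisDesign(visual_units=200, separable_dims=2,
                    attr_interpretation=1.0, integration_steps=0,
                    search_efficiency=0.9)
linked_views = VisDesign(visual_units=50, separable_dims=3,
                         attr_interpretation=2.0, integration_steps=2,
                         search_efficiency=0.4)
print(complexity(scatter), complexity(linked_views))  # ~5.7 vs ~11.7
```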

    Effect of Neighborhood Approximation on Downstream Analytics

    Nearest neighbor search algorithms have been successful in finding practically useful solutions to computationally difficult problems. In the nearest neighbor search problem, the brute-force approach is often more efficient than other algorithms for high-dimensional spaces. A special case exists for objects represented as sparse vectors, where algorithms take advantage of the fact that an object has a zero value for most features. In general, since exact nearest neighbor search methods suffer from the “curse of dimensionality,” many practitioners use approximate nearest neighbor search algorithms when faced with high dimensionality or large datasets. It is known, to a reasonable degree, that relying on approximate nearest neighbors introduces some error into the solutions of the underlying data mining problems the neighbors are used to solve. However, no one has attempted to quantify this error or to provide practitioners with guidance in choosing appropriate search methods for their task. In this thesis, we conduct several experiments on recommender systems with the goal of determining the degree to which approximate nearest neighbor algorithms are subject to these error-propagation problems. Additionally, we provide persuasive evidence on the trade-off between search performance and analytics effectiveness. Our experimental evaluation demonstrates that a state-of-the-art approximate nearest neighbor search method (L2KNNGApprox) is not an effective solution in most cases. When tuned to achieve high search recall (80% or higher), it provides fairly competitive recommendation performance compared to an efficient exact search method but offers no advantage in terms of efficiency (0.1x–1.5x speedup). Low search recall (<60%) leads to poor recommendation performance. Finally, medium recall values (60%–80%) lead to reasonable recommendation performance but are hard to achieve and offer only a modest gain in efficiency (1.5x–2.3x).
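
The recall measurement at the heart of this trade-off is easy to reproduce in miniature. The sketch below computes the search recall of an approximate method against exact brute-force search on random data; random-projection candidate generation merely stands in for L2KNNGApprox, whose internals are not described here.

```python
# Sketch of measuring "search recall": the overlap between approximate and
# exact k-nearest-neighbour lists. Errors here propagate into whatever
# downstream analytics (e.g. recommendations) consume the neighbours.
import numpy as np

rng = np.random.default_rng(0)
items = rng.standard_normal((5000, 64))   # e.g. item latent factors
query = rng.standard_normal(64)
k = 10

# Exact: brute-force L2 distances (often competitive in high dimensions).
exact = np.argsort(np.linalg.norm(items - query, axis=1))[:k]

# Approximate: rank in a random low-dimensional projection, keep a candidate
# pool, then re-rank only the candidates exactly.
proj = rng.standard_normal((64, 8))
cand = np.argsort(np.linalg.norm(items @ proj - query @ proj, axis=1))[:200]
approx = cand[np.argsort(np.linalg.norm(items[cand] - query, axis=1))[:k]]

recall = len(set(exact) & set(approx)) / k
print(f"search recall@{k}: {recall:.2f}")
```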

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS Think-Tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and a socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives that measure the performance of multimedia search engines. From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.

    Personalization and usage data in academic libraries : an exploratory study

    Personalization is a service pattern for ensuring proactive information delivery tailored to an individual, based on the learned or perceived needs of that person. It is credited as a remedy for information explosion, especially in the academic environment, and its importance to libraries has been described to the extent of justifying their very existence. Numerous novel approaches and technical specifications have been put forward for realizing personalization in libraries. However, the literature shows that implementation of such services in libraries is minimal, which implies the need for a thorough analysis and discussion of the issues underlying the practicality of this service in the library environment. This study was initiated by that need, with the objective of finding answers to questions related to library usage data, user profiles and privacy, which are among the factors determining the success of personalized services in academic libraries. To find comprehensive answers, five distinct cases representing different approaches to academic library personalization were chosen for thorough analysis, and the themes extracted from them were substantiated by an extensive literature review. Moreover, to gather more information, unstructured questions were presented to the libraries running the services. The overall finding shows that personalization can be realized in academic libraries, but it has to address issues related to collecting and processing user/usage data, user interest management, safeguarding user privacy, library privacy laws and other important matters discovered in the course of the study.

    From Social Data Mining to Forecasting Socio-Economic Crisis

    Socio-economic data mining has great potential for gaining a better understanding of the problems our economy and society are facing, such as financial instability, shortages of resources, or conflicts. Without large-scale data mining, progress in these areas seems hard or impossible. Therefore, a suitable, distributed data mining infrastructure and research centers should be built in Europe. It also appears appropriate to build a network of Crisis Observatories. These can be imagined as laboratories devoted to gathering and processing enormous volumes of data on natural systems such as the Earth and its ecosystem, as well as on human techno-socio-economic systems, so as to gain early warnings of impending events. Reality mining provides the chance to adapt more quickly and more accurately to changing situations. Further opportunities arise from individually customized services, which however should be provided in a privacy-respecting way. This requires the development of novel ICT (such as a self-organizing Web), but most likely new legal regulations and suitable institutions as well. As long as such regulations are lacking on a worldwide scale, it is in the public interest that scientists explore what can be done with the huge data available. Big data do have the potential to change or even threaten democratic societies; the same applies to sudden and large-scale failures of ICT systems. Therefore, dealing with data must be done with a large degree of responsibility and care. Self-interests of individuals, companies or institutions have limits where the public interest is affected, and public interest is not a sufficient justification to violate the human rights of individuals. Privacy is a high good, as confidentiality is, and damaging it would have serious side effects for society.

    A holistic semantic based approach to component specification and retrieval

    Component-Based Development (CBD) has been broadly used in software development, as it enhances productivity and reduces the costs and risks involved in systems development. It has become a well-understood and widely used technology for developing not only large enterprise applications but a whole spectrum of software applications, as it offers fast and flexible development. However, driven by the continuous expansion of software applications, the increase in component varieties and sizes, and the evolution from local to global component repositories, the so-called component mismatch problem has become an even more severe hurdle for component specification and retrieval. This problem not only prevents CBD from reaching its full potential, but also hinders the acceptance of many existing component repositories. To overcome this problem, existing approaches have engaged a variety of technologies to support better component specification and retrieval, ranging from the early syntax-based (traditional) approaches to the recent semantic-based ones. Although different technologies have been proposed to describe component specifications and/or user queries accurately, the existing semantic-based approaches still fail to achieve the goals desired for present-day component reuse: being precise, automated, semantic-based and domain-capable. This thesis proposes the MVICS-based approach, aimed at achieving holistic, semantic-based and adaptation-aware component specification and retrieval. As its foundation, a Multiple-Viewed and Interrelated Component Specification ontology model (MVICS) is first developed for component specification and repository building. The MVICS model provides an ontology-based architecture for specifying components from a range of perspectives; it integrates the knowledge of Component-Based Software Engineering (CBSE) and supports ontology evolution to reflect continuing developments in CBD and components. A formal definition of the MVICS model is presented, which ensures the rigorousness of the model and supports a high level of automation in retrieval. Furthermore, the MVICS model has a smooth mechanism for integrating with domain-related software system ontologies; such integration enhances the function and application scope of the MVICS model by bringing more domain semantics into component specification and retrieval. Another feature of the proposed approach is that the effect of possible component adaptation is extended to related components. Finally, a comprehensive profile of the resulting components presents the search results to the user, from a summary down to the details of satisfied and unsatisfied requirements. These features are well integrated, which enables a holistic view in semantic-based component specification and retrieval. A prototype tool was developed to exert the power of the MVICS model in expressing semantics and automating component specification and retrieval; the tool implements the complete process of component search. Three case studies have been undertaken to illustrate and evaluate the usability and correctness of the approach, in terms of supporting accurate component specification and retrieval, seamless linkage with a domain ontology, adaptive component suggestion, and a comprehensive profile of the resulting components.
A conclusion is drawn from an analysis of the feedback from the case studies, which shows that the proposed approach can be deployed in real-life industrial development. The benefits of MVICS include not only improved component search precision and recall, reduced development time and repository maintenance effort, but also decreased human intervention in CBD.
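
As an illustration of multi-view, ontology-assisted retrieval that reports satisfied and unsatisfied facets per result, here is a small Python sketch. The facets, the toy ontology and the matching rule are assumptions for illustration, not the MVICS model itself.

```python
# Illustrative sketch (not the MVICS implementation): retrieving components
# specified from several views, expanding query terms through a tiny
# ontology, and reporting satisfied vs. unsatisfied facets per result.
ONTOLOGY = {  # hypothetical term -> equivalent/narrower terms
    "authentication": {"authentication", "login", "access control"},
    "payment": {"payment", "billing", "checkout"},
}

COMPONENTS = {  # hypothetical multi-view component specifications
    "AuthLib": {"function": {"login"}, "domain": {"web"}, "interface": {"rest"}},
    "PayCore": {"function": {"billing"}, "domain": {"web"}, "interface": {"soap"}},
}

def expand(term):
    """Expand a query term to its ontology neighbourhood."""
    return ONTOLOGY.get(term, {term})

def retrieve(query):
    """query: facet -> required term. Returns a per-component result profile."""
    results = {}
    for name, spec in COMPONENTS.items():
        satisfied, unsatisfied = [], []
        for facet, term in query.items():
            (satisfied if expand(term) & spec.get(facet, set())
             else unsatisfied).append(facet)
        results[name] = {"satisfied": satisfied, "unsatisfied": unsatisfied}
    return results

print(retrieve({"function": "authentication", "interface": "rest"}))
# AuthLib satisfies both facets; PayCore satisfies neither.
```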

    A Usability Approach to Improving the User Experience in Web Directories

    Submitted for the degree of Doctor of Philosophy, Queen Mary, University of London.