Providing Service-based Personalization in an Adaptive Hypermedia System
Adaptive hypermedia is one of the most popular approaches to personalized information access. When the field started to emerge, the expectation was that soon nearly all published hypermedia content could be adapted to the needs, preferences, and abilities of its users. However, after a decade and a half, the gap between the total amount of hypermedia content available and the amount available in a personalized way is still quite large. In this work we propose a novel way of speeding up the development of new adaptive hypermedia systems. The gist of the approach is to extract the adaptation functionality out of the adaptive hypermedia system, encapsulate it in a standalone system, and offer adaptation as a service to client applications. Such a standalone adaptation provider reduces the development of adaptation functionality to configuration and compliance, and as a result enables new adaptive systems to be created faster and helps serve larger user populations with adaptively accessible content. To empirically prove the viability of our approach, we developed PERSEUS, a server of adaptation functionalities. First, we confirmed that the conceptual design of PERSEUS supports the realization of several widely used adaptive hypermedia techniques. Second, to demonstrate that the extracted adaptation does not create a significant computational bottleneck, we conducted a series of performance tests. The results show that PERSEUS is capable of providing a basis for implementing computationally challenging adaptation procedures and compares well with alternative, non-encapsulated adaptation solutions. As a result, even on modest hardware, large user populations can be served content adapted by PERSEUS.
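The "adaptation as a service" idea above can be sketched as follows. This is a minimal, hypothetical illustration of the architecture the abstract describes, not the actual PERSEUS API: all names (AdaptationService, annotate_links, the user-model fields) are illustrative assumptions.

```python
# Sketch of a standalone adaptation provider: adaptation logic lives in one
# service, and client applications obtain adapted content through a narrow
# interface. Adding a technique is configuration, not client-side coding.
# NOTE: all identifiers below are hypothetical, not the real PERSEUS API.

class AdaptationService:
    def __init__(self):
        # technique name -> callable(user_model, content) -> adapted content
        self._techniques = {}

    def register(self, name, func):
        """Configure a new adaptation technique without touching clients."""
        self._techniques[name] = func

    def adapt(self, name, user_model, content):
        """Clients delegate adaptation; they only comply with this interface."""
        return self._techniques[name](user_model, content)


# Example technique: adaptive link annotation driven by concept knowledge.
def annotate_links(user_model, links):
    def status(concept):
        known = user_model.get(concept, 0.0)
        return "learned" if known >= 0.7 else "ready" if known >= 0.3 else "new"
    return [(target, status(concept)) for target, concept in links]


service = AdaptationService()
service.register("link-annotation", annotate_links)

user = {"html-basics": 0.9, "css": 0.4}
links = [("intro.html", "html-basics"),
         ("style.html", "css"),
         ("js.html", "javascript")]
print(service.adapt("link-annotation", user, links))
# [('intro.html', 'learned'), ('style.html', 'ready'), ('js.html', 'new')]
```

The client never sees the adaptation logic; it only sends a user model and content and receives the adapted result, which is what makes the provider reusable across applications.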
Mobile robot teleoperation through eye-gaze (TeleGaze)
In most teleoperation applications the human operator is required to monitor the status of the robot, as well as issue controlling commands, for the whole duration of the operation. With a vision-based feedback system, monitoring the robot requires the operator to look at a continuous stream of images displayed on an interaction screen. The eyes of the operator, therefore, are fully engaged in monitoring and the hands in controlling. Since the eyes of the operator are engaged in monitoring anyway, inputs from their gaze can be used to aid in controlling. This frees the hands of the operator, either partially or fully, from controlling, so they can be used to perform any other necessary tasks. The challenge, however, lies in distinguishing between the gaze inputs intended for controlling and those that are purely monitoring. In mobile robot teleoperation, controlling mainly consists of issuing locomotion commands to drive the robot. Monitoring, on the other hand, consists of watching where the robot goes and looking out for obstacles in the route. Interestingly, there exists a strong correlation between humans' gazing behaviours and their moving intentions. This correlation has been exploited in this thesis to investigate novel means for mobile robot teleoperation through eye-gaze, named TeleGaze for short.
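The core mapping described above can be sketched as a small function: gaze falling in a central region of the video screen is treated as monitoring (no command), while gaze near the edges is interpreted as a driving intention. The region layout, sizes, and command names are illustrative assumptions, not the actual TeleGaze interface design.

```python
# Hypothetical sketch of gaze-to-command mapping for mobile robot
# teleoperation: a central "monitoring" region issues no command, while
# edge regions map to locomotion commands. Geometry is an assumption.

def gaze_to_command(x, y, width=800, height=600, margin=0.25):
    """Map a gaze point (pixels) on the feedback screen to a robot command."""
    # Central region: the operator is just watching the video feed.
    if (margin * width < x < (1 - margin) * width and
            margin * height < y < (1 - margin) * height):
        return "monitor"            # no locomotion command issued
    # Edge regions: interpret gaze as a driving intention.
    if x <= margin * width:
        return "turn-left"
    if x >= (1 - margin) * width:
        return "turn-right"
    return "forward" if y <= margin * height else "backward"


print(gaze_to_command(400, 300))   # centre of screen -> 'monitor'
print(gaze_to_command(50, 300))    # far left         -> 'turn-left'
print(gaze_to_command(400, 20))    # top edge         -> 'forward'
```

A real system would also need dwell times or similar filtering to separate deliberate command glances from natural scanning of the scene, which is exactly the controlling-versus-monitoring ambiguity the thesis addresses.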
High-level translation of adaptive hypermedia applications
In the early years of adaptive hypermedia research, a large number of special-purpose adaptive hypermedia systems (AHS) were developed, to illustrate research ideas or to serve a single application. Many of these systems are now obsolete. In this paper we propose to bring new life to these applications by means of translation to a general-purpose adaptive hypermedia architecture. We illustrate that this approach can work by showing a high-level translation from InterBook [2] to AHA! [5]. Such a translation consists of three parts: the structure of concepts and concept relationships needs to be translated, the adaptive behavior for these concept relationships must be defined, and the layout and presentation of the source application must be “simulated”. Our high-level translation covers all three parts.
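The structural part of such a translation can be sketched as follows: prerequisite relationships between concepts in the source system are rewritten as per-concept boolean requirement expressions for the target system. The dictionary format and expression syntax below are illustrative assumptions, not the actual InterBook or AHA! formats.

```python
# Hypothetical sketch of translating concept relationships: an
# InterBook-style prerequisite structure becomes AHA!-style requirement
# expressions. Formats and thresholds are assumptions for illustration.

def translate_prerequisites(prereqs):
    """prereqs: {concept: [prerequisite concepts]} -> {concept: requirement}."""
    rules = {}
    for concept, required in prereqs.items():
        if required:
            # A concept becomes desirable once every prerequisite is known.
            rules[concept] = " and ".join(
                f"{c}.knowledge > 50" for c in required)
        else:
            rules[concept] = "true"   # no prerequisites: always recommended
    return rules


source = {"intro": [],
          "variables": ["intro"],
          "loops": ["intro", "variables"]}
print(translate_prerequisites(source))
# {'intro': 'true',
#  'variables': 'intro.knowledge > 50',
#  'loops': 'intro.knowledge > 50 and variables.knowledge > 50'}
```

The other two parts of the translation, defining adaptive behavior for the relationships and simulating the source presentation, would sit on top of this structural mapping.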