    Contextualized and personalized location-based services

    Advances in the technologies of smart mobile devices and tiny sensors, together with the increase in the number of web resources, open up a plethora of new mobile information services where people can acquire and disseminate information at any place and any time. Location-based services (LBS) are characterized by providing users with useful local information, i.e. information that belongs to a particular domain of interest to the user and can be of use while the user remains in a particular area. In addition, LBS need to take into account the interactions and dependencies between services, users and context when filtering and delivering information, in order to fulfill the needs and constraints of mobile users. We argue that this raises a series of technical challenges in terms of data semantics and infrastructure, context-awareness and personalization, and query formulation and answering. These challenges cannot be met by simply extending traditional data management strategies; they call for a new solution.

    Firstly, we propose a semantic LBS infrastructure based on a modularized ontology approach. We elaborate a core ontology composed mainly of three modules describing services, users and contexts. The core ontology presents an abstract view (a model) of all information in LBS. In contrast, data describing the instances (of services, users and actual contexts) are stored in three independent data stores, called the service profiles, user profiles and context profiles. These data are semantically aligned with the concepts in the core ontology through a set of mappings. This approach enables the distributed data sources to be maintained in an autonomous manner, which is well adapted to the high dynamics and mobility of the data sources.

    Secondly, we separately address the functions, features and modelling of the three major players in LBS, i.e. services, contexts and users. We then define a set of constructs to represent their interactions and inter-dependencies and illustrate how these semantic constructs contribute to personalized and contextualized query processing. Service classes are organized in a taxonomy that distinguishes services by their business functions; this concept hierarchy helps to analyze and reformulate users' queries. We introduce three new kinds of relationships in the service module to enrich the semantics of interactions and dependencies between services. We identify five key components of context in LBS and regard them as a semantic contextual basis for LBS. Component contexts are related together by specific composition relationships that can describe spatio-temporal constraints. A user profile contains personal information about a given user and possibly a set of self-defined rules, which offer hints on what the user likes or dislikes and what could attract him or her. In the core ontology, clustering users with common features supports cooperative query answering. Each of the three modules of the core ontology is an ontology in itself; they are inter-related by relationships that link concepts belonging to two different modules. The LBS fully benefits from the modularized structure of the core ontology, which restricts the search space and facilitates the maintenance of each module.

    Finally, we study query reformulation and processing in LBS. Making the query interface tangible and providing rapid, relevant answers are typical concerns in all information services. Our query format not only obeys the "simple, tangible and effective" golden rules of user-interface design, but also satisfies the need for a domain-independent interface and emphasizes the importance of spatio-temporal constraints in LBS. With pre-defined spatio-temporal operators, users can easily specify in their queries the spatio-temporal availability they need for the services they are looking for. This eliminates most of the irrelevant answers that are usually generated by keyword-based approaches. Constraints in the various dimensions (what, when, where and what-else) can be expressed as a conjunctive query and then smoothly translated into RDF patterns. We illustrate our query answering strategy using the SPARQL syntax, and explain how relaxation can be performed with rules specified in the query relaxation profile.
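    As an illustration of this query style, the sketch below expresses a "what/when/where" conjunctive query as a SPARQL pattern over a small service-profile graph and falls back to a relaxed temporal constraint when the strict query returns no answer. The lbs: vocabulary, the sample data and the string-based relaxation step are hypothetical stand-ins, not the thesis' actual ontology or query relaxation profile (a minimal sketch using rdflib).

        # Minimal sketch with a hypothetical lbs: vocabulary: a conjunctive
        # what/when/where query as a SPARQL pattern, plus a naive relaxation
        # step that widens the temporal constraint if no answer is found.
        from rdflib import Graph

        g = Graph()
        g.parse(data="""
            @prefix lbs: <http://example.org/lbs#> .
            lbs:cafe42 a lbs:Restaurant ;
                lbs:locatedIn lbs:CityCenter ;
                lbs:opensAtHour 9 .
        """, format="turtle")

        STRICT = """
            PREFIX lbs: <http://example.org/lbs#>
            SELECT ?s WHERE {
              ?s a lbs:Restaurant ;              # what
                 lbs:locatedIn lbs:CityCenter ;  # where
                 lbs:opensAtHour ?h .            # when
              FILTER (?h <= 8)                   # temporal constraint
            }
        """

        # Relaxation (illustrative only): widen the temporal window and retry.
        RELAXED = STRICT.replace("FILTER (?h <= 8)", "FILTER (?h <= 10)")

        answers = list(g.query(STRICT)) or list(g.query(RELAXED))
        for row in answers:
            print(row.s)   # -> http://example.org/lbs#cafe42 (from the relaxed query)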

    CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap

    After addressing the state of the art during the first year of CHORUS and establishing the existing landscape in multimedia search engines, we identified and analyzed gaps within the European research effort during our second year. In this period we focused on three directions, notably technological issues, user-centred issues and use-cases, and socio-economic and legal aspects. These were assessed by two central studies: firstly, a concerted vision of the functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with a related discussion of the requirements for technological challenges. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as national initiative coordinators. Based on the obtained feedback we identified two types of gaps, namely core technological gaps that involve research challenges, and “enablers”, which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.

    Building Context-Aware Access Control In Enterprise Ontologies

    Knowledge-centric management (KCM) has become a key strategy for competitive edge. As an essential element of KCM, an enterprise ontology represents the knowledge of an organization. Thus, the need for securing enterprise ontologies (EO) becomes imperative. Adequate access control is a major component of ontology security. However, access control for EO is largely neglected in the information systems (IS) literature. This paper presents the first research to fill this gap. I propose five requirements for good access-control solutions for EO. The proposed solution offers an architecture framework that meets the five requirements. Semantic Web technology is used to build context-aware access controls into EO. My proposal includes a novel resolution for policy conflicts. This study provides the first design of fine-grained and dynamically adjusted access authorizations.
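    To make the idea of context-aware access control concrete, the sketch below evaluates a request against policies whose applicability depends on request context (role, network location, time of day). The policy set, attribute names and the deny-overrides strategy are illustrative assumptions only; the paper's own, novel conflict-resolution mechanism is not reproduced here.

        # Illustrative sketch only: context-aware access decisions over ontology
        # resources, with a simple deny-overrides strategy as a stand-in for the
        # paper's (unspecified here) policy-conflict resolution.
        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class Context:
            role: str
            location: str   # e.g. "intranet" or "vpn-external"
            hour: int       # 0-23, local time of the request

        @dataclass
        class Policy:
            effect: str                              # "permit" or "deny"
            applies: Callable[[str, Context], bool]  # does this policy match?

        def decide(resource: str, ctx: Context, policies: List[Policy]) -> str:
            matching = [p.effect for p in policies if p.applies(resource, ctx)]
            if "deny" in matching:                   # deny-overrides: any matching deny wins
                return "deny"
            return "permit" if "permit" in matching else "deny"   # default-deny

        policies = [
            Policy("permit", lambda r, c: c.role == "analyst" and r.startswith("eo:")),
            Policy("deny",   lambda r, c: c.location != "intranet"),    # network context
            Policy("deny",   lambda r, c: not (8 <= c.hour <= 18)),     # business hours
        ]

        print(decide("eo:ProcessModel", Context("analyst", "intranet", 10)))       # permit
        print(decide("eo:ProcessModel", Context("analyst", "vpn-external", 10)))   # deny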

    Hybrid human-AI driven open personalized education

    Attaining those skills that match labor market demand is getting increasingly complicated, as prerequisite knowledge, skills, and abilities evolve dynamically through an uncontrollable and seemingly unpredictable process. Furthermore, people's interest in gaining knowledge pertaining to their personal life (e.g., hobbies and life-hacks) has also increased dramatically in recent decades. In this situation, anticipating and addressing learning needs are fundamental challenges to twenty-first-century education. The need for such technologies has escalated due to the COVID-19 pandemic, where online education became a key player in all types of training programs. The burgeoning availability of data, not only on the demand side but also on the supply side (in the form of open/free educational resources), coupled with smart technologies, may provide fertile ground for addressing this challenge. Therefore, this thesis aims to contribute to the literature on the utilization of open and free online educational resources toward goal-driven personalized informal learning by developing a novel human-AI based system, called eDoer. In this thesis, we discuss all the new knowledge that was created in order to complete the system development, which includes 1) prototype development and qualitative user validation, 2) decomposing the preliminary requirements into meaningful components, 3) implementation and validation of each component, and 4) a final requirement analysis followed by combining the implemented components in order to develop and validate the planned system (eDoer). All in all, our proposed system 1) derives the skill requirements for a wide range of occupations (as skills and jobs are typical goals in informal learning) through an analysis of online job vacancy announcements, 2) decomposes skills into learning topics, 3) collects a variety of open/free online educational resources that address those topics, 4) checks the quality of those resources and their topic relevance using our intelligent prediction models, 5) helps learners to set their learning goals, 6) recommends personalized learning pathways and learning content based on individual learning goals, and 7) provides assessment services for learners to monitor their progress towards their desired learning objectives. Accordingly, we created a learning dashboard focusing on three Data Science-related jobs and conducted an initial validation of eDoer through a randomized experiment. Controlling for the effects of prior knowledge as assessed by the pretest, the randomized experiment provided tentative support for the hypothesis that learners who engaged with personalized eDoer recommendations attain higher scores on the posttest than those who did not. The hypothesis that learners who received content personalized in terms of format, length, level of detail, and content type would achieve higher scores than those receiving non-personalized content was not supported by a statistically significant result.
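    The pipeline described above lends itself to a simple data model; the sketch below shows how a target job could be mapped to skills, skills to topics, and topics to ranked open educational resources, producing a personalized learning pathway. The job-skill-topic mappings, the quality scores and the format-based personalization rule are hypothetical placeholders, not eDoer's actual models or data.

        # Minimal sketch with hypothetical data: goal (job) -> skills -> topics ->
        # ranked open educational resources, i.e. a personalized learning pathway.
        from dataclasses import dataclass
        from typing import Dict, List

        @dataclass
        class Resource:
            url: str
            topic: str
            quality: float    # stand-in for the quality/relevance prediction score
            fmt: str          # e.g. "video" or "text", used for personalization

        JOB_SKILLS: Dict[str, List[str]] = {"data-analyst": ["statistics", "python"]}
        SKILL_TOPICS: Dict[str, List[str]] = {
            "statistics": ["descriptive-stats", "hypothesis-testing"],
            "python": ["pandas-basics"],
        }

        def build_pathway(job: str, resources: List[Resource],
                          preferred_fmt: str = "video") -> List[Resource]:
            topics = [t for s in JOB_SKILLS[job] for t in SKILL_TOPICS[s]]
            pathway = []
            for topic in topics:                       # keep the goal's topic order
                candidates = [r for r in resources if r.topic == topic]
                # personalization: prefer the learner's format, then predicted quality
                candidates.sort(key=lambda r: (r.fmt == preferred_fmt, r.quality),
                                reverse=True)
                if candidates:
                    pathway.append(candidates[0])
            return pathway

        resources = [
            Resource("https://example.org/v1", "pandas-basics", 0.80, "video"),
            Resource("https://example.org/t1", "pandas-basics", 0.95, "text"),
        ]
        print([r.url for r in build_pathway("data-analyst", resources)])
        # -> ['https://example.org/v1'] (format preference outweighs raw quality here)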