
    Messung von Personalisierung in computervermittelter Kommunikation

    Personalized web pages are explicitly designed to curate the information available to their users. By definition, they influence users' selectivity and need to be considered as a new factor affecting selection decisions. In most cases, the algorithms that determine recommendations are "black boxes" whose precise functionality remains opaque to researchers. Drawing on a pilot study of Google Search, this chapter argues that the rarely employed automated online experiment represents a promising method for studying "personalization effects": by systematically simulating user behavior, it can probe how personalization algorithms work. The method presented not only gives researchers access to the effects and contents of personalized web pages, but also reinforces scientific rigor through higher validity, transparency, and replicability of studies on selection behavior.
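
    To make the experimental logic concrete, here is a minimal, hypothetical Python sketch of the measurement step such an automated online experiment relies on: several simulated user profiles issue the same query, and the divergence of their result lists is quantified. The fetch_results stub, the canned data, and the Jaccard-based score are illustrative assumptions, not the chapter's actual instrumentation.

    from itertools import combinations
    from typing import Dict, List

    # Canned result lists standing in for what instrumented browser sessions
    # with different simulated histories would retrieve for the same query.
    CANNED: Dict[str, List[str]] = {
        "neutral":  ["a.example", "b.example", "c.example", "d.example"],
        "sports":   ["a.example", "e.example", "c.example", "f.example"],
        "politics": ["g.example", "b.example", "h.example", "d.example"],
    }

    def fetch_results(query: str, profile: str) -> List[str]:
        """Hypothetical stub: a real study would drive a browser session whose
        history embodies the simulated profile and record its result list."""
        return CANNED[profile]

    def jaccard(a: List[str], b: List[str]) -> float:
        """Set overlap between two result lists (1.0 = identical sets)."""
        sa, sb = set(a), set(b)
        return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

    def personalization_score(query: str, profiles: List[str]) -> float:
        """Mean pairwise divergence of result lists; 0.0 means every profile
        saw identical results, i.e., no measurable personalization."""
        results = {p: fetch_results(query, p) for p in profiles}
        pairs = list(combinations(profiles, 2))
        return sum(1.0 - jaccard(results[p], results[q]) for p, q in pairs) / len(pairs)

    print(personalization_score("news", ["neutral", "sports", "politics"]))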

    Promoting Programming Learning. Engagement, Automatic Assessment with Immediate Feedback in Visualizations

    The skill of programming is a key asset for every computer science student. Many studies have shown that it is a hard skill to learn, and the outcomes of programming courses have often been substandard. A range of methods and tools have therefore been developed to assist students' learning processes. One of the biggest fields in computer science education is the use of visualizations as a learning aid, and many visualization-based tools have been developed to support the learning process over the last few decades. The studies in this thesis focus on two visualization-based tools, TRAKLA2 and ViLLE. The thesis reports results from multiple empirical studies on how the introduction and use of these tools affect students' opinions and performance, and on the implications from a teacher's point of view. The results show that students preferred web-based exercises and felt that those exercises contributed to their learning. Use of the tools motivated students to work harder during their course, which was reflected in overall course performance and drop-out statistics. We have also shown that visualization-based tools can enhance the learning process; one of the key factors is a higher and more active level of engagement (see the Engagement Taxonomy by Naps et al., 2002). Automatic grading accompanied by immediate feedback helps students overcome obstacles during the learning process and grasp the key element of the learning task. Such tools can help us cope with the fact that many programming courses are overcrowded and teaching resources are limited: automatic assessment can be applied to the exercises best suited to the web (such as tracing and simulation), since it supports students' independent learning regardless of time and place. In summary, we can use a course's resources more efficiently to increase the quality of the learning experience for students and the teaching experience for the teacher, and even to increase student performance. The thesis also contributes methodological results that develop insight into the conduct of empirical evaluations of new tools or techniques. When evaluating a new tool, especially one accompanied by visualization, we need to give a proper introduction to the tool and to the graphical notation it uses. The standard procedure should also include capturing the screen with audio to confirm that the participants of the experiment are doing what they are supposed to do. By taking such measures when studying the learning impact of visualization support, we can avoid drawing false conclusions from our experiments. As computer science educators, we face two important challenges. First, we need to spread the message, in our own institutions and worldwide, about new, scientifically proven innovations in teaching such as TRAKLA2 and ViLLE. Second, we have relevant experience in conducting teaching-related experiments, and we can therefore help our colleagues learn the essential know-how of research-based improvement of their teaching. This approach can turn academic teaching into publications, significantly increase the adoption of new tools and techniques, and broaden the shared knowledge of best practices.
    In the future, we need to combine our forces and tackle these universal, common problems together by creating multi-national and multi-institutional research projects. We need to create a community and a platform in which we can share these best practices and at the same time conduct multi-national research projects easily.
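
    As an illustration of the automatic-assessment idea discussed above (a sketch in the spirit of tools like TRAKLA2 and ViLLE, not their actual implementation), the fragment below grades a sorting-trace exercise and returns immediate feedback at the first diverging step. The exercise format and feedback messages are assumptions for illustration.

    from typing import List

    def reference_trace(data: List[int]) -> List[List[int]]:
        """Array state after each pass of a simple bubble sort."""
        states, a = [], list(data)
        for i in range(len(a) - 1):
            for j in range(len(a) - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
            states.append(list(a))
        return states

    def grade(data: List[int], submitted: List[List[int]]) -> str:
        """Compare the student's trace to the reference and report the
        first diverging step, giving immediate, targeted feedback."""
        expected = reference_trace(data)
        for step, (want, got) in enumerate(zip(expected, submitted), start=1):
            if want != got:
                return f"Step {step}: expected {want}, got {got}"
        if len(submitted) != len(expected):
            return f"Expected {len(expected)} steps, got {len(submitted)}"
        return "Correct!"

    # A wrong first pass yields feedback pinpointing the mistake.
    print(grade([3, 1, 2], [[1, 3, 2], [1, 2, 3]]))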

    LL(O)D and NLP perspectives on semantic change for humanities research

    This paper presents an overview of LL(O)D and NLP methods, tools, and data for detecting and representing semantic change, with its main application in humanities research. The paper's aim is to provide the starting point for the construction of a workflow and a set of multilingual diachronic ontologies within the humanities use case of the COST Action Nexus Linguarum, European network for Web-centred linguistic data science (CA18209). The survey focuses on the essential aspects needed to understand the current trends and to build applications in this area of study.
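
    One widely used NLP technique on the detection side of this survey's topic is to compare word embeddings trained on different time slices of a corpus. The sketch below illustrates the general approach (it is not a method prescribed by the paper): align the two embedding spaces with orthogonal Procrustes, then rank words by how far their vectors moved. The random matrices merely stand in for real diachronic embeddings.

    import numpy as np

    def procrustes_align(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
        """Rotation W minimizing ||XW - Y||_F (orthogonal Procrustes)."""
        U, _, Vt = np.linalg.svd(X.T @ Y)
        return U @ Vt

    def semantic_change(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
        """Cosine distance per word between aligned period-1 and period-2
        embeddings (rows are words; shared vocabulary, same row order)."""
        Xa = X @ procrustes_align(X, Y)
        cos = np.sum(Xa * Y, axis=1) / (
            np.linalg.norm(Xa, axis=1) * np.linalg.norm(Y, axis=1)
        )
        return 1.0 - cos

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 100))   # stand-in embeddings, corpus slice 1
    Y = rng.normal(size=(1000, 100))   # stand-in embeddings, corpus slice 2
    print(np.argsort(semantic_change(X, Y))[-5:])  # indices of most changed words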

    Modelo de acesso a fontes em linguagem natural no governo electrónico

    For e-government to truly exist, it is necessary and crucial to make public information and documentation available and simple for citizens to access. A portion of these documents, not necessarily small, is unstructured and in natural language, and consequently beyond what current search systems can generally handle effectively. The thesis of this work is therefore that access to such content can be improved by systems that process natural language and create structured information, particularly if supported by semantics. To put this thesis to the test, the work was developed in three major phases: (1) design of a conceptual model that integrates the creation of structured information and its provision to various actors, in line with the vision of e-government 2.0; (2) definition and development of a prototype instantiating the key modules of this conceptual model, including ontology-based information extraction supported by examples of relevant information, knowledge management, and natural-language access; (3) assessment of the usability and acceptability of querying information as made possible by the prototype, and in consequence by the conceptual model, by users in a realistic scenario, including comparison with existing forms of access. In addition, at another level related to technology assessment rather than to the model, the performance of the subsystem responsible for information extraction was evaluated. The evaluation results show that the proposed model was perceived as more effective and more useful than the alternatives. Together with the prototype's performance at extracting information from documents, which is comparable to the state of the art, the results demonstrate the feasibility and advantages, with current technology, of using natural language processing and the integration of semantic information to improve access to unstructured natural-language content. The conceptual model and the prototype demonstrator are intended to contribute to the future existence of more sophisticated search systems that are better suited to e-government. Transparency in governance, active citizenship, and greater agility in interactions with public administration, among other goals, require that citizens and businesses have quick and easy access to official information, even if it was originally created in natural language.
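
    A minimal sketch of the pipeline the model envisages, using rdflib and assuming a toy regex-based extractor (the actual prototype used ontology-based extraction trained from examples): unstructured official text becomes RDF triples, which a natural-language front end could then query, shown here directly as SPARQL. The ex: namespace, the decree pattern, and the sample text are illustrative assumptions.

    import re
    from rdflib import Graph, Literal, Namespace, URIRef

    EX = Namespace("http://example.org/gov#")
    text = ("Decree 42/2010 was published on 2010-05-17. "
            "Decree 7/2011 was published on 2011-01-03.")

    # Extraction step: turn matched facts into RDF triples.
    g = Graph()
    for number, date in re.findall(
        r"Decree (\d+/\d+) was published on (\d{4}-\d{2}-\d{2})", text
    ):
        decree = URIRef(EX + number.replace("/", "-"))
        g.add((decree, EX.number, Literal(number)))
        g.add((decree, EX.publishedOn, Literal(date)))

    # Access step: "When was decree 42/2010 published?" rendered as SPARQL.
    q = """
    PREFIX ex: <http://example.org/gov#>
    SELECT ?date WHERE { ?d ex:number "42/2010" ; ex:publishedOn ?date . }
    """
    for row in g.query(q):
        print(row.date)  # 2010-05-17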

    Exploiting general-purpose background knowledge for automated schema matching

    The schema matching task is an integral part of the data integration process. It is usually the first step in integrating data. Schema matching is typically very complex and time-consuming, and it is therefore largely carried out by humans. One reason for the low degree of automation is the fact that schemas are often defined with deep background knowledge that is not itself present within the schemas. Overcoming the problem of missing background knowledge is a core challenge in automating the data integration process. In this dissertation, the task of matching semantic models, so-called ontologies, with the help of external background knowledge is investigated in depth in Part I. Throughout this thesis, the focus lies on large, general-purpose resources, since domain-specific resources are rarely available for most domains. Besides new knowledge resources, this thesis also explores new strategies to exploit such resources. A technical base for the development and comparison of matching systems is presented in Part II. The framework introduced here allows for simple and modularized matcher development (with background knowledge sources) and for extensive evaluations of matching systems. One of the largest structured sources of general-purpose background knowledge are knowledge graphs, which have grown significantly in size in recent years. However, exploiting such graphs is not trivial. In Part III, knowledge graph embeddings are explored, analyzed, and compared. Multiple improvements to existing approaches are presented. In Part IV, numerous concrete matching systems which exploit general-purpose background knowledge are presented. Furthermore, exploitation strategies and resources are analyzed and compared. This dissertation closes with a perspective on real-world applications.
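
    To illustrate one such exploitation strategy at its simplest (a sketch under assumed inputs, not one of the dissertation's systems): given a vector for each schema concept looked up in a pre-trained knowledge graph embedding, candidate correspondences between two schemas are proposed by cosine similarity above a threshold. The toy vectors below stand in for real embedding lookups.

    import numpy as np

    def cosine(u: np.ndarray, v: np.ndarray) -> float:
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def match(src: dict, tgt: dict, threshold: float = 0.8) -> list:
        """For each source concept, propose the most similar target concept
        if its cosine similarity exceeds the threshold."""
        out = []
        for s_name, s_vec in src.items():
            t_name, t_vec = max(tgt.items(), key=lambda kv: cosine(s_vec, kv[1]))
            sim = cosine(s_vec, t_vec)
            if sim >= threshold:
                out.append((s_name, t_name, sim))
        return out

    rng = np.random.default_rng(1)
    person = rng.normal(size=50)  # shared latent vector: Person ~ Author
    src = {"Person": person, "Document": rng.normal(size=50)}
    tgt = {"Author": person + 0.05 * rng.normal(size=50),
           "Paper": rng.normal(size=50)}
    print(match(src, tgt))  # proposes (Person, Author, ~1.0)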

    The Digital Classicist 2013

    This edited volume collects peer-reviewed papers that initially emanated from presentations at Digital Classicist seminars and conference panels. This wide-ranging volume showcases exemplary applications of digital scholarship to the ancient world and critically examines the many challenges and opportunities afforded by such research. The chapters included here demonstrate innovative approaches that drive forward the research interests of both humanists and technologists while showing that rigorous scholarship is as central to digital research as it is to mainstream classical studies. As with the earlier Digital Classicist publications, our aim is not to give a broad overview of the field of digital classics; rather, we present here a snapshot of some of the varied research of our members in order to engage with and contribute to the development of scholarship both in the fields of classical antiquity and Digital Humanities more broadly.

    Génération automatique d'alignements complexes d'ontologies

    The Linked Open Data (LOD) cloud is composed of data repositories whose data are described by vocabularies, also called ontologies. Each ontology has its own terminology and model, which makes them heterogeneous. To make the ontologies, and the data they describe, interoperable, ontology alignments establish correspondences, or links, between their entities. Many ontology matching systems generate simple correspondences, i.e., they link one entity to another. However, to overcome ontology heterogeneity, more expressive correspondences are sometimes needed. Finding this kind of correspondence is a tedious task that should be automated. In this thesis, an automatic complex matching approach based on a user's knowledge needs and on common instances is proposed. The field of complex alignment is still young, and little work addresses the evaluation of such alignments. To fill this gap, an automatic complex alignment evaluation system based on instance comparison is proposed, complemented by a synthetic dataset on the conference domain that extends a well-known alignment evaluation dataset.
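
    The instance-based evaluation idea can be made concrete with a small sketch (the names and data are illustrative assumptions, not the thesis's benchmark): a complex correspondence is treated as a query over a common instance set, and a generated correspondence is scored by the precision and recall of the instances it selects against those the reference correspondence selects.

    from typing import Callable, Set

    def instance_overlap_score(
        generated: Callable[[dict], bool],
        reference: Callable[[dict], bool],
        instances: list,
    ) -> tuple:
        """Precision/recall of the generated correspondence, measured on
        the instance sets selected by it and by the reference."""
        got: Set[int] = {i for i, x in enumerate(instances) if generated(x)}
        want: Set[int] = {i for i, x in enumerate(instances) if reference(x)}
        precision = len(got & want) / len(got) if got else 0.0
        recall = len(got & want) / len(want) if want else 0.0
        return precision, recall

    # Complex correspondence: "AcceptedPaper" in ontology A corresponds to
    # "Paper whose decision is 'accept'" in ontology B.
    instances = [
        {"type": "Paper", "decision": "accept"},
        {"type": "Paper", "decision": "reject"},
        {"type": "Demo", "decision": "accept"},
    ]
    generated = lambda x: x["decision"] == "accept"  # found by the matcher
    reference = lambda x: x["type"] == "Paper" and x["decision"] == "accept"
    print(instance_overlap_score(generated, reference, instances))  # (0.5, 1.0)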

    A Framework for Model-Driven Development of Mobile Applications with Context Support

    Model-driven development (MDD) of software systems has been a serious trend in different application domains over the last 15 years. While technologies, platforms, and architectural paradigms have changed several times since model-driven development processes were first introduced, their applicability and usefulness are discussed every time a new technological trend appears. Looking at the rapid market penetration of smartphones, software engineers are curious about how model-driven development technologies can deal with this novel and emergent domain of software engineering (SE). Indeed, software engineering of mobile applications provides many challenges that model-driven development can address. Model-driven development uses a platform-independent model as a crucial artifact. Such a model usually follows a domain-specific modeling language and separates the business concerns from the technical concerns. These platform-independent models can be reused for generating native program code for several mobile software platforms. However, a major drawback of model-driven development is that infrastructure developers must provide a fairly sophisticated model-driven development infrastructure before mobile application developers can create mobile applications in a model-driven way. Hence, the first part of this thesis deals with designing a model-driven development infrastructure for mobile applications. We follow a rigorous design process comprising a domain analysis, the design of a domain-specific modeling language, and the development of the corresponding model editors. To ensure that the code generators produce high-quality application code and that the resulting mobile applications follow a proper architectural design, we analyze several representative reference applications beforehand. Thus, the reader gains insight into both the features of mobile applications and the steps required to design and implement a model-driven development infrastructure. As a result of the domain analysis and the analysis of the reference applications, we identified context-awareness as a further important feature of mobile applications. Current software engineering tools do not sufficiently support the design and implementation of context-aware mobile applications. Although these tools (e.g., middleware approaches) support the definition and collection of contextual information, the adaptation of the mobile application must often be implemented by hand, at a low level of abstraction, by the mobile application developers. The second part of this thesis therefore demonstrates how context-aware mobile applications can be designed more easily by using a model-driven development approach. Techniques such as model transformation and model interpretation are used to adapt mobile applications to different contexts at design time or runtime. Moreover, model analysis and model-based simulation help mobile application developers evaluate a designed mobile application (i.e., an app model) with respect to certain contexts prior to its generation and deployment. We demonstrate the usefulness and applicability of the model-driven development infrastructure we developed with seven case examples, which showcase the design of mobile applications in different domains. We demonstrate the scalability of our model-driven development infrastructure with several performance tests, focusing on the generation time of mobile applications as well as their runtime performance. Moreover, the usability was successfully evaluated during several hands-on training sessions by real mobile application developers with different skill levels.
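
    The core generation step of such an MDD infrastructure can be sketched as follows (a toy illustration, not the thesis's DSL or generators): a platform-independent app model is transformed into platform-specific skeleton code, one generator per target platform. The model schema and the emitted Kotlin-style snippet are assumptions for illustration.

    # Platform-independent model: screens and their widgets, no platform detail.
    app_model = {
        "name": "CampusGuide",
        "screens": [
            {"id": "MapScreen", "widgets": ["map", "searchBar"]},
            {"id": "DetailScreen", "widgets": ["image", "text"]},
        ],
    }

    def generate_android(model: dict) -> str:
        """Emit a (toy) Android activity skeleton per screen; a second
        generator targeting another platform would reuse the same model."""
        out = []
        for screen in model["screens"]:
            out.append(
                f"class {screen['id']}Activity : AppCompatActivity() {{\n"
                f"    // widgets: {', '.join(screen['widgets'])}\n"
                f"}}"
            )
        return "\n\n".join(out)

    print(generate_android(app_model))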