
    Towards an Automated Semantic Data-driven Decision Making Employing Human Brain

    [EN] Decision making is time-consuming and costly, as it requires direct, intensive involvement of the human brain. Highly qualified experts cover a wide variety of specialisations, yet they are mostly unavailable on short notice: they may be physically remote, and/or unable to address all the problems they could address within the time available. Further, people increasingly base their intellectual labour on rapidly growing volumes of online data, content and computing resources, and the lack of corresponding scaling in the availability of human brain resources creates a bottleneck for intellectual labour. We discuss enabling direct interoperability between the Internet and the human brain, developing an "Internet of Brains" analogous to the "Internet of Things", in which real-life objects can be semantically modelled, interoperated with and controlled. The Web, the "Internet of Things" and the "Internet of Brains" will be connected through the same kind of semantic structures and will work in interoperation. Drawing on Brain-Computer Interfaces (BCIs), psychology and behavioural science, we discuss the feasibility of a decision-making infrastructure for the semantic transfer of human thoughts, thinking processes and communication directly to the Internet. This work has been partially funded by project DALICC, supported by the Austrian Research Promotion Agency (FFG) within the programme "Future ICT". Fensel, A. (2018). Towards an Automated Semantic Data-driven Decision Making Employing Human Brain. In: 2nd International Conference on Advanced Research Methods and Analytics (CARMA 2018). Editorial Universitat Politècnica de València. 167-175. https://doi.org/10.4995/CARMA2018.2018.8338
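As a minimal sketch of what such semantic modelling might look like, the snippet below describes a BCI-captured "thought event" as linked data in the same style as an IoT sensor reading, so that Web, IoT and brain-derived resources share one semantic layer. The vocabulary (iob:) and all class and property names are purely hypothetical illustrations, not taken from the paper.

```python
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical vocabulary, for illustration only; the paper does not define it.
IOB = Namespace("http://example.org/internet-of-brains#")

g = Graph()
g.bind("iob", IOB)

# A BCI-derived "thought event", modelled like any other IoT observation.
event = URIRef("http://example.org/events/42")
g.add((event, RDF.type, IOB.ThoughtEvent))
g.add((event, IOB.capturedBy, URIRef("http://example.org/devices/bci-headset-1")))
g.add((event, IOB.intendedAction, IOB.TurnOnLight))
g.add((event, IOB.confidence, Literal(0.87, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```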

    Semantic Technologies for Manuscript Descriptions — Concepts and Visions

    The contribution at hand relates recent developments in the area of the World Wide Web to codicological research. In the last number of years, an informational extension of the internet has been discussed and extensively researched: the Semantic Web. It has already been applied in many areas, including digital information processing of cultural heritage data. The Semantic Web facilitates the organisation and linking of data across websites, according to a given semantic structure. Software can then process this structural and semantic information to extract further knowledge. In the area of codicological research, many institutions are making efforts to improve the online availability of handwritten codices. If these resources could also employ Semantic Web techniques, considerable research potential could be unleashed. However, data acquisition from less structured data sources will be problematic. In particular, data stemming from unstructured sources needs to be made accessible to SemanticWeb tools through information extraction techniques. In the area of museum research, the CIDOC Conceptual Reference Model (CRM) has been widely examined and is being adopted successfully. The CRM translates well to Semantic Web research, and its concentration on contextualization of objects could support approaches in codicological research. Further concepts for the creation and management of bibliographic coherences and structured vocabularies related to the CRM will be considered in this chapter. Finally, a user scenario showing all processing steps in their context will be elaborated on
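A minimal sketch of how a manuscript description might be expressed against the CIDOC CRM with rdflib is given below. The CRM namespace is the published one, but the specific class and property choices (E22, P102, E35) follow CRM 7.x naming and should be checked against the CRM version and application profile actually in use; the manuscript URI is invented.

```python
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, RDFS

# CIDOC CRM namespace; verify class/property names against the CRM version used.
CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")

g = Graph()
g.bind("crm", CRM)

# Invented URIs for an example codex and its title.
codex = URIRef("http://example.org/manuscripts/example-codex-1")
title = URIRef("http://example.org/manuscripts/example-codex-1/title")

g.add((codex, RDF.type, CRM["E22_Human-Made_Object"]))
g.add((codex, CRM.P102_has_title, title))
g.add((title, RDF.type, CRM.E35_Title))
g.add((title, RDFS.label, Literal("Example codex title", lang="en")))

print(g.serialize(format="turtle"))
```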

    Semantic Web Reasoning by Swarm Intelligence

    Semantic Web reasoning systems are confronted with the task of processing growing amounts of distributed, dynamic resources. This paper presents a novel way of approaching the challenge by RDF graph traversal, exploiting the advantages of swarm intelligence. The nature-inspired and index-free methodology is realised by self-organising swarms of autonomous, lightweight entities that traverse RDF graphs by following paths, aiming to instantiate pattern-based inference rules. The method is evaluated on the basis of a series of simulation experiments with regard to desirable properties of Semantic Web reasoning, focussing on anytime behaviour, adaptiveness and scalability.
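A toy sketch of the underlying idea follows, under the assumption that the RDF graph is held as a set of triples and that each lightweight agent performs a random walk trying to instantiate a single rule (here, rdfs:subClassOf transitivity). It illustrates path-following rule instantiation only, not the authors' system.

```python
import random

# Toy RDF graph as (subject, predicate, object) triples.
SUBCLASS = "rdfs:subClassOf"
triples = {
    ("ex:Student", SUBCLASS, "ex:Person"),
    ("ex:Person", SUBCLASS, "ex:Agent"),
    ("ex:Agent", SUBCLASS, "ex:Thing"),
}

def agent_walk(start, steps=3):
    """One lightweight agent follows subClassOf edges from `start` and
    emits transitive subClassOf triples inferred along its path."""
    inferred = set()
    visited = [start]
    node = start
    for _ in range(steps):
        out = [(p, o) for (s, p, o) in triples if s == node and p == SUBCLASS]
        if not out:
            break
        _, nxt = random.choice(out)
        visited.append(nxt)
        # Every node visited before the previous one is a subclass of `nxt`.
        for earlier in visited[:-2]:
            inferred.add((earlier, SUBCLASS, nxt))
        node = nxt
    return inferred

# A "swarm": launch one agent per subject node and pool what they infer.
new_triples = set()
for s in {s for (s, _, _) in triples}:
    new_triples |= agent_walk(s)

print(new_triples - triples)
```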

    Enhancing Location-Based Social Media Network Services with Semantic Technologies: A Review

    Today’s location-based social media services have gone beyond merely sharing users’ real-time locations via the Internet; they now serve as a bridge between the real world and the online world. Location-based social media applications can now recommend points of interest to users based on geographical information and user profiles gathered on social media networks. Semantic Web technology provides tools, platforms and techniques to extract meaning from, process and integrate structured data from the social web and other sources. The rapid increase in the number of social media networks and the enormous amount of geographical and social data flowing across mobile and web platforms have not only provided rich data sources for web applications but also helped developers to facilitate location-based social media services. However, an unprecedented amount of noise and unstructured data exists on these networks, making knowledge representation, point-of-interest recommendation and precise search engine results cumbersome to achieve. In this paper, we present a review of various semantic technologies that could be deployed to enhance location-based social media services, with emphasis on architectures, tools, supporting technologies and the pros and cons of each of these technologies.
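As an illustration of the kind of semantic lookup such services can build on, here is a minimal sketch that queries the public DBpedia SPARQL endpoint for points of interest of a given type near a city. Endpoint availability and the coverage of the DBpedia terms used (dbo:Museum, dbo:location) are assumptions to verify.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Public DBpedia endpoint; availability and data coverage vary over time.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)

# Points of interest typed as museums and located in Vienna.
sparql.setQuery("""
    PREFIX dbo:  <http://dbpedia.org/ontology/>
    PREFIX dbr:  <http://dbpedia.org/resource/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?poi ?name WHERE {
        ?poi a dbo:Museum ;
             dbo:location dbr:Vienna ;
             rdfs:label ?name .
        FILTER (lang(?name) = "en")
    } LIMIT 10
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["name"]["value"], row["poi"]["value"])
```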

    A collaborative citizen science platform for real-time volunteer computing and games

    Volunteer computing (VC) or distributed computing projects are common in the citizen cyberscience (CCS) community and present extensive opportunities for scientists to make use of computing power donated by volunteers to undertake large-scale scientific computing tasks. Volunteer computing is generally a non-interactive process for those contributing computing resources to a project, whereas volunteer thinking (VT), or distributed thinking, allows volunteers to participate interactively in citizen cyberscience projects to solve human computation tasks. In this paper we describe the integration of three tools: the Virtual Atom Smasher (VAS) game developed by CERN, LiveQ, a job distribution middleware, and CitizenGrid, an online platform for hosting and providing computation to CCS projects. This integration demonstrates how volunteer computing and volunteer thinking can be combined to help address the scientific and educational goals of games like VAS. The paper introduces the three tools and provides details of the integration process, along with further potential usage scenarios for the resulting platform. (12 pages, 13 figures)
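Purely as an illustration of the job-distribution pattern such an integration relies on (this is not the actual LiveQ or CitizenGrid API; the endpoint and field names below are invented), a volunteer client might poll a middleware for work units and report results back:

```python
import time
import requests

# Invented endpoint and payload fields, for illustration only.
MIDDLEWARE_URL = "https://example.org/middleware/api"

def run_work_unit(job):
    """Placeholder for the simulation work a volunteer machine would perform."""
    return {"job_id": job["id"], "histogram": [0.1, 0.4, 0.5]}

while True:
    resp = requests.get(f"{MIDDLEWARE_URL}/jobs/next", timeout=30)
    if resp.status_code == 204:   # no work available right now
        time.sleep(10)
        continue
    job = resp.json()
    result = run_work_unit(job)
    requests.post(f"{MIDDLEWARE_URL}/jobs/{job['id']}/result", json=result, timeout=30)
```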

    Scalable statistical learning for relation prediction on structured data

    Relation prediction seeks to predict unknown but potentially true relations by revealing missing relations in available data, by predicting future events based on historical data, and by making predicted relations retrievable by query. The approach developed in this thesis can be used for a wide variety of purposes, including predicting likely new friends on social networks, attractive points of interest for an individual visiting an unfamiliar city, and associations between genes and particular diseases. In recent years, relation prediction has attracted significant interest in both research and application domains, partially due to the increasing volume of published structured data and background knowledge. In the Linked Open Data initiative of the Semantic Web, for instance, entities are uniquely identified such that the published information can be integrated into applications and services, and the rapid increase in the availability of such structured data creates excellent opportunities as well as challenges for relation prediction. This thesis focuses on the prediction of potential relations by exploiting regularities in data using statistical relational learning algorithms and by applying these methods to relational knowledge bases, in particular to Linked Open Data. We review representative statistical relational learning approaches, e.g., Inductive Logic Programming and Probabilistic Relational Models. While logic-based reasoning can infer and include new relations via deduction by using ontologies, machine learning can be exploited to predict new relations (with some degree of certainty) via induction, purely based on the data. Because the application of machine learning approaches to relation prediction usually requires handling large datasets, we also discuss the scalability of machine learning as a solution to relation prediction, as well as the significant challenge posed by incomplete relational data (such as social network data, which is often much more extensive for some users than for others). The main contribution of this thesis is to develop a learning framework called the Statistical Unit Node Set (SUNS) and to propose a multivariate prediction approach used in the framework. We argue that multivariate prediction approaches are most suitable for dealing with large, sparse data matrices. Depending on the characteristics and intended application of the data, the approach can be extended in different ways. We discuss and test two extensions of the approach, kernelization and a probabilistic method for handling complex n-ary relationships, in empirical studies based on real-world data sets. Additionally, this thesis contributes to the field of relation prediction by applying the SUNS framework to various domains. We focus on three applications: 1. In social network analysis, we present a combined approach of inductive and deductive reasoning for recommending movies to users. 2. In the life sciences, we address the disease gene prioritization problem. 3. For recommender systems, we describe and investigate the back-end of a mobile app called BOTTARI, which provides personalized location-based recommendations of restaurants.
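As a rough illustration of the matrix-completion style of multivariate prediction the abstract refers to, the sketch below reconstructs a sparse unit-relation matrix with a low-rank SVD and ranks unobserved entries. It is a generic sketch of the technique, not the authors' exact SUNS implementation, and the toy data are invented.

```python
import numpy as np

# Toy binary relation matrix: rows are statistical units (e.g. users),
# columns are candidate relations (e.g. "likes movie X"); a 0 may mean
# "unknown" rather than "false", which is what we want to fill in.
R = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=float)

# Rank-2 truncated SVD as a simple multivariate predictor: all columns are
# predicted jointly from the same low-dimensional representation of the rows.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Score the pairs that were 0 in the input; high scores suggest likely
# but unobserved relations worth recommending or verifying.
candidates = np.argwhere(R == 0)
scores = sorted(((R_hat[i, j], (i, j)) for i, j in candidates), reverse=True)
for score, (i, j) in scores[:3]:
    print(f"unit {i}, relation {j}: predicted score {score:.2f}")
```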