
    Easing Embedding Learning by Comprehensive Transcription of Heterogeneous Information Networks

    Heterogeneous information networks (HINs) are ubiquitous in real-world applications. Meanwhile, network embedding has emerged as a convenient tool to mine and learn from networked data. As a result, it is of interest to develop HIN embedding methods. However, the heterogeneity in HINs introduces not only rich information but also potentially incompatible semantics, which poses special challenges to embedding learning in HINs. With the intention to preserve the rich yet potentially incompatible information in HIN embedding, we propose to study the problem of comprehensive transcription of heterogeneous information networks. The comprehensive transcription of HINs also provides an easy-to-use approach to unleash the power of HINs, since it requires no additional supervision, expertise, or feature engineering. To cope with the challenges in the comprehensive transcription of HINs, we propose the HEER algorithm, which embeds HINs via edge representations that are further coupled with properly learned heterogeneous metrics. To corroborate the efficacy of HEER, we conducted experiments on two large-scale real-world datasets with an edge reconstruction task and multiple case studies. Experiment results demonstrate the effectiveness of the proposed HEER model and the utility of edge representations and heterogeneous metrics. The code and data are available at https://github.com/GentleZhu/HEER.
    Comment: 10 pages. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, London, United Kingdom, ACM, 2018.
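    The abstract's central idea, representing each edge through its endpoint embeddings and scoring it with a metric learned per edge type, can be sketched as follows. This is a minimal illustration, not the authors' HEER implementation; the toy embeddings, edge types, and the element-wise-product edge representation are assumptions made for the example.

```python
# Hedged sketch of edge representations coupled with per-edge-type metrics.
# Node embeddings and metrics would normally be learned; here they are random.
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy HIN: one embedding per node, one diagonal metric per edge type.
node_emb = {"author_1": rng.normal(size=dim),
            "paper_1": rng.normal(size=dim),
            "venue_1": rng.normal(size=dim)}
edge_type_metric = {"writes": np.abs(rng.normal(size=dim)),        # author-paper
                    "published_in": np.abs(rng.normal(size=dim))}  # paper-venue

def edge_representation(u, v):
    # Edge representation built from the two endpoint embeddings
    # (element-wise product is one common choice, assumed here).
    return node_emb[u] * node_emb[v]

def edge_score(u, v, edge_type):
    # Couple the edge representation with the heterogeneous (per-type) metric.
    return float(edge_type_metric[edge_type] @ edge_representation(u, v))

print(edge_score("author_1", "paper_1", "writes"))
print(edge_score("paper_1", "venue_1", "published_in"))
```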

    Specifications and Development of Interoperability Solution dedicated to Multiple Expertise Collaboration in a Design Framework

    This paper describes the specifications of an interoperability platform based on the PPO (Product Process Organization) model developed by the French IPPOP community in the context of collaborative and innovative design. By using the PPO model as a reference, this work aims to connect the heterogeneous tools used by experts, thereby easing data and information exchange. After underlining the growing needs of collaborative design processes, the paper focuses on the concept of interoperability by describing current solutions and their limits. A solution based on the flexibility of the PPO model, adapted to the philosophy of interoperability, is then proposed. To illustrate these concepts, several examples are described in more detail (robustness analysis, connections between CAD and Product Lifecycle Management systems).
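    As a rough illustration of the mediation approach described above, the sketch below routes data between two hypothetical expert tools through a shared product model rather than point-to-point converters. The classes, field names, and record shapes are invented for the example and do not come from the IPPOP PPO specification.

```python
# Hedged sketch: tools exchange data via a shared reference model (adapters
# map in and out of it), so no tool needs to know another tool's format.
from dataclasses import dataclass, field

@dataclass
class PPOProduct:
    name: str
    parameters: dict = field(default_factory=dict)

def from_cad(cad_record: dict) -> PPOProduct:
    # Adapter: map a (hypothetical) CAD export onto the shared product model.
    return PPOProduct(name=cad_record["part_name"],
                      parameters={"length_mm": cad_record["len"]})

def to_robustness_tool(product: PPOProduct) -> dict:
    # Adapter: expose the shared model in the form a robustness-analysis
    # tool expects, without it knowing anything about the CAD format.
    return {"component": product.name,
            "nominal_length": product.parameters["length_mm"]}

cad_export = {"part_name": "bracket", "len": 120.0}
print(to_robustness_tool(from_cad(cad_export)))
```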

    Automatic extraction of knowledge from web documents

    A large amount of the digital information available is written as text documents in the form of web pages, reports, papers, emails, etc. Extracting the knowledge of interest from such documents from multiple sources in a timely fashion is therefore crucial. This paper provides an update on the Artequakt system, which uses natural language tools to automatically extract knowledge about artists from multiple documents based on a predefined ontology. The ontology represents the type and form of knowledge to extract. This knowledge is then used to generate tailored biographies. The information extraction process of Artequakt is detailed and evaluated in this paper.
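    The ontology-driven extraction pipeline summarised above can be illustrated with a small, hedged sketch: a toy "ontology" names the relations of interest, and simple regular expressions stand in for Artequakt's natural language tools. The relation names, patterns, and example text are made up for the illustration.

```python
# Hedged sketch of ontology-guided extraction: the ontology declares which
# relations to look for; extraction yields triples usable for biography text.
import re

ontology_patterns = {
    "born_in": re.compile(r"(?P<artist>[A-Z][a-z]+ [A-Z][a-z]+) was born in (?P<place>[A-Z][a-z]+)"),
    "died_in": re.compile(r"(?P<artist>[A-Z][a-z]+ [A-Z][a-z]+) died in (?P<place>[A-Z][a-z]+)"),
}

def extract(document: str):
    triples = []
    for relation, pattern in ontology_patterns.items():
        for m in pattern.finditer(document):
            triples.append((m.group("artist"), relation, m.group("place")))
    return triples

doc = "Claude Monet was born in Paris. Claude Monet died in Giverny."
print(extract(doc))  # knowledge triples extracted according to the ontology
```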

    Coping with Web Knowledge

    The web seems to be the biggest existing information repository. The extraction of information from this repository has attracted the interest of many researchers, who have developed intelligent algorithms (wrappers) able to extract structured syntactic information automatically. In this article, we formalise a new solution to extract knowledge from today’s non-semantic web. It is novel in that it associates semantics with the extracted information, which improves agent interoperability; furthermore, it delegates the knowledge extraction procedure to specialist agents, easing software development and promoting software reuse and maintainability.
    Comisión Interministerial de Ciencia y Tecnología TIC 2000–1106–C02–01
    Comisión Interministerial de Ciencia y Tecnología FIT-150100-2001-7
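    The delegation idea described above, routing each page to a specialist wrapper agent and attaching ontology terms to the extracted values, can be sketched as below. The agent registry, ontology URIs, and page layout are hypothetical and only illustrate the pattern.

```python
# Hedged sketch: a coordinator picks a specialist wrapper per site and the
# wrapper returns values keyed by ontology terms (semantics attached).
import re

def book_wrapper(html: str):
    # Specialist agent for a (hypothetical) bookshop page layout.
    title = re.search(r"<h1>(.*?)</h1>", html)
    price = re.search(r"<span class=\"price\">(.*?)</span>", html)
    return {"http://example.org/onto#title": title.group(1) if title else None,
            "http://example.org/onto#price": price.group(1) if price else None}

specialists = {"bookshop.example.org": book_wrapper}

def extract(url: str, html: str):
    domain = url.split("/")[2]
    agent = specialists[domain]   # delegate to the right specialist agent
    return agent(html)            # semantically annotated result

page = '<h1>Dune</h1><span class="price">9.99</span>'
print(extract("http://bookshop.example.org/dune", page))
```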

    Mixing the reactive with the personal: Opportunities for end-user programming in personal information management

    The transition of personal information management (PIM) tools off the desktop to the Web presents an opportunity to augment these tools with capabilities provided by the wealth of real-time information readily available. In this chapter, we describe a personal information assistance engine that lets end-users delegate to it various simple context- and activity-reactive tasks and reminders. Our system, Atomate, treats RSS/ATOM feeds from social networking and life-tracking sites as sensor streams, integrating information from such feeds into a simple unified RDF world model representing people, places and things and their time-varying states and activities. Combined with other information sources on the web, including the user's online calendar, web-based e-mail client, news feeds and messaging services, Atomate can be made to automatically carry out a variety of simple tasks for the user, ranging from context-aware filtering and messaging to sharing and social coordination actions. Atomate's open architecture and world model easily accommodate new information sources and actions via the addition of feeds and web services. To make routine use of the system easy for non-programmers, Atomate provides a constrained-input natural language interface (CNLI) for behaviour specification and a direct-manipulation interface for inspecting and updating its world model.
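    A minimal sketch of the reactive pattern described above follows: feed entries become triples in a small world model, and a rule fires when the model satisfies its condition. This is not Atomate's code; the predicates, entities, and rule are assumptions made for the example.

```python
# Hedged sketch: feeds as sensor streams feeding a triple-based world model,
# with a simple reactive rule checked against that model.
world_model = set()  # {(subject, predicate, object)} triples

def ingest_feed_entry(entry: dict):
    # Treat a feed item as a sensor reading about a person's current state.
    world_model.add((entry["person"], "currentLocation", entry["location"]))

def rule_remind_to_call(person: str, place: str):
    # "When <person> is at <place>, remind me to call them."
    if (person, "currentLocation", place) in world_model:
        print(f"Reminder: call {person} (now at {place})")

ingest_feed_entry({"person": "alice", "location": "office"})
rule_remind_to_call("alice", "office")
```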

    Inherently flexible software

    Software evolution is an important and expensive consequence of deploying software. As Lehman's First Law of Program Evolution states, software must be changed to satisfy new user requirements or it becomes progressively less useful to its stakeholders. Software evolution is difficult for a multitude of reasons, most notably an inherent lack of evolveability in software, design decisions and existing requirements that are difficult to change, and conflicts between new requirements and existing assumptions and requirements. Software engineering has traditionally focussed on improvements in software development techniques, with little conscious regard for their effects on software evolution. This thesis emphasises design for change, a philosophy that stems from ideas in preventive maintenance and places the ease of software evolution closer to the centre of the design of software systems than it is at present. The approach involves exploring issues of evolveability, such as adaptability, flexibility and extensibility, with respect to existing software languages, models and architectures. A software model, SEvEn, is proposed which improves on the evolveability of these existing software models by improving their adaptability, flexibility and extensibility, and which provides a way to determine the ripple effects of changes through a reflective model of a software system. The main conclusion is that, whilst software evolveability can be improved, complete adaptability, flexibility and extensibility of a software system is not possible. In addition, ripple effects cannot be completely eradicated, because assumptions will always persist in a software system and new requirements may conflict with existing requirements. However, the proposed reflective model of software, which consists of a set of software entities (abstractions) with the characteristic of increased evolveability, provides traceability of ripple effects: it explicitly models the dependencies that exist between software entities, determines how software entities can change, ascertains the adaptability of software entities to changes in other entities on which they depend, and determines how changes to software entities affect those that depend on them.
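    The ripple-effect traceability described in the abstract can be illustrated with a small sketch over an explicit dependency graph: every entity reachable from a changed entity, following the "depends on" relation in reverse, may need to adapt. The graph and entity names below are invented; this is not the SEvEn model itself.

```python
# Hedged sketch of ripple-effect tracing over an explicit dependency graph.
from collections import defaultdict, deque

# depends_on[x] = entities that x depends on; invert it to find dependents.
depends_on = {"ui": {"service"}, "service": {"db", "cache"}, "report": {"db"}}
dependents = defaultdict(set)
for entity, deps in depends_on.items():
    for dep in deps:
        dependents[dep].add(entity)

def ripple(changed: str):
    # Breadth-first walk over dependents: every entity reachable from the
    # changed one is potentially affected and may need to adapt.
    seen, queue = set(), deque([changed])
    while queue:
        current = queue.popleft()
        for dep in dependents[current]:
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(ripple("db"))  # {'service', 'report', 'ui'}
```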

    Citizen-Centric Data Services for Smarter Cities

    Smart Cities use Information and Communication Technologies (ICT) to manage the resources and services offered by a city more efficiently and to make them more approachable to all its stakeholders (citizens, companies and public administration). In contrast to the view of big corporations promoting holistic “smart city in a box” solutions, this work proposes that smarter cities can be achieved by combining already available infrastructure, i.e., Open Government Data and sensor networks deployed in cities, with citizens’ active contributions towards city knowledge by means of their smartphones and the apps executed on them. In addition, this work introduces the main characteristics of the IES Cities platform, whose goal is to ease the generation of citizen-centric apps that exploit urban data in different domains. The proposed vision is achieved by providing a common access mechanism to the heterogeneous data sources offered by the city, which reduces the complexity of accessing the city’s data whilst bringing citizens closer to a prosumer (both consumer and producer) role and allowing legacy data to be integrated into the city’s data ecosystem.
    The European Union’s Competitiveness and Innovation Framework Programme has supported this work under grant agreement No. 325097.
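    The common access mechanism described above can be sketched as a single query entry point in front of per-source adapters. The adapters, source names, and record shapes below are invented for illustration and are not part of the IES Cities platform's actual API.

```python
# Hedged sketch: one query interface fanning out to heterogeneous city data
# sources (open data, sensors, citizen contributions) behind small adapters.
def open_data_adapter(query: str):
    # Stand-in for an Open Government Data catalogue lookup.
    return [{"source": "open-data", "dataset": "air-quality", "match": query}]

def sensor_adapter(query: str):
    # Stand-in for a city sensor network endpoint.
    return [{"source": "sensors", "reading": 42, "match": query}]

def citizen_adapter(query: str):
    # Stand-in for citizen-contributed (prosumer) reports from an app.
    return [{"source": "citizens", "report": "pothole", "match": query}]

ADAPTERS = [open_data_adapter, sensor_adapter, citizen_adapter]

def query_city(query: str):
    # One access mechanism: fan the query out and merge the results.
    results = []
    for adapter in ADAPTERS:
        results.extend(adapter(query))
    return results

print(query_city("district-1"))
```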