
    Open semantic service networks

    Online service marketplaces will soon be part of the economy, scaling the provision of specialized multi-party services through automation and standardization. Current research efforts, such as the *-USDL family of service description languages, are already defining the basic building blocks to model the next generation of business services. Nonetheless, these developments do not aim to interconnect services via service relationships. Without the concept of a relationship, marketplaces will remain mere functional silos containing service descriptions. Yet, in real economies, all services are related and connected. To address this gap, we therefore introduce the concept of the open semantic service network (OSSN), concerned with the establishment of rich relationships between services. These networks will provide valuable knowledge on the global service economy, which can be exploited for many socio-economic and scientific purposes such as service network analysis, management, and control.
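
    As a rough illustration of the idea (not taken from the paper), a service network of this kind can be modeled as a directed multigraph whose edges carry typed relationships between service descriptions. A minimal sketch in Python, assuming the networkx library; the service names and relation types are hypothetical:

        # Sketch: an open semantic service network as a labeled multigraph.
        # Service names and relation types are illustrative only.
        import networkx as nx

        ossn = nx.MultiDiGraph()

        # Nodes stand for service descriptions (e.g., *-USDL models).
        ossn.add_node("WebShopService", provider="ShopCo")
        ossn.add_node("PaymentService", provider="AcmePay")
        ossn.add_node("ShippingService", provider="FastShip")

        # Edges are the "rich relationships" that turn isolated
        # descriptions into a network: dependency, ordering, etc.
        ossn.add_edge("WebShopService", "PaymentService", relation="depends_on")
        ossn.add_edge("WebShopService", "ShippingService", relation="depends_on")
        ossn.add_edge("PaymentService", "ShippingService", relation="precedes")

        # Network-level analysis then reduces to graph analytics, e.g.
        # finding the services the rest of the network depends on most.
        centrality = nx.in_degree_centrality(ossn)
        print(sorted(centrality.items(), key=lambda kv: -kv[1]))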

    The value of ontology, The BPM ontology

    It is generally accepted that the creation of added value requires collaboration inside and between organizations. Collaboration requires sharing knowledge (e.g., a shared understanding of business processes) between trading partners and between colleagues. It is on the (unique) knowledge that is shared between and created by colleagues that organizations build their competitive advantage. To take full advantage of this knowledge, it should be disseminated as widely as possible within an organization. Nonaka distinguished tacit knowledge, which is personal, context-specific, and not easy to communicate (e.g., intuitions, unarticulated mental models, embodied technological skills), from explicit knowledge, which is meaningful information articulated in clear language, including numbers and diagrams. Tacit knowledge can be disseminated through socialization (e.g., face-to-face communication, sharing experiences), which implies a reduced dissemination speed, or can be externalized, which is the conversion of tacit into explicit knowledge. Although explicit knowledge can take many forms (e.g., business (process) models, manuals), this chapter focuses on ontologies, which are versatile knowledge artifacts created through externalization, with the power to fuel Nonaka’s knowledge spiral. The knowledge spiral visualizes how a body of unique corporate knowledge, and hence a competitive advantage, is developed through a collaborative and iterative knowledge creation process involving cycles of externalization, combination, and internalization. When corporate knowledge is documented with an ontology, the knowledge spiral drives ontology evolution.
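
    To make the notion of externalization concrete, the following sketch shows how one piece of shared process knowledge could be captured as an explicit ontology. The vocabulary is invented for illustration (it is not the chapter's BPM ontology), and the rdflib library is assumed:

        # Sketch: externalizing tacit process knowledge as explicit triples.
        # The bpm: vocabulary below is hypothetical.
        from rdflib import Graph, Literal, Namespace, RDF, RDFS

        BPM = Namespace("http://example.org/bpm#")
        g = Graph()
        g.bind("bpm", BPM)

        # Shared concepts (classes) that colleagues and partners agree on.
        g.add((BPM.Process, RDF.type, RDFS.Class))
        g.add((BPM.Activity, RDF.type, RDFS.Class))

        # An explicit relationship between the concepts.
        g.add((BPM.partOf, RDF.type, RDF.Property))
        g.add((BPM.partOf, RDFS.domain, BPM.Activity))
        g.add((BPM.partOf, RDFS.range, BPM.Process))

        # One externalized piece of knowledge, now shareable and queryable.
        g.add((BPM.ApproveInvoice, RDF.type, BPM.Activity))
        g.add((BPM.Procurement, RDF.type, BPM.Process))
        g.add((BPM.ApproveInvoice, BPM.partOf, BPM.Procurement))
        g.add((BPM.ApproveInvoice, RDFS.comment,
               Literal("Invoices above 10k EUR require manager approval.")))

        print(g.serialize(format="turtle"))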

    What is an Analogue for the Semantic Web and Why is Having One Important?

    This paper postulates that for the Semantic Web to grow and gain input from fields that will surely benefit it, it needs an analogue that helps people understand not only what it is, but also what potential opportunities these new protocols enable. The model proposed in the paper takes the way Web interaction has been framed as a baseline to inform a similar analogue for the Semantic Web. While the Web has been represented as a Page + Links, the paper argues that the Semantic Web can be conceptualized as a Notebook + Memex. The argument considers how this model also presents new challenges for fundamental human interaction with computing, and how hypertext models have much to contribute to this new understanding of distributed information systems.

    From Semantic Search & Integration to Analytics


    Design of a framework for automated service mashup creation and execution based on semantic reasoning

    Instead of building self-contained silos, applications are being broken down into independent structures, each able to offer a scoped service using open communication standards and encodings. There is currently no automated environment for constructing new mashups from these reusable services, and the mashup designer must also determine where the different components are deployed. This paper introduces the development of a framework focusing on the dynamic creation and execution of service mashups. By enriching the available building blocks with semantic descriptions, new service mashups are automatically composed through the use of planning algorithms. The composed mashups are automatically deployed on the available resources, making optimal use of the bandwidth, storage, and computing power of the network and server elements. The system is extended with dynamic recovery from resource and network failures. This enrichment of business components and services with semantics, reasoning, and distributed deployment is demonstrated by means of an e-shop use case.
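
    The composition step can be pictured as simple forward-chaining planning over semantically annotated building blocks: each service declares the concepts it consumes and produces, and the planner chains services until the goal concept is reachable. This is a toy sketch of that idea, not the paper's framework; the service descriptions are invented:

        # Sketch: greedy forward-chaining composition of a service mashup.
        # Each block lists the semantic concepts it needs and provides.
        SERVICES = {
            "ProductCatalog": {"needs": {"query"},            "gives": {"product"}},
            "PriceService":   {"needs": {"product"},          "gives": {"price"}},
            "PaymentService": {"needs": {"product", "price"}, "gives": {"receipt"}},
        }

        def plan_mashup(available, goal):
            """Apply any service whose inputs are satisfied until the
            goal concept is produced (or no further progress is made)."""
            plan, facts = [], set(available)
            progress = True
            while goal not in facts and progress:
                progress = False
                for name, svc in SERVICES.items():
                    if name not in plan and svc["needs"] <= facts:
                        plan.append(name)
                        facts |= svc["gives"]
                        progress = True
            return plan if goal in facts else None

        # e-shop style use case: from a user query to a receipt.
        print(plan_mashup({"query"}, "receipt"))
        # -> ['ProductCatalog', 'PriceService', 'PaymentService']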

    Semantic Web: Who is who in the field – A bibliometric analysis

    The Semantic Web (SW) is one of the main efforts aiming to enhance the interaction between humans and machines by representing data in a machine-understandable way, so that machines can mediate data and services. It is a fast-moving and multidisciplinary field. This study conducts a thorough bibliometric analysis of the field, collecting data from the Web of Science (WOS) and Scopus for the period 1960-2009. It draws on a total of 44,157 papers with 651,673 citations from Scopus, and 22,951 papers with 571,911 citations from WOS. Based on these papers and citations, it evaluates the research performance of the SW by identifying the most productive players, major scholarly communication media, highly cited authors, influential papers, and emerging stars.

    Fuzzy Dynamic Discrimination Algorithms for Distributed Knowledge Management Systems

    A reduction of the algorithmic complexity of a fuzzy inference engine rests on the following property: the inputs (the fuzzy rules and the fuzzy facts) can be divided into two parts, one remaining relatively constant over a long time (the fuzzy rules, or knowledge model) compared to the second part (the fuzzy facts), which varies with every inference cycle. It therefore makes sense to apply certain transformations to the constant part in advance, in order to decrease the time needed to obtain a solution once the varying part becomes known. Transformations attained in advance are called pre-processing or knowledge compilation. The use of variables in a Business Rule Management System knowledge representation allows factorising knowledge, as in classical knowledge-based systems. The language of first-order predicates facilitates the formulation of complex knowledge in a rigorous way, imposing appropriate reasoning techniques. It is thus necessary to define a description method for fuzzy knowledge, to justify the efficiency of knowledge exploitation when the compilation technique is used, and to present the inference engine, highlighting the functional features of the pattern matching and state-space processes. This paper presents the main results of our project PR356 for designing a compiler for fuzzy knowledge, similar to a Rete compiler, that comprises two main components: a static fuzzy discrimination structure (the Fuzzy Unification Tree) and the Fuzzy Variables Linking Network. We also present the features of the elementary pattern matching process based on the compiled structure of fuzzy knowledge. We developed fuzzy discrimination algorithms for Distributed Knowledge Management Systems (DKMSs); the implementations have been elaborated in a prototype system, FRCOM (Fuzzy Rule COMpiler).

    Keywords: Fuzzy Unification Tree, Dynamic Discrimination of Fuzzy Sets, DKMS, FRCOM
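
    The core idea behind compiling fuzzy rules can be shown in miniature: distinct fuzzy conditions shared by several rules are evaluated once per inference cycle, and the results are reused, in the spirit of a Rete-style discrimination structure. The sketch below is a simplification, not FRCOM; the membership functions are invented:

        # Sketch: sharing fuzzy condition evaluations across rules.
        # Membership functions are illustrative only.
        def high_temp(facts):
            return max(0.0, min(1.0, (facts["temp"] - 20) / 15))

        def high_load(facts):
            return max(0.0, min(1.0, facts["load"]))

        RULES = {                     # rule -> shared condition nodes
            "throttle": [high_temp, high_load],
            "alert":    [high_temp],
        }

        def infer(facts):
            cache = {}                # each distinct condition fires once
            degrees_by_rule = {}
            for rule, conds in RULES.items():
                degrees = []
                for cond in conds:
                    if cond not in cache:
                        cache[cond] = cond(facts)
                    degrees.append(cache[cond])
                degrees_by_rule[rule] = min(degrees)   # min as fuzzy AND
            return degrees_by_rule

        print(infer({"temp": 32, "load": 0.4}))
        # high_temp = 0.8, high_load = 0.4 -> {'throttle': 0.4, 'alert': 0.8}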

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges, given the variety of application areas and domains that this technology promises to serve. Typically, fundamental design decisions in big data systems include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution for a specific real-world problem, big data systems are no exception. As far as storage is concerned, the primary facet is the storage infrastructure, and NoSQL seems to be the right technology to fulfill its requirements. However, every big data application has different data characteristics, and thus its data fits a different data model. This paper presents a feature and use-case analysis and comparison of the four main data models, namely document-oriented, key-value, graph, and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider while making a choice. Typically, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution. This brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings, and possible use cases of the available big data file formats for Hadoop, which is the foundation for most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.
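
    For intuition, the same order record can be laid out under each of the four data models compared in the paper; the systems named are common examples, and the structures are deliberately simplified:

        # Sketch: one order record under the four main NoSQL data models.

        # Document-oriented (e.g., MongoDB): nested, self-contained documents.
        document = {"_id": "order-17",
                    "customer": {"name": "Ada", "city": "Paris"},
                    "items": [{"sku": "A1", "qty": 2}]}

        # Key-value (e.g., Redis): an opaque value behind a single key.
        key_value = {"order:17": '{"customer": "Ada", "items": ["A1", "A1"]}'}

        # Wide-column (e.g., Cassandra): rows with sparse, grouped columns.
        wide_column = {("orders", "order-17"): {"customer:name": "Ada",
                                                "customer:city": "Paris",
                                                "item:A1:qty": "2"}}

        # Graph (e.g., Neo4j): entities as nodes, relationships as edges.
        graph = {"nodes": ["order-17", "Ada", "A1"],
                 "edges": [("Ada", "PLACED", "order-17"),
                           ("order-17", "CONTAINS", "A1")]}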

    Web 2.0 and its impact on knowledge and business organizations

    Today, information overload and the lack of systems for locating employees with the right knowledge or skills are common challenges that large organisations face. As a result, knowledge workers re-invent the wheel and struggle to retrieve information from both internal and external resources. In addition, information changes dynamically, and ownership of data is moving from corporations to individuals. However, a set of web-based tools may bring about major progress in the way people collaborate and share their knowledge. This article aims to analyse the impact of ‘Web 2.0’ on organisational knowledge strategies. A comprehensive literature review presents the academic background, followed by a review of current ‘Web 2.0’ technologies and an assessment of their strengths and weaknesses. As the framework of this study is oriented to business applications, the characteristics of the relevant segments and tools are reviewed from an organisational point of view. Moreover, the ‘Enterprise 2.0’ paradigm does not only imply tools but also changes the way people collaborate and the way work is done (processes), and it ultimately affects other technologies. Finally, gaps in the literature in this area are outlined.

    From Expert Discipline to Common Practice: A Vision and Research Agenda for Extending the Reach of Enterprise Modeling

    The benefits of enterprise modeling (EM) and its contribution to organizational tasks are largely undisputed in business and information systems engineering. EM as a discipline has been around for several decades but is typically performed by a limited number of people in organizations with an affinity for modeling. What is captured in models is only a fragment of what ought to be captured. This research note therefore argues that EM is far from reaching its full potential. Many people develop some kind of model in their local practice without consciously thinking about it. Exploiting the potential of this “grass roots modeling” could lead to groundbreaking innovations. The aim is to investigate the integration of established modeling practices with local practices of creating and using model-like artifacts of relevance to the overall organization. The paper develops a vision for extending the reach of EM, identifies research areas contributing to the vision, and proposes elements of a future research agenda.