
    State-of-the-art on evolution and reactivity

    This report starts, in Chapter 1, by outlining aspects of querying and updating resources on the Web and on the Semantic Web, including the development of query and update languages to be carried out within the Rewerse project. From this outline, it becomes clear that several existing research areas and topics are of interest for this work in Rewerse. In the remainder of this report we present state-of-the-art surveys of a selection of these areas and topics. More precisely: in Chapter 2 we give an overview of logics for reasoning about state change and updates; Chapter 3 briefly describes existing update languages for the Web, and also for updating logic programs; Chapter 4 surveys event-condition-action rules, both in the context of active database systems and in the context of semistructured data; and in Chapter 5 we give an overview of some relevant rule-based agent frameworks.
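The event-condition-action rules surveyed in Chapter 4 follow a common shape: on an event, test a condition, and if it holds, run an action. A minimal sketch of that dispatch loop, with illustrative names not taken from the report:

```python
# Minimal event-condition-action (ECA) rule engine sketch. The rule and
# payload structure here are illustrative assumptions, not the report's.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ECARule:
    event: str                          # event type the rule listens for
    condition: Callable[[dict], bool]   # predicate over the event payload
    action: Callable[[dict], Any]       # executed when the condition holds

@dataclass
class RuleEngine:
    rules: list = field(default_factory=list)

    def register(self, rule: ECARule) -> None:
        self.rules.append(rule)

    def dispatch(self, event: str, payload: dict) -> list:
        """Fire every rule whose event matches and whose condition holds."""
        results = []
        for rule in self.rules:
            if rule.event == event and rule.condition(payload):
                results.append(rule.action(payload))
        return results

engine = RuleEngine()
engine.register(ECARule(
    event="resource_updated",
    condition=lambda p: p.get("status") == "stale",
    action=lambda p: f"refresh {p['uri']}",
))
print(engine.dispatch("resource_updated", {"status": "stale", "uri": "/doc/42"}))
```

Active database systems typically add coupling modes and conflict resolution on top of this basic loop; the sketch shows only the core matching behaviour.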

    Internet of things

    Manual of Digital Earth / Editors: Huadong Guo, Michael F. Goodchild, Alessandro Annoni. Springer, 2020. ISBN: 978-981-32-9915-3. Digital Earth was born with the aim of replicating the real world within the digital world. Many efforts have been made to observe and sense the Earth, both from space (remote sensing) and by using in situ sensors. Focusing on the latter, advances in Digital Earth have established vital bridges to exploit these sensors and their networks by taking location as a key element. The current era of connectivity envisions that everything is connected to everything. The concept of the Internet of Things (IoT) emerged as a holistic proposal to enable an ecosystem of varied, heterogeneous networked objects and devices to speak to and interact with each other. To make the IoT ecosystem a reality, it is necessary to understand the electronic components, communication protocols, real-time analysis techniques, and the location of the objects and devices. The IoT ecosystem and the Digital Earth (DE) jointly form interrelated infrastructures for addressing today’s pressing issues and complex challenges. In this chapter, we explore the synergies and frictions in establishing an efficient and permanent collaboration between the two infrastructures, in order to adequately address multidisciplinary and increasingly complex real-world problems. Although there are still some pending issues, the identified synergies generate optimism for a true collaboration between the Internet of Things and the Digital Earth.

    MANSION-GS: seMANtics as the n-th dimenSION for Geographic Space

    The extended understanding of geographic ecosystems, including the physical and logical description of space with its associated data and activities as well as the dynamics inside it, poses complex scenarios that cannot be captured by a simple geographic-oriented data model. The main purpose of this work is the conceptual integration of a physical space model with dynamic logic support, able to describe the relations amongst the different elements composing the space as well as the relations between spaces and external elements. In the context of this work, semantics play the critical and central role of connecting and relating the different dimensions of the space, even though they are mostly a virtual dimension in the overall model.

    A framework to maximise the communicative power of knowledge visualisations

    Knowledge visualisation, in the field of information systems, is both a process and a product, informed by the closely aligned fields of information visualisation and knowledge management. Knowledge visualisation has untapped potential within the purview of knowledge communication. Even so, knowledge visualisations are infrequently deployed due to a lack of evidence-based guidance. To improve this situation, we carried out a systematic literature review to derive a number of “lenses” that can be used to reveal the essential perspectives to feed into the visualisation production process. We propose a conceptual framework which incorporates these lenses to guide producers of knowledge visualisations. This framework uses the different lenses to reveal critical perspectives that need to be considered during the design process. We conclude by demonstrating how this framework could be used to produce an effective knowledge visualisation.
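One way to read the "lenses" idea is as a checklist of perspectives, each posing questions a producer must answer before the design is complete. A hypothetical sketch of that reading; the lens names and questions below are invented placeholders, not the ones derived in the review:

```python
# Hypothetical model of a lens-based design checklist. Lens names and
# questions are illustrative assumptions, not the framework's actual lenses.
from dataclasses import dataclass

@dataclass(frozen=True)
class Lens:
    name: str
    questions: tuple   # guiding questions this perspective raises

def unanswered(lenses, answered):
    """Return the guiding questions not yet addressed in the design."""
    return [q for lens in lenses for q in lens.questions if q not in answered]

audience = Lens("audience", ("Who consumes the visualisation?",))
purpose = Lens("purpose", ("What decision should it support?",))
print(unanswered([audience, purpose], {"Who consumes the visualisation?"}))
```

The point of such a structure is that the design process only proceeds once every lens's questions have been considered, which is the guidance role the framework is meant to play.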

    Sharing Video Emotional Information in the Web

    Video growth over the Internet has changed the way users search, browse and view video content. Watching movies over the Internet is increasing and becoming a pastime. The possibility of streaming Internet content to TV, together with advances in video compression and video streaming techniques, has made this recent modality of watching movies easy and practical. Web portals, as a worldwide means of multimedia data access, need to have their contents properly classified in order to meet users’ needs and expectations. The authors propose a set of semantic descriptors based both on user physiological signals, captured while watching videos, and on low-level video feature extraction. These XML-based descriptors contribute to the creation of automatic affective meta-information that will not only enhance a web-based video recommendation system based on emotional information, but also enhance search and retrieval of videos’ affective content from both users’ personal classifications and content classifications in the context of a web portal.
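An XML descriptor combining a physiological-signal summary with low-level video features could be assembled as below. The element and attribute names are assumptions for illustration; the abstract does not publish the authors' actual schema:

```python
# Illustrative builder for an XML-based affective descriptor. All element
# names (affectiveDescriptor, physiological, lowLevelFeatures, ...) are
# assumed, not taken from the paper's schema.
import xml.etree.ElementTree as ET

def affective_descriptor(video_id, arousal, valence, motion_activity):
    """Serialise one video's affective metadata as an XML string."""
    root = ET.Element("affectiveDescriptor", video=video_id)
    phys = ET.SubElement(root, "physiological")     # from user signals
    ET.SubElement(phys, "arousal").text = f"{arousal:.2f}"
    ET.SubElement(phys, "valence").text = f"{valence:.2f}"
    feats = ET.SubElement(root, "lowLevelFeatures")  # from content analysis
    ET.SubElement(feats, "motionActivity").text = f"{motion_activity:.2f}"
    return ET.tostring(root, encoding="unicode")

xml_doc = affective_descriptor("movie-001", 0.72, -0.15, 0.40)
print(xml_doc)
```

Keeping the user-derived and content-derived parts in separate subtrees mirrors the paper's distinction between personal classifications and content classifications.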

    Integration of Data Mining into Scientific Data Analysis Processes

    In recent years, using advanced semi-interactive data analysis algorithms such as those from the field of data mining has gained more and more importance in life science in general, and in particular in bioinformatics, genetics, medicine and biodiversity. Today, there is a trend away from collecting and evaluating data only in the context of a specific problem or study, towards extensively collecting data from different sources into repositories that are potentially useful for subsequent analysis, e.g. the Gene Expression Omnibus (GEO) repository of high-throughput gene expression data. At the time the data are collected, they are analysed in a specific context which influences the experimental design. However, the type of analyses that the data will be used for after they have been deposited is not known. Content and data format are tailored only to the first experiment, not to future re-use. Thus, complex process chains are needed for the analysis of the data, and such process chains need to be supported by the environments that are used to set up analysis solutions. Building specialized software for each individual problem is not a solution, as this effort can only be justified for huge projects running for several years. Hence, data mining functionality has been bundled into toolkits, which provide it in the form of a collection of different components. Depending on the different research questions of the users, the solutions consist of distinct compositions of these components. Today, existing solutions for data mining processes comprise different components that represent different steps in the analysis process, and there exist graphical or script-based toolkits for combining such components. However, the data mining tools that can serve as components in analysis processes are based on single-computer environments, local data sources and single users.
However, analysis scenarios in medical informatics and bioinformatics have to deal with multi-computer environments, distributed data sources and multiple users who have to cooperate. Users need support for integrating data mining into analysis processes in the context of such scenarios, and this support is lacking today. Typically, analysts working with single-computer environments face the problem of large data volumes, since tools address neither scalability nor access to distributed data sources. Distributed environments such as grid environments provide scalability and access to distributed data sources, but the integration of existing components into such environments is complex. In addition, new components often cannot be developed directly in distributed environments. Moreover, in scenarios involving multiple computers, multiple distributed data sources and multiple users, the reuse of components, scripts and analysis processes becomes more important, as more steps and configuration are necessary and thus much bigger efforts are needed to develop and set up a solution. In this thesis we introduce an approach for supporting interactive and distributed data mining for multiple users, based on infrastructure principles that allow building on data mining components and processes that are already available instead of designing a completely new infrastructure, so that users can keep working with their well-known tools. In order to achieve the integration of data mining into scientific data analysis processes, this thesis proposes a stepwise approach to supporting the user in the development of analysis solutions that include data mining. We see our major contributions as the following: first, we propose an approach to integrate data mining components developed for a single-processor environment into grid environments. By this, we support users in reusing standard data mining components with little effort.
The approach is based on a metadata schema definition which is used to grid-enable existing data mining components. Second, we describe an approach for interactively developing data mining scripts in grid environments. The approach efficiently supports users when it is necessary to enhance available components, to develop new data mining components, and to compose these components. Third, building on that, an approach for facilitating the reuse of existing data mining processes based on process patterns is presented. It supports users in scenarios that cover different steps of the data mining process, including several components or scripts. The data mining process patterns support the description of data mining processes at different levels of abstraction, between the CRISP model as the most general and executable workflows as the most concrete representation.
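The metadata-schema idea can be sketched as a declarative description of a single-processor component, rich enough for a grid wrapper to schedule it and to check whether two components can be chained. Field names below are illustrative assumptions, not the thesis's actual schema:

```python
# Hedged sketch of component metadata for grid-enabling existing data mining
# tools. The schema fields and compatibility rule are illustrative, not the
# thesis's actual metadata schema definition.
from dataclasses import dataclass

@dataclass
class ComponentMetadata:
    name: str
    entry_point: str   # command or script the grid wrapper invokes
    inputs: dict       # parameter name -> expected data format
    outputs: dict      # result name -> produced data format

def compatible(producer: ComponentMetadata, consumer: ComponentMetadata) -> bool:
    """A producer can feed a consumer if it produces every format the
    consumer requires (a simplistic chaining check)."""
    return set(consumer.inputs.values()) <= set(producer.outputs.values())

normalise = ComponentMetadata("normalise", "normalise.py",
                              inputs={"raw": "csv"}, outputs={"scaled": "arff"})
cluster = ComponentMetadata("kmeans", "kmeans.py",
                            inputs={"data": "arff"}, outputs={"labels": "csv"})
print(compatible(normalise, cluster))   # the two steps can be chained
```

A real grid wrapper would additionally carry resource requirements and deployment information; the point here is only that a machine-readable description lets the environment reuse components without changing their code.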

    IoT Data Processing for Smart City and Semantic Web Applications

    The world has been experiencing rapid urbanization over the last few decades, putting a strain on existing city infrastructure such as waste management, water supply management, public transport and electricity consumption. We are also seeing increasing pollution levels in cities threatening the environment, natural resources and health conditions. Nevertheless, we must realize that real growth lies in urbanization, as it provides individuals many opportunities for better employment, healthcare and education. It is therefore imperative to limit the ill effects of rapid urbanization through integrated action plans that enable the development of growing cities. This gave rise to the concept of a smart city, in which all available information associated with a city is utilized systematically for better city management. The proposed system architecture is divided into subsystems, each discussed in its own chapter. The first chapter introduces the complete system architecture and gives the reader an overview of it. The second chapter discusses the data monitoring system (DMS) and data lake system (DLS), based on the oneM2M standards. DMS employs oneM2M as a middleware layer to achieve interoperability, and DLS uses a multi-tenant architecture with multiple logical databases, enabling efficient and reliable data management. The third chapter discusses energy monitoring and electric vehicle charging systems developed to illustrate the applicability of the oneM2M standards. The fourth chapter discusses the Data Exchange System (DES), based on the Indian Urban Data Exchange framework. DES uses the IUDX standard data schema and open APIs to avoid data silos and enable secure data sharing. The fifth chapter discusses the 5D-IoT framework that provides uniform data quality assessment of sensor data with meaningful data descriptions.
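The multi-tenant, multiple-logical-database design described for the data lake can be sketched as a single service that routes every read and write through a tenant key, so that tenants never see each other's data. The class and names below are assumptions for illustration, not the actual DLS implementation:

```python
# Sketch of multi-tenant data storage with one logical store per tenant.
# Structure and names are illustrative assumptions, not the DLS codebase.
class DataLake:
    def __init__(self):
        self._tenants = {}   # tenant id -> logical store (a dict stands in
                             # here for a real logical database)

    def store(self, tenant: str, key: str, record: dict) -> None:
        self._tenants.setdefault(tenant, {})[key] = record

    def fetch(self, tenant: str, key: str) -> dict:
        # Tenants are isolated: a lookup never crosses tenant boundaries.
        return self._tenants.get(tenant, {}).get(key, {})

lake = DataLake()
lake.store("city-a", "sensor/42", {"pm2_5": 18.0})
lake.store("city-b", "sensor/42", {"pm2_5": 55.0})
print(lake.fetch("city-a", "sensor/42"))
```

Because the same key exists under both tenants yet resolves to different records, the sketch shows the isolation property that makes multi-tenancy reliable; a production DLS would back each tenant with an actual database instance.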

    Survey on Additive Manufacturing, Cloud 3D Printing and Services

    Cloud Manufacturing (CM) is the concept of using manufacturing resources in a service-oriented way over the Internet. Recent developments in Additive Manufacturing (AM) are making it possible to utilise resources ad hoc as replacements for traditional manufacturing resources in case of spontaneous problems in established manufacturing processes. In order to be of use in these scenarios, AM resources must follow a strict principle of transparency and service composition, in adherence to the Cloud Computing (CC) paradigm. With this review we provide an overview of CM, AM and relevant related domains, and present the historical development of scientific research in these fields, starting from 2002. Part of this work is also a meta-review of the domain, to further detail its development and structure.
