2,781 research outputs found

    An Optimal Control Approach for the Data Harvesting Problem

We propose a new method for trajectory planning to solve the data harvesting problem. In a two-dimensional mission space, N mobile agents are tasked with collecting data generated at M stationary sources and delivering it to a base, with the aim of minimizing expected delays. An optimal control formulation of this problem provides some initial insights regarding its solution, but it is computationally intractable, especially when the data-generating processes are stochastic. We propose an agent trajectory parameterization in terms of general function families which can subsequently be optimized online through the use of Infinitesimal Perturbation Analysis (IPA). Explicit results are provided for the case of elliptical and Fourier series trajectories, and some properties of the solution are identified, including robustness with respect to the data generation processes and scalability in the size of the event set characterizing the underlying hybrid dynamic system.
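The elliptical case described above can be illustrated with a minimal sketch. The parameter names below (center, semi-axes, orientation, angular speed) are an assumption for illustration, not the paper's notation; an online optimizer such as the IPA-driven scheme mentioned in the abstract would adjust these parameters between evaluations.

```python
import math

def ellipse_trajectory(params, t):
    """Position of one agent at time t on a parameterized ellipse.

    params = (cx, cy, a, b, phi, omega): center, semi-axes,
    orientation, and angular speed -- the controllable quantities a
    trajectory optimizer would tune (names are illustrative).
    """
    cx, cy, a, b, phi, omega = params
    theta = omega * t
    # Point on the axis-aligned ellipse, then rotate by phi and translate.
    x = a * math.cos(theta)
    y = b * math.sin(theta)
    return (cx + x * math.cos(phi) - y * math.sin(phi),
            cy + x * math.sin(phi) + y * math.cos(phi))

# At t = 0 the agent sits at the tip of the (unrotated) major axis.
pos = ellipse_trajectory((5.0, 5.0, 2.0, 1.0, 0.0, 1.0), 0.0)
```

Parameterizing the trajectory this way reduces an infinite-dimensional control problem to a finite vector of parameters, which is what makes online gradient-based optimization tractable.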

    Km4City Ontology Building vs Data Harvesting and Cleaning for Smart-city Services

Presently, a very large number of public and private data sets are available from local governments. In most cases they are not semantically interoperable, and a huge human effort would be needed to create an integrated ontology and knowledge base for a smart city. A smart-city ontology is not yet standardized, and substantial research is needed to identify models that can easily support data reconciliation, manage complexity, and allow reasoning over the data. In this paper, a system is proposed for the ingestion and reconciliation of data on smart-city-related aspects such as the road graph, services available on the roads, and traffic sensors. The system manages a large volume of data coming from a variety of sources, considering both static and dynamic data. These data are mapped to a smart-city ontology, called Km4City (Knowledge Model for City), and stored in an RDF store, where they are available to applications via SPARQL queries to provide new services to users through specific applications of public administrations and enterprises. The paper presents the process adopted to produce the ontology and the big-data architecture for feeding the knowledge base from open and private data, together with the mechanisms adopted for data verification, reconciliation, and validation. Some examples of possible uses of the resulting coherent big-data knowledge base, accessible from the RDF store and related services, are also offered. The article also presents the work performed on reconciliation algorithms and their comparative assessment and selection.
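The SPARQL-over-RDF access pattern the abstract describes boils down to triple-pattern matching. The sketch below is a stdlib-only toy, and the `km:`/`sensor:` identifiers are invented for illustration, not actual Km4City terms; a real deployment would use an RDF store and full SPARQL instead.

```python
# A tiny in-memory "triple store" of (subject, predicate, object)
# facts; the identifiers are illustrative, not Km4City vocabulary.
triples = {
    ("sensor:42", "km:observesRoad", "Via Roma"),
    ("sensor:42", "km:vehicleCount", 17),
    ("sensor:7",  "km:observesRoad", "Via Dante"),
}

def match(pattern):
    """Return triples matching an (s, p, o) pattern; None is a wildcard.

    This is the core operation behind a SPARQL basic graph pattern.
    """
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which road does sensor 42 observe?" -- analogous to
#   SELECT ?road WHERE { sensor:42 km:observesRoad ?road }
hits = match(("sensor:42", "km:observesRoad", None))
```

Each wildcard position corresponds to a SPARQL variable; joining several such patterns on shared variables yields the multi-pattern queries that smart-city applications would issue against the store.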

    Solving the Social Dilemma with Equilibrium Data Harvesting Strategies: A Game-Theoretic Approach

Social media platforms generate huge profits from targeted advertising by collecting massive amounts of data from their users, a practice usually referred to as data harvesting. However, practitioners from the social media industry suggest that data harvesting hurts users by promoting social media addiction and the spread of misinformation. Therefore, policymakers have recently been considering regulating social media platforms. This paper investigates how imposing regulation on data harvesting impacts social media platforms and users by developing a game-theoretic model. Our main finding shows that while the objective of regulating data harvesting is to discourage platforms from collecting massive amounts of data from users, imposing the regulation may sometimes increase the data harvesting levels and profits of social media platforms. We contribute to the Information Systems literature by broadening the knowledge of the impact of the government's regulation on social media platforms and users.
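To fix ideas, here is a toy single-platform illustration, not the paper's model: profit from harvesting level x is linear revenue minus a quadratic user-attrition cost, and the regulator imposes a cap. All functional forms and numbers are assumptions for illustration.

```python
def optimal_harvesting(r, c, cap=None):
    """Maximize the toy profit r*x - c*x**2, optionally capped at `cap`.

    r, c, and the quadratic form are hypothetical; they only serve to
    show how a cap changes the platform's chosen harvesting level.
    """
    x_star = r / (2 * c)          # unconstrained maximizer
    if cap is not None:
        x_star = min(x_star, cap)
    return x_star, r * x_star - c * x_star ** 2

level, profit = optimal_harvesting(r=4.0, c=1.0)              # uncapped
capped_level, capped_profit = optimal_harvesting(4.0, 1.0, cap=1.5)
```

In this single-player toy the cap can only weakly lower profit; the paper's counterintuitive result, that regulation may raise harvesting levels and profits, emerges from strategic interaction among platforms and users, which this sketch deliberately omits.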

Servicing the federation: the case for metadata harvesting

The paper presents a comparative analysis of data harvesting and distributed computing as complementary models of service delivery within large-scale federated digital libraries. Informed by requirements of flexibility and scalability of federated services, the analysis focuses on the identification and assessment of model invariants. In particular, it abstracts over application domains, services, and protocol implementations. The analytical evidence produced shows that the harvesting model offers stronger guarantees of satisfying the identified requirements. In addition, it suggests a first characterisation of services based on their suitability to either model and thus indicates how they could be integrated in the context of a single federated digital library.
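The harvesting model discussed above typically rests on incremental polling with a datestamp watermark, the pattern behind protocols such as OAI-PMH. The sketch below simulates it with an in-memory "repository"; record identifiers and fields are invented for illustration and are not drawn from the paper.

```python
# A federated node's repository, simulated in memory; a real harvester
# would poll a remote endpoint instead.
repository = [
    {"id": "oai:node1:1", "datestamp": "2004-01-10", "title": "Record A"},
    {"id": "oai:node1:2", "datestamp": "2004-02-05", "title": "Record B"},
    {"id": "oai:node1:3", "datestamp": "2004-03-01", "title": "Record C"},
]

def harvest(since):
    """Return records modified on or after `since` (ISO date string).

    ISO dates compare correctly as strings, so a plain >= suffices.
    """
    return [r for r in repository if r["datestamp"] >= since]

first_pass = harvest("2004-01-01")    # full harvest: every record
incremental = harvest("2004-02-01")   # only records changed since February
```

Because the service provider keeps its own copy and only fetches deltas, harvesting decouples service load from the source repositories, which is one reason the model scales well in a federation.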