    Scheduling queries to improve the freshness of a website

    In recent years, the WWW has become a new advertising medium that corporations use to increase their exposure to consumers. For a very large website whose content is derived from a source database, it is important to maintain its freshness in response to changes to the base data. This issue is particularly significant for websites presenting fast-changing information such as stock exchange data and product information. In this paper, we formally define and study the freshness of a website that is refreshed by scheduling a set of queries to fetch fresh data from the databases. We then propose several online scheduling algorithms and compare their performance on the freshness metric. Our conclusions are verified by empirical results.
    Keywords: Internet Data Management, View Maintenance, Query Optimization, Hard Real-Time Scheduling
    1 Introduction: The popularity of the World-Wide Web (WWW) has made it a prime vehicle for disseminating information. More and ..
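    The excerpt names a freshness metric and several online scheduling algorithms but does not spell them out. As a hedged illustration only, the sketch below implements one plausible greedy policy, refreshing the query whose cached page has accumulated the largest popularity-weighted staleness; the class, fields, and example numbers are invented for illustration and are not the algorithms evaluated in the paper.

    ```python
    class StalenessScheduler:
        """Toy online scheduler: at each refresh slot, pick the query whose
        cached page has accumulated the largest weighted staleness.
        (Illustrative only; not the policies evaluated in the paper.)"""

        def __init__(self, refresh_costs, view_weights):
            # refresh_costs: query_id -> time needed to re-run the query (s)
            # view_weights:  query_id -> relative popularity of the page it feeds
            self.cost = dict(refresh_costs)
            self.weight = dict(view_weights)
            self.last_refreshed = {q: 0.0 for q in refresh_costs}
            self.last_source_change = {q: 0.0 for q in refresh_costs}

        def notify_change(self, query_id, t):
            # The base database reports that data behind query_id changed at time t.
            self.last_source_change[query_id] = t

        def pick_next(self, now):
            # Weighted staleness: how long the page has shown outdated data,
            # scaled by how many users see that page.
            def staleness(q):
                stale_since = self.last_source_change[q]
                if stale_since <= self.last_refreshed[q]:
                    return 0.0
                return self.weight[q] * (now - stale_since)

            best = max(self.cost, key=staleness)
            return best if staleness(best) > 0 else None

        def refresh(self, query_id, now):
            # The page is fresh once the refresh query finishes.
            self.last_refreshed[query_id] = now + self.cost[query_id]


    # Example: stock quotes change often and are viewed far more than product pages.
    sched = StalenessScheduler({"stocks": 2.0, "products": 5.0},
                               {"stocks": 10, "products": 1})
    sched.notify_change("stocks", t=1.0)
    sched.notify_change("products", t=0.5)
    print(sched.pick_next(now=3.0))  # -> "stocks"
    ```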

    Development of Distributed Research Center for analysis of regional climatic and environmental changes

    We present an approach and the first results of a collaborative project carried out by a joint team of researchers from the Institute of Monitoring of Climatic and Ecological Systems, Russia, and the Earth Systems Research Center, UNH, USA. Its main objective is the development of a hardware and software platform prototype of a Distributed Research Center (DRC) for monitoring and projecting regional climatic and environmental changes in the Northern extratropical areas. The DRC should provide specialists working in climate-related sciences and decision-makers with accurate and detailed climatic characteristics for a selected area, along with reliable and affordable tools for in-depth statistical analysis and studies of the effects of climate change. Within the framework of the project, new approaches to cloud processing and analysis of large geospatial datasets (big geospatial data) inherent to climate change studies are developed and deployed on the technical platforms of both institutions. We discuss here the state of the art in this domain, describe the web-based information-computational systems developed by the partners, justify the methods chosen to reach the project goal, and briefly list the results obtained so far.

    Low-cost web-based Supervisory Control and Data Acquisition system for a microgrid testbed: A case study in design and implementation for academic and research applications

    This paper presents the design and implementation of a low-cost Supervisory Control and Data Acquisition (SCADA) system based on a Web interface, applied to a Hybrid Renewable Energy System (HRES) microgrid. The development provides a reliable, low-cost control and data acquisition system for the Renewable Energy Laboratory at Universitat Politecnica de Valencia (LabDER-UPV) in Spain, oriented toward research on microgrid stability and energy generation. The low-cost SCADA operates on a microgrid that incorporates a photovoltaic array, a wind turbine, a biomass gasification plant, and a battery bank as an energy storage system. Sensors and power meters for electrical parameters such as voltage, current, frequency, power factor, power generation, and energy consumption were processed digitally and integrated into Arduino-based devices. A master device on a Raspberry Pi board sends all this information to a local database (DB) and to a MySQL Web DB linked to a Web SCADA interface programmed in HTML5. The communication protocols include TCP/IP, I2C, SPI, and serial communication; Arduino-based slave devices communicate with the master Raspberry Pi using NRF24L01 wireless radio-frequency transceivers. Finally, a comparison between a standard SCADA system and the developed Web-based SCADA system is carried out. The results of the operational tests and the cost comparison show that the in-house Web-SCADA system is reliable and low cost, on average 86% cheaper than a standard brand-name solution, for controlling, monitoring, and logging data, as well as for local and remote operation of the HRES microgrid testbed.
    Vargas Salgado, C. A.; Águila-León, J.; Chiñas-Palacios, C.; Hurtado-Perez, E. (2019). Low-cost web-based Supervisory Control and Data Acquisition system for a microgrid testbed: A case study in design and implementation for academic and research applications. Heliyon, 5(9), 1-11. https://doi.org/10.1016/j.heliyon.2019.e02474
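    The abstract describes the master Raspberry Pi forwarding slave readings into a local MySQL database behind the Web SCADA page. A minimal sketch of that forwarding loop is given below, assuming a hypothetical `readings` table and JSON-over-serial framing (the real testbed uses NRF24L01 radios); it relies on the widely available `pyserial` and `mysql-connector-python` packages and is not the authors' implementation.

    ```python
    import json
    import mysql.connector  # pip install mysql-connector-python
    import serial           # pip install pyserial

    # Illustrative assumption: each slave sends one JSON line per sample, e.g.
    # {"node": "pv_array", "voltage": 231.4, "current": 3.2,
    #  "power": 740.5, "energy": 12.8}
    PORT = "/dev/ttyUSB0"

    db = mysql.connector.connect(host="localhost", user="scada",
                                 password="secret", database="microgrid")
    cursor = db.cursor()

    INSERT = ("INSERT INTO readings (node, voltage, current, power, energy) "
              "VALUES (%s, %s, %s, %s, %s)")

    with serial.Serial(PORT, 9600, timeout=5) as link:
        while True:
            line = link.readline().decode("utf-8", errors="ignore").strip()
            if not line:
                continue
            try:
                sample = json.loads(line)
            except ValueError:
                continue  # skip malformed frames
            cursor.execute(INSERT, (sample["node"], sample["voltage"],
                                    sample["current"], sample["power"],
                                    sample["energy"]))
            db.commit()  # make the sample visible to the Web SCADA page
    ```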

    GEORDi: Supporting lightweight end-user authoring and exploration of Linked Data

    The US and UK governments have recently made much of the data created by their various departments available as data sets (often as CSV files) on the web. Known as "open data", these are valuable assets, yet much of this data remains effectively inaccessible to citizens for the following reasons: (1) it is often a tedious, many-step process simply to find data relevant to a query, and once a candidate data set is located, it often must be downloaded and opened in a separate application just to see whether it contains the data that may satisfy the query; (2) it is difficult to join related data sets to create richer integrated information; (3) it is difficult to query a single data set, and even harder to query across related data sets; (4) to date, one has had to be well versed in semantic web protocols such as SPARQL, RDF, and URI formation to integrate and query such sources as reusable linked data. Our goal has been to develop tools that let regular, non-programmer web citizens make use of this Web of Data. To this end, we present GEORDi, a set of integrated tools and services that lets citizen users identify, explore, query, and represent these open data sources over the web via Linked Data mechanisms. In this paper we describe the GEORDi process of authoring new and translating existing open data in a linkable format, GEORDi's lens mechanism for rendering rich, plain-language descriptions and views of resources, and the GEORDi link-sliding paradigm for data exploration. With these tools we demonstrate that it is possible to make the Web of open (and linked) data accessible to ordinary web citizens.
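    To make concrete the barrier GEORDi aims to remove, the hedged sketch below shows the kind of SPARQL query a non-programmer would otherwise have to write and submit by hand; the endpoint (DBpedia's public service) and the query are illustrative and are not part of GEORDi or of the government data sets mentioned above.

    ```python
    from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

    # Without a tool like GEORDi, querying linked open data means writing
    # SPARQL against an endpoint by hand, roughly like this:
    endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
    endpoint.setQuery("""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        PREFIX dbr: <http://dbpedia.org/resource/>
        SELECT ?city ?population WHERE {
            ?city a dbo:City ;
                  dbo:country dbr:United_Kingdom ;
                  dbo:populationTotal ?population .
        }
        ORDER BY DESC(?population)
        LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)

    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["city"]["value"], row["population"]["value"])
    ```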

    Noise Data Visualization and Identification Project

    This project aims to produce a space-and-time map of noise levels within a city using data gathered from sensors, with the goal of identifying noise hot spots and quiet zones. It also includes a noise identification module that attempts to classify reported sound data.
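    A minimal sketch of the space-and-time aggregation such a noise map implies, assuming each reading carries latitude, longitude, hour, and a dB value; the field names, grid size, and sample data are illustrative assumptions, not the project's actual pipeline.

    ```python
    from collections import defaultdict

    def noise_map(readings, cell_deg=0.001):
        """Average noise level per (lat, lon, hour) cell.

        Uses a plain arithmetic mean of dB values as a simplification;
        a real pipeline would average sound energy before converting back to dB."""
        sums = defaultdict(lambda: [0.0, 0])
        for r in readings:
            cell = (round(r["lat"] / cell_deg), round(r["lon"] / cell_deg), r["hour"])
            sums[cell][0] += r["db"]
            sums[cell][1] += 1
        return {cell: total / n for cell, (total, n) in sums.items()}

    # Hot spot at rush hour vs. a quiet zone late at night (made-up samples).
    samples = [
        {"lat": 42.274, "lon": -71.808, "hour": 8, "db": 72.5},
        {"lat": 42.274, "lon": -71.808, "hour": 8, "db": 69.0},
        {"lat": 42.270, "lon": -71.800, "hour": 23, "db": 41.2},
    ]
    for cell, avg in noise_map(samples).items():
        print(cell, round(avg, 1))
    ```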

    Velir Web Tracking System

    This project consisted of the design and development of business reporting software to be used by Velir Studios. The software tool allows employees at Velir Studios to quickly and effectively obtain important business information such as employee time cards, project budgets and resource allocation, invoice reporting, and project burn-rate trends. With this software, Velir employees can quickly export the data they wish to access into an Excel document without having to enter each piece of information manually. The addition of the reporting software to the Velir management department will not only expedite the overall reporting process but will also save time and money in the future.

    Archiving the Relaxed Consistency Web

    The historical, cultural, and intellectual importance of archiving the web has been widely recognized. Today, all countries with high Internet penetration rates have established high-profile archiving initiatives to crawl and archive fast-disappearing web content for long-term use. As web technologies evolve, established web archiving techniques face challenges. This paper focuses on the potential impact of relaxed-consistency web design on crawler-driven web archiving. Relaxed-consistency websites may disseminate, albeit ephemerally, inaccurate and even contradictory information. If captured and preserved in web archives as historical records, such information will degrade the overall archival quality. To assess the extent of such quality degradation, we build a simplified feed-following application and simulate its operation with synthetic workloads. The results indicate that a non-trivial portion of a relaxed-consistency web archive may contain observable inconsistency, and that the inconsistency window may extend significantly longer than that observed at the data store. We discuss the nature of such quality degradation and propose a few possible remedies.
    Comment: 10 pages, 6 figures, CIKM 201
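    The measurement idea, snapshotting a feed-following page whose backing store is only eventually consistent and counting how often the snapshot is observably inconsistent, can be sketched as a small simulation. Everything below, from the workload rates to the exponential replication-lag model, is an illustrative assumption rather than the authors' actual experiment.

    ```python
    import random

    def simulate(num_posts=10_000, crawl_interval=60.0,
                 mean_replication_lag=5.0, seed=1):
        """Toy model: a producer posts to a feed; each post becomes visible on a
        follower's page only after an exponentially distributed replication lag.
        A crawler snapshots the page at fixed intervals; a snapshot is flagged
        inconsistent if it misses posts already acknowledged upstream."""
        random.seed(seed)
        t, posts = 0.0, []
        for _ in range(num_posts):
            t += random.expovariate(1.0)           # ~one post per second
            lag = random.expovariate(1.0 / mean_replication_lag)
            posts.append((t, t + lag))             # (acknowledged, visible on page)

        horizon = posts[-1][0]
        snapshots = inconsistent = 0
        crawl_t = crawl_interval
        while crawl_t < horizon:
            missing = sum(1 for ack, vis in posts if ack <= crawl_t < vis)
            snapshots += 1
            inconsistent += missing > 0
            crawl_t += crawl_interval
        return inconsistent / snapshots

    print(f"Fraction of inconsistent snapshots: {simulate():.2%}")
    ```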