Map Services Management
About 20 years ago, Google and other companies introduced tiled web maps, and nowadays
it is possible to produce similar work using open data and open-source software.
Web Map Service and Tile Map Service are open standards that provide ways for users
to access and visualize maps over the internet by interacting with geospatial data. Most
map-serving solutions make use of geospatial databases such as PostgreSQL/PostGIS
or file formats such as MBTiles/PMTiles. Dedicated servers follow the standards specified by
organizations such as the Open Geospatial Consortium.
The main goal of this work is to create a centralized and scalable solution that publishes
basemaps for a predefined set of geographic regions. These basemaps are displayed as part
of desktop or mobile applications with internet access.
In order to fulfill this purpose, the best approach is, for each geographic region, to generate
an MBTiles database from raw OpenStreetMap data extracts packaged by Geofabrik. The
raw data are also combined with a second data source, Natural Earth, to complete the
map information at smaller scales. The final result goes through a process of cartographic
generalization so that only the geospatial data relevant at a given map scale or zoom level
are served.
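The relationship between a geographic coordinate, a zoom level, and a tile index follows the standard Web Mercator (slippy map) tiling scheme used by the formats above. A minimal sketch of the coordinate-to-tile computation:

```python
import math

def deg2num(lat_deg: float, lon_deg: float, zoom: int) -> tuple[int, int]:
    """Convert a WGS84 lat/lon to the (x, y) index of the Web Mercator
    tile containing it at the given zoom level.

    At zoom z the world is a 2**z by 2**z grid; x grows eastward from
    -180 degrees, y grows southward from ~85.05 degrees north.
    """
    n = 2 ** zoom
    lat_rad = math.radians(lat_deg)
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return xtile, ytile

# Lisbon (38.72 N, 9.14 W) at zoom 10
print(deg2num(38.72, -9.14, 10))  # -> (486, 392)
```

A tile server resolves a request like `/10/486/392.pbf` to exactly this grid cell, which is why generalization can be keyed to the zoom level.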
The data are published as vector tiles, using a tile server, and for legacy applications there is
also the possibility of serving the basemaps as raster tiles. Another available option is to use
PMTiles files, which are similar to MBTiles but cloud-optimized and suitable for serverless
solutions.
In the interest of ensuring good performance and stability, the whole solution can be kept
behind a reverse proxy, for example an Nginx server. Taking advantage of the
HTTP range requests functionality, also available in Nginx, the serverless PMTiles option
and the standard tile server can be brought under the same umbrella.
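The mechanism that lets a static PMTiles file stand in for a tile server is the HTTP `Range` header: the client fetches only the byte span holding one tile. A minimal client-side sketch, assuming the archive is served as a static file by a range-capable server such as Nginx (the URL is hypothetical, and the offset/length pair would in practice be looked up in the PMTiles internal directory first):

```python
import urllib.request

def range_header(offset: int, length: int) -> str:
    """Value for the HTTP Range header: byte positions are inclusive,
    so `length` bytes from `offset` end at offset + length - 1."""
    return f"bytes={offset}-{offset + length - 1}"

def fetch_tile_bytes(url: str, offset: int, length: int) -> bytes:
    """Fetch one tile's bytes from a static archive via a range request.

    Any server with range support (e.g. Nginx serving a PMTiles file)
    answers 206 Partial Content with just the requested slice.
    """
    req = urllib.request.Request(url, headers={"Range": range_header(offset, length)})
    with urllib.request.urlopen(req) as resp:
        if resp.status != 206:  # 206 = Partial Content
            raise RuntimeError("server did not honour the range request")
        return resp.read()

# e.g. fetch_tile_bytes("https://example.org/basemap.pmtiles", 131072, 16384)
```

Because the same proxy can forward tile-server paths and serve the static archive with ranges enabled, both delivery modes share one front end.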
Finally, two points were considered and explored as opportunities for improvement, although
not fully implemented. The first is the ability to cache vector/raster tile requests, and the
second is the ability to deploy the solution behind a Content Delivery Network.
A basic framework and overview of a network-based RAID-like distributed back-up system: NetRAID
NetRAID is a framework for a simple, open, and free system that gives end-users the capacity to create a geographically distributed, secure, redundant system for backing up important data. NetRAID is designed to be lightweight, cross-platform, low-cost, extensible, and simple. As more important data becomes digitized, it is critical for even average home computer users to be able to ensure that their data is secure. Even for people with DVD burners who back up their data weekly, if the backups and their sources are kept in the same physical location, the value of the backup is greatly diminished. NetRAID can offer a more comprehensive end-user backup. NetRAID version 1 has some limitations in the types and speeds of networks it can run on; however, it provides a building block for future extension to almost any sort of TCP/IP network. NetRAID also has the potential to use a wide variety of encryption and data-verification schemes to ensure that data is secure in transmission and storage. The NetRAID virtual file system, sockets, and program core are written in Visual Basic.NET 2003, and should be portable to a wide variety of operating systems and languages in the future.
Feasibility of backing up server information in a distributed storage using client workstations hard drives
As a consequence of today's large hard-disk capacities, many corporate networks contain a considerable amount of unused hard-disk storage space dispersed among their computers. For the immediate future, the purpose of this unused space is not clearly defined, and it represents a waste of resources. Several studies suggest and evaluate ways to take advantage of unused workstation hard-disk space in a network. However, there is no evidence of studies that consider disk-based backup, distributed storage, and unused workstation storage for backing up server information in small-business networks. Determining whether it is possible to utilize these resources for backing up server information can help small businesses obtain a greater return on investment in their networks. In this paper, I present a case study in which I found that, under specific conditions, there are resources that a backup system can utilize to back up server information using workstations' unused hard-disk space without significantly affecting normal operation of the network.
Peer-To-Peer Backup for Personal Area Networks
FlashBack is a peer-to-peer backup algorithm designed for power-constrained devices running in a personal area network (PAN). Backups are performed transparently as local updates initiate the spread of backup data among a subset of the currently available peers. FlashBack limits power usage by avoiding flooding and keeping neighbor sets small. FlashBack has also been designed to utilize powered infrastructure when possible to further extend device lifetime. We propose our architecture and algorithms, and present initial experimental results that illustrate FlashBack's performance characteristics.
Making Data Storage Efficient in the Era of Cloud Computing
We entered the era of cloud computing in the last decade, as many paradigm shifts happened in how people write and deploy applications. Despite the advancement of cloud computing, data storage abstractions have not evolved much, causing inefficiencies in performance, cost, and security.
This dissertation proposes a novel approach to make data storage efficient in the era of cloud computing by building new storage abstractions and systems that bridge the gap between cloud computing and data storage and simplify development. We build four systems to address four data inefficiencies in cloud computing.
The first system, Grandet, solves the data storage inefficiency caused by the paradigm shift from upfront provisioning to a variety of pay-as-you-go cloud services. Grandet is an extensible storage system that significantly reduces storage costs for web applications deployed in the cloud. Under the hood, it supports multiple heterogeneous stores and unifies them by placing each data object at the store deemed most economical. Our results show that Grandet reduces storage costs by an average of 42.4%, and that it is fast, scalable, and easy to use.
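The placement idea behind a system like Grandet can be illustrated with a toy per-object cost model; the store names and prices below are invented for illustration, not Grandet's actual tables or any provider's real pricing:

```python
def cheapest_store(size_gb, gets_per_month, puts_per_month, stores):
    """Pick the store with the lowest estimated monthly cost for one object.

    `stores` maps a store name to a tuple of
    (storage $/GB-month, $ per GET, $ per PUT) -- toy numbers.
    """
    def monthly_cost(prices):
        per_gb, per_get, per_put = prices
        return (size_gb * per_gb
                + gets_per_month * per_get
                + puts_per_month * per_put)
    return min(stores, key=lambda name: monthly_cost(stores[name]))

stores = {
    "blob": (0.023, 4e-7, 5e-6),  # cheap storage, modest request cost
    "kv":   (0.25,  1e-7, 1e-6),  # expensive storage, cheap fast requests
}

# A large, rarely accessed object belongs in the cheap blob store.
print(cheapest_store(50.0, 100, 10, stores))  # -> blob
```

A tiny but very hot object flips the decision toward the request-optimized store, which is the essence of per-object economical placement.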
The second system, Unic, solves the data inefficiency caused by the paradigm shift from single-tenancy to multi-tenancy. Unic securely deduplicates general computations. It exports a cache service that allows cloud applications running on behalf of mutually distrusting users to memoize and reuse computation results, thereby improving performance. Unic achieves both integrity and secrecy through a novel use of code attestation, and it provides a simple yet expressive API that enables applications to deduplicate their own rich computations. Our results show that Unic is easy to use and speeds up applications by an average of 7.58x, with little storage overhead.
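The core caching idea here, reusing a result when the same computation runs on the same input, can be sketched as a content-keyed memo table. The keying scheme below is illustrative only; Unic's actual design additionally relies on code attestation for integrity and secrecy across distrusting tenants:

```python
import hashlib

class ComputationCache:
    """Memoize results keyed by a digest of the code identity and its input."""

    def __init__(self):
        self._table = {}

    def _key(self, code_id: str, data: bytes) -> str:
        # Hash code identity and input together so a changed program
        # or changed input never hits a stale entry.
        return hashlib.sha256(code_id.encode() + b"\x00" + data).hexdigest()

    def lookup(self, code_id: str, data: bytes):
        return self._table.get(self._key(code_id, data))

    def store(self, code_id: str, data: bytes, result):
        self._table[self._key(code_id, data)] = result

cache = ComputationCache()
cache.store("wordcount-v1", b"hello world", 2)
print(cache.lookup("wordcount-v1", b"hello world"))  # -> 2
print(cache.lookup("wordcount-v2", b"hello world"))  # -> None (different code)
```

Binding the key to the code identity is what lets mutually distrusting users share one cache: a different program can never be served another program's result.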
The third system, Lambdata, solves the data inefficiency caused by the paradigm shift to serverless computing, where developers only write core business logic, and cloud service providers maintain all the infrastructure. Lambdata is a novel serverless computing system that enables developers to declare a cloud function's data intents, including both data read and data written. Once data intents are made explicit, Lambdata performs a variety of optimizations to improve speed, including caching data locally and scheduling functions based on code and data locality. Our results show that Lambdata achieves an average speedup of 1.51x on the turnaround time of practical workloads and reduces monetary cost by 16.5%.
The fourth system, CleanOS, solves the data inefficiency caused by the paradigm shift from desktop computers to smartphones always connected to the cloud. CleanOS is a new Android-based operating system that manages sensitive data rigorously and maintains a clean environment at all times. It identifies and tracks sensitive data, encrypts it with a key, and evicts that key to the cloud when the data is not in active use on the device. Our results show that CleanOS limits sensitive-data exposure drastically while incurring acceptable overheads on mobile networks.