
    Engineering an Open Web Syndication Interchange with Discovery and Recommender Capabilities

    Web syndication has become a popular means of delivering relevant information to people online, but the complexity of standards, algorithms and applications poses considerable challenges to engineers. This paper describes the design and development of a novel Web-based syndication intermediary called InterSynd and a simple Web client as a proof of concept. We developed format-neutral middleware that sits between content sources and the user. Additional objectives were to add feed discovery and recommendation components to the intermediary. A search-based feed discovery module helps users find relevant feed sources. Implicit collaborative recommendations of new feeds are also made to the user. The syndication software uses open-standard XML technologies and free open source libraries. Extensibility and re-configurability were explicit goals. The experience shows that a modular architecture can combine open source modules to build state-of-the-art syndication middleware and applications. The data produced by software metrics indicate the high degree of modularity retained.
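The abstract does not spell out the recommendation algorithm; the sketch below shows one plausible form of implicit collaborative feed recommendation, scoring unseen feeds by the subscription overlap (Jaccard similarity) between users. The user names, feed identifiers and the similarity measure are illustrative assumptions, not details from the InterSynd paper.

```python
# Minimal sketch of implicit collaborative feed recommendation: a feed is
# suggested to a user because other users with overlapping subscriptions read
# it. All data and the Jaccard measure are illustrative assumptions.
from collections import Counter

subscriptions = {                       # user -> set of subscribed feed IDs
    "alice": {"bbc", "wired", "nature"},
    "bob":   {"bbc", "wired", "arstechnica"},
    "carol": {"nature", "sciencedaily"},
}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user: str, k: int = 3) -> list[str]:
    """Rank feeds the user does not read yet, weighted by neighbour similarity."""
    own = subscriptions[user]
    scores: Counter[str] = Counter()
    for other, feeds in subscriptions.items():
        if other == user:
            continue
        sim = jaccard(own, feeds)
        for feed in feeds - own:
            scores[feed] += sim
    return [feed for feed, _ in scores.most_common(k)]

print(recommend("alice"))   # ['arstechnica', 'sciencedaily']
```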

    XPath: Looking Forward

    The location path language XPath is of particular importance for XML applications, since it is a core component of many XML processing standards such as XSLT or XQuery. In this paper, based on the axis symmetry of XPath, equivalences of XPath 1.0 location paths involving reverse axes, such as ancestor and preceding, are established. These equivalences are used as rewriting rules in an algorithm for transforming location paths with reverse axes into equivalent reverse-axis-free ones. Location paths without reverse axes, as generated by the presented rewriting algorithm, enable efficient SAX-like streamed data processing of XPath.
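As an illustration of the kind of equivalence such a rewriting relies on, the sketch below checks with lxml that a location path using the reverse axis ancestor selects the same node set as a reverse-axis-free path using a descendant predicate. The sample document and this particular pair of expressions are illustrative and are not taken from the paper's rule set.

```python
# Illustrative equivalence: //b/ancestor::a (reverse axis) selects the same
# node set as //a[descendant::b] (forward axes only).
from lxml import etree

doc = etree.fromstring(
    "<root><a id='1'><b/></a><a id='2'><c/></a><a id='3'><b/></a></root>"
)

with_reverse_axis = doc.xpath("//b/ancestor::a")      # uses a reverse axis
reverse_axis_free = doc.xpath("//a[descendant::b]")   # forward axes only

assert [n.get("id") for n in with_reverse_axis] == ["1", "3"]
assert [n.get("id") for n in reverse_axis_free] == ["1", "3"]
```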

    Web Atlas as a Tool for Integrated Coastal Management: from Data to Practical Knowledge

    Despite the importance of coastal areas to sustainable development, they are poorly known by the public or even by decision-makers. This undermines consistent action towards their protection. Existing data and information, published in very complex language, tend to be restricted to academic use. A Coastal Web Atlas such as the one developed here is a tool that makes this information more accessible to managers by preserving, integrating, comparing, and sharing data as smart maps. Spatial analysis based on multiple impact indicators facilitates the correlation of causes and effects. A Coastal Web Atlas available to a broad audience can be a strong instrument for spatial planning and oversight. The authors propose to improve coastal area management by using colors on maps to decode scientific language into friendly language and by publishing the result on a geoportal. This technology promotes the use of previously collected data and enables collaborative work. A pilot experiment is being developed in the Santos Port Region, on the São Paulo state coast, Brazil: http://santoswebatlas.com.br
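A minimal sketch of the "scientific language to friendly language" idea: numeric impact-indicator values are translated into a small set of color classes that a web map legend can display. The thresholds, colors, sector names and indicator values are assumptions for illustration, not values from the atlas.

```python
# Map a normalised impact indicator (0..1) to a traffic-light colour class
# suitable for a web map legend; all values below are hypothetical.
def impact_colour(value: float) -> str:
    if value < 0.33:
        return "green"    # low impact
    if value < 0.66:
        return "yellow"   # moderate impact
    return "red"          # high impact

# Hypothetical erosion-pressure indicator per coastal sector
sectors = {"Ponta da Praia": 0.72, "Gonzaga": 0.41, "José Menino": 0.18}
legend = {name: impact_colour(v) for name, v in sectors.items()}
print(legend)   # {'Ponta da Praia': 'red', 'Gonzaga': 'yellow', 'José Menino': 'green'}
```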

    Acquisition and Declarative Analytical Processing of Spatio-Temporal Observation Data

    A generic framework for spatio-temporal observation data acquisition and declarative analytical processing has been designed and implemented in this Thesis. The main contributions of this Thesis may be summarized as follows: 1) generalization of a data acquisition and dissemination server, with great applicability in many scientific and industrial domains, providing flexibility in the incorporation of different technologies for data acquisition, data persistence and data dissemination, 2) definition of a new hybrid logical-functional paradigm to formalize a novel data model for the integrated management of entity and sampled data, 3) definition of a novel spatio-temporal declarative data analysis language for the previous data model, 4) definition of a data warehouse data model supporting observation data semantics, including application of the above language to the declarative definition of observation processes executed during observation data load, and 5) column-oriented parallel and distributed implementation of the spatial analysis declarative language. The huge amount of data to be processed forces the exploitation of current multi-core hardware architectures and multi-node cluster infrastructures
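To make the notion of declarative spatio-temporal analysis over observation data concrete, the sketch below expresses a small aggregation (mean sampled value per time window and spatial cell) with pandas. This is purely illustrative: the Thesis defines its own hybrid logical-functional language and a column-oriented, parallel and distributed implementation, neither of which is shown here, and the observation values are invented.

```python
# Declarative statement of intent: average the sampled value per 1-hour
# window and per 10x10 spatial cell; how it is executed is left to the engine.
import pandas as pd

observations = pd.DataFrame({
    "time":  pd.to_datetime(["2020-01-01 00:10", "2020-01-01 00:40",
                             "2020-01-01 01:05", "2020-01-01 01:20"]),
    "x":     [10.1, 10.4, 35.2, 35.8],      # sensor coordinates (arbitrary units)
    "y":     [20.3, 20.9, 60.1, 60.4],
    "value": [3.2, 3.8, 7.1, 6.5],           # sampled measurement
})

aggregated = (
    observations
    .assign(cell_x=(observations.x // 10).astype(int),
            cell_y=(observations.y // 10).astype(int))
    .groupby([pd.Grouper(key="time", freq="1h"), "cell_x", "cell_y"])["value"]
    .mean()
    .reset_index()
)
print(aggregated)
```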

    Towards evidence-based, GIS-driven national spatial health information infrastructure and surveillance services in the United Kingdom

    The term "Geographic Information Systems" (GIS) has been added to MeSH in 2003, a step reflecting the importance and growing use of GIS in health and healthcare research and practices. GIS have much more to offer than the obvious digital cartography (map) functions. From a community health perspective, GIS could potentially act as powerful evidence-based practice tools for early problem detection and solving. When properly used, GIS can: inform and educate (professionals and the public); empower decision-making at all levels; help in planning and tweaking clinically and cost-effective actions, in predicting outcomes before making any financial commitments and ascribing priorities in a climate of finite resources; change practices; and continually monitor and analyse changes, as well as sentinel events. Yet despite all these potentials for GIS, they remain under-utilised in the UK National Health Service (NHS). This paper has the following objectives: (1) to illustrate with practical, real-world scenarios and examples from the literature the different GIS methods and uses to improve community health and healthcare practices, e.g., for improving hospital bed availability, in community health and bioterrorism surveillance services, and in the latest SARS outbreak; (2) to discuss challenges and problems currently hindering the wide-scale adoption of GIS across the NHS; and (3) to identify the most important requirements and ingredients for addressing these challenges, and realising GIS potential within the NHS, guided by related initiatives worldwide. The ultimate goal is to illuminate the road towards implementing a comprehensive national, multi-agency spatio-temporal health information infrastructure functioning proactively in real time. The concepts and principles presented in this paper can be also applied in other countries, and on regional (e.g., European Union) and global levels

    Putting the past in place: a conceptual data model for a 4D archaeological GIS


    A model for the digital representation and transaction of complex pricing and ordering for high-value spatial products and services

    A model for the digital representation and transaction of complex pricing and ordering for high-value spatial products and services. Roland M. Wagner. Since people and everything around them fundamentally exist in space, a great many of their decisions are influenced by the spatial dimension. The digitisation of geo-referenced information over the last decade has created the preconditions for integrating spatial references into general, automated processes. Digital geo-referenced information includes, for example, traffic reports, construction plans or site plans. Digital geoinformation can be very large (terabytes or petabytes) and at the same time becomes outdated quickly. Its production is also very laborious, so geoinformation is generally very expensive. Furthermore, geoinformation can often be processed automatically on the basis of mathematical rules, and it is generally produced, held and maintained in a decentralised way. Given these characteristics, the use of web services appears to offer many advantages for this domain. At the same time, the introduction of decentralised web services raises complex questions in the area of pricing and ordering. The wide range of ways in which digital products can be configured by services, combined with very high absolute prices, results in complex pricing models. The first part of the thesis analyses the structure of these pricing models and proposes a generic approach to their digital representation using mathematical formulas. Since prices can often only be calculated for a concrete configuration, production parameters such as thematic layers must also be taken into account. Because pricing models often contain relationships between their own instances, a method was developed by which these can be digitally aggregated and represented in price and order catalogues via mathematical relationships. A more elaborate but generic approach makes it possible to capture complex pricing models from different providers with very different pricing schemes, yielding considerable rationalisation effects in practice. The explicit separation of data and software allows maintenance without changes to the software code. The data format developed is called the XML complex Configuration & Pricing Format (XCPF). The second part describes the possible transactions on the digitally represented pricing models and their embedding into existing web service structures. An essential criterion for a sustainable solution is independence from the implementation of product generation services, while a solution should also support service chaining across cascades. This leads to the paradox that the product data streams must not be affected, while at the same time prices must be calculated that can only be derived from knowledge of the content of those product data streams (the protocol); access to the content of these protocols is in turn only possible with knowledge of the protocols' structure. A further critical requirement is that a solution should not apply to a single protocol only, but should be dynamically usable for many protocols and in distributed networks.
The solution is achieved by redirecting the data streams, by changing the URL, to a protocol-specific facade. These facades are able to understand the protocol of the product data streams, to address and filter exactly the data elements required, and to pass them via a mapping procedure to a generalised data structure that depends only on the respective pricing model. The information isolated in this way can then be used for sub-requests; for example, the price can be determined before a product is generated. Only when an actual order is placed via a sub-request is the original, packaged data stream unpacked and forwarded to the product generation service. The response is redirected again and routed through the pricing and ordering layer, which can then trigger further processes, such as invoicing, via a facade. This separation between protocol-specific and general data structures makes it possible, in a distributed infrastructure, to load only the facades that are actually needed from the distributed network, and only on demand. The locations of facade services can be published via so-called registries. In this way, new protocols or new versions can be introduced fully automatically into an existing, distributed network. The pricing and ordering component is called the Web Pricing & Ordering Service (WPOS). Examples of XCPF and WPOS make the solution more tangible throughout this document.
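A minimal sketch of the central idea of representing a price model as an evaluable formula over configuration parameters, so that a quote can be produced before the product is generated. The parameter names, surcharge factors and formula are illustrative assumptions and do not reproduce the XCPF format or the WPOS interfaces.

```python
# Hypothetical price model: base fee plus an area-dependent component,
# scaled by surcharge factors for the thematic layers requested.
from dataclasses import dataclass

@dataclass
class PriceModel:
    base_fee: float          # fixed fee per order
    price_per_km2: float     # area-dependent component
    layer_factor: dict       # surcharge factor per thematic layer

    def quote(self, area_km2: float, layers: list[str]) -> float:
        factor = 1.0
        for layer in layers:
            factor *= self.layer_factor.get(layer, 1.0)
        return self.base_fee + self.price_per_km2 * area_km2 * factor

# Hypothetical provider price model and a concrete order configuration
model = PriceModel(base_fee=50.0, price_per_km2=2.5,
                   layer_factor={"cadastre": 1.5, "orthophoto": 1.2})
print(model.quote(area_km2=120.0, layers=["cadastre", "orthophoto"]))  # 590.0
```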

    The Analysis of Open Source Software and Data for Establishment of GIS Services Throughout the Network in a Mapping Organization at National or International Level

    Federal agencies and their partners collect and manage large amounts of geospatial data, but these data are often not easily found when needed, and sometimes data are collected or purchased multiple times. In short, the best government data are not always organized and managed efficiently to support decision making in a timely and cost-effective manner. National mapping agencies, and the various departments and authorities responsible for collecting different types of geospatial data, cannot continue for long to operate as they did a few years ago, as if living on an island. Leaders need to look at what is now possible that was not possible before, considering capabilities such as cloud computing, crowd-sourced data collection, freely available open source remotely sensed data, multi-source information vital for decision making, and new Web-accessible services that are sometimes provided at no cost. Many of these services could previously be obtained only from local GIS experts. These authorities need to consider the available solutions, gather information about new capabilities, reconsider agency missions and goals, review and revise policies, make budget and human resource decisions, and evaluate new products, cloud services, and cloud service providers. To do so, the right tools must be chosen to reach the above-mentioned goals. Data collection is the most costly part of mapping and of establishing a geographic information system, not only because of the cost of the data collection task itself but also because of the damage caused by delays and by the time it takes to deliver the proper decision-making information from the field to the user's hands. In fact, the time a project spends on collecting, processing, and presenting geospatial information strongly affects the cost of larger projects such as disaster management, construction, city planning, environmental management, etc., assuming that all the necessary information from existing sources is delivered directly to the user's computer. A good description of GIS project optimization or improvement is finding a methodology that reduces time and cost while increasing data and service quality (accuracy, currency, completeness, consistency, suitability, information content, integrity, integration capability, and fitness for use, as well as the user's specific needs and conditions, which must be addressed with special attention). Each of these issues must be addressed individually, and at the same time the overall solution must be provided in a global manner considering all the criteria. In this thesis we first discuss the problem we are facing and what needs to be done to establish a National Spatial Data Infrastructure (NSDI), its definition and related components. We then look for available open source software solutions to cover the whole process: data collection, database management, data processing, and finally data services and presentation. The first distinction among software packages is whether they are open source and free, or commercial and proprietary; making this distinction requires a clear specification for the categorization.
From a legal point of view it is often difficult to decide which class a piece of software belongs to, so the various terms must be clarified. Within each of these two groups we introduce a further classification based on the functionalities and applications the software is designed for in GIScience. Building on the outcome of the second chapter, which presents the technical process for selecting suitable and reliable software according to the characteristics of the users' needs and the required components, chapter 3 elaborates on the GeoNode software as our best candidate tool for the tasks stated above. Chapter 4 discusses the open source data available globally, screened against predefined data quality criteria (such as theme, data content, scale, licensing, and coverage) according to the metadata statements inside the datasets, by means of a bibliographic review, technical documentation, and web search engines. Chapter 5 discusses further data quality concepts and defines a set of protocols for evaluating all datasets against the tasks for which a mapping organization is, in general, responsible towards potential users in different disciplines such as reconnaissance, city planning, topographic mapping, transportation, environmental control, disaster management, etc. In chapter 6, all the data quality assessments and protocols are applied to the pre-filtered, proposed datasets. In the final scores and ranking, each dataset receives a value corresponding to its quality according to the rules defined in the previous chapter. In the last step, a weight vector is derived from questions answered by the user with reference to the project at hand, in order to finalize the most appropriate selection of free and open source data; this data quality preference is defined by identifying the weight vector and applying it to the quality matrix to obtain the final quality scores and ranking. The chapter ends with a section presenting the use of the datasets in projects such as the "Early Impact Analysis" and the "Extreme Rainfall Detection System (ERDS) - version 2" carried out by ITHACA. Finally, the conclusion discusses the important criteria as well as future trends in GIS software and closes with recommendations.
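The final scoring step described above, applying a user-derived weight vector to a quality matrix to rank candidate datasets, can be sketched as follows. The criteria names, example datasets, scores and weights are illustrative assumptions, not values from the thesis.

```python
# Weighted-sum scoring of candidate datasets against quality criteria.
import numpy as np

criteria = ["accuracy", "currency", "completeness", "licensing", "coverage"]

# Quality matrix: one row per candidate dataset, one column per criterion,
# with scores already normalised to 0..1 by the evaluation protocol.
quality = np.array([
    [0.9, 0.6, 0.8, 1.0, 0.7],   # hypothetical dataset A
    [0.7, 0.9, 0.6, 0.8, 0.9],   # hypothetical dataset B
    [0.8, 0.5, 0.9, 0.6, 0.8],   # hypothetical dataset C
])

# Weight vector derived from the user's answers about the project at hand;
# normalised so the final score stays in 0..1.
weights = np.array([0.3, 0.2, 0.2, 0.15, 0.15])
weights = weights / weights.sum()

scores = quality @ weights                 # weighted sum per dataset
ranking = np.argsort(scores)[::-1]         # best dataset first

for rank, idx in enumerate(ranking, start=1):
    print(f"rank {rank}: dataset {idx} score {scores[idx]:.3f}")
```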

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out through the cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was training in geospatial data acquisition and processing for students attending Architecture and Engineering courses, in order to start up a team of "volunteer mappers". The project aims to document the environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in the activities connected with geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997. The area was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial LiDAR, close-range and aerial photogrammetry, topographic and GNSS instruments, etc., or by non-conventional systems and instruments such as UAVs, mobile mapping, etc. The ultimate goal is to implement a WebGIS platform to share all the data collected with local authorities and the Civil Protection.