
    Seamless Online Distribution of Amundsen Multibeam Data

    Since 2003, all underway multibeam and sub-bottom data from the Canadian Coast Guard Ship Amundsen have been posted online within approximately six months of the end of each cruise. A custom interface allowing the user to access 15' latitude by 30' longitude mapsheets was implemented in 2006, allowing the user to download the bathymetric and backscatter data at 10 metre resolution. While this interface matched the underlying data management scheme implemented at the University of New Brunswick, the zoom and pan capability was at a fixed scale with limited contextual data. In the past few years, with the introduction of web-based geographic information systems (GIS) (e.g. Google Maps, Yahoo Maps, Bing Maps), thousands of maps have been published online. These online GIS programs are a suitable platform to display the seven years of Amundsen coverage within the context of the GIS-served satellite imagery, and allow the user to freely browse all data in a familiar interface. The challenge, however, in serving third-party data through these map engines is to cope efficiently with the multiple zoom levels and changing resolutions. Custom tiling software was developed to take all the raw data from the seven years of Amundsen (and others') multibeam coverage and convert it into multiple-resolution images suitable for interpretation by Google Maps. The images were stored in a pyramid structure utilizing Google's map projection and uniquely named to reflect their georeferencing and resolution. This image pyramid is then accessed by Google Maps according to the user's current zoom level to optimize visualization. This multi-resolution data is served on demand from the University of New Brunswick for dynamic overlay on Google's satellite data. This web interface allows any interested parties to easily view multibeam and sub-bottom data from the Pacific Ocean through the Canadian Arctic Archipelago and into the Atlantic Ocean. The broad overview helps users understand regional trends and then focus on areas of interest at high resolution to see particular features. The web interface also provides a link to the 15' by 30' mapsheet model with full source traceability.
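
The core of such a tile pyramid is converting a latitude/longitude to Web Mercator tile indices at a given zoom level, then deriving a unique name that encodes georeference and resolution. A minimal sketch follows; the `z/x/y` naming convention here is illustrative, not the actual scheme used at the University of New Brunswick.

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Convert a WGS84 lat/lon to Web Mercator (Google Maps) tile indices."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom  # number of tiles along each axis at this zoom level
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tile_name(x, y, zoom):
    """A unique file name reflecting the tile's georeferencing and resolution."""
    return f"z{zoom}_x{x}_y{y}.png"
```

The map client requests only the tiles covering the current viewport at the current zoom, so the server never has to resample the full-resolution grids on the fly.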

    High-Performance Cloud Computing: A View of Scientific Applications

    Scientific computing often requires the availability of a massive number of computers for performing large-scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis, and released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible, service-based infrastructure supports multiple programming paradigms that let Aneka address a variety of scenarios: from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of an fMRI brain imaging workflow. Comment: 13 pages, 9 figures, conference paper.

    3D and 4D Simulations for Landscape Reconstruction and Damage Scenarios. GIS Pilot Applications

    The project 3D and 4D Simulations for Landscape Reconstruction and Damage Scenarios: GIS Pilot Applications has been devised with the intention to deal with the demand for research, innovation and applicative methodology on the part of the international programme, requiring concrete results to increase the capacity to know, anticipate and respond to a natural disaster. This project therefore sets out to develop an experimental methodology, a wide geodatabase, a connected performant GIS platform and multifunctional scenarios able to profitably relate the added value deriving from different geotechnologies, aimed at a series of crucial steps regarding landscape reconstruction, event simulation, damage evaluation, emergency management and multi-temporal analysis. The Vesuvius area has been chosen for the pilot application owing to such an impressive number of people and buildings subject to volcanic risk that one could speak in terms of a possible national disaster. The steps of the project revolve around the following core elements: creation of models that reproduce the territorial and anthropic structure of past periods, and reconstruction of the urbanized area, with temporal distinctions; three-dimensional representation of the Vesuvius area in terms of infrastructural-residential aspects; GIS simulation of the expected event; first examination of the healthcare-epidemiological consequences; educational proposals. This paper represents a proactive contribution which describes the aims of the project, the steps which constitute a set of specific procedures for the methodology we are experimenting with, and some thoughts regarding the geodatabase useful to “package” illustrative elaborations. Since the involvement of the population and adequate hazard preparedness are very important aspects, some educational and communicational considerations are presented in connection with the use of geotechnologies to promote the knowledge of risk.

    The XII century towers, a benchmark of the Rome countryside almost cancelled. The safeguard plan by low cost uav and terrestrial DSM photogrammetry surveying and 3D Web GIS applications

    “Giving a bird's-eye look at the Rome countryside throughout the central Middle Ages, it would appear as if the city's multiple towers had been widely spread around the territory” on a radial range of at most thirty kilometres from the Capitoline Hill (Carocci and Vendittelli, 2004). This is the consequence of the phenomenon identified by the neologism “Incasalamento”, described in depth in the following paper, intended as the general process of expansion of urban society's interests outside the downtown limits, which started in the second half of the XII century, developed through all of the XIII century, and slowed down and ended in the following years. From the XIX century until today, the architectural remains of this reality have raised the interest of many national and international scholars, who have aimed to study and catalogue them all to create a complete framework which, because of its extent, has not yet allowed a detailed element-by-element analysis. From this situation our plan of intervention has started: we will apply integrated survey methods and technologies of terrestrial and UAV near stereo-photogrammetry, using low-cost drones as well as action cameras and reflex cameras on extensible rods, integrated and referenced with GPS and topographic surveys. In the final project we intend to produce 3D scaled and textured surface models of each artifact (almost two hundred were initially observed still standing), to study their dimensions and structure individually, to analyze the building materials and details, and to formulate a hypothesis about their function, based also on their position within the territory. These models, subsequently georeferenced, will be imported into a 2D and 3D WebGIS and organized in layers made visible on reference basemaps, as well as on historical maps.

    SpatialData IO: A Web-based Spatial Data Customization Tool for Collection of Area of Interest Information

    To get services from a spatial data analysis service provider, a user often needs to specify the area of interest (AOI) to the provider in some spatial data format. The user has multiple ways to express AOI information. One is writing all the information in a file according to a spatial data format, or downloading datasets from the world wide web. Due to the technically detailed structure of spatial data, the first option is too complicated and time-consuming for a non-specialist to do manually; for example, writing location information for 1000 buildings, roads or rail lines of a city. On the other hand, a full dataset may include unnecessary location data: some objects may be too small, too large, or overlap with other objects. Additionally, data can be in different formats, e.g. geojson, kml, shp etc. So it is quite challenging for the user to quickly understand and modify the spatial data according to their needs. The goal of this thesis is to develop a free and open-source, user-friendly web solution for non-GIS-specialist end-users, so that they can solve the above-mentioned problems graphically, with different spatial operations performed in the background. After processing those data, end-users can visualize and check the result on the map, and it is available for download in different spatial data formats as well. The expected outcome of this application is a graphical solution for performing different geospatial operations when collecting information for the AOI, helping users send clean and specific AOI information to their desired providers.
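
As a minimal illustration of the kind of output such a tool produces, the snippet below assembles an AOI polygon as a GeoJSON FeatureCollection using only the standard library; the coordinates are an arbitrary example rectangle, not data from the thesis.

```python
import json

def make_aoi_geojson(coords, name="AOI"):
    """Wrap an open ring of (lon, lat) pairs as a GeoJSON Polygon feature.

    The first point is repeated at the end to close the ring, as the
    GeoJSON specification (RFC 7946) requires.
    """
    ring = [list(c) for c in coords] + [list(coords[0])]
    return {
        "type": "FeatureCollection",
        "features": [{
            "type": "Feature",
            "properties": {"name": name},
            "geometry": {"type": "Polygon", "coordinates": [ring]},
        }],
    }

# An example rectangular AOI (lon/lat corner coordinates, WGS84).
aoi = make_aoi_geojson([(26.68, 58.34), (26.80, 58.34),
                        (26.80, 58.41), (26.68, 58.41)])
print(json.dumps(aoi)[:60])
```

A web tool lets the user draw this rectangle on a map instead of typing coordinates, then serializes the drawn geometry to GeoJSON (or converts it to KML/SHP) behind the scenes.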

    Modeling Mohave Ground Squirrel Habitat

    The Mohave ground squirrel (Spermophilus mohavensis) is endemic to the northwestern region of the Mojave Desert. It is currently considered a threatened species by the state of California. Habitat loss due to human development within the species’ limited geographic area has been identified as a major contributor to its threatened status. Habitat degradation due to livestock grazing, military training, and the increased abundance of invasive and non-native plant species are also serious conservation issues. It is important to be able to identify remaining areas of suitable habitat in order to conserve and manage viable Mohave ground squirrel populations. This project consisted of developing a habitat suitability model for the Mohave ground squirrel. The weighted overlay model approach was chosen after reviewing its effectiveness in similar case studies. This model allowed the combination, for integrated analysis, of selected variables that have been identified as important for the occurrence of the species. The Weighted Overlay tool from the Spatial Analyst toolbox in ArcGIS 9.1 was used to accomplish this task. The model was designed by assigning values of relative importance to the datasets as well as to the layers themselves. The variables used were: elevation, slope, land cover, vegetation, geomorphology, mean winter precipitation (1976–2006), percentage of dry periods (1976–2006), and consecutive dry winter periods (1976–2006). The results obtained from the model will help to identify areas that are important for the conservation of the Mohave ground squirrel. The model can be tested for accuracy against field data and modified to improve its predictive performance. It will serve as a tool to help understand habitat variables that are critical for the survival of the species, and will also aid researchers in gathering more information on the species itself and its habitat requirements.
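
The weighted overlay technique can be sketched in a few lines: each input raster is first reclassified to a common suitability scale, then the layers are combined with percent-influence weights. The rasters, classes and weights below are hypothetical, not the values used in this project.

```python
import numpy as np

# Hypothetical reclassified rasters: each cell holds a suitability class
# 1-9 (higher = more suitable), as the Weighted Overlay tool expects.
elevation = np.array([[9, 7], [3, 1]])
slope     = np.array([[8, 6], [4, 2]])
landcover = np.array([[9, 9], [1, 1]])

# Percent-influence weights; they must sum to 1.0 (i.e. 100 %).
weights = {"elevation": 0.5, "slope": 0.3, "landcover": 0.2}

# Cell-by-cell weighted sum gives the composite suitability surface.
suitability = (weights["elevation"] * elevation
               + weights["slope"] * slope
               + weights["landcover"] * landcover)
print(np.round(suitability, 1))
```

Cells scoring near 9 would be candidate conservation areas; the map can then be compared against field occurrence records to validate the weighting.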

    RAPID WEBGIS DEVELOPMENT FOR EMERGENCY MANAGEMENT

    The use of spatial data during emergency response and management helps to make faster and better decisions. Moreover, spatial data should be as up to date as possible and easy to access. To face the challenge of rapid and updated data sharing, the most efficient solution is widely considered to be the internet, where the field of web mapping is constantly evolving. ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action) is a non-profit association founded by Politecnico di Torino and SITI (Higher Institute for the Environmental Systems) as a joint project with the WFP (World Food Programme). The collaboration with the WFP drives projects related to Early Warning Systems (i.e. flood and drought monitoring) and Early Impact Systems (e.g. rapid mapping and assessment through remote sensing systems). The Web GIS team has built, and is continuously improving, a complex architecture based entirely on Open Source tools. This architecture is composed of three main areas: the database environment, the server-side logic and the client-side logic. Each of them is implemented following the MCV (Model Controller View) pattern, which means the separation of the different logic layers (database interaction, business logic and presentation). The MCV architecture makes it possible to build a Web GIS application for data viewing and exploration quickly and easily. In an emergency, data publication can be performed almost immediately, as soon as data production is completed. The server-side system is based on the Python language and the Django web development framework, while the client side relies on OpenLayers, GeoExt and Ext.js, which manage data retrieval and the user interface. The MCV pattern applied to JavaScript keeps the interface generation and data retrieval logic separate from the general application configuration, so the server-side environment can take care of generating the configuration file. The web application building process is data driven and can be considered a view of the current architecture, composed of data and data interaction tools. Once completely automated, the Web GIS application building process can be performed directly by the final user, who can customize data layers and controls to interact with the data.
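
The data-driven configuration step can be sketched as follows: the server side walks a registry of published layers and emits the configuration the client viewer loads. The layer names, fields and WMS URL below are hypothetical, not taken from the ITHACA schema.

```python
import json

# Hypothetical layer registry; in a Django setting this would come from
# the database models rather than a hard-coded list.
LAYERS = [
    {"name": "flood_extent",  "wms_layer": "demo:flood_extent",  "visible": True},
    {"name": "drought_index", "wms_layer": "demo:drought_index", "visible": False},
]

def build_client_config(layers, wms_url="https://example.org/geoserver/wms"):
    """Generate the client-side viewer configuration from the registry."""
    return {
        "wms_url": wms_url,
        "layers": [
            {"title": l["name"].replace("_", " ").title(),  # display name
             "name": l["wms_layer"],
             "visible": l["visible"]}
            for l in layers
        ],
    }

print(json.dumps(build_client_config(LAYERS), indent=2))
```

Serving this JSON from a view means that publishing a new emergency layer only requires a database entry; the client interface regenerates itself without code changes.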

    Myths and Realities about Online Forums in Open Source Software Development: An Empirical Study

    The use of free and open source software (OSS) is gaining momentum due to the ever-increasing availability and use of the Internet. Organizations are now also adopting open source software, despite some reservations, in particular regarding the provision and availability of support. Some of the biggest concerns about free and open source software are post-release software defects and their rectification, management of dynamic requirements, and support for users. A common belief is that there is no appropriate support available for this class of software. A contradictory argument is that, due to the active involvement of Internet users in online forums, there is in fact a large resource available that communicates and manages the provision of support. The research model of this empirical investigation examines the available evidence to assess whether this commonly held belief is based on facts, given the current developments in OSS, or is simply a myth that has developed around OSS development. We analyzed a dataset consisting of 1880 open source software projects covering a broad range of categories. The results show that online forums play a significant role in managing software defects, implementing new requirements and supporting users of open source software, and have become a major source of assistance in the maintenance of open source projects.

    Towards globally customizable ecosystem service models

    Scientists, stakeholders and decision makers face trade-offs between adopting simple or complex approaches when modeling ecosystem services (ES). Complex approaches may be time- and data-intensive, making them more challenging to implement and difficult to scale, but can produce more accurate and locally specific results. In contrast, simple approaches allow for faster assessments but may sacrifice accuracy and credibility. The ARtificial Intelligence for Ecosystem Services (ARIES) modeling platform has endeavored to provide a spectrum of simple to complex ES models that are readily accessible to a broad range of users. In this paper, we describe a series of five “Tier 1” ES models that users can run anywhere in the world with no user input, while offering the option to easily customize models with context-specific data and parameters. This approach enables rapid ES quantification, as models are automatically adapted to the application context. We provide examples of customized ES assessments at three locations on different continents and demonstrate the use of ARIES' spatial multi-criteria analysis module, which enables spatial prioritization of ES for different beneficiary groups. The models described here use publicly available global- and continental-scale data as defaults. Advanced users can modify data input requirements, model parameters or entire model structures to capitalize on high-resolution data and context-specific model formulations. Data and methods contributed by the research community become part of a growing knowledge base, enabling faster and better ES assessment for users worldwide. By engaging with the ES modeling community to further develop and customize these models based on user needs, spatiotemporal contexts, and scale(s) of analysis, we aim to cover the full arc from simple to complex assessments, minimizing the additional cost to the user when increased complexity and accuracy are needed.
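
The core idea behind spatial multi-criteria prioritization can be sketched as a weighted sum of normalized ES layers, with one weight set per beneficiary group. The layers, groups and weights below are invented for illustration; ARIES' actual module is considerably more sophisticated.

```python
import numpy as np

def minmax(r):
    """Rescale a raster to [0, 1]; a constant raster maps to all zeros."""
    span = r.max() - r.min()
    return (r - r.min()) / span if span else np.zeros_like(r, dtype=float)

# Hypothetical per-pixel ES outputs in their native (incompatible) units.
carbon     = np.array([[100., 40.], [10., 0.]])   # e.g. t C / ha
water      = np.array([[0.2, 0.8], [0.4, 0.6]])   # e.g. runoff fraction
recreation = np.array([[1., 3.], [5., 2.]])       # e.g. visits index

# One weight triple (carbon, water, recreation) per beneficiary group.
weights = {"farmers":  (0.2, 0.7, 0.1),
           "tourists": (0.1, 0.2, 0.7)}

# Normalization makes the layers commensurable before the weighted sum.
for group, (wc, ww, wr) in weights.items():
    priority = wc * minmax(carbon) + ww * minmax(water) + wr * minmax(recreation)
    print(group, np.round(priority, 2))
```

Each group gets its own priority surface, so the same underlying models can answer "which pixels matter most to farmers?" and "to tourists?" without being rerun.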