    SOLAP+: extending the interaction model

    Thesis submitted to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfillment of the requirements for the degree of Master in Computer Science.
    Decision making is a crucial process that can dictate success or failure in today's businesses and organizations. Decision Support Systems (DSS) are designed to help human users with decision-making activities. Within the large DSS family is OnLine Analytical Processing (OLAP), an approach for answering multidimensional queries quickly and effectively. Even though OLAP is recognized as an efficient technique and is widely used in almost every area, it offers neither spatial analysis nor spatial data visualization and exploration. Geographic Information Systems (GIS) have grown enormously in recent years, and acquiring and storing spatial data is easier than ever. To explore this potential and add spatial data and spatial analysis features to OLAP, Bédard introduced Spatial OLAP (SOLAP). Although it is a relatively new area, many proposals towards SOLAP's standardization and consolidation have been made, as well as functional tools for different application areas. There are, however, many issues and topics in SOLAP that are either not covered or addressed only by incompatible or non-general proposals. We propose to define a generic model for SOLAP interaction based on previous work, extending it to include new visualization options, components, and cases; to create and present a component-driven architecture proposal for such a tool, including descriptive metamodels, an aggregate navigator to increase performance, and a communication protocol; and finally, to develop an example prototype that partially implements the proposed interaction features, following guidelines for a user-friendly yet powerful and flexible application.
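
    The aggregate navigator mentioned above can be pictured as a small routing layer that checks whether a precomputed aggregate already answers a query before falling back to the base fact table. The following is a minimal sketch under that reading; all names and the selection rule are illustrative, not the thesis's actual design.

        # Hypothetical sketch of an aggregate navigator for a SOLAP tool.
        # A query asks for measures grouped by a set of dimension levels; the
        # navigator picks the smallest precomputed aggregate that can answer it.

        from dataclasses import dataclass, field

        @dataclass
        class Aggregate:
            name: str
            levels: frozenset   # dimension levels the aggregate is grouped by
            row_count: int      # used to prefer smaller (cheaper) aggregates

        @dataclass
        class AggregateNavigator:
            aggregates: list = field(default_factory=list)

            def route(self, requested_levels: set) -> str:
                """Return the cheapest aggregate containing all requested
                levels, or 'fact_table' if none qualifies."""
                candidates = [a for a in self.aggregates
                              if requested_levels <= a.levels]
                if not candidates:
                    return "fact_table"
                return min(candidates, key=lambda a: a.row_count).name

        nav = AggregateNavigator([
            Aggregate("agg_by_region_year", frozenset({"region", "year"}), 10_000),
            Aggregate("agg_by_region", frozenset({"region"}), 500),
        ])
        print(nav.route({"region"}))          # -> agg_by_region
        print(nav.route({"region", "year"}))  # -> agg_by_region_year
        print(nav.route({"store"}))           # -> fact_table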

    PathBinder: mining MEDLINE for protein-protein interactions

    Exploring protein-protein interactions and the regulation of signal transduction pathways from biomedical texts has become a daily routine for many scientists. The MEDLINE citation database is the largest English-language biomedical bibliographic database. We report the design and development of an automatic text mining system, PathBinder, for extracting, manipulating, and managing protein-protein interactions from MEDLINE abstracts, to facilitate extracting pathway information into a database. PathBinder is a broad-scale data mining tool: it processes large volumes of text in the biomedical domain, producing output that contains the desired information in highly concentrated form. The extracted information is then presented to the user with a visualization tool. Developing and integrating PathBinder's document-processing functionalities will allow it to serve as the content builder of a pathways database. We partially solved the problems of inefficiency, ambiguity, and low coverage that exist in many text mining systems. Our processing unit is the sentence: abstracts from the MEDLINE database are parsed into individual sentences, which in turn are searched for the presence of (1) a pair of protein names, (2) one protein name and one interaction-related verb, or (3) one protein name and another word describing the desired context. The qualifying sentences are stored in a database for querying. Queries are automatically expanded to gene aliases, and links to related information are built. The user can add human knowledge to create the pathway database and apply a graphical display that connects protein names through clickable nodes and edges. The nodes of the graph represent protein names occurring in sentences in the literature; the edges represent co-occurrences of two protein names in one sentence. Nodes and edges are hypertext-linked to the sentence database. The system dynamically generates and presents all related sentences in a friendly interface, with protein names highlighted in each sentence for quick browsing and a hypertext link to the original abstract in the online MEDLINE database.
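
    The three sentence-level criteria above translate directly into a small filter. The sketch below illustrates them; the protein names, verb list, and context words are placeholder dictionaries, not PathBinder's actual lexicons.

        # Minimal sketch of PathBinder-style sentence filtering: split an
        # abstract into sentences and keep those matching one of the three
        # criteria. All dictionaries are illustrative placeholders.
        import re

        PROTEINS = {"p53", "mdm2", "bcl-2"}
        INTERACTION_VERBS = {"binds", "activates", "inhibits", "phosphorylates"}
        CONTEXT_WORDS = {"pathway", "complex", "signaling"}

        def qualifies(sentence: str) -> bool:
            words = set(re.findall(r"[\w\-]+", sentence.lower()))
            proteins = words & PROTEINS
            if len(proteins) >= 2:                      # (1) a pair of protein names
                return True
            if proteins and words & INTERACTION_VERBS:  # (2) protein + interaction verb
                return True
            if proteins and words & CONTEXT_WORDS:      # (3) protein + context word
                return True
            return False

        abstract = ("MDM2 binds p53 and inhibits its activity. "
                    "The weather was unremarkable. "
                    "p53 participates in the apoptotic pathway.")
        for sentence in re.split(r"(?<=[.!?])\s+", abstract):
            if qualifies(sentence):
                print(sentence)   # the first and third sentences qualify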

    Web Processing Services for Forestry and Environmental Applications

    Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
    Spatial processing on the web is becoming a requirement for more and more web applications. The use of processes helps solve a wide range of spatial problems and extends the common functionality of Web GIS. There are many open source technologies that can be implemented in each component of a Web GIS application, and forestry and environmental problems, with their strong territorial implications, are especially suitable for analysis with these technologies. To create an application with spatial processes, we propose a framework with a layered, service-based architecture: its structure is divided into a set of functional layers, namely the user layer (geoportal or client), the service layer (inside the server), and the data layer (spatial database). Access to and processing of spatial data is accomplished through the relevant OGC (Open Geospatial Consortium) service standards: Web Map Services (WMS), Web Feature Services (WFS), Web Coverage Services (WCS), and Web Processing Services (WPS). We implement a complete forestry-related application from scratch that offers access, visualization, querying, and processing of spatial data, as well as active user interaction. The key component of the application is the WPS. Additionally, other processing solutions (such as running queries in the spatial database) are discussed. In brief, this work presents an overview of the current technology and possible solutions for integrating spatial processes on the web, and proposes guidelines for implementing them in a fully working system.
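
    To make the WPS interaction concrete, a client can invoke a process through the OGC WPS 1.0.0 key-value-pair (GET) binding. The sketch below assumes a hypothetical endpoint and process identifier.

        # Sketch of invoking an OGC WPS 1.0.0 process via the KVP (GET) binding.
        # The endpoint URL and process identifier are hypothetical placeholders.
        import requests

        WPS_URL = "https://example.org/wps"

        # Discover the processes the server offers.
        caps = requests.get(WPS_URL, params={
            "service": "WPS",
            "version": "1.0.0",
            "request": "GetCapabilities",
        })
        caps.raise_for_status()

        # Execute a hypothetical buffer process on a geometry input.
        # DataInputs in the KVP binding are 'name=value' pairs separated by ';'.
        resp = requests.get(WPS_URL, params={
            "service": "WPS",
            "version": "1.0.0",
            "request": "Execute",
            "identifier": "buffer",
            "datainputs": "geometry=POINT(0 0);distance=100",
        })
        print(resp.text[:500])  # beginning of the ExecuteResponse XML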

    R for SAS and SPSS Users

    The Analysis of Open Source Software and Data for Establishment of GIS Services Throughout the Network in a Mapping Organization at National or International Level

    Federal agencies and their partners collect and manage large amounts of geospatial data, but the data is often not easily found when needed, and is sometimes collected or purchased multiple times. In short, the best government data is not always organized and managed efficiently enough to support decision making in a timely and cost-effective manner. National mapping agencies, and the various departments responsible for collecting different types of geospatial data, cannot continue for long to operate as they did a few years ago, as if living on an island. Leaders need to look at what is now possible that was not possible before, considering capabilities such as cloud computing, crowdsourced data collection, freely available open source remotely sensed data, multi-source information vital to decision making, and new web-accessible services, sometimes provided at no cost; many of these services could previously be obtained only from local GIS experts. These authorities need to consider the available solutions, gather information about new capabilities, reconsider agency missions and goals, review and revise policies, make budget and human resource decisions, and evaluate new products, cloud services, and cloud service providers. To do so, the right tools must be chosen to reach the above-mentioned goals.
    Data collection is the most costly part of mapping and of establishing a geographic information system, not only because of the cost of the data collection task itself but also because of the losses caused by the delay between collecting data in the field and delivering the proper information into the user's hands for decision making. In fact, the time a project consumes for the collection, processing, and presentation of geospatial information has an even greater effect on the cost of larger projects such as disaster management, construction, city planning, and environmental management, assuming, of course, that all the necessary information from existing sources is delivered directly to the user's computer. A good GIS project optimization is best described as a methodology that reduces time and cost while increasing data and service quality, meaning accuracy, currency, completeness, consistency, suitability, information content, integrity, and integration capability, as well as fitness for use and the user's specific needs and conditions, which must be addressed with special attention. Each of these issues must be addressed individually while, at the same time, the whole solution is provided in a global manner considering all the criteria.
    This thesis first discusses the problem we are facing and what is needed in terms of establishing a National Spatial Data Infrastructure (NSDI), including its definition and related components. It then surveys the available open source software solutions covering the whole process: data collection, database management, data processing, and finally data services and presentation. The first distinction among software packages is whether they are open source and free or commercial and proprietary; making this distinction requires a clear specification for the categorization, since from a legal point of view it is often difficult to determine to which class a package belongs, and the various terms must therefore be clarified. Within these two global distinctions, each group is further classified according to the functionality and the GIScience applications it is made for. Building on the outcome of Chapter 2, the technical process for selecting suitable and reliable software according to the users' needs and the required components, Chapter 3 elaborates on the GeoNode software as the best candidate tool to take on the responsibilities stated before. Chapter 4 discusses the open source data available globally against predefined data quality criteria (such as theme, data content, scale, licensing, and coverage), based on the metadata inside the datasets and gathered through bibliographic review, technical documentation, and web search engines. Chapter 5 discusses further data quality concepts and defines a set of protocols for evaluating all the datasets against the tasks for which a mapping organization is generally responsible to prospective users in disciplines such as reconnaissance, city planning, topographic mapping, transportation, environmental control, and disaster management. In Chapter 6, the data quality assessment protocols are applied to the pre-filtered candidate datasets; in the resulting scores and ranking, each dataset receives a value corresponding to its quality under the rules defined in the previous chapter. In the last step, a weight vector derived from questions the user answers about the project at hand finalizes the selection of the most appropriate free and open source data: the user's data quality preferences are expressed as a set of weights, which are applied to the quality matrix to obtain the final quality scores and ranking. The chapter ends with a section presenting the use of the datasets in projects such as "Early Impact Analysis" and "Extreme Rainfall Detection System (ERDS), version 2" carried out by ITHACA. Finally, the conclusion discusses the important criteria and future trends in GIS software, and presents recommendations.
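
    The weighted scoring step described above reduces to simple arithmetic: multiply each dataset's criterion scores by the user's weight vector and sum. The sketch below uses invented criteria, datasets, and scores purely for illustration.

        # Sketch of the weighted quality-matrix ranking described above.
        # Rows: candidate datasets; columns: quality criteria; all values invented.

        CRITERIA = ["accuracy", "currency", "completeness", "licensing"]
        QUALITY = {                          # scores per criterion, 0-10 scale
            "OpenStreetMap":    [7, 9, 8, 10],
            "Natural Earth":    [6, 6, 9, 10],
            "Hypothetical DEM": [9, 5, 7, 6],
        }

        # Weight vector derived from the user's answers about the project at
        # hand; normalized so the final score stays on the 0-10 scale.
        weights = [0.4, 0.3, 0.2, 0.1]
        assert abs(sum(weights) - 1.0) < 1e-9

        def final_score(scores):
            return sum(w * s for w, s in zip(weights, scores))

        ranking = sorted(QUALITY.items(),
                         key=lambda kv: final_score(kv[1]), reverse=True)
        for name, scores in ranking:
            print(f"{name:16s} {final_score(scores):.2f}")
        # -> OpenStreetMap 8.10, Hypothetical DEM 7.10, Natural Earth 7.00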

    ATC Airport Builder - A shapefile tool for simulation purposes

    Saab supplies the global market with products, services, and solutions ranging from military defense to civil security. One of its many offices is located in Helsingborg, where a new product for training air traffic controllers (the Saab ATC training solution) is being developed. This product offers air traffic control students a realistic 3D environment in which they can work through a variety of supervised air traffic control scenarios. In order to stay ahead of competitors, Saab strives to continuously improve its products. Today it is possible to add airports to the Saab ATC training solution; however, this is a tedious process. Another area of improvement is the way map information is utilised in the simulator: currently, the simulator knows very little about airport environments. By utilising existing map data, the simulator can make more intelligent decisions inside airport areas. This thesis describes how map information in existing shapefiles can be used to enhance airport simulation, and how the existing 3D engine can benefit from shapefile data. A prototype, ATC AirportBuilder, has been created to prepare map information for further use by the Saab ATC training solution: it imports shapefile data, adapts it, and exports it to a database. An analysis has also been conducted in which several integration possibilities with the simulator and the 3D engine were evaluated.
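
    The import step can be pictured as reading shapefile geometries and attributes and writing them to a database. The sketch below uses the pyshp and sqlite3 libraries and an invented taxiway layer; it illustrates the idea rather than Saab's actual implementation.

        # Sketch of a shapefile-to-database import in the spirit of
        # ATC AirportBuilder. Requires pyshp (pip install pyshp); the input
        # layer and file names are hypothetical.
        import json
        import sqlite3
        import shapefile  # pyshp

        def import_layer(shp_path: str, db_path: str, table: str) -> None:
            sf = shapefile.Reader(shp_path)
            field_names = [f[0] for f in sf.fields[1:]]  # skip DeletionFlag

            con = sqlite3.connect(db_path)
            con.execute(f"CREATE TABLE IF NOT EXISTS {table} "
                        "(id INTEGER PRIMARY KEY, attrs TEXT, geometry TEXT)")
            for rec in sf.shapeRecords():
                attrs = dict(zip(field_names, rec.record))
                geom = rec.shape.__geo_interface__  # GeoJSON-style geometry
                con.execute(f"INSERT INTO {table} (attrs, geometry) "
                            "VALUES (?, ?)",
                            (json.dumps(attrs), json.dumps(geom)))
            con.commit()
            con.close()

        import_layer("taxiways.shp", "airport.db", "taxiways")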