
    Aspects of dealing with imperfect data in temporal databases

    In reality, some objects or concepts have properties with a time-variant or time-related nature. Modelling such objects or concepts in a (relational) database schema is possible, but time-variant and time-related attributes have an impact on the consistency of the entire database, and temporal database models have therefore been proposed to deal with this. Time itself can be a source of imprecision, vagueness and uncertainty, since existing time-measuring devices are inherently imperfect. Accordingly, human beings manage time using temporal indications and temporal notions, which may contain imprecision, vagueness and uncertainty. However, imperfect human-used temporal indications are supported by human interpretation, whereas information systems need explicit support for this. Several proposals exist for dealing with such imperfections when modelling temporal aspects. Some of these proposals base the system on converting the specificity of temporal notions between the temporal expressions used; others consider the temporal indications in the temporal expressions themselves to be the source of imperfection. This chapter gives an overview of the basic concepts and issues related to the modelling of time, as such or in (relational) database models, and of the imperfections that may arise during or as a result of this modelling. In addition, a novel, currently researched technique for handling some of these imperfections is presented.
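The imprecise temporal indications discussed above are commonly modelled with possibility distributions. A minimal sketch (illustrative only, not the chapter's specific model): a trapezoidal membership function representing a vague indication such as "around 1995".

```python
# Trapezoidal possibility distribution over years for an imprecise
# temporal indication like "around 1995" (illustrative example).

def trapezoidal(a, b, c, d):
    """Return a membership function that is 0 outside [a, d],
    1 on the core [b, c], and linear on the slopes [a, b] and [c, d]."""
    def mu(x):
        if x < a or x > d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)
    return mu

around_1995 = trapezoidal(1993, 1994, 1996, 1997)
print(around_1995(1995))    # 1.0 -- fully possible
print(around_1995(1993.5))  # 0.5 -- only partially possible
```

The degree returned for each candidate year expresses how possible it is that the vaguely described moment falls in that year, which is the kind of graded information a temporal database model must be able to store and query.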

    Combining quantifications for flexible query result ranking

    Databases contain data, and the database systems governing them are often intended to allow a user to query these data. On the one hand, these data may be subject to imperfections; on the other hand, users may employ imperfect query preference specifications to query such databases. All of these imperfections lead to each query answer being accompanied by a collection of quantifications indicating how well (part of) a group of data complies with (part of) the user's query. A fundamental question is how to present the user with the query answers that best comply with his or her query preferences. The work presented in this paper first determines the difficulties to overcome in reaching such a presentation. Mainly, a useful presentation requires ranking the query answers based on the aforementioned quantifications, but it seems advisable not to combine quantifications with different interpretations. The paper therefore introduces and examines a novel technique to determine a query answer ranking. Finally, a few aspects of this technique, among which its computational efficiency, are discussed.
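One way to rank answers without merging quantifications that have different interpretations is to compare them component-wise. The following is a hypothetical sketch using Pareto-dominance layers, not the paper's actual technique:

```python
# Rank query answers by Pareto dominance over their per-criterion
# satisfaction degrees, so incomparable quantifications are never
# collapsed into a single aggregated score (illustrative sketch).

def dominates(u, v):
    """u dominates v if u is at least as good on every criterion
    and strictly better on at least one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_layers(answers):
    """Split (name, scores) pairs into successive non-dominated layers."""
    remaining = list(answers)
    layers = []
    while remaining:
        layer = [x for x in remaining
                 if not any(dominates(y[1], x[1]) for y in remaining)]
        layers.append(layer)
        remaining = [x for x in remaining if x not in layer]
    return layers

answers = [("t1", (0.9, 0.8)), ("t2", (0.6, 0.9)), ("t3", (0.5, 0.5))]
for rank, layer in enumerate(pareto_layers(answers), 1):
    print(rank, [name for name, _ in layer])  # t1 and t2 tie; t3 ranks below
```

Here t1 and t2 are incomparable (each is better on one criterion), so they share the top layer, while t3 is dominated by both and is ranked below them.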

    Bipolarity in the querying of temporal databases

    A database represents part of reality by containing data representing properties of real objects or concepts. For many real-world concepts or objects, time is an essential aspect, and thus it should often be (implicitly) represented by databases, making them temporal databases. However, like other data, the time-related data in such databases may contain imperfections such as uncertainty. One of the main purposes of a database is to allow the retrieval of information or knowledge deduced from its data, which is often done by querying the database. Because users may have both positive and negative preferences, they may want to query a database in a bipolar way; moreover, their demands may have temporal aspects. In this paper, a novel technique is presented to query a valid-time relation containing uncertain valid-time data in a heterogeneously bipolar way, allowing every elementary query constraint to carry its own temporal constraint.
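A hypothetical illustration of bipolar evaluation (the conjunctive combination rule below is a common fuzzy choice, not necessarily the paper's): each elementary constraint yields a positive satisfaction degree and a negative rejection degree, and a tuple's overall score penalises both weak satisfaction and strong rejection.

```python
# Bipolar evaluation sketch: each constraint contributes a pair
# (satisfaction, rejection), both in [0, 1]. A tuple's overall score is
# the conjunction (minimum) of min(pos, 1 - neg) over all constraints.
# The combination rule is illustrative, not taken from the paper.

def bipolar_score(degrees):
    """degrees: list of (satisfaction, rejection) pairs in [0, 1]."""
    return min(min(pos, 1.0 - neg) for pos, neg in degrees)

# A tuple satisfying a temporal constraint to degree 0.8 while being
# rejected by a negative condition to degree 0.25:
print(bipolar_score([(0.8, 0.25), (1.0, 0.0)]))  # 0.75
```

The rejection degree caps the score from above (1 - neg), which captures the intuition that a strongly rejected answer should rank low even when its positive preferences are fully met.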

    Towards a Scalable Dynamic Spatial Database System

    With the rise of GPS-enabled smartphones and similar mobile devices, massive amounts of location data are available. However, no scalable solutions for soft real-time spatial queries on large sets of moving objects have yet emerged. In this paper, we explore and measure the limits of current algorithms and implementations in different application scenarios, and finally propose a novel distributed architecture to solve the scalability issues.
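A hypothetical sketch of the kind of structure such systems build on: a uniform grid index over object positions, so a range query inspects only nearby cells instead of all objects. Distributed designs typically shard such cells across nodes; the code below shows only the single-node indexing idea.

```python
# Uniform grid index for range queries over moving-object positions
# (illustrative single-node sketch; names are invented).
from collections import defaultdict

class GridIndex:
    def __init__(self, cell_size):
        self.cell = cell_size
        self.cells = defaultdict(dict)  # (cx, cy) -> {obj_id: (x, y)}

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def update(self, obj_id, x, y):
        # A full version would also delete the object's previous entry
        # when it moves to another cell.
        self.cells[self._key(x, y)][obj_id] = (x, y)

    def range_query(self, xmin, ymin, xmax, ymax):
        kx0, ky0 = self._key(xmin, ymin)
        kx1, ky1 = self._key(xmax, ymax)
        hits = []
        for cx in range(kx0, kx1 + 1):
            for cy in range(ky0, ky1 + 1):
                for oid, (x, y) in self.cells.get((cx, cy), {}).items():
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        hits.append(oid)
        return hits

idx = GridIndex(10)
idx.update("bus1", 5, 5)
idx.update("bus2", 50, 50)
print(idx.range_query(0, 0, 10, 10))  # ['bus1']
```

Choosing the cell size trades update cost against query cost: small cells mean more cells to scan per query, large cells mean more per-object filtering inside each cell.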

    Analysing imperfect temporal information in GIS using the Triangular Model

    Rough sets and fuzzy sets are two frequently used approaches for modelling and reasoning about imperfect time intervals. In this paper, we focus on imperfect time intervals that can be modelled by rough sets and use an innovative graphic model, the triangular model (TM), to represent this kind of imperfect time interval. This work shows that the TM is potentially advantageous for visualizing and querying imperfect time intervals, and that its analytical power can be better exploited when it is implemented in a computer application with a graphical user interface and interactive functions. Moreover, a probabilistic framework is proposed to handle uncertainty issues in temporal queries. We use a case study to illustrate how the unique insights gained by the TM can assist a geographical information system in exploratory spatio-temporal analysis.
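The core idea of the triangular model can be sketched concretely: a crisp interval [s, e] maps to the 2-D point ((s + e) / 2, e - s), and Allen relations such as "during" become zones in that plane, testable with simple coordinate comparisons (illustrative code, with invented function names):

```python
# Triangular Model (TM) sketch: an interval [s, e] becomes the point
# (midpoint, duration). Strict containment ("during") then corresponds
# to a triangular zone below the containing interval's TM point.

def tm_point(start, end):
    return ((start + end) / 2.0, end - start)

def during(inner, outer):
    """True if interval `inner` lies strictly within `outer`,
    tested purely in TM coordinates."""
    (xi, yi), (xo, yo) = tm_point(*inner), tm_point(*outer)
    return yi < yo and xo - (yo - yi) / 2.0 < xi < xo + (yo - yi) / 2.0

print(tm_point(2, 4))          # (3.0, 2)
print(during((2, 4), (1, 6)))  # True
```

The coordinate test is algebraically equivalent to the familiar condition outer_start < inner_start and inner_end < outer_end, which is what makes the TM convenient for visual query zones: selecting a region of the plane selects a family of intervals.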

    Historical collaborative geocoding

    The latest developments in digital technology have provided large data sets that can increasingly easily be accessed and used. These data sets often contain indirect localisation information, such as historical addresses. Historical geocoding is the process of transforming such indirect localisation information into direct localisation that can be placed on a map, enabling spatial analysis and cross-referencing. Many efficient geocoders exist for current addresses, but they do not deal with the temporal aspect and are based on a strict hierarchy (..., city, street, house number) that is hard or impossible to use with historical data. Indeed, historical data are full of uncertainties (temporal aspect, semantic aspect, spatial precision, confidence in the historical source, ...) that cannot be resolved, as there is no way to go back in time to check. We propose an open-source, open-data, extensible solution for geocoding that is based on building gazetteers composed of geohistorical objects extracted from historical topographical maps. Once the gazetteers are available, geocoding a historical address is a matter of finding the geohistorical object in the gazetteers that best matches the historical address. The matching criteria are customisable and include several dimensions (fuzzy semantic, fuzzy temporal, scale, spatial precision, ...). As the goal is to facilitate historical work, we also propose web-based user interfaces that help geocode (one address or batch mode) and display results over current or historical topographical maps, so that they can be checked and collaboratively edited. The system is tested on the city of Paris for the 19th-20th centuries, shows a high return rate, and is fast enough to be used interactively.
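The multi-dimensional matching can be sketched as a weighted combination of a fuzzy string similarity and a temporal-overlap ratio. The names, sample data, and weights below are invented for illustration and are not the paper's implementation:

```python
# Score gazetteer candidates for a historical address by combining a
# fuzzy name similarity with a temporal-overlap ratio (sketch only).
from difflib import SequenceMatcher

def temporal_overlap(query, candidate):
    """Fraction of the query interval covered by the candidate interval."""
    lo, hi = max(query[0], candidate[0]), min(query[1], candidate[1])
    return max(0.0, hi - lo) / (query[1] - query[0])

def score(query_name, query_years, candidate):
    name_sim = SequenceMatcher(None, query_name.lower(),
                               candidate["name"].lower()).ratio()
    t_sim = temporal_overlap(query_years, candidate["years"])
    return 0.6 * name_sim + 0.4 * t_sim  # weights are arbitrary here

gazetteer = [
    {"name": "Rue de la Paix", "years": (1806, 1900)},
    {"name": "Rue Napoleon",   "years": (1806, 1814)},
]
best = max(gazetteer, key=lambda c: score("rue de la paix", (1850, 1870), c))
print(best["name"])  # Rue de la Paix
```

A real geocoder would add the spatial-precision and scale dimensions mentioned in the abstract and expose the weights as user-customisable parameters.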

    Relational Representation of Uncertain and Imprecise Time Assessments: An Application to Artworks Dating

    Imprecision and uncertainty appear together in many situations of real life, and soft computing techniques must therefore be studied to tackle this problem. Imprecise and uncertain values are usually expressed by means of linguistic terms, especially when they have been provided by a human being. This is also the case for temporal information where, in addition to handling time constraints, we may have both uncertainty and imprecision in the description, as in the sentence "It is very possible that Giotto's Crucifix was painted by 1289". Managing uncertainty (very possible) and imprecision (by 1289) separately would lead to quite complicated computation and a lack of comprehension by the users of the system. For these reasons, it is very desirable that both sources of imperfection of time values be combined into a single value which appropriately describes the intended information. In this work, we extend our previous research on this topic and study how to adapt it to relational systems. The final goal is to obtain normalized fuzzy values that provide information about the described temporal fact equivalent to the original values, making it possible to store and manage them in a fuzzy relational database. Additionally, there will be situations where more than one expert opinion about a time period must be taken into account, and we need to find a representative value of them all to be stored and managed. For the sake of simplicity, comprehensibility and efficiency in computation (when using a trapezoidal representation), the fuzzy average is used to find such a representative value.
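With a trapezoidal representation, the fuzzy average mentioned above reduces to a component-wise mean: a trapezoid (a, b, c, d) has support [a, d] and core [b, c], and averaging several opinions averages each component. A short sketch with invented sample data:

```python
# Fuzzy average of trapezoidal fuzzy numbers, computed component-wise.
# Each trapezoid (a, b, c, d) encodes support [a, d] and core [b, c].

def fuzzy_average(trapezoids):
    n = len(trapezoids)
    return tuple(sum(t[i] for t in trapezoids) / n for i in range(4))

# Two hypothetical expert opinions on when an artwork was painted:
experts = [(1285, 1288, 1290, 1292), (1287, 1289, 1291, 1293)]
print(fuzzy_average(experts))  # (1286.0, 1288.5, 1290.5, 1292.5)
```

This is why the trapezoidal form keeps the computation efficient: the average of trapezoids is again a trapezoid, so the representative value can be stored in the same fuzzy relational schema as the individual assessments.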

    Sustainable Development Data Availability on the Internet

    Defining what Sustainability and Sustainable Development mean is a critical task, as they are global objectives covering different aspects of life that are often difficult to quantify and describe. Talking about sustainable development means dealing with the development and implementation of SD strategies at the international as well as the local level. In this regard, SD information plays a key role in monitoring SD performance at different administrative levels. The aim of this paper is to give an overview of sustainable development data availability on the internet at the international, European, national and regional levels. The paper is novel in that the whole analysis focuses on the internet, considered the principal means of accessing data. The web has become over the years a fundamental tool for exchanging information among people, organisations, institutes and governments, thanks to its easy accessibility for wide knowledge exchange. Sustainable development data collected at different administrative levels are classified and processed according to different methods and procedures; they are gathered at different scales, in different periods, and with different frequencies of updating. Data accuracy and meta-information on available data vary considerably, too. A few organisations at the international and European levels, such as the World Bank, United Nations, OECD, FAO, Eurostat and EEA, have committed themselves to processing information from different sources with the aim of standardising and producing comparable data sets for several nations and regions.
Following the above considerations, various international, European and national organisations' databases were investigated in order to check the availability of data at different administrative levels, mostly focusing on those sectors considered pillars for the definition and monitoring of the implementation of the EU Sustainable Development Strategy, as pointed out in the Communication of the EC SEC(2005) 161 final.
Keywords: Sustainability, Indicators, Regional Development, Internet, Database