88 research outputs found

    Implementing imperfect information in fuzzy databases

    Information in real-world applications is often vague, imprecise and uncertain. Ignoring the inherently imperfect nature of real-world data distorts how the real world is perceived and may discard substantial information that can be very useful in data-intensive applications. In the database context, several fuzzy database models have been proposed. In these works, fuzziness is introduced at different levels; common to all of these proposals is support for fuzziness at the attribute level. This paper first proposes a rich set of data types devoted to modelling the different kinds of imperfect information, and then proposes a formal approach to implementing these data types. The proposed approach was implemented within an object-relational database model but is generic enough to be incorporated into other database models.
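    As an illustration of attribute-level fuzziness, the sketch below models an ill-known attribute value as a discrete possibility distribution. The class name and operations are hypothetical and are not taken from the paper, which defines its data types inside an object-relational DBMS rather than in Python.

        # Minimal sketch: a fuzzy (possibilistic) attribute value for a database column.
        # The type name and API are illustrative only; the paper defines its own data
        # types inside an object-relational database model.
        class FuzzyValue:
            def __init__(self, distribution):
                # distribution: dict mapping candidate values to possibility degrees in [0, 1]
                if not all(0.0 <= d <= 1.0 for d in distribution.values()):
                    raise ValueError("possibility degrees must lie in [0, 1]")
                self.distribution = dict(distribution)

            def possibility(self, value):
                # Degree to which `value` is a possible actual value of the attribute.
                return self.distribution.get(value, 0.0)

            def matches(self, predicate):
                # Possibility that the ill-known attribute satisfies a crisp predicate:
                # the maximum degree over candidates fulfilling it.
                return max((d for v, d in self.distribution.items() if predicate(v)), default=0.0)

        # Example: an ill-known age stored in a "person" record.
        age = FuzzyValue({30: 1.0, 31: 0.8, 32: 0.4})
        print(age.matches(lambda a: a > 30))   # 0.8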

    Developing an Algorithm for Securing the Biometric Data Template in the Database

    This research article was published in the International Journal of Advanced Computer Science and Applications, Vol. 10, No. 10, 2019. With current technological advances, biometric templates provide a dependable solution to the problem of user verification in an identity control system. The template is saved in the database during enrollment and compared with query information in the verification stage. Serious security and privacy concerns arise if a raw, unprotected template is saved in the database: an attacker can extract the template information from the database to gain illicit access. A novel encryption-decryption algorithm using the Model View Template (MVT) design pattern is developed to secure the biometric data template. The model manages the information logically, the view presents a visualization of the data, and the template handles the migration of data into pattern objects. The algorithm is based on the cryptographic Fernet key module. Fernet keys are combined into a MultiFernet key to produce two encrypted files (a byte file and a text file). These files are combined with a Twilio message and securely stored in the database. If an attacker tries to access the biometric data template in the database, the system alerts the user, blocks the unauthorized access, and cross-verifies the impersonator by validating ownership. This informs users and the authorities of how secure an individual's biometric data template is and provides a high level of security for individual data privacy.
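    The core primitive described above is multi-key encryption with Fernet. The sketch below shows how several Fernet keys can be combined with MultiFernet from the Python cryptography library to encrypt and later verify a template; the MVT wiring, Twilio alerting and database storage from the article are not reproduced, and the file names are hypothetical.

        # Minimal sketch of MultiFernet encryption of a biometric template, assuming the
        # Python `cryptography` package; the surrounding MVT application, Twilio alerting
        # and database layer described in the article are omitted.
        from cryptography.fernet import Fernet, MultiFernet

        key1, key2 = Fernet.generate_key(), Fernet.generate_key()
        multi = MultiFernet([Fernet(key1), Fernet(key2)])  # encrypts with the first key

        template = b"feature-vector-bytes-of-enrolled-user"   # placeholder template
        token = multi.encrypt(template)                       # byte token for storage

        # Hypothetical persistence as the two files mentioned in the abstract.
        with open("template.bin", "wb") as f:
            f.write(token)
        with open("template.txt", "w") as f:
            f.write(token.decode("ascii"))

        # Verification stage: decrypt tries every key, so older tokens remain readable
        # after new keys are added (key rotation).
        assert multi.decrypt(token) == template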

    Knowledge Accumulation of Microbial Data Aiming at a Dynamic Taxonomic Framework

    This thesis is an attempt to bridge precisely the research area that lies between raw data and abstract concept, between practice and theory, within the framework of present-day bacterial taxonomy. As a result, it has become a cross-fertilization between microbiology, mathematics and computer science. The art of drawing the landscape of bacterial diversity, used here as a metaphor for modelling the taxonomy, requires determining a representative range of reproducible and comparable experimental characteristics for a collection of bacteria (microbiology/taxonomy), designing and implementing objective classification methods for grouping the data in an uncoordinated manner (mathematics/classification), and consolidating the experimental data and their various subdivisions through a uniform and well-considered approach (computer science/knowledge management). One can easily envision a global knowledge system capable of absorbing, in a structured and standardized way, the sheets of experimental data produced by microbiological research. Such a knowledge management system would be a tremendous advance for the application of intelligent and well-founded data mining methods, deployed as a tool to better streamline and automate the delineation of objective and universal consensus models of the taxonomy. Moreover, such inference systems could be expected to react instantly to an influx of new data and to interact with the outside world whenever pieces needed to complete the taxonomic puzzle are missing. The validity of new insights or hypotheses about the life and evolution of bacteria could be tested immediately against these reservoirs of knowledge, possibly resulting in a direct adjustment of existing taxonomic models. Before the ambitions of a self-learning inference system for drawing the landscape of bacterial diversity can be realized, important technical and organizational hurdles must be overcome. This calls for pushing the boundaries of worldwide data exchange, tracing and filling the gaps in the observations, and exploring the possibilities of new data mining techniques, all in favour of a better understanding of the life and evolution of bacteria. Despite the many still unresolved issues, the ideas put forward in this dissertation can serve as a stimulus and a guide for integrating and exploiting microbial data, rather than continuing to cherish a vain hope.

    Transformation of graphical models to support knowledge transfer

    Human experts are able to flexibly adjust their decision behaviour to the situation at hand. This capability pays off when decisions must be made under limited resources such as time restrictions. In such situations it is particularly advantageous to adapt the representation of the underlying knowledge and to use decision models at different levels of abstraction. Furthermore, human experts are able to include not only uncertain information but also vague perceptions in decision making. Classical decision-theoretic models are based on the concept of rationality: for each observation, an optimal decision function prescribes the action that maximizes expected utility. Modern graph-based models such as Bayesian networks or influence diagrams make decision-theoretic methods attractive from a modelling point of view. Their main disadvantage is complexity: finding an optimal decision can be very expensive, and inference in decision networks is known to be NP-hard. This dissertation aims at combining the advantages of decision-theoretic models with those of rule-based systems by transforming a decision-theoretic model into a fuzzy rule base. Fuzzy rule bases can be evaluated efficiently, are suited to approximating non-linear functional dependencies, and keep the resulting action model interpretable. The translation of a decision model into a fuzzy rule base is supported by a new transformation process. First, an agent can apply the parameterized structure learning algorithm newly introduced in this work to generate a Bayesian network. By applying preference learning methods and refining the probability information, decision and utility nodes can then be modelled and a consolidated decision-theoretic model generated. A transformation algorithm compiles a rule base from this model, with an approximation measure computing the expected utility loss as the quality criterion. The practicality of the concept is demonstrated with an example of condition monitoring for a rotation spindle.
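    The quality criterion mentioned above, the expected utility loss of the compiled rule base, can be illustrated on a toy decision problem. The sketch below uses invented probabilities, utilities and a crisp two-rule policy instead of a real fuzzy rule base; it only shows how the loss measure compares a rule-based policy against the utility-maximizing policy of the decision model.

        # Minimal sketch of the approximation measure: expected utility loss of a rule
        # base compared with the utility-maximizing policy of a toy decision model.
        # All numbers and rules below are invented for illustration only.

        p_state = {"low": 0.1, "medium": 0.5, "high": 0.9}   # P(fault | observation)
        p_obs = {"low": 0.3, "medium": 0.4, "high": 0.3}     # marginal over observations

        # Utility of each action depending on whether the fault state holds.
        utility = {("act", True): 10, ("act", False): -4,
                   ("wait", True): -8, ("wait", False): 2}

        def expected_utility(action, p):
            return p * utility[(action, True)] + (1 - p) * utility[(action, False)]

        def optimal_policy(obs):
            p = p_state[obs]
            return max(("act", "wait"), key=lambda a: expected_utility(a, p))

        def rule_base_policy(obs):
            # Crude two-rule approximation: "IF observation is high THEN act, ELSE wait".
            return "act" if obs == "high" else "wait"

        # Expected utility loss: how much utility the compiled rule base gives away on average.
        loss = sum(p_obs[o] * (expected_utility(optimal_policy(o), p_state[o])
                               - expected_utility(rule_base_policy(o), p_state[o]))
                   for o in p_obs)
        print(f"expected utility loss of the rule base: {loss:.2f}")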

    Digital Library Services for Three-Dimensional Models

    With the growth in computing, storage and networking infrastructure, it is becoming increasingly feasible for multimedia professionals, such as graphic designers in commercial, manufacturing, scientific and entertainment areas, to work with 3D digital models of the objects they deal with in their domain. Unfortunately, most of these models exist in individual repositories and are not accessible to geographically distributed professionals who need them. Building an efficient digital library system presents a number of challenges. In particular, the following issues need to be addressed: (1) What is the best way of representing 3D models in a digital library so that searches can be done faster? (2) How can 3D models be compressed and delivered to reduce storage and bandwidth requirements? (3) How can we represent the user's view of the similarity between two objects? (4) What search types can be used to enhance the usability of the digital library, how can these searches be implemented, and what are the trade-offs? In this research, we have developed a digital library architecture for 3D models that addresses these issues as well as other technical issues. We have developed a prototype for our 3D digital library (3DLIB) that supports compressed storage along with retrieval of 3D models. The prototype also supports search and discovery services targeted at 3D models. The key to 3DLIB is a representation of a 3D model based on “surface signatures”. This representation captures the shape information of any free-form surface and encodes it into a set of 2D images. We have developed a shape similarity search technique that uses the signature images to compare 3D models. One advantage of the proposed technique is that it works in the compressed domain, eliminating the need to decompress models for content-based search. Moreover, we have developed an efficient discovery service consisting of a multi-level hierarchical browsing service that enables users to navigate large sets of 3D models. To implement this targeted browsing (finding an object similar to a given object in a large collection through browsing), we abstract a large set of 3D models to a small set of representative models (key models). The abstraction is based on shape similarity and uses specially tailored clustering techniques. The browsing service applies clustering recursively to limit the number of key models that a user views at any time. We have evaluated the performance of our digital library services using the Princeton Shape Benchmark (PSB), and it shows significantly better precision and recall compared to other approaches.
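    The central idea above, comparing 3D models through their 2D signature images, can be sketched as a simple image-distance ranking. The surface-signature generation itself (mapping a free-form surface to images) is not reproduced here; the arrays below stand in for already computed signatures, and the mean absolute difference is only a placeholder for whatever image metric 3DLIB actually uses.

        # Minimal sketch: shape similarity as a distance between pre-computed 2D
        # "surface signature" images. Signature extraction from the 3D mesh is assumed
        # to have happened elsewhere; random arrays stand in for real signatures.
        import numpy as np

        def signature_distance(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
            # Placeholder metric: mean absolute difference between signature images.
            return float(np.mean(np.abs(sig_a.astype(float) - sig_b.astype(float))))

        def rank_models(query_sig, library):
            # library: dict mapping model id -> signature image (2D arrays of equal shape).
            return sorted(library, key=lambda mid: signature_distance(query_sig, library[mid]))

        rng = np.random.default_rng(0)
        library = {f"model_{i}": rng.random((64, 64)) for i in range(5)}   # fake signatures
        query = library["model_3"] + 0.01 * rng.random((64, 64))           # near-duplicate query
        print(rank_models(query, library)[:3])   # "model_3" should rank first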

    Personalizable Knowledge Integration

    Large repositories of data are used daily as knowledge bases (KBs) feeding computer systems that support decision making processes, such as in medical or financial applications. Unfortunately, the larger a KB is, the harder it is to ensure its consistency and completeness. The problem of handling KBs of this kind has been studied in the AI and databases communities, but most approaches focus on computing answers locally to the KB, assuming there is some single, epistemically correct solution. It is important to recognize that for some applications, as part of the decision making process, users consider far more knowledge than that contained in the knowledge base, and that sometimes inconsistent data may help in directing reasoning; for instance, inconsistency in taxpayer records can serve as evidence of possible fraud. Thus, the handling of this type of data needs to be context-sensitive, creating a synergy with the user in order to build useful, flexible data management systems. Inconsistent and incomplete information is ubiquitous and presents a substantial problem when trying to reason about the data: how can we derive an adequate model of the world, from the point of view of a given user, from a KB that may be inconsistent or incomplete? In this thesis we argue that in many cases users need to bring their application-specific knowledge to bear in order to inform the data management process. Therefore, we provide different approaches to handle, in a personalized fashion, some of the most common issues that arise in knowledge management. Specifically, we focus on (1) inconsistency management in relational databases, general knowledge bases, and a special kind of knowledge base designed for news reports; (2) management of incomplete information in the form of different types of null values; and (3) answering queries in the presence of uncertain schema matchings. We allow users to define policies to manage both inconsistent and incomplete information in their application in a way that takes into account both the user's knowledge of the problem and their attitude to error and risk. Using the frameworks and tools proposed here, users can specify when and how they want to manage or resolve the issues that arise due to inconsistency and incompleteness in their data, in the way that best suits their needs.
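    As a concrete, invented example of the kind of user-defined policy argued for above, the sketch below resolves conflicting attribute values for the same entity with a caller-supplied preference function; the frameworks in the thesis are far richer, and the field names and policies here are illustrative only.

        # Minimal sketch: personalized inconsistency handling. Conflicting facts about
        # the same key are resolved by a policy chosen by the user, not hard-wired.
        from collections import defaultdict

        facts = [
            {"taxpayer": "t1", "income": 50_000, "source": "employer",    "year": 2021},
            {"taxpayer": "t1", "income": 20_000, "source": "self-report", "year": 2022},
            {"taxpayer": "t2", "income": 70_000, "source": "employer",    "year": 2022},
        ]

        def resolve(facts, key, attr, policy):
            # Group facts by `key` and let the user-chosen `policy` decide per group.
            groups = defaultdict(list)
            for f in facts:
                groups[f[key]].append(f)
            return {k: policy(group, attr) for k, group in groups.items()}

        def most_recent(group, attr):
            # Policy A: trust the most recent record.
            return max(group, key=lambda f: f["year"])[attr]

        def flag_conflicts(group, attr):
            # Policy B: keep all distinct values visible instead of silently choosing one
            # (e.g. conflicting incomes as evidence of possible fraud).
            distinct = sorted({f[attr] for f in group})
            return distinct if len(distinct) > 1 else distinct[0]

        print(resolve(facts, "taxpayer", "income", most_recent))     # {'t1': 20000, 't2': 70000}
        print(resolve(facts, "taxpayer", "income", flag_conflicts))  # {'t1': [20000, 50000], 't2': 70000}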

    Query processing in temporal object-oriented databases

    This PhD thesis is concerned with historical data management in the context of object-oriented databases. An extensible approach has been explored to processing temporal object queries within a uniform query framework. By the uniform framework, we mean that temporal queries can be processed within the existing object-oriented framework, itself extended from the relational framework, by extending the existing query processing techniques and strategies developed for OODBs and RDBs. The unified model of OODBs and RDBs in UmSQL/X has been adopted as a basis for this purpose. A temporal object data model is thereby defined by incorporating a time dimension into this unified model of OODBs and RDBs, forming temporal relational-like cubes with the addition of aggregation and inheritance hierarchies. A query algebra that accesses objects through these associations of aggregation, inheritance and time-reference is then defined as a general query model/language. Owing to the extensive features of our data model and the reducibility of the algebra, a layered query processor structure is presented that provides a uniform framework for processing temporal object queries. Within this uniform framework, query transformation is carried out based on a set of identified transformation rules that includes the known relational and object rules plus those pertaining to the time dimension. To evaluate a temporal query involving a path with time-reference, a decomposition strategy is proposed. That is, evaluation of an enhanced path, defined as a path extended with time-reference, is decomposed by first dividing the path into two sub-paths: one containing the time-stamped class, which can be optimized by making use of the ordering information of temporal data, and another ordinary sub-path (without time-stamped classes), which can be further decomposed and evaluated using different algorithms. The intermediate results of traversing the two sub-paths are then joined together to create the query output. Algorithms for processing the decomposed query components, i.e., time-related operation algorithms, four join algorithms (nested-loop forward join, sort-merge forward join, nested-loop reverse join and sort-merge reverse join) and their modifications, are presented with cost analysis and implemented with stream processing techniques in C++. Simulation results are also provided. Both the cost analysis and the simulation show the effects of time on the query processing algorithms: the join time cost increases linearly with the number of time-epochs (the time dimension in the case of a regular TS). It is also shown that using heuristics that exploit time information can lead to significant time-cost savings. Query processing with incomplete temporal data is also discussed.
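    The four join algorithms are only named in the abstract; the sketch below is a generic sort-merge join over (key, payload) streams sorted on the join attribute, intended to indicate the general shape of the stream-based processing the thesis implements in C++ (written in Python here for brevity). The temporal refinements, such as exploiting time-epoch ordering and the reverse-join variants, are not reproduced.

        # Minimal sketch of a sort-merge join between two streams of (key, payload)
        # pairs, each sorted by key; placeholder data only.
        def sort_merge_join(left, right):
            """left, right: lists of (key, payload) sorted by key. Yields joined triples."""
            i = j = 0
            while i < len(left) and j < len(right):
                lk, rk = left[i][0], right[j][0]
                if lk < rk:
                    i += 1
                elif lk > rk:
                    j += 1
                else:
                    # Emit the cross product of the equal-key runs on both sides.
                    j_start = j
                    while i < len(left) and left[i][0] == lk:
                        j = j_start
                        while j < len(right) and right[j][0] == lk:
                            yield lk, left[i][1], right[j][1]
                            j += 1
                        i += 1

        orders = [(1, "order-a"), (2, "order-b"), (2, "order-c")]
        customers = [(1, "cust-x"), (2, "cust-y"), (3, "cust-z")]
        print(list(sort_merge_join(orders, customers)))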

    A geo-database for potentially polluting marine sites and associated risk index

    The increasing availability of geospatial marine data provides an opportunity for hydrographic offices to contribute to the identification of Potentially Polluting Marine Sites (PPMS). To adequately manage these sites, a PPMS Geospatial Database (GeoDB) application was developed to collect and store relevant information suitable for site inventory and geospatial analysis. The benefits of structuring the data to conform to the Universal Hydrographic Data Model (IHO S-100) and of using the Geography Markup Language (GML) for encoding are presented. A storage solution is proposed using a GML-enabled spatial relational database management system (RDBMS). In addition, an example of a risk index methodology based on the defined data structure is provided. This example was implemented using scripts containing SQL statements, executed from a cross-platform C++ application built on open-source libraries and called PPMS GeoDB Manager.
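    The abstract states that the example risk index is computed with SQL scripts over the GeoDB. The sketch below shows a much-simplified, hypothetical version in SQLite: the table layout, risk factors and weights are invented and do not follow IHO S-100 or the paper's actual schema or methodology.

        # Minimal, hypothetical sketch of a SQL-based risk index over a PPMS inventory.
        # Schema, factors and weights are invented; the real GeoDB follows IHO S-100,
        # stores GML geometries in a spatial RDBMS and uses its own risk methodology.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE ppms_site (
                site_id     INTEGER PRIMARY KEY,
                name        TEXT,
                cargo_score REAL,   -- hazard of remaining cargo/fuel, 0..1
                hull_score  REAL,   -- structural deterioration, 0..1
                env_score   REAL    -- sensitivity of surrounding environment, 0..1
            );
            INSERT INTO ppms_site VALUES
                (1, 'wreck A', 0.9, 0.7, 0.8),
                (2, 'wreck B', 0.2, 0.4, 0.3),
                (3, 'dump site C', 0.6, 0.1, 0.9);
        """)

        # Weighted-sum risk index, one of many possible formulations.
        rows = conn.execute("""
            SELECT name,
                   ROUND(0.5 * cargo_score + 0.2 * hull_score + 0.3 * env_score, 3) AS risk_index
            FROM ppms_site
            ORDER BY risk_index DESC
        """).fetchall()
        for name, risk in rows:
            print(name, risk)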

    24th International Conference on Information Modelling and Knowledge Bases

    In the last three decades, information modelling and knowledge bases have become important subjects, not only in academic communities related to information systems and computer science but also in the business area where information technology is applied. The series of European-Japanese Conferences on Information Modelling and Knowledge Bases (EJC) originally started as a cooperation initiative between Japan and Finland in 1982. The practical operations were then organised by Professor Ohsuga in Japan and Professors Hannu Kangassalo and Hannu Jaakkola in Finland (Nordic countries). The geographical scope has since expanded to cover Europe and other countries. The workshop character of the conference is deliberate: discussion, ample time for presentations, and a limited number of participants (50) and papers (30). Suggested topics include, but are not limited to: 1. Conceptual modelling: Modelling and specification languages; Domain-specific conceptual modelling; Concepts, concept theories and ontologies; Conceptual modelling of large and heterogeneous systems; Conceptual modelling of spatial, temporal and biological data; Methods for developing, validating and communicating conceptual models. 2. Knowledge and information modelling and discovery: Knowledge discovery, knowledge representation and knowledge management; Advanced data mining and analysis methods; Conceptions of knowledge and information; Modelling information requirements; Intelligent information systems; Information recognition and information modelling. 3. Linguistic modelling: Models of HCI; Information delivery to users; Intelligent informal querying; Linguistic foundations of information and knowledge; Fuzzy linguistic models; Philosophical and linguistic foundations of conceptual models. 4. Cross-cultural communication and social computing: Cross-cultural support systems; Integration, evolution and migration of systems; Collaborative societies; Multicultural web-based software systems; Intercultural collaboration and support systems; Social computing, behavioural modelling and prediction. 5. Environmental modelling and engineering: Environmental information systems (architecture); Spatial, temporal and observational information systems; Large-scale environmental systems; Collaborative knowledge base systems; Agent concepts and conceptualisation; Hazard prediction, prevention and steering systems. 6. Multimedia data modelling and systems: Modelling multimedia information and knowledge; Content-based multimedia data management; Content-based multimedia retrieval; Privacy and context enhancing technologies; Semantics and pragmatics of multimedia data; Metadata for multimedia information systems. Overall we received 56 submissions. After careful evaluation, 16 papers were selected as long papers, 17 as short papers, 5 as position papers, and 3 for presentation of perspective challenges. We thank all colleagues for their support of this issue of the EJC conference, especially the programme committee, the organising committee, and the programme coordination team. The long and short papers presented at the conference are revised after the conference and published in the series “Frontiers in Artificial Intelligence” by IOS Press (Amsterdam). The books “Information Modelling and Knowledge Bases” are edited by the Editing Committee of the conference.
We believe that the conference will be productive and fruitful in advancing research and application of information modelling and knowledge bases. Bernhard Thalheim, Hannu Jaakkola, Yasushi Kiyoki

    Design and Implementation of a Multi-Purpose Object-Orientated Spatio-Temporal (MPooST) Data Model for Cadastral and Land Information Systems (C/LIS)

    The application of the object-oriented methodology in geospatial information management has increased significantly during the last 10 years and is gradually replacing the established relational technology. In general, object orientation offers a flexible and adaptable modelling framework able to satisfy the most demanding complex data structuring requirements. The objective of this thesis is to determine how a modern Land Information System used for cadastral purposes can benefit from an object-oriented methodology. To this end, a Multi-Purpose, Object-Oriented Spatio-Temporal (MPOOST) data model has been developed. In brief, the MPOOST data model embodies spatial data and their temporal reference in the form of objects that contain their attributes as well as their behaviour. The design of the MPOOST data model has been specified in such a way that other data models can exploit its functionality, thereby enabling the multi-purpose aspect. First, the requirements of Land Information Systems are examined. Next, the functionality offered by the object-oriented methodology is analysed in detail. Although the literature is quite rich in relevant research, there seems to be no starting point regarding the application of OO in LIS; hence, a whole chapter of this thesis is dedicated to an extended bibliographic survey. Finally, the OO methodology is applied to the design and implementation of the MPOOST data model. The outcome of the design and implementation is the first version of the MPOOST data model, written in the Java object-oriented programming language. In this way, it is shown that relational technology has significant drawbacks that prevent it from being applied in conceptually demanding information systems, and that object orientation can fully satisfy the most complex data structuring requirements posed by modern geographic information systems.
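    The core idea above, objects that bundle spatial data, a temporal reference, attributes and behaviour, can be illustrated with a small sketch. The class and field names below are hypothetical and the thesis's actual MPOOST model is a far richer Java implementation; Python is used here only for consistency with the other sketches in this listing.

        # Minimal sketch of a spatio-temporal cadastral object: spatial data, temporal
        # reference and behaviour live together in one object. Names are hypothetical.
        from dataclasses import dataclass, field
        from datetime import date
        from typing import List, Optional, Tuple

        @dataclass
        class Parcel:
            parcel_id: str
            owner: str
            boundary: List[Tuple[float, float]]          # polygon vertices (x, y)
            valid_from: date                             # temporal reference of this version
            valid_to: Optional[date] = None              # None = currently valid
            history: List["Parcel"] = field(default_factory=list)

            def is_valid_on(self, when: date) -> bool:
                # Behaviour lives with the data: temporal predicate on the object itself.
                return self.valid_from <= when and (self.valid_to is None or when < self.valid_to)

            def transfer(self, new_owner: str, when: date) -> None:
                # Close the current version and keep it as history (a new temporal version).
                old = Parcel(self.parcel_id, self.owner, list(self.boundary), self.valid_from, when)
                self.history.append(old)
                self.owner, self.valid_from, self.valid_to = new_owner, when, None

        p = Parcel("P-42", "Alice", [(0, 0), (10, 0), (10, 5), (0, 5)], date(2020, 1, 1))
        p.transfer("Bob", date(2023, 6, 1))
        print(p.owner, p.is_valid_on(date(2024, 1, 1)), len(p.history))   # Bob True 1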