
    The use of alternative data models in data warehousing environments

    Data Warehouses are growing in data volume at an accelerated rate; high disk space consumption, slow query response times and complex database administration are common problems in these environments. The lack of a proper data model and an adequate architecture specifically targeted towards these environments is the root cause of these problems. Inefficient management of stored data includes duplicate values at column level and poor management of data sparsity, which derives from a low data density and affects the final size of Data Warehouses. It has been demonstrated that the Relational Model and Relational technology are not the best techniques for managing duplicates and data sparsity. The novelty of this research is to compare several data models with respect to their data density and their management of data sparsity, in order to optimise Data Warehouse environments. The Binary-Relational, the Associative/Triple Store and the Transrelational models have been investigated, and based on the research results a novel Alternative Data Warehouse Reference architectural configuration has been defined. For the Transrelational model, no database implementation existed; it was therefore necessary to develop an instantiation of its storage mechanism, and as far as could be determined this is the first public-domain instantiation available of the storage mechanism for the Transrelational model.
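The column-level duplicate elimination discussed above can be sketched as a minimal dictionary-encoded column store, in the spirit of the Transrelational model's field-values idea: each column stores its distinct values once, and rows become tuples of indices. The class and method names are illustrative assumptions, not the thesis's implementation.

```python
class ColumnStore:
    """Toy column store: each column keeps its distinct values exactly once."""

    def __init__(self, columns):
        self.columns = columns
        self.dictionaries = {c: {} for c in columns}  # value -> index
        self.value_lists = {c: [] for c in columns}   # index -> value
        self.rows = []                                # each row: tuple of indices

    def insert(self, row):
        encoded = []
        for col, val in zip(self.columns, row):
            d = self.dictionaries[col]
            if val not in d:                          # store each value only once
                d[val] = len(self.value_lists[col])
                self.value_lists[col].append(val)
            encoded.append(d[val])
        self.rows.append(tuple(encoded))

    def fetch(self, row_id):
        """Reconstruct a full row from its per-column value indices."""
        return tuple(self.value_lists[c][i]
                     for c, i in zip(self.columns, self.rows[row_id]))

    def distinct_count(self, col):
        return len(self.value_lists[col])
```

With many repeated values per column (the low-density case the abstract describes), the value lists stay small while only compact index tuples grow with the row count.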

    Design and implementation of an integrated surface texture information system for design, manufacture and measurement

    The optimised design and reliable measurement of surface texture are essential to guarantee the functional performance of a geometric product. Current support tools are, however, often limited in functionality, integrity and efficiency. In this paper, an integrated surface texture information system for design, manufacture and measurement, called “CatSurf”, has been designed and developed, which aims to support rapid and flexible manufacturing. A category theory based knowledge acquisition and representation mechanism has been devised to retrieve and organise knowledge from the various Geometrical Product Specifications (GPS) documents on surface texture. Two modules (for profile and areal surface texture), each with five components, have been developed in CatSurf. The system also focuses on integrating surface texture information into a Computer-aided Technology (CAx) framework. Two test cases demonstrate the design process of specifications for profile and areal surface texture in the AutoCAD and SolidWorks environments respectively.

    Extending the relational model version 2 to support generalization hierarchies


    The exploration of a category theory-based virtual Geometrical product specification system for design and manufacturing

    In order to ensure the quality of products and to facilitate global outsourcing, almost all the so-called “world-class” manufacturing companies nowadays apply various tools and methods to maintain the consistency of a product’s characteristics throughout its manufacturing life cycle. Among these, for ensuring the consistency of geometric characteristics, a tolerancing language, the Geometrical Product Specification (GPS), has been widely adopted to precisely transform the functional requirements of customers into manufactured workpieces, expressed as tolerance notes in technical drawings. Although commonly acknowledged by industrial users as one of the most successful efforts in integrating existing manufacturing life-cycle standards, current GPS implementations and software packages suffer from several drawbacks in practical use, the most significant possibly being the difficulty of inferring the data for the “best” solutions. The problem stems from the foundations of data structure and knowledge-based system design. This indicates that there needs to be a “new” software system to facilitate GPS applications. This thesis introduced an innovative knowledge-based system, the VirtualGPS, which provides an integrated GPS knowledge platform based on a stable and efficient database structure with knowledge generation and accessing facilities. The system focuses on solving intrinsic product design and production problems by acting as a virtual domain expert, translating GPS standards and rules into computerised expert advice and warnings. Furthermore, the system can be used as a training tool that helps young and new engineers understand the huge body of GPS standards in a relatively quick manner. The thesis started with a detailed discussion of the proposed categorical modelling mechanism, which has been devised based on Category Theory.
It provided a unified mechanism for knowledge acquisition and representation, knowledge-based system design, and database schema modelling. As a core part of assessing this knowledge-based system, the implementation of the categorical Database Management System (DBMS) is also presented in the thesis. The focus then moved on to the design and implementation of the proposed VirtualGPS system. The tests and evaluations of the system are illustrated in Chapter 6. Finally, the thesis summarises the contributions to knowledge in Chapter 7. After thoroughly reviewing the project, the conclusion reached is that the entire VirtualGPS system was designed and implemented to conform to Category Theory and object-oriented programming rules. The initial tests and performance analyses show that the system facilitates geometric product manufacturing operations and benefits manufacturers and engineers alike, from functional design through to manufacturing and verification.
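The categorical modelling mechanism the abstract describes rests on objects and composable morphisms. A minimal sketch of that idea, with invented names and an invented surface-texture example rather than the VirtualGPS code, might look like:

```python
class Morphism:
    """An arrow between two named objects, carrying a concrete function."""

    def __init__(self, source, target, fn):
        self.source, self.target, self.fn = source, target, fn

    def __call__(self, x):
        return self.fn(x)

def compose(g, f):
    """Composite g after f, defined only when f's target matches g's source."""
    if f.target != g.source:
        raise ValueError("morphisms are not composable")
    return Morphism(f.source, g.target, lambda x: g.fn(f.fn(x)))

# Invented example: map a specification to its Ra value, then to a verdict.
extract_ra = Morphism("Spec", "Ra", lambda spec: spec["Ra"])
check_ra = Morphism("Ra", "Verdict", lambda ra: ra <= 0.8)
verify = compose(check_ra, extract_ra)
```

Composition being checked at the object level is what lets such a model double as both a knowledge-representation scheme and a database schema discipline.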

    The advantages and cost effectiveness of database improvement methods

    Relational databases have proved inadequate for supporting new classes of applications, and as a consequence a number of new approaches have been taken (Blaha 1998), (Harrington 2000). The most salient alternatives are denormalisation and conversion to an object-oriented database (Douglas 1997). Denormalisation can provide better performance but has deficiencies with respect to data modelling. Object-oriented databases can provide increased performance efficiency without the deficiencies in data modelling (Blaha 2000). Although various benchmark tests have been reported, none of them have compared normalised, object-oriented and denormalised databases. This research shows that a non-normalised database for data containing type-code complexity would be normalised in the process of conversion to an object-oriented database. This helps to correct badly organised data and so gives the performance benefits of denormalisation while improving data modelling. The costs of conversion from relational databases to object-oriented databases were also examined. Costs were based on published benchmark tests, a benchmark carried out during this study, and case studies. The benchmark tests were based on an engineering database benchmark; engineering problems such as computer-aided design and manufacturing have much to gain from conversion to object-oriented databases. Costs were calculated for coding and development, and also for operation. It was found that conversion to an object-oriented database was not usually cost-effective, as many of the performance benefits could be achieved by the far cheaper process of denormalisation, by using the performance-improving facilities provided by many relational database systems such as indexing or partitioning, or by simply upgrading the system hardware.
It is concluded therefore that while object-oriented databases are a better alternative for databases built from scratch, the conversion of a legacy relational database to an object-oriented database is not necessarily cost-effective.
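The normalised-versus-denormalised trade-off weighed above can be illustrated with a toy example; the tables and values here are invented for illustration and are not from the study's benchmark.

```python
# Normalised schema: orders reference customers by key, so a read needs a join.
orders = [
    {"order_id": 1, "customer_id": 10, "total": 250},
    {"order_id": 2, "customer_id": 10, "total": 120},
]
customers = {10: {"name": "Acme", "city": "Leeds"}}

def report_normalised(order_id):
    order = next(o for o in orders if o["order_id"] == order_id)
    customer = customers[order["customer_id"]]        # the join step
    return {"order_id": order_id, "name": customer["name"],
            "total": order["total"]}

# Denormalised schema: the customer name is duplicated in every order row,
# trading redundancy and update anomalies for a join-free read.
orders_denorm = [
    {"order_id": 1, "name": "Acme", "total": 250},
    {"order_id": 2, "name": "Acme", "total": 120},
]

def report_denormalised(order_id):
    return next(o for o in orders_denorm if o["order_id"] == order_id)
```

Both reads return the same report; the difference the abstract quantifies is the cost of the join against the cost of the duplicated, harder-to-update data.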

    Development of a knowledge-based system for the repair and maintenance of concrete structures

    PhD Thesis. Information Technology (IT) can exploit strategic opportunities for new ways of facilitating information and data exchange, and the exchange of expert and specialist opinions, in any field of engineering. Knowledge-Based Systems are sophisticated computer programs which store expert knowledge on a specific subject and are applied to a broad range of engineering problems. Integrated database applications have provided the essential capability of storing data to overcome an increasing information malaise. Integrating these areas of IT can bring a group of experts in any field of engineering closer together by allowing them to communicate and exchange information and opinions. The central feature of this research study is the integration of these hitherto separate areas of Information Technology. In this thesis an adaptable graphic-user-interface-centred application comprising a Knowledge-Based Expert System (DEMAREC-EXPERT), a Database Management System (REPCON) and an evaluation program (ECON), alongside visualisation technologies, is developed to produce an innovative platform which will facilitate and encourage the development of knowledge in concrete repair. Diagnosis, Evaluation, MAintenance and REpair of Concrete structures (DEMAREC) is a flexible application which can be used in four modes: Education, Diagnostic, Evaluation and Evolution. In the educational mode an inexperienced user can develop a better understanding of concrete repair technology by navigating through a database of textual and pictorial data. In the diagnostic mode, pictures and descriptive information taken from the database and the performance of the expert system (DEMAREC-EXPERT) are used in a way that makes problem solving and decision making easier.
The DEMAREC-EXPERT system is coupled to REPCON (as an independent database) in order to provide the user with recommendations on the best course of action for maintenance and on the selection of materials and methods for the repair of concrete. In the evaluation mode the conditions observed are described in unambiguous terms that the user can act on, taking engineering and management decisions for the repair and maintenance of the structure. In the evolution mode of the application, the nature of distress, repair and maintenance of concrete structures within the extent of the database management system has been assessed. The new methodology of data/user evaluation could have wider implications in many knowledge-rich areas of expertise. The benefit of using REPCON lies in the enhanced levels of confidence which can be attributed to the data and to the contribution of that data. Effectively, REPCON is designed to model a true evolution of a field of expertise, but allows that expertise to move on in a faster and more structured manner. This research has wider implications than the realm of concrete repair alone. The methodology described in this thesis is developed to provide technology transfer of information from experts and specialists to other practitioners and vice versa, and it provides a common forum for communicating and exchanging information between them. Indeed, one of the strengths of the system is the way in which it allows the promotion and relegation of knowledge according to the opinion of users of different levels of ability, from expert to novice. It creates a flexible environment in which an inexperienced user can develop his knowledge of the maintenance and repair of concrete structures, and it is explained how an expert or a specialist can contribute his experience and knowledge towards improving and evolving the problem-solving capability of the application.

    Intelligent urban water infrastructure management

    Copyright © 2013 Indian Institute of Science. Urban population growth, together with other pressures such as climate change, creates enormous challenges for the provision of urban infrastructure services, including gas, electricity, transport, water, etc. Smart-grid technology is viewed as the way forward to ensure that infrastructure networks are flexible, accessible, reliable and economical. “Intelligent water networks” take advantage of the latest information and communication technologies to gather and act on information to minimise waste and deliver more sustainable water services. The effective management of water distribution, urban drainage and sewerage infrastructure is likely to require increasingly sophisticated computational techniques to keep pace with the level of data collected from measurement instruments in the field. This paper describes two examples of intelligent systems developed to utilise this increasingly available real-time sensed information in the urban water environment. The first is a failure-management decision-support system for water distribution networks, NEPTUNE, which applies intelligent computational methods and tools to near real-time logger data providing pressures, flows and tank levels at selected points throughout the system. The second, called RAPIDS, deals with urban drainage systems and the utilisation of rainfall data to predict flooding of urban areas in near real time. The two systems have the potential to provide early warning and scenario testing for decision makers within reasonable time, this being a key requirement of such systems: computational methods that require hours or days to run cannot keep pace with fast-changing situations such as pipe bursts or manhole flooding, and thus the systems developed are able to react in close to real time. Funders: Engineering and Physical Sciences Research Council; UK Water Industry Research; Yorkshire Water.

    Review on Main Memory Database

    The idea of using a Main Memory Database (MMDB), with physical memory as the primary store, is not new; it has existed for well over a decade. MMDBs have evolved from a period when they were used only for caching or in high-speed data systems to a point, now in the twenty-first century, where they form an established part of mainstream IT. Early in this century, larger main memories were affordable, but processors were not fast enough for main memory databases to be widely adopted. Today's processors, however, are faster, are available in multicore and multiprocessor configurations, offer 64-bit memory addressability, and are stocked with multiple gigabytes of main memory. MMDBs therefore offer a solution for meeting the requirements of next-generation IT challenges. To support this shift, database systems are being redesigned to handle the implementation issues arising from the inherent differences between disk and memory storage, and to gain the corresponding performance benefits. This paper is a review of Main Memory Databases.
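The disk-versus-memory design difference the review refers to can be sketched minimally: with all rows resident in memory, a hash index gives constant-time point lookups, where a disk-oriented design would organise pages around block I/O instead. The class below is an invented illustration, not an API from any real MMDB product; real systems add logging, recovery and concurrency control.

```python
class MemoryTable:
    """Toy memory-resident table with a hash index on one key field."""

    def __init__(self, key_field):
        self.key_field = key_field
        self.rows = []         # all rows live in main memory
        self.index = {}        # key -> position in self.rows

    def insert(self, row):
        self.index[row[self.key_field]] = len(self.rows)
        self.rows.append(row)

    def get(self, key):
        """Average O(1) point lookup via the in-memory hash index."""
        pos = self.index.get(key)
        return None if pos is None else self.rows[pos]
```

Because no page layout or buffer pool stands between the query and the data, the index can hold direct row positions, which is one of the implementation differences the paper surveys.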
