
    Decomposition of automotive manufacturing machines through a mechanism taxonomy within a product lifecycle management framework

    The automotive sector, as with other manufacturing industries, is under continual pressure from the consumer to deliver greater levels of product customisation at higher quality and reduced cost. Maintaining market position is therefore increasingly determined by a company's ability to innovate design changes quickly and to produce greater numbers of product variants on leaner production lines with shorter times to market. In response, manufacturers are attempting to accommodate product customisation and change through the use of reconfigurable production machines. Besides the need for flexibility, production facilities represent a significant investment for automotive manufacturers which is increasingly critical to commercial success; consequently, the need to reduce costs through the reuse of assembly and manufacturing hardware on new product programs is becoming crucial. The aim of this research is to enable production machines to be built, and subsequently reconfigured, more easily and cost-effectively through the adoption of a component-based approach to their implementation, utilising virtual manufacturing tools such as Product Lifecycle Management (PLM). It is suggested that, through the decomposition of manufacturing machines into standardised mechanisms and their associated data structures, a revised business model can be defined. The mechanisms are classified and deployed as part of a consistent, integrated data structure that encompasses product, process and plant information. An objective is to properly integrate manufacturing data with more established Product Data Management (PDM) processes.
The main areas of research reported in this article are: (1) the development of a method for identifying and mapping data producers, consumers and flows; (2) the development of standardised data structures for the management of manufacturing data within a PLM tool; and (3) the development of a taxonomy for the decomposition of manufacturing and assembly lines into a library of standard physical, logical and structural mechanisms and their associated interfaces. An automotive OEM case study is presented to illustrate the classification and management of production mechanisms, focusing on an engine assembly line.
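The decomposition described above can be pictured as a small data structure: machines built from standard mechanisms, each classified as physical, logical or structural and exposing typed interfaces for reuse. The following is a minimal sketch of that idea; all class and field names are illustrative assumptions, not the authors' actual PLM schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a mechanism taxonomy: machines decomposed into
# standard mechanisms, each classified as physical, logical or structural,
# with typed interfaces. Names here are assumptions, not the article's schema.

@dataclass
class Interface:
    name: str
    kind: str  # e.g. "mechanical", "electrical", "control"

@dataclass
class Mechanism:
    name: str
    category: str  # "physical" | "logical" | "structural"
    interfaces: list = field(default_factory=list)

@dataclass
class ProductionMachine:
    name: str
    mechanisms: list = field(default_factory=list)

    def by_category(self, category):
        """Return the mechanisms belonging to one taxonomy category."""
        return [m for m in self.mechanisms if m.category == category]

# Example: a simplified engine-assembly station decomposed into mechanisms.
station = ProductionMachine("head-bolt tightening station", [
    Mechanism("servo nutrunner", "physical", [Interface("spindle", "mechanical")]),
    Mechanism("torque sequence", "logical", [Interface("PLC program", "control")]),
    Mechanism("station frame", "structural", [Interface("base plate", "mechanical")]),
])
print([m.name for m in station.by_category("physical")])
```

A library of such mechanism definitions, held alongside product and process data in the PLM tool, is what would let a new line be composed from reused, pre-validated building blocks.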

    A Novel Scoring Based Distributed Protein Docking Application to Improve Enrichment

    Molecular docking is a computational technique which predicts the binding energy and the preferred binding mode of a ligand to a protein target. Virtual screening is a tool which uses docking to investigate large chemical libraries to identify ligands that bind favorably to a protein target. We have developed a novel scoring-based distributed protein docking application to improve enrichment in virtual screening. The application addresses the issue of the time and cost of screening, in contrast to conventional systematic parallel virtual screening methods, in two ways. Firstly, it automates the process of creating and launching multiple independent dockings on a high-performance computing cluster. Secondly, it uses a Naïve Bayes scoring function to calculate the binding energy of un-docked ligands in order to identify and preferentially dock (AutoDock-predicted) better binders. The application was tested on four proteins using a library of 10,573 ligands. In all the experiments, (i) 200 of the 1000 best binders are identified after docking only 14% of the chemical library, (ii) 9 of the 10 best binders are identified after docking only 19% of the chemical library, and (iii) no significant enrichment is observed after docking 70% of the chemical library. The results show a significant increase in the enrichment of potential drug leads in the early rounds of virtual screening.
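The prioritization loop the abstract describes can be sketched as follows: dock a seed batch, label ligands as good or bad binders by their docking energy, then rank the remaining un-docked ligands with a Naïve Bayes model so predicted better binders are docked first. This is a hedged toy sketch, not the authors' application: the descriptors, the stand-in `fake_dock` function, and the hand-rolled Gaussian Naïve Bayes are all assumptions made to keep the example self-contained.

```python
import math
import random
from statistics import mean, pstdev

def fake_dock(descriptors):
    # Stand-in for an AutoDock run: lower (more negative) energy = better.
    return -sum(descriptors) + random.gauss(0, 0.1)

def fit_gaussian_nb(samples, labels):
    """Fit per-class priors and per-feature Gaussian parameters."""
    model = {}
    for cls in set(labels):
        rows = [s for s, l in zip(samples, labels) if l == cls]
        cols = list(zip(*rows))
        model[cls] = (len(rows) / len(samples),
                      [(mean(c), pstdev(c) or 1e-6) for c in cols])
    return model

def log_posterior(model, x, cls):
    """Unnormalized Gaussian Naive Bayes log-posterior for one class."""
    prior, params = model[cls]
    lp = math.log(prior)
    for xi, (mu, sd) in zip(x, params):
        lp += -0.5 * ((xi - mu) / sd) ** 2 - math.log(sd)
    return lp

random.seed(0)
library = [[random.random() for _ in range(3)] for _ in range(200)]
seed, rest = library[:40], library[40:]

# Dock the seed batch and label the top quartile as "good" binders.
energies = [fake_dock(d) for d in seed]
cutoff = sorted(energies)[len(energies) // 4]
labels = ["good" if e <= cutoff else "bad" for e in energies]
nb = fit_gaussian_nb(seed, labels)

# Queue the remaining ligands for docking in order of predicted "goodness".
ranked = sorted(rest, key=lambda d: -(log_posterior(nb, d, "good")
                                      - log_posterior(nb, d, "bad")))
print(len(ranked))
```

In the real application the model would be re-fit as each docking batch completes, so the ranking of the still-undocked ligands keeps improving; that feedback loop is what drives the early-round enrichment the abstract reports.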

    A conceptual modelling perspective for data warehouses


    Intelligent Management and Efficient Operation of Big Data

    This chapter details how Big Data can be used and implemented in networking and computing infrastructures. Specifically, it addresses three main aspects: the timely extraction of relevant knowledge from heterogeneous and very often unstructured large data sources; the enhancement of the performance of the processing and networking (cloud) infrastructures that are the most important foundational pillars of Big Data applications and services; and novel ways to efficiently manage network infrastructures with high-level composed policies for supporting the transmission of large amounts of data with distinct requirements (video vs. non-video). A case study involving an intelligent management solution to route data traffic with diverse requirements in a wide-area Internet Exchange Point is presented, discussed in the context of Big Data, and evaluated. Comment: in book Handbook of Research on Trends and Future Directions in Big Data and Web Intelligence, IGI Global, 201
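The idea of high-level composed policies can be made concrete with a small sketch: flows are matched against an ordered list of human-readable rules, and the first matching rule decides the forwarding treatment (e.g. video onto a low-latency path). The rule fields, sample flows, and path names below are illustrative assumptions, not the chapter's actual policy syntax.

```python
# Hedged sketch of policy-driven traffic management: ordered rules mapping
# a traffic class to a forwarding decision. All names are assumptions.

POLICIES = [
    {"match": {"kind": "video"}, "action": {"path": "low-latency", "priority": 1}},
    {"match": {"kind": "bulk"},  "action": {"path": "high-throughput", "priority": 3}},
    {"match": {},                "action": {"path": "best-effort", "priority": 2}},  # default
]

def decide(flow, policies=POLICIES):
    """Return the action of the first policy whose match fields all agree."""
    for rule in policies:
        if all(flow.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    raise LookupError("no policy matched")  # unreachable with a default rule

video_flow = {"kind": "video", "src": "peer-a"}
bulk_flow = {"kind": "bulk", "src": "peer-b"}
print(decide(video_flow)["path"], decide(bulk_flow)["path"])
```

Keeping the rules declarative like this is what lets operators compose and change traffic treatment without touching the routing machinery underneath.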

    COMPETITIVE DYNAMICS IN ELECTRONIC NETWORKS - ACHIEVING COMPETITIVENESS THROUGH INTERORGANIZATIONAL SYSTEMS

    Many dramatic and potentially powerful uses of information technology involve interorganizational systems (IOS). These systems, defined as distributed computing systems that support shared processes between firms, have become fundamental to business operations, spanning multiple activities in value/supply chains. They have opened avenues to unprecedented collaborative linkages between firms. As IOS-mediated relational networks are rapidly evolving, the roles of IOS have progressively changed beyond those of efficiency and power functions. To fully appreciate the modern roles of IOS in e-business, this dissertation addresses two key research questions: (1) How do firms achieve competitiveness through IOS? (2) How do IOS influence the competitive behaviors of competing firms in intertwined electronic networks? It does so by integrating three research streams – social network analysis, interorganizational systems, and competitive dynamics – into a model of competitive dynamics in electronic networks. This study focuses on the paired relationships between the three constructs of network structure, IOS use, and competitive action, and empirically investigates nine general hypotheses. Data collection focuses on second-hand data in the automotive industry. A total of 805 collaborative relationships, 106 IOS technologies and applications, and 305 competitive actions involving nine major automakers are collected. Data sources include databases, major trade publications, Web sites, and industry indices. Data analysis includes network analysis, ANOVA tests, and correlation. Empirical results support the general contention that network structure and IOS use coevolve and influence competitive action. Building on these results, a framework characterizing IOS's roles in achieving firm competitiveness is concluded and advanced. This dissertation broadens our view of IOS's roles in e-business. It contributes to IS/IOS theory, methodology, and practice.
First, this study examines IOS-mediated networks in multiple levels, including firm-level, pair-level, and network-level. It provides new theoretical conceptualizations of IOS's roles. Second, this study advances a new IT value measure addressing limitations of the traditional measures. Third, it introduces a novel, useful methodology for data collection. Fourth, results from this study can guide a firm's e-business initiatives for using IOS as powerful tools for achieving firm competitiveness.

    Facilitating Transformations in a Human Genome Project Database

    Human Genome Project databases present a confluence of interesting database challenges: rapid schema and data evolution, complex data entry and constraint management, and the need to integrate multiple data sources and software systems that range over a wide variety of models and formats. While these challenges are not necessarily unique to biological databases, their combination, intensity and complexity are unusual and make automated solutions imperative. We illustrate these problems in the context of the Human Genome Database for Chromosome 22 (Chr22DB), and describe a new approach to solving them by means of a deductive language for expressing database transformations and constraints.

    Interoperability of Traffic Infrastructure Planning and Geospatial Information Systems

    Building Information Modelling (BIM), as a model-based design approach, facilitates the investigation of multiple solutions in the infrastructure planning process. The most important reason for implementing model-based design is to help designers and to increase communication between the different design parties. It decentralizes and coordinates team collaboration and facilitates faster, lossless exchange and management of project data across extended teams and external partners over the project lifecycle. Infrastructure comprises the fundamental facilities, services, and installations needed for the functioning of a community or society, such as transportation, roads, communication systems, water and power networks, as well as power plants. Geospatial Information Systems (GIS), as the digital representation of the world, are systems for maintaining, managing, modelling, analyzing, and visualizing world data, including infrastructure. High-level infrastructure suites mostly facilitate analyzing an infrastructure design against international or user-defined standards. Called regulation-based design, this minimizes errors, reduces costly design conflicts, saves time and provides consistent project quality, yet mostly in standalone solutions. Infrastructure tasks usually require both model-based and regulation-based design packages. Infrastructure tasks deal with cross-domain information; however, the corresponding data is split across several domain models. Besides, infrastructure projects demand many decisions at the governmental as well as the private level, considering different data models. Therefore a lossless flow of project data, as well as of documents like regulations, across the project team, stakeholders, and governmental and private levels is highly important. Yet infrastructure projects have largely been absent from product modelling discourses for a long time. Thus, as will be explained in chapter 2, interoperability is needed in infrastructure processes.
Multimodel (MM) is an interoperability method which enables heterogeneous data models from various domains to be bundled together into a container while keeping their original formats. Existing interoperability methods, including existing MM solutions, cannot satisfactorily fulfill the typical demands of infrastructure information processes, such as dynamic data resources and a huge number of inter-model relations. Therefore the concept of infrastructure information modelling in chapter 3 investigates a method for the loose, rule-based coupling of exchangeable heterogeneous information spaces. This hypothesis extends the existing MM to a rule-based Multimodel, named extended Multimodel (eMM), with semantic rules instead of static links. The semantic rules are used to describe relations between data elements of various models dynamically in a link database. Most of the confusion about geospatial data models arises from their diversity. In some of these data models spatial IDs are the basic identities of entities, while in others there are no IDs at all. That is why, in geospatial data, the data structure is more important than the data model. There are always spatial indexes that enable access to the geodata. The most important unifying feature of the data models involved in infrastructure projects is their spatiality. As explained in chapter 4, the method of infrastructure information modelling for interoperation in spatial domains generates interlinks through the spatial identity of entities. Match finding through spatial links enables any data models that share spatial properties to be interlinked. Through such spatial links, each entity receives from the other data models the spatial information related to it by virtue of sharing an equivalent spatial index. This information constitutes the virtual properties of the object. The thesis uses a nearest-neighbour algorithm for spatial match finding and applies filtering and refining approaches.
For the abstraction of the spatial matching results, hierarchical filtering techniques are used to refine the virtual properties. These approaches focus on two main application areas: the product model and the Level of Detail (LoD). For the eMM suggested in this thesis, a rule-based interoperability method between arbitrary data models of the spatial domain has been developed. The implementation of this method enables transactions of data in spatial domains to run losslessly. The system architecture and the implementation, which has been applied to the case study of this thesis, namely infrastructure and geospatial data models, are described in chapter 5. Achieving the aforementioned aims results in reducing whole-project lifecycle costs, increasing the reliability of the comprehensive fundamental information, and consequently in independent, cost-effective, aesthetically pleasing, and environmentally sensitive infrastructure design.
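The core linking idea can be illustrated with a small sketch: entities from two data models that share nothing but spatiality are matched by nearest neighbour on their coordinates, and the match's attributes are attached to the target entity as virtual properties, leaving both source models untouched. The entity dictionaries, attribute names, and brute-force search below are illustrative assumptions, not the thesis implementation.

```python
import math

def nearest(entity, candidates):
    """Brute-force nearest neighbour; a production system would use a
    tree-based spatial index (e.g. an R-tree) instead of a linear scan."""
    return min(candidates,
               key=lambda c: math.dist(entity["xy"], c["xy"]))

# Toy infrastructure model (e.g. LandXML-like road points) ...
roads = [{"id": "r1", "xy": (0.0, 0.0), "surface": "asphalt"},
         {"id": "r2", "xy": (10.0, 0.0), "surface": "gravel"}]
# ... and a toy geospatial model (e.g. CityGML-like buildings).
buildings = [{"id": "b1", "xy": (1.0, 0.5)},
             {"id": "b2", "xy": (9.0, 1.0)}]

# Build the link model: each building receives the nearest road's
# attributes as virtual properties; both source models stay unchanged.
links = {}
for b in buildings:
    r = nearest(b, roads)
    links[b["id"]] = {"road": r["id"], "virtual": {"surface": r["surface"]}}

print(links["b1"]["road"], links["b2"]["road"])  # → r1 r2
```

The separate link database is the key design choice: the heterogeneous models keep their original formats, while the rule-generated links carry the cross-model information.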

    TLAD 2010 Proceedings: 8th international workshop on teaching, learning and assessment of databases (TLAD)

    This is the eighth in the series of highly successful international workshops on the Teaching, Learning and Assessment of Databases (TLAD 2010), which once again is held as a workshop of BNCOD 2010 - the 27th International Information Systems Conference. TLAD 2010 is held on the 28th June at the beautiful Dudhope Castle at the Abertay University, just before BNCOD, and hopes to be just as successful as its predecessors. The teaching of databases is central to all Computing Science, Software Engineering, Information Systems and Information Technology courses, and this year the workshop aims to continue the tradition of bringing together both database teachers and researchers, in order to share good learning, teaching and assessment practice and experience, and to further the growing community amongst database academics. As well as attracting academics from the UK community, the workshop has also been successful in attracting academics from the wider international community, through serving on the programme committee, and attending and presenting papers. This year, the workshop includes an invited talk given by Richard Cooper (of the University of Glasgow), who will present a discussion and some results from the Database Disciplinary Commons which was held in the UK over the academic year. Due to the healthy number of high-quality submissions this year, the workshop will also present seven peer-reviewed papers and six refereed poster papers. Of the seven presented papers, three will be presented as full papers and four as short papers. These papers and posters cover a number of themes, including: approaches to teaching databases, e.g. group-centered and problem-based learning; use of novel case studies, e.g. forensics and XML data; techniques and approaches for improving teaching and student learning processes; assessment techniques, e.g. peer review; methods for improving students' abilities to develop database queries and E-R diagrams; and e-learning platforms for supporting teaching and learning.