61 research outputs found

    High level data fusion

    We address the question of how to obtain effective fusion of identification information such that it is robust to the quality of this information. As well as technical issues, data fusion is encumbered with a collection of (potentially confusing) practical considerations. These considerations are described in the early chapters, in which a framework for data fusion is developed. Following this process of diversification it becomes clear that the original question is not well posed and requires more precise specification. We use the framework to focus on some of the technical issues relevant to the question being addressed. We show that fusion of hard decisions through use of an adaptive version of the maximum a posteriori decision rule yields acceptable performance. Better performance is possible using probability-level fusion as long as the probabilities are accurate. Of particular interest is the prevalence of overconfidence and the effect it has on fused performance. The production of accurate probabilities from poor-quality data forms the latter part of the thesis. Two approaches are taken. Firstly, the probabilities may be moderated at source (either analytically or numerically). Secondly, the probabilities may be transformed at the fusion centre. In each case an improvement in fused performance is demonstrated. We therefore conclude that, in order to obtain robust fusion, care should be taken to model the probabilities accurately, either at the source or centrally.
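    Below is a minimal Python sketch of the two fusion levels the abstract contrasts: hard-decision fusion (here a simple majority vote standing in for MAP fusion of decisions) and probability-level fusion by the product rule, plus a tempering step illustrating moderation of overconfident probabilities. The independence assumption, the two-class setting, and the tempering exponent are illustrative assumptions, not the thesis's exact formulation.

    import numpy as np

    def fuse_probabilities(posteriors):
        # Product-rule fusion of per-sensor posteriors P(class 1 | x),
        # assuming conditionally independent sensors.
        p = np.asarray(posteriors)
        p1, p0 = np.prod(p), np.prod(1.0 - p)
        return p1 / (p1 + p0)

    def fuse_decisions(posteriors, threshold=0.5):
        # Hard-decision fusion: each sensor decides first, then a majority
        # vote (a simple stand-in for MAP fusion of decisions with equal
        # priors and equal sensor error rates).
        votes = [p > threshold for p in posteriors]
        return int(sum(votes) > len(votes) / 2)

    def moderate(p, tau=2.0):
        # Temper an overconfident posterior toward 0.5 by flattening the
        # implied odds; tau > 1 reduces confidence.
        odds = (p / (1.0 - p)) ** (1.0 / tau)
        return odds / (1.0 + odds)

    sensors = [0.9, 0.8, 0.99]  # three sensors, possibly overconfident
    print(fuse_probabilities(sensors))                         # fused posterior
    print(fuse_decisions(sensors))                             # fused hard decision
    print(fuse_probabilities([moderate(p) for p in sensors]))  # moderated fusion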

    An automated coding and classification system with supporting database for effective design of manufacturing systems

    The philosophy of group technology (GT) is an important concept in the design of flexible manufacturing systems and manufacturing cells. Group technology is a manufacturing philosophy that identifies similar parts and groups them into families. Besides assigning unique codes to these parts, group technology developers intend to take advantage of part similarities during design and manufacturing processes. GT is not the answer to all manufacturing problems, but it is a good management technique with which to standardize efforts and eliminate duplication. Group technology classifies parts by assigning them to different families based on their similarities in (1) design attributes (physical shape and size) and/or (2) manufacturing attributes (processing sequence). The manufacturing industry today is process focused; departments and sub-units are no longer independent but are interdependent. If the product development process is to be optimized, engineering and manufacturing cannot remain independent any longer: they must be coordinated. Each sub-system is a critical component within an integrated manufacturing framework. The coding and classification system is the basis of computer-aided process planning (CAPP), and the functioning and reliability of CAPP depend on the robustness of the coding system. The proposed coding system is considered superior to previously proposed coding systems in that it has the capability to migrate into multiple manufacturing environments. This article presents the design of a coding and classification system and the supporting database for manufacturing processes, based on both design and manufacturing attributes of parts. An interface with the spreadsheet calculates the machine operation costs for various processes. This menu-driven interactive package is implemented using dBASE-IV. Part family formation is achieved using a KAMCELL package developed in TURBO Pascal. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/46606/1/10845_2004_Article_BF00123696.pd
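    The following Python sketch illustrates the general idea behind such a coding and classification scheme: encode a part's design and manufacturing attributes into a short digit code, then group parts with identical codes into families. The attribute fields and digit assignments here are hypothetical; the article's actual code structure, database, and KAMCELL package are not reproduced.

    from dataclasses import dataclass
    from collections import defaultdict

    @dataclass
    class Part:
        name: str
        shape: str         # design attribute: 'rotational' or 'prismatic'
        max_dim_mm: float  # design attribute: overall size
        process: str       # manufacturing attribute: primary process

    SHAPE_DIGIT = {"rotational": 0, "prismatic": 1}
    PROCESS_DIGIT = {"turning": 0, "milling": 1, "drilling": 2}

    def gt_code(part: Part) -> str:
        # Three-digit code: shape class, size class, primary process.
        size_digit = 0 if part.max_dim_mm < 50 else 1 if part.max_dim_mm < 200 else 2
        return f"{SHAPE_DIGIT[part.shape]}{size_digit}{PROCESS_DIGIT[part.process]}"

    def form_families(parts):
        # Group parts with identical codes into part families.
        families = defaultdict(list)
        for part in parts:
            families[gt_code(part)].append(part.name)
        return dict(families)

    parts = [
        Part("shaft-A", "rotational", 120.0, "turning"),
        Part("shaft-B", "rotational", 150.0, "turning"),
        Part("plate-C", "prismatic", 300.0, "milling"),
    ]
    print(form_families(parts))  # {'010': ['shaft-A', 'shaft-B'], '121': ['plate-C']}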

    DESIGN OF AN ELECTRO-MECHANICAL DEVICE FOR SIMULATING PRODUCT FLOW CHARACTERISTICS IN A MANUFACTURING LINE

    No full text
    Abstract not available.

    Integrated production control systems: management, analysis, design, 2nd ed.

    No full text
    xiv, 477 p.; 21 cm

    Integrated Production Control System

    No full text
    477 p.

    Quantisation for Probability-Level Fusion on a Bandwidth Budget

    No full text
    Results are established for a simulated data fusion architecture featuring a synthetic two-class Gaussian problem, with Bayesian recognisers. The recognisers output posterior probabilities for each class. The probabilities from two or more recognisers of identical error rate are quantised using the nearest-neighbour coding rule. The coded values are decoded at a fusion centre and fused. A decision is made from the fused probabilities. The performance of the architecture is examined experimentally using code values that are uniformly distributed and code values that are produced using the Linde-Buzo-Gray (LBG) algorithm. Results are produced for two to six sensors and for 2 to 32 code values. These results are compared with fusing probabilities represented as 32-bit floating-point numbers. Using 32 uniform or LBG-produced code values produces results that are at most only 1% worse than fusing the uncoded probabilities. Keywords: data fusion, quantisation, architecture
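    The following Python sketch reproduces the shape of the experiment as described: exact Bayesian posteriors for a two-class Gaussian problem, nearest-neighbour quantisation against a 32-level uniform codebook (the paper also uses LBG-trained codebooks, not shown here), and product-rule fusion across sensors. The problem parameters and random seed are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, n_sensors, n_codes = 20000, 3, 32

    # Two-class Gaussian problem: class 0 ~ N(-1, 1), class 1 ~ N(+1, 1),
    # equal priors; each sensor sees an independent observation.
    labels = rng.integers(0, 2, size=n)
    x = rng.normal(2.0 * labels - 1.0, 1.0, size=(n_sensors, n))

    # Bayesian recogniser: the exact posterior P(class 1 | x) for this problem.
    post = 1.0 / (1.0 + np.exp(-2.0 * x))

    # Nearest-neighbour coding against a uniform codebook on (0, 1).
    codebook = (np.arange(n_codes) + 0.5) / n_codes
    quantised = codebook[np.argmin(np.abs(post[..., None] - codebook), axis=-1)]

    def fuse(p):
        # Product-rule fusion across sensors, assuming independence.
        p1, p0 = np.prod(p, axis=0), np.prod(1.0 - p, axis=0)
        return p1 / (p1 + p0)

    for name, p in [("float posteriors", post), ("32-level quantised", quantised)]:
        err = np.mean((fuse(p) > 0.5) != labels)
        print(f"{name}: error rate {err:.4f}")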