    Aggregate assembly process planning for concurrent engineering

    Get PDF
    In today's consumer and economic climate, manufacturers are finding it increasingly difficult to produce finished products with increased functionality whilst fulfilling the aesthetic requirements of the consumer. To remain competitive, manufacturers must always look for ways to meet the faster, better, and cheaper mantra of today's economy. The ability for any industry to mirror the ideal world, where the design, manufacturing, and assembly process of a product would be perfected before it is put into production, will undoubtedly save a great deal of time and money. This thesis introduces the concept of aggregate assembly process planning for the conceptual stages of design, with the aim of providing the methodology behind such an environment. The methodology is based on an aggregate product model and a connectivity model. Together, they encompass all the requirements needed to fully describe a product in terms of its assembly processes, providing a suitable means for generating assembly sequences. Two general-purpose heuristic methods, namely simulated annealing and genetic algorithms, are used for the optimisation of the assembly sequences generated and the loading of the optimal assembly sequences onto workstations, generating an optimal assembly process plan for any given product. The main novelty of this work is in the mapping of the optimisation methods to the issue of assembly sequence generation and line balancing. This includes the formulation of the objective functions for optimising assembly sequences and resource loading. Also novel to this work is the derivation of standard part assembly methodologies, used to establish and estimate functional times for standard assembly operations. The method is demonstrated using CAPABLEAssembly, a suite of interlinked modules that generates a pool of optimised assembly process plans using the concepts above. A total of nine industrial products have been modelled, four of which are conceptual product models. The process plans generated to date have been tested on industrial assembly lines and in some cases yield an increase in the production rate
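
    The thesis maps simulated annealing (and genetic algorithms) onto assembly sequence optimisation and line balancing; as a minimal sketch of that mapping only, the Python below anneals a permutation of assembly operations against a caller-supplied objective. The operation names, the precedence-violation cost and all parameters are illustrative assumptions, not the thesis's objective functions.

    import math
    import random

    def anneal_sequence(ops, cost, t0=100.0, cooling=0.995, steps=20000):
        """Simulated annealing over assembly sequences (permutations of ops).

        `cost` is a caller-supplied objective, e.g. penalising precedence
        violations, tool changes or workstation imbalance."""
        current = ops[:]
        random.shuffle(current)
        best, best_cost = current[:], cost(current)
        temp = t0
        for _ in range(steps):
            # Neighbour move: swap two randomly chosen operations.
            i, j = random.sample(range(len(current)), 2)
            candidate = current[:]
            candidate[i], candidate[j] = candidate[j], candidate[i]
            delta = cost(candidate) - cost(current)
            # Always accept improvements; accept worse moves with Boltzmann probability.
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                current = candidate
                if cost(current) < best_cost:
                    best, best_cost = current[:], cost(current)
            temp *= cooling
        return best, best_cost

    # Hypothetical example: four operations with two precedence constraints.
    ops = ["base", "bracket", "bolt", "cover"]
    precedence = [("base", "bolt"), ("bracket", "cover")]

    def violations(seq):
        pos = {op: k for k, op in enumerate(seq)}
        return sum(1 for a, b in precedence if pos[a] > pos[b])

    print(anneal_sequence(ops, violations))  # a feasible sequence and its cost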

    Genome Assembly: Novel Applications by Harnessing Emerging Sequencing Technologies and Graph Algorithms

    Get PDF
    Genome assembly is a critical first step for biological discovery. All current sequencing technologies share the fundamental limitation that segments read from a genome are much shorter than even the smallest genomes. Traditionally, whole-genome shotgun (WGS) sequencing over-samples a single clonal (or inbred) target chromosome with segments from random positions. The amount of over-sampling is known as the coverage. Assembly software then reconstructs the target. So-called next-generation (or second-generation) sequencing has reduced the cost and increased throughput exponentially over first-generation sequencing. Unfortunately, next-generation sequences present their own challenges to genome assembly: (1) they require amplification of source DNA prior to sequencing, leading to artifacts and biased coverage of the genome; (2) they produce relatively short reads: 100bp-700bp; (3) the sizeable runtime of most second-generation instruments is prohibitive for applications requiring rapid analysis, with an Illumina HiSeq 2000 instrument requiring 11 days for the sequencing reaction. Recently, successors to the second-generation instruments (third-generation) have become available. These instruments promise to alleviate many of the downsides of second-generation sequencing and can generate multi-kilobase sequences. The long sequences have the potential to dramatically improve genome and transcriptome assembly. However, the high error rate of these reads is challenging and has limited their use. To address this limitation, we introduce a novel correction algorithm and assembly strategy that utilizes shorter, high-identity sequences to correct the error in single-molecule sequences. Our approach achieves over 99% read accuracy and produces substantially better assemblies than current sequencing strategies. The availability of cheaper sequencing has made new sequencing targets, such as multiple displacement amplification (MDA) single cells and metagenomes, popular. Current algorithms assume assembly of a single clonal target, an assumption that is violated in these sequencing projects. We developed Bambus 2, a new scaffolder that works for metagenomic and single-cell datasets. It can accurately detect repeats without assumptions about the taxonomic composition of a dataset. It can also identify biological variations present in a sample. We have developed a novel end-to-end analysis pipeline leveraging Bambus 2. Due to its modular nature, it is applicable to clonal, metagenomic, and MDA single-cell targets and allows a user to rapidly go from sequences to assembly, annotation, genes, and taxonomic information. We have incorporated a novel viewer, allowing a user to interactively explore the variation present in a genomic project on a laptop. Together, these developments make genome assembly applicable to novel targets while utilizing emerging sequencing technologies. As genome assembly is critical for all aspects of bioinformatics, these developments will enable novel biological discovery
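
    As a rough illustration of the hybrid-correction idea above (short, high-identity reads polishing a noisy single-molecule read), the toy Python sketch below assumes gap-free short-read alignments are already known and simply takes a per-base majority vote; the function name and data layout are hypothetical and greatly simplified relative to the actual correction algorithm.

    from collections import Counter

    def polish_long_read(long_read, aligned_short_reads):
        """Toy hybrid correction: replace each base of a noisy long read with the
        majority base among the short reads covering that position.

        `aligned_short_reads` is a list of (start, sequence) pairs giving
        gap-free alignments of high-identity short reads to the long read."""
        votes = [Counter() for _ in long_read]
        for start, seq in aligned_short_reads:
            for offset, base in enumerate(seq):
                pos = start + offset
                if 0 <= pos < len(long_read):
                    votes[pos][base] += 1
        corrected = []
        for original_base, counter in zip(long_read, votes):
            # Keep the original base wherever no short read provides coverage.
            corrected.append(counter.most_common(1)[0][0] if counter else original_base)
        return "".join(corrected)

    # Hypothetical example: the aligned short reads fix the erroneous C at position 2.
    print(polish_long_read("ACCTA", [(0, "ACGT"), (1, "CGTA")]))  # -> ACGTA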

    Aggregate process planning and manufacturing assessment for concurrent engineering

    Get PDF
    The introduction of concurrent engineering has led to a need to perform product development tasks with reduced information detail. Decisions taken during the early design stages will have the greatest influence on the cost of manufacture. The manufacturing requirements for alternative design options should therefore be considered at this time. Existing tools for product manufacture assessment are either too detailed, requiring detailed design information, or too abstract, unable to consider small changes in design configuration. There is a need for an intermediate level of assessment which will make use of additional design detail where available, whilst allowing assessment of early designs. This thesis develops the concept of aggregate process planning as a methodology for supporting concurrent engineering. A methodology for performing aggregate process planning of early product designs is presented. Process and resource alternatives are identified for each feature of the component, and production plans are generated from these options. Alternative production plans are assessed in terms of cost, quality and production time. A computer-based system (CESS, Concurrent Engineering Support System) has been developed to implement the proposed methodology. The system employs object-oriented modelling techniques to represent designs, manufacturing resources and process planning knowledge. A product model suitable for the representation of component designs at varying levels of detail is presented. An aggregate process planning functionality has been developed to allow the generation of sets of alternative plans for a component in a given factory. Manufacturing cost is calculated from the cost of processing, set-ups, transport, material and quality. Processing times are calculated using process-specific methods which are based on standard cutting data. Process quality cost is estimated from a statistical analysis of historical SPC data stored for similar operations performed in the factory, where available. The aggregate process planning functionality has been tested with example component designs drawn from industry
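
    A minimal sketch, assuming an illustrative flat cost model rather than the actual CESS object model, of how the aggregate cost components named above (processing, set-ups, transport, material and quality) might be combined for one candidate plan:

    from dataclasses import dataclass

    @dataclass
    class Operation:
        """One aggregate-level operation in a candidate process plan (illustrative fields)."""
        process_time_h: float       # estimated processing time, hours
        machine_rate: float         # machine plus labour rate, cost per hour
        setup_cost: float           # set-up cost for this operation
        transport_cost: float       # transport cost to the next workstation
        expected_scrap_rate: float  # estimated fraction of parts scrapped

    def plan_cost(operations, material_cost, part_value):
        """Aggregate cost of a candidate plan: processing + set-ups + transport
        + material + an expected quality (scrap) cost."""
        processing = sum(op.process_time_h * op.machine_rate for op in operations)
        setups = sum(op.setup_cost for op in operations)
        transport = sum(op.transport_cost for op in operations)
        quality = sum(op.expected_scrap_rate * part_value for op in operations)
        return processing + setups + transport + material_cost + quality

    # Hypothetical two-operation plan for a machined component.
    plan = [Operation(0.5, 40.0, 25.0, 5.0, 0.02), Operation(0.2, 60.0, 15.0, 5.0, 0.01)]
    print(plan_cost(plan, material_cost=12.0, part_value=150.0))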

    Efficient Algorithms for Prokaryotic Whole Genome Assembly and Finishing

    Get PDF
    De-novo genome assembly from DNA fragments is primarily based on sequence overlap information. In addition, mate-pair reads or paired-end reads provide linking information for joining gaps and bridging repeat regions. Genome assemblers in general assemble long contiguous sequences (contigs) using both overlapping reads and linked reads until the assembly runs into an ambiguous repeat region. These contigs are further bridged into scaffolds using linked-read information. However, errors can be made in both phases of assembly due to a high error threshold for overlap acceptance and linking based on too few mate reads. Identical as well as similar repeat regions can often cause errors in overlap and mate-pair evidence. In addition, the problem of setting the correct threshold to minimize errors and optimize assembly of reads is not trivial and often requires a time-consuming trial-and-error process to obtain optimal results. The typical trial-and-error process with multiple assemblers is computationally intensive and very inefficient, especially when users must learn how to use a wide variety of assemblers, many of which may be serial, require long execution times, and may not return usable or accurate results. Further, we show that the comparison of assembly results may not provide users with a clear winner under all circumstances. Therefore, we propose a novel scaffolding tool, Correlative Algorithm for Repeat Placement (CARP), capable of joining short, low-error contigs using mate-pair reads, computationally resolved repeat structures, and synteny with one or more reference organisms. The CARP tool requires a set of repeat sequences, such as insertion sequences (IS), that can be found computationally without assembling the genome. Development of methods to identify such repeating regions directly from raw sequence reads or draft genomes led to the development of the ISQuest software package. ISQuest identifies bacterial ISs and their sequence elements (inverted and direct repeats) in raw read data or contigs using flexible search parameters. ISQuest is capable of finding ISs in hundreds of partially assembled genomes within hours, making it a valuable high-throughput tool for a global search of IS and repeat elements. The CARP tool matches very low-error contigs with strong overlap using the ambiguous partial repeat sequences at contig ends, annotated using the repeat sequences discovered by ISQuest. These matches are verified by synteny with the genomes of one or more reference organisms. We show that the CARP tool can be used to verify regions with low mate-pair evidence, independently find new joins, and significantly reduce the number of scaffolds. Finally, we demonstrate a novel viewer that presents to the user the computationally derived joins along with the evidence used to make them. The viewer allows the user to independently assess their confidence in the joins made by the finishing tools and make an informed decision about whether to invest the resources necessary to confirm a particular portion of the assembly. Further, we allow users to manually record join evidence, re-order contigs, and track the assembly finishing process
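
    To make the repeat-anchored joining idea concrete, the toy Python sketch below proposes a join wherever the partial repeat annotated at the 3' end of one contig matches the repeat family at the 5' end of another. The data layout and the propose_joins helper are hypothetical; real CARP joins are additionally checked against mate-pair evidence and synteny with reference genomes.

    from collections import defaultdict

    def propose_joins(contig_end_repeats):
        """Propose contig joins anchored on shared repeat (e.g. IS) families.

        `contig_end_repeats` maps contig name -> (repeat family at the 5' end,
        repeat family at the 3' end); None means no repeat annotation."""
        starts = defaultdict(list)
        for contig, (five_prime, _) in contig_end_repeats.items():
            if five_prime is not None:
                starts[five_prime].append(contig)
        joins = []
        for contig, (_, three_prime) in contig_end_repeats.items():
            for other in starts.get(three_prime, []):
                if other != contig:
                    # Candidate join; verification against synteny and mate pairs
                    # would happen here in a real finishing pipeline.
                    joins.append((contig, other, three_prime))
        return joins

    # Hypothetical example: contigA ends in an IS3 fragment, contigB begins with one.
    print(propose_joins({"contigA": (None, "IS3"), "contigB": ("IS3", None)}))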

    Powers and Behaviors of Directed Self-assembly

    Get PDF
    In nature there are a variety of self-assembling systems occurring at varying scales which give rise to incredibly complex behaviors. Theoretical models of self-assembly allow us to gain insight into the fundamental nature of self-assembly independent of the specific physical implementation. In Winfree's abstract tile assembly model (aTAM), the atomic components are unit square tiles which have glues on their four sides. Beginning from a seed assembly, these tiles attach one at a time during the assembly process in an asynchronous and nondeterministic manner. We can gain valuable insights into the nature of self-assembly by comparing different models of self-assembly which use fundamentally different mechanisms for local interactions. A powerful notion which allows us to compare models of self-assembly is simulation. The first result of this thesis examines the role of non-determinism in simulation. It shows that the universal simulation of directed aTAM systems requires undirectedness. A tile assembly model is said to be directed if it always assembles the same final assembly. We distinguish between two types of aTAM systems: cooperative systems and non-cooperative systems. In cooperative aTAM systems, we are able to enforce that, in order for a tile to attach to an assembly, the glues of the tile must match two or more glues of neighboring tiles. On the other hand, in non-cooperative aTAM systems, tiles are able to attach to an assembly provided that one of the tile's glues matches an exposed glue on the assembly. It is well known that the cooperative aTAM is computationally universal, and it is conjectured that the non-cooperative aTAM is not computationally universal. For our second result, we show that if we allow tiles to be polygons with six or more sides, then the class of non-cooperative systems is capable of universal computation. On the other hand, we show that the class of systems consisting of polygons with six or fewer sides is not capable of computing using any of the currently known methods
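
    The cooperative/non-cooperative distinction comes down to the aTAM attachment rule: a tile may attach at a position where the summed strength of its matching glues meets the system's temperature (2 for cooperative binding, 1 for non-cooperative binding). The Python sketch below illustrates that rule under an assumed data layout; it is not code from the thesis.

    # Lattice directions, their opposites, and unit offsets in 2D.
    OPPOSITE = {"N": "S", "S": "N", "E": "W", "W": "E"}
    OFFSET = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

    def can_attach(tile_glues, position, assembly, temperature):
        """aTAM attachment rule: the summed strength of glues matching occupied
        neighbours must be at least `temperature`.

        `tile_glues` maps direction -> (label, strength); `assembly` maps a lattice
        position -> the glue dictionary of the tile placed there."""
        x, y = position
        total = 0
        for direction, (dx, dy) in OFFSET.items():
            neighbour = assembly.get((x + dx, y + dy))
            if neighbour is None:
                continue
            label, strength = tile_glues.get(direction, (None, 0))
            n_label, n_strength = neighbour.get(OPPOSITE[direction], (None, 0))
            # Two glues bind only if both their labels and strengths match.
            if label is not None and label == n_label and strength == n_strength:
                total += strength
        return total >= temperature

    # A tile with strength-1 glues on its S and W sides needs both neighbours
    # at temperature 2 (cooperative), but only one at temperature 1.
    assembly = {(1, 0): {"N": ("a", 1)}, (0, 1): {"E": ("b", 1)}}
    tile = {"S": ("a", 1), "W": ("b", 1)}
    print(can_attach(tile, (1, 1), assembly, temperature=2))  # True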

    Shear-induced rigidity of frictional particles: Analysis of emergent order in stress space

    Full text link
    Solids are distinguished from fluids by their ability to resist shear. In traditional solids, the resistance to shear is associated with the emergence of broken translational symmetry as exhibited by a non-uniform density pattern, which results from either minimizing the energy cost or maximizing the entropy or both. In this work, we focus on a class of systems where this paradigm is challenged. We show that shear-driven jamming in dry granular materials is a collective process controlled solely by the constraints of mechanical equilibrium. We argue that these constraints lead to a broken translational symmetry in a dual space that encodes the statistics of contact forces and the topology of the contact network. The shear-jamming transition is marked by the appearance of this broken symmetry. We extend our earlier work by comparing and contrasting real-space measures of rheology with those obtained from the dual space. We investigate the structure and behavior of the dual space as the system evolves through the rigidity transition in two different shear protocols. We analyze the robustness of the shear-jamming scenario with respect to protocol and packing fraction, and demonstrate that it is possible to define a protocol-independent order parameter in this dual space, which signals the onset of rigidity. Comment: 14 pages, 17 figures

    Structured layout design

    Get PDF

    A draft human pangenome reference

    Get PDF
    Here the Human Pangenome Reference Consortium presents a first draft of the human pangenome reference. The pangenome contains 47 phased, diploid assemblies from a cohort of genetically diverse individuals

    Optimal self-assembly of finite shapes at temperature 1 in 3D

    Full text link
    Working in a three-dimensional variant of Winfree's abstract Tile Assembly Model, we show that, for an arbitrary finite, connected shape X ⊂ ℤ², there is a tile set that uniquely self-assembles into a 3D representation of a scaled-up version of X at temperature 1 in 3D with optimal program-size complexity (the "program-size complexity", also known as "tile complexity", of a shape is the minimum number of tile types required to uniquely self-assemble it). Moreover, our construction is "just barely" 3D in the sense that it only places tiles in the z = 0 and z = 1 planes. Our result is essentially a just-barely-3D temperature-1 simulation of a similar 2D temperature-2 result by Soloveichik and Winfree (SICOMP 2007)
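
    For reference, the "scaled-up version" of a shape replaces each point of X with a c x c block of points; the short Python sketch below (an illustration, not part of the paper) spells out that notion of scaling.

    def scale_shape(shape, c):
        """Return the c-scaling of a finite shape X in Z^2: each point (x, y) is
        replaced by the block {(c*x + i, c*y + j) : 0 <= i, j < c}."""
        return {(c * x + i, c * y + j)
                for (x, y) in shape
                for i in range(c)
                for j in range(c)}

    # A two-point shape scaled by factor 3 becomes two 3x3 blocks (18 points).
    print(len(scale_shape({(0, 0), (1, 0)}, 3)))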