
    Logic Synthesis for Established and Emerging Computing

    Logic synthesis is an enabling technology for realizing integrated computing systems, and it entails solving computationally intractable problems through a plurality of heuristic techniques. A recent push toward further formalization of synthesis problems has proven very useful both for attempting to solve some logic problems exactly--which is computationally feasible today for instances of limited size--and for creating new and more powerful heuristics based on problem decomposition. Moreover, technological advances including nanodevices, optical computing, and quantum and quantum cellular computing require new and specific synthesis flows to assess feasibility and scalability. This review highlights recent progress in logic synthesis and optimization, describing models, data structures, and algorithms, with specific emphasis on both design quality and emerging technologies. Example applications and results of novel techniques applied to established and emerging technologies are reported.

    Bridging from single to collective cell migration: A review of models and links to experiments

    Mathematical and computational models can assist in gaining an understanding of cell behavior at many levels of organization. Here, we review models in the literature that focus on eukaryotic cell motility at three size scales: intracellular signaling that regulates cell shape and movement, single-cell motility, and collective cell behavior from a few cells to tissues. We survey recent literature to summarize distinct computational methods (phase-field, polygonal, Cellular Potts, and spherical-cell models). We discuss models that bridge between levels of organization and describe the levels of detail, both biochemical and geometric, included in the models. We also highlight links between models and experiments. We find that models spanning all three levels are still in the minority.
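As a rough illustration of one of the methods named above, the following sketch evaluates a minimal Cellular Potts energy: an adhesion term over mismatched neighboring lattice sites plus a quadratic area constraint, with a Metropolis update. All parameter values and the two-term energy form are assumptions chosen for illustration, not taken from the review.

```python
# Minimal Cellular Potts sketch (illustrative; J, LAM, A_TARGET are
# assumed values, not from the review): adhesion + area-constraint energy
# on a small grid, with a Metropolis spin-copy step.

import math
import random

J = 1.0        # adhesion energy per mismatched neighbor pair (assumed)
LAM = 0.5      # strength of the area constraint (assumed)
A_TARGET = 4   # target area of cell id 1 (assumed)

def energy(grid: list[list[int]]) -> float:
    """H = J * (# neighboring site pairs with different cell ids)
           + LAM * (area(cell 1) - A_TARGET)^2"""
    n = len(grid)
    boundary = 0
    for i in range(n):
        for j in range(n):
            if i + 1 < n and grid[i][j] != grid[i + 1][j]:
                boundary += 1
            if j + 1 < n and grid[i][j] != grid[i][j + 1]:
                boundary += 1
    area = sum(row.count(1) for row in grid)
    return J * boundary + LAM * (area - A_TARGET) ** 2

def metropolis_step(grid, temperature=1.0):
    """Copy a random neighbor's cell id into a random site, accepting
    moves that lower H (or raise it with Boltzmann probability)."""
    n = len(grid)
    i, j = random.randrange(n), random.randrange(n)
    di, dj = random.choice([(0, 1), (1, 0), (0, -1), (-1, 0)])
    ni, nj = (i + di) % n, (j + dj) % n
    old, new = grid[i][j], grid[ni][nj]
    if old == new:
        return
    e0 = energy(grid)
    grid[i][j] = new
    e1 = energy(grid)
    if e1 > e0 and random.random() >= math.exp((e0 - e1) / temperature):
        grid[i][j] = old  # reject the move

# A 2x2 blob of cell 1 (area 4) in medium (id 0) carries no area penalty.
g = [[0] * 4 for _ in range(4)]
for i in (1, 2):
    for j in (1, 2):
        g[i][j] = 1
print(energy(g))
```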

    Algorithms for Power Aware Testing of Nanometer Digital ICs

    At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits has become mandatory to catch small delay defects. Due to the continuous shrinking of complementary metal oxide semiconductor (CMOS) transistor feature size, power density grows geometrically with technology scaling. Additionally, power dissipation inside a digital circuit during the testing phase (for test vectors under all fault models (Potluri, 2015)) is several times higher than its power dissipation during the normal functional phase of operation. As a result, the currents that flow in the power grid during the testing phase are much higher than what the power grid is designed for (the functional phase of operation), and during at-speed testing the supply grid experiences unacceptable IR-drop, ultimately leading to delay failures. Since these failures are specific to testing and do not occur during the functional phase of operation of the chip, they are usually referred to as false failures, and they reduce the yield of the chip, which is undesirable. In the nanometer regime, process parameter variation has become a major problem. Due to the variation in signaling delays caused by these variations, it is important to perform at-speed testing even for stuck faults, to reduce test escapes (McCluskey and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak power dissipation causing false failures, addressed previously in the context of at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also becomes prominent in the context of at-speed testing of stuck faults (Maxwell et al., 1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012; Potluri, 2015; Potluri et al., 2015).
It is well known that excessive supply IR-drop during at-speed testing can be kept under control by minimizing switching activity during testing (Saxena et al., 2003). There is a rich collection of techniques proposed in the past for reducing peak switching activity during at-speed testing of transition/delay faults in both combinational and sequential circuits. As far as at-speed testing of stuck faults is concerned, while some techniques were proposed in the past for combinational circuits (Girard et al., 1998; Dabholkar et al., 1998), there are no techniques addressing the same for sequential circuits. This thesis addresses this open problem. We propose algorithms for minimizing peak switching activity during at-speed testing of stuck faults in sequential digital circuits under the combinational state preservation scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that, under this CSP-scan architecture, when the test set is completely specified, the peak switching activity during testing can be minimized by solving the Bottleneck Traveling Salesman Problem (BTSP). This mapping of the peak test switching activity minimization problem to the BTSP is novel and proposed for the first time in the literature. Usually, as circuit size increases, the percentage of don't cares in the test set increases. As a result, test vector ordering for an arbitrary filling of don't-care bits is insufficient to produce an effective reduction in switching activity during testing of large circuits. Since don't cares dominate the test sets for larger circuits, don't-care filling plays a crucial role in reducing switching activity during testing.
Taking this into consideration, we propose an algorithm, XStat, which is capable of performing test vector ordering while preserving don't-care bits in the test vectors, after which the don't cares are filled in an intelligent fashion to minimize input switching activity, which effectively minimizes switching activity inside the circuit (Girard et al., 1998). Through empirical validation on benchmark circuits, we show that XStat significantly minimizes peak switching activity during testing. Although XStat is a very powerful heuristic for minimizing peak input-switching activity, it does not guarantee optimality. To address this issue, we propose an algorithm that uses dynamic programming to calculate a lower bound for a given sequence of test vectors, and subsequently uses a greedy strategy for filling don't cares in this sequence to achieve this lower bound, thereby guaranteeing optimality. This algorithm, which we refer to as DP-fill in this thesis, provides the globally optimal solution for minimizing peak input-switching activity and is the best known in the literature for minimizing peak input-switching activity during testing. A proof of the optimality of DP-fill in minimizing peak input-switching activity is also provided in this thesis.
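The objective these algorithms target can be made concrete with a small sketch. This is illustrative only; it is not the thesis's BTSP formulation, XStat, or DP-fill, and the example vectors are hypothetical. It models peak input-switching activity as the largest Hamming distance between consecutive test vectors, and uses a greedy nearest-neighbor ordering as a cheap stand-in for the bottleneck objective.

```python
# Illustrative sketch (not the thesis algorithms): peak input-switching
# activity of an ordered test set, modeled as the maximum Hamming distance
# between consecutive vectors, plus a greedy bottleneck-style reordering.

def hamming(a: str, b: str) -> int:
    """Number of bit positions where two equal-length vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def peak_switching(order: list[str]) -> int:
    """Peak activity = worst-case transition between consecutive vectors."""
    return max(hamming(a, b) for a, b in zip(order, order[1:]))

def greedy_order(vectors: list[str]) -> list[str]:
    """Greedy nearest-neighbor ordering: a cheap heuristic for the
    bottleneck objective (minimize the largest transition used)."""
    remaining = list(vectors)
    order = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(order[-1], v))
        remaining.remove(nxt)
        order.append(nxt)
    return order

tests = ["0000", "1111", "0011", "1100", "0001"]
print(peak_switching(tests))                # peak of the given order
print(peak_switching(greedy_order(tests)))  # typically lower after reordering
```

In the thesis's setting, the completely specified case is solved exactly by posing exactly this kind of ordering problem, over such transition costs, as a Bottleneck Traveling Salesman Problem.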

    Functional Genomics of Streptomics


    Serial-data computation in VLSI


    Holistic biomimicry: a biologically inspired approach to environmentally benign engineering

    Humanity's activities increasingly threaten Earth's richness of life, of which mankind is a part. As part of the response, the environmentally conscious attempt to engineer products, processes and systems that interact harmoniously with the living world. Current environmental design guidance draws upon a wealth of experiences with the products of engineering that damaged humanity's environment. Efforts to create such guidelines inductively attempt to tease right action from examination of past mistakes. Unfortunately, avoidance of past errors cannot guarantee environmentally sustainable designs in the future. One needs to examine and understand an example of an environmentally sustainable, complex, multi-scale system to engineer designs with similar characteristics. This dissertation benchmarks and evaluates the efficacy of guidance from one such environmentally sustainable system resting at humanity's doorstep - the biosphere. Taking a holistic view of biomimicry, emulation of and inspiration by life, this work extracts overarching principles of life from academic life science literature using a sociological technique known as constant comparative method. It translates these principles into bio-inspired sustainable engineering guidelines. During this process, it identifies physically rooted measures and metrics that link guidelines to engineering applications. Qualitative validation for principles and guidelines takes the form of review by biology experts and comparison with existing environmentally benign design and manufacturing guidelines. Three select bio-inspired guidelines at three different organizational scales of engineering interest are quantitatively validated. Physical experiments with self-cleaning surfaces quantify the potential environmental benefits generated by applying the first, sub-product scale guideline. 
An interpretation of a metabolically rooted guideline applied at the product/organism organizational scale is shown to correlate with existing environmental metrics and predict a sustainability threshold. Finally, design of a carpet recycling network illustrates the quantitative environmental benefits one reaps by applying the third, multi-facility scale bio-inspired sustainability guideline. Taken as a whole, this work contributes (1) a set of biologically inspired sustainability principles for engineering, (2) a translation of these principles into measures applicable to design, (3) examples demonstrating a new, holistic form of biomimicry, and (4) a deductive, novel approach to environmentally benign engineering. Life, the collection of processes that tamed and maintained themselves on planet Earth's once hostile surface, long ago confronted and solved the fundamental problems facing all organisms. Through this work, it is hoped that humanity has taken one small step toward self-mastery, thus drawing closer to a solution to the latest problem facing all organisms.

    The Significance of Beta-72 Asparagine Methylation in C-Phycocyanin.

    A novel post-translationally modified asparagine, γ-N-methylasparagine (NMA), has been characterized at the β-72 site in many phycobiliproteins. The behavior of an NMA-containing peptide under the Edman degradation conditions employed for sequence analysis was examined; NMA can be reproducibly identified in protein sequences by careful attention to repetitive yields and the presence of minor peaks. The effects of asparagine methylation on photosynthetic rates in the cyanobacterium Synechococcus PCC 7942 and two methylase-minus mutants were measured by steady-state oxygen evolution in whole cells. The methylase-minus mutants demonstrated lower rates of electron transfer through Photosystem II under conditions in which phycobilisomes were preferentially illuminated at low light intensity. Asparagine methylation is also associated with a selective growth advantage for cells containing the modification: when grown under low-light illumination for 200 generations, cells containing NMA outcompete the unmethylated strain at a rate of 0.38% per generation. Thus a plausible selective advantage for methylation can be demonstrated. Two site-specific mutants of C-phycocyanin from Synechococcus PCC 7002, in which β-72 NMA has been replaced with either aspartate or glutamine, have been extensively characterized. The presence of NMA improves Photosystem II electron transfer under both broadband and specific illumination conditions in whole cells. Replacement of NMA at β-72 of C-phycocyanin causes blue-shifts in the absorbance spectra and a decrease of approximately 15% in fluorescence quantum yields. Fluorescence studies have revealed that NMA is associated with improved energy-transfer efficiency in both C-phycocyanin and isolated phycobilisomes, accomplished by a decrease in non-radiative pathways of deexcitation other than resonance energy transfer.
Molecular dynamics calculations are consistent with these observations and suggest that NMA alters hydrogen-bonding networks and chromophore geometry. The effects of NMA replacement on C-phycocyanin stability have been measured by urea and thermal denaturation. While β-72 substitution does not affect the overall free energy of folding, the replacements appear to alter the denatured state of the protein and quaternary-structure dissociation.

    Tracking Foodborne Pathogens from Farm to Table: Data Needs to Evaluate Control Options

    Food safety policymakers and scientists came together at a conference in January 1995 to evaluate the data available for analyzing the control of foodborne microbial pathogens. This proceedings volume starts with data on human illnesses associated with foodborne pathogens and moves backwards through the food chain to examine pathogen data in the processing sector and at the farm level. Of special concern is the inability to link pathogen data throughout the food chain. Analytical tools to evaluate the impact of changing production and consumption practices on foodborne disease risks and their economic consequences are presented. The available data are examined to see how well they meet current analytical needs to support policy analysis. The policymaker roundtable highlights the tradeoffs involved in funding databases, the economic evaluation of USDA's Hazard Analysis Critical Control Point (HACCP) proposal and other food safety policy issues, and the necessity of a multidisciplinary approach toward improving food safety databases.
    Keywords: food safety, cost benefit analysis, foodborne disease risk, foodborne pathogens, Hazard Analysis Critical Control Point (HACCP), probabilistic scenario analysis, fault-tree analysis, Food Consumption/Nutrition/Food Safety