
    Test set generation and optimisation using evolutionary algorithms and cubical calculus.

    As the complexity of modern-day integrated circuits rises, many of the challenges associated with digital testing grow exponentially. VLSI technology continues to advance at a rapid pace, in accordance with Moore's Law, posing ever more complex, NP-complete problems for the test community. The testing of ICs currently accounts for approximately a third of the overall design costs and, according to the Semiconductor Industry Association, the per-transistor test cost will soon exceed the per-transistor production cost. Given the need to test ICs of ever-increasing complexity and to contain the cost of test, the problems of test pattern generation, testability analysis and test set minimisation continue to provide formidable challenges for the research community. This thesis presents original work in these three areas.

    Firstly, a new method is presented for generating test patterns for multiple-output combinational circuits based on the Boolean difference method and cubical calculus. The Boolean difference method has been largely overlooked in automatic test pattern generation algorithms due to its cumbersome, algebraic nature. It is shown that cubical calculus provides an elegant and economical technique for solving Boolean difference equations. Formal mathematical techniques involving the Boolean difference and cubical calculus are presented, providing a test pattern generation method that dispenses with the need for costly circuit simulations. The methods provide the basis for test generation algorithms that are suitable for computer implementation.

    Secondly, some of the core test pattern generation computations outlined above also provide the basis of a new method for computing testability measures such as controllability and observability. This method is effectively a very economical spin-off of the test pattern generation process using Boolean differences and cubical calculus.

    The third and largest part of this thesis introduces a new test set minimisation algorithm, GA-MITS, based on an evolutionary optimisation algorithm. This novel approach applies a genetic algorithm to find minimal or near-minimal test sets while maintaining a given fault coverage. The algorithm is designed as a postprocessor to minimise test sets that have been previously generated by an ATPG system and is thus considered a static approach to the test set minimisation problem. It is shown empirically that GA-MITS is remarkably successful in minimising test sets generated for the ISCAS-85 benchmark circuits, and hence is potentially capable of reducing the production costs of realistic digital circuits.
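    For intuition, the Boolean difference of a function f with respect to an input x is df/dx = f(x=0) XOR f(x=1), which is 1 exactly where the output is sensitive to x; a test for "x stuck-at-v" must drive x to the opposite of v and satisfy df/dx = 1. The following is a minimal brute-force sketch of that relationship in Python; it is a toy illustration only, since the thesis solves these equations symbolically with cubical calculus rather than by enumeration:

    from itertools import product

    def boolean_difference(f, i):
        """df/dx_i = f(..., x_i=0, ...) XOR f(..., x_i=1, ...):
        1 exactly where the output is sensitive to input i."""
        def d(bits):
            lo, hi = list(bits), list(bits)
            lo[i], hi[i] = 0, 1
            return f(lo) ^ f(hi)
        return d

    def stuck_at_tests(f, n, i, v):
        """All vectors testing 'x_i stuck-at-v': excite the fault by driving
        x_i to the opposite of v, and observe it where df/dx_i = 1."""
        d = boolean_difference(f, i)
        return [bits for bits in product((0, 1), repeat=n)
                if bits[i] != v and d(bits)]

    # Toy circuit f(a, b, c) = (a AND b) OR c; tests for 'a stuck-at-0'
    f = lambda x: (x[0] & x[1]) | x[2]
    print(stuck_at_tests(f, 3, 0, 0))   # -> [(1, 1, 0)]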
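    The abstract does not spell out GA-MITS's encoding, so the sketch below shows only the generic shape of a genetic-algorithm test-set minimiser: chromosomes are bit vectors selecting a subset of the ATPG-generated tests, and the fitness function rewards small subsets that preserve full fault coverage. All names, operators and parameters here are illustrative assumptions, not the published algorithm:

    import random

    def ga_minimise(covers, pop_size=40, gens=200, p_mut=0.02, seed=1):
        """Toy static test-set minimiser: covers[t] is the set of faults
        detected by test t; chromosomes are bit vectors picking a subset."""
        rng = random.Random(seed)
        n = len(covers)
        all_faults = set().union(*covers)

        def fitness(ch):
            detected = set().union(*([covers[t] for t in range(n) if ch[t]] or [set()]))
            missing = len(all_faults - detected)
            return -(sum(ch) + 1000 * missing)   # heavy penalty for lost coverage

        popn = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
        for _ in range(gens):
            popn.sort(key=fitness, reverse=True)
            nxt = [row[:] for row in popn[:2]]              # elitism
            while len(nxt) < pop_size:
                a, b = rng.sample(popn[:pop_size // 2], 2)  # mate the fitter half
                cut = rng.randrange(1, n)                   # one-point crossover
                child = a[:cut] + b[cut:]
                nxt.append([g ^ (rng.random() < p_mut) for g in child])  # bit-flip mutation
            popn = nxt
        best = max(popn, key=fitness)
        return [t for t in range(n) if best[t]]

    # Tests 0 and 2 alone cover every fault; the GA should drop the rest.
    covers = [{0, 1}, {1, 2}, {2, 3, 4}, {0, 4}]
    print(ga_minimise(covers))   # a minimal cover such as [0, 2]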

    Pseudo-functional testing: bridging the gap between manufacturing test and functional operation.

    Yuan, Feng. Thesis (M.Phil.), Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 60-65). Abstract also in Chinese.

    Contents:
    Abstract
    Acknowledgement
    Chapter 1: Introduction
    1.1 Manufacturing Test
    1.1.1 Functional Testing vs. Structural Testing
    1.1.2 Fault Model
    1.1.3 Automatic Test Pattern Generation
    1.1.4 Design for Testability
    1.2 Pseudo-Functional Manufacturing Test
    1.3 Thesis Motivation and Organization
    Chapter 2: On Systematic Illegal State Identification
    2.1 Introduction
    2.2 Preliminaries and Motivation
    2.3 What is the Root Cause of Illegal States?
    2.4 Illegal State Identification Flow
    2.5 Justification Scheme Construction
    2.6 Experimental Results
    2.7 Conclusion
    Chapter 3: Compression-Aware Pseudo-Functional Testing
    3.1 Introduction
    3.2 Motivation
    3.3 Proposed Methodology
    3.4 Pattern Generation in Compression-Aware Pseudo-Functional Testing
    3.4.1 Circuit Pre-Processing
    3.4.2 Pseudo-Functional Random Pattern Generation with Multi-Launch Cycles
    3.4.3 Compressible Test Pattern Generation for Pseudo-Functional Testing
    3.5 Experimental Results
    3.5.1 Experimental Setup
    3.5.2 Results and Discussion
    3.6 Conclusion
    Chapter 4: Conclusion and Future Work
    Bibliography

    A framework for multidimensional indexes on distributed and highly-available data stores

    Spatial Big Data is considered an essential trend in future scientific and business applications. Indeed, research instruments, medical devices, and social networks generate hundreds of petabytes of spatial data per year. However, as many authors have pointed out, the lack of specialized frameworks for this kind of data is limiting possible applications and probably precluding many scientific breakthroughs. In this thesis, we describe three HPC scientific applications, ranging from molecular dynamics and neuroscience analysis to physics simulations, in which we experienced first-hand the limits of the existing technologies. Drawing on this experience, we define the desirable missing functionalities, and we focus on two features that, when combined, significantly improve the way scientific data is analyzed. On one side, scientific simulations generate complex datasets where multiple correlated characteristics describe each item. For instance, a particle might have a space position (x,y,z) at a given time (t). If we want to find all elements within the same area and period, we either have to scan the whole dataset, or we must organize the data so that all items in the same space and time are stored together. The second approach is called Multidimensional Indexing (MI), and it uses different techniques to cluster and organize similar data together. On the other side, approximate analytics has often been indicated as a smart and flexible way to explore large datasets in a short time. Approximate analytics includes a broad family of algorithms that aim to speed up analytical workloads by relaxing the precision of the results within a specific confidence interval. For instance, if we want to know the average age in a group with 1-year precision, we can consider just a random fraction of all the people, thus reducing the amount of calculation. But if we also want fewer I/O operations, we need efficient data sampling, which means organizing the data in such a way that we do not need to scan the whole dataset to generate a random sample of it. According to our analysis, combining Multidimensional Indexing with efficient data Sampling (MIS) is a vital feature missing from current distributed data management solutions. This thesis aims to fill that gap, and it provides novel, scalable solutions. First, we describe the existing data management alternatives and motivate our preference for NoSQL key-value databases. Second, we propose an analytical model to study the influence of data models on the scalability and performance of this kind of distributed database. Third, we use the analytical model to design two novel multidimensional indexes with efficient data sampling: the D8tree and the AOTree. Our first solution, the D8tree, improves on the state of the art for approximate spatial queries on static, mostly-read datasets. Later, we enhanced the data ingestion capability of our approach by introducing the AOTree, an algorithm that retains the query performance of the D8tree even for write-intensive HPC applications. We compared our solution with PostgreSQL and plain storage, and we demonstrated that our proposal has better performance and scalability.
    Finally, we describe Qbeast, the novel distributed system that implements the D8tree and the AOTree using NoSQL technologies, and we illustrate how Qbeast simplifies the workflow of scientists in various HPC applications, providing a scalable and integrated solution for data analysis and management.
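    To make the MIS idea concrete, here is a minimal sketch in Python. It is not the D8tree or the AOTree, whose details are in the thesis; the class name, the coarse grid partitioning and all parameters are our own illustrative assumptions. It combines a multidimensional index, which clusters nearby points into cells, with a per-cell reservoir sample, so a range aggregate can be approximated from the samples and counts instead of scanning every point:

    import random
    from collections import defaultdict

    class SampledGridIndex:
        """Sketch of multidimensional indexing with efficient sampling (MIS):
        points are clustered into coarse grid cells (the index), and each cell
        keeps an exact count plus a bounded reservoir sample, so range
        aggregates can be approximated without scanning every point."""

        def __init__(self, cell=10.0, sample_size=32, seed=0):
            self.cell, self.k = cell, sample_size
            self.rng = random.Random(seed)
            self.cells = defaultdict(lambda: {"n": 0, "sample": []})

        def _key(self, x, y, z):
            return (int(x // self.cell), int(y // self.cell), int(z // self.cell))

        def insert(self, x, y, z, value):
            c = self.cells[self._key(x, y, z)]
            c["n"] += 1
            # Reservoir sampling: every value inserted into this cell ends up
            # in the sample with equal probability k/n.
            if len(c["sample"]) < self.k:
                c["sample"].append(value)
            elif self.rng.random() < self.k / c["n"]:
                c["sample"][self.rng.randrange(self.k)] = value

        def approx_sum(self, lo, hi):
            """Approximate the sum of `value` over the box [lo, hi) using only
            per-cell counts and reservoirs; cells are included whole, which is
            where the approximation error comes from."""
            total = 0.0
            for (i, j, k), c in self.cells.items():
                corner = (i * self.cell, j * self.cell, k * self.cell)
                if all(lo[d] <= corner[d] < hi[d] for d in range(3)) and c["sample"]:
                    total += c["n"] * sum(c["sample"]) / len(c["sample"])
            return total

    # Usage: count points in a box by summing value=1.0, without a full scan.
    idx = SampledGridIndex()
    rnd = random.Random(42)
    for _ in range(100_000):
        x, y, z = (rnd.uniform(0, 100) for _ in range(3))
        idx.insert(x, y, z, value=1.0)
    print(idx.approx_sum((0, 0, 0), (50, 50, 50)))   # expect roughly 12_500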

    A SAT Based Test Generation Method for Delay Fault Testing of Macro Based Circuits


    Assessing sandstone reservoir quality: identifying the reality

    Quartz cementation is one of the most important cements governing reservoir quality in sandstones. The presence of clay coats plays a crucial role in preserving anomalously high porosity in deeply buried sandstones by inhibiting porosity-occluding macroquartz cementation. More robust grain coats with greater coverage are required in higher-temperature reservoirs to preserve significant amounts of porosity. The first and main component of this research entails a series of hydrothermal experiments simulating quartz cementation and grain coat development, particularly of authigenic chlorite, at specific temperature steps, to mimic the conditions of deeply buried reservoirs and to develop predictive models for clay-coat-controlled reservoir quality in such settings. Naturally occurring sandstone samples from the Lower Jurassic Cook Formation of the Oseberg Field (Norway) were exposed to a silica-supersaturated solution for up to 360 hours at temperatures of 0–250 °C. An array of microscopic, analytical and modelling techniques was employed to track the mineralogical alterations, quantify the temperature-dependent volumetric changes of authigenic chlorite and the thickness and coverage of the clay coats, and capture the evolving reservoir quality attributes in 2D and 3D. Results show that grain-coating chlorite forms through a mixture of solid-state transformation and dissolution-precipitation mechanisms: transformation of berthierine, replacement of siderite, and neoformation on precursor-free substrate surfaces. The cessation of porosity loss and the maintenance of permeability correlate directly with the increase in grain coat volume at temperatures above 175 °C. The second component presents a series of hydrothermal experiments performed on laboratory-synthesised, biofilm-rich pure sand samples to test the influence of microbial extracellular polymeric substances (EPS) on early mineral precipitation at temperatures up to 120 °C. An artificial solution was used to synthesise and preserve the microbial communities and to promote mineral genesis during each experimental run. Textural evaluation of the samples shows that EPS-coated surfaces serve as templates for the nucleation of early mineral precipitates. Poorly ordered, morphologically clay-like material developed at points of contact between quartz grains and where EPS structures pre-existed. Future energy and climate change mitigation developments require better characterisation of subsurface reservoirs that can act as energy sources or storage media. This research has important implications for diagenesis studies, providing key insights that can improve the predictability of reservoir quality modelling applied across a wide range of geoscience applications.


    Structure, strength and defect characterisation of cement based materials.

    In cement-based systems, residual stresses are created by internal expansion. These stresses provide toughening, because they are released as a macro-crack propagates. While circumstantial evidence of the residual stresses exists (e.g. micro-crack formation leading to permanent deformation in flexural tests), it is very difficult to observe the mechanism in action, and quantitatively estimating the changes occurring in such cement-based systems is challenging because of the anisotropy and complexity of the material. Non-Destructive Testing (NDT) techniques were used in this research to observe the mechanism in action. An ultrasound technique was used to examine strength development, and an acoustic emission (AE) technique was used to examine micro-structural changes, micro-cracks, crack initiation, crack propagation, crack arrest and crack bridging in plain concrete samples, including samples containing admixtures and waste materials. The NDT techniques were found to measure compressive strength accurately, with good correlation between standard mechanical testing and the NDT techniques. It was shown that admixtures could be used effectively to alter the properties of a curing cement mortar. This work has also demonstrated that ultrasound can be used successfully to determine the compressive strength of concrete from an early age. The ability to predict the strength of concrete through correlation with NDT parameters may reduce the time spent waiting for concrete to set and for results from standard mechanical testing methods. The findings of this research show the effect admixtures had on the curing process of the cement-based material. The introduction of certain additives into mortars increased both the rate of initial hardening and the magnitude of the compressive strength attained over the curing period, depending on the mixture specification. The additives considered were shown to actively alter and enhance the chemical process of curing from the start of hydration. Using the ultrasound technique, some additives that accelerate the curing process (accelerators) were found to lower the compressive strength of the concrete. Additives that increased the final strength of a mortar also increased its toughness. The significant contributions of this research enabled online observation of micro-structural changes and failure behaviour under compressive and flexural loading conditions. The results obtained are encouraging and increase understanding of cracking mechanisms in concrete containing various types of additives and aggregates. The application of the AE technique allowed the failure of interfacial bonding to be observed. The variation of aggregate size and its effect on the monitored waveforms was established, and the parameters of the AE signals are directly related to crack propagation (grain bridging and micro-mechanisms) and to the strength of interfacial bonding. These findings contribute substantially to the understanding of concrete behaviour under complex conditions where no other technique can provide such valuable information online.
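    As an aside on how such an NDT-strength correlation is typically put to use (this is our illustration, not the thesis's procedure, and the calibration numbers below are invented for the sketch): a commonly assumed empirical form relates ultrasonic pulse velocity v to compressive strength as strength = a * exp(b * v), so fitting log(strength) against v reduces to ordinary linear regression:

    import numpy as np

    # Hypothetical calibration data (invented for illustration): ultrasonic
    # pulse velocity (km/s) on mortar cubes vs. crush-test strength (MPa).
    upv = np.array([3.2, 3.6, 3.9, 4.1, 4.3, 4.5])
    strength = np.array([12.0, 18.5, 25.0, 31.0, 38.0, 46.0])

    # Assume the empirical form strength = a * exp(b * v); taking logs turns
    # the fit into ordinary linear regression on (v, log strength).
    b, log_a = np.polyfit(upv, np.log(strength), 1)
    a = np.exp(log_a)

    def predict_strength(v_km_s):
        """Estimate compressive strength (MPa) from pulse velocity (km/s)."""
        return float(a * np.exp(b * v_km_s))

    print(f"strength ~= {a:.2f} * exp({b:.2f} * v)")
    print(f"predicted at 4.0 km/s: {predict_strength(4.0):.1f} MPa")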