121 research outputs found
Testing for delay defects utilizing test data compression techniques
As technology shrinks, new types of defects are being discovered and new fault models are being created for them. The transition delay and path delay fault models are two such models, but they still fall short in that they cannot obtain high test coverage of smaller delay defects; these defects can cause functional failures and also indicate potential reliability issues. The first part of this dissertation addresses these problems by presenting an enhanced timing-based delay fault testing technique that combines standard delay ATPG with timing information gathered from standard static timing analysis.
Delay fault patterns typically increase test data volume by 3-5X compared to stuck-at patterns. Combined with the growth in test data volume that accompanies the increase in gate count as technology is miniaturized, this adds up to a very large increase in test data volume that directly affects test time and thus manufacturing cost. The second part of this dissertation presents a technique for improving test compression and reducing test data volume by using multiple expansion ratios, determining the configuration of the scan chains for each expansion ratio with a dependency analysis procedure that accounts for structural dependencies as well as free-variable dependencies to improve the probability of detecting faults.
Finally, this dissertation addresses the problem of unknown values (X's) in the output response data, which corrupt the data and degrade the performance of the output response compactor and thus the overall amount of test compression. Four techniques are presented that focus on handling response data with large percentages of X's. The first uses an X-canceling MISR architecture based on deterministically observing scan cells, and the second is a hybrid approach that combines a simple X-masking scheme with the X-canceling MISR for further gains in test compression. The third and fourth techniques revolve around reiterative LFSR X-masking, which takes advantage of LFSR-encoded masks that can be reused for multiple scan slices in novel ways.
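A multiple-input signature register (MISR) of the kind underlying the first two techniques folds every scan-out slice into a compact signature; an X in the response would corrupt every later signature bit it reaches, which is what the X-canceling and X-masking schemes address. A minimal plain-MISR sketch (the 4-bit width, feedback taps, and slice values are illustrative, and the X-canceling linear algebra is omitted):

```python
def misr_step(state, slice_bits, taps, n):
    """One MISR cycle: shift, apply the feedback polynomial,
    and XOR in one scan-out slice (one bit per scan chain)."""
    fb = (state >> (n - 1)) & 1           # bit shifted out feeds back
    state = (state << 1) & ((1 << n) - 1)
    if fb:
        state ^= taps                     # feedback polynomial taps
    return state ^ slice_bits             # parallel inputs from the chains

def compact(slices, taps=0b0011, n=4):
    """Fold a stream of scan slices into an n-bit signature."""
    state = 0
    for s in slices:
        state = misr_step(state, s, taps, n)
    return state

good = compact([0b1010, 0b0110, 0b1111])
bad = compact([0b1010, 0b0111, 0b1111])   # one flipped response bit
print(bin(good), good != bad)             # the fault changes the signature
```

Because every response bit influences the final signature, a single faulty (or unknown) bit alters it, which is why X-handling is essential before compaction.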
A Hardware Security Solution against Scan-Based Attacks
Scan-based Design for Test (DfT) schemes have been widely used to achieve high fault coverage for integrated circuits. The scan technique provides full access to the internal nodes of the device under test, allowing them to be controlled and their responses to input test vectors observed. While such comprehensive access is highly desirable for testing, it is not acceptable for secure chips, as it is open to exploitation by various attacks. In this work, new methods are presented to protect critical information against scan-based attacks. In the proposed methods, access via the scan chain to the circuitry containing secret information is severely limited in order to reduce the risk of a security breach. To preserve the testability of the circuit, a built-in self-test that utilizes an LFSR as the test pattern generator (TPG) is proposed. The proposed schemes can serve as a countermeasure against side-channel attacks with low area overhead compared to existing solutions in the literature.
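An LFSR-based test pattern generator of the kind the proposed BIST scheme relies on can be sketched as a Galois LFSR whose successive states serve as pseudo-random patterns. The 4-bit width and tap mask below are illustrative only; a real TPG would be much wider:

```python
def lfsr_patterns(seed, taps, count):
    """Galois LFSR used as a test pattern generator (TPG):
    each clocked state is emitted as one pseudo-random pattern."""
    state, patterns = seed, []
    for _ in range(count):
        patterns.append(state)
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps                 # tap mask of the feedback polynomial
    return patterns

# Illustrative 4-bit maximal-length LFSR (tap mask 0b1100)
pats = lfsr_patterns(seed=0b1000, taps=0b1100, count=15)
print(sorted(pats))                       # visits every non-zero 4-bit state once
```

A maximal-length polynomial makes the generator cycle through all 2^n - 1 non-zero states, giving broad pattern coverage without exposing the scan chain to an attacker.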
Integrated mixed-mode fracture model for the design of 2D fibre-reinforced concrete structures
Doctoral thesis in Civil Engineering. The research work presented in this thesis aims to contribute to the field of numerical simulation and analysis of fibre reinforced concrete (FRC) structures by developing and implementing numerical tools in FEMIX, a general-purpose finite element software. Initially, the multi-fixed smeared crack model approach already available in FEMIX is used in combination with the currently available constitutive models to conduct a set of numerical case studies of FRC elements failing in bending and shear, analysing the influence of the fracture mode I/II parameters of the constitutive models. Subsequently, a generalised approach to compute the crack band width (CBW) is presented and its implementation in the FEMIX code is detailed. The extension of this approach to the integration point (IP) level was also carried out, and its application made available for plane stress, shell and solid finite elements. The approach is analysed by means of a numerical case study in which distinct mesh configurations are used to investigate the impact of the CBW on the results.
Subsequently, an approach for deriving the fracture mode I parameters of FRC from experimental results of the three-point notched beam bending test (3PNBBT) and of round panel tests supported on three points (RPT-3PS) is proposed and implemented in the C programming language. The developed inverse analysis (IA) approach is based on a nonlinear least squares algorithm coupled with an automatic parameter updating procedure in which the optimised variables are modified according to the deviation between the numerical and experimental responses. The numerical response is simulated by means of analytical models, eliminating the need for a finite element (FE) model and significantly reducing the computational time. The developed methodology provides very accurate predictions of the experimental responses for both 3PNBBT and RPT-3PS results. Furthermore, the automatic updating of the input parameters ensures that the final results are practically insensitive to the user's initial guess of the variables. The developed tool is then used to derive the fracture mode I parameters of a real-scale FRC beam, and the results are discussed.
Finally, the development and implementation of a two-dimensional mixed-mode fracture smeared crack model (MMFSCM) in the FEMIX software is detailed. The model is based on the combination of the aggregate interlock and fibre pullout resisting mechanisms, using the rough crack model (RCM) and the contact density model (CDM) for the aggregate interlock component, and the Pfyl model, the simplified diverse embedment model (SDEM) and the universal variable engagement model (UVEM) for simulating the fibre pullout contribution. The model is appraised by means of numerical case studies addressing both mode I and mode II fracture-dominant mechanisms. Furthermore, modifications to the original fibre pullout model formulations are proposed in order to enhance the predictive performance of the MMFSCM. The predictive performance of the MMFSCM is analysed and discussed by comparing
numerical and experimental results gathered from the literature.
Fundação para Ciência e a Tecnologia (FCT) through the grants PD/BD/135174/2017 and COVID/BD/151997/2021
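The inverse-analysis loop described in the abstract, a nonlinear least squares algorithm that repeatedly updates the optimised variables from the deviation between the analytical and experimental responses, can be sketched in miniature. The exponential softening law, parameter names, and data below are illustrative stand-ins, not the thesis's actual models:

```python
import math

def softening(w, ft, wc):
    """Illustrative mode-I softening law: bridging stress vs crack opening."""
    return ft * math.exp(-w / wc)

def inverse_analysis(ws, sigmas, ft, wc, iters=60):
    """Gauss-Newton nonlinear least squares on two parameters (ft, wc):
    the variables are updated from the deviation between the analytical
    response and the 'experimental' data."""
    for _ in range(iters):
        # accumulate the 2x2 normal equations (J^T J) dp = J^T r
        a11 = a12 = a22 = b1 = b2 = 0.0
        for w, s in zip(ws, sigmas):
            e = math.exp(-w / wc)
            r = ft * e - s                  # residual at this opening
            j1 = e                          # d r / d ft
            j2 = ft * w * e / wc**2         # d r / d wc
            a11 += j1 * j1; a12 += j1 * j2; a22 += j2 * j2
            b1 += j1 * r;   b2 += j2 * r
        det = a11 * a22 - a12 * a12
        ft -= (a22 * b1 - a12 * b2) / det   # Cramer's rule for the 2x2 solve
        wc -= (a11 * b2 - a12 * b1) / det
    return ft, wc

ws = [0.01 * i for i in range(1, 51)]            # crack openings (mm)
sigmas = [softening(w, 3.0, 0.15) for w in ws]   # synthetic "experimental" curve
ft, wc = inverse_analysis(ws, sigmas, ft=2.0, wc=0.2)
print(round(ft, 4), round(wc, 4))
```

Because the forward model is a cheap analytical expression rather than an FE analysis, each optimisation step costs microseconds, which is the efficiency argument the thesis makes.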
Measurement and Prediction of Fundamental Tensile Failure Limits of Hot Mix Asphalt (HMA)
The purpose of this research program is to provide insight into the key mechanisms and asphalt mixture properties that control fracture in asphalt materials.
A Digital Image Correlation (DIC) system (a non-contact, full-field, surface displacement/strain measurement technique) was developed specifically to accurately capture localized or non-uniform stress distributions in asphalt mixtures and to serve as a tool for detecting first fracture.
The experimental analysis of asphalt mixture cracking behavior was based on the "HMA Fracture Mechanics" visco-elastic crack growth law (Zhang et al. 2001; Roque et al. 2002). Investigation of the asphalt cracking mechanism and identification of fundamental tensile failure limits were achieved by performing multiple laboratory test configurations, namely the Superpave Indirect Tensile Test (IDT), the Semi-Circular Bending Test (SCB) and the Three Point Bending Beam Test (3PB). Both unmodified and polymer modified mixtures were tested.
The results presented herein illustrate that the DIC method can be used to reliably determine asphalt tensile failure limits at first fracture. These failure limits were found to be independent of specimen geometry and test configuration. Importantly, the tensile failure limits were also shown to be sensitive to both the presence and the level of polymer modification.
It is also shown that first fracture and crack growth in asphalt mixtures can be predicted effectively using a Displacement Discontinuity (DD) boundary element method for a variety of boundary condition problems, not just for the calibrated laboratory test conditions.
The numerical simulations and DIC results presented show that significant damage, stress redistribution and other changes following initial fracture make analysis at peak load difficult to interpret meaningfully. The effect of polymer modification on crack localization was also investigated using horizontal full-field strain maps obtained from the DIC. It was found that polymer modified mixtures exhibit high strains only up to the location of impending fracture, while unmodified mixtures exhibit highly distributed damage both in the critical area and around the point of fracture.
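The subset-tracking idea at the heart of DIC, locating a reference speckle subset in the deformed image by maximising cross-correlation, can be sketched at integer-pixel resolution. The synthetic speckle image and window sizes below are illustrative; practical DIC adds sub-pixel interpolation and strain computation:

```python
import random

def zncc(sub, cand):
    """Zero-normalised cross-correlation of two equal-size pixel lists."""
    n = len(sub)
    ms, mc = sum(sub) / n, sum(cand) / n
    num = sum((a - ms) * (b - mc) for a, b in zip(sub, cand))
    den = (sum((a - ms) ** 2 for a in sub) *
           sum((b - mc) ** 2 for b in cand)) ** 0.5
    return num / den

def subset(img, top, left, size):
    """Flatten a size x size subset of a 2D image."""
    return [img[i][j] for i in range(top, top + size)
                      for j in range(left, left + size)]

def dic_displacement(ref, cur, top, left, size, search):
    """Integer-pixel DIC: slide the reference subset over a search
    window in the deformed image and keep the best-correlated offset."""
    sub = subset(ref, top, left, size)
    best, best_uv = -2.0, (0, 0)
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            score = zncc(sub, subset(cur, top + du, left + dv, size))
            if score > best:
                best, best_uv = score, (du, dv)
    return best_uv

random.seed(0)
N = 64
ref = [[random.random() for _ in range(N)] for _ in range(N)]    # synthetic speckle
# deformed image: rigid shift of 3 px down and 2 px left (wrap-around at edges)
cur = [[ref[(i - 3) % N][(j + 2) % N] for j in range(N)] for i in range(N)]
print(dic_displacement(ref, cur, top=20, left=20, size=16, search=5))  # → (3, -2)
```

Repeating this for a grid of subsets yields the full-field displacement maps from which the strain maps discussed above are differentiated.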
Modelling student errors in physics problem-solving
The motivation for this work has been the development of knowledge about the behaviour of human problem-solvers that would enable an intelligent machine tutor to be designed. In the domain of Newtonian Mechanics, this breaks down into two necessary sub-tasks: how do people decide which equation to generate, and what do they produce when they do try to generate an equation? Although these are psychologically separate questions, an automatic tutor for the domain would need to make use of both kinds of knowledge.
Therefore, strategies for controlling search in physics problem-solving are investigated, and a computational model of erroneous solutions is described. Experimental data are used to evaluate the model. Errors in the domain are classified, and the behaviour of problem-solvers is predicted under certain circumstances.
Prediction of novice errors is a crucial ability for an intelligent tutorial system, and the error analysis implemented in the NEWT program is the main contribution of this thesis.
The investigation has two principal aims:
(1) To develop a model that allows a student's future behaviour to be predicted from an analysis of his past actions. It is argued that this is a necessary prerequisite for the construction of an intelligent tutorial system.
(2) To identify the psychological mechanisms used by problem-solvers working in the domain.
The thesis attempts to achieve these aims in two main ways:
(1) A computer program called NEWT has been constructed, which solves problems of Newtonian Mechanics correctly, or in one of a number of erroneous ways. This allows human errors to be matched, classified, and in some cases predicted.
(2) An analysis of published data leads to the formulation of a control strategy termed "planstacking". This is compared to alternative control strategies, and shown to explain existing data more adequately.
The program is evaluated both as a psychological theory and as a proposed student model for use in a computer-based tutorial system. The NEWT program was developed from the MECHO program written by Bundy, Byrd, Luger, Mellish and Palmer (1979) at the Department of Artificial Intelligence, Edinburgh University. This program was adapted to produce erroneous problem solutions by the inclusion of procedures implementing malrules observed in the domain.
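The malrule idea, perturbing a correct solution rule so the program reproduces and thereby diagnoses a student's error, can be illustrated with a toy example. The rules and malrules below are invented for illustration and are not NEWT's actual malrules or implementation:

```python
def newton_second_law(mass, acceleration):
    """Correct rule: F = m * a."""
    return mass * acceleration

# Hypothetical malrules: systematic perturbations of the correct rule
MALRULES = {
    "omit_mass": lambda m, a: a,         # student drops the mass term
    "add_instead": lambda m, a: m + a,   # student adds instead of multiplying
}

def diagnose(mass, acceleration, student_answer):
    """Return the malrule (if any) that reproduces the student's answer,
    mirroring how matching erroneous solutions classifies errors."""
    for name, rule in MALRULES.items():
        if rule(mass, acceleration) == student_answer:
            return name
    if student_answer == newton_second_law(mass, acceleration):
        return "correct"
    return "unknown"

print(diagnose(2.0, 3.0, 5.0))  # → add_instead
```

A tutor that can name the malrule behind an answer can then predict the student's behaviour on future problems, which is the predictive ability the thesis argues for.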
Proceedings of FORM 2022. Construction The Formation of Living Environment
This study examines the integration of building information modelling (BIM) technologies at the operation and maintenance stage of a real-estate management system, which helps to reduce transaction costs. The approach and method are based on Digital Twin technology and the Model-Based Systems Engineering (MBSE) approach.
The results of the development of a service for digital facility management and digital expertise are presented, and the connection between physical and digital objects is conceptualized.
Use of Large Lysimeters to Monitor Unsaturated Hydraulic Properties of Amended Soils
The design and construction of large-diameter lysimeters has been implemented to monitor the soil water retention behaviour and permeability characteristics of contaminated soils under remediation. The work was carried out as part of a larger project focussing on the sustainable remediation of low-value brownfield land. Three lysimeters have been filled with lead-contaminated soil: one control; one with a water treatment residual (WTR) amendment; and one with a WTR and compost amendment. A new software system was built to control the time domain reflectometry (TDR) point water content measurement and irrigation system; it could log data to an online unified data repository, provided an interface for connectivity to any serial port device, handled templating for simplified setup, and gave real-time feedback to the end user. High-capacity tensiometers were used in conjunction with the TDR point water content measurement system to read volumetric water contents and suctions in the large control lysimeter over a series of wetting and drying cycles, each lasting several months. The results demonstrate that there was a difference between small-scale laboratory tests and the data obtained from the lysimeters, particularly in the near-surface soil due to cracking. Where cracking was not present the agreement was stronger, but differences suggested that the drying curves in the lysimeters predominantly represented scanning behaviour, whereas the element tests were likely more representative of primary drying behaviour.
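The soil water retention behaviour monitored in the lysimeters is commonly parameterised with the van Genuchten model, which links matric suction to volumetric water content. A minimal sketch (the parameter values are illustrative for a generic silty soil, not fitted to the study's materials):

```python
def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """Van Genuchten retention curve: volumetric water content as a
    function of matric suction psi (kPa), with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

# Illustrative parameters (not from the lysimeter study)
for psi in (0.0, 10.0, 100.0, 1000.0):
    theta = van_genuchten(psi, theta_r=0.05, theta_s=0.45, alpha=0.05, n=1.6)
    print(psi, round(theta, 3))           # water content falls as suction rises
```

Fitting such a curve separately to the element tests and to the paired tensiometer/TDR readings from the lysimeter is one way to quantify the laboratory-versus-field discrepancy the abstract reports.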
International Colloquium on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering: 20 to 22 July 2015, Bauhaus-Universität Weimar
The 20th International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering will be held at the Bauhaus-Universität Weimar from 20 to 22 July 2015. Architects, computer scientists, mathematicians, and engineers from all over the world will meet in Weimar for an interdisciplinary exchange of experiences, to report on their results in research, development and practice, and to discuss them. The conference covers a broad range of research areas: numerical analysis, function-theoretic methods, partial differential equations, continuum mechanics, engineering applications, coupled problems, computer science, and related topics. Several plenary lectures in the aforementioned areas will take place during the conference.
We invite architects, engineers, designers, computer scientists, mathematicians, planners, project managers, and software developers from business, science and research to participate in the conference.
Discrete Wavelet Transforms
Discrete wavelet transform (DWT) algorithms have a firm position in signal processing across several areas of research and industry. As the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is increasingly used to solve more and more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g. lifting, shift invariance, multi-scale analysis) for constructing DWTs. The chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms; applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach to edge detection, low-bit-rate image compression, low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters consist of both tutorial and highly advanced material; the book is therefore intended as a reference text for graduate students and researchers seeking state-of-the-art knowledge on specific applications.
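The octave-scale decomposition the book builds on can be illustrated with a single level of the Haar DWT, the simplest wavelet (a generic sketch, not taken from any chapter):

```python
import math

def haar_dwt(signal):
    """One Haar DWT level: pairwise averages (approximation) and
    differences (detail), scaled by 1/sqrt(2) to preserve energy."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    """Invert one Haar level: interleave (a + d) and (a - d), rescaled."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_dwt(x)
print([round(v, 6) for v in haar_idwt(a, d)])  # reconstructs x
```

Recursing on the approximation coefficients yields the multi-level, octave-band analysis used throughout the applications the book surveys.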