82 research outputs found
A low-speed BIST framework for high-performance circuit testing
Testing of high-performance integrated circuits is becoming an increasingly challenging task owing to high clock frequencies. Testers are often unable to test such devices because of their limited high-frequency capabilities. In this article we outline a design-for-test methodology that allows high-performance devices to be tested on relatively low-performance testers. In addition, a BIST framework based on this methodology is discussed. Various implementation aspects of this technique are also addressed.
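The abstract does not spell out the framework's components, but BIST designs conventionally pair a pseudo-random pattern generator (an LFSR) with a response compactor (a MISR). A minimal software model of these two standard components, purely illustrative and not taken from the paper:

```python
def lfsr_step(state: int, taps: int, width: int) -> int:
    """Advance a Fibonacci LFSR one step; `taps` is a bitmask of feedback taps."""
    feedback = bin(state & taps).count("1") & 1
    return ((state << 1) | feedback) & ((1 << width) - 1)

def generate_patterns(seed: int, taps: int, width: int, n: int):
    """Produce n pseudo-random test patterns from the LFSR."""
    state, patterns = seed, []
    for _ in range(n):
        patterns.append(state)
        state = lfsr_step(state, taps, width)
    return patterns

def misr_signature(responses, taps: int, width: int) -> int:
    """Compact a response stream into a single signature word (MISR)."""
    sig = 0
    for r in responses:
        sig = lfsr_step(sig, taps, width) ^ (r & ((1 << width) - 1))
    return sig

# 4-bit maximal-length LFSR (primitive polynomial x^4 + x^3 + 1, taps at bits 3 and 2)
patterns = generate_patterns(seed=0b1000, taps=0b1100, width=4, n=15)
print(len(set(patterns)))  # 15: a maximal-length 4-bit LFSR cycles through all nonzero states
```

Because the MISR is linear over GF(2), any single-bit error in the response stream yields a different signature, which is what makes signature-based pass/fail checks work.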
Bridging the Testing Speed Gap: Design for Delay Testability
The economic testing of high-speed digital ICs is becoming increasingly problematic. Even advanced, expensive testers are not always capable of testing these ICs because of their high-speed limitations. This paper focuses on a design-for-delay-testability technique that allows high-speed ICs to be tested using inexpensive, low-speed ATE. Extensions toward full BIST of delay faults are also addressed.
Adaptive sampling trust-region methods for derivative-based and derivative-free simulation optimization problems
We consider unconstrained optimization problems where only “stochastic” estimates of the objective function are observable as replicates from a Monte Carlo simulation oracle. In the first study we assume that the function gradients are directly observable through the Monte Carlo simulation. We propose ASTRO, an adaptive-sampling-based trust-region optimization method in which a stochastic local model is constructed, optimized, and updated iteratively. ASTRO is a derivative-based algorithm and provides almost-sure convergence to a first-order critical point with good practical performance. In the second study the Monte Carlo simulation is assumed to provide no direct observations of the function gradient. We present ASTRO-DF, a class of derivative-free trust-region algorithms in which the stochastic local model is obtained through interpolation. Function estimation (as well as gradient estimation) and model construction within ASTRO and ASTRO-DF are adaptive in the sense that the extent of Monte Carlo sampling is determined by continuously monitoring and balancing measures of sampling and structural errors. This error balancing is designed to ensure that the Monte Carlo effort is sensitive to the algorithm trajectory, sampling more whenever an iterate is inferred to be close to a critical point and less when far away. We demonstrate the almost-sure convergence of ASTRO-DF's iterates to a first-order critical point when using quadratic stochastic interpolation models. The question of using more complicated models, e.g., regression or stochastic kriging, in combination with adaptive sampling is worth further investigation and will benefit from the methods of proof we present.
We investigate the implementation of ASTRO and ASTRO-DF along with heuristics that enhance the implementation of ASTRO-DF, and report their finite-time performance on a series of low-to-moderate-dimensional problems in the CUTEr framework. We speculate that the iterates of both ASTRO and ASTRO-DF achieve the canonical Monte Carlo convergence rate, although a proof remains elusive.
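A toy sketch of the adaptive-sampling trust-region idea described above (a hypothetical 1-D illustration, not the authors' implementation): the sample size at each design point grows until the standard error of the estimate is small relative to the squared trust-region radius, so Monte Carlo effort tracks the algorithm's trajectory.

```python
import random
import statistics

random.seed(0)

def oracle(x):
    """Monte Carlo oracle: noisy evaluation of f(x) = x**2."""
    return x * x + random.gauss(0.0, 0.1)

def estimate(x, delta, kappa=1.0, n0=10, nmax=10000):
    """Adaptively sample until the standard error of the mean drops below
    kappa * delta**2, balancing sampling error against model error."""
    samples = [oracle(x) for _ in range(n0)]
    while True:
        n = len(samples)
        se = statistics.stdev(samples) / n ** 0.5
        if se <= kappa * delta * delta or n >= nmax:
            return statistics.fmean(samples)
        samples.extend(oracle(x) for _ in range(n))  # double the sample size

def astro_df_1d(x, delta=1.0, iters=30):
    """Toy 1-D derivative-free trust-region loop with quadratic interpolation."""
    for _ in range(iters):
        fm, f0, fp = (estimate(x - delta, delta),
                      estimate(x, delta),
                      estimate(x + delta, delta))
        g = (fp - fm) / (2 * delta)           # model gradient
        h = (fp - 2 * f0 + fm) / delta ** 2   # model curvature
        step = -g / h if h > 1e-12 else -delta * (1 if g > 0 else -1)
        step = max(-delta, min(delta, step))  # stay inside the trust region
        f_new = estimate(x + step, delta)
        pred = -(g * step + 0.5 * h * step * step)
        rho = (f0 - f_new) / pred if pred > 1e-12 else -1.0
        if rho > 0.1:              # successful step: accept and expand
            x, delta = x + step, min(2 * delta, 1.0)
        else:                      # unsuccessful: shrink the region
            delta *= 0.5
    return x

x_star = astro_df_1d(2.0)
print(abs(x_star))  # should end up near the minimizer x = 0
```

Note how shrinking delta automatically tightens the sampling tolerance, which is the error-balancing behavior the abstract describes: more replications near a critical point, fewer far away.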
Robust Prediction Error Estimation with Monte-Carlo Methodology
In predictive modeling with simulation or machine learning, it is critical to accurately assess the quality of estimated values through output analysis. In recent decades, output analysis has been enriched with methods that quantify the impact of input-data uncertainty on model outputs in order to increase robustness. However, most of these developments apply only when the input data can be modeled parametrically. We propose a unified output-analysis framework for simulation and machine learning outputs through the lens of Monte Carlo sampling. This framework provides nonparametric quantification of the variance and bias induced in the outputs, with higher-order accuracy. Our new bias-corrected estimation from the model outputs leverages an extension of fast iterative bootstrap sampling and higher-order influence functions. To make the proposed estimation methods scalable, we devise budget-optimal rules and leverage control variates for variance reduction. Our numerical results demonstrate a clear advantage in building better and more robust confidence intervals for both simulation and machine learning frameworks.
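The fast iterative bootstrap and higher-order influence functions mentioned above go beyond what fits here, but the basic nonparametric bootstrap bias-correction step they refine can be sketched in a few lines (illustrative only; the estimator and sample sizes are made up for the example):

```python
import random
import statistics

random.seed(1)

def plugin_variance(xs):
    """Deliberately biased plug-in estimator: divides by n rather than n - 1."""
    m = statistics.fmean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def bootstrap_bias_corrected(xs, estimator, b=2000):
    """Estimate bias = E[estimator(resample)] - estimator(xs) by Monte Carlo
    resampling with replacement, then subtract it from the original estimate."""
    theta_hat = estimator(xs)
    boot = [estimator(random.choices(xs, k=len(xs))) for _ in range(b)]
    bias_hat = statistics.fmean(boot) - theta_hat
    return theta_hat - bias_hat

data = [random.gauss(0.0, 1.0) for _ in range(30)]
raw = plugin_variance(data)
corrected = bootstrap_bias_corrected(data, plugin_variance)
print(raw, corrected)  # the corrected value moves toward the unbiased n/(n-1) scaling
```

For the plug-in variance the bootstrap bias estimate is roughly -raw/n, so the correction approximately recovers the familiar n/(n-1) adjustment without any parametric assumption on the data.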
Iteration Complexity and Finite-Time Efficiency of Adaptive Sampling Trust-Region Methods for Stochastic Derivative-Free Optimization
Adaptive sampling with interpolation-based trust regions, or ASTRO-DF, is a successful algorithm for stochastic derivative-free optimization, with an easy-to-understand-and-implement design that guarantees almost-sure convergence to a first-order critical point. To reduce its dependence on the problem dimension, we present local models with diagonal Hessians constructed on interpolation points based on a coordinate basis. We also leverage the interpolation points in a direct-search manner whenever possible to boost ASTRO-DF's finite-time performance. We prove that the algorithm achieves the canonical iteration complexity almost surely, which is the first guarantee of its kind without placing assumptions on the quality of function estimates, on model quality, or on independence between them. Numerical experimentation reveals the computational advantage of ASTRO-DF with coordinate direct search, owing to computational savings and better steps in the early iterations of the search.
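The diagonal-Hessian local model can be illustrated with a small sketch (a hypothetical, noise-free implementation for clarity): interpolating on the 2d+1 coordinate-basis points x and x ± δe_i yields a gradient by central differences and a diagonal Hessian by second differences, using O(d) evaluations instead of the O(d²) needed for a full quadratic model.

```python
def diagonal_quadratic_model(f, x, delta):
    """Fit a quadratic model with diagonal Hessian on the 2d+1 points
    x and x +/- delta * e_i (coordinate basis)."""
    d = len(x)
    f0 = f(x)
    g, h = [0.0] * d, [0.0] * d
    for i in range(d):
        xp = list(x); xp[i] += delta
        xm = list(x); xm[i] -= delta
        fp, fm = f(xp), f(xm)
        g[i] = (fp - fm) / (2 * delta)            # central difference: gradient
        h[i] = (fp - 2 * f0 + fm) / delta ** 2    # second difference: diagonal Hessian
    return f0, g, h

# Separable test function f(x) = sum (i+1) * x_i**2, true Hessian diag = [2, 4, 6]
f = lambda x: sum((i + 1) * v * v for i, v in enumerate(x))
f0, g, h = diagonal_quadratic_model(f, [1.0, 1.0, 1.0], 1e-3)
print(h)  # approximately [2.0, 4.0, 6.0]
```

For separable objectives the diagonal model is exact up to rounding; in general it trades cross-curvature information for the dimension-friendly evaluation budget the abstract highlights.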
Network Intrusion Detection with Limited Labeled Data
With the increasing dependence of daily life on computer networks, the security of these networks has become critically important. Many intrusion attacks on networks have been designed, and attackers keep improving them. The ability to detect intrusions with a limited amount of labeled data is therefore desirable for providing networks with a higher level of security. In this paper we design an intrusion detection system based on a deep neural network. The proposed system relies on self-supervised contrastive learning, in which a large amount of unlabeled data can be used to generate informative representations suitable for various downstream tasks with a limited amount of labeled data. In our experiments, the proposed system achieves an accuracy of 94.05% on the UNSW-NB15 dataset, an improvement of 4.22% over the previous method based on self-supervised learning. Our simulations also show strong results when the amount of labeled training data is limited. The Encoder Block trained on the UNSW-NB15 dataset has also been tested on other datasets for representation extraction, showing competitive results in downstream tasks.
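The paper's system is not reproduced here, but self-supervised contrastive pretraining of the kind it describes commonly uses an NT-Xent (SimCLR-style) loss: two augmented views of the same sample form a positive pair, and all other embeddings in the batch serve as negatives. A short sketch (embeddings and batch are made up for the example):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss over a batch: z1[i] and z2[i] are embeddings of two
    augmented views of sample i; every other embedding is a negative."""
    z = z1 + z2
    n = len(z1)
    loss = 0.0
    for i in range(2 * n):
        j = (i + n) % (2 * n)  # index of the positive partner
        sims = [math.exp(cosine(z[i], z[k]) / tau)
                for k in range(2 * n) if k != i]
        pos = math.exp(cosine(z[i], z[j]) / tau)
        loss += -math.log(pos / sum(sims))
    return loss / (2 * n)

# Aligned views (positives similar) should give a lower loss than misaligned ones.
a = [[1.0, 0.0], [0.0, 1.0]]
good = nt_xent(a, [[0.9, 0.1], [0.1, 0.9]])
bad = nt_xent(a, [[0.1, 0.9], [0.9, 0.1]])
print(good < bad)  # True
```

Minimizing this loss pulls the two views of each sample together and pushes all other samples apart, which is what makes the learned encoder useful for downstream detection with few labels.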
Autoimmune pancreatitis as a very rare cause of recurrent pancreatitis in children; a case report and review of literature
Autoimmune pancreatitis, a chronic inflammation of the pancreas due to an autoimmune mechanism, is a rare type of pancreatitis. A 14-year-old girl presented with multiple episodes of abdominal pain and nausea with elevated amylase and lipase, suspicious for acute recurrent pancreatitis, since 3 years of age. After thorough evaluation for secondary causes of recurrent and familial pancreatitis, she finally responded to corticosteroid treatment. Although very rare, autoimmune processes should be considered in teenagers with recurrent pancreatitis.
Using layer-wise training for Road Semantic Segmentation in Autonomous Cars
A recently developed application of computer vision is pathfinding in self-driving cars. Semantic scene understanding and semantic segmentation, as subfields of computer vision, are widely used in autonomous driving. Semantic segmentation for pathfinding uses deep learning methods and large sample datasets to train a proper model. Given the importance of this task, accurate and robust models should be trained to perform properly under different lighting and weather conditions and in the presence of noisy input data. In this paper, we propose a novel learning method for semantic segmentation called layer-wise training and evaluate it on a lightweight, efficient architecture, the efficient neural network (ENet). The results of the proposed learning method are compared with classic learning approaches in terms of mIoU performance, network robustness to noise, and the possibility of reducing the size of the structure, on two RGB image datasets covering road (CamVid) and off-road (Freiburg Forest) paths. Using this method partially eliminates the need for transfer learning and also improves network performance when the input is noisy.
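Layer-wise training as described above trains a network stage by stage; a toy scalar version of the greedy idea (purely illustrative, not the paper's ENet procedure) trains one layer with a temporary head, freezes it, then trains the next layer on the frozen features:

```python
import random

random.seed(0)

# Toy greedy layer-wise training: fit y = 2x with a two-layer scalar "network"
# y_hat = w2 * (w1 * x). Stage 1 trains layer 1 with a temporary identity head;
# stage 2 freezes layer 1 and trains only layer 2 on its outputs.
data = [(x / 10, 2 * x / 10) for x in range(-10, 11)]

def sgd_scalar(grad_fn, w, lr=0.1, epochs=200):
    """Plain per-sample gradient descent on a single scalar weight."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * grad_fn(w, x, y)
    return w

# Stage 1: train w1 against the targets through a fixed head (weight 1).
w1 = sgd_scalar(lambda w, x, y: 2 * (w * x - y) * x, random.uniform(-1, 1))

# Stage 2: freeze w1; train w2 on the frozen layer's features h = w1 * x.
w2 = sgd_scalar(lambda w, x, y: 2 * (w * (w1 * x) - y) * (w1 * x),
                random.uniform(-1, 1))

mse = sum((w2 * w1 * x - y) ** 2 for x, y in data) / len(data)
print(mse)  # near zero: the frozen first-layer features suffice for the head
```

Each stage solves a smaller, better-conditioned problem than end-to-end training of the whole stack, which is the intuition behind applying the scheme to ENet's encoder and decoder stages.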
A multidimensional approach to determinants of computer use in primary education: teacher and school characteristics