
    Systolic Array Implementations With Reduced Compute Time.

    The goal of the research is the establishment of a formal methodology to develop computational structures more suitable for the changing nature of real-time signal processing and control applications. A major effort is devoted to the following question: given a systolic array designed to execute a particular algorithm, what other algorithms can be executed on the same array? One approach to answering this question is based on a general model of array operations using graph-theoretic techniques. As a result, a systematic procedure is introduced that models array operations as a function of the compute cycle. As a consequence of the analysis, the dissertation develops the concept of fast algorithm realizations. This concept characterizes specific realizations that can be evaluated in a reduced number of cycles. It restricts the operations to remain in the same class but with reduced execution time. The concept takes advantage of the data dependencies of the algorithm at hand. This feature allows the modification of existing structures by reordering the input data. Applications of the principle allow optimum-time band and triangular matrix products on arrays designed for dense matrices. A second approach for analyzing the families of algorithms implementable in an array is based on the concept of array time-constrained operation. The principle uses the number of compute cycles as an additional degree of freedom to expand the class of transformations generated by a single array. A mathematical approach, based on concepts from multilinear algebra, is introduced to model the recursive transformations implemented in linear arrays at each compute cycle. The proposed representation is general enough to encompass a large class of signal processing and control applications. A complete analytical model of the linear maps implementable by the array at each compute cycle is developed.
The proposed methodology results in arrays that are more adaptable to the changing nature of operations. Lessons learned from analyzing existing arrays are used to design smart arrays for special algorithm realizations. Applications of the methodology include the design of flexible time structures and the ability to decompose a full-size array into subarrays implementing smaller-size problems.
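The abstract's notion of modeling array operations "as a function of the compute cycle" can be illustrated with a minimal cycle-by-cycle simulation. The sketch below (not the dissertation's formal graph-theoretic model, just an assumed illustration) emulates a linear systolic array computing a matrix-vector product, where each anti-diagonal of the data-dependence graph is consumed in one compute cycle:

```python
import numpy as np

def systolic_matvec(A, x):
    """Cycle-by-cycle emulation of a linear systolic array computing
    y = A @ x. Illustrative only: at cycle t, processing element i
    consumes the input pair (A[i, t - i], x[t - i]), so one cycle
    covers one anti-diagonal of the data-dependence graph."""
    n, m = A.shape
    y = np.zeros(n)
    cycles = 0
    for t in range(n + m - 1):          # one iteration = one compute cycle
        for i in range(n):              # all PEs operate in parallel
            j = t - i
            if 0 <= j < m:              # PE i is active only while data flows past it
                y[i] += A[i, j] * x[j]
        cycles += 1
    return y, cycles
```

Counting cycles this way (n + m - 1 for an n x m operand) is what makes "reduced compute time" measurable: reordering the input data to exploit sparsity, as in the band and triangular cases the abstract mentions, shortens the active anti-diagonal schedule.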

    Generalized Methodology for Array Processor Design of Real-time Systems

    Many techniques and design tools have been developed for mapping algorithms to array processors. Linear mapping is usually used for regular algorithms. Large and complex problems are not regular by nature, and regularization may cause a computational overhead that prevents the ability to meet real-time deadlines. In this paper, a systematic design methodology for mapping partially-regular as well as regular Dependence Graphs is presented. In this approach the set of all optimal solutions is generated under the given constraints. Due to the nature of the problem and the tight timing constraints of real-time systems, the set of alternative solutions is limited. An image processing example is discussed.
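The linear mapping the abstract refers to is conventionally expressed with a schedule vector s (assigning each Dependence Graph node a time step) and a projection vector p (assigning it a processor). The sketch below is a generic textbook-style formulation, assumed for illustration rather than taken from this paper's methodology; it checks the standard validity conditions that every dependence must advance in time and no two nodes may collide in the same (time, processor) slot:

```python
import numpy as np

def linear_map(nodes, deps, s, p):
    """Map Dependence Graph nodes to (time, processor) pairs using a
    schedule vector s and a projection/allocation vector p.
    The mapping is valid iff s.d > 0 for every dependence vector d
    (computation respects data dependencies) and no two nodes are
    assigned the same (time, processor) slot."""
    if not all(np.dot(s, d) > 0 for d in deps):
        raise ValueError("schedule vector violates a dependence")
    mapping = {tuple(v): (int(np.dot(s, v)), int(np.dot(p, v))) for v in nodes}
    slots = list(mapping.values())
    if len(set(slots)) != len(slots):
        raise ValueError("two nodes collide in the same (time, processor) slot")
    return mapping
```

Enumerating all (s, p) pairs that pass these checks, then ranking them against timing constraints, is one way to realize the "set of all optimal solutions under the given constraints" that the abstract describes.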

    Blood flow velocity prediction in aorto-iliac stent grafts using computational fluid dynamics and Taguchi method.

    Covered Endovascular Reconstruction of Aortic Bifurcation (CERAB) is a new technique to treat extensive aortoiliac occlusive disease with covered expandable stent grafts to rebuild the aortoiliac bifurcation. Post-stenting Doppler ultrasound (DUS) measurement of maximum peak systolic velocity (PSVmax) in the stented segment is widely used to determine patency and for follow-up surveillance due to its portability, affordability and ease of use. Anecdotally, changes in hemodynamics created by CERAB can lead to falsely high PSVmax, requiring CT angiography (CTA) for further assessment. Therefore, the importance of DUS would be enhanced with a proposed PSVmax prediction tool to ascertain whether PSVmax falls within the acceptable range of prediction. We have developed a prediction tool based on idealized models of aortoiliac bifurcations with various infra-renal PSV (PSVin), iliac-to-aortic area ratios (R) and aortoiliac bifurcation angles (a). The Taguchi method with orthogonal arrays (OA) was utilized to minimize the number of Computational Fluid Dynamics (CFD) simulations performed under physiologically realistic conditions. Analysis of Variance (ANOVA) and Multiple Linear Regression (MLR) analyses were performed to assess goodness of fit and to predict PSVmax. PSVin and R were found to contribute 94.06% and 3.36% respectively to PSVmax. The goodness of fit based on adjusted R2 improved from 99.1% to 99.9% in moving from a linear to an exponential function. The PSVmax predictor based on the exponential model was evaluated with sixteen patient-specific cases, with a mean prediction error of 9.9% and standard deviation of 6.4%. Eleven out of sixteen cases (69%) in our current retrospective studies would have avoided CTA if the proposed predictor had been used to screen out DUS-measured PSVmax with prediction error greater than 15%.
The predictor therefore has the potential to be used as a clinical tool to detect PSVmax more accurately post aortoiliac stenting, and might reduce diagnostic errors and avoid unnecessary expense and risk from CTA follow-up imaging.
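An exponential MLR model of the kind the abstract describes can be fitted by log-transforming PSVmax and solving an ordinary least-squares problem in the three predictors (PSVin, R, a). The sketch below is a generic illustration of that fitting step; the model form ln(PSVmax) = b0 + b1*PSVin + b2*R + b3*a is an assumption, and the paper's actual coefficients are not reproduced here:

```python
import numpy as np

def fit_exponential_psv(psv_in, R, a, psv_max):
    """Fit ln(PSVmax) = b0 + b1*PSVin + b2*R + b3*a by ordinary least
    squares, i.e. an exponential model linearized by a log transform.
    Illustrative form only; not the paper's published coefficients."""
    X = np.column_stack([np.ones_like(psv_in), psv_in, R, a])
    coef, *_ = np.linalg.lstsq(X, np.log(psv_max), rcond=None)
    return coef

def predict_psv_max(coef, psv_in, R, a):
    """Invert the log transform to predict PSVmax from the fitted model."""
    return np.exp(coef[0] + coef[1] * psv_in + coef[2] * R + coef[3] * a)
```

In practice the fit would be run on the CFD simulation outputs selected by the Taguchi orthogonal array, and a DUS-measured PSVmax falling outside a chosen prediction-error band (15% in the study) would flag the case for CTA.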

    A 2D DWT architecture suitable for the Embedded Zerotree Wavelet Algorithm

    Digital imaging has had an enormous impact on industrial applications such as the Internet and video-phone systems, and demand for such applications is growing enormously. In particular, internet application users are growing at a near exponential rate. The sharp increase in applications using digital images has placed much emphasis on the fields of image coding, storage, processing and communications, and new techniques are continuously being developed with the main aim of increasing efficiency. Image coding in particular is a field of great commercial interest. A digital image requires a large amount of data to be created, and this large amount of data causes many problems when storing, transmitting or processing the image. Reducing the amount of data used to represent an image is the main objective of image coding. The JPEG image coding standard has enjoyed widespread acceptance, and the industry continues to explore its various implementation issues. However, recent research indicates that multiresolution-based image coding is a far superior alternative. A recent development in the field of image coding is the use of the Embedded Zerotree Wavelet (EZW) technique to achieve image compression. One of the aims of this thesis is to explain how this technique is superior to other current coding standards. It will be seen that an essential part of this method of image coding is the use of multiresolution analysis, a subband system whereby the subbands are logarithmically spaced in frequency and represent an octave-band decomposition. The block structure that implements this function is termed the two-dimensional Discrete Wavelet Transform (2D-DWT).
The 2D DWT can be achieved by several architectures, and these are analysed in order to choose the most suitable architecture for the EZW coder. Finally, this architecture is implemented and verified using the Synopsys Behavioural Compiler, and recommendations are made based on experimental findings.
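The octave-band decomposition the 2D-DWT performs can be sketched in software before committing to hardware. The following minimal example uses the Haar wavelet (assumed here purely for simplicity; the thesis's hardware architectures are not tied to any one wavelet) and applies the standard separable scheme, a row pass followed by a column pass, to produce the LL, LH, HL and HH subbands that the EZW coder scans for zerotrees:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a separable 2D Haar DWT: filter and decimate the
    rows, then the columns, yielding the four subbands
    (LL = approximation, LH/HL/HH = detail). Recursing on LL gives
    the octave-band pyramid used by the EZW coder."""
    def pass1d(x):
        # Orthonormal Haar analysis along the last axis.
        lo = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2)   # lowpass + decimate
        hi = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2)   # highpass + decimate
        return lo, hi
    L, H = pass1d(img)                      # row transform
    LL, LH = pass1d(L.swapaxes(0, 1))       # column transform of lowpass half
    HL, HH = pass1d(H.swapaxes(0, 1))       # column transform of highpass half
    return (LL.swapaxes(0, 1), LH.swapaxes(0, 1),
            HL.swapaxes(0, 1), HH.swapaxes(0, 1))
```

Because the transform is orthonormal, the signal energy is preserved across the four subbands, and smooth regions concentrate their energy in LL, which is precisely the property EZW exploits when it encodes insignificant detail coefficients as zerotrees.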