
    Design of ALU and Cache Memory for an 8 bit ALU

    The design of an ALU and a cache memory for use in a high-performance processor was examined in this thesis. Advanced architectures employing increased parallelism were analyzed to minimize the number of execution cycles needed for 8-bit integer arithmetic operations. In addition to the arithmetic unit, an optimized SRAM memory cell was designed for use as cache memory and as a fast look-up table. The ALU consists of stand-alone units for bit-parallel computation of basic integer arithmetic operations. Addition and subtraction were performed using Kogge-Stone parallel-prefix hardware operating at 330 MHz. A high-performance multiplier was built using a Radix-4 Modified Booth Encoder (MBE) and a Wallace tree summation array. The multiplier requires a single clock cycle for 8-bit integer multiplication and operates at a maximum frequency of 100 MHz. Multiplicative division hardware was built for executing both integer division and square root; it computes 8-bit division and square root in 4 clock cycles. The multiplier forms the basic building block of all these functional units, making a high level of resource sharing feasible with this architecture. The optimal operating frequency for the arithmetic unit is 70 MHz. A 6T CMOS SRAM cell measuring 90 µm² was designed using minimum-size transistors. The layout allows for horizontal overlap, resulting in an effective area of 76 µm² per cell for an 8x8 array. By substituting the equivalent bit-line capacitance of the P4 L1 cache, the memory was simulated to have a read time of 3.27 ns. An optimized set of test vectors was identified to enable high fault coverage without the need for any additional test circuitry. Sixteen test cases were identified that would toggle all the nodes and provide all possible inputs to the sub-units of the multiplier. A correlation-based semi-automatic method was investigated to facilitate test case identification for large multipliers. This method of testability eliminates the performance and area overhead associated with conventional testability hardware. A bottom-up design methodology was employed. The performance and area metrics are presented along with estimated power consumption. A set of Monte Carlo analyses was carried out to ensure the dependability of the design under process variations as well as fluctuations in operating conditions. The arithmetic unit was found to require a total die area of approximately 2 mm² in a 0.35 micron process.
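    The Kogge-Stone adder mentioned above can be modeled in software to check the carry-prefix logic before committing it to hardware. The following Python sketch is an illustration of the general parallel-prefix technique, not the thesis's actual design; the function name and the 8-bit default width are assumptions. It computes bitwise generate/propagate signals and merges them in log2(width) prefix stages, each stage doubling the span of the carry lookahead:

```python
def kogge_stone_add(a, b, width=8):
    """Software model of a Kogge-Stone parallel-prefix adder.

    Illustrative only: real hardware evaluates every stage in parallel;
    here the stages are simulated sequentially.
    """
    # Per-bit generate (both inputs 1) and propagate (exactly one input 1).
    g = [((a >> i) & (b >> i)) & 1 for i in range(width)]
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(width)]

    # log2(width) prefix stages; the combined span doubles each stage.
    span = 1
    while span < width:
        g_new, p_new = g[:], p[:]
        for i in range(span, width):
            g_new[i] = g[i] | (p[i] & g[i - span])  # group generate
            p_new[i] = p[i] & p[i - span]           # group propagate
        g, p = g_new, p_new
        span *= 2

    # After the prefix network, g[i] is the carry OUT of bit i,
    # so the carry INTO bit i is g[i-1] (0 for bit 0).
    carries = [0] + g
    s = 0
    for i in range(width):
        s |= ((((a >> i) ^ (b >> i)) ^ carries[i]) & 1) << i
    return s  # result wraps modulo 2**width, like the hardware
```

    The sequential loop hides what makes the hardware fast: all positions within a stage update simultaneously, so an 8-bit adder settles in three prefix stages rather than eight ripple steps.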

    Blockwise Transform Image Coding Enhancement and Edge Detection

    The goal of this thesis is high-quality image coding, enhancement and edge detection. A unified approach using novel fast transforms is developed to achieve all three objectives. The requirements are low bit rate, low implementation complexity and parallel processing. The last requirement is achieved by processing the image in small blocks such that all blocks can be processed simultaneously, similar to biological vision. A major issue is to minimize the resulting block effects. This is done by using proper transforms and possibly an overlap-save technique. The bit rate in image coding is minimized by developing new results in optimal adaptive multistage transform coding. Newly developed fast trigonometric transforms are also utilized and compared for transform coding, image enhancement and edge detection. Both image enhancement and edge detection involve generalised bandpass filtering with fast transforms. The algorithms have been developed with special attention to the properties of biological vision systems.
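    The blockwise transform coding pursued above can be illustrated with a small separable transform: each block is transformed, most coefficients are discarded to cut the bit rate, and the block is reconstructed from what remains. The sketch below uses the standard orthonormal DCT-II as a stand-in for the thesis's novel fast trigonometric transforms (which it does not reproduce); all function names are illustrative:

```python
import math

def dct_1d(x):
    """Orthonormal DCT-II of a 1-D sequence (direct O(n^2) form)."""
    n = len(x)
    out = []
    for k in range(n):
        c = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(c * sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                           for i in range(n)))
    return out

def idct_1d(X):
    """Inverse of dct_1d (orthonormal, so just the transpose)."""
    n = len(X)
    out = []
    for i in range(n):
        s = 0.0
        for k in range(n):
            c = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
            s += c * X[k] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
        out.append(s)
    return out

def code_block(block, keep):
    """Transform a square block separably (rows then columns), zero all
    but the top-left keep x keep coefficients, and reconstruct."""
    n = len(block)
    # Forward 2-D transform: rows, then columns (via transposes).
    rows = [dct_1d(list(r)) for r in block]
    coeff = list(zip(*[dct_1d(list(c)) for c in zip(*rows)]))
    # Crude "coding": keep only the low-frequency corner.
    kept = [[coeff[r][c] if r < keep and c < keep else 0.0
             for c in range(n)] for r in range(n)]
    # Inverse transform (row/column order commutes for separable transforms).
    rows = [idct_1d(r) for r in kept]
    return list(zip(*[idct_1d(list(c)) for c in zip(*rows)]))
```

    Discarding high-frequency coefficients independently per block is exactly what produces the block effects the thesis works to minimize.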

    AOIPS water resources data management system

    A geocoded data management system applicable to hydrological applications was designed to demonstrate the utility of the Atmospheric and Oceanographic Information Processing System (AOIPS) for hydrology. Within that context, the geocoded hydrology data management system was designed to take advantage of the interactive capability of the AOIPS hardware. The portions of the Water Resource Data Management System which best demonstrate the interactive nature of the hydrology data management system were implemented on the AOIPS. A hydrological case study was prepared using the data supplied for the Bear River watershed, located in northwest Utah, southeast Idaho, and western Wyoming.

    Effect of Biocrust Development on Establishment of Native Plants in a Salt Desert System

    Salt desert shrublands are semiarid, shrub-dominated ecosystems that inhabit salt-affected soils. In Great Basin salt deserts, exotic annual plants are invading and displacing native plants. The low productivity and slow population growth of native plants in these ecosystems make them vulnerable to invasion and limit their ability to compete with invasive plants and return to a natural state. Active revegetation efforts, such as planting and direct seeding of native plants, are often unsuccessful for the same reasons. Biological soil crusts (communities of cyanobacteria, lichens, mosses, microfungi and other microorganisms that live on the surface layer of the soil) are an important component of salt deserts and commonly occur in the interspaces between plants. Biological soil crusts may provide favorable places to target revegetation practices, as they can provide increased water and nutrients to vascular plants. Additionally, exotic annual species are less able to invade and dominate biological soil crusts. I tested how differing levels of crust development (the amount of cyanobacteria and other organisms) influence the effectiveness of two methods of planting native species: broadcast seeding and transplanting of greenhouse-grown seedlings. Experiments were done in a degraded salt desert shrubland on salt-affected soils in southeastern Idaho. I evaluated planting techniques in areas of both high and low crust development. Treatment combinations were applied to three native grasses, Indian ricegrass (Achnatherum hymenoides (Roem. & Schult.) Barkworth), squirreltail (Elymus elymoides (Raf.) Swezey), and basin wildrye (Leymus cinereus (Scribn. & Merr.) Á. Löve), and one native forb, gooseberryleaf globemallow (Sphaeralcea grossulariifolia (Hook. & Arn.) Rydb.). Plant survival, size and reproductive success were measured at different points in time for a year after planting. Broadcast seeding largely failed across all levels of crust development and species. Biological soil crusts improved the ability of spring-planted species, in a dry year, to survive through the first summer after planting. The survival of transplanted seedlings varied across species. This study demonstrates the greater success of transplanting as a method of native species establishment, and the benefits of high soil crust development in the initial establishment of transplanted seedlings during dry periods.

    Simulation of arithmetic and Boolean functions on Turing machines

    Call number: LD2668 .T4 1962 C4

    Pipelining Of Double Precision Floating Point Division And Square Root Operations On Field-programmable Gate Arrays

    Many space applications, such as vision-based systems, synthetic aperture radar, and radar altimetry, rely increasingly on high-data-rate DSP algorithms. These algorithms use double precision floating point arithmetic operations. While most DSP applications can be executed on DSP processors, the numerical requirements of these new space applications surpass by far the capabilities of many current DSP processors. Since the tradition in DSP processing has been to use fixed point number representation, DSP processors have only recently begun to incorporate floating point arithmetic units, and most of these units handle only single precision floating point addition/subtraction, multiplication, and occasionally division. While DSP processors are slowly evolving to meet the numerical requirements of newer space applications, FPGA densities have rapidly increased to parallel and even surpass the gate densities of many DSP processors and commodity CPUs. This makes FPGAs attractive platforms for compute-intensive DSP computations. Even in the presence of this clear advantage on the side of FPGAs, few attempts have been made to examine how wide-precision floating point arithmetic, particularly division and square root operations, can perform on FPGAs to support these compute-intensive DSP applications. In this context, this thesis presents sequential and pipelined designs of IEEE-754 compliant double precision floating point division and square root operations based on low-radix digit recurrence algorithms. FPGA implementations of these algorithms have the advantage of being easily testable. In particular, the pipelined designs are synthesized based on careful partial and full unrolling of the iterations in the digit recurrence algorithms. Overall, the sequential and pipelined designs are common-denominator implementations which do not use any performance-enhancing embedded components such as multipliers and block memory. As these implementations exploit exclusively the fine-grain reconfigurable resources of Virtex FPGAs, they are easily portable to other FPGAs with similar reconfigurable fabrics without major modifications. The pipelined designs of the two operations are evaluated in terms of area, throughput, and dynamic power consumption as a function of pipeline depth. Pipelining experiments reveal that the area overhead tends to remain constant regardless of the degree of pipelining, while throughput increases with pipeline depth. In addition, these experiments reveal that pipelining reduces power considerably in shallow pipelines; pipelining the designs further does not necessarily lead to significant additional power reduction. By partitioning the designs into deeper pipelines, they can reach throughputs close to the 100 MFLOPS mark while consuming a modest 1% to 8% of the reconfigurable fabric within a Virtex-II XC2VX000 (e.g., XC2V1000 or XC2V6000) FPGA.
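    The low-radix digit recurrence algorithms underlying these designs retire one result digit per iteration, and it is exactly these iterations that the pipelined designs unroll into stages. The following integer-only Python sketch shows the two radix-2 recurrences in their simplest form (restoring division and digit-by-digit square root); the IEEE-754 normalization, rounding, and exception handling of the actual designs are omitted, and the names are illustrative:

```python
def restoring_divide(dividend, divisor, width=32):
    """Radix-2 restoring division: one quotient bit per iteration,
    corresponding to one digit-recurrence step (or pipeline stage)."""
    assert divisor > 0
    rem, quot = 0, 0
    for i in range(width - 1, -1, -1):
        rem = (rem << 1) | ((dividend >> i) & 1)  # shift in next dividend bit
        if rem >= divisor:                        # trial subtraction succeeds
            rem -= divisor
            quot |= 1 << i
    return quot, rem

def digit_recurrence_sqrt(x, width=16):
    """Radix-2 square-root recurrence: resolves one root bit per step
    by bringing down two radicand bits at a time. Valid for x < 4**width."""
    root, rem = 0, 0
    for i in range(width - 1, -1, -1):
        rem = (rem << 2) | ((x >> (2 * i)) & 3)   # bring down two bits
        trial = (root << 2) | 1                   # candidate 4*root + 1
        if trial <= rem:
            rem -= trial
            root = (root << 1) | 1
        else:
            root <<= 1
    return root  # floor(sqrt(x))
```

    Fully unrolling either loop yields one combinational stage per bit, which is why area grows with precision while the pipeline depth, and hence throughput, can be traded off stage by stage.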

    Image Classification with the Fisher Vector: Theory and Practice

    A standard approach to describing an image for classification and retrieval purposes is to extract a set of local patch descriptors, encode them into a high-dimensional vector and pool them into an image-level signature. The most common patch encoding strategy consists in quantizing the local descriptors into a finite set of prototypical elements. This leads to the popular Bag-of-Visual-words (BOV) representation. In this work, we propose to use the Fisher Kernel framework as an alternative patch encoding strategy: we describe patches by their deviation from a "universal" generative Gaussian mixture model. This representation, which we call the Fisher Vector (FV), has many advantages: it is efficient to compute, it leads to excellent results even with efficient linear classifiers, and it can be compressed with a minimal loss of accuracy using product quantization. We report experimental results on five standard datasets -- PASCAL VOC 2007, Caltech 256, SUN 397, ILSVRC 2010 and ImageNet10K -- with up to 9M images and 10K classes, showing that the FV framework is a state-of-the-art patch encoding technique.
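    The FV encoding can be sketched concisely: each descriptor is softly assigned to the Gaussians of the mixture, and normalized gradients of the log-likelihood are pooled over the descriptor set. The pure-Python sketch below keeps only the gradients with respect to the Gaussian means (the full FV also includes weight and variance gradients, plus power and L2 normalization); all names are illustrative:

```python
import math

def posteriors(x, weights, means, sigmas):
    """Soft assignment of descriptor x (a list of floats) to each
    component of a diagonal-covariance GMM, via log-sum-exp."""
    logs = []
    for w, mu, sg in zip(weights, means, sigmas):
        ll = math.log(w)
        for xd, md, sd in zip(x, mu, sg):
            ll += -0.5 * math.log(2 * math.pi * sd * sd)
            ll += -((xd - md) ** 2) / (2 * sd * sd)
        logs.append(ll)
    m = max(logs)
    e = [math.exp(v - m) for v in logs]
    s = sum(e)
    return [v / s for v in e]

def fisher_vector(X, weights, means, sigmas):
    """Mean-gradient Fisher Vector of descriptor set X under a diagonal
    GMM: a K*D vector of pooled, normalized deviations from the means."""
    K, D, T = len(weights), len(means[0]), len(X)
    fv = [0.0] * (K * D)
    for x in X:
        gamma = posteriors(x, weights, means, sigmas)
        for k in range(K):
            for d in range(D):
                fv[k * D + d] += gamma[k] * (x[d] - means[k][d]) / sigmas[k][d]
    for k in range(K):
        for d in range(D):
            fv[k * D + d] /= T * math.sqrt(weights[k])
    return fv
```

    Unlike a BOV histogram (K dimensions), this signature is K*D-dimensional even with the same codebook size, which is one reason the FV works well with simple linear classifiers.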

    Predictively Mapping the Plant Associations of the North Fork John Day Wilderness in Northeastern Oregon Using Classification Tree Modeling

    Shifting perspectives on restoration and management of public lands in the inland West have resulted in an increased need for maps of potential natural vegetation that cover large areas at sufficient scale to delineate individual stands. In this study, classification tree modeling was used to predictively model and map the plant association types of a relatively undisturbed wilderness area in the Blue Mountains of northeastern Oregon. Models were developed using field data and data derived from a geographic information system database. Elevation, slope, aspect, annual precipitation, solar radiation, soil type, and topographic position were important predictor variables. The model predicted most plant association types with a relatively high degree of accuracy; accuracy was lowest for the types within the grand fir series. Fuzzy confusion analysis was used to analyze model performance and indicated an overall model accuracy of 72%.
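    A classification tree of the kind used above is grown by repeatedly choosing the predictor and threshold that best separate the response classes at each node, typically by minimizing Gini impurity. The sketch below implements a single split search in plain Python as an illustration of that node-level decision; the feature ordering (e.g., elevation first) and values in the usage test are hypothetical, not the study's data or software:

```python
def gini(labels):
    """Gini impurity of a label multiset: 1 - sum of squared class shares."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(rows, labels):
    """Exhaustive search for the (feature index, threshold) pair that
    minimizes weighted Gini impurity -- one node of a classification tree."""
    n_feat = len(rows[0])
    best = (None, None, float("inf"))
    for f in range(n_feat):
        for t in sorted({r[f] for r in rows}):
            left = [lab for r, lab in zip(rows, labels) if r[f] <= t]
            right = [lab for r, lab in zip(rows, labels) if r[f] > t]
            if not left or not right:
                continue  # degenerate split, skip
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
            if score < best[2]:
                best = (f, t, score)
    return best
```

    Recursing on the two resulting subsets until nodes are pure (or too small) yields the full tree; applying the learned splits to GIS raster layers cell by cell produces the predictive map.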

    Conjecturing-Based Computational Discovery of Patterns in Data

    Modern machine learning methods are designed to exploit complex patterns in data regardless of their form, while not necessarily revealing them to the investigator. Here we demonstrate situations where modern machine learning methods are ill-equipped to reveal feature interaction effects and other nonlinear relationships. We propose the use of a conjecturing machine that generates feature relationships in the form of bounds for numerical features and Boolean expressions for nominal features that are ignored by machine learning algorithms. The proposed framework is demonstrated for a classification problem with an interaction effect and a nonlinear regression problem. In both settings, true underlying relationships are revealed and generalization performance improves. The framework is then applied to patient-level data regarding COVID-19 outcomes to suggest possible risk factors.
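    A conjecturing machine in this spirit can be caricatured in a few lines: enumerate candidate relationships between named features and keep only those that every observation satisfies. The toy sketch below proposes only pairwise `<=` bounds and is purely illustrative of the enumerate-and-falsify idea, not the authors' system, which generates far richer bounds and Boolean expressions:

```python
from itertools import permutations

def conjecture_bounds(rows, names):
    """Naive conjecturing machine: propose every pairwise bound
    `names[i] <= names[j]` and keep those no observation falsifies."""
    kept = []
    for i, j in permutations(range(len(names)), 2):
        if all(r[i] <= r[j] for r in rows):
            kept.append(f"{names[i]} <= {names[j]}")
    return kept
```

    Surviving conjectures can then be handed back to a learner as engineered features, which is how the framework lets discovered relationships improve generalization rather than remain hidden inside a black-box model.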