
    HIGH-SPEED MULTIOUTPUT CLA-ADDERS USING 8-BIT MCC ADDER IN DOMINO LOGIC

    Adders are critical parts of processor circuits; processor performance increases as adder performance and functionality improve. The carry look-ahead (CLA) principle remains dominant in high-speed adder architectures because the carry delay can be reduced by computing each stage in parallel. This project implements an 8-bit Manchester carry chain (MCC) adder block in multiple-output domino CMOS logic. The even and odd carries of this adder are computed in parallel by two independent 4-bit carry chains. Implementing wider adders from this 8-bit module improves operating speed compared with adders built from the standard 4-bit MCC module. The proposed design technique can be used to implement 8-, 16-, 32-, and 64-bit adders in multiple-output domino logic using Mentor Graphics tools
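
    To make the even/odd carry-chain idea concrete, here is a minimal Python sketch of the carry computation the abstract describes. The generate/propagate definitions are the textbook ones; the function names and the exact chain partitioning are illustrative assumptions, not taken from the paper, and the sketch models only the logic, not the domino-CMOS implementation.

        # Bit-level model of an 8-bit block whose even and odd carries are
        # produced by two independent chains (illustrative reconstruction,
        # not the paper's circuit).

        def gp(a_bits, b_bits):
            """Per-bit generate and propagate signals."""
            g = [a & b for a, b in zip(a_bits, b_bits)]
            p = [a ^ b for a, b in zip(a_bits, b_bits)]
            return g, p

        def carries_even_odd(a_bits, b_bits, c0=0):
            """All carries of an 8-bit block via two independent chains.

            Each chain uses the two-position lookahead
            c[i+2] = g[i+1] | p[i+1]&g[i] | p[i+1]&p[i]&c[i],
            so odd carries depend only on odd carries and even carries
            only on even carries (in hardware, the chains run in parallel).
            """
            g, p = gp(a_bits, b_bits)
            n = len(a_bits)
            c = [0] * (n + 1)
            c[0] = c0
            c[1] = g[0] | (p[0] & c[0])            # seed the odd chain
            for i in range(1, n - 1, 2):           # odd chain: c3, c5, c7
                c[i + 2] = g[i + 1] | (p[i + 1] & g[i]) | (p[i + 1] & p[i] & c[i])
            for i in range(0, n, 2):               # even chain: c2, c4, c6, c8
                c[i + 2] = g[i + 1] | (p[i + 1] & g[i]) | (p[i + 1] & p[i] & c[i])
            return c

        def add8(a, b, c0=0):
            """Reference 8-bit addition using the split carry chains."""
            a_bits = [(a >> i) & 1 for i in range(8)]
            b_bits = [(b >> i) & 1 for i in range(8)]
            _, p = gp(a_bits, b_bits)
            c = carries_even_odd(a_bits, b_bits, c0)
            s = [p[i] ^ c[i] for i in range(8)]
            return sum(bit << i for i, bit in enumerate(s)) | (c[8] << 8)

        assert all(add8(a, b) == a + b for a in range(256) for b in range(0, 256, 7))

    The assert at the end checks the split-chain result against ordinary integer addition over a sweep of operand pairs.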

    Performance Comparison of Static CMOS and Domino Logic Style in VLSI Design: A Review

    Of late, there has been a steep rise in the use of handheld gadgets and high-speed applications. VLSI designers often choose the static CMOS logic style for low-power applications: it provides low power dissipation and is free from signal-integrity and noise issues. However, designs based on this logic style are often slow and cannot be used in high-performance circuits. On the other hand, designs based on the domino logic style yield high performance and occupy less area, yet they dissipate more power than their static CMOS counterparts. In practice, designers mix more than one logic style judiciously during circuit synthesis to obtain the advantages of each. A carefully designed mixed static/domino CMOS circuit can tap the advantages of both logic styles while overcoming their individual shortcomings

    THE METHODS OF IMPROVING THE SPEED OF CLA ADDERS IN DOMINO LOGIC

    Adders are critical parts of processor circuits; processor performance increases as adder performance and functionality improve. The carry look-ahead (CLA) principle remains dominant in high-speed adder architectures because the carry delay can be reduced by computing each stage in parallel. This paper implements a 4-bit Manchester carry chain (MCC) adder block in multiple-output domino CMOS logic. The even and odd carries of this adder are computed in parallel by two independent 2-bit carry chains. Implementing wider adders from this module improves operating speed compared with adders built from the standard 4-bit MCC module. The proposed design technique can be used to implement 4-, 8-, 16-, and 32-bit adders in multiple-output domino logic using Mentor Graphics tools
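
    The even/odd split used in both of these MCC designs follows from unrolling the standard carry recurrence by two positions. With the usual generate/propagate definitions (standard background, not notation from the paper; "+" denotes OR and juxtaposition denotes AND):

        g_i = a_i b_i,   p_i = a_i XOR b_i,   c_{i+1} = g_i + p_i c_i

    Substituting the recurrence into itself once gives

        c_{i+2} = g_{i+1} + p_{i+1} g_i + p_{i+1} p_i c_i

    so each carry depends only on the carry two positions behind it: the even carries form one chain and the odd carries another, and the two chains can be evaluated concurrently.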

    Quantum Algorithm for Variant Maximum Satisfiability

    In this paper, we propose a novel quantum algorithm for the maximum satisfiability problem. Satisfiability (SAT) asks for an assignment of the input variables that makes a given Boolean function evaluate to TRUE, or a proof that no such assignment exists. For a product-of-sums (POS) SAT problem, we propose a novel quantum algorithm for maximum satisfiability (MAX-SAT) that returns the maximum number of OR terms satisfied for a SAT-unsatisfiable function, telling us how far the given Boolean function is from satisfiability. We use Grover’s algorithm with a new block, a quantum counter, in the oracle circuit. The proposed circuit can be adapted to various forms of satisfiability expressions and to several satisfiability-like problems. Using the quantum counter and mirrors for the SAT terms reduces the need for ancilla qubits and avoids realizing the large Toffoli gate that would otherwise be required. For a Boolean function with T terms, our circuit reduces the number of ancilla qubits from T to approximately ⌈log₂ T⌉ + 1. We analyzed and compared the quantum cost of the traditional oracle design with ours, which achieves a lower quantum cost
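
    As a point of reference for what the oracle must decide, the following is a minimal classical brute-force evaluator for POS MAX-SAT. It only specifies the problem (the maximum number of satisfied OR terms); it is not the quantum circuit, and the clause encoding is an assumption made for illustration.

        from itertools import product

        # POS MAX-SAT reference: each clause is a list of literals, where
        # +k means variable x_k and -k means NOT x_k (1-indexed variables).

        def clause_satisfied(clause, assignment):
            return any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)

        def max_sat(clauses, n_vars):
            """Return (max number of satisfied clauses, one maximizing assignment)."""
            best_count, best = -1, None
            for bits in product([False, True], repeat=n_vars):
                count = sum(clause_satisfied(c, bits) for c in clauses)
                if count > best_count:
                    best_count, best = count, bits
            return best_count, best

        # (x1 + x2)(x1' + x3)(x2' + x3')(x1 + x3') over three variables:
        clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
        print(max_sat(clauses, 3))   # -> (4, (False, True, False)): all 4 terms satisfiable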

    Activity Prediction of Business Process Instances using Deep Learning Techniques

    The ability to predict the next activity of an ongoing case is becoming increasingly important in today’s businesses. Processes need to be monitored in real time in order to predict the remaining time of an open case, and to detect and prevent anomalies before they have a chance to impact performance. Moreover, financial regulations and laws are changing, requiring companies' processes to be increasingly transparent. Process mining, supported by deep learning techniques, can improve the results of internal audit activities. In this context, predicting the next activity can point out at-risk traces that need to be monitored, so that the business is aware of the situation and, where possible, can take corrective action in time. In recent years, this problem has been tackled using deep learning techniques such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Long Short-Term Memory (LSTM) neural networks, achieving consistent results. The first contribution of this thesis is the generation of a realistic process mining dataset based on the Purchase-to-Pay (P2P) process. The SAP table structure is taken into account, since SAP is the most popular management software in today's companies. We exploit the simulated dataset to explore modeling techniques and to define the type and quantity of anomalies. The second contribution is an investigation of LSTM neural network architectures that exploit information from both temporal data and static features, applied to the previously generated dataset; the networks are then used to predict the characteristics of future events of running traces. Finally, real-life applications of the results are discussed and future work is proposed
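
    As a sketch of the kind of architecture the second contribution investigates, here is a minimal PyTorch model that runs an LSTM over a trace's activity sequence and fuses the final hidden state with static case-level features before predicting the next activity. The layer sizes, the fusion point, and all dimensions are illustrative assumptions, not the thesis's actual configuration.

        import torch
        import torch.nn as nn

        class NextActivityLSTM(nn.Module):
            """LSTM over the activity sequence of a running trace, combined
            with static case features, predicting the next activity."""

            def __init__(self, n_activities, n_static, emb_dim=32, hidden=64):
                super().__init__()
                self.embed = nn.Embedding(n_activities, emb_dim)
                self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
                self.head = nn.Sequential(
                    nn.Linear(hidden + n_static, hidden),
                    nn.ReLU(),
                    nn.Linear(hidden, n_activities),
                )

            def forward(self, activities, static_feats):
                # activities: (batch, seq_len) int64 activity IDs
                # static_feats: (batch, n_static) float case attributes
                x = self.embed(activities)
                _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden)
                fused = torch.cat([h_n[-1], static_feats], dim=1)
                return self.head(fused)           # logits over next activity

        # Illustrative usage with random data:
        model = NextActivityLSTM(n_activities=20, n_static=5)
        acts = torch.randint(0, 20, (8, 15))      # 8 traces, 15 events each
        static = torch.randn(8, 5)
        logits = model(acts, static)              # shape (8, 20)
        loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 20, (8,)))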

    Development of Process Technology for GaAs E/D MODFET Logic Circuits

    The GaAs MODFET device is one of the prominent candidates for very-high-speed circuit applications. This thesis presents the design and process development of MODFET direct-coupled FET logic (DCFL) inverters and other logic circuits. Working circuits of E/D-type inverters, three-input NAND and NOR logic gates, and ring oscillators are reported

    Doctor of Philosophy

    Recent breakthroughs in silicon photonics technology are enabling the integration of optical devices into silicon-based semiconductor processes. Photonics technology enables high-speed, high-bandwidth, and high-fidelity communications on the chip scale, an important development in an increasingly communications-oriented semiconductor world. Significant developments in silicon photonic manufacturing and integration are also enabling investigations into applications beyond traditional telecom: sensing, filtering, signal processing, quantum technology, and even optical computing. In effect, we are now seeing a convergence of communications and computation, where the traditional roles of optics and microelectronics are becoming blurred. As applications for opto-electronic integrated circuits (OEICs) are developed and manufacturing capabilities expand, design support is necessary to fully exploit the potential of this optics technology. Such design support for moving beyond custom design to automated synthesis and optimization is not well developed. Scalability requires abstractions, which in turn enable and require optimization algorithms and design methodology flows. Design automation represents an opportunity to take OEIC design to a larger scale, facilitating design-space exploration and laying the foundation for current and future optical applications, thus fully realizing the potential of this technology. This dissertation proposes design automation for integrated optic system design. Using a building-block model for optical devices, we provide an EDA-inspired design flow and methodologies for optical design automation. Underlying these flows and methodologies are new supporting techniques in behavioral and physical synthesis, as well as device-resynthesis techniques for thermal-aware system integration. We also provide modeling for optical devices and determine the optimization and constraint parameters that guide the automation techniques. Our techniques and methodologies are then applied to the design and optimization of optical circuits and devices, and experimental results are analyzed to evaluate their efficacy. We conclude with a discussion of the contributions and limitations of these approaches in the context of optical design automation, and describe the tremendous opportunities for future research in design automation for integrated optics

    Neural networks-on-chip for hybrid bio-electronic systems

    By modelling the brain's computation we can further our understanding of its function and develop novel treatments for neurological disorders. The brain is incredibly powerful and energy efficient, but its computation does not fit well with the traditional computer architecture developed over the previous 70 years. There is therefore growing research focus on developing alternative computing technologies to enhance our neural modelling capability, with the expectation that the technology itself will also benefit from increased awareness of neural computational paradigms. This thesis develops a methodology for studying the design of neural computing systems, with an emphasis on systems suitable for biomedical experiments. The methodology allows the design to be optimized for the application: different case studies highlight how to reduce energy consumption, reduce silicon area, or increase network throughput. High-performance processing cores are presented for both Hodgkin-Huxley and Izhikevich neurons, incorporating novel design features. Further, a complete energy/area model for a neural-network-on-chip is derived and used in two exemplar case studies: a cortical neural circuit to benchmark typical system performance, illustrating how a 65,000-neuron network could be processed in real time within a 100 mW power budget; and a scalable high-performance processing platform for a cerebellar neural prosthesis. From these case studies, the contribution of network granularity to optimal neural-network-on-chip performance is explored
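
    For context on the neuron models named above, here is a minimal floating-point reference of the Izhikevich model with the standard regular-spiking parameters from Izhikevich (2003). The on-chip cores would use fixed-point arithmetic and different integration details; this simple forward-Euler version only illustrates the dynamics being computed.

        import numpy as np

        # Izhikevich model:  v' = 0.04 v^2 + 5v + 140 - u + I,  u' = a(bv - u)
        # with reset v <- c, u <- u + d whenever v reaches 30 mV.

        def simulate(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, T_ms=1000, dt=1.0):
            v, u = c, b * c                       # resting initial state
            spike_times, trace = [], []
            for step in range(int(T_ms / dt)):
                v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
                u += dt * a * (b * v - u)
                if v >= 30.0:                     # spike: record and reset
                    spike_times.append(step * dt)
                    v, u = c, u + d
                trace.append(v)
            return np.array(trace), spike_times

        trace, spikes = simulate()
        print(f"{len(spikes)} spikes in 1 s of simulated time")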

    Machine Learning based Restaurant Sales Forecasting

    To support proper employee scheduling for managing crew load, restaurants need accurate sales forecasting. We forecast sales for partitions of the day, breaking each day into three sales periods: 10:00 AM-1:59 PM, 2:00 PM-5:59 PM, and 6:00 PM-10:00 PM. This study focuses on the middle timeslot, with forecasts extending one week ahead. We gathered three years of sales (2016-2019) from a local restaurant to generate a new dataset for researching sales forecasting methods, and we outline the methodology used to go from raw data to a workable dataset. We test many machine learning models on the dataset, including recurrent neural network models, and extend the test domain by considering methods that remove trend and seasonality. The best model for one-day-ahead forecasting is ridge regression with an MAE of 214; the best for one-week forecasting is the temporal fusion transformer with an MAE of 216
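
    A minimal sketch of the ridge baseline for one-day-ahead forecasting, using simple lag features over a synthetic weekly-seasonal series; the features, data, and hyperparameters here are illustrative assumptions, not those used in the study.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(0)
        days = 3 * 365                            # three years of daily sales
        t = np.arange(days)
        sales = 1000 + 200 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 150, days)

        LAGS = [1, 2, 7, 14]                      # yesterday, 2 days, 1-2 weeks back
        X = np.column_stack([sales[max(LAGS) - l : days - l] for l in LAGS])
        y = sales[max(LAGS):]

        split = int(0.8 * len(y))                 # chronological train/test split
        model = Ridge(alpha=1.0).fit(X[:split], y[:split])
        mae = mean_absolute_error(y[split:], model.predict(X[split:]))
        print(f"one-day-ahead MAE: {mae:.0f}")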