    The Unification and Decomposition of Processing Structures Using Lattice Theoretic Methods

    The purpose of this dissertation is to demonstrate that lattice theoretic methods can be used to decompose and unify computational structures over a variety of processing systems. The unification arguments provide a better understanding of the intricacies of processing system decomposition. Because abstract algebraic techniques are used, the decomposition process is systematized, which makes it well suited to the use of computers as tools for decomposition. A general algorithm using the lattice theoretic method is developed to examine the structures, and therefore the decomposition properties, of integer and polynomial rings. Two fundamental representations, the Sino-correspondence and the weighted radix representation, are derived for integer and polynomial structures and are shown to be a natural result of the decomposition process. They are used in developing systematic methods for decomposing discrete Fourier transforms and discrete linear systems. That is, fast Fourier transforms and partial fraction expansions of linear systems follow from the natural representations derived using the lattice theoretic method. The discrete Fourier transform is derived from a lattice theoretic base, demonstrating its independence of the continuous form and of the field over which it is computed. The same properties are demonstrated for error control codes based on polynomials. Partial fraction expansions are shown to be independent of the concept of a derivative for repeated roots and of the field used to implement them.
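
    A minimal sketch may make the two representations concrete. The following Python fragment is illustrative only (the function names and the toy modulus are not from the dissertation): it exhibits the Sino-correspondence as the Chinese-remainder map on Z_N and the weighted radix representation as a mixed-radix digit split, the two index maps that underlie prime-factor and Cooley-Tukey style FFT decompositions respectively.

        from math import gcd

        def sino_map(n, N1, N2):
            # Sino (Chinese-remainder) correspondence: n <-> (n mod N1, n mod N2).
            # A ring isomorphism Z_N ~ Z_N1 x Z_N2 whenever gcd(N1, N2) == 1.
            assert gcd(N1, N2) == 1
            return n % N1, n % N2

        def radix_map(n, N1, N2):
            # Weighted radix representation: n = n1*N2 + n2 with 0 <= n2 < N2.
            # Valid for any factorization N = N1*N2, coprime or not.
            return n // N2, n % N2

        N1, N2 = 3, 5
        N = N1 * N2
        # Both maps are bijections on {0, ..., N-1}; the Sino map additionally
        # respects addition and multiplication mod N, which is what lets a
        # length-N DFT split into independent N1- and N2-point transforms.
        assert len({sino_map(n, N1, N2) for n in range(N)}) == N
        assert len({radix_map(n, N1, N2) for n in range(N)}) == N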

    Logic synthesis and optimisation using Reed-Muller expansions

    This thesis presents techniques and algorithms which may be employed to represent, generate and optimise particular categories of Exclusive-OR Sum-Of-Products (ESOP) forms. The work documented herein concentrates on two types of Reed-Muller (RM) expressions, namely, Fixed Polarity Reed-Muller (FPRM) expansions and KROnecker (KRO) expansions (a category of mixed polarity RM expansions). Initially, the theory of switching functions is comprehensively reviewed. This includes descriptions of various types of RM expansion and ESOP forms. The structures of Binary Decision Diagrams (BDDs) and Reed-Muller Universal Logic Module (RM-ULM) networks are also examined. Heuristic algorithms for deriving optimal (sub-optimal) FPRM expansions of Boolean functions are described. These algorithms are improved forms of an existing tabular technique [1]. Results are presented which illustrate the performance of these new minimisation methods when evaluated against selected existing techniques. An algorithm which may be employed to generate FPRM expansions from incompletely specified Boolean functions is also described. This technique introduces a means of determining the optimum allocation of the Boolean 'don't care' terms so as to derive equivalent minimal FPRM expansions. The tabular technique [1] is extended to allow the representation of KRO expansions. This new method may be employed to generate KRO expansions from either an initial incompletely specified Boolean function or a KRO expansion of different polarity. Additionally, it may be necessary to derive KRO expressions from Boolean Sum-Of-Products (SOP) forms where the product terms are not minterms. A technique is described which forms KRO expansions from disjoint SOP forms without first expanding the SOP expressions to minterm forms. Reed-Muller Binary Decision Diagrams (RMBDDs) are introduced as a graphical means of representing FPRM expansions. RMBDDs are analogous to the BDDs used to represent Boolean functions. Rules are detailed which allow the efficient representation of the initial FPRM expansions, and an algorithm is presented which may be employed to determine an optimum (sub-optimum) variable ordering for the RMBDDs. The implementation of RMBDDs as RM-ULM networks is also examined. The thesis concludes with a review of the algorithms and techniques developed during this research project. The value of these methods is discussed, and suggestions are made as to how improved results could have been obtained. Additionally, areas for future work are proposed.
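
    As a concrete reference point (a minimal sketch, not the tabular technique [1] used in the thesis), the following Python fragment computes an FPRM expansion of a completely specified Boolean function from its truth table using the standard GF(2) butterfly transform; the polarity mask and the example function are illustrative.

        def fprm(f, n, polarity):
            # f: truth table of an n-variable function, indexed by the integer
            # whose bit i is input variable x_i. Polarity bit i == 1 means
            # variable i appears complemented in every product term.
            # Step 1: re-index so each variable is effectively positive
            # (complementing an input permutes the table by XOR with the mask).
            g = [f[i ^ polarity] for i in range(1 << n)]
            # Step 2: in-place Reed-Muller (binary Moebius) transform over GF(2);
            # g[m] becomes the coefficient of the product term selected by mask m.
            for i in range(n):
                for m in range(1 << n):
                    if m & (1 << i):
                        g[m] ^= g[m ^ (1 << i)]
            return g

        # Example: f = x0 XOR (x1 AND x2), zero (all-positive) polarity.
        f = [(i & 1) ^ (((i >> 1) & 1) & ((i >> 2) & 1)) for i in range(8)]
        coeffs = fprm(f, 3, polarity=0)
        print([m for m, c in enumerate(coeffs) if c])  # prints [1, 6]: terms x0 and x1*x2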

    Immittance- versus scattering-domain fast algorithms for non-Hermitian Toeplitz and quasi-Toeplitz matrices

    The classical algorithms of Schur and Levinson are efficient procedures to solve sets of Hermitian Toeplitz linear equations or to invert the corresponding coefficient matrices. They propagate pairs of variables that may describe incident and scattered waves in an associated cascade-of-layered-media model, and thus they can be viewed as scattering-domain algorithms. It was recently found that a certain transformation of these variables, followed by a change from two-term to three-term recursions, reduces the computational complexity of the above-mentioned algorithms roughly by a factor of two. The ratio of such a pair of transformed variables can be interpreted in the layered-media model as an impedance or admittance; hence the name immittance-domain variables. This paper extends previous immittance Schur and Levinson algorithms from Hermitian to non-Hermitian matrices. It considers both Toeplitz and quasi-Toeplitz matrices (matrices with a certain “hidden” Toeplitz structure) and compares two- and three-term recursion algorithms in the two domains. The comparison reveals that for non-Hermitian matrices the algorithms are equally efficient in both domains. This observation adds new insight into the source and value of algorithms in the immittance domain. The immittance algorithms, like the scattering algorithms, exploit the (quasi-)Toeplitz structure to produce fast algorithms. However, unlike the scattering algorithms, they can also exploit symmetry of the underlying matrix when such extra structure is present, yielding algorithms with improved efficiency.
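
    For orientation, here is a minimal Python sketch of the classical scattering-domain ingredient, the two-term Levinson(-Durbin) recursion, restricted to the real symmetric Toeplitz special case; the immittance-domain, three-term, and non-Hermitian variants developed in the paper are not reproduced here, and the sample data are made up.

        def levinson_durbin(r):
            """Given r[0..n], solve the Yule-Walker system in O(n^2) operations
            (versus O(n^3) for Gaussian elimination):
            sum_j a[j] * r[|i-j|] = 0 for i = 1..n, with a[0] = 1."""
            n = len(r) - 1
            a, E = [1.0], r[0]
            for m in range(1, n + 1):
                # Reflection coefficient: the scattered/incident wave ratio at layer m.
                k = -(r[m] + sum(a[j] * r[m - j] for j in range(1, m))) / E
                a = [a[j] + k * a[m - j] if 0 < j < m else a[j] for j in range(m)] + [k]
                E *= 1.0 - k * k  # prediction-error energy shrinks each step
            return a, E

        r = [4.0, 2.0, 1.0, 0.5]
        a, E = levinson_durbin(r)
        # Check: the Toeplitz matrix built from r maps a to (E, 0, ..., 0).
        residual = [sum(a[j] * r[abs(i - j)] for j in range(len(a))) for i in range(len(a))]
        print(residual)  # ~ [E, 0, 0, 0]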

    Sequence-Based Specification of Embedded Systems

    Software has become integral to the control mechanism of modern devices. From transportation and medicine to entertainment and recreation, embedded systems integrate fundamentally with time and the physical world to impact our lives; therefore, product dependability and safety are of paramount importance. Model-based design has evolved as an effective way to prototype systems and to analyze system function through simulation. This process mitigates the problems and risks associated with embedding software into consumer and industrial products. However, the most difficult tasks remain: getting the requirements right and reducing them to precise specifications for development, and providing compelling evidence that the product is fit for its intended use. Sequence-based specification of discrete systems, using well-chosen abstractions, has proven very effective in exposing deficiencies in requirements and then producing precise specifications for good requirements. The process ensures completeness, consistency, and correctness by tracing each specification decision precisely to the requirements. Likewise, Markov chain-based testing has proven effective in providing evidence that systems are fit for field use. Model-based designs integrate discrete and continuous behavior; models have both hybrid and switching properties. In this research, we extend sequence-based specification to explicitly include time, continuous functions, nondeterminism, and internal events for embedded real-time systems. The enumeration is transformed into an enumeration hybrid automaton that acts as the foundation for an executable model-based design, and into an algebraic hybrid I/O automaton with valuable theoretical properties. Enumeration is a step-wise problem-solving technique that complements model-based design by converting ordinary requirements into precise specifications. The goal is a complete, consistent, and traceably correct design with a basis for automated testing.
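
    To make the discrete (untimed) starting point concrete, the following Python sketch shows the flavor of sequence enumeration: stimulus sequences are considered in length-lexicographic order, each is assigned a response, and any sequence equivalent to a previously enumerated one is reduced to it rather than extended. The stimuli, the response rule, and the equivalence check below are illustrative stand-ins, not from the dissertation; in practice each response mapping is a human decision traced to a tagged requirement.

        STIMULI = ["A", "B"]

        def respond(seq):
            # Illustrative specification decision: "on" iff an odd number of
            # "A" stimuli has been seen.
            return "on" if seq.count("A") % 2 else "off"

        def equivalent(seq, done):
            # Two sequences are equivalent if every extension responds alike.
            # For this toy rule, state = parity of "A"s, so compare parities.
            for prior in done:
                if prior.count("A") % 2 == seq.count("A") % 2:
                    return prior
            return None

        # Enumerate length by length, extending only unreduced sequences.
        frontier, done = [()], [()]
        while frontier:
            nxt = []
            for seq in frontier:
                for s in STIMULI:
                    ext = seq + (s,)
                    prior = equivalent(ext, done)
                    note = f"reduces to {prior}" if prior is not None else "extensible"
                    print(ext, "->", respond(ext), note)
                    if prior is None:
                        done.append(ext)
                        nxt.append(ext)
            frontier = nxt  # terminates once every new sequence reduces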

    Feynman Categories

    In this paper we give a new foundational, categorical formulation for operations, relations, and the objects parameterizing them. This generalizes and unifies the theory of operads and all their cousins, including but not limited to PROPs, modular operads, twisted (modular) operads, properads, hyperoperads, their colored versions, as well as algebras over operads and an abundance of other related structures, such as crossed simplicial groups, the augmented simplicial category, and FI-modules. The usefulness of this approach is that it handles all the classical as well as more esoteric structures under a common framework, so that all these situations can be treated simultaneously. Many of the known constructions simply become Kan extensions. In this common framework, we also derive universal operations, such as those underlying Deligne's conjecture, construct Hopf algebras, and perform resolutions, (co)bar transforms and Feynman transforms, which are related to master equations. For these applications, we construct the relevant model category structures. This produces many new examples.
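
    For readers wanting the reference formula (standard category theory, not specific to this paper): when the relevant colimits exist, the left Kan extension of $F \colon \mathcal{C} \to \mathcal{E}$ along $\iota \colon \mathcal{C} \to \mathcal{D}$ is computed objectwise over comma categories,

        \[
        (\operatorname{Lan}_{\iota} F)(d) \;\cong\; \operatorname*{colim}_{(\iota \downarrow d)} \bigl(F \circ \pi\bigr),
        \qquad \pi \colon (\iota \downarrow d) \to \mathcal{C} \text{ the projection,}
        \]

    which is the sense in which free constructions over a Feynman category can be written down uniformly.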

    A Controlled Study of the Flipped Classroom With Numerical Methods for Engineers

    Recent advances in technology and ideology have unlocked entirely new directions for education research. Mounting pressure from increasing tuition costs and free, online course offerings is opening discussion and catalyzing change in the physical classroom. The flipped classroom is at the center of this discussion. The flipped classroom is a new pedagogical method which employs asynchronous video lectures, practice problems as homework, and active, group-based problem-solving activities in the classroom. It represents a unique combination of learning theories once thought to be incompatible: active, problem-based learning activities founded upon constructivist schema, and instructional lectures derived from direct instruction methods founded upon behaviorist principles. The primary reason for examining this teaching method is that it holds the promise of delivering the best from both worlds. A controlled study of a sophomore-level numerical methods course was conducted using video lectures and model-eliciting activities (MEAs) in one section (treatment) and traditional group lecture-based teaching in the other (comparison). This study compared knowledge-based outcomes on two dimensions: conceptual understanding and conventional problem-solving ability. Homework and unit exams were used to assess conventional problem-solving ability, while quizzes and a conceptual test were used to measure conceptual understanding. There was no difference between sections on conceptual understanding as measured by quiz and concept test scores. The difference between average exam scores was also not significant. However, homework scores were significantly lower by 15.5 percentage points (out of 100), equivalent to an effect size of 0.70. This difference appears to arise because students in the MEA/video lecture section had a higher workload than students in the comparison section and consequently neglected some of the homework, which was not heavily weighted in the final course grade. A comparison of student evaluations across the sections of this course revealed that perceptions were significantly lower for the MEA/video lecture section on 3 items (out of 18). Based on student feedback, it is recommended that future implementations ensure tighter integration between MEAs and other required course assignments. This could involve using a larger number of shorter MEAs and earlier introduction of MEAs to students.
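
    As a sanity check on the two quoted numbers (using the standard Cohen's d convention; the pooled standard deviation below is inferred, not reported in the abstract):

        \[
        d = \frac{\bar{x}_{\text{comparison}} - \bar{x}_{\text{treatment}}}{s_{\text{pooled}}}
        \quad\Longrightarrow\quad
        s_{\text{pooled}} \approx \frac{15.5}{0.70} \approx 22 \text{ percentage points.}
        \]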