
    Development of a Model and Imbalance Detection System for the Cal Poly Wind Turbine

    This thesis develops a model of the Cal Poly Wind Turbine that is used to determine if there is an imbalance in the turbine rotor. A theoretical model is derived to estimate the expected vibrations when there is an imbalance in the rotor. Vibration and acceleration data are collected from the turbine tower during operation to confirm the model is useful and accurate for determining imbalances in the turbine. Digital signal processing techniques for analyzing the vibration data are explored and tested with simulation data. These include frequency shifts, lock-in amplifiers, phase-locked loops, discrete Fourier transforms, and decimation filters. The processed data are fed into an algorithm that determines if there is an imbalance. The detection algorithm consists of a machine learning classification model that is trained on experimental data to increase the success rate of the imbalance detection. Various models are explored, including the K-Nearest Neighbors algorithm, logistic regression, and neural networks. These models have trade-offs between mathematical complexity, required computing power, scalability, and accuracy. With proper implementations of these detection models, the imbalance detection accuracy was measured to be about 90%.
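    As a rough, illustrative sketch of the kind of pipeline the abstract describes (not the thesis's actual code), the example below extracts the vibration amplitude near an assumed rotor frequency with a DFT and feeds it to a K-Nearest Neighbors classifier; the sample rate, rotor frequency, feature choice, and toy training data are all assumptions.

```python
# Hypothetical sketch of one detection path: estimate the vibration amplitude
# near the rotor frequency with a DFT, then classify balanced vs. imbalanced
# with K-Nearest Neighbors. Sample rate, rotor frequency, and training data
# are illustrative assumptions, not values from the thesis.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 1000.0      # sample rate in Hz (assumed)
ROTOR_HZ = 1.2   # nominal rotor rotation frequency in Hz (assumed)

def rotor_band_amplitude(accel, fs=FS, f0=ROTOR_HZ, bw=0.2):
    """Peak windowed-DFT amplitude in a narrow band around the rotor frequency."""
    window = np.hanning(len(accel))
    spectrum = np.abs(np.fft.rfft(accel * window)) * 2.0 / window.sum()
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    band = (freqs > f0 - bw) & (freqs < f0 + bw)
    return spectrum[band].max()

# Toy training set: one feature (rotor-band amplitude); labels 0 = balanced, 1 = imbalanced.
X_train = np.array([[0.10], [0.12], [0.90], [1.10]])
y_train = np.array([0, 0, 1, 1])
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

# Classify a synthetic 10-second vibration record with a strong rotor-frequency component.
t = np.arange(0, 10, 1.0 / FS)
record = 0.8 * np.sin(2 * np.pi * ROTOR_HZ * t) + 0.05 * np.random.randn(len(t))
print(clf.predict([[rotor_band_amplitude(record)]]))  # [1] indicates an imbalance
```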

    Portable high-performance programs

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (p. 159-169). By Matteo Frigo. Ph.D.

    Indexed dependence metadata and its applications in software performance optimisation

    To achieve continued performance improvements, modern microprocessor design is tending to concentrate an increasing proportion of hardware on computation units with less automatic management of data movement and extraction of parallelism. As a result, architectures increasingly include multiple computation cores and complicated, software-managed memory hierarchies. Compilers have difficulty characterizing the behaviour of a kernel in a general enough manner to enable automatic generation of efficient code in any but the most straightforward of cases. We propose the concept of indexed dependence metadata to improve application development and mapping onto such architectures. The metadata represent both the iteration space of a kernel and the mapping of that iteration space from a given index to the set of data elements that iteration might use: thus the dependence metadata is indexed by the kernel’s iteration space. This explicit mapping allows the compiler or runtime to optimise the program more efficiently, and improves the program structure for the developer. We argue that this form of explicit interface specification reduces the need for premature, architecture-specific optimisation. It improves program portability, supports inter-component optimisation and enables generation of efficient data movement code. We offer the following contributions: an introduction to the concept of indexed dependence metadata as a generalisation of stream programming, a demonstration of its advantages in a component programming system, the decoupled access/execute model for C++ programs, and a discussion of how indexed dependence metadata might be used to improve the programming model for GPU-based designs. Our experimental results with prototype implementations show that indexed dependence metadata supports automatic synthesis of double-buffered data movement for the Cell processor and enables aggressive loop fusion optimisations in image processing, linear algebra and multigrid application case studies.
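    A minimal conceptual sketch of the idea, in Python rather than the thesis's C++ setting: a kernel is declared together with its iteration space and an access function that maps each iteration index to the data elements it may touch, which a runtime could inspect to plan data movement before execution. The names and interface shape are illustrative assumptions, not the thesis's actual system.

```python
# Conceptual sketch of indexed dependence metadata: a kernel is described by
# its iteration space plus an "access" function mapping each index to the data
# elements that iteration may read. A runtime can walk the metadata to plan
# data movement (e.g. staging or double-buffering) before executing the kernel.
# Names and structure are illustrative, not the thesis's actual C++ interface.

def blur3_access(i):
    """Indexed dependence metadata: iteration i reads src[i-1..i+1]."""
    return [i - 1, i, i + 1]

def blur3_execute(i, src, dst):
    """The compute part, free of data-movement concerns."""
    dst[i] = (src[i - 1] + src[i] + src[i + 1]) / 3.0

def run_decoupled(iteration_space, access, execute, src):
    dst = [0.0] * len(src)
    for i in iteration_space:
        # In a real system the runtime would use access(i) to generate DMA or
        # prefetch code; here we only check that every declared access is valid.
        assert all(0 <= j < len(src) for j in access(i))
        execute(i, src, dst)
    return dst

data = [float(x) for x in range(10)]
print(run_decoupled(range(1, 9), blur3_access, blur3_execute, data))
```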

    The Deep Space Network, volume 39

    The functions, facilities, and capabilities of the Deep Space Network and its support of the Pioneer, Helios, and Viking missions are described. Progress in tracking and data acquisition research and technology, network engineering and modifications, as well as hardware and software implementation and operations are reported.

    Algorithms and Circuits for Analog-Digital Hybrid Multibeam Arrays

    Fifth generation (5G) and beyond wireless communication systems will rely heavily on larger antenna arrays combined with beamforming to mitigate the high free-space path-loss that prevails in millimeter-wave (mmW) and above frequencies. Sharp beams that can support wide bandwidths are desired both at the transmitter and the receiver to leverage the glut of bandwidth available at these frequency bands. Further, multiple simultaneous sharp beams are imperative for such systems to exploit mmW/sub-THz wireless channels using multiple reflected paths simultaneously. Therefore, multibeam antenna arrays that can support wider bandwidths are a key enabler for 5G and beyond systems. In general, N-beam systems using N-element antenna arrays will involve circuit complexities of the order of N². This dissertation investigates new analog, digital and hybrid low-complexity multibeam beamforming algorithms and circuits for reducing the associated high size, weight, and power (SWaP) complexities in larger multibeam arrays. The research efforts on the digital beamforming aspect propose the use of a new class of discrete Fourier transform (DFT) approximations for multibeam generation to eliminate the need for digital multipliers in the beamforming circuitry. For this, 8-, 16- and 32-beam multiplierless multibeam algorithms have been proposed for uniform linear array applications. A 2.4 GHz 16-element array receiver setup and a 5.8 GHz 32-element array receiver system which use field programmable gate arrays (FPGAs) as digital backend have been built for real-time experimental verification of the digital multiplierless algorithms. The multiplierless algorithms have been experimentally verified by digitally measuring beams. It has been shown that the measured beams from the multiplierless algorithms are in good agreement with the exact counterpart algorithms. Analog realizations of the proposed approximate DFT transforms have also been investigated, leading to low-complexity, high-bandwidth circuits in CMOS. Further, a novel approach for reducing the circuit complexity of analog true-time delay (TTD) N-beam beamforming networks using N-element arrays has been proposed for wideband squint-free operation. A sparse factorization of the N-beam delay Vandermonde beamforming matrix is used to reduce the total amount of TTD elements that are needed for obtaining N number of beams in a wideband array. The method has been verified using measured responses of CMOS all-pass filters (APFs). The wideband squint-free multibeam algorithm is also used to propose a new low-complexity hybrid beamforming architecture targeting future 5G mmW systems. Apart from that, the dissertation also explores multibeam beamforming architectures for uniform circular arrays (UCAs). An algorithm having N log N circuit complexity for simultaneous generation of N beams in an N-element UCA is explored and verified.
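    One simple way to make a DFT beamforming matrix multiplierless, shown below only to illustrate the general idea (it is not the dissertation's specific approximate-DFT construction), is to round each twiddle factor to {0, ±1, ±j, ±1±j}, so every beam output is formed from additions and subtractions of antenna samples.

```python
# Illustration of a multiplierless multibeam DFT idea: round each 8-point DFT
# twiddle factor to {0, ±1, ±j, ±1±j}, so each beam output needs only
# additions/subtractions of antenna samples in fixed-point hardware.
# This rounding rule is a generic example, not the dissertation's construction.
import numpy as np

N = 8
n = np.arange(N)
exact_dft = np.exp(-2j * np.pi * np.outer(n, n) / N)                    # exact beamforming matrix
approx_dft = np.round(exact_dft.real) + 1j * np.round(exact_dft.imag)   # multiplierless approximation

def steering_vector(theta_deg):
    """Plane wave arriving at a half-wavelength-spaced uniform linear array."""
    return np.exp(1j * np.pi * n * np.sin(np.deg2rad(theta_deg)))

x = steering_vector(30.0)              # one snapshot from a source at 30 degrees
beams_exact = np.abs(exact_dft @ x)
beams_approx = np.abs(approx_dft @ x)

print("strongest beam (exact): ", np.argmax(beams_exact))
print("strongest beam (approx):", np.argmax(beams_approx))
```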

    Entrepreneurial Discovery and Information Complexity in Knowledge-Intensive Industries

    Why are some firms better able than others to exploit new opportunities? I posit that differences in the type and level of complexity of the information obtained through the entrepreneurial discovery process may be a meaningful indicator of the likelihood that a firm is able to exploit a new opportunity. Specifically, I investigate knowledge reproduction processes for product replication (internal copying) and imitation (external copying) as a means of exploiting opportunities and building competitive advantage. Integrating concepts from information theory and the knowledge-based view of the firm, I introduce a generalized model and quantitative methods for estimating the inherent complexity of any unit of knowledge, such as a strategy, technology, product, or service, as long as the unit is represented in algorithm form. Modeling organizations as information processing systems, I develop measures of the information complexity of an algorithm representing a unit of knowledge in terms of the minimum amount of data (algorithmic complexity) and the minimum number of instructions (computational complexity) required to fully describe and execute the algorithm. I apply this methodology to construct and analyze a unique historical dataset of 91 firms (diversifying and de novo entrants) and 853 new product introductions (1974-2009) in a knowledge-intensive industry: digital signal processing. I find that: (1) information complexity is negatively and significantly related to product replication and imitation; (2) replicators have the greatest advantage over imitators at moderate levels of information complexity; (3) intellectual property regimes strengthening the patentability of algorithms significantly increase product replication, without significantly decreasing imitation; (4) outbound licensing of patented technologies decreases product replication and increases imitation; (5) products introduced by de novo entrants are less likely to be replicated and more likely to be imitated than products introduced by diversifying entrants; and (6) diversifying entrants have the greatest advantage over de novo entrants at high and low levels of information complexity; neither type of entrant has a significant advantage at moderate levels of complexity. These empirical findings support and extend predictions from earlier simulation studies. The model is applicable to other aspects of organizational strategy and has important implications for researchers, managers, and policymakers.
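    True algorithmic (Kolmogorov) complexity is uncomputable, so practical work relies on proxies. The sketch below is only an assumption-laden illustration, not the dissertation's estimator: it uses compressed description length as a stand-in for the minimum data needed to describe a unit of knowledge, and a bytecode instruction count as a stand-in for the instructions needed to execute it.

```python
# Rough, illustrative proxies (not the dissertation's measures): compressed
# description length approximates "minimum data to describe the algorithm",
# and a bytecode instruction count approximates "instructions to execute it".
import dis
import inspect
import zlib

def fir_filter(x, h):
    """A small 'unit of knowledge' represented in algorithm form: a direct-form FIR filter."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(h)):
            if n - k >= 0:
                acc += h[k] * x[n - k]
        y.append(acc)
    return y

source = inspect.getsource(fir_filter).encode()
data_proxy = len(zlib.compress(source, 9))                       # "minimum data" proxy (bytes)
instr_proxy = sum(1 for _ in dis.get_instructions(fir_filter))   # "instruction count" proxy
print(f"compressed description: {data_proxy} bytes, bytecode instructions: {instr_proxy}")
```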

    Space Communications: Theory and Applications. Volume 3: Information Processing and Advanced Techniques. A Bibliography, 1958 - 1963

    Annotated bibliography on information processing and advanced communication techniques: theory and applications of space communication.

    Non-Standard Sound Synthesis with Dynamic Models

    Full version unavailable due to 3rd party copyright restrictions. This Thesis pursues three main objectives: (i) to provide the concept of a new generalized non-standard synthesis model that offers a framework for incorporating other non-standard synthesis approaches; (ii) to explore dynamic sound modeling through the application of new non-standard synthesis techniques and procedures; and (iii) to experiment with dynamic sound synthesis for the creation of novel sound objects. In order to achieve these objectives, this Thesis introduces a new paradigm for non-standard synthesis that is based on the algorithmic assemblage of minute wave segments to form sound waveforms. This paradigm is called Extended Waveform Segment Synthesis (EWSS) and incorporates a hierarchy of algorithmic models for the generation of microsound structures. The concepts of EWSS are illustrated with the development and presentation of a novel non-standard synthesis system, the Dynamic Waveform Segment Synthesis (DWSS). DWSS features and combines a variety of algorithmic models for direct synthesis generation: list generation and permutation, tendency masks, trigonometric functions, stochastic functions, chaotic functions and grammars. The core mechanism of DWSS is based on an extended application of Cellular Automata. The potential of the synthetic capabilities of DWSS is explored in a series of case studies in which a number of sound objects were generated, revealing (i) the capability of the system to generate sound morphologies belonging to other non-standard synthesis approaches and (ii) its capability to generate novel sound objects with dynamic morphologies. The introduction of EWSS and DWSS is preceded by an extensive and critical overview of the concepts of microsound synthesis, algorithmic composition, the two cultures of computer music, the heretical approach in composition, non-standard synthesis and sonic emergence, along with a thorough examination of algorithmic models and their application in sound synthesis and electroacoustic composition. This Thesis also proposes (i) a new definition for “algorithmic composition”, (ii) the term “totalistic algorithmic composition”, and (iii) four discrete aspects of non-standard synthesis.
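    As a hedged illustration of assembling a sound from minute wave segments under algorithmic control, the sketch below drives short ramp segments with an elementary cellular automaton and concatenates them into a waveform; the CA rule, segment length, and amplitude mapping are arbitrary choices for this example, not DWSS's actual models.

```python
# Illustrative sketch of waveform-segment synthesis: a sound is assembled by
# concatenating many minute segments whose shapes are chosen by an elementary
# cellular automaton. Rule, segment length, and mapping are assumptions made
# for this example, not the DWSS system's actual algorithms.
import numpy as np

RULE = 110      # elementary CA rule (assumed)
SEG_LEN = 32    # samples per minute wave segment
CELLS = 64      # CA width = segments contributed per generation

def ca_step(row, rule=RULE):
    """Advance an elementary cellular automaton by one generation (wrapping edges)."""
    out = np.zeros_like(row)
    for i in range(len(row)):
        neighborhood = (row[(i - 1) % len(row)] << 2) | (row[i] << 1) | row[(i + 1) % len(row)]
        out[i] = (rule >> int(neighborhood)) & 1
    return out

def render(generations=200):
    row = np.zeros(CELLS, dtype=int)
    row[CELLS // 2] = 1                   # single live cell as the seed
    ramp = np.linspace(-1.0, 1.0, SEG_LEN)
    segments = []
    for _ in range(generations):
        nxt = ca_step(row)
        for cell, future in zip(row, nxt):
            # Each cell contributes one minute segment: a rising or falling ramp,
            # reversed when the cell is about to change state.
            seg = ramp if cell else -ramp
            segments.append(seg[::-1] if future != cell else seg)
        row = nxt
    return np.concatenate(segments)

waveform = render()
print(waveform.shape)  # one long non-standard waveform (200 * 64 * 32 samples)
```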