236 research outputs found

    Routing Permutations in Partitioned Optical Passive Star Networks

    It is shown that a POPS network with g groups and d processors per group can efficiently route any permutation among the n = dg processors. The number of slots used is optimal in the worst case, and is at most twice the optimum for all permutations p such that p(i) ≠ i for all i. Comment: 8 pages, 3 figures
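    As a minimal illustration of the condition in the bound above, the sketch below (in Python; not from the paper, and the network size and permutation are invented) checks whether a permutation satisfies p(i) ≠ i for all i, i.e. is a derangement:

        # Minimal sketch: check the derangement condition p(i) != i for all i,
        # under which the stated twice-optimal slot bound applies.
        def is_derangement(p):
            """p is a list where p[i] is the destination of processor i."""
            return all(dest != i for i, dest in enumerate(p))

        # Toy POPS instance with g = 2 groups and d = 4 processors per group,
        # so n = d * g = 8 processors (values invented for illustration).
        perm = [3, 2, 5, 0, 7, 6, 1, 4]
        print(is_derangement(perm))  # True: the twice-optimal bound applies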

    Online Permutation Routing in Partitioned Optical Passive Star Networks

    This paper establishes the state of the art in both deterministic and randomized online permutation routing in the POPS network. Indeed, we show that any permutation can be routed online on a POPS network either with O((d/g) log g) deterministic slots or, with high probability, with 5c⌈d/g⌉ + o(d/g) + O(log log g) randomized slots, where the constant c = exp(1 + 1/e) ≈ 3.927. When d = Θ(g), which we claim to be the "interesting" case, the randomized algorithm is exponentially faster than any other algorithm in the literature, both deterministic and randomized. This is true in practice as well. Indeed, experiments show that it outperforms its rivals even starting from a network as small as a POPS(2, 2), and the gap grows exponentially with the size of the network. We can also show that, under proper hypotheses, no deterministic algorithm can asymptotically match its performance.
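    To give a concrete feel for the quoted bound, the following Python sketch (parameter values are arbitrary and chosen purely for illustration) evaluates the constant c = exp(1 + 1/e) and the leading term 5c⌈d/g⌉ of the randomized bound:

        # Illustration only: the constant c and the leading term of the
        # high-probability bound 5c*ceil(d/g) + o(d/g) + O(log log g).
        import math

        c = math.exp(1 + math.exp(-1))     # c = exp(1 + 1/e)
        print(round(c, 3))                 # ~3.927

        def leading_term(d, g):
            """Leading term 5c*ceil(d/g) of the randomized slot bound."""
            return 5 * c * math.ceil(d / g)

        # The "interesting" case d = Theta(g), e.g. d = g = 16 (arbitrary values).
        print(leading_term(16, 16))        # ~19.6 slots from the leading term alone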

    Definition, analysis and development of an optical data distribution network for integrated avionics and control systems

    The potential and functional requirements of fiber optic bus designs for next generation aircraft are assessed. State-of-the-art component evaluations and projections were used in the system study. Complex networks were decomposed into dedicated structures, star buses, and serial buses for detailed analysis. Comparisons of dedicated links, star buses, and serial buses, with and without full duplex operation and with consideration of terminal-to-terminal communication requirements, were obtained. This baseline was then used to consider potential extensions of busing methods to include wavelength multiplexing and optical switches. Example buses were illustrated for various areas of the aircraft as potential starting points for more detailed analysis as the platform becomes definitized.

    CMOS optical centroid processor for an integrated Shack-Hartmann wavefront sensor

    A Shack-Hartmann wavefront sensor is used to detect the distortion of light in an optical wavefront. It does this by sampling the wavefront with an array of lenslets and measuring the displacement of focused spots from reference positions. These displacements are linearly related to the local wavefront tilts, from which the entire wavefront can be reconstructed. In most Shack-Hartmann wavefront sensors, a CCD is used to sample the entire wavefront, typically at a rate of 25 to 60 Hz, and a whole frame of light spots is read out before their positions are processed. This results in a data bottleneck. In this design, parallel processing is achieved by incorporating local centroid processing for each focused spot, thereby requiring only reduced-bandwidth data to be transferred off-chip at a high rate. Incorporating centroid processing at the sensor level requires high levels of circuit integration not possible with a CCD technology. Instead, a standard 0.7 µm CMOS technology was used, but photodetector structures for this technology are not well characterised. As such, a characterisation of several common photodiode structures was carried out, which showed good responsivity of the order of 0.3 A/W. Prior to fabrication on-chip, a hardware emulation system using a reprogrammable FPGA was built which implemented the centroiding algorithm successfully. Subsequently, the design was implemented as a single-chip CMOS solution. The fabricated optical centroid processor successfully computed and transmitted the centroids at a rate of more than 2.4 kHz, which when integrated as an array of tilt sensors will allow a data rate that is independent of the number of tilt sensors employed. Besides removing the data bottleneck present in current systems, the design also offers advantages in terms of power consumption, system size and cost. The design was also shown to be extremely scalable to a complete low-cost real-time adaptive optics system.
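    As a rough sketch of the centroiding step described above (this is not the fabricated circuit; the pixel pitch and lenslet focal length are hypothetical values chosen only for illustration), the following Python snippet computes an intensity-weighted spot centroid and converts its displacement from a reference position into a local wavefront tilt:

        # Rough sketch of centroiding and tilt conversion (not the chip's circuitry).
        import numpy as np

        def spot_centroid(window):
            """Intensity-weighted (x, y) centroid of a 2-D window, in pixel units."""
            window = np.asarray(window, dtype=float)
            ys, xs = np.indices(window.shape)
            total = window.sum()
            return (xs * window).sum() / total, (ys * window).sum() / total

        def local_tilt(centroid, reference, pixel_pitch_m, focal_length_m):
            """Approximate wavefront tilt (rad) from spot displacement: tilt ~ displacement / f."""
            dx = (centroid[0] - reference[0]) * pixel_pitch_m
            dy = (centroid[1] - reference[1]) * pixel_pitch_m
            return dx / focal_length_m, dy / focal_length_m

        spot = [[0, 1, 0],
                [1, 8, 2],
                [0, 1, 0]]                              # toy 3x3 spot image
        c = spot_centroid(spot)
        print(local_tilt(c, (1.0, 1.0), 10e-6, 5e-3))   # hypothetical pitch and focal length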

    Intelligent Sensor Networks

    In the last decade, wireless or wired sensor networks have attracted much attention. However, most designs target general sensor network issues, including the protocol stack (routing, MAC, etc.) and security. This book focuses on the close integration of sensing, networking, and smart signal processing via machine learning. Based on their world-class research, the authors present the fundamentals of intelligent sensor networks. They cover sensing and sampling, distributed signal processing, and intelligent signal learning. In addition, they present cutting-edge research results from leading experts.

    Towards practical linear optical quantum computing

    Quantum computing promises a new paradigm of computation where information is processed in a way that has no classical analogue. There are a number of physical platforms conducive to quantum computation, each with its own advantages and challenges. Single photons, manipulated using integrated linear optics, constitute a promising platform for universal quantum computation. Their low decoherence rates make them particularly favourable; however, the inability to perform deterministic two-qubit gates and the issue of photon loss are challenges that need to be overcome. In this thesis we explore the construction of a linear optical quantum computer based on the cluster state model. We identify the different necessary stages: state preparation, cluster state construction, and implementation of quantum error correcting codes, and address the challenges that arise in each of these stages. For state preparation, we propose a series of linear optical circuits for the generation of small entangled states, assessing their performance under different scenarios. For cluster state construction, we introduce a ballistic scheme which not only consumes an order of magnitude fewer resources than previously proposed schemes, but also benefits from a natural loss tolerance. Based on this scheme, we propose a full architectural blueprint with fixed physical depth. We investigate the resource efficiency of this architecture and propose a new multiplexing scheme which optimises the use of resources. Finally, we study the integration of quantum error-correcting codes into the proposed linear optical scheme and suggest three ways in which it can be made fault-tolerant.
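    As a generic illustration of why multiplexing non-deterministic gates helps (this is not the thesis's multiplexing scheme, and the gate success probability below is an arbitrary example), the following Python sketch computes how many parallel attempts are needed before the overall success probability 1 - (1 - p)^N reaches a target:

        # Generic multiplexing illustration: run N parallel attempts of a gate
        # with success probability p and switch out one success.
        import math

        def attempts_needed(p, target):
            """Smallest N such that 1 - (1 - p)**N >= target."""
            return math.ceil(math.log(1 - target) / math.log(1 - p))

        # p = 0.25 is an invented example of a non-deterministic gate.
        for target in (0.9, 0.99, 0.999):
            print(target, attempts_needed(0.25, target))   # 9, 17 and 25 attempts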

    The Fifth NASA Symposium on VLSI Design

    The fifth annual NASA Symposium on VLSI Design had 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other Featured Presentations. The symposium provides insights into developments in VLSI and digital systems that can be used to increase data systems performance. The presentations share insights into next-generation advances that will serve as a basis for future VLSI design.

    Optical flow switched networks

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (p. 253-279). In the four decades since optical fiber was introduced as a communications medium, optical networking has revolutionized the telecommunications landscape. It has enabled the Internet as we know it today, and is central to the realization of Network-Centric Warfare in the defense world. Sustained exponential growth in communications bandwidth demand, however, requires that the nexus of innovation in optical networking continue, in order to ensure cost-effective communications in the future. In this thesis, we present Optical Flow Switching (OFS) as a key enabler of scalable future optical networks. The general idea behind OFS (agile, end-to-end, all-optical connections) is decades old, if not as old as the field of optical networking itself. However, owing to the absence of an application for it, OFS remained an underdeveloped idea: it was unclear how it could be implemented, how well it would perform, and how much it would cost relative to other architectures. The contributions of this thesis provide partial answers to these three broad questions. With respect to implementation, we address the physical layer design of OFS in the metro area and access, and develop sensible scheduling algorithms for OFS communication. Our performance study comprises a comparative capacity analysis for the wide area, as well as an analytical approximation of the throughput-delay tradeoff offered by OFS for inter-MAN communication. Lastly, with regard to the economics of OFS, we employ an approximate capital expenditure model, which enables a throughput-cost comparison of OFS with other prominent candidate architectures. Our conclusions point to the fact that OFS offers a significant advantage over other architectures in economic scalability. In particular, for sufficiently heavy traffic, OFS handles large transactions at far lower cost than other optical network architectures. In light of the increasing importance of large transactions in both commercial and defense networks, we conclude that OFS may be crucial to the future viability of optical networking. by Guy E. Weichenberg. Ph.D.
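    As a toy illustration of scheduling end-to-end all-optical flows onto wavelength channels (a generic first-fit heuristic, not the scheduling algorithms developed in the thesis; all request values are invented), consider the following Python sketch:

        # Toy first-fit scheduler: each flow request (start, duration) holds one
        # wavelength channel for its entire duration, as an OFS flow would.
        def schedule_flows(requests, num_wavelengths):
            """Map request index -> wavelength for flows that fit; others are blocked."""
            busy = [[] for _ in range(num_wavelengths)]   # (start, end) intervals per channel
            assignment = {}
            for idx, (start, duration) in enumerate(requests):
                end = start + duration
                for w, intervals in enumerate(busy):
                    if all(end <= s or start >= e for s, e in intervals):
                        intervals.append((start, end))
                        assignment[idx] = w
                        break
            return assignment

        # Three requests competing for two wavelengths (values invented).
        print(schedule_flows([(0, 4), (1, 2), (2, 3)], 2))  # {0: 0, 1: 1}; request 2 is blocked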

    Computational Methods in Science and Engineering: Proceedings of the Workshop SimLabs@KIT, November 29-30, 2010, Karlsruhe, Germany

    In this proceedings volume we provide a compilation of article contributions equally covering applications from different research fields, ranging from capacity up to capability computing. Besides classical computing aspects such as parallelization, the focus of these proceedings is on multi-scale approaches and methods for tackling algorithm and data complexity. Practical aspects regarding the use of the HPC infrastructure and the tools and software available at the SCC are also presented.