Development of a Low-cost Hybrid Music Synthesizer
Until recently, affordable music equipment was long seen as "budget" gear providing a poor user experience. Inexpensive equipment was plagued with audible noise, signal-integrity issues, and convoluted user interfaces. Companies like Teenage Engineering have proven that this does not have to be the case, introducing their "Pocket Operator" series for $89 in 2019. Thanks to the modern availability of low-cost, high-quality, commercial off-the-shelf (COTS) analog and digital components, as well as creative engineering, the quality of inexpensive audio equipment has increased significantly. Despite these industry advances, the market is relatively small and shows great potential for growth.
This senior project capitalizes on this market opportunity, providing a low-cost analog/digital hybrid synthesizer architecture without the aforementioned caveats of poor signal integrity, user interface, and sound quality. The synthesizer provides a low-latency, simple-to-use visual interface that gives the user intuitive, easy-to-learn access to the synthesizer's parameters. The values of these parameters can also be saved to and loaded from non-volatile memory. Power is provided locally by a battery, so the synthesizer's power draw must be low enough to ensure a significant run time. Physically, the synthesizer provides industry-standard audio connectivity so it can be interfaced with the end user's existing equipment.
ATLAS SCT Endcap Module Production
The challenges for the tracking detector systems at the LHC are unprecedented in terms of the number of channels, the required read-out speed, and the expected radiation levels. The ATLAS Semiconductor Tracker (SCT) end-caps have a total of about 3 million electronics channels, each reading out every 25 ns into its own on-chip 3.3 µs buffer. The highest anticipated dose after 10 years of operation is 1.4×10¹⁴ cm⁻² in units of 1 MeV neutron equivalent (assuming the damage factors scale with the non-ionising energy loss). The forward tracker has 1976 double-sided modules, mostly of area ≈ 70 cm², each having 2×768 strips read out by 6 ASICs per side. The requirement to achieve an average perpendicular radiation length of 1.5% X₀, while coping with up to 7 W dissipation per module (after irradiation), leads to stringent constraints on the thermal design. The additional requirement of 1500 e⁻ equivalent noise charge (ENC), rising to only 1800 e⁻ ENC after irradiation, places stringent design constraints on both the high-density Cu/polyimide flex read-out circuit and the ABCD3TA read-out ASICs. Finally, the accuracy of module assembly must not compromise the 16 µm (r-φ) resolution perpendicular to the strip directions or the 580 µm radial resolution coming from the 40 mrad front-back stereo angle. 2196 modules were built to the tight tolerances and specifications required for the SCT. This was 220 more than the 1976 required and represents a yield of 93%. The component flow was at times tight, but the module production rate of 40 to 50 per week was maintained despite this. The distributed production was not found to be a major logistical problem, and it allowed additional flexibility to take advantage of where the effort was available, including any spare capacity, for building the end-cap modules. The collaboration that produced the ATLAS SCT end-cap modules kept in close contact at all times so that the effects of shortages or stoppages at different sites could be rapidly resolved.
An Odyssey from Classical Communication to Fault-Tolerant Quantum Computing
This thesis is primarily concerned with protecting information: not in the sense of the privacy protection often discussed in the media, but in the sense of robustness against data corruption. Indeed, when we use a cell phone to send a text message, several factors, such as atmospheric particles and interference with other signals, can alter the original message. If we do nothing to protect the signal, the content of the message is unlikely to arrive unchanged. This is the problem that motivated the first research project of this thesis.
Under the supervision of Professor David Poulin, I studied a generalization of polar codes, a technology at the heart of the fifth-generation (5G) telecommunication protocol. To do so, I used tensor networks, mathematical tools originally developed to study quantum materials. The advantage of this approach is that it provides an intuitive graphical representation of the problem, which greatly simplifies the development of algorithms. Following this, I studied the impact of two key parameters on the performance of convolutional polar codes. Taking the running time of the protocols into account, I identified the parameter values that best protect information at a reasonable cost. This result improves our understanding of how to increase the performance of polar codes, which has great application potential given their importance.
This idea of using graphical mathematical tools to study information-protection problems is the common thread running through the rest of the thesis. From this point on, however, the errors no longer affect classical communication systems but quantum computing systems. As I present in this thesis, quantum systems are inherently far more sensitive to errors. In this context, I completed an internship with the Microsoft Research team, mainly under the supervision of Michael Beverland, during which I designed circuits for measuring a quantum system in order to identify the faults that may affect it. With the rest of the team, we proved mathematically that the circuits I developed are optimal. I then proposed an architecture for implementing these circuits more realistically in the laboratory, and the numerical simulations I performed showed promising results for this approach. This result was received with great interest by the scientific community and was published in the prestigious journal \textit{Physical Review Letters}. To complement this work, I collaborated with the Microsoft team to show analytically that current quantum computer architectures relying on local connections between qubits will not suffice to build large-scale error-protected machines. All of these results are inspired by methods from graph theory, in particular methods for representing graphs in a two-dimensional space. Using such methods to design quantum circuits and architectures is also a novel approach.
I completed my thesis under the supervision of Professor Stefanos Kourtis. With him, I created a method, grounded in graph theory and techniques from theoretical computer science, that automatically designs new protocols for correcting errors in a quantum system. The method I devised relies on solving a constraint satisfaction problem. Problems of this type are generally very hard to solve. However, they possess a critical parameter: as this parameter is varied, the system passes from a phase in which instances are easily solved to a phase in which it is easy to show that no solution exists, and the hard instances are concentrated around this transition. Through numerical experiments, I showed that the proposed method behaves in a similar way. This demonstrates that there is a regime in which designing quantum error-correction protocols is much easier than the community believed. Moreover, as far as I know, the article that resulted from this work is the first to highlight this link between the construction of error-correction protocols, constraint satisfaction problems, and phase transitions.
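The easy-hard-easy phase transition mentioned in this abstract can be reproduced with a few lines of code. Below is a minimal sketch using random 3-SAT as a stand-in constraint satisfaction problem; the variable counts, clause ratios, and brute-force solver are illustrative choices, not taken from the thesis:

```python
# Empirical look at the easy-hard-easy structure of random 3-SAT:
# below the critical clause/variable ratio almost all formulas are
# satisfiable; far above it almost none are.
import itertools
import random

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT formula; a literal is (variable index, polarity)."""
    return [
        [(v, rng.random() < 0.5) for v in rng.sample(range(n_vars), 3)]
        for _ in range(n_clauses)
    ]

def satisfiable(formula, n_vars):
    """Brute-force satisfiability check (fine for small n_vars)."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[v] == pol for v, pol in clause) for clause in formula):
            return True
    return False

def sat_fraction(n_vars, ratio, trials=20, seed=0):
    """Fraction of random formulas at a given clause/variable ratio that are SAT."""
    rng = random.Random(seed)
    n_clauses = round(ratio * n_vars)
    hits = sum(satisfiable(random_3sat(n_vars, n_clauses, rng), n_vars)
               for _ in range(trials))
    return hits / trials

for ratio in (2.0, 4.27, 8.0):  # 4.27 is the conjectured 3-SAT threshold
    print(f"m/n = {ratio:4.2f}: P(SAT) ~ {sat_fraction(10, ratio):.2f}")
```

The hard instances, for which a solver must do real work, concentrate near the middle ratio, which is the phenomenon exploited in the thesis.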
Design of an Efficient Wall Adapter
This report presents a design for an efficient AC wall adapter that uses 85% less power than conventional adapters when idle, for an additional cost of only $1.30. The team logically derived the final polling design from three initially proposed solutions. This project addresses the inefficiencies of modern AC adapters, whose growing use has become an increasing detriment to both the economy and the environment.
Coupling, Conservation, and Performance in Numerical Simulations
This thesis considers three aspects of numerical simulations: coupling, conservation, and performance. We conduct a project addressing one challenge from each of these aspects. We propose a novel penalty force to enforce contacts with accurate Coulomb friction. The force is compatible with fully-implicit time integration and the use of optimization-based integration. In addition to processing collisions between deformable objects, the force can be used to couple rigid bodies to deformable objects or the material point method. The force naturally leads to stable stacking without drift over time, even when solvers are not run to convergence. The force leads to an asymmetrical system, and we provide a practical solution for handling this. Next we present a new technique for transferring momentum and velocity between particles and MAC grids based on the Affine-Particle-In-Cell (APIC) framework previously developed for co-located grids. We extend the original APIC paper and show that the proposed transfers preserve linear and angular momentum and also satisfy all of the original APIC properties. Early indications in the original APIC paper suggested that APIC might be suitable for simulating high-Reynolds-number fluids due to favorable retention of vortices, but these properties were not studied further. We use two-dimensional Fourier analysis to investigate dissipation in the limit Δt = 0. We investigate dissipation and vortex retention numerically to quantify the effectiveness of APIC compared with other transfer algorithms. Finally we present an efficient solver for problems typically seen in microfluidic applications. Microfluidic "lab on a chip" devices are small devices that operate on small length scales on small volumes of fluid. Designs for microfluidic chips are generally composed of standardized and often repeated components connected by long, thin, straight fluid channels. We propose a novel discretization algorithm for simulating the Stokes equations on geometry with these features, which produces sparse linear systems with many repeated matrix blocks. The discretization is formally third-order accurate for velocity and second-order accurate for pressure in the norm. We also propose a novel linear-system solver based on cyclic reduction, reordered sparse Gaussian elimination, and operation caching that is designed to efficiently solve systems with repeated matrix blocks.
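The momentum-conservation property of the APIC-style transfers described in this abstract can be illustrated with a toy example. The following is a minimal 1D sketch with linear hat weights on a co-located grid; the thesis targets MAC grids, and all particle data and grid parameters here are illustrative assumptions:

```python
# Minimal 1D sketch of an APIC-style particle-to-grid momentum transfer,
# checking that the transfer conserves total linear momentum.
import numpy as np

def p2g_apic(x_p, v_p, c_p, m_p, x_g, dx):
    """Transfer particle momentum to grid nodes with linear hat weights.
    c_p is each particle's velocity derivative (APIC's affine term in 1D)."""
    mom_g = np.zeros_like(x_g)
    mass_g = np.zeros_like(x_g)
    for x, v, c, m in zip(x_p, v_p, c_p, m_p):
        for i, xi in enumerate(x_g):
            w = max(0.0, 1.0 - abs(x - xi) / dx)    # linear hat weight
            mom_g[i] += w * m * (v + c * (xi - x))  # APIC transfer
            mass_g[i] += w * m
    return mom_g, mass_g

rng = np.random.default_rng(0)
dx = 1.0
x_g = np.arange(0.0, 10.0, dx)     # grid nodes
x_p = rng.uniform(1.0, 8.0, 50)    # particles strictly inside the grid
v_p = rng.standard_normal(50)      # particle velocities
c_p = rng.standard_normal(50)      # affine velocity derivatives
m_p = rng.uniform(0.5, 1.5, 50)    # particle masses

mom_g, mass_g = p2g_apic(x_p, v_p, c_p, m_p, x_g, dx)
# Total grid momentum matches total particle momentum: the affine term
# contributes nothing because linear weights reproduce x exactly.
print(np.isclose(mom_g.sum(), (m_p * v_p).sum()))  # True
```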
Graph-based representations and coupled verification of VLSI schematics and layouts
Includes bibliographical references (p. 199-202). Work supported by the Air Force Office of Scientific Research (AFOSR 86-0164) and by IBM and Analog Devices. Cyrus S. Bamji
Architectural Exploration and Design Methodologies of Photonic Interconnection Networks
Photonic technology is becoming an increasingly attractive solution to the problems facing today's electronic chip-scale interconnection networks. Recent progress in silicon photonics research has enabled the demonstration of all the necessary optical building blocks for creating extremely high-bandwidth density and energy-efficient links for on- and off-chip communications. From the feasibility and architecture perspective however, photonics represents a dramatic paradigm shift from traditional electronic network designs due to fundamental differences in how electronics and photonics function and behave. As a result of these differences, new modeling and analysis methods must be employed in order to properly realize a functional photonic chip-scale interconnect design. In this work, we present a methodology for characterizing and modeling fundamental photonic building blocks which can subsequently be combined to form full photonic network architectures. We also describe a set of tools which can be utilized to assess the physical-layer and system-level performance properties of a photonic network. The models and tools are integrated in a novel open-source design and simulation environment called PhoenixSim. Next, we leverage PhoenixSim for the study of chip-scale photonic networks. We examine several photonic networks through the synergistic study of both physical-layer metrics and system-level metrics. This holistic analysis method enables us to provide deeper insight into architecture scalability since it considers insertion loss, crosstalk, and power dissipation. In addition to these novel physical-layer metrics, traditional system-level metrics of bandwidth and latency are also obtained. Lastly, we propose a novel routing architecture known as wavelength-selective spatial routing. This routing architecture is analogous to electronic virtual channels since it enables the transmission of multiple logical optical channels through a single physical plane (i.e. 
the waveguides). The available wavelength channels are partitioned into separate groups, and each group is routed independently in the network. Each partition is spectrally multiplexed, as opposed to temporally multiplexed in the electronic case. The wavelength-selective spatial routing technique benefits network designers by providing lower contention and increased path diversity.
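The physical-layer analysis described in this abstract can be pictured as a link-budget calculation: sum the insertion losses of every component a signal traverses and compare against the available optical power. The component names and loss values below are illustrative assumptions, not figures from PhoenixSim or the paper:

```python
# Toy physical-layer link-budget check for an on-chip photonic path.
# All per-component insertion losses are assumed, illustrative values.
LOSS_DB = {
    "coupler": 1.0,        # chip edge coupler, per traversal
    "waveguide_cm": 1.5,   # propagation loss per cm of waveguide
    "crossing": 0.15,      # waveguide crossing
    "ring_drop": 0.5,      # switching through a ring resonator
    "ring_through": 0.05,  # passing a ring off-resonance
}

def path_loss_db(path):
    """Total insertion loss of a path given as (component, count) pairs."""
    return sum(LOSS_DB[comp] * n for comp, n in path)

def link_margin_db(laser_dbm, sensitivity_dbm, path):
    """Optical power margin remaining after traversing the path."""
    return laser_dbm - path_loss_db(path) - sensitivity_dbm

path = [("coupler", 2), ("waveguide_cm", 3), ("crossing", 8),
        ("ring_drop", 2), ("ring_through", 20)]
print(f"path loss: {path_loss_db(path):.2f} dB")   # 9.70 dB
print(f"margin:    {link_margin_db(10.0, -20.0, path):.2f} dB")  # 20.30 dB
```

A negative margin on the worst-case path is the kind of scalability limit the insertion-loss analysis in the text is designed to expose.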
Tailoring three-dimensional topological codes for biased noise
Tailored topological stabilizer codes in two dimensions have been shown to
exhibit high storage threshold error rates and improved subthreshold
performance under biased Pauli noise. Three-dimensional (3D) topological codes
can allow for several advantages including a transversal implementation of
non-Clifford logical gates, single-shot decoding strategies, parallelized
decoding in the case of fracton codes as well as construction of fractal
lattice codes. Motivated by this, we tailor 3D topological codes for enhanced
storage performance under biased Pauli noise. We present Clifford deformations
of various 3D topological codes, such that they exhibit a threshold error rate
of under infinitely biased Pauli noise. Our examples include the 3D
surface code on the cubic lattice, the 3D surface code on a checkerboard
lattice that lends itself to a subsystem code with a single-shot decoder, the
3D color code, as well as fracton models such as the X-cube model, the
Sierpinski model and the Haah code. We use the belief propagation with ordered
statistics decoder (BP-OSD) to study threshold error rates at finite bias. We
also present a rotated layout for the 3D surface code, which uses roughly half
the number of physical qubits for the same code distance under appropriate
boundary conditions. Imposing coprime periodic dimensions on this rotated
layout leads to logical operators of weight at infinite bias and a
corresponding subthreshold scaling of the logical failure rate,
where is the number of physical qubits in the code. Even though this
scaling is unstable due to the existence of logical representations with
low-rate Pauli errors, the number of such representations scales only
polynomially for the Clifford-deformed code, leading to an enhanced effective
distance.
Comment: 51 pages, 34 figures
Synthesis of Translinear Analog Signal Processing Systems
Even in the predominantly digital world of today, analog circuits maintain a significant and necessary role in the way electronic signals are generated and processed. A straightforward method for synthesizing analog circuits would greatly improve the way that analog circuits are currently designed. In this dissertation, I build upon a synthesis methodology for translinear circuits, originally introduced by Bradley Minch, that uses multiple-input translinear elements (MITEs) as its fundamental building block. By introducing a graphical representation of the way MITEs are connected, the methodology lets the designer get a feel for how the equations relate to the physical circuit structure and enables a visual method for reducing the number of transistors in the final circuit. Having refined some of the synthesis steps, I illustrate the methodology with many examples of static and dynamic MITE networks. For static MITE networks, I present a squaring-reciprocal circuit and two versions of a vector-magnitude circuit. A first-order log-domain filter and an RMS-to-DC converter are synthesized as examples of first-order systems, one linear and one nonlinear. Higher-order systems are illustrated with the synthesis of a second-order log-domain filter and a quadrature oscillator. The resulting circuits from several of these examples are combined to form a phase-locked loop (PLL). I present simulated and experimental results from many of these examples. Additionally, I present information related to the process of programming the floating-gate charge of the MITEs through the use of Fowler-Nordheim tunneling and hot-electron injection. I also include code for a Perl program that determines the optimum connections to minimize the total number of MITEs for a given circuit. NSF CAREER award CCR-998462
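The translinear principle behind MITE networks can be illustrated numerically: with ideal exponential devices, junction voltages summed around a loop force a product-of-currents identity. The sketch below derives the squaring-reciprocal function mentioned in the abstract from such a loop; the device constants and the specific loop arrangement are illustrative assumptions, not circuits from the dissertation:

```python
# Numeric sketch of the translinear principle: for an ideal exponential
# device, I = I_S * exp(V / U_T), so voltages adding around a loop turn
# into currents multiplying. Device constants below are assumed values.
import math

I_S = 1e-15   # saturation current (A), assumed
U_T = 0.025   # thermal voltage (V) at room temperature

def v_of_i(i):
    """Junction voltage needed to carry current i (ideal exponential law)."""
    return U_T * math.log(i / I_S)

def i_of_v(v):
    """Current carried at junction voltage v."""
    return I_S * math.exp(v / U_T)

def squaring_reciprocal(i_x, i_y):
    """Loop constraint V(Ix) + V(Ix) = V(Iy) + V(Iout) gives Iout = Ix^2/Iy."""
    v_out = v_of_i(i_x) + v_of_i(i_x) - v_of_i(i_y)
    return i_of_v(v_out)

i_x, i_y = 2e-6, 5e-7
print(squaring_reciprocal(i_x, i_y))  # ~ i_x**2 / i_y = 8e-6 A
```

The same loop bookkeeping, organized graphically, is what the dissertation's synthesis procedure automates for larger static and dynamic MITE networks.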