
    Effects of electrical storage with batteries in the formation of the final price of the energy rate in the Colombian electricity market

    Electric energy storage systems can address several needs of electric power systems, such as frequency control, the storage of energy during off-peak hours for injection into the grid when demand requires it, and the integration of non-conventional energy sources. The cost of renewable electricity has fallen significantly in recent years thanks to technological improvements, and the Colombian electricity market is not immune to the challenges that implementing such storage systems entails. This research reviews international electricity markets where battery storage systems have been implemented, together with the regulatory frameworks that govern those markets, in order to understand the possible effects of implementing electric energy storage systems on the formation of the energy tariff in the Colombian electricity market. This final master's project carries out a qualitative analysis based on a literature review, drawing on scientific articles and specialized publications to provide context and justification for the research. (Master's thesis, Magíster en Ingeniería - Sistemas Energéticos, Área Curricular de Ingeniería de Sistemas e Informática)
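The abstract's core mechanism, buying energy off-peak and injecting it at peak, can be sketched as a simple arbitrage margin. This is a minimal illustration with entirely hypothetical figures (not Colombian market data); the function name and all parameters are assumptions for the example:

```python
# Minimal sketch of battery arbitrage, one driver of a storage system's
# effect on the energy tariff. All figures below are hypothetical.

def arbitrage_margin(energy_mwh, off_peak_price, peak_price, round_trip_eff):
    """Net margin per charge/discharge cycle: buy off-peak, sell at peak."""
    cost = energy_mwh * off_peak_price                  # energy bought off-peak
    revenue = energy_mwh * round_trip_eff * peak_price  # energy sold at peak, after losses
    return revenue - cost

# Illustrative prices per MWh and an assumed 85% round-trip efficiency:
margin = arbitrage_margin(energy_mwh=10, off_peak_price=30.0,
                          peak_price=80.0, round_trip_eff=0.85)
print(margin)  # 10*0.85*80 - 10*30 = 380.0
```

A positive margin is what makes off-peak storage attractive; how that margin ultimately shows up in the regulated tariff is precisely what the thesis investigates.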

    Impact tests and residual thermal properties of silica/phenolic composites

    Silica/phenolic composites are used as thermal shields for atmospheric re-entry in the space and strategic sectors. In this application the material is subjected to thermomechanical loads that lead to delamination. To reduce the consequences of these delaminations, it was proposed to cut the silica fabrics and reassemble them in the manner of a patchwork. "Classical" materials (stacks of 2D 0°/90° fabrics) and "patchwork" materials were impacted on an instrumented drop tower (Instron). These tests were carried out at different energies to create several types of damage. The damage was analyzed by X-ray tomography to reveal, in particular, the morphology of the delaminations. In parallel, initial and residual transverse thermal diffusivity measurements were performed on these samples at different damage levels. These measurements made it possible to establish correlations between the loss of thermal diffusivity and the damage to the material. The comparison between the results on "classical" and "patchwork" samples showed differences in damage modes and the benefit of the patchwork solution.

    Three dimensional multi-pass repair weld simulations

    Full 3-dimensional (3-D) simulation of multi-pass weld repairs is now feasible and practical given the development of improved analysis tools and significantly greater computer power. This paper presents residual stress results from 3-D finite element (FE) analyses simulating a long (arc length of 62°) and a short (arc length of 20°) repair to a girth weld in a 19.6 mm thick, 432 mm outer diameter cylindrical test component. Sensitivity studies are used to illustrate the importance of weld bead inter-pass temperature assumptions and to show where model symmetry can be used to reduce the analysis size. The predicted residual stress results are compared with measured axial, hoop and radial through-wall profiles in the heat-affected zone of the test component repairs. A good overall agreement is achieved between neutron diffraction and deep hole drilling measurements and the prediction at the mid-length position of the short repair. These results demonstrate that a coarse 3-D FE model, using a ‘block-dumped’ weld bead deposition approach (rather than progressively depositing weld metal), can accurately capture the important components of a short repair weld residual stress field. However, comparisons of measured with predicted residual stress at mid-length and stop-end positions in the long repair are less satisfactory, implying some shortcomings in the FE modelling approach that warrant further investigation.

    In-Datacenter Performance Analysis of a Tensor Processing Unit

    Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC---called a Tensor Processing Unit (TPU)---deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, ...) that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X - 30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X - 80X higher. Moreover, using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU. Comment: 17 pages, 11 figures, 8 tables. To appear at the 44th International Symposium on Computer Architecture (ISCA), Toronto, Canada, June 24-28, 2017.
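The quoted 92 TOPS peak follows directly from the MAC count, provided one assumes the TPU's clock rate. A back-of-envelope check, assuming a 700 MHz clock (stated in the full paper, not in this abstract) and counting each MAC as two operations (multiply plus accumulate):

```python
# Back-of-envelope peak throughput for the TPU's matrix multiply unit.
# Assumption: 700 MHz clock, taken from the full paper, not this abstract.
macs = 256 * 256   # 65,536 8-bit MAC units arranged as a 256x256 systolic array
ops_per_mac = 2    # one multiply + one accumulate per cycle
clock_hz = 700e6
peak_tops = macs * ops_per_mac * clock_hz / 1e12
print(round(peak_tops, 1))  # 91.8, consistent with the quoted ~92 TOPS peak
```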

    Overcoming leakage in scalable quantum error correction

    Leakage of quantum information out of computational states into higher energy states represents a major challenge in the pursuit of quantum error correction (QEC). In a QEC circuit, leakage builds over time and spreads through multi-qubit interactions. This leads to correlated errors that degrade the exponential suppression of logical error with scale, challenging the feasibility of QEC as a path towards fault-tolerant quantum computation. Here, we demonstrate the execution of a distance-3 surface code and distance-21 bit-flip code on a Sycamore quantum processor where leakage is removed from all qubits in each cycle. This shortens the lifetime of leakage and curtails its ability to spread and induce correlated errors. We report a ten-fold reduction in steady-state leakage population on the data qubits encoding the logical state and an average leakage population of less than 1×10⁻³ throughout the entire device. The leakage removal process itself efficiently returns leakage population back to the computational basis, and adding it to a code circuit prevents leakage from inducing correlated error across cycles, restoring a fundamental assumption of QEC. With this demonstration that leakage can be contained, we resolve a key challenge for practical QEC at scale. Comment: Main text: 7 pages, 5 figures.
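Why per-cycle removal caps the steady-state leakage population can be seen in a toy rate model: if each cycle injects leakage with probability g and removes a fraction r of the leaked population, the population settles at the fixed point g/(g+r). The rates below are hypothetical illustrations, not numbers from the paper:

```python
# Toy rate model of leakage with per-cycle removal. Each cycle:
#   p' = p + (1 - p) * g - p * r
# i.e. unleaked population leaks at rate g, leaked population is removed
# at rate r. The fixed point is p* = g / (g + r).

def steady_state_leakage(g, r, cycles=10000):
    """Iterate the per-cycle update until the population settles."""
    p = 0.0
    for _ in range(cycles):
        p = p + (1.0 - p) * g - p * r
    return p

# Hypothetical rates: 0.1% leakage induced per cycle, 50% removed per cycle.
p_inf = steady_state_leakage(g=1e-3, r=0.5)
print(p_inf)  # converges to g/(g+r) = 1e-3/0.501 ≈ 2.0e-3
```

Without removal (r set only by slow natural decay), the same model gives a much larger steady state, which is the qualitative effect behind the ten-fold reduction reported in the abstract.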

    Suppressing quantum errors by scaling a surface code logical qubit

    Practical quantum computing will require error rates that are well below what is achievable with physical qubits. Quantum error correction offers a path to algorithmically-relevant error rates by encoding logical qubits within many physical qubits, where increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low in order for logical performance to improve with increasing code size. Here, we report the measurement of logical qubit performance scaling across multiple code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, both in terms of logical error probability over 25 cycles and logical error per cycle (2.914% ± 0.016% compared to 3.028% ± 0.023%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7×10⁻⁶ logical error per round floor set by a single high-energy event (1.6×10⁻⁷ when excluding this event). We are able to accurately model our experiment, and from this model we can extract error budgets that highlight the biggest challenges for future systems. These results mark the first experimental demonstration where quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation. Comment: Main text: 6 pages, 4 figures. v2: Update author list, references, Fig. S12, Table I.
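The "modest" improvement can be quantified directly from the per-cycle numbers quoted in the abstract; the ratio of the distance-3 to distance-5 logical error per cycle plays the role of a suppression factor (a value above 1 means the larger code wins). This is just arithmetic on the abstract's figures, not an analysis from the paper:

```python
# Per-cycle logical error rates quoted in the abstract (central values only).
eps_d3 = 0.03028  # ensemble of distance-3 logical qubits
eps_d5 = 0.02914  # distance-5 surface code logical qubit

# Suppression factor: > 1 means increasing the code distance helped.
ratio = eps_d3 / eps_d5
print(round(ratio, 3))  # ≈ 1.039, a modest but genuine improvement
```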

    Measurement-induced entanglement and teleportation on a noisy quantum processor

    Measurement has a special role in quantum theory: by collapsing the wavefunction it can enable phenomena such as teleportation and thereby alter the "arrow of time" that constrains unitary evolution. When integrated in many-body dynamics, measurements can lead to emergent patterns of quantum information in space-time that go beyond established paradigms for characterizing phases, either in or out of equilibrium. On present-day NISQ processors, the experimental realization of this physics is challenging due to noise, hardware limitations, and the stochastic nature of quantum measurement. Here we address each of these experimental challenges and investigate measurement-induced quantum information phases on up to 70 superconducting qubits. By leveraging the interchangeability of space and time, we use a duality mapping to avoid mid-circuit measurement and access different manifestations of the underlying phases -- from entanglement scaling to measurement-induced teleportation -- in a unified way. We obtain finite-size signatures of a phase transition with a decoding protocol that correlates the experimental measurement record with classical simulation data. The phases display sharply different sensitivity to noise, which we exploit to turn an inherent hardware limitation into a useful diagnostic. Our work demonstrates an approach to realize measurement-induced physics at scales that are at the limits of current NISQ processors.

    Non-Abelian braiding of graph vertices in a superconducting processor

    Indistinguishability of particles is a fundamental principle of quantum mechanics. For all elementary and quasiparticles observed to date - including fermions, bosons, and Abelian anyons - this principle guarantees that the braiding of identical particles leaves the system unchanged. However, in two spatial dimensions, an intriguing possibility exists: braiding of non-Abelian anyons causes rotations in a space of topologically degenerate wavefunctions. Hence, it can change the observables of the system without violating the principle of indistinguishability. Despite the well-developed mathematical description of non-Abelian anyons and numerous theoretical proposals, the experimental observation of their exchange statistics has remained elusive for decades. Controllable many-body quantum states generated on quantum processors offer another path for exploring these fundamental phenomena. While efforts on conventional solid-state platforms typically involve Hamiltonian dynamics of quasiparticles, superconducting quantum processors allow for directly manipulating the many-body wavefunction via unitary gates. Building on predictions that stabilizer codes can host projective non-Abelian Ising anyons, we implement a generalized stabilizer code and unitary protocol to create and braid them. This allows us to experimentally verify the fusion rules of the anyons and braid them to realize their statistics. We then study the prospect of employing the anyons for quantum computation and utilize braiding to create an entangled state of anyons encoding three logical qubits. Our work provides new insights about non-Abelian braiding and - through the future inclusion of error correction to achieve topological protection - could open a path toward fault-tolerant quantum computing.

    Probing the viability of oxo-coupling pathways in iridium-catalyzed oxygen evolution

    A series of Cp*Ir(III) dimers have been synthesized to elucidate the mechanistic viability of radical oxo-coupling pathways in iridium-catalyzed O₂ evolution. The oxidative stability of the precursors toward nanoparticle formation and their oxygen evolution activity have been investigated and compared to suitable monomeric analogues. We found that precursors bearing monodentate NHC ligands degraded to form nanoparticles (NPs), and accordingly their O₂ evolution rates were not significantly influenced by their nuclearity or distance between the two metals in the dimeric precursors. A doubly chelating bis-pyridine–pyrazolide ligand provided an oxidation-resistant ligand framework that allowed a more meaningful comparison of catalytic performance of dimers with their corresponding monomers. With sodium periodate (NaIO₄) as the oxidant, the dimers provided significantly lower O₂ evolution rates per [Ir] than the monomer, suggesting a negative interaction instead of cooperativity in the catalytic cycle. Electrochemical analysis of the dimers further substantiates the notion that no radical oxyl-coupling pathways are accessible. We thus conclude that the alternative path, nucleophilic attack of water on high-valent Ir-oxo species, may be the preferred mechanistic pathway of water oxidation with these catalysts, and bimolecular oxo-coupling is not a valid mechanistic alternative as in the related ruthenium chemistry, at least in the present system.