14,248 research outputs found
Quantum Mechanics Lecture Notes. Selected Chapters
These are extended lecture notes for the quantum mechanics course I teach in the
Weizmann Institute of Science graduate physics program. They cover the topics
listed below. The first four chapters are posted here; their content is detailed
on the next page. The remaining chapters will be added in the coming months.
1. Motion in External Electromagnetic Field. Gauge Fields in Quantum
Mechanics.
2. Quantum Mechanics of Electromagnetic Field
3. Photon-Matter Interactions
4. Quantization of the Schr\"odinger Field (The Second Quantization)
5. Open Systems. Density Matrix
6. Adiabatic Theory. The Berry Phase. The Born-Oppenheimer Approximation
7. Mean Field Approaches for Many Body Systems -- Fermions and Bosons
Countermeasures for the majority attack in blockchain distributed systems
Blockchain technology is regarded as one of the most important computing paradigms since the Internet, owing to unique characteristics that make it ideal for recording, verifying, and managing information about different kinds of transactions. Despite this, Blockchain faces several security problems, among which the 51% or majority attack is one of the most important. In this attack, one or more miners take control of at least 51% of the hash power or computing capacity of a network, allowing a miner to arbitrarily manipulate and modify the information recorded in the chain. This work focused on designing and implementing strategies for detecting and mitigating majority (51%) attacks in a distributed Blockchain system, based on characterizing the behaviour of miners. To this end, the hash rate/share of Bitcoin and Ethereum miners was analyzed and evaluated, followed by the design and implementation of a consensus protocol to control the miners' computing power. Subsequently, Machine Learning models were explored and evaluated for detecting cryptojacking malware.
Doctoral thesis: Doctor en Ingeniería de Sistemas y Computación
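The detection idea described above, flagging any miner whose share of recently mined blocks approaches a majority, can be sketched as a toy heuristic. The function name, the threshold, and the data below are illustrative assumptions, not the thesis's actual models:

```python
from collections import Counter

def majority_attack_risk(block_miners, threshold=0.51):
    """Return miners whose share of recent blocks meets or exceeds
    the majority-attack threshold (hypothetical heuristic)."""
    counts = Counter(block_miners)
    total = len(block_miners)
    return {miner: n / total for miner, n in counts.items()
            if n / total >= threshold}

# Example: miner "A" mined 6 of the last 10 blocks.
recent = ["A"] * 6 + ["B"] * 2 + ["C"] * 2
print(majority_attack_risk(recent))  # {'A': 0.6}
```

A real deployment would look at hash-rate share over a sliding window rather than raw block counts, since block production is stochastic.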
Floquet codes and phases in twist-defect networks
We introduce a class of models, dubbed paired twist-defect networks, that
generalize the structure of Kitaev's honeycomb model for which there is a
direct equivalence between: i) Floquet codes (FCs), ii) adiabatic loops of
gapped Hamiltonians, and iii) unitary loops or Floquet-enriched topologically
ordered (FET) many-body localized phases. This formalism allows one to apply
well-characterized topological index theorems for FETs to understand the
dynamics of FCs, and to rapidly assess the code properties of many FC models.
As an application, we show that the Honeycomb Floquet code of Haah and Hastings
is governed by an irrational value of the chiral Floquet index, which implies a
topological obstruction to forming a simple, logical boundary with the same
periodicity as the bulk measurement schedule. In addition, we construct
generalizations of the Honeycomb Floquet code exhibiting arbitrary
anyon-automorphism dynamics for general types of Abelian topological order.
Comment: 17+5 pages, 10 figures
Trainable Variational Quantum-Multiblock ADMM Algorithm for Generation Scheduling
The advent of quantum computing can potentially revolutionize how complex
problems are solved. This paper proposes a two-loop quantum-classical solution
algorithm for generation scheduling by infusing quantum computing, machine
learning, and distributed optimization. The aim is to facilitate employing
noisy near-term quantum machines with a limited number of qubits to solve
practical power system optimization problems such as generation scheduling. The
outer loop is a 3-block quantum alternating direction method of multipliers
(QADMM) algorithm that decomposes the generation scheduling problem into three
subproblems, including one quadratically unconstrained binary optimization
(QUBO) and two non-QUBOs. The inner loop is a trainable quantum approximate
optimization algorithm (T-QAOA) for solving QUBO on a quantum computer. The
proposed T-QAOA translates interactions of quantum-classical machines as
sequential information and uses a recurrent neural network to estimate
variational parameters of the quantum circuit with a proper sampling technique.
T-QAOA determines the QUBO solution in a few quantum-learner iterations instead
of hundreds of iterations needed for a quantum-classical solver. The outer
3-block ADMM coordinates QUBO and non-QUBO solutions to obtain the solution to
the original problem. The conditions under which the proposed QADMM is
guaranteed to converge are discussed. Two mathematical and three generation
scheduling cases are studied. Analyses performed on quantum simulators and
classical computers show the effectiveness of the proposed algorithm. The
advantages of T-QAOA are discussed and numerically compared with QAOA which
uses a stochastic gradient descent-based optimizer.
Comment: 11 pages
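As a rough illustration of the outer-loop structure, a 3-block consensus ADMM on a toy quadratic problem (standing in for the QUBO and non-QUBO subproblems; the data, penalty parameter, and iteration count below are made-up assumptions) could look like:

```python
# Toy 3-block consensus ADMM: minimise sum_i (x_i - b_i)^2  s.t.  x_1 = x_2 = x_3 = z.
b = [1.0, 2.0, 6.0]   # each entry plays the role of one subproblem's data
rho = 1.0             # ADMM penalty parameter
x = [0.0] * 3         # per-block (subproblem) variables
u = [0.0] * 3         # scaled dual variables
z = 0.0               # consensus variable

for _ in range(200):
    # Block updates: each "subproblem solver" minimises its own term plus the penalty.
    x = [(2 * b_i + rho * (z - u_i)) / (2 + rho) for b_i, u_i in zip(b, u)]
    # Consensus update: the outer loop coordinates the blocks' solutions.
    z = sum(x_i + u_i for x_i, u_i in zip(x, u)) / 3
    # Dual updates penalise disagreement with the consensus.
    u = [u_i + x_i - z for x_i, u_i in zip(x, u)]

print(round(z, 4))  # converges to mean(b) = 3.0
```

In the paper's setting, one block update would be the T-QAOA call on a quantum machine and the others classical solvers; this sketch only shows how the coordination loop fits together.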
Modelling uncertainties for measurements of the H → γγ Channel with the ATLAS Detector at the LHC
The Higgs boson to diphoton (H → γγ) branching ratio is only 0.227 %, but this
final state has yielded some of the most precise measurements of the particle. As
measurements of the Higgs boson become increasingly precise, greater importance is
placed on the factors that constitute the uncertainty. Reducing the effects of these
uncertainties requires an understanding of their causes. The research presented
in this thesis aims to illuminate how uncertainties on simulation modelling are
determined and proposes novel techniques for deriving them.
The upgrade of the FastCaloSim tool, used for simulating events in the ATLAS
calorimeter at a rate far exceeding that of the nominal detector simulation,
Geant4, is described. The integration of a method that allows the toolbox to
emulate the accordion geometry of the liquid argon calorimeters is detailed. This tool allows
for the production of larger samples while using significantly fewer computing
resources.
A measurement of the total Higgs boson production cross-section multiplied
by the diphoton branching ratio (σ × Bγγ) is presented, where this value was
determined to be (σ × Bγγ)obs = 127 ± 7 (stat.) ± 7 (syst.) fb, within agreement
with the Standard Model prediction. The signal and background shape modelling
is described, and the contribution of the background modelling uncertainty to the
total uncertainty ranges from 2.4% to 18%, depending on the Higgs boson production
mechanism.
A method for estimating the number of events in a Monte Carlo background
sample required to model the shape is detailed. It was found that the nominal γγ
background sample needed to be enlarged by a factor of 3.60 to adequately model
the background at a confidence level of 68%, or by a factor of 7.20 at a
confidence level of 95%. Based on this estimate,
0.5 billion additional simulated events were produced, substantially reducing the
background modelling uncertainty.
A technique is detailed for emulating the effects of Monte Carlo event generator
differences using multivariate reweighting. The technique is used to estimate the
event generator uncertainty on the signal modelling of tHqb events, improving the
reliability of estimating the tHqb production cross-section. Then this multivariate
reweighting technique is used to estimate the generator modelling uncertainties
on background V γγ samples for the first time. The estimated uncertainties were
found to be covered by the currently assumed background modelling uncertainty.
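The reweighting idea, deriving per-event weights so one generator's sample mimics another's distribution, can be illustrated in one dimension with a histogram-ratio sketch. The thesis uses a multivariate technique; the function name, binning, and Gaussian toy samples below are simplifying assumptions:

```python
import numpy as np

def reweight(source, target, bins=20):
    """Per-bin weights that reshape the source sample's distribution
    towards the target's (1-D histogram-ratio sketch)."""
    edges = np.histogram_bin_edges(np.concatenate([source, target]), bins=bins)
    h_s, _ = np.histogram(source, bins=edges, density=True)
    h_t, _ = np.histogram(target, bins=edges, density=True)
    # Weight = target density / source density, guarding empty source bins.
    ratio = np.divide(h_t, h_s, out=np.ones_like(h_t), where=h_s > 0)
    idx = np.clip(np.digitize(source, edges) - 1, 0, bins - 1)
    return ratio[idx]

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, 20000)   # toy "generator A" observable
target = rng.normal(0.5, 1.0, 20000)   # toy "generator B" observable
w = reweight(source, target)
print(round(float(np.average(source, weights=w)), 2))  # close to the target mean 0.5
```

A multivariate version replaces the histogram ratio with a classifier-based density ratio, which is closer to what the thesis applies to tHqb and V γγ samples.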
Moduli Stabilisation and the Statistics of Low-Energy Physics in the String Landscape
In this thesis we present a detailed analysis of the statistical properties of the type IIB flux landscape of string theory. We focus primarily on models constructed via the Large Volume Scenario (LVS) and KKLT and study the distribution of various phenomenologically relevant quantities. First, we compare our considerations with previous results and point out the importance of Kähler moduli stabilisation, which has been neglected in this context so far. We perform different moduli stabilisation procedures and compare the resulting distributions. To this end, we derive expressions for the gravitino mass, various quantities related to axion physics and other phenomenologically interesting quantities in terms of the fundamental flux-dependent quantities and the parameter which specifies the nature of the non-perturbative effects. Exploiting our knowledge of the distribution of these fundamental parameters, we can derive a distribution for all the quantities we are interested in. For models that are stabilised via LVS we find a logarithmic distribution, whereas for KKLT and perturbatively stabilised models we find a power-law distribution. We continue by investigating the statistical significance of a newly found class of KKLT vacua and present a search algorithm for such constructions. We conclude by presenting an application of our findings. Given the mild preference for higher-scale supersymmetry breaking, we present a model of the early universe which allows for additional periods of early matter domination and ultimately leads to rather sharp predictions for the dark matter mass in this model. We find the dark matter mass to be in the very heavy range.
Data-to-text generation with neural planning
In this thesis, we consider the task of data-to-text generation, which takes non-linguistic
structures as input and produces textual output. The inputs can take the form of
database tables, spreadsheets, charts, and so on. The main application of data-to-text
generation is to present information in a textual format which makes it accessible to
a layperson who may otherwise find it problematic to understand numerical figures.
The task can also automate routine document generation jobs, thus improving human
efficiency. We focus on generating long-form text, i.e., documents with multiple paragraphs. Recent approaches to data-to-text generation have adopted the very successful
encoder-decoder architecture or its variants. These models generate fluent (but often
imprecise) text and perform quite poorly at selecting appropriate content and ordering
it coherently. This thesis focuses on overcoming these issues by integrating content
planning with neural models. We hypothesize data-to-text generation will benefit from
explicit planning, which manifests itself in (a) micro planning, (b) latent entity planning, and (c) macro planning. Throughout this thesis, we assume the inputs to our
generator are tables (with records) in the sports domain, and the outputs are summaries
describing what happened in the game (e.g., who won/lost, ..., scored, etc.).
We first describe our work on integrating fine-grained or micro plans with data-to-text generation. As part of this, we generate a micro plan highlighting which records
should be mentioned and in which order, and then generate the document while taking
the micro plan into account.
We then show how data-to-text generation can benefit from higher-level latent entity planning. Here, we make use of entity-specific representations which are dynamically updated. The text is generated conditioned on entity representations and the
records corresponding to the entities by using hierarchical attention at each time step.
We then combine planning with the high level organization of entities, events, and
their interactions. Such coarse-grained macro plans are learnt from data and given
as input to the generator. Finally, we present work on making macro plans latent
while incrementally generating a document paragraph by paragraph. We infer latent
plans sequentially with a structured variational model while interleaving the steps of
planning and generation. Text is generated by conditioning on previous variational
decisions and previously generated text.
Overall, our results show that planning makes data-to-text generation more interpretable, improves the factuality and coherence of the generated documents, and reduces redundancy in the output document.
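The plan-then-generate pipeline, selecting which records to mention and in which order before realising them as text, can be caricatured without any neural machinery. The record fields, salience scores, and template realiser below are invented for illustration; the thesis learns all of these components:

```python
def micro_plan(records, top_k=2):
    """Toy content planner: select the top-k most salient records,
    then order them (here, chronologically) for realisation."""
    chosen = sorted(records, key=lambda r: r["salience"], reverse=True)[:top_k]
    return sorted(chosen, key=lambda r: r["time"])

def realise(plan):
    """Toy surface realiser: one clause per planned record."""
    return " ".join(f"{r['entity']} {r['event']}." for r in plan)

records = [
    {"entity": "Smith", "event": "scored 30 points", "salience": 0.9, "time": 2},
    {"entity": "The Hawks", "event": "won 101-94", "salience": 1.0, "time": 3},
    {"entity": "Jones", "event": "had 4 fouls", "salience": 0.2, "time": 1},
]
print(realise(micro_plan(records)))
# Smith scored 30 points. The Hawks won 101-94.
```

In the thesis the selection and ordering decisions are learnt (and, in the final chapters, latent), but the interface between planner and generator has this shape.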
Cost-effective non-destructive testing of biomedical components fabricated using additive manufacturing
Biocompatible titanium-alloys can be used to fabricate patient-specific medical components using additive manufacturing (AM). These novel components have the potential to improve clinical outcomes in various medical scenarios. However, AM introduces stability and repeatability concerns, which are potential roadblocks for its widespread use in the medical sector. Micro-CT imaging for non-destructive testing (NDT) is an effective solution for post-manufacturing quality control of these components. Unfortunately, current micro-CT NDT scanners require expensive infrastructure and hardware, which translates into prohibitively expensive routine NDT. Furthermore, the limited dynamic-range of these scanners can cause severe image artifacts that may compromise the diagnostic value of the non-destructive test. Finally, the cone-beam geometry of these scanners makes them susceptible to the adverse effects of scattered radiation, which is another source of artifacts in micro-CT imaging.
In this work, we describe the design, fabrication, and implementation of a dedicated, cost-effective micro-CT scanner for NDT of AM-fabricated biomedical components. Our scanner reduces the limitations of costly image-based NDT by optimizing the scanner's geometry and the image acquisition hardware (i.e., X-ray source and detector). Additionally, we describe two novel techniques to reduce image artifacts caused by photon-starvation and scatter radiation in cone-beam micro-CT imaging.
Our cost-effective scanner was designed to match the image requirements of medium-size titanium-alloy medical components. We optimized the image acquisition hardware by using an 80 kVp low-cost portable X-ray unit and developing a low-cost lens-coupled X-ray detector. Image artifacts caused by photon-starvation were reduced by implementing dual-exposure high-dynamic-range radiography. For scatter mitigation, we describe the design, manufacturing, and testing of a large-area, highly-focused, two-dimensional, anti-scatter grid.
Our results demonstrate that cost-effective NDT using low-cost equipment is feasible for medium-sized, titanium-alloy, AM-fabricated medical components. Our proposed high-dynamic-range strategy improved by 37% the penetration capabilities of an 80 kVp micro-CT imaging system for a total X-ray path length of 19.8 mm. Finally, our novel anti-scatter grid provided a 65% improvement in CT number accuracy and a 48% improvement in low-contrast visualization. Our proposed cost-effective scanner and artifact reduction strategies have the potential to improve patient care by accelerating the widespread use of patient-specific, bio-compatible, AM-manufactured, medical components.
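The dual-exposure high-dynamic-range idea, trusting the long exposure except where the detector saturates and substituting the scaled short exposure there, can be sketched on toy pixel values. The saturation level, exposure ratio, and pixel data are illustrative assumptions, not the calibration used in this work:

```python
import numpy as np

def hdr_combine(short_exp, long_exp, exposure_ratio, sat_level=0.95):
    """Dual-exposure HDR sketch: keep the long exposure where it is
    valid; where it saturates, use the short exposure scaled by the
    exposure ratio, extending the effective dynamic range."""
    saturated = long_exp >= sat_level
    out = long_exp.copy()
    out[saturated] = short_exp[saturated] * exposure_ratio
    return out

long_exp = np.array([0.2, 0.8, 1.0, 1.0])    # last two pixels saturated
short_exp = np.array([0.05, 0.2, 0.3, 0.4])  # 4x shorter exposure
print(hdr_combine(short_exp, long_exp, exposure_ratio=4.0))
# [0.2 0.8 1.2 1.6]
```

The recovered values above 1.0 illustrate why the technique improves penetration through thick, photon-starved paths.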