217 research outputs found

    Testing the Quantumness of Atom Trajectories

    Get PDF
    This thesis reports on a novel concept of state-dependent transport, which achieves unprecedented control over the position of individual atoms in optical lattices. Utilizing this control, I demonstrate an experimental violation of the Leggett-Garg inequality, which rigorously excludes (i.e., falsifies) any explanation of quantum transport based on classical, well-defined trajectories. Furthermore, I demonstrate the generation of arbitrary low-entropy states of neutral atoms following a bottom-up approach, by rearranging a dilute thermal ensemble into a predefined, ordered distribution in a one-dimensional optical lattice. Additionally, I probe two-particle quantum interference effects of two atom trajectories by realizing a microwave Hong-Ou-Mandel interferometer with massive particles, which are cooled into the vibrational ground state.

    The first part of this thesis reports on several new experimental tools and techniques: three-dimensional ground-state cooling of single atoms trapped in the combined potential of a polarization-synthesized optical lattice and a blue-detuned hollow dipole potential; a high-NA (0.92) objective lens achieving a diffraction-limited resolution of 460 nm; and an improved super-resolution algorithm, which resolves the positions of individual atoms in small clusters at high filling factors, even when every lattice site is occupied.

    The next part is devoted to the conceptually new optical-lattice technique that relies on a high-precision, high-bandwidth synthesis of light polarization. Polarization-synthesized optical lattices provide two fully controllable optical lattice potentials, each confining only atoms in one of the two long-lived hyperfine states. By employing one lattice as the storage register and the other as the shift register, I provide a proof of concept that selected regions of the periodic potential can be filled with one particle per site.

    In the following part I report on a stringent test of the non-classicality of the motion of a massive quantum particle propagating on a discrete lattice. Measuring temporal correlations of the position of single atoms performing a quantum walk, we observe a 6σ (standard deviation) violation of the Leggett-Garg inequality. The experiment is carried out using so-called ideal negative measurements – an essential requisite for any genuine Leggett-Garg test – which acquire information about the atom’s position while avoiding any direct interaction with it. This interaction-free measurement is based on our polarization-synthesized optical lattice, which allows us to directly probe the absence rather than the presence of atoms at a chosen lattice site. Beyond its fundamental aspect, I demonstrate the application of the Leggett-Garg correlation function as a witness of quantum superposition. The witness allows us to discriminate the quantumness of different types of walks, spanning from merely classical to quantum dynamics, and further to witness the decoherence of a quantum state.

    In the last experimental part I discuss recent results on losses due to inelastic collisions occurring at high two-atom densities and demonstrate Hong-Ou-Mandel interference with massive particles. Our precise control over individual indistinguishable particles embodies a direct analogue of the original Hong-Ou-Mandel experiment. By carrying out a Monte Carlo analysis of our experimental data, I demonstrate a signature of the two-particle interference of two-atom trajectories with a statistical significance of 4σ.

    In the final part I introduce several new experiments which can be realized with the tools and techniques developed in this thesis, spanning from the detection of topologically protected edge states to the prospect of building a one-million-operation quantum cellular automaton.
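    The Leggett-Garg witness mentioned above compares two-time correlations C_ij of a dichotomic observable Q = ±1 against the macrorealistic bound K = C12 + C23 − C13 ≤ 1. As a minimal illustration of how coherent dynamics can exceed this bound (the textbook two-level example, not the atom-transport protocol of the thesis), the following Python sketch evaluates K for three equally spaced measurement times:

```python
import numpy as np

# Minimal illustration (not the thesis experiment): Leggett-Garg correlator
# K = C12 + C23 - C13 for a dichotomic observable Q = sigma_z measured at
# three equally spaced times on a coherently precessing two-level system.
# Macrorealism (classical, well-defined trajectories) demands K <= 1.

def lg_correlator(omega_tau):
    """Two-time correlators C_ij = cos(omega * (t_j - t_i)) for a spin
    precessing at angular frequency omega, with t2 - t1 = t3 - t2 = tau."""
    c12 = np.cos(omega_tau)
    c23 = np.cos(omega_tau)
    c13 = np.cos(2.0 * omega_tau)
    return c12 + c23 - c13

taus = np.linspace(0.0, np.pi, 200)
K = lg_correlator(taus)
print(f"max K = {K.max():.3f} at omega*tau = {taus[K.argmax()]:.3f} rad")
# The maximum K = 1.5 occurs near omega*tau = pi/3, exceeding the bound of 1.
```

    In the thesis, the analogous correlator is built from the atom’s position during a quantum walk and measured with ideal negative measurements; the sketch above only conveys the structure of the witness.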

    The Reasonable Effectiveness of Randomness in Scalable and Integrative Gene Regulatory Network Inference and Beyond

    Get PDF
    Gene regulation is orchestrated by a vast number of molecules, including transcription factors and co-factors, chromatin regulators, as well as epigenetic mechanisms, and it has been shown that transcriptional misregulation, e.g., caused by mutations in regulatory sequences, is responsible for a plethora of diseases, including cancer and developmental or neurological disorders. As a consequence, decoding the architecture of gene regulatory networks has become one of the most important tasks in modern (computational) biology. However, to advance our understanding of the mechanisms involved in the transcriptional apparatus, we need scalable approaches that can deal with the increasing number of large-scale, high-resolution biological datasets. In particular, such approaches need to be capable of efficiently integrating and exploiting the biological and technological heterogeneity of such datasets in order to best infer the underlying, highly dynamic regulatory networks, often in the absence of sufficient ground-truth data for model training or testing. With respect to scalability, randomized approaches have proven to be a promising alternative to deterministic methods in computational biology. As an example, one of the top-performing algorithms in a community challenge on gene regulatory network inference from transcriptomic data is based on a random forest regression model. In this concise survey, we aim to highlight how randomized methods may serve as a highly valuable tool, in particular with increasing amounts of large-scale biological experiments and datasets being collected. Given the complexity and interdisciplinary nature of the gene regulatory network inference problem, we hope our survey may be helpful to both computational and biological scientists. It is our aim to provide a starting point for a dialogue about the concepts, benefits, and caveats of the toolbox of randomized methods, since unravelling the intricate web of highly dynamic regulatory events will be one fundamental step in understanding the mechanisms of life and eventually developing efficient therapies to treat and cure diseases.
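    To make the random-forest idea concrete, the sketch below follows the general recipe of the GENIE3-style, challenge-winning approach referenced above (an illustrative outline under assumed names, not the API of any specific tool): each gene is treated in turn as a regression target, a random forest is fitted on the remaining genes, and the resulting feature importances are read as putative regulator-to-target scores.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def infer_grn(expr, n_trees=100, seed=0):
    """Illustrative GENIE3-style scoring.

    expr: (samples, genes) expression matrix.
    Returns a (genes, genes) matrix where scores[i, j] ranks the putative
    regulatory link from gene i to target gene j.
    """
    n_samples, n_genes = expr.shape
    scores = np.zeros((n_genes, n_genes))
    for j in range(n_genes):
        predictors = np.delete(np.arange(n_genes), j)  # all genes except the target
        rf = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
        rf.fit(expr[:, predictors], expr[:, j])
        scores[predictors, j] = rf.feature_importances_
    return scores

# Toy usage on random data; a real pipeline would rank and threshold the scores.
rng = np.random.default_rng(0)
toy_expression = rng.normal(size=(50, 10))
print(infer_grn(toy_expression).shape)  # (10, 10)
```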

    Percolation and isoperimetry on roughly transitive graphs

    Get PDF
    In this paper we study percolation on a roughly transitive graph G with polynomial growth and isoperimetric dimension larger than one. For these graphs we are able to prove that p_c < 1, or in other words, that a percolation phase exists. The main results of the article hold for both dependent and independent percolation processes, since they are based on a quite robust renormalization technique. When G is transitive, the fact that p_c < 1 was already known, but even in that case our proof yields some new results and is entirely probabilistic, not involving Gromov's theorem on groups of polynomial growth. We finish the paper by giving some examples of dependent percolation to which our results apply. (32 pages, 2 figures)
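    The paper's argument is a renormalization scheme for roughly transitive graphs of polynomial growth; as a purely illustrative complement to the statement that a percolation phase exists once p is close enough to 1, the toy Monte Carlo sketch below estimates the probability of an open top-to-bottom crossing for Bernoulli site percolation on a finite square grid (where p_c ≈ 0.593):

```python
import numpy as np
from scipy import ndimage

def crossing_probability(p, L=64, trials=200, seed=0):
    """Estimate the probability that an L x L grid of sites, each open
    independently with probability p, contains a top-to-bottom open crossing."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        open_sites = rng.random((L, L)) < p
        labels, _ = ndimage.label(open_sites)   # 4-connected open clusters
        top = set(labels[0][labels[0] > 0])
        bottom = set(labels[-1][labels[-1] > 0])
        hits += bool(top & bottom)               # some cluster spans the grid
    return hits / trials

# Below p_c the crossing probability is tiny; above it, it approaches 1.
for p in (0.40, 0.55, 0.60, 0.70):
    print(f"p = {p:.2f}: crossing probability ~ {crossing_probability(p):.2f}")
```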

    Multiscale Kinetic Monte Carlo Simulation of Self-Organized Growth of GaN/AlN Quantum Dots

    Get PDF
    A three-dimensional kinetic Monte Carlo methodology is developed to study the strained epitaxial growth of wurtzite GaN/AlN quantum dots. It describes the kinetics of effective GaN adatoms on a hexagonal lattice. The elastic strain energy is evaluated by a purposely devised procedure: first, we take advantage of the fact that the deformation in a lattice-mismatched heterostructure is equivalent to that obtained by assuming that one of the regions of the system is subjected to a properly chosen uniform stress (Eshelby inclusion concept), and then the strain is obtained by applying the Green’s function method. The standard Monte Carlo method has been modified to implement a multiscale algorithm that allows isolated adatoms to perform long diffusion jumps. With these state-of-the-art modifications, it is possible to perform simulations efficiently over large areas and long elapsed times. We have tailored the model to the conditions of molecular beam epitaxy under N-rich conditions. The corresponding simulations reproduce the different stages of the Stranski–Krastanov transition, showing quantitative agreement with the experimental findings concerning the critical deposition, and island size and density. The influence of growth parameters, such as the relative fluxes of Ga and N and the substrate temperature, is also studied and found to be consistent with the experimental observations. In addition, the growth of stacked layers of quantum dots is also simulated and the conditions for their vertical alignment and homogenization are illustrated. In summary, the developed methodology allows one to reproduce the main features of self-organized quantum dot growth and to understand the microscopic mechanisms at play.
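    To convey the kinetic Monte Carlo machinery in its simplest form, the sketch below implements a generic rejection-free (BKL-type) step with Arrhenius hopping rates on a one-dimensional ring. It is a schematic stand-in only: the actual model uses effective GaN adatoms on a hexagonal lattice, strain-dependent barriers evaluated via the Eshelby-inclusion/Green’s-function procedure, and multiscale long jumps, none of which is reproduced here, and the values of KB_T, NU0, and the barrier are assumed placeholders.

```python
import numpy as np

KB_T = 0.065   # thermal energy in eV (assumed, roughly a 750 K growth temperature)
NU0 = 1e13     # attempt frequency in 1/s (typical order of magnitude)

def kmc_step(positions, barriers, L, rng):
    """One rejection-free KMC step for adatoms hopping on a ring of L sites.

    positions: list of adatom sites; barriers: per-site hopping barrier in eV
    (a stand-in for binding plus strain contributions). Returns the time step.
    """
    rates, moves = [], []
    for i, x in enumerate(positions):
        rate = NU0 * np.exp(-barriers[x] / KB_T)   # Arrhenius hopping rate
        for dx in (-1, +1):                        # left and right hops
            rates.append(rate)
            moves.append((i, (x + dx) % L))
    rates = np.array(rates)
    total = rates.sum()
    k = rng.choice(len(rates), p=rates / total)    # pick one event ~ its rate
    i, new_site = moves[k]
    positions[i] = new_site
    return rng.exponential(1.0 / total)            # advance the simulation clock

rng = np.random.default_rng(1)
L = 100
positions = list(rng.choice(L, size=5, replace=False))
barriers = np.full(L, 0.4)                         # uniform 0.4 eV barrier (assumed)
t = sum(kmc_step(positions, barriers, L, rng) for _ in range(1000))
print(f"simulated time after 1000 events: {t:.3e} s")
```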

    Proceedings of the 26th International Symposium on Theoretical Aspects of Computer Science (STACS'09)

    Get PDF
    The Symposium on Theoretical Aspects of Computer Science (STACS) is held alternately in France and in Germany. The conference of February 26-28, 2009, held in Freiburg, is the 26th in this series. Previous meetings took place in Paris (1984), Saarbrücken (1985), Orsay (1986), Passau (1987), Bordeaux (1988), Paderborn (1989), Rouen (1990), Hamburg (1991), Cachan (1992), Würzburg (1993), Caen (1994), München (1995), Grenoble (1996), Lübeck (1997), Paris (1998), Trier (1999), Lille (2000), Dresden (2001), Antibes (2002), Berlin (2003), Montpellier (2004), Stuttgart (2005), Marseille (2006), Aachen (2007), and Bordeaux (2008).

    Watching children: A history of America's race to educate kids and the creation of the 'slow-learner' subject

    Get PDF
    On January 25, 2011, United States President Barack Obama delivered his State of the Union address to Congress and to the nation. As part of that address, President Obama articulated his vision for American education and stated that America had to win the race to educate our kids (Obama, 2011, State of the Union). Mr. Obama's speech and his Race to the Top policy stand as statements in a discourse that expects fast-paced education based on universal standards and quantitative measures. Tracing a history of American schooling, one sees that this discourse has been dominant in this society for most of the past hundred years. However, while policy makers often tout 'science' as the foundation for decisions in America's race to educate children, a 'science' that employs a one-dimensional concept of universal time and linear progress is problematic when applied to human learning. Drawing from Michel Foucault's methodological 'toolbox', the current study is a critical ontology asking how American society has constructed education as a time-oriented endeavor in which we race to educate our children. A 'Foucauldian analysis' allows us to question our understandings of ourselves and helps us question the power that rules over our lives, and in an examination of this history, the current study shows how notions of universal time and linear progress have gained power in American schools. As part of this history, the study illustrates how the American government, newspaper media, and academic journals have created a 'slow learner' subject as an object of power used to explain vast economic inequalities in society, justify dividing practices that sort students based on intellectual measures, and instill anxiety about the pace of education into American society. However, the current study also interrupts the discourse of universal time and linear progress now used in American schools in two ways. First, the current study highlights inconsistencies in the dominant narrative of 'fast-' and 'slow-' learners by illustrating a broader understanding of these subjects than how they are characterized in the discourse. Second, the current study problematizes the constructed binaries made possible with notions of universal time and linear progress by introducing alternative models of time and progress (e.g., relativity theory; quantum theory; chaos theory) that are more accurate in describing the phenomenon of time and arguably are more appropriate for use in American schools. The significance of the dissertation emerges when we realize that in considering education policies, we must question the discourse of time that shapes how we view our students and ourselves, and we must question why we race to educate our kids.

    Exposition on over-squashing problem on GNNs: Current Methods, Benchmarks and Challenges

    Full text link
    Graph-based message-passing neural networks (MPNNs) have achieved remarkable success in both node- and graph-level learning tasks. However, several identified problems, including over-smoothing (OSM), limited expressive power, and over-squashing (OSQ), still limit the performance of MPNNs. In particular, OSQ is the most recently identified of these problems: MPNNs gradually lose learning accuracy when a task requires long-range dependencies between graph nodes. In this work, we provide an exposition of the OSQ problem by summarizing its different formulations in the current literature, as well as three categories of approaches for addressing it. In addition, we discuss the alignment between OSQ and expressive power and the trade-off between OSQ and OSM. Furthermore, we summarize the empirical methods used in existing works to verify the effectiveness of OSQ mitigation approaches, with illustrations of their computational complexities. Lastly, we list some open questions of interest for further exploration of the OSQ problem, along with potential directions, to the best of our knowledge.
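    One family of OSQ formulations surveyed in this literature bounds the sensitivity of a node's r-layer representation to a distant input feature by entries of the r-th power of the normalized adjacency matrix. The sketch below (an assumed, illustrative construction, not a method taken from the paper) compares this proxy for a pair of nodes inside one community and for a pair separated by the bottleneck edge of a barbell graph:

```python
import numpy as np
import networkx as nx

# Sensitivity proxy for over-squashing: entries of (D^-1/2 (A + I) D^-1/2)^r
# bound how much node v can influence node u after r message-passing layers.
# Across a bottleneck these entries become tiny, i.e. the signal is "squashed".

G = nx.barbell_graph(10, 0)             # two 10-cliques joined by a single edge
A = nx.to_numpy_array(G) + np.eye(G.number_of_nodes())   # adjacency with self-loops
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))                   # symmetric normalization

r = 4                                    # number of message-passing layers
S = np.linalg.matrix_power(A_hat, r)     # r-step sensitivity proxy

print(f"same-clique pair (0, 5):        {S[0, 5]:.2e}")
print(f"across-bottleneck pair (0, 15): {S[0, 15]:.2e}")
```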

    Honolulu Weekly. Volume 7, Number 24, 1997-06-11

    Get PDF

    Casco Bay Weekly : 5 March 1998

    Get PDF
    https://digitalcommons.portlandlibrary.com/cbw_1998/1011/thumbnail.jp