An Introduction to the Calibration of Computer Models
In the context of computer models, calibration is the process of estimating
unknown simulator parameters from observational data. Calibration is variously
referred to as model fitting, parameter estimation/inference, an inverse
problem, and model tuning. The need for calibration occurs in most areas of science and engineering, and calibration has been used to estimate hard-to-measure parameters in climate, cardiology, drug therapy response, hydrology, and many
other disciplines. Although the statistical method used for calibration can
vary substantially, the underlying approach is essentially the same and can be
considered abstractly. In this survey, we review the decisions that need to be
taken when calibrating a model, and discuss a range of computational methods
that can be used to compute Bayesian posterior distributions.
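To make one such computational method concrete, here is a minimal sketch of calibrating a single simulator parameter with random-walk Metropolis sampling; the toy exponential-decay simulator, the uniform prior, the noise level and the proposal scale are all illustrative assumptions, not recommendations from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta, x):
    """Toy stand-in for an expensive computer model."""
    return np.exp(-theta * x)

# synthetic "field" observations with Gaussian measurement error (sigma assumed known)
x_obs = np.linspace(0.1, 2.0, 20)
sigma = 0.05
y_obs = simulator(0.7, x_obs) + rng.normal(0.0, sigma, x_obs.size)

def log_post(theta):
    """Log posterior: Gaussian likelihood times a uniform prior on (0, 5]."""
    if not 0.0 < theta <= 5.0:
        return -np.inf
    resid = y_obs - simulator(theta, x_obs)
    return -0.5 * np.sum(resid**2) / sigma**2

# random-walk Metropolis over the calibration parameter
theta, lp = 1.0, log_post(1.0)
samples = []
for _ in range(20_000):
    proposal = theta + 0.05 * rng.normal()
    lp_new = log_post(proposal)
    if np.log(rng.random()) < lp_new - lp:   # Metropolis accept/reject step
        theta, lp = proposal, lp_new
    samples.append(theta)

print("posterior mean after burn-in:", np.mean(samples[5_000:]))
```

In realistic settings each likelihood evaluation requires a simulator run, which is what makes the choice of computational method consequential.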
Experimental quantum computing without entanglement
Entanglement is widely believed to lie at the heart of the advantages offered
by a quantum computer. This belief is supported by the discovery that a
noiseless (pure) state quantum computer must generate a large amount of
entanglement in order to offer any speed up over a classical computer. However,
deterministic quantum computation with one pure qubit (DQC1), which employs
noisy (mixed) states, is an efficient model that generates at most a marginal
amount of entanglement. Although this model cannot implement an arbitrary algorithm, it can efficiently solve a range of problems of significant
importance to the scientific community. Here we experimentally implement a
first-order case of a key DQC1 algorithm and explicitly characterise the
non-classical correlations generated. Our results show that, while there is no entanglement, the algorithm does give rise to other non-classical correlations, which we quantify using the quantum discord, a stronger measure of non-classical correlations that includes entanglement as a subset. Our results
suggest that discord could replace entanglement as a necessary resource for a
quantum computational speed-up. Furthermore, DQC1 is far less resource
intensive than universal quantum computing and our implementation in a scalable
architecture highlights the model as a practical short-term goal.
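For context, the canonical DQC1 task is estimating the normalized trace of an n-qubit unitary U: a single pure control qubit prepared in |+> is coupled to a maximally mixed register by a controlled-U, after which the control's <X> and <Y> expectation values equal the real and imaginary parts of Tr(U)/2^n. The sketch below is a classical simulation of just these measurement statistics (it computes the trace directly and adds shot noise); it illustrates the algorithm's readout, not the photonic experiment.

```python
import numpy as np

def dqc1_trace(U, shots=100_000, seed=0):
    """Simulate the readout statistics of the DQC1 normalized-trace algorithm.

    After controlled-U on a |+> control and a maximally mixed register,
    <X> + i<Y> of the control qubit equals Tr(U) / 2**n.
    """
    rng = np.random.default_rng(seed)
    n = int(np.log2(U.shape[0]))          # register size; U must be 2**n x 2**n
    t = np.trace(U) / 2**n                # exact normalized trace
    ex, ey = t.real, t.imag               # ideal <X> and <Y> of the control
    # sample +/-1 outcomes of X- and Y-basis measurements with shot noise
    x = rng.choice([1, -1], size=shots, p=[(1 + ex) / 2, (1 - ex) / 2]).mean()
    y = rng.choice([1, -1], size=shots, p=[(1 + ey) / 2, (1 - ey) / 2]).mean()
    return x + 1j * y

phases = np.exp(1j * np.array([0.3, 1.1, 2.0, 2.7]))
U = np.diag(phases)                       # a 2-qubit diagonal test unitary
print(dqc1_trace(U), "vs exact", phases.mean())
```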
Calculating Unknown Eigenvalues with a Quantum Algorithm
Quantum algorithms are able to solve particular problems exponentially faster
than conventional algorithms, when implemented on a quantum computer. However,
all demonstrations to date have required the answer to be known in advance in order to construct the algorithm. We have implemented the complete quantum phase
estimation algorithm for a single qubit unitary in which the answer is
calculated by the algorithm. We use a new approach to implementing the controlled-unitary operations that lie at the heart of most quantum algorithms; it is more efficient and does not require the eigenvalues of the unitary to be known. These results point the way to efficient quantum
simulations and quantum metrology applications in the near term, and to
factoring large numbers in the longer term. This approach is architecture
independent and thus can be used in other physical implementations.
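To make the flow of phase estimation concrete, below is a minimal classical simulation of iterative (Kitaev-style) phase estimation: each round applies a controlled power of U, which kicks a multiple of the eigenphase onto a control qubit, plus a feedback rotation determined by the bits already measured. This is a generic textbook scheme reproducing only the ideal measurement statistics, not the specific photonic construction reported here.

```python
import numpy as np

def iterative_phase_estimation(phi, n_bits):
    """Iterative QPE for an eigenphase phi (exact n-bit binary fraction).

    Round k uses controlled-U**(2**(k-1)), whose phase kick on the control
    is 2*pi*2**(k-1)*phi, plus a feedback rotation cancelling known bits.
    """
    bits = [0] * (n_bits + 1)             # bits[k] = b_k, 1-indexed
    for k in range(n_bits, 0, -1):        # least significant bit first
        kick = 2 * np.pi * (2 ** (k - 1)) * phi
        feedback = -2 * np.pi * sum(
            bits[j] * 2.0 ** (k - j - 1) for j in range(k + 1, n_bits + 1)
        )
        theta = kick + feedback
        p1 = (1 - np.cos(theta)) / 2      # P(measure 1) after the final Hadamard
        bits[k] = int(p1 > 0.5)           # rounds correctly for exact fractions
    return sum(bits[k] * 2.0 ** (-k) for k in range(1, n_bits + 1))

print(iterative_phase_estimation(0.6875, 4))  # recovers 0.6875 = 0.1011 in binary
```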
Experimental quantum coding against photon loss error
A significant obstacle to practical quantum computation is the loss of physical qubits, a decoherence mechanism that is especially prominent in optical systems. Here we experimentally demonstrate, both in the quantum
circuit model and in the one-way quantum computer model, the smallest
non-trivial quantum codes to tackle this problem. In the experiment, we encode
single-qubit input states into highly-entangled multiparticle codewords, and we
test their ability to protect encoded quantum information from a detected one-qubit loss error. Our results demonstrate the in-principle feasibility of overcoming qubit loss errors with quantum codes.
Adding control to arbitrary unknown quantum operations
While quantum computers promise significant advantages, the complexity of
quantum algorithms remains a major technological obstacle. We have developed
and demonstrated an architecture-independent technique that simplifies adding
control qubits to arbitrary quantum operations, a requirement in many quantum
algorithms, simulations and metrology. The technique is independent of how the
operation is done, does not require knowledge of what the operation is, and
largely separates the problems of how to implement a quantum operation in the
laboratory and how to add a control. We demonstrate an entanglement-based
version in a photonic system, realizing a range of different two-qubit gates
with high fidelity.
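The algebra behind the technique can be mimicked numerically: give the target extra levels that the unknown device is guaranteed not to act on, and let the control route the target's state either through the device or into those bypass levels. The four-level target and the gate labels below are illustrative assumptions used to verify the construction, not a model of the photonic setup.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(A)               # an arbitrary "unknown" single-qubit unitary

# black-box device: applies U on computational levels {0,1}, identity on bypass {2,3}
B = np.block([[U, np.zeros((2, 2))], [np.zeros((2, 2)), np.eye(2)]])

# routing gate R: if the control is |0>, swap the target into the bypass levels
S = np.zeros((4, 4))
S[[2, 3, 0, 1], [0, 1, 2, 3]] = 1.0  # level swap 0<->2, 1<->3
R = np.block([[S, np.zeros((4, 4))], [np.zeros((4, 4)), np.eye(4)]])

net = R @ np.kron(np.eye(2), B) @ R  # route, apply the device, route back

# restrict to the control-qubit x computational-level subspace: |c,t>, t in {0,1}
idx = [0, 1, 4, 5]
CU = net[np.ix_(idx, idx)]
want = np.kron(np.diag([1.0, 0.0]), np.eye(2)) + np.kron(np.diag([0.0, 1.0]), U)
assert np.allclose(CU, want)
print("controlled-U realized without using any knowledge of U")
```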
Teaching with Big Data: Report from the 2016 Society for Neuroscience Teaching Workshop
As part of a series of workshops on teaching neuroscience at the Society for Neuroscience annual meetings, William Grisham and Richard Olivo organized the 2016 workshop on Teaching Neuroscience with Big Data. This article presents a summary of that workshop.
Speakers provided overviews of open datasets that could be used in teaching undergraduate courses. These included resources that already appear in educational settings, including the Allen Brain Atlas (presented by Joshua Brumberg and Terri Gilbert), and the Mouse Brain Library and GeneNetwork (presented by Robert Williams). Other resources, such as NeuroData (presented by William R. Gray Roncal), and OpenFMRI, NeuroVault, and Neurosynth (presented by Russell Poldrack) have not been broadly utilized by the neuroscience education community but offer obvious potential.
Finally, William Grisham discussed the iNeuro Project, an NSF-sponsored effort to develop the necessary curriculum for preparing students to handle Big Data. Linda Lanyon further elaborated on the current state and challenges in educating students to deal with Big Data and described some training resources provided by the International Neuroinformatics Coordinating Facility. Neuroinformatics is a subfield of neuroscience that analyzes neuroscience data using analytical tools and computational models. The feasibility of offering neuroinformatics programs at primarily undergraduate institutions was also discussed.
Proteome-wide analysis of protein lipidation using chemical probes: in-gel fluorescence visualisation, identification and quantification of N-myristoylation, N- and S-acylation, O-cholesterylation, S-farnesylation and S-geranylgeranylation
Protein lipidation is one of the most widespread post-translational modifications (PTMs) found in nature, regulating protein function, structure and subcellular localization. Lipid transferases and their substrate proteins are also attracting increasing interest as drug targets because of their dysregulation in many disease states. However, the inherent hydrophobicity and potential dynamic nature of lipid modifications make them notoriously challenging to detect by many analytical methods. Chemical proteomics provides a powerful approach to identify and quantify these diverse protein modifications by combining bespoke chemical tools for lipidated protein enrichment with quantitative mass spectrometry–based proteomics. Here, we report a robust and proteome-wide approach for the exploration of five major classes of protein lipidation in living cells, through the use of specific chemical probes for each lipid PTM. In-cell labeling of lipidated proteins is achieved by the metabolic incorporation of a lipid probe that mimics the specific natural lipid and carries an alkyne as a bio-orthogonal labeling tag. After incorporation, the chemically tagged proteins can be coupled to multifunctional ‘capture reagents’ by using click chemistry, allowing in-gel fluorescence visualization or enrichment via affinity handles for quantitative chemical proteomics based on label-free quantification (LFQ) or tandem mass-tag (TMT) approaches. In this protocol, we describe the application of lipid probes for N-myristoylation, N- and S-acylation, O-cholesterylation, S-farnesylation and S-geranylgeranylation in multiple cell lines to illustrate both the workflow and data obtained in these experiments. We provide detailed workflows for method optimization, sample preparation for chemical proteomics and data processing. A properly trained researcher (e.g., technician, graduate student or postdoc) can complete all steps from optimizing metabolic labeling to data processing within 3 weeks. This protocol enables sensitive and quantitative analysis of lipidated proteins at a proteome-wide scale at native expression levels, which is critical to understanding the role of lipid PTMs in health and disease.
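As an illustration of the data-processing stage only, the sketch below runs a minimal label-free quantification (LFQ) enrichment analysis comparing probe-labeled samples against controls. The file name, column labels, replicate count and cut-offs are hypothetical MaxQuant-style assumptions, not values fixed by the protocol.

```python
import numpy as np
import pandas as pd
from scipy import stats

# hypothetical MaxQuant-style proteinGroups table and column names (assumptions)
df = pd.read_csv("proteinGroups.txt", sep="\t")
probe = [f"LFQ intensity probe_{i}" for i in (1, 2, 3)]
ctrl = [f"LFQ intensity ctrl_{i}" for i in (1, 2, 3)]

# log2-transform, treating zero intensities as missing values
vals = np.log2(df[probe + ctrl].replace(0, np.nan))
vals = vals.dropna()                    # keep proteins quantified in all replicates

# probe-vs-control enrichment: fold change and Welch's t-test per protein
log2fc = vals[probe].mean(axis=1) - vals[ctrl].mean(axis=1)
t, p = stats.ttest_ind(vals[probe], vals[ctrl], axis=1, equal_var=False)

hits = df.loc[vals.index].assign(log2fc=log2fc, p=p)
hits = hits[(hits.log2fc > 1) & (hits.p < 0.05)]   # candidate lipidated proteins
print(hits[["Protein IDs", "log2fc", "p"]]
      .sort_values("log2fc", ascending=False).head())
```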