Limits on quantum deletion from no signaling principle
One of the fundamental restrictions that quantum mechanics imposes is the "no-deletion
theorem", which states that, given two identical copies of an unknown quantum
state, it is impossible to delete one of them. Although perfect deletion is
forbidden, approximate deletion has nevertheless been studied. In these
approximate deletion processes the basic goal is to delete one of the two
identical copies as well as possible while preserving the other copy. In this
brief report, using the no-communication theorem (NCT) (the impossibility of
sending a signal faster than light using a quantum resource) as a guiding
principle, we obtain a bound on the sum of the fidelity of deletion and the
fidelity of preservation. Our result not only brings out the complementary
relation between these two fidelities but also predicts the optimal fidelity of
deletion achievable for a given fidelity of preservation under the no-signaling
constraint. This work settles the question of the optimal deletion fidelity
within the NCT framework.
Asymmetric broadcasting of quantum correlations
In this work, we exhaustively investigate local and nonlocal broadcasting of
entanglement, as well as of correlations beyond entanglement (geometric
discord), using asymmetric Pauli cloners with the most general two-qubit state
as the resource. We exemplify asymmetric broadcasting of entanglement using
maximally entangled mixed states. We demonstrate how the broadcasting range
varies with the amount of entanglement present in the resource state as well
as with the asymmetry of the cloner. We show that it is impossible to
optimally broadcast geometric discord with these asymmetric Pauli cloning
machines. We also study the problem of broadcasting entanglement using a
non-maximally entangled (NME) state as the resource. For this task, we
introduce a method we call successive broadcasting, which applies optimal
cloning machines multiple times. We compare and contrast the performance of
this method with the direct application of optimal cloning machines. We show
that the direct optimal cloner broadcasts better than the successive
application of cloners, although the successive method can be beneficial in
the absence of the optimal cloner. In the final part of the manuscript we also
bring out the fundamental difference between the tasks of cloning and
broadcasting. We construct examples showing that there exist local unitaries
which can be employed to obtain a better broadcasting range. Such unitary
operations are not only economical but also surpass the best range obtained
using existing cloning machines, enabling the broadcasting of less entangled
states. This result opens up a new direction in the exploration of methods to
facilitate broadcasting that may outperform the standard strategies
implemented through cloning transformations.
Comment: Edited sections, changed figures, to be published in Physical Review
How local is the information in MPS/PEPS tensor networks?
Two-dimensional tensor networks such as projected entangled pair states
(PEPS) are generally hard to contract. This is arguably the main reason why
variational tensor network methods in 2D are still not as successful as in 1D.
However, this is not necessarily the case if the tensor network represents a
gapped ground state of a local Hamiltonian; such states are subject to many
constraints and contain much more structure. In this paper we introduce a new
approach for approximating the expectation value of a local observable in
ground states of local Hamiltonians that are represented as PEPS tensor
networks. Instead of contracting the full tensor network, we estimate the
expectation value using only a local patch of the tensor network around the
observable. Surprisingly, we demonstrate that this is often easier to do when
the system is frustrated. In such cases, the spanning vectors of the local
patch are subject to non-trivial constraints that can be exploited via a
semi-definite program to calculate rigorous lower and upper bounds on the
expectation value. We test our approach on 1D systems, where we show that the
expectation value can be calculated to at least 3 or 4 digits of precision,
even when the patch radius is smaller than the correlation length.
Comment: 11 pages, 5 figures, RevTeX4.1. Comments are welcome. (v2) Minor
corrections and slightly modified intro. Matches the published version.
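The idea of obtaining rigorous two-sided bounds on an expectation value from incomplete information can be seen in a deliberately minimal toy setting (this is an illustration of the principle, not the paper's semi-definite program):

```python
import numpy as np

# Toy illustration: if the only constraints on the density matrix rho are
# rho >= 0 and Tr(rho) = 1, the rigorous lower/upper bounds on
# <O> = Tr(rho O) are the extreme eigenvalues of O. The paper's SDP adds
# the non-trivial constraints supplied by the spanning vectors of the local
# PEPS patch, which is what tightens such bounds to 3-4 digits.

def expectation_bounds(O):
    """Return (lower, upper) bounds on Tr(rho O) over all density matrices."""
    evals = np.linalg.eigvalsh(O)  # eigenvalues in ascending order
    return evals[0], evals[-1]

# Pauli-Z on a single qubit: with no further constraints, <Z> in [-1, 1].
Z = np.diag([1.0, -1.0])
lo, hi = expectation_bounds(Z)
print(lo, hi)  # -1.0 1.0
```

Adding linear constraints on the state (as the patch vectors do) turns this eigenvalue problem into a genuine SDP whose optimum still yields certified bounds.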
Flavour Enhanced Food Recommendation
We propose a mechanism that uses flavour features to enhance the quality of
food recommendations. An empirical method for determining the flavour of food,
based on the major gustatory nerves, is incorporated into a recommendation
engine. Such a system has the advantage of suggesting food items that the user
is more likely to enjoy, by matching against their flavour profile using
biological domain knowledge of taste. This preliminary work is intended to
spark more robust mechanisms by which the flavour of food is taken into
consideration as a major feature set in food recommendation systems. Our
long-term vision is to integrate this with health factors, recommending food
that is both healthy and tasty so as to enhance users' quality of life.
Comment: In Proceedings of 5th International Workshop on Multimedia Assisted
Dietary Management, Nice, France, October 21, 2019, MADiMa 2019, 6 pages
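One simple way flavour-profile matching can work is to represent both users and foods as vectors over basic tastes and rank foods by similarity. The following sketch is hypothetical: the five-taste feature set, item names, and cosine scoring are illustrative assumptions, not the paper's actual model:

```python
import math

# Hypothetical flavour-matching sketch. Each food and each user is a vector
# over basic tastes: [sweet, sour, salty, bitter, umami].

def cosine(u, v):
    """Cosine similarity between two taste vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user_profile, foods):
    """Return (food, score) pairs sorted by similarity to the user profile."""
    scored = [(name, cosine(user_profile, vec)) for name, vec in foods.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)

user = [0.9, 0.2, 0.3, 0.0, 0.6]          # likes sweet and umami
foods = {
    "mango lassi":  [0.9, 0.3, 0.0, 0.0, 0.1],
    "miso soup":    [0.1, 0.1, 0.7, 0.1, 0.9],
    "black coffee": [0.0, 0.2, 0.0, 0.9, 0.1],
}
print(recommend(user, foods)[0][0])  # -> mango lassi
```

A health-aware extension, as envisioned above, could simply add nutritional dimensions to the same vectors and reweight the similarity.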
Employer’s civil liability for work-related accidents: a comparison of non-economic loss in Chile and England
Employers’ civil liability for work injuries and the subsequent compensation for non-economic losses pose challenges for many countries. Several countries in Latin America and Europe still do not apply non-economic damage standards, and even where they do, the task of assessing damages is often left to a judge’s discretion. Similarly, common and civil law jurisdictions differ in non-standard ways in how these losses are assessed and understood. This paper therefore aims to compare England and Chile in terms of employers’ civil liability and the subsequent compensation of non-economic damages after work-related accidents. It focuses on the current progress made by both countries in defining and assessing non-economic loss. This is a qualitative study. The research data consisted of a review of case law from the last five years in both countries. Additional contextual data was gathered from published documents examining the current English context, and four interviews were carried out to further understand the Chilean context. Data was analysed using framework analysis and the NVivo software. From a comparative perspective, the findings show that England has made important progress in terms of the types of employer liability, the understanding of non-economic loss, and the standards of assessment. Although several advances have been made in Chile over the last few years, a number of challenges still remain.
Electromagnetically Induced Transparency and Absorption in Metamaterials: The Radiating Two-Oscillator Model and Its Experimental Confirmation
Several classical analogues of electromagnetically induced transparency in
metamaterials have been demonstrated. A simple two-resonator model can describe
their absorption spectrum qualitatively, but fails to provide information about
the scattering properties, e.g., transmission and group delay. Here we develop
an alternative model that rigorously includes the coupling of the radiative
resonator to the external electromagnetic fields. This radiating two-oscillator
model can describe both the absorption spectrum and the scattering parameters
quantitatively. The model also predicts metamaterials with a narrow spectral
feature in which the absorption exceeds the background absorption of the
radiative element. This classical analogue of electromagnetically induced
absorption is shown to occur when both the dissipative loss of the radiative
resonator and the coupling strength are small. These predictions are
subsequently demonstrated in experiments.
Comment: 5 pages, 3 figures; supplemental information available from AP
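For context, the simple (non-radiating) two-oscillator analogue of EIT that the abstract contrasts against is usually written as a driven "bright" oscillator coupled to an undriven "dark" one (the textbook form; symbols here are generic assumptions, and the paper's radiating model modifies how oscillator 1 couples to the external field):

```latex
\ddot{x}_1 + \gamma_1 \dot{x}_1 + \omega_0^2\, x_1 + \kappa\, x_2 = \frac{q}{m}\,E(t),
\qquad
\ddot{x}_2 + \gamma_2 \dot{x}_2 + (\omega_0 + \delta)^2\, x_2 + \kappa\, x_1 = 0,
```

where $\gamma_1 \gg \gamma_2$ are the damping rates, $\kappa$ the coupling, and $\delta$ the detuning. Destructive interference between the two excitation pathways of $x_1$ opens the narrow transparency window; the radiating model replaces the phenomenological drive term with a rigorous coupling of $x_1$ to the incoming and outgoing fields, which is what makes the transmission and group delay quantitative.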
Generator Assisted Mixture of Experts For Feature Acquisition in Batch
Given a set of observations, feature acquisition is the problem of finding the
subset of unobserved features that would most improve accuracy. Prior work has
explored this problem in a sequential setting, where the model receives
feedback from every newly acquired feature and chooses whether to explore more
features or to predict. However, sequential acquisition is not feasible in
settings where time is of the essence. We consider the problem of feature
acquisition in batch, where the subset of features to be queried is chosen
based on the currently observed features, acquired as a batch, and then used
for prediction. We solve this problem using several technical innovations.
First, we use a feature generator to draw a subset of the features
synthetically for some examples, which reduces the cost of oracle queries.
Second, to make the feature acquisition problem tractable for large,
heterogeneous sets of observed features, we partition the data into buckets by
borrowing tools from locality-sensitive hashing, and then train a mixture of
experts model. Third, we design a tractable lower bound on the original
objective and use a greedy algorithm combined with model training to solve the
underlying problem. Experiments with four datasets show that our approach
outperforms existing baselines in terms of the trade-off between accuracy and
feature acquisition cost.
Comment: Accepted in AAAI-2
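The bucketing step can be pictured with the classic random-hyperplane form of locality-sensitive hashing (a generic illustration of the LSH idea; the paper's exact hashing scheme and expert assignment may differ):

```python
import numpy as np

# Random-hyperplane LSH: each example is hashed by the signs of its
# projections onto a few random hyperplanes. Examples whose observed-feature
# vectors point in similar directions tend to land in the same bucket, so a
# separate expert can then be trained per bucket.

rng = np.random.default_rng(0)

def lsh_buckets(X, n_planes=4, rng=rng):
    """Hash each row of X to an integer bucket id in [0, 2**n_planes)."""
    planes = rng.standard_normal((X.shape[1], n_planes))
    bits = (X @ planes) > 0                              # sign pattern per row
    return bits.astype(int) @ (1 << np.arange(n_planes)) # pack bits into an id

X = rng.standard_normal((6, 8))   # 6 examples, 8 observed features
ids = lsh_buckets(X)
print(ids)
```

Within a single call, identical rows always collide, and nearby rows collide with high probability; the number of hyperplanes controls the granularity of the partition, and hence how many experts the mixture contains.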