The Underlying Term Is Democracy: An Interview With Julian Stallabrass
In Art Incorporated, you seek to debunk the myth of the art world as autonomous from the market forces of global capitalism. Instead, you argue, works of art have become yet another commodity. However, one could say that works of art have always been commodities as well as objects of aesthetic appreciation. What makes the problem pertinent now, in the age of artists like Takashi Murakami, Jeff Koons and Damien Hirst?
The westernmost record of Neptis sappho (Pallas, 1771) (Lepidoptera: Rhopalocera) in Slovenia
Time Evolution and Deterministic Optimisation of Correlator Product States
We study a restricted class of correlator product states (CPS) for a
spin-half chain in which each spin is contained in just two overlapping
plaquettes. This class is also a restriction upon matrix product states (MPS)
whose local dimension 2^n (n being the size of the overlapping regions of
plaquettes) equals the bond dimension. We investigate the trade-off between
gains in efficiency due to this restriction against losses in fidelity. The
time-dependent variational principle formulated for these states is numerically
very stable. Moreover, it shows significant gains in efficiency compared to the
naively related matrix product states: the evolution or optimisation scales
with a lower power of the overlap/bond dimension for the correlator product
states than for the unrestricted matrix product state. However, much of this
advantage is offset by a
significant reduction in fidelity. Correlator product states break the local
Hilbert space symmetry by the explicit selection of a local basis. We
investigate this dependence in detail and formulate the broad principles under
which correlator product states may be a useful tool. In particular, we find
that scaling with overlap/bond order may be more stable with correlator product
states allowing a more efficient extraction of critical exponents - we present
an example in which the use of correlator product states is several orders of
magnitude quicker than matrix product states.
Comment: 19 pages, 14 figures
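To make the object in the abstract above concrete, here is a minimal hypothetical sketch (not the paper's code) of a correlator product state on a periodic spin-half chain in which each spin belongs to exactly two overlapping plaquettes: the simplest case of 2-site plaquettes, one per bond. The chain length, random correlators, and function names are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: each bond (i, i+1) of a periodic spin-half chain
# carries one 2x2 plaquette correlator, so every spin sits in exactly
# two overlapping plaquettes, as in the restricted CPS class above.
rng = np.random.default_rng(42)

L = 8                                         # number of spins (illustrative)
correlators = rng.standard_normal((L, 2, 2))  # one correlator per bond

def cps_amplitude(config):
    """Amplitude <config|Psi> = prod_i C_i[s_i, s_(i+1)] on the periodic chain."""
    amp = 1.0
    for i in range(L):
        amp *= correlators[i, config[i], config[(i + 1) % L]]
    return amp

config = rng.integers(0, 2, size=L)
print(cps_amplitude(config))
```

Note that writing the amplitudes in a fixed computational basis, as done here, is exactly the explicit choice of local basis that the abstract says breaks the local Hilbert-space symmetry.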
Tailoring ink-substrate interactions via thin polymeric layers for high-resolution printing
The surface properties of a substrate are among the most important parameters
in the printing technology of functional materials, determining not only the
printing resolution but also the stability of the printed features. This paper
addresses the wetting difficulties encountered during inkjet printing on
homogeneous substrates as a result of improper surface properties. We show that
the wetting of a substrate and, consequently, the quality of the printed
pattern, can be mediated through the deposition of polymeric layers that are a
few nanometers thick. The chemical nature of the polymers determines the
surface energy and polarity of the thin layer. Some applications, however,
require a rigorous adjustment of the surface properties. We propose a simple
and precise method of surface-energy tailoring based on the thermal
decomposition of poly(methyl methacrylate) (PMMA) layers. A smooth transition
in the wetting occurs when the thickness of the PMMA layer approaches zero,
probably because the underlying substrate surface percolates through the film, which
enables the inkjet printing of complex structures with a high resolution. In
particular, the wetting of three substrate-ink systems was successfully
adjusted using the thin polymeric layer: (i) a tantalum-oxide-based ink on
indium-tin-oxide-coated glass, (ii) a ferroelectric lead zirconate titanate ink
on a platinized silicon substrate, and (iii) a silver nanoparticle ink on an
alumina substrate.
Compact Neural Networks based on the Multiscale Entanglement Renormalization Ansatz
This paper demonstrates a method for tensorizing neural networks based upon
an efficient way of approximating scale invariant quantum states, the
Multi-scale Entanglement Renormalization Ansatz (MERA). We employ MERA as a
replacement for the fully connected layers in a convolutional neural network
and test this implementation on the CIFAR-10 and CIFAR-100 datasets. The
proposed method outperforms factorization using tensor trains, providing
greater compression for the same level of accuracy and greater accuracy for the
same level of compression. We demonstrate MERA layers with 14000 times fewer
parameters and a reduction in accuracy of less than 1% compared to the
equivalent fully connected layers, scaling like O(N).
Comment: 8 pages, 2 figures
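The compression mechanism the abstract compares against can be sketched briefly. The following hypothetical example (not the paper's MERA code) shows the tensor-train baseline: a 256 → 256 fully connected layer factorised into four small cores, with the bond rank r chosen arbitrarily for illustration. The shapes and names are assumptions for this sketch.

```python
import numpy as np

# Hypothetical sketch: tensor-train (TT) factorisation of a dense
# 256 -> 256 layer. Input and output are reshaped into modes 4x4x4x4;
# each core carries one (input mode, output mode) pair, linked by bonds
# of rank r. MERA replaces this chain with a layered, coarse-graining
# network, but the compression knob works the same way.
rng = np.random.default_rng(0)
modes_in = (4, 4, 4, 4)
modes_out = (4, 4, 4, 4)
r = 3                    # TT bond rank (illustrative choice)
ranks = (1, r, r, r, 1)  # boundary bonds are trivial

# cores[k] has shape (left_bond, in_mode, out_mode, right_bond)
cores = [0.1 * rng.standard_normal((ranks[k], modes_in[k], modes_out[k], ranks[k + 1]))
         for k in range(4)]

def tt_full_matrix(cores):
    """Contract the train into the dense 256x256 matrix (for checking only;
    in practice the input is contracted through the cores directly)."""
    t = cores[0]
    for core in cores[1:]:
        t = np.tensordot(t, core, axes=([-1], [0]))
    t = t.squeeze(axis=(0, -1))                     # drop trivial boundary bonds
    t = np.moveaxis(t, [0, 2, 4, 6], [0, 1, 2, 3])  # group input modes first
    return t.reshape(256, 256)

W = tt_full_matrix(cores)
n_dense = 256 * 256
n_tt = sum(c.size for c in cores)
print(f"dense parameters: {n_dense}, TT parameters: {n_tt}")  # 65536 vs 384
```

Increasing r trades parameters for expressiveness, which is the compression-versus-accuracy axis along which the abstract reports MERA outperforming tensor trains.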
