Thought/visual processing: ctrl + x, y, z, v
With this text I wish to revisit a long-standing preoccupation of mine which addresses the extent to which the digital medium may have brought about a paradigm shift in creative process, one effectuated through a conversion of the creative medium itself - from atoms to bits. As a visual practitioner, what is particularly significant to my inquiry is that this change in medium is deemed to have brought about a change in language that affects the very nature of autographic work which comes into being as visual output. Following Goodman's definitions of allographic and autographic output (1976), McCullough (1996) asserts that visual artworks may now be considered allographic productions, since they presently share the attributes of notationally based allographic work, which has traditionally manifested only as music or literary output. The result is a work environment which throws the doors wide open to unprecedented levels of non-linear process and experimentation whilst engaged in the visually creative act.
An AER Spike-Processing Filter Simulator and Automatic VHDL Generator Based on Cellular Automata
Spike-based systems are neuro-inspired circuit implementations traditionally used for sensory systems or sensor signal processing. Address-Event-Representation (AER) is a neuromorphic communication protocol for transferring asynchronous events between VLSI spike-based chips. These neuro-inspired implementations allow developing complex, multilayer, multichip neuromorphic systems and have been used to design sensor chips, such as retinas and cochleae, processing chips, e.g. filters, and learning chips. Furthermore, Cellular Automata (CA) is a bio-inspired processing model for problem solving. This approach divides the processing among synchronous cells, which change their states at the same time in order to reach the solution. This paper presents a software simulator able to gather several spike-based elements into the same workspace in order to test a CA architecture based on AER before a hardware implementation. Furthermore, this simulator produces VHDL code for testing the AER-CA architecture on the FPGA of the USBAER AER-tool.
Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
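To make the AER-CA idea concrete, below is a minimal Python sketch of a cellular automaton driven by address events: every cell updates synchronously, and cells that cross a firing threshold emit output events in AER form. The grid size, threshold rule, and event format are illustrative assumptions, not the USBAER implementation described above.

    import numpy as np

    class AERCellularAutomaton:
        """Toy CA spike filter: cells accumulate incoming AER events and,
        on each synchronous step, firing cells reset and excite their
        von Neumann neighbours (toroidal boundary). Illustrative only."""

        def __init__(self, width=16, height=16, threshold=4):
            self.state = np.zeros((height, width), dtype=np.int32)
            self.threshold = threshold

        def inject(self, events):
            # Each AER event is just an (x, y) cell address.
            for x, y in events:
                self.state[y, x] += 1

        def step(self):
            # All cells change state at the same time (synchronous CA).
            fired = self.state >= self.threshold
            nxt = self.state.copy()
            nxt[fired] = 0
            spikes = fired.astype(np.int32)
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nxt += np.roll(np.roll(spikes, dy, axis=0), dx, axis=1)
            self.state = nxt
            ys, xs = np.nonzero(fired)
            return [(int(x), int(y)) for x, y in zip(xs, ys)]  # output events

    ca = AERCellularAutomaton()
    ca.inject([(3, 3)] * 5)   # five spikes at the same address
    print(ca.step())          # -> [(3, 3)]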
Readers and Reading in the First World War
This essay consists of three individually authored and interlinked sections. In ‘A Digital Humanities Approach’, Francesca Benatti looks at datasets and databases (including the UK Reading Experience Database) and shows how a systematic, macro-analytical use of digital humanities tools and resources might yield answers to some key questions about reading in the First World War. In ‘Reading behind the Wire in the First World War’, Edmund G. C. King scrutinizes the reading practices and preferences of Allied prisoners of war in Mainz, showing that reading circumscribed by the contingencies of a prison camp created a unique literary community, whose legacy can be traced through their literary output after the war. In ‘Book-hunger in Salonika’, Shafquat Towheed examines the record of a single reader in a specific and fairly static frontline, and argues that in the case of the Salonika campaign, reading communities emerged in close proximity to existing centres of print culture. The focus of this essay moves from the general to the particular, from the scoping of large datasets to the analyses of identified readers within a specific geographical and temporal space. The authors engage with the wider issues and problems of recovering, interpreting, visualizing, narrating, and representing readers in the First World War.
Pay One, Get Hundreds for Free: Reducing Cloud Costs through Shared Query Execution
Cloud-based data analysis is now common practice because of the lower system management overhead as well as the pay-as-you-go pricing model. The pricing model, however, is not always suitable for query processing, as heavy use results in high costs. For example, in query-as-a-service systems, where users are charged per processed byte, collections of queries that frequently access the same data can become expensive. The problem is compounded by the limited options for the user to optimize query execution when using declarative interfaces such as SQL. In this paper, we show how, without modifying existing systems and without the involvement of the cloud provider, it is possible to significantly reduce the overhead, and hence the cost, of query-as-a-service systems. Our approach is based on query rewriting, so that multiple concurrent queries are combined into a single query. Our experiments show that the aggregate amount of work done by the shared execution is smaller than in a query-at-a-time approach. Since queries are charged per byte processed, the cost of executing a group of queries is often the same as executing a single one of them. As an example, we demonstrate how the shared execution of the TPC-H benchmark is up to 100x and 16x cheaper in Amazon Athena and Google BigQuery, respectively, than a query-at-a-time approach, while achieving higher throughput.
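As a hedged sketch of the rewriting idea (not the paper's actual rewriter), the Python snippet below merges two concurrent aggregation queries over the same table into one scan, so a per-byte-billed service charges for a single pass; table and column names are taken from TPC-H for illustration.

    # Two concurrent queries that would each scan `lineitem` on their own.
    q1 = ("SELECT l_returnflag, SUM(l_quantity) FROM lineitem "
          "WHERE l_tax > 0.05 GROUP BY l_returnflag")
    q2 = ("SELECT l_returnflag, AVG(l_extendedprice) FROM lineitem "
          "GROUP BY l_returnflag")

    # Rewritten shared query: one scan, with q1's predicate pushed into a
    # conditional aggregate; results are split back per query client-side.
    shared = """
    SELECT
      l_returnflag,
      SUM(CASE WHEN l_tax > 0.05 THEN l_quantity END) AS q1_sum,
      AVG(l_extendedprice)                            AS q2_avg
    FROM lineitem
    GROUP BY l_returnflag
    """

Because the provider bills bytes scanned, the merged query costs roughly the same as either query alone, which is the effect the abstract describes.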
Polyimide processing additives
A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity measured by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability, with earlier onset of stretching as measured by thermomechanical analysis (TMA).
Natural language processing
Beginning with the basic issues of NLP, this chapter aims to chart the major research activities in this area since the last ARIST chapter in 1996 (Haas, 1996), including: (i) natural language text processing systems - text summarization, information extraction, information retrieval, etc., including domain-specific applications; (ii) natural language interfaces; (iii) NLP in the context of the WWW and digital libraries; and (iv) evaluation of NLP systems.
Variance Reduced Stochastic Gradient Descent with Neighbors
Stochastic Gradient Descent (SGD) is a workhorse in machine learning, yet its
slow convergence can be a computational bottleneck. Variance reduction
techniques such as SAG, SVRG and SAGA have been proposed to overcome this
weakness, achieving linear convergence. However, these methods are either based
on computations of full gradients at pivot points, or on keeping per data point
corrections in memory. Therefore speed-ups relative to SGD may need a minimal
number of epochs in order to materialize. This paper investigates algorithms
that can exploit neighborhood structure in the training data to share and
re-use information about past stochastic gradients across data points, which
offers advantages in the transient optimization phase. As a side-product we
provide a unified convergence analysis for a family of variance reduction
algorithms, which we call memorization algorithms. We provide experimental
results supporting our theory.
Comment: Appears in: Advances in Neural Information Processing Systems 28 (NIPS 2015). 13 pages.
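For a concrete picture of such a memorization algorithm, here is a minimal Python sketch of SAGA on a least-squares problem: a table of per-example gradients is kept in memory, and its running mean corrects each stochastic step. Problem size and step size are illustrative assumptions, not the paper's experimental setup.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 200, 10
    A = rng.standard_normal((n, d))
    b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

    w = np.zeros(d)
    table = A * (A @ w - b)[:, None]        # memorized per-example gradients
    table_mean = table.mean(axis=0)
    step = 1.0 / (3 * (np.linalg.norm(A, axis=1) ** 2).max())

    for _ in range(10 * n):                 # ~10 effective epochs
        i = rng.integers(n)
        g = A[i] * (A[i] @ w - b[i])        # fresh stochastic gradient
        w -= step * (g - table[i] + table_mean)   # variance-reduced update
        table_mean += (g - table[i]) / n    # maintain the mean in O(d) per step
        table[i] = g

    print(np.linalg.norm(A @ w - b))        # residual shrinks toward noise level

Unlike SVRG, no full-gradient passes at pivot points are needed; the price is the O(n) gradient table, which is exactly the memory/computation trade-off the abstract contrasts.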
Generalized Inpainting Method for Hyperspectral Image Acquisition
A recently designed hyperspectral imaging device enables multiplexed
acquisition of an entire data volume in a single snapshot thanks to
monolithically-integrated spectral filters. Such an agile imaging technique
comes at the cost of a reduced spatial resolution and the need for a
demosaicing procedure on its interleaved data. In this work, we address both
issues and propose an approach inspired by recent developments in compressed
sensing and analysis sparse models. We formulate our superresolution and
demosaicing task as a 3-D generalized inpainting problem. Interestingly, the
target spatial resolution can be adjusted for mitigating the compression level
of our sensing. The reconstruction procedure uses a fast greedy method called
Pseudo-inverse IHT. We also show in simulations that a random arrangement of the spectral filters on the sensor is preferable to a regular mosaic layout, as it improves the quality of the reconstruction. The efficiency of our technique is demonstrated through numerical experiments on both synthetic and real data as acquired by the snapshot imager.
Comment: Keywords: Hyperspectral, inpainting, iterative hard thresholding, sparse models, CMOS, Fabry-Pérot.
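As a toy 1-D analogue of the generalized inpainting formulation (not the paper's Pseudo-inverse IHT or its 3-D hyperspectral setting), the Python sketch below runs plain iterative hard thresholding to recover a DCT-sparse signal from a random subset of its samples; the sizes, sparsity level, and iteration count are illustrative.

    import numpy as np
    from scipy.fft import dct, idct

    rng = np.random.default_rng(1)
    n, k = 256, 8
    coef = np.zeros(n)
    coef[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    x = idct(coef, norm='ortho')            # signal, sparse in the DCT domain
    mask = rng.random(n) < 0.4              # observe ~40% of the samples
    y = mask * x                            # inpainting measurements

    z = np.zeros(n)                         # estimated DCT coefficients
    for _ in range(100):
        r = y - mask * idct(z, norm='ortho')      # residual on observed samples
        z = z + dct(mask * r, norm='ortho')       # gradient step (unit step size)
        support = np.argsort(np.abs(z))[-k:]      # hard threshold: keep k largest
        pruned = np.zeros(n)
        pruned[support] = z[support]
        z = pruned

    err = np.linalg.norm(idct(z, norm='ortho') - x) / np.linalg.norm(x)
    print(f"relative error: {err:.3f}")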
