Thought/ visual processing: ctrl + x, y, z, v
With this text I wish to revisit a long-standing preoccupation of mine, which addresses the extent to which the digital medium may have brought about a paradigm shift in creative process; one which has been effectuated through a conversion of the creative medium itself - from atoms to bits. As a visual practitioner, what is particularly significant to my inquiry is that this change in medium is deemed to have brought about a change in language that affects the very nature of autographic work which comes into being as visual output: following Goodman’s definitions of allographic and autographic output (1976), McCullough (1996) asserts that visual artworks may now be considered allographic productions, since they presently share the attributes of notationally based allographic work, which has traditionally manifested only as music or as literary output. The result is a work environment which opens the doors wide to unprecedented levels of non-linear process and experimentation whilst engaged in the visually creative act.
An AER Spike-Processing Filter Simulator and Automatic VHDL Generator Based on Cellular Automata
Spike-based systems are neuro-inspired circuit implementations traditionally
used for sensory systems or sensor signal processing. Address-Event-Representation
(AER) is a neuromorphic communication protocol for transferring asynchronous
events between VLSI spike-based chips. These neuro-inspired implementations
allow developing complex, multilayer, multichip neuromorphic systems and have
been used to design sensor chips, such as retinas and cochleas, processing
chips, e.g. filters, and learning chips. Furthermore, Cellular Automata (CA)
are a bio-inspired processing model for problem solving. This approach divides
the processing among synchronous cells, which change their states at the same
time in order to reach the solution. This paper presents a software simulator
able to gather several spike-based elements into the same workspace in order to
test a CA architecture based on AER before a hardware implementation.
Furthermore, this simulator produces VHDL code for testing the AER-CA
architecture on the FPGA of the USBAER AER tool.
Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
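The synchronous CA update described above, in which every cell reads the previous generation and all cells change state at the same time, can be sketched as follows; the rule and state are illustrative and not taken from the paper's simulator:

```python
# Minimal sketch of a synchronous 1D cellular automaton (illustrative,
# not the paper's AER-CA simulator).

def ca_step(cells, rule):
    """One synchronous update: every cell changes state simultaneously,
    based only on the previous generation (here: left, self, right)."""
    n = len(cells)
    return [rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

# Example rule: XOR of the two neighbours (elementary rule 90).
rule90 = lambda left, centre, right: left ^ right

state = [0, 0, 0, 1, 0, 0, 0]
state = ca_step(state, rule90)  # single seed splits into two live cells
```

Because the new generation is built from a snapshot of the old one, no cell ever sees a half-updated neighbour, which is the property the hardware implementation must also preserve.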
Readers and Reading in the First World War
This essay consists of three individually authored and interlinked sections. In ‘A Digital Humanities Approach’, Francesca Benatti looks at datasets and databases (including the UK Reading Experience Database) and shows how a systematic, macro-analytical use of digital humanities tools and resources might yield answers to some key questions about reading in the First World War. In ‘Reading behind the Wire in the First World War’ Edmund G. C. King scrutinizes the reading practices and preferences of Allied prisoners of war in Mainz, showing that reading circumscribed by the contingencies of a prison camp created a unique literary community, whose legacy can be traced through their literary output after the war. In ‘Book-hunger in Salonika’, Shafquat Towheed examines the record of a single reader in a specific and fairly static frontline, and argues that in the case of the Salonika campaign, reading communities emerged in close proximity to existing centres of print culture. The focus of this essay moves from the general to the particular, from the scoping of large datasets, to the analyses of identified readers within a specific geographical and temporal space. The authors engage with the wider issues and problems of recovering, interpreting, visualizing, narrating, and representing readers in the First World War.
Pre-Processing and Post-Processing in Group-Cluster Mergers
Galaxies in clusters are more likely to be of early type and to have lower
star formation rates than galaxies in the field. Recent observations and
simulations suggest that cluster galaxies may be `pre-processed' by group or
filament environments and that galaxies that fall into a cluster as part of a
larger group can stay coherent within the cluster for up to one orbital period
(`post-processing'). We investigate these ideas by means of a cosmological
N-body simulation and idealized N-body plus hydrodynamics simulations of a
group-cluster merger. We find that group environments can contribute
significantly to galaxy pre-processing by means of enhanced galaxy-galaxy
merger rates, removal of galaxies' hot halo gas by ram pressure stripping, and
tidal truncation of their galaxies. Tidal distortion of the group during infall
does not contribute to pre-processing. Post-processing is also shown to be
effective: galaxy-galaxy collisions are enhanced during a group's pericentric
passage within a cluster, the merger shock enhances the ram pressure on group
and cluster galaxies, and an increase in local density during the merger leads
to greater galactic tidal truncation.
Comment: Accepted for publication in MNRAS. 25 pages, 21 figures
Variance Reduced Stochastic Gradient Descent with Neighbors
Stochastic Gradient Descent (SGD) is a workhorse in machine learning, yet its
slow convergence can be a computational bottleneck. Variance reduction
techniques such as SAG, SVRG and SAGA have been proposed to overcome this
weakness, achieving linear convergence. However, these methods are either based
on computations of full gradients at pivot points, or on keeping per data point
corrections in memory. Therefore speed-ups relative to SGD may need a minimal
number of epochs in order to materialize. This paper investigates algorithms
that can exploit neighborhood structure in the training data to share and
re-use information about past stochastic gradients across data points, which
offers advantages in the transient optimization phase. As a side-product we
provide a unified convergence analysis for a family of variance reduction
algorithms, which we call memorization algorithms. We provide experimental
results supporting our theory.
Comment: Appears in: Advances in Neural Information Processing Systems 28 (NIPS 2015). 13 pages
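The "memorization" idea, keeping a per-data-point gradient correction in memory and mixing it into each stochastic step, can be illustrated with a minimal SAGA-style loop. The toy objective, step size, and iteration count below are assumptions for the sketch, not the paper's experiments:

```python
import random

# Hedged sketch of a memorization-style variance-reduction scheme
# (SAGA-like): one stored gradient per data point, updated as it is
# revisited. Problem and hyperparameters are illustrative only.

def saga(xs, lr=0.1, steps=2000, seed=0):
    """Minimize (1/n) * sum_i (theta - x_i)^2 with a SAGA-style update.
    Each step combines the fresh stochastic gradient with the memorized
    corrections, shrinking the variance of the update direction."""
    rng = random.Random(seed)
    n = len(xs)
    theta = 0.0
    memory = [2 * (theta - x) for x in xs]   # alpha_i: stored gradients
    avg = sum(memory) / n                    # running average of memory
    for _ in range(steps):
        i = rng.randrange(n)
        g = 2 * (theta - xs[i])              # fresh gradient at point i
        theta -= lr * (g - memory[i] + avg)  # variance-reduced step
        avg += (g - memory[i]) / n           # keep the average in sync
        memory[i] = g                        # memorize latest gradient
    return theta

theta = saga([1.0, 2.0, 3.0, 4.0])  # converges near the mean, 2.5
```

Unlike plain SGD, the update direction `g - memory[i] + avg` has vanishing variance as `theta` approaches the optimum, which is what enables linear convergence; the memory table is the per-data-point cost the abstract refers to.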
Pay One, Get Hundreds for Free: Reducing Cloud Costs through Shared Query Execution
Cloud-based data analysis is nowadays common practice because of the lower
system management overhead as well as the pay-as-you-go pricing model. The
pricing model, however, is not always suitable for query processing as heavy
use results in high costs. For example, in query-as-a-service systems, where
users are charged per processed byte, collections of queries accessing the same
data frequently can become expensive. The problem is compounded by the limited
options for the user to optimize query execution when using declarative
interfaces such as SQL. In this paper, we show how, without modifying existing
systems and without the involvement of the cloud provider, it is possible to
significantly reduce the overhead, and hence the cost, of query-as-a-service
systems. Our approach is based on query rewriting so that multiple concurrent
queries are combined into a single query. Our experiments show the aggregated
amount of work done by the shared execution is smaller than in a
query-at-a-time approach. Since queries are charged per byte processed, the
cost of executing a group of queries is often the same as executing a single
one of them. As an example, we demonstrate how the shared execution of the
TPC-H benchmark is up to 100x and 16x cheaper in Amazon Athena and Google
BigQuery than using a query-at-a-time approach while achieving a higher
throughput.
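As a toy illustration of shared execution (not the paper's rewriter, and independent of any specific cloud system), two aggregate queries over the same data can be answered by a single shared scan, so a per-byte-billed system charges for one pass instead of two:

```python
# Illustrative sketch of shared query execution. The table and queries
# are made up; in SQL this corresponds to rewriting two statements into
# one SELECT with multiple aggregate expressions over a single scan.

rows = [
    {"region": "EU", "amount": 10},
    {"region": "US", "amount": 20},
    {"region": "EU", "amount": 5},
]

# Query-at-a-time: each query scans all rows, so the data is billed twice.
q1 = sum(r["amount"] for r in rows)               # SELECT SUM(amount) ...
q2 = sum(1 for r in rows if r["region"] == "EU")  # SELECT COUNT(*) ... WHERE region = 'EU'

# Shared execution: one scan answers both queries at once.
total, eu_count = 0, 0
for r in rows:
    total += r["amount"]
    eu_count += 1 if r["region"] == "EU" else 0

assert (total, eu_count) == (q1, q2)  # same answers, one pass over the data
```

Under bytes-processed pricing, the shared scan costs the same as either query alone, which is the effect the abstract reports at benchmark scale.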
Multifocal image processing
In this paper, we present a processing method for digital images from
an optical microscope. High-pass type filters are generally used for image focusing, as they enhance the high spatial frequencies. These filters are not appropriate if the lack of sharpness is caused by other factors. On the other hand, the (un)sharpness can be taken as an advantage and used for studies of the spatial distribution of structures in the observed scene. In many cases, it is possible to construct a three-dimensional model of the observed object by analyzing image sharpness. Interesting two-dimensional images and a three-dimensional model can be obtained by applying the theory for multifocal image processing described in this paper. We improve the quality of the results compared to previous methods by using the Fourier transform for the analysis of local sharpness in the images.
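A hedged sketch of the multifocal idea follows; local sharpness is measured here with a simple Laplacian response rather than the paper's Fourier-based analysis. Per pixel, the sharpest image in a focus stack is selected, and the index of that image serves as a rough depth map:

```python
# Illustrative focus-stacking sketch (assumed method, not the paper's):
# images are lists of lists of grey values, all the same size.

def sharpness(img, y, x):
    """Local sharpness as the absolute Laplacian response (high-pass)."""
    return abs(4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
               - img[y][x - 1] - img[y][x + 1])

def focus_stack(stack):
    """All-in-focus image plus depth map from a list of focal planes."""
    h, w = len(stack[0]), len(stack[0][0])
    fused = [[stack[0][y][x] for x in range(w)] for y in range(h)]
    depth = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):          # borders keep the first plane
        for x in range(1, w - 1):
            k = max(range(len(stack)),
                    key=lambda i: sharpness(stack[i], y, x))
            fused[y][x] = stack[k][y][x]
            depth[y][x] = k            # which focal plane was sharpest
    return fused, depth
```

The depth map is what makes a three-dimensional model possible: pixels sharpest in nearby focal planes lie at the corresponding object distances.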
Polyimide processing additives
A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly-(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability, with earlier onset of stretching by TMA.
