Self-Organising Stochastic Encoders
The processing of mega-dimensional data, such as images, scales linearly with
image size only if fixed-size processing windows are used. It would be very
useful to be able to automate the process of sizing and interconnecting the
processing windows. A stochastic encoder that is an extension of the standard
Linde-Buzo-Gray vector quantiser, called a stochastic vector quantiser (SVQ),
includes this required behaviour amongst its emergent properties, because it
automatically splits the input space into statistically independent subspaces,
which it then separately encodes. Various optimal SVQs have been obtained, both
analytically and numerically. Analytic solutions which demonstrate how the
input space is split into independent subspaces may be obtained when an SVQ is
used to encode data that lives on a 2-torus (e.g. the superposition of a pair
of uncorrelated sinusoids). Many numerical solutions have also been obtained,
using both SVQs and chains of linked SVQs: (1) images of multiple independent
targets (encoders for single targets emerge), (2) images of multiple correlated
targets (various types of encoder for single and multiple targets emerge), (3)
superpositions of various waveforms (encoders for the separate waveforms emerge
- this is a type of independent component analysis (ICA)), (4) maternal and
foetal ECGs (another example of ICA), (5) images of textures (orientation maps
and dominance stripes emerge). Overall, SVQs exhibit a rich variety of
self-organising behaviour, which effectively discovers the internal structure
of the training data. This should have an immediate impact on "intelligent"
computation, because it reduces the need for expert human intervention in the
design of data processing algorithms.

Comment: 23 pages, 23 figures
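The standard Linde-Buzo-Gray quantiser that the SVQ extends can be sketched as follows. This is an illustrative, minimal k-means-style refinement written for this summary, not the paper's stochastic variant; the function name `lbg_vq` and its parameters are assumptions, not from the original work:

```python
import numpy as np

def lbg_vq(data, n_codes=4, n_iters=50, seed=0):
    """Minimal Linde-Buzo-Gray-style vector quantiser (k-means refinement).

    Returns the trained codebook and each input vector's code index.
    """
    rng = np.random.default_rng(seed)
    # Initialise the codebook with randomly chosen training vectors.
    codebook = data[rng.choice(len(data), size=n_codes, replace=False)].astype(float)
    for _ in range(n_iters):
        # Deterministic nearest-code assignment under Euclidean distance.
        dists = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move every code to the centroid of the vectors it quantises.
        for k in range(n_codes):
            members = data[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, labels
```

The stochastic extension described in the abstract replaces the deterministic nearest-code assignment above with a probabilistic one, and it is that change which the paper credits with the emergent splitting of the input space into independently encoded subspaces.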
Knowing the honey bee : a multispecies ethnography : a thesis presented in partial fulfilment of the requirements for the degree of Master of Arts in Social Anthropology at Massey University, Palmerston North, New Zealand
Multispecies scholarship argues that the non-human has been relegated to the
background of discussions about who and what inhabits and shapes the world. This
thesis engages with this discussion as an experimental multispecies ethnography with
honey bees in Manawatu, New Zealand. I aim to centre the honey bee in ethnography
through engagement in the practice of fieldwork as well as the representation of the
findings of this engagement. The honey bee is commonly known as an introduced,
domesticated species, kept by humans in beehives in apiculture. This conceals the
agency of the honey bee, rendering it passive, productive and compliant to the desires
of humans, or in need of human intervention for survival. To view the agency of the
bee I undertook embodied, performative ethnography, interviewing beekeepers and
becoming one myself. My methodology, which was shaped by the bee, traced the
networks that honey bees were enrolled in. Encounters were awkward, one-sided, and
sometimes dangerous.
The representation of honey bees demands an approach which attends to multiple,
distinct accounts of honey bee worlds, because the bee is a lively agent, contributing
to, experiencing, and communicating about the multiple networks in which it is
engaged. As such, the findings of this thesis are presented in three accounts of
encounters with honey bees. These accounts are distinct, capturing the honey bee in
different networks, but are also distinct in their narrative styles, progressing from a
description of honey networks in the spirit of Actor-Networks, to writing with a honey
bee narrator in poetry. Ethnographic representation is inevitably partial and an act of
imagination. However, becoming sensitive to the ‘bee-ness’ of the bee: the waggle,
hum and sting, and employing narrative inspired by the multisensory apiary, in other
words, shaping representation with honey bees in mind, is an act of privileging honey
bees in writing, and exploring what more can be said of, and with, the bee.
The Social Cost of Inertia: How Cost-Benefit Incoherence Threatens to Derail U.S. Climate Action
As EPA rolls out controversial regulations on power plant emissions of greenhouse gases, a vocal group of legislators, industry groups, and legal and economic scholars is crying foul, arguing EPA didn't follow the rules when it conducted its cost-benefit analyses of these regulations.
This article traces the origin of these cost-benefit rules, finding that the methodological handbook alleged to be the worldwide gold standard was actually developed through a fundamentally flawed process, one that intentionally excluded majority viewpoints in several relevant academic disciplines. Unsurprisingly, it also contains serious methodological mistakes. If these mistakes were to be applied to regulations addressing domestic greenhouse gas emissions (that is, if EPA and other executive agencies do follow the rules, as demanded by the critics of these regulations in Congress, academia and regulated industry), this injection of both outright irrationality and arguably unethical subjective biases into domestic regulatory policy would threaten to derail substantive U.S. action on climate change.
This article also describes how the executive order that spawned these rules is impossible to comply with literally, because it creates a series of max/min problems with no common solution. This creates a conundrum that, over and over again, is resolved under these cost-benefit rules in favor of maximizing quantifiable, monetized net benefits, at the expense of promoting a set of competing yet also important rights- and duty-based factors that the text of the parent executive order ostensibly puts on equal footing.
Grain exports and inflation
Inflation (Finance); Grain trade; Food prices