1,479 research outputs found

    From Financing Social Insurance to Insuring Financial Markets: The Socialisation of Risk and the Privatisation of Profit in an Age of Irresponsibility

    Commentaries on the financial meltdown that began with Lehman Brothers’ collapse in September 2008 trace its origins to greedy bankers exploiting lax regulatory practices to take excessive risks through exotic and arcane financial instruments. While not wishing to demur from this analysis, this chapter takes issue with the frequent failure to acknowledge that this situation came about as a consequence of the (mis)application of state power over the past 50 years (see Helleiner 1994). Starting with the tacit support for the development of the Euromarkets in the 1960s and culminating with the responses to the turmoil of 2008-2010, the chapter describes how and why states, when confronted with a choice between restraining and liberalising markets, have invariably plumped for the latter, simultaneously cultivating the ideal conditions for the propagation of financial crises and undermining their capacity to cope with the consequences. Much of this is accounted for by states pursuing national interests, in particular funding deficits and extending the competitiveness of financial services industries, but it also reflects the faith amongst financial policymaking elites in the perspicacity of markets presented by neo-classical and neo-Austrian economic paradigms, which insist the state confine itself to the alleviation of market failures. Even ardent proponents accept that these ideas have been badly tarnished by the present financial imbroglio. Nevertheless, proposals enumerating a greater role for the state have been quietly junked in favour of ‘market friendly’ reforms. It is argued that the reluctance to dispute these ideas will perpetuate or even exacerbate the problems they seek to address. Here a broader role for the state is advocated, rooted in a political vision which does not assert that the world is composed of self-interested, atomistic individuals and firms motivated solely by profit.
In contrast to Hayekian models, which postulate markets as a discovery process for entrepreneurs to innovate in pursuit of profit and their private good, the chapter argues that the democratic process, including at the level of international and global governance, can and should be a discovery process for innovations in pursuit of public goods, not least global financial stability.

    Using a simple 2D steady-state saturated flow and reactive transport model to elucidate denitrification patterns in a hillslope aquifer

    In the last 50 years, agricultural intensification has resulted in increasing nutrient losses that threaten the health of the lakes on the volcanic plateau of New Zealand’s North Island. As part of our efforts to understand the transport and transformations of nitrogen in this landscape, the 2D vertical groundwater transport model AquiferSim 2DV was used to simulate water flow, nitrate transport, denitrification, and discharge to surface waters in a hillslope adjacent to a wetland and stream discharging into Lake Taupo, Australasia’s largest lake. AquiferSim 2DV is a steady-state model using the finite-difference stream function method for flow modelling and the finite-volume mixing cell method for contaminant transport modelling. The ratio of horizontal to vertical hydraulic conductivity must be specified within the aquifer domain, as must effective porosity and denitrification rates. Boundary conditions consist of recharge fluxes and contaminant concentrations, as well as the assumed zone of discharge. Hydrodynamic dispersion is simulated through numerical dispersion, which depends on grid resolution. Denitrification reactions within each computational cell may include both zero-order and first-order rates. All parameters may be spatially heterogeneous. Previous applications of this model have been to essentially horizontal aquifer systems. By contrast, this hillslope system has sloping material layers and a dynamic and sloping water table. Extensions were made to AquiferSim 2DV, including representation of converging/diverging flow, which allowed a 2D steady-state model of this system to be developed. Comparison of model predictions with detailed water level and hydrochemical data from the site, however, showed that the model’s attractive simplicity in this case precluded adequate characterisation of what is essentially a 3D, transient system.
While the model produced reasonable agreement with the concentration patterns under an average water table profile, its predictions of oxygen and nitrate concentrations under low summer and high spring water table conditions were poor. The seasonal changes reflected an annual recharge pulse of fresh, oxidised water followed by gradual oxygen depletion until the next recharge pulse in the following year, an essentially transient phenomenon which could not be represented using a steady-state model. This in itself has provoked fresh thinking about the dynamic nature of flow and chemistry at the site.
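The per-cell balance described above (advective supply from the upstream cell against zero-order plus first-order denitrification loss) can be sketched in miniature. The following is a hedged, self-contained illustration of a finite-volume mixing-cell chain marched to steady state; it is far simpler than AquiferSim 2DV, and all function names, parameters, and values are invented for the sketch:

```python
import numpy as np

def mixing_cell_chain(c_in, q, volumes, k0, k1, n_steps=2000, dt=0.01):
    """March a chain of instantly-mixed cells to steady state: each cell
    receives flow q carrying the upstream cell's concentration, and loses
    nitrate to denitrification at a zero-order (k0) plus first-order
    (k1 * c) rate, mirroring the per-cell reaction terms in the text."""
    c = np.zeros(len(volumes))
    for _ in range(n_steps):
        upstream = np.concatenate(([c_in], c[:-1]))
        advection = q * (upstream - c) / volumes
        loss = np.minimum(k0 + k1 * c, c / dt)  # never remove more than is present
        c = np.maximum(c + dt * (advection - loss), 0.0)
    return c

# Five equal cells down the flow path; nitrate declines toward discharge.
profile = mixing_cell_chain(c_in=10.0, q=1.0, volumes=np.ones(5), k0=0.1, k1=0.05)
```

At steady state each cell's concentration settles where advective supply balances denitrification loss, which is the balance the 2D model resolves cell by cell across a heterogeneous domain.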

    Syntheses of 7-substituted anthra[2,3-b]thiophene derivatives and naphtho[2,3-b:6,7-b’]dithiophene

    7-R-anthra[2,3-b]thiophene derivatives (1, R = H, Me, i-Pr, MeO) are prepared in three steps (in average overall yield >50%) starting from (E)-4-RC6H4CH2(HOCH2)C=CI(CH2OH). The latter are commercial or readily prepared from 2-butyne-1,4-diol and ArCH2Cl (both costin

    Non-equilibrium Quantum Simulations using Thimble Methods

    Our understanding of time-dependent phenomena in particle physics is hampered by our inability to effectively investigate non-equilibrium phenomena, even using computers, due to the ‘Sign Problem’. This problem means that for nonperturbative theories it is functionally impossible to evaluate the Feynman path integrals needed to calculate the expectation values of operators. Here I present a possible remedy to this problem in the form of Generalised Thimble Techniques which, at great computational cost, suppress the Sign Problem and allow us to make headway in these investigations. The formalism for moving these path integrals onto a discrete lattice is discussed, followed by an explanation of the mechanics of these Thimble techniques. These techniques are then compared, both in terms of approach and in terms of performance, to the other prominent approach to dealing with the Sign Problem, Langevin Dynamics. The implementation of these techniques is then demonstrated by comparing my results with the literature, and how best to compensate for the computational cost is considered. The discussion then turns to how best to take advantage of the non-perturbative nature of these calculations. The lattice is modified: the characteristic imaginary-time extension is removed and replaced with a bespoke density matrix, which is sampled independently of the thimble. This removal of the imaginary-time extension opens the door to non-equilibrium density matrices, but initially the focus is on ensuring that these modifications are valid, and reproduction of equilibrium results takes priority. Unfortunately, the requirement to sample the density matrix independently of the thimble poses new computational problems. The focus therefore briefly returns to optimisations, this time focusing on physical parameters of the system rather than numerical tricks or approximations.
With these optimisations, higher-dimensional simulations are considered, but are still found to be too intensive for the available hardware. Instead, a second field is introduced, allowing the system to start out of equilibrium in a different way. This second field has a higher mass and occupation number, and two different interactions with a range of coupling strengths are considered. This means ‘particle’ decay can be seen between the two fields. The technique is shown to be promising, but hampered by its high computational cost. Possible routes to reducing this cost, through both improvements to the algorithm and promising developments in hardware, are discussed.
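The Sign Problem and the contour-deformation idea behind thimble methods can be made concrete with a hedged one-dimensional toy (a Gaussian with an oscillatory phase, nothing like the full field theory of the thesis): naive reweighting on the real axis leaves a small average phase, while rotating the integration contour onto the steepest-descent path, the simplest possible ‘thimble’, removes the oscillation entirely. All numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
a = 5.0  # strength of the oscillatory term (illustrative)

# Target: Z = integral of exp(-(1 - i*a) x^2 / 2) dx, known exactly.
exact = np.sqrt(2 * np.pi / (1 - 1j * a))

# Naive reweighting on the real axis: sample p(x) ~ exp(-x^2/2) and
# carry the oscillating factor exp(i a x^2 / 2) as a complex weight.
x = rng.normal(size=200_000)
phases = np.exp(1j * a * x**2 / 2)
avg_phase = abs(phases.mean())  # shrinks as a grows: the sign problem

# Deform the contour: rotate x -> x * exp(i*phi), with phi chosen so the
# exponent -(1 - i*a) z^2 / 2 becomes purely real on the new path (the
# steepest-descent contour through the saddle at the origin).
phi = 0.5 * np.arctan(a)
r = abs(1 - 1j * a)
# On the rotated contour the integrand is a plain Gaussian times a
# constant phase from the Jacobian dz = exp(i*phi) dx: no oscillation.
thimble_value = np.exp(1j * phi) * np.sqrt(2 * np.pi / r)
```

In the field-theory setting the deformation is found numerically by gradient flow at great computational cost, which is the trade-off the abstract describes, but the mechanism is the same: trade an oscillatory integrand on the real domain for a slowly varying one on a deformed manifold.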

    Nitrogen fertiliser management with zone characterisation in grazed pasture systems

    Spatial information is frequently used for managing arable crops. Management zones are often developed to enable accurate fertiliser supply for local crop needs. This helps to avoid excessive introduction of nutrients, such as nitrogen, into the environment, and to reduce fertiliser costs. Despite the success of this concept in arable farming, it is a poorly adopted practice for the management of grazed pastures. Grazed pasture systems have an additional level of complexity compared to monoculture, annual crops. Pastures are typically perennial in nature with short intervals between harvests (by a grazing animal) and therefore require fertiliser applications to maintain biomass production. Additionally, pastures often consist of two or more desirable plant species, and the distribution of waste from livestock results in many small patches of very high nutrient content. We propose a concept to create management zones of grazed dairy pastures, using the spatial attributes of pasture paddocks. The target will be to identify zones of most likely high nitrogen availability and to use this information to estimate the required local fertiliser target. The spatial information required for this approach may include: soil variation, irrigation, animal density, slope, farm infrastructure (e.g. troughs and shelter) and previous pasture growth. Using a geographical information system, the spatial information for an area can be utilised to create map layers. These layers can then be spatially related, and zones for the application of varying amounts of fertiliser can be developed at the sub-paddock scale. We are in the process of deriving response curves for N-ramps on selected paddocks in NZ and Australia which have sufficient spatial variability of the mentioned site characteristics.
We undertook a theoretical feasibility study comparing uniform and variable nitrogen fertiliser application as an initial investigation of the potential benefit of zone management. The integrated result (value of feed minus cost of fertiliser minus cost of environmental impact) of applying nitrogen variably across a paddock of dynamic soil using a non-linear response function was slightly lower than for uniform application. It is expected, however, that increased understanding of spatial variables in pastures will increase the benefits of zone management.
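The shape of such a uniform-versus-variable comparison can be sketched with a toy calculation. This hedged sketch uses an invented Mitscherlich response curve and made-up prices, and it omits the dynamic soil behaviour and the environmental-cost term that shaped the study's own result (so here the variable strategy happens to come out ahead, purely through the concavity of the response); nothing below reflects the study's actual parameters:

```python
import numpy as np

def pasture_response(n_total, y_max=15.0, k=0.015):
    """Illustrative Mitscherlich dry-matter response (t DM/ha) to total
    plant-available nitrogen (kg N/ha). Parameters are made up."""
    return y_max * (1.0 - np.exp(-k * n_total))

# Three sub-paddock zones with different background soil N supply,
# e.g. stock-camp and trough areas versus the open paddock (invented).
n_soil = np.array([40.0, 120.0, 250.0])  # kg N/ha before fertiliser
feed_value = 300.0  # $/t DM (assumed)
fert_cost = 1.5     # $/kg N (assumed)

def net_value(n_applied):
    """Mean $/ha of (value of feed minus cost of fertiliser)."""
    yields = pasture_response(n_soil + n_applied)
    return float((feed_value * yields - fert_cost * n_applied).mean())

uniform = net_value(np.full(3, 100.0))                # same rate everywhere
variable = net_value(np.array([180.0, 100.0, 20.0]))  # more N where soil N is low
```

Because the response is concave, shifting fertiliser from N-rich to N-poor zones at the same total application raises total yield in this toy; whether that gain survives transport costs, soil dynamics and environmental penalties is exactly what the feasibility study tested.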

    Cuprate addition to a 6-substituted pentafulvene: preparation of sec-alkyl substituted titanocene dichlorides and their biological activity

    The copper-catalysed (10 mol-% CuBr·SMe2, CuCN·LiCl or CuI/PPh3) addition of RMgBr to the pentafulvene 1-(cyclopenta-2,4-dien-1-ylidenemethyl)-2-methoxybenzene allows the formation of cyclopentadienyl derivatives with α-CHR(2-MeOPh) sidechains (R = Me, Et, nBu, iBu, allyl, Ph) without H– transfer. The deprotonation of these sec-alkyl-substituted cyclopentadienyls followed by the addition of TiCl4 allows the isolation of TiCl2{η5-C5H4CHR(2-OMePh)} as rac/meso mixtures that show activity against human colon, breast and pancreatic cell lines (GI50 2.3–42.4 μM).

    Practical Game Design Tool: State Explorer

    This paper introduces a computer-game design tool which enables game designers to explore and develop game mechanics for arbitrary game systems. The tool is implemented as a plugin for the Godot game engine. It allows the designer to view an abstraction of a game’s states while in active development and to quickly view and explore which states are navigable from which other states. This information is used to rapidly explore, validate and improve the design of the game. The tool is most practical for game systems which are computer-explorable within roughly 2000 states. The tool is demonstrated by presenting how it was used to create a small, yet complete, commercial game.
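The exploration idea generalises beyond any one engine: given an initial state and a successor function, a breadth-first search enumerates which states are navigable from which others, up to a practical cap like the roughly 2000 states mentioned above. A minimal sketch (the plugin itself is Godot-based; every name and the toy game here are invented for illustration):

```python
from collections import deque

def explore_states(initial, moves, limit=2000):
    """Breadth-first exploration of a game's state graph: enumerate every
    state reachable from `initial` via the successor function `moves`,
    up to a practical cap, and flag dead-end states with no moves left."""
    seen = {initial}
    edges = {}
    queue = deque([initial])
    while queue and len(seen) <= limit:
        state = queue.popleft()
        edges[state] = succs = list(moves(state))
        for s in succs:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    dead_ends = [s for s in edges if not edges[s]]
    return seen, edges, dead_ends

# Toy 'game': a counter starting at 0 that may +1 or double, capped at 20.
states, graph, stuck = explore_states(0, lambda n: [m for m in (n + 1, n * 2) if m <= 20])
```

Surfacing `dead_ends` and the navigability graph to the designer during development is the kind of feedback the tool provides for validating a mechanic.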

    Optimisation of Thimble Simulations and Quantum Dynamics of Multiple Fields in Real Time

    We apply the Generalised Thimble approach to the computation of exact path integrals and correlators in real-time quantum field theory. We first investigate the details of the numerical implementation and ways of optimising the algorithm. We subsequently apply the method to an interacting two-field system in 0+1 dimensions, illustrating the scope for addressing realistic physical processes using real-time Generalised Thimble computations.