2,539 research outputs found

    aVsIs: An Analytical-Solution-Based Solver for Model-Predictive Control With Hexagonal Constraints in Voltage-Source Inverter Applications

    Get PDF
    The theory of a new analytical-solution-based algorithm for calculating the optimal solution in model-predictive control applications with hexagonal constraints is discussed in this article. Three-phase voltage-source inverters for power-electronic and electric motor drive applications are the target of the proposed method. Indirect model-predictive control requires a constrained quadratic programming (QP) solver to calculate the optimal solution. Most QP solvers use numerical algorithms, which may result in a prohibitive computational burden. However, the optimal constrained solution can be calculated analytically when the control horizon is limited to the first step. A computationally efficient algorithm with a guaranteed maximum number of operations is proposed in this article. A thorough mathematical description of the solver in both the stationary and rotating reference frames is provided. Experimental results on real test rigs featuring either an electric motor or a resistive-inductive load are reported to demonstrate the feasibility of the proposed solver, thus smoothing the way for its implementation in industrial applications. The proposed solver, named aVsIs, is released under the Apache License 2.0 on GitHub, and a free example is available on Code Ocean.
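
    The aVsIs algorithm itself is derived in the article; as a minimal sketch of the horizon-one idea, the Python below projects an unconstrained voltage reference onto the hexagonal voltage limit of a two-level inverter in the alpha-beta plane. This projection equals the constrained optimum only under the simplifying assumption that the QP cost is the plain squared distance to the unconstrained optimum (the general case uses a weighted norm); the vertex radius 2*Vdc/3 and all function names are illustrative, not taken from aVsIs.

```python
import numpy as np

def hexagon_vertices(r):
    """Vertices of the two-level VSI voltage hexagon (vertex radius
    r = 2*Vdc/3) in the stationary alpha-beta plane, counter-clockwise."""
    ang = np.deg2rad(60.0 * np.arange(6))
    return np.stack([r * np.cos(ang), r * np.sin(ang)], axis=1)

def project_onto_hexagon(u, r):
    """Euclidean projection of the point u onto the convex hexagon.

    For a horizon-one cost ||u - u_unc||^2 the constrained optimum is
    exactly this projection of the unconstrained optimum u_unc, so no
    iterative QP solver is needed."""
    V = hexagon_vertices(r)
    inside = True
    best, best_d2 = u, np.inf
    for i in range(6):
        a, b = V[i], V[(i + 1) % 6]
        e, w = b - a, u - a
        if e[0] * w[1] - e[1] * w[0] < 0.0:  # outside this edge's half-plane
            inside = False
        t = np.clip(np.dot(w, e) / np.dot(e, e), 0.0, 1.0)
        p = a + t * e                        # nearest point on segment [a, b]
        d2 = np.dot(u - p, u - p)
        if d2 < best_d2:
            best, best_d2 = p, d2
    return u if inside else best

# Example: clip an infeasible voltage reference for Vdc = 400 V.
r = 2.0 * 400.0 / 3.0
print(project_onto_hexagon(np.array([300.0, 150.0]), r))
```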

    On Variants of CM-triviality

    Full text link
    We introduce a generalization of CM-triviality relative to a fixed invariant collection of partial types, in analogy to the Canonical Base Property defined by Pillay, Ziegler, and Chatzidakis, which generalizes one-basedness. We show that, under this condition, a stable field is internal to the family, and a group of finite Lascar rank has a normal nilpotent subgroup such that the quotient is almost internal to the family.
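
    For context, the classical notions being generalized can be stated as follows (standard definitions, with Cb the canonical base and acl the algebraic closure; the paper's relative versions, defined with respect to the fixed family of partial types, are not reproduced here):

```latex
% One-basedness: for every tuple $a$ and algebraically closed set $B$,
\operatorname{Cb}(\operatorname{stp}(a/B)) \subseteq \operatorname{acl}(a).

% CM-triviality: for algebraically closed sets $A \subseteq B$ with
% $\operatorname{acl}(aA) \cap B = A$,
\operatorname{Cb}(\operatorname{stp}(a/A)) \subseteq
\operatorname{acl}\bigl(\operatorname{Cb}(\operatorname{stp}(a/B))\bigr).
```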

    Global oceanic emission of ammonia: constraints from seawater and atmospheric observations

    Get PDF
    Current global inventories of ammonia emissions identify the ocean as the largest natural source. This source depends on seawater pH, temperature, and the concentration of total seawater ammonia (NHx(sw)), which reflects a balance between remineralization of organic matter, uptake by plankton, and nitrification. Here we compare [NHx(sw)] from two global ocean biogeochemical models (BEC and COBALT) against extensive ocean observations. Simulated [NHx(sw)] are generally biased high. Improved simulation can be achieved in COBALT by increasing the plankton affinity for NHx within observed ranges. The resulting global ocean emission is 2.5 Tg N a⁻¹, much lower than current literature values (7–23 Tg N a⁻¹), including the widely used Global Emissions InitiAtive (GEIA) inventory (8 Tg N a⁻¹). Such a weak ocean source implies that continental sources contribute more than half of atmospheric NHx over most of the ocean in the Northern Hemisphere. Ammonia emitted from oceanic sources is insufficient to neutralize sulfate aerosol acidity, consistent with observations. There is evidence over the Equatorial Pacific for a missing source of atmospheric ammonia that could be due to photolysis of marine organic nitrogen at the ocean surface or in the atmosphere. Accommodating this possible missing source yields a global ocean emission of ammonia in the range 2–5 Tg N a⁻¹, comparable in magnitude to other natural sources from open fires and soils.
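
    As a hedged illustration of why the emission depends on pH, temperature, and [NHx(sw)], the sketch below implements a generic two-film sea-air exchange calculation. The pKa parameterization, Henry constant, transfer velocity, and all example numbers are rough placeholders, not the formulations used in BEC or COBALT.

```python
def nh3_fraction(pH, temp_c):
    """Fraction of total ammonia (NHx = NH3 + NH4+) present as dissolved
    NH3 gas, via an approximate temperature-dependent pKa for NH4+ in
    seawater (coefficients are illustrative)."""
    pKa = 10.0 - 0.032 * temp_c
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

def sea_air_nh3_flux(nhx_sw, pH, temp_c, k_w, nh3_atm, henry):
    """Two-film sea-air NH3 flux, positive from ocean to atmosphere.

    nhx_sw  : total seawater ammonia [mol m-3]
    k_w     : gas-transfer velocity [m s-1]
    nh3_atm : atmospheric NH3 concentration [mol m-3]
    henry   : dimensionless gas-over-liquid Henry constant for NH3
    """
    nh3_sw = nh3_fraction(pH, temp_c) * nhx_sw
    return k_w * (nh3_sw - nh3_atm / henry)

# Illustrative numbers for warm, slightly basic surface water: higher pH
# or temperature raises the NH3 fraction and hence the outgassing flux.
print(sea_air_nh3_flux(nhx_sw=3e-4, pH=8.1, temp_c=25.0,
                       k_w=5e-5, nh3_atm=1e-8, henry=7e-4))
```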

    X-ray diffraction from bone employing annular and semi-annular beams

    Get PDF
    There is a compelling need for accurate, low-cost diagnostics to identify osteo-tissues associated with a high risk of fracture within an individual. To satisfy this requirement, the quantification of bone characteristics such as 'bone quality' needs to exceed that currently provided by densitometry. Bone mineral chemistry and microstructure can be determined from coherent x-ray scatter signatures of bone specimens. Therefore, if these signatures can be measured in vivo to an appropriate accuracy, it should be possible, by extending terms within a fracture risk model, to improve fracture risk prediction. In this preliminary study we present an examination of a new x-ray diffraction technique that employs hollow annular and semi-annular beams to measure aspects of 'bone quality'. We present diffractograms obtained with our approach from ex vivo bone specimens at Mo Kα and W Kα energies. Primary data are parameterized to provide estimates of bone characteristics and to indicate the precision with which these can be determined. We gratefully acknowledge the funding provided by the UK Engineering and Physical Sciences Research Council (EPSRC), grant number EP/K020196/.
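
    Since diffractograms are collected at two anode energies, a short Bragg's-law sketch shows how a measured scattering angle maps to a lattice d-spacing at Mo Kα and W Kα. The wavelengths are standard tabulated values; the hydroxyapatite (002) spacing is quoted as a typical bone-mineral reference, not a result from this study.

```python
import math

# Characteristic Kα wavelengths (angstroms) of the two anode lines
# mentioned in the abstract (standard tabulated values).
WAVELENGTH = {"Mo Ka": 0.7107, "W Ka": 0.2090}

def d_spacing(two_theta_deg, line="Mo Ka", order=1):
    """Lattice d-spacing from Bragg's law: n*lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * WAVELENGTH[line] / (2.0 * math.sin(theta))

# Example: the strong hydroxyapatite (002) reflection (d ~ 3.44 A) sits
# at a much smaller scattering angle with the harder W Ka beam.
for line, lam in WAVELENGTH.items():
    two_theta = 2.0 * math.degrees(math.asin(lam / (2.0 * 3.44)))
    print(f"{line}: (002) at 2-theta = {two_theta:.1f} deg")
```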

    Language of Lullabies: The Russification and De-Russification of the Baltic States

    Get PDF
    This article argues that laws promoting the national languages are a legitimate means for the Baltic states to establish their cultural independence from Russia and the former Soviet Union.

    Intermittent control models of human standing: similarities and differences

    Get PDF
    Two architectures of intermittent control are compared and contrasted in the context of the single inverted pendulum model often used to describe standing in humans. The architectures are similar insofar as they use periods of open-loop control, punctuated by switching events when a switching surface is crossed, to keep the system state trajectories close to trajectories leading to equilibrium. The architectures differ in two significant ways. Firstly, in one case the open-loop control trajectory is generated by a system-matched hold, and in the other case the open-loop control signal is zero. Secondly, prediction is used in one case but not the other. The former difference is examined in this paper. The zero-control alternative leads to periodic oscillations associated with limit cycles, whereas the system-matched alternative gives trajectories (including homoclinic orbits) which contain the equilibrium point and do not have oscillatory behaviour. Despite this difference in behaviour, it is further shown that behaviour can appear similar when either the system is perturbed by additive noise or the system-matched trajectory generation is perturbed. The purpose of the research is to arrive at a common approach for understanding the theoretical properties of the two alternatives, with the twin aims of choosing which provides the best explanation of current experimental data (which may not, by itself, distinguish between the two alternatives) and suggesting future experiments to distinguish between them.
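
    A minimal simulation sketch of the two holds, under strong simplifying assumptions: a linearized unit pendulum, Euler integration, clock-driven sampling for the system-matched hold, and a simple dead-zone switching surface standing in for the zero-control architecture. Gains, thresholds, and intervals are illustrative and are not taken from the paper.

```python
import numpy as np

# Linearised inverted pendulum: x = [angle, angular velocity],
# dx/dt = A x + B u (unit length, g = 9.81; illustrative values).
A = np.array([[0.0, 1.0],
              [9.81, 0.0]])
B = np.array([0.0, 1.0])
K = np.array([25.0, 8.0])          # stabilising state-feedback gain
DT, T = 1e-3, 10.0

def step(x, u):
    """One Euler step of the pendulum dynamics."""
    return x + DT * (A @ x + B * u)

def simulate_zero_hold(x0=(0.05, 0.0), threshold=0.02):
    """Zero open-loop control: feedback acts only outside a dead zone,
    so the state tends to hover in sustained oscillation near the
    switching surface rather than settling at equilibrium."""
    x, traj = np.array(x0), []
    for _ in range(int(T / DT)):
        u = -K @ x if abs(x[0]) > threshold else 0.0
        x = step(x, u)
        traj.append(x.copy())
    return np.array(traj)

def simulate_matched_hold(x0=(0.05, 0.0), interval=0.25):
    """System-matched hold: at each sampling event the state is measured,
    and between events the control is generated open-loop by an internal
    copy of the closed-loop system, so the trajectory can reach the
    equilibrium point without oscillation."""
    x, xm, traj = np.array(x0), np.array(x0), []
    n_hold = int(interval / DT)
    for k in range(int(T / DT)):
        if k % n_hold == 0:
            xm = x.copy()                            # sampling event
        u = -K @ xm
        xm = xm + DT * ((A - np.outer(B, K)) @ xm)   # internal model
        x = step(x, u)
        traj.append(x.copy())
    return np.array(traj)

print("zero hold, final angle:   ", simulate_zero_hold()[-1][0])
print("matched hold, final angle:", simulate_matched_hold()[-1][0])
```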

    Market Efficiency after the Financial Crisis: It's Still a Matter of Information Costs

    Get PDF
    Compared to the worldwide financial carnage that followed the Subprime Crisis of 2007-2008, it may seem of small consequence that the crisis is also said to have demonstrated the bankruptcy of an academic financial institution: the Efficient Capital Market Hypothesis (“ECMH”). Two things make this encounter between theory and seemingly inconvenient facts consequential. First, the ECMH had moved beyond academia, fueling a decades-long deregulatory agenda. Second, when economic theory moves from academics to policy, it also enters the realm of politics and is inevitably refashioned to serve the goals of political argument. This happened starkly with the ECMH. It was subject to its own bubble: as a result of politics, it expanded from a narrow but important academic theory about the informational underpinnings of market prices into a broad ideological preference for market outcomes over even measured regulation. In this Article we examine the Subprime Crisis as a vehicle to return the ECMH to its information-cost roots, which support a more modest but sensible regulatory policy. In particular, we argue that the ECMH addresses informational efficiency, which is a relative, not an absolute, measure. This focus on informational efficiency leads to a more precise understanding of what went wrong in 2007-2008. Yet informational efficiency is related to fundamental efficiency: if all information relevant to determining a security’s fundamental value is publicly available, and the mechanisms by which that information comes to be reflected in the security’s market price operate without friction, fundamental and informational efficiency coincide. But where all value-relevant information is not publicly available, or the mechanisms of market efficiency operate with frictions, the coincidence is an empirical question, both as to the informational efficiency of prices and as to their relation to fundamental value. Properly framing market efficiency focuses our attention on the frictions that drive a wedge between relative efficiency and efficiency under perfect market conditions. So framed, relative efficiency is a diagnostic tool that identifies the information costs and structural barriers that reduce price efficiency, which in turn provides part of a realistic regulatory strategy. While it will not prevent future crises, improving the mechanisms of market efficiency will make prices more efficient, frictions more transparent, and the influence of politics on public agencies more observable, which may allow us to catch the next problem earlier. Recall that on September 8, 2008, the Congressional Budget Office publicly stated its uncertainty about whether there would be a recession and predicted 1.5 percent growth in 2009. Eight days later, Lehman Brothers had failed and AIG was being nationalized.