
    The Case against Government Intervention in Energy Markets

    Many politicians and pundits are panicked over the existing state of the oil and gasoline markets. Disregarding past experience, these parties advocate massive intervention in those markets, which would only serve to repeat and extend previous errors. These interventionists propose solutions to nonexistent problems. This Policy Analysis reviews the academic literature relevant to these matters and argues that the prevailing policy proposals are premised on a misunderstanding of energy economics and market realities. The interventionists do not distinguish between problems that government can remedy and those that it cannot. They ignore lessons that should have been learned from past experience. They embrace at best second- and third-best remedies rather than first-best remedies for the alleged problems. Moreover, they ignore the extreme difficulty of ensuring an efficient policy response even when intervention seems theoretically warranted. Fear of oil imports is premised on pernicious myths that have long distorted energy policy. The U.S. defense posture probably would not be altered by reducing the extent to which oil is imported from troublesome regions. Fears about a near-term peak in global oil production are unwarranted, and government cannot help markets to respond properly even if the alarm proved correct. Market actors will produce the capital necessary for needed investments; no "Marshall Plans" are necessary. Price signals will efficiently order consumer behavior; energy-consumption mandates are therefore both unwise and unnecessary. Finally, more caution is needed regarding the case for public action to address global warming. The omnipresent calls for more aggressive energy diplomacy are misguided. Economic theory validated by historical experience implies that the diplomatic initiatives are exercises in futility because they seek to divert countries from the wealth maximization that is their goal. Similarly, the search for favorable access to crude oil is futile. Despite their popularity, rules to force reductions in energy use lack economic justification. Attacks on American oil companies and speculators seek to shift blame away from the uncontrollable foreign oil-producing governments that are truly responsible and onto parties subject to U.S. government control.

    Losing the War Against Dirty Money: Rethinking Global Standards on Preventing Money Laundering and Terrorism Financing

    Following a brief overview in Part I.A of the overall system to prevent money laundering, Part I.B describes the role of the private sector, which is to identify customers, create a profile of their legitimate activities, keep detailed records of clients and their transactions, monitor their transactions to see if they conform to their profile, examine further any unusual transactions, and report to the government any suspicious transactions. Part I.C continues the description of the preventive measures system by describing the government's role, which is to assist the private sector in identifying suspicious transactions, ensure compliance with the preventive measures requirements, and analyze suspicious transaction reports to determine those that should be investigated. Parts I.D and I.E examine the effectiveness of this system. Part I.D discusses successes and failures in the private sector's role. Borrowing from theory concerning the effectiveness of private sector unfunded mandates, this Part reviews why many aspects of the system are failing, focusing on the subjectivity of the mandate, the disincentives to comply, and the lack of comprehensive data on client identification and transactions. It notes that the system includes an inherent contradiction: the public sector is tasked with informing the private sector how best to detect launderers and terrorists, but doing so could provide a road map for avoiding detection should such information fall into the wrong hands. Part I.D discusses how financial institutions do not and cannot use scientifically tested statistical means to determine if a particular client or set of transactions is more likely than others to indicate criminal activity. Part I.D then turns to a few issues concerning the system's impact that are unrelated to its effectiveness, followed by a summary and analysis of how flaws might be addressed. Part I.E continues by discussing the successes and failures in the public sector's role. It reviews why the system is failing, focusing on the lack of assistance to the private sector and the lack of necessary data on client identification and transactions. It also discusses how financial intelligence units, like financial institutions, do not and cannot use scientifically tested statistical means to determine probabilities of criminal activity. Part I concludes with a summary and analysis tying both private and public roles together. Part II then turns to a review of certain current techniques for selecting income tax returns for audit. After an overview of the system, Part II first discusses the limited role of the private sector in providing tax administrators with information, comparing this to the far greater role the private sector plays in implementing preventive measures. Next, this Part turns to consider how tax administrators, particularly the U.S. Internal Revenue Service, select taxpayers for audit, comparing this to the role of both the private and public sectors in implementing preventive measures. It focuses on how some tax administrations use scientifically tested statistical means to determine probabilities of tax evasion. Part II then suggests how flaws in both private and public roles of implementing money laundering and terrorism financing preventive measures might be theoretically addressed by borrowing from the experience of tax administration. Part II concludes with a short summary and analysis that relates these conclusions to the preventive measures system.
Referring to the analyses in Parts I and II, Part III suggests changes to the current preventive measures standard. It suggests that financial intelligence units should be uniquely tasked with analyzing and selecting clients and transactions for further investigation for money laundering and terrorism financing. The private sector's role should be restricted to identifying customers, creating an initial profile of their legitimate activities, and reporting such information and all client transactions to financial intelligence units.
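
The contrast drawn above, between rule-driven suspicious-transaction reporting and the statistically scored selection used by some tax administrations, can be made concrete with a minimal risk-scoring sketch. Everything below is hypothetical and illustrative: the feature names, the synthetic data, and the choice of a logistic model are assumptions, not anything described in the article or used by any actual financial intelligence unit.

```python
# Illustrative sketch of statistically scored selection (as opposed to
# subjective rule-based reporting): fit a simple model on past confirmed
# cases, then surface only the highest-scoring clients for analyst review.
# All features, data, and thresholds here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-client features, e.g. cash-deposit ratio, cross-border
# share of transactions, deviation from the client's declared profile.
X = rng.normal(size=(1_000, 3))
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1_000) > 1.5).astype(int)

model = LogisticRegression().fit(X, y)

scores = model.predict_proba(X)[:, 1]   # estimated probability of interest
flagged = np.argsort(scores)[-20:]      # top 20 clients for further analysis
print(f"highest risk scores: {scores[flagged].min():.2f}-{scores[flagged].max():.2f}")
```

The point of the sketch is the workflow rather than the particular model: selection is driven by an estimated probability that can be validated against past outcomes, which is the kind of scientifically testable criterion the article finds missing from current preventive-measures practice.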

    Frequency of occurrence of novel milk protein variants in a New Zealand dairy cattle study population : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Biochemistry, Massey University

    Since the discovery of genetic polymorphism within milk protein genes, a considerable volume of research has been published relating milk protein genetic variants and milk production properties. Polymorphism of milk proteins can result in two effects: (a) changes in the biological and physico-chemical properties of systems containing the variant protein, and (b) changes in the synthesis level of variant proteins. As a result, several studies of milk protein variants have identified phenotypes which may be commercially advantageous for specific products. Currently employed methods to determine milk protein phenotypes are generally limited to electrophoretic techniques. The gel electrophoretic techniques commonly used are able to detect most milk protein variants that differ by their net electrical charge. However, single amino acid substitutions that result in a change in net charge account for only 25% of the possible substitutions that could occur. The remaining 75% of potential variants are the result of a neutral residue substituted by another neutral residue - a 'silent' variant. Thus it is likely that some substitutions, and hence genetic variants, have gone undetected in the past. The purpose of this study was to develop new methods for determining the phenotype of milk proteins, and to determine the frequency of occurrence of silent or other novel variants in a New Zealand dairy cattle study population. Polyacrylamide gel electrophoresis (PAGE), free zone capillary electrophoresis (CE), peptide mapping by reverse-phase HPLC and electrospray mass spectrometry (ESI-MS) were used in the characterisation of milk proteins purified from 109 individual dairy cows. Three different PAGE systems were used. Alkaline-urea PAGE enabled the detection of αs1-casein variants B and C, β-casein variants group A (variants A1, A2 and A3) and B, and κ-casein variants A and B in the study population. Beta-casein variants A1, A2 and A3 were subsequently resolved in an acid-urea PAGE system. The whey proteins were very poorly resolved in PAGE systems containing urea. Alpha-lactalbumin A, and β-lactoglobulin (β-LG) variants A and B, were resolved in a non-denaturing 'native' PAGE system. The frequencies of the various milk protein variants corresponded closely to figures previously published. A free zone CE method that is able to resolve β-LG variants A, B and C was used to check the phenotype of purified β-LG samples. Three samples previously typed as β-LG BB were subsequently determined to be β-LG CC; one sample typed as β-LG BB was re-assigned as β-LG BC. This highlighted the limitations of PAGE systems for the detection of known variants. Tryptic hydrolysis of purified casein proteins and β-LG, followed by reverse-phase HPLC separation of the resultant peptides, was used to create peptide 'maps' of the hydrolysis products. Differences in peptide maps were noted between protein variants. The differences corresponded to peptides containing a substitution site. All samples analysed in this way contained more peptide peaks than expected. Analysis revealed that some were the result of incomplete digestion, others the result of chymotryptic-like cleavages. No aberrant peptide maps, indicative of a silent mutation, were detected. Purified casein proteins and β-LG were subjected to ESI-MS for mass analysis. The mass of each protein species was determined as follows:

    Protein         Average mass    Std. dev.
    αs1-CN B-8P     23614.9 Da      1.2 Da
    αs2-CN A-11P    25228.9 Da      1.5 Da
    β-CN A1-5P      24023.9 Da      3.1 Da
    β-CN A2-5P      23983.5 Da      1.8 Da
    β-CN B-5P       24092.6 Da      n.d.
    κ-CN A-1P       19038.8 Da      1.5 Da
    κ-CN B-1P       19003.8 Da      n.d.
    β-LG A          18362.6 Da      1.0 Da
    β-LG B          18277.0 Da      0.9 Da
    β-LG C          18287.2 Da      0.6 Da

In all cases the experimentally determined mass corresponded to the mass calculated from published primary sequences of milk protein variants. In addition to the expected β-LG variant in each mass spectrum, additional species were detected differing from the mass of the β-LG species by increments of approximately 324 Da. Although less pronounced, the +324 Da molecular weight species were also detected in a sample of β-LG purchased from the Sigma Chemical Company. The additional species were also detected in whey prepared by ultra-centrifugation, although at a much lower level. The 324 Da molecular weight adducts observed in ESI-MS spectra of purified β-LG are consistent with the addition of a lactosyl residue to the protein. The observation that these species remain after heat denaturation, reduction and RP-HPLC treatment suggests that the linkage is covalent. Lactulosyl-lysine is known to form in milk products during some processing conditions, particularly during heating. The observation of these glycated species in gently treated, unheated milk suggests that glycation may occur to some extent in the udder of the cow. The association of the 324 Da molecule with β-LG does not alter the charge, molecular weight or hydrophobicity sufficiently to be detected by PAGE, CE or RP-HPLC.
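
The +324 Da increment reported above is consistent with lactosylation by simple mass arithmetic: condensation of lactose (C12H22O11, average mass about 342.3 Da) onto a lysine side chain releases one water molecule (about 18.0 Da), leaving a net addition of roughly 324.3 Da. A back-of-envelope check, using average masses and the β-LG A value from the table above:

```python
# Rough check (average masses) that the ~324 Da adduct spacing matches
# lactosylation: lactose condenses onto the protein with loss of one water.
LACTOSE = 342.30   # Da, average mass of lactose (C12H22O11)
WATER = 18.02      # Da, average mass of H2O

net_adduct = LACTOSE - WATER
print(f"expected lactosyl adduct: +{net_adduct:.2f} Da")   # ~ +324.3 Da

beta_lg_a = 18362.6                                         # Da, from the table above
print(f"beta-LG A with one lactose attached: {beta_lg_a + net_adduct:.1f} Da")
```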

    The Nuclear Imaging Uncertainty Principle. Do Nuclear Cameras Really Work?

    The introduction of the Heisenberg uncertainty principle and of nuclear cardiology occurred at nearly the same time, in 1925-1927. Thirty years later the Anger gamma camera would allow more sophisticated radioactive isotope counting to determine the presence or absence of disease. When technetium-99m isotopes are employed, ischemic heart disease can be inferred from differences in the visual appearance of cardiac images. These gestalts of imaging results have been separated from the quantitative information recorded by the camera's computer. We investigated whether current camera and computer systems are sophisticated enough to quantify clinically relevant differences between images. Our study demonstrated that efforts to "sharpen" image appearance do so at the cost of a reduction in accuracy. In an echo of Heisenberg, this work shows that one cannot know both the exact location and the amount of activity simultaneously, and that accuracy must be chosen over image sharpness if one is to truly quantify differences in isotope concentration between images.
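
A minimal numerical sketch of the trade-off described above: applying a standard 3x3 sharpening kernel to a synthetic count map increases apparent edge contrast but redistributes counts, so the value recovered from a region of interest no longer matches the true activity. The phantom, kernel and region below are illustrative assumptions, not the processing chain used in the study.

```python
# Sharpening a synthetic "count" image: edges look crisper, but counts in a
# region of interest are redistributed, i.e. sharpness costs quantitative
# accuracy. Phantom and kernel are illustrative only.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(1)

true = np.full((64, 64), 100.0)            # uniform background activity
true[24:40, 24:40] = 60.0                  # reduced-uptake region
counts = rng.poisson(true).astype(float)   # Poisson counting noise

# Unsharp-mask style kernel; it sums to 1, so total counts are roughly
# preserved, but values near edges are pushed apart.
kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)
sharp = convolve(counts, kernel, mode="nearest")

roi = (slice(24, 40), slice(24, 40))
print(f"true ROI mean:      {true[roi].mean():6.1f}")
print(f"measured ROI mean:  {counts[roi].mean():6.1f}")
print(f"sharpened ROI mean: {sharp[roi].mean():6.1f}")   # shifted by the filter
```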

    How Do Disabilities Affect Future Retirement Benefits?

    One-quarter of workers ages 51 to 55 develop work disabilities before age 62. Disabilities often force people to curtail their work hours, derailing retirement preparations. However, protections built into Social Security, including disability and spouse benefits and the system's tilt toward workers with low lifetime earnings, cushion the impact of midlife health problems. After other factors are controlled for, the onset of health-related work limitations between ages 51 and 61 reduces Social Security retirement benefits at ages 63 to 67 by only about 2 percent, much less than the impact on other retirement savings.
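
The cushioning described above comes largely from the progressive benefit formula: the primary insurance amount (PIA) replaces 90 percent of a first slice of average indexed monthly earnings (AIME), 32 percent of the next slice, and 15 percent above that, so a disability-related drop in lifetime earnings falls mostly on the lightly weighted brackets. A sketch with the statutory replacement factors but purely illustrative dollar bend points (the real thresholds are re-indexed each year):

```python
# Progressive Social Security benefit formula (PIA) sketch. The 90/32/15
# percent factors are statutory; the dollar bend points below are
# illustrative placeholders, not any particular year's values.
BEND_1, BEND_2 = 1_100.0, 6_600.0   # hypothetical monthly bend points

def pia(aime: float) -> float:
    """Primary insurance amount for a given average indexed monthly earnings."""
    return (0.90 * min(aime, BEND_1)
            + 0.32 * max(0.0, min(aime, BEND_2) - BEND_1)
            + 0.15 * max(0.0, aime - BEND_2))

# A 10 percent drop in AIME reduces the benefit by noticeably less than
# 10 percent, because the marginal earnings are replaced at only 32 percent.
for aime in (3_000.0, 2_700.0):
    print(f"AIME ${aime:>7,.0f}/month  ->  benefit ${pia(aime):,.0f}/month")
```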

    Polymer Statistics and Fermionic Vector Models

    We consider a variation of O(N)-symmetric vector models in which the vector components are Grassmann numbers. We show that these theories generate the same sort of random polymer models as the O(N) vector models and that they lie in the same universality class in the large-N limit. We explicitly construct the double-scaling limit of the theory and show that the genus expansion is an alternating Borel-summable series that otherwise coincides with the topological expansion of the bosonic models. We also show how the fermionic nature of these models leads to an explicit solution, even at finite N, for the generating functions of the number of random polymer configurations. Comment: 13 pages LaTeX, run twice. Minor technical details corrected (mainly in combinatorics for Feynman graphs) and clarifying comments added; additional reference included.
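
For orientation, a schematic form of the model the abstract refers to, with the bosonic O(N) vector replaced by N conjugate pairs of Grassmann variables and a generic O(N)-invariant quartic interaction; the normalization and coupling conventions below are illustrative, not necessarily those of the paper:

```latex
% Schematic fermionic O(N) vector model; conventions are illustrative.
Z(g,N) \;=\; \int \prod_{i=1}^{N} d\bar{\psi}_i\, d\psi_i\;
  \exp\!\Big[-N\Big(\bar{\psi}\cdot\psi \;+\; \tfrac{g}{2}\,(\bar{\psi}\cdot\psi)^{2}\Big)\Big],
\qquad \bar{\psi}\cdot\psi \equiv \sum_{i=1}^{N}\bar{\psi}_i\psi_i .
```

Because each Grassmann pair squares to zero, powers of the invariant truncate at order N, so the integral reduces to a finite polynomial in the coupling; this is the mechanism behind the explicit finite-N solvability the abstract mentions.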

    Designing a programming-based approach for modelling scientific phenomena

    We describe an iteratively designed sequence of activities involving the modelling of 1-dimensional collisions between moving objects based on programming in ToonTalk. Students aged 13-14 in two settings (London and Cyprus) investigated a number of collision situations, classified into six classes based on the relative velocities and masses of the colliding objects. We describe iterations of the system in which students engaged in a repeating cycle of activity for each collision class: prediction of object behaviour from given collision conditions, observation of a relevant video clip, building a model to represent the phenomena, testing, validating and refining their model, and publishing it, together with comments, on our web-based collaboration system, WebReports. Students were encouraged to consider the limitations of their current model, with the aim that they would eventually appreciate the benefit of constructing a general model that would work for all collision classes, rather than a different model for each class. We describe how our intention to engage students with the underlying concepts of conservation, closed systems and system states was instantiated in the activity design, and how the modelling activities afforded an alternative representational framework to traditional algebraic description.
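
To make one of the collision classes concrete, the sketch below computes the outcome of a perfectly elastic 1-dimensional collision, where conservation of momentum and kinetic energy determine the outgoing velocities; the function and example values are illustrative, not taken from the ToonTalk activities themselves.

```python
# One representative 1-D collision class: a perfectly elastic collision.
# Conservation of momentum and kinetic energy fix the outgoing velocities.
def elastic_collision(m1: float, v1: float, m2: float, v2: float) -> tuple[float, float]:
    """Post-collision velocities for a perfectly elastic 1-D collision."""
    v1_out = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_out = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_out, v2_out

# Equal masses simply exchange velocities -- one of the qualitative outcomes
# the prediction/observation cycle asks students to anticipate.
print(elastic_collision(1.0, 2.0, 1.0, 0.0))   # (0.0, 2.0)

# Momentum is conserved in every collision class; kinetic energy only here.
m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0
u1, u2 = elastic_collision(m1, v1, m2, v2)
assert abs((m1 * v1 + m2 * v2) - (m1 * u1 + m2 * u2)) < 1e-9
```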

    Fermionic Matrix Models

    We review a class of matrix models whose degrees of freedom are matrices with anticommuting elements. We discuss the properties of the adjoint fermion one-matrix, two-matrix and gauge-invariant D-dimensional matrix models at large N and compare them with their bosonic counterparts, the more familiar Hermitian matrix models. We derive and solve the complete sets of loop equations for the correlators of these models and use these equations to examine critical behaviour. The topological large-N expansions are also constructed and their relation to other aspects of modern string theory, such as integrable hierarchies, is discussed. We use these connections to discuss the applications of these matrix models to string theory and induced gauge theories. We argue that as such the fermionic matrix models may provide a novel generalization of the discretized random surface representation of quantum gravity, in which the genus sum alternates and the sums over genera for correlators have better convergence properties than their Hermitian counterparts. We discuss the use of adjoint fermions instead of adjoint scalars to study induced gauge theories. We also discuss two classes of dimensionally reduced models, a fermionic vector model and a supersymmetric matrix model, and discuss their applications to the branched polymer phase of string theories in target space dimensions D>1 and also to the meander problem. Comment: 139 pages LaTeX (99 pages in landscape, two-column option); Section on Supersymmetric Matrix Models expanded, additional references included.
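
As a point of reference for readers new to the construction, the simplest member of the class reviewed above is an adjoint-fermion one-matrix model in which ψ and ψ̄ are independent N x N matrices with Grassmann-valued entries and the action is built from traces; the quartic potential shown is a generic illustrative choice, not necessarily the review's normalization:

```latex
% Schematic adjoint-fermion one-matrix model; potential chosen for illustration.
Z(g,N) \;=\; \int d\psi\, d\bar{\psi}\;
  \exp\!\Big[-N\,\mathrm{tr}\Big(\bar{\psi}\psi \;+\; \tfrac{g}{2}\,\bar{\psi}\psi\bar{\psi}\psi\Big)\Big]
```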