The Dual Income Tax of the German Council of Economic Experts under Discussion
This paper discusses the dual income tax (DIT), a tax reform proposal of the German Council of Economic Experts. Assessing an income tax system is a complex task: one has to review the conditions of international tax and locational competition, the ability-to-pay principle, the efficiency of the tax system and the possibilities of tax arbitrage, and, not least, balance these factors against the costs of the reform concept. The paper concludes that the DIT is too oversized and expensive a concept for attracting foreign direct investment, and that it additionally creates diverse problems and hazards.
Agent–Based Keynesian Macroeconomics - An Evolutionary Model Embedded in an Agent–Based Computer Simulation
Subject of the present study is the agent-based computer simulation of Agent Island. Agent Island is a macroeconomic model which belongs to the field of monetary theory. Agent-based modeling is an innovative tool that has made much progress in other scientific fields such as medicine or logistics. In economics this tool is quite new, and in monetary theory virtually no agent-based simulation model has been developed to date. It is therefore the aim of this study to close this gap to some extent. Hence, the model integrates in a straightforward way, next to the common private sectors (i.e. households, consumer goods firms and capital goods firms), and as an innovation, a banking system, a central bank and a monetary circuit. Thereby the central bank controls the business cycle via an interest rate policy; the corresponding mechanism builds on the seminal idea of Knut Wicksell (natural rate of interest vs. money rate of interest). In addition, the model also contains many Keynesian features and a flow-of-funds accounting system in the tradition of Wolfgang Stützel. Importantly, one objective of the study is the validation of Agent Island, which means that the individual agents (i.e. their rules, variables and parameters) are adjusted in such a way that certain phenomena emerge on the aggregate level. The crucial aspect of the modeling and the validation is therefore the relation between the micro and macro level: every phenomenon on the aggregate level (e.g. some stylized facts of the business cycle, the monetary transmission mechanism, the Phillips curve relationship, the Keynesian paradox of thrift or the course of the business cycle) emerges out of the individual actions and interactions of the many thousand agents on Agent Island. In contrast to models comprising a representative agent, we do not apply modeling on the aggregate level; and in contrast to orthodox general equilibrium models, true interaction between heterogeneous agents takes place (e.g. by face-to-face trading).
Keywords: multi-agent system, agent-based macroeconomic computer simulation, stock-flow consistent, monetary theory, Keynesian model, Wicksellian model, monetary policy
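The Wicksellian mechanism described in the abstract, where output responds to the gap between the money rate and the natural rate of interest while the central bank leans against the resulting boom or slump, can be illustrated with a minimal agent-based sketch. This is a toy illustration only, not the Agent Island model: the 1,000 firm agents, the adjustment coefficients and the "natural" output level of 1.0 are all hypothetical choices made for the example.

```python
import random

# Toy sketch (assumptions, not the Agent Island model): heterogeneous firm
# agents expand when credit is cheap (money rate below the natural rate) and
# contract when it is dear; the central bank steers its policy rate against
# deviations of aggregate output from a normal level of 1.0.

random.seed(42)

NATURAL_RATE = 0.03                                        # assumed natural rate of interest
money_rate = 0.06                                          # initial policy (money) rate
outputs = [random.uniform(0.8, 1.2) for _ in range(1000)]  # heterogeneous firm agents

for period in range(50):
    gap = NATURAL_RATE - money_rate
    # Each firm reacts to the interest gap with idiosyncratic noise,
    # so agents stay heterogeneous over time.
    outputs = [max(0.1, y * (1 + 0.5 * gap + random.gauss(0, 0.01)))
               for y in outputs]
    aggregate = sum(outputs) / len(outputs)
    # Central bank rule: raise the rate in a boom, cut it in a slump.
    money_rate += 0.02 * (aggregate - 1.0)

print(round(money_rate, 3), round(aggregate, 3))
```

Because the policy rate starts above the assumed natural rate, firms initially contract; the falling aggregate then pulls the policy rate down, producing the cyclical interplay of micro behaviour and macro outcome that the abstract emphasises.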
Measuring and using information gained by observing diffraction data.
The information gained by making a measurement, termed the Kullback-Leibler divergence, assesses how much more precisely the true quantity is known after the measurement was made (the posterior probability distribution) than before (the prior probability distribution). It provides an upper bound for the contribution that an observation can make to the total likelihood score in likelihood-based crystallographic algorithms. This makes information gain a natural criterion for deciding which data can legitimately be omitted from likelihood calculations. Many existing methods use an approximation for the effects of measurement error that breaks down for very weak and poorly measured data. For such methods a different (higher) information threshold is appropriate compared with methods that account well for even large measurement errors. Concerns are raised about a current trend to deposit data that have been corrected for anisotropy, sharpened and pruned without including the original unaltered measurements. If not checked, this trend will have serious consequences for the reuse of deposited data by those who hope to repeat calculations using improved new methods.
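The information gain described above has a simple closed form when prior and posterior are both one-dimensional Gaussians, which is enough to show the qualitative point: a strong measurement that sharply narrows the distribution carries much more information than a weak one that barely narrows it. The numbers below are illustrative assumptions, not crystallographic data.

```python
import math

def gaussian_kl(mu_post, sigma_post, mu_prior, sigma_prior):
    """Kullback-Leibler divergence D(posterior || prior) for 1-D Gaussians,
    returned in bits. Standard closed form:
    ln(s0/s1) + (s1^2 + (m1 - m0)^2) / (2 s0^2) - 1/2, converted to base 2."""
    nats = (math.log(sigma_prior / sigma_post)
            + (sigma_post ** 2 + (mu_post - mu_prior) ** 2)
              / (2 * sigma_prior ** 2)
            - 0.5)
    return nats / math.log(2)

# A well-measured quantity narrows the distribution a lot -> large gain.
strong = gaussian_kl(100.0, 5.0, 100.0, 50.0)
# A poorly measured quantity barely narrows it -> little information gained.
weak = gaussian_kl(100.0, 45.0, 100.0, 50.0)
print(strong, weak)
```

Data whose information gain falls below a chosen threshold contribute essentially nothing to the likelihood and are candidates for omission, which is the selection criterion the abstract proposes.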
Improved estimates of coordinate error for molecular replacement.
The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal-to-noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21,000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates.
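The core idea of variance refinement, choosing the effective r.m.s.d. that maximizes the LLG for a trial solution, can be sketched as a simple one-dimensional search. This is a hypothetical illustration, not Phaser's implementation: `llg_of_rmsd` stands in for the real crystallographic likelihood, and the toy function peaked at 1.2 Å is invented for the example.

```python
# Hypothetical sketch of variance refinement as a grid search over the
# effective r.m.s.d. The real Phaser likelihood is replaced by a stand-in
# callable; only the optimization idea is illustrated.

def best_rmsd(llg_of_rmsd, lo=0.1, hi=3.0, step=0.05):
    """Return the r.m.s.d. on a grid [lo, hi] that maximizes the LLG,
    together with that LLG value."""
    best_r, best_llg = lo, float("-inf")
    r = lo
    while r <= hi + 1e-9:
        llg = llg_of_rmsd(r)
        if llg > best_llg:
            best_r, best_llg = r, llg
        r += step
    return best_r, best_llg

def toy_llg(r):
    # Invented smooth LLG surface peaked at an r.m.s.d. of 1.2 (purely
    # illustrative; a real LLG comes from the MR likelihood functions).
    return 100.0 - 40.0 * (r - 1.2) ** 2

refined_rmsd, refined_llg = best_rmsd(toy_llg)
print(refined_rmsd, refined_llg)
```

In the study this refined effective r.m.s.d., fitted across the 21,000-problem database as a function of sequence identity and protein size, replaces the old identity-only estimate.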
Interactive e-learning modules in human genetics: use and evaluation in the training of medical and human-biology students
Introduction: The present study describes our online teaching material in human genetics within the k-MED project (Knowledge in Medical Education) at Philipps-Universität Marburg. It consists of five e-learning modules: cytogenetics, chromosomal disorders, formal genetics, fundamentals of molecular diagnostics, and congenital abnormalities and malformation syndromes. These e-modules are intended to ensure a uniform level of knowledge among students and to relieve lecturers in classroom teaching. Methods: The five human-genetics e-learning modules were offered on a voluntary basis to a large group of about 3,300 students at the Faculty of Medicine of the University of Marburg over a period of four years. The participants were natural-science students (human biology) in their fifth semester and students of human medicine in either the preclinical stage (first semester) or the clinical stage (seventh/eighth semester). Acceptance data were collected from them in the form of user-tracking data and questionnaires accompanying the examinations. Results and conclusion: The evaluation showed broad acceptance of our teaching modules over a period of eight semesters. Although the offering is voluntary, the online human-genetics courses were used constantly, or even increasingly, between the winter semester of 2005/06 and the summer semester of 2009. Conclusion: Our human-genetics e-learning model is well accepted and used by students from different semesters and degree programmes at the Faculty of Medicine. With careful maintenance of the online courses, moderate adaptations significantly increase both acceptance and frequency of use. Using the e-learning modules in the training of medical technical assistants or nursing staff also seems sensible to us, in order to ensure sufficient basic knowledge of human genetics. Keywords: human genetics, evaluation, multimedia, e-learning
Gyre and gimble: a maximum-likelihood replacement for Patterson correlation refinement.
Descriptions are given of the maximum-likelihood gyre method implemented in Phaser for optimizing the orientation and relative position of rigid-body fragments of a model after the orientation of the model has been identified, but before the model has been positioned in the unit cell, and also of the related gimble method for the refinement of rigid-body fragments of the model after positioning. Gyre refinement helps to lower the root-mean-square atomic displacements between model and target molecular-replacement solutions for the test case of antibody Fab(26-10) and improves structure solution with ARCIMBOLDO_SHREDDER.