Who will win the ozone game? On building and sustaining cooperation in the Montreal Protocol on Substances that Deplete the Ozone Layer
This paper presents an analysis of the Montreal Protocol on Substances that Deplete the Ozone Layer. It advances the view that the Developing World did not exploit its relatively strong bargaining position in negotiations over sidepayments and that the concessional ten-year grace period for less developed countries is a cause of instability of the agreement. The paper derives conditions under which sidepayments and sanctions can produce stable cooperation. It applies basic non-cooperative game theory with the subgame perfect Nash equilibrium as the solution concept and compares the non-cooperative outcome with the Nash bargaining solution of a hypothetical cooperative game.
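The comparison the abstract describes, between a non-cooperative equilibrium outcome and the Nash bargaining solution, can be made concrete with a toy two-player computation. The sketch below only illustrates the bargaining concept, not the paper's model; the disagreement payoffs and cooperative surplus are made-up numbers.

```python
import numpy as np

# Toy illustration (not the paper's model): two players can split a
# cooperative surplus. The disagreement payoffs d1, d2 stand in for the
# non-cooperative (subgame perfect) outcome; the Nash bargaining solution
# maximizes the product of each player's gain over disagreement.
d1, d2 = 2.0, 1.0      # hypothetical non-cooperative payoffs
surplus = 10.0         # hypothetical total payoff under cooperation

u1 = np.linspace(d1, surplus - d2, 10001)   # feasible splits u1 + u2 = surplus
u2 = surplus - u1
nash_product = (u1 - d1) * (u2 - d2)
best = np.argmax(nash_product)
print(f"Nash bargaining solution: u1 = {u1[best]:.2f}, u2 = {u2[best]:.2f}")
# With transferable utility the gain (surplus - d1 - d2) is split equally:
# u_i = d_i + (surplus - d1 - d2) / 2, here u1 = 5.50 and u2 = 4.50.
```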
Real-valued feature selection for process approximation and prediction
The selection of features for classification, clustering and approximation is an important task in pattern recognition, data mining and soft computing. For real-valued features, this contribution shows how feature selection for a high number of features can be implemented using mutual information. In particular, the common problem in mutual information computation of estimating joint probabilities for many dimensions from only a few samples is addressed by using the Rényi mutual information of order two as the computational basis. For this, the Grassberger-Takens correlation integral is used, which was developed for estimating probability densities in chaos theory. Additionally, an adaptive procedure for computing the hypercube size is introduced, and for real-world applications the treatment of missing values is included. The computation procedure is accelerated by exploiting the ranking of the set of real feature values, especially for time series. As an example, a small blackbox-glassbox example shows how the relevant features and their time lags are determined in a time series even if the input feature time series determine the output nonlinearly. A more realistic example from the chemical industry shows that this enables a better approximation of the input-output mapping than the best neural network approach developed for an international contest. With this computationally efficient implementation, mutual information becomes an attractive tool for feature selection even for a high number of real-valued features.
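The order-two Rényi mutual information via the correlation integral can be sketched compactly. The following Python snippet is a minimal illustration under simplifying assumptions: it uses a fixed hypercube size eps instead of the adaptive procedure described in the abstract, omits the missing-value handling and the rank-based acceleration, and assumes the inputs are standardized so that one eps fits all dimensions.

```python
import numpy as np

def correlation_sum(data, eps):
    """Correlation integral estimate: the fraction of point pairs whose
    max-norm distance is below eps. `data` has shape (n_samples, n_dims)."""
    dists = np.max(np.abs(data[:, None, :] - data[None, :, :]), axis=-1)
    i, j = np.triu_indices(len(data), k=1)
    return (dists[i, j] < eps).mean()

def renyi2_mi(x, y, eps=0.2):
    """Order-2 Renyi mutual information I2(X;Y) = H2(X) + H2(Y) - H2(X,Y),
    with each entropy estimated as H2 ~ -log C(eps). The eps-dependent
    offsets cancel because the same eps is used for all three terms."""
    x = x.reshape(len(x), -1)
    y = y.reshape(len(y), -1)
    h = lambda d: -np.log(max(correlation_sum(d, eps), 1e-12))
    return h(x) + h(y) - h(np.hstack([x, y]))

# A purely nonlinear dependence is detected even though the linear
# correlation between x and x**2 is near zero for symmetric x:
rng = np.random.default_rng(0)
x = rng.normal(size=400)
print(renyi2_mi(x, x**2))                      # clearly positive
print(renyi2_mi(x, rng.normal(size=400)))      # near zero
```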
Renormalization of the EWCL and its Application to LEP2
We perform a systematic one-loop renormalization of the electroweak chiral Lagrangian (EWCL) up to operators and construct the renormalization group equations (RGE) for the anomalous couplings. We examine the impact of the triple gauge coupling (TGC) measurement from LEP2 on the uncertainty of the parameter, and find that the uncertainty in the TGC measurements can shift it at least.
Comment: 4 pages, 1 eps figure, uses ws-ijmpa.cls. Parallel talk given at "International Conference on QCD and Hadronic Physics", Beijing, China, 16-20 June, 200
Raising the Higgs mass with Yukawa couplings for isotriplets in vector-like extensions of minimal supersymmetry
Extra vector-like matter with both electroweak-singlet masses and large Yukawa couplings can significantly raise the lightest Higgs boson mass in supersymmetry through radiative corrections. I consider models of this type that involve a large Yukawa coupling between weak isotriplet and isodoublet chiral supermultiplets. The particle content can be completed to provide perturbative gauge coupling unification in several different ways. The impact on precision electroweak observables is shown to be acceptably small, even if the new particles are as light as the current experimental bounds of order 100 GeV. I study the corrections to the lightest Higgs boson mass, and discuss the general features of the collider signatures for the new fermions in these models.
Comment: 30 page
Nichtlineare Merkmalsselektion mit der generalisierten Transinformation [Nonlinear feature selection with the generalized mutual information]
In the context of information theory, the term mutual information was first formulated by Claude Elwood Shannon. Information theory is the consistent mathematical description of technical communication systems. To this day it is the basis of numerous applications in modern communications engineering and has become indispensable in this field. This work is concerned with the development of a concept for nonlinear feature selection from scalar, multivariate data on the basis of the mutual information.

From the viewpoint of modelling, the successful construction of a realistic model depends strongly on the quality of the employed data. In the ideal case, high-quality data consists only of the features relevant for deriving the model. In this context, it is important to possess a suitable method for measuring the degree of the, mostly nonlinear, dependencies between input and output variables. By means of such a measure, the relevant features can be selected specifically. During the course of this work it will become evident that the mutual information is a valuable and feasible measure for this task and hence the method of choice for practical applications.

Basically, and without claim of being exhaustive, there are two constellations that recommend the application of feature selection. On the one hand, feature selection plays an important role if the computability of a derived system model cannot be guaranteed due to a multitude of available features. On the other hand, the existence of very few data points with a significant number of features also recommends feature selection. The latter constellation is closely related to the so-called "curse of dimensionality": for a constant number of data points, the coverage of the data space decreases exponentially with the dimensionality of the data, so the dimensionality must be reduced to obtain adequate coverage. For instance, with ten bins per axis, 1,000 points cover a two-dimensional grid of 100 cells well, but leave a six-dimensional grid of a million cells almost entirely empty. In the context of mapping between input and output space, this goal is ideally reached by selecting only the relevant features from the available data set.

The basic idea for this work has its origin in the rather practical field of automotive engineering. It was motivated by the goals of a complex research project in which the nonlinear, dynamic dependencies among a multitude of sensor signals were to be identified. The final goal of these activities was to derive so-called virtual sensors from the identified dependencies among the installed automotive sensors. This enables the real-time computation of the required variable without the expense of additional hardware. The prospect of doing without additional computing hardware is a strong motive, particularly in automotive engineering. In this context, the major problem was to find a feasible method for capturing linear as well as nonlinear dependencies.

As mentioned before, the goal of this work is the development of a flexibly applicable system for nonlinear feature selection. The important point is to guarantee the practical computability of the developed method even for high-dimensional data spaces, which are realistic in technical environments. The measure employed for the feature selection process is based on the concept of mutual information.
The property of the mutual information, namely its high sensitivity and specificity to linear and nonlinear statistical dependencies, makes it the method of choice for the development of a highly flexible, nonlinear feature selection framework. In addition to the mere selection of relevant features, the developed framework is also applicable to the nonlinear analysis of the temporal influences of the selected features. Hence, a subsequent dynamic modelling can be performed more efficiently, since the proposed feature selection algorithm additionally provides information about the temporal dependencies between input and output variables. In contrast to feature extraction techniques, the feature selection algorithm developed in this work has another considerable advantage: in the case of cost-intensive measurements, the variables with the highest information content can be selected in a prior feasibility study. Hence, the developed method can also be employed to avoid redundancy in the acquired data and thus prevent additional costs.

The term mutual information (Transinformation) was first coined by Claude Elwood Shannon in the context of information theory, a unified mathematical description of technical communication systems. Against this background, the present work deals with the development of a practically applicable methodology for the nonlinear feature selection of quantitative, multivariate data on the basis of the aforementioned information-theoretic concept of mutual information. The success of the transition from real measurement data to a suitable model description is determined decisively by the quality of the data sets used. In the ideal case, a high-quality data set consists exclusively of the data relevant for a successful model formulation. In this context, the question immediately arises whether a suitable measure exists for quantitatively capturing the degree of the, in general nonlinear, functional relationship between inputs and outputs. With the help of such a quantity, the relevant features can be selected specifically and thus separated from the redundant features. In the course of this work it will become clear that the mutual information mentioned at the outset constitutes a suitable measure for this purpose and holds up very well in practical use.

The original motivation for the present work has its thoroughly practical background in automotive engineering. It arose within the framework of a complex research project for determining nonlinear, dynamic relationships between a multitude of measured sensor signals. The goal of these activities was to derive so-called virtual sensors by identifying nonlinear, dynamic relationships between the sensors installed in the automobile. The concrete task consisted in making the determination of a central engine quantity so efficient that it can be computed without additional hardware under hard real-time constraints. Being able to forgo additional hardware and make do with the computing power already available represents, given the enormous costs that would otherwise result, an exceptionally strong motivation, particularly in automotive engineering.
In this context, the major difficulty repeatedly surfaced of finding a practically computable method that can reliably and quantitatively capture both linear and nonlinear relationships. In the course of the work, different selection strategies are combined with the mutual information and their properties are compared with one another. The combination of mutual information with the so-called forward selection strategy proves particularly interesting: it is shown that, in comparison with other approaches, this combination is what actually makes practical computability for high-dimensional data spaces possible in the first place. Following this, the convergence of this new feature selection procedure is proven. We will further see that the results obtained lie remarkably close to the optimal solution and are clearly superior in comparison with an alternative selection strategy.

In parallel with the actual selection of the relevant features, the method developed in this work also makes it straightforward to carry out a nonlinear analysis of the temporal dependencies of selected features. A subsequent dynamic modelling can thus be carried out considerably more efficiently, since the developed feature selection supplies additional information regarding the dynamic relationship between input and output data. With the method developed in this work, something has finally been achieved that was not possible before: the quantitative capture of the nonlinear relationships between dedicated sensor signals so that they can feed into an efficient feature selection. In contrast to feature extraction, the method of nonlinear feature selection developed in this work has a further decisive advantage: particularly for very cost-intensive measurements, those variables can be selected that carry the highest information content with respect to the mapping onto an output quantity. Beyond the purely technical aspect of basing the selection decision directly on the information content of the available data, the developed method can likewise be drawn upon ahead of cost-relevant decisions in order to specifically avoid redundancy and the higher costs associated with it.
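The combination of mutual information with forward selection described above admits a compact sketch. Reusing the renyi2_mi estimator sketched earlier, the loop below greedily adds, at each step, the candidate feature whose inclusion most increases the joint mutual information with the target; the fixed eps and the simple count-based stopping rule are assumptions for illustration, not the thesis's exact procedure.

```python
def forward_select(X, y, n_features, eps=0.2):
    """Greedy forward selection with Renyi-2 mutual information:
    at each step, add the candidate column of X that maximizes
    I2(selected features + candidate ; y)."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_features):
        scores = {j: renyi2_mi(X[:, selected + [j]], y, eps)
                  for j in remaining}
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected

# Example: y depends nonlinearly on columns 0 and 3 only.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))
y = np.sin(X[:, 0]) + X[:, 3] ** 2
print(forward_select(X, y, n_features=2))   # typically [3, 0]
```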