
    Applications of stochastic simulation in two-stage multiple comparisons with the best problem and time average variance constant estimation

    In this dissertation, we study two problems. In the first part, we consider two-stage methods for comparing alternatives using simulation. Suppose there is a finite number of alternatives to compare, each having an unknown parameter that is the basis for comparison. The parameters are estimated using simulation, with the alternatives simulated independently. We develop two-stage selection and multiple-comparison procedures for simulations under a general framework. The assumptions are that each alternative has a parameter estimation process satisfying a random-time-change central limit theorem (CLT), and that there is a weakly consistent variance estimator (WCVE) for the variance constant appearing in the CLT. The framework encompasses comparing means of independent populations, functions of means, and steady-state means. One problem we consider, which is of considerable practical interest and not handled in previous work on two-stage multiple-comparison procedures, is comparing quantiles of alternative populations. We establish the asymptotic validity of our procedures as the prescribed confidence-interval width or indifference-zone parameter shrinks to zero. Also, for the steady-state simulation context, we compare our procedures based on WCVEs with techniques that instead use standardized time series methods. In the second part, we propose a new technique for estimating the variance parameter of a wide variety of stochastic processes. For some standard stochastic processes, this technique improves on existing ones in terms of bias and variance properties, since it reduces bias with no significant increase in variance.
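    As background for the variance constant that recurs in this abstract, the sketch below implements the classical non-overlapping batch-means estimator of the time-average variance constant of a stationary sequence. This is only the standard baseline estimator, not the bias-reduced technique the dissertation proposes, and the AR(1) test process is our own illustrative choice.

```python
import numpy as np

def batch_means_variance(x, num_batches=32):
    """Non-overlapping batch-means estimate of the time-average variance
    constant sigma^2 in the CLT  sqrt(n)*(mean(x) - mu) -> N(0, sigma^2)."""
    x = np.asarray(x, dtype=float)
    m = len(x) // num_batches                          # batch size
    batch_means = x[: m * num_batches].reshape(num_batches, m).mean(axis=1)
    # m times the sample variance of the batch means estimates sigma^2
    return m * batch_means.var(ddof=1)

# Sanity check on an AR(1) process x_t = phi*x_{t-1} + e_t, whose
# time-average variance constant is var(e) / (1 - phi)^2.
rng = np.random.default_rng(0)
phi, n = 0.7, 200_000
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()
print(batch_means_variance(x))   # should be near 1/(1 - 0.7)^2 ≈ 11.1
```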

    A precise bare simulation approach to the minimization of some distances. Foundations

    In information theory, as well as in the adjacent fields of statistics, machine learning, artificial intelligence, signal processing, and pattern recognition, many flexibilizations of the omnipresent Kullback-Leibler information distance (relative entropy) and of the closely related Shannon entropy have become frequently used tools. The main goal of this paper is to tackle the corresponding constrained minimization (respectively maximization) problems with a newly developed, dimension-free bare (pure) simulation method. Almost no assumptions (such as convexity) on the set of constraints are needed within our discrete setup of arbitrary dimension, and our method is precise (i.e., converges in the limit). As a side effect, we also derive an innovative way of constructing new, useful distances/divergences. To illustrate the core of our approach, we present numerous examples. The potential for widespread applicability is indicated as well; in particular, we give many recent references for uses of the involved distances/divergences and entropies in various research fields (which may also serve as an interdisciplinary interface).
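    To make the kind of constrained Kullback-Leibler minimization concrete, the sketch below solves a textbook special case: the I-projection of a reference distribution onto a single linear moment constraint, which has a closed-form exponential-tilt solution found by bisection. This is emphatically not the paper's bare-simulation method, which is designed for far more general constraint sets; it only illustrates the type of problem being minimized.

```python
import numpy as np

def kl_divergence(q, p):
    """D(q || p) for discrete distributions on a common finite support."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

def i_projection(p, f, c, lo=-50.0, hi=50.0, tol=1e-12):
    """Minimize D(q || p) over {q : E_q[f] = c}. For this linear constraint
    the minimizer is an exponential tilt q_t(i) ∝ p(i)*exp(t*f(i)); the
    mean E_{q_t}[f] is strictly increasing in t, so bisection finds t."""
    p, f = np.asarray(p, float), np.asarray(f, float)
    def tilt(t):
        w = p * np.exp(t * (f - f.max()))   # subtract max for stability
        return w / w.sum()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tilt(mid) @ f < c:
            lo = mid
        else:
            hi = mid
    return tilt(0.5 * (lo + hi))

# Classic example: a die with uniform reference p, constrained to mean 4.5.
p = np.full(6, 1 / 6)
f = np.arange(1, 7)
q = i_projection(p, f, 4.5)
print(q, q @ f, kl_divergence(q, p))
```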

    Statistical Yield Analysis and Design for Nanometer VLSI

    Process variability is the pivotal factor impacting the design of high-yield integrated circuits and systems in deep sub-micron CMOS technologies. The electrical and physical properties of transistors and interconnects, the building blocks of integrated circuits, are prone to significant variations that directly affect the performance and power consumption of the fabricated devices, severely degrading manufacturing yield. Moreover, the large number of transistors on a single chip further complicates the analysis of variation effects, a critical task in diagnosing the cause of failure and designing for yield. Reliable and efficient statistical analysis methodologies in the various design phases are key to predicting yield before entering such an expensive fabrication process.

    In this thesis, the impacts of process variations are examined at three different levels: device, circuit, and micro-architecture. Variation models are provided for each level of abstraction, and new methodologies are proposed for efficient statistical analysis and design under variation.

    At the circuit level, the variability analysis of three crucial sub-blocks of today's systems-on-chip is targeted: digital circuits, memory cells, and analog blocks. Accurate and efficient yield analysis of circuits is recognized as an extremely challenging task within the electronic design automation community. The large scale of digital circuits, the extremely high yield requirement for memory cells, and time-consuming analog circuit simulation are major concerns in the development of any statistical analysis technique. In this thesis, several sampling-based methods are proposed for these three types of circuits that significantly improve the run-time of the traditional Monte Carlo (MC) method without compromising accuracy. The proposed sampling-based yield analysis methods retain the most appealing feature of the MC method, namely the ability to handle arbitrarily complex circuit models, while the use and engineering of advanced variance reduction and sampling methods, including control variates, importance sampling, correlation-controlled Latin Hypercube Sampling, and Quasi-Monte Carlo, provide ultra-fast yield estimation for different types of VLSI circuits.

    At the device level, a methodology is proposed that introduces a variation-aware design perspective for MOS devices in aggressively scaled geometries. The method defines a device-level yield measure targeting the saturation and leakage currents of an MOS transistor, and a statistical method is developed to optimize the advanced doping profiles and geometry features of a device for maximum device-level yield.

    Finally, a statistical thermal analysis framework is proposed that accounts for process and thermal variations simultaneously at the micro-architectural level. The analyzer builds on the fact that process variations lead to uncertain leakage power sources, so that the thermal profile itself is probabilistic. A combined process-thermal-leakage analysis therefore yields a more reliable full-chip statistical leakage power yield.
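    As a toy illustration of one of the variance-reduction techniques named above, the sketch below estimates yield (the probability that a performance metric meets its spec) by plain Monte Carlo and by Latin Hypercube Sampling. The quadratic `delay` model, the spec limit, and the two normalized process parameters are hypothetical stand-ins for a real circuit simulation; only the sampling machinery reflects the technique.

```python
import numpy as np
from scipy.stats import norm, qmc

def delay(params):
    """Toy stand-in for a circuit performance metric (e.g. gate delay) as a
    function of two normalized process parameters (e.g. dVth, dL)."""
    dvth, dl = params[..., 0], params[..., 1]
    return 1.0 + 0.30 * dvth + 0.20 * dl + 0.05 * dvth * dl

def yield_mc(n, spec=1.8, seed=0):
    """Plain Monte Carlo yield estimate under standard-normal variations."""
    z = np.random.default_rng(seed).standard_normal((n, 2))
    return np.mean(delay(z) <= spec)

def yield_lhs(n, spec=1.8, seed=0):
    """Same estimate with Latin Hypercube Sampling: stratified uniforms
    mapped to normals, which typically lowers the estimator's variance."""
    u = qmc.LatinHypercube(d=2, seed=seed).random(n)
    z = norm.ppf(u)
    return np.mean(delay(z) <= spec)

print(yield_mc(2000), yield_lhs(2000))
```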

    Advanced Algebraic Concepts for Efficient Multi-Channel Signal Processing

    Modern society is undergoing a fundamental change in the way we interact with technology. More and more devices are becoming "smart" by gaining advanced computation capabilities and communication interfaces, from household appliances over transportation systems to large-scale networks like the power grid. Recording, processing, and exchanging digital information is thus becoming increasingly important. As a growing share of devices is nowadays mobile and hence battery-powered, a particular interest in efficient digital signal processing techniques emerges. This thesis contributes to this goal by demonstrating methods for finding efficient algebraic solutions to various applications of multi-channel digital signal processing. These may not always result in the best possible system performance; however, they often come close while being significantly simpler to describe and to implement. The simpler description facilitates a thorough analysis of their performance, which is crucial for designing robust and reliable systems. The fact that they rely only on standard algebraic methods allows their rapid implementation and testing under real-world conditions.

    We demonstrate this concept in three different application areas. First, we present a semi-algebraic framework to compute the Canonical Polyadic (CP) decomposition of multidimensional signals, a fundamental tool in multilinear algebra with applications ranging from chemistry and communications to image compression. Compared to state-of-the-art iterative solutions, our framework offers flexible control of the complexity-accuracy trade-off and is less sensitive to badly conditioned data.

    The second application area is multidimensional subspace-based high-resolution parameter estimation, with applications in RADAR, wave propagation modeling, and biomedical imaging. We demonstrate that multidimensional signals can be represented by tensors, providing a more natural description and a better exploitation of their structure than matrices allow. Based on this idea, we introduce a tensor-based subspace estimate that can be used to enhance existing matrix-based parameter estimation schemes significantly. We demonstrate the enhancements on the family of ESPRIT-type algorithms, introducing versions that exploit the multidimensional structure (Tensor-ESPRIT), non-circular source amplitudes (NC ESPRIT), and both jointly (NC Tensor-ESPRIT). To objectively judge the resulting estimation accuracy, we derive a framework for the analytical performance assessment of arbitrary ESPRIT-type algorithms by means of an asymptotic first-order perturbation expansion. Our results are more general than existing analytical results, since we need no assumptions about the statistical distribution of the desired signal and the noise, and the number of available samples may be arbitrarily small. In the end, we obtain simplified expressions for the mean square estimation error that provide insight into the efficiency of the methods under various conditions.

    The third application area is bidirectional relay-assisted communications. We choose two-way relaying with a MIMO amplify-and-forward relay due to its particularly low complexity and its efficient use of the radio resources. We demonstrate that the required channel knowledge can be obtained by a simple algebraic tensor-based channel estimation scheme. We also discuss the design of the relay amplification matrix in this setting. Existing approaches are based either on complicated numerical optimization procedures or on ad-hoc solutions that do not perform well in terms of the bit error rate or the sum-rate. We therefore propose algebraic solutions that are inspired by these performance metrics and hence perform well while being easy to compute. For the MIMO case, we introduce the algebraic norm-maximizing (ANOMAX) scheme, which achieves a very low bit error rate, and its extension Rank-Restored ANOMAX (RR-ANOMAX), which achieves a sum-rate close to an upper bound. For the special case of single-antenna terminals, we derive the semi-algebraic RAGES scheme, which finds the sum-rate-optimal relay amplification matrix based on generalized eigenvectors. Numerical simulations evaluate the resulting system performance in terms of bit error rate and system sum rate, demonstrating the effectiveness of the proposed algebraic solutions.
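    For orientation on the CP decomposition mentioned above, the sketch below implements the standard alternating least squares (ALS) baseline for a 3-way tensor, i.e., the kind of iterative solver the thesis's semi-algebraic framework is compared against; it is not the thesis's method.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product; row order (j, k) -> j*K + k matches
    the C-order unfoldings used below."""
    return (B[:, None, :] * C[None, :, :]).reshape(-1, B.shape[1])

def unfold(X, mode):
    """Mode-n unfolding of a 3-way array in C order."""
    return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1))

def cp_als(X, rank, iters=200, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor by alternating least
    squares: fix two factors, solve for the third in closed form, cycle."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(iters):
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Sanity check on an exactly rank-3 tensor: relative error is typically tiny.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 3)) for d in (8, 9, 10))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, 3)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```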

    Modeling of Species Distribution and Biodiversity in Forests

    Understanding the patterns of biodiversity and their relationship with environmental gradients is a key issue in ecological research and conservation in forests. Several environmental factors can influence species distributions in these complex ecosystems, so it is important to distinguish the effects of natural factors from anthropogenic ones (e.g., environmental pollution, climate change, and forest management) by adopting reliable models able to predict future scenarios of species distribution. In the last 20 years, statistical tools such as Species Distribution Models (SDMs) and Ecological Niche Models (ENMs) have allowed researchers to make great strides in this subject, with hundreds of scientific works in the field. This book collects research articles in which these methodological approaches are the starting point for deepening knowledge of many timely and emerging topics in forest ecosystems around the world, from Eurasia to America.
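    As a minimal illustration of the simplest kind of SDM, the sketch below fits a logistic regression of presence/absence on two environmental covariates using fully synthetic data; the covariates, coefficients, and grid are all invented for the example, and real SDM/ENM workflows involve much more (pseudo-absences, spatial cross-validation, many covariates).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical presence/absence data: rows are survey sites, columns are
# standardized environmental covariates (say, temperature, precipitation).
rng = np.random.default_rng(0)
env = rng.normal(size=(500, 2))
logit = 1.5 * env[:, 0] - 1.0 * env[:, 1] - 0.5   # assumed true response
presence = rng.random(500) < 1.0 / (1.0 + np.exp(-logit))

sdm = LogisticRegression().fit(env, presence)

# Predict habitat suitability over a grid of environmental conditions.
t, pcp = np.meshgrid(np.linspace(-2, 2, 50), np.linspace(-2, 2, 50))
grid = np.column_stack([t.ravel(), pcp.ravel()])
suitability = sdm.predict_proba(grid)[:, 1].reshape(t.shape)
print(sdm.coef_.round(2), suitability.min().round(3), suitability.max().round(3))
```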