PATH OF IDEOLOGICAL AND POLITICAL TEACHING REFORM IN HIGHER VOCATIONAL COLLEGES UNDER THE BACKGROUND OF EDUCATIONAL PSYCHOLOGY
DESIGN AND IMPLEMENTATION OF RECONFIGURABLE PATCH ANTENNAS FOR WIRELESS COMMUNICATIONS
Reconfigurable patch antennas have drawn considerable research interest for future wireless communication systems because of their ability to adapt to changing environmental conditions or system requirements. Their reconfigurable features, such as enhanced bandwidths, operating frequencies, polarizations, and radiation patterns, enable a single antenna to accommodate multiple wireless services.
The major objective of this study was to design, fabricate, and test two kinds of novel reconfigurable antennas: a dual-frequency antenna array with multiple reconfigurable patterns, and a pattern- and frequency-reconfigurable Yagi-Uda patch antenna. Comprehensive parametric studies were carried out to determine how the material properties, dimensions, and geometry of the proposed patch antennas affect their performance. Simulations were conducted using Advanced Design System (ADS) software. As a result of this study, two kinds of novel reconfigurable patch antennas have been designed and validated at the expected frequency bands.
For the new reconfigurable antenna array, beam pattern selectivity is obtained by combining a switchable feeding network with a truncated-corner patch structure. Opposite corners of each patch are slotted, and a PIN diode across each slot enables pattern switching. By controlling the states of the four PIN diodes through the corresponding DC voltage sources, the radiation pattern can be reconfigured. The simulation and measurement results agree well with each other.
For the novel frequency- and pattern-reconfigurable Yagi-Uda patch antenna detailed in Chapter 4, two slots on the driven element provide frequency and pattern reconfigurability, and two open-ended stubs adjust the operating frequency and increase the bandwidth. In this design, an ideal model was used to imitate a PIN diode: the absence or presence of a small metal piece represents the off and on states of the diode. Pattern reconfigurability and an overall directivity of 8.1 dBi have been achieved at both operating frequencies. The simulation and measurement results agree closely with each other.
Advisor: Yaoqing Yan
Distributionally Robust Performance Analysis: Data, Dependence and Extremes
This dissertation focuses on distributionally robust performance analysis, which is an area of applied probability whose aim is to quantify the impact of model errors. Stochastic models are built to describe phenomena of interest with the intent of gaining insights or making informed decisions. Typically, however, the fidelity of these models (i.e. how closely they describe the underlying reality) may be compromised due to either the lack of information available or tractability considerations. The goal of distributionally robust performance analysis is then to quantify, and potentially mitigate, the impact of errors or model misspecifications. As such, distributionally robust performance analysis affects virtually any area in which stochastic modelling is used for analysis or decision making.
This dissertation studies various aspects of distributionally robust performance analysis. For example, we are concerned with quantifying the impact of model error in tail estimation using extreme value theory. We are also concerned with the impact of the dependence structure in risk analysis when the marginal distributions of the risk factors are known. In addition, we are interested in recently discovered connections between distributionally robust optimization, machine learning, and other statistical estimators.
The first problem that we consider consists in studying the impact of model specification in the context of extreme quantiles and tail probabilities. There is a rich statistical theory that allows one to extrapolate tail behavior from limited information. This body of theory is known as extreme value theory, and it has been successfully applied to a wide range of settings, including building physical infrastructure to withstand extreme environmental events and guiding the capital requirements of insurance companies to ensure their financial solvency. Not surprisingly, attempting to extrapolate out into the tail of a distribution from limited observations requires imposing assumptions which are impossible to verify. The assumptions imposed in extreme value theory imply that a parametric family of models (known as generalized extreme value distributions) can be used to perform tail estimation. Because such assumptions are so difficult (or impossible) to verify, we use distributionally robust optimization to enhance extreme value statistical analysis. Our approach results in a procedure which can be easily applied in conjunction with standard extreme value analysis, and we show that our estimators enjoy correct coverage even in settings in which the assumptions imposed by extreme value theory fail to hold.
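The classical extreme value workflow that the robust procedure above builds on can be sketched as follows: fit a generalized extreme value (GEV) distribution to block maxima and extrapolate a return level. This is a minimal illustration using SciPy, not the dissertation's robust estimator; the simulated data and the 100-year horizon are illustrative assumptions.

```python
# Minimal sketch of classical (non-robust) extreme value tail estimation.
# All data and parameter choices here are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate 50 years of "daily" observations and take annual block maxima.
daily = rng.gumbel(loc=10.0, scale=2.0, size=(50, 365))
annual_maxima = daily.max(axis=1)

# Fit a generalized extreme value (GEV) distribution to the block maxima.
shape, loc, scale = stats.genextreme.fit(annual_maxima)

# Extrapolate: the 100-year return level is the 99th percentile of the
# fitted annual-maximum distribution.
return_level_100yr = stats.genextreme.ppf(0.99, shape, loc=loc, scale=scale)
print(f"estimated 100-year return level: {return_level_100yr:.2f}")
```

The coverage problem the abstract describes arises exactly here: the `ppf` extrapolation is only as good as the GEV assumption, which motivates wrapping such an estimate in a distributionally robust formulation.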
In addition to extreme value estimation, which is associated with risk analysis via extreme events, another feature which often plays a role in risk analysis is the dependence structure among risk factors. In the second chapter we study the question of evaluating the worst-case expected cost involving two sources of uncertainty, each of them with a specific marginal probability distribution. The worst-case expectation is optimized over all joint probability distributions which are consistent with the marginal distributions specified for each source of uncertainty. So, our formulation allows us to capture the impact of the dependence structure of the risk factors. This formulation is equivalent to the so-called Monge-Kantorovich problem studied in optimal transport theory, whose theoretical properties have been studied extensively in the literature. However, rates of convergence of computational algorithms for this problem have been studied only recently. We show that if one of the random variables takes finitely many values, a direct Monte Carlo approach allows us to evaluate such a worst-case expectation with a quantifiable rate of convergence as the number of Monte Carlo samples increases to infinity.
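When both risk factors are discrete, the worst-case expectation over all couplings with fixed marginals is a finite linear program. The sketch below, with illustrative marginals and the cost c(x, y) = (x - y)^2, is a toy discrete instance of the Monge-Kantorovich formulation described above, not the chapter's Monte Carlo method.

```python
# Sketch: worst-case expected cost over all couplings of two fixed
# discrete marginals, solved as a linear program. Marginals and cost
# function are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

x = np.array([0.0, 1.0, 2.0]); p = np.array([0.2, 0.5, 0.3])
y = np.array([0.0, 1.5]);      q = np.array([0.6, 0.4])
cost = (x[:, None] - y[None, :]) ** 2        # shape (3, 2)

# Decision variable: the joint distribution pi, flattened row-major.
n, m = cost.shape
A_eq = []
for i in range(n):                           # row sums: sum_j pi[i, j] = p[i]
    row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1.0; A_eq.append(row)
for j in range(m):                           # column sums: sum_i pi[i, j] = q[j]
    col = np.zeros(n * m); col[j::m] = 1.0; A_eq.append(col)
b_eq = np.concatenate([p, q])

# Maximizing the expected cost = minimizing its negation over couplings.
res = linprog(-cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
worst_case = -res.fun
print(f"worst-case E[c(X, Y)] = {worst_case:.3f}")  # → 2.000 for this instance
```

Comparing the LP value (2.0) against the independent coupling (about 1.28 here) shows how much of the risk estimate is driven purely by the unknown dependence structure.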
Next, we continue our investigation of worst-case expectations in the context of multiple risk factors, rather than only two, assuming that their marginal probability distributions are fixed. This problem does not fit the mold of standard optimal transport (or Monge-Kantorovich) problems. We consider, however, cost functions which are separable in the sense of being a sum of functions which depend on adjacent pairs of risk factors (think of the factors as indexed by time). In this setting, we are able to reduce the problem to the study of several separate Monge-Kantorovich problems. Moreover, we explain how we can even include martingale constraints, which are often natural to consider in settings such as financial applications.
While in the previous chapters we focused on the impact of tail modeling or dependence, in the later parts of the dissertation we take a broader view by studying decisions which are made based on empirical observations. So, we focus on so-called distributionally robust optimization formulations. We use optimal transport theory to model the degree of distributional uncertainty or model misspecification. Distributionally robust optimization based on optimal transport has been a very active research topic in recent years; our contribution consists in studying how to specify the optimal transport metric in a data-driven way. We explain our procedure in the context of classification, which is of substantial importance in machine learning applications.
Fast recalibration of test and measurement equipment
A testing line in a factory comprises equipment that tests the stream of consumer devices-under-test (DUTs) that come down the line. The equipment needs periodic, e.g., daily, calibration to ensure that the measurements it produces are accurate. Calibration of test equipment is done using reference devices, known as golden units. Golden units are expensive and are rated for only a limited number of calibration cycles. Golden units wear out relatively rapidly and need frequent replacement, a significant expense at a large factory.
This disclosure presents techniques that enable periodic calibration of test equipment using the very devices undergoing test. A golden station, which is maintained for reliable and accurate measurement, is introduced and kept separate from the testing lines. Production DUTs that have undergone testing along the testing line are randomly picked and reverified at the golden station. Measurement differences between the testing line and the golden station are traced to mis-calibrated testing-line equipment. The wear-and-tear, expense, and burden of maintenance and replacement associated with golden units are thereby reduced or eliminated.
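The reverification logic described above can be sketched as a simple comparison: for each sampled DUT, record the testing line's reading alongside the golden station's remeasurement, and flag any line station whose mean offset exceeds a tolerance. The station names, tolerance, and readings below are illustrative assumptions, not part of the disclosure.

```python
# Sketch of golden-station reverification: flag testing-line stations whose
# readings drift from golden-station remeasurements of sampled DUTs.
# Station names, tolerance, and data are illustrative assumptions.
from statistics import mean

TOLERANCE = 0.5  # max acceptable mean offset, in measurement units

# (station, testing-line reading, golden-station reading) per sampled DUT.
samples = [
    ("station-A", 10.1, 10.00),
    ("station-A",  9.9, 10.05),
    ("station-B", 12.8, 12.00),  # station-B reads consistently high
    ("station-B", 12.7, 11.95),
]

def flag_miscalibrated(samples, tolerance=TOLERANCE):
    """Return stations whose mean (line - golden) offset exceeds tolerance."""
    offsets = {}
    for station, line_val, golden_val in samples:
        offsets.setdefault(station, []).append(line_val - golden_val)
    return sorted(s for s, deltas in offsets.items()
                  if abs(mean(deltas)) > tolerance)

print(flag_miscalibrated(samples))  # → ['station-B']
```

Averaging over several sampled DUTs per station, as here, distinguishes a genuine calibration offset from the unit-to-unit variation of individual production DUTs.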