
    Online monitoring and control of voltage stability margin via machine learning-based adaptive approaches

    Voltage instability, or voltage collapse, observed in many blackout events, poses a significant threat to power system reliability. To prevent voltage collapse, the countermeasures suggested by post-event analyses of blackouts usually include the adoption of better online voltage stability monitoring and control tools. Recently, the variability and uncertainty introduced by the increasing penetration of renewable energy have further magnified this need. This work investigates methodologies for online voltage stability margin (VSM) monitoring and control in the new era of the smart grid and big data. It unleashes the value of online measurements and leverages established results in machine learning and demand response. An online VSM monitoring approach based on local regression and an adaptive database is proposed. Considering the increasing variability and uncertainty of power system operation, this approach exploits the local pattern relating VSM to reactive power reserve (RPR) and can adapt to changing system conditions. LASSO (Least Absolute Shrinkage and Selection Operator) is tailored to the local regression problem so as to mitigate the curse of dimensionality in large-scale systems. Along with the VSM prediction, a prediction interval is estimated simultaneously in a simple but effective way and used as evidence to trigger database updates. The IEEE 30-bus system and a 60,000-bus large system are used to test and demonstrate the proposed approach. The results show that the approach can be successfully employed for online voltage stability monitoring of real-size systems, and the adaptivity of model and data gives it an advantage in circumstances where large and unforeseen changes in system conditions are inevitable. When deteriorating system conditions are identified, a control strategy is needed to steer the system back to security. A model predictive control (MPC) based framework is proposed to maintain the VSM in near-real-time while minimizing control cost. VSM is locally modeled as a linear function of RPRs based on the VSM monitoring tool, which convexifies the intricate VSM-constrained optimization problem. Thermostatically controlled loads (TCLs) are utilized through a demand response (DR) aggregator as an efficient measure to enhance voltage stability. For such an advanced application of the energy management system (EMS), plug-and-play is a necessary feature for the new controller to be applicable in a cooperative operating environment. In this work, cooperation is realized by a predictive interface strategy, which predicts the behavior of relevant controllers using simple models declared and updated by those controllers. In particular, customer dissatisfaction, defined as the cumulative discomfort caused by DR, is explicitly constrained in the customers' interest; this constraint maintains the applicability of the control. The IEEE 30-bus system is used to demonstrate the proposed control strategy. Adaptivity and proactivity lie at the heart of the proposed approach. By making full use of real-time information, the proposed approach is competent at the task of VSM monitoring and control in a non-stationary and uncertain operating environment.
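
    The core of the monitoring approach, as described, is a local LASSO regression mapping reactive power reserves to the VSM. Below is a minimal sketch of that idea, fitting scikit-learn's Lasso on the k nearest historical operating points; the variable names, neighborhood size, and the residual-based interval are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def predict_vsm_local(rpr_db, vsm_db, rpr_now, k=50, alpha=1e-3):
    """Predict VSM at the current operating point by fitting LASSO
    only on the k most similar historical operating points.
    rpr_db: (n, d) historical RPR vectors; vsm_db: (n,) margins."""
    # locality: pick the k nearest neighbors of the current RPR vector
    idx = np.argsort(np.linalg.norm(rpr_db - rpr_now, axis=1))[:k]
    model = Lasso(alpha=alpha).fit(rpr_db[idx], vsm_db[idx])
    vsm_hat = model.predict(rpr_now[None, :])[0]
    # crude prediction interval from the local residual spread (illustrative)
    resid = vsm_db[idx] - model.predict(rpr_db[idx])
    half_width = 2.0 * resid.std()
    return vsm_hat, (vsm_hat - half_width, vsm_hat + half_width)
```

    A wide interval at the current point would then signal that the database no longer covers the operating condition and should be updated, in the spirit of the adaptive database described above.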

    Algorithms for Multiclass Classification and Regularized Regression

    Multiclass classification and regularized regression problems are very common in modern statistical and machine learning applications. On the one hand, multiclass classification problems require the prediction of class labels: given observations of objects that belong to certain classes, can we predict to which class a new object belongs? On the other hand, the reg

    On the Finite-Time Complexity and Practical Computation of Approximate Stationarity Concepts of Lipschitz Functions

    We report a practical finite-time algorithmic scheme to compute approximately stationary points for nonconvex nonsmooth Lipschitz functions. In particular, we are interested in two kinds of approximate stationarity notions for nonconvex nonsmooth problems, i.e., Goldstein approximate stationarity (GAS) and near-approximate stationarity (NAS). For GAS, our scheme removes the unrealistic subgradient selection oracle assumption in (Zhang et al., 2020, Assumption 1) and computes GAS with the same finite-time complexity. For NAS, Davis & Drusvyatskiy (2019) showed that ρ-weakly convex functions admit finite-time computation, while Tian & So (2021) provided the matching impossibility results of dimension-free finite-time complexity for first-order methods. Complementing these developments, in this paper we isolate a new class of functions that can be Clarke irregular (and thus no longer weakly convex) and show that our new algorithmic scheme can compute NAS points for functions in that class within finite time. To demonstrate the wide applicability of our new theoretical framework, we show that the ρ-margin SVM and 1-layer and 2-layer ReLU neural networks, all being Clarke irregular, satisfy our new conditions. Comment: 20 pages, 3 figures, ICML 2022
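
    For intuition, a point x is (δ, ε)-Goldstein stationary when the convex hull of gradients taken in a δ-ball around x contains a point of norm at most ε. The sketch below estimates that gap with the classic gradient-sampling idea: sample gradients in the ball and find the min-norm point of their convex hull. It illustrates the concept only; it is not the paper's scheme, and the sample count and test function are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

def goldstein_gap(f_grad, x, delta, n_samples=64, rng=None):
    """Estimate dist(0, conv{grad f(y) : y in B_delta(x)}) by sampling."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = x.size
    # sample points uniformly in the delta-ball around x
    u = rng.standard_normal((n_samples, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    r = delta * rng.random(n_samples) ** (1.0 / d)
    G = np.array([f_grad(x + ri * ui) for ri, ui in zip(r, u)])
    # min-norm point of the hull: min ||G^T w||  s.t.  w >= 0, sum(w) = 1
    obj = lambda w: (G.T @ w) @ (G.T @ w)
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    w0 = np.full(n_samples, 1.0 / n_samples)
    res = minimize(obj, w0, method='SLSQP',
                   bounds=[(0.0, 1.0)] * n_samples, constraints=cons)
    return np.linalg.norm(G.T @ res.x)

# Example: f(x) = |x[0]| + |x[1]| is Goldstein stationary near the origin.
grad = lambda y: np.sign(y)  # a.e. gradient of the l1-norm
print(goldstein_gap(grad, np.array([0.05, -0.02]), delta=0.1))  # ~ 0
```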

    Computer Simulation Studies for the Production of 7-Tetradecene by Reactive Distillation

    The production of 7-tetradecene was examined. Properties for this compound were estimated using group contribution methods and compared to experimental data. Process simulation was used as a tool to identify competitive processing strategies. For reactive distillation, three different models were compared to determine the model complexity needed to describe the process: Model A, with the assumption of physical and chemical equilibrium; Model B, with kinetics described by a second-order reaction and physical equilibrium; and Model C, a non-equilibrium stage model that accounts for mass transfer. A conceptual design was obtained with Model B and was checked with Model C, which described the process more accurately but was more difficult to converge. Since Model A was easier to converge, it was used to predict process conversions at different pressures. The predictions favor operation at 1 bar, due to the lower heat duty and the minimum number of stages required.
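
    As an illustration of the group-contribution idea, the sketch below estimates the normal boiling point of 7-tetradecene with the Joback method, counting its groups (2 × CH3, 10 × CH2, 2 × =CH-). The contribution values are the commonly cited Joback table entries, an assumption on my part rather than data from this study, so treat the result as a rough check.

```python
# Joback group-contribution estimate of the normal boiling point:
#   Tb [K] = 198.2 + sum of group contributions
JOBACK_TB = {"CH3": 23.58, "CH2": 22.88, "=CH-": 24.96}  # assumed table values

# 7-tetradecene: CH3-(CH2)5-CH=CH-(CH2)5-CH3
groups = {"CH3": 2, "CH2": 10, "=CH-": 2}

tb = 198.2 + sum(n * JOBACK_TB[g] for g, n in groups.items())
print(f"Estimated Tb ≈ {tb:.1f} K ({tb - 273.15:.0f} °C)")  # ≈ 524 K
```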

    THREE-DIMENSIONAL VISION FOR STRUCTURE AND MOTION ESTIMATION

    This thesis addresses computer vision techniques for estimating geometric properties of the 3-D world from digital images. Such properties are essential for object recognition and classification, mobile robot navigation, reverse engineering, and the synthesis of virtual environments. In particular, this thesis describes the modules involved in computing the structure of a scene from images, and offers original contributions in the following fields. Stereo pair rectification. A novel rectification algorithm is presented, which transforms a stereo pair in such a way that corresponding points in the two images lie on horizontal lines with the same index. Experimental tests prove the correct behavior of the method, as well as the negligible loss of accuracy of 3-D reconstruction when performed directly from the rectified images. Stereo matching. The problem of computational stereopsis is analyzed, and a new, efficient stereo matching algorithm addressing robust disparity estimation in the presence of occlusions is presented. The algorithm, called SMW, is an adaptive, multi-window scheme using left-right consistency to compute disparity and its associated uncertainty. Experiments with both synthetic and real stereo pairs show how SMW improves on closely related techniques in both accuracy and efficiency. Feature tracking. The Shi-Tomasi-Kanade feature tracker is improved by introducing an automatic scheme for rejecting spurious features, based on robust outlier diagnostics. Experiments with real and synthetic images confirm the improvement over the original tracker, both qualitatively and quantitatively. Uncalibrated vision. A review of techniques for computing a three-dimensional model of a scene from a single moving camera, with unconstrained motion and unknown parameters, is presented. The contribution is a critical, unified view of some of the most promising techniques; such a review does not yet exist in the literature. 3-D motion. A robust algorithm for registering and finding correspondences in two sets of 3-D points with a significant percentage of missing data is proposed. The method, called RICP, exploits LMedS robust estimation to withstand the effect of outliers. Experimental comparison with a closely related technique, ICP, shows RICP's superior robustness and reliability.
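
    The left-right consistency check used by SMW can be sketched in a few lines: compute disparity twice (left-to-right and right-to-left) and keep only pixels where the two maps agree. The sketch below assumes rectified images, dense disparity maps as numpy arrays, and the convention that a left disparity d maps column x to column x - d; it illustrates the check, not the full adaptive multi-window matcher.

```python
import numpy as np

def left_right_consistency(disp_left, disp_right, tol=1.0):
    """Mark pixels whose left->right and right->left disparities disagree
    by more than `tol` (likely occlusions or mismatches)."""
    h, w = disp_left.shape
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    # column each left pixel maps to in the right image (assumed convention)
    right_cols = np.clip((cols - np.round(disp_left)).astype(int), 0, w - 1)
    disagreement = np.abs(disp_left - disp_right[rows, right_cols])
    return disagreement <= tol  # True where the match is consistent
```

    Pixels failing the check are typically treated as occluded and filled from consistent neighbors, which is what makes the disparity estimate robust near depth discontinuities.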

    Singularity of Data Analytic Operations

    Statistical data by their very nature are indeterminate, in the sense that if one repeated the process of collecting the data, the new data set would be somewhat different from the original. Therefore, a statistical method, a map Φ taking a data set x to a point in some space F, should be stable at x: small perturbations in x should result in a small change in Φ(x). Otherwise, Φ is useless at x or, and this is important, near x. So one doesn't want Φ to have "singularities": data sets x such that the limit of Φ(y) as y approaches x doesn't exist. (Yes, the same issue arises elsewhere in applied math.) However, broad classes of statistical methods have topological obstructions to continuity: they must have singularities. We show why, and give lower bounds on the Hausdorff dimension, even the Hausdorff measure, of the set of singularities of such data maps. There seem to be numerous examples. We apply mainly topological methods to study the (topological) singularities of functions defined (on dense subsets of) "data spaces" and taking values in spaces with nontrivial homology. At least in this book, data spaces are usually compact manifolds. The purpose is to gain insight into the numerical conditioning of statistical description, data summarization, and inference and learning methods. We prove general results that can often be used to bound below the dimension of the singular set. We apply our topological results to develop lower bounds on the Hausdorff measure of the singular set. We apply these methods to the study of plane fitting and measuring location of data on spheres. This is not a "final" version, merely another attempt. Comment: 325 pages, 8 figures
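
    Plane (line) fitting, one of the examples named above, makes the singularity easy to see numerically: when the data's covariance is isotropic, every direction through the origin fits equally well, so arbitrarily small perturbations select wildly different best-fit directions. A minimal numpy illustration of my own, not the book's:

```python
import numpy as np

# Four points whose covariance is isotropic: the best-fit direction
# is singular here, since every line through the origin fits equally well.
X = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])

def first_direction(X):
    _, vecs = np.linalg.eigh(X.T @ X)
    return vecs[:, -1]  # eigenvector of the largest eigenvalue

# Two perturbations of size 1e-6 pick out orthogonal answers:
Xa = X.copy(); Xa[0, 0] += 1e-6   # stretch along the x-axis
Xb = X.copy(); Xb[2, 1] += 1e-6   # stretch along the y-axis
print(first_direction(Xa))  # ~ (1, 0)
print(first_direction(Xb))  # ~ (0, 1)
```

    The limit of the fitted direction as the data approach the symmetric configuration does not exist, which is exactly the kind of singularity whose prevalence the book quantifies.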

    Untangling hotel industry’s inefficiency: An SFA approach applied to a renowned Portuguese hotel chain

    The present paper explores the technical efficiency of four hotels of the Teixeira Duarte Group, a renowned Portuguese hotel chain. An efficiency ranking of these four hotel units, located in Portugal, is established using Stochastic Frontier Analysis. This methodology makes it possible to discriminate between measurement error and systematic inefficiencies in the estimation process, enabling investigation of the main causes of inefficiency. Several suggestions concerning efficiency improvement are put forward for each hotel studied.
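
    Stochastic Frontier Analysis achieves that separation by decomposing the deviation from the frontier into symmetric noise v ~ N(0, σv²) and one-sided inefficiency u ≥ 0. Below is a minimal sketch of maximum likelihood for the standard normal/half-normal production frontier in the Aigner-Lovell-Schmidt tradition; it is a generic illustration with made-up data, not the paper's specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, y, X):
    """Normal/half-normal SFA: y = X @ beta + v - u,
    v ~ N(0, sigma_v^2), u ~ |N(0, sigma_u^2)|."""
    k = X.shape[1]
    beta = params[:k]
    sigma = np.exp(params[k])      # sigma = sqrt(sigma_u^2 + sigma_v^2) > 0
    lam = np.exp(params[k + 1])    # lambda = sigma_u / sigma_v > 0
    eps = y - X @ beta
    ll = (np.log(2.0 / sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

# synthetic demo: one input, true frontier y = 1 + 0.5 * x + v - u
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 2, n)
X = np.column_stack([np.ones(n), x])
y = (X @ np.array([1.0, 0.5])
     + 0.1 * rng.standard_normal(n)
     - np.abs(0.2 * rng.standard_normal(n)))
res = minimize(neg_loglik, np.zeros(4), args=(y, X), method="BFGS")
print(res.x[:2])  # estimated frontier coefficients
```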

    Compressive Sensing of Multiband Spectrum towards Real-World Wideband Applications.

    Spectrum scarcity is a major challenge in wireless communication systems as they rapidly evolve towards more capacity and bandwidth. The fact that the real-world spectrum, as a finite resource, is sparsely utilized in certain bands spurs the proposal of spectrum sharing. In wideband scenarios, accurate real-time spectrum sensing, as an enabler of spectrum sharing, can become inefficient as it naturally requires the sampling rate of the analog-to-digital conversion to exceed the Nyquist rate, which is resource-costly and energy-consuming. Compressive sensing techniques have been applied in wideband spectrum sensing to achieve sub-Nyquist-rate sampling of frequency-sparse signals and alleviate such burdens. A major challenge of compressive spectrum sensing (CSS) is the complexity of the sparse recovery algorithm. Greedy algorithms achieve sparse recovery with low complexity but require prior knowledge of the signal sparsity; a practical spectrum sparsity estimation scheme is therefore proposed. Furthermore, reducing the dimension of the sparse recovery problem is proposed, which further lowers the complexity and achieves signal denoising that promotes recovery fidelity. The robust detection of incumbent radios is also a fundamental problem of CSS. To address the energy detection problem in CSS, the spectrum statistics of the recovered signals are investigated and a practical threshold adaptation scheme for energy detection is proposed. Moreover, it is of particular interest to identify the challenges and opportunities in implementing real-world CSS for systems with large bandwidth. Initial research on the practical issues towards the real-world realization of a wideband CSS system based on the multicoset sampler architecture is presented. In all, this thesis provides insights into two critical challenges, low-complexity sparse recovery and robust energy detection, in the general CSS context, while also looking into particular issues towards real-world CSS implementation based on the multicoset sampler.
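
    The greedy recovery discussed above can be illustrated with Orthogonal Matching Pursuit, which takes the sparsity level k as an input, precisely the prior knowledge the thesis proposes to estimate. A minimal numpy sketch of generic OMP, not the thesis's algorithm:

```python
import numpy as np

def omp(A, y, k, tol=1e-6):
    """Orthogonal Matching Pursuit: recover a k-sparse x with y ≈ A @ x."""
    m, n = A.shape
    residual = y.copy()
    support, x = [], np.zeros(n)
    for _ in range(k):
        # pick the column most correlated with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# demo: recover a 3-sparse spectrum from 40 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 128))
x_true = np.zeros(128); x_true[[5, 40, 90]] = [1.0, -2.0, 0.5]
print(np.allclose(omp(A, A @ x_true, k=3), x_true, atol=1e-6))
```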

    Essays in Macroeconomics and Macroeconometrics

    This thesis contributes to macroeconomics and macroeconometrics. Chapters 2-4 study the role of producer heterogeneity for business cycles and macroeconomic development. Chapters 5-6 provide inference methods for structural vector autoregressions. Chapter 2 examines the role of time to build for business cycles. We document that time to build is volatile and largest during recessions. In a model with producer heterogeneity and capital adjustment frictions, the longer the time to build, the less frequently firms invest, and the less firm investment reflects firm productivity. Longer time to build thus worsens the allocation of capital across firms. In the calibrated model, one additional month of time to build lowers GDP by 0.5%. Chapter 3 investigates the role of uncertainty fluctuations. We exploit highly disaggregated industry-level data to study the empirical importance of various transmission channels of uncertainty shocks. We provide testable implications for the interaction between various frictions and the job flow responses to uncertainty shocks. Empirically, uncertainty shocks lower job creation and raise job destruction in more than 80% of industries. In line with theory, these responses are significantly magnified by the severity of financial frictions. In contrast, we do not find supportive evidence for other transmission channels. Chapter 4 re-examines the importance of misallocation for macroeconomic development. We ask whether differences in micro-level factor productivities should be understood as a result of frictions in technology choice. We document that the bulk of all productivity differences is persistent and related to highly persistent differences in the capital-labor ratio. This suggests a cost of adjusting this ratio; in fact, a model with such a friction can explain our findings. At the same time, the loss in productive efficiency from this friction is modest. Chapter 5 studies structural VAR models that impose equality and/or inequality restrictions on a single shock, e.g. a monetary policy shock. The paper proposes a computationally convenient algorithm to evaluate the smallest and largest feasible values of the structural impulse response. We further show under which conditions these values are directionally differentiable and propose delta-method inference for the set-identified structural impulse response. We apply our method to set-identify the effects of unconventional monetary policy shocks. In Chapter 6 we study models that impose restrictions on multiple shocks. The projection region is the collection of structural impulse responses compatible with the vectors of reduced-form parameters contained in a Wald ellipsoid. We show that the projection region has both frequentist coverage and robust Bayesian credibility. To address projection conservatism, we propose a feasible calibration algorithm, which achieves exact robust Bayesian credibility at the desired credibility level and, additionally, exact frequentist coverage under differentiability assumptions.
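
    The objects studied in Chapter 5, the smallest and largest structural impulse responses compatible with inequality (sign) restrictions on one shock, can be approximated with the textbook rejection sampler: draw random rotations of a Cholesky factor of the innovation covariance, keep the draws satisfying the restrictions, and track the extremes. The sketch below does this for a toy VAR(1); all numbers are made up, and it illustrates the set-identified object rather than the chapter's computationally convenient algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed reduced-form objects (illustrative, not estimated):
n = 3
B = 0.5 * np.eye(n)                       # VAR(1) coefficient matrix
Sigma = np.array([[1.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])       # innovation covariance
C = np.linalg.cholesky(Sigma)             # one admissible impact matrix

def irf(impact, horizons=12):
    """Impulse responses of a VAR(1): Theta_h = B^h @ impact."""
    out = [impact]
    for _ in range(horizons - 1):
        out.append(B @ out[-1])
    return np.stack(out)                  # (horizons, n, n)

lo = np.full(12, np.inf); hi = np.full(12, -np.inf)
for _ in range(10_000):
    # random rotation: QR of a Gaussian matrix, sign-normalized
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    Q *= np.sign(np.diag(R))
    impact = C @ Q
    # illustrative sign restriction on shock 1: variable 0 up, variable 1 down
    if impact[0, 0] > 0 and impact[1, 0] < 0:
        resp = irf(impact)[:, 2, 0]       # response of variable 2 to shock 1
        lo = np.minimum(lo, resp); hi = np.maximum(hi, resp)

print(np.column_stack([lo, hi]))          # pointwise response bounds
```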

Computer Science for Continuous Data: Survey, Vision, Theory, and Practice of a Computer Analysis System

    Building on George Boole's work, Logic provides a rigorous foundation for the powerful tools in Computer Science that underlie today's ubiquitous processing of discrete data, such as strings or graphs. Concerning continuous data, Alan Turing had already applied "his" machines to formalize and study the processing of real numbers: an aspect of his oeuvre that we transform from theory to practice. The present essay surveys the state of the art and envisions the future of Computer Science for continuous data: natively, beyond brute-force discretization, based on, guided by, and extending classical discrete Computer Science, as a bridge between Pure and Applied Mathematics.
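
    In the Turing-style view of real numbers that such a system builds on, a real is an algorithm producing arbitrarily tight rational enclosures on demand, rather than a fixed-precision float. A minimal sketch of that idea, with sqrt(2) as the example (my illustration, not the essay's system):

```python
from fractions import Fraction

def sqrt2_approx(n_bits):
    """Enclose sqrt(2) to within 2**-n_bits by interval bisection,
    using exact rational arithmetic (no floating-point rounding)."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n_bits):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo, hi

lo, hi = sqrt2_approx(60)
print(float(lo), float(hi))  # enclosure of width <= 2**-60
```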