
    The Weight of Economic Growth and Urbanization on Electricity Demand in UAE

    This study explores the relationship between economic growth, urbanization, financial development, and electricity consumption in the case of the United Arab Emirates over the period 1975-2011. We apply ARDL bounds testing to examine the long-run relationship between the variables in the presence of structural breaks, and VECM Granger causality to investigate the direction of causality between them. Our empirical exercise finds cointegration between the series. Further, the results reveal an inverted U-shaped relationship between economic growth and electricity consumption: economic growth raises electricity consumption initially, but consumption declines after a threshold level of income per capita is reached. Financial development adds to electricity consumption. The relationship between urbanization and electricity consumption is also inverted U-shaped, implying that urbanization increases electricity consumption initially and that, after a threshold level of urbanization, electricity demand falls. The causality analysis supports the feedback hypothesis between economic growth and electricity consumption, i.e. the two are interdependent. Bidirectional causality is found between financial development and electricity consumption, and economic growth and urbanization Granger-cause each other. The feedback hypothesis also holds between urbanization and financial development, between financial development and economic growth, and between electricity consumption and urbanization.
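    In outline, the two estimation steps the abstract describes can be reproduced with statsmodels. The sketch below is illustrative only: the file name, the series names (lec, lgdp, lurb, lfd), and the lag orders are hypothetical placeholders, not the authors' specification.

    import pandas as pd
    from statsmodels.tsa.ardl import ARDL
    from statsmodels.tsa.vector_ar.vecm import VECM

    # Hypothetical annual data for the UAE, 1975-2011 (logged series).
    df = pd.read_csv("uae_annual_1975_2011.csv", index_col="year")

    # Step 1: ARDL model of electricity consumption on growth, urbanization,
    # and financial development; the lag orders here are illustrative.
    ardl = ARDL(df["lec"], lags=1, exog=df[["lgdp", "lurb", "lfd"]], order=1).fit()
    print(ardl.summary())

    # Step 2: VECM-based Granger causality between the four series.
    vecm = VECM(df[["lec", "lgdp", "lurb", "lfd"]], k_ar_diff=1, coint_rank=1).fit()
    print(vecm.test_granger_causality(caused="lec", causing="lgdp").summary())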

    Compressive Mining: Fast and Optimal Data Mining in the Compressed Domain

    Real-world data typically contain repeated and periodic patterns. This suggests that they can be effectively represented and compressed using only a few coefficients of an appropriate basis (e.g., Fourier, wavelets, etc.). However, distance estimation when the data are represented using different sets of coefficients is still a largely unexplored area. This work studies the optimization problems related to obtaining the tightest lower/upper bound on Euclidean distances when each data object is potentially compressed using a different set of orthonormal coefficients. Our technique leads to tighter distance estimates, which translates into more accurate search, learning, and mining operations directly in the compressed domain. We formulate the problem of estimating lower/upper distance bounds as an optimization problem. We establish the properties of optimal solutions, and leverage the theoretical analysis to develop a fast algorithm to obtain an exact solution to the problem. The suggested solution provides the tightest estimation of the L2-norm or the correlation. We show that typical data-analysis operations, such as k-NN search or k-means clustering, can operate more accurately using the proposed compression and distance-reconstruction technique. We compare it with many other prevalent compression and reconstruction techniques, including random projections and PCA-based techniques. We highlight a surprising result, namely that when the data are highly sparse in some basis, our technique may even outperform PCA-based compression. The contributions of this work are generic, as our methodology is applicable to any sequential or high-dimensional data as well as to any orthogonal data transformation used for the underlying data compression scheme.
    Comment: 25 pages, 20 figures, accepted in VLD
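    The core idea, stripped of the paper's optimization machinery, can be sketched in a few lines of NumPy: each object keeps its top-k orthonormal Fourier coefficients plus the discarded energy, and triangle-inequality arguments then give valid (though looser than the paper's optimal) lower and upper bounds on the Euclidean distance. The function names below are ours, for illustration only.

    import numpy as np

    def compress(x, k):
        # Keep the k largest-magnitude coefficients of an orthonormal FFT,
        # so that sum(|c|^2) == sum(|x|^2) (Parseval).
        c = np.fft.fft(x) / np.sqrt(len(x))
        idx = np.argsort(np.abs(c))[-k:]
        resid = np.sum(np.abs(c) ** 2) - np.sum(np.abs(c[idx]) ** 2)
        return set(idx), {i: c[i] for i in idx}, resid

    def distance_bounds(comp1, comp2):
        # Valid lower/upper bounds on ||x1 - x2||_2 when the two objects
        # retain different coefficient sets.
        s1, c1, r1 = comp1
        s2, c2, r2 = comp2
        common = s1 & s2
        d_common = sum(abs(c1[i] - c2[i]) ** 2 for i in common)   # exact part
        e1 = sum(abs(c1[i]) ** 2 for i in s1 - common) + r1       # x1 energy off-common
        e2 = sum(abs(c2[i]) ** 2 for i in s2 - common) + r2       # x2 energy off-common
        lo = np.sqrt(d_common + max(0.0, np.sqrt(e1) - np.sqrt(e2)) ** 2)
        hi = np.sqrt(d_common) + np.sqrt(e1) + np.sqrt(e2)
        return lo, hi

    rng = np.random.default_rng(0)
    x1, x2 = rng.standard_normal(128), rng.standard_normal(128)
    lo, hi = distance_bounds(compress(x1, 16), compress(x2, 16))
    print(lo, np.linalg.norm(x1 - x2), hi)   # lo <= true distance <= hi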

    Quantification of the performance of iterative and non-iterative computational methods of locating partial discharges using RF measurement techniques

    Partial discharge (PD) is an electrical discharge phenomenon that occurs when the insulation material of high-voltage equipment is subjected to high electric field stress. Its occurrence can be an indication of incipient failure within power equipment such as power transformers, underground transmission cables or switchgear. Radio frequency measurement methods can be used to detect and locate discharge sources by measuring the propagated electromagnetic wave arising as a result of ionic charge acceleration. An array of at least four receiving antennas may be employed to detect any radiated discharge signals; the three-dimensional position of the discharge source can then be calculated using different algorithms. These algorithms fall into two categories: iterative or non-iterative. This paper evaluates, through simulation, the location performance of an iterative method (the standard least squares method) and a non-iterative method (the Bancroft algorithm). Simulations were carried out using (i) a "Y"-shaped antenna array and (ii) a square-shaped antenna array, each consisting of four antennas. The results show that PD location accuracy is influenced by the algorithm's error bound, the number of iterations and the initial values for the iterative algorithms, as well as by the antenna arrangement for both the non-iterative and iterative algorithms. Furthermore, this research proposes a novel approach for selecting adequate error bounds and numbers of iterations using results of the non-iterative method, thus resolving some of the iterative method's dependencies.
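    The iterative branch can be illustrated with a short time-difference-of-arrival (TDOA) simulation solved by nonlinear least squares; the antenna geometry, source position, and solver choice below are illustrative assumptions, not the paper's simulation setup.

    import numpy as np
    from scipy.optimize import least_squares

    C = 3e8                                    # propagation speed (m/s)
    antennas = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    source = np.array([0.4, 0.3, 0.2])         # true PD location (m)

    # Simulated arrival-time differences relative to antenna 0.
    toa = np.linalg.norm(antennas - source, axis=1) / C
    tdoa = toa[1:] - toa[0]

    def residuals(p):
        d = np.linalg.norm(antennas - p, axis=1) / C
        return (d[1:] - d[0]) - tdoa

    # The initial guess and stopping tolerances matter for convergence,
    # which is exactly the dependency the paper investigates.
    fit = least_squares(residuals, x0=np.array([0.5, 0.5, 0.5]))
    print(fit.x)                               # close to `source`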

    On the Complexity of Solving Zero-Dimensional Polynomial Systems via Projection

    Given a zero-dimensional polynomial system consisting of n integer polynomials in n variables, we propose a certified and complete method to compute all complex solutions of the system as well as a corresponding separating linear form l with coefficients of small bit size. For computing l, we need to project the solutions into one dimension along O(n) distinct directions, but no further algebraic manipulations. The solutions are then directly reconstructed from the considered projections. The first step is deterministic, whereas the second step uses randomization, thus being Las Vegas. The theoretical analysis of our approach shows that the overall cost for the two problems considered above is dominated by the cost of carrying out the projections. We also give bounds on the bit complexity of our algorithms that are exclusively stated in terms of the number of variables, the total degree and the bit size of the input polynomials.
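    A toy illustration of the projection idea (not the paper's certified algorithm): eliminating one variable with a resultant projects the solutions of a zero-dimensional system onto a single direction, from which coordinates can be read off.

    from sympy import symbols, resultant, Poly, real_roots

    x, y = symbols("x y")
    f = x**2 + y**2 - 4      # circle
    g = x*y - 1              # hyperbola

    proj = Poly(resultant(f, g, y), x)   # project the solutions onto the x-axis
    print(proj)                          # x**4 - 4*x**2 + 1
    print(real_roots(proj))              # real x-coordinates of the solutions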

    Computing Real Roots of Real Polynomials

    Computing the roots of a univariate polynomial is a fundamental and long-studied problem of computational algebra with applications in mathematics, engineering, computer science, and the natural sciences. For isolating as well as for approximating all complex roots, the best algorithm known is based on an almost optimal method for approximate polynomial factorization, introduced by Pan in 2002. Pan's factorization algorithm goes back to the splitting circle method of Schoenhage from 1982. The main drawbacks of Pan's method are that it is quite involved and that all roots have to be computed at the same time. For the important special case where only the real roots have to be computed, much simpler methods are used in practice; however, they considerably lag behind Pan's method with respect to complexity. In this paper, we resolve this discrepancy by introducing a hybrid of the Descartes method and Newton iteration, denoted ANewDsc, which is simpler than Pan's method but achieves a running time comparable to it. Our algorithm computes isolating intervals for the real roots of any real square-free polynomial, given by an oracle that provides arbitrarily good approximations of the polynomial's coefficients. ANewDsc can also be used to isolate only the roots in a given interval and to refine the isolating intervals to an arbitrarily small size; it achieves near-optimal complexity for the latter task.
    Comment: to appear in the Journal of Symbolic Computation
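    The two tasks the abstract names, isolation and refinement, can be demonstrated with SymPy's built-in real-root machinery (SymPy does not implement ANewDsc; this sketch only illustrates what the outputs of such an algorithm look like).

    from sympy import Poly, Symbol

    x = Symbol("x")
    p = Poly(x**5 - 3*x**3 + x - 1, x)

    # Disjoint intervals with rational endpoints, one real root in each.
    for (a, b), mult in p.intervals():
        print(f"one root in ({a}, {b}), multiplicity {mult}")

    # Refine the first isolating interval until its width is below 1e-10.
    (a, b), _ = p.intervals()[0]
    print(p.refine_root(a, b, eps=1e-10))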

    Institutional quality and foreign direct investment inflows : evidence from cross-country data with policy implication

    Purpose: The study examines the impact of institutional quality on Foreign Direct Investment (FDI) inflows for emerging economies from South Asia in the period 2002-2016. Other economic factors such as globalisation, financial development, and GDP are also considered. Design/Methodology/Approach: The study uses the Im-Pesaran-Shin (IPS) panel unit root test to check the stationarity property. It uses cross-sectional dependence (CD) and cross-sectionally augmented IPS tests to check cross-sectional dependency and heterogeneity across the group countries. Next, it uses panel ARDL-PMG tests to check the existence of a long-run relationship among the variables. Then, we apply the panel Granger causality test to check the direction of causality. Finally, for the robustness of results, we use the Pedroni cointegration technique. Findings: The study finds the existence of a long-run relationship between institutional quality and FDI inflows. Other economic factors such as globalisation and financial development show a long-run and strong causal relationship with FDI inflows. However, short-run unidirectional causality from institutional quality to FDI inflows is not found for all the countries. Finally, institutional quality strongly causes FDI inflows provided it is paired with either globalisation or financial development. Practical Implications: Institutional quality increases FDI inflows. Therefore, policymakers should focus on institutional quality along with globalisation and financial development for higher inflows of FDI in emerging countries. Originality/Value: The study considers institutional quality as one of the inputs for FDI inflows in selected emerging economies from South Asia. Further, it creates an institutional quality index for the emerging countries to examine the impact on FDI inflows.
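    Composite indices like the one mentioned under Originality/Value are often built as the first principal component of several governance indicators; whether the authors used PCA is not stated in the abstract, so the sketch below is one common construction, with hypothetical file and column names rather than the authors' data.

    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    df = pd.read_csv("south_asia_governance.csv")       # hypothetical panel
    indicators = ["rule_of_law", "control_of_corruption",
                  "government_effectiveness", "regulatory_quality"]

    z = StandardScaler().fit_transform(df[indicators])  # standardize indicators
    pca = PCA(n_components=1).fit(z)
    df["iq_index"] = pca.transform(z)[:, 0]             # first principal component
    print(pca.explained_variance_ratio_)                # variance share captured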