
    Weighted inequalities for multivariable dyadic paraproducts

    Using Wilson's Haar basis in R^n, which is different from the usual tensor-product Haar functions, we define its associated dyadic paraproduct in R^n. We can then extend "trivially" Beznosova's Bellman function proof of the linear bound in L^2(w) with respect to [w]_{A_2} for the one-dimensional dyadic paraproduct. Here trivial means that each piece of the argument that had a Bellman function proof has an n-dimensional counterpart that holds with the same Bellman function. The lemma that allows this painless extension we call the good Bellman function lemma. Furthermore, the argument allows us to obtain dimensionless bounds in the anisotropic case. Comment: 23 pages
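    For context, the one-dimensional objects the abstract refers to can be written out as follows (standard definitions and Beznosova's linear bound, not taken verbatim from the paper):

```latex
% Dyadic paraproduct associated with b and the dyadic grid D,
% where h_I is the Haar function and <f>_I the average of f on I:
\pi_b f \;=\; \sum_{I \in \mathcal{D}} \langle b, h_I\rangle\, \langle f\rangle_I\, h_I,
\qquad \langle f\rangle_I = \frac{1}{|I|}\int_I f.

% Beznosova's linear bound in the A_2 characteristic:
\|\pi_b\|_{L^2(w)\to L^2(w)} \;\le\; C\, \|b\|_{BMO^d}\, [w]_{A_2}.
```

    The paper's contribution is that, with Wilson's Haar basis in place of tensor-product Haar functions, the same Bellman function carries each step of this proof to n dimensions.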

    Cloaking due to anomalous localized resonance in plasmonic structures of confocal ellipses

    If a core of dielectric material is coated by a plasmonic structure of negative dielectric material with non-zero loss parameter, then anomalous localized resonance may occur as the loss parameter tends to zero and the source outside the structure can be cloaked. It has been proved that the cloaking due to anomalous localized resonance (CALR) takes place for structures of concentric disks and the critical radius inside which the sources are cloaked has been computed. In this paper, it is proved that CALR takes place for structures of confocal ellipses and the critical elliptic radii are computed. The method of this paper uses the spectral analysis of the Neumann-Poincaré-type operator associated with the two interfaces (the boundaries of the core and the shell).

    Sharp bounds for general commutators on weighted Lebesgue spaces

    We show that if a linear operator T is bounded on the weighted Lebesgue space L^2(w) and obeys a linear bound with respect to the A_2 constant of the weight, then its commutator [b, T] with a function b in BMO obeys a quadratic bound with respect to the A_2 constant of the weight. We also prove that the kth-order commutator T^k_b = [b, T^{k-1}_b] obeys a bound that is the (k+1)st power of the A_2 constant of the weight. Sharp extrapolation provides the corresponding L^p(w) estimates. In particular these estimates hold when T is any Calderón-Zygmund singular integral operator. The results are sharp in terms of the growth of the operator norm with respect to the A_p constant of the weight for all 1 < p < ∞, all k, and all dimensions, as examples involving the Riesz transforms, power functions, and power weights show.
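    Written out, the bounds described in the abstract take the following schematic form (constants depend on dimension; the L^p exponent is the one produced by sharp extrapolation):

```latex
% Linear hypothesis on T and the resulting commutator bounds:
\|T\|_{L^2(w)\to L^2(w)} \;\le\; c\, [w]_{A_2}
\;\Longrightarrow\;
\|[b,T]\|_{L^2(w)\to L^2(w)} \;\le\; c\, \|b\|_{BMO}\, [w]_{A_2}^{2},

\|T^k_b\|_{L^2(w)\to L^2(w)} \;\le\; c\, \|b\|_{BMO}^{k}\, [w]_{A_2}^{k+1}.

% Sharp extrapolation then yields, for 1 < p < \infty:
\|T^k_b\|_{L^p(w)\to L^p(w)} \;\le\; c\, \|b\|_{BMO}^{k}\,
[w]_{A_p}^{(k+1)\max\{1,\, 1/(p-1)\}}.
```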

    Sharp estimates for the commutator of the Hilbert transform on weighted Lebesgue spaces

    This paper has been withdrawn by the author due to an error in the result.

    Neural Network Optimization Based on Complex Network Theory: A Survey

    Complex network science is an interdisciplinary field of study based on graph theory, statistical mechanics, and data science. With the powerful tools now available in complex network theory for the study of network topology, it is natural to apply complex network topology models to enhance artificial neural network models. In this paper, we provide an overview of the most important works published within the past 10 years on the topic of complex network theory-based optimization methods. This review of the most up-to-date optimized neural network systems reveals that the fusion of complex and neural networks improves both accuracy and robustness. By setting out our review findings here, we seek to promote a better understanding of basic concepts and offer a deeper insight into the various research efforts that have led to the use of complex network theory in the optimized neural networks of today.

    Validation on Residual Variation and Covariance Matrix of USSTRATCOM Two Line Element

    Satellite operating agencies constantly monitor conjunctions between satellites and space objects. Two-line element (TLE) data, published by the Joint Space Operations Center of the United States Strategic Command, are available as raw data for a preliminary analysis of a conjunction with a space object for which no other orbital information exists. However, TLE data carry several kinds of uncertainty. In this paper, we propose and analyze a method for estimating the uncertainties in TLE data through the mean and standard deviation of state-vector residuals and the covariance matrix. The estimates are also compared with actual orbit-determination results to validate the method. Characteristics of the state-vector residuals depending on the orbital elements are examined by applying the analysis to several satellites in various orbits. The main sources of difference between the covariance matrices are also analyzed by comparing the matrices. In particular, for the Korea Multi-Purpose Satellite-2, we examine how the state-vector residuals and the covariance matrix vary with the orbital elements. It is confirmed that a realistic assessment of the space situation of space objects is possible using the mean and standard deviation of the TLE state-vector residuals and the covariance matrix.
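    The statistics the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the state vectors here are synthetic stand-ins, whereas the paper compares TLE-propagated states against orbit-determination reference states at common epochs.

```python
import numpy as np

# Hypothetical stand-in data: (N, 6) arrays of position/velocity state
# vectors [x, y, z, vx, vy, vz] at N common epochs. In practice od_states
# would come from precise orbit determination and tle_states from
# propagating the published TLE to the same epochs.
rng = np.random.default_rng(0)
od_states = rng.normal(size=(100, 6))
tle_states = od_states + rng.normal(scale=0.1, size=(100, 6))

# Per-epoch state-vector residuals (TLE minus reference).
residuals = tle_states - od_states

mean_res = residuals.mean(axis=0)        # bias of each state component
std_res = residuals.std(axis=0, ddof=1)  # scatter of each state component
cov = np.cov(residuals, rowvar=False)    # empirical 6x6 covariance matrix
```

    The diagonal of `cov` recovers the component variances, while the off-diagonal terms capture correlations between position and velocity errors, which is what makes the full covariance matrix more informative than the per-component standard deviations alone.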

    Predictive Distillation Method of Anchor-Free Object Detection Model for Continual Learning

    Continual learning (CL) is becoming increasingly important, not only because the ever-increasing amount of generated data strains storage space, but also because of associated copyright problems. In this study, we propose ground truth′ (GT′), a combination of the ground truth (GT) and predictions of the previously trained model, called the teacher model, obtained by applying the knowledge distillation (KD) technique to an anchor-free object detection model. Among all the objects predicted by the teacher model, those whose prediction score exceeds a threshold are distilled into the currently trained model, called the student model. To avoid interference with the learning of new classes, the IoU is computed between every GT object and the predicted objects. In the continual learning scenario, even when reuse of past data is limited, the proposed model minimizes catastrophic forgetting and enables learning of newly added classes, provided the new data are sufficient. The proposed model was trained on Pascal VOC 2007+2012 and tested on Pascal VOC 2007, improving on the compared algorithm by 9.6%p mAP and 13.7%p F1 in the 19+1 scenario, by 1.6%p mAP and 0.9%p F1 in the 15+5 scenario, and by 0.9%p mAP and 0.6%p F1 in the 10+10 scenario.
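    The GT′ construction the abstract describes (keep confident teacher predictions, drop those overlapping the new-class ground truth) can be sketched as follows. The function names, thresholds, and box format are illustrative assumptions, not the paper's code.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def build_gt_prime(gt_boxes, teacher_preds, score_thresh=0.5, iou_thresh=0.5):
    """Merge the ground truth with confident teacher predictions.

    teacher_preds is a list of (box, score) pairs. A teacher prediction is
    kept only if its score exceeds score_thresh AND it does not overlap any
    GT box above iou_thresh, so distilled old-class objects cannot
    interfere with learning the newly added classes.
    """
    kept = []
    for box, score in teacher_preds:
        if score < score_thresh:
            continue
        if all(iou(box, g) < iou_thresh for g in gt_boxes):
            kept.append(box)
    return list(gt_boxes) + kept

# Example: one new-class GT box; the teacher re-detects it (discarded as a
# duplicate), finds one confident old-class object (kept), and emits one
# low-confidence box (discarded).
gt = [[0.0, 0.0, 10.0, 10.0]]
preds = [([0.0, 0.0, 10.0, 10.0], 0.9),
         ([20.0, 20.0, 30.0, 30.0], 0.8),
         ([40.0, 40.0, 50.0, 50.0], 0.3)]
gt_prime = build_gt_prime(gt, preds)
```

    The student is then trained against GT′ as if the distilled teacher boxes were labeled objects, which is how old-class knowledge survives without replaying past data.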