75,073 research outputs found

    One parameter family of Compacton Solutions in a class of Generalized Korteweg-DeVries Equations

    We study the generalized Korteweg-DeVries equations derivable from the Lagrangian
    $$L(l,p) = \int \left( \frac{1}{2} \varphi_{x} \varphi_{t} - \frac{(\varphi_{x})^{l}}{l(l-1)} + \alpha (\varphi_{x})^{p} (\varphi_{xx})^{2} \right) dx,$$
    where the usual field $u(x,t)$ of the generalized KdV equation is defined by $u(x,t) = \varphi_{x}(x,t)$. For $p$ an arbitrary continuous parameter with $0 < p \leq 2$ and $l = p+2$, we find compacton solutions to these equations which have the feature that their width is independent of the amplitude. This generalizes previous results, which considered $p = 1, 2$. For the exact compactons we find a relation between the energy, mass, and velocity of the solitons. We show that this relationship can also be obtained using a variational method based on the principle of least action.
    Comment: LaTeX, 4 pages; one figure available on request.
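
    As a worked step the abstract leaves implicit, varying the action above with respect to $\varphi$ gives the evolution equation for $u = \varphi_x$ (a sketch; the overall sign depends on the Euler-Lagrange conventions chosen):
    $$u_t - u^{l-2} u_x - \alpha \left[ 2 u^{p} u_{xxx} + 4p\, u^{p-1} u_x u_{xx} + p(p-1)\, u^{p-2} (u_x)^{3} \right] = 0.$$
    For $p = 0$, $l = 3$ this collapses to a KdV-type equation; the compacton branch studied here is the one-parameter family with $l = p + 2$.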

    Binary Classifier Calibration using an Ensemble of Near Isotonic Regression Models

    Learning accurate probabilistic models from data is crucial in many practical tasks in data mining. In this paper we present a new non-parametric calibration method called \textit{ensemble of near isotonic regression} (ENIR). The method can be considered an extension of BBQ, a recently proposed calibration method, as well as of the commonly used calibration method based on isotonic regression. ENIR is designed to address the key limitation of isotonic regression, namely its monotonicity assumption on the predictions. Similar to BBQ, the method post-processes the output of a binary classifier to obtain calibrated probabilities, so it can be combined with many existing classification models. We demonstrate the performance of ENIR on synthetic and real datasets for commonly used binary classification models. Experimental results show that the method outperforms several common binary classifier calibration methods. In particular, on the real data ENIR commonly performs statistically significantly better than the other methods, and never worse. It improves the calibration power of classifiers while retaining their discrimination power. The method is also computationally tractable for large-scale datasets, as it runs in $O(N \log N)$ time, where $N$ is the number of samples.
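
    Since the paper frames ENIR as a post-processing step that generalizes isotonic-regression calibration, a minimal sketch of that baseline may help orient the reader. This is the standard isotonic approach, not ENIR's near-isotonic ensemble (which scikit-learn does not provide), and the synthetic dataset, classifier, and variable names below are illustrative assumptions, not from the paper.

        # Sketch: isotonic-regression calibration, the baseline ENIR extends.
        # ENIR itself fits an ensemble of *near*-isotonic models; not shown here.
        from sklearn.datasets import make_classification
        from sklearn.isotonic import IsotonicRegression
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        # Synthetic binary classification data (assumption: any scored classifier works).
        X, y = make_classification(n_samples=2000, random_state=0)
        X_train, X_cal, y_train, y_cal = train_test_split(
            X, y, test_size=0.5, random_state=0)

        clf = LogisticRegression().fit(X_train, y_train)   # base binary classifier
        scores = clf.predict_proba(X_cal)[:, 1]            # uncalibrated scores

        # Post-process: fit a monotone map from scores to outcomes on held-out
        # data. The hard monotonicity constraint enforced here is exactly the
        # assumption ENIR relaxes.
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        calibrated = iso.fit(scores, y_cal).predict(scores)

    ENIR replaces this single strict monotone fit with an ensemble of near-isotonic fits; per the abstract, the whole procedure still runs in $O(N \log N)$ time in the number of samples.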