1,040 research outputs found

    Sacral stress fracture during pregnancy [Ristluu stressimurd raseduse ajal]

    Eesti Arst 2022; 101(6–7):382–38

    On Best Approximation by Ridge Functions

    Abstract: We consider best approximation of some function classes by the manifold M_n consisting of sums of n arbitrary ridge functions. It is proved that the deviation of the Sobolev class W_2^{r,d} from the manifold M_n in the space L_2 behaves asymptotically as n^{-r/(d-1)}.
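    For reference, a compact LaTeX restatement of the result described above; the explicit form of the ridge-function manifold is the standard one and is my assumption, since the abstract only says "sums of n arbitrary ridge functions":

    % Ridge-function manifold (standard form, assumed) and the stated asymptotic rate.
    \[
      \mathcal{M}_n = \Bigl\{\, \sum_{i=1}^{n} g_i(a_i \cdot x) \;:\; a_i \in \mathbb{R}^d,\ g_i : \mathbb{R} \to \mathbb{R} \,\Bigr\},
      \qquad
      \sup_{f \in W_2^{r,d}} \inf_{h \in \mathcal{M}_n} \|f - h\|_{L_2} \asymp n^{-r/(d-1)}.
    \]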

    Byzantium, Rus and Cumans in the early 13th century

    This paper examines the foreign policy of the Galician-Volhynian prince Roman Mstislavich. Roman became the main military ally of the Byzantine Empire in the early 13th century. Byzantium was going through a severe political crisis caused by the Serbian and the Bulgarian uprisings and by the crushing raids of the Cumans. According to Niketas Choniates, the nomads’ aggression could have been stopped only thanks to the aid of the Galician prince Roman. The circumstances and the time of Roman’s campaign in Choniates’ account are the same as in the Russian chronicles reporting the steppe campaigns of the Galician-Volhynian prince

    The degree of approximation of sets in Euclidean space using sets with bounded Vapnik-Chervonenkis dimension

    Abstract: The degree of approximation of infinite-dimensional function classes using finite n-dimensional manifolds has been the subject of a classical field of study in the area of mathematical approximation theory. In Ratsaby and Maiorov (1997), a new quantity ρ_n(F, L_q), which measures the degree of approximation of a function class F by the best manifold H_n of pseudo-dimension less than or equal to n in the L_q-metric, has been introduced. For sets F ⊂ R^m it is defined as ρ_n(F, l_q^m) = inf_{H_n} dist(F, H_n), where dist(F, H_n) = sup_{x ∈ F} inf_{y ∈ H_n} ‖x − y‖_{l_q^m} and H_n ⊂ R^m is any set of VC-dimension less than or equal to n, where n < m. It measures the degree of approximation of the set F by the optimal set H_n ⊂ R^m of VC-dimension less than or equal to n in the l_q^m-metric. In this paper we compute ρ_n(F, l_q^m) for F being the unit ball B_p^m = {x ∈ R^m : ‖x‖_{l_p^m} ≤ 1} for any 1 ≤ p, q ≤ ∞, and for F being any subset of the Boolean m-cube of size larger than 2^{mγ}, for any 1/2 < γ < 1.
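    Written out in LaTeX, the quantities defined in the abstract above (notation follows the abstract; this is only a typeset restatement, not the paper's exact wording):

    % Degree of approximation by sets of bounded VC-dimension, and the unit l_p^m-ball studied in the paper.
    \[
      \rho_n(F, l_q^m) = \inf_{H_n} \operatorname{dist}(F, H_n),
      \qquad
      \operatorname{dist}(F, H_n) = \sup_{x \in F} \inf_{y \in H_n} \|x - y\|_{l_q^m},
    \]
    where the infimum is over sets $H_n \subset \mathbb{R}^m$ of VC-dimension at most $n$ with $n < m$, and
    \[
      B_p^m = \{\, x \in \mathbb{R}^m : \|x\|_{l_p^m} \le 1 \,\}.
    \]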

    On the Value of Partial Information for Learning from Examples

    Abstract: The PAC model of learning and its extension to real-valued function classes provides a well-accepted theoretical framework for representing the problem of learning a target function g(x) using a random sample {(x_i, g(x_i))}_{i=1}^{m}. Based on the uniform strong law of large numbers, the PAC model establishes the sample complexity, i.e., the sample size m which is sufficient for accurately estimating the target function to within high confidence. Often, in addition to a random sample, some form of prior knowledge is available about the target. It is intuitive that increasing the amount of information should have the same effect on the error as increasing the sample size. But quantitatively, how does the rate of error with respect to increasing information compare to the rate of error with increasing sample size? To answer this we consider a new approach based on a combination of the information-based complexity of Traub et al. and Vapnik–Chervonenkis (VC) theory. In contrast to VC theory, where function classes of finite pseudo-dimension are used only for statistical estimation, we let such classes play a dual role of functional estimation as well as approximation. This is captured in a newly introduced quantity, ρ_d(F), which represents a nonlinear width of a function class F. We then extend the notion of the nth minimal radius of information and define a quantity I_{n,d}(F) which measures the minimal approximation error of the worst-case target g ∈ F by the family of function classes having pseudo-dimension d, given partial information on g consisting of values taken by n linear operators. The error rates are calculated, which leads to a quantitative notion of the value of partial information for the paradigm of learning from examples.
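    As a rough sketch, the "nonlinear width" ρ_d(F) mentioned above can be read as a best approximation error over classes of bounded pseudo-dimension; the formula below is my reading of the abstract, not the paper's verbatim definition, and the choice of L_q-metric is an assumption:

    % Nonlinear width via classes H_d of pseudo-dimension at most d (assumed form, following the abstract).
    \[
      \rho_d(F, L_q) \;=\; \inf_{\operatorname{pdim}(H_d) \le d} \; \sup_{g \in F} \; \inf_{h \in H_d} \|g - h\|_{L_q},
    \]
    with I_{n,d}(F) then, roughly, taking a further infimum over the choice of the n linear operators that supply the partial information on g.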