
    The Burlington Races Revisited: A Revised Analysis of an 1813 Naval Battle for Supremacy on Lake Ontario

    From the safety of the Lake Ontario shore near Burlington, military and civilian observers witnessed the jockeying for position of many sailing vessels during the afternoon of Tuesday, September 28, 1813. They likened the event to a yacht race, and thus a pivotal naval engagement that would determine the outcome of the War of 1812 was facetiously labelled "The Burlington Races". The facts of this important piece of Canadiana have, like so many significant historical events, been cloaked by myth and misconception until recently. The discovery in the US National Archives of the log of HMS Wolfe, the British flagship of the Lake Ontario squadron, has made it possible to interpret this episode in Canadian history more accurately.

    The First Encounter: Fighting for Naval Supremacy on Lake Ontario, 7–10 August 1813

    To upgrade the fighting ability of the Provincial Marine, the Royal Navy sent one of its best young commodores, along with 465 officers and ratings, to operate the ships of the Lake Ontario squadron. This detachment of Royal Navy personnel, including four commanders, was made up of veterans with a wealth of sea experience. Commodore Sir James Lucas Yeo was described as a zealous, enterprising officer whose daring was unequalled in the annals of the Royal Navy; hence his rapid rise to flag rank and his knighthood at the age of thirty-one. The purpose of this article is to illustrate that the way in which Chauncey and Yeo conducted their operations on Lake Ontario was very much in keeping with their backgrounds and experience. It was evident from their first encounter that Yeo, the veteran, was the confident aggressor, while Chauncey, the administrator, was wary of the reputation of his knighted opponent and unsure of his own squadron's capabilities.

    Senior Recital, Robert Williamson III, trumpet

    Senior Recital
    Robert Williamson III, trumpet
    Magdalena Adamek, piano
    Monday, December 9, 2019 at 6pm
    Sonia Vlahcevic Concert Hall
    Singleton Center for the Performing Arts
    922 Park Avenue
    Richmond, Virginia
    The presentation of this senior recital will fulfill in part the requirements for the Bachelor of Music degree in Performance. Robert Williamson studies trumpet with Rex Richardson and Brian Strawley.

    From Stochastic Mixability to Fast Rates

    Empirical risk minimization (ERM) is a fundamental learning rule for statistical learning problems where the data is generated according to some unknown distribution $\mathsf{P}$ and returns a hypothesis $f$ chosen from a fixed class $\mathcal{F}$ with small loss $\ell$. In the parametric setting, depending upon $(\ell, \mathcal{F}, \mathsf{P})$, ERM can have slow $(1/\sqrt{n})$ or fast $(1/n)$ rates of convergence of the excess risk as a function of the sample size $n$. There exist several results that give sufficient conditions for fast rates in terms of joint properties of $\ell$, $\mathcal{F}$, and $\mathsf{P}$, such as the margin condition and the Bernstein condition. In the non-statistical prediction with expert advice setting, there is an analogous slow and fast rate phenomenon, and it is entirely characterized in terms of the mixability of the loss $\ell$ (there being no role there for $\mathcal{F}$ or $\mathsf{P}$). The notion of stochastic mixability builds a bridge between these two models of learning, reducing to classical mixability in a special case. The present paper presents a direct proof of fast rates for ERM in terms of stochastic mixability of $(\ell, \mathcal{F}, \mathsf{P})$, and in so doing provides new insight into the fast-rates phenomenon. The proof exploits an old result of Kemperman on the solution to the general moment problem. We also show a partial converse that suggests a characterization of fast rates for ERM in terms of stochastic mixability is possible.
    Comment: 21 pages, accepted to NIPS 201
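    For orientation, here is a minimal compilable LaTeX sketch of the objects this abstract refers to. The ERM rule and the slow/fast rate statements follow the abstract directly; the exponential-moment form of the stochastic mixability condition is the standard one from this literature and is stated here as an assumed form, not a formula quoted from the paper itself.

    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \DeclareMathOperator*{\argmin}{arg\,min}
    \begin{document}
    % ERM: pick the hypothesis minimizing empirical risk over the class F.
    \[
      \hat{f}_n = \argmin_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \ell(f, Z_i),
      \qquad Z_1, \dots, Z_n \stackrel{\text{i.i.d.}}{\sim} \mathsf{P}.
    \]
    % Excess risk: the gap to the best hypothesis in the class. "Slow" vs
    % "fast" refers to its decay, O(1/sqrt(n)) vs O(1/n), in the sample size n.
    \[
      \mathbb{E}\,\ell(\hat{f}_n, Z) - \inf_{f \in \mathcal{F}} \mathbb{E}\,\ell(f, Z)
      = O\!\bigl(1/\sqrt{n}\bigr) \ \text{(slow)}
      \quad \text{or} \quad
      O\!\bigl(1/n\bigr) \ \text{(fast)}.
    \]
    % Stochastic mixability (standard form, assumed here): for some eta > 0,
    % with f* the risk minimizer over F, the loss differences relative to f*
    % satisfy an exponential-moment inequality under P, jointly coupling
    % (l, F, P) rather than constraining the loss l alone.
    \[
      \mathbb{E}_{Z \sim \mathsf{P}}
        \exp\!\bigl( \eta \, ( \ell(f^{*}, Z) - \ell(f, Z) ) \bigr) \le 1
      \qquad \text{for all } f \in \mathcal{F}.
    \]
    \end{document}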

    Le Cam meets LeCun: Deficiency and Generic Feature Learning

    "Deep Learning" methods attempt to learn generic features in an unsupervised fashion from a large unlabelled data set. These generic features should perform as well as the best hand crafted features for any learning problem that makes use of this data. We provide a definition of generic features, characterize when it is possible to learn them and provide methods closely related to the autoencoder and deep belief network of deep learning. In order to do so we use the notion of deficiency and illustrate its value in studying certain general learning problems.Comment: 25 pages, 2 figure