
    A Prototype 1:6 Million Map

    Get PDF

    Stock Repurchase Agreements: Close Corporation Use of Designee Provision Permits Repurchase Despite Insufficient Earned Surplus

    Get PDF
    Statistical random number testing is a well-studied field focusing on pseudo-random number generators, that is to say algorithms that produce random-looking sequences of numbers. These generators tend to have certain kinds of flaws, which have been exposed through rigorous testing. Such testing has driven improvements, and today pseudo-random number generators are both very fast and produce seemingly random numbers. Recent advances in quantum physics have opened new doors: products called quantum random number generators, which are claimed to produce true randomness, have emerged. Naturally, scientists want to test such randomness, and they turn to the old tests developed for pseudo-random number generators to do so. The main question this thesis seeks to answer is whether publicly available tests of this kind are good enough to evaluate a quantum random number generator. We also compare sequences from such generators with those produced by state-of-the-art pseudo-random number generators, in an attempt to compare their quality. Another potential problem with quantum random number generators is the possibility of them breaking without the user knowing, which could have dire consequences. For example, if such a generator controlled the output of a slot machine, a malfunction could cause the machine to pay out double a player's intended winnings. We therefore examine the possibility of implementing live tests in quantum random number generators, and propose such tests. Our study covers six commonly available tools for random number testing, and we show that one of these stands out: it contains a series of tests that reject our quantum random number generator as not random enough, while passing a pseudo-random number generator. This implies that the quantum random number generator behaves differently from the pseudo-random ones, and that we need to think carefully about how we test, what we expect from a random sequence, and what we want to use it for.
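    As a concrete illustration of the kind of check such test suites contain, the sketch below implements the frequency (monobit) test from the NIST SP 800-22 suite. The function and variable names are our own, and the snippet is illustrative rather than a reference implementation; it simply checks whether the balance of ones and zeros in a bit sequence is plausible for a random source.

```python
# Minimal sketch of the frequency (monobit) test from the NIST
# SP 800-22 suite. Names are our own; this is illustrative, not a
# reference implementation.
import math
import random

def monobit_test(bits, alpha=0.01):
    """Return (p_value, passed) for a sequence of 0/1 integers."""
    n = len(bits)
    # Map bits to +1/-1 and sum: a truly random sequence keeps the
    # partial sum small relative to sqrt(n).
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    # Two-sided p-value via the complementary error function.
    p_value = math.erfc(s_obs / math.sqrt(2))
    return p_value, p_value >= alpha

# Example: 10 000 bits from Python's own pseudo-random generator.
bits = [random.getrandbits(1) for _ in range(10_000)]
print(monobit_test(bits))
```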

    Improvement to the International Bathymetric Chart of the Arctic Ocean (IBCAO): Updating the Data Base and the Grid Model

    Get PDF
    The project to develop the IBCAO grid model was initiated in 1997 with the objective of providing the Arctic research community with an improved portrayal of the seabed north of 64° N, in a form suitable for digital manipulation and visualization. The model was constructed from a compilation of all single-beam and multibeam echo soundings that were available for the polar region, complemented where appropriate by newly released contour information. The grid features a cell size of 2.5 x 2.5 km on a polar stereographic projection; it is constructed on the WGS 84 datum, with true scale at 75° N. Designated the Beta Version, a preliminary implementation of IBCAO was introduced to the geophysical community in December 1999, and released four months later as a digital grid that could be downloaded from a project website hosted by the National Geophysical Data Center in Boulder, CO. Since that release, the Beta Version has seen widespread use in Earth Science applications, with the website continuing to garner between 500 and 1000 visitors per week; this reportedly makes it one of the most heavily visited of all NGDC websites. IBCAO has since been updated with the development of Version 1.0, which incorporates new information and formats, along with an expanded range of bathymetric products that will be released for public use through the same project website. Improvements include corrections to errors that were identified off Svalbard, in Canada Basin, and in Barrow Strait, as well as contributions of significant new data sets that were collected by the Norwegian Petroleum Directorate and the Alfred Wegener Institute off Norway and Svalbard, in Fram Strait, and over the Lomonosov Ridge. In addition, the portrayal of the Alaskan landmass was enhanced with a new topographic model extracted from NGDC's GLOBE data set. New formats include downloadable Cartesian grids that can be imported directly into ArcInfo and Intergraph's Terrain Analyst module. A geographic grid has been produced as well, with a resolution of 1' x 1' for compatibility with the global grid of bathymetry that is now under construction by a working group operating under the auspices of the General Bathymetric Chart of the Oceans (GEBCO). New products include a suite of bathymetric contours derived from the grid at depths ranging from 20 metres to 5000 metres, and poster-sized PostScript maps showing isobaths printed over a shaded relief background. These latest developments reflect a commitment to maintain IBCAO as a 'live' product for the foreseeable future, with periodic upgrades to improve its quality and usefulness.
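    To make the grid geometry concrete, the sketch below (assuming the pyproj library) sets up a polar stereographic projection with the stated parameters, WGS 84 datum and true scale at 75° N, and maps a geographic coordinate to a 2.5 km cell index. The grid origin used here is a hypothetical placeholder, not the official IBCAO grid registration.

```python
# Sketch: locating a geographic point in a 2.5 x 2.5 km polar
# stereographic grid like IBCAO's. Assumes the pyproj package; the
# grid origin below is a hypothetical placeholder, not the official
# IBCAO grid registration.
from pyproj import Transformer

# Polar stereographic projection, true scale at 75° N, WGS 84 datum.
to_stereo = Transformer.from_crs(
    "EPSG:4326",
    "+proj=stere +lat_0=90 +lat_ts=75 +lon_0=0 +datum=WGS84 +units=m",
    always_xy=True,
)

CELL = 2500.0                          # 2.5 km cell size in metres
X0, Y0 = -2_902_500.0, -2_902_500.0    # placeholder lower-left corner

def grid_index(lon, lat):
    """Map (lon, lat) in degrees to (column, row) cell indices."""
    x, y = to_stereo.transform(lon, lat)
    return int((x - X0) // CELL), int((y - Y0) // CELL)

print(grid_index(0.0, 85.0))  # a point between the Pole and Svalbard
```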

    The International Bathymetric Chart of the Arctic Ocean (IBCAO): An Improved Morphological Framework for Oceanographic Investigations

    Get PDF
    The IBCAO initiative set out in late 1997 to assemble and merge all available bathymetric observations from northern regions, with the intent of constructing a reliable and up-to-date portrayal of the Arctic seabed in digital and printed form. In early 2000, a provisional grid and map were placed in circulation for public review and comment. Available for free downloading from a website hosted by the U.S. National Geophysical Data Center, these products won immediate acceptance from a broad spectrum of Arctic investigators who recognized the potential worth of the new information in a variety of applications ranging from straightforward map production to analysing the influence of underwater topography on ocean circulation. At the same time, error reports and new data sets were being forwarded to the creators of IBCAO, so that by the middle of 2002 a new and more definitive grid was ready to be placed into circulation. This was soon followed by the construction of a prototype shaded relief map that has been proposed as a successor to Sheet 5.17 of the General Bathymetric Chart of the Oceans (GEBCO).

    Dust extinction bias in the column density distribution of gamma-ray bursts: high column density, low redshift GRBs are more heavily obscured

    Full text link
    The afterglows of gamma-ray bursts (GRBs) have more soft X-ray absorption than expected from the foreground gas column in the Galaxy. While the redshift of the absorption cannot in general be constrained from current X-ray observations, it has been assumed that the absorption is due to metals in the host galaxy of the GRB. The large sample of X-ray afterglows and redshifts now available allows the construction of statistically meaningful distributions of the metal column densities. We construct such a sample and show, as found in previous studies, that the typical absorbing column density (N_HX) increases substantially with redshift, with few high column density objects found at low to moderate redshifts. We show, however, that when highly extinguished bursts are included in the sample, using redshifts from their host galaxies, high column density sources are also found at low to moderate redshift. We infer from individual objects in the sample, and from observations of blazars, that the increase in column density with redshift is unlikely to be related to metals in the intergalactic medium or intervening absorbers. Instead, we show that the apparent increase with redshift is primarily due to dust extinction bias: GRBs with high X-ray absorption column densities found at z ≲ 4 typically have very high dust extinction column densities, while those found at the highest redshifts do not. It is unclear how such a strongly evolving N_HX/A_V ratio would arise; based on current data, it remains a puzzle.
    Comment: 7 pages, 3 figures. Accepted for publication in ApJ, 1 August 201
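    The proposed selection effect can be illustrated with a toy Monte Carlo: if the dust-to-metals ratio declines with redshift, as the evolving N_HX/A_V ratio above suggests, and heavily extinguished bursts drop out of an optically selected redshift sample, then high columns appear preferentially at high redshift even when the intrinsic N_HX distribution does not evolve. All numbers in the sketch below are invented for illustration and are not fit to any GRB data.

```python
# Toy Monte Carlo of the dust extinction selection bias described
# above. Every distribution and threshold here is invented purely
# for illustration; nothing is fit to real GRB data.
import random

random.seed(1)

def simulate(n=100_000):
    kept = []
    for _ in range(n):
        z = random.uniform(0.1, 6.0)
        # Intrinsic column density: log-normal and *independent* of z.
        log_nhx = random.gauss(21.5, 0.7)   # log10 N_HX [cm^-2]
        # Toy dust model: A_V tracks the gas column, with a dust-to-
        # metals ratio that declines with redshift, echoing the
        # evolving N_HX/A_V ratio discussed in the abstract.
        a_v = 10 ** (log_nhx - 21.7) / (1 + z)
        if a_v > 0.5:
            # Heavily extinguished: no optical afterglow redshift,
            # so the burst drops out of an optically selected sample.
            continue
        kept.append((z, log_nhx))
    return kept

sample = simulate()
low = [lognh for z, lognh in sample if z < 2]
high = [lognh for z, lognh in sample if z >= 4]
print("mean log N_HX, z < 2 :", sum(low) / len(low))
print("mean log N_HX, z >= 4:", sum(high) / len(high))
# The selected sample shows higher mean columns at high redshift even
# though the intrinsic distribution does not evolve.
```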

    Results on Binary Linear Codes With Minimum Distance 8 and 10

    Full text link
    All codes with minimum distance 8 and codimension up to 14, and all codes with minimum distance 10 and codimension up to 18, are classified. Nonexistence of codes with parameters [33,18,8] and [33,14,10] is proved. This leads to 8 new exact bounds for binary linear codes. Two algorithms that work on the dual codes are primarily used: extension of dual codes with a proper coordinate, and a fast algorithm for finding a maximum clique in a graph, modified to find a maximum set of vectors with the right dependency structure.
    Comment: Submitted to the IEEE Transactions on Information Theory, May 2010. To be presented at the ACCT 201
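    For orientation, the minimum distance of a binary linear [n, k] code is the smallest Hamming weight among its nonzero codewords. The brute-force sketch below merely illustrates that definition by enumerating all 2^k - 1 nonzero codewords from a generator matrix; it is exponential in k, which is precisely why the paper relies on dual-code extension and maximum-clique search instead.

```python
# Brute-force minimum distance of a binary linear code given a
# generator matrix (rows are basis codewords, entries 0/1). This
# enumerates all 2^k - 1 nonzero codewords, so it only illustrates
# the definition, not the paper's classification techniques.
from itertools import product

def min_distance(generator):
    k = len(generator)
    n = len(generator[0])
    best = n
    for coeffs in product((0, 1), repeat=k):
        if not any(coeffs):
            continue  # skip the all-zero codeword
        # Codeword = GF(2) linear combination of generator rows.
        word = [0] * n
        for c, row in zip(coeffs, generator):
            if c:
                word = [w ^ r for w, r in zip(word, row)]
        best = min(best, sum(word))
    return best

# The [7,4] Hamming code has minimum distance 3.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(min_distance(G))  # -> 3
```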