
    Virtual Laboratories in Cloud Infrastructure of Educational Institutions

    Modern educational institutions make wide use of virtual laboratories and cloud technologies. In practice, they must deal with security, processing speed and other tasks. The paper describes the experience of constructing an experimental stand for cloud computing and network management. Models and control principles are set forth herein.
    Comment: 3 pages. Published in: 2014 2nd International Conference on Emission Electronics (ICEE), Saint-Petersburg, Russia.

    Chemical applications of escience to interfacial spectroscopy

    This report is a summary of work carried out by the author between October 2003 and September 2004, in the first year of his PhD studies.

    $=€=Bitcoin?

    Bitcoin (and other virtual currencies) have the potential to revolutionize the way that payments are processed, but only if they become ubiquitous. This Article argues that if virtual currencies are used at that scale, it would pose threats to the stability of the financial system—threats that have been largely unexplored to date. Such threats will arise because the ability of a virtual currency to function as money is very fragile—Bitcoin can remain money only for so long as people have confidence that bitcoins will be readily accepted by others as a means of payment. Unlike the U.S. dollar, which is backed by both a national government and a central bank, and the euro, which is at least backed by a central bank, there is no institution that can shore up confidence in Bitcoin (or any other virtual currency) in the event of a panic. This Article explores some regulatory measures that could help address the systemic risks posed by virtual currencies, but argues that the best way to contain those risks is for regulated institutions to out-compete virtual currencies by offering better payment services, thus consigning virtual currencies to a niche role in the economy. This Article therefore concludes by exploring how the distributed ledger technology pioneered by Bitcoin could be adapted to allow regulated entities to provide vastly more efficient payment services for sovereign currency-denominated transactions, while at the same time seeking to avoid concentrating the provision of those payment services within “too big to fail” banks.
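    The abstract's closing point turns on the distributed ledger idea: a chain of blocks in which each block commits to its predecessor's hash, so past entries cannot be altered unnoticed. The sketch below is a toy illustration of that mechanism only; the function names and block fields are assumptions made for this example, not Bitcoin's actual data structures or the Article's proposal.

```python
# Minimal hash-chained ledger sketch (illustrative assumptions throughout).
import hashlib
import json
import time


def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def append_block(chain: list, transactions: list) -> list:
    """Append a new block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({
        "index": len(chain),
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev,
    })
    return chain


def verify(chain: list) -> bool:
    """Check that every block still commits to its predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )


if __name__ == "__main__":
    ledger = append_block([], [{"from": "alice", "to": "bob", "amount": 5}])
    ledger = append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
    print(verify(ledger))  # True until any earlier block is altered
```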

    Cumulative reports and publications thru 31 December 1982

    Institute for Computer Applications in Science and Engineering (ICASE) reports are documented.

    The H.E.S.S. central data acquisition system

    The High Energy Stereoscopic System (H.E.S.S.) is a system of Imaging Atmospheric Cherenkov Telescopes (IACTs) located in the Khomas Highland in Namibia. It measures cosmic gamma rays of very high energies (VHE; >100 GeV) using the Earth's atmosphere as a calorimeter. The H.E.S.S. Array entered Phase II in September 2012 with the inauguration of a fifth telescope that is larger and more complex than the other four. This paper gives an overview of the current H.E.S.S. central data acquisition (DAQ) system, with particular emphasis on the upgrades made to integrate the fifth telescope into the array. First, the various requirements for the central DAQ are discussed; then the general design principles employed to fulfil these requirements are described. Finally, the performance, stability and reliability of the H.E.S.S. central DAQ are presented. One of the major accomplishments is that less than 0.8% of observation time has been lost due to central DAQ problems since 2009.
    Comment: 17 pages, 8 figures, published in Astroparticle Physics.
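    As a rough illustration of the kind of task a central DAQ performs, the sketch below groups per-telescope triggers into array-level events using a time coincidence window. The window length, data layout and function names are assumptions made for this example and are not taken from the H.E.S.S. software.

```python
# Toy stereo event builder: merge per-telescope triggers into array events.
COINCIDENCE_WINDOW_NS = 80  # assumed coincidence window, not the real value


def build_events(triggers):
    """Group (telescope_id, timestamp_ns) triggers into stereo events.

    Triggers are sorted by time; a new array event is opened whenever the
    gap to the previous trigger exceeds the coincidence window.
    """
    events, current = [], []
    for tel_id, t_ns in sorted(triggers, key=lambda x: x[1]):
        if current and t_ns - current[-1][1] > COINCIDENCE_WINDOW_NS:
            events.append(current)
            current = []
        current.append((tel_id, t_ns))
    if current:
        events.append(current)
    # Keep only events seen by at least two telescopes (stereo requirement).
    return [e for e in events if len({tel for tel, _ in e}) >= 2]


if __name__ == "__main__":
    demo = [(1, 100), (2, 130), (5, 150), (3, 5000), (4, 5040), (1, 9000)]
    for event in build_events(demo):
        print(event)  # two stereo events; the lone trigger is dropped
```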

    Plasma Edge Kinetic-MHD Modeling in Tokamaks Using Kepler Workflow for Code Coupling, Data Management and Visualization

    A new predictive computer simulation tool targeting the development of the H-mode pedestal at the plasma edge in tokamaks and the triggering and dynamics of edge localized modes (ELMs) is presented in this report. This tool brings together, in a coordinated and effective manner, several first-principles physics simulation codes, stability analysis packages, and data processing and visualization tools. A Kepler workflow is used in order to carry out an edge plasma simulation that loosely couples the kinetic code, XGC0, with an ideal MHD linear stability analysis code, ELITE, and an extended MHD initial value code such as M3D or NIMROD. XGC0 includes the neoclassical ion-electron-neutral dynamics needed to simulate pedestal growth near the separatrix. The Kepler workflow processes the XGC0 simulation results into simple images that can be selected and displayed via the Dashboard, a monitoring tool implemented in AJAX allowing the scientist to track computational resources, examine running and archived jobs, and view key physics data, all within a standard Web browser. The XGC0 simulation is monitored for the conditions needed to trigger an ELM crash by periodically assessing the edge plasma pressure and current density profiles using the ELITE code. If an ELM crash is triggered, the Kepler workflow launches the M3D code on a moderate-size Opteron cluster to simulate the nonlinear ELM crash and to compute the relaxation of plasma profiles after the crash. This process is monitored through periodic outputs of plasma fluid quantities that are automatically visualized with AVS/Express and may be displayed on the Dashboard. Finally, the Kepler workflow archives all data outputs and processed images using HPSS, as well as provenance information about the software and hardware used to create the simulation. The complete process of preparing, executing and monitoring a coupled-code simulation of the edge pressure pedestal buildup and the ELM cycle using the Kepler scientific workflow system is described in this paper.
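    The coupling logic described above (kinetic pedestal buildup, periodic linear stability checks, hand-off to a nonlinear crash simulation) can be summarized as a simple control loop. The sketch below uses toy stand-ins for XGC0, ELITE and M3D; only the control flow mirrors the workflow described in the abstract, and all names and numbers are illustrative assumptions.

```python
# Toy coupled ELM-cycle loop: buildup -> stability check -> crash -> repeat.
import random


def xgc0_step(pressure_gradient: float) -> float:
    """Toy stand-in for XGC0: the pedestal pressure gradient grows each step."""
    return pressure_gradient + random.uniform(0.01, 0.05)


def elite_unstable(pressure_gradient: float, threshold: float = 1.0) -> bool:
    """Toy stand-in for ELITE's peeling-ballooning stability check."""
    return pressure_gradient > threshold


def m3d_crash(pressure_gradient: float) -> float:
    """Toy stand-in for M3D: the nonlinear ELM crash relaxes the profile."""
    return 0.4 * pressure_gradient


def run_elm_cycle(n_steps: int = 200, check_every: int = 10) -> None:
    grad = 0.0
    for step in range(n_steps):
        grad = xgc0_step(grad)                       # pedestal buildup
        if step % check_every == 0 and elite_unstable(grad):
            print(f"step {step}: ELM triggered at gradient {grad:.2f}")
            grad = m3d_crash(grad)                   # nonlinear crash and relaxation


if __name__ == "__main__":
    run_elm_cycle()
```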

    An overview of the VRS virtual platform

    This paper provides an overview of the development of the virtual platform within the European Commission funded VRShips-ROPAX (VRS) project. This project is a major collaboration of approximately 40 industrial, regulatory, consultancy and academic partners with the objective of producing two novel platforms. A physical platform will be designed and produced representing a scale model of a novel ROPAX vessel with the following criteria: 2000 passengers, 400 cabins, a 2000 nautical mile range, and a service speed of 38 knots. The aim of the virtual platform is to demonstrate that vessels may be designed to meet these criteria, which was not previously possible using individual tools and conventional design approaches. To achieve this objective requires the integration of design and simulation tools representing concept, embodiment, detail, production, and operation life-phases into the virtual platform, to enable distributed design activity to be undertaken. The main objectives for the development of the virtual platform are described, followed by the discussion of the techniques chosen to address the objectives, and finally a description of a use-case for the platform. Whilst the focus of the VRS virtual platform was to facilitate the design of ROPAX vessels, the components within the platform are entirely generic and may be applied to the distributed design of any type of vessel, or other complex made-to-order products.
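    A platform of this kind typically rests on a common interface that life-phase tools implement against a shared product model, so the platform can chain tools regardless of vessel type. The sketch below illustrates that generic pattern under stated assumptions; the class names, registry mechanism and sizing rule are hypothetical and are not the VRS platform's API.

```python
# Toy life-phase tool registry over a shared design model.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class DesignModel:
    """Shared product model passed between life-phase tools."""
    attributes: Dict[str, float] = field(default_factory=dict)


# Each registered tool is a function that reads and updates the shared model.
ToolFn = Callable[[DesignModel], DesignModel]
registry: Dict[str, ToolFn] = {}


def register(phase: str):
    def wrap(fn: ToolFn) -> ToolFn:
        registry[phase] = fn
        return fn
    return wrap


@register("concept")
def concept_design(model: DesignModel) -> DesignModel:
    model.attributes["service_speed_kn"] = 38.0   # requirement from the brief
    model.attributes["range_nm"] = 2000.0
    return model


@register("embodiment")
def embodiment_design(model: DesignModel) -> DesignModel:
    # Toy sizing rule (illustrative only): power grows with speed cubed.
    speed = model.attributes["service_speed_kn"]
    model.attributes["installed_power_mw"] = 0.002 * speed ** 3
    return model


if __name__ == "__main__":
    model = DesignModel()
    for phase in ("concept", "embodiment"):       # platform chains the tools
        model = registry[phase](model)
    print(model.attributes)
```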

    Cumulative reports and publications through 31 December 1983

    All reports for the calendar years 1975 through December 1983 are listed by author. Since ICASE reports are intended to be preprints of articles for journals and conference proceedings, the published reference is included when available. Thirteen older journal and conference proceedings references are included as well as five additional reports by ICASE personnel. Major categories of research covered include: (1) numerical methods, with particular emphasis on the development and analysis of basic algorithms; (2) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, structural analysis, and chemistry; and (3) computer systems and software, especially vector and parallel computers, microcomputers, and data management.