38 research outputs found

    Under the Radar

    Western democracy is currently under attack by a resurgent Russia, which weaponizes new technologies and social media. How to respond? During the Cold War, the West fought off similar Soviet propaganda assaults with shortwave radio broadcasts. Founded in 1949, the US-funded Radio Free Europe/Radio Liberty broadcast uncensored information to the Soviet republics in their own languages. About one-third of Soviet urban adults listened to Western radio. The broadcasts played a key role in ending the Cold War and eroding the communist empire. R. Eugene Parta was for many years the director of Soviet Area Audience Research at RFE/RL, charged, among other duties, with gathering listener feedback. In this book he relates a remarkable Cold War operation to assess the impact of Western radio broadcasts on Soviet listeners by using a novel survey research approach. Given the impossibility of interviewing Soviet citizens in their own country, the operation pioneered audacious interview methods in order to fly under the radar and talk to Soviets traveling abroad, ultimately creating a database of 51,000 interviews that offered unparalleled insights into the media habits and mindset of the Soviet public. By recounting how this “impossible” mission was carried out, Under the Radar also shows how the lessons of the past can help counter the threat from a once and current adversary.

    Hands-on Science. Advancing Science. Improving Education

    The book aims to contribute to the advancement of Science, to the improvement of Science Education, and to the effective implementation of a sound, widespread scientific literacy at all levels of society. Its chapters bring together a variety of diverse and valuable works presented along these lines at the 15th International Conference on Hands-on Science, “Advancing Science. Improving Education”.

    Technological roadmap on AI planning and scheduling

    At the beginning of the new century, Information Technologies had become basic and indispensable constituents of the production and preparation processes for all kinds of goods and services, thereby largely influencing both the working and private lives of nearly every citizen. This development will continue and grow further with the continually increasing use of the Internet in production, business, science, education, and everyday societal and private undertakings. Recent years have shown, however, that a dramatic enhancement of software capabilities is required to continuously provide advanced and competitive products and services in all these fast-developing sectors. This includes the development of intelligent systems – systems that are more autonomous, flexible, and robust than today’s conventional software. Intelligent Planning and Scheduling is a key enabling technology for intelligent systems. It has been developed and matured over the last three decades and has successfully been employed for a variety of applications in commerce, industry, education, medicine, public transport, defense, and government. This document reviews the state of the art in key application and technical areas of Intelligent Planning and Scheduling. It identifies the most important research, development, and technology transfer efforts required in the coming 3 to 10 years and shows the way forward to meet these challenges in the short-, medium-, and long-term future. The roadmap has been developed under the regime of PLANET – the European Network of Excellence in AI Planning. This network, established by the European Commission in 1998, is the coordinating framework for research, development, and technology transfer in the field of Intelligent Planning and Scheduling in Europe. A large number of people have contributed to this document, including the members of PLANET, non-European international experts, and a number of independent expert peer reviewers. All of them are acknowledged in a separate section of this document. Intelligent Planning and Scheduling is a far-reaching technology. Accepting the challenges and progressing along the directions pointed out in this roadmap will enable a new generation of intelligent application systems in a wide variety of industrial, commercial, public, and private sectors.

    Towards a philosophical understanding of agile software methodologies: the case of Kuhn versus Popper

    This dissertation is original in using the contrasting ideas of two leading 20th-century philosophers of science, Karl Popper and Thomas Kuhn, to provide a philosophical understanding, firstly, of the shift from traditional software methodologies to the so-called Agile methodologies and, secondly, of the values, principles, and practices underlying the most prominent of the Agile methodologies, Extreme Programming (XP). The dissertation takes a revisionist approach, following Fuller, the founder of social epistemology, in reading Popper against Kuhn’s epistemological hegemony. The investigations relate to two main branches of philosophy: epistemology and ethics. The epistemological part of the dissertation compares both Kuhn’s and Popper’s alternative ideas of the development of scientific knowledge to the Agile methodologists’ ideas of the development of software, in order to assess the extent to which Agile software development resembles a scientific discipline. The investigations relating to ethics transfer concepts from social engineering, in particular Popper’s distinction between piecemeal and utopian social engineering, to software engineering, in order to assess both the democratic and authoritarian aspects of Agile software development and management. The use of Kuhn’s ideas of scientific revolutions and paradigm shifts by several leading figures of the Agile software methodologies (most notably Kent Beck, the leader of the most prominent Agile methodology, Extreme Programming) to predict a fundamental shift from traditional to Agile software methodologies is critically assessed. A systematic investigation into whether Kuhn’s theory as a whole can provide an adequate account of the day-to-day practice of Agile software development is also provided. As an alternative to Kuhn’s ideas, the critical rationalist philosophy of Karl Popper is investigated. On the one hand, the dissertation assesses whether the epistemological aspects of Popper’s philosophy, especially his notions of falsificationism, evolutionary epistemology, and three worlds metaphysics, provide a suitable framework for understanding the philosophical basis of everyday Agile software development. On the other hand, the aspects of Popper’s philosophy relating to ethics, which provide an ideal for scientific practice in an open society, are investigated in order to determine whether they coincide with the avowedly democratic values of Agile software methodologies. The investigations led to the following conclusions. Firstly, Kuhn’s ideas are useful in predicting the effects of the full-scale adoption of Agile methodologies, and they describe the way in which several leaders of the Agile methodologies promote their methodologies; they do not, however, account for the detailed methodological practice of Agile software development. Secondly, several aspects of Popper’s philosophy were found to be aligned with aspects of Agile software development.
In relation to epistemology, Popper’s principle of falsificationism provides a criterion for understanding the rational and scientific basis of several Agile principles and practices, his evolutionary epistemology resembles the iterative-incremental design approach of the Agile methodologies, and his three worlds metaphysical model provides an understanding both of the nature of software and of the approach to creating and sharing knowledge advocated by the Agile methodologists. In relation to ethics, Popper’s notion of an open society provides an understanding of the rational and ethical basis of the values underlying Agile software development and management, as well as of the piecemeal adoption of Agile software methodologies. Dissertation (MSc), University of Pretoria, 2009. Computer Science.

    K-State undergraduate catalog, 2004-2006

    Course catalogs were published under the following titles: Catalogue of the officers and students of the Kansas State Agricultural College, with a brief history of the institution, 1st (1863/4); Annual catalogue of the officers and students of the Kansas State Agricultural College for, 2nd (1864/5)-4th (1868/9); Catalogue of the officers and students of the Kansas State Agricultural College for the year, 1869-1871/2; Hand-book of the Kansas State Agricultural College, Manhattan, Kansas, 1873/4; Biennial catalogue of the Kansas State Agricultural College, Manhattan, Kansas, calendar years, 1875/77; Catalogue of the State Agricultural College of Kansas, 1877/80-1896/97; Annual catalogue of the officers, students and graduates of the Kansas State Agricultural College, Manhattan, 35th (1897/98)-46th (1908/09); Catalogue, 47th (1909/10)-67th (1929/30); Complete catalogue number, 68th (1930/31)-81st (1943/1944); Catalogue, 1945/1946-1948/1949?; General catalogue, 1949/1950?-1958/1960; General catalog, 1960/1962-1990/1992. Course catalogs then split into undergraduate and graduate catalogs respectively: K-State undergraduate catalog, 1992/1994- ; K-State graduate catalog, 1993/1995- . Citation: Kansas State University. (2004). K-State undergraduate catalog, 2004-2006. Manhattan, KS: Kansas State University. Call number: LD2668.A11711 K7

    K-State undergraduate catalog, 2002-2004

    Course catalogs were published under the following titles: Catalogue of the officers and students of the Kansas State Agricultural College, with a brief history of the institution, 1st (1863/4); Annual catalogue of the officers and students of the Kansas State Agricultural College for, 2nd (1864/5)-4th (1868/9); Catalogue of the officers and students of the Kansas State Agricultural College for the year, 1869-1871/2; Hand-book of the Kansas State Agricultural College, Manhattan, Kansas, 1873/4; Biennial catalogue of the Kansas State Agricultural College, Manhattan, Kansas, calendar years, 1875/77; Catalogue of the State Agricultural College of Kansas, 1877/80-1896/97; Annual catalogue of the officers, students and graduates of the Kansas State Agricultural College, Manhattan, 35th (1897/98)-46th (1908/09); Catalogue, 47th (1909/10)-67th (1929/30); Complete catalogue number, 68th (1930/31)-81st (1943/1944); Catalogue, 1945/1946-1948/1949?; General catalogue, 1949/1950?-1958/1960; General catalog, 1960/1962-1990/1992. Course catalogs then split into undergraduate and graduate catalogs respectively: K-State undergraduate catalog, 1992/1994- ; K-State graduate catalog, 1993/1995- . Citation: Kansas State University. (2002). K-State undergraduate catalog, 2002-2004. Manhattan, KS: Kansas State University. Call number: LD2668.A11711 K7

    Algorithmic and code optimizations of molecular dynamics simulations for process engineering

    The focus of this work lies on implementation improvements and, in particular, node-level performance optimization of the simulation software ls1-mardyn. Through data structure improvements, SIMD vectorization, and especially OpenMP parallelization, the world’s first simulation of 2×10^13 molecules at over 1 PFLOP/s was enabled. To allow for long-range interactions, the Fast Multipole Method was introduced into ls1-mardyn. The algorithm was optimized for sequential, shared-memory, and distributed-memory execution on up to 32,768 MPI processes.
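
    The OpenMP parallelization mentioned above can be illustrated with a minimal sketch: a brute-force Lennard-Jones energy loop parallelized with an OpenMP reduction. This is illustrative only and not code from ls1-mardyn, which uses linked-cell traversals and SIMD-vectorized kernels; the particle layout and function name here are invented for the sketch.

        // Illustrative only: OpenMP-parallel Lennard-Jones energy over all pairs.
        // A production MD code would add cutoff radii, neighbor/cell lists, and
        // periodic boundaries; this O(N^2) loop just shows the threading pattern.
        #include <cstddef>
        #include <vector>

        struct Particle { double x, y, z; };

        double lj_energy(const std::vector<Particle>& p, double epsilon, double sigma) {
            const double s2 = sigma * sigma;
            const double s6 = s2 * s2 * s2;
            double energy = 0.0;
            // Each thread accumulates a partial sum; OpenMP combines them at the end.
            #pragma omp parallel for reduction(+:energy) schedule(dynamic)
            for (long i = 0; i < static_cast<long>(p.size()); ++i) {
                for (std::size_t j = static_cast<std::size_t>(i) + 1; j < p.size(); ++j) {
                    const double dx = p[i].x - p[j].x;
                    const double dy = p[i].y - p[j].y;
                    const double dz = p[i].z - p[j].z;
                    const double r2 = dx * dx + dy * dy + dz * dz;
                    const double inv_r6 = s6 / (r2 * r2 * r2);
                    energy += 4.0 * epsilon * (inv_r6 * inv_r6 - inv_r6);
                }
            }
            return energy;
        }

    Compiled with an OpenMP-enabled compiler (e.g. g++ -fopenmp), the pair loop is distributed across all cores of a node, which is the node-level parallelization idea the abstract refers to.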

    Fast and accurate finite-element multigrid solvers for PDE simulations on GPU clusters

    The main contribution of this thesis is to demonstrate that graphics processors (GPUs), as representatives of emerging many-core architectures, are very well suited for the fast and accurate solution of large sparse linear systems of equations, using parallel multigrid methods on heterogeneous compute clusters. Such systems arise, for instance, in the discretisation of (elliptic) partial differential equations with finite elements. We report at least one order of magnitude speedup over highly tuned conventional CPU implementations, without sacrificing either accuracy or functionality. In more detail, this thesis makes the following contributions: Single precision floating point computations may be insufficient for the class of problems considered in this thesis. We revisit mixed precision iterative refinement techniques not only to increase the accuracy of computed results, but also to increase the efficiency of the solution process as a whole. Both on CPUs and on GPUs, we demonstrate a significant performance improvement without loss of accuracy compared to computing in high precision only. We present efficient parallelisation techniques for multigrid solvers on graphics hardware, in particular for numerically strong smoothers and preconditioners that are suitable for highly anisotropic grids and operators. For instance, an efficient formulation of the cyclic reduction algorithm to solve tridiagonal systems is developed. In view of hardware-oriented numerics, we carefully analyse the trade-off between numerical and runtime performance for inexact parallelisation techniques that decouple some of the inherently sequential characteristics of strong smoothing operators in favour of better parallelisation properties. For large, established software frameworks, re-implementation tailored to novel hardware platforms is often prohibitively expensive. We develop a 'minimally invasive' approach to integrate support for co-processor hardware like GPUs into FEAST, a finite element discretisation and solver toolbox. Our technique has the major advantage that applications built on top of the toolbox do not have to be changed at all to benefit from co-processor acceleration. The approach is evaluated for benchmark problems in linearised elasticity and stationary laminar flow computed on large-scale GPU-enhanced clusters; good speedup factors and near-ideal weak scalability are observed. The achievable speedup is analysed and a theoretical speedup model is presented. Finally, we provide a historical overview of scientific computing on graphics hardware since the early beginnings in 2001/2002, when GPGPU was an obscure research topic pursued by few, to its widespread adoption today. We discuss the evolution of the hardware and the programming model, and provide a comprehensive bibliography of publications related to PDE simulations on GPUs.
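
    To make the mixed precision iterative refinement idea concrete, the following is a minimal, self-contained sketch rather than the thesis's FEAST/GPU implementation: the defect and the accumulated solution are kept in double precision, while the approximate correction is computed by a stand-in single-precision inner solver (a few Jacobi sweeps here; in the thesis this role is played by a single-precision multigrid cycle on the GPU).

        // Minimal dense sketch of mixed precision iterative refinement.
        // Illustrative only: matrix layout, names, and the Jacobi inner solver
        // are invented for this example; the technique itself is the standard
        // refinement loop described in the abstract.
        #include <cmath>
        #include <vector>

        // Stand-in single-precision inner solver: plain Jacobi sweeps for A*c = r
        // (adequate for diagonally dominant test matrices).
        static std::vector<float> jacobi_sweeps(const std::vector<float>& A,
                                                const std::vector<float>& r,
                                                int n, int sweeps = 20) {
            std::vector<float> c(n, 0.0f), next(n, 0.0f);
            for (int s = 0; s < sweeps; ++s) {
                for (int i = 0; i < n; ++i) {
                    float sum = r[i];
                    for (int j = 0; j < n; ++j)
                        if (j != i) sum -= A[i * n + j] * c[j];
                    next[i] = sum / A[i * n + i];
                }
                c = next;
            }
            return c;
        }

        std::vector<double> mixed_precision_refine(const std::vector<double>& A,
                                                   const std::vector<double>& b,
                                                   int n, double tol = 1e-10,
                                                   int max_iter = 50) {
            std::vector<float> A32(A.begin(), A.end());  // low-precision copy of A
            std::vector<double> x(n, 0.0);
            for (int it = 0; it < max_iter; ++it) {
                // Defect d = b - A*x, computed in double precision.
                std::vector<double> d(n);
                double dnorm = 0.0;
                for (int i = 0; i < n; ++i) {
                    double s = b[i];
                    for (int j = 0; j < n; ++j) s -= A[i * n + j] * x[j];
                    d[i] = s;
                    dnorm += s * s;
                }
                if (std::sqrt(dnorm) <= tol) break;
                // Approximate correction in single precision, accumulated in double.
                std::vector<float> d32(d.begin(), d.end());
                std::vector<float> c = jacobi_sweeps(A32, d32, n);
                for (int i = 0; i < n; ++i) x[i] += static_cast<double>(c[i]);
            }
            return x;
        }

    For matrices on which the inner solver converges, the refinement loop reaches essentially double-precision accuracy while most of the arithmetic runs in the cheaper single-precision format, which is the effect the thesis exploits on GPU hardware.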