42 research outputs found

    Cogent: uniqueness types and certifying compilation

    This paper presents a framework aimed at significantly reducing the cost of proving functional correctness for low-level operating systems components. The framework is designed around a new functional programming language, Cogent. A central aspect of the language is its uniqueness type system, which eliminates the need for a trusted runtime or garbage collector while still guaranteeing memory safety, a crucial property for safety and security. Moreover, it allows us to assign two semantics to the language: The first semantics is imperative, suitable for efficient C code generation, and the second is purely functional, providing a user-friendly interface for equational reasoning and verification of higher-level correctness properties. The refinement theorem connecting the two semantics allows the compiler to produce a proof via translation validation certifying the correctness of the generated C code with respect to the semantics of the Cogent source program. We have demonstrated the effectiveness of our framework for implementation and for verification through two file system implementations.
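    The uniqueness type system described above is what lets the same program carry both an imperative and a purely functional reading. As a rough illustration of the idea (a sketch, not Cogent itself), Rust's move semantics enforce a similar single-owner discipline: the hypothetical `Buffer` type and `write_block` function below consume their argument and return a new handle, so an in-place update is observationally equivalent to constructing a fresh value, and no garbage collector is required.

```rust
// A minimal sketch, not Cogent: Rust's affine ownership approximates a
// uniqueness type system. `Buffer` is a hypothetical stand-in for a
// file-system buffer; every operation consumes the unique handle and
// returns a new one, so destructive in-place updates stay unobservable.

#[derive(Debug)]
struct Buffer {
    data: Vec<u8>,
}

// Consumes the unique buffer and returns it; the caller's old binding is
// invalidated, so mutating `data` in place cannot be distinguished from
// allocating a fresh buffer (the purely functional reading).
fn write_block(mut buf: Buffer, offset: usize, byte: u8) -> Buffer {
    if offset < buf.data.len() {
        buf.data[offset] = byte;
    }
    buf
}

fn main() {
    let buf = Buffer { data: vec![0u8; 4] };
    let buf2 = write_block(buf, 0, 0xff); // `buf` is moved (consumed) here
    // println!("{:?}", buf.data);        // rejected: use of moved value `buf`
    println!("{:?}", buf2.data);          // prints [255, 0, 0, 0]
}
```

    The analogy covers only the typing discipline; Cogent's surface language, its C code generator, and the refinement proofs the abstract describes are not modelled here.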

    The 2nd Conference of PhD Students in Computer Science

    Saturation-based decision procedures for extensions of the guarded fragment

    We apply the framework of Bachmair and Ganzinger for saturation-based theorem proving to derive a range of decision procedures for logical formalisms, starting with a simple terminological language EL, which allows for conjunction and existential restrictions only, and ending with extensions of the guarded fragment with equality, constants, functionality, number restrictions and compositional axioms of form S ◦ T ⊆ H. Our procedures are derived in a uniform way using standard saturation-based calculi enhanced with simplification rules based on the general notion of redundancy. We argue that such decision procedures can be applied for reasoning in expressive description logics, where they have certain advantages over traditionally used tableau procedures, such as optimal worst-case complexity and direct correctness proofs.
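    As a hedged illustration of what saturation looks like for the simplest logic mentioned above, the Rust sketch below runs the well-known polynomial-time completion (fixpoint) algorithm for normalized EL axioms: inference rules are applied exhaustively until no new subsumption can be derived. It is only an analogy for the saturation idea, not the resolution-style calculi derived in the work itself, and the axioms in `main` are invented for the example.

```rust
use std::collections::HashSet;

// Normalized EL axiom forms; concept and role names are plain strings.
#[allow(dead_code)]
enum Axiom {
    Sub(&'static str, &'static str),                 // A ⊑ B
    Conj(&'static str, &'static str, &'static str),  // A1 ⊓ A2 ⊑ B
    SubEx(&'static str, &'static str, &'static str), // A ⊑ ∃r.B
    ExSub(&'static str, &'static str, &'static str), // ∃r.A ⊑ B
}

// Saturation loop: apply the completion rules until a fixpoint is reached.
// `subs` holds derived A ⊑ B facts, `edges` holds derived A ⊑ ∃r.B facts.
fn saturate(concepts: &[&str], axioms: &[Axiom]) -> HashSet<(String, String)> {
    let mut subs: HashSet<(String, String)> = concepts
        .iter().map(|c| (c.to_string(), c.to_string())).collect(); // A ⊑ A
    let mut edges: HashSet<(String, String, String)> = HashSet::new();
    let mut changed = true;
    while changed {
        changed = false;
        let subs_snap: Vec<_> = subs.iter().cloned().collect();
        let edge_snap: Vec<_> = edges.iter().cloned().collect();
        for ax in axioms {
            match *ax {
                // A ⊑ B and axiom B ⊑ C  ==>  A ⊑ C
                Axiom::Sub(b, c) => {
                    for (a, x) in &subs_snap {
                        if x == b {
                            changed |= subs.insert((a.clone(), c.to_string()));
                        }
                    }
                }
                // A ⊑ B1, A ⊑ B2 and axiom B1 ⊓ B2 ⊑ C  ==>  A ⊑ C
                Axiom::Conj(b1, b2, c) => {
                    for (a, x) in &subs_snap {
                        if x == b1 && subs.contains(&(a.clone(), b2.to_string())) {
                            changed |= subs.insert((a.clone(), c.to_string()));
                        }
                    }
                }
                // A ⊑ B and axiom B ⊑ ∃r.C  ==>  A ⊑ ∃r.C
                Axiom::SubEx(b, r, c) => {
                    for (a, x) in &subs_snap {
                        if x == b {
                            changed |= edges.insert((a.clone(), r.to_string(), c.to_string()));
                        }
                    }
                }
                // A ⊑ ∃r.B, B ⊑ C and axiom ∃r.C ⊑ D  ==>  A ⊑ D
                Axiom::ExSub(r, c, d) => {
                    for (a, s, b) in &edge_snap {
                        if s == r && subs.contains(&(b.clone(), c.to_string())) {
                            changed |= subs.insert((a.clone(), d.to_string()));
                        }
                    }
                }
            }
        }
    }
    subs
}

fn main() {
    // Invented toy TBox: Endocarditis ⊑ Inflammation,
    // Inflammation ⊑ ∃located_in.Tissue, ∃located_in.Tissue ⊑ Disorder.
    let axioms = [
        Axiom::Sub("Endocarditis", "Inflammation"),
        Axiom::SubEx("Inflammation", "located_in", "Tissue"),
        Axiom::ExSub("located_in", "Tissue", "Disorder"),
    ];
    let names = ["Endocarditis", "Inflammation", "Tissue", "Disorder"];
    let subs = saturate(&names, &axioms);
    // Saturation derives the implied fact Endocarditis ⊑ Disorder.
    println!("{}", subs.contains(&("Endocarditis".to_string(), "Disorder".to_string())));
}
```

    Termination of this sketch is immediate because only finitely many facts over the given names can ever be derived; richer saturation calculi rely on redundancy elimination, as the abstract notes, to keep the search bounded.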

    Relations between Modern Mathematics and Poetry: CzesƂaw MiƂosz; Zbigniew Herbert; Ion Barbu/Dan Barbilian

    This doctoral thesis is an examination of the relationship between poetry and mathematics, centred on three twentieth-century case studies: the Polish poets CzesƂaw MiƂosz (1911-2004) and Zbigniew Herbert (1924-1998), and the Romanian mathematician and poet Dan Barbilian/Ion Barbu (1895-1961). Part One of the thesis is a review of current scholarly literature, divided into two chapters. The first chapter looks at the nature of mathematics, outlining its historical developments and describing some major mathematical concepts as they pertain to the later case studies. This entails a focus on non-Euclidean geometries, modern algebra, and the foundations of mathematics in Europe; the nature of mathematical truth and language; and the modern historical evolution of mathematical schools in Poland and Romania. The second chapter examines some existing attempts to bring together mathematics and poetry, drawing on literature and science as an academic field; the role of the imagination and invention in the languages of both poetics and mathematics; the interest in mathematics among certain Symbolist poets, notably MallarmĂ©; and the experimental work of the French groups of mathematicians and mathematician-poets, Bourbaki and Oulipo. The role of metaphor is examined in particular. Part Two of the thesis is the case studies. The first presents the ethical and moral stance of CzesƂaw MiƂosz, investigating his attitudes towards classical and later relativistic science, in the light of the Nazi occupation and the Marxist regimes in Poland, and how these are reflected in his poetry. The study of Zbigniew Herbert is structured around a wide selection of his poetic oeuvre, and identifying his treatment of evolving and increasingly more complex mathematical concepts. The third case study, on Dan Barbilian, who published his poetry under the name Ion Barbu, begins with an examination of the mathematical school at Göttingen in the 1920s, tracing the influence of Gauss, Riemann, Klein, Hilbert and Noether in Barbilian’s own mathematical work, particularly in the areas of metric spaces and axiomatic geometry. In the discussion, the critical analysis of the mathematician and linguist Solomon Marcus is examined. This study finishes with a close reading of seven of Barbu’s poems. The relationship of mathematics and poetry has rarely been studied as a coherent academic field, and the relevant scholarship is often disconnected. A feature of this thesis is that it brings together a wide range of scholarly literature and discussion. Although primarily in English, a considerable amount of the academic literature collated here is in French, Romanian, Polish and some German. The poems themselves are presented in the original Polish and Romanian with both published and working translations appended in the footnotes. In the case of the two Polish poets, one a Nobel laureate and the other a multiple prize-winning figure highly regarded in Poland, this thesis is unusual in its concentration on mathematics as a feature of the poetry which is otherwise much-admired for its politically-engaged and lyrical qualities. In the case of the Romanian, Dan Barbilian, he is widely known in Romania as a mathematician, and most particularly as the published poet Ion Barbu, yet his work is little studied outside that country, and indeed much of it is not yet translated into English. 
This thesis suggests an array of both theoretical and specific starting points for examining the multi-stranded and intricate relationship between mathematics and poetry, pointing to a number of continuing avenues of further research.

    Discovering Causal Relations and Equations from Data

    Physics is a field of science that has traditionally used the scientific method to answer questions about why natural phenomena occur and to make testable models that explain the phenomena. Discovering equations, laws and principles that are invariant, robust and causal explanations of the world has been fundamental in physical sciences throughout the centuries. Discoveries emerge from observing the world and, when possible, performing interventional studies in the system under study. With the advent of big data and the use of data-driven methods, causal and equation discovery fields have grown and made progress in computer science, physics, statistics, philosophy, and many applied fields. All these domains are intertwined and can be used to discover causal relations, physical laws, and equations from observational data. This paper reviews the concepts, methods, and relevant works on causal and equation discovery in the broad field of Physics and outlines the most important challenges and promising future lines of research. We also provide a taxonomy for observational causal and equation discovery, point out connections, and showcase a complete set of case studies in Earth and climate sciences, fluid dynamics and mechanics, and the neurosciences. This review demonstrates that discovering fundamental laws and causal relations by observing natural phenomena is being revolutionised with the efficient exploitation of observational data, modern machine learning algorithms and the interaction with domain knowledge. Exciting times are ahead with many challenges and opportunities to improve our understanding of complex systems.
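    To make the phrase "equation discovery from observational data" concrete, here is a deliberately tiny, hedged sketch (not any specific method from the paper): given noisy samples of an unknown one-dimensional law, fit a single coefficient for each basis function in a small invented candidate library by closed-form least squares, c = Σ g(x)·y / Σ g(x)², and keep the candidate with the lowest residual. Practical symbolic-regression and sparse-regression systems are far more elaborate, but the selection-by-fit principle is the same.

```rust
// A toy sketch of equation discovery: pick, from a small candidate library,
// the basis function that best explains noisy observations of a hidden law.
// The library, the hidden law, and the noise term are all made up here.

fn fit_and_score(xs: &[f64], ys: &[f64], g: impl Fn(f64) -> f64) -> (f64, f64) {
    // Closed-form least-squares coefficient for the model y ≈ c·g(x).
    let num: f64 = xs.iter().zip(ys).map(|(&x, &y)| g(x) * y).sum();
    let den: f64 = xs.iter().map(|&x| g(x) * g(x)).sum();
    let c = if den > 0.0 { num / den } else { 0.0 };
    // Residual sum of squares for the fitted coefficient.
    let rss: f64 = xs.iter().zip(ys).map(|(&x, &y)| (y - c * g(x)).powi(2)).sum();
    (c, rss)
}

fn main() {
    // Synthetic observations of the hidden law y = 2.5·x² plus a small perturbation.
    let xs: Vec<f64> = (1..=20).map(|i| i as f64 * 0.25).collect();
    let ys: Vec<f64> = xs
        .iter()
        .enumerate()
        .map(|(i, &x)| 2.5 * x * x + if i % 2 == 0 { 0.01 } else { -0.01 })
        .collect();

    // Hypothetical candidate library of basis functions.
    let candidates: Vec<(&str, Box<dyn Fn(f64) -> f64>)> = vec![
        ("x", Box::new(|x: f64| x)),
        ("x^2", Box::new(|x: f64| x * x)),
        ("x^3", Box::new(|x: f64| x * x * x)),
        ("sin(x)", Box::new(|x: f64| x.sin())),
    ];

    // Keep the candidate with the smallest residual error.
    let mut best: Option<(&str, f64, f64)> = None;
    for (name, g) in &candidates {
        let (c, rss) = fit_and_score(&xs, &ys, g);
        if best.map_or(true, |(_, _, r)| rss < r) {
            best = Some((*name, c, rss));
        }
    }
    if let Some((name, c, rss)) = best {
        println!("recovered law: y ≈ {:.3} * {}   (residual {:.4})", c, name, rss);
    }
}
```

    Replacing the single-term fit with sparse multi-term regression over a richer library is the natural step toward the data-driven discovery methods such reviews cover.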

    Holistic Approach for Authoring Immersive and Smart Environments for the Integration in Engineering Education

    The fourth industrial revolution and rapid technological progress are challenging established educational structures and traditional teaching practices. In engineering education in particular, lifelong learning requires continuously improving one's knowledge and skills in order to remain competitive on the labour market. A paradigm shift in education and training towards new technologies such as virtual reality and artificial intelligence is needed. However, integrating these technologies into an educational programme is not as simple as investing in new devices or software. New educational programmes must be created, or existing ones redesigned from the ground up. These are complex and extensive processes involving decision-making, design, and development, and they come with considerable challenges that require overcoming many obstacles. This thesis presents a methodology that addresses the challenges of using virtual reality and artificial intelligence as key technologies in engineering education. The methodology aims to guide the main stakeholders in improving the learning process and enabling novel, efficient learning experiences. Since every educational programme is unique, the methodology follows a holistic approach to support the creation of tailored courses or trainings. To this end, it considers the interactions between different aspects, grouped into three levels: education, technology, and management. The methodology emphasises the influence of the technologies on instructional design and on management processes. It provides methods for decision-making based on a comprehensive pedagogical, technological, and economic analysis. In addition, it supports the didactic design process through a comprehensive categorisation of the advantages and disadvantages of immersive learning environments and shows which of their properties can improve the learning process. Particular emphasis is placed on the systematic design of immersive systems and on the efficient creation of immersive applications using methods from the field of artificial intelligence. Four use cases with different training programmes are presented to validate the methodology. Each educational programme has its own objectives, and together they cover the validation of all levels of the methodology. The methodology was iteratively refined and improved with each validation project. The results show that it is reliable and transferable to many scenarios as well as to most educational levels and domains. By applying the methods presented in this thesis, stakeholders can integrate immersive technologies effectively and efficiently into their teaching practice. Moreover, based on the proposed approaches, they can save effort, time, and cost in planning, developing, and maintaining immersive systems. The technology shifts the teacher's role towards that of a facilitator, and teachers gain the opportunity to support learners individually and to focus on their higher-order cognitive skills. As the main outcome, learners receive an appropriate, high-quality, and up-to-date education that makes them more qualified, more successful, and more satisfied.