32 research outputs found

    Realistic correct systems implementation

    The present article and the forthcoming second part on Trusted Compiler Implementation address the correct construction and functioning of large computer based systems. In view of so many annoying and dangerous system misbehaviors we ask: Can informaticians rightly be held accountable for the incorrectness of systems, and will they be able to justify that systems work correctly as intended? We understand the word justification in this sense: the design of computer based systems, the formulation of mathematical models of information flows, and the construction of controlling software are to be such that the expected system effects, the absence of internal failures, and the robustness towards misuses and malicious external attacks are foreseeable as logical consequences of the models. For more than 40 years, theoretical informatics, software engineering and compiler construction have made important contributions to correct specification and also to correct high-level implementation of compilers. But the third step, the translation (bootstrapping) of high-level compiler programs to host machine code by existing host compilers, is just as important. So far there are no realistic recipes to close this correctness gap, although it has been known for some years that trust in executable code can dangerously be compromised by Trojan Horses in compiler executables, even if they pass the strongest tests. In this first article we give a comprehensive motivation and develop a mathematical theory in order to conscientiously prove the correctness of an initial, fully trusted compiler executable. The task is modularized into three steps. The third step, machine level compiler implementation verification, is the topic of the forthcoming second part on Trusted Compiler Implementation. It closes the implementation gap, not only for compilers but also for correct software-based systems in general. Thus, the two articles together give a rather confident answer to the question raised in the title.
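
    The correctness notion behind all three steps is that the compiled code must behave exactly as the source program prescribes. The following Java sketch is a hypothetical, minimal illustration of that property for a toy expression language and a toy stack machine (all names are invented, not the authors' formalism); the articles' concern is to establish such a property by proof, down to the executable, rather than by testing.

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Deque;
        import java.util.List;

        public class TinyCompilerCorrectness {

            // Source language: integer literals and addition.
            sealed interface Expr permits Lit, Add {}
            record Lit(int value) implements Expr {}
            record Add(Expr left, Expr right) implements Expr {}

            // Source semantics: the specification the compiled code must preserve.
            static int eval(Expr e) {
                return switch (e) {
                    case Lit l -> l.value();
                    case Add a -> eval(a.left()) + eval(a.right());
                };
            }

            // Target language: a stack machine with PUSH n and ADD.
            record Instr(String op, int arg) {}

            // Compiler: post-order code generation.
            static List<Instr> compile(Expr e) {
                List<Instr> code = new ArrayList<>();
                emit(e, code);
                return code;
            }

            static void emit(Expr e, List<Instr> code) {
                switch (e) {
                    case Lit l -> code.add(new Instr("PUSH", l.value()));
                    case Add a -> {
                        emit(a.left(), code);
                        emit(a.right(), code);
                        code.add(new Instr("ADD", 0));
                    }
                }
            }

            // Target semantics.
            static int run(List<Instr> code) {
                Deque<Integer> stack = new ArrayDeque<>();
                for (Instr i : code) {
                    if (i.op().equals("PUSH")) {
                        stack.push(i.arg());
                    } else {                                  // ADD
                        int right = stack.pop(), left = stack.pop();
                        stack.push(left + right);
                    }
                }
                return stack.pop();
            }

            public static void main(String[] args) {
                Expr e = new Add(new Lit(1), new Add(new Lit(2), new Lit(3)));
                // Correctness property: eval(e) == run(compile(e)) for every e.
                // Here it is merely checked on one example; the point of the
                // articles is that such properties must be proved, not tested.
                System.out.println(eval(e) + " == " + run(compile(e)));
            }
        }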

    Establishing static scope name binding and direct superclassing in the external language of the object-oriented Java with inner classes is a difficult and subtle task

    In [IP02] an axiomatic approach towards the semantics of FJI, Featherweight Java with Inner classes, essentially a subset of the Java programming language, is presented. In this way the authors contribute to an ambitious project: to give an axiomatic definition of the semantics of the programming language Java. A similar project with a partly axiomatic flavour, based on so-called Abstract State Machines (ASM), was initiated by E. Boerger and his colleagues [Boe01] in 2001, but did not yet include inner classes. At first glance the approach of reducing Java's semantics to that of FJI seems promising. We are going to show that several questions have been left unanswered. It turns out that the theory of how to elaborate or bind types, and thus to determine direct superclasses, as proposed in [IP02] has different models. Therefore the suggestion that the formal system of [IP02] defines the (exactly one) semantics of Java is not justified. We present our contribution to the project, showing that it must be attacked from another starting point. Quite frequently one encounters a set of inference rules and a claim that a semantics is defined by the rules. Such a claim should be proved. One should present arguments: 1° that the system has a model and hence is consistent, and 2° that all models are isomorphic. Sometimes such a proposed system contains a rule with a premise which reads: "there is no proof of something". One should notice that this is a metatheoretic property. It seems strange to accept a metatheorem as a premise, especially if such a system does not offer any other inference rules which would enable a proof of the premise. We are going to study the system in [IP02]. We shall show that it has many non-isomorphic models. We present a repair of Igarashi's and Pierce's calculus such that their ideas are preserved as closely as possible.
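
    The subtlety can be illustrated with a few lines of plain Java; this is a hypothetical example, not one taken from [IP02], and it only hints at why a rule system for elaborating direct superclasses must pin down an elaboration order if it is to have exactly one model.

        // Hypothetical example (not from [IP02]): which declaration does the
        // superclass name X in class C denote?
        class X { }                         // a top-level class named X

        class A {
            class X { }                     // a member (inner) class of A, also named X
        }

        class B extends A {
            // Candidates for the direct superclass of C: the top-level X that is
            // in lexical scope, or the member class X inherited from A.  Deciding
            // between them presupposes that B's own direct superclass has already
            // been elaborated; a rule system that leaves this order open can admit
            // several non-isomorphic "solutions" -- exactly the problem raised above.
            class C extends X { }
        }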

    Realistic correct systems implementation

    The first part of the article presented here and its forthcoming second part are devoted to methods for the correct construction and functioning of large computer based systems. The focus is on the problem of justification, understood in the sense of formulating a mathematical model of the information flows in a computer system and constructing controlling software such that correct behavior, the absence of internal errors, and robustness against external attacks follow as logical consequences of the model. The first part of the article presents a mathematical theory of the provably correct construction of compilers.

    Will Informatics be able to Justify the Construction of Large Computer Based Systems? Part II. Trusted compiler implementation

    The present article and the previous article on Realistic Correct Systems Implementation together address the correct construction and functioning of large computer based systems. In view of so many annoying and dangerous system misbehaviors we ask: Can informaticians rightly be held accountable for the incorrectness of systems, and will they be able to justify that systems work correctly as intended? We understand the word justification in this sense: the design of computer based systems, the formulation of mathematical models of information flows, and the construction of controlling software are to be such that the expected system effects, the absence of internal failures, and the robustness towards misuses and malicious external attacks are foreseeable as logical consequences of the models. For more than 40 years, theoretical informatics, software engineering and compiler construction have made important contributions to correct specification and also to correct high-level implementation of compilers. But the third step, the translation (bootstrapping) of high-level compiler programs into host machine code by existing host compilers, is just as important. So far there are no realistic recipes to close this gap, although it has been known for many years that trust in executable code can dangerously be compromised by Trojan Horses in compiler executables, even if they pass the strongest tests. Our article shows how to close this low-level gap. We demonstrate the method of rigorous syntactic a-posteriori code inspection, which has been developed by the research group Verifix funded by the Deutsche Forschungsgemeinschaft (DFG). For many years theoretical informatics, in software engineering and compiler construction, has dealt with the correctness of specifications and of high-level compiler implementations. This second part of the article considers the problem of the correct and secure translation (bootstrapping) of programs from a high-level language into machine code. It shows how correctness problems for programs in low-level languages are solved and demonstrates the method of rigorous syntactic a-posteriori analysis developed by the Verifix research group at the University of Kiel (Germany).
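
    The method named here, rigorous syntactic a-posteriori code inspection, checks the code produced by each compiler run instead of trusting the compiler executable itself. The Java sketch below is only a loose, hypothetical illustration of that general principle in a toy setting (an expression language and a stack machine with invented names); Verifix's actual technique operates on real machine code and is considerably more involved.

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.List;

        // Loose illustration (hypothetical names, toy languages) of a-posteriori
        // result checking: instead of trusting the code generator, every emitted
        // code sequence is independently checked against the source before use.
        public class APosterioriCheck {

            sealed interface Expr permits Lit, Add {}
            record Lit(int value) implements Expr {}
            record Add(Expr left, Expr right) implements Expr {}

            record Instr(String op, int arg) {}    // PUSH n | ADD

            // The checker re-reads the emitted stack code as an expression and
            // compares it with the source.  It is much simpler than a compiler,
            // so it is easier to trust -- the point of checking results instead
            // of the generator.
            static boolean accepted(Expr source, List<Instr> code) {
                Deque<Expr> stack = new ArrayDeque<>();
                for (Instr i : code) {
                    switch (i.op()) {
                        case "PUSH" -> stack.push(new Lit(i.arg()));
                        case "ADD" -> {
                            if (stack.size() < 2) return false;    // stack underflow
                            Expr right = stack.pop(), left = stack.pop();
                            stack.push(new Add(left, right));
                        }
                        default -> { return false; }               // unknown opcode
                    }
                }
                // Exactly one result must remain, and it must equal the source
                // (records provide structural equality).
                return stack.size() == 1 && stack.pop().equals(source);
            }

            public static void main(String[] args) {
                Expr source = new Add(new Lit(2), new Lit(3));

                // Code as an honest compiler would emit it: accepted.
                List<Instr> good = List.of(new Instr("PUSH", 2), new Instr("PUSH", 3), new Instr("ADD", 0));
                // Code a compromised compiler might emit (wrong constant): rejected.
                List<Instr> bad = List.of(new Instr("PUSH", 2), new Instr("PUSH", 4), new Instr("ADD", 0));

                System.out.println(accepted(source, good));   // true
                System.out.println(accepted(source, bad));    // false
            }
        }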

    Long-Term Soil Structure Observatory for Monitoring Post-Compaction Evolution of Soil Structure

    The projected intensification of agriculture to meet food targets of a rapidly growing world population is likely to accentuate already acute problems of soil compaction and deteriorating soil structure in many regions of the world. The key role of soil structure for soil functions, the sensitivity of soil structure to agronomic management practices, and the lack of reliable observations and metrics for soil structure recovery rates after compaction motivated the establishment of a long-term Soil Structure Observatory (SSO) at the Agroscope research institute in Zürich, Switzerland. The primary objective of the SSO is to provide long-term observation data on soil structure evolution after disturbance by compaction, enabling quantification of compaction recovery rates and times. The SSO was designed to provide information on recovery of compacted soil under different post-compaction soil management regimes, including natural recovery of bare and vegetated soil as well as recovery with and without soil tillage. This study focused on the design of the SSO and the characterization of the pre- and post-compaction state of the field. We deployed a monitoring network for continuous observation of soil state variables related to hydrologic and biophysical functions (soil water content, matric potential, temperature, soil air O2 and CO2 concentrations, O2 diffusion rates, and redox states) as well as periodic sampling and in situ measurements of infiltration, mechanical impedance, soil porosity, gas and water transport properties, crop yields, earthworm populations, and plot-scale geophysical measurements. Besides enabling quantification of recovery rates of compacted soil, we expect that data provided by the SSO will help improve our general understanding of soil structure dynamics.

    On correct procedure parameter transmission in higher programming languages

    The paper starts with the observation that ALGOL 60 prescribes no specifications for formal procedure parameters, whereas ALGOL 68 demands complete specifications. As a consequence, no ALGOL 68 program accepted by the compiler has wrong parameter transmissions at run time, whereas ALGOL 60 programs may have them. The property of ALGOL 60 programs to have only correct parameter transmissions is obviously undecidable if all data, conditional statements, etc. have to be taken into consideration, and it is unfair to demand that the compiler figure out these programs by a finite process. Therefore, we investigate this question of decidability under a much fairer condition, namely to take into consideration no data and no conditions and to give all procedure calls occurring in the same block equal rights. Even this fairer problem turns out to be algorithmically unsolvable in general (Theorem 3), but it is solvable as soon as the programs do not have global formal procedure parameters (Theorem 1). Analogous answers can be given to the problems of formal equivalence of programs and of formal reachability, formal recursivity, and strong recursivity of procedures (Theorems 5-8). Procedures which are not strongly recursive are of great importance in compilation techniques, as is shown in Section X.
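
    The contrast between the two languages can be mimicked in a modern setting; the following Java sketch is a hedged, hypothetical illustration (not ALGOL, and not from the paper): a fully specified procedure parameter in the spirit of ALGOL 68 makes a wrong transmission a compile-time error, while an unspecified one in the spirit of ALGOL 60, simulated here by an untyped reflective call, lets the mismatch surface only at run time.

        import java.lang.reflect.Method;
        import java.util.function.IntUnaryOperator;

        // Hypothetical illustration (Java, not ALGOL) of the contrast drawn in the
        // abstract: complete specifications make wrong parameter transmissions a
        // compile-time error; missing specifications defer them to run time.
        public class ParameterTransmission {

            // ALGOL 68 flavour: the formal procedure parameter is fully specified,
            // so a call with the wrong kind of procedure or argument cannot compile.
            static int applySpecified(IntUnaryOperator p, int x) {
                return p.applyAsInt(x);
            }

            // ALGOL 60 flavour: all we know is "p is a procedure"; nothing about its
            // parameters is specified, simulated here by an untyped reflective call.
            static Object applyUnspecified(Method p, Object... args) throws Exception {
                return p.invoke(null, args);
            }

            static int square(int x) { return x * x; }
            static String greet(String name) { return "hello " + name; }

            public static void main(String[] args) throws Exception {
                // Correct and fully checked at compile time.
                System.out.println(applySpecified(ParameterTransmission::square, 6)); // 36

                // Compiles without complaint, but the transmission is wrong:
                // greet expects a String and receives an Integer.
                Method greetM = ParameterTransmission.class.getDeclaredMethod("greet", String.class);
                try {
                    System.out.println(applyUnspecified(greetM, 42));
                } catch (IllegalArgumentException wrongTransmission) {
                    // Detected only now, at run time -- the ALGOL 60 situation.
                    System.out.println("run-time failure: " + wrongTransmission);
                }
            }
        }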