15 research outputs found

    A brief history of software engineering

    Abstract: We present a personal perspective on the Art of Programming. We start with its state around 1960 and follow its development to the present day. The term "software engineering" became known after a conference in 1968, when the difficulties and pitfalls of designing complex systems were frankly discussed. A search for solutions began, concentrating on better methodologies and tools. The most prominent were programming languages reflecting the procedural, modular, and then object-oriented styles. Software engineering is intimately tied to their emergence and improvement. Also significant were efforts to systematize, even automate, program documentation and testing. Ultimately, analytic verification and correctness proofs were supposed to replace testing. More recently, the rapid growth of computing power made it possible to apply computing to ever more complicated tasks. This trend dramatically increased the demands on software engineers. Programs and systems became complex and almost impossible to fully understand. The sinking cost and the abundance of computing resources inevitably reduced the care for good design. Quality seemed extravagant, a loser in the race for profit. But we should be concerned about the resulting deterioration in quality. Our limitations are no longer given by slow hardware, but by our own intellectual capability. From experience we know that most programs could be significantly improved, made more reliable, economical, and comfortable to use.

The 1960s and the Origin of Software Engineering

It is unfortunate that people dealing with computers often have little interest in the history of their subject. As a result, many concepts and ideas are propagated and advertised as being new which existed decades ago, perhaps under a different terminology. I believe it worthwhile to occasionally spend some time considering the past and investigating how terms and concepts originated.
I consider the late 1950s an essential period of the era of computing. Large computers became available to research institutions and universities. Their presence was noticed mainly in engineering and the natural sciences, but in business too they soon became indispensable. The time when they were accessible only to a few insiders in laboratories, when they broke down every time one wanted to use them, belonged to the past. Their emergence from the closed laboratories of electrical engineers into the public domain meant that their use, in particular their programming, became an activity of many. A new profession was born; but the large computers themselves became hidden within closely guarded cellars. Programmers brought their programs to a counter, where a dispatcher would pick them up and queue them, and where the results could be fetched hours or days later. There was no interactivity between man and computer.

Programming was known to be a sophisticated task requiring devotion and scrutiny, and a love for obscure codes and tricks. To facilitate this coding, formal notations were created; we now call them programming languages. The primary idea was to replace sequences of special instruction code by mathematical formulas. The first widely known language, Fortran, was issued by IBM (Backus, 1957), soon followed by Algol (1958) and its official successor in 1960. As computers were then used for computing rather than storing and communicating, these languages catered mainly to numerical mathematics. In 1962 the language Cobol was issued by the US Department of Defense for business applications. But as computing capacity grew, so did the demands on programs and on programmers: tasks became more and more intricate. It was slowly recognized that programming was a difficult task, and that mastering complex problems was non-trivial, even when (or because) computers were so powerful.
Salvation was sought in "better" programming languages, in more "tools", even in automation. A better language should be useful in a wider area of application, be more like a "natural" language, offer more facilities. PL/1 was designed to unify the scientific and commercial worlds. It was advertised under the slogan "Everybody can program thanks to PL/1". Programming languages and their compilers became a principal cornerstone of computing science. But they fitted into neither mathematics nor electronics, the two traditional sectors where computers were used. A new discipline emerged, called Computer Science in America, Informatics in Europe.

In 1963 the first time-sharing system appeared (MIT, Stanford, McCarthy, DEC PDP-1). It brought back the missing interactivity. Computer manufacturers picked the idea up and announced time-sharing systems for their large mainframes (IBM 360/67, GE 645). It turned out that the transition from batch-processing systems to time-sharing systems, or in fact their merger, was vastly more difficult than anticipated. Systems were announced and could not be delivered on time. The problems were too complex. Research was to be conducted "on the job". The new, hot topics were multiprocessing and concurrent programming. The difficulties brought big companies to the brink of collapse. In 1968 a conference sponsored by NATO was dedicated to the topic (Garmisch-Partenkirchen, Germany) [1], although critical comments had occasionally been voiced earlier.

Programming as a Discipline

In the academic world it was mainly E. W. Dijkstra and C. A. R. Hoare who recognized the problems and offered new ideas. In 1965 Dijkstra wrote his famous Notes on Structured Programming, and in 1966 a seminal paper about harmoniously cooperating processes. Of course, all this did not change the situation, nor dispel all difficulties overnight. Industry could change neither policies nor tools rapidly.
Nevertheless, intensive training courses on structured programming were organized, notably by H. D. Mills at IBM. No less than the US Department of Defense realized that the problems were urgent and growing. It started a project that ultimately led to the programming language Ada, a highly structured language suitable for a wide variety of applications. Software development within the DoD would then be based exclusively on Ada.

UNIX and C

Yet another trend started to pervade the programming arena, notably academia, pointing in the opposite direction. It was spawned by the spread of the UNIX operating system, which stood in contrast to MIT's MULTICS and was used on the quickly emerging minicomputers. UNIX was a highly welcome relief from the large operating systems established on mainframe computers. In its tow, UNIX carried the language C [8], which had been explicitly designed to support the development of UNIX. Evidently, it was therefore at least attractive, if not mandatory, to use C for the development of applications running under UNIX; UNIX thus acted like a Trojan horse for C. From the point of view of software engineering, the rapid spread of C represented a great leap backward. It revealed that the community at large had hardly grasped the true meaning of the term "high-level language", which became an ill-understood buzzword. What, if anything, was to be "high-level"? As this issue lies at the core of software engineering, we need to elaborate.

Abstraction

Computer systems are machines of great complexity. This complexity can be mastered intellectually by one tool only: abstraction. A language represents an abstract computer whose objects and constructs lie closer (higher) to the problem to be represented than to the concrete machine. For example, in a high-level language we deal with numbers, indexed arrays, data types, and conditional and repetitive statements, rather than with bits and bytes, addressed words, jumps, and condition codes.
However, these abstractions are beneficial only if they are consistently and completely defined in terms of their own properties. If this is not so, if the abstractions can be understood only in terms of the facilities of an underlying computer, then the benefits are marginal, almost given away. If debugging a program (undoubtedly the most pervasive activity in software engineering) requires a "hexadecimal dump", then the language is hardly worth the trouble.
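Wirth's contrast between the abstract and the concrete machine can be sketched with a toy example (illustrative only, not from the paper): the same sum written once with high-level constructs, and once in terms of numbered memory cells, an explicit instruction pointer, and conditional jumps.

```python
# High-level view: named data, iteration over a collection, no addresses.
def total(prices):
    s = 0
    for p in prices:
        s += p
    return s

# "Machine-level" view of the same computation: memory cells, an
# accumulator, an instruction pointer, and jumps by renumbering it.
def total_machine(prices):
    mem = list(prices) + [0]       # last cell will hold the sum
    acc, i, ip = 0, 0, 0           # accumulator, index, instruction pointer
    while True:
        if ip == 0:                # ADD: acc += mem[i], advance index
            if i < len(prices):
                acc += mem[i]
                i += 1
            ip = 1
        elif ip == 1:              # TEST: jump back while cells remain
            ip = 0 if i < len(prices) else 2
        else:                      # STORE result and halt
            mem[len(prices)] = acc
            return mem[len(prices)]
```

Both functions compute the same result; only the first can be understood entirely in terms of its own properties, which is the point of the passage above.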

    The Need for Autonomy and Real-Time in Mobile Robotics: A Case Study of XO/2 and Pygmalion

    Starting from a user's point of view, the paper discusses the requirements of a development environment (operating system and programming language) for mechatronic systems, especially mobile robots. We argue that user requirements from research, education, ergonomics, and applications impose a certain functionality on the embedded operating system and programming language, and that a deadline-driven real-time operating system helps to fulfil these requirements. A case study of the operating system XO/2, its programming language Oberon-2, and the mobile robot Pygmalion is presented. XO/2 explicitly addresses issues such as scalability, safety, and abstraction, previously found to be relevant for many user scenarios.

    A STUDY ON VARIOUS PROGRAMMING LANGUAGES TO KEEP PACE WITH INNOVATION

    A programming language is a formal computer language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs to control the behaviour of a machine or to express algorithms. The earliest known programmable machine, preceding the invention of the digital computer, is the automatic flute player described in the 9th century by the brothers Musa in Baghdad, during the Islamic Golden Age. From the early 1800s, "programs" were used to direct the behaviour of machines such as Jacquard looms and player pianos. Thousands of different programming languages have been created, mainly in the computer field, and many more are still being created every year. Many programming languages require computation to be specified in an imperative form (i.e., as a sequence of operations to perform), while other languages use other forms of program specification, such as the declarative form (i.e., the desired result is specified, not how to achieve it). The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document (for example, the C programming language is specified by an ISO standard), while other languages (such as Perl) have a dominant implementation that is treated as a reference. Some languages have both, with the basic language defined by a standard and extensions taken from the dominant implementation being common. This paper attempts a study of various programming languages.
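The imperative/declarative distinction drawn above can be illustrated with a small Python sketch (an example of ours, not from the paper): the same result specified once as a sequence of operations and once as a description of the desired result.

```python
# Imperative form: state HOW to compute, step by step, mutating state.
def even_squares_imperative(ns):
    out = []
    for n in ns:
        if n % 2 == 0:
            out.append(n * n)
    return out

# Declarative form: state WHAT the result is; the steps are implicit.
def even_squares_declarative(ns):
    return [n * n for n in ns if n % 2 == 0]
```

For `[1, 2, 3, 4]` both return `[4, 16]`; only the first spells out the loop, the test, and the accumulation as explicit operations.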

    English for Students of Electromechanical Specialities

    The textbook is intended for students of the training programme 6.050702 Electromechanics. It contains lessons structured by thematic sections, a grammar commentary, concise English-Ukrainian and Ukrainian-English dictionaries, and appendices aimed at consolidating general English-language skills. It emphasizes the terminology used in science and engineering, in particular in electromechanics; completing the proposed tasks is intended to develop skills in translating between English and Ukrainian, comprehending written and spoken English, writing scientific, technical, and other texts in English in the course of professional activity, and communicating on professional and general topics.

    Hybrid, metric - topological, mobile robot navigation

    This thesis presents research on the problem of environmental modeling for both localization and map building for wheel-based, differential-driven, fully autonomous and self-contained mobile robots. The robots operate in an indoor office environment. They have a multi-sensor setup in which encoders are used for odometry and two exteroceptive sensors, a 360° laser scanner and a monocular vision system, are employed to perceive the surroundings. The whole approach is feature-based, meaning that instead of using the raw sensor data directly, features are first extracted. This allows the filtering of sensor noise and permits taking account of the dynamics in the environment. Furthermore, a properly chosen feature extraction better isolates informative patterns. When describing these features, care has to be taken to account for measurement uncertainty. The representation of the environment is crucial for mobile robot navigation. The model defines which perception capabilities are required and also which navigation techniques may be used. The presented environmental model is both metric and topological. By coherently combining the two paradigms, the advantages of both methods are added in order to counter the drawbacks of either single approach. The capabilities of the hybrid approach are exploited to model an indoor office environment where metric information is used locally in structures (rooms, offices) naturally defined by the environment itself, while the topology of the whole environment is captured separately, thus avoiding the need for global metric consistency. The hybrid model permits the use of two different and complementary approaches for localization, map building, and planning. This combination groups the characteristics needed to meet the following goals: precision, robustness, and practicability.
Metric approaches are, by definition, precise. The use of an Extended Kalman Filter (EKF) yields a precision bounded only by the quality of the sensor data. Topological approaches can easily handle large environments because they do not heavily rely on dead reckoning; global consistency can therefore be maintained for large environments. Consistent mapping of large environments is achieved by choosing a topological localization approach, based on a Partially Observable Markov Decision Process (POMDP), which is extended to simultaneous localization and map building. The theory can be mathematically proven under some assumptions. However, as stated throughout the work, in the end the robot itself has to show how good the theory is when used in the real world. To this end, extensive experimentation totalling more than 9 km was performed with fully autonomous, self-contained robots, and these experiments were carefully analyzed. With the metric approach, precision within error bounds of about 1 cm and less than 1 degree is confirmed by ground-truth measurements showing a mean error of less than 1 cm. The topological approach is successfully tested by simultaneous localization and map building, where the automatically created maps turned out to work better than the a priori maps. Relocation and closing the loop are also successfully tested.
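The EKF-based metric localization mentioned above can be sketched in its simplest, one-dimensional linear form (a minimal sketch of the Kalman predict/update cycle, not the thesis's actual multi-dimensional EKF; all variable names are ours):

```python
def kalman_1d(x, P, z, R, u=0.0, Q=0.0):
    """One predict/update cycle of a 1-D Kalman filter.

    x, P : prior state estimate and its variance
    z, R : measurement and measurement-noise variance
    u, Q : odometry increment and process-noise variance
    """
    # Predict: dead reckoning moves the estimate and grows the uncertainty.
    x_pred = x + u
    P_pred = P + Q
    # Update: fuse the measurement, weighted by the Kalman gain.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

Each exteroceptive measurement shrinks the variance `P`, which is why the resulting precision is bounded only by the sensor quality rather than by accumulated odometry drift.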

    Music in Evolution and Evolution in Music

    Music in Evolution and Evolution in Music by Steven Jan is a comprehensive account of the relationships between evolutionary theory and music. Examining the 'evolutionary algorithm' that drives biological and musical-cultural evolution, the book provides a distinctive commentary on how musicality and music can shed light on our understanding of Darwin's famous theory, and vice versa. Comprising seven chapters, with several musical examples, figures, and definitions of terms, this original and accessible book is a valuable resource for anyone interested in the relationships between music and evolutionary thought. Jan guides the reader through key evolutionary ideas and the development of human musicality, before exploring cultural evolution, evolutionary ideas in musical scholarship, animal vocalisations, music generated through technology, and the nature of consciousness as an evolutionary phenomenon. A unique examination of how evolutionary thought intersects with music, Music in Evolution and Evolution in Music is essential to our understanding of how and why music arose in our species and why it is such a significant presence in our lives.

    Oberon-2 as Successor of Modula-2 in Simulation

    No full text

    A Language-Independent Static Checking System for Coding Conventions

    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy.
Despite decades of research aiming to ameliorate the difficulties of creating software, programming still remains an error-prone task. Much work in Computer Science deals with the problem of specification, or writing the right program, rather than the complementary problem of implementation, or writing the program right. However, many desirable software properties (such as portability) are obtained via adherence to coding standards, and therefore fall outside the remit of formal specification and automatic verification. Moreover, code inspections and manual detection of standards violations are time consuming. To address these issues, this thesis describes Exstatic, a novel framework for the static detection of coding standards violations. Unlike many other static checkers, Exstatic can be used to examine code in a variety of languages, including program code, in-line documentation, markup languages, and so on. This means that checkable coding standards adhered to by a particular project or institution can be handled by a single tool. Consequently, a major challenge in the design of Exstatic has been to invent a way of representing code from a variety of source languages. Therefore, this thesis describes ICODE, an intermediate language suitable for representing code from a number of different programming paradigms. To substantiate the claim that ICODE is a universal intermediate language, a proof strategy has been developed: for a number of different programming paradigms (imperative, declarative, etc.), a proof is constructed to show that a semantics-preserving translation exists from an exemplar language (such as IMP or PCF) to ICODE. The usefulness of Exstatic has been demonstrated by the implementation of a number of static analysers for different languages.
This includes a checker for technical documentation written in Javadoc, which validates documents against the Sun Microsystems (now Oracle) Coding Conventions, and a checker for HTML pages against a site-specific standard. A third system is targeted at a variant of the Python language written by the author, called python-csp, based on Hoare's Communicating Sequential Processes.
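The idea of checking conventions independently of the source language can be sketched as follows (a toy illustration of ours, not Exstatic's actual design or ICODE): code from any front end is reduced to a stream of `(kind, name, line)` tuples, and the convention rules never see the original syntax.

```python
import re

# Hypothetical naming rules in the spirit of common Java conventions:
# methods in lowerCamelCase, classes in UpperCamelCase.
CAMEL = re.compile(r"^[a-z][a-zA-Z0-9]*$")
UPPER = re.compile(r"^[A-Z][a-zA-Z0-9]*$")

RULES = {"method": CAMEL, "class": UPPER}

def check(ident_stream):
    """Return (line, name, kind) for every naming-convention violation.

    ident_stream is an iterable of (kind, name, line) tuples produced
    by some language-specific front end; the check is language-neutral.
    """
    return [(line, name, kind)
            for kind, name, line in ident_stream
            if kind in RULES and not RULES[kind].match(name)]
```

For example, `check([("class", "parser", 3), ("method", "doIt", 7)])` flags only the lower-case class name, regardless of which language the identifiers came from.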