73 research outputs found

    The deep space network

    Progress is reported in flight project support, tracking and data acquisition, research and technology, network engineering, hardware and software implementation, and operations

    Standardized development of computer software. Part 1: Methods

    This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software: a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines that increase the probability of securing software characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices that aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability of the software among those who must come into contact with it, and to facilitate operation and alteration of the program as requirements on the program environment change

    Standardized development of computer software. Part 2: Standards

    This monograph contains standards for software development and engineering. The book sets forth rules for design, specification, coding, testing, documentation, and quality assurance audits of software; it also contains detailed outlines for the documentation to be produced

    A methodology for producing reliable software, volume 1

    An investigation into the areas having an impact on producing reliable software including automated verification tools, software modeling, testing techniques, structured programming, and management techniques is presented. This final report contains the results of this investigation, analysis of each technique, and the definition of a methodology for producing reliable software

    A framework for the language and logic of computer-aided phenomena-based process modeling

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2000. Includes bibliographical references (p. 273-277). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Chemical process engineering activities such as design, optimization, analysis, control, scheduling, diagnosis, and training all rely on mathematical models for solution of some engineering problem. Likewise, most of the undergraduate chemical engineering curricula are model-based. However, the lack of formalization and systematization associated with model development leads most students and engineers to view modeling as an art, not as a science. Consequently, model development in practice is usually left to specialized modeling experts. This work seeks to address this issue through development of a framework that raises the level of model development from procedural computations and mathematical equations to the fundamental concepts of chemical engineering science. This framework, suitable for implementation in a computer-aided environment, encompasses a phenomena-based modeling language and logical operators. The modeling language, which represents chemical processes in terms of interacting physicochemical phenomena, provides a high-level vocabulary for describing the topological and hierarchical structure of lumped or spatially distributed systems, mechanistic characterization of relevant phenomena (e.g., reactions, equilibria, heat and mass transport), and thermodynamic and physical characterization of process materials. The logical operators systematize the modeling process by explicitly capturing procedural and declarative aspects of the modeling activity.
This enables a computer to provide assistance for analyzing and constructing phenomena-based models, detect model inconsistencies and incompleteness, and automatically derive and explain the resulting model equations from chemical engineering first principles. In order to provide an experimental apparatus suitable for evaluating this framework, the phenomena-based language and logical operators have been implemented in a computer-aided modeling environment, named MODEL.LA. MODEL.LA enables phenomena-based modeling of dynamic systems of arbitrary structure and spatial distribution, hierarchical levels of detail, and multicontext depictions. Additional components allow incorporation of thermodynamic and physical property data, integration of control structures, operational task scheduling, and external models, and assistance for specification and solution of the resulting mathematical model. Application of this environment to several modeling examples, as well as its classroom and industrial deployment, demonstrates the potential benefits of rapid, reliable, and documented chemical process modeling that may be realized from this high-level phenomena-based approach. By Jerry Bieszczad. Ph.D
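The idea of deriving model equations by composing phenomena can be illustrated with a minimal sketch (this is not MODEL.LA itself; all function names, parameter values, and the choice of a lumped stirred-tank example are assumptions for illustration). Each phenomenon contributes a term to the species balance dC/dt, and the combined balance is integrated numerically:

```python
# Minimal sketch of phenomena-based model composition, assuming a lumped
# stirred tank with one species: convective inflow/outflow plus a
# first-order reaction. All names and numbers are illustrative.

def inflow(F_over_V, C_in, C):
    """Convective transport phenomenon: dilution by the feed stream."""
    return F_over_V * (C_in - C)

def reaction(k, C):
    """First-order reaction phenomenon: consumption of the species."""
    return -k * C

def simulate(C0, F_over_V, C_in, k, dt=0.01, t_end=50.0):
    """Explicit-Euler integration of dC/dt = sum of phenomena terms."""
    C, t = C0, 0.0
    while t < t_end:
        dCdt = inflow(F_over_V, C_in, C) + reaction(k, C)
        C += dt * dCdt
        t += dt
    return C

# At steady state dC/dt = 0, so C* = (F/V) * C_in / ((F/V) + k) = 0.5 here.
C_final = simulate(C0=0.0, F_over_V=0.5, C_in=1.0, k=0.5)
print(round(C_final, 3))  # approaches the steady state C* = 0.5
```

A framework like the one described would assemble such terms automatically from a phenomena-level description rather than requiring the engineer to write the balance by hand.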

    The deep space network

    The objectives, functions, and organization of the deep space network are summarized. Progress in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations is reported. Interface support for the Mariner Venus Mercury 1973 flight and Pioneer 10 and 11 missions is included

    Performance requirements verification during software systems development

    Requirements verification refers to the assurance that the implemented system reflects the specified requirements. It is a process that continues throughout the life cycle of the software system. When the software crisis hit in the 1960s, a great deal of attention was placed on the verification of functional requirements, which were considered to be of crucial importance. Over the last decade, researchers have addressed the importance of integrating non-functional requirements into the verification process. An important non-functional requirement for software is performance; performance requirement verification is known as software performance evaluation. This thesis looks at performance evaluation of software systems, a hugely valuable task, especially in the early stages of software project development. Many methods for integrating performance analysis into the software development process have been proposed. These methodologies work by transforming the architectural models familiar in software engineering into performance models, which can be analysed to obtain the expected performance characteristics of the projected system. This thesis aims to bridge the knowledge gap between the performance and software engineering domains by introducing semi-automated transformation methodologies, designed to be generic so that they can be integrated into any software engineering development process. The goal of these methodologies is to provide performance-related design guidance during system development. This thesis introduces two model transformation methodologies: the improved state-marking methodology and the UML-EQN methodology. It also introduces the UML-JMT tool, which was built to realise the UML-EQN methodology.
With the help of the automatic design-model-to-performance-model algorithms introduced in the UML-EQN methodology, a software engineer with basic knowledge of the performance modelling paradigm can conduct a performance study of a software system design. This was demonstrated in a qualitative study in which the methodology and the tool deploying it were tested by software engineers with varying backgrounds and levels of experience, drawn from different sectors of the software development industry. The study results showed acceptance of this methodology and the UML-JMT tool. As performance verification is part of any software engineering methodology, frameworks are needed that deploy performance requirements validation in the context of software engineering. The agile development paradigm was the result of changes in the overall environment of the IT and business worlds. Agile techniques are based on iterative development, where requirements, designs and developed programmes evolve continually. At present, the majority of literature discussing the role of requirements engineering in agile development processes seems to indicate that non-functional requirements verification is uncharted territory. CPASA (Continuous Performance Assessment of Software Architecture) was designed to work in software projects where performance can be affected by changes in the requirements, and matches the main practices of agile modelling and development. The UML-JMT tool was designed to deploy the CPASA performance evaluation tests
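To make concrete the kind of model an EQN-style transformation produces, here is a sketch of the arithmetic for a single M/M/1 service centre, the simplest station in a queueing network. This is textbook queueing theory, not the UML-EQN tool itself; the function name and rates are illustrative:

```python
# Illustrative sketch: the steady-state formulas an extended-queueing-network
# (EQN) performance model reduces to for one M/M/1 service centre.
# Not the UML-EQN/UML-JMT implementation; names and rates are assumptions.

def mm1_metrics(arrival_rate, service_rate):
    """Utilisation, mean number in system, and mean response time of M/M/1."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = arrival_rate / service_rate        # utilisation
    n = rho / (1.0 - rho)                    # mean number in system
    r = 1.0 / (service_rate - arrival_rate)  # mean response time
    return rho, n, r

# E.g. 8 requests/s arriving at a server that completes 10/s:
rho, n, r = mm1_metrics(arrival_rate=8.0, service_rate=10.0)
# utilisation 0.8, about 4 jobs in the system, 0.5 s mean response time
```

A transformation methodology of the kind described maps elements of the UML design (components, deployment nodes, workload annotations) onto such stations, so the designer gets these figures without building the queueing model by hand.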

    On subprocesses, process variations and the interplay between them: An integrated "divide and conquer" method for modeling business processes and their variations

    Every organization can be conceived as a system where value is created by means of business processes. In large organizations, it is common for business processes to be represented by means of process models, which are used for a range of purposes such as internal communication, training, process improvement and information systems development. Given their multifunctional character, process models need to be captured in a way that facilitates understanding and maintenance by a variety of stakeholders. This thesis proposes an integrated decomposition-driven method for modeling business processes with variants. The core idea of the method is to incrementally construct a decomposition of a business process and its variants into subprocesses. At each level of the decomposition and for each subprocess, we determine whether this subprocess should be modeled in a consolidated manner (one subprocess model for all or for multiple variants) or in a fragmented manner (one subprocess model per variant). In this manner, a top-down approach of slicing and dicing a business process is taken: the process model is sliced in accordance with its variants, and then diced (decomposed). This decision is based on two parameters. The first is the business driver for the existence of the variants: every variant of a business process has a root cause, i.e. a reason stemming from the business that causes the processes to differ in how they are executed. These root causes fall into five categories: drivers stemming from the customer, the product, operational reasons, the market, and time. The second parameter is the degree of difference in the way the variants produce their outcomes.
As such, the modeling of business process variations depends on their degree of similarity in how they produce value (outcome values, execution order and so on). The method presented in this thesis is validated by two real-life case studies. The first case study concerns consolidation of existing process models; the second deals with green-field process discovery. As such, the method is applied in two different contexts (consolidation and discovery) on two cases that differ from each other. In both cases, the method produced sets of process models that reduced the duplication rate by up to 50% while keeping the degree of complexity of the models relatively stable
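The per-subprocess decision the method makes can be sketched as a small function. The five driver categories come from the abstract; the numeric threshold and all names are hypothetical assumptions, since the abstract does not give the exact decision rule:

```python
# Hypothetical sketch of the consolidate-vs-fragment decision described above.
# The driver categories follow the abstract; the 0.5 threshold and function
# names are assumptions, not the method's actual rule.

DRIVERS = {"customer", "product", "operational", "market", "time"}

def modeling_decision(driver, difference_ratio):
    """Decide whether a subprocess's variants share one model or get one each.

    difference_ratio: assumed fraction (0..1) of behaviour (activity order,
    outcome values, ...) that differs between the variants of this subprocess.
    """
    if driver not in DRIVERS:
        raise ValueError(f"unknown business driver: {driver}")
    # Assumed heuristic: largely similar variants share one consolidated model.
    return "consolidated" if difference_ratio < 0.5 else "fragmented"

print(modeling_decision("product", 0.2))   # consolidated
print(modeling_decision("customer", 0.7))  # fragmented
```

Applying such a rule recursively, level by level of the decomposition, is what yields the reported reduction in duplicated model content.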

    The Interpretation of Tables in Texts
