1,215 research outputs found

    Deep Space Network information system architecture study

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope extends from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-system technologies, such as computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: a unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.
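
    The abstract's mention of DSN production of level 0 data suggests a front-end step that turns raw received frames into a clean, time-ordered record before delivery to end users. The sketch below illustrates that idea only in broad strokes; the frame fields and processing rules are assumptions for illustration, not the actual DSN or CCSDS formats.

        # Hypothetical "level 0" production: drop corrupted frames, remove
        # duplicates (e.g. from two antennas tracking the same pass), and
        # sequence-order what remains before delivery to end users.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class RawFrame:
            spacecraft_id: int
            sequence_count: int   # transfer-frame counter from the downlink
            receive_time: float   # ground receipt time, seconds
            payload: bytes
            crc_ok: bool          # result of the frame-level error check

        def produce_level0(frames):
            """Return de-duplicated, sequence-ordered frames with bad frames dropped."""
            good = [f for f in frames if f.crc_ok]
            unique = {}
            for f in good:
                unique.setdefault((f.spacecraft_id, f.sequence_count), f)
            return sorted(unique.values(), key=lambda f: (f.spacecraft_id, f.sequence_count))

        if __name__ == "__main__":
            frames = [
                RawFrame(42, 2, 100.5, b"\x02", True),
                RawFrame(42, 1, 100.1, b"\x01", True),
                RawFrame(42, 2, 100.6, b"\x02", True),   # duplicate from a second antenna
                RawFrame(42, 3, 100.9, b"\x03", False),  # failed CRC, discarded
            ]
            for f in produce_level0(frames):
                print(f.sequence_count, f.payload)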

    Advanced information processing system for advanced launch system: Avionics architecture synthesis

    The Advanced Information Processing System (AIPS) is a fault-tolerant distributed computer system architecture that was developed to meet the real-time computational needs of advanced aerospace vehicles. One such vehicle is the Advanced Launch System (ALS), being developed jointly by NASA and the Department of Defense to launch heavy payloads into low Earth orbit at one-tenth the cost (per pound of payload) of current launch vehicles. An avionics architecture that utilizes the AIPS hardware and software building blocks was synthesized for ALS. The AIPS-for-ALS architecture synthesis process, starting with the ALS mission requirements and ending with an analysis of the candidate ALS avionics architecture, is described.
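
    AIPS achieves fault tolerance through redundant processing resources whose details the abstract does not spell out. As a generic illustration of the underlying idea, the sketch below shows a simple triple-modular-redundancy vote over three redundant channel outputs; it is not the actual AIPS voting mechanism.

        # Generic triple-modular-redundancy (TMR) vote: a single faulty channel
        # is out-voted by the two agreeing channels.
        from collections import Counter

        def tmr_vote(results):
            """Return the majority value from three redundant channel outputs."""
            if len(results) != 3:
                raise ValueError("TMR expects exactly three channel outputs")
            value, count = Counter(results).most_common(1)[0]
            if count < 2:
                raise RuntimeError("no majority: the channels disagree pairwise")
            return value

        print(tmr_vote([1013, 1013, 9999]))  # the faulty 9999 is out-voted -> 1013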

    A dynamic systems engineering methodology research study. Phase 2: Evaluating methodologies, tools, and techniques for applicability to NASA's systems projects

    A study of NASA's Systems Management Policy (SMP) concluded that the primary methodology used by the Mission Operations and Data Systems Directorate and its subordinate, the Networks Division, is very effective. Still, some unmet needs were identified. This study involved evaluating methodologies, tools, and techniques with the potential for resolving the previously identified deficiencies. Six preselected methodologies used by other organizations with similar development problems were studied. The study revealed a wide range of significant differences in structure. Each system had some strengths, but none would satisfy all of the needs of the Networks Division. Areas for improvement of the methodology used by the Networks Division are listed, with recommendations for specific action.

    A Reasoning Framework for Dependability in Software Architectures

    The degree to which a software system possesses specified levels of software quality attributes, such as performance and modifiability, often has more influence on the success or failure of the system than its functional requirements. One method of improving the level of a software quality that a product possesses is to reason about the structure of the software architecture in terms of how well that structure supports the quality. This is accomplished by reasoning through software quality attribute scenarios while designing the software architecture of the system. As society relies more heavily on software systems, the dependability of those systems becomes critical. In this study, a framework for reasoning about the dependability of a software system is presented. Dependability is a multi-faceted software quality attribute that encompasses reliability, availability, confidentiality, integrity, maintainability, and safety. This makes dependability more complex to reason about than other quality attributes. The goal of this reasoning framework is to help software architects build dependable software systems by using quantitative and qualitative techniques to reason about dependability in software architectures.
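
    To make the idea of quantitative reasoning about one dependability facet concrete, the sketch below pairs a quality-attribute-scenario record with the standard steady-state availability formulas (MTTF/(MTTF+MTTR), series and parallel composition). These are textbook reliability-engineering formulas rather than the paper's own framework, and the numbers are invented for illustration.

        # Quantitative check of an availability scenario against an architecture
        # modelled as series/parallel compositions of components.
        from dataclasses import dataclass

        @dataclass
        class AvailabilityScenario:
            stimulus: str            # e.g. "crash of the storage node"
            environment: str         # e.g. "normal operation"
            response_measure: float  # required steady-state availability

        def availability(mttf_hours, mttr_hours):
            """Steady-state availability = MTTF / (MTTF + MTTR)."""
            return mttf_hours / (mttf_hours + mttr_hours)

        def series(*parts):
            """The composite fails if any part fails."""
            a = 1.0
            for p in parts:
                a *= p
            return a

        def parallel(*parts):
            """The composite fails only if all redundant parts fail."""
            u = 1.0
            for p in parts:
                u *= (1.0 - p)
            return 1.0 - u

        scenario = AvailabilityScenario("crash of the storage node", "normal operation", 0.999)
        node = availability(mttf_hours=2000, mttr_hours=4)          # one storage node
        achieved = series(parallel(node, node), availability(8000, 2))  # mirrored nodes + gateway
        print(f"achieved={achieved:.5f}, required={scenario.response_measure}")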

    Engineering security into distributed systems: a survey of methodologies

    Rapid technological advances in recent years have precipitated a general shift towards software distribution as a central computing paradigm. This has been accompanied by a corresponding increase in the dangers of security breaches, often causing security attributes to become an inhibiting factor for use and adoption. Despite the acknowledged importance of security, especially in the context of open and collaborative environments, there is a growing gap in the survey literature relating to systematic approaches (methodologies) for engineering secure distributed systems. In this paper, we attempt to fill the aforementioned gap by surveying and critically analyzing the state of the art in security methodologies based on some form of abstract modeling (i.e., model-based methodologies) for, or applicable to, distributed systems. Our detailed reviews can be seen as a step towards increasing awareness and appreciation of a range of methodologies, allowing researchers and industry stakeholders to gain a comprehensive view of the field and make informed decisions. Following the comprehensive survey, we propose a number of criteria reflecting the characteristics security methodologies should possess to be adopted in real-life industry scenarios, and evaluate each methodology accordingly. Our results highlight a number of areas for improvement, help to qualify adoption risks, and indicate future research directions.

    Study of fault-tolerant software technology

    Presented is an overview of the current state of the art of fault-tolerant software and an analysis of the quantitative techniques and models developed to assess its impact. It examines research efforts as well as experience gained from commercial application of these techniques. The paper also addresses the implications of using fault-tolerant software in real-time aerospace applications for computer architecture and design, including hardware, operating systems, and programming languages (including Ada). It concludes that fault-tolerant software has progressed beyond the pure research stage. The paper also finds that, although not perfectly matched, newer architectural and language capabilities provide many of the notations and functions needed to implement software fault tolerance effectively and efficiently.
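
    One of the classic software fault-tolerance techniques such surveys cover is the recovery block: run a primary routine, check its result with an acceptance test, and fall back to an alternate when the test fails. The sketch below is a minimal illustration of that pattern; the routines and the acceptance test are invented for the example.

        # Recovery-block pattern: primary routine with an acceptance test and a
        # fallback alternate. A raised exception also counts as a failure.
        import math

        def recovery_block(inputs, routines, acceptance_test):
            """Try each routine in order; return the first result passing the test."""
            for routine in routines:
                try:
                    result = routine(inputs)
                except Exception:
                    continue
                if acceptance_test(inputs, result):
                    return result
            raise RuntimeError("primary and all alternates failed the acceptance test")

        # Example: a fast primary square root that gives up on large inputs,
        # backed by a slower but dependable alternate.
        def primary(x):
            return x ** 0.5 if x < 1e6 else float("nan")

        def alternate(x):
            return math.sqrt(x)

        def accept(x, r):
            return r == r and abs(r * r - x) < 1e-6 * max(x, 1.0)   # r == r rejects NaN

        print(recovery_block(2.5e6, [primary, alternate], accept))  # uses the alternate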

    Agile development in cloud computing for eliciting non-functional requirements

    Agile is a popular and growing software development methodology. In the agile methodology, requirements are refined based on collaboration with customers and team members. However, the agile process lacks visibility across the development and delivery processes, has complex and disjointed development processes, and lacks communication agility between disconnected owners, development teams, and users. Furthermore, Non-Functional Requirements (NFR) are often ignored because agile development gives users and developers little explicit knowledge of NFR. In addition, extraction of NFR is difficult, and this difficulty is increased because the agile methodology allows requirements to change at any stage of development. Cloud computing services have helped solve some of the issues in the agile process. To address the remaining issues in agile development, this research developed a framework for Agile Development in Cloud Computing (ADCC) that uses the facilitation of cloud computing to solve the above-mentioned issues. An Automated NFR eXtraction (ANFRX) method was developed to extract NFR from software requirement documents and interview notes written during requirements gathering. The ANFRX method exploits the semantic knowledge of words in a requirement to classify and extract NFR. Furthermore, an NFR Elicitation (NFRElicit) approach was developed to help users and development teams elicit NFR in cloud computing. The NFRElicit approach uses components such as an organization’s project history, the ANFRX method, software quality standards, and templates. The ADCC framework was evaluated through a case study and an industrial survey. The case study showed that the ADCC framework facilitated the agile development process, and the industrial survey revealed that the framework had a significant positive impact on communication, development infrastructure provision, scalability, transparency, and requirements engineering activities in agile development. The ANFRX method was evaluated on the PROMISE-NFR dataset, improving on the Cleland and Slankas studies by 40% and 26% in terms of F-measure, respectively. The NFRElicit approach was applied to the eProcurement dataset and evaluated in terms of “Successful”, “Partial success”, and “Failure” identification of NFR in requirement sentences. Compared with the Non-functional requirement, Elicitation, Reasoning and Validation (NERV) and Capturing, Eliciting and Predicting (CEP) methodologies, respectively, NFRElicit increased “Successful” identifications by 11.36% and 2.27%, decreased “Partial success” by 5.68% and 1.14%, and decreased “Failure” by 5.68% and 1.13%. These findings show that the process can elicit and extract NFR for agile development in cloud computing.
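
    The ANFRX method is described as classifying requirement text into NFR categories using the semantic knowledge of words. The sketch below is a deliberately simplified stand-in for that idea, using plain indicator-term matching; the categories and terms are illustrative assumptions, not the semantic resources the method actually uses.

        # Simplified NFR classification of requirement sentences by indicator terms.
        NFR_INDICATORS = {
            "performance":     {"response time", "latency", "throughput", "within"},
            "security":        {"encrypt", "authenticate", "authorization", "audit"},
            "availability":    {"uptime", "available", "recover", "failover"},
            "maintainability": {"modular", "configurable", "update", "extend"},
        }

        def classify_requirement(sentence):
            """Return the NFR categories whose indicator terms appear in the sentence."""
            text = sentence.lower()
            return sorted(
                category
                for category, terms in NFR_INDICATORS.items()
                if any(term in text for term in terms)
            )

        print(classify_requirement(
            "The system shall encrypt stored data and respond within 2 seconds."))
        # -> ['performance', 'security']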

    A Reference Architecture for Service Lifecycle Management – Construction and Application to Designing and Analyzing IT Support

    Service-orientation and the underlying concept of service-oriented architectures are a means to successfully address the need for flexibility and interoperability of software applications, which in turn leads to improved IT support of business processes. With a growing level of diffusion, sophistication and maturity, the number of services and interdependencies is gradually rising. This increasingly requires companies to implement a systematic management of services along their entire lifecycle. Service lifecycle management (SLM), i.e., the management of services from the initiating idea to their disposal, is becoming a crucial success factor. Not surprisingly, the academic and practice communities increasingly postulate comprehensive IT support for SLM to counteract the inherent complexity. The topic is still in its infancy, with no comprehensive models available that help evaluate and design IT support for SLM. This thesis presents a reference architecture for SLM and applies it to the evaluation and design of SLM IT support in companies. The artifact, which largely resulted from consortium research efforts, draws on an extensive analysis of existing SLM applications, case studies, focus group discussions, bilateral interviews and existing literature. Formal procedure models and a configuration terminology allow adapting and applying the reference architecture to a company’s individual setting. Corresponding usage examples prove its applicability and demonstrate the resulting benefits within various SLM IT support design and evaluation tasks. A statistical analysis of the knowledge embodied within the reference data leads to novel, highly significant findings. For example, contemporary standard applications do not yet emphasize the lifecycle concept but rather tend to focus on small parts of the lifecycle, especially on service operation. This forces user companies either into a best-of-breed or a custom-development strategy if they are to implement integrated IT support for their SLM activities. SLM software vendors and internal software development units need to undergo a paradigm shift in order to better reflect the numerous interdependencies and increasing intertwining within services’ lifecycles. The SLM architecture is a first step towards achieving this goal.

    Model driven validation approach for enterprise architecture and motivation extensions

    As the endorsement of Enterprise Architecture (EA) modelling continues to grow in diversity and complexity, management of its schema, artefacts, semantics and relationships has become an important business concern. To maintain agility and flexibility within competitive markets, organizations have also been compelled to explore ways of adjusting proactively to innovations, changes and complex events, including by using EA concepts to model business processes and strategies. Appropriate validation of EA taxonomies has therefore repeatedly been considered an essential requirement for these processes, in order to express business motivation and to relate information systems to technological infrastructure. However, since many taxonomies deployed today use widespread and disparate modelling methodologies, adopting a generic validation approach remains a challenge. The proliferation of EA methodologies and perspectives has also led to intricacies in the formalization and validation of EA constructs, as models often have variant schematic interpretations. Thus, disparate implementations and inconsistent simulation of alignment between business architectures and heterogeneous application systems are common within the EA domain (Jonkers et al., 2003). In this research, the Model Driven Validation Approach (MDVA) is introduced. MDVA allows modelling of EA with validation attributes, formalization of the validation concepts, and transformation of model artefacts to ontologies. The transformation simplifies querying based on motivation and constraints. As the extended methodology is grounded in the semiotics of existing tools, validation is executed using a ubiquitous query language. The major contributions of this work are the extension of the metamodel of the Business Layer of an EA framework (EAF) with a Validation Element and the development of an EAF-model-to-ontology transformation approach. With this innovation, domain-driven design and object-oriented analysis concepts are applied to achieve validation of EAF models using an ontology querying methodology. Additionally, MDVA facilitates the traceability of EA artefacts using ontology graph patterns.
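
    MDVA transforms EA model artefacts into ontologies so that validation concerns can be expressed as queries over motivation and constraints. The sketch below illustrates that general idea with a tiny model fragment expressed as RDF triples and a SPARQL query that flags business processes lacking a motivating driver; the rdflib library, the namespace, and the element names are assumptions for illustration and are not prescribed by the paper.

        # A toy EA fragment as RDF, queried for a validation concern:
        # "which business processes have no motivating driver?"
        from rdflib import Graph, Namespace, Literal, RDF

        EA = Namespace("http://example.org/ea#")
        g = Graph()
        g.bind("ea", EA)

        # Two business processes; only one is traced to a motivation element.
        g.add((EA.OrderHandling, RDF.type, EA.BusinessProcess))
        g.add((EA.OrderHandling, EA.motivatedBy, EA.ReduceLeadTime))
        g.add((EA.Invoicing, RDF.type, EA.BusinessProcess))
        g.add((EA.Invoicing, EA.label, Literal("Invoicing")))

        UNMOTIVATED = """
        PREFIX ea: <http://example.org/ea#>
        SELECT ?process WHERE {
            ?process a ea:BusinessProcess .
            FILTER NOT EXISTS { ?process ea:motivatedBy ?driver }
        }
        """

        for row in g.query(UNMOTIVATED):
            print("missing motivation:", row.process)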