
    Using an Architecture Description Language to Model a Large-Scale Information System – An Industrial Experience Report

    An organisation that had developed a large information system wanted to embark on a programme of significant evolution for the system. As a precursor to this, it decided to create a comprehensive architectural description. This undertaking faced a number of challenges, including low general awareness of software modelling and software architecture practices. The approach taken for this project included the definition of a simple, project-specific architecture description language (ADL). This paper describes the experiences of the project and the ADL created as part of it.

    Early aspects: aspect-oriented requirements engineering and architecture design

    This paper reports on the third Early Aspects: Aspect-Oriented Requirements Engineering and Architecture Design Workshop, which was held in Lancaster, UK, on March 21, 2004. The workshop included a presentation session and working sessions in which particular topics on early aspects were discussed. The primary goal of the workshop was to focus on the challenges of defining methodical software development processes for aspects early in the software life cycle, and to explore the potential of proposed methods and techniques to scale up to industrial applications.

    Applying Model Driven Engineering Techniques and Tools to the Planets Game Learning Scenario

    CPM (Cooperative Problem-Based learning Metamodel) is a visual language for the instructional design of Problem-Based Learning (PBL) situations. The language is a UML profile implemented on top of the Objecteering UML CASE tool. In this article, we first present how we used the CPM language to carry out the pedagogical transposition of the planets game learning scenario. We then describe related work conducted to improve CPM usability: on the one hand, we outline a MOF solution and an Eclipse GMF solution as alternatives to the UML profile approach; on the other hand, we discuss how CPM models can be transformed into LMS-compliant data and tool functionality.

    Preserving the Quality of Architectural Tactics in Source Code

    In any complex software system, strong interdependencies exist between requirements and software architecture. Requirements drive architectural choices while also being constrained by the existing architecture and by what is economically feasible. This makes it advisable to concurrently specify the requirements, to devise and compare alternative architectural design solutions, and ultimately to make a series of design decisions in order to satisfy each of the quality concerns. Unfortunately, anecdotal evidence has shown that architectural knowledge tends to be tacit in nature, stored in the heads of people, and lost over time. Therefore, developers often lack comprehensive knowledge of underlying architectural design decisions and inadvertently degrade the quality of the architecture while performing maintenance activities. In practice, this problem can be addressed by preserving the relationships between the requirements, architectural design decisions and their implementations in the source code, and then using this information to keep developers aware of critical architectural aspects of the code. This dissertation presents a novel approach that utilizes machine learning techniques to recover and preserve the relationships between architecturally significant requirements, architectural decisions and their realizations in the implemented code. Our approach for recovering architectural decisions includes the two primary stages of training and classification. In the first stage, the classifier is trained using code snippets of different architectural decisions collected from various software systems. During this phase, the classifier learns the terms that developers typically use to implement each architectural decision. These "indicator terms" represent method names, variable names, comments, or the development APIs that developers inevitably use to implement various architectural decisions. A probabilistic weight is then computed for each potential indicator term with respect to each type of architectural decision. The weight estimates how strongly an indicator term represents a specific architectural tactic or decision. For example, a term such as "pulse" is highly representative of the heartbeat tactic but occurs infrequently in the authentication tactic. After learning the indicator terms, the classifier can compute the likelihood that any given source file implements a specific architectural decision. The classifier was evaluated through several different experiments, including classical cross-validation over code snippets of 50 open source projects and on the entire source code of a large-scale software system. Results showed that the classifier can reliably recognize a wide range of architectural decisions. The technique introduced in this dissertation is used to develop the Archie tool suite. Archie is a plug-in for Eclipse and is designed to detect a wide range of architectural design decisions in the code and to protect them from potential degradation during maintenance activities. It has several features for performing change impact analysis of architectural concerns at both the code and design level and for proactively keeping developers informed of underlying architectural decisions during maintenance activities. Archie is at the technology transfer stage at the US Department of Homeland Security, where it is used purely to detect and monitor security choices. Furthermore, this outcome is integrated into the Department of Homeland Security's Software Assurance Marketplace (SWAMP) to advance research and development of secure software systems.
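
    To make the indicator-term idea concrete, the following is a minimal, hypothetical sketch of such a classifier, not the Archie implementation described in the dissertation: it learns smoothed per-tactic term weights from labelled code snippets and then scores an unseen source file by log-likelihood. The tokenisation, smoothing scheme and example data are assumptions for illustration.

    # Minimal sketch (not the Archie implementation): indicator-term weighting
    # for detecting architectural tactics/decisions in source code. Names,
    # tokenisation and the add-one smoothing scheme are illustrative assumptions.
    import math
    import re
    from collections import Counter, defaultdict

    def tokenize(code):
        # Split identifiers, comments and API names into lowercase terms.
        return re.findall(r"[a-z]+", code.lower())

    class TacticClassifier:
        def __init__(self):
            self.term_counts = defaultdict(Counter)  # tactic -> term frequencies
            self.tactic_totals = Counter()           # tactic -> total term count
            self.vocabulary = set()

        def train(self, snippets):
            # snippets: (tactic label, code snippet) pairs from known systems.
            for tactic, code in snippets:
                terms = tokenize(code)
                self.term_counts[tactic].update(terms)
                self.tactic_totals[tactic] += len(terms)
                self.vocabulary.update(terms)

        def weight(self, term, tactic):
            # Smoothed probability that `term` appears in code for `tactic`.
            count = self.term_counts[tactic][term]
            return (count + 1) / (self.tactic_totals[tactic] + len(self.vocabulary))

        def score(self, source_file, tactic):
            # Log-likelihood that the source file implements the given tactic.
            return sum(math.log(self.weight(t, tactic)) for t in tokenize(source_file))

    # Example: "pulse" weighs heavily for heartbeat but not for authentication.
    clf = TacticClassifier()
    clf.train([
        ("heartbeat", "void sendPulse() { /* emit heartbeat pulse */ }"),
        ("authentication", "boolean verifyCredentials(User user, String password)"),
    ])
    print(clf.score("timer.schedule(pulse)", "heartbeat"))       # higher
    print(clf.score("timer.schedule(pulse)", "authentication"))  # lower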

    Artificial Intelligence in a Main Warehouse in Panasonic: Los Indios, Texas

    The Panasonic Company warehouse is located in Los Indios, Texas. The warehouse is constrained by the great distances between headquarters and the main warehouse that supplies the branches and main customers, which requires a considerable amount of time to maintain effective communication in the inventory area. In addition, an online review confirmed that the website is disabled, contradicting the company's corporate policy. The thesis proposal is arranged in four chapters covering the introduction, statement of the problem and purposes; previous studies and definition of the literature; the research methodology and the resources for data collection; and the results, the proposal, and the conclusions. The paper ends with a list of references from the various substantial sources that facilitated the research.

    Towards an Approach for Applying Early Testing to Smart Contracts

    Immutability - the ability of a Blockchain (BC) ledger to remain an unalterable, permanent and indelible history of transactions - is highlighted as a key benefit of BC. This ability is very important when several companies work collaboratively to achieve common objectives. This collaboration is usually represented using business process models. BC is considered a suitable technology for reducing the complexity of designing these collaborative processes using Smart Contracts. This paper discusses how to combine Model-Based Software Development with modelling techniques, such as use case models and activity diagrams based on the Unified Modeling Language (UML), in order to simplify and improve the modelling, management and execution of collaborative business processes between multiple companies in the BC network. The paper also discusses the necessity of using transformation protocols to obtain Smart Contract code. In addition, it presents systematic mechanisms to evaluate and validate Smart Contracts by applying early testing techniques before deploying the Smart Contract code in the BC network. Ministerio de Economía y Competitividad TIN2016-76956-C3-2-R (POLOLAS).
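
    As a rough, hypothetical illustration of the model-to-code and early-testing idea (not the transformation protocol proposed in the paper), the sketch below describes a tiny collaborative process as plain data, runs simple validation checks on the model before any code is generated, and only then emits a skeletal Smart Contract stub. The model format, the checks and the generated stub are all illustrative assumptions.

    # Minimal sketch (not the paper's transformation protocol): validate a tiny
    # activity-diagram-like process model before generation ("early testing"),
    # then emit a skeletal Smart Contract stub. The model structure, the checks
    # and the stub template are illustrative assumptions.

    process = {
        "name": "OrderSettlement",
        "participants": ["Buyer", "Seller"],
        "activities": [
            {"name": "placeOrder", "actor": "Buyer"},
            {"name": "shipGoods", "actor": "Seller"},
            {"name": "confirmReceipt", "actor": "Buyer"},
        ],
    }

    def early_test(model):
        # Check the model for defects before any contract code is generated.
        errors = []
        actors = set(model["participants"])
        if not model["activities"]:
            errors.append("process has no activities")
        for act in model["activities"]:
            if act["actor"] not in actors:
                errors.append("activity '%s' has unknown actor '%s'"
                              % (act["name"], act["actor"]))
        return errors

    def generate_contract(model):
        # Emit a skeletal Solidity-like contract with one function per activity.
        functions = "\n".join(
            "    function %s() public { /* restricted to %s */ }"
            % (act["name"], act["actor"])
            for act in model["activities"]
        )
        return "contract %s {\n%s\n}" % (model["name"], functions)

    errors = early_test(process)
    if errors:
        print("Model rejected before deployment:", errors)
    else:
        print(generate_contract(process))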

    Managing knowledge for capability engineering

    The enterprises that deliver capability are trying to evolve into through-life businesses by shifting away from the traditional pattern of designing and manufacturing successive generations of products, towards a new paradigm centred on support, sustainability and the incremental enhancement of existing capabilities through technology insertion and process change. The provision of seamless through-life customer solutions depends heavily on the management of information and knowledge between, and within, the different parts of the supply-chain enterprise. This research characterised and described Capability Engineering (CE) as applied in the defence enterprise and identified, for BAE Systems, important considerations for managing knowledge within that context. The terms Capability Engineering and Through Life Capability Management (TLCM), used synonymously in this thesis, denote a complex, evolving domain that requires new approaches to better understand the different viewpoints, models and practices. The findings and novelty of this research are demonstrated through the following achievements: it defined the problem space that Requirements Engineers can use in through-life management projects; it contributed to the development of models for Systems Architects that enable them to incorporate ‘soft’ systems within their consideration; it independently developed a TLCM activity model against which BAE Systems validated the BAE Systems TLCM activity model, which is now used by the UK Ministry of Defence (MoD); it developed, and published within INCOSE, the INCOSE Capability Engineering ontology; through the novel analysis of a directly applicable case study, it highlighted to Functional Delivery Managers the significance of avoiding the decoupling of information and knowledge in the context of TLCM; and, through experimentation and knowledge gained within this research, it identified inadequacies in the TechniCall (rapid access to experts) service, which led to requirements for an improved service that is now being implemented by BAE Systems. The results showed that managing knowledge is distinct from managing information. Over-reliance on information management in the absence of tacit knowledge can lead to a loss in the value of the information, which can result in unintended consequences. Capability is realised through a combination of component systems, and Capability Engineering is equivalent to a holistic perspective of Systems Engineering. A sector-independent Capability Engineering ontology was developed to enable semantic interoperability between different domains, i.e. defence, rail and information technology. This helped to better understand the dependencies of contributing component systems within defence, and supported collaboration across different domains. Although the evaluation of the ontology through expert review has been accomplished, the ontology, the KM analysis framework and the soft systems transitioning approach developed here still need to undergo independent verification and validation. This requires application to other case studies to check and exploit their suitability. This Engineering Doctorate research has been disseminated through a number of peer-reviewed publications.
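
    For readers unfamiliar with what such an ontology looks like in practice, the fragment below is a small, hypothetical sketch written with rdflib; the classes, properties and instances are illustrative assumptions and are not taken from the published INCOSE Capability Engineering ontology.

    # Minimal sketch (not the published INCOSE Capability Engineering ontology):
    # a tiny, sector-independent capability vocabulary expressed with rdflib.
    # Class, property and instance names are illustrative assumptions.
    from rdflib import Graph, Namespace, RDF, RDFS

    CE = Namespace("http://example.org/capability-engineering#")

    g = Graph()
    g.bind("ce", CE)

    # Core, sector-independent concepts.
    for cls in ("Capability", "ComponentSystem", "Enterprise"):
        g.add((CE[cls], RDF.type, RDFS.Class))
    g.add((CE.realisedBy, RDF.type, RDF.Property))
    g.add((CE.realisedBy, RDFS.domain, CE.Capability))
    g.add((CE.realisedBy, RDFS.range, CE.ComponentSystem))

    # Defence and rail instances share the same vocabulary, which is what
    # enables semantic interoperability across the two sectors.
    g.add((CE.AirDefence, RDF.type, CE.Capability))
    g.add((CE.RadarNetwork, RDF.type, CE.ComponentSystem))
    g.add((CE.AirDefence, CE.realisedBy, CE.RadarNetwork))
    g.add((CE.PassengerTransport, RDF.type, CE.Capability))
    g.add((CE.SignallingSystem, RDF.type, CE.ComponentSystem))
    g.add((CE.PassengerTransport, CE.realisedBy, CE.SignallingSystem))

    print(g.serialize(format="turtle"))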

    Handling Failures in Data Quality Measures

    Successful data quality (DQ) measurement is important for many data consumers (or data guardians) to decide on the acceptability of the data of concern. Nevertheless, little is known about how “failures” of DQ measures can be handled by data guardians in the presence of the factor(s) that contribute to the failures. This paper presents a review of failure handling mechanisms for DQ measures. The failure factors faced by existing DQ measures are presented, together with the research gaps with respect to failure handling mechanisms in DQ frameworks. In particular, by comparing existing DQ frameworks in terms of the inputs used to measure DQ, the way DQ scores are computed and the way DQ scores are stored, we identified failure factors inherent in the frameworks. Understanding how failures can be handled will lead to the design of a systematic failure handling mechanism for robust DQ measures.
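
    As a small, hypothetical illustration of what a failure-aware DQ measure might look like (not a mechanism taken from the reviewed frameworks), the sketch below computes a completeness score and reports an explicit failure reason when the measure cannot be computed from its inputs.

    # Minimal sketch (not a mechanism from the reviewed frameworks): a data
    # quality completeness measure that reports an explicit failure reason
    # instead of failing silently. The failure categories are illustrative.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DQResult:
        score: Optional[float]   # None when the measure could not be computed
        failure: Optional[str]   # reason the measure failed, if any

    def completeness(records, required_fields):
        # Fraction of required fields that are present and non-empty.
        if not required_fields:
            return DQResult(None, "no required fields specified")  # input failure
        if not records:
            return DQResult(None, "empty dataset")                 # input failure
        filled = sum(
            1
            for rec in records
            for field in required_fields
            if rec.get(field) not in (None, "")
        )
        return DQResult(filled / (len(records) * len(required_fields)), None)

    # A data guardian can inspect the failure factor instead of a silent score.
    print(completeness([{"name": "Ada", "email": ""}], ["name", "email"]))
    print(completeness([], ["name"]))  # DQResult(score=None, failure='empty dataset')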