92 research outputs found

    Unpacking Agile Enterprise Architecture Innovation work practices: A Qualitative Case Study of a Railroad Company

    Agile EA applies the principles of agile methods to managing enterprise architecture modeling and redesign efforts. However, very little work to date has examined how organizations adopt methodological innovations such as the integration of agile methods with enterprise architecture. This is problematic, because we know that organizations face stiff challenges in introducing innovations that fundamentally disrupt their enterprise architecture. Hence we ask: How does agile EA get adopted in practice, and what are the underlying mechanisms through which teams self-organize and adapt? To this end, we studied a large-scale agile EA development effort to modernize the legacy systems at a top railroad company referred to as “Alpha” (a pseudonym). Our qualitative analysis shows how multiple teams self-organize and adjust the pace of the development effort by strategically (1) choosing different types of agile methods and (2) embedding resources across teams to increase communication.

    Computational Molecular Coevolution

    A major goal in computational biochemistry is to obtain three-dimensional structure information from protein sequence. Coevolution represents a biological mechanism through which structural information can be obtained from a family of protein sequences. Evolutionary relationships within a family of protein sequences are revealed through sequence alignment. Statistical analyses of these sequence alignments reveal positions in the protein family that covary, and thus appear to be dependent on one another throughout the evolution of the protein family. These covarying positions are inferred to be coevolving via one of two biological mechanisms, both of which imply that coevolution is facilitated by inter-residue contact. Thus, high-quality multiple sequence alignments and robust coevolution-inferring statistics can produce structural information from sequence alone. This work characterizes the relationship between coevolution statistics and sequence alignments and highlights the implicit assumptions and caveats associated with coevolutionary inference. An investigation of sequence alignment quality and coevolution-inferring methods revealed that such methods are very sensitive to the systematic misalignments discovered in public databases. However, repairing these misalignments restores the predictive power of coevolution statistics. To overcome the sensitivity to misalignments, two novel coevolution-inferring statistics were developed that show increased contact prediction accuracy, especially in alignments that contain misalignments. These new statistics were developed into a suite of coevolution tools, the MIpToolset. Because systematic misalignments produce a distinctive pattern when analyzed by coevolution-inferring statistics, a new method for detecting systematic misalignments was created to exploit this phenomenon. This new method, called "local covariation", was used to analyze publicly available multiple sequence alignment databases. Local covariation detected putative misalignments in a database designed to benchmark the accuracy of sequence alignment software. Local covariation was incorporated into a new software tool, LoCo, which displays regions of potential misalignment during alignment editing and assists in their correction. This work represents advances in multiple sequence alignment creation and coevolutionary inference.
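
    As a rough illustration of the kind of covariation statistic described above, the sketch below computes plain mutual information between two alignment columns. This is only a baseline measure; the MIpToolset's corrected and novel statistics are not reproduced here, and the toy alignment is invented for demonstration.

```python
from collections import Counter
from math import log2

def column_mutual_information(col_i, col_j):
    """Estimate covariation between two alignment columns as mutual information.

    col_i, col_j: equal-length sequences of residues (one symbol per aligned sequence).
    Gap characters are treated as ordinary symbols here; real pipelines usually filter them.
    """
    n = len(col_i)
    counts_i = Counter(col_i)
    counts_j = Counter(col_j)
    counts_ij = Counter(zip(col_i, col_j))
    mi = 0.0
    for (a, b), c in counts_ij.items():
        p_ab = c / n
        p_a = counts_i[a] / n
        p_b = counts_j[b] / n
        mi += p_ab * log2(p_ab / (p_a * p_b))
    return mi

# Toy alignment: columns 0 and 2 covary perfectly, columns 0 and 1 are independent.
alignment = ["ARD", "AKD", "GRE", "GKE"]
cols = list(zip(*alignment))
print(column_mutual_information(cols[0], cols[2]))  # 1.0 bit (covarying pair)
print(column_mutual_information(cols[0], cols[1]))  # 0.0 bits (independent pair)
```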

    Coevolving Residues and the Expansion of Substrate Permissibility in LAGLIDADG Homing Endonucleases

    Genome editing (GE) is a form of genetic engineering that permits the deliberate manipulation of genetic material for the study of biological processes, agricultural and industrial biotechnologies, and developing targeted therapies to cure human disease. While the potential application of GE is wide-ranging, the efficacy of most strategies depends on the ability to accurately introduce a double-stranded break at the genomic location where alterations are desired. LAGLIDADG homing endonucleases (LHEs) are a class of mobile genetic elements that recognize and cleave 22-bp sequences of DNA. Given this high degree of specificity, LHEs are powerful GE reagents, but re-engineering their recognition sites has been hindered by a limited understanding of structural constraints within the family, and of how cleavage specificity is regulated in the central target site region. In the present studies, a covariation analysis of the LHE family recognized a set of coevolving residues within the enzyme active site. These positions were found to modulate catalytic efficiency, and are thought to create a barrier to active site evolution and re-engineering by constraining the LHE fitness landscape towards a set of functionally permissive combinations. Interestingly, mutation of these positions led to the identification of a catalytic residue variant that demonstrates cleavage activity against a greater number of central target site substrates than wild-type enzymes. To facilitate these investigations, high-throughput and unbiased methods were developed to functionally screen large mutagenic libraries and simultaneously profile cleavage specificity against 256 different substrates. Lastly, structural studies aimed at increasing our understanding of the LHE coevolving network led to the discovery of direct protein-DNA contacts in the central target site region. Significantly, these findings increase our understanding of functionally important structural constraints within the LHE family and have the potential to increase the sequence targeting capacity of LHE scaffolds. More broadly, the methodologies described in this thesis can assist large-scale structure-function studies and facilitate investigations of substrate specificity for most DNA-binding proteins. Finally, the thorough biochemical validation I provide for computational predictions of coevolution showcases a strategy to infer protein structure-function relationships from genetic information and emphasizes the need to expand these studies to other protein families.
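
    For illustration only: if the 256 substrates correspond to every possible 4-bp central target-site sequence (4^4 = 256) and cleavage is read out by sequencing a substrate library, a profiling step could be sketched roughly as below. The function names, read counts, and normalization are assumptions, not the protocol used in the thesis.

```python
from itertools import product

BASES = "ACGT"

def enumerate_central_sites(length=4):
    """All possible central target-site sequences; 4**4 = 256 when length is 4."""
    return ["".join(p) for p in product(BASES, repeat=length)]

def specificity_profile(cleaved_read_counts, input_read_counts):
    """Turn per-substrate read counts from a hypothetical deep-sequencing screen
    into a relative cleavage-activity profile (cleaved fraction per substrate)."""
    profile = {}
    for site in enumerate_central_sites():
        cleaved = cleaved_read_counts.get(site, 0)
        total = input_read_counts.get(site, 0)
        profile[site] = cleaved / total if total else 0.0
    return profile

# Hypothetical example: one substrate dominates the cleaved pool.
inputs = {site: 1000 for site in enumerate_central_sites()}
cleaved = {"GTGA": 900, "GTGC": 450}
best = max(specificity_profile(cleaved, inputs).items(), key=lambda kv: kv[1])
print(best)  # ('GTGA', 0.9)
```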

    Visualizing the customization endeavor in product-based-evolving software product lines: a case of action design research

    Software Product Lines (SPLs) aim at systematically reusing software assets and deriving products (a.k.a. variants) out of those assets. However, it is not always possible to handle SPL evolution directly through these reusable assets. Time-to-market pressure, expedited bug fixes, or product specifics cause evolution to happen first at the product level and to be merged back later into the SPL platform where the core assets reside. This is referred to as product-based evolution. In this scenario, deciding when and what should go into the next SPL release is far from trivial. Distinct questions arise: How much effort are developers spending on product customization? Which are the most customized core assets? To what extent is the core asset code being reused for a given product? We refer to this endeavor as Customization Analysis, i.e., understanding the functional increments involved in adjusting products from the last SPL platform release. The scale of SPL code bases calls for customization analysis to be conducted through visual analytics tools. This work addresses the design principles for such tools through a joint effort between academia and industry, specifically Danfoss Drives, a company division in charge of the P400 SPL. Accordingly, we adopt an Action Design Research approach in which answers are sought by interacting with the practitioners in the studied situations. We contribute by providing informed goals for customization analysis as well as an intervention in the form of a visual analytics tool. We conclude by discussing to what extent this experience can be generalized to product-based evolving SPL organizations other than Danfoss Drives. Open Access funding was provided thanks to the CRUE-CSIC agreement with Springer Nature. This work is supported by the Spanish Ministry of Science, Innovation and Universities under grant RTI2018099818-B-I00 and MCIU-AEI TIN2017-90644-REDT (TASOVA). ONEKIN enjoys support from the program "Grupos de Investigación del Sistema Universitario Vasco 2019-2021" under contract IT1235-19. Raul Medeiros holds a doctoral grant from the Spanish Ministry of Science and Innovation.
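
    As a loose sketch of what a customization-analysis metric might compute (this is not the Danfoss Drives tooling or the visual analytics tool described above; the directory layout and the *.c filter are assumptions), one could rank core assets by how heavily their product-level copies diverge from the platform:

```python
import difflib
from pathlib import Path

def customization_ratio(core_asset: Path, product_copy: Path) -> float:
    """Fraction of changed content between a core asset and its product-level copy.

    0.0 means the product reuses the core asset verbatim; values near 1.0
    indicate heavy product-specific customization."""
    core_lines = core_asset.read_text().splitlines()
    product_lines = product_copy.read_text().splitlines()
    matcher = difflib.SequenceMatcher(a=core_lines, b=product_lines)
    return 1.0 - matcher.ratio()

def rank_customized_assets(core_dir: Path, product_dir: Path):
    """Rank files shared by the platform and a product by customization effort, highest first."""
    scores = []
    for core_file in core_dir.rglob("*.c"):
        product_file = product_dir / core_file.relative_to(core_dir)
        if product_file.exists():
            scores.append((str(core_file.relative_to(core_dir)),
                           customization_ratio(core_file, product_file)))
    return sorted(scores, key=lambda item: item[1], reverse=True)
```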

    The state of adoption and the challenges of systematic variability management in industry

    Handling large-scale software variability is still a challenge for many organizations. After decades of research on variability management concepts, many industrial organizations have introduced techniques known from research, but still lament that pure textbook approaches are not applicable or efficient. For instance, software product line engineering, an approach to systematically develop portfolios of products, is difficult to adopt given the high upfront investments; and even when adopted, organizations are challenged by evolving their complex product lines. Consequently, the research community now mainly focuses on re-engineering and evolution techniques for product lines; yet, understanding the current state of adoption and the industrial challenges organizations face is necessary to conceive effective techniques. In this multiple-case study, we analyze the current adoption of variability management techniques in twelve medium- to large-scale industrial cases in domains such as automotive, aerospace, or railway systems. We identify the current state of variability management, emphasizing the techniques and concepts these cases have adopted. We elicit the needs and challenges expressed for these cases, triangulated with results from a literature review. We believe our results help to understand the current state of adoption and shed light on gaps to address in industrial practice. This work is supported by Vinnova Sweden, Fonds Unique Interministériel (FUI) France, and the Swedish Research Council. Open access funding provided by the University of Gothenburg.

    Towards Automatic Parsing of Structured Visual Content through the Use of Synthetic Data

    Structured Visual Content (SVC) such as graphs, flow charts, and the like is used by authors to illustrate various concepts. While such depictions allow the average reader to better understand the contents, images containing SVCs are typically not machine-readable. This, in turn, not only hinders automated knowledge aggregation, but also the perception of the displayed information for visually impaired people. In this work, we propose a synthetic dataset containing SVCs in the form of images as well as ground truths. We show the usage of this dataset by an application that automatically extracts a graph representation from an SVC image, achieved by training a model via common supervised learning methods. As there currently exist no large-scale public datasets for the detailed analysis of SVC, we propose the Synthetic SVC (SSVC) dataset comprising 12,000 images with respective bounding box annotations and detailed graph representations. Our dataset enables the development of strong models for the interpretation of SVCs while skipping the time-consuming dense data annotation. We evaluate our model on both synthetic and manually annotated data and show the transferability from synthetic to real data via various metrics, given the presented application. We show that this proof of concept is feasible to some extent and establish a solid baseline for the task. We discuss the limitations of our approach to guide further improvements, and our metrics can serve as a tool for future comparisons in this domain. To enable further research on this task, the dataset is publicly available at https://bit.ly/3jN1pJ
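
    A plausible, simplified record layout for such a dataset pairs each rendered image with node bounding boxes and the ground-truth graph. The field names and paths below are assumptions for illustration, not the published SSVC schema.

```python
from dataclasses import dataclass, field

@dataclass
class NodeAnnotation:
    node_id: int
    label: str   # e.g. the text rendered inside the node
    bbox: tuple  # (x_min, y_min, x_max, y_max) in pixels

@dataclass
class SVCAnnotation:
    """One synthetic SVC sample: the rendered image plus its graph ground truth."""
    image_path: str
    nodes: list = field(default_factory=list)   # list of NodeAnnotation
    edges: list = field(default_factory=list)   # list of (source_id, target_id) pairs

# Hypothetical sample: a two-node flow chart with a single directed edge.
sample = SVCAnnotation(
    image_path="ssvc/images/000001.png",
    nodes=[NodeAnnotation(0, "Start", (12, 20, 110, 60)),
           NodeAnnotation(1, "Process data", (12, 120, 160, 160))],
    edges=[(0, 1)],
)
```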

    A conceptual model for unifying variability in space and time: Rationale, validation, and illustrative applications

    With the increasing demand for customized systems and rapidly evolving technology, software engineering faces many challenges. A particular challenge is the development and maintenance of systems that are highly variable both in space (concurrent variations of the system at one point in time) and time (sequential variations of the system, due to its evolution). Recent research aims to address this challenge by managing variability in space and time simultaneously. However, this research originates from two different areas, software product line engineering and software configuration management, resulting in non-uniform terminologies and a varying understanding of concepts. These problems hamper the communication and understanding of the involved concepts, as well as the development of techniques that unify variability in space and time. To tackle these problems, we performed an iterative, expert-driven analysis of existing tools from both research areas to derive a conceptual model that integrates and unifies concepts of both dimensions of variability. In this article, we first explain the construction process and present the resulting conceptual model. We validate the model and discuss its coverage and granularity with respect to established concepts of variability in space and time. Furthermore, we perform a formal concept analysis to discuss the commonalities and differences among the tools we considered. Finally, we show illustrative applications to explain how the conceptual model can be used in practice to derive conforming tools. The conceptual model unifies concepts and relations used in software product line engineering and software configuration management, provides a unified terminology and common ground for researchers and developers to compare their work, clarifies communication, and prevents redundant development.
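
    As a toy illustration of the unified view (not the article's conceptual model; the class and field names are assumptions), a product can be derived by fixing a point in time (a revision) and a point in space (a feature selection):

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    """A unit of variability in space (e.g. an optional capability)."""
    name: str

@dataclass
class Revision:
    """A unit of variability in time (a sequential version of the system)."""
    number: int
    enabled_fragments: dict = field(default_factory=dict)  # feature name -> artifact fragments

@dataclass
class UnifiedSystem:
    """Toy unification: a product is selected by choosing a revision (time)
    and a set of features (space)."""
    features: list = field(default_factory=list)
    revisions: list = field(default_factory=list)

    def derive_product(self, revision_number: int, selected: set):
        rev = next(r for r in self.revisions if r.number == revision_number)
        return {name: frags for name, frags in rev.enabled_fragments.items() if name in selected}

system = UnifiedSystem(
    features=[Feature("base"), Feature("fieldbus")],
    revisions=[Revision(1, {"base": ["core.c"], "fieldbus": ["bus.c"]}),
               Revision(2, {"base": ["core.c", "log.c"], "fieldbus": ["bus.c"]})],
)
print(system.derive_product(2, {"base"}))  # {'base': ['core.c', 'log.c']}
```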

    EXPLOITING KASPAROV'S LAW: ENHANCED INFORMATION SYSTEMS INTEGRATION IN DOD SIMULATION-BASED TRAINING ENVIRONMENTS

    Despite recent advances in the representation of logistics considerations in DOD staff training and wargaming simulations, logistics information systems (IS) remain underrepresented. Unlike many command and control (C2) systems, which can be integrated with simulations through common protocols (e.g., OTH-Gold), many logistics ISs require manpower-intensive human-in-the-loop (HitL) processes for simulation-IS (sim-IS) integration. Where automated sim-IS integration has been achieved, it often does not simulate important sociotechnical system (STS) dynamics, such as information latency and human error, presenting decision-makers with an unrealistic representation of logistics C2 capabilities in context. This research seeks to overcome the limitations of conventional sim-IS interoperability approaches by developing and validating a new approach for sim-IS information exchange through robotic process automation (RPA). RPA software supports the automation of IS information exchange through ISs’ existing graphical user interfaces. This “outside-in” approach to IS integration mitigates the need for engineering changes in ISs (or simulations) for automated information exchange. In addition to validating the potential for an RPA-based approach to sim-IS integration, this research presents recommendations for a Distributed Simulation Engineering and Execution Process (DSEEP) overlay to guide the engineering and execution of sim-IS environments.
    Major, United States Marine Corps
    Approved for public release. Distribution is unlimited.
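
    As an illustrative sketch of the “outside-in” idea described in this abstract: a simulation event is keyed into a logistics IS through its existing GUI rather than through an engineering interface. Here pyautogui stands in for whatever RPA software the thesis actually used, and the screen coordinates, field layout, and latency/error parameters are assumptions.

```python
import random
import time

import pyautogui  # GUI-automation library used here as a stand-in for the RPA tooling

# Hypothetical screen coordinates of input fields in the logistics IS entry form.
ITEM_CODE_FIELD = (420, 310)
QUANTITY_FIELD = (420, 360)

def push_supply_request(item_code: str, quantity: int,
                        mean_latency_s: float = 45.0, error_rate: float = 0.02):
    """Push a simulated supply request into the IS through its GUI, with crude models
    of the sociotechnical dynamics (information latency, occasional human-style typos)
    that the research highlights."""
    time.sleep(random.expovariate(1.0 / mean_latency_s))   # information latency
    if random.random() < error_rate:                       # simple human-error model
        item_code = item_code[:-1] + random.choice("0123456789")
    pyautogui.click(*ITEM_CODE_FIELD)
    pyautogui.typewrite(item_code, interval=0.05)
    pyautogui.click(*QUANTITY_FIELD)
    pyautogui.typewrite(str(quantity), interval=0.05)
    pyautogui.press("enter")                               # submit the form
```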

    Consistent View-Based Management of Variability in Space and Time

    Developing variable systems faces many challenges. Dependencies between interrelated artifacts within a product variant, such as code or diagrams, across product variants, and across their revisions quickly lead to inconsistencies during evolution. This work provides a unification of common concepts and operations for variability management, identifies variability-related inconsistencies, and presents an approach for view-based consistency preservation of variable systems.

    Design Thinking for Innovation Within Manufacturing SMEs: A Multiple Case Study

    Manufacturing small and medium enterprise (SME) leaders have sparse information on using design thinking to support their firm’s business sustainability and competitive advantage. The purpose of this qualitative multiple case study was to describe design thinking experts’ views on how manufacturing SME leaders may successfully drive design thinking within their firms as an innovation strategy to support business sustainability and competitive advantage. A multiple case study design was used to collect data from a purposeful sample of seven design thinking experts. Semistructured interviews, archival data, and reflective field notes supported the credibility of the findings through data triangulation. This study was framed by two concepts developed by Bjoerklund et al. within their integrating design across the organization model: (a) the concept of coevolving design capabilities and (b) the concept of the design-driven organization. Twenty-eight themes emerged from the data analysis, with six coding categories grounded in the conceptual framework: (a) leadership competencies for implementing a design strategy in SMEs, (b) leading a cross-functional team to adopt design thinking, (c) sustaining design thinking within a cross-functional team, (d) developing a design thinking business model for sustainability, (e) gaining competitive advantage with a design thinking business model, and (f) embedding design thinking in a manufacturing SME to drive competitive advantage. This study’s results may drive positive social change by providing manufacturing SME leaders with a better understanding of how to successfully use design thinking to achieve business sustainability and competitive advantage, creating better business longevity.