688 research outputs found

    Software Evolution for Industrial Automation Systems. Literature Overview


    Interactive Software Refactoring Bot

    Peer Reviewed: https://deepblue.lib.umich.edu/bitstream/2027.42/153326/1/ASE2019_RefactoringBot__Copy_deepblue.pd

    A principle based system architecture framework applied for defining, modeling & designing next generation smart grid systems

    Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 81).

    A strong and growing desire exists throughout society to consume electricity from clean and renewable energy sources such as solar, wind, biomass, and geothermal. Due to the intermittent and variable nature of electricity from these sources, the current electricity grid is incapable of collecting, transmitting, and distributing this energy effectively. The "Smart Grid" is a term that has come to represent this next-generation grid, capable of delivering not only environmental benefits but also key economic, reliability, and energy security benefits. Given the high complexity of the electricity grid, a principle-based System Architecture framework is presented as a tool for analyzing, defining, and outlining potential pathways for infrastructure transformation. By applying this framework to the Smart Grid, beneficiaries and stakeholders are identified, upstream and downstream influences on design are analyzed, and a succinct outline of benefits and functions is produced.

    The first phase of grid transformation is establishing a robust communications and measurement network. This network will enable customer participation and increase energy efficiency through smart metering, real-time pricing, and demand response programs. As penetration of renewables increases, the high variability and uncontrollability of additional energy sources will cause significant operation and control challenges. To mitigate this variability, reserve margins will be adjusted and grid-scale energy storage (such as compressed air, flow batteries, and plug-in hybrid electric vehicles, or PHEVs) will begin to be introduced. Achieving over 15% renewable energy penetration marks the second phase of transformation. The third phase is enabling mass adoption, whereby over 40% of our energy will come from renewable sources. This level of penetration will only be achieved through fast supply and demand balancing controls and large-scale storage. Robust modeling must be developed to test various portfolio configurations.

    by Gregory Sachs. S.M. in Engineering and Management

    The Evolution of Sociology of Software Architecture

    The dialectical interplay of technology and sociological development goes back to the early days of human development, starting with stone tools and fire and continuing through the scientific and industrial revolutions; but it has never been as intense or as rapid as in the modern information age of software development and the accelerating knowledge society (Mansell and Wehn, 1988; Nico, 1994, pp. 1602-1604). Software development causes social change, and social challenges demand software solutions. In turn, software solutions demand software application architecture. Software architecture ("SA") (Fielding and Taylor, 2000) is a process for "defining a structural solution that meets all the technical and operations requirements..." (Microsoft, 2009, Chapter I). In the SA process, there is little emphasis either on the sociological requirements of all social stakeholders or on the society in which these stakeholders use, operate, group, manage, transact, dispute, and resolve social conflicts. For problems of society demanding sociological as well as software solutions, this study redefines software application architecture as "the process of defining a structured solution that meets all of the sociological, technical, and operational requirements…"

    This investigation aims to lay the groundwork for and develop a novel sub-branch of scientific study we name the "Sociology of Software Architecture" (hereinafter "SSA"). SSA is an interdisciplinary and comparative study that integrates and synthesizes elements of sociology, sociology of technology, history of technology, sociology of the knowledge society, epistemology, science methodology (philosophy of science), and software architecture. Sociology and technology have a strong, dynamic, and dialectical relationship, especially in software development. This thesis investigates and answers important and relevant questions, develops new scientific knowledge, proposes solutions, demonstrates and validates their benefits, shares its case studies and experiences, and advocates and promotes the further development of this novel field of study.

    Preserving the Quality of Architectural Tactics in Source Code

    In any complex software system, strong interdependencies exist between requirements and software architecture. Requirements drive architectural choices while also being constrained by the existing architecture and by what is economically feasible. This makes it advisable to concurrently specify the requirements, to devise and compare alternative architectural design solutions, and ultimately to make a series of design decisions in order to satisfy each of the quality concerns. Unfortunately, anecdotal evidence has shown that architectural knowledge tends to be tacit in nature, stored in the heads of people, and lost over time. Developers therefore often lack comprehensive knowledge of the underlying architectural design decisions and inadvertently degrade the quality of the architecture while performing maintenance activities. In practice, this problem can be addressed by preserving the relationships between the requirements, the architectural design decisions, and their implementations in the source code, and then using this information to keep developers aware of critical architectural aspects of the code.

    This dissertation presents a novel approach that utilizes machine learning techniques to recover and preserve the relationships between architecturally significant requirements, architectural decisions, and their realizations in the implemented code. The approach for recovering architectural decisions includes two primary stages: training and classification. In the first stage, the classifier is trained using code snippets of different architectural decisions collected from various software systems. During this phase, the classifier learns the terms that developers typically use to implement each architectural decision. These "indicator terms" represent method names, variable names, comments, or the development APIs that developers inevitably use to implement various architectural decisions. A probabilistic weight is then computed for each potential indicator term with respect to each type of architectural decision. The weight estimates how strongly an indicator term represents a specific architectural tactic or decision. For example, a term such as "pulse" is highly representative of the heartbeat tactic but occurs infrequently in authentication code. After learning the indicator terms, the classifier can compute the likelihood that any given source file implements a specific architectural decision. The classifier was evaluated through several experiments, including classical cross-validation over code snippets of 50 open source projects and over the entire source code of a large-scale software system. Results showed that the classifier can reliably recognize a wide range of architectural decisions.

    The technique introduced in this dissertation is used to develop the Archie tool suite. Archie is a plug-in for Eclipse designed to detect a wide range of architectural design decisions in the code and to protect them from potential degradation during maintenance activities. It has several features for performing change impact analysis of architectural concerns at both the code and design level, and it proactively keeps developers informed of underlying architectural decisions during maintenance activities. Archie is at the technology-transfer stage at the US Department of Homeland Security, where it is used solely to detect and monitor security choices. Furthermore, this outcome is integrated into the Department of Homeland Security's Software Assurance Marketplace (SWAMP) to advance research and development of secure software systems.
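    To make the indicator-term idea concrete, here is a minimal sketch of this kind of classifier, assuming a naive-Bayes-style formulation with add-one smoothing; the tactic names and term lists are invented for illustration and are not Archie's actual training vocabulary.

    import math
    from collections import Counter

    # Hypothetical training data: code snippets (as token lists) labeled with
    # the architectural tactic they implement. The real classifier was trained
    # on snippets from 50 open source projects; these examples are toys.
    TRAINING = {
        "heartbeat":      [["pulse", "beat", "alive", "interval"],
                           ["heartbeat", "pulse", "timer"]],
        "authentication": [["login", "password", "credential"],
                           ["authenticate", "token", "session"]],
    }

    def train(training):
        """Estimate P(term | tactic) with add-one smoothing."""
        vocab = {t for snippets in training.values() for s in snippets for t in s}
        model = {}
        for tactic, snippets in training.items():
            counts = Counter(t for s in snippets for t in s)
            total = sum(counts.values())
            model[tactic] = {term: (counts[term] + 1) / (total + len(vocab))
                             for term in vocab}
        return model

    def classify(model, tokens):
        """Return the tactic a source file most likely implements."""
        def score(probs):
            floor = min(probs.values())  # fallback for out-of-vocabulary terms
            return sum(math.log(probs.get(t, floor)) for t in tokens)
        return max(model, key=lambda tactic: score(model[tactic]))

    model = train(TRAINING)
    print(classify(model, ["pulse", "interval", "alive"]))  # -> heartbeat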

    Abstraction: a notion for reverse engineering.


    Building a Systematic Legacy System Modernization Approach

    Systematic legacy system modernization is a new approach to modernizing legacy systems in which software reuse is an integral part. We have developed a modernization approach that uses software architecture reconstruction to find reusable components within the legacy system. The practice of software development and modernization continues to shift towards reusing components from legacy systems to handle the complexities of software development. Modernizing a legacy system requires reuse of software artefacts from the legacy system to conserve the business rules and improve the system's quality attributes.

    Software should be considered an asset, and reuse of these assets is essential to increase the return on development costs. Software reuse ranges from reuse of ideas and algorithms to reuse of any document created during the software development life cycle, and its potential benefits include increased software quality and decreased software development cost and time. Demands for lower software production and maintenance costs, faster delivery of systems, and increased quality can only be met by widespread and systematic software reuse. In spite of these benefits, software reuse adoption is not widespread in software development communities, and software reuse cannot become an engineering discipline so long as its issues and concerns are not clearly understood and dealt with.

    We have conducted two surveys to understand the issues and concerns of software reuse in the Conventional Software Engineering (CSE) Community and the Software Product Line (SPL) Community, where reuse is an integral part of product development. The quantitative and qualitative analysis of our surveys identified the critical factors which affect and inhibit software engineers and developers in adopting software reuse. Although software reuse is a core concept in SPL, it has mostly been discussed in generic terms and has failed to become a standardized practice. The survey conducted on the SPL Community investigates how software reuse is adopted in SPL, so as to provide the necessary degree of support for engineering software product line applications and to identify issues and concerns in software reuse. The identified issues and concerns have helped us to understand the differences between software reuse in the CSE and SPL Communities, and they indicate how both communities can learn good reuse practices from each other in order to develop a common software reuse process.

    Based on the outcome of our surveys, we have developed a systematic software reuse process, called the Knowledge Based Software Reuse (KBSR) Process, which incorporates a Repository of reusable software assets to build a systematic legacy system modernization approach. Being able to reuse software artefacts, be they software requirement specifications, designs, or code, would greatly enhance software productivity and reliability. All of these software artefacts can go into the Knowledge Based Software Reuse Repository and become candidates for reuse.
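    As a rough illustration of what a repository of reusable assets along these lines might look like, the sketch below stores typed artefacts recovered from legacy systems and answers simple queries; the asset schema and query interface are assumptions, not the KBSR Repository's actual design.

    from dataclasses import dataclass, field
    from typing import List, Optional, Set

    @dataclass
    class ReusableAsset:
        name: str
        kind: str            # e.g. "requirement", "design", "code"
        origin_system: str   # legacy system the artefact was recovered from
        tags: Set[str] = field(default_factory=set)

    class ReuseRepository:
        """Hypothetical stand-in for the KBSR Repository: holds reuse
        candidates and supports simple lookup by kind and tag."""
        def __init__(self) -> None:
            self._assets: List[ReusableAsset] = []

        def add(self, asset: ReusableAsset) -> None:
            self._assets.append(asset)

        def find(self, kind: Optional[str] = None,
                 tag: Optional[str] = None) -> List[ReusableAsset]:
            # Return assets matching the optional kind and tag filters.
            return [a for a in self._assets
                    if (kind is None or a.kind == kind)
                    and (tag is None or tag in a.tags)]

    repo = ReuseRepository()
    repo.add(ReusableAsset("billing-rules", "code", "legacy-erp", {"billing"}))
    repo.add(ReusableAsset("audit-spec", "requirement", "legacy-erp", {"audit"}))
    print([a.name for a in repo.find(kind="code", tag="billing")])
    # -> ['billing-rules']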

    Low-overhead Online Code Transformations.

    The ability to perform online code transformations - to dynamically change the implementation of running native programs - has been shown to be useful in domains as diverse as optimization, security, debugging, resilience, and portability. However, conventional techniques for performing online code transformations carry significant runtime overhead, limiting their applicability for performance-sensitive applications. This dissertation proposes and investigates a novel low-overhead online code transformation technique that works by running the dynamic compiler asynchronously and in parallel to the running program. As a consequence, programs execute with the online code transformation capability at near-native speed, unlocking a host of additional opportunities to revisit compilation choices as the program runs.

    Building on this low-overhead mechanism, the dissertation describes three novel runtime systems that represent best-in-class solutions to three challenging problems facing modern computer scientists. First, I leverage online code transformations to significantly increase the utilization of multicore datacenter servers by dynamically managing program cache contention. Compared to state-of-the-art prior work that mitigates contention by throttling application execution, the proposed technique achieves a 1.3-1.5x improvement in application performance. Second, I build a technique to automatically configure and parameterize approximate computing techniques for each program input. This technique configures approximate computing to achieve an average performance improvement of 10.2x while maintaining 90% result accuracy, which significantly improves over oracle versions of prior techniques. Third, I build an operating system designed to secure running applications from dynamic return oriented programming attacks by efficiently, transparently, and continuously re-randomizing the code of running programs. The technique is able to re-randomize program code every 300 ms with an average overhead of 9%, fast enough to resist state-of-the-art return oriented programming attacks based on memory disclosures and side channels.

    PhD. Computer Science and Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120775/1/mlaurenz_1.pd
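    The dissertation's mechanism operates on native code, but the scheduling idea can be sketched at a high level: the "compiler" runs on a parallel thread and atomically swaps in a new implementation, so the hot path never blocks. Everything below (the names and the "optimization" itself) is invented for illustration.

    import threading, time

    # The hot path calls through a mutable reference; rebinding an attribute
    # is atomic in CPython, so the running "program" never blocks on the
    # compiler thread. (The real system swaps native code, not Python objects.)
    class LiveFunction:
        def __init__(self, impl):
            self.impl = impl
        def __call__(self, *args):
            return self.impl(*args)

    def slow_square(x):
        time.sleep(0.001)  # stand-in for an unoptimized implementation
        return x * x

    def async_compiler(live_fn):
        """Plays the role of the asynchronous dynamic compiler: works in
        parallel with the program, then swaps in the new implementation."""
        time.sleep(0.05)                 # pretend code generation takes a while
        live_fn.impl = lambda x: x * x   # "optimized" version, no sleep

    square = LiveFunction(slow_square)
    threading.Thread(target=async_compiler, args=(square,), daemon=True).start()

    # The program keeps running at full speed while the compiler works;
    # later calls transparently pick up the transformed code.
    for i in range(100):
        square(i)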

    Categorizing Non-Functional Requirements Using a Hierarchy in UML.

    Non-functional requirements (NFRs) are a subset of requirements, the means by which software system developers and clients communicate about the system to be built. This paper has three main parts. First, an overview of how non-functional requirements relate to software engineering is given, along with a survey of NFRs in the software engineering literature. Second, a collection of 161 NFRs is diagrammed using the Unified Modeling Language, forming a tool with which developers may more easily identify and write additional NFRs. Third, a lesson plan is presented: a learning module intended for an undergraduate software engineering curriculum. The results of presenting this learning module to a class in Spring 2003 are reported.
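    As a rough illustration of how such a categorization can be mirrored in code, the sketch below renders a tiny slice of an NFR hierarchy as Python classes; the category names are common NFR groupings, not necessarily those in the paper's 161-requirement diagram.

    from dataclasses import dataclass

    # Hypothetical slice of an NFR hierarchy. The paper's UML diagram covers
    # 161 NFRs; the categories below are chosen for illustration only.
    @dataclass
    class NFR:
        statement: str

    class Performance(NFR): pass
    class Security(NFR): pass
    class Usability(NFR): pass

    class ResponseTime(Performance): pass
    class Throughput(Performance): pass

    nfr = ResponseTime("95% of queries shall complete within 200 ms.")

    # Walking the class hierarchy recovers the categorization, much as a
    # developer would navigate the UML diagram to place a new requirement.
    print([c.__name__ for c in type(nfr).__mro__[:-1]])
    # -> ['ResponseTime', 'Performance', 'NFR']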