
    Cohesion Metrics for Improving Software Quality

    Software product metrics aim at measuring the quality of software. Modularity is an essential factor in software quality. In this work, metrics related to modularity, and especially to the cohesion of modules, are considered. The existing metrics are evaluated, and several new alternatives are proposed. The idea of module cohesion is that a module or a class should consist of related parts. The closely related principle of coupling says that the relationships between modules should be minimized. First, internal cohesion metrics are considered. The relations that are internal to classes are shown to be useless for quality measurement. Second, we consider external relationships for cohesion. A detailed analysis using design patterns and refactorings confirms that external cohesion is a better quality indicator than internal cohesion. Third, motivated by the successes (and problems) of external cohesion metrics, another kind of metric is proposed that represents the quality of the modularity of software. This metric can be applied to refactorings related to classes, resulting in a refactoring suggestion system. To describe the metrics formally, a notation for programs is developed. Because of the recursive nature of programming languages, the properties of programs are most compactly represented using grammars and formal languages. The tools used for metrics calculation are also described.
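    As a concrete point of reference for what an internal cohesion metric looks like, the sketch below computes a minimal LCOM1-style value (pairs of methods sharing no attributes, minus pairs sharing at least one). It illustrates the general family of class-internal metrics the thesis evaluates, not the thesis's own proposals; the class and its members are hypothetical.

```python
# Minimal sketch of an LCOM1-style internal cohesion measure.
from itertools import combinations

def lcom(method_attr_use: dict[str, set[str]]) -> int:
    """Count method pairs sharing no attributes, minus pairs that share
    at least one attribute, floored at zero (higher = less cohesive)."""
    disjoint = sharing = 0
    for (_, a), (_, b) in combinations(method_attr_use.items(), 2):
        if a & b:
            sharing += 1
        else:
            disjoint += 1
    return max(disjoint - sharing, 0)

# Hypothetical class: two methods touch `balance`, one touches only `log`.
print(lcom({
    "deposit":  {"balance"},
    "withdraw": {"balance"},
    "audit":    {"log"},
}))  # -> 1 (two disjoint pairs, one sharing pair)
```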

    Technical Debt in Software Development: Examining Premises and Overcoming Implementation for Efficient Management

    Software development is a unique field of engineering: all software constructs arguably retain their modifiability until client release, no single project stakeholder has exhaustive knowledge about the project, and even this partial knowledge is generally acquired only by project completion. These characteristics imply that the field of software development is subject to design decisions that are known to be sub-optimal: decisions that either deliberately emphasize the interests of particular stakeholders or inadvertently harm the project due to the lack of exhaustive knowledge. Technical debt is a concept that accounts for these decisions and their effects. The concept's intention is to capture, track, and manage the decisions and their products: the affected software constructs. Given the above, it is vital for software development projects to acknowledge technical debt both as an enabler and as a hindrance. This thesis looks into facilitating efficient technical debt management for varying software development projects. In the thesis, an examination of technical debt's role in software development produces the premises on which a management implementation approach is introduced. The thesis begins with a review of motivations. Building on prior research in the fields of technical debt management and software engineering in general, the five motivations establish the premises for technical debt in software development. These include notions of subjectivity in technical debt estimation, update-frequency demands posed on technical debt information, and technical debt's polymorphism. Three research questions are derived from the motivations. They ask for tooling support for technical debt management, for capturing and modelling technical debt propagation, and for characterizing software development environments and their technical debt instances. The questions imply consecutive completion: the tool pursued first would benefit from propagation models (possibly automatically assessable), and the tool's eventual introduction to software development organizations could be assisted by tailoring it to characterizations of the development environment and its technical debt instances. The thesis has seven included publications. In introducing them, the thesis maps their backgrounds to the motivations and their outcomes to the research questions. Amongst the outcomes are the DebtFlag tool for technical debt management, procedures for retrospectively capturing technical debt from software repositories, a procedure for creating technical debt propagation models from these retrospectives, and a multi-national survey characterizing software development environments and their technical debt instances. The thesis concludes that the tooling support, the technical debt propagation modelling, and the characterization of software development environments and their technical debt instances together describe an implementation approach that furthers efficient technical debt management. At the same time, future work is implied, as all of the described efforts need to be continued and extended. Challenges also remain in the introduced approach: an example is the combinatorial explosion of technology and development-context combinations that technical debt propagation modelling needs to consider, all of which have to be managed if exhaustive modelling is desired.
    There is, however, a great deal of motivation to pursue these efforts when one notes again that technical debt is a permanent component of software development and that, when correctly managed, it is a development efficiency mechanism comparable to a financial loan investment.
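    As a rough illustration of the loan analogy the abstract closes on, the toy model below tracks the "interest" paid on a debt-laden design as change requests accumulate. It is a sketch of the analogy only, not the thesis's DebtFlag tool or its propagation models; all figures are hypothetical.

```python
# Toy model of the technical-debt loan analogy: a shortcut saves effort up
# front, but every later change touching the affected constructs pays interest.

def interest_paid(principal_h: float, rate: float, changes: int) -> float:
    """Extra effort (hours) spent so far: each change to debt-affected
    constructs costs an overhead proportional to the debt principal."""
    return principal_h * rate * changes

saving = 40.0  # hypothetical hours saved by taking the shortcut now
for n in (5, 10, 20):
    extra = interest_paid(principal_h=40.0, rate=0.1, changes=n)
    print(f"{n:2d} changes: {extra:5.1f} h interest vs {saving:.0f} h saved")
# The shortcut remains profitable only while interest stays below the saving,
# i.e. while the debt is repaid (refactored away) early enough.
```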

    Geographic Information Systems for Real-Time Environmental Sensing at Multiple Scales

    The purpose of this investigation was to design, implement, and apply a real-time geographic information system for data-intensive water resource research and management. The research presented is part of an ongoing, interdisciplinary research program supporting the development of the Intelligent River® observation instrument. The objectives of this research were to 1) design and describe a software architecture for a streaming environmental sensing information system, 2) implement and evaluate the proposed information system, and 3) apply the information system for monitoring, analysis, and visualization of an urban stormwater improvement project located in the City of Aiken, South Carolina, USA. This research contributes to the fields of software architecture and urban ecohydrology. The first contribution is a formal architectural description of a streaming environmental sensing information system. This research demonstrates the operation of the information system and provides a reference point for future software implementations. Contributions to urban ecohydrology are in three areas. First, a characterization of soil properties for the study region of the City of Aiken, SC is provided. The analysis includes an evaluation of spatial structure for soil hydrologic properties. Findings indicate no detectable structure at the scales explored during the study. The second contribution to ecohydrology comes from a long-term, continuous monitoring program for bioinfiltration basin structures located in the study area. Results include an analysis of soil moisture dynamics based on data collected at multiple depths with high spatial and temporal resolution. A novel metric is introduced to evaluate the long-term performance of bioinfiltration basin structures based on soil moisture observation data. Findings indicate a decrease in basin performance over time for the monitored sites. The third contribution to the field of ecohydrology is the development and application of a spatially and temporally explicit rainfall infiltration and excess model. The model enables the simulation and visualization of bioinfiltration basin hydrologic response at within-catchment scales. The model is validated against observed soil moisture data. Results include visualizations and stormwater volume calculations based on measured versus predicted bioinfiltration basin performance over time.
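    For readers unfamiliar with infiltration-excess modelling, the sketch below applies the classic Horton equation, the textbook scheme on which a rainfall infiltration and excess model of this kind can be built. It is not the study's model; the parameter values and rainfall series are hypothetical.

```python
# Minimal Horton-type infiltration-excess sketch: runoff is the rainfall
# that exceeds an infiltration capacity decaying from f0 toward fc.
import math

def horton_capacity(t_h: float, f0: float, fc: float, k: float) -> float:
    """Infiltration capacity (mm/h) at t_h hours after rainfall onset."""
    return fc + (f0 - fc) * math.exp(-k * t_h)

def runoff_series(rain_mm_h: list[float], f0: float = 60.0,
                  fc: float = 10.0, k: float = 2.0) -> list[float]:
    """Hourly infiltration excess: rainfall above the decaying capacity."""
    return [max(r - horton_capacity(t, f0, fc, k), 0.0)
            for t, r in enumerate(rain_mm_h)]

# Hypothetical 4-hour storm (mm/h): excess appears once capacity has decayed.
print(runoff_series([50.0, 45.0, 30.0, 5.0]))
```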

    The Automated Analysis of Object-Oriented Designs

    This thesis concerns the use of software measures to assess the quality of object-oriented designs. It examines the ways in which design assessment can be assisted by measurement, and the areas in which it cannot. Other work in software measurement looks at defining and validating measures, or building prediction systems. This work is distinctive in that it examines the use of measures to help improve design quality during design time. Evaluating a design based on measurement results requires a means of relating measurement values to particular design problems or quality levels. Design heuristics were used to make this connection between measurement and quality. A survey was carried out to find suggestions for guidelines, rules and heuristics in the OO design literature. This survey resulted in a catalogue of 288 suggestions for OO design heuristics. The catalogue was structured around the OO constructs to which the heuristics relate, and includes information on various heuristic attributes. This scheme is intended to allow suitable heuristics to be quickly located and correctly applied. Automation requires tool support. A tool was built which augments the functionality available in existing tool sets, taking input from multiple sources of design information (e.g., CASE tools and source code); the approach described so far presents a potential method for automated design assessment, and the tool provides the means of automation. An empirical study was then required to consider the efficacy of the method and evaluate the novel features of the tool. A case study was used to explore the approach taken by, and evaluate the effectiveness of, 15 subjects using measures and heuristics to assess the design of a small OO system (15 classes). This study showed that semantic heuristics tended to highlight significant problems, but attempts to automate them often led to false problems being identified. This result, along with a previous finding that around half of quality criteria are not automatically assessable at design time, strongly suggests that people are still a necessary part of design assessment. The main result of the case study was that the subjects correctly identified 90% of the major design problems and were very positive about their experience of using measurement to support design assessment.
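    The following sketch shows one plausible shape for such measurement-plus-heuristics assessment: metric values for each class are matched against threshold-based heuristics that emit design advice. The metrics, thresholds, and heuristic texts are hypothetical illustrations, not entries from the thesis's catalogue of 288 heuristics.

```python
# Minimal sketch: connect per-class metric values to design heuristics.
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    methods: int   # number of methods in the class
    coupling: int  # number of other classes this class depends on

# (check, advice) pairs; thresholds are hypothetical.
HEURISTICS = [
    (lambda m: m.methods > 20,
     "Large class: consider splitting responsibilities"),
    (lambda m: m.coupling > 8,
     "High coupling: reduce dependencies on other classes"),
]

def assess(design: list[ClassMetrics]) -> list[str]:
    """Flag every (class, heuristic) pair whose metric check fires."""
    return [f"{c.name}: {advice}"
            for c in design for check, advice in HEURISTICS if check(c)]

print(assess([ClassMetrics("OrderManager", methods=25, coupling=3),
              ClassMetrics("Invoice", methods=7, coupling=12)]))
```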

    Intelligent Web Services Architecture Evolution Via An Automated Learning-Based Refactoring Framework

    Architecture degradation can have a fundamental impact on software quality and productivity, resulting in an inability to support new features, increasing technical debt, and leading to significant losses. While code-level refactoring is widely studied and well supported by tools, architecture-level refactorings, such as repackaging to group related features into one component, or retrofitting files into patterns, remain expensive and risky. Several domains, such as Web services, depend heavily on complex architectures to design and implement the interface-level operations that companies such as FedEx, eBay, Google, Yahoo and PayPal provide to end-users. The objectives of this work are: (1) to advance our ability to support complex architecture refactoring by explicitly defining Web service anti-patterns at various levels of abstraction, (2) to enable complex refactorings by learning from user feedback and creating reusable/personalized refactoring strategies to augment intelligent designers’ interaction that will guide low-level refactoring automation with high-level abstractions, and (3) to enable intelligent architecture evolution by detecting, quantifying, prioritizing, fixing and predicting design technical debts. We proposed various approaches and tools based on intelligent computational search techniques for (a) predicting and detecting multi-level Web service antipatterns, (b) creating an interactive refactoring framework that integrates refactoring path recommendation, design-level human abstraction, and code-level refactoring automation with user feedback using interactive multi-objective search, and (c) automatically learning reusable and personalized refactoring strategies for Web services by abstracting recurring refactoring patterns from Web service releases. Based on empirical validations performed on both large open source and industrial services from multiple providers (eBay, Amazon, FedEx and Yahoo), we found that the proposed approaches advance our understanding of the correlation and mutual impact between service antipatterns at different levels, revealing when, where and how architecture-level anti-patterns impact the quality of services. The interactive refactoring framework enables, based on several controlled experiments, human-based, domain-specific abstraction and high-level design to guide automated code-level atomic refactoring steps for service decompositions. The reusable refactoring strategy packages recurring refactoring activities into automatable units, improving refactoring path recommendation and further reducing time-consuming and error-prone human intervention. Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn. Full text: https://deepblue.lib.umich.edu/bitstream/2027.42/142810/1/Wang Final Dissertation.pdf
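    As an illustration of rule-based antipattern detection at the interface level, the sketch below flags a "god object Web service": a port type exposing many operations whose names share little vocabulary. The rule, thresholds, and service are hypothetical; the dissertation's detectors are instead derived via computational search and learning from user feedback.

```python
# Minimal sketch of one interface-level Web service antipattern detector.
from dataclasses import dataclass

@dataclass
class PortType:
    name: str
    operations: list[str]

def is_god_object_ws(pt: PortType, max_ops: int = 10,
                     min_name_cohesion: float = 0.2) -> bool:
    """Flag port types exposing many operations whose names share few tokens."""
    if len(pt.operations) <= max_ops:
        return False
    tokens = [set(op.lower().split("_")) for op in pt.operations]
    shared = sum(bool(a & b) for i, a in enumerate(tokens)
                 for b in tokens[i + 1:])
    pairs = len(tokens) * (len(tokens) - 1) / 2
    return shared / pairs < min_name_cohesion  # low lexical cohesion

# Hypothetical service mixing many unrelated concerns.
ops = ["create_user", "delete_user", "ship_order", "track_parcel",
       "rate_quote", "send_invoice", "cancel_booking", "fetch_weather",
       "convert_currency", "validate_address", "render_report", "archive_log"]
print(is_god_object_ws(PortType("MegaService", ops)))  # -> True
```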

    New Fundamental Technologies in Data Mining

    The progress of data mining technology and its broad popularity establish a need for a comprehensive text on the subject. The book series entitled "Data Mining" addresses this need by presenting in-depth descriptions of novel mining algorithms and many useful applications. In addition to supporting a deep understanding of each section, the two books present useful hints and strategies for solving the problems in the following chapters. The contributing authors have highlighted many future research directions that will foster multi-disciplinary collaborations and hence lead to significant development in the field of data mining.

    Reengineering of Legacy Systems to Distributed Environments.

    The object-oriented paradigm and client/server and distributed technologies have become widely used in the last decade. There is an increasing interest in migrating and reengineering legacy systems to these new hardware technologies and software development paradigms. Software engineers who wish to reengineer such legacy systems face challenges, such as lack of documentation and programs that are difficult to comprehend. Middleware technologies such as CORBA and DCOM make the development of new distributed systems, as well as the migration of legacy systems to distributed platforms, more feasible. Distribution of a system consists of two parts: (1) subsystem decomposition and (2) allocation of the subsystems to different sites. In this research, we define a reengineering environment that assists with the migration of legacy systems to distributed environments. We define a reengineering methodology that uses reverse engineering, software metrics, clustering, and data mining to migrate legacy systems to object-based distributed environments. The reengineering environment includes the methodology and an integrated set of tools that support the implementation of the methodology. The methodology consists of multiple phases. First, we use reverse engineering techniques for program comprehension and design recovery. We then decompose the system into a hierarchy of subsystems by defining relationships between the entities of the underlying paradigm of the legacy system. The decomposition is driven by data mining, software metrics, and clustering techniques. Next, if the underlying paradigm of the legacy system is not object-based, we perform object-based adaptations on the subsystems. We then create components by wrapping objects and defining an interface. Finally, we allocate components to different sites by specifying the requirements of the system and the characteristics of the network as an integer-programming model that minimizes remote communication. We use middleware technologies for the implementation of the distributed object system.
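    To make the allocation step concrete, the sketch below assigns components to sites so that communication between components on different sites is minimized, using exhaustive search over a toy instance in place of the integer-programming model described; the traffic volumes and capacity limit are hypothetical.

```python
# Minimal sketch of component-to-site allocation minimizing remote traffic.
from itertools import product

def remote_traffic(assign: tuple[int, ...], traffic: dict) -> float:
    """Sum traffic over component pairs placed on different sites."""
    return sum(v for (a, b), v in traffic.items() if assign[a] != assign[b])

def allocate(n_components: int, n_sites: int, traffic: dict, cap: int):
    """Exhaustively pick the capacity-respecting assignment with the least
    remote communication (a brute-force stand-in for the ILP formulation)."""
    feasible = (a for a in product(range(n_sites), repeat=n_components)
                if all(a.count(s) <= cap for s in range(n_sites)))
    return min(feasible, key=lambda a: remote_traffic(a, traffic))

# Hypothetical pairwise message volumes between four components.
traffic = {(0, 1): 100, (1, 2): 5, (2, 3): 80, (0, 3): 2}
best = allocate(4, 2, traffic, cap=2)
print(best, remote_traffic(best, traffic))  # -> (0, 0, 1, 1) 7
```

    The capacity constraint keeps the trivial everything-on-one-site answer out of the feasible set; real formulations would add site resources, latency, and replication constraints to the objective.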