
    Training, Quality Assurance Factors, and Tools Investigation: a Work Report and Suggestions on Software Quality Assurance

    Several research tasks on software quality assurance engineering at NASA Johnson were previously conducted, yielding a set of observations and possible suggestions. These tasks are briefly described, followed by a discussion of the role of software quality assurance in software engineering, together with some observations and suggestions. A training program for software quality assurance engineers is outlined, and lists of assurance factors and quality factors are included. Finally, a process model that can be used for searching out and collecting software quality assurance tools is presented.

    Scanamorphos: a map-making software for Herschel and similar scanning bolometer arrays

    Scanamorphos is one of the public software packages available to post-process scan observations performed with the Herschel photometer arrays. This post-processing mainly consists of subtracting the total low-frequency noise (both its thermal and non-thermal components), masking high-frequency artefacts such as cosmic ray hits, and projecting the data onto a map. Although it was developed for Herschel, it is also applicable, with minimal adjustment, to scan observations made with other imaging arrays subject to low-frequency noise, provided they contain sufficient redundancy; it was successfully applied to P-Artemis, an instrument operating on the APEX telescope. Contrary to matrix-inversion software and high-pass filters, Scanamorphos does not assume any particular noise model and does not apply any Fourier-space filtering to the data; it is an empirical tool relying purely on the redundancy built into the observations, taking advantage of the fact that each portion of the sky is sampled multiple times by multiple bolometers. It is interactive in the sense that the user may optionally visualize and control results at each intermediate step, but the processing is fully automated. This paper describes the principles and algorithm of Scanamorphos and presents several examples of application. (This is the final version as accepted by PASP on July 27, 2013; a copy with much better-quality figures is available at http://www2.iap.fr/users/roussel/herschel.)
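    The redundancy principle lends itself to a compact illustration. The sketch below is a hand-rolled toy destriper, not the Scanamorphos algorithm itself: the scan pattern, the boxcar smoothing, and the alternation between map-making and drift estimation are all simplifying assumptions chosen for brevity. Each bolometer's slow drift is estimated from the residuals left after subtracting the current map estimate, which is itself built from all bolometers:

```python
import numpy as np

rng = np.random.default_rng(0)

n_bolo, n_samp, n_pix = 4, 2000, 200
# Each bolometer sweeps the same sky with a different offset, so every sky
# pixel is sampled multiple times by multiple bolometers (the redundancy).
pix = np.array([(np.arange(n_samp) // 10 + 13 * b) % n_pix
                for b in range(n_bolo)])

sky = np.sin(2 * np.pi * np.arange(n_pix) / n_pix) ** 2       # "true" map
t = np.arange(n_samp)
drift = np.array([0.5 * np.sin(2 * np.pi * t / n_samp + b)    # slow noise
                  for b in range(n_bolo)])
data = sky[pix] + drift + 0.05 * rng.standard_normal((n_bolo, n_samp))

def make_map(timelines, pixels, n_pix):
    """Project timelines onto a map by averaging all samples per pixel."""
    m, hits = np.zeros(n_pix), np.zeros(n_pix)
    np.add.at(m, pixels.ravel(), timelines.ravel())
    np.add.at(hits, pixels.ravel(), 1.0)
    return m / hits

def lowpass(x, width=301):
    """Boxcar smoothing: a crude stand-in for 'keep only the slow part'."""
    return np.convolve(x, np.ones(width) / width, mode="same")

cleaned = data.copy()
for _ in range(5):                        # alternate map-making and drift fits
    m = make_map(cleaned, pix, n_pix)     # sky estimate from all bolometers
    resid = cleaned - m[pix]              # what the common sky cannot explain
    for b in range(n_bolo):
        cleaned[b] -= lowpass(resid[b])   # remove each bolometer's slow drift

print("residual RMS before:", np.std(data - sky[pix]))
print("residual RMS after: ", np.std(cleaned - sky[pix]))
```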

    What can screen capture reveal about students’ use of software tools when undertaking a paraphrasing task?

    Previous classroom observations, and examination of students’ written drafts, had suggested that when summarising or paraphrasing source texts, some of our students were using software tools (for example the copy-paste function and synonym lookup) in possibly unhelpful ways. To test these impressions we used screen capture software to record 20 university students paraphrasing a short text using the word-processing package on a networked PC, and analysed how they utilised software to fulfil the task. Participants displayed variable proficiency in using word-processing tools, and very few accessed external sites. The most frequently enlisted tool was the synonym finder. Some of the better writers (assessed in terms of their paraphrase quality) made little use of software aids. We discuss how teachers of academic writing could help students make more efficient and judicious use of commonly available tools, and suggest further uses of screen capture in teaching and researching academic writing.

    Software quality attribute measurement and analysis based on class diagram metrics

    Software quality measurement lies at the heart of the quality engineering process. Quality measurement for object-oriented artifacts has become the key to ensuring high-quality software. Both researchers and practitioners are interested in measuring software product quality for improvement. It has recently become more important to consider the quality of products at the early phases, especially at the design level, to ensure that coding and testing can be conducted more quickly and accurately. The research work on measuring quality at the design level progressed in a number of steps. The first step was to discover the correct set of metrics to measure design elements at the design level. Chidamber and Kemerer (C&K) formulated the first suite of OO metrics, and other researchers extended this suite with additional metrics. The next step was to collect these metrics using software tools. A number of tools were developed to measure the different suites of metrics; some represent their measurements as ordinary numbers, others in 3D visual form. In recent years, researchers developed software quality models which went a step further by computing quality attributes from collected design metrics. In this research we extend the software quality modelers’ work by adding a quality attribute prioritization scheme and a design metric analysis layer. Our work focuses on the class diagram, the most fundamental constituent of any object-oriented design. Using earlier researchers’ work, we extract a class diagram’s metrics and compute its quality attributes. We then analyze the results and inform the user, presenting our figures and observations in the form of an analysis report. Our target user could be a project manager, a software quality engineer, or a developer who needs to improve the class diagram’s quality. We closely examine the design metrics that affect quality attributes, pinpoint the weaknesses in the class diagram based on these metrics, inform the user about the problems that emerge from these classes, and advise him/her on how to improve the overall design quality. We consider six basic quality attributes of the whole class diagram: “Reusability”, “Functionality”, “Understandability”, “Flexibility”, “Extendibility”, and “Effectiveness”. We allow the user to set priorities on these quality attributes in a sequential manner based on his/her requirements. Using a geometric series, we calculate a weighted average value for the arranged list of quality attributes; this weighted average indicates the overall quality of the product, the class diagram. Our experimental work gave us much insight into the meanings of, and dependencies between, design metrics and quality attributes, which helped us refine our analysis technique and give more concrete observations to the user.
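    As a rough illustration of such a prioritization scheme (the abstract does not give the exact formula, so the common ratio and the normalisation below are assumptions, and the attribute scores are made-up example values), a prioritized list of attribute scores can be collapsed into a single weighted average like this:

```python
def weighted_quality(scores_by_priority, ratio=0.5):
    """Combine quality-attribute scores into one value, weighting each
    attribute by a geometric series so that higher-priority attributes
    (earlier in the list) count more. The ratio 0.5 is an assumption."""
    raw = [ratio ** i for i in range(len(scores_by_priority))]  # 1, r, r^2, ...
    total = sum(raw)
    weights = [w / total for w in raw]        # normalise so weights sum to 1
    return sum(w * s for w, s in zip(weights, scores_by_priority))

# Example: the user prioritises Reusability > Functionality > Understandability
# > Flexibility > Extendibility > Effectiveness (scores are hypothetical).
scores = [0.8, 0.6, 0.9, 0.5, 0.7, 0.4]
print(round(weighted_quality(scores), 3))     # overall class-diagram quality
```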

    Social Network Analysis of Open Source Projects

    A large amount of widespread software used today is either open source or includes open-source components. Much open-source software has proved to be of very high quality despite being developed through unconventional methods. The success of open-source products has sparked an interest in the software industry in why these projects are so successful and how this seemingly unstructured development process can yield such great results. This thesis presents a study of the projects hosted by one of the largest and most well-known open-software communities in existence. The study involves gathering developer collaboration data and then using social network analysis to find trends in the data that might eventually be used to create benchmarks for open-source software development. The results show that several interesting trends can be found. By applying social network analysis to the collaboration of open-source developers across a wide variety of projects, a few observations can be made that give valuable insight into the development process of open-source projects.
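    The kind of analysis described can be sketched with a small example. The edge list below is made up, and the choice of centrality measures is an illustrative assumption; the networkx library does the graph work:

```python
# A toy collaboration graph: an edge links two developers who have worked
# on the same project; the weight counts how many projects they share.
# Requires: pip install networkx
import networkx as nx

edges = [
    ("alice", "bob", 3), ("alice", "carol", 1), ("bob", "carol", 2),
    ("carol", "dave", 1), ("dave", "erin", 4), ("erin", "bob", 1),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Centrality measures highlight developers who anchor or bridge sub-communities.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)   # unweighted, for simplicity

for dev in sorted(G.nodes):
    print(f"{dev}: degree={degree[dev]:.2f}, betweenness={betweenness[dev]:.2f}")
```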

    Experiences of software engineering training

    In this paper some experiences from the laboratories of an Advanced Software Engineering course are presented. The laboratory consists of seven exercises, whose subjects are: requirements engineering, system design with UML [1], reuse, precise modelling with the Object Constraint Language (OCL [2]), code coverage testing, memory leak detection, and improving application efficiency. For each laboratory exercise a set of training materials and instructions was developed. These Internet materials are stored on a department server and are available to all students and lecturers of the course; Rational Suite tools [3] are used in the laboratory. The goal of introducing the Internet materials was to improve the quality of SE education. Some experiences and observations are presented, and an evaluation of students' results is also given.

    Empirical array quality weights in the analysis of microarray data

    BACKGROUND: Assessment of array quality is an essential step in the analysis of data from microarray experiments. Once detected, less reliable arrays are typically excluded or "filtered" from further analysis to avoid misleading results. RESULTS: In this article, a graduated approach to array quality is considered, based on the empirical reproducibility of the gene expression measures from replicate arrays. Weights are assigned to each microarray by fitting a heteroscedastic linear model with shared array variance terms. A novel gene-by-gene update algorithm is used to efficiently estimate the array variances. The inverse variances are used as weights in the linear model analysis to identify differentially expressed genes. The method successfully assigns lower weights to less reproducible arrays from different experiments. Down-weighting the observations from suspect arrays increases the power to detect differential expression, and in smaller experiments this approach outperforms the usual method of filtering the data. The method is available in the limma software package, which is implemented in the R software environment. CONCLUSION: This method complements existing normalisation and spot quality procedures, and allows poorer-quality arrays, which would otherwise be discarded, to be included in an analysis. It is applicable to microarray data from experiments with some level of replication.
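    The inverse-variance weighting idea can be sketched outside R as well. The snippet below is a simplified numpy stand-in for the concept, not the limma implementation: it uses a crude pooled moment estimator rather than the paper's gene-by-gene update, and the data is simulated, so all names and numbers here are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_arrays = 500, 6
true_sd = np.array([1.0, 1.0, 1.0, 1.0, 3.0, 1.0])    # array 4 is poor quality

mu = rng.normal(8, 2, size=(n_genes, 1))              # gene expression levels
data = mu + rng.normal(0, true_sd, size=(n_genes, n_arrays))

# Residuals of each array from the gene-wise mean across replicate arrays.
resid = data - data.mean(axis=1, keepdims=True)

# Crude per-array variance estimate pooled over genes; its inverse is the weight.
array_var = (resid ** 2).mean(axis=0)
weights = 1.0 / array_var
weights /= weights.mean()                             # scale to average 1

print("estimated weights:", np.round(weights, 2))     # suspect array down-weighted

# Weighted gene summaries lean on the reliable arrays instead of discarding any.
weighted_mean = (data * weights).sum(axis=1) / weights.sum()
```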

    An investigation into manufacturing execution systems

    Hardware and software developments of this decade have exposed a hiatus between business/management applications and process control in heavy industry in the implementation of computer technology. This document examines the development of discrete manufacturing and of relevant implementations of computing. It seeks to examine and clarify the issues involved in a perceived current drive to bridge this gap, to integrate all the systems in a manufacturing enterprise in a Manufacturing Execution System (MES), in order to address two hypotheses: 1) that overseas trends towards the development of manufacturing execution systems have application in the Australian industrial context; 2) that significant gains in production efficiency and quality may be achieved by the application of an MES. It became apparent early in this study that any understanding of the function of an MES requires an understanding of the context in which it works. Following the Introduction, therefore, Section Two contains a brief overview of the history and development of modern industry with particular attention to the subject of inventory and inventory management. Since the 1970s, three main streams of change in manufacturing management methodology have developed; these are dealt with in some detail in Section Three. Section Four outlines a variety of areas of increasing computerisation on the shop floor, while Section Five addresses the integration of the whole system, management and shop floor, seeking to demonstrate the complexity of the subject and to discover current trends and developments. Section Five includes a survey of some of the software and hardware options currently available, and Section Six summarises the work and presents some observations and conclusions. Three appendices provide more detailed information on MES software availability, pricing and market penetration.

    Agile Success Factors: a qualitative study about what makes agile projects successful

    Various studies show great improvements in software projects when agile software development is applied. However, there are still remaining problems, and there are also reports of project failures in the agile community. This raises the question: what factors distinguish successful agile software projects and teams from less successful ones? The authors of the Swiss Agile Study wanted to shed some light on these questions. We conducted a qualitative interview study with eight successful agile IT companies, asking them about the essential success factors in their agile projects. The findings fall into three categories: engineering practices, management practices, and the values, or culture, these teams live by. On the engineering level, we found that these companies apply many technical practices in a very disciplined way, with a strong emphasis on quality-assuring practices such as unit testing, continuous integration and automation, and clean coding. On the management level, it was pointed out that clear requirements, verified and validated in very close collaboration with the customer, are essential; the same was true of very close communication within the team. We also found that each successful team had a kind of Agile Champion who motivated and inspired the team to use agility. On the value level, we found that successful agile teams live a culture of openness and transparency; they establish an agile culture at least at the team and organizational level (we found only one company that had established the agile method across the whole company); and they live an attitude of craftsmanship, being proud of their work and striving for high-quality work. Finally, we noticed that, while putting strong emphasis on the above practices, mature agile teams start adapting these practices and the agile process to their needs when they notice that some of the practices do not work or that following the recipe is insufficient. A constant probing, sensing, and appropriate responding was observed; this is the typical pattern for moving forward in complex adaptive systems. Applying a sense-making methodology such as the Cynefin framework theoretically explains the observations in the present study. Companies should therefore be aware that software projects are often located in the complex domain, i.e. can be modeled as complex adaptive systems. These kinds of problems require emergent practices rather than good or best practices, and an understanding of the implications of complexity theory is of merit.