Reading students’ minds: design assessment in distance education
This paper considers the design of assessment for students of design according to behaviourist versus experiential pedagogical approaches, relating these to output-oriented as opposed to process-oriented assessment methods. It is part case study and part recognition of the importance of process in design education, and of how this might be applied in other disciplines more generally through the use of visual thinking and assessment. Drawing on experience gained from The Open University’s entry-level design course, U101: Design Thinking, the main assessment software (CompendiumDS) is described and presented as an alternative to ‘convergent endpoint’ artefacts of assessment. It is argued that the software and assessment design allow the evaluation of ‘unseen thinking’, providing an immediate focus on process rather than on deterministic or behaviourist outcomes alone. Moreover, this evaluation can be applied at scale without extensive changes to existing systems, and may offer a compromise between measuring outcomes and valuing student-centred learning experiences.
Methods to evaluate lightweight software process assessment methods based on evaluation theory and engineering design principles
Achieving a mature software development process has become essential for many software organizations. A mature development process permits software organizations to provide their customers with a high-quality software product delivered on time and within budget.
Software organizations have been struggling for decades to improve the quality of their products by improving their software development processes. Designing an improvement program for a software development process is a demanding and complex task. This task consists of two main processes: the assessment process and the improvement process. A successful improvement process requires first a successful assessment; failing to assess the organization's software development process could create unsatisfactory results.
Software process assessment (SPA) can be used either to determine the capability of another organization for subcontracting purposes, or to determine and understand the status of an organization's current processes in order to initiate an improvement process. The increasing number of assessment approaches available, the ISO 15504 standard that sets out the requirements for process assessment, and the popularity of the CMMI model all illustrate the relevance of software process assessment for the software development industry.
Currently, several methods are available to assess the maturity and capability of a software development process based on well-known software process assessment and improvement frameworks such as CMMI and ISO 15504. The success of these assessment methods and improvement frameworks is supported by post-development studies on their validity, reliability and effectiveness. Unfortunately, many researchers consider such methods too heavyweight to implement in small and medium-sized enterprises (SMEs). As a result, researchers have studied process assessment and improvement in SME organizations and proposed assessment methods, usually called lightweight SPA methods, suited to those organizations' needs.
Current research in the SPA field focuses on proposing convenient and easy-to-use assessment methods, without investigating to what extent the design of these methods is related to the engineering design perspective. This unclear alignment with the engineering discipline raises questions about the relevance and representativeness of the results produced by these methods from an engineering viewpoint. Moreover, although numerous SPA methods are currently available which offer help and guidance, they only partially address the evidence found essential for achieving SPA success.
This thesis presents and discusses the evaluation of lightweight SPA methods. The evaluation is two-fold: evaluating the SPA methods design using a top-down approach and based on engineering viewpoints and evaluating the success of SPA methods using a bottom-up approach. The evaluation theory concepts are used as a framework to formally develop both evaluation methods.
To develop the first evaluation method, using the top-down approach, an exploratory analytical study of SPA methods from an engineering design viewpoint was conducted, with Vincenti's classification used as the analytical tool. The aim of this exploratory study is to place the existing SPA methods within an engineering design framework and to use that framework as a guideline for designing new SPA methods. To develop the second evaluation method, using the bottom-up approach, a systematic literature review was conducted to extract the evidence for the success of SPA methods, based on requirements, observations, lessons learned and recommendations formulated within industry and published in books, conferences and journals.
The development process of the two evaluation methods was then verified using a set of verification criteria, and the proposed evaluation methods were tested in three case studies. The first evaluation method would be useful mainly to designers of new SPA methods during the design phase, while the second would be useful to both designers and practitioners of SPA methods to verify the success of the assessment method in question.
This research project forms an entry point for studying the alignment of SPA method design with engineering design principles, and sheds light on achieving successful assessment results by studying the success evidence that assessment methods should support, considered separately from the improvement process. The evaluation methods proposed in this thesis are of particular benefit for SPA methods designed for SME organizations because, unlike the well-known methods, these assessment methods lack comprehensive studies of their reliability and effectiveness.
Risk identification in the early design stage using thermal simulations—A case study
The rising temperatures predicted by the UK Climate Impacts Programme (UKCIP) underline the risk of overheating and a potential increase in cooling loads in most UK dwellings. This could also increase the possibility of failure in building performance evaluation methods and add further uncertainty to decision-making in low-carbon building design. This paper uses a 55-unit residential project in Cardiff, UK as a case study to evaluate the potential of thermal simulations to identify risk in the early design stage. Overheating, increased energy loads, carbon emissions, and thermal bridges are considered as potential risks in this study. DesignBuilder (DesignBuilder Software Ltd., Stroud, UK) was the dynamic thermal simulation software used in this research. The simulations compare results for the present, 2050, and 2080 time slices and quantify the overall cooling and heating loads required to keep the operative temperature within the comfort zone. Overall carbon emissions are also calculated, and a considerable reduction in the future is predicted. Further analysis was carried out with THERM (Lawrence Berkeley National Laboratory, Berkeley, CA, USA) and Psi THERM (Passivate, London, UK) to evaluate thermal bridge risk at the most common junctions of the case study; the results reveal the potential of thermal assessment methods to improve design details before the start of the construction stage.
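The overheating screen that such simulations inform can be sketched very simply. The following is a minimal, hedged example assuming a CIBSE-style criterion (operative temperature above 28 °C for more than 1% of occupied hours); the function name, thresholds and the temperature series are illustrative, not project data or DesignBuilder output.

```python
def overheating_risk(op_temps_c, threshold_c=28.0, limit_fraction=0.01):
    """Return (fraction of hours above threshold, risk flag) for a series
    of hourly operative temperatures in degrees Celsius."""
    if not op_temps_c:
        raise ValueError("empty temperature series")
    hot = sum(1 for t in op_temps_c if t > threshold_c)
    frac = hot / len(op_temps_c)
    return frac, frac > limit_fraction

# Illustrative occupied-hour series: 95 comfortable hours, 5 hot hours
temps = [24.0] * 95 + [29.5] * 5
frac, risky = overheating_risk(temps)
print(f"{frac:.1%} of hours above 28 °C -> risk: {risky}")
```

Running the same check on present-day, 2050 and 2080 weather files would reproduce, in miniature, the time-slice comparison described in the abstract.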
Uncertainty explicit assessment of off-the-shelf software: A Bayesian approach
Assessment of software COTS components is an essential part of component-based software development. Poorly chosen components may lead to solutions of low quality that are difficult to maintain. The assessment may be based on incomplete knowledge about the COTS component itself and about other aspects (e.g. the vendor’s credentials) which may affect the decision to select particular COTS component(s). We argue in favor of assessment methods in which uncertainty is explicitly represented (‘uncertainty explicit’ methods) using probability distributions. We provide details of a Bayesian model which can be used to capture the uncertainties in the simultaneous assessment of two attributes, thus also capturing the dependencies that might exist between them. We also provide empirical data from the use of this method for the assessment of off-the-shelf database servers, which illustrate the advantages of ‘uncertainty explicit’ methods over conventional methods of COTS component assessment, which assume that at the end of the assessment the values of the attributes are known with certainty.
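The flavor of a joint 'uncertainty explicit' assessment can be sketched as follows. This is not the paper's exact model: it assumes two binary attributes (say, a COTS component passing a performance test and a reliability test), a Dirichlet prior over the four joint outcomes to capture dependence between the attributes, and invented test counts.

```python
# Joint outcomes ordered (pass, pass), (pass, fail), (fail, pass), (fail, fail)
prior = [1.0, 1.0, 1.0, 1.0]     # uniform Dirichlet prior over joint outcomes
observed = [30, 5, 3, 2]         # hypothetical test results for a candidate
posterior = [a + n for a, n in zip(prior, observed)]  # conjugate update

# Posterior mean probability that BOTH attributes are adequate
p_both = posterior[0] / sum(posterior)
print(f"P(both attributes pass) = {p_both:.3f}")

# Dependence shows up as P(both) differing from the product of the marginals
p1 = (posterior[0] + posterior[1]) / sum(posterior)
p2 = (posterior[0] + posterior[2]) / sum(posterior)
print(f"Product of marginals    = {p1 * p2:.3f}")
```

The gap between the joint probability and the product of marginals is exactly the dependency information that assessing each attribute in isolation would discard.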
Integrating automated support for a software management cycle into the TAME system
Software managers are interested in the quantitative management of software quality, cost and progress. An integrated software management methodology, which can be applied throughout the software life cycle for any number of purposes, is required. The TAME (Tailoring A Measurement Environment) methodology is based on the improvement paradigm and the goal/question/metric (GQM) paradigm. This methodology helps generate a software engineering process and measurement environment based on the project characteristics. SQMAR (software quality measurement and assurance technology) is a software quality metric system and methodology applied to the development processes. It is based on the feed-forward control principle: quality target setting is carried out before the plan-do-check-action activities are performed. These methodologies are integrated to realize goal-oriented measurement, process control and visual management. A metric setting procedure based on the GQM paradigm, a management system called the software management cycle (SMC), and its application to a case study based on NASA/SEL data are discussed. The expected effects of SMC are quality improvement, managerial cost reduction, accumulation and reuse of experience, and a highly visual management reporting system.
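A GQM-style metric setting procedure refines a goal into questions and each question into metrics. The following minimal sketch uses an invented quality goal; it illustrates the goal/question/metric hierarchy only and does not reproduce the TAME or SQMAR tooling.

```python
# Hypothetical GQM tree: one goal, refined into questions, each with metrics
gqm = {
    "goal": "Improve defect detection during development",
    "questions": {
        "Q1: How effective are reviews?": [
            "defects found per review hour",
            "review coverage (% of modules reviewed)",
        ],
        "Q2: Is quality on target before test?": [
            "pre-test defect density (defects/KLOC)",
        ],
    },
}

def metrics_for(goal_tree):
    """Flatten the goal -> question -> metric hierarchy into a metric list."""
    return [m for metrics in goal_tree["questions"].values() for m in metrics]

for metric in metrics_for(gqm):
    print(metric)
```

In a feed-forward setting of the SQMAR kind, targets for each derived metric would be fixed before development begins, so that plan-do-check-action activities measure against them rather than defining them after the fact.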
Empirical evaluation of accuracy of mathematical software used for availability assessment of fault-tolerant computer systems
Dependability assessment is typically based on complex probabilistic models. Markov and semi-Markov models are widely used to model the dependability of complex hardware/software architectures. Solving such models, especially when they are stiff, is not trivial and is usually done using sophisticated mathematical software packages. We report a practical experience of comparing the accuracy of solutions of stiff Markov models obtained using well-known commercial and research software packages. The study is conducted on a contrived but realistic case study of a computer system with hardware redundancy and diverse software, under the realistic assumption that the software failure rate may vary over time. We observe that the disagreement between the solutions obtained with the different packages may be very significant. We discuss these findings and directions for future research.
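The kind of model being solved, and why stiffness matters, can be illustrated with the simplest availability model: a two-state (up/down) continuous-time Markov chain with constant failure rate lam and repair rate mu. The rates below are invented; the closed-form transient solution is one way to cross-check a numerical solver's output.

```python
import math

def availability(t, lam, mu):
    """P(system up at time t), starting up, for the 2-state up/down CTMC:
    A(t) = mu/(lam+mu) + (lam/(lam+mu)) * exp(-(lam+mu) * t)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

# Stiff in miniature: failure and repair rates differ by three orders
# of magnitude (illustrative values, per hour)
lam, mu = 1e-3, 1.0
print(f"A(1 h)     = {availability(1.0, lam, mu):.6f}")
print(f"A(steady)  = {mu / (lam + mu):.6f}")
```

For models with more states and time-varying software failure rates, as in the abstract, no such closed form exists and stiff numerical integration is required, which is precisely where the packages compared in the study can disagree.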
Measuring Software Process: A Systematic Mapping Study
Context: Measurement is essential to reach predictable performance and high-capability processes. It provides support for better understanding, evaluation, management, and control of the development process and project, as well as the resulting product. It also enables organizations to improve and predict their processes' performance, which places organizations in a better position to make appropriate decisions. Objective: This study aims to understand the measurement of the software development process, to identify studies, create a classification scheme based on the identified studies, and then to map such studies into the scheme to answer the research questions. Method: Systematic mapping is the selected research methodology for this study. Results: A total of 462 studies are included and classified into four topics with respect to their focus and into three groups based on publishing date. Five abstractions and 64 attributes were identified, and 25 methods/models and 17 contexts were distinguished. Conclusion: Capability and performance were the most measured process attributes, while effort and performance were the most measured project attributes. Goal Question Metric and Capability Maturity Model Integration were the main methods and models used in the studies, whereas agile/lean development and small/medium-sized enterprises were the most frequently identified research contexts.
Funding: Ministerio de Economía y Competitividad TIN2013-46928-C3-3-R, TIN2016-76956-C3-2-R, TIN2015-71938-RED