CSM-424: Evolutionary Complexity: Investigations into Software Flexibility
Flexibility has been hailed as a desirable quality since the earliest days of software engineering. Classic and modern literature suggests that particular programming paradigms, architectural styles and design patterns are more “flexible” than others, but stops short of proposing objective criteria for measuring such claims.
We suggest that flexibility can be measured by applying notions of measurement from computational complexity to the software evolution process. We define evolution complexity (EC) metrics and demonstrate that:
(a) EC can be used to establish informal claims on software flexibility;
(b) EC can be constant or linear in the size of the change;
(c) EC can be used to choose the most flexible software design policy.
We describe a small-scale experiment designed to test these claims.
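A toy illustration of the idea of evolution complexity as the number of edit sites a single change requires. The scenario and all names below are invented for illustration and are not taken from the report:

```python
# Toy illustration of evolution complexity (EC): count the edit sites that
# one change requires under two designs.  Scenario and names are invented
# for illustration; they are not from the report.

# Design A: variant behaviour dispatched through a registry.  Adding a new
# variant means registering one function -- constant EC, independent of how
# many variants or clients already exist.
area_of = {"circle": lambda r: 3.14159 * r * r}

def add_variant_registry(name, fn):
    area_of[name] = fn
    return 1  # edit sites touched

# Design B: each of k client functions re-implements the dispatch with its
# own if/elif chain.  Adding a variant means editing every client, so EC is
# linear in k.
def add_variant_ifchains(k_clients):
    return k_clients  # edit sites touched

assert add_variant_registry("square", lambda s: s * s) == 1
assert add_variant_ifchains(20) == 20
```

Under this reading, "Design A is more flexible than Design B" becomes the checkable claim that A's EC is constant while B's grows with system size.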
Acceleration computing process in wavelength scanning interferometry
Optical interferometry has been widely explored for surface measurement owing to its non-contact, high-accuracy interrogation. Several interferometric techniques, such as white light interferometry and wavelength scanning interferometry (WSI), can measure both rough and smooth surfaces. WSI can measure large discontinuous surface profiles without phase ambiguity problems. However, WSI usually needs to capture hundreds of interferograms at different wavelengths in order to evaluate the surface finish of a sample, and evaluating this large amount of data takes a long time with traditional sequential CPU programming. This paper presents a parallel programming model that exploits data parallelism to accelerate the analysis of the captured data. The parallel implementation is based on the CUDA™ C programming model developed by NVIDIA. Additionally, this paper explains the mathematical algorithm used to evaluate the surface profiles. The computing time and accuracy obtained from the CUDA program, running on a GeForce GTX 280 graphics processing unit (GPU), were compared with those obtained from a sequential Matlab program running on an Intel® Core™2 Duo CPU. The results of measuring a step height sample show that the parallel programming capability of the GPU can greatly accelerate floating-point throughput compared to a multicore CPU.
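A rough sketch of the data parallelism described above: in WSI each pixel records an intensity signal across the wavelength scan, and every pixel can be evaluated independently. The NumPy example below shows this per-pixel structure; the fringe model, scan range and all numbers are assumptions made for this example and are not taken from the paper:

```python
# Illustrative sketch of per-pixel data parallelism in WSI analysis.  The
# fringe model, scan range and all parameters are assumptions for this
# example, not values from the paper.
import numpy as np

frames, h, w = 256, 4, 4                    # interferograms per scan, image size
k = np.linspace(5.0, 8.0, frames)           # assumed wavenumber scan (1/um)
dk = k[1] - k[0]

# Synthetic interferogram stack: each pixel's intensity oscillates with the
# scan at a rate proportional to its surface height.
heights = np.linspace(5.0, 20.0, h * w).reshape(h, w)   # um, ground truth
stack = 1 + np.cos(2 * k[:, None, None] * heights)      # shape (frames, h, w)

# Every pixel is independent, so the whole image is evaluated with one
# vectorised FFT along the scan axis -- the same structure that a
# one-thread-per-pixel CUDA kernel exploits on the GPU.
spectrum = np.abs(np.fft.rfft(stack, axis=0))
spectrum[0] = 0                              # discard the DC term
peak = spectrum.argmax(axis=0)               # dominant fringe frequency (bins)
est = peak * np.pi / (frames * dk)           # height estimate per pixel (um)
```

Because no pixel depends on any other, the same evaluation maps naturally to one GPU thread per pixel, which is the source of the speed-up the paper reports.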
Computational Complexity Results for Genetic Programming and the Sorting Problem
Genetic Programming (GP) has found various applications. Understanding this
type of algorithm from a theoretical point of view is a challenging task. The
first results on the computational complexity of GP have been obtained for
problems with isolated program semantics. With this paper, we push forward the
computational complexity analysis of GP on a problem with dependent program
semantics. We study the well-known sorting problem in this context and analyze
rigorously how GP can deal with different measures of sortedness.
Comment: 12 pages
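Two sortedness measures of the kind analysed in this line of work can be stated compactly. The particular definitions below (inversion count and Hamming distance to the sorted sequence) are illustrative assumptions; the paper's exact measures may differ:

```python
# Two illustrative sortedness measures that can serve as fitness functions
# for GP on the sorting problem.  These definitions are assumptions; the
# paper's exact measures may differ.

def inversions(xs):
    """INV-style measure: number of out-of-order pairs; 0 iff xs is sorted."""
    return sum(1 for i in range(len(xs))
                 for j in range(i + 1, len(xs)) if xs[i] > xs[j])

def hamming_to_sorted(xs):
    """HAM-style measure: positions whose element differs from sorted(xs)."""
    return sum(a != b for a, b in zip(xs, sorted(xs)))

assert inversions([3, 1, 2]) == 2        # pairs (3,1) and (3,2)
assert hamming_to_sorted([3, 1, 2]) == 3
assert inversions([1, 2, 3]) == 0
```

A GP system minimising such a measure gets a graded signal toward sorted outputs, which is what makes the fitness landscape of the sorting problem amenable to rigorous runtime analysis.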
Towards a Theory of Software Development Expertise
Software development includes diverse tasks such as implementing new
features, analyzing requirements, and fixing bugs. Being an expert in those
tasks requires a certain set of skills, knowledge, and experience. Several
studies investigated individual aspects of software development expertise, but
what is missing is a comprehensive theory. We present a first conceptual theory
of software development expertise that is grounded in data from a mixed-methods
survey with 335 software developers and in literature on expertise and expert
performance. Our theory currently focuses on programming, but already provides
valuable insights for researchers, developers, and employers. The theory
describes important properties of software development expertise and which
factors foster or hinder its formation, including how developers' performance
may decline over time. Moreover, our quantitative results show that developers'
expertise self-assessments are context-dependent and that experience is not
necessarily related to expertise.
Comment: 14 pages, 5 figures, 26th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2018), ACM, 2018
Development of a novel virtual coordinate measuring machine
Existing VCMMs (virtual coordinate measuring machines) have mainly been developed either to simulate the measurement process, enabling off-line programming, or to perform error analysis and uncertainty evaluation. Their capability and performance could be greatly improved by a complete solution that covers the whole process in an integrated environment. The aim of this study is to develop such a VCMM, one that supports measurement process simulation as well as uncertainty evaluation. It makes use of virtual reality techniques to provide an accurate model of a physical CMM, together with uncertainty evaluation. An interface is also provided to communicate with the CMM controller, allowing measuring programs generated and simulated in the VCMM to be executed or tested on the physical CMM afterwards. This paper discusses the proposed design of a novel VCMM and presents preliminary results.
Assessing DFID's Results in Water, Sanitation and Hygiene: An Impact Review
Over the 2011 to 2015 period, the UK government set itself the goal of providing 60 million people with clean water, improved sanitation or hygiene promotion interventions (a type of development assistance known collectively as WASH). In 2015 it reported that it had exceeded this target, reaching 62.9 million people. We conducted an impact review of DFID's WASH portfolio to identify whether its results claims were credible, and to explore whether programmes were doing all they could to maximise impact and value for money. The review concluded that UK aid has made a significant contribution to improving WASH access in low-income countries, and that its claim of reaching 62.9 million people is based on sound evidence. However, the review highlighted sustainability as an area of particular concern, with not enough being done to ensure that improved WASH access was becoming a permanent part of people's lives. It concluded that DFID needs to do more to address long-term problems such as water security, maintenance of infrastructure, strengthening local institutions so they can manage services, and changing behaviour. It also found that improvements are needed to ensure value for money, with a stronger focus on lifetime investment costs, and to target vulnerable people as well as hard-to-reach communities, in line with the Global Goals commitment to 'leave no one behind'. The review gave DFID a 'Green-Amber' rating, recognising the impressive results but also underscoring the need to better maximise the impact and sustainability of UK aid in this important sector.