The preferable test documentation using IEEE 829
During software development, testing is one of the processes used to find
errors and to evaluate whether a program meets its required results. The
testing phase involves several activities, such as user acceptance testing and
test procedures. If no documentation is produced during testing, difficulties
that arise during a test have no recorded solution, because there is no
reference to consult when the same problem recurs. IEEE 829 is a standard that
addresses these documentation requirements: it defines several documents to be
produced during testing, covering test preparation, test execution, and test
completion. In this paper we use this standard as a guideline to analyze which
documentation companies prefer the most. Our analytical study shows that most
companies in Malaysia prepare documents for the Test Plan and the Test
Summary.
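As a rough illustration of the document set the abstract refers to, the following sketch (Python) groups the eight document types defined by IEEE 829-1998 by the testing stage in which they are typically produced; the stage grouping follows the abstract's preparation/execution/completion split and is an interpretation for illustration, not a reproduction of the standard.

    # Minimal sketch: IEEE 829-1998 document types grouped by testing stage.
    # The grouping is an interpretation based on the abstract above, not an
    # authoritative reading of the standard.

    IEEE_829_DOCUMENTS = {
        "test preparation": [
            "Test Plan",
            "Test Design Specification",
            "Test Case Specification",
            "Test Procedure Specification",
            "Test Item Transmittal Report",
        ],
        "test execution": [
            "Test Log",
            "Test Incident Report",
        ],
        "test completion": [
            "Test Summary Report",
        ],
    }

    def documents_for_stage(stage: str) -> list[str]:
        """Return the IEEE 829 documents associated with a testing stage."""
        return IEEE_829_DOCUMENTS.get(stage, [])

    if __name__ == "__main__":
        for stage, docs in IEEE_829_DOCUMENTS.items():
            print(f"{stage}: {', '.join(docs)}")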
Target-adaptive CNN-based pansharpening
We recently proposed a convolutional neural network (CNN) for remote sensing
image pansharpening obtaining a significant performance gain over the state of
the art. In this paper, we explore a number of architectural and training
variations to this baseline, achieving further performance gains with a
lightweight network which trains very fast. Leveraging this latter property,
we propose a target-adaptive usage modality which ensures very good
performance even in the presence of a mismatch with respect to the training
set, and even across different sensors. The proposed method, published online
as an off-the-shelf software tool, allows users to perform fast and
high-quality CNN-based pansharpening of their own target images on
general-purpose hardware.
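The target-adaptive usage described above amounts to briefly fine-tuning a pretrained network on the target image itself before inference. The sketch below (PyTorch) shows one plausible form of that step; the placeholder PansharpeningCNN architecture, the reduced-resolution L1 loss, the 4x resolution ratio, and the step count are illustrative assumptions, not the authors' actual network or training protocol.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical lightweight pansharpening CNN (placeholder architecture).
    class PansharpeningCNN(nn.Module):
        def __init__(self, ms_bands: int = 4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(ms_bands + 1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, ms_bands, 3, padding=1),
            )

        def forward(self, ms_up, pan):
            # Upsampled multispectral bands stacked with the panchromatic band.
            return self.net(torch.cat([ms_up, pan], dim=1))

    def target_adapt(model, ms, pan, steps: int = 50, lr: float = 1e-4):
        """Briefly fine-tune a pretrained model on the target image itself.

        Reduced-resolution idea: downsample the target, train the network to
        reproduce the original multispectral image, then apply the adapted
        network at full resolution.
        """
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        scale = 4  # assumed pan/MS resolution ratio
        ms_lr = F.avg_pool2d(ms, scale)    # reduced-resolution multispectral
        pan_lr = F.avg_pool2d(pan, scale)  # reduced-resolution panchromatic
        ms_lr_up = F.interpolate(ms_lr, scale_factor=scale, mode="bilinear",
                                 align_corners=False)
        for _ in range(steps):
            opt.zero_grad()
            loss = F.l1_loss(model(ms_lr_up, pan_lr), ms)  # target = original MS
            loss.backward()
            opt.step()
        return model

    if __name__ == "__main__":
        ms = torch.rand(1, 4, 64, 64)     # multispectral image (low resolution)
        pan = torch.rand(1, 1, 256, 256)  # panchromatic image (high resolution)
        model = PansharpeningCNN()
        target_adapt(model, ms, pan, steps=5)
        ms_up = F.interpolate(ms, scale_factor=4, mode="bilinear",
                              align_corners=False)
        fused = model(ms_up, pan)
        print(fused.shape)  # torch.Size([1, 4, 256, 256])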
A research review of quality assessment for software
Measures were recommended to assess the quality of software submitted to the AdaNet program. The quality factors that are important to software reuse are explored, and methods of evaluating those factors are discussed. Quality factors important to software reuse are: correctness, reliability, verifiability, understandability, modifiability, and certifiability. Certifiability is included because the documentation of many factors about a software component, such as its efficiency, portability, and development history, constitutes a class of factors important to some users, not important at all to others, and impossible for AdaNet to distinguish between a priori. The quality factors may be assessed in different ways. There are a few quantitative measures which have been shown to indicate software quality. However, it is believed that many factors that indicate quality have not been empirically validated due to their subjective nature. These subjective factors are characterized by the way in which they support the software engineering principles of abstraction, information hiding, modularity, localization, confirmability, uniformity, and completeness.
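Purely as an illustration of how such factors might be turned into a single assessment, the sketch below averages per-factor ratings over the six reuse-quality factors listed in the abstract; the 0-1 rating scale and equal weighting are assumptions of this example, not a method proposed in the review.

    # Hedged sketch: aggregate ratings of the reuse-quality factors named in the
    # abstract. Equal weighting and the 0-1 scale are illustrative assumptions.

    REUSE_QUALITY_FACTORS = [
        "correctness", "reliability", "verifiability",
        "understandability", "modifiability", "certifiability",
    ]

    def quality_score(ratings: dict[str, float]) -> float:
        """Average the per-factor ratings (each expected in [0, 1])."""
        missing = [f for f in REUSE_QUALITY_FACTORS if f not in ratings]
        if missing:
            raise ValueError(f"missing ratings for: {missing}")
        return sum(ratings[f] for f in REUSE_QUALITY_FACTORS) / len(REUSE_QUALITY_FACTORS)

    if __name__ == "__main__":
        example = {
            "correctness": 0.9, "reliability": 0.8, "verifiability": 0.6,
            "understandability": 0.7, "modifiability": 0.5, "certifiability": 0.4,
        }
        print(f"overall reuse-quality score: {quality_score(example):.2f}")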
Software component testing : a standard and the effectiveness of techniques
This portfolio comprises two projects linked by the theme of software component testing, which is also
often referred to as module or unit testing. One project covers its standardisation, while the other
considers the analysis and evaluation of the application of selected testing techniques to an existing
avionics system. The evaluation is based on empirical data obtained from fault reports relating to the
avionics system.
The standardisation project is based on the development of the BCS/BSI Software Component Testing
Standard and the BCS/BSI Glossary of terms used in software testing, which are both included in the
portfolio. The papers included for this project consider both the issues concerned with the adopted
development process and the resolution of technical matters concerning the definition of the testing
techniques and their associated measures.
The test effectiveness project documents a retrospective analysis of an operational avionics system to
determine the relative effectiveness of several software component testing techniques. The methodology
differs from that used in other test effectiveness experiments in that it considers every possible set of
inputs that are required to satisfy a testing technique rather than arbitrarily chosen values from within
this set. The three papers present the experimental methodology used, intermediate results from a failure
analysis of the studied system, and the test effectiveness results for ten testing techniques, definitions for
which were taken from the BCS/BSI Software Component Testing Standard.
The creation of the two standards has filled a gap in both the national and international software testing
standards arenas. Their production required an in-depth knowledge of software component testing
techniques, the identification and use of a development process, and the negotiation of the
standardisation process at a national level. The knowledge gained during this process has been
disseminated by the author in the papers included as part of this portfolio. The investigation of test
effectiveness has introduced a new methodology for determining the test effectiveness of software
component testing techniques by means of a retrospective analysis and so provided a new set of data that
can be added to the body of empirical data on software component testing effectiveness.
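The core idea of the methodology, considering every input that satisfies a technique rather than arbitrarily chosen values, can be illustrated on a toy example. The sketch below enumerates a small finite input domain for a hypothetical function and partitions it by the outcome each input exercises; the function, domain, and criterion are illustrative and unrelated to the avionics system studied.

    from itertools import product

    # Tiny illustrative function "under test" (hypothetical, not from the
    # avionics system in the portfolio).
    def classify(x: int, y: int) -> str:
        if x > 0 and y > 0:
            return "both positive"
        if x > 0 or y > 0:
            return "one positive"
        return "none positive"

    # Enumerate the whole (small) input domain and partition it by outcome, so
    # every input satisfying each part of the criterion is considered, rather
    # than a handful of arbitrarily chosen test values.
    DOMAIN = range(-2, 3)

    def partition_by_outcome():
        partitions: dict[str, list[tuple[int, int]]] = {}
        for x, y in product(DOMAIN, DOMAIN):
            partitions.setdefault(classify(x, y), []).append((x, y))
        return partitions

    if __name__ == "__main__":
        for outcome, inputs in partition_by_outcome().items():
            print(f"{outcome}: {len(inputs)} satisfying inputs, e.g. {inputs[0]}")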
Do the Machine Learning Models on a Crowd Sourced Platform Exhibit Bias? An Empirical Study on Model Fairness
Machine learning models are increasingly being used in important
decision-making software such as approving bank loans, recommending criminal
sentencing, hiring employees, and so on. It is important to ensure the fairness
of these models so that no discrimination is made based on protected attributes
(e.g., race, sex, age) during decision making. Algorithms have been developed
to measure unfairness and to mitigate it to a certain extent. In this paper, we
have focused on the empirical evaluation of fairness and mitigation techniques
on real-world machine learning models. We have created a benchmark of 40
top-rated models from Kaggle used for 5 different tasks, and then evaluated
their fairness using a comprehensive set of fairness metrics. Then, we have
applied 7 mitigation techniques on these models and analyzed the fairness,
mitigation results, and impacts on performance. We have found that some model
optimization techniques result in inducing unfairness in the models. On the
other hand, although there are some fairness control mechanisms in machine
learning libraries, they are not documented. The mitigation algorithms also
exhibit common patterns: mitigation in the post-processing stage is often
costly (in terms of performance), and mitigation in the pre-processing stage is
preferred in most cases. We have also presented different trade-off choices of
fairness mitigation decisions. Our study suggests future research directions to
reduce the gap between theoretical fairness-aware algorithms and the software
engineering methods to leverage them in practice.
Comment: To appear in ESEC/FSE 202
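For concreteness, the sketch below computes two widely used group-fairness metrics (statistical parity difference and disparate impact) from binary predictions and a binary protected attribute; it is a generic illustration with synthetic data, not the benchmark or the full metric set used in the paper.

    import numpy as np

    def group_fairness(y_pred: np.ndarray, protected: np.ndarray):
        """Compute two standard group-fairness metrics from binary predictions.

        y_pred    : 1 = favourable decision, 0 = unfavourable
        protected : 1 = unprivileged group, 0 = privileged group
        """
        rate_unpriv = y_pred[protected == 1].mean()
        rate_priv = y_pred[protected == 0].mean()
        spd = rate_unpriv - rate_priv                # statistical parity difference
        di = rate_unpriv / rate_priv if rate_priv else float("nan")  # disparate impact
        return spd, di

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        protected = rng.integers(0, 2, size=1000)
        # Hypothetical model that slightly favours the privileged group.
        y_pred = (rng.random(1000) < np.where(protected == 1, 0.4, 0.55)).astype(int)
        spd, di = group_fairness(y_pred, protected)
        print(f"statistical parity difference: {spd:+.3f}")
        print(f"disparate impact: {di:.3f}")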
Data distribution satellite
A description is given of a data distribution satellite (DDS) system. The DDS would operate in conjunction with the tracking and data relay satellite system to give ground-based users real-time, two-way access to instruments in space and space-gathered data. The scope of work includes the following: (1) user requirements are derived; (2) communication scenarios are synthesized; (3) system design constraints and projected technology availability are identified; (4) DDS communications payload configuration is derived, and the satellite is designed; (5) requirements for earth terminals and network control are given; (6) system costs are estimated, both life cycle costs and user fees; and (7) technology developments are recommended, and a technology development plan is given. The most important results obtained are as follows: (1) a satellite designed for launch in 2007 is feasible and has 10 Gb/s capacity, 5.5 kW power, and 2000 kg mass; (2) DDS features include on-board baseband switching, use of Ku- and Ka-bands, multiple optical intersatellite links; and (3) system user costs are competitive with projected terrestrial communication costs.
A mechanism design for Crowdsourcing Multi-Objective Recommendation System
Optimization of cool roof and night ventilation in office buildings: A case study in Xiamen, China
Increasing roof albedo (using a “cool” roof) and night ventilation are passive cooling technologies that can reduce the cooling loads in buildings, but existing studies have not comprehensively explored the potential benefits of integrating these two technologies. This study combines an experiment in the summer and transition seasons with an annual simulation so as to evaluate the thermal performance, energy savings and thermal comfort improvement that could be obtained by coupling a cool roof with night ventilation. A holistic approach integrating sensitivity analysis and multi-objective optimization is developed to explore key design parameters (roof albedo, night ventilation air change rate, roof insulation level and internal thermal mass level) and optimal design options for the combined application of the cool roof and night ventilation. The proposed approach is validated and demonstrated through studies on a six-storey office building in Xiamen, a cooling-dominated city in southeast China. Simulations show that combining a cool roof with night ventilation can significantly decrease the annual cooling energy consumption by 27% compared to using a black roof without night ventilation and by 13% compared to using a cool roof without night ventilation. Roof albedo is the most influential parameter for both building energy performance and indoor thermal comfort. Optimal use of the cool roof and night ventilation can reduce the annual cooling energy use by 28% during occupied hours when air-conditioners are on and reduce the uncomfortable time slightly during occupied hours when air-conditioners are off.
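The multi-objective selection step can be sketched as a Pareto filter over candidate designs spanning the four parameters named above. In the sketch below the objective values come from a crude placeholder function; in the study they would come from building simulation, so the parameter levels and numbers here are purely illustrative.

    from itertools import product

    # Minimal sketch of the multi-objective selection step: enumerate candidate
    # designs over the four parameters named in the abstract and keep the
    # Pareto-optimal ones (lower is better for both objectives).
    DESIGN_SPACE = {
        "roof_albedo": [0.2, 0.5, 0.8],
        "night_ach": [0, 5, 10],       # night ventilation air change rate (1/h)
        "roof_insulation": [1, 2, 3],  # insulation level (arbitrary index)
        "thermal_mass": [1, 2, 3],     # internal thermal mass level (index)
    }

    def placeholder_objectives(design: dict) -> tuple[float, float]:
        """Stand-in for the building simulation: (cooling energy, discomfort)."""
        energy = 100 - 30 * design["roof_albedo"] - 1.5 * design["night_ach"]
        discomfort = 50 - 2 * design["night_ach"] - 5 * design["thermal_mass"]
        return energy, discomfort

    def dominates(a: tuple[float, float], b: tuple[float, float]) -> bool:
        """True if a is no worse than b in both objectives and better in one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(designs):
        scored = [(d, placeholder_objectives(d)) for d in designs]
        return [d for d, obj in scored
                if not any(dominates(other, obj) for _, other in scored if other != obj)]

    if __name__ == "__main__":
        keys = list(DESIGN_SPACE)
        designs = [dict(zip(keys, values)) for values in product(*DESIGN_SPACE.values())]
        for d in pareto_front(designs)[:5]:
            print(d)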
A review of the literature on citation impact indicators
Citation impact indicators nowadays play an important role in research
evaluation, and consequently these indicators have received a lot of attention
in the bibliometric and scientometric literature. This paper provides an
in-depth review of the literature on citation impact indicators. First, an
overview is given of the literature on bibliographic databases that can be used
to calculate citation impact indicators (Web of Science, Scopus, and Google
Scholar). Next, selected topics in the literature on citation impact indicators
are reviewed in detail. The first topic is the selection of publications and
citations to be included in the calculation of citation impact indicators. The
second topic is the normalization of citation impact indicators, in particular
normalization for field differences. Counting methods for dealing with
co-authored publications are the third topic, and citation impact indicators
for journals are the last topic. The paper concludes by offering some
recommendations for future research.
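As a concrete example of the normalization topic, the sketch below computes a simplified field-normalized score: each publication's citations are divided by the mean citations of its field, and the normalized values are averaged per unit. The toy records and the single-level field normalization are illustrative simplifications of the indicators reviewed.

    from collections import defaultdict
    from statistics import mean

    # Toy publication records; the units, fields, and citation counts are
    # purely illustrative.
    publications = [
        {"unit": "A", "field": "physics", "citations": 12},
        {"unit": "A", "field": "biology", "citations": 3},
        {"unit": "B", "field": "physics", "citations": 4},
        {"unit": "B", "field": "biology", "citations": 9},
    ]

    # Expected (mean) citations per field, estimated from all publications.
    field_citations = defaultdict(list)
    for p in publications:
        field_citations[p["field"]].append(p["citations"])
    field_mean = {f: mean(c) for f, c in field_citations.items()}

    # Mean normalized citation score per unit: average of citations / field mean.
    unit_scores = defaultdict(list)
    for p in publications:
        unit_scores[p["unit"]].append(p["citations"] / field_mean[p["field"]])

    for unit, scores in unit_scores.items():
        print(f"unit {unit}: MNCS = {mean(scores):.2f}")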