What is the Connection Between Issues, Bugs, and Enhancements? (Lessons Learned from 800+ Software Projects)
Agile teams juggle multiple tasks, so professionals are often assigned to multiple projects, especially in service organizations that monitor and maintain a large suite of software for a large user base. If we could predict changes in project conditions, then managers could better adjust the staff allocated to those projects. This paper builds such a predictor using data from 832 open source and proprietary applications. Using a time series analysis of the last 4 months of issues, we can forecast how many bug reports and enhancement requests will be generated next month. The forecasts made in this way only require a frequency count of the issue reports (and do not require a historical record of bugs found in the project). That is, this kind of predictive model is very easy to deploy within a project. We hence strongly recommend this method for forecasting future issues, enhancements, and bugs in a project.
Comment: Accepted to the 2018 International Conference on Software Engineering, software engineering in practice track. 10 pages, 10 figures
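The abstract's core claim is that next month's bug and enhancement counts can be forecast from nothing more than recent monthly issue counts. A minimal sketch of that idea, using a least-squares linear trend over the last four months (an illustrative stand-in, not the paper's exact time-series model):

```python
def forecast_next_month(monthly_counts):
    # Fit a least-squares line to the last four monthly issue counts
    # and extrapolate one month ahead. Illustrative only: the paper's
    # actual time-series model is not specified in the abstract.
    y = [float(c) for c in monthly_counts[-4:]]
    n = len(y)
    x = list(range(n))
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return max(0.0, slope * n + intercept)

# Example: issue counts for the last four months (made-up numbers)
print(forecast_next_month([40, 44, 47, 53]))  # -> 56.5
```

The appeal of such a model, as the abstract notes, is that it needs only frequency counts, which any issue tracker already provides.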
A Platform-Based Software Design Methodology for Embedded Control Systems: An Agile Toolkit
A discrete control system, with stringent hardware constraints, is effectively an embedded real-time system and hence requires a rigorous methodology to develop the software involved. The development methodology proposed in this paper adapts agile principles and patterns to support the building of embedded control systems, focusing on the issues relating to a system's constraints and safety. Strong unit testing, to ensure correctness, including the satisfaction of timing constraints, is the foundation of the proposed methodology. A platform-based design approach is used to balance costs and time-to-market in relation to performance and functionality constraints. It is concluded that the proposed methodology significantly reduces design time and costs, as well as leading to better software modularity and reliability.
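The methodology above rests on unit tests that check both correctness and timing constraints. A minimal sketch of that pattern in Python (the paper targets embedded systems, so this only illustrates the shape of such a test; the control-step function and the 1 ms deadline are hypothetical):

```python
import time
import unittest

def control_step(setpoint, measurement, kp=0.5):
    # Hypothetical proportional control step used as the unit under test.
    return kp * (setpoint - measurement)

class ControlStepTest(unittest.TestCase):
    DEADLINE_S = 0.001  # hypothetical 1 ms timing constraint

    def test_correctness(self):
        # Functional correctness of one control step.
        self.assertAlmostEqual(control_step(10.0, 8.0), 1.0)

    def test_meets_deadline(self):
        # Timing constraint checked alongside functional behaviour.
        start = time.perf_counter()
        control_step(10.0, 8.0)
        self.assertLess(time.perf_counter() - start, self.DEADLINE_S)
```

On a real embedded target the timing check would use the platform's cycle counter rather than wall-clock time, but the idea of making the deadline an explicit test assertion carries over.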
Towards Automated Performance Bug Identification in Python
Context: Software performance is a critical non-functional requirement, appearing in many fields such as mission-critical applications, financial systems, and real-time systems. In this work we focused on early detection of performance bugs; our software under study was a real-time system used in the advertisement/marketing domain.
Goal: Find a simple and easy-to-implement solution for predicting performance bugs.
Method: We built several models using four machine learning methods commonly used for defect prediction: C4.5 Decision Trees, Naïve Bayes, Bayesian Networks, and Logistic Regression.
Results: Our empirical results show that a C4.5 model, using lines of code changed, file age, and file size as explanatory variables, can be used to predict performance bugs (recall=0.73, accuracy=0.85, and precision=0.96). We show that reducing the number of changes delivered in a commit can decrease the chance of performance bug injection.
Conclusions: We believe that our approach can help practitioners eliminate performance bugs early in the development cycle. Our results are also of interest to theoreticians, establishing a link between functional bugs and (non-functional) performance bugs, and explicitly showing that attributes used for prediction of functional bugs can be used for prediction of performance bugs.
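The predictive setup described above, a decision tree over lines changed, file age, and file size, can be sketched with scikit-learn. Note that scikit-learn implements CART, a close relative of C4.5 rather than C4.5 itself, and the toy data below is illustrative, not the authors' dataset:

```python
from sklearn.tree import DecisionTreeClassifier

# Each row: [lines_changed, file_age_days, file_size_loc] (made-up data)
X = [[500, 30, 2000], [10, 400, 150], [350, 60, 1800],
     [5, 700, 90], [420, 45, 2500], [15, 365, 200]]
y = [1, 0, 1, 0, 1, 0]  # 1 = commit introduced a performance bug

# CART decision tree standing in for the paper's C4.5 model.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[480, 40, 2200]]))  # large, young, big file -> [1]
```

The paper's practical takeaway, that smaller commits lower the injection risk, corresponds to the `lines_changed` feature dominating the tree's first split on data like this.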
A study of QoS support for real time multimedia communication over IEEE802.11 WLAN : a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Systems Engineering, Massey University, Albany, New Zealand
Quality of Service (QoS) is becoming a key problem for Real Time (RT) traffic transmitted over Wireless Local Area Networks (WLANs). In this project the recent proposals for enhanced QoS performance for RT multimedia are evaluated and analysed. Two simulation models, for the EDCF and HCF protocols, are explored using the OPNET and NS-2 simulation packages respectively. From the results of the simulation, we have studied the limitations of the 802.11e standard for RT multimedia communication, analysed the reasons for these limitations, and proposed solutions for improvement. Since RT multimedia communication encompasses time-sensitive traffic, the measures of quality of service are generally minimal delay (latency) and delay variation (jitter). The 802.11 WLAN standard focuses on the PHY layer and the MAC layer. The transmitted data rates at the PHY layer are increased in the 802.11b, a, g, j, and n standards by different code mapping technologies, while 802.11e is developed specially for the QoS performance of RT traffic at the MAC layer. Enhancing the MAC layer protocols is therefore a significant topic for guaranteeing the QoS performance of RT traffic. The original MAC protocols of 802.11 are DCF (Distributed Coordination Function) and PCF (Point Coordination Function). They cannot achieve the required QoS performance for RT traffic transmission, so the IEEE 802.11e draft has developed EDCF and HCF instead. Simulation results of the EDCF and HCF models that we explored in OPNET and NS-2 show that minimal latency and jitter can be achieved. However, limitations of EDCF and HCF are identified from the simulation results. EDCF is not stable under high network loading. Channel utilization is low for both protocols. Furthermore, the fairness index is very poor for HCF, which means that low-priority traffic can starve in the WLAN. All these limitations are due to the priority mechanisms of the protocols.
As practical research directions, we propose future work to develop a dynamic, self-adaptive 802.11e protocol. Because of the instability of EDCF under heavy loading, we can add parameters that track the traffic loading and channel condition; we provide indications for adding such parameters to increase EDCF performance and channel utilization. We have established that channel utilization can be increased and collision time reduced for RT traffic over the EDCF protocol. The added parameters can include loading rate, collision rate, and total throughput saturation; further simulation should look for optimum values for these parameters. Because all the limitations are due to the priority mechanism, the other direction is to do away with the priority rule in favour of reasonable bandwidth allocation. Because of its huge polling-induced overheads, HCF makes an unsatisfactory tradeoff, leading to poor fairness and poor throughput. By developing an enhanced HCF it may be possible to improve the polling interval and TXOP allocation mechanism to obtain a better fairness index and channel utilization. From the simulation, we noticed that the traffic deployment could affect the total QoS performance, an indication to explore whether classifying traffic deployments into different categories is a good idea. With different load-based traffic categories, QoS may be enhanced by an appropriate bandwidth allocation strategy.
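The abstract's two QoS measures, latency and jitter, can be computed directly from per-packet one-way delays. A minimal sketch: mean delay for latency, and a simplified jitter estimate as the mean of absolute consecutive-delay differences (RFC 3550 uses an exponentially smoothed version of this quantity; the sample delays below are made up for illustration):

```python
def latency_and_jitter(delays_ms):
    # Latency: mean one-way delay over the observed packets.
    latency = sum(delays_ms) / len(delays_ms)
    # Jitter: mean absolute difference between consecutive delays,
    # a simplification of the RFC 3550 interarrival-jitter estimator.
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return latency, jitter

# Example per-packet delays in milliseconds (illustrative)
print(latency_and_jitter([20.0, 22.0, 19.0, 25.0]))
```

In a simulator such as NS-2 or OPNET these delays would be harvested from per-packet trace records rather than a hand-written list.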
The solar wind structures associated with cosmic ray decreases and particle acceleration in 1978-1982
The time histories of particles in the energy range 1 MeV to 1 GeV at the times of all cosmic ray decreases greater than 3 percent in the years 1978 to 1982 are studied. Essentially all 59 of the decreases commenced at or before the passages of interplanetary shocks, the majority of which accelerated energetic particles. We use the intensity-time profiles of the energetic particles to separate the cosmic ray decreases into four classes which we subsequently associate with four types of solar wind structures. Decreases in class 1 (15 events) and class 2 (26 events) can be associated with shocks which are driven by energetic coronal mass ejections. For class 1 events the ejecta is detected at 1 AU whereas this is not the case for class 2 events. The shock must therefore play a dominant role in producing the depression of cosmic rays in class 2 events. In all class 1 and 2 events (which comprise 69 percent of the total) the departure time of the ejection from the sun (and hence the location) can be determined from the rapid onset of energetic particles several days before the shock passage at Earth. The class 1 events originate from within 50 deg of central meridian. Class 3 events (10 decreases) can be attributed to less energetic ejections which are directed towards the Earth. In these events the ejecta is more important than the shock in causing a depression in the cosmic ray intensity. The remaining events (14 percent of the total) can be attributed to corotating streams which have ejecta material embedded in them.
Estimating software project effort using analogies
Accurate project effort prediction is an important goal for the software engineering community. To date most work has focused upon building algorithmic models of effort, for example COCOMO. These can be calibrated to local environments. We describe an alternative approach to estimation based upon the use of analogies. The underlying principle is to characterise projects in terms of features (for example, the number of interfaces, the development method, or the size of the functional requirements document). Completed projects are stored and then the problem becomes one of finding the most similar projects to the one for which a prediction is required. Similarity is defined as Euclidean distance in n-dimensional space where n is the number of project features. Each dimension is standardised so all dimensions have equal weight. The known effort values of the nearest neighbours to the new project are then used as the basis for the prediction. The process is automated using a PC-based tool known as ANGEL. The method is validated on nine different industrial datasets (a total of 275 projects) and in all cases analogy outperforms algorithmic models based upon stepwise regression. From this work we argue that estimation by analogy is a viable technique that, at the very least, can be used by project managers to complement current estimation techniques.
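The estimation procedure described above, standardize each feature, find the nearest completed projects by Euclidean distance, and base the prediction on their known effort, can be sketched briefly. The feature set, effort values, and choice of k=2 below are illustrative; ANGEL's actual configuration may differ:

```python
import math

def estimate_effort(projects, efforts, target, k=2):
    dims = len(target)
    # Pool each feature column (past projects plus the target) so that
    # every dimension can be standardized to equal weight.
    cols = [[p[d] for p in projects] + [target[d]] for d in range(dims)]

    def z(value, col):
        mean = sum(col) / len(col)
        sd = (sum((x - mean) ** 2 for x in col) / len(col)) ** 0.5 or 1.0
        return (value - mean) / sd

    def dist(p):
        # Euclidean distance in standardized n-dimensional feature space.
        return math.sqrt(sum((z(p[d], cols[d]) - z(target[d], cols[d])) ** 2
                             for d in range(dims)))

    nearest = sorted(range(len(projects)), key=lambda i: dist(projects[i]))[:k]
    # Prediction: mean known effort of the k most similar projects.
    return sum(efforts[i] for i in nearest) / k

# Features per project: [num_interfaces, spec_size_pages] (illustrative)
past = [[4, 20], [10, 80], [5, 25], [12, 90]]
effort_hours = [120.0, 600.0, 150.0, 700.0]
print(estimate_effort(past, effort_hours, [5, 22]))  # -> 135.0
```

Standardizing each dimension before measuring distance is what lets features on very different scales (interface counts versus document pages) contribute equally, as the abstract requires.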