Multi-objective software effort estimation
We introduce a bi-objective effort estimation algorithm that combines Confidence Interval Analysis and assessment of the Mean Absolute Error. We evaluate the proposed algorithm against three alternative formulations, baseline comparators, and current state-of-the-art effort estimators on five real-world datasets from the PROMISE repository, involving 724 software projects in total. The results reveal that our algorithm outperforms the baselines, the state of the art, and all three alternative formulations, with statistical significance (p < 0.001) and large effect size (Â12 ≥ 0.9) over all five datasets. We also provide evidence that our algorithm establishes a new state of the art, which lies within currently claimed industrial human-expert-based thresholds, thereby demonstrating that our findings have actionable conclusions for practicing software engineers.
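The bi-objective idea described in the abstract can be illustrated with a minimal sketch: score candidate effort models on two objectives, mean absolute error and the width of the confidence interval of the residuals, and keep the Pareto-optimal trade-offs. The dataset, the linear model form, and the random-search optimiser below are all invented for illustration and are not the paper's actual CoGEE implementation.

```python
import random
import statistics

random.seed(42)

# Toy dataset: (size in function points, actual effort in person-hours).
# These numbers are made up for the sketch.
projects = [(10, 120), (25, 300), (40, 520), (60, 690), (80, 980)]

def objectives(slope, intercept):
    """Score a linear effort model on the two objectives."""
    residuals = [abs(effort - (slope * size + intercept))
                 for size, effort in projects]
    mae = statistics.mean(residuals)
    # 95% confidence-interval half-width of the mean residual
    # (normal approximation).
    ci = 1.96 * statistics.stdev(residuals) / len(residuals) ** 0.5
    return mae, ci

def dominates(a, b):
    """Pareto dominance: a is no worse on both objectives and better on one."""
    return all(x <= y for x, y in zip(a, b)) and a != b

# Random search over model coefficients; keep the non-dominated set.
candidates = [(random.uniform(5, 20), random.uniform(-50, 50))
              for _ in range(500)]
scored = [(objectives(s, i), (s, i)) for s, i in candidates]
pareto = [c for c in scored
          if not any(dominates(o, c[0]) for o, _ in scored)]

for (mae, ci), (slope, intercept) in sorted(pareto):
    print(f"MAE={mae:.1f}  CI={ci:.1f}  effort ~ {slope:.2f}*FP{intercept:+.1f}")
```

A real multi-objective estimator would replace the random search with an evolutionary algorithm such as NSGA-II, but the Pareto-filtering step is the same: no single model is "best"; the output is a front of MAE/CI trade-offs.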
Multi-Objective Software Effort Estimation: A Replication Study
Replication studies increase our confidence in previous results when the findings are similar each time, and help mature our knowledge by addressing both internal and external validity aspects. However, such studies are still rare in certain software engineering fields. In this paper, we replicate and extend a previous study, which represents the current state of the art for multi-objective software effort estimation, namely CoGEE. We investigate the original research questions with an independent implementation and the inclusion of a more robust baseline (LP4EE), carried out by the first author, who was not involved in the original study. Through this replication, we strengthen both the internal and external validity of the original study. We also answer two new research questions investigating the effectiveness of CoGEE when using four additional evolutionary algorithms (i.e., IBEA, MOCell, NSGA-III, SPEA2) and a well-known Java framework for evolutionary computation, namely jMetal (rather than the previously used R software), which allows us to further strengthen the external validity of the original study. The results of our replication confirm that: (1) CoGEE outperforms both baseline and state-of-the-art benchmarks with statistical significance (p < 0.001); (2) CoGEE's multi-objective nature is what enables it to reach such good performance; (3) CoGEE's estimation errors lie within claimed industrial human-expert-based thresholds. Moreover, our new results show that the effectiveness of CoGEE is generally neither limited to nor dependent on the choice of the multi-objective algorithm. Using CoGEE with either NSGA-II, NSGA-III, or MOCell produces human-competitive results in less than a minute. The Java version of CoGEE decreases the running time by over 99.8% with respect to its R counterpart. We have made the Java code of CoGEE publicly available to ease its adoption, as well as the data used in this study, in order to allow for future replication and extension of our work.
Learning From Mistakes: Machine Learning Enhanced Human Expert Effort Estimates
In this paper, we introduce a novel approach to predictive modelling for software engineering, named Learning From Mistakes (LFM). The core idea underlying our proposal is to automatically learn from past estimation errors made by human experts in order to predict the characteristics of their future misestimates, thereby improving future estimates. We show the feasibility of LFM by investigating whether it is possible to predict the type, severity, and magnitude of errors made by human experts when estimating the development effort of software projects, and whether these predictions can be used to enhance future estimations. To this end, we conduct a thorough empirical study investigating 402 maintenance and new-development industrial software projects. The results of our study reveal that the type, severity, and magnitude of errors are all, indeed, predictable. Moreover, we find that by exploiting these predictions we can obtain significantly better estimates than those provided by random guessing, human experts, and traditional machine learners in 31 out of the 36 cases considered (86%), with large and very large effect sizes in the majority of these cases (81%). This empirical evidence opens the door to techniques that use the power of machine learning, coupled with the observation that human errors are predictable, to support engineers in estimation tasks rather than replacing them with machine-provided estimates.
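The Learning From Mistakes idea above can be sketched in a few lines: learn a model of past human misestimates (actual minus estimate) from project features, then use the predicted error to adjust a new expert estimate. The data, the two features, and the 1-nearest-neighbour learner below are invented stand-ins, not the study's actual method or dataset.

```python
# Past projects: (size, team_size, expert_estimate, actual_effort).
# All values are hypothetical.
history = [
    (10, 2, 100, 130),
    (25, 3, 260, 330),
    (40, 4, 400, 540),
    (60, 5, 600, 820),
]

def predict_error(size, team):
    """Predict the expert's estimation error on a new project using
    1-nearest-neighbour: reuse the error made on the most similar
    past project (positive means the expert underestimated)."""
    def dist(row):
        s, t, _, _ = row
        return (s - size) ** 2 + (t - team) ** 2
    _, _, est, actual = min(history, key=dist)
    return actual - est

def corrected_estimate(size, team, expert_estimate):
    """Shift the expert's estimate by the predicted misestimate."""
    return expert_estimate + predict_error(size, team)

# Nearest past project to (45, 4) is (40, 4, 400, 540): error +140.
print(corrected_estimate(45, 4, 450))  # -> 590
```

The point of the sketch is the division of labour the abstract argues for: the human still produces the estimate, and the learner only models how that estimate tends to be wrong.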
Use of polyaspartates for the tartaric stabilisation of white and red wines and side effects on wine characteristics
Aim: The stabilising efficacy against tartaric precipitation of polyaspartate-based products (PAs), in particular potassium polyaspartate (KPA), was tested on six different wines (three white and three red). Some side effects on wine characteristics (white wine colour stability, wine turbidity and filterability) were also studied.
Results and conclusions: All PAs showed good stabilising efficacy against tartaric precipitation according to the cold test. With the same test, the PAs remained stable in wine over 1 year of storage, which was the total duration of the study. A dose of 100 mg/L was sufficient to stabilise the tested wines. No differences in filterability were observed in comparison with MTA (metatartaric acid). The hypothesised protective effect against colour browning in white wines was not observed. Significance and impact of the study: The international wine trade requires stable wines. This paper provides information to support wineries in managing the use of KPA, as little information on this stabilising additive is available in the literature to date.
Characterisation of Refined Marc Distillates with Alternative Oak Products Using Different Analytical Approaches
The use of oak barrel alternatives, including oak chips, oak staves and oak powder, is quite common in the production of spirits obtained from the distillation of fermented vegetal products such as grape pomace. This work explored the use of unconventional wood formats, namely peeled and sliced wood. The use of poplar wood was also evaluated to verify its technological suitability for producing aged spirits. To this aim, GC-MS analyses were carried out to obtain an aromatic characterisation of experimental distillates treated with these products. Moreover, the same spirits were studied for classification purposes using NMR, NIR and an e-nose. A significant change in the original composition of the grape pomace distillate due to sorption phenomena was observed; the intensity of this effect was greater for poplar wood. The release of aroma compounds from the wood depended both on the toasting level and on the wood assortment. Higher levels of xylovolatiles, namely whisky lactone, were measured in samples aged using sliced woods. Both the NIR and NMR analyses highlighted similarities among samples refined with oak tablets, differentiating them from the other wood types. Finally, the e-nose seemed to be a promising alternative to the spectroscopic methods, both for the simplicity of sample preparation and for method portability.
Effect of the extent of ethanol removal on the volatile compounds of a Chardonnay wine dealcoholized by vacuum distillation
“Beverages obtained from the partial dealcoholization of wine” are those drinks whose final alcoholic strength after dealcoholization is lower than that of a wine and higher than or equal to 0.5% v/v. When the total alcoholic strength is lower than 0.5% v/v, the denomination is “Beverages obtained from the dealcoholization of wine”. The practices to be authorized for the production of these drinks from the dealcoholized wine fractions are currently being studied by the OIV. Characterising the composition of these fractions is essential to identify the necessary corrective practices. The present work was aimed at monitoring the losses of the main volatile compounds of a Chardonnay wine as the dealcoholization process by vacuum distillation proceeded. The wine was subjected to total dealcoholization, and during the process the evaporated fractions, re-condensed at 9 °C, were collected in aliquots of 1.25 L each. The ethanol content of each fraction was measured, and for the first 20 fractions the volatile compound content was determined by GC-MS. The results show that the losses of volatile compounds during the dealcoholization process follow different trends depending on the molecules considered. The most volatile compounds, generally those with the lowest perception thresholds, were mainly present in the first evaporated fractions. The greatest losses concerned isoamyl acetate, ethyl hexanoate and ethyl octanoate. Conversely, a greater number of molecules were present at similar concentrations in the different fractions, and their losses followed a linear or sometimes exponential trend: in particular, these compounds included n-hexanol, 2-phenylethanol, diethyl succinate and medium-chain fatty acids (hexanoic, octanoic and decanoic acids).
In the wine dealcoholized to 3.36% v/v (a loss of ethanol equal to 7.43% v/v, corresponding to the 20th and last recondensed fraction), some volatile compounds were no longer detectable or quantifiable; in particular, these were isoamyl acetate, ethyl hexanoate, hexyl acetate, n-hexanol and other six-carbon alcohols, and ethyl octanoate. Other compounds, such as hexanoic, octanoic and decanoic acids and, in particular, β-phenylethanol, benzyl alcohol and γ-butyrolactone, underwent lower percentage losses than that of ethanol. The dealcoholization process can therefore deeply modify the original aromatic profile of the wines, affecting both the absolute concentrations and the relative ratios of the individual molecules.
- …