
    Optically thin composite resonant absorber at the near-infrared band: a polarization independent and spectrally broadband configuration

    We designed, fabricated, and experimentally characterized thin absorbers utilizing both electrical and magnetic impedance matching in the near-infrared regime. The absorbers consist of four main layers: a metal back plate, a dielectric spacer, and two artificial layers. One of the artificial layers provides electrical resonance and the other provides magnetic resonance, yielding polarization-independent broadband perfect absorption. The response of the structure remains similar over a wide range of incidence angles due to the sub-wavelength unit cell size of the constituent artificial layers. The design is useful for applications such as thermal photovoltaics, sensors, and camouflage. (C) 2011 Optical Society of America
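    As background for the impedance-matching argument in this abstract (a standard textbook relation, not taken from the paper itself), the condition for near-unity absorption in a metal-backed absorber can be sketched as follows, with Z_0 denoting the free-space impedance.

        % Minimal sketch of the standard perfect-absorption condition (assumed background,
        % not the paper's derivation): the metal back plate suppresses transmission, so
        % absorption peaks when the effective impedance matches free space.
        \begin{align}
          A(\omega) &= 1 - R(\omega) - T(\omega), \qquad T(\omega) \approx 0 \;\; \text{(metal back plate)} \\
          R(\omega) &= \left|\frac{Z(\omega) - Z_0}{Z(\omega) + Z_0}\right|^{2}, \qquad
          Z(\omega) = Z_0 \sqrt{\frac{\mu_r(\omega)}{\varepsilon_r(\omega)}} \\
          A(\omega) &\to 1 \quad \text{when} \quad \mu_r(\omega) \approx \varepsilon_r(\omega)
          \;\; \text{(electric and magnetic responses matched)}
        \end{align}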

    A Procrustes Story: Is Turkish Processed Like French?


    An external replication on the effects of test-driven development using a multi-site blind analysis approach

    Context: Test-driven development (TDD) is an agile practice claimed to improve the quality of a software product, as well as the productivity of its developers. A previous study (i.e., the baseline experiment) at the University of Oulu (Finland) compared TDD to a test-last development (TLD) approach through a randomized controlled trial. The results failed to support the claims. Goal: We want to validate the original study's results by replicating it at the University of Basilicata (Italy), using a different design. Method: We replicated the baseline experiment, using a crossover design, with 21 graduate students. We kept the settings and context as close as possible to the baseline experiment. In order to limit researchers' bias, we involved two other sites (UPM, Spain, and Brunel, UK) to conduct a blind analysis of the data. Results: The Kruskal-Wallis tests did not show any significant difference between TDD and TLD in terms of testing effort (p-value = .27), external code quality (p-value = .82), and developers' productivity (p-value = .83). Nevertheless, our data revealed a difference based on the order in which TDD and TLD were applied, though no carry-over effect. Conclusions: We verify the baseline study's results, yet our results raise concerns regarding the selection of experimental objects, particularly with respect to their interaction with the order in which treatments are applied. We recommend that future studies survey the tasks used in experiments evaluating TDD. Finally, to lower the cost of replication studies and reduce researchers' bias, we encourage other research groups to adopt the multi-site blind analysis approach described in this paper. This research is supported in part by the Academy of Finland Project 278354.
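    A minimal sketch of the kind of Kruskal-Wallis comparison reported above, assuming per-participant outcome scores for the TDD and TLD groups; the variable names and values are illustrative, not the study's data.

        # Hypothetical sketch of the Kruskal-Wallis comparison described in the abstract;
        # the data below is invented for illustration only.
        from scipy.stats import kruskal

        # One outcome measure (e.g., external code quality) per participant, by treatment.
        tdd_quality = [0.62, 0.71, 0.55, 0.68, 0.74, 0.60]  # illustrative values
        tld_quality = [0.58, 0.70, 0.66, 0.61, 0.73, 0.57]  # illustrative values

        # Non-parametric test for a difference between the two distributions.
        h_stat, p_value = kruskal(tdd_quality, tld_quality)
        print(f"H = {h_stat:.3f}, p = {p_value:.3f}")  # p > 0.05 -> no significant difference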

    Molecular and antigenic characterization of Piscine orthoreovirus (PRV) from rainbow trout (Oncorhynchus mykiss)

    Piscine orthoreovirus (PRV-1) causes heart and skeletal muscle inflammation (HSMI) in farmed Atlantic salmon (Salmo salar). Recently, a novel PRV (formerly PRV-Om, here called PRV-3) was found in rainbow trout (Oncorhynchus mykiss) with HSMI-like disease. PRV is considered to be an emerging pathogen in farmed salmonids. In this study, molecular and antigenic characterization of PRV-3 was performed. Erythrocytes are the main target cells for PRV, and blood samples collected from experimentally challenged fish were used as the source of virus. Virus particles were purified by gradient ultracentrifugation and the complete coding sequences of PRV-3 were obtained by Illumina sequencing. When compared to PRV-1, the nucleotide identity of the coding regions was 80.1%, and the amino acid identities of the predicted PRV-3 proteins varied from 96.7% (λ1) to 79.1% (σ3). Phylogenetic analysis showed that PRV-3 belongs to a separate cluster. The region encoding σ3 was sequenced from PRV-3 isolates collected from rainbow trout in Europe. These sequences clustered together, but were distant from the PRV-3 that was isolated from rainbow trout in Norway. Bioinformatic analyses of PRV-3 proteins revealed that predicted secondary structures and functional domains were conserved between PRV-3 and PRV-1. Rabbit antisera raised against purified virus or various recombinant virus proteins from PRV-1 all cross-reacted with PRV-3. Our findings indicate that despite the different species preferences of the PRV subtypes, several genetic, antigenic, and structural properties are conserved between PRV-1 and PRV-3.
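    The 80.1% coding-region identity mentioned above is a pairwise nucleotide identity; a minimal sketch of such a calculation over two pre-aligned sequences is shown below (the percent_identity helper and the toy sequences are hypothetical, not the study's pipeline).

        # Hypothetical percent-identity calculation over aligned nucleotide sequences.
        def percent_identity(aligned_a: str, aligned_b: str) -> float:
            """Percent identity over aligned positions, skipping gap-gap columns."""
            assert len(aligned_a) == len(aligned_b), "sequences must be aligned to equal length"
            compared = matches = 0
            for a, b in zip(aligned_a.upper(), aligned_b.upper()):
                if a == "-" and b == "-":
                    continue  # a column with gaps in both sequences carries no information
                compared += 1
                if a == b and a != "-":
                    matches += 1
            return 100.0 * matches / compared

        print(percent_identity("ATGGC-ATTA", "ATGACGATTA"))  # 80.0 for this toy pair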

    Software defect prediction: do different classifiers find the same defects?

    During the last 10 years, hundreds of different defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar, with models rarely performing above the predictive performance ceiling of about 80% recall. We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers. We perform a sensitivity analysis to compare the performance of Random Forest, Naïve Bayes, RPart and SVM classifiers when predicting defects in NASA, open source and commercial datasets. The defect predictions that each classifier makes are captured in a confusion matrix and the prediction uncertainty of each classifier is compared. Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Some classifiers are more consistent in predicting defects than others. Our results confirm that a unique subset of defects can be detected by specific classifiers. However, while some classifiers are consistent in the predictions they make, other classifiers vary in their predictions. Given our results, we conclude that classifier ensembles with decision-making strategies not based on majority voting are likely to perform best in defect prediction.
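    A rough sketch of the core comparison, assuming a synthetic dataset and scikit-learn stand-ins for the four classifiers (DecisionTreeClassifier plays the role of RPart); it is illustrative only, not the paper's pipeline.

        # Illustrative comparison of which individual defects four classifiers detect.
        # Dataset and features are synthetic placeholders.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.naive_bayes import GaussianNB
        from sklearn.tree import DecisionTreeClassifier  # stand-in for R's rpart
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(400, 10))  # synthetic module metrics
        y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=400) > 1).astype(int)  # defect labels
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        classifiers = {
            "RandomForest": RandomForestClassifier(random_state=0),
            "NaiveBayes": GaussianNB(),
            "RPart-like": DecisionTreeClassifier(random_state=0),
            "SVM": SVC(),
        }

        detected = {}
        for name, clf in classifiers.items():
            pred = clf.fit(X_tr, y_tr).predict(X_te)
            # True positives: truly defective modules that this classifier flags.
            detected[name] = set(np.flatnonzero((pred == 1) & (y_te == 1)))

        # Defects one classifier finds that all the others miss.
        for name, found in detected.items():
            others = set().union(*(v for k, v in detected.items() if k != name))
            print(name, "uniquely detects", len(found - others), "defects")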

    Results from an ethnographically-informed study in the context of test driven development

    Background: Test-driven development (TDD) is an iterative software development technique where unit tests are defined before production code. Previous studies fail to analyze the values, beliefs, and assumptions that inform and shape TDD. Aim: We designed and conducted a qualitative study to understand the values, beliefs, and assumptions of TDD. In particular, we sought to understand how novice and professional software developers, arranged in pairs (a driver and a pointer), perceive and apply TDD. Method: 14 novice software developers, i.e., graduate students in Computer Science at the University of Basilicata, and six professional software developers (with one to 10 years of work experience) participated in our ethnographically-informed study. We asked the participants to implement a new feature for an existing software system written in Java. We immersed ourselves in the context of the study, and collected data by means of contemporaneous field notes, audio recordings, and other artifacts. Results: A number of insights emerge from our analysis of the collected data, the main ones being: (i) refactoring (one of the phases of TDD) is not performed as often as the process requires and is considered less important than other phases, (ii) the most important phase is implementation, (iii) unit tests are almost never up-to-date, (iv) participants first build a sort of mental model of the source code to be implemented and only then write test cases on the basis of this model; and (v) apart from minor differences, professional developers and students applied TDD in a similar fashion. Conclusions: Developers write quick-and-dirty production code to pass the tests and ignore refactoring.
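    For readers unfamiliar with the TDD phases discussed above (test first, implement, refactor), a minimal hypothetical example is sketched below; the price_with_vat feature is invented for illustration and is not from the study.

        # Minimal illustration of the TDD cycle: write a failing unit test first,
        # then just enough production code to make it pass, then (ideally) refactor.
        import unittest

        def price_with_vat(net_price: float, vat_rate: float = 0.22) -> float:
            """Production code written only after the tests below were in place."""
            return round(net_price * (1 + vat_rate), 2)

        class PriceWithVatTest(unittest.TestCase):
            # 1. Red: this test existed (and failed) before price_with_vat was written.
            def test_default_vat_rate(self):
                self.assertEqual(price_with_vat(100.0), 122.0)

            # 2. Green: the implementation above makes the tests pass.
            def test_custom_vat_rate(self):
                self.assertEqual(price_with_vat(50.0, vat_rate=0.10), 55.0)

            # 3. Refactor: the phase the study found participants tend to skip.

        if __name__ == "__main__":
            unittest.main()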

    A benchmark study on the effectiveness of search-based data selection and feature selection for cross project defect prediction

    Context: Previous studies have shown that steered training data or dataset selection can lead to better performance for cross-project defect prediction (CPDP). On the other hand, feature selection and data quality are issues to consider in CPDP. Objective: We aim at utilizing the Nearest Neighbor (NN)-Filter, embedded in a genetic algorithm, to produce validation sets for generating evolving training datasets to tackle CPDP while accounting for potential noise in defect labels. We also investigate the impact of using different feature sets. Method: We extend our proposed approach, Genetic Instance Selection (GIS), by incorporating feature selection in its setting. We use 41 releases of 11 multi-version projects to assess the performance of GIS in comparison with benchmark CPDP approaches (NN-Filter and Naive-CPDP) and within-project approaches (Cross-Validation (CV) and Previous Releases (PR)). To assess the impact of feature sets, we use two sets of features, SCM+OO+LOC (all) and CK+LOC (ckloc), as well as iterative info-gain subsetting (IG) for feature selection. Results: The GIS variant with info-gain feature selection is significantly better than NN-Filter (all, ckloc, IG) in terms of F1 (p-values ≤ 0.001, Cohen's d = {0.621, 0.845, 0.762}) and G (p-values ≤ 0.001, Cohen's d = {0.899, 1.114, 1.056}), and Naive CPDP (all, ckloc, IG) in terms of F1 (p-values ≤ 0.001, Cohen's d = {0.743, 0.865, 0.789}) and G (p-values ≤ 0.001, Cohen's d = {1.027, 1.119, 1.050}). Overall, the performance of GIS is comparable to that of within-project defect prediction (WPDP) benchmarks, i.e. CV and PR. In terms of the multiple comparisons test, all variants of GIS belong to the top-ranking group of approaches. Conclusions: We conclude that datasets obtained from search-based approaches combined with feature selection techniques are a promising way to tackle CPDP. In particular, the performance comparison with the within-project scenario encourages further investigation of our approach. However, the performance of GIS is based on high recall at the expense of a loss in precision. Using different optimization goals, utilizing other validation datasets, and exploring other feature sets remain directions for future work.
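    A rough sketch of the NN-Filter idea referenced above (selecting, for each target-project instance, its nearest training instances from other projects); the function and data below are illustrative assumptions, not the paper's GIS implementation.

        # Keep only source-project instances that are near the target project's data.
        import numpy as np

        def nn_filter(source_X, source_y, target_X, k=10):
            """Source instances that are among the k nearest neighbours
            (Euclidean distance) of at least one target instance."""
            selected = set()
            for t in target_X:
                dists = np.linalg.norm(source_X - t, axis=1)  # distance to every source row
                selected.update(np.argsort(dists)[:k])        # indices of the k closest
            idx = sorted(selected)
            return source_X[idx], source_y[idx]

        # Toy usage with random data standing in for software metrics.
        rng = np.random.default_rng(1)
        src_X, src_y = rng.normal(size=(200, 8)), rng.integers(0, 2, size=200)
        tgt_X = rng.normal(size=(30, 8))
        filtered_X, filtered_y = nn_filter(src_X, src_y, tgt_X, k=10)
        print(filtered_X.shape)  # at most 200 rows, typically far fewer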

    Effect of Time-pressure on Perceived and Actual Performance in Functional Software Testing

    Background: Time-pressure is an inevitable reality of the software industry that influences the performance of software engineers. It may adversely affect software quality or cause the perceived performance on executed tasks to differ from actual performance. Objective: We aim to investigate the effect of time-pressure on the perceived and actual performance of software testers in the context of functional software testing. Method: We performed two controlled experiments with 87 graduate students in two academic terms. We assessed actual performance in terms of coverage (i.e. the percentage of test cases correctly identified) and perceived performance using NASA-TLX. We used an independent factorial design for our experimental study. Results: The results reveal a significant effect of time-pressure on actual performance. However, we could not observe a significant effect of time-pressure on the perceived performance of the participants for the task undertaken. We also observed a significant negative correlation between actual and perceived performance when controlling for time-pressure and experimental session factors. Conclusion: Time-pressure affects actual performance in a testing task, but the testers' perception of accomplishment is sustained irrespective of time-pressure, indicating an over-estimation issue. Perception of performance should be adjusted to align with reality to account for the effect of time-pressure; this would lead to better self-estimates of performance.
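    A minimal sketch of the partial-correlation analysis described above, controlling for time-pressure and session by regressing them out and correlating the residuals; all names and data are synthetic placeholders, not the study's dataset.

        # Synthetic illustration of a partial correlation between actual performance
        # (coverage) and perceived performance (NASA-TLX-style score).
        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(42)
        n = 87
        time_pressure = rng.integers(0, 2, size=n)  # 0 = no pressure, 1 = pressure
        session = rng.integers(0, 2, size=n)        # experimental session indicator
        actual = 0.6 - 0.15 * time_pressure + rng.normal(scale=0.1, size=n)  # coverage
        perceived = 0.7 + rng.normal(scale=0.1, size=n)                      # perceived score

        def residualize(y, covariates):
            """Residuals of y after ordinary least squares on the covariates."""
            X = np.column_stack([np.ones_like(y), *covariates])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return y - X @ beta

        r, p = pearsonr(residualize(actual, [time_pressure, session]),
                        residualize(perceived, [time_pressure, session]))
        print(f"partial r = {r:.2f}, p = {p:.3f}")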

    Ground truth deficiencies in software engineering: when codifying the past can be counterproductive

    Many software engineering tools build and evaluate their models based on historical data to support development and process decisions. These models help us answer numerous interesting questions, but have their own caveats. In a real-life setting, the objective function of human decision-makers for a given task might be influenced by a whole host of factors that stem from their cognitive biases, subverting the ideal objective function required for an optimally functioning system. Relying on this data as ground truth may give rise to systems that end up automating software engineering decisions by mimicking past sub-optimal behaviour. We illustrate this phenomenon and suggest mitigation strategies to raise awareness.

    Use of locust bean flour as a substitute for cocoa in the production of chocolate spread: Quality attributes and storage stability

    The effects of locust bean (carob) flour (LBF) as a substitute for cocoa on the production, quality attributes, and storage stability of chocolate spreads were investigated. CON (4.5% cocoa), CF15 (3.0% cocoa + 1.5% LBF), CF30 (1.5% cocoa + 3.0% LBF), and CF45 (4.5% LBF) formulations were produced and stored at 22 and 35 °C for 12 weeks. Appearance, odor, sweetness, color, and overall acceptability scores decreased with increasing LBF, but up to 3.0% LBF did not affect the scores compared to the CON. Replacing cocoa with LBF at a low level resulted in higher hardness and spreadability. Hardness, free fatty acid, and peroxide values (PV) increased and water activity (aw) values generally decreased during storage, but PV remained below 10 meq O2/kg. As the LBF ratio increased, the spreads darkened. Thus, up to 3% LBF can be used as a cocoa substitute with minimal quality and sensory changes in the production of chocolate spread.