    BlogForever D2.6: Data Extraction Methodology

    This report outlines an inquiry into the area of web data extraction, conducted within the context of blog preservation. The report reviews theoretical advances and practical developments for implementing data extraction. The inquiry is extended through an experiment that demonstrates the effectiveness and feasibility of implementing some of the suggested approaches. More specifically, the report discusses an approach based on unsupervised machine learning that employs the RSS feeds and HTML representations of blogs. It outlines the possibilities of extracting semantics available in blogs and demonstrates the benefits of exploiting available standards such as microformats and microdata. The report proceeds to propose a methodology for extracting and processing blog data to further inform the design and development of the BlogForever platform.
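    A minimal sketch of the kind of extraction the report describes: walk a blog's RSS feed, fetch each post's HTML, and collect any schema.org microdata embedded in the page. The library choices (feedparser, requests, BeautifulSoup) and the function name are assumptions for illustration, not part of the BlogForever specification.

```python
# Pair RSS feed entries with their HTML pages and pull schema.org microdata.
# Libraries and names here are illustrative assumptions.
import feedparser               # pip install feedparser
import requests
from bs4 import BeautifulSoup   # pip install beautifulsoup4

def extract_post_semantics(feed_url: str):
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        html = requests.get(entry.link, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        # Microdata: any element carrying an itemprop attribute.
        microdata = {
            tag["itemprop"]: tag.get("content") or tag.get_text(strip=True)
            for tag in soup.find_all(attrs={"itemprop": True})
        }
        yield {
            "title": entry.get("title"),
            "published": entry.get("published"),
            "url": entry.link,
            "microdata": microdata,
        }

# Usage (hypothetical feed URL):
# for post in extract_post_semantics("https://example.org/feed"):
#     print(post["title"], post["microdata"])
```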

    Collective Mind, Part II: technical report

    Nowadays, engineers often have to develop software without knowing which hardware it will eventually run on: one of numerous mobile phones, tablets, laptops, data centers, supercomputers, or cloud services. Unfortunately, optimizing compilers often fail to produce fast and energy-efficient code across all hardware configurations. In this technical report, we present the first practical, collaborative, publicly available, Wikipedia-inspired solution to this problem that we are aware of, based on our recent Collective Mind Infrastructure and Repository.
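    To make the underlying idea concrete, the sketch below measures how a few compiler optimization flags perform on the local machine and writes the results to JSON so they could, in principle, be contributed to a shared repository. It deliberately does not use the actual Collective Mind API; the source file name, flag list, and output path are hypothetical.

```python
# Generic flag-sweep benchmark: compile a small C kernel with several
# optimization flags, time each binary, and dump the results as JSON.
# This is NOT the Collective Mind API; names and paths are hypothetical.
import json
import subprocess
import time

FLAGS = ["-O1", "-O2", "-O3", "-Ofast"]

def benchmark(source_file: str = "kernel.c", runs: int = 3):
    results = []
    for flag in FLAGS:
        subprocess.run(["gcc", flag, source_file, "-o", "kernel.bin"], check=True)
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(["./kernel.bin"], check=True)
            timings.append(time.perf_counter() - start)
        results.append({"flag": flag, "best_time_s": min(timings)})
    with open("local_results.json", "w") as f:
        json.dump(results, f, indent=2)  # candidate record for a shared repository
    return results
```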

    A Holistic Usability Framework For Distributed Simulation Systems

    This dissertation develops a holistic usability framework for distributed simulation systems (DSSs). The framework draws on relevant research in human-computer interaction, computer science, technical writing, engineering, management, and psychology. The methodology consists of three steps: (1) framework development, (2) surveys of users to validate and refine the framework and to determine attribute weights, and (3) application of the framework to two real-world systems. The concept of a holistic usability framework for DSSs arose during a project to improve the usability of the Virtual Test Bed, a prototypical DSS, and the framework is partly a result of that project. In addition, DSSs at Ames Research Center were studied for additional insights. The framework has six dimensions: end user needs, end user interface(s), programming, installation, training, and documentation. The participants in this study include managers, researchers, programmers, end users, trainers, and trainees. The first survey was used to obtain qualitative and quantitative data to validate and refine the framework; attributes that failed the validation test were dropped from the framework. A second survey was used to obtain attribute weights. The refined framework was then used to evaluate two existing DSSs and measure their holistic usability. Meeting the needs of the many types of users who interact with the system during design, development, and use is important to launching a successful system. Adequate consideration of system usability along the framework's dimensions will not only help ensure system success but also increase productivity, lower life-cycle costs, and make working with the system more pleasant.
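    As a rough illustration of how survey-derived attribute weights might be combined into a single score, the sketch below computes a weighted average over the six dimensions named in the abstract; the weights, the ratings, and the weighted-sum formula itself are illustrative assumptions, not the dissertation's actual scoring model.

```python
# Hypothetical weighted-sum usability score over the six framework dimensions.
# Weights and ratings are illustrative assumptions.
WEIGHTS = {
    "end_user_needs": 0.25,
    "end_user_interface": 0.20,
    "programming": 0.15,
    "installation": 0.10,
    "training": 0.15,
    "documentation": 0.15,
}

def holistic_usability(ratings: dict) -> float:
    """Combine per-dimension ratings (e.g. averaged 1-5 survey scores) into one score."""
    return sum(WEIGHTS[d] * ratings[d] for d in WEIGHTS)

example_ratings = {d: 4.0 for d in WEIGHTS}  # hypothetical survey results
print(f"Holistic usability: {holistic_usability(example_ratings):.2f}")
```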

    A general software defect-proneness prediction framework

    This is the author's accepted manuscript; the final published article is available from the link below. Copyright @ 2011 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.

    BACKGROUND - Predicting defect-prone software components is an economically important activity and so has received a good deal of attention. However, making sense of the many, and sometimes seemingly inconsistent, results is difficult. OBJECTIVE - We propose and evaluate a general framework for software defect prediction that supports 1) unbiased and 2) comprehensive comparison between competing prediction systems. METHOD - The framework comprises 1) scheme evaluation and 2) defect prediction components. The scheme evaluation analyzes the prediction performance of competing learning schemes for given historical data sets. The defect predictor builds a model according to the evaluated learning scheme and predicts software defects in new data using the constructed model. To demonstrate the performance of the proposed framework, we use both simulation and publicly available software defect data sets. RESULTS - The results show that we should choose different learning schemes for different data sets (i.e., no scheme dominates), that small details in how evaluations are conducted can completely reverse findings, and, last, that our proposed framework is more effective and less prone to bias than previous approaches. CONCLUSIONS - Failure to properly or fully evaluate a learning scheme can be misleading; however, these problems may be overcome by our proposed framework. National Natural Science Foundation of China.
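    A minimal sketch of the two-stage idea (scheme evaluation on historical data, then defect prediction with the selected scheme), assuming scikit-learn; the particular learners, the ROC-AUC scoring, and the synthetic data are stand-ins for the paper's actual learning schemes and defect data sets.

```python
# Stage 1: evaluate competing learning schemes on historical data.
# Stage 2: train the selected scheme and predict defect-proneness for new modules.
# Learners, metric, and data are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-ins for a historical defect data set and a batch of new modules.
X_hist, y_hist = make_classification(n_samples=500, n_features=20, random_state=0)
X_new, _ = make_classification(n_samples=50, n_features=20, random_state=1)

# Stage 1: scheme evaluation on historical data only.
schemes = {
    "naive_bayes": make_pipeline(StandardScaler(), GaussianNB()),
    "logistic": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(random_state=0),
}
scores = {
    name: cross_val_score(model, X_hist, y_hist, cv=5, scoring="roc_auc").mean()
    for name, model in schemes.items()
}
best = max(scores, key=scores.get)
print("scheme evaluation:", scores, "-> selected:", best)

# Stage 2: build the defect predictor with the selected scheme.
predictor = schemes[best].fit(X_hist, y_hist)
defect_prone = predictor.predict(X_new)
```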

    Sparse Predictive Modeling: A Cost-Effective Perspective

    Many real-life problems encountered in industry, economics, or engineering are complex and difficult to model by conventional mathematical methods. Machine learning provides a wide variety of methods and tools for solving such problems by learning mathematical models from data. Methods from the field have found their way into applications such as medical diagnosis, financial forecasting, and web-search engines. The predictions made by a learned model are based on a vector of feature values describing the input to the model. However, predictions do not come for free in real-world applications, since the feature values of the input have to be bought, measured, or produced before the model can be used. Feature selection is the process of eliminating irrelevant and redundant features from the model. Traditionally, it has been applied to achieve interpretable and more accurate models, while the possibility of lowering prediction costs has received much less attention in the literature. In this thesis we consider novel feature selection techniques for reducing prediction costs. The contributions of this thesis are as follows. First, we propose several cost types characterizing the cost of performing prediction with a trained model. In particular, we consider costs emerging from multi-target prediction problems as well as a number of cost types arising when the feature extraction process is structured. Second, we develop greedy regularized least-squares methods that maximize the predictive performance of the models under given budget constraints. Empirical evaluations are performed on numerous benchmark data sets as well as on a novel water quality analysis application. The results demonstrate that, in settings where the considered cost types apply, the proposed methods lead to substantial cost savings compared to conventional methods.
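    The sketch below illustrates budget-constrained greedy forward feature selection with a regularized least-squares learner (scikit-learn's Ridge as a stand-in); the feature costs, the budget, and the synthetic data are illustrative assumptions rather than the thesis's exact algorithm.

```python
# Greedy forward feature selection under a prediction-cost budget,
# using Ridge regression as the regularized least-squares learner.
# Costs, budget, and data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=300, n_features=15, noise=10.0, random_state=0)
feature_costs = np.random.default_rng(0).uniform(1.0, 5.0, size=X.shape[1])
budget = 10.0

selected, spent = [], 0.0
while True:
    best_score, best_j = -np.inf, None
    for j in range(X.shape[1]):
        if j in selected or spent + feature_costs[j] > budget:
            continue  # skip features already chosen or too expensive to add
        cols = selected + [j]
        score = cross_val_score(Ridge(alpha=1.0), X[:, cols], y, cv=5,
                                scoring="neg_mean_squared_error").mean()
        if score > best_score:
            best_score, best_j = score, j
    if best_j is None:
        break  # no affordable candidate features remain
    selected.append(best_j)
    spent += feature_costs[best_j]

print(f"selected features {selected} at total cost {spent:.2f} (budget {budget})")
```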

    An Extensive Analysis of Machine Learning Based Boosting Algorithms for Software Maintainability Prediction

    Software maintainability is an indispensable factor in judging the quality of a piece of software. It describes the ease with which maintenance activities can be performed to adapt the software to a modified environment. The availability and growing popularity of a wide range of Machine Learning (ML) algorithms for data analysis further motivates predicting maintainability. However, an extensive analysis and comparison of various ML-based Boosting Algorithms (BAs) for Software Maintainability Prediction (SMP) has not yet been made. Therefore, the current study analyzes and compares five different BAs, i.e., AdaBoost, GBM, XGB, LightGBM, and CatBoost, for SMP using open-source datasets. The performance of the proposed prediction models has been evaluated using Root Mean Square Error (RMSE), Mean Magnitude of Relative Error (MMRE), Pred(0.25), Pred(0.30), and Pred(0.75) as prediction accuracy measures, followed by a non-parametric statistical test and a post hoc analysis to account for differences in the performance of the various BAs. Based on the residual errors obtained, GBM was the best performer for RMSE, followed by LightGBM, whereas for MMRE, XGB performed best on six of the seven datasets (i.e., 85.71% of the datasets), providing the minimum MMRE values, ranging from 0.90 to 3.82. Further, the statistical test and the post hoc analysis showed that significant differences exist in the performance of the different BAs, and that XGB and CatBoost outperformed all other BAs for MMRE. Lastly, the BAs were also compared with four other ML algorithms to bring out the BAs' superiority over those algorithms. This study should help software developers carry out more precise predictions in good time and hence reduce overall maintenance costs.
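    A minimal sketch of the accuracy measures named in the abstract (RMSE, MMRE, Pred(q)) applied to one boosting regressor; scikit-learn's GradientBoostingRegressor stands in for GBM, and XGBoost, LightGBM, or CatBoost models would be evaluated the same way. The synthetic data is an assumption.

```python
# Compute RMSE, MMRE, and Pred(q) for a boosting regressor on held-out data.
# GradientBoostingRegressor stands in for GBM; data is synthetic.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.abs(actual - predicted) / np.abs(actual))

def pred_q(actual, predicted, q=0.25):
    """Fraction of predictions whose relative error is at most q."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.abs(actual - predicted) / np.abs(actual) <= q)

X, y = make_regression(n_samples=400, n_features=10, noise=5.0, random_state=0)
y = np.abs(y) + 1.0  # keep the target positive so relative errors are defined
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
y_hat = model.predict(X_te)

rmse = np.sqrt(np.mean((y_te - y_hat) ** 2))
print(f"RMSE={rmse:.2f}  MMRE={mmre(y_te, y_hat):.2f}  "
      f"Pred(0.25)={pred_q(y_te, y_hat, 0.25):.2f}")
```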