
    Improved point center algorithm for K-Means clustering to increase software defect prediction

    K-means is a popular and easy-to-use clustering algorithm. It is, however, sensitive to its randomly chosen initial centroids, which can prevent it from producing optimal results. This research aimed to improve the performance of the k-means algorithm by applying a proposed algorithm called point center. The proposed algorithm overcame the random centroid initialization in k-means and was then applied to predicting errors in software defect modules. The point center algorithm was proposed to determine the initial centroid values used to optimize the k-means algorithm; the selection of the X and Y variables then determined the cluster center members. Ten datasets were used for testing, nine of which were used for predicting software defects. The proposed point center algorithm showed the lowest errors and improved the performance of the k-means algorithm by an average of 12.82% fewer cluster errors compared with the randomly obtained centroid values of the simple k-means algorithm. The findings are beneficial and contribute to developing clustering models that handle data, such as predicting software defect modules, more accurately.
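
    The abstract does not spell out the paper's exact point-center rule, so the sketch below only illustrates the mechanism it relies on: replacing the random seeding of k-means with a deterministic initializer (here a hypothetical percentile-based rule, not the paper's algorithm) so that repeated runs give one reproducible clustering error.

```python
# Minimal sketch: seeding k-means with deterministic initial centroids instead
# of random ones. The percentile rule below is a placeholder heuristic, NOT the
# paper's actual point-center algorithm.
import numpy as np
from sklearn.cluster import KMeans

def deterministic_init(X: np.ndarray, k: int) -> np.ndarray:
    """Hypothetical seeding: centroids at evenly spaced interior percentiles
    of each feature, so every run starts from the same points."""
    qs = np.linspace(0, 100, k + 2)[1:-1]        # interior percentiles
    return np.percentile(X, qs, axis=0)          # shape (k, n_features)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
init = deterministic_init(X, k=3)
model = KMeans(n_clusters=3, init=init, n_init=1).fit(X)
print(model.inertia_)  # clustering error is now reproducible across runs
```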

    MEG: Multi-objective Ensemble Generation for Software Defect Prediction

    Background: Defect prediction research aims at assisting software engineers in the early identification of software defects during the development process. A variety of automated approaches, ranging from traditional classification models to more sophisticated learning approaches, have been explored to this end. Among these, recent studies have proposed the use of ensemble prediction models (i.e., aggregations of multiple base classifiers) to build more robust defect prediction models. / Aims: In this paper, we introduce a novel approach based on multi-objective evolutionary search to automatically generate defect prediction ensembles. Our proposal is not only novel with respect to the more general area of evolutionary generation of ensembles, but it also advances the state of the art in the use of ensembles for defect prediction. / Method: We assess the effectiveness of our approach, dubbed Multi-objective Ensemble Generation (MEG), by empirically benchmarking it against the most closely related proposals we found in the literature on defect prediction ensembles and on multi-objective evolutionary ensembles (which, to the best of our knowledge, had never previously been applied to defect prediction). / Results: Our results show that MEG is able to generate ensembles that produce predictions similar to or more accurate than those achieved by all the other approaches considered in 73% of the cases (with favourable large effect sizes in 80% of them). / Conclusions: MEG is not only able to generate ensembles that yield more accurate defect predictions with respect to the benchmarks considered, but it also does so automatically, thus relieving engineers of the burden of manual design and experimentation.
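
    To make the multi-objective ensemble idea concrete, the hedged sketch below enumerates small ensembles of base classifiers and keeps the Pareto front of (prediction error, ensemble size). The objectives, the exhaustive search, and the decision-tree pool are illustrative assumptions, not MEG's evolutionary operators or fitness functions.

```python
# Illustrative multi-objective ensemble selection: search over subsets of base
# classifiers and keep the non-dominated (error, size) trade-offs.
from itertools import combinations
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# A pool of base classifiers differing in capacity.
pool = [DecisionTreeClassifier(max_depth=d, random_state=0).fit(Xtr, ytr)
        for d in range(1, 7)]

def vote_error(members):
    """Misclassification rate of the majority vote of `members`."""
    votes = np.mean([m.predict(Xte) for m in members], axis=0) >= 0.5
    return float(np.mean(votes != yte))

# Candidate ensembles: odd sizes avoid vote ties.
cands = [c for r in (1, 3, 5) for c in combinations(pool, r)]
scores = [(vote_error(c), len(c)) for c in cands]

# Keep candidates not dominated on both objectives (lower is better).
pareto = sorted({s for s in scores
                 if not any((o[0] < s[0] and o[1] <= s[1]) or
                            (o[0] <= s[0] and o[1] < s[1]) for o in scores)})
print(pareto)  # Pareto front of (error, ensemble size)
```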

    Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016)

    Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016), Timisoara, Romania, February 8-11, 2016. The PhD Symposium was a very good opportunity for young researchers to share information and knowledge, to present their current research, and to discuss topics with other students in order to look for synergies and common research topics. The idea was very successful, and the assessment made by the PhD students was very good. It also helped to achieve one of the major goals of the NESUS Action: to establish an open European research network targeting sustainable solutions for ultrascale computing, aiming at cross-fertilization among HPC, large-scale distributed systems, and big data management, and contributing to bringing together disparate researchers working across different areas and providing a meeting ground for them to exchange ideas, identify synergies, and pursue common activities in research topics such as sustainable software solutions (applications and system software stack), data management, energy efficiency, and resilience. European Cooperation in Science and Technology (COST).

    Multiple Objective Co-Optimization of Switched Reluctance Machine Design and Control

    This dissertation includes a review of various motor types, the motivation for selecting the switched reluctance motor (SRM) as the focus of this work, a review of SRM design and control optimization methods in the literature, a proposed co-optimization approach, and empirical evaluations to validate the models and the proposed co-optimization methods. The SRM was chosen as the focus of this research based on its low cost, easy manufacturability, moderate performance and efficiency, and its potential for improvement through advanced design and control optimization. The literature review found that co-optimization of both SRM design and controls is not common, and key areas for improvement in SRM design and control optimization methods were identified. Among other things, these include the need for computationally efficient transient models with the accuracy of FEA simulations and the need for co-optimization of both machine geometry and control methods throughout the entire operating range with multiple objectives such as torque ripple and efficiency. A multi-stage modeling and optimization framework is proposed that includes robust transient simulators using mappings from FEA in order to optimize SRM geometry, windings, and control conditions throughout the entire operating region with multiple objectives. These methods include the use of particle swarm optimization to determine current profiles for low to moderate speeds, and other optimization methods to determine optimal control conditions throughout the entire operating range, with consideration of various characteristics and boundary conditions such as voltage and current constraints. The multi-stage optimization process includes down-selections in two earlier stages based on performance and operational characteristics at zero and maximum speed. Co-optimization of SRM design and control conditions is demonstrated as a final design is selected based on a fitness function evaluating various operational characteristics, including torque ripple and efficiency, throughout the torque-speed operating range. The final design was scaled, fabricated, and tested to demonstrate the viability of the proposed framework and co-optimization method. The accuracy of the models was confirmed by comparing simulated and empirical results. Test results from operation at various torques and speeds demonstrate the effectiveness of the optimization approach throughout the entire operating range, and they confirm the feasibility of the proposed torque-ripple minimization and efficiency maximization control schemes. A key benefit of the overall approach is that a wide range of machine design parameters and control conditions can be swept, so that, based on the needs of an application, the designer can select the appropriate geometry, winding, and control approach using performance functions that consider torque ripple, efficiency, and other metrics.
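
    A hedged sketch of the particle-swarm step described above: each particle encodes a candidate phase-current profile, and the fitness trades torque ripple against tracking an average-torque demand. The toy fitness (torque proportional to the square of the current) is an assumption standing in for the dissertation's FEA-mapped transient models.

```python
# Minimal PSO sketch for shaping a phase-current profile under box constraints.
import numpy as np

rng = np.random.default_rng(1)
DIM, N, ITERS = 8, 30, 200          # profile samples, particles, iterations

def fitness(profile):
    torque = profile ** 2                        # toy torque model: T ~ i^2
    ripple = np.std(torque)                      # penalise uneven torque
    tracking = (np.mean(torque) - 0.5) ** 2      # meet an average-torque demand
    return ripple + 10 * tracking

pos = rng.uniform(0, 1, (N, DIM))                # current samples in [0, 1] pu
vel = np.zeros((N, DIM))
pbest = pos.copy()
pbest_f = np.apply_along_axis(fitness, 1, pos)
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random((2, N, DIM))
    # Standard velocity update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)           # current limit as box constraint
    f = np.apply_along_axis(fitness, 1, pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(gbest.round(3), round(float(fitness(gbest)), 4))
```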

    Dynamically reconfigurable bio-inspired hardware

    During the last several years, reconfigurable computing devices have experienced impressive development in their resource availability, speed, and configurability. Currently, commercial FPGAs offer the possibility of self-reconfiguration by partially modifying their configuration bitstream, providing high architectural flexibility while guaranteeing high performance. These configurability features have received special interest from computer architects: one can find several reconfigurable coprocessor architectures for cryptographic algorithms, image processing, automotive applications, and various general-purpose functions. On the other hand, we have bio-inspired hardware, a large research field that takes inspiration from living beings in order to design hardware systems and that includes diverse topics: evolvable hardware, neural hardware, cellular automata, and fuzzy hardware, among others. Living beings are well known for their high adaptability to environmental changes, featuring very flexible adaptations at several levels. Bio-inspired hardware systems require such flexibility to be provided by the hardware platform on which the system is implemented. In general, bio-inspired hardware has been implemented on both custom and commercial hardware platforms. Custom platforms are specifically designed to support bio-inspired hardware systems, typically featuring special cellular architectures and enhanced reconfigurability capabilities such as partial and dynamic reconfiguration. These aspects are much appreciated for providing the performance and the high architectural flexibility required by bio-inspired systems. However, the limited availability and very high cost of such custom devices make them accessible to only a few research groups. Even though some commercial FPGAs provide enhanced reconfigurability features such as partial and dynamic reconfiguration, their utilization is still in its early stages and is not well supported by FPGA vendors, making these features difficult to include in existing bio-inspired systems. In this thesis, I present a set of architectures, techniques, and methodologies for benefiting from the configurability advantages of current commercial FPGAs in the design of bio-inspired hardware systems. The presented architectures include neural networks, spiking neuron models, fuzzy systems, cellular automata, and random Boolean networks. For these architectures, I propose several techniques for parametric and topological adaptation, such as Hebbian learning, evolutionary and co-evolutionary algorithms, and particle swarm optimization. Finally, as case studies I consider the implementation of bio-inspired hardware systems on two platforms: YaMoR (Yet another Modular Robot) and ROPES (Reconfigurable Object for Pervasive Systems), the development of both platforms having been co-supervised in the framework of this thesis.
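
    For the parametric-adaptation side, here is a minimal sketch of the Hebbian rule mentioned above, assuming a plain rate-based formulation with weight decay; this is an illustrative software model, whereas the thesis realises such updates per connection on the FPGA fabric.

```python
# Hebbian learning sketch: weights grow when pre- and post-synaptic activity
# coincide (dW = eta * outer(y, x)); multiplicative decay keeps them bounded.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post, eta = 4, 3, 0.1
W = rng.normal(scale=0.1, size=(n_post, n_pre))   # initial synaptic weights

for _ in range(100):
    x = (rng.random(n_pre) < 0.5).astype(float)   # presynaptic activity
    y = (W @ x > 0.5).astype(float)               # postsynaptic firing
    W += eta * np.outer(y, x)                     # Hebb: strengthen co-activity
    W *= 0.99                                     # decay prevents runaway growth

print(W.round(2))
```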

    Systematic analysis of software development in cloud computing perceptions

    Cloud computing is characterized as a shared computing and communication infrastructure. It supports the efficient and effective development processes carried out in various organizations, and it offers both opportunities and solutions for outsourcing and managing software development operations across distinct geographies. Cloud computing is adopted by organizations and application developers for developing quality software, and it has a significant impact on handling the complexity involved in developing and designing quality software. Software development organizations prefer cloud computing for outsourcing tasks because of its availability and scalability. Cloud computing is an attractive choice for developing modern software, as it provides a completely new way of building real-time, cost-effective, efficient, quality software. Tenants (providers, developers, and consumers) are provided with platforms, software services, and infrastructure on a pay-per-use basis. Cloud-based software services are becoming increasingly popular, as observed by their widespread use. The cloud computing approach has drawn the interest of researchers and businesses because of its ability to provide a flexible and resourceful platform for development and deployment. To reach a cohesive understanding of the analyzed problems and of the solutions for improving software quality, the existing literature on cloud-based software development should be analyzed and synthesized systematically. Keyword strings were formulated to identify relevant research articles from journals, book chapters, and conference papers, and research articles published between 2011 and 2021 were extracted from various scientific databases. A total of 97 research publications are examined in this systematic literature review (SLR) and evaluated as appropriate studies for explaining and discussing the proposed topic. The major emphasis of the presented SLR is to identify the participating entities of cloud-based software development, the challenges associated with adopting the cloud for software development processes, and its significance to software industries and developers. This SLR will assist organizations, designers, and developers in developing and deploying user-friendly, efficient, effective, and real-time software applications. Qatar University Internal Grant - No. IRCC‐2021‐010.

    Deep geothermal exploration by means of electromagnetic methods: New insights from the Larderello geothermal field (Italy)

    The main target of this research is to improve knowledge of the deep structures of the Larderello-Travale geothermal field (Tuscany, Italy), with a focus on the Lago Boracifero sector, and particularly on the heat source of the system, the tectonics, and their relation to the hydrothermal circulation. In the frame of the PhD program and of the IMAGE project (Integrated Methods for Advanced Geothermal Exploration; EU FP7), we acquired new magnetotelluric (MT) and time-domain EM (TDEM) data in a key sector of the field (Lago Boracifero). These data complement the MT datasets previously acquired in the frame of exploration and scientific projects. This study is also based on integrated modelling, which included and organized, in the Petrel (Schlumberger) environment, a large quantity of geological and geophysical data. We also propose an integrated approach to improve the reliability of 2D MT inversion models, using external information from the integrated model of the field as well as an innovative probabilistic analysis of the MT data. We present our attempt to treat the 1D magnetotelluric inverse problem with a probabilistic approach, adopting Particle Swarm Optimization (PSO), a heuristic method based on the concept of adaptive behaviour for solving complex problems. The user-friendly software "GlobalEM" was implemented for the analysis and probabilistic optimization of MT data. The results from theoretical and measured MT data are promising, not least because of the possibility of implementing different schemes of constrained optimization as well as joint optimization (e.g. MT and TDEM). The analysis of the a-posteriori distribution of the results can help in assessing the reliability of the model. The 2D MT inversion models and the integrated study of the Larderello-Travale geothermal field improved knowledge of the deep structures of the system, with a relevant impact on the conceptual geothermal model. In the Micaschist and Gneiss complexes we observed a generally high electrical resistivity response, locally interrupted by low-resistivity anomalies that correlate well with the most productive sectors of the field. A still partially molten igneous intrusion beneath the Lago Boracifero sector was detected based on the interpretation of low-resistivity anomalies located at mid-crustal level (> 6 km). New insights on the tectonics are also proposed. The fundamental role of a large tectonic structure, the Cornia Fault, located along the homonymous river, was highlighted: in our opinion, this fault played an important role in the geothermal evolution of the Lago Boracifero sector, favouring both the hydrothermal circulation and the emplacement of magma bodies. In our opinion, the system can be ascribed to a "young convective and intrusive" field fed by a complex composite batholith.
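
    A minimal sketch of the probabilistic reading described above: every model visited during a heuristic search is kept, its data misfit is mapped to a relative likelihood, and the weighted statistics approximate an a-posteriori distribution. The one-parameter half-space "forward model" is a toy assumption standing in for the 1D MT impedance recursion used by the actual GlobalEM software.

```python
# A-posteriori analysis sketch: weight visited models by exp(-misfit/2) and
# form posterior statistics over the model parameter (here a single resistivity).
import numpy as np

rng = np.random.default_rng(2)
true_rho = 100.0                                     # toy half-space resistivity
sigma = 0.05                                         # noise level in log10 units
data = np.log10(true_rho) + rng.normal(0, sigma, 20) # noisy log apparent resistivity

def misfit(rho):
    """Chi-squared data misfit of a candidate half-space model."""
    return np.sum((np.log10(rho) - data) ** 2) / sigma ** 2

# Pretend these are all models visited by the swarm during the optimisation.
visited = 10 ** rng.uniform(0, 4, 5000)              # rho in [1, 1e4] ohm*m
chi2 = np.array([misfit(r) for r in visited])
w = np.exp(-0.5 * (chi2 - chi2.min()))               # relative likelihoods

post_mean = np.sum(w * np.log10(visited)) / np.sum(w)
print("posterior mean resistivity ~", round(10 ** post_mean, 1), "ohm*m")
```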

    Design and analysis of current stress minimalisation controllers in multi-active bridge DC-DC converters.

    Multi-active bridge (MAB) DC-DC converters have attracted significant research attention in power conversion applications within DC microgrids, medium-voltage DC, and high-voltage DC transmission systems. This is encouraged by the MAB's several functionalities, such as DC voltage stepping/matching, bidirectional power flow regulation, and DC fault isolation. In that sense, this family of DC-DC converters is similar to AC transformers in AC grids and is hence called DC transformers. However, DC transformers are generally less efficient than AC transformers, due to the introduction of power electronics. Moreover, control scheme design is challenging in DC transformers, due to their nonlinear characteristics and the multiple degrees of freedom introduced by the phase-shift control of the converter bridges. The main purpose of this research is to devise control techniques that enhance the conversion efficiency of DC transformers via the minimisation of current stresses. This is achieved by designing two generalised controllers that minimise current stresses in MAB DC transformers. The first controller is for the dual active bridge (DAB), the simplest form of MAB, where particle swarm optimisation (PSO) is implemented offline to obtain optimal triple phase shift (TPS) parameters for minimising the RMS current. This is achieved by applying PSO to a DAB steady-state model, with generic per-unit expressions of the converter AC RMS current and transferred power under all possible switching modes. Analysing the generic data pool generated by the offline PSO algorithm enabled the design of a generic real-time closed-loop PI-based controller. The proposed control scheme achieves bidirectional active power regulation in the DAB over the 1 to -1 pu power range with minimum RMS current for buck/boost/unity modes, without the need for online optimisation or memory-consuming look-up tables. Extending the same controller design procedure to the MAB was deemed not feasible, as it would involve a highly complex PSO exercise that is difficult to generalise to N bridges, and it would generate a massive data pool that would be quite cumbersome to analyse and generalise. For this reason, a second controller was developed for the MAB converter that minimises current stress and regulates active power without using a converter-based model. This is achieved through a new real-time minimum-current point-tracking (MCPT) algorithm, which realises an iterative optimisation search using an adaptive-step perturb and observe (P&O) method. Active power is regulated in each converter bridge using a new power decoupler algorithm. The proposed controller generalises to MABs regardless of the number of ports, the power level, and the DC voltage ratios between the ports. It therefore requires neither an extensive look-up table for implementation nor complex non-linear converter modelling, and it is not circuit-parameter-dependent. The main disadvantages of the proposed controller are its slightly slower transient response and the number of sensors it requires.
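
    A minimal sketch of the adaptive-step perturb-and-observe loop behind MCPT, assuming a toy quadratic plant in place of the real converter and current sensing: the controller keeps perturbing a phase-shift variable while the observed RMS current falls, and reverses direction and halves the step when it rises.

```python
# Adaptive-step P&O sketch for minimum-current point tracking.
def measure_rms_current(phase_shift):
    """Toy plant standing in for the converter: minimum current at 0.3 rad."""
    return (phase_shift - 0.3) ** 2 + 1.0

shift, step = 0.0, 0.1
prev = measure_rms_current(shift)
for _ in range(50):
    shift += step                        # perturb the phase shift
    now = measure_rms_current(shift)     # observe the resulting RMS current
    if now > prev:                       # got worse: reverse and shrink the step
        step = -0.5 * step
    prev = now

print(round(shift, 3))                   # settles near the minimum-current shift
```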

    A survey on scheduling and mapping techniques in 3D Network-on-chip

    Networks-on-Chip (NoCs) have been widely employed in the design of multiprocessor systems-on-chip (MPSoCs) as a scalable communication solution. NoCs enable communication between on-chip Intellectual Property (IP) cores and allow those cores to achieve higher performance by offloading their communication tasks. Mapping and scheduling methodologies are key elements in assigning application tasks, allocating the tasks to the IPs, and organising communication among them to achieve specified objectives. The goal of this paper is to present a detailed state of the art of research in the field of mapping and scheduling of applications on 3D NoCs, classifying the works along several dimensions and giving some potential research directions.
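
    To make the mapping problem concrete, the sketch below greedily places communicating tasks onto a small 3D mesh so that heavy flows travel few hops. The greedy rule, the XYZ hop metric, and the traffic figures are illustrative assumptions only; the surveyed works use techniques ranging from exact formulations to metaheuristics.

```python
# Greedy task-to-core mapping sketch for a tiny 3D mesh NoC.
from itertools import product

MESH = list(product(range(2), range(2), range(2)))  # 2x2x2 mesh of IP cores
flows = {("t0", "t1"): 90, ("t1", "t2"): 60, ("t0", "t3"): 20}  # traffic volumes

def hops(a, b):
    """Manhattan distance, matching dimension-ordered (XYZ) routing."""
    return sum(abs(i - j) for i, j in zip(a, b))

placed, free = {}, set(MESH)

def comm_cost(task, node):
    """Hop-weighted traffic between `task` at `node` and already-placed partners."""
    cost = 0
    for (a, b), vol in flows.items():
        other = b if a == task else a if b == task else None
        if other in placed:
            cost += vol * hops(node, placed[other])
    return cost

# Place endpoints of the heaviest flows first, each on its cheapest free core.
for (src, dst), vol in sorted(flows.items(), key=lambda kv: -kv[1]):
    for t in (src, dst):
        if t not in placed:
            best = min(free, key=lambda n: comm_cost(t, n))
            placed[t] = best
            free.remove(best)

total = sum(vol * hops(placed[s], placed[d]) for (s, d), vol in flows.items())
print(placed, "hop-weighted cost:", total)
```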