
    Efficient whole-system impact analysis methods with applications in software development

    During software change impact analysis, we assess the consequences of changes made to a software system, which has important applications in, for instance, change propagation, cost estimation, software quality and testing. We developed impact analysis methods that can be used effectively and efficiently for large, heterogeneous real-life applications as well. Previously available methods could provide results only in limited environments and for systems of limited size. Apart from enhancing the existing static and dynamic slicing and dependence analysis algorithms, we achieved results in related areas such as the investigation of dependences based on metrics, conceptual coupling, quality models, and the prediction of defects and productivity. These areas mostly support the application of the methods in practice. We also contributed results for special technologies, for instance dependences in database systems and the analysis of low-level languages. Regarding the applications of impact analysis, we developed novel methods for test optimization, test coverage measurement and prioritization, and change propagation. The developed methods provided a basis for further projects, including the extension of certain software products.
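    A common core of such methods is reachability over a dependence graph: the impact set of a changed entity is everything that transitively depends on it. The following minimal sketch only illustrates that idea with a hypothetical graph and entity names; it is not code from the project.

        # Minimal sketch of dependence-based impact analysis: the impact set of
        # a changed entity is everything reachable from it in the graph of
        # "who depends on whom". Graph and names below are hypothetical.
        from collections import deque

        def impact_set(dependents, changed):
            """dependents maps each entity to the entities that depend on it."""
            seen = {changed}
            queue = deque([changed])
            while queue:
                node = queue.popleft()
                for dep in dependents.get(node, ()):
                    if dep not in seen:
                        seen.add(dep)
                        queue.append(dep)
            seen.discard(changed)          # the change itself is not its impact
            return seen

        deps = {"parse": {"compile"}, "compile": {"test_build", "lint"}}
        print(impact_set(deps, "parse"))   # {'compile', 'test_build', 'lint'}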

    A travel time-based variable grid approach for an activity-based cellular automata model

    Urban growth and population growth are used in numerous models to determine their potential impacts on both natural and socio-economic systems. Cellular automata (CA) land-use models became popular for urban growth modelling since they predict spatial interactions between different land uses in an explicit and straightforward manner. A common deficiency of land-use models is that they only deal with abstract categories, while in reality several activities are often hosted at one location (e.g. population, employment, agricultural yield, nature…). Recently, a multiple activity-based variable grid CA model was proposed to represent several urban activities (population and economic activities) within single model cells. The distance-decay influence rules of the model included both short- and long-distance interactions, but all distances between cells were simply Euclidean distances. The geometry of the real transportation system, as well as its interrelations with the evolving activities, was therefore not taken into account. To improve this particular model, we make the influence rules functions of the time travelled on the transportation system. Specifically, the new algorithm computes and stores all travel times needed for the variable grid CA. This approach provides fast run times, and it has a higher resolution and more easily modified parameters than the alternative approach of coupling the activity-based CA model to an external transportation model. This paper presents results from one Euclidean scenario and four different transport network scenarios to show the effects on land-use and activity change in an application to Belgium. The approach can add value to urban scenario analysis and the development of transport- and activity-related spatial indicators, and constitutes a general improvement of the activity-based CA model.
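    As an illustration of the core change, a distance-decay influence rule can take network travel time as its argument instead of Euclidean distance. The exponential decay form and the parameter names below are assumptions made for the sketch, not the calibrated rules of the cited model.

        import math

        # Sketch of a distance-decay influence rule whose decay argument is
        # travel time on the network rather than Euclidean distance. The
        # exponential form and parameters (a, b) are illustrative assumptions.
        def influence(activity_at_j, travel_time_ij, a=1.0, b=0.1):
            return a * activity_at_j * math.exp(-b * travel_time_ij)

        # Total neighbourhood effect on cell i: sum influences over cells j,
        # using precomputed travel times (as the variable grid CA stores them).
        def neighbourhood_effect(i, activities, travel_times):
            return sum(influence(activities[j], travel_times[i][j])
                       for j in activities if j != i)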

    cuHinesBatch: solving multiple Hines systems on GPUs

    The simulation of the behavior of the Human Brain is one of the most important challenges in computing today. The main problem consists of finding efficient ways to manipulate and compute the huge volume of data that this kind of simulation needs, using current technology. In this sense, this work is focused on one of the main steps of such a simulation: computing the voltage on the neurons’ morphology. This is carried out using the Hines algorithm. Although this algorithm is the optimal method in terms of number of operations, it needs non-trivial modifications to be efficiently parallelized on NVIDIA GPUs. We propose several optimizations to accelerate this algorithm on GPU-based architectures, exploring the limitations of both the method and the architecture in order to solve a high number of Hines systems (neurons) efficiently. Each of the optimizations is analyzed and described in depth. To evaluate the impact of the optimizations on real inputs, we used six morphologies that differ in size and branching. Our studies prove that the proposed optimizations can achieve high performance on computations with a high number of neurons, with our GPU implementations being about 4× and 8× faster than the OpenMP multicore implementation (16 cores) when using one and two NVIDIA K80 GPUs, respectively. It is also important to highlight that these optimizations can continue scaling even when dealing with a very high number of neurons.

    This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 720270 (HBP SGA1), from the Spanish Ministry of Economy and Competitiveness under the project Computación de Altas Prestaciones VII (TIN2015-65316-P), and the Departament d’Innovació, Universitats i Empresa de la Generalitat de Catalunya, under project MPEXPAR: Models de Programació i Entorns d’Execució Paral·lels (2014-SGR-1051). We thank the support of NVIDIA through the BSC/UPC NVIDIA GPU Center of Excellence. Antonio J. Peña is cofinanced by the Spanish Ministry of Economy and Competitiveness under Juan de la Cierva fellowship number IJCI-2015-23266.
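    The abstract does not reproduce the GPU kernels; as a reference for the method itself, here is a minimal serial sketch of a Hines solve on a tree-ordered morphology. The array names (parent, l, d, u, rhs) are assumptions of the sketch, not identifiers from cuHinesBatch.

        # Serial reference sketch of the Hines algorithm on a branched
        # morphology. Nodes are numbered so that parent[i] < i (node 0 is the
        # root); l[i] and u[i] couple node i to its parent. This is a CPU
        # illustration of the method the paper parallelizes, not the CUDA code.
        def hines_solve(parent, l, d, u, rhs):
            n = len(d)
            d, rhs = d[:], rhs[:]            # work on copies
            # Backward sweep, leaves to root: eliminate each node's coupling
            # from its parent's row.
            for i in range(n - 1, 0, -1):
                p = parent[i]
                f = u[i] / d[i]
                d[p] -= f * l[i]
                rhs[p] -= f * rhs[i]
            # Forward substitution, root to leaves.
            x = [0.0] * n
            x[0] = rhs[0] / d[0]
            for i in range(1, n):
                x[i] = (rhs[i] - l[i] * x[parent[i]]) / d[i]
            return x

    Because parent[i] < i holds for every node, a single backward loop handles all branch points; the batched GPU versions additionally exploit the independence between neurons.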

    Simulating the behavior of the human brain on GPUs

    The simulation of the behavior of the Human Brain is one of the most important challenges in computing today. The main problem consists of finding efficient ways to manipulate and compute the huge volume of data that this kind of simulation needs, using current technology. In this sense, this work is focused on one of the main steps of such a simulation: computing the voltage on the neurons’ morphology. This is carried out using the Hines algorithm and, although this algorithm is the optimal method in terms of number of operations, it needs non-trivial modifications to be efficiently parallelized on GPUs. We propose several optimizations to accelerate this algorithm on GPU-based architectures, exploring the limitations of both the method and the architecture in order to solve a high number of Hines systems (neurons) efficiently. Each of the optimizations is analyzed and described in depth. Two different approaches are studied: one for mono-morphology simulations (batches of neurons with the same shape) and one for multi-morphology simulations (batches in which every neuron has a different shape). In mono-morphology simulations we obtain good performance using just a single kernel to compute all the neurons. However, this turns out to be inefficient in multi-morphology simulations, where a much more complex implementation is necessary to obtain good performance. In this case, we must execute more than one GPU kernel: in every execution (kernel call), one specific part of the batch of neurons is solved. These parts can be seen as multiple independent tridiagonal systems. Although the present paper is focused on the simulation of the behavior of the Human Brain, some of these techniques, in particular those related to solving tridiagonal systems, can also be used in oil and gas simulations. Our studies prove that the proposed optimizations can achieve high performance on computations with a high number of neurons, with our GPU implementations being about 4× and 8× faster than the OpenMP multicore implementation (16 cores) when using one and two NVIDIA K80 GPUs, respectively. It is also important to highlight that these optimizations can continue scaling even when dealing with a very high number of neurons.

    This project has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No. 720270 (HBP SGA1), from the Spanish Ministry of Economy and Competitiveness under the project Computación de Altas Prestaciones VII (TIN2015-65316-P), and the Departament d’Innovació, Universitats i Empresa de la Generalitat de Catalunya, under project MPEXPAR: Models de Programació i Entorns d’Execució Paral·lels (2014-SGR-1051). We thank the support of NVIDIA through the BSC/UPC NVIDIA GPU Center of Excellence, and the European Union’s Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Grant Agreement No. 749516.
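    For the multi-morphology path, each independent part reduces to a standard tridiagonal solve. A serial Thomas algorithm, with illustrative variable names, looks like the sketch below; the batched GPU version runs many such independent systems in parallel.

        # Minimal serial Thomas algorithm for one tridiagonal system; the
        # multi-morphology GPU approach solves many independent systems like
        # this in parallel. Variable names are illustrative.
        def thomas_solve(a, b, c, rhs):
            """a: sub-diagonal (a[0] unused), b: diagonal,
            c: super-diagonal (c[-1] unused)."""
            n = len(b)
            b, rhs = b[:], rhs[:]             # work on copies
            for i in range(1, n):             # forward elimination
                w = a[i] / b[i - 1]
                b[i] -= w * c[i - 1]
                rhs[i] -= w * rhs[i - 1]
            x = [0.0] * n
            x[-1] = rhs[-1] / b[-1]
            for i in range(n - 2, -1, -1):    # back substitution
                x[i] = (rhs[i] - c[i] * x[i + 1]) / b[i]
            return x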

    Requirements Engineering in the Development Process of Web Systems: A Systematic Literature Review

    Requirements Engineering (RE) is the first phase in the software development process, during which designers attempt to fully satisfy users’ needs. Web Engineering (WE) methods should consider adapting RE to the Web’s large and diverse user groups. The objective of this work is to classify the literature on RE as applied in WE in order to obtain the current state of the art. The present work is based on the Systematic Literature Review (SLR) method proposed by Kitchenham; we have reviewed publications from ACM, IEEE, Science Direct, DBLP and the World Wide Web. From a population of 3059 papers, we identified 14 primary studies, which provide information concerning RE as used in WE methods.

    This work has been partially supported by the Programa de Fomento y Apoyo a Proyectos de Investigación (PROFAPI) from the Universidad Autónoma de Sinaloa (México), the MANTRA project (GRE09-17) from the University of Alicante, Spain, and GV/2011/035 from the Valencia Government.

    Firefly algorithm for polynomial Bézier surface parameterization

    A classical issue in many applied fields is to obtain a surface approximating a given set of data points. This problem arises in Computer-Aided Design and Manufacturing (CAD/CAM), virtual reality, medical imaging, computer graphics, computer animation, and many other areas. Very often, the preferred approximating surface is polynomial, usually described in parametric form. This leads to the problem of determining suitable parametric values for the data points, the so-called surface parameterization. In real-world settings, data points are generally irregularly sampled and subject to measurement noise, leading to a very difficult nonlinear continuous optimization problem that cannot be solved with standard optimization techniques. This paper solves the parameterization problem for polynomial Bézier surfaces by applying the firefly algorithm, a powerful nature-inspired metaheuristic recently introduced to address difficult optimization problems. The method has been successfully applied to illustrative examples of open and closed surfaces, including shapes with singularities. Our results show that the method performs very well, being able to yield the best approximating surface with a high degree of accuracy.
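    For reference, the core of the firefly algorithm is a population of candidate solutions in which dimmer fireflies move towards brighter ones with a distance-decaying attractiveness. The sketch below is a minimal generic version, with parameter values and names chosen for illustration; in the paper’s setting, the fitness would be the fitting error of a candidate Bézier parameterization.

        import random, math

        # Minimal sketch of the firefly algorithm for minimizing a fitness
        # function. Population size, iteration count and the parameters
        # alpha, beta0, gamma are illustrative assumptions.
        def firefly_minimize(fitness, dim, n=25, iters=200,
                             alpha=0.2, beta0=1.0, gamma=1.0):
            pop = [[random.random() for _ in range(dim)] for _ in range(n)]
            light = [fitness(p) for p in pop]
            for _ in range(iters):
                for i in range(n):
                    for j in range(n):
                        if light[j] < light[i]:    # j is brighter (lower error)
                            r2 = sum((pop[i][k] - pop[j][k]) ** 2
                                     for k in range(dim))
                            beta = beta0 * math.exp(-gamma * r2)
                            for k in range(dim):   # attraction + random walk
                                pop[i][k] += (beta * (pop[j][k] - pop[i][k])
                                              + alpha * (random.random() - 0.5))
                            light[i] = fitness(pop[i])
            best = min(range(n), key=lambda i: light[i])
            return pop[best], light[best]

        # Usage on a toy objective:
        best, err = firefly_minimize(lambda p: sum(x * x for x in p), dim=3)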

    Towards Combining Individual and Collaborative Work Spaces under a Unified E-Portfolio

    Proceedings of: 11th International Conference on Computational Science and Applications (ICCSA 2011), Santander, Spain, June 20-23, 2011.

    E-portfolios in learning environments have been attributed numerous benefits, and their presence has been steadily increasing. So has the variety of environments in which a student participates. Collaborative learning requires communication and resource sharing among team members. Students may participate in multiple teams over a long period of time, sometimes even simultaneously. Conventional e-portfolios are oriented toward showcasing individual achievements, but they need to reflect collaborative achievements equally. The approach described in this paper has the objective of offering students an e-portfolio as a local folder on their personal computer containing a combined view of their individual and collaborative work spaces. The content of this folder can be synchronized with a remote server, thus achieving resource sharing and publication of a clearly identified set of resources.

    Work partially funded by the Learn3 project, “Plan Nacional de I+D+I TIN2008-05163/TSI”, the Consejo Social - Universidad Carlos III de Madrid, the Acción Integrada Ref. DE2009-0051, and the “Emadrid: Investigación y desarrollo de tecnologías para el e-learning en la Comunidad de Madrid” project (S2009/TIC-1650).

    Smart Planning & Smart Cities

    In the light of comprehensive social and technological change, spatial planning is confronted with major changes in its basic conditions. It is faced with an increasing ubiquity of spatially relevant information, whose potentials and risks for planning purposes need to be discussed. Besides the increasing pervasion of everyday life by sensors and the use of mobile communication devices, networking and communication possibilities will play a major role in the conception of a connected and “smart” city. In addition to the aspects mentioned above and social networking capabilities, committed citizens increasingly appear as active stakeholders in planning via inductive processes. Based on these technological possibilities, topics such as Smart Cities have recently gained prominence in the public debate. It is unclear whether the term “Smart City” rests more on a scientific foundation or on marketing ideas. And what can planners do to make the city smarter and, especially, a better place for people to live? This paper examines the various technologies and methodological approaches in relation to planning-relevant information and knowledge creation. Besides the proclaimed potential of making a city more efficient, it also offers a critical consideration of the problems of a city where all urban data is connected.

    Stability of the weighted splitting finite-difference scheme for a two-dimensional parabolic equation with two nonlocal integral conditions

    Nonlocal conditions arise in mathematical models of various physical, chemical or biological processes. Therefore, interest in developing computational techniques for the numerical solution of partial differential equations (PDEs) with various types of nonlocal conditions has been growing fast. We construct and analyse a weighted splitting finite-difference scheme for a two-dimensional parabolic equation with nonlocal integral conditions. The main attention is paid to the stability of the method. We apply a stability analysis technique based on the investigation of the spectral structure of the transition matrix of the finite-difference scheme. We demonstrate that, depending on the parameters of the finite-difference scheme and the nonlocal conditions, the proposed method can be stable or unstable. The results of numerical experiments with several test problems are also presented, and they validate the theoretical results.
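    The abstract does not state the exact model problem; a representative form, written here purely as an illustrative assumption, of a two-dimensional parabolic equation with two nonlocal integral conditions is

        \begin{align*}
          &\frac{\partial u}{\partial t}
            = \frac{\partial^2 u}{\partial x^2}
            + \frac{\partial^2 u}{\partial y^2} + f(x,y,t),
            \qquad (x,y)\in(0,1)^2,\ t>0,\\
          &\int_0^1 u(x,y,t)\,\mathrm{d}x = \gamma_1(y,t), \qquad
           \int_0^1 u(x,y,t)\,\mathrm{d}y = \gamma_2(x,t),
        \end{align*}

    with standard conditions on the remaining boundary. A weighted splitting scheme treats the x- and y-directions in separate fractional steps, each involving a weight parameter; stability is then read off the spectrum of the resulting transition matrix, as the abstract describes.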

    Simulating the Behaviour of the Human Brain on NVIDIA GPU: cuHinesBatch & cuThomasBatch implementations

    Understanding the human brain is one of the challenges of the century. In this work we take a small step towards this objective by presenting a novel data layout for computing the Hines algorithm more efficiently on GPU. A more general tridiagonal solver is presented as well.
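    The abstract does not spell the layout out. One plausible reading, consistent with common practice for batched GPU solvers, is an interleaved layout in which consecutive threads (one per system) read consecutive memory addresses; the sketch below is an assumption for illustration, not code from the paper.

        # Illustrative interleaved ("coalesced-friendly") layout for a batch of
        # equal-length systems: element i of system j is stored at
        # i * num_systems + j, so GPU threads handling neighbouring systems
        # touch neighbouring addresses. A plausible reading of the novel data
        # layout, not the paper's actual code.
        def interleave(systems):
            n_sys, n = len(systems), len(systems[0])
            flat = [0.0] * (n_sys * n)
            for j, sys_j in enumerate(systems):
                for i, v in enumerate(sys_j):
                    flat[i * n_sys + j] = v
            return flat

        print(interleave([[1, 2, 3], [4, 5, 6]]))  # [1, 4, 2, 5, 3, 6]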