    High-performance computing and communication models for solving the complex interdisciplinary problems on DPCS

    The paper presents advanced high-performance computing (HPC) and parallel computing (PC) methodologies for solving large, complex problems that span several research areas. About eight interdisciplinary problems are solved on multiple computers communicating over a local area network. The mathematical modelling and large sparse simulations cover science, engineering, biomedicine, nanotechnology, software engineering, agriculture, image processing and urban planning. The parallel-computing software under consideration includes PVM, MPI, LUNA, MDC, OpenMP, CUDA and LINDA, integrated with COMSOL and C/C++. Because parallel programming offers different communication models, the paper first explains the basic notions of parallel processing, distributed processing and memory organisation needed to follow its main contribution. Matching a PC methodology to a large sparse application depends on the solution domain, the dimension of the target area, the computational and communication pattern, the architecture of the distributed parallel computing system (DPCS), the structure of the computational complexity and the communication cost. The originality of the paper lies in building complex numerical models of large-scale partial differential equations (PDEs), discretising them with finite difference (FDM) or finite element (FEM) methods, and carrying out numerical simulation, high-performance simulation and performance measurement. The PDE simulations are performed with both sequential and parallel algorithms to visualise the models at high resolution. In the mathematical models, various independent and dependent parameters represent the complex, real phenomena of each interdisciplinary application; as a model executes, these parameters can be manipulated, and chemical or mechanical properties can be predicted from the observed parameter changes. The parallel programs are built on the client-server, master-slave and fragmented models. The HPC communication models for the interdisciplinary problems are analysed through algorithm flow, numerical analysis and comparative parallel performance evaluation. In conclusion, integrating HPC, communication models, PC software, performance analysis and numerical analysis proves to be an important approach for meeting the matching requirement and optimising the solution of complex interdisciplinary problems
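    As an illustration of the PDE/FDM/parallel combination the abstract describes, the following minimal C++ sketch solves the 1D heat equation with explicit finite differences and parallelises the stencil update with OpenMP. The grid size, diffusivity and time-step choice are illustrative assumptions, not values taken from the paper.

```cpp
// Minimal sketch (not the paper's code): explicit finite-difference solution of the
// 1D heat equation u_t = alpha * u_xx, with the inner update parallelised via OpenMP.
#include <cstdio>
#include <vector>

int main() {
    const int    N     = 1000;     // grid points (illustrative value)
    const int    steps = 5000;     // time steps (illustrative value)
    const double alpha = 0.01;     // diffusivity (assumed)
    const double dx    = 1.0 / (N - 1);
    const double dt    = 0.4 * dx * dx / alpha;   // respects the explicit stability limit

    std::vector<double> u(N, 0.0), unew(N, 0.0);
    u[N / 2] = 1.0;                // initial heat spike in the middle of the domain

    for (int t = 0; t < steps; ++t) {
        // Each interior point depends only on the previous time level,
        // so the loop iterations are independent and can run in parallel.
        #pragma omp parallel for
        for (int i = 1; i < N - 1; ++i)
            unew[i] = u[i] + alpha * dt / (dx * dx) * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
        u.swap(unew);              // fixed (Dirichlet) boundaries stay at 0
    }
    std::printf("centre value after %d steps: %f\n", steps, u[N / 2]);
    return 0;
}
```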

    High-Performance Computational and Information Technologies for Numerical Models and Data Processing

    This chapter discusses high-performance computational and information technologies for numerical models and data processing. The first part of the chapter considers a numerical model of the oil displacement problem, in which chemical reagents are injected to increase the oil recovery of a reservoir. A fragmented algorithm was developed for solving this problem, together with an algorithm for high-performance visualization of the calculated data. Parallel algorithms based on the fragmented approach and on MPI technologies are analysed and compared, and an algorithm for solving the problem on mobile platforms is presented along with an analysis of the computational results. The second part of the chapter considers the processing of unstructured and semi-structured data. It addresses n-gram extraction, a task that requires substantial computation over large amounts of textual data, which made it necessary to adopt and implement parallelization patterns. This part also describes a parallel implementation of a document clustering algorithm based on a heuristic genetic algorithm. Finally, a novel UPC implementation of the MapReduce framework for semi-structured data processing is introduced, which allows data-parallel applications to be expressed as simple sequential code
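    The n-gram extraction task mentioned above lends itself to a map-reduce style of parallelisation. The sketch below is a hedged illustration in standard C++ (std::thread), not the chapter's UPC MapReduce implementation: each thread counts bigrams over its own slice of the token stream, and the partial counts are merged afterwards. The tiny corpus and thread count are placeholders.

```cpp
// Minimal sketch (assumed, not the chapter's code): map-reduce style bigram counting.
#include <cstddef>
#include <cstdio>
#include <functional>
#include <map>
#include <string>
#include <thread>
#include <vector>

using Counts = std::map<std::string, int>;

// "Map" step: count bigrams over one slice of the token stream.
static void count_bigrams(const std::vector<std::string>& tokens,
                          std::size_t begin, std::size_t end, Counts& out) {
    for (std::size_t i = begin; i + 1 < end; ++i)
        ++out[tokens[i] + " " + tokens[i + 1]];
}

int main() {
    // Tiny illustrative corpus; a real run would stream tokens from disk.
    std::vector<std::string> tokens = {"high", "performance", "computing", "and",
                                       "high", "performance", "data", "processing"};
    const unsigned nthreads = 2;
    std::vector<Counts> partial(nthreads);
    std::vector<std::thread> pool;

    std::size_t chunk = tokens.size() / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        std::size_t begin = t * chunk;
        // Overlap chunks by one token so no bigram is lost at a boundary;
        // the last chunk runs to the end of the corpus.
        std::size_t end = (t + 1 == nthreads) ? tokens.size() : begin + chunk + 1;
        pool.emplace_back(count_bigrams, std::cref(tokens), begin, end, std::ref(partial[t]));
    }
    for (auto& th : pool) th.join();

    Counts total;                       // "reduce" step: merge the partial counts
    for (const auto& p : partial)
        for (const auto& kv : p) total[kv.first] += kv.second;

    for (const auto& kv : total)
        std::printf("%-30s %d\n", kv.first.c_str(), kv.second);
    return 0;
}
```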

    The Eighth Siberian Conference on Parallel and High-Performance Computing: programme and abstracts of papers (28-30 October 2015)

    This volume presents the programme and abstracts of papers of the Eighth Siberian Conference on Parallel and High-Performance Computing, to be held at Tomsk State University on 28-30 October 2015 with the support of the Ministry of Education and Science of the Russian Federation, the Supercomputing Consortium of Russia, the Russian Foundation for Basic Research (grant No. 15-07-20872) and ZAO Intel Software. It is intended for researchers, lecturers, postgraduate and undergraduate students who use high-performance computing resources in their research and teaching

    Integrated modelling of control and adaptive building envelope: development of a modelling solution using a co-simulation approach

    Adaptive building envelopes can dynamically adapt to environmental changes, often supported by a control system. Although they can play a significant role in improving thermal building performance, uncertainties and risks have led to a slow uptake in the built environment. One reason is the reluctance of practitioners to consider integrating adaptive building envelopes in building design, which can be traced to the Building Performance Simulation (BPS) tools employed for performance prediction of design proposals with adaptive building envelopes. A shortcoming of existing tools is their limited support for adaptation, which hinders proper modelling of the influence of control decisions on the dynamic behaviour of these building envelopes. This thesis investigates an approach for the integrated modelling of control and adaptive building envelopes. To this end, an interview-based industry study with experts in adaptive building envelope simulation was conducted to advance the understanding of the limitations of adaptive building envelope simulation in current design practice and to identify implications for future tool developments. The feedback from the interviewees then informed the development of an integrated modelling approach using co-simulation, whose accuracy and functionality were subsequently tested through a validation study and a multiple case study. The findings of the interview study outline the need for more flexible modelling approaches that enable designers to fully exploit adaptive building envelopes in building design. The proposed modelling approach for predicting the thermal performance of adaptive building envelopes has shown that its co-simulation setup seems to offer more flexibility in integrating the dynamic behaviour of adaptive building envelopes. What is now needed is to observe the use of the modelling approach in design practice, to obtain realistic feedback from its users and to verify that it works as intended
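    The co-simulation idea can be pictured as two separate models, a controller and a thermal zone, exchanging values at every synchronisation step. The C++ sketch below is an assumed, heavily simplified illustration of that coupling loop only; it does not reproduce the thesis's BPS tool setup, and the control rule, gains and time step are invented placeholders.

```cpp
// Minimal sketch (assumed): the basic co-simulation loop in which a control model and
// a thermal zone model run as separate components and exchange values each timestep.
#include <cstdio>

// Stand-in controller: deploys the adaptive shading when the zone overheats.
struct Controller {
    double shading = 0.0;                       // 0 = open, 1 = fully shaded
    void step(double zone_temp) { shading = (zone_temp > 24.0) ? 1.0 : 0.0; }
};

// Stand-in thermal zone: a single-node model driven by solar gains and losses.
struct ThermalZone {
    double temp = 20.0;                         // zone air temperature [deg C]
    void step(double shading, double dt_hours) {
        const double solar_gain = 2.0 * (1.0 - shading);   // reduced when shaded
        const double loss       = 0.5 * (temp - 18.0);     // losses to outside
        temp += (solar_gain - loss) * dt_hours;
    }
};

int main() {
    Controller controller;
    ThermalZone zone;
    const double dt = 1.0;                      // 1-hour coupling timestep
    for (int hour = 0; hour < 24; ++hour) {
        controller.step(zone.temp);             // controller reads the zone state
        zone.step(controller.shading, dt);      // envelope state feeds back into the zone
        std::printf("hour %2d  temp %5.2f  shading %.0f\n", hour, zone.temp, controller.shading);
    }
    return 0;
}
```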

    Activity Report 2022

    Computational methods for analyzing complex high-throughput data from cancers

    Cancers are a heterogeneous group of diseases that cause 7.6 million deaths yearly worldwide. At the cellular level, cancer is characterized by increased proliferation and invasion of tissue. These phenotypes are caused by environmental or inherited factors that increase the mutability of the genome, leading to dysregulation of a number of cellular processes. Identifying the genotypic changes and their phenotypic consequences is key to accurate diagnosis and prognosis, as well as improved treatment regimens. Cancer cells can be investigated at a genome-wide scale using high-throughput measurement techniques such as DNA sequencing and microarrays. These rapidly evolving technologies provide experimental data that have two challenging characteristics: the volume of data is large and the data are structurally complex. These data need to be analyzed in an accurate and scalable manner to arrive at biomedically relevant conclusions. I have developed three computational methods for analyzing high-throughput genomic data, and applied the methods to experimental data from three cancers. The first computational method is an extensible workflow framework, Anduril, for organizing the overall software structure of an analysis in a scalable manner. The second method, SPINLONG, is a flexible algorithm for analyzing chromatin immunoprecipitation followed by deep sequencing (ChIP-seq) data from complex experimental designs, such as time series measurements of multiple markers. The third method, GROK, is used for preprocessing deep sequencing data. Its design is based on a mathematical formalism that provides a succinct language for these operations. The experimental part studies gene regulation and expression in glioblastoma multiforme, and breast and prostate cancer. The results demonstrate the applicability of the developed methods to cancer research and provide insights into the dysregulation of gene expression in cancer. All three studies use both cell line and clinical material to connect the molecular and disease outcome aspects of cancer. These experiments yield results at two conceptual levels. At the holistic level, lists of significant genes or genomic regions provide a genome-wide view into genomic alterations in cancer. At the specific level, we focus on one or a few central genes, which are experimentally validated, to provide an accessible starting point for understanding the results. Together, the thesis focuses on understanding the complexity of cancer and managing the complexity of genome-wide data
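    The workflow-framework idea behind such analyses can be reduced to named steps with declared dependencies executed in dependency order. The C++ sketch below illustrates only that core pattern with placeholder step names; it is not Anduril's API, and it assumes an acyclic dependency graph.

```cpp
// Minimal sketch (assumed, not Anduril's API): analysis steps with declared
// dependencies, run in an order that respects those dependencies.
#include <cstdio>
#include <functional>
#include <map>
#include <set>
#include <string>
#include <vector>

struct Step {
    std::vector<std::string> deps;     // names of steps that must run first
    std::function<void()>    run;      // the actual work of the step
};

// Run every step after all of its dependencies (simple repeated sweep,
// assumes the dependency graph has no cycles).
void run_workflow(std::map<std::string, Step>& steps) {
    std::set<std::string> done;
    while (done.size() < steps.size())
        for (auto& [name, step] : steps) {
            if (done.count(name)) continue;
            bool ready = true;
            for (const auto& d : step.deps) ready = ready && done.count(d) > 0;
            if (ready) { step.run(); done.insert(name); }
        }
}

int main() {
    std::map<std::string, Step> pipeline = {
        {"align",  {{},                  [] { std::puts("align reads"); }}},
        {"filter", {{"align"},           [] { std::puts("filter alignments"); }}},
        {"peaks",  {{"filter"},          [] { std::puts("call binding peaks"); }}},
        {"report", {{"peaks", "filter"}, [] { std::puts("write report"); }}},
    };
    run_workflow(pipeline);
    return 0;
}
```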

    Metaheuristic models for decision support in the software construction process

    Nowadays, software engineers have not only the responsibility of building systems that provide a particular functionality, but they also have to guarantee that these systems fulfil demanding non-functional requirements like high availability, efficiency or security. To achieve this, software engineers face a continuous decision process, as they have to evaluate system needs and the existing technological alternatives to implement them. All this process should be oriented towards obtaining high-quality and reusable systems, also making future modifications and maintenance easier in such a competitive scenario. Software engineering, as a systematic method to build software, has provided a number of guidelines and tasks that, when carried out in a disciplined manner and properly adapted to the development context, allow the creation of high-quality software. More specifically, software analysis and design has acquired great relevance, being the phase in which the software structure is conceived in terms of its functional blocks and their interactions. In this phase, engineers have to make decisions about the most suitable architecture, including its constituent components. Such decisions are made according to the system requirements, either functional or non-functional, and will have a great impact on its future development. Therefore, the engineer has to rigorously analyse the existing alternatives, their implications for the imposed quality criteria and the need to establish trade-offs among them. In this context, engineers are mostly guided by their own capabilities and experience, so providing them with decision support methods would represent a significant contribution. The application of artificial intelligence techniques in this area has attracted growing interest in recent years. In particular, software engineering represents a complex application domain for artificial intelligence, whose diverse techniques can help in the semi-automation of tasks traditionally performed manually. The union of both fields has led to the appearance of search-based software engineering, which proposes reformulating software engineering activities as optimisation problems. For their resolution, search techniques like metaheuristics can then be applied.
This type of technique performs an "intelligent" exploration of the space of candidate solutions, often inspired by natural processes, as happens with evolutionary algorithms. Despite the novelty of this research field, there are proposals to automate a great variety of tasks within the software lifecycle, such as requirement prioritisation, resource planning, code refactoring or test case generation. Focusing on analysis and design, whose tasks require creativity and experience, trying to achieve full automation is not realistic. Therefore, solving design tasks by means of search approaches should be oriented towards the engineer's perspective, even promoting their interaction. Furthermore, design tasks are also characterised by a high level of abstraction and the difficulty of quantitatively evaluating design quality. All these aspects represent key challenges for the application of search techniques in the early phases of the software construction process. The aim of this Ph.D. Thesis is to make significant contributions to search-based software engineering and, especially, to the area of software architecture optimisation. Although it is an area in which significant progress is being made, most current proposals focus on generating low-level architectures or on selecting and deploying already developed artefacts. Therefore, there is a lack of proposals dealing with architectural modelling at a high level of abstraction. At this level, engineers do not yet have a deep understanding of the system, meaning that assisting them is even more difficult. As a case study, the discovery of component-based software architectures has been primarily addressed. The objective of this problem consists in abstracting the architectural blocks, and their interactions, that best define the current structure of a software system. This can be viewed as the first step an engineer would perform in order to further analyse and improve the system architecture. In this Ph.D. Thesis, the use of a great variety of search techniques has been explored. The suitability of these techniques has been studied, making the necessary adaptations to cope with the aforementioned challenges. A first proposal has focused on the formulation of software architecture discovery as an optimisation problem, which consists in the computational representation of the software artefacts to be modelled and the definition of software metrics to evaluate their quality during the search process. Moreover, a single-objective evolutionary algorithm has been designed for its resolution and validated using real software systems. The resulting model is comprehensible and flexible, since its components have been designed following software engineering standards and tools and are also configurable according to the engineer's needs. Next, the discovery of software architectures has been tackled from a multi-objective perspective, in which several software metrics, often in conflict, have to be simultaneously optimised. In this case, the problem is solved by applying eight state-of-the-art algorithms, including some recent many-objective approaches. These algorithms have been adapted to the problem and compared in an extensive experimental study, whose purpose is to analyse the influence of the number and combination of metrics when guiding the search process.
Apart from the performance validation following the usual practices of the field, this study provides a detailed analysis of the practical implications behind the optimisation of multiple objectives in the context of decision support. The last proposal focuses on interactively including the engineer's opinion in the search-based architecture discovery process. To this end, an interaction mechanism has been designed that allows the engineer to express desired characteristics of the solutions (positive preferences), as well as aspects that should be avoided (negative preferences). The gathered information is combined with the software metrics used so far, thus making it possible to adapt the search as the engineer interacts. Given the characteristics of the proposed model, engineers with different levels of expertise in software development have participated in its validation, with the aim of showing the suitability and utility of the approach. The knowledge acquired during the development of the Thesis, as well as the proposed approaches, have also been transferred to other search-based software engineering areas through research collaborations. In this sense, it is worth noting the formalisation of interactive search-based software engineering as a cross-cutting discipline, which aims at promoting the active participation of the engineer during the search process. Furthermore, the use of many-objective algorithms has been explored in the context of service-oriented computing to address the so-called web service composition problem
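    Casting architecture discovery as an optimisation problem can be illustrated with a toy example: assign classes to components so that dependencies stay inside components (cohesion) rather than crossing them (coupling), and improve the assignment with an evolutionary-style loop. The C++ sketch below is an assumed illustration with an invented dependency matrix, an ad hoc fitness function and a simple (1+1) mutation loop; it is not the thesis's algorithm or its metrics.

```cpp
// Minimal sketch (assumed, not the thesis's algorithm): a tiny evolutionary-style
// search that assigns classes to components, favouring intra-component dependencies
// (cohesion) over inter-component ones (coupling).
#include <cstdio>
#include <random>
#include <vector>

static const int kClasses = 6, kComponents = 2;
// Illustrative class-dependency matrix: dep[i][j] = 1 means class i uses class j.
static const int dep[kClasses][kClasses] = {
    {0, 1, 1, 0, 0, 0}, {1, 0, 1, 0, 0, 0}, {0, 1, 0, 0, 0, 0},
    {0, 0, 0, 0, 1, 1}, {0, 0, 0, 1, 0, 1}, {0, 0, 1, 0, 1, 0}};

// Fitness: dependencies kept inside a component count +1, crossing ones count -1.
int fitness(const std::vector<int>& assign) {
    int score = 0;
    for (int i = 0; i < kClasses; ++i)
        for (int j = 0; j < kClasses; ++j)
            if (dep[i][j]) score += (assign[i] == assign[j]) ? 1 : -1;
    return score;
}

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> pick_class(0, kClasses - 1);
    std::uniform_int_distribution<int> pick_comp(0, kComponents - 1);

    std::vector<int> best(kClasses);
    for (int& c : best) c = pick_comp(rng);          // random initial architecture

    for (int gen = 0; gen < 1000; ++gen) {           // (1+1) evolutionary loop
        std::vector<int> child = best;
        child[pick_class(rng)] = pick_comp(rng);     // mutate one class assignment
        if (fitness(child) >= fitness(best)) best = child;
    }

    std::printf("fitness %d, assignment:", fitness(best));
    for (int c : best) std::printf(" %d", c);
    std::printf("\n");
    return 0;
}
```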