1,789 research outputs found

    A Taxonomy of Workflow Management Systems for Grid Computing

    Full text link
    With the advent of Grid and application technologies, scientists and engineers are building increasingly complex applications to manage and process large data sets and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by various projects world-wide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art Grid workflow systems, but also identifies areas that need further research. Comment: 29 pages, 15 figures

    Programming models to support data science workflows

    Get PDF
    Data Science workflows have become a must to progress in many scientific areas such as life, health, and earth sciences. In contrast to traditional HPC workflows, they are more heterogeneous, combining binary executions, MPI simulations, multi-threaded applications, custom analyses (possibly written in Java, Python, C/C++, or R), and real-time processing. Furthermore, in the past, field experts were capable of programming and running small simulations; nowadays, however, simulations requiring hundreds or thousands of cores are widely used, and efficiently programming them has become a challenge even for computer scientists. Thus, programming languages and models make a considerable effort to ease programmability while maintaining acceptable performance. This thesis contributes to the adaptation of High-Performance frameworks to support the needs and challenges of Data Science workflows by extending COMPSs, a mature, general-purpose, task-based, distributed programming model. First, we enhance our prototype to orchestrate different frameworks inside a single programming model so that non-expert users can build complex workflows in which some steps require highly optimised state-of-the-art frameworks. This extension includes the @binary, @OmpSs, @MPI, @COMPSs, and @MultiNode annotations for both Java and Python workflows. Second, we integrate container technologies to enable developers to easily port, distribute, and scale their applications to distributed computing platforms. This combination provides a straightforward methodology to parallelise applications from sequential codes, along with efficient image management and application deployment that ease the packaging and distribution of applications. We distinguish between static, HPC, and dynamic container management and provide representative use cases for each scenario using Docker, Singularity, and Mesos. Third, we design, implement, and integrate AutoParallel, a Python module to automatically find an appropriate task-based parallelisation of affine loop nests and execute them in parallel on a distributed computing infrastructure. It is based on sequential programming and requires a single annotation (the @parallel Python decorator) so that anyone with intermediate-level programming skills can scale up an application to hundreds of cores. Finally, we propose a way to extend task-based management systems to support continuous input and output data, enabling the combination of task-based workflows and dataflows (Hybrid Workflows) within a single programming model. Hence, developers can build complex Data Science workflows with different approaches depending on the requirements, without the effort of combining several frameworks at the same time. Also, to illustrate the capabilities of Hybrid Workflows, we have built a Distributed Stream Library that can be easily integrated with existing task-based frameworks to provide support for dataflows. The library provides a homogeneous, generic, and simple representation of object and file streams in both Java and Python, enabling complex workflows to handle any data type without dealing directly with the streaming back-end.
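    To make the single-annotation idea concrete, the sketch below shows how an AutoParallel-style @parallel decorator could be applied to a sequential affine loop nest, as described in the abstract. The import path and decorator signature are assumptions for illustration, not the confirmed PyCOMPSs/AutoParallel API.

```python
# Hypothetical sketch of AutoParallel-style usage as described in the
# abstract: one @parallel decorator on a sequential affine loop nest.
# The import path and decorator signature are assumed, not confirmed.
from pycompss.api.parallel import parallel  # assumed module path

@parallel()  # the single annotation the abstract says is required
def matmul(a, b, c, n):
    # Sequential affine loop nest; the decorator is expected to find a
    # task-based parallelisation and run it on distributed resources.
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
```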

    Towards the Industrialization of New MDO Methodologies and Tools for Aircraft Design

    Get PDF
    An overall summary of the IRT Saint Exupery Institute of Technology MDA-MDO project (Multi-Disciplinary Analysis - Multidisciplinary Design Optimization) is presented. The aim of the project is to develop efficient capabilities (methods, tools, and a software platform) to enable industrial deployment of MDO methods. At IRT Saint Exupery, industrial and academic partners collaborate in a single place on the development of MDO methodologies; the advantage of this mixed organization is to benefit directly both from advanced methods at the cutting edge of research and from deep knowledge of industrial needs and constraints. This paper presents the three main goals of the project: the elaboration of innovative MDO methodologies and formulations (also referred to as architectures in the literature [1]) adapted to the resolution of industrial aircraft design optimization problems; the development of an MDO platform featuring scalable MDO capabilities for transfer to industry; and the achievement of a simulation-based optimization of an aircraft engine pylon with industrial Computational Fluid Dynamics (CFD) and Computational Structural Mechanics (CSM) tools.
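    As an illustration of what an MDO formulation involves (not taken from the project itself), the following toy sketch implements an MDF-like (multidisciplinary feasible) architecture: two coupled disciplines are converged by fixed-point iteration inside each objective evaluation. All functions, couplings, and coefficients are invented for illustration.

```python
# Toy MDF-style (multidisciplinary feasible) MDO sketch: every objective
# evaluation first converges the coupled disciplines, then returns a scalar.
from scipy.optimize import minimize

def discipline1(x, y2):
    return x[0] + y2 ** 2          # e.g. a toy aerodynamic output

def discipline2(x, y1):
    return 0.2 * (x[1] + y1)       # e.g. a toy structural output

def mda(x, tol=1e-10, max_iter=100):
    """Multidisciplinary analysis: fixed-point iteration until the
    coupling variables y1, y2 are mutually consistent."""
    y1 = y2 = 0.0
    for _ in range(max_iter):
        y1_new = discipline1(x, y2)
        y2_new = discipline2(x, y1_new)
        if abs(y1_new - y1) + abs(y2_new - y2) < tol:
            break
        y1, y2 = y1_new, y2_new
    return y1, y2

def objective(x):
    y1, y2 = mda(x)                # enforce consistency before evaluating
    return y1 ** 2 + y2 ** 2 + x[0] ** 2 + x[1] ** 2

result = minimize(objective, x0=[1.0, 1.0], method="SLSQP")
print(result.x, result.fun)
```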

    Providing value to a business using a lightweight design system to support knowledge reuse by designers

    No full text
    This paper describes an alternative approach to knowledge-based systems in engineering, in contrast to traditional systems focused on geometry or explicit knowledge. Past systems have supported product optimisation rather than creative solutions, and provide little benefit to businesses for bespoke and low-volume products or products which do not benefit from optimisation. The approach here addresses this by supporting the creativity of designers through codified tacit knowledge and by encouraging knowledge reuse for bespoke product development, in particular for small to medium-sized enterprises. The implementation and evaluation of the approach are described within a company producing bespoke fixtures and tooling in shorter-than-average lead times. The active support of knowledge management in the company is intended to add value to the business by further reducing the lead times of designs and creating a positive impact on business processes. The evaluation demonstrates a viable alternative framework to the traditional management of knowledge in engineering, one which could be implemented by other small to medium enterprises.

    Reactive Rules for Emergency Management

    Get PDF
    The goal of the following survey on Event-Condition-Action (ECA) rules is to arrive at a common understanding and intuition on this topic within EMILI. It therefore does not give an academic overview of Event-Condition-Action rules, which would be valuable for computer scientists only. Instead, the survey introduces Event-Condition-Action rules and their use for emergency management based on real-life examples from the use cases identified in Deliverable 3.1. In this way we hope to address both computer scientists and security experts, by showing how Event-Condition-Action rule technology can help to solve security issues in emergency management. The survey incorporates information from other work packages, particularly from Deliverable D3.1 and its Annexes, D4.1, D2.1, and D6.2, wherever possible.
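    To give a flavour of the rule paradigm the survey describes, here is a minimal, self-contained sketch of an Event-Condition-Action dispatcher. The smoke-detection event, field names, and threshold are invented for illustration and are not taken from the EMILI deliverables.

```python
# Minimal illustrative ECA (Event-Condition-Action) rule sketch.
# ON an event, IF its condition holds, THEN execute its action.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    event: str                          # event type the rule reacts to
    condition: Callable[[dict], bool]   # predicate over the event payload
    action: Callable[[dict], None]      # effect to run when it matches

rules = [
    Rule(
        event="smoke_detected",
        condition=lambda e: e["density"] > 0.7,  # IF smoke is dense
        action=lambda e: print(f"Alert station {e['station']}: evacuate"),
    ),
]

def dispatch(event_type: str, payload: dict) -> None:
    """Fire every rule whose event matches and whose condition holds."""
    for rule in rules:
        if rule.event == event_type and rule.condition(payload):
            rule.action(payload)

dispatch("smoke_detected", {"density": 0.9, "station": "Metro-A1"})
```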

    Comparative Evaluation for the Performance of Big Stream Processing Systems

    Get PDF
    Nowadays data is growing with tremendous acceleration, and this growing data must be processed properly if we want to have control over it. This pushes us to think about data stream processing. Much of the time, data-intensive fraud detection, trading, manufacturing, military, and intelligence systems require processing data immediately (in real time). These kinds of systems need considerably sophisticated pattern matching and correlations. However, other uses of stream processing have also emerged over time. In these applications and domains, there is a crucial requirement to collect, process, and analyze significant streams of data to extract valuable information. In this thesis, we benchmark, compare, and contrast the Apache Flink, Apache Storm, Heron, Kafka, and Apache Spark stream processing engines. The aim is to conduct an empirical evaluation and benchmarking of state-of-the-art big stream processing systems.
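    For readers unfamiliar with these engines, the following generic word-count example (not taken from the thesis) shows the kind of continuous query such benchmarks exercise, here written for Spark Structured Streaming over a socket source.

```python
# Generic Spark Structured Streaming word count: one of the canonical
# continuous queries used when exercising stream processing engines.
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("StreamWordCount").getOrCreate()

# Read a stream of text lines from a local socket (e.g. `nc -lk 9999`).
lines = (spark.readStream.format("socket")
         .option("host", "localhost").option("port", 9999).load())

# Split each line into words and maintain a running count per word.
words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

# Print the complete updated counts to the console after each micro-batch.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```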

    Development of an Integrated Tokamak Simulation Code and Its Application to Various Devices

    Get PDF
    Doctoral dissertation -- Seoul National University Graduate School: College of Engineering, Department of Energy Systems Engineering, August 2022. Advisor: Yong-Su Na.
    The in-depth design and implementation of a newly developed integrated suite of codes, TRIASSIC (tokamak reactor integrated automated suite for simulation and computation), are reported. The suite comprises existing plasma simulation codes, including equilibrium solvers, 1.5D and 2D plasma transport solvers, neoclassical and anomalous transport models, current drive and heating (cooling) models, and 2D grid generators. The components in TRIASSIC could be fully modularized by adopting a generic data structure as its internal data. Due to a unique interfacing method that does not depend on the generic data itself, legacy codes that are no longer maintained by their original authors were easily interfaced. The graphical user interface and the parallel computing of the framework and its components are also addressed, as is the verification of TRIASSIC in terms of equilibrium, transport, and heating. Following the data model and definition of the data structure, a declarative programming method was adopted in the core part of the framework. The method keeps the internal consistency of the data by enforcing the reciprocal relations between data nodes, contributing extra flexibility and explicitness to the simulations. Owing to its flexibility in composing a workflow, TRIASSIC was applied to various devices, including KSTAR, VEST, and KDEMO. TRIASSIC was validated against KSTAR plasmas in terms of interpretive and predictive modeling. The prediction and validation on the VEST device are also shown. For application to the upcoming KDEMO device, the machine design parameters were optimized, targeting an economical fusion demonstration reactor.
    Table of contents:
    Chapter 1. Introduction: 1.1. Background (1.1.1. Fusion Reactor and Modeling; 1.1.2. Interpretive Analysis and Predictive Modeling; 1.1.3. Modular Approach; 1.1.4. The Standard Data Structure; 1.1.5. The Internal Data Consistency in a Generic Data; 1.1.6. Integration of Physics Codes into IDS); 1.2. Overview of the Research
    Chapter 2. Development of Integrated Suite of Codes: 2.1. Development of TRIASSIC (2.1.1. Design Requirements; 2.1.2. Overview of TRIASSIC; 2.1.3. Comparison of Integrated Simulation Codes); 2.2. Components in the Framework (2.2.1. Physics Codes Interfaced with the Framework; 2.2.2. Physics Code Interfacings; 2.2.3. Graphical User Interface; 2.2.4. Jobs Scheduler and MPI); 2.3. Verifications (2.3.1. The Coordinate Conventions; 2.3.2. Coupling of Equilibrium-Transport; 2.3.3. Neoclassical Transport and Bootstrap Current; 2.3.4. Heating and Current Drive)
    Chapter 3. Improvements in Keeping the Internal Data Consistency: 3.1. Background; 3.2. Possible Implementations of a Component; 3.3. A Method Adopted in the Framework (3.3.1. Prerequisites and Relation Definitions; 3.3.2. Adding Relations in the Framework; 3.3.3. Applying Relations); 3.4. Performance and Flexibility of the Framework (3.4.1. Performance Enhancement; 3.4.2. Flexibility and Maintenance of the Framework)
    Chapter 4. Applications to Various Devices: 4.1. Applications to KSTAR (4.1.1. Kinetic equilibrium workflow and its validation; 4.1.2. Stationary-state predictive modeling workflow); 4.2. Application to VEST (4.2.1. Time-dependent predictive modeling workflow); 4.3. Application to KDEMO (4.3.1. Predictive simulation workflow for optimization)
    Chapter 5. Summary and Conclusion; Appendix A. Code Snippet of the Relation Definition; Bibliography; Abstract in Korean
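    The declarative consistency mechanism described in the abstract can be pictured with a small sketch (hypothetical, not TRIASSIC's actual code): relations between data nodes are registered declaratively, and missing nodes are derived on access so the reciprocal relations always hold. Node names, units, and the pressure relation are invented for illustration.

```python
# Hypothetical sketch of declaratively enforced reciprocal relations
# between data nodes, in the spirit of the abstract.
relations = {}

def relation(target):
    """Declaratively register a function that derives `target` from other nodes."""
    def register(fn):
        relations[target] = fn
        return fn
    return register

@relation("equilibrium.pressure")
def pressure(data):
    # p = n * k_B * T: a toy reciprocal relation between density,
    # temperature, and pressure nodes (SI units assumed).
    return data["core.density"] * 1.380649e-23 * data["core.temperature"]

def resolve(data, node):
    """Return a node's value, deriving it from its relation if absent,
    so the internal consistency of the data set is preserved."""
    if node not in data and node in relations:
        data[node] = relations[node](data)
    return data[node]

data = {"core.density": 1e19, "core.temperature": 1.16e7}
print(resolve(data, "equilibrium.pressure"))  # derived on first access
```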

    Definition of a benchmark for low Reynolds number propeller aeroacoustics

    Get PDF
    Experimental and numerical results are presented for a propeller of 0.3 m diameter operated at 5000 RPM, with axial velocity ranging from 0 to 20 m/s and advance ratio ranging from 0 to 0.8, as a preliminary step towards the definition of a benchmark configuration for low Reynolds number propeller aeroacoustics. The corresponding rotational tip Mach number is 0.23, and the Reynolds number based on the blade sectional chord and flow velocity varies from about 46,000 to 106,000 over the operational domain and the 30% to 100% blade radial range. Force and noise measurements carried out in a low-speed semi-anechoic wind tunnel are compared to scale-resolved CFD and low-fidelity numerical predictions. The results identify the experimental and numerical challenges of the benchmark and the relevance of fundamental research questions related to transition and other low Reynolds number effects.
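    As a quick consistency check of the quoted operating point (assuming a sea-level speed of sound of about 340 m/s), the standard definitions J = V/(nD) and M_tip = pi*n*D/a reproduce the numbers above:

```python
# Verify the quoted operating-point numbers from standard propeller
# definitions: advance ratio J = V / (n D), tip Mach M_tip = pi n D / a.
import math

D = 0.3           # propeller diameter [m]
rpm = 5000
n = rpm / 60.0    # rotational speed [rev/s]
a = 340.0         # speed of sound [m/s], assumed sea-level value

V = 20.0          # maximum axial velocity [m/s]
J = V / (n * D)               # -> 0.8, the quoted maximum advance ratio
M_tip = math.pi * n * D / a   # -> ~0.23, the quoted tip Mach number
print(f"J = {J:.2f}, M_tip = {M_tip:.2f}")
```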