10 research outputs found

    Genetic Programming for QoS-Aware Data-Intensive Web Service Composition and Execution

    No full text
    Web service composition has become a promising technique for building powerful enterprise applications out of distributed services with different functions. In the age of big data, more and more web services are created to deal with large amounts of data; these are called data-intensive services. Due to the explosion in the volume of data, efficient approaches to composing data-intensive services are becoming increasingly important in the field of service-oriented computing. Meanwhile, as numerous web services offering identical or similar functionality have emerged on the Internet, web service composition is usually performed with respect to end-to-end Quality of Service (QoS) properties, which describe the non-functional characteristics (e.g., response time, execution cost, reliability) of a web service. In addition, the execution of composite web services is typically coordinated by a centralized workflow engine. As a result, the centralized execution paradigm suffers from inefficient communication and a single point of failure, which is particularly problematic in the context of data-intensive processes. Consequently, more decentralized and flexible execution paradigms are required for data-intensive applications. From a computational point of view, the problems of QoS-aware data-intensive web service composition and execution can be characterised as complex, large-scale, constrained, multi-objective optimization problems. Therefore, genetic programming (GP) based solutions are presented in this thesis to address them. A series of simulation experiments is provided to demonstrate the performance of the proposed approaches, and the empirical observations are also described in this thesis. Firstly, we propose a hybrid approach that integrates the local search procedure of tabu search into the global search process of GP to solve the problem of QoS-aware data-intensive web service composition. A mathematical model is developed to account for the mass data transmission across different component services in a data-intensive service composition. The experimental results show that our proposed approach provides better performance than the standard GP approach and two traditional optimization methods. Next, a many-objective evolutionary approach is proposed for tackling the QoS-aware data-intensive service composition problem with more than three competing quality objectives. In this approach, the original search space of the problem is reduced before a recently developed many-objective optimization algorithm, NSGA-III, is adopted to solve the many-objective optimization problem. The experimental results demonstrate the effectiveness of our approach, as well as its superiority over existing single-objective and multi-objective approaches. Finally, a GP-based approach to partitioning a composite data-intensive service for decentralized execution is put forth in this thesis. As in the first problem, a mathematical model is developed to estimate the communication overhead within and across partitions. The data and control dependencies in the original composite web service are properly preserved in the deployment topology generated by our approach. Compared with two existing heuristic algorithms, the proposed approach exhibits better scalability and is more suitable for large-scale partitioning problems.
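    To make the kind of QoS aggregation such a model involves concrete, the sketch below computes end-to-end QoS for a sequential composition in which shipping data between component services adds a transfer delay. It is a minimal illustration, not the thesis's actual model: the attribute names, the sample values, and the fixed-bandwidth assumption are all hypothetical.

        # Illustrative sketch: end-to-end QoS of a sequential data-intensive
        # composition, where passing data between component services adds a
        # communication delay of data_size / bandwidth.

        def aggregate_qos(services, bandwidth_mb_s=100.0):
            """Aggregate QoS for a sequential composition.

            services: list of dicts with 'response_time' (s), 'cost',
            'reliability', and 'output_mb' (data passed to the next service).
            bandwidth_mb_s: assumed network bandwidth in MB/s.
            """
            total_time, total_cost, reliability = 0.0, 0.0, 1.0
            for i, s in enumerate(services):
                total_time += s["response_time"]
                total_cost += s["cost"]
                reliability *= s["reliability"]
                if i < len(services) - 1:  # data shipped to the next component
                    total_time += s["output_mb"] / bandwidth_mb_s
            return {"time": total_time, "cost": total_cost, "reliability": reliability}

        composition = [
            {"response_time": 0.4, "cost": 2.0, "reliability": 0.99, "output_mb": 500},
            {"response_time": 0.9, "cost": 5.0, "reliability": 0.97, "output_mb": 120},
            {"response_time": 0.2, "cost": 1.0, "reliability": 0.995, "output_mb": 0},
        ]
        print(aggregate_qos(composition))

    A GP or tabu-search loop would then evolve candidate compositions and use an aggregate like this as (part of) the fitness function.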

    Advanced CMOS Integrated Circuit Design and Application

    Get PDF
    The recent development of various application systems and platforms, such as 5G, B5G, 6G, and IoT, is based on advances in CMOS integrated circuit (IC) technology that enable the implementation of high-performance chipsets. Beyond the traditional fields of analog and digital integrated circuits, CMOS IC design for high-power and high-frequency operation, previously thought possible only with compound semiconductor technology, has become a core technology driving rapid industrial development. This book aims to highlight advances in all aspects of CMOS integrated circuit design and application, without discriminating between operating frequencies, output powers, or the analog/digital domains. Specific topics in the book include: next-generation CMOS circuit design and application; CMOS RF/microwave/millimeter-wave/terahertz-wave integrated circuits and systems; CMOS integrated circuits for wireless or wired systems and applications, such as converters, sensors, interfaces, and frequency synthesizers/generators/rectifiers; and algorithms and signal-processing methods to improve the performance of CMOS circuits and systems.

    Multi-constraint optimal allocation of workflows to resources in a Cloud Computing environment

    Get PDF
    Cloud Computing is increasingly recognized as a new way to use on-demand computing, storage and network services in a transparent and efficient way. In this thesis, we address the problem of workflow scheduling on the distributed, heterogeneous infrastructure of Cloud Computing. Existing workflow scheduling approaches mainly focus on the bi-objective optimization of makespan and cost. In this thesis, we propose new workflow scheduling algorithms based on metaheuristics. Our algorithms are able to handle more than two QoS (Quality of Service) metrics, namely makespan, cost, reliability, availability and, in the case of physical resources, energy. In addition, they address several constraints according to the requirements specified in the SLA (Service Level Agreement). Our algorithms have been evaluated by simulation, using (1) synthetic workflows and real-world scientific workflows with different structures as applications, and (2) the features of Amazon EC2 services as the Cloud. The obtained results show the effectiveness of our algorithms when dealing with multiple QoS metrics. Our algorithms produce one or more solutions, some of which outperform the solution produced by the HEFT heuristic over all the QoS metrics considered, including the makespan, for which HEFT is expected to give good results.
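    As an illustration of the fitness evaluation that drives such metaheuristics, the sketch below computes the makespan and cost of one task-to-VM assignment for a small workflow DAG. The VM speeds and prices are hypothetical placeholders, not Amazon EC2's actual figures, and the thesis's algorithms evaluate more QoS metrics than these two.

        # Hedged sketch: evaluate one candidate schedule (task -> VM mapping)
        # of a workflow DAG for makespan and monetary cost.

        def evaluate_schedule(tasks, deps, assign, vms):
            """tasks: {task: workload}, listed in topological order;
            deps: {task: [predecessors]}; assign: {task: vm};
            vms: {vm: (speed, price_per_second)}."""
            finish = {}
            vm_free = {vm: 0.0 for vm in vms}
            for t in tasks:
                # a task starts when all predecessors finished and its VM is free
                ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
                vm = assign[t]
                start = max(ready, vm_free[vm])
                runtime = tasks[t] / vms[vm][0]
                finish[t] = vm_free[vm] = start + runtime
            makespan = max(finish.values())
            cost = sum(tasks[t] / vms[assign[t]][0] * vms[assign[t]][1] for t in tasks)
            return makespan, cost

        vms = {"small": (1.0, 0.02), "large": (4.0, 0.09)}   # speed, $/s (illustrative)
        tasks = {"a": 8.0, "b": 4.0, "c": 6.0}               # abstract workload units
        deps = {"b": ["a"], "c": ["a"]}
        print(evaluate_schedule(tasks, deps, {"a": "large", "b": "small", "c": "large"}, vms))

    A metaheuristic would search the space of assignments, using evaluations like this one to compare candidates against the SLA constraints.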

    Energy optimization for wireless sensor networks using hierarchical routing techniques

    Get PDF
    Philosophiae Doctor - PhD. Wireless sensor networks (WSNs) have become a popular research area that is gaining attention from both the research and practitioner communities due to its wide range of applications. These applications include real-time sensing for audio delivery, imaging, video streaming, and remote monitoring, with positive impact in many fields such as precision agriculture, ubiquitous healthcare, environmental protection, and smart cities. While WSNs are expected to handle increasingly intricate functions such as intelligent computation, automatic transmission, and in-network processing, such capabilities are constrained by their limited processing capability and memory footprint, as well as the need for sensor batteries to be cautiously consumed in order to extend their lifetime. This thesis revisits the issue of energy efficiency in sensor networks by proposing a novel clustering approach for routing sensor readings in wireless sensor networks. The main contribution of this dissertation is to 1) propose corrective measures to the traditional energy model adopted in current sensor network simulations, which erroneously discounts the role played by each node as well as the sensor node's capability and fabric, and 2) apply these measures to a novel hierarchical routing architecture aimed at maximizing sensor network lifetime. We propose three energy models for sensor networks: a) a service-aware model that accounts for the specific role played by each node in the network; b) a sensor-aware model that accounts for the sensor node fabric and its energy footprint; and c) a load-balancing model structured to balance energy consumption across the network of cluster heads that forms the backbone of any cluster-based hierarchical sensor network. We present two novel approaches for clustering the nodes of a hierarchical sensor network: a) distance-aware clustering, where nodes are clustered based on their distance and residual energy, and b) service-aware clustering, where nodes are clustered according to the service they offer to the network and their residual energy. These approaches are implemented in a family of routing protocols referred to as EOCIT (Energy Optimization using Clustering Techniques), which combines sensor node energy, location and service awareness to achieve good network performance. Finally, building upon the Ant Colony System (ACS), we propose MRACO, a novel Multipath Routing protocol based on the Ant Colony Optimization approach for Wireless Sensor Networks, which finds energy-efficient routing paths for disseminating sensor readings from the cluster heads to the sink/base station of a hierarchical sensor network. Our simulation results reveal the relative efficiency of the newly proposed approaches compared to selected related routing protocols in terms of sensor network lifetime maximization.
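    The sketch below illustrates one way a distance- and energy-aware cluster-head election can be expressed: nodes score higher when they have ample residual energy and sit close to the base station. The scoring weights and the 20% head ratio are hypothetical choices for illustration, not EOCIT's actual parameters.

        # Illustrative sketch of distance- and energy-aware cluster-head election.
        import math

        def elect_cluster_heads(nodes, base, ratio=0.2, w_energy=0.7, w_dist=0.3):
            """nodes: {node_id: (x, y, residual_energy)}; base: (x, y).
            Returns the top-scoring fraction of nodes as cluster heads."""
            max_e = max(e for _, _, e in nodes.values())
            max_d = max(math.dist((x, y), base) for x, y, _ in nodes.values())

            def score(n):
                x, y, e = nodes[n]
                # high residual energy and short distance to base both help
                return w_energy * (e / max_e) + w_dist * (1 - math.dist((x, y), base) / max_d)

            k = max(1, int(ratio * len(nodes)))
            return sorted(nodes, key=score, reverse=True)[:k]

        nodes = {1: (10, 20, 0.9), 2: (80, 75, 0.5), 3: (15, 30, 0.4),
                 4: (60, 10, 0.8), 5: (40, 40, 0.7)}
        print(elect_cluster_heads(nodes, base=(0, 0)))

    A service-aware variant would add a term for the node's role in the network; re-running the election as residual energies drop is what spreads the cluster-head burden over time.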

    Design and Evaluation of Low-Latency Communication Middleware on High Performance Computing Systems

    Get PDF
    The use of Java for parallel computing is becoming more promising owing to its appealing features, particularly its multithreading support, portability, easy-to-learn properties, high programming productivity and the noticeable improvement in its computational performance. However, parallel Java applications generally suffer from inefficient communication middleware, most of which uses socket-based protocols that are unable to take full advantage of high-speed networks, hindering the adoption of Java in the High Performance Computing (HPC) area. This PhD Thesis presents the design, development and evaluation of scalable Java communication solutions that overcome these constraints. Hence, we have implemented several low-level message-passing devices that fully exploit the underlying network hardware while taking advantage of Remote Direct Memory Access (RDMA) operations to provide low-latency communications. Moreover, we have developed a production-quality Java message-passing middleware, FastMPJ, in which the devices have been integrated seamlessly, thus allowing the productive development of Message-Passing in Java (MPJ) applications. The performance evaluation has shown that FastMPJ communication primitives are competitive with native message-passing libraries, significantly improving the scalability of MPJ applications. Furthermore, this Thesis has analyzed the potential of cloud computing for spreading the outreach of HPC, where Infrastructure as a Service (IaaS) offerings have emerged as a feasible alternative to traditional HPC systems. Several cloud resources from the leading IaaS provider, Amazon EC2, which specifically target HPC workloads, have been thoroughly assessed. The experimental results have shown the significant impact that virtualized environments still have on network performance, which hampers porting communication-intensive codes to the cloud. The key is the availability of proper virtualization support, such as direct access to the network hardware, along with the guidelines for performance optimization suggested in this Thesis.
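    FastMPJ itself is Java middleware, but the ping-pong test commonly used to report such communication latencies is language-agnostic. The Python sketch below, assuming a loopback socket pair and small messages that arrive whole, estimates one-way latency as half the measured round-trip time; it only illustrates the measurement method, not FastMPJ's API.

        # Self-contained ping-pong latency sketch over a local socket pair.
        import socket, threading, time

        def echo(sock, iters, size):
            for _ in range(iters):
                buf = sock.recv(size)  # bounce every message straight back
                sock.sendall(buf)

        def ping_pong(iters=1000, size=64):
            a, b = socket.socketpair()
            t = threading.Thread(target=echo, args=(b, iters, size))
            t.start()
            msg = b"x" * size
            start = time.perf_counter()
            for _ in range(iters):
                a.sendall(msg)
                a.recv(size)  # small messages assumed delivered in one recv
            elapsed = time.perf_counter() - start
            t.join(); a.close(); b.close()
            return elapsed / iters / 2  # half round-trip ~ one-way latency

        print(f"~{ping_pong() * 1e6:.1f} us one-way latency (loopback, illustrative)")

    On real low-latency interconnects the same method is applied between two hosts, with RDMA-capable transports showing their advantage at exactly this small-message scale.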

    Federated knowledge base debugging in DL-Lite_A

    Full text link
    Due to the continuously growing amount of data, the federation of different and distributed data sources has gained increasing attention. In order to tackle the challenge of federating heterogeneous sources, a variety of approaches has been proposed. Especially in the context of the Semantic Web, the application of Description Logics is one of the preferred methods to model federated knowledge based on a well-defined syntax and semantics. However, the more data are available from heterogeneous sources, the higher the risk of inconsistency, a serious obstacle to performing reasoning tasks and query answering over a federated knowledge base. For a single knowledge base, the process of knowledge base debugging, comprising the identification and resolution of conflicting statements, has been widely studied, while federated settings integrating a network of loosely coupled data sources (such as LOD sources) have mostly been neglected. In this thesis we tackle the challenging problem of debugging federated knowledge bases and focus on a lightweight Description Logic language, called DL-Lite_A, that is aimed at applications requiring efficient and scalable reasoning. After introducing formal foundations such as Description Logics and Semantic Web technologies, we clarify the motivating context of this work and discuss the general problem of information integration based on Description Logics. The main part of this thesis is subdivided into three subjects. First, we discuss the specific characteristics of federated knowledge bases and provide an appropriate approach for detecting and explaining contradictory statements in a federated DL-Lite_A knowledge base. Second, we study the representation of the identified conflicts and their relationships as a conflict graph and propose an approach for repair generation based on majority voting and statistical evidence. Third, in order to provide an alternative way of handling inconsistency in federated DL-Lite_A knowledge bases, we propose an automated approach for assessing adequate trust values (i.e., probabilities) at different levels of granularity by leveraging probabilistic inference over a graphical model. In the last part of this thesis, we evaluate the previously developed algorithms against a set of large distributed LOD sources. In the course of discussing the experimental results, it turns out that the proposed approaches are sufficient, efficient and scalable with respect to real-world scenarios. Moreover, due to the exploitation of the federated structure in our algorithms, it becomes apparent that the number of identified wrong statements, the quality of the generated repair, and the fineness of the assessed trust values all profit from an increasing number of integrated sources.
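    The sketch below illustrates the majority-voting idea on a toy conflict set: each conflicting pair of statements is resolved by retracting the one asserted by fewer federated sources. It is a deliberate simplification; the thesis's repair generation over DL-Lite_A conflict graphs and statistical evidence is considerably richer.

        # Toy sketch of majority-vote repair generation over pairwise conflicts.

        def majority_repair(conflicts, support):
            """conflicts: iterable of (stmt_a, stmt_b) pairs that cannot both hold;
            support: {stmt: set of sources asserting it}.
            Returns the set of statements to retract."""
            retract = set()
            for a, b in conflicts:
                if a in retract or b in retract:
                    continue  # this conflict is already resolved
                # retract the statement backed by fewer sources (ties: retract b)
                retract.add(a if len(support[a]) < len(support[b]) else b)
            return retract

        support = {"s1": {"src1", "src2", "src3"}, "s2": {"src4"},
                   "s3": {"src2", "src4"}}
        print(majority_repair([("s1", "s2"), ("s2", "s3")], support))  # -> {'s2'}

    This also shows why more integrated sources help: the vote margins grow, making it likelier that the genuinely wrong statement is the one retracted.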

    A decision framework to mitigate vendor lock-in risks in cloud (SaaS category) migration.

    Get PDF
    Cloud computing offers an innovative business model for enterprise IT service consumption and delivery. However, vendor lock-in is recognised as a major barrier to the adoption of cloud computing, due to the lack of standardisation. So far, solutions and efforts tackling the vendor lock-in problem have been predominantly technology-oriented, and limited studies exist to analyse and highlight the complexity of the vendor lock-in problem in the cloud environment. Consequently, customers are unaware of the proprietary standards which inhibit the interoperability and portability of applications when taking services from vendors. The complexity of the service offerings makes it imperative for businesses to use a clear and well-understood decision process to procure, migrate and/or discontinue cloud services. To date, the expertise and technological solutions to simplify such a transition, and to facilitate good decision making that avoids lock-in risks in the cloud, are limited. Moreover, few research investigations have been carried out to provide a cloud migration decision framework that assists enterprises in avoiding lock-in risks when implementing cloud-based Software-as-a-Service (SaaS) solutions within existing environments. Such a decision framework is important to reduce complexity and variations in implementation patterns on the cloud provider side, while at the same time minimising the potential switching costs for enterprises by resolving integration issues with existing IT infrastructures. Thus, the purpose of this thesis is to propose a decision framework to mitigate vendor lock-in risks in cloud (SaaS) migration. The framework follows a systematic literature review and analysis to present research findings containing factual and objective information, and business requirements for vendor-neutral interoperable cloud services and for making architectural decisions for secure cloud migration and integration. The underlying research procedure for this thesis consists of a survey based on qualitative and quantitative approaches, conducted to identify the main risk factors that give rise to cloud computing lock-in situations. Epistemologically, the research design consists of two distinct phases. In phase 1, qualitative data were collected through open-ended interviews with IT practitioners to explore the business-related issues of vendor lock-in affecting cloud adoption. The goal of phase 2 was to identify and evaluate the risks and opportunities of lock-in which affect stakeholders' decision-making about migrating to cloud-based solutions. In synthesis, the survey analysis and the framework proposed by this research, through its step-by-step approach, provide guidance on how enterprises can avoid being locked in to individual cloud service providers. This reduces the risk of dependency on a cloud provider for service provision, especially if data portability, the most fundamental aspect, is not enabled. Moreover, it ensures appropriate pre-planning and due diligence so that the cloud service provider(s) with the most acceptable vendor lock-in risks are chosen, and that the impact on the business is properly understood (upfront), managed (iteratively), and controlled (periodically). Each decision step within the framework prepares the way for the subsequent one, supporting a company in gathering the correct information to make the right decision before proceeding to the next step.
    The reason for such an approach is to support an organisation in planning and adapting the services to suit its business requirements and objectives. Furthermore, several strategies are proposed on how to avoid and mitigate lock-in risks when migrating to cloud computing. The strategies relate to contracts, the selection of vendors that support standardised formats and protocols for data structures and APIs, negotiating cloud service level agreements (SLAs) accordingly, and developing awareness of commonalities and dependencies among cloud-based solutions. The implementation of the proposed strategies and supporting framework has great potential to reduce the risks of vendor lock-in.
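    To make the decision-support idea concrete, the sketch below ranks SaaS providers by a weighted score over lock-in-related criteria. The criteria, weights, and ratings are hypothetical placeholders for illustration, not values prescribed by the framework.

        # Illustrative weighted-score comparison of providers on lock-in criteria
        # (higher score = lower estimated lock-in risk).

        CRITERIA = {"data_portability": 0.4, "standard_apis": 0.3,
                    "exit_cost": 0.2, "sla_flexibility": 0.1}

        def lock_in_score(ratings):
            """ratings: {criterion: value in [0, 1]} -> weighted score."""
            return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

        providers = {
            "vendor_a": {"data_portability": 0.9, "standard_apis": 0.7,
                         "exit_cost": 0.6, "sla_flexibility": 0.8},
            "vendor_b": {"data_portability": 0.4, "standard_apis": 0.9,
                         "exit_cost": 0.3, "sla_flexibility": 0.5},
        }
        for name, ratings in sorted(providers.items(),
                                    key=lambda kv: -lock_in_score(kv[1])):
            print(f"{name}: {lock_in_score(ratings):.2f}")

    In the framework's terms, such a scoring pass would sit inside one decision step, with the weights themselves elicited from the business requirements gathered earlier.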

    Strategies for the intelligent selection of components

    Get PDF
    It is becoming common to build applications as component-intensive systems: a mixture of fresh code and existing components. For application developers, the selection of components to incorporate is key to overall system quality, so they want the 'best'. For each selection task, the application developer will define requirements for the ideal component and use them to select the most suitable one. While many software selection processes exist, there is a lack of repeatable, usable, flexible, automated processes with tool support. This investigation has focussed on finding and implementing strategies to enhance the selection of software components. The study was built around four research elements, targeting characterisation, process, strategies and evaluation. A post-positivist methodology was used, with the Spiral Development Model (SDM) structuring the investigation. Data for the study were generated using a range of qualitative and quantitative methods, including a survey, a range of case studies, and quasi-experiments focusing on the specific tuning of tools and techniques. Evaluation and review are integral to the SDM: a Goal-Question-Metric (GQM)-based approach was applied to every Spiral.
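    As a minimal illustration of the automatable selection step this describes, the sketch below filters candidate components on mandatory requirements and ranks the survivors by weighted desirable features. The schema, feature names, and weights are hypothetical, not the thesis's actual process.

        # Illustrative component selection: mandatory filter + weighted ranking.

        def select(candidates, mandatory, desirable):
            """candidates: {name: set of features}; mandatory: required features;
            desirable: {feature: weight}. Returns viable candidates, best first."""
            viable = {n: f for n, f in candidates.items() if mandatory <= f}
            def score(feats):
                return sum(w for feat, w in desirable.items() if feat in feats)
            return sorted(viable, key=lambda n: score(viable[n]), reverse=True)

        candidates = {
            "comp_a": {"xml_export", "logging", "encryption"},
            "comp_b": {"xml_export", "logging"},
            "comp_c": {"logging", "encryption"},
        }
        print(select(candidates, mandatory={"logging"},
                     desirable={"xml_export": 0.6, "encryption": 0.4}))

    A GQM-style evaluation would then check, per Spiral, whether the chosen weights and features actually predicted component fitness in practice.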

    A study on approximate matching for similarity search: techniques, limitations, and improvements for digital forensic investigations

    Get PDF
    Advisor: Marco Aurélio Amaral Henriques. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    Digital forensics is a branch of Computer Science aiming at investigating and analyzing electronic devices in the search for crime evidence. With the rapid increase in data storage capacity, the use of automated procedures to handle the massive volume of data available nowadays is required, especially in forensic investigations, in which time is a scarce resource. One possible approach to make the process more efficient is the Known File Filter (KFF) technique, where a list of objects of interest is used to reduce/separate data for analysis. Holding a database of hashes of such objects, the examiner performs lookups for matches against the target device under investigation. However, due to limitations of cryptographic hash functions (their inability to detect similar objects), new methods have been designed based on Approximate Matching (AM). They appear as suitable candidates to perform this process because of their ability to identify similarity at the bytewise level in a very efficient way, by creating and comparing compact representations of objects (a.k.a. digests). In this work, we present Approximate Matching functions. We show some of the best-known AM tools and present the Similarity Digest Search Strategies (SDSS), capable of performing similarity search (using AM) more efficiently, especially when dealing with large data sets. We perform a detailed analysis of current SDSS approaches and, given that current strategies only work for a few particular AM tools, we propose a new strategy based on a different tool that has good characteristics for forensic investigations. Furthermore, we address some limitations of current AM tools regarding the similarity detection process, where many matches pointed out as similar are in fact false positives; the tools are usually misled by common blocks (pieces of data common to many different objects). By removing such blocks from AM digests, we obtain significant improvements in the detection of similar data. We also present a detailed theoretical analysis of the capabilities of the sdhash AM tool and provide some improvements to its comparison function, where our improved version has a more precise similarity measure (score). Lastly, new applications of AM are presented and analyzed: one for fast file identification based on data samples and another for efficient fingerprint identification. We hope that practitioners in the forensics field and other related areas will benefit from our studies on AM when solving their problems.
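    The toy sketch below illustrates the common-block idea just described, assuming byte n-gram features in place of real similarity digests: features shared by much of the corpus are removed before a Jaccard-style score is computed, so two files sharing only boilerplate headers score 0 instead of producing a false positive. Real tools such as sdhash build far more elaborate digests; this only demonstrates the principle.

        # Toy bytewise approximate matching with common-block removal.

        def features(data, n=8):
            return {data[i:i + n] for i in range(len(data) - n + 1)}

        def similarity(a, b, corpus, n=8, common_threshold=0.5):
            fa, fb = features(a, n), features(b, n)
            # features occurring in at least half the corpus count as common
            # blocks and are dropped, reducing false-positive matches
            counts = {}
            for obj in corpus:
                for f in features(obj, n):
                    counts[f] = counts.get(f, 0) + 1
            common = {f for f, c in counts.items() if c / len(corpus) >= common_threshold}
            fa, fb = fa - common, fb - common
            if not fa or not fb:
                return 0
            return round(100 * len(fa & fb) / len(fa | fb))  # 0-100 score

        corpus = [b"HEADERHEADER payload one", b"HEADERHEADER payload two",
                  b"unrelated bytes here"]
        # the shared header is filtered out, so these two score 0, not "similar"
        print(similarity(corpus[0], corpus[1], corpus))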

    Center for Computational Sciences, University of Tsukuba: FY2012 (Heisei 24) Annual Report

    Get PDF
    1. FY2011 priority measures and improvement goals …… 2
    2. FY2012 implementation report …… 5
    3. Reports from the research divisions …… 11
      I. Division of Particle Physics …… 11
      II. Division of Astrophysics and Nuclear Physics …… 40
        II-1. Astrophysics …… 40
        II-2. Nuclear Physics …… 65
      III. Division of Quantum Condensed Matter Physics …… 88
      IV. Division of Life Sciences …… 115
        IV-1. Biological Function and Information …… 115
        IV-2. Molecular Evolution …… 125
      V. Division of Global Environmental Science …… 136
      VI. Division of High-Performance Computing Systems …… 146
      VII. Division of Computational Informatics …… 165
        VII-1. Data Infrastructure …… 165
        VII-2. Computational Media …… 17