
    PList-based Divide and Conquer Parallel Programming

    This paper details an extension of JPLF, a Java parallel programming framework that helps programmers build parallel programs from existing building blocks. The framework is based on the PowerList and PList theories and naturally supports multi-way divide and conquer; by using it, the programmer is exempted from dealing with the complexities of writing parallel programs from scratch. The extension adds PList support to JPLF and thereby enlarges the framework's applicability to a wider set of parallel solvable problems: more flexible data-division strategies can be applied, and the length of the input lists no longer has to be a power of two, as the PowerList theory requires. We present new applications that emphasize the new class of computations that can be executed within the JPLF framework, give a detailed description of the data structures and functions involved in the PList extension, and describe and analyze extended performance experiments.
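    JPLF itself is a Java framework, so the following is only a minimal language-agnostic sketch, in Python, of the multi-way divide-and-conquer pattern that PLists enable: the input list is split into k nearly equal parts (no power-of-two length required), subproblems are solved in parallel, and partial results are combined. The names dnc_reduce, k, and threshold are illustrative, not JPLF API.

```python
from concurrent.futures import ThreadPoolExecutor

def dnc_reduce(xs, combine, k=3, threshold=4):
    """Multi-way divide and conquer over a plain list: split xs into k
    nearly equal sublists, solve the subproblems in parallel, then
    combine the partial results."""
    if len(xs) <= threshold:                      # base case: sequential fold
        acc = xs[0]
        for x in xs[1:]:
            acc = combine(acc, x)
        return acc
    step = -(-len(xs) // k)                       # ceiling division
    parts = [xs[i:i + step] for i in range(0, len(xs), step)]
    with ThreadPoolExecutor(max_workers=k) as pool:
        futures = [pool.submit(dnc_reduce, p, combine, k, threshold) for p in parts]
        partials = [f.result() for f in futures]
    acc = partials[0]                             # combine the k partial results
    for p in partials[1:]:
        acc = combine(acc, p)
    return acc

print(dnc_reduce(list(range(10)), lambda a, b: a + b))  # prints 45
```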

    Performance and Analysis of a U-Net Model for Automated Skin Lesion Segmentation

    Skin cancer, particularly melanoma, affects a growing proportion of people and has a high tendency to metastasize. For dermatologists, visual inspection for melanoma detection is a challenging and complex task, so dermoscopic images are analyzed and segmented instead. Because of the sensitivity involved in surgical operations, existing techniques are unable to achieve high enough accuracy; as a result, computer-aided systems are essential to detect and segment dermoscopic images. In this paper, 5000 skin images from the HAM10000 dataset were used for segmentation. Prior to segmentation, preprocessing is done by resizing the images. A novel U-Net architecture, a fully convolutional network, is presented and implemented using up-sampling and down-sampling with Rectified Linear Unit (ReLU) activation functions. The proposed methodology improves skin-lesion segmentation, achieving 94.7% pixel accuracy and an 89.2% Dice coefficient compared with existing KNN and SVM techniques.
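    The two reported metrics are straightforward to compute from binary masks. Below is a minimal NumPy sketch of pixel accuracy and the Dice coefficient, assuming the prediction and ground truth are same-sized binary arrays; the function names and toy masks are illustrative, not the paper's code.

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels where the predicted binary mask matches the ground truth."""
    return float((pred == target).mean())

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|P intersect T| / (|P| + |T|); eps guards against empty masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy 4x4 masks: prediction differs from the ground truth at one pixel.
pred   = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]])
target = np.array([[1,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]])
print(pixel_accuracy(pred, target))    # 0.9375 (15 of 16 pixels agree)
print(dice_coefficient(pred, target))  # ~0.857 (2*3 / (4+3))
```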

    A Review on Detection of Medical Plant Images

    Both human and non-human life on Earth depends heavily on plants, which play the most significant role in the natural cycle. Because of the pace of recent plant discoveries and the computerization of plant data, plant identification is particularly challenging in biology and agriculture. Automatic plant classification systems must be put into place for a variety of reasons, including instruction, resource evaluation, and environmental protection. The leaves of medicinal plants are thought to be what distinguishes them. Identifying a plant species automatically from photographs of its leaves is an appealing goal, because taxonomists are undertrained and biodiversity is quickly vanishing in the current environment. Due to the need for mass production, these plants must be identified promptly, and people's physical and emotional health must be taken into consideration when developing drugs. An important step in processing medicinal herbs is identifying and classifying them, which can be difficult because there are not many specialists in this field; a fully automated approach is therefore optimal. This article briefly summarizes the numerous methods for classifying medicinal plants that are based on the shape (silhouette) and texture (roughness) of a plant's leaf.
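    The reviewed methods classify leaves from shape and texture descriptors. As a rough illustration of what such hand-crafted features look like, here is a sketch using scikit-image; the chosen descriptors and function names are illustrative assumptions, not those of any specific surveyed method.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import label, regionprops

def leaf_features(mask, gray):
    """Shape + texture descriptors for one segmented leaf.
    mask: binary leaf mask; gray: uint8 grayscale image of the same size."""
    region = max(regionprops(label(mask)), key=lambda r: r.area)  # largest blob
    shape = {
        "aspect_ratio": region.major_axis_length / max(region.minor_axis_length, 1e-6),
        "solidity": region.solidity,          # area / convex-hull area
        "eccentricity": region.eccentricity,
    }
    # A gray-level co-occurrence matrix captures leaf-surface texture ("roughness").
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = {
        "contrast": float(graycoprops(glcm, "contrast")[0, 0]),
        "homogeneity": float(graycoprops(glcm, "homogeneity")[0, 0]),
    }
    return {**shape, **texture}

# Synthetic rectangular "leaf" just to exercise the function.
mask = np.zeros((32, 32), dtype=np.uint8); mask[8:24, 10:20] = 1
gray = (mask * 128 + 20).astype(np.uint8)
print(leaf_features(mask, gray))
```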

    A survey on different plant diseases detection using machine learning techniques

    Early detection and identification of plant diseases from leaf images using machine learning is an important and challenging research area in agriculture. Such studies are needed in India, where agriculture is one of the main sources of income and contributes seventeen percent of the gross domestic product (GDP). Effective and improved crop products can increase farmers' profits as well as the economy of the country. In this paper, a comprehensive review of research on plant disease detection using both state-of-the-art handcrafted-features-based and deep-learning-based techniques is presented. We address the challenges faced in identifying plant diseases with handcrafted-features-based approaches and show how deep-learning-based approaches overcome them; the survey thus traces the research progression from handcrafted-features-based to deep-learning-based models. We report that deep-learning-based approaches achieve significant accuracy rates on a particular dataset, but a model's performance may decrease significantly when the system is tested under field image conditions or on different datasets. Among the deep learning models, those with inception layers, such as GoogLeNet and InceptionV3, have a better ability to extract features and produce higher performance results. We also address some of the challenges that still need to be solved to identify plant diseases effectively.
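    Since the survey singles out inception-based models, a common way to apply one to a new plant disease dataset is transfer learning. The following Keras sketch freezes ImageNet-pretrained InceptionV3 features and trains a new classification head; NUM_CLASSES, the hyperparameters, and the commented-out datasets are illustrative assumptions, not the survey's setup.

```python
import tensorflow as tf

NUM_CLASSES = 10  # hypothetical number of disease classes

base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         input_shape=(299, 299, 3))
base.trainable = False  # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # datasets supplied by the user
```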

    An Analytical Study on Blockchain-Based Digital Assets: Focusing on Central Bank Digital Currencies, Stablecoins, and Non-Fungible Tokens

    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, Department of Industrial Engineering, College of Engineering, February 2023. Advisor: Jaewook Lee.
    This dissertation provides an in-depth analysis of three promising assets in the decentralized finance (DeFi) market: central bank digital currencies (CBDCs), stablecoins, and non-fungible tokens (NFTs). For CBDCs, a blockchain-based settlement model is proposed that uses cross-chain atomic swaps and a lattice-based sequential aggregate signature scheme to address two challenging design issues. For stablecoins, the connectedness and information transmission between the stablecoin and cryptocurrency markets are quantified through an analysis of the recent Terra-Luna crash. For NFTs, motivated by their low transaction volume compared with conventional cryptocurrencies, the return-volume causal relationships in NFT markets are analyzed. The proposed CBDC settlement model addresses two fundamental challenges in CBDC design: it introduces an administrator ledger to provide auditability and allows the administrator node to participate in every transaction. The model also uses cross-chain atomic swap technology and a lattice-based sequential aggregate signature scheme to ensure safety and enable cross-border payments, making it suitable for the growing need for stable and reliable digital currencies. It provides a secure and reliable way to track transaction records and match the identities of transaction participants, while also protecting against malicious behavior and quantum-computer attacks. Stablecoins backed by their own protocol's native tokens are highly susceptible to death spirals if the corresponding blockchain protocol is met with public distrust. During normal market conditions, the impact of stablecoins on the cryptocurrency market is difficult to measure because their prices remain fairly stable. To quantify this impact, we analyze the recent Terra-Luna crash with econometric methodologies such as the spillover index and effective transfer entropy, using hourly and 5-minute cryptocurrency prices, the Google Trends index, and tweets posted on StockTwits. The results show that the spillover effect of the stablecoin increased rapidly as the depeg started and that LUNA gained influence over the overall cryptocurrency market. The effective transfer entropy from LUNA to other cryptocurrencies such as BTC and ETH also increased dramatically. However, investor sentiment lost its role as an information transmitter during the crash, as the effective transfer entropy from investor sentiment to LUNA decreased significantly; we conclude that sharply divided bearish and bullish opinions about the future of LUNA left market sentiment without a clear direction. NFT markets are distinct from traditional cryptocurrency markets because each token is unique, which makes it difficult to find the right buyer-seller pair for an individual NFT and lowers trading liquidity. To understand the relationship between NFT trading volume and prices, we used the Granger causality test in quantiles on daily NFT transaction volumes and prices. The results show that the causality from overall NFT volume to returns becomes stronger in extreme market conditions, but individual NFT projects behave differently: Axie Infinity shows strong causality in every quantile, Decentraland shows a causal relationship only around the median, and the transaction volume of The Sandbox helps forecast The Sandbox prices only under bearish market conditions. Lastly, we find a strong causal relationship between NFT returns and the returns of their in-protocol native cryptocurrencies. Overall, our analysis shows that NFT volume and prices are closely related and should be taken into account when trading NFTs. This dissertation has explored various types of digital assets: blockchain-based CBDCs, stablecoins, and NFTs. It has proposed a blockchain-based CBDC model to address current obstacles in traditional and decentralized financial markets; its econometric analysis of a stablecoin death spiral has revealed the significant impact of stablecoins on the cryptocurrency and DeFi markets; and it has confirmed the return-volume causal relationships in NFT markets, providing guidance to NFT investors in different market conditions.
    Contents: Chapter 1 Introduction (1.1 Motivation of the Dissertation; 1.2 Aims of the Dissertation; 1.3 Organization of the Dissertation). Chapter 2 Analysis on Blockchain-based CBDC Settlement System (2.1 Chapter Overview; 2.2 Defining our CBDC research goal: security and privacy issues in CBDCs, our research challenges in CBDC; 2.3 Preliminaries: CBDC state of adoption, cryptographic background; 2.4 Proposed Model: model description, model architecture, our signature scheme AggSign; 2.5 Security Analysis: security of the settlement system, security of AggSign; 2.6 Proof-of-Concept Experiments and Analysis: simulation setting, experimental results; 2.7 Chapter Summary). Chapter 3 Quantifying the Connectedness between the Algorithmic-based Stablecoin and Cryptocurrency: The Impact of the Death Spiral (3.1 Chapter Overview; 3.2 Data and Methodology; 3.3 Empirical Findings: return and volatility spillover effects, effective transfer entropy; 3.4 Chapter Summary). Chapter 4 Return-Volume Relationship in Non-Fungible Tokens: Evidence from the Granger Causality in Quantiles (4.1 Chapter Overview; 4.2 Data and Methodology: Granger causality test in quantiles; 4.3 Empirical Results: causal effects of NFT volume on return, of NFT return on volume, and between NFTs and their native cryptocurrencies; 4.4 Chapter Summary). Chapter 5 Conclusion (5.1 Contributions of the Dissertation; 5.2 Future Works). Bibliography. Abstract in Korean.
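    The NFT chapter's central tool is Granger causality testing. As a rough illustration of the plain mean-based linear test on a volume-return pair, here is a statsmodels sketch; the quantile variant used in the dissertation is a different estimator, and the synthetic data below is purely illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)

# Synthetic stand-ins for daily NFT volume and return: volume leads return
# by one day, so volume should Granger-cause return (illustrative only).
n = 500
volume = rng.normal(size=n)
ret = 0.4 * np.roll(volume, 1) + rng.normal(scale=0.5, size=n)
df = pd.DataFrame({"return": ret, "volume": volume}).iloc[1:]  # drop wrapped sample

# H0: the series in the SECOND column does not Granger-cause the FIRST column.
results = grangercausalitytests(df[["return", "volume"]], maxlag=3)
```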

    GPGPU Reliability Analysis: From Applications to Large Scale Systems

    Over the past decade, GPUs have become an integral part of mainstream high-performance computing (HPC) facilities. Since applications running on HPC systems are usually long-running, any error or failure could result in a significant loss of scientific productivity and system resources. Even worse, since HPC systems face severe resilience challenges as they progress towards exascale computing, it is imperative to develop a better understanding of the reliability of GPUs. This dissertation fills this gap by providing an understanding of the effects of soft errors on the entire system and on specific applications. To understand system-level reliability, a large-scale study of GPU soft errors in the field is conducted. The occurrences of GPU soft errors are linked to several temporal and spatial features, such as specific workloads, node location, temperature, and power consumption. Further, machine learning models are proposed to predict error occurrences on GPU nodes so as to proactively and dynamically turn the costly error protection mechanisms on and off based on the prediction results. To understand the effects of soft errors at the application level, an effective fault-injection framework is designed to characterize the reliability and resilience of GPGPU applications. The framework reduces the tremendous number of fault-injection locations to a manageable size while still preserving remarkable accuracy, and it is validated with both single-bit and multi-bit fault models on various GPGPU benchmarks. Lastly, taking advantage of the proposed fault-injection framework, this dissertation develops a hierarchical approach to understanding the error resilience characteristics of GPGPU applications at the kernel, CTA, and warp levels. In addition, given that some application outputs corrupted by soft errors may be acceptable, we present a use case showing how to enable low-overhead yet reliable GPU computing for GPGPU applications.
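    The core operation in any such framework is flipping bits in application state and comparing the corrupted output with a fault-free ("golden") run. Below is a minimal NumPy sketch of the single-bit fault model on float32 data; it is a toy illustration, not the dissertation's GPU-level injector.

```python
import numpy as np

def inject_bit_flip(arr, index, bit):
    """Flip one bit of one float32 element to emulate a single-bit soft error."""
    bits = arr.ravel().view(np.uint32)   # reinterpret the float bits in place
    bits[index] ^= np.uint32(1) << np.uint32(bit)

# Toy "kernel output": compare the corrupted result against the golden run.
data = np.linspace(0.0, 1.0, 8, dtype=np.float32)
golden = float(data.sum())
inject_bit_flip(data, index=3, bit=30)   # bit 30 sits in the exponent field
faulty = float(data.sum())
print(golden, faulty)                    # silent data corruption shows up here
```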

    Energy-Aware Mapping and Scheduling Strategies for Real-Time Task Graphs with Reliability Constraints

    This paper focuses on energy minimization for the mapping and scheduling of real-time workflows under reliability constraints. Workflow instances are input periodically to the system. Each instance is composed of several tasks and must complete execution before the arrival of the next instance, and with a prescribed reliability threshold. While the shape of the dependence graph is identical for each instance, task execution times are stochastic and vary from one instance to the next. The reliability threshold is met by using several replicas for each task. The target platform consists of identical processors equipped with Dynamic Voltage and Frequency Scaling (DVFS) capabilities, and a different frequency can be assigned to each task replica. This difficult tri-criteria mapping and scheduling problem (energy, deadline, reliability) has been studied only recently for workflows with arbitrary dependence constraints [20, 11]. We investigate new mapping and scheduling strategies based upon layers in the task graph, which better balance replicas across processors, thereby decreasing the time overlap between the different replicas of the same task and saving energy. We compare these strategies with the two competitor approaches [20, 11] and a reference baseline [33] on a variety of benchmark workflows. Our best heuristics achieve an average energy gain of 40% over the competitors and of 80% over the baseline.
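    The abstract names two ingredients: partitioning the task graph into layers, and spreading each task's replicas across processors. Here is a minimal sketch of both, not the paper's heuristics; the function names and the toy graph are illustrative, and the distinct-processor property of the round-robin placement assumes the replica count does not exceed the processor count.

```python
from collections import defaultdict, deque

def dag_layers(tasks, preds):
    """Partition a DAG into layers: a task's layer is 1 + the max layer
    of its predecessors (entry tasks form layer 1)."""
    level, indeg = {}, {t: len(preds[t]) for t in tasks}
    succs = defaultdict(list)
    for t in tasks:
        for p in preds[t]:
            succs[p].append(t)
    ready = deque(t for t in tasks if indeg[t] == 0)
    while ready:                                  # Kahn-style topological sweep
        t = ready.popleft()
        level[t] = 1 + max((level[p] for p in preds[t]), default=0)
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    by_layer = defaultdict(list)
    for t, l in level.items():
        by_layer[l].append(t)
    return [by_layer[l] for l in sorted(by_layer)]

def map_replicas(layered, n_procs, n_replicas):
    """Round-robin the replicas of each layer's tasks over the processors,
    so replicas of the same task land on distinct processors."""
    mapping, proc = defaultdict(list), 0
    for layer in layered:
        for t in layer:
            for r in range(n_replicas):
                mapping[t].append((r, proc % n_procs))
                proc += 1
    return dict(mapping)

preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
layered = dag_layers(list(preds), preds)          # [['A'], ['B', 'C'], ['D']]
print(map_replicas(layered, n_procs=3, n_replicas=2))
```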