24 research outputs found

    Application Performance Optimization in Multicloud Environment

    The development and accessibility of the Internet have made cloud computing very popular. This concept has the potential to change the use of information technologies. Cloud computing provides infrastructure, platform, or software as a service over the network to a huge number of remote users. Its main benefits are the utilization of elastic resources and virtualization. Users require two main properties from clouds: interoperability and privacy. This article focuses on interoperability: nowadays it is difficult to migrate an application between clouds offered by different providers. The article addresses that problem in a multicloud environment, focusing specifically on application performance optimization. A new method is suggested based on the state of the art. The method is divided into three parts: a multicloud architecture, a method of horizontal scalability, and a taxonomy for multicriteria optimization. The principles of the method were applied in the design of a multicriteria optimization architecture, which we verified experimentally. Our experiment is carried out on a portal offering a platform according to the users' requirements.
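    The abstract does not detail the multicriteria optimization itself, but one common formulation is a weighted-sum score over normalized criteria for each candidate cloud. The sketch below is purely illustrative, not the paper's actual method; the criteria names, values, and weights are hypothetical.

```python
# Illustrative multicriteria scoring sketch (hypothetical criteria and
# weights, not taken from the paper): rank candidate clouds by a
# weighted sum over criteria normalized to [0, 1], where 1 is best.

def score_cloud(metrics, weights):
    """Weighted sum of normalized criteria; higher is better."""
    return sum(weights[c] * metrics[c] for c in weights)

clouds = {
    "provider_a": {"price": 0.7, "latency": 0.9, "capacity": 0.6},
    "provider_b": {"price": 0.9, "latency": 0.5, "capacity": 0.8},
}
weights = {"price": 0.5, "latency": 0.3, "capacity": 0.2}

# Pick the provider with the highest aggregate score.
best = max(clouds, key=lambda name: score_cloud(clouds[name], weights))
```

A weighted sum is only one taxonomy entry; Pareto-based or lexicographic orderings are common alternatives when criteria cannot be traded off linearly.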

    Reinforced Labels: Multi-Agent Deep Reinforcement Learning for Point-Feature Label Placement

    Over recent years, Reinforcement Learning combined with Deep Learning techniques has successfully solved complex problems in various domains, including robotics, self-driving cars, and finance. In this paper, we introduce Reinforcement Learning (RL) to label placement, a complex task in data visualization that seeks optimal positioning for labels to avoid overlap and ensure legibility. Our novel point-feature label placement method utilizes Multi-Agent Deep Reinforcement Learning to learn the label placement strategy; to our knowledge it is the first machine-learning-driven labeling method, in contrast to the existing hand-crafted algorithms designed by human experts. To facilitate RL learning, we developed an environment where an agent acts as a proxy for a label, a short textual annotation that augments a visualization. Our results show that the strategy trained by our method significantly outperforms the random strategy of an untrained agent and the compared methods designed by human experts in terms of completeness (i.e., the number of placed labels). The trade-off is increased computation time, making the proposed method slower than the compared methods. Nevertheless, our method is well suited to scenarios where the labeling can be computed in advance and completeness is essential, such as cartographic maps, technical drawings, and medical atlases. Additionally, we conducted a user study to assess the perceived performance. The outcomes revealed that the participants considered the proposed method to be significantly better than the other examined methods. This indicates that the improved completeness is not just reflected in the quantitative metrics but also in the subjective evaluation by the participants.
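    The core mechanics of such an environment can be sketched in a few lines: each label is an axis-aligned rectangle anchored to its point feature, a policy picks one of a few candidate positions, and the per-label reward tracks completeness (a label counts only if it overlaps nothing already placed). This is a minimal illustration under those assumptions, not the authors' implementation; the greedy policy below merely stands in for a trained RL strategy.

```python
# Minimal label-placement environment sketch (illustrative, not the
# paper's code). Boxes are (x, y, width, height) tuples.

def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def candidate_boxes(px, py, w, h):
    # Four candidate positions: right, left, above, below the point.
    return [(px, py, w, h), (px - w, py, w, h),
            (px, py - h, w, h), (px - w, py - h, w, h)]

def place_labels(points, w, h, policy):
    """policy(point, candidates, placed) -> index of chosen candidate.
    Reward counts successfully placed (non-overlapping) labels."""
    placed, reward = [], 0
    for p in points:
        cands = candidate_boxes(*p, w, h)
        box = cands[policy(p, cands, placed)]
        if not any(overlaps(box, q) for q in placed):
            placed.append(box)
            reward += 1  # completeness: one more label placed
    return placed, reward

# Greedy stand-in for a trained strategy: first non-overlapping candidate.
def greedy(p, cands, placed):
    for i, c in enumerate(cands):
        if not any(overlaps(c, q) for q in placed):
            return i
    return 0
```

In the multi-agent setting each label's agent would choose from such candidates based on a learned state representation rather than this fixed ordering.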

    Architecture of a Function-as-a-Service Application

    Serverless computing and Function-as-a-Service (FaaS) are programming paradigms that have many advantages for modern, distributed and highly modular applications. However, the process of transforming a legacy, monolithic application into a set of functions suitable for a FaaS environment can be a complex task. It may be questionable whether the obvious advantages gained from such a transformation outweigh the effort and resources spent on it. In this paper we present our continuing research aimed at the transformation of legacy applications into the FaaS paradigm. Our test subject is an airport visibility system, a sub-class of the meteorological services required for airport operations. We have chosen to modularize the application, divide it into parts that can be implemented as functions in the FaaS paradigm, and provide it with a simple cloud-based data management layer. The tools we are using are Apache OpenWhisk for FaaS, Apache Airflow for workflow management, and NextCloud for cloud storage. Only a part of the original application has been transformed, but it already allows us to draw some conclusions and, especially, to start forming a generalized picture of a Function-as-a-Service application.
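    For readers unfamiliar with OpenWhisk, a Python action is simply a module exposing main(params) that returns a JSON-serializable dict. The sketch below shows what one extracted visibility module could look like; the parameter name and the simple Koschmieder-style visibility rule are hypothetical examples, not the actual airport system's logic.

```python
# Sketch of a hypothetical Apache OpenWhisk Python action. OpenWhisk
# invokes main(params) with the action's JSON parameters and expects a
# JSON-serializable dict back. The input name and formula here are
# illustrative assumptions, not the paper's implementation.

def main(params):
    # Hypothetical input: atmospheric extinction coefficient per km.
    ext = float(params.get("extinction_per_km", 0.0))
    if ext <= 0:
        return {"error": "extinction_per_km must be positive"}
    # Koschmieder relation (5% contrast threshold): MOR ~ 3.0 / ext.
    mor_km = 3.0 / ext
    return {"meteorological_optical_range_km": round(mor_km, 2)}
```

Such an action would be deployed with the standard `wsk action create` workflow and orchestrated alongside the other extracted functions by an Airflow DAG.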

    Breast Histopathology with High-Performance Computing and Deep Learning

    The increasingly intensive collection of digitalized images of tumor tissue over the last decade has made histopathology a demanding application in terms of computational and storage resources. With images containing billions of pixels, the need for optimizing and adapting histopathology to large-scale data analysis is compelling. This paper presents a modular pipeline with three independent layers for the detection of tumorous regions in digital specimens of breast lymph nodes with deep learning models. Our pipeline can be deployed either on local machines or on high-performance computing resources with a containerized approach. The need for expertise in high-performance computing is removed by the self-sufficient structure of Docker containers, whereas a large scope for customization is left in terms of deep learning models and hyperparameter optimization. We show that by deploying the software layers on different infrastructures we optimize both the data preprocessing and the network training times, further increasing the scalability of the application to datasets of approximately 43 million images. The code is open source and available on GitHub.
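    The dataset sizes follow directly from tiling: a gigapixel whole-slide image is cut into small patches that fit a deep learning model, so a handful of slides already yields millions of training images. The slide and patch dimensions below are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope sketch of whole-slide tiling (illustrative sizes).
import math

def patch_count(width, height, patch=256):
    """Number of patch x patch tiles covering a width x height slide."""
    return math.ceil(width / patch) * math.ceil(height / patch)

# A hypothetical 100,000 x 80,000-pixel slide yields over 120,000
# 256 x 256 patches on its own.
n = patch_count(100_000, 80_000)
```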

    Can spirometry improve the performance of cardiovascular risk model in high-risk Eastern European countries?

    AIMS: Impaired lung function has been strongly associated with cardiovascular disease (CVD) events. In this study we aimed to assess the additive prognostic value of spirometry indices to the risk estimation of CVD events in Eastern European populations. METHODS: We randomly selected 14,061 individuals with a mean age of 59 ± 7.3 years and without a previous history of cardiovascular and pulmonary diseases from population registers in Czechia, Poland, and Lithuania. Predictive values of standardised Z-scores of forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC), and FEV1 divided by height cubed (FEV1/ht3) were tested. Cox proportional hazards models were used to estimate hazard ratios (HRs) of CVD events for the various spirometry indices over the Framingham Risk Score (FRS) model. The model performance was evaluated using Harrell's C-statistic, likelihood ratio tests, and the Bayesian information criterion. RESULTS: All spirometry indices had a strong linear relation with the incidence of CVD events (HRs ranged from 1.10 to 1.12 between indices). The model stratified by FEV1/ht3 tertiles had a stronger link with CVD events than those using FEV1 and FVC. The risk of a CVD event for the lowest vs. highest FEV1/ht3 tertile among people with low FRS was higher (HR: 2.35; 95% confidence interval: 1.96-2.81) than among those with high FRS. The addition of spirometry indices showed a small but statistically significant improvement of the FRS model. CONCLUSIONS: The addition of spirometry indices might improve the prediction of incident CVD events, particularly in the low-risk group. FEV1/ht3 is a more sensitive predictor than the other spirometry indices.
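    The two derived quantities used above are simple to state: FEV1/ht3 is the volume divided by the cube of standing height, and the standardised Z-score subtracts the sample mean and divides by the sample standard deviation. A minimal sketch with made-up values:

```python
# Sketch of the derived spirometry quantities (sample values are
# invented for illustration, not study data).
import statistics

def fev1_ht3(fev1_l, height_m):
    """FEV1 (litres) divided by standing height (metres) cubed."""
    return fev1_l / height_m ** 3

def z_scores(values):
    """Standardised Z-scores against the sample mean and SD."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]
```

In the study these indices then enter Cox proportional hazards models as predictors alongside the Framingham Risk Score components.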

    Educational gradients in all-cause mortality in two cohorts in the Czech Republic during the early stage of the postcommunist transition

    Objectives: We investigated whether the social gradient in all-cause mortality in the Czech Republic changed during the postcommunist transition by comparing two cohorts, recruited before and after the political changes in 1989. Methods: Participants (aged 25–64 years) in two population surveys (n=2530 in 1985, n=2294 in 1992) were followed up for mortality for 15 years (291 and 281 deaths, respectively). Education was classified into attainment categories and years of schooling (both continuous and in tertiles). Cox regression was used to estimate HRs of death by educational indices in each cohort over the 15-year follow-up. Results: All three educational variables were significantly associated with reduced risk of death in both cohorts when men and women were combined; for example, the adjusted HRs of death in the highest versus lowest tertile of years of schooling were 0.65 (95% CI 0.47 to 0.89) in 1985 and 0.67 (95% CI 0.48 to 0.93) in 1992. Adjustment for covariates attenuated the gradients. In sex-specific analysis, the gradient was more pronounced and statistically significant in men. There were no significant interactions between cohort and educational indices. Conclusions: The educational gradient in mortality did not differ between the two cohorts (1985 vs 1992), suggesting no major increase in educational inequality during the early stage of the postcommunist transition. Further research is needed to understand trends in health inequalities during socioeconomic transitions.

    Impaired lung function and mortality in Eastern Europe: results from multi-centre cohort study

    BACKGROUND: The association between impaired lung function and mortality has been well documented in the general population of Western European countries. We assessed the risk of death associated with reduced spirometry indices among people from four Central and Eastern European countries. METHODS: This prospective population-based cohort includes men and women aged 45-69 years, residents of urban settlements in the Czech Republic, Poland, Russia and Lithuania, randomly selected from population registers. The baseline survey in 2002-2005 included 36,106 persons, of whom 24,993 met the inclusion criteria. Cox proportional hazards models were used to estimate the hazard ratios of mortality over 11-16 years of follow-up for the mild, moderate, moderate-severe and very severe lung function impairment categories. RESULTS: After adjusting for covariates, mild (hazard ratio (HR): 1.25; 95% CI 1.15‒1.37) to severe (HR: 3.35; 95% CI 2.62‒4.27) reduction in FEV1 was associated with an increased risk of death according to the degree of lung impairment, compared to people with normal lung function. The association was only slightly attenuated and remained significant after exclusion of smokers and participants with a previous history of respiratory diseases. The HRs varied between countries, but the differences were not statistically significant; the highest excess risk among persons with more severe impairment was seen in Poland (HR: 4.28, 95% CI 2.14‒8.56) and Lithuania (HR: 4.07, 95% CI 2.21‒7.50). CONCLUSIONS: Reduced FEV1 is an independent predictor of all-cause mortality, with risk increasing with the degree of lung function impairment and some country-specific variation between the cohorts.

    Towards Exascale Computing Architecture and Its Prototype: Services and Infrastructure

    This paper presents the design and implementation of a scalable compute platform for processing large data sets within the scope of the EU H2020 project PROCESS. We present the requirements of the platform, related work, and the infrastructure, with a focus on the compute components, and finally the results of our work.

    PROCESS Data Infrastructure and Data Services

    Due to energy limitations and high operational costs, it is likely that exascale computing will not be achieved by one or two datacentres but will require many more. A simple calculation aggregating the computational power of the 2017 Top500 supercomputers reaches only 418 petaflops. Rescale, a company which claims 1.4 exaflops of peak computing power, describes its infrastructure as 8 million servers spread across 30 datacentres. Any proposed solution to address exascale computing challenges has to take these facts into consideration and by design should aim to support the use of geographically distributed and likely independent datacentres. It should also consider, whenever possible, co-allocating storage with computation, as it would take about 3 years to transfer 1 exabyte over a dedicated 100 Gb Ethernet connection. This means we have to be smart about managing data that is increasingly geographically dispersed and spread across different administrative domains. As the natural setting of the PROCESS project is to operate within the European Research Infrastructure and serve the European research communities facing exascale challenges, it is important that the PROCESS architecture and solutions are well positioned within the European computing and data management landscape, namely PRACE, EGI, and EUDAT. In this paper we propose a scalable and programmable data infrastructure that is easy to deploy and can be tuned to support various data-intensive scientific applications.
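    The transfer-time figure is easy to check: 1 exabyte (taken here as 10^18 bytes) over a sustained 100 Gb/s link comes out to roughly 2.5 years at the raw line rate, i.e. on the order of 3 years once protocol overhead and real-world utilization are factored in. A quick sketch of that arithmetic:

```python
# Idealized transfer-time calculation: sustained line rate, no protocol
# overhead. 1 EB is taken as 1e18 bytes.

def transfer_days(bytes_total, link_gbit_s):
    bits = bytes_total * 8
    seconds = bits / (link_gbit_s * 1e9)
    return seconds / 86_400  # seconds per day

days = transfer_days(1e18, 100)   # 1 exabyte over 100 Gb Ethernet
years = days / 365                # roughly 2.5 years at line rate
```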

    Nationwide increases in anti-SARS-CoV-2 IgG antibodies between October 2020 and March 2021 in the unvaccinated Czech population

    Background: The aim of the nationwide prospective seroconversion (PROSECO) study was to investigate the dynamics of anti-SARS-CoV-2 IgG antibodies in the Czech population. Here we report the baseline prevalence from that study. Methods: The study included the first 30,054 persons who provided a blood sample between October 2020 and March 2021. Seroprevalence was compared between calendar periods, previous RT-PCR results and other factors. Results: The data show a large increase in seropositivity over time, from 28% in October/November 2020 to 43% in December 2020/January 2021 and 51% in February/March 2021. These trends were consistent with government data on cumulative viral antigenic prevalence in the population captured by PCR testing, although the seroprevalence rates established in this study were considerably higher. There were only minor differences in seropositivity between sexes, age groups and BMI categories, and results were similar between the test-providing laboratories. Seropositivity was substantially higher among persons with a history of symptoms (76% vs. 34%). At least one third of all seropositive participants had no history of symptoms, and 28% of participants with antibodies against SARS-CoV-2 never underwent PCR testing. Conclusions: Our data confirm the rapidly increasing prevalence in the Czech population during the rising pandemic wave prior to the beginning of vaccination. The difference between our results on seroprevalence and PCR testing suggests that antibody response provides a better marker of past infection than the routine testing programme.