21 research outputs found

    Sentinel Mining

    Cloud computing application model for online recommendation through fuzzy logic system

    Cloud computing can offer different remote services over the Internet. We propose a cloud-based online application model for health care systems. It provides a higher quality of service remotely while decreasing costs for chronic patients. The model is composed of two sub-models, each using a different service: Software as a Service (SaaS), which is user-facing, and Platform as a Service (PaaS), which is engineer-facing. Doctors classify chronic diseases into stages according to their symptoms. Because clinical data is non-numeric, we use a fuzzy logic system in the PaaS sub-model to design this online application model. Based on this classification, patients receive appropriate recommendations through smart devices (the SaaS sub-model).
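    The abstract gives no implementation details, so the following is a minimal TypeScript sketch of the staging idea it describes: fuzzy membership functions grade a normalized symptom score against disease stages, and the strongest grade selects a recommendation. The names, fuzzy-set boundaries, and advice strings are illustrative assumptions, not the authors' model.

    ```typescript
    // Minimal sketch of fuzzy staging; illustrative only, not the paper's model.
    // Triangular membership: the grade rises from a to 1 at b, then falls to c.
    function triangular(x: number, a: number, b: number, c: number): number {
      if (x <= a || x >= c) return 0;
      return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    // Hypothetical fuzzy sets over a symptom score normalized to [0, 10].
    interface Stage { name: string; grade: (x: number) => number; advice: string }
    const stages: Stage[] = [
      { name: "mild",     grade: x => triangular(x, -1, 0, 4),  advice: "routine monitoring" },
      { name: "moderate", grade: x => triangular(x, 2, 5, 8),   advice: "adjust treatment plan" },
      { name: "severe",   grade: x => triangular(x, 6, 10, 11), advice: "contact a physician" },
    ];

    // Pick the stage with the strongest membership grade; the SaaS side would
    // push the attached recommendation to the patient's smart device.
    function recommend(score: number): Stage {
      return stages.reduce((a, b) => (a.grade(score) >= b.grade(score) ? a : b));
    }

    console.log(recommend(6.5).advice); // "adjust treatment plan"
    ```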

    Providing Insight into the Performance of Distributed Applications Through Low-Level Metrics

    The field of high-performance computing (HPC) has always dealt with the bleeding edge of computational hardware and software to achieve the maximum possible performance for a wide variety of workloads. When dealing with brand-new technologies, it can be difficult to understand how they work and why they work the way they do. One of the more prevalent approaches to providing insight into modern hardware and software is to provide tools that allow developers to access low-level metrics about their performance. The modern HPC ecosystem supports a wide array of technologies, but this work focuses on two particularly influential ones: the Message Passing Interface (MPI) and Graphics Processing Units (GPUs).

    For many years, MPI has been the dominant programming paradigm in HPC. Indeed, over 90% of applications that are part of the U.S. Exascale Computing Project plan to use MPI in some fashion. The MPI Standard provides programmers with a wide variety of methods to communicate between processes, along with several other capabilities. The high-level MPI Profiling Interface has been the primary method for profiling MPI applications since the inception of the MPI Standard; more recently, the low-level MPI Tool Information Interface was introduced.

    Accelerators like GPUs have been increasingly adopted as the primary computational workhorse of modern supercomputers. GPUs provide more parallelism than traditional CPUs through a hierarchical grid of lightweight processing cores. NVIDIA provides profiling tools for its GPUs that give access to low-level hardware metrics.

    In this work, I propose research applying low-level metrics to both the MPI and GPU paradigms, in the form of an implementation of low-level metrics for MPI and a new method for analyzing GPU load imbalance with a synthetic efficiency metric. I introduce Software-based Performance Counters (SPCs) to expose internal metrics of the Open MPI implementation, along with a new interface for exposing these counters to users and tool developers. I also analyze a modified load-imbalance formula for GPU-based applications that uses low-level hardware metrics provided through nvprof in a hierarchical approach to take the internal load imbalance of the GPU into account.
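    The abstract names a modified load-imbalance formula but does not state it, so the TypeScript sketch below is an assumption: a conventional max-over-mean imbalance measure, extended hierarchically by weighting each process's time with the internal imbalance of its GPU (e.g. per-SM utilization read from a profiler such as nvprof). The function names and the exact weighting are hypothetical, not the dissertation's metric.

    ```typescript
    // A common load-imbalance metric: max(load) / mean(load).
    // 1.0 means perfect balance; larger values mean one worker dominates.
    function imbalance(loads: number[]): number {
      const mean = loads.reduce((sum, x) => sum + x, 0) / loads.length;
      return Math.max(...loads) / mean;
    }

    // Hypothetical hierarchical variant: weight each process's time by the
    // imbalance across its GPU's streaming multiprocessors, so a process that
    // is both slow and internally unbalanced stands out in the outer measure.
    function hierarchicalImbalance(
      perRankTime: number[],     // e.g. kernel time per MPI rank
      perRankSmLoad: number[][], // e.g. per-SM utilization for each rank's GPU
    ): number {
      const effective = perRankTime.map((t, i) => t * imbalance(perRankSmLoad[i]));
      return imbalance(effective);
    }

    console.log(imbalance([10, 10, 14])); // ~1.24: the slowest rank is ~24% over the mean
    ```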

    Libro de Actas JCC&BD 2018 : VI Jornadas de Cloud Computing & Big Data

    This volume collects the papers presented at the VI Jornadas de Cloud Computing & Big Data (JCC&BD), held from June 25 to 29, 2018, at the Facultad de Informática of the Universidad Nacional de La Plata.

    Scalable Hash Tables

    The term scalability, with regard to this dissertation, has two meanings: taking the best possible advantage of the provided resources (both computational and memory resources), and scaling data structures in the literal sense, i.e., growing their capacity by "rescaling" the table.

    Scaling well to computational resources implies constructing the fastest, best-performing algorithms and data structures. On today's many-core machines, the best performance is immediately associated with parallelism. Since CPU frequencies stopped growing about 10-15 years ago, parallelism is the only way to take advantage of growing computational resources. But for data structures in general, and hash tables in particular, performance is not only linked to faster computations: most execution time is actually spent waiting for memory. Thus, optimizing data structures to reduce the number of memory accesses, or to take better advantage of the memory hierarchy, especially through predictable access patterns and prefetching, is just as important.

    In terms of scaling the size of hash tables, we have identified three domains where scaling hash-based data structures has previously been lacking: space-efficient growing, concurrent hash tables, and approximate membership query data structures (AMQ filters). Throughout this dissertation, we describe the problems in these areas and develop efficient solutions. We highlight three libraries that we developed over the course of this dissertation, each containing multiple implementations that our testing has shown to be among the best in their respective domains. Together they offer a comprehensive toolbox that can be used to solve many kinds of hashing-related problems or to develop individual solutions for further ones.

    DySECT is a library for space-efficient hash tables, specifically growing space-efficient hash tables that scale with their input size. It contains the namesake DySECT data structure in addition to a number of probing- and cuckoo-based implementations. Growt is a library for highly efficient concurrent hash tables. It contains a very fast base table and a number of extensions that adapt this table to any purpose; all extensions can be combined to create a variety of interfaces. In our extensive experimental evaluation, each adaptation proved to be among the best hash tables for its specific purpose. Lpqfilter is a library for concurrent approximate membership query (AMQ) data structures. It contains original data structures, like the linear-probing quotient filter, as well as novel approaches to dynamically sized quotient filters.
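    As a toy illustration of the "growing by rescaling" idea described above (far simpler than, and not taken from, DySECT, growt, or lpqfilter), here is a minimal TypeScript open-addressing hash table with linear probing that doubles its capacity and reinserts every element once the load factor passes a threshold.

    ```typescript
    // Toy linear-probing hash table that grows by "rescaling": allocate a
    // larger table and reinsert all elements. Illustrative only.
    class ProbingTable<V> {
      private keys: (string | undefined)[];
      private vals: (V | undefined)[];
      private size = 0;

      constructor(private capacity = 8) {
        this.keys = new Array(capacity);
        this.vals = new Array(capacity);
      }

      // FNV-1a string hash, reduced to a slot index.
      private hash(key: string): number {
        let h = 2166136261;
        for (let i = 0; i < key.length; i++) {
          h = Math.imul(h ^ key.charCodeAt(i), 16777619) >>> 0;
        }
        return h % this.capacity;
      }

      set(key: string, val: V): void {
        if (this.size + 1 > this.capacity * 0.7) this.grow();
        let i = this.hash(key);
        while (this.keys[i] !== undefined && this.keys[i] !== key) {
          i = (i + 1) % this.capacity; // linear probing: scan forward
        }
        if (this.keys[i] === undefined) this.size++;
        this.keys[i] = key;
        this.vals[i] = val;
      }

      get(key: string): V | undefined {
        let i = this.hash(key);
        while (this.keys[i] !== undefined) {
          if (this.keys[i] === key) return this.vals[i];
          i = (i + 1) % this.capacity;
        }
        return undefined;
      }

      // "Rescaling": double the capacity and reinsert every stored element.
      private grow(): void {
        const oldKeys = this.keys, oldVals = this.vals;
        this.capacity *= 2;
        this.keys = new Array(this.capacity);
        this.vals = new Array(this.capacity);
        this.size = 0;
        for (let i = 0; i < oldKeys.length; i++) {
          const k = oldKeys[i];
          if (k !== undefined) this.set(k, oldVals[i] as V);
        }
      }
    }
    ```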

    Electronic instructional materials and course requirements "Computer science" for specialty: 1-53 01 01 «Automation of technological processes and production»

    The purpose of the electronic instructional materials and course requirements for the discipline «Computer science» (EIMCR) is to develop theoretical, systemic, and practical knowledge in different fields of computer science. The EIMCR is structured into the following sections: theoretical, practical, knowledge control, and auxiliary. The theoretical section presents lecture material in accordance with the main sections and topics of the syllabus. The practical section contains materials for conducting practical classes aimed at developing modern computational thinking and basic skills in computing and decision-making in the fundamentals of computer theory and many other fields of computer science. The knowledge control section contains guidelines for the control work, which is aimed at developing the skills of independent work on the course, of selecting, analyzing, and writing out the necessary material, and of correctly completing the tasks, as well as a list of questions for the course credit. The auxiliary section contains the following elements of the syllabus: an explanatory note, a thematic lecture plan, tables distributing classroom hours by topic, and an informational and methodological part. The EIMCR contains active links for quickly finding the necessary material.

    Liquid stream processing on the web: a JavaScript framework

    The Web is rapidly becoming a mature platform for hosting distributed applications. Pervasive computing applications running on the Web are now common in the era of the Web of Things, which has made it increasingly simple to integrate sensors and microcontrollers into our everyday life. Such devices are of great interest to Makers with basic Web development skills. With them, Makers are able to build small smart stream processing applications with sensors and actuators without spending a fortune and without knowing much about the technologies they use. Thanks to ongoing Web technology trends enabling real-time peer-to-peer communication between Web-enabled devices, Web browsers, and server-side JavaScript runtimes, developers are able to implement pervasive Web applications using a single programming language. These can take advantage of direct and continuous communication channels, going beyond what was possible in the early stages of the Web, to push data in real time.

    Despite these recent advances, building stream processing applications on the Web of Things remains a challenging task. On the one hand, Web-enabled devices of different natures still have to communicate with different protocols. On the other hand, dealing with a dynamic, heterogeneous, and volatile environment like the Web requires developers to face issues like disconnections, unpredictable workload fluctuations, and device overload.

    To help developers deal with such issues, in this dissertation we present the Web Liquid Streams (WLS) framework, a novel streaming framework for JavaScript. Developers implement streaming operators written in JavaScript and may interactively and dynamically define a streaming topology. The framework takes care of deploying the user-defined operators on the available devices and connecting them using the appropriate data channels, removing from developers the burden of dealing with different deployment environments. Changes in the semantics of the application and in its execution environment may be applied at runtime without stopping the stream flow. Like a liquid adapting its shape to that of its container, the Web Liquid Streams framework makes streaming topologies flow across multiple heterogeneous devices, enabling dynamic operator migration without disrupting the data flow. By constantly monitoring the execution of the topology with a hierarchical controller infrastructure, WLS takes care of parallelising operator execution across multiple devices in case of bottlenecks, and of recovering the execution of the streaming topology if one or more devices disconnect, by restarting lost operators on other available devices.
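    WLS itself is a JavaScript framework and its API is not shown in the abstract, so the TypeScript sketch below is a hypothetical, much-reduced illustration of the operator/topology idea only: each operator maps one input message to zero or more outputs, and the topology wires operators into a pipeline that a runtime could relocate across devices.

    ```typescript
    // Hypothetical sketch of streaming operators and a topology; not the WLS API.
    // An operator turns one input message into zero or more output messages.
    type Operator<I, O> = (msg: I) => O[];

    class Topology {
      private stages: Operator<any, any>[] = [];

      // Append an operator stage; in a liquid-streaming setting the runtime
      // could later migrate this stage to another device mid-stream.
      pipe<I, O>(op: Operator<I, O>): this {
        this.stages.push(op);
        return this;
      }

      // Push one message through every stage in order.
      emit(msg: unknown): unknown[] {
        return this.stages.reduce<unknown[]>((msgs, op) => msgs.flatMap(op), [msg]);
      }
    }

    // Example: a sensor reading flows through a filter and then a formatter.
    const topo = new Topology()
      .pipe((r: { celsius: number }) => (r.celsius > 30 ? [r] : [])) // drop cool readings
      .pipe((r: { celsius: number }) => [`ALERT: ${r.celsius} °C`]); // format an alert

    console.log(topo.emit({ celsius: 42 })); // ["ALERT: 42 °C"]
    ```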

    1991 OURE report, including the 1st Annual UMR Undergraduate Research Symposium -- Entire Proceedings

    The Opportunities for Undergraduate Research Experiences (OURE) program began in 1990. The aims were to enrich the learning process and make it more active, encourage interaction between students and faculty members, raise the level of research on the campus, help recruit superior students to the graduate program, and support the notion that teaching and research are compatible and mutually reinforcing. Chancellor Jischke made available an annual budget of $50,000 to support the program. As the papers herein attest, the OURE program is achieving its goals: UMR graduates have performed research on an enormous variety of topics, have worked closely with faculty members, and have experienced deeply both the pleasures and frustrations of research. Several of the undergraduates whose papers are included are now graduate students at UMR or elsewhere. Others, who have not yet graduated, are eager to submit proposals to the next OURE round. I am sure all involved join me in expressing gratitude to Chancellor Jischke for inaugurating the program. The first section of this volume is made up of papers presented at the first annual UMR Undergraduate Research Symposium, held in April 1991. Joining the UMR undergraduates in the Symposium were students from other colleges and universities who had participated in an NSF-sponsored summer program of research on parallel processing conducted by the UMR Computer Science Department.