Development of the NETCUBE Trainer as a Learning Medium for Computer Network Addressing and Routing Practice
Wahyu Ardian Dwi Prasetyo. Development of the NETCUBE Trainer as a Learning Medium for Computer Network Addressing and Routing Practice. Skripsi (undergraduate thesis), Fakultas Keguruan dan Ilmu Pendidikan, Universitas Sebelas Maret Surakarta. November 2018. Addressing and routing in computer networking courses cover IP addressing methods and routing protocols. Beyond understanding these concepts, students are expected to be able to operate real network devices. The current use of simulator applications as media for learning IP addressing and routing leaves students with little experience operating real network equipment. The aim of this research was to develop a computer network trainer named NETCUBE, whose specifications are tailored to the needs of addressing and routing practice. The study used the ADDIE development research method, whose model comprises five stages: Analysis, Design, Development, Implementation, and Evaluation. The results indicate the feasibility of the NETCUBE trainer as an instructional medium for practice: material feasibility scored 86.6%, media feasibility 86%, and user satisfaction 85%. Keywords: Learning Media, Trainer, Addressing, Routing, Computer Networks
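The addressing exercises this trainer targets can be illustrated with a short sketch. The network and prefix values below are hypothetical, chosen only to show the kind of subnetting task practiced in such labs, using Python's standard `ipaddress` module:

```python
import ipaddress

# Example subnetting exercise of the kind practiced in addressing labs:
# split a /24 network into four /26 subnets and inspect each one.
network = ipaddress.ip_network("192.168.10.0/24")
subnets = list(network.subnets(new_prefix=26))

for subnet in subnets:
    # The usable host range excludes the network and broadcast addresses.
    hosts = list(subnet.hosts())
    print(subnet, "first host:", hosts[0], "last host:", hosts[-1])
```

A trainer like NETCUBE lets students verify these computed ranges against the behavior of real routers and switches rather than a simulator.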
A workload‑driven approach for view selection in large dimensional datasets
The information explosion the world has witnessed in the last two decades has forced businesses to adopt a data-driven culture in order to remain competitive. These data-driven businesses have access to countless sources of information and face the challenge of making sense of overwhelming amounts of data in an efficient and reliable manner, which implies the execution of read-intensive operations. In the context of this challenge, a framework for the dynamic read-optimization of large dimensional datasets has been designed, and on top of it a workload-driven mechanism for automatic materialized view selection and creation has been developed. This paper presents an extensive description of this mechanism, along with a proof-of-concept implementation and its corresponding performance evaluation. Results show that the proposed mechanism is able to derive a limited but comprehensive set of views leading to a drop in query latency ranging from 80% to 99.99%, at the expense of 13% of the disk space used by the base dataset. This way, the devised mechanism enables speeding up query execution by building materialized views that match the actual demand of query workloads.
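The core idea of workload-driven view selection can be sketched in a few lines. This is a deliberately simplified stand-in, not the paper's actual mechanism: it treats each query as the set of dimensions it groups by and materializes the most frequently recurring sets.

```python
from collections import Counter

def select_views(workload, max_views=3):
    """Pick the most frequently queried attribute sets as candidate
    materialized views. A simplified, hypothetical stand-in for a
    workload-driven selection mechanism."""
    counts = Counter(frozenset(q) for q in workload)
    return [set(attrs) for attrs, _ in counts.most_common(max_views)]

# A toy workload: each query is the set of dimensions it groups by.
workload = [
    {"region", "month"}, {"region", "month"}, {"region", "month"},
    {"product"}, {"product"},
    {"region", "product", "month"},
]
print(select_views(workload, max_views=2))
```

A real mechanism would also weigh view maintenance cost and disk budget (the paper reports a 13% space overhead), but the demand-matching principle is the same.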
Discover, recycle and reuse frequent patterns in association rule mining
Ph.D. (Doctor of Philosophy) thesis.
Rethinking the risk matrix
So far, risk has mostly been defined as the expected value of a loss, mathematically R = P · L (where P is the probability of an adverse event and L the loss incurred as a consequence of that event). The so-called risk matrix follows from this definition.
This definition of risk is justified in a long term “managerial” perspective, in which it is conceivable to distribute the effects of an adverse event on a large number of subjects or a large number of recurrences. In other words, this definition is mostly justified on frequentist terms. Moreover, according to this definition, in two extreme situations (high-probability/low-consequence and low-probability/high-consequence), the estimated risk is low. This logic is against the principles of sustainability and continuous improvement, which should impose instead both a continuous search for lower probabilities of adverse events (higher and higher reliability) and a continuous search for lower impact of adverse events (in accordance with the fail-safe principle).
In this work a different definition of risk is proposed, which stems from the idea of safeguard: (1 − Risk) = (1 − P)(1 − L). According to this definition, the risk level can be considered low only when both the probability of the adverse event and the loss are small.
Such a perspective, in which the calculation of the safeguard is privileged over the calculation of risk, would help avoid exposing society to catastrophic consequences, sometimes due to wrong or oversimplified use of probabilistic models. It can therefore be seen as the citizen's perspective on the definition of risk.
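The contrast between the two definitions can be made concrete with a small worked example. The probability and loss values below are hypothetical, normalized to [0, 1], and chosen to reproduce the two extreme scenarios discussed above:

```python
# Compare the classic risk measure R = P * L with the safeguard-based
# measure derived from (1 - Risk) = (1 - P)(1 - L), for the two
# extreme scenarios discussed above (P and L normalized to [0, 1]).

def classic_risk(p, loss):
    return p * loss

def safeguard_risk(p, loss):
    # Rearranged from (1 - Risk) = (1 - P)(1 - L).
    return 1 - (1 - p) * (1 - loss)

scenarios = {
    "high-probability / low-consequence": (0.9, 0.01),
    "low-probability / high-consequence": (0.01, 0.9),
}
for name, (p, loss) in scenarios.items():
    print(f"{name}: classic = {classic_risk(p, loss):.3f}, "
          f"safeguard-based = {safeguard_risk(p, loss):.3f}")
```

In both extreme scenarios the classic measure yields the same small value (0.009), whereas the safeguard-based measure flags both as high risk (0.901), matching the argument above.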
Netcube: A scalable tool for fast data mining and compression
We propose a novel method of computing and storing DataCubes. Our idea is to use Bayesian networks, which can generate approximate counts for any query combination of attribute values and "don't cares." A Bayesian network represents the underlying joint probability distribution of the data that were used to generate it. By means of such a network, the proposed method, NetCube, exploits correlations among attributes. Our preprocessing algorithm scales linearly with the size of the database and is thus scalable; it is also parallelizable with a straightforward parallel implementation. Moreover, we give an algorithm to estimate counts of arbitrary queries that is fast (constant in the database size). Experimental results show that NetCubes are fast to generate and use (a few minutes of preprocessing time per 100,000 records and less than a second of query time), achieve excellent compression (at least 1800:1 compression ratios on real data), and have low reconstruction error (less than 5% on average). Moreover, our method naturally allows for visualization and data mining at no extra cost.
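The central trick — answering count queries from a factorized probability model instead of a stored DataCube — can be sketched on a toy two-variable network. The variables, conditional probabilities, and record count below are entirely hypothetical; a real NetCube would learn the network structure and parameters from the data.

```python
# Toy illustration of the NetCube idea: instead of storing the full
# DataCube, keep a (tiny) Bayesian network and answer count queries
# as N * P(query). All numbers here are hypothetical.

N = 100_000
p_smoker = {True: 0.3, False: 0.7}            # P(Smoker)
p_disease_given_smoker = {                    # P(Disease | Smoker)
    True: {True: 0.2, False: 0.8},
    False: {True: 0.05, False: 0.95},
}

def estimated_count(smoker=None, disease=None):
    """Approximate count for a query; None means "don't care"."""
    total = 0.0
    for s in (True, False):
        if smoker is not None and s != smoker:
            continue
        for d in (True, False):
            if disease is not None and d != disease:
                continue
            # Chain rule: P(s, d) = P(s) * P(d | s).
            total += p_smoker[s] * p_disease_given_smoker[s][d]
    return round(N * total)

print(estimated_count(smoker=True, disease=True))  # joint query
print(estimated_count(disease=True))               # marginal over Smoker
```

Query time depends only on the size of the network, not on N, which is why count estimation is constant in the database size; storing a handful of conditional probability tables instead of every cell of the cube is also the source of the large compression ratios.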