64 research outputs found

    Foreword from Chair of EECSI 2016

    In the name of Allah, the Gracious, Most Merciful. It is a great pleasure to welcome our colleagues from all over the world to the 3rd International Conference on Electrical Engineering, Computer Science, and Informatics (EECSI 2016) in Semarang City, Central Java, Indonesia. EECSI 2016 provides a forum for researchers, academicians, professionals, and students from various engineering fields and cross-disciplinary areas working or interested in Electrical Engineering, Computer Science, and Informatics, especially: Power Engineering, Power Systems and Protection; Electric Power Transmission and Distribution; High Voltage Engineering and Insulation Technology; Renewable Energy Sources, Smart-grids Technologies & Applications; Energy: Policy, Security, Infrastructure, Growth and Economics; Power Electronics and Drives; Control, Automation, Instrumentation and Robotics; Information, Internet of Things and Internet Technologies; Electromagnetic Waves and Field; Circuits and Systems; Semiconductors and Applications; Microelectronics and Electronics Technologies; Electronics and Photonics; Wireless Telecommunications and Networking; Remote Sensing and Data Interpretation; Signal, Image, Video & Multimedia Processing; ICT for Electrical and Electronics Applications; Computer Network & Information Security; High Performance Computing and Communication; Databases, Data Mining and Software Engineering. ...

    ESTIMATION OF SOFTWARE SIZE USING FUNCTION POINT ANALYSIS ON INVENTORY INFORMATION SYSTEMS

    Good software development is indicated by measurements across every aspect of the development effort, including the software's environment and systems. This paper measures the complexity of web-based software developed with the waterfall method and an XML-based entity-relationship database. The inventory information system under study functions as one component integrated with other systems within a larger information system. Function point analysis (FPA) is used as the measuring instrument to assess the complexity of the software's components and functions. The measurements yield an overall FPA value of 180.83, indicating that the inventory information system has low complexity and is easy to use. The CFP and RCAF calculations likewise show that the application is simple enough to be used comfortably by a wide range of users.
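
    As a rough illustration of how such a value is produced, the sketch below applies the common function point formula FP = CFP × (0.65 + 0.01 × RCAF), where CFP is the weighted sum of counted function components and RCAF is the total of fourteen complexity factors rated 0–5. All counts, weights, and ratings here are invented examples, not the paper's data.

    # A minimal sketch of a standard function point calculation in Python;
    # the counts and the RCAF total below are illustrative stand-ins.

    # Average-complexity weights for the five function component types.
    WEIGHTS = {
        "external_inputs": 4,
        "external_outputs": 5,
        "external_inquiries": 4,
        "internal_files": 10,
        "external_interfaces": 7,
    }

    def crude_function_points(counts):
        """CFP: weighted sum of the counted function components."""
        return sum(WEIGHTS[kind] * n for kind, n in counts.items())

    def adjusted_function_points(cfp, rcaf):
        """FP = CFP * (0.65 + 0.01 * RCAF), with RCAF in 0..70."""
        return cfp * (0.65 + 0.01 * rcaf)

    # Hypothetical inventory-system counts and a 14-factor RCAF total.
    counts = {"external_inputs": 12, "external_outputs": 8,
              "external_inquiries": 6, "internal_files": 5,
              "external_interfaces": 2}
    print(adjusted_function_points(crude_function_points(counts), rcaf=35))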

    Indonesian Articles Recommender System Using n-gram and Tanimoto Coefficient

    The human need for technology and the availability of adequate infrastructure show that technology has become part of people's basic necessities. The growing number of journals and scientific papers makes choosing and sorting them increasingly demanding, even though many online journal providers and portals already exist. Research on search engines, plagiarism detection, and recommender systems has applied various methods to improve system performance; this paper calculates the similarity between one article and others using n-grams and the Tanimoto coefficient. Forty-three titles and abstracts were tested fifty times with randomly selected keywords, splitting each title and abstract sentence into n-character segments (n = 2), including spaces and punctuation, and then computing similarity against the query keywords used to test the system. Testing was done with several threshold values. Over the fifty test runs, a threshold of 0.30 produced accuracy = 0.86, precision = 0.37, and recall = 0.44.
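
    The core of the method can be sketched in a few lines of Python: split text into character bigrams (keeping spaces and punctuation, as described above) and score overlap with the Tanimoto coefficient. The sample texts are invented examples; only the threshold comment reflects the abstract.

    # A minimal sketch of character-bigram matching with the Tanimoto
    # coefficient; the query and document strings are invented examples.

    def bigrams(text, n=2):
        """Split text into character n-grams (n = 2), keeping spaces
        and punctuation."""
        return {text[i:i + n] for i in range(len(text) - n + 1)}

    def tanimoto(a, b):
        """|A ∩ B| / (|A| + |B| - |A ∩ B|) over two n-gram sets."""
        inter = len(a & b)
        return inter / (len(a) + len(b) - inter)

    query = bigrams("sistem rekomendasi artikel")
    doc = bigrams("rekomendasi artikel berbahasa indonesia")
    print(tanimoto(query, doc))  # recommend when the score passes the
                                 # chosen threshold, e.g. 0.30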

    Machine Learning Approaches on External Plagiarism Detection

    External plagiarism detection refers to comparing a suspicious document against other sources. External plagiarism models generally begin with candidate document retrieval, followed by further analysis to determine whether plagiarism has occurred. Most current external plagiarism detection uses similarity measurement approaches, reporting pairs of sentences or phrases considered similar. Similarity approaches are easy to understand because a formula directly compares terms or tokens between the two documents; machine learning techniques, by contrast, rely on pattern matching and cannot directly compare tokens or terms between two documents. This paper applies several machine learning techniques, namely k-nearest neighbors (KNN), support vector machine (SVM), and artificial neural network (ANN), to external plagiarism detection and compares the results with a cosine similarity measurement approach, using a frequency-normalized density representation as the pattern. The results show that every machine learning approach used in this experiment performs better in terms of accuracy, precision, and recall.
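
    A sketch of this kind of comparison with scikit-learn is shown below; the feature matrix and labels are random stand-ins for the paper's frequency-normalized density patterns, so only the shape of the experiment is illustrated, not its results.

    # A minimal sketch comparing KNN, SVM, and ANN classifiers on
    # invented "density" features; the labels are synthetic toy data.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, precision_score, recall_score

    rng = np.random.default_rng(0)
    X = rng.random((200, 8))                # hypothetical features per passage pair
    y = (X.mean(axis=1) > 0.5).astype(int)  # 1 = plagiarized, 0 = original (toy labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "SVM": SVC(kernel="rbf"),
        "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
    }
    for name, model in models.items():
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(name, accuracy_score(y_te, pred),
              precision_score(y_te, pred), recall_score(y_te, pred))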

    Sentiment Analysis of Indonesian Figure using Support Vector Machine

    In the political year 2018, figures began circulating widely as candidates for the 2019 Indonesian presidential election. Recognition of these figures now happens largely through social media, which generates opinions from social media users. The opinions that appear carry not only positive and negative polarity but also subjective and objective sentences. Sentiment analysis was performed using a machine learning algorithm, the Support Vector Machine. The analysis performed best with a linear kernel, giving polarity F-measures of 68%, 68%, and 63% and subjectivity F-measures of 73%, 77%, and 75% for the figures Anies Baswedan, Joko Widodo, and Prabowo Subianto respectively.
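
    A linear-kernel SVM pipeline of the kind described can be sketched with scikit-learn as below; the example sentences and labels are invented, not the paper's dataset, and real use would need a labeled corpus of opinions about each figure.

    # A minimal sketch of a linear-kernel SVM sentiment classifier over
    # TF-IDF features; the training texts and labels are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    texts = [
        "kinerja beliau sangat baik",      # positive opinion
        "program kerjanya mengecewakan",   # negative opinion
        "kampanye berjalan lancar",        # positive opinion
        "janji politik tidak ditepati",    # negative opinion
    ]
    labels = ["positive", "negative", "positive", "negative"]

    clf = make_pipeline(TfidfVectorizer(), LinearSVC())
    clf.fit(texts, labels)
    print(clf.predict(["kinerjanya sangat mengecewakan"]))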

    Fortifying Big Data infrastructures to Face Security and Privacy Issues

    The amount of data available on the internet has exploded in recent years. One of the most challenging issues is how to manage such large volumes of data effectively and to identify new ways of analyzing them to unlock information. Organizations must find a way to manage their data in accordance with all relevant privacy regulations without making the data inaccessible and unusable. The Cloud Security Alliance (CSA) has identified the top 10 challenges as follows: 1) secure computations in distributed programming frameworks, 2) security best practices for non-relational data stores, 3) secure data storage and transaction logs, 4) end-point input validation/filtering, 5) real-time security monitoring, 6) scalable and composable privacy-preserving data mining and analytics, 7) cryptographically enforced data-centric security, 8) granular access control, 9) granular audits, and 10) data provenance. The challenges themselves can be organized into four distinct aspects of the Big Data ecosystem.

    Final Project Plagiarism Detection Using the Jaccard Similarity Method

    Before the Final Project defense, a requirement for completing the Final Project is writing a Final Project Report; in preparing it, students may consult reports by earlier students that have already been approved by a supervisor, or look for other references on the internet, whether journals or papers. To prevent plagiarism in a Final Project Report, students may not copy verbatim the words or sentences they use as references. For that reason a system was built, "Final Project Plagiarism Detection Using the Jaccard Similarity Method for the Informatics Engineering Study Program of Universitas Islam Sultan Agung". Detection in the system passes through a preprocessing stage consisting of case folding, in which every letter of a report sentence is converted from upper case to lower case; filtering, in which spaces and special characters other than letters are removed from the sentence; stemming, in which root words are taken from the filtering results; and a stopword stage, in which the stemming results are split into tokens. The Jaccard Similarity method then counts the tokens of the test document and the original (comparison) document; the result is a similarity value for the two documents, displayed as the percentage similarity of the Final Project Report.
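
    The Jaccard step itself is compact; the sketch below uses simplified preprocessing (case folding, filtering, and tokenization, with stemming omitted), and the two document excerpts are invented examples.

    # A minimal sketch of Jaccard similarity between a test document and
    # a comparison document; preprocessing is simplified for illustration.
    import re

    def preprocess(text):
        """Case folding, filtering of non-letters, and tokenization."""
        text = text.lower()                   # case folding
        text = re.sub(r"[^a-z\s]", "", text)  # keep letters and spaces only
        return set(text.split())              # token set (stemming omitted)

    def jaccard(a, b):
        """|A ∩ B| / |A ∪ B| over two token sets."""
        return len(a & b) / len(a | b)

    test_doc = preprocess("Sistem deteksi plagiat laporan Tugas Akhir")
    source_doc = preprocess("Deteksi plagiat pada laporan Tugas Akhir mahasiswa")
    print(f"{jaccard(test_doc, source_doc):.0%} similarity")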

    Recommendation Systems for Online Learning Materials Using Cosine Similarity and Simple Additive Weighting

    This study focuses on searching teaching materials in order to retrieve relevant teaching material that can then be recommended as course material to students, using the Cosine Similarity method for matching and the Simple Additive Weighting (SAW) method for weighting. With SAW, three criteria and a weight value for each attribute are determined, followed by a ranking process, so that in the end the search results can be displayed in order of similarity and relevance, and the most relevant items selected and used as recommendations in the students' e-learning system. In the study's results, the Cosine Similarity and SAW methods provided fairly good, effective recommendations, with an average precision of 0.7867 and a recall of 0.766, so the method is appropriate for deployment in campus e-learning.
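
    The combination can be sketched as follows: cosine similarity against the query supplies one criterion, and SAW combines it with two other criteria under fixed weights before ranking. The term-frequency vectors, the extra criteria (here a rating and a recency score), and the weights are all invented examples, not the paper's configuration.

    # A minimal sketch of cosine similarity feeding a Simple Additive
    # Weighting (SAW) ranking; all vectors and weights are illustrative.
    import math

    def cosine(u, v):
        """Cosine similarity between two term-frequency vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    query = [1, 2, 0, 1]  # hypothetical TF vector for the search query
    materials = {         # hypothetical TF vectors plus two extra criteria
        "material_a": ([1, 1, 0, 1], 0.90, 0.60),  # (tf_vector, rating, recency)
        "material_b": ([0, 2, 1, 1], 0.70, 0.95),
        "material_c": ([1, 2, 0, 0], 0.50, 0.40),
    }
    weights = [0.5, 0.3, 0.2]  # SAW weights: similarity, rating, recency

    def saw_score(tf, rating, recency):
        criteria = [cosine(query, tf), rating, recency]
        return sum(w * c for w, c in zip(weights, criteria))

    ranked = sorted(materials, key=lambda m: saw_score(*materials[m]), reverse=True)
    print(ranked[0])  # top recommendation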

    Obtaining Reference's Topic Congruity in Indonesian Publications using Machine Learning Approach

    There are several criteria for how an article is judged good enough for publication. Some depend on aspects such as formatting and clarity, but it mainly depends on how the article's content is constructed. The consistency of the topic an article is written on shows how the authors construct the main idea in the article's content. One indication of this consistency is congruity between the article's topic and the topics of the literature or references cited in the document and listed in the bibliography. This work attempts to automate topic detection on an article's references and then obtain their congruity with the topic of the article's title, through metadata extraction and text classification. This is done by extracting metadata from an article file with GROBID to obtain all identifiable reference titles, then classifying each title's topic using a supervised classification model. We found that several refinements to the overall approach should be considered in the next step of this work.
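
    The extraction step can be sketched as below, assuming a GROBID service running locally on its default port; the input file name and the topic_model classifier are hypothetical placeholders, and the TEI parsing keeps only titles marked as analytic-level (cited-work) titles.

    # A minimal sketch: send a PDF to a local GROBID service, parse the
    # returned TEI XML for cited-work titles, then hand each title to a
    # pre-trained topic classifier (hypothetical here).
    import requests
    import xml.etree.ElementTree as ET

    GROBID_URL = "http://localhost:8070/api/processReferences"
    TEI_NS = "{http://www.tei-c.org/ns/1.0}"

    with open("article.pdf", "rb") as pdf:  # hypothetical input file
        tei_xml = requests.post(GROBID_URL, files={"input": pdf}).text

    root = ET.fromstring(tei_xml)
    titles = [t.text for t in root.iter(TEI_NS + "title")
              if t.get("level") == "a" and t.text]

    # topic_model would be any fitted text classifier (hypothetical):
    # for title in titles:
    #     print(title, "->", topic_model.predict([title])[0])
    print(titles)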
    • …