
    APPLICATION OF THE ARIMA MODEL FOR PREDICTION OF MONTHLY DIVORCE RATE IN THE RELIGIOUS COURTS IN SOUTH SUMATRA

    This research discusses the prediction of divorce rates in the Religious Courts in South Sumatra. This is important because the divorce rate fluctuates, sometimes higher and sometimes lower, but consistently remains high. Reliable forecasts allow court officials and social scientists to prepare in advance: to develop effective strategies for overcoming marriage problems, allocate resources, and support families who need counseling, especially at the Religious Courts in South Sumatra, with the aim of reducing the divorce rate, since the purpose of marriage is not divorce. While much prior research has examined the causes of divorce, this research approaches the divorce rate from a prediction/forecasting perspective. The ARIMA (Autoregressive Integrated Moving Average) model, a method widely used in forecasting research with good results, is applied to make the predictions. The resulting ARIMA models are (1,0,2) and (2,0,2), with an error rate of only 0.48% as measured by MAPE. Keywords: predictions, numbers, divorce, ARIMA, MAPE
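    A minimal sketch of the kind of forecast described above, assuming statsmodels; the monthly divorce counts and the train/test split are placeholders, not the study's data, and only the ARIMA(1,0,2) order from the abstract is shown.

```python
# Sketch of an ARIMA(1,0,2) forecast evaluated with MAPE.
# The monthly counts below are hypothetical placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

series = pd.Series(
    [410, 395, 430, 420, 405, 445, 460, 450, 435, 470, 455, 440],
    index=pd.date_range("2022-01-01", periods=12, freq="MS"),
)
train, test = series[:-3], series[-3:]

# Fit one of the two orders reported in the abstract.
model = ARIMA(train, order=(1, 0, 2)).fit()
forecast = model.forecast(steps=len(test))

# Mean Absolute Percentage Error, the metric used in the study.
mape = np.mean(np.abs((test.values - forecast.values) / test.values)) * 100
print(f"MAPE: {mape:.2f}%")
```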

    Prototipe komunikasi multimedia sebuah laboratorium bahasa pada jaringan komputer dengan memanfaatkan GnomeMeeting = Prototype of multimedia communication for a language laboratory on a computer network using GnomeMeeting

    A language laboratory based on a tape recorder provides conversation and stand-alone learning facilities. The conversation facilities consist of conversation between the teacher and one student, conversation between the teacher and all students, and conversation between the teacher and two or three students; conversation among students is not permitted. The stand-alone learning facility, which consists of listening and pronouncing, enables students to practice on their own. In this research, a communication prototype of a language laboratory on networked personal computers (PCs) is developed. The prototype implements all communication facilities found in a traditional tape-recorder-based language laboratory, namely conversation between the teacher and one student, conversation between the teacher and all students, and conversation between the teacher and two or three students; conversation among students is not permitted. The prototype is based on GnomeMeeting and OpenMCU. We found that communication between the teacher's PC and a student's PC is real-time and bidirectional. Communication between the teacher's PC and all students' PCs is multicast and real-time. Communication between the teacher's PC and the PCs of two or three students selected by the teacher is implemented as an invitation from the teacher which must be answered by the students. Keywords: computer-based language laboratory, GnomeMeeting, OpenMCU
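    The actual prototype relies on GnomeMeeting and OpenMCU (H.323 conferencing); the sketch below only illustrates the general idea of the teacher-to-all-students multicast channel with a plain UDP multicast sender, and the group address, port, and payload are made up.

```python
# Illustrative only: a UDP multicast sender standing in for the
# teacher-to-all-students channel of the lab prototype.
import socket

MCAST_GROUP = "224.1.1.1"   # hypothetical multicast group
MCAST_PORT = 5007           # hypothetical port
TTL = 1                     # keep traffic on the local lab network

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, TTL)

# In the real system this payload would be an audio frame from the
# teacher's microphone; here it is a placeholder message.
sock.sendto(b"teacher audio frame", (MCAST_GROUP, MCAST_PORT))
sock.close()
```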

    Effect of Genetic Algorithm on Prediction of Heart Disease Stadium using Fuzzy Hierarchical Model

    The Fuzzy Hierarchical Model method can be used to predict the stage of heart disease. Its use on complex problems is still not optimal because it is difficult to find a fuzzy set that provides an optimal solution. The method can be improved by adjusting the membership function constraints with a Genetic Algorithm to obtain better predictions. Tests carried out on data from 282 heart disease patients produced a Root Mean Squared Error (RMSE) of 0.55 using the best Genetic Algorithm parameters: a population size of 140, 125 generations, and a crossover rate and mutation rate of 0.4 and 0.6, respectively. The RMSE produced by the Fuzzy Hierarchical Model before optimization with the Genetic Algorithm was 0.89. These results indicate an improvement in the predictive performance of the Fuzzy Hierarchical Model after optimization with the Genetic Algorithm.
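    A rough sketch of the optimization idea, not the paper's model: a simple genetic algorithm searches for membership-function parameters that minimize RMSE. The fuzzy model is reduced to a single triangular membership function, the patient data are random placeholders, and the GA operators are deliberately basic; only the population size, generation count, and crossover/mutation rates follow the abstract.

```python
# Toy GA tuning a triangular membership function to minimize RMSE.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=282)          # placeholder feature values
y_true = np.clip((X - 2) / 6, 0, 1)       # placeholder target "stage" scores

def triangular(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    left = np.clip((x - a) / max(b - a, 1e-9), 0, 1)
    right = np.clip((c - x) / max(c - b, 1e-9), 0, 1)
    return np.minimum(left, right)

def rmse(params):
    a, b, c = np.sort(params)
    return np.sqrt(np.mean((triangular(X, a, b, c) - y_true) ** 2))

# GA settings loosely following the abstract.
POP, GENS, CX, MUT = 140, 125, 0.4, 0.6
pop = rng.uniform(0, 10, size=(POP, 3))

for _ in range(GENS):
    fitness = np.array([rmse(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[: POP // 2]]        # truncation selection
    pairs = parents[rng.integers(0, len(parents), size=(POP, 2))]
    alpha = rng.random((POP, 1))
    pop = np.where(rng.random((POP, 1)) < CX,             # blend crossover
                   alpha * pairs[:, 0] + (1 - alpha) * pairs[:, 1],
                   pairs[:, 0])
    mutate = rng.random(pop.shape) < MUT / 3              # per-gene mutation
    pop = pop + mutate * rng.normal(0, 0.5, pop.shape)

best = min(pop, key=rmse)
print("best RMSE:", rmse(best))
```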

    SENTIMENT ANALYSIS ON TWITTER BY USING MAXIMUM ENTROPY AND SUPPORT VECTOR MACHINE METHOD

    With the advancement and growth of social media, a large amount of data is available for research in social mining. Twitter is one microblogging platform that can be used. Many companies use Twitter data to analyze customer satisfaction with product quality, while many users use social media to express their daily emotions. This can be developed into research that serves both to improve product quality and to analyze opinions on certain events, a field often called sentiment analysis or opinion mining. Previous research identified particularly useful features for sentiment analysis but still suffered from limited performance, and it used the Support Vector Machine as the classification method; other researchers have proposed classification methods considered more efficient, such as Maximum Entropy. This research therefore used two datasets: a general opinion dataset and an airline opinion dataset. For feature extraction, we employ four feature types: pragmatic, lexical-grams, pos-grams, and sentiment lexicon. For classification, we use both the Support Vector Machine and Maximum Entropy to find the best result. In the end, the best result is achieved by Maximum Entropy, with 85.8% accuracy on the general opinion data and 92.6% accuracy on the airline opinion data.
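    A minimal sketch of the comparison, assuming scikit-learn: LogisticRegression serves as the usual Maximum Entropy implementation and LinearSVC stands in for the Support Vector Machine. The tweets and labels are invented examples, and plain TF-IDF n-grams replace the paper's richer pragmatic, lexical-gram, pos-gram, and sentiment-lexicon features.

```python
# Compare a Maximum Entropy (logistic regression) and an SVM classifier
# on toy tweet data with TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

tweets = [
    "the flight was delayed again, terrible service",
    "great crew and a smooth landing, thank you",
    "lost my luggage, very disappointed",
    "loved the onboard entertainment, would fly again",
]
labels = [0, 1, 0, 1]  # 0 = negative, 1 = positive

features = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(tweets)

for name, clf in [("MaxEnt", LogisticRegression(max_iter=1000)),
                  ("SVM", LinearSVC())]:
    scores = cross_val_score(clf, features, labels, cv=2)
    print(name, scores.mean())
```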

    Text Similarity Detection Between Documents Using Case Based Reasoning Method with Cosine Similarity Measure (Case Study SIMNG LPPM Universitas Sriwijaya)

    LPPM Universitas Sriwijaya is an institution that coordinates academic research and community service within Universitas Sriwijaya. In carrying out this duty, LPPM assesses every proposal's originality, which will become impossible to do manually due to massive data growth. Thus, automation of the proposal originality check is needed. The Case Based Reasoning method is used in this research because it allows the system to reuse previously obtained information to find documents that are similar to a test document. The data are represented as a Vector Space Model, and Cosine Similarity is used to measure document-to-document similarity; each part of the tested documents is given a weight. Four term-weighting formulas from previous research are used, and the final results are compared. The process begins by extracting the data, separating the parts of each document, computing the similarity of the test document to the case base using the Cosine Similarity measure, filtering the results with a certain threshold, summarizing the calculation results, and finally preserving the results obtained so they can be reused in the next calculation. The results of this study indicate that text-similarity detection between documents was successfully carried out using the proposed method, with the best sensitivity and the fastest computation time achieved in configuration II.
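    A simplified sketch of the retrieval step, assuming scikit-learn: proposals are represented as TF-IDF vectors (a Vector Space Model) and compared with cosine similarity, documents above a threshold are flagged as similar, and the new document is retained in the case base. The proposal texts, the threshold, and the single weighting scheme are placeholders; the study's per-section weights and four term-weighting formulas are not reproduced.

```python
# Cosine-similarity retrieval over a small case base of proposal texts,
# followed by the Case Based Reasoning "retain" step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

case_base = [
    "pengembangan sistem informasi penelitian berbasis web",
    "analisis sentimen media sosial dengan machine learning",
    "klasifikasi citra daun menggunakan jaringan saraf tiruan",
]
test_document = "sistem informasi penelitian dan pengabdian berbasis web"

vectorizer = TfidfVectorizer()
case_vectors = vectorizer.fit_transform(case_base)
test_vector = vectorizer.transform([test_document])

similarities = cosine_similarity(test_vector, case_vectors)[0]
THRESHOLD = 0.5  # hypothetical cut-off

for doc, score in zip(case_base, similarities):
    if score >= THRESHOLD:
        print(f"similar ({score:.2f}): {doc}")

# Retain: the new document joins the case base for later comparisons.
case_base.append(test_document)
```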

    Automatic Clustering and Fuzzy Logical Relationship to Predict the Volume of Indonesia Natural Rubber Export

    Natural rubber is one of the pillars of Indonesia's export commodities. However, over the last few years, the export value of natural rubber has decreased due to an oversupply of the commodity in the global market. To help address this problem, the volume of Indonesia's natural rubber exports can be predicted; predicted values can also help the government compile market intelligence for natural rubber commodities periodically. In this study, the export volume of natural rubber is predicted using Automatic Clustering as the interval maker in a Fuzzy Time Series, a combination usually called Automatic Clustering and Fuzzy Logical Relationship (ACFLR). The data consist of 51 yearly observations from 1970 to 2020. The purpose of this study is to predict the volume of Indonesia's natural rubber exports and to compare the prediction results of ACFLR and Chen's Fuzzy Time Series. The results show a significant difference between the two methods: ACFLR obtained a MAPE of 0.5316%, while Chen's Fuzzy Time Series model obtained 8.009%. This shows that the ACFLR method performs better than the pure Fuzzy Time Series in predicting the volume of Indonesia's natural rubber exports.
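    A compact sketch of Chen-style fuzzy time series forecasting, the baseline in the comparison, shown only to illustrate the mechanics of fuzzification, fuzzy logical relationship groups, and defuzzification. The ACFLR method additionally builds its intervals with Automatic Clustering; equal-width intervals and made-up export volumes are used here instead.

```python
# Minimal Chen-style fuzzy time series forecast on placeholder data.
import numpy as np

data = np.array([2.2, 2.4, 2.3, 2.6, 2.8, 2.7, 2.9, 3.0, 2.95, 3.1])  # placeholder

n_intervals = 5
edges = np.linspace(data.min(), data.max(), n_intervals + 1)
mids = (edges[:-1] + edges[1:]) / 2

def fuzzify(x):
    """Index of the interval (fuzzy set A_i) containing x."""
    return min(np.searchsorted(edges, x, side="right") - 1, n_intervals - 1)

states = [fuzzify(x) for x in data]

# Fuzzy logical relationship groups: current state -> set of next states.
flrg = {}
for cur, nxt in zip(states[:-1], states[1:]):
    flrg.setdefault(cur, set()).add(nxt)

# Forecast each step as the average midpoint of the states its FLRG points to.
forecasts = [np.mean([mids[s] for s in flrg[state]]) for state in states[:-1]]
actual = data[1:]
mape = np.mean(np.abs((actual - forecasts) / actual)) * 100
print(f"MAPE: {mape:.2f}%")
```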

    Author Matching Classification with Anomaly Detection Approach for Bibliometric Repository Data

    Author name disambiguation (AND) is a complex problem in the process of identifying an author in a digital library (DL). The AND classification process is largely determined by the grouping process and the data pre-processing techniques applied before the classifier algorithm. In general, the pre-processing technique used is pairwise comparison with similarity measures for author matching. On a sufficiently large dataset, the pairwise technique used in this study combines each attribute in the AND dataset and defines a binary class for each author-matching combination, where pairs of different authors are given the value 0 and pairs of the same author are given the value 1. This technique produces highly imbalanced data, in which class 0 makes up 98.9% of the data compared to 1.1% for class 1. This leads to the observation that class 1 can be considered and processed as an anomaly within the whole dataset. Therefore, anomaly detection is the method chosen in this study, using the Isolation Forest algorithm as the classifier. The results obtained are very satisfying in terms of accuracy, reaching 99.5%.
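    A rough sketch of the anomaly-detection framing, assuming scikit-learn's IsolationForest: the pairwise author-matching feature vectors are random placeholders, with the rare "same author" pairs (about 1.1% of the data, as in the abstract) treated as the anomalous class.

```python
# Isolation Forest applied to heavily imbalanced pairwise author-matching data.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Placeholder similarity features (e.g., name, affiliation, co-author overlap).
n_diff, n_same = 9890, 110
X_diff = rng.normal(loc=0.2, scale=0.1, size=(n_diff, 3))   # different authors
X_same = rng.normal(loc=0.9, scale=0.05, size=(n_same, 3))  # matching authors
X = np.vstack([X_diff, X_same])
y = np.array([0] * n_diff + [1] * n_same)  # 1 = same author (the anomaly)

clf = IsolationForest(contamination=n_same / (n_diff + n_same), random_state=0)
pred = clf.fit_predict(X)               # +1 = inlier, -1 = outlier
pred = (pred == -1).astype(int)         # map outliers to the "same author" class

print("accuracy:", accuracy_score(y, pred))
```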

    Optimization of Deep Neural Networks with Particle Swarm Optimization Algorithm for Liver Disease Classification

    Liver disease affects more than one million new patients worldwide. The liver plays an important role in the body's metabolism and carries out several vital functions. Liver disease has symptoms including jaundice, abdominal pain, fatigue, nausea, vomiting, back pain, abdominal swelling, weight loss, and an enlarged spleen and gallbladder, and it is very difficult to detect because the liver continues to work as usual even when some of its functions have been damaged. In this study, liver disease is diagnosed through Deep Neural Network classification, with the weight values of the network optimized by the Particle Swarm Optimization algorithm. Optimizing the weight values with PSO gives a best accuracy of 92.97% on the Hepatitis dataset, with accuracies of 79.21%, 91.89% (Hepatitis), and 92.97% (Hepatocellular), which are higher than using a Deep Neural Network alone.
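    A compressed sketch of the idea: Particle Swarm Optimization searches the flattened weight vector of a small neural network instead of gradient descent. The patient features and labels are random placeholders, the network is far shallower than the paper's Deep Neural Network, and the PSO hyperparameters are generic assumptions.

```python
# Global-best PSO over the weights of a tiny one-hidden-layer network.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                 # placeholder patient features
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # placeholder diagnosis labels

H = 8                                           # hidden units
DIM = 10 * H + H + H + 1                        # total number of weights

def unpack(w):
    i = 0
    W1 = w[i:i + 10 * H].reshape(10, H); i += 10 * H
    b1 = w[i:i + H]; i += H
    W2 = w[i:i + H]; i += H
    return W1, b1, W2, w[i]

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))        # sigmoid output
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

N_PARTICLES, ITERS, W_INERTIA, C1, C2 = 30, 100, 0.7, 1.5, 1.5
pos = rng.normal(scale=0.5, size=(N_PARTICLES, DIM))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = W_INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

W1, b1, W2, b2 = unpack(gbest)
pred = (1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2))) > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```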

    Classification of Epilepsy Diagnostic Results through EEG Signals Using the Convolutional Neural Network Method

    The brain is one of the most important organs in the human body: as the central nervous system, it serves as the control center for intelligence, creativity, emotions, memories, and body movements. An epileptic seizure is a disorder of the brain's central nervous system with many symptoms, such as loss of awareness, unusual behavior, and confusion. In many cases these symptoms lead to injuries due to falls or biting one's tongue. Detecting a possible seizure beforehand is not an easy task; most seizures occur unexpectedly, and finding ways to detect a seizure before it happens has been a challenging task for many researchers. Analyzing EEG signals can provide information that can be used to diagnose normal brain activity or epilepsy. CNNs have demonstrated high performance in detecting and classifying epileptic seizures. This research uses a CNN to classify an epilepsy EEG signal dataset, applying the AlexNet and LeNet-5 architectures. The result of this research is that the AlexNet architecture provides better precision, recall, and F1-score values on the epilepsy EEG signal data than the LeNet-5 architecture.
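    An illustrative LeNet-5-style 1D CNN for single-channel EEG segments, written with Keras. The layer sizes, the assumed segment length of 178 samples, the two-class setup, and the random training data are assumptions for the sketch, not the paper's exact architectures or dataset.

```python
# Small 1D CNN (LeNet-5-flavored) classifying placeholder EEG segments.
import numpy as np
from tensorflow.keras import layers, models

SEGMENT_LEN = 178      # assumed samples per EEG segment
N_CLASSES = 2          # e.g., seizure vs. non-seizure

model = models.Sequential([
    layers.Input(shape=(SEGMENT_LEN, 1)),
    layers.Conv1D(6, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(16, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Flatten(),
    layers.Dense(120, activation="relu"),
    layers.Dense(84, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder EEG data standing in for the real dataset.
X = np.random.randn(64, SEGMENT_LEN, 1)
y = np.random.randint(0, N_CLASSES, size=64)
model.fit(X, y, epochs=1, batch_size=16, verbose=0)
```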