9 research outputs found

    CONSUMER PREFERENCES BY AGE GROUP FOR WAFER FLAVOR, COLOR, AND SHAPE VARIATIONS

    Get PDF
    Wafers are dry, wheat-flour-based food products with large pores, a crisp texture, and a hollow cross-section when broken. Two wafer shapes are commonly found on the market: square and rectangular. Wafers are currently produced in many flavor variants, including chocolate, vanilla, strawberry, and more. To make a product more appealing than competing products, producers need to differentiate it and give it an innovative edge. Producers can add shape variations to existing products and broaden their market segments by serving consumers with diverse tastes. Wafer attributes that can be developed include flavor variant, cream color, sheet color, and wafer shape. The range of flavors, cream colors, and sheet colors can still be expanded, giving consumers more freedom to choose the flavors and colors they want; there is considerable potential for such innovation in wafer products. This study aims to determine consumer preferences for wafer flavor, color, brand, and shape by age group, and to relate those preferences to the products sold on the market. The study uses a survey approach consisting of data collection, data cleaning, clustering, weighting, a market survey, and drawing conclusions. Data were collected with a questionnaire distributed to people of productive (working) age. The variables in this study are recency, frequency, and monetary. The survey was conducted in two stages: a preliminary survey and a follow-up survey. In the preliminary survey, the questionnaire was tested for validity using Pearson correlation and for reliability using Cronbach's alpha; the follow-up survey was then carried out and all collected data were clustered. For brand preference, the Tango wafer brand received the highest score from respondents, while the Oreo wafer brand received the lowest. Only respondents in the 15 to 24 age group liked both square and rectangular wafers; respondents in the other age groups preferred rectangular wafers only. Chocolate, cheese, vanilla, and strawberry were the wafer flavors most liked by all age groups. Banana, durian, orange, matcha, red velvet, pandan, mango, taro, lemon, and apple were new flavors that respondents wanted. Based on the market survey, most of the brand, wafer-shape, and flavor data collected from several sellers already match respondents' preferences
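
    The methodology combines questionnaire validation (Pearson correlation, Cronbach's alpha) with clustering of recency, frequency, and monetary (RFM) variables. As a hedged illustration only (not the study's code), the sketch below computes Cronbach's alpha for a small item matrix and clusters toy RFM data with k-means; the data values, the column meanings, and the cluster count are illustrative assumptions.

    ```python
    # Hedged sketch (not the study's code): questionnaire reliability via Cronbach's
    # alpha and k-means clustering of recency/frequency/monetary (RFM) data.
    # All values, column meanings, and the cluster count are illustrative assumptions.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x questionnaire-items matrix of Likert scores."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)       # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    likert = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]], dtype=float)
    print("alpha:", round(cronbach_alpha(likert), 3))

    # Toy RFM rows: recency (days since last purchase), frequency, monetary (spend)
    rfm = np.array([[3, 8, 50], [30, 1, 10], [7, 5, 35], [45, 2, 12], [2, 9, 60]], dtype=float)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        StandardScaler().fit_transform(rfm))
    print("cluster labels:", labels)
    ```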

    Assessing and improving data quality with business impact in a data migration process between ERPs

    Get PDF
    Master's dissertation in Information Systems Management. Despite all the published literature on data quality improvement, data quality problems continue to affect companies' operations and their decision-support systems. Recognizing this, RetailPC, an IT equipment retail company, agreed to host the present research, whose goal was to assess and improve the data quality of its Customer entity during a data migration between two ERPs (Primavera Professional to SAP Business One). The Action Research methodology was used, as it allows the researcher to take an interventionist role in resolving the data quality problem. In this work, data quality was assessed and improved during the migration, the data collection processes were changed, and diagnostic tools were made available for future Action Research cycles. In the end, data quality was found to have improved significantly. It was possible to correct all errors detected in the attributes ShipType (shipping mode), PymCode (payment method), Currency, and LangCode (language of documents sent to the customer); 98.53% of the errors detected for corporate taxpayers in the LicTradNum attribute (tax ID / NIF); and 56.67% of the addresses with errors in the ZipCode attribute (postal code). In addition, 99.65% of the tuples holding values in the IntrntSite attribute, which was being used for a purpose other than the one intended by the ERP, had those values migrated to the E_Mail attribute for later processing, and 323 duplicated entity tuples were detected and removed
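
    The cleaning work targets, among others, the LicTradNum (NIF) and ZipCode attributes of the Customer entity. As a hedged illustration (not the thesis's code), the sketch below applies two public, rule-based checks such a step could use: the Portuguese NIF check-digit rule and the NNNN-NNN postal-code format; the function names are illustrative.

    ```python
    # Hedged sketch: generic validation rules of the kind a Customer-entity cleaning
    # step might apply. Not the thesis's code; function names are illustrative.
    import re

    def valid_nif(nif: str) -> bool:
        """Check the 9-digit Portuguese NIF (tax ID) check digit (mod-11 rule)."""
        if not re.fullmatch(r"\d{9}", nif):
            return False
        digits = [int(c) for c in nif]
        total = sum(d * w for d, w in zip(digits[:8], range(9, 1, -1)))
        remainder = total % 11
        check = 0 if remainder < 2 else 11 - remainder
        return digits[8] == check

    def valid_pt_zipcode(code: str) -> bool:
        """Portuguese postal codes use the NNNN-NNN format."""
        return re.fullmatch(r"\d{4}-\d{3}", code) is not None

    print(valid_nif("501964843"), valid_pt_zipcode("1049-001"))
    ```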

    "CHARACTERIZATION OF SLAUGHTERED AND NON-SLAUGHTERED GOAT MEAT AT LOW FREQUENCIES"

    Get PDF
    Over the past two decades, the electrical stimulation of meat has shown high potential for use in the quality control of meat tissues. Dielectric spectroscopy is the technique most used to measure the electrical properties of tissues: an open-ended coaxial cable or two parallel plates connected to a network analyzer, impedance analyzer, or LCZ meter have been used to measure the dielectric properties of meat for different purposes. The purpose of this research is to construct a capacitive device capable of differentiating slaughtered and non-slaughtered goat meat by determining the dielectric properties of goat meat at various frequencies and storage times. The detector cell has two circular platinum plates assembled on a micrometer barrel and encased in a Perspex box to form the capacitor; the test rig was validated to ensure it was working correctly. Two goats were killed in the same environment: one was slaughtered properly (Islamic method) and the other was killed by garrote. Measurements were made on the hindlimb muscles, using samples 2 cm in diameter and 5 mm thick. The slaughtered and non-slaughtered meat samples were placed separately between the capacitor plates, and the capacitance and dissipation factor were measured with an LCR meter connected across the capacitor. The experiment was repeated at various frequencies (from 100 Hz to 2 kHz) and storage times (from 1 to 10 days after slaughter). The Maxwell Garnett mixing rule was applied to obtain a theoretical value of the effective permittivity from goat muscle and blood permittivities. The results show that the device is able to differentiate slaughtered and non-slaughtered goat meat: at all applied frequencies, the relative permittivity of the non-slaughtered meat was clearly higher than that of the slaughtered meat, in agreement with the simulation results, while the dissipation factor of the non-slaughtered meat was lower than that of the slaughtered meat
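
    The Maxwell Garnett mixing rule mentioned above gives the effective permittivity of a two-phase mixture, here blood inclusions in a muscle host. The sketch below implements the standard formula for spherical inclusions; the permittivity values and the blood volume fraction are illustrative placeholders, not measurements from the thesis.

    ```python
    # Hedged sketch of the standard Maxwell Garnett mixing rule for spherical
    # inclusions (here: blood in muscle). Permittivity values and the volume
    # fraction below are illustrative, not measurements from the thesis.
    def maxwell_garnett(eps_host: complex, eps_incl: complex, f: float) -> complex:
        """Effective permittivity of inclusions (volume fraction f) in a host medium."""
        num = 3 * f * eps_host * (eps_incl - eps_host)
        den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
        return eps_host + num / den

    eps_muscle = 1e5   # assumed low-frequency relative permittivity of muscle
    eps_blood = 5e3    # assumed relative permittivity of blood
    print(maxwell_garnett(eps_muscle, eps_blood, f=0.10))
    ```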

    Correlation-based methods for data cleaning, with application to biological databases

    Get PDF
    Ph.D. (Doctor of Philosophy)

    Data quality and data cleaning in database applications

    Get PDF
    Today, data plays an important role in people's daily activities. With the help of database applications such as decision support systems and customer relationship management (CRM) systems, useful information or knowledge can be derived from large quantities of data. However, investigations show that many such applications fail to work successfully. There are many possible reasons for failure, such as poor system infrastructure design or poor query performance, but nothing is more certain to yield failure than a lack of concern for data quality. High-quality data is a key to today's business success. The quality of any large real-world data set depends on a number of factors, among which the source of the data is often the crucial one. It is now recognized that an inordinate proportion of the data in most data sources is dirty. Obviously, a database application with a high proportion of dirty data is not reliable for data mining or for deriving business intelligence, and the quality of decisions made on the basis of such business intelligence is likewise unreliable. To ensure high data quality, enterprises need processes, methodologies, and resources to monitor and analyse the quality of their data, and methodologies for preventing and/or detecting and repairing dirty data. This thesis focuses on improving data quality in database applications with the help of current data cleaning methods. It provides a systematic and comparative description of the research issues related to improving the quality of data, and addresses a number of research issues related to data cleaning. In the first part of the thesis, the literature on data cleaning and data quality is reviewed and discussed. Building on this review, a rule-based taxonomy of dirty data is proposed in the second part of the thesis. The proposed taxonomy not only summarizes the most common dirty data types but is also the basis on which the proposed method for solving the Dirty Data Selection (DDS) problem during the data cleaning process was developed. This helps in designing the DDS process in the proposed data cleaning framework described in the third part of the thesis. This framework retains the most appealing characteristics of existing data cleaning approaches, and improves the efficiency and effectiveness of data cleaning as well as the degree of automation of the data cleaning process. Finally, a set of approximate string matching algorithms is studied and experimental work is reported. Approximate string matching is an important part of many data cleaning approaches and has been well studied for many years. The experimental work in the thesis confirms that there is no clear best technique: the characteristics of the data, such as the size of a dataset, its error rate, the type of strings it contains, and even the type of typo in a string, have a significant effect on the performance of the selected techniques. In addition, the characteristics of the data also affect the selection of suitable threshold values for the selected matching algorithms. The findings from these experiments underpin the design of the ‘algorithm selection mechanism’ in the data cleaning framework, which enhances the performance of data cleaning systems in database applications
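
    The experimental part compares approximate string matching techniques and threshold choices. As a hedged illustration of the general idea (not the thesis's own algorithms), the sketch below computes a normalized Levenshtein similarity and flags a pair of strings as a likely duplicate when it exceeds a threshold; the 0.8 threshold and the sample strings are arbitrary placeholders.

    ```python
    # Hedged illustration of approximate string matching for duplicate detection.
    # Levenshtein distance is one of the classic techniques such frameworks compare;
    # the 0.8 threshold below is an arbitrary placeholder, not a value from the thesis.
    def levenshtein(a: str, b: str) -> int:
        """Minimum number of single-character edits turning a into b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                  # deletion
                                curr[j - 1] + 1,              # insertion
                                prev[j - 1] + (ca != cb)))    # substitution
            prev = curr
        return prev[-1]

    def similarity(a: str, b: str) -> float:
        """Normalized similarity in [0, 1]."""
        if not a and not b:
            return 1.0
        return 1.0 - levenshtein(a, b) / max(len(a), len(b))

    for x, y in [("Jon Smith", "John Smith"), ("IBM Corp.", "Intel Corp.")]:
        s = similarity(x, y)
        print(x, "~", y, "->", round(s, 2), "match" if s >= 0.8 else "no match")
    ```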

    Marcus, Maletic, Lin: Ordinal Association Rules for Error Identification in Data Sets

    No full text
    Abstract: Association rules are a fundamental class of patterns that exist in data. These patterns have been widely utilized (e.g., in market basket analysis) and extensive studies exist on efficient association rule mining algorithms. Special attention is given in the literature to extensions of binary association rules (e.g., ratio, quantitative, generalized, multiple-level, constraint-based, distance-based, and composite association rules). A new extension of Boolean association rules is introduced: ordinal association rules, which incorporate ordinal relationships among data items. These rules are used to identify possible errors in data. An algorithm that finds these rules and identifies potential errors in data is proposed, and the results of applying the method to a real-world data set are given
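
    An ordinal association rule captures an ordering such as "attribute A is usually less than or equal to attribute B"; records that violate a strongly supported ordering are flagged as potential errors. The sketch below is a hedged, simplified illustration of that idea rather than the authors' algorithm; the field names and the 0.9 confidence threshold are illustrative assumptions.

    ```python
    # Hedged, simplified illustration of ordinal-association-rule error detection:
    # if "a <= b" holds in at least min_conf of the records, the remaining records
    # are flagged as potential errors. Not the authors' algorithm; the field names
    # and the 0.9 threshold are illustrative.
    from itertools import permutations

    def ordinal_rule_errors(rows, fields, min_conf=0.9):
        """Return {(a, b): [row indices violating a <= b]} for strongly supported rules."""
        suspects = {}
        for a, b in permutations(fields, 2):
            holds = {i for i, r in enumerate(rows) if r[a] <= r[b]}
            if len(holds) / len(rows) >= min_conf:
                violations = [i for i in range(len(rows)) if i not in holds]
                if violations:
                    suspects[(a, b)] = violations
        return suspects

    rows = [
        {"order_date": 1, "ship_date": 3}, {"order_date": 2, "ship_date": 5},
        {"order_date": 4, "ship_date": 6}, {"order_date": 9, "ship_date": 2},  # likely error
        {"order_date": 5, "ship_date": 8}, {"order_date": 3, "ship_date": 7},
        {"order_date": 2, "ship_date": 4}, {"order_date": 6, "ship_date": 9},
        {"order_date": 1, "ship_date": 5}, {"order_date": 4, "ship_date": 8},
    ]
    print(ordinal_rule_errors(rows, ["order_date", "ship_date"]))
    ```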