
    An approach to filtering RFID data streams

    RFID is gaining significant traction as the preferred choice for automatic identification and data collection. However, data processing and management problems such as missed readings and duplicate readings hinder wide-scale adoption of RFID systems. To this end, we propose an approach that filters the captured data, performing both noise removal and duplicate elimination. Experimental results demonstrate that the proposed approach improves the missed-data restoration process compared with the existing method.
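    The abstract does not give the filtering algorithm itself, but a minimal sliding-window filter in the spirit described, duplicate elimination plus a count-based noise threshold, might look like the following sketch. The function name, window length, and threshold are all illustrative assumptions, not the paper's method:

    ```python
    def filter_rfid_stream(readings, window_s=2.0, min_count=2):
        """Filter a time-ordered RFID stream of (timestamp, tag_id) pairs:
        drop re-reads of a tag within `window_s` seconds (duplicates) and
        discard tags seen fewer than `min_count` times overall (noise)."""
        recent = {}    # tag_id -> timestamp of last accepted reading
        counts = {}    # tag_id -> total observations in the stream
        accepted = []
        for ts, tag in readings:
            counts[tag] = counts.get(tag, 0) + 1
            last = recent.get(tag)
            if last is not None and ts - last < window_s:
                continue  # duplicate within the window
            recent[tag] = ts
            accepted.append((ts, tag))
        # second pass: tags observed too rarely are treated as noise
        return [(ts, tag) for ts, tag in accepted if counts[tag] >= min_count]
    ```

    A tag read repeatedly inside the window collapses to one reading, while a tag that appears only once in the whole stream is discarded as a spurious read.
    
    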

    Enhancing RFID data quality and reliability

    This thesis addressed the problems of data quality, reliability, and energy consumption in networked Radio Frequency Identification (RFID) systems that support decision making in business intelligence applications. The outcome of the research substantially improved the accuracy and reliability of RFID-generated data and reduced energy depletion, thus prolonging RFID system lifetime.

    Information extraction from semi and unstructured data sources: a systematic literature review

    Millions of structured, semi-structured, and unstructured documents are produced around the globe on a daily basis. Sources of such documents include individuals as well as research societies such as IEEE, Elsevier, Springer, and Wiley, which publish scientific documents in enormous numbers. These documents are a huge resource of scientific knowledge for research communities and interested users around the world. However, due to their massive volume and varying formats, search engines face problems in indexing such documents, making information retrieval inefficient, tedious, and time consuming. Information extraction from such documents is among the hottest areas of research in data/text mining, and as the number of documents increases tremendously, more sophisticated information extraction techniques become necessary. This research reviews and summarizes existing state-of-the-art techniques in information extraction to highlight their limitations. Consequently, the research gap is formulated for researchers in the information extraction domain.

    Web transcript verification using check digit as secure number

    Cases of fraudulent academic qualification documents are increasing due to advances in editing technology, which make document forgery easy. Verifying academic documents, including degree certificates and academic transcripts, has become a worldwide problem, and it is a frightening situation when people without the right qualifications work as professionals in areas where they may harm society. Manual verification is time consuming and impractical, so it is paramount to have a web-based solution that can perform this task around the clock. In this paper, a dual check digit approach is proposed to verify the authenticity of an academic transcript. Three check digit methods, Universal Product Code Mod 10 (UPC Mod 10), International Standard Book Number Mod 11 (ISBN Mod 11), and Luhn Mod 10, were applied to test their reliability for this task. The simulation results show that the combination of ISBN Mod 11 and UPC Mod 10 performed very well in validating the authenticity of the transcripts.
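    Of the methods named, the Luhn mod 10 algorithm is a well-known published check-digit scheme; a minimal sketch of it (the standard algorithm, not the paper's dual-check-digit combination) is:

    ```python
    def luhn_check_digit(number: str) -> int:
        """Compute the Luhn mod 10 check digit for a string of digits."""
        total = 0
        # walk right-to-left; double every second digit, subtracting 9
        # whenever the doubled value exceeds 9
        for i, ch in enumerate(reversed(number)):
            d = int(ch)
            if i % 2 == 0:  # positions 0, 2, 4, ... from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return (10 - total % 10) % 10

    def luhn_valid(number: str) -> bool:
        """Validate a digit string whose last digit is its Luhn check digit."""
        return luhn_check_digit(number[:-1]) == int(number[-1])
    ```

    For example, the payload "7992739871" yields check digit 3, so "79927398713" validates and any other final digit is rejected.
    
    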

    Fitting statistical distribution of extreme rainfall data for the purpose of simulation

    In this study, several types of probability distributions were fitted to daily torrential rainfall data from 15 monitoring stations in Peninsular Malaysia for the period 1975 to 2007. Fitting statistical distributions is important for finding the most suitable model to anticipate extreme events of natural phenomena such as floods and tsunamis. The aim of the study is to determine which distribution fits the daily torrential Malaysian rainfall data well. The Generalized Pareto, Lognormal, and Gamma distributions were tested against the daily torrential rainfall amounts in Peninsular Malaysia. First, the most appropriate of the selected distributions was identified for each rainfall station. Then, data sets mimicking the daily torrential rainfall data were generated from the fitted distributions. Graphical representation and goodness-of-fit tests were used to find the best-fitting model. The Generalized Pareto distribution was found to be the most appropriate for describing the daily torrential rainfall amounts of Peninsular Malaysia. The outputs can be used to generate several sets of simulated data matrices that share the characteristics of the rainfall data, in order to assess the performance of a modification method compared with the classical method.
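    The fit-and-compare workflow described above can be sketched with SciPy. The synthetic Generalized Pareto sample below stands in for the station data, which the abstract does not reproduce, and the Kolmogorov-Smirnov statistic is used as one of several possible goodness-of-fit criteria:

    ```python
    from scipy import stats

    # stand-in for daily torrential rainfall amounts (mm)
    sample = stats.genpareto.rvs(c=0.2, scale=20, size=500, random_state=42)

    candidates = {
        "genpareto": stats.genpareto,
        "lognorm": stats.lognorm,
        "gamma": stats.gamma,
    }
    for name, dist in candidates.items():
        # fix the location at 0: rainfall amounts are non-negative
        params = dist.fit(sample, floc=0)
        ks = stats.kstest(sample, dist.cdf, args=params)
        print(f"{name:10s} KS statistic = {ks.statistic:.4f}")
    ```

    The candidate with the smallest KS statistic (supported by Q-Q plots in practice) is taken as the best-fitting model.
    
    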

    A deep contractive autoencoder for solving multiclass classification problems

    The contractive autoencoder (CAE) is one of the most robust variants of the standard autoencoder (AE). The major drawback of the conventional CAE is its high reconstruction error during the encoding and decoding of input features, which leaves it unable to capture the finer details present in the input and causes it to miss information worth considering. As a result, the features extracted by a CAE are not a true representation of all the input features, and the classifier fails to solve classification problems efficiently. In this work, an improved variant of the CAE, named deep CAE, is proposed based on a layered, feed-forward architecture. In the proposed architecture, standard CAEs are arranged in layers, and encoding and decoding take place inside each layer. The features obtained from one CAE are given as inputs to the next, and each CAE in every layer reduces the reconstruction error, yielding informative features. The feature set obtained from the last CAE is given to a softmax classifier for classification. The performance and efficiency of the proposed model were tested on five MNIST-variant datasets, and the results were compared with standard SAE, DAE, RBM, SCAE, ScatNet, and PCANet in terms of training error, testing error, and execution time. The results revealed that the proposed model outperforms the aforementioned models.
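    The objective each CAE layer minimizes is standard in the literature: reconstruction error plus a contractive penalty, the squared Frobenius norm of the Jacobian of the hidden activations with respect to the input. A single-layer NumPy sketch (illustrative shapes and names, not the paper's implementation):

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cae_loss(x, W, b, W_dec, b_dec, lam=0.1):
        """Reconstruction error plus lam * ||J_h(x)||_F^2, where J_h is
        the Jacobian of h = sigmoid(W x + b) with respect to x."""
        h = sigmoid(W @ x + b)
        x_hat = sigmoid(W_dec @ h + b_dec)
        recon = np.sum((x - x_hat) ** 2)
        # for a sigmoid encoder, J = diag(h * (1 - h)) @ W, so the squared
        # Frobenius norm factorizes per hidden unit:
        penalty = (h * (1.0 - h)) ** 2 @ (W ** 2).sum(axis=1)
        return recon + lam * penalty
    ```

    In the deep variant described above, each layer would be trained with this loss on the hidden features produced by the previous layer.
    
    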

    A review on missing tags detection approaches in RFID system

    A Radio Frequency Identification (RFID) system can automatically detect a very large number of tagged objects within a short time. With this advantage, it is used in many areas, especially supply chain management and manufacturing, and it can track an individual object all the way from the factory to the retail store. However, because detection depends on radio signals, readings of tagged objects can be missed due to signal loss caused by weak signals, interference, or unknown sources. Missing tag detection is a truly significant problem in RFID systems, because inaccurate readings generate misleading information that renders system reporting useless. Missed detections can also trigger false theft alarms, or leave objects undetected and unattended for some period. This paper reviews this issue and compares some of the proposed approaches, including Window Sub-range Transition Detection (WSTD), the Efficient Missing-Tag Detection Protocol (EMD), and the Multi-hashing based Missing Tag Identification (MMTI) protocol. The review gives insight into the current challenges and opens the way for new solutions to the missing tag detection problem.
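    As a toy illustration of the hash-slot idea underlying protocols of this family (this is a generic sketch, not the published WSTD, EMD, or MMTI specifications): the reader pre-computes which frame slot each expected tag should reply in, and a pre-computed slot that stays silent exposes a missing tag. All names and parameters here are assumptions:

    ```python
    import hashlib

    def expected_slots(tag_ids, frame_size, seed):
        """Map each expected tag ID to a slot in the frame, as the reader
        would pre-compute from its inventory list."""
        slots = {}
        for tag in tag_ids:
            h = hashlib.sha256(f"{seed}:{tag}".encode()).digest()
            slot = int.from_bytes(h[:4], "big") % frame_size
            slots.setdefault(slot, set()).add(tag)
        return slots

    def detect_missing(tag_ids, replied_slots, frame_size, seed):
        """A singleton slot that should contain a reply but stayed silent
        identifies its tag as missing; shared slots stay ambiguous."""
        missing = set()
        for slot, tags in expected_slots(tag_ids, frame_size, seed).items():
            if slot not in replied_slots and len(tags) == 1:
                missing |= tags
        return missing
    ```

    Real protocols repeat such frames with different seeds (or multiple hashes, as in MMTI) to resolve the ambiguous shared slots.
    
    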

    Printing conductive ink tracks on textile materials

    Textile materials with integrated electrical features can form intelligent articles with a wide range of applications in sports, workwear, health care, safety, and other areas. Traditionally, conductive textiles are created with conductive fibers, treated conductive fibers, conductive woven fabrics, or conductive ink. Technologies for printing conductive ink on textile materials are still under development, so this study investigates the feasibility of printing conductive ink manually, by silk-screen printing, and with an off-the-shelf modified inkjet printer. The two-point probe resistance test (I-V resistance test) was employed to measure the resistance of all substrates, and the surface finish and thickness of the conductive ink tracks were measured with an optical microscope. The functionality of the printed electronic structures was tested by introducing strain via a bending test to determine how their resistance changes when bent. The resistance obtained from the manual method and from single-layer conductive ink tracks made by silk-screen printing was as expected, and the double-layer silk-screened tracks likewise gave satisfactory resistance. Micro-structure analysis showed that the surface finish of the single-layer conductive ink tracks was poorer than that of the double-layer tracks. Furthermore, the bending tests gave the expected result that increasing the bend angle decreases conductivity. The silver conductive paint RS186-3600 provided low resistance, below 40 ohms, after printing on fabric.

    The development of simulation logic model that dealing with uncertainty for piping construction industry

    Multiple processes and activities are involved in piping construction (PC) projects before a project is handed over to the customer or client. In PC, a multi-project construction environment (MPCE), in which more than one project is managed simultaneously within an organization, is a common phenomenon, and managers in such environments are highly likely to face uncertainty issues that lead to late delivery of completed projects. Dealing with uncertainty in an MPCE is common, though managing uncertainty in PC is more complicated than in other industries. The objective of this paper is therefore to develop a simulation logic model to deal with uncertainty in PC projects, focusing on environmental issues (EI). This result will feed into the development of an uncertainty model in the next phase of this research. The development of the simulation logic model began with qualitative data collection for a case study, capturing all activities of a water supply company. A business model with 14 activities was then developed and integrated with the uncertainty factors in EI; this integration produced the simulation logic model, the main contribution of this work. Once the uncertainty model is completely developed, it will provide a medium for PC companies confronting an MPCE to monitor uncertainties and prepare for future issues. It is therefore important to continue providing PC companies with an all-inclusive model that helps them manage and tackle uncertainty, especially regarding EI.

    A Relative Tolerance Relation of Rough Set (RTRS) for potential fish yields in Indonesia

    The sea is essential to life on earth: it regulates the climate, produces oxygen, provides medicines and habitats for marine animals, and feeds millions of people. It must be ensured that the sea continues to meet the needs of life without sacrificing future generations. The sea is also an essential part of global commerce, and its contents are a solution to human energy needs today and in the future. This wealth and potential need to be mapped and described to give all concerned parties a picture of marine potential. As part of the government, the Ministry of Marine Affairs and Fisheries is responsible for formulating, determining, and implementing marine and fisheries policies based on mapping and on information extracted from existing conditions. This information can be used to predict the marine potential of a marine area. The prediction process can be developed with data-mining techniques such as association rule mining, which examines the relationship between the quantity of fish and the plankton abundance index. However, association rule mining requires complete data, that is, data sets with no missing values, to generate interesting rules for detection systems. The problem is that the required marine data are often unavailable, or available but incomplete. To address this problem, this paper introduces the Relative Tolerance Relation of Rough Set (RTRS). RTRS differs from previous rough set approaches that use tolerance relations, non-symmetric similarity relations, and limited tolerance relations: it is based on a limited tolerance relation that considers the relative precision between two objects, and it is the first work to use relative precision in this way. In addition, this paper presents the mathematical formulation of the RTRS and compares it with existing approaches on a real marine dataset, classifying the marine potential level of each region. The results show that the proposed approach is more accurate than the existing approaches.
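    The exact RTRS relation is defined in the paper's mathematics; purely to illustrate the flavour of a tolerance relation with relative precision over incomplete data, a toy predicate might look like the following. The threshold alpha, the agreement ratio, and the "*" missing-value marker are all assumptions, not the published definition:

    ```python
    def relative_tolerance(x, y, alpha=0.5, missing="*"):
        """Illustrative relative-tolerance check between two objects
        (attribute tuples) that may contain missing values: the objects
        are related when the fraction of agreements among their jointly
        known attributes reaches alpha."""
        known = [(a, b) for a, b in zip(x, y)
                 if a != missing and b != missing]
        if not known:
            return False  # no jointly known attributes to compare
        agree = sum(1 for a, b in known if a == b)
        return agree / len(known) >= alpha
    ```

    Under such a relation, tolerance classes can be formed over incomplete records (e.g. per-region plankton and catch attributes) without discarding rows that contain missing values.
    
    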