58 research outputs found
A connectionist model to predict rice yield based on disease infection
Advances in technology, the economy and the business environment are influencing all sectors, including agriculture. Rice, as the world's main dietary staple, is experiencing a decrease in yield due to pest and disease infection, decreasing water sources, the scarcity of suitable land for agriculture and inefficient labour management. Rice yield losses of approximately 31.5% were attributed to rice plant diseases. This work describes the development of a connectionist model to predict rice yield based on the area infected by rice diseases. The Back Propagation learning algorithm was used with 5 input parameters representing the planting season, the plantation district and recordings of the 3 main deadly diseases from the Muda Agricultural area in Malaysia during planting seasons from 1995-2001. The output parameter represents the rice yield measured in kilograms per hectare. The model records an average mean deviation of 0.053.
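A minimal sketch of a back-propagation network of the kind described, with 5 inputs (season, district and three disease-infected areas) and one yield output; the data, hidden-layer size and learning rate below are assumptions for illustration, not the paper's settings.

```python
# Sketch only: a small back-propagation network, 5 inputs -> 1 output (yield).
# Placeholder data and hyper-parameters; not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 5))          # placeholder inputs, scaled to [0, 1]
y = rng.random((100, 1))          # placeholder yield, scaled to [0, 1]

n_hidden, lr = 8, 0.1
W1 = rng.normal(0, 0.5, (5, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    # backward pass: gradients of the squared error through both layers
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

print("mean absolute deviation:", np.abs(out - y).mean())
```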
Gasoline price forecasting: An application of LSSVM with improved ABC
Optimizing the hyper-parameters of Least Squares Support Vector Machines (LSSVM) is crucial, as it directly influences the predictive power of the algorithm. To tackle this issue, this study proposes an improved Artificial Bee Colony (IABC) algorithm based on conventional mutation. The IABC serves as an optimizer for LSSVM. Applied to gasoline price forecasting, the performance is assessed using Mean Absolute Percentage Error (MAPE) and Root Mean Square Percentage Error (RMSPE). The simulation results show that the proposed IABC-LSSVM outperforms both ABC-LSSVM and the Back Propagation Neural Network.
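The two error measures that guide the comparison are standard; a short sketch of how they can be computed (the formulas are textbook definitions, not taken from the paper's code):

```python
# MAPE and RMSPE as percentage errors between actual and predicted prices.
import numpy as np

def mape(actual, predicted):
    """Mean Absolute Percentage Error (%)."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def rmspe(actual, predicted):
    """Root Mean Square Percentage Error (%)."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return 100.0 * np.sqrt(np.mean(((actual - predicted) / actual) ** 2))

# e.g. mape([2.10, 2.05], [2.07, 2.11]) on a short gasoline price series
```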
Filter-wrapper based feature ranking technique for dynamic software quality attributes
This article presents a filter-wrapper based feature ranking technique that is able to learn and rank quality attributes according to new cases of software quality assessment data. The proposed technique consists of a scoring method named Most Priority of Feature (MPF) and a learning algorithm that learns the software quality attribute weights. Existing ranking techniques do not address the issue of redundancy in ranking software quality attributes. Our proposed technique resolves the redundancy issue by using classifiers to choose attributes that show high classification accuracy. Experimental results indicate that our technique outperforms other similar techniques and correlates better with human experts.
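The MPF scoring method is specific to the paper and is not reproduced here; the sketch below shows only the generic wrapper step assumed by such techniques, in which each quality attribute is scored by the cross-validated accuracy of a classifier trained on it.

```python
# Generic wrapper step (illustrative): score each attribute by the accuracy
# of a classifier using that attribute alone, then rank by score.
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def rank_attributes(X, y, attribute_names):
    scores = {}
    for i, name in enumerate(attribute_names):
        acc = cross_val_score(DecisionTreeClassifier(), X[:, [i]], y, cv=5).mean()
        scores[name] = acc
    # attributes with higher single-attribute accuracy rank first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```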
Intelligent software quality model: The theoretical framework
Globally, software quality has increasingly been seen as a common strategic response for achieving competitiveness in business. It has become very important as the use of software grows more demanding. Software quality includes quality control tests, quality assurance and quality management. Currently, the available software quality models are built on static measurements of attributes and measures. Previous studies have indicated that, to ensure the quality of software meets future requirements and needs, a new dynamic and intelligent software quality model has to be developed. This paper discusses the development of an intelligent software quality model based on a behavioural and human-perspectives approach, which extends the Pragmatic Quality Factor (PQF) model as a benchmark for quality assessment.
Making the Switch from Learningzone to UUM Online Learning
A Learning Management System (LMS) was developed for UUM lecturers and students as a complement to the existing conventional teaching and learning methods. UUM Online Learning was the third LMS used by Universiti Utara Malaysia (UUM) to carry out teaching and learning activities through web applications. The first LMS introduced in UUM was Learning Care, a third-party software that incurred high maintenance costs. Consequently, it was replaced by an in-house developed LMS, known as Learningzone, in July 2009. The transition from Learning Care to Learningzone took approximately two (2) semesters, from July 2009 until January 2010, before Learning Care was fully replaced.
Research trends in microarray data analysis: modelling gene regulatory network by integrating transcription factors data / Farzana Kabir Ahmad and Siti Sakira Kamaruddin
The invention of microarray technology has enabled the expression levels of thousands of genes to be monitored at once. This modernized approach has created large amounts of data to be examined. Recently, the gene regulatory network has become an interesting topic and has generated impressive research goals in computational biology. A better understanding of genetic regulatory processes would have significant implications for the biomedical fields and many pharmaceutical industries. As a result, various mathematical and computational methods have been used to model gene regulatory networks from microarray data. Amongst those methods, the Bayesian network model attracts the most attention and has become the prominent technique, since it can capture nonlinear and stochastic relationships between variables. However, structure learning of this model is NP-hard and computationally complex, as the number of potential edges increases drastically with the number of genes. In addition, most studies focus only on the predicted results while neglecting the fact that microarray data provide only fragmented information on the whole biological process. Hence, this study proposes a network-based inference model that incorporates biological knowledge in order to verify the constructed gene regulatory relationships. The gene regulatory network is constructed using a Bayesian network based on a low-order conditional independence approach. This technique aims to identify from the data the dependencies needed to construct the network structure, while addressing the structure learning problem. In addition, three toolkits, namely Ensembl, TFSearch and TRANSFAC, are used to detect false positive edges and verify the reliability of regulatory relationships. The experimental results show that integrating biological knowledge enhances precision and reduces the number of false positive edges in the trained gene regulatory network.
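As a sketch of the kind of low-order test used to prune candidate edges before structure learning (the paper's exact procedure is not reproduced), a first-order conditional-independence test can be based on partial correlation with Fisher's z-transform; the significance threshold below is illustrative.

```python
# First-order conditional-independence test via partial correlation.
# x, y, z are expression vectors of single genes across samples (assumed data).
import numpy as np
from scipy import stats

def cond_independent(x, y, z, alpha=0.05):
    """Return True if X appears independent of Y given Z (edge can be dropped)."""
    rxy, rxz, ryz = (np.corrcoef(a, b)[0, 1] for a, b in ((x, y), (x, z), (y, z)))
    pr = (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))
    n = len(x)
    fisher_z = 0.5 * np.log((1 + pr) / (1 - pr)) * np.sqrt(n - 4)
    p_value = 2 * (1 - stats.norm.cdf(abs(fisher_z)))
    return p_value > alpha
```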
Representing Semantics of Text by Acquiring its Canonical Form
Canonical form is a notion stating that related ideas should have the same meaning representation. It is a notion that greatly simplifies the task of handling a wide range of expressions through a single meaning representation. The issue in text representation is to devise a formal approach to capturing meaning, or semantics, in sentences. These issues include heterogeneity and inconsistency in text. Polysemous, synonymous, homonymous words and morphemes pose serious drawbacks when trying to capture senses in sentences. This calls for a need to capture and represent senses in order to resolve vagueness and improve the understanding of senses in documents for knowledge-creation purposes. We introduce a simple and straightforward method to capture the canonical form of sentences. The proposed method first identifies the canonical forms using Word Sense Disambiguation (WSD) techniques and later applies a First Order Predicate Logic (FOPL) scheme to represent the identified canonical forms. We adopted two WSD algorithms, Lesk and Selectional Preference Restriction, which concentrate on disambiguating senses in words, phrases and sentences. We also adopted the First Order Predicate Logic scheme to analyse argument-predicate structure in sentences, employing the consequence logic theorem to test for satisfiability, validity and completeness of information in sentences.
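A minimal sketch of the WSD step using the simplified Lesk algorithm as implemented in NLTK; the paper's own Lesk variant, the Selectional Preference Restriction step and the FOPL translation are not reproduced, and the example sentence and predicate form are purely illustrative.

```python
# Simplified Lesk via NLTK: pick the WordNet sense whose gloss best overlaps
# the sentence context. Requires nltk with the wordnet corpus downloaded.
from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

sentence = "The bank approved the loan after reviewing the account".split()
sense = lesk(sentence, "bank", pos=wn.NOUN)
if sense is not None:
    print(sense.name(), "-", sense.definition())
# The chosen synset then serves as the canonical sense when the sentence is
# rewritten as a first-order predicate, e.g. approve(bank_1, loan_1).
```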
Evaluation of automated phonetic labeling and segmentation for dyslexic children’s speech
Phonetic labeling and segmentation have one major drawback: they are time consuming, error-prone and tedious if done manually. Although manual labeling and segmentation are always the most accurate, an automated approach is a potentially promising alternative for a more efficient process. In an attempt to automatically label and segment dyslexic children's read speech, this paper investigates whether the automated approach can be as accurate as the manual one. This question arises because the reading errors these children produce are highly phonetically similar, which affects automatic speech recognition (ASR). In this work, experiments were performed using a specifically designed ASR to force-align the read speech and produce the labels and segmentations automatically. The CSLU toolkit's forced alignment algorithm was employed to measure their performance. Selected speech data of dyslexic children reading in Malay were fed to the algorithm as input, and the evaluation resulted in 95% agreement on phonetic labeling but only 65% agreement on segmentation with respect to the manual annotations.
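An illustrative sketch (not the CSLU toolkit itself) of how label agreement and boundary agreement between automatic and manual alignments can be scored; the 20 ms boundary tolerance is an assumption, not a figure from the paper.

```python
# Agreement between a forced alignment and a manual reference.
def label_agreement(auto_labels, manual_labels):
    """Fraction of phone labels that match position by position."""
    matches = sum(a == m for a, m in zip(auto_labels, manual_labels))
    return matches / len(manual_labels)

def boundary_agreement(auto_bounds, manual_bounds, tol=0.020):
    """Fraction of segment boundaries (in seconds) within a tolerance."""
    matches = sum(abs(a - m) <= tol for a, m in zip(auto_bounds, manual_bounds))
    return matches / len(manual_bounds)

# e.g. label_agreement(['b', 'a', 't', 'u'], ['b', 'a', 't', 'o']) -> 0.75
```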
Enhanced ABC-LSSVM for energy fuel price prediction
This paper presents an enhanced Artificial Bee Colony (eABC) based on the Lévy Probability Distribution (LPD) and conventional mutation. The purposes of the enhancement are to enrich the searching behaviour of the bees in the search space and to prevent premature convergence. This approach is used to improve the performance of the original ABC in optimizing the embedded hyper-parameters of Least Squares Support Vector Machines (LSSVM). A procedure is then put forward to serve as a prediction tool. To evaluate the efficiency of the proposed model, crude oil price data were employed as empirical data and a comparison against four approaches was conducted: standard ABC-LSSVM, Genetic Algorithm-LSSVM (GA-LSSVM), Cross Validation-LSSVM (CV-LSSVM), and a conventional Back Propagation Neural Network (BPNN). From the experiment conducted, the proposed eABC-LSSVM shows encouraging results in optimizing the parameters of interest, producing higher prediction accuracy for the employed time series data.
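A hedged sketch of drawing a Lévy-distributed step using Mantegna's algorithm, the kind of heavy-tailed perturbation an LPD-based mutation relies on; the exponent beta, the 0.01 scale factor and the update rule in the comment are illustrative choices, not the paper's settings.

```python
# Lévy-flight step via Mantegna's algorithm (heavy-tailed random perturbation).
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5):
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma_u, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

# Example mutation of a candidate food source (hypothetical update rule):
# candidate = food_source + 0.01 * levy_step(dim) * (food_source - best_source)
```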
Data normalization techniques in swarm-based forecasting models for energy commodity spot price
Data mining is a fundamental technique for identifying patterns in large data sets. The extracted facts and patterns contribute to various domains such as marketing, forecasting and medicine. Prior to mining, data are consolidated so that the resulting mining process is more efficient. This study investigates the effect of different data normalization techniques, namely Min-max, Z-score and decimal scaling, on swarm-based forecasting models. The swarm intelligence algorithms employed include the Grey Wolf Optimizer (GWO) and the Artificial Bee Colony (ABC). Forecasting models are then developed to predict the daily spot prices of crude oil and gasoline. Results show that GWO works better with the Z-score normalization technique, while ABC produces better accuracy with Min-max. Nevertheless, GWO is superior to ABC, as its model generates the highest accuracy for both crude oil and gasoline prices. This result indicates that GWO is a promising competitor in the family of swarm intelligence algorithms.
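The three normalization techniques compared in the study, written out for a single price series; these are the standard textbook forms, not code from the paper.

```python
# Min-max, Z-score and decimal-scaling normalization of a 1-D price series x.
import numpy as np

def min_max(x, lo=0.0, hi=1.0):
    x = np.asarray(x, dtype=float)
    return lo + (x - x.min()) * (hi - lo) / (x.max() - x.min())

def z_score(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def decimal_scaling(x):
    x = np.asarray(x, dtype=float)
    j = np.ceil(np.log10(np.abs(x).max()))   # move the decimal point j places
    return x / 10 ** j

# e.g. decimal_scaling([86.4, 92.1, 101.7]) divides each price by 10**3
```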