14 research outputs found

    Business Intelligence and Data Mining: Opportunities and Future

    Get PDF
    In the business world, endless streams of data are needed to run an effective business: a service provider must analyse its own needs in relation to those of the end customer and anticipate them, since the aim of any service creation is to meet customer requirements. In recent years, business intelligence (BI) has been a topic of interest in almost every field, and data mining has likewise proved a useful approach within business intelligence, in terms of discussion, application and business domain. There have been various attempts to detect the characteristics of services that are important to the acceptance of the service offered. The quest for attributes that satisfy and excite the consumer is possible through various technological research approaches, but the effort involved is enormous. Businesses are able to collect customer data in a more reliable and simpler way with the use of "smart systems", which are Information and Communication Technology (ICT) enabled services. This paper describes the use of data mining and business intelligence to improve the capture of consumer needs when designing collection techniques. The main purpose of the study is to define the importance of business intelligence and its features, to explain how data mining works and briefly discuss some data mining techniques, and to explore the future and opportunities of business intelligence and data mining.
    Keywords: Business Intelligence, Data Mining, Business needs, Data Mining Techniques
    DOI: 10.7176/EJBM/13-11-01
    Publication date: June 30th 202

    3D Animation Simulation of Computer Fractal and Fractal Technology Combined with Diamond-Square Algorithm

    No full text
    This article studies the generation of 3D animation simulations based on a gray-value algorithm and a fractal interpolation algorithm. The article uses fractal technology combined with the Diamond-Square algorithm to generate height data and then color it. This algorithm optimizes the 3D animation process. The results show that the algorithm generates results quickly and needs only a few simple animation parameters as input to produce different 3D animations
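
    The abstract gives no implementation details; as a minimal sketch of the Diamond-Square step that produces the height data (grid size, roughness parameter and the gray-value coloring at the end are assumptions for illustration):

```python
import numpy as np

def diamond_square(n, roughness=0.5, seed=0):
    """Generate a (2**n + 1) x (2**n + 1) height map with the Diamond-Square algorithm."""
    rng = np.random.default_rng(seed)
    size = 2**n + 1
    h = np.zeros((size, size))
    # Seed the four corners with random heights.
    h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.uniform(-1, 1, 4)
    step, scale = size - 1, 1.0
    while step > 1:
        half = step // 2
        # Diamond step: the centre of each square gets the average of its corners plus noise.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half, x - half] + h[y - half, x + half] +
                       h[y + half, x - half] + h[y + half, x + half]) / 4.0
                h[y, x] = avg + rng.uniform(-scale, scale)
        # Square step: the midpoint of each edge gets the average of its neighbours plus noise.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                neighbors = []
                if y - half >= 0:
                    neighbors.append(h[y - half, x])
                if y + half < size:
                    neighbors.append(h[y + half, x])
                if x - half >= 0:
                    neighbors.append(h[y, x - half])
                if x + half < size:
                    neighbors.append(h[y, x + half])
                h[y, x] = np.mean(neighbors) + rng.uniform(-scale, scale)
        step //= 2
        scale *= roughness   # reduce the noise amplitude at each finer level
    return h

heights = diamond_square(7)   # 129 x 129 height field
gray = (255 * (heights - heights.min()) / np.ptp(heights)).astype(np.uint8)  # gray-value coloring
```

    The fractal interpolation and the animation stage described in the article are outside the scope of this sketch.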

    Challenges of Implementing Cloud Computing in the Arab Libraries Environment

    Get PDF
    Cloud computing is considered one of the important Web applications that libraries can utilize, given its ability to create different forms of information containers on the library site that users can share. Sharing is made possible by using a ready-made model provided by cloud computing. The main contribution of this paper is to identify the challenges that face the implementation of cloud computing in libraries and information institutions in the Arab environment. The study applies the Delphi method, which relies on consulting experts over several rounds (at least three) to arrive at results that help identify their future vision of the subject. The paper concludes that the main challenges lie in professionalism, training, and technical challenges such as the availability of applications and programs, storage capacity, the huge volume of data, and privacy and information security

    Nonlinear Differential Equations in Computer-Aided Modeling of Big Data Technology

    No full text
    Using simple linear-equation classification for big data analysis and classification modeling is inefficient and yields poor accuracy. For this reason, the thesis uses nonlinear differential equations to carry out computer-aided unsteady aerodynamic modeling. From the perspective of differential equations, big data classification technology is studied and a classification model is established. The article constructs the differential classification mathematical model by establishing a differential equation with second-order delay together with the constraint conditions of the model specification set. The article then identifies linear parameters, such as characteristic time constants, in the aerodynamic model. Research shows that the model can accurately predict unsteady aerodynamic characteristics under different maneuvers
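
    The abstract does not reproduce the model itself; purely as an illustrative sketch, a second-order delay differential equation of the general kind described (the coefficients a_1, a_2, a_3, b, the delay tau and the input u are assumptions, not values from the paper) might take the form

```latex
\ddot{y}(t) + a_1\,\dot{y}(t) + a_2\,y(t) + a_3\,y(t-\tau) = b\,u(t), \qquad \tau > 0
```

    where the characteristic time constants entering a_1 and a_2 are examples of the linear parameters that would be identified from measured maneuver data.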

    Mathematical model of transforming image elements to structured data based on BP neural network

    No full text
    The analysis and structural transformation of power-related picture elements is an essential part of regional power grid research. This paper proposes a new idea for extracting monolithic insulator images based on analysing the characteristics of scanned colour images of grid power insulators. The article extracts the RGB colour matrix of the insulator using a BP neural network algorithm and then uses it as a characteristic parameter for training and analysis. Combining the characteristics of the image data, it is found that the model proposed in this paper enhances the ability to express images, thereby improving the accuracy of image classification. Furthermore, extensive experiments on the insulator-monolith dataset show the effectiveness of this model
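
    As a minimal sketch of the kind of pipeline described (synthetic mean-RGB features stand in for the colour matrices extracted from the scanned insulator images, and the network size and labels are assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: each sample is an RGB colour feature vector taken from an image
# region; labels mark insulator vs. background.  The real feature extraction from
# scanned grid-insulator images is not described in the abstract.
rng = np.random.default_rng(0)
X = rng.uniform(0, 255, size=(1000, 3))      # RGB colour features
y = (X[:, 0] > X[:, 2]).astype(int)          # synthetic labels for illustration

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small back-propagation (BP) network: one hidden layer, trained with SGD.
bp_net = MLPClassifier(hidden_layer_sizes=(16,), solver="sgd",
                       learning_rate_init=0.01, max_iter=2000, random_state=0)
bp_net.fit(X_train, y_train)
print("test accuracy:", bp_net.score(X_test, y_test))
```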

    Cognitive Computational Model Using Machine Learning Algorithm in Artificial Intelligence Environment

    No full text
    To explore the application of machine learning algorithms to the intelligent analysis of big data in an artificial intelligence (AI) environment, so that cognitive computing can meet the requirements of AI and better assist humans in data analysis, the theoretical basis of machine learning algorithms is first elaborated. Then, a cognitive computational model based on machine learning is proposed, covering the essence, principle, function and training method of the deep belief network (DBN) algorithm, as well as the joint use of the DBN algorithm and a multilayer perceptron. Finally, the proposed algorithm is simulated. The results show that, under the same parameter conditions, the accuracy of the DBN algorithm combined with the multilayer perceptron is higher than that of the DBN algorithm alone; when the number of units is greater than 40, the accuracy of the combined model is significantly higher than that of the DBN algorithm; when the number of units is 30, the best effect is obtained, with an error rate below 0.05, which the DBN algorithm cannot achieve alone; and when the number of network layers is set to four, the error rate of the combined model is below 0.05, its optimal level. In the AI environment, the performance of the cognitive computational model based on the DBN algorithm and a multilayer perceptron can reach the highest level, making the computer a handy intelligent auxiliary tool for human beings
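
    The abstract does not specify the architecture beyond unit and layer counts; as a rough sketch of the general idea (stacked restricted Boltzmann machines approximating the DBN's unsupervised layers, scikit-learn's digits dataset standing in for the big-data task, and the 40-unit width merely echoing the unit counts mentioned above):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM, MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Toy stand-in for the paper's task: classify scikit-learn's handwritten digits.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two stacked RBMs play the role of the DBN's unsupervised feature layers;
# an MLP (multilayer perceptron) on top does the supervised part.
dbn_mlp = Pipeline([
    ("scale", MinMaxScaler()),   # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=40, learning_rate=0.05, n_iter=20, random_state=0)),
    ("mlp", MLPClassifier(hidden_layer_sizes=(40,), max_iter=1000, random_state=0)),
])
dbn_mlp.fit(X_train, y_train)
print("DBN+MLP accuracy:", dbn_mlp.score(X_test, y_test))
```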

    Demonstration of application program of logistics public information management platform based on fuzzy constrained programming mathematical model

    No full text
    As the main development direction of the new era, the logistics public information platform not only makes rational use of modern technology and information resources but also raises the overall level of urban logistics planning, construction and management. With the continuous improvement of residents' living standards, it has gradually become a focus of attention. A public information platform that combines the Internet of Things with the digital city, based on the comprehensive perception of urban information resources, allows in-depth exploration and analysis and ultimately improves the overall service level of urban logistics construction and management. Drawing on current research results and on experience in building public information management platforms, and starting from the construction of the current public information management platform, this paper verifies and analyses the operation of a public information management platform built on an information cloud service architecture, which can lay the foundation for its construction. From the perspective of logistics, the paper explores in depth the uncertainty of production and customer demand in public system management and, combining the characteristics of the information cloud structure, gives a fuzzy constrained programming mathematical model to handle the detailed aspects of the system at a higher level
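
    The paper's actual model is not given in the abstract; purely as an illustration of how a fuzzy demand constraint can be turned into a crisp one and solved, here is a toy sketch (all costs, capacities, the triangular fuzzy demand and the alpha level are invented):

```python
from scipy.optimize import linprog

# Illustrative mini-model: ship goods from two warehouses to one customer at minimum
# cost, where customer demand is a triangular fuzzy number (80, 100, 120).  At
# confidence level alpha the fuzzy demand constraint is replaced by its crisp alpha-cut.
cost = [4.0, 6.0]              # unit shipping cost from warehouse 1 and warehouse 2
capacity = [70.0, 90.0]        # warehouse capacities
d_low, d_mid, d_high = 80.0, 100.0, 120.0
alpha = 0.6                    # required satisfaction / confidence level

demand_min = d_low + alpha * (d_mid - d_low)      # lower bound of the alpha-cut
demand_max = d_high - alpha * (d_high - d_mid)    # upper bound of the alpha-cut

# Variables x1, x2 >= 0; minimise cost subject to capacities and the alpha-cut demand window.
res = linprog(
    c=cost,
    A_ub=[[-1.0, -1.0], [1.0, 1.0]],              # demand_min <= x1 + x2 <= demand_max
    b_ub=[-demand_min, demand_max],
    bounds=[(0, capacity[0]), (0, capacity[1])],
)
print("shipments:", res.x, "total cost:", res.fun)
```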

    Customer Churn Prediction in Telecommunication Industry Using Deep Learning

    No full text
    Without proper analysis and forecasting, industries will find themselves repeatedly losing customers to churn, which the telecom industry in particular cannot afford. A predictive model of customer churn allows companies to retain current customers and to obtain new ones. In this study, a Deep-BP-ANN is implemented using two feature selection methods, Variance Thresholding and Lasso Regression; in addition, the model is strengthened by an early stopping technique to stop training at the right time and prevent overfitting. We compared the efficiency of dropout and activity regularization strategies for minimizing overfitting on two real datasets: IBM Telco and Cell2cell. Two evaluation approaches, holdout and 10-fold cross-validation, were used to evaluate the model's efficiency. To address the class imbalance, the random oversampling technique was used to balance both datasets. The results show that the implemented model performs well with lasso regression for feature selection, early stopping to pick the number of epochs, large numbers of neurons (250) in the input and hidden layers, and activity regularization to minimize overfitting on both datasets. In predicting customer churn, our findings outperform ML techniques: XG_Boost, Logistic_Regression, Naïve_Bayes, and KNN. Moreover, our Deep-BP-ANN model's accuracy outperforms existing deep learning techniques that use holdout or 10-fold CV on the same datasets
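
    The abstract names the building blocks but not the exact implementation; a minimal sketch of a comparable pipeline (synthetic data in place of the IBM Telco and Cell2cell datasets, scikit-learn's MLPClassifier standing in for the paper's Deep-BP-ANN, and the imbalanced-learn package assumed for random oversampling) might look like this:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.neural_network import MLPClassifier
from imblearn.over_sampling import RandomOverSampler   # assumed dependency

# Synthetic stand-in for the churn data: imbalanced binary labels.
X, y = make_classification(n_samples=4000, n_features=30, weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Lasso-based feature selection (one of the two selection methods named in the abstract).
selector = SelectFromModel(Lasso(alpha=0.01)).fit(X_train, y_train)
X_train, X_test = selector.transform(X_train), selector.transform(X_test)

# Random oversampling to balance the churn / non-churn classes.
X_train, y_train = RandomOverSampler(random_state=0).fit_resample(X_train, y_train)

# A deep back-propagation ANN with early stopping; the 250-unit layers echo the abstract.
model = MLPClassifier(hidden_layer_sizes=(250, 250), early_stopping=True,
                      validation_fraction=0.1, max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```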

    Evaluation of descriptive type answer using transformed weight and Cosine-SVM

    No full text
    Text mining is the technique of obtaining high-quality information from text. In recent years, applications of text mining have been used broadly in fields such as multimedia, biomedicine, patent analysis, anti-spam filtering of emails, linguistic profiling and opinion mining. To extract useful patterns from text, various tasks such as text preprocessing, feature extraction, pattern discovery and evaluation are performed on it. The proposed work develops an efficient and effective classification algorithm for textual databases. This algorithm helps to evaluate the descriptive type answers collected from learners and also eliminates the discrepancy of manual evaluation. The implemented framework preprocesses the documents in two steps. Initially, the documents are pruned and stemmed to moderate their size. Several feature extraction methods have also been analysed and implemented. The existing feature extraction method, Term Frequency-Inverse Document Frequency (TF-IDF), assigns weight to a term based on its occurrence, whereas the modified TF-IDF (M-TF-IDF) assigns weight based on both the occurrence and the importance of the term in the document. This weighting scheme is used to increase the accuracy of the classification algorithm, but it does not consider the semantic similarity of terms. Hence, Latent Semantic Analysis (LSA) is used to select terms based on semantic similarity. The combination of M-TF-IDF and LSA assigns weight to terms based on their importance and the semantic similarity between them. The Support Vector Machine (SVM) algorithm classifies text documents and depends on kernel functions and a cost parameter. The proposed work introduces a cosine similarity function as the decision-making function. The implemented framework, Cosine-SVM (CSVM), classifies new test data in three steps. First, the cosine similarity is calculated between each group of support vectors and the new test data. Then, the average similarity per group is computed and compared. If the new test data has the highest similarity with any one group of support vectors, the label of that group is assigned to the test data. The present work effectively and efficiently classifies the benchmark dataset and has therefore also been used to evaluate descriptive type answers written by learners. This method has a number of benefits, such as increased reliability of results, reduced time and effort, a reduced burden on faculty and efficient use of resources
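
    As a minimal sketch of the pipeline the abstract walks through (a tiny invented corpus stands in for the benchmark dataset, plain TF-IDF is used in place of the paper's M-TF-IDF weighting, and the LSA dimensionality and labels are assumptions):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import SVC
from sklearn.metrics.pairwise import cosine_similarity

# Tiny illustrative corpus of learners' answers, labelled 1 (acceptable) / 0 (off topic).
answers = [
    "photosynthesis converts light energy into chemical energy in plants",
    "plants use sunlight water and carbon dioxide to make glucose",
    "light energy is stored as chemical energy during photosynthesis",
    "chlorophyll absorbs light so the plant can produce glucose and oxygen",
    "the capital of france is paris",
    "football is played with eleven players on each side",
    "the printer is out of ink again",
    "my favourite meal is pasta with tomato sauce",
]
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])

# TF-IDF weighting followed by LSA (the paper combines a modified TF-IDF with LSA).
tfidf = TfidfVectorizer(stop_words="english")
lsa = TruncatedSVD(n_components=2, random_state=0)
X = lsa.fit_transform(tfidf.fit_transform(answers))

# An ordinary SVM supplies the support vectors; the decision step below replaces the
# usual kernel decision function with the average cosine similarity per class.
svc = SVC(kernel="linear").fit(X, labels)
sv_labels = labels[svc.support_]          # class of each support vector

def cosine_svm_predict(text):
    """Assign the class whose support vectors are, on average, most cosine-similar to the text."""
    v = lsa.transform(tfidf.transform([text]))
    sims = cosine_similarity(v, svc.support_vectors_).ravel()
    return max(np.unique(sv_labels), key=lambda c: sims[sv_labels == c].mean())

print(cosine_svm_predict("plants turn light into chemical energy using chlorophyll"))
```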

    FIXED TIMEOUT TECHNIQUE (FTOT) FOR MOBILE DATABASE and GRID

    No full text