22 research outputs found

    A Cloud-Oriented Green Computing Architecture for E-Learning Applications

    Get PDF
    Cloud computing is a highly scalable and cost-effective infrastructure for running Web applications. E-learning is one such Web application that has increasingly gained popularity in recent years as a comprehensive medium for global education and training systems. Developing e-Learning applications within the cloud computing environment enables users to access diverse software applications, share data, collaborate more easily, and keep their data safely in the infrastructure. However, the growing demand for Cloud infrastructure has drastically increased the energy consumption of data centers, which has become a critical issue. High energy consumption not only translates into high operational cost, reducing the profit margin of Cloud providers, but also leads to high carbon emissions, which is not environmentally friendly. Hence, energy-efficient solutions are required to minimize the impact of Cloud-Oriented E-Learning on the environment. E-learning methods have drastically changed the educational environment, reduced the use of paper, and ultimately lowered the carbon footprint; e-Learning methodology is therefore an example of Green computing. Thus, this paper proposes a Cloud-Oriented Green Computing Architecture for e-Learning Applications (COGALA). E-Learning applications built on COGALA can lower expenses, reduce energy consumption, and help organizations with limited IT resources deploy and maintain needed software in a timely manner. The paper also discusses the implications of this solution for future research directions to enable Cloud-Oriented Green Computing.

    Design and implementation on a multiprocessor of a distributed simulator for systolic architectures

    No full text
    SIGLE CNRS TD 15042 / INIST-CNRS - Institut de l'Information Scientifique et Technique, FR France

    Exploring the use of syntactic dependency features for document-level sentiment classification

    No full text
    An automatic analysis of product reviews requires deep understanding of natural language text by the machine. The limitation of the bag-of-words (BoW) model is that a large amount of word-relation information from the original sentence is lost and the word order is ignored. Higher-order n-grams also fail to capture long-range dependency relations and word-order information. To address these issues, syntactic features extracted from dependency relations can be used for machine-learning-based document-level sentiment classification. Generalization of syntactic dependency features and negation handling are used to achieve more accurate classification. Further, to reduce the huge dimensionality of the feature space, feature selection methods based on information gain (IG) and weighted frequency and odds (WFO) are used. A supervised feature weighting scheme called delta term frequency-inverse document frequency (delta TF-IDF) is also employed to boost the importance of discriminative features using the observed uneven distribution of features between the two classes. Experimental results show the effectiveness of generalized syntactic dependency features over standard features for sentiment classification using the Boolean multinomial naive Bayes (BMNB) classifier.
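    As a rough illustration of the delta TF-IDF idea mentioned in the abstract, the sketch below weights each term by its in-document frequency times the log ratio of its relative document frequencies in the positive and negative training sets, so class-discriminative terms receive larger-magnitude weights. The function name, smoothing, and exact formulation are assumptions for illustration; the abstract does not give the authors' implementation.

    ```python
    # Illustrative sketch of delta TF-IDF style feature weighting; the names,
    # add-one smoothing, and exact formula are assumptions, not the paper's code.
    import math
    from collections import Counter

    def delta_tfidf(doc_terms, pos_docs, neg_docs):
        """Weight each term in `doc_terms` by term frequency times the log ratio
        of its relative document frequency in the positive vs. negative set."""
        pos_df = Counter(t for d in pos_docs for t in set(d))
        neg_df = Counter(t for d in neg_docs for t in set(d))
        n_pos, n_neg = len(pos_docs), len(neg_docs)
        weights = {}
        for term, count in Counter(doc_terms).items():
            # Add-one smoothing avoids division by zero for unseen terms (assumption).
            p_rate = (pos_df[term] + 1) / (n_pos + 1)
            n_rate = (neg_df[term] + 1) / (n_neg + 1)
            weights[term] = count * math.log2(p_rate / n_rate)
        return weights

    # Toy usage: terms seen mostly in positive reviews get positive weights.
    pos = [["great", "battery", "life"], ["great", "screen"]]
    neg = [["poor", "battery"], ["poor", "build", "quality"]]
    print(delta_tfidf(["great", "battery"], pos, neg))
    ```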

    Fast Linear Adaptive Skipping Training Algorithm for Training Artificial Neural Network

    No full text
    Artificial neural networks have been extensively used as training models for solving pattern recognition tasks. However, training on a very large data set with a complex neural network requires excessively long training time. In this correspondence, a new fast Linear Adaptive Skipping Training (LAST) algorithm for training artificial neural networks (ANN) is introduced. The core idea of this paper is to improve the training speed of an ANN by presenting only the input samples that were not classified correctly in the previous epoch, thereby dynamically reducing the number of input samples presented to the network at every epoch without affecting the network's accuracy. Decreasing the size of the effective training set in this way reduces the training time and hence improves the training speed. The LAST algorithm also determines how many epochs a particular input sample has to skip, depending on the successful classification of that input sample. LAST can be incorporated into any supervised training algorithm. Experimental results show that the training speed attained by the LAST algorithm is considerably higher than that of other conventional training algorithms.
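    The sketch below illustrates the epoch-level skipping idea described in the abstract: samples classified correctly sit out a number of subsequent epochs that grows with their streak of correct classifications, while misclassified samples are trained on every epoch. The linear skip schedule and the perceptron-style base learner are assumptions made only to keep the example self-contained; the abstract does not specify the exact rule or underlying network.

    ```python
    # Illustrative sketch of adaptive sample skipping in the spirit of LAST;
    # the linear skip schedule and perceptron update are assumptions.
    import numpy as np

    def train_with_skipping(X, y, epochs=20, lr=0.1):
        n_samples, n_features = X.shape
        w = np.zeros(n_features)
        b = 0.0
        skip_until = np.zeros(n_samples, dtype=int)  # epoch before which a sample is skipped
        streak = np.zeros(n_samples, dtype=int)      # consecutive correct classifications
        for epoch in range(epochs):
            for i in range(n_samples):
                if epoch < skip_until[i]:
                    continue                          # sample sits out this epoch
                pred = 1 if X[i] @ w + b > 0 else -1
                if pred == y[i]:
                    streak[i] += 1
                    skip_until[i] = epoch + 1 + streak[i]  # skip grows linearly with streak
                else:
                    streak[i] = 0                     # misclassified: seen again next epoch
                    w += lr * y[i] * X[i]
                    b += lr * y[i]
        return w, b

    # Toy usage on a linearly separable problem.
    X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -2.0], [-2.0, -1.5]])
    y = np.array([1, 1, -1, -1])
    print(train_with_skipping(X, y))
    ```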

    New approach to software architecture

    No full text

    Women's role in Kūḍiyāṭṭam

    No full text
    Vandevelde, Iri