54 research outputs found

    Opinion-Mining on Marglish and Devanagari Comments of YouTube Cookery Channels Using Parametric and Non-Parametric Learning Models

    YouTube is a boon: through it, people can educate, entertain, and express themselves on a wide range of topics. YouTube India currently has millions of active users, so the data present on the platform is correspondingly large. India being a very diverse country, many people are multilingual and often express their opinions in code-mix form, the mixing of two or more languages. Sentiment Analysis of code-mix languages has become a necessity, as there is little research on Indian code-mix language data. In this paper, Sentiment Analysis (SA) is carried out on Marglish (Marathi + English) as well as Devanagari Marathi comments extracted via the YouTube API from top Marathi cookery channels. Several machine-learning models are applied to the dataset along with three different vectorization techniques. A Multilayer Perceptron (MLP) with the Count vectorizer provides the best accuracy of 62.68% on the Marglish dataset, and Bernoulli Naïve Bayes with the Count vectorizer gives an accuracy of 60.60% on the Devanagari dataset, making these the best-performing algorithms. 10-fold cross-validation and statistical testing were also carried out on the dataset to confirm the results.
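    The count-vectorizer-plus-MLP pipeline described above can be sketched roughly as follows; this is a minimal illustration with invented toy comments and labels, not the paper's actual dataset or hyperparameters.

```python
# Hedged sketch: MLP classifier over bag-of-words counts for code-mix
# sentiment classification, in the spirit of the pipeline described above.
# The comments and labels below are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

comments = [
    "recipe khup chan hoti",   # hypothetical positive Marglish comment
    "video awadla nahi",       # hypothetical negative comment
    "mast recipe thanks",      # positive
    "time waste zala",         # negative
]
labels = ["pos", "neg", "pos", "neg"]

model = make_pipeline(
    CountVectorizer(),  # token counts (the "Count vectorizer" of the paper)
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
)
model.fit(comments, labels)
pred = model.predict(["mast video"])
print(pred)
```

    In the study itself, such a pipeline would be evaluated with 10-fold cross-validation rather than a single fit.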

    An auction framework for DaaS in cloud computing and its evaluation

    Data-as-a-service (DaaS) is an emerging technology in cloud computing research. Small clouds operating as a group may exploit DaaS efficiently to perform substantial amounts of work. In this paper, an auction framework is studied and evaluated for the case where the small clouds are strategic in nature. We present the system model, a formal definition of the problem, and its experimental evaluation. Several DaaS-based auction mechanisms are proposed, and their correctness and computational complexity are analysed. To the best of our knowledge, this is the first realistic attempt to study DaaS in a strategic setting. We have evaluated the proposed approach under various simulation scenarios to judge its usefulness and efficiency.
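    The paper's specific mechanisms are not reproduced here, but the core idea of an auction among strategic participants can be illustrated with a minimal Vickrey-style reverse auction, a classic device for making truthful bidding a dominant strategy; the cloud identifiers and costs below are invented.

```python
# Hedged sketch: a minimal truthful (second-price) reverse auction in which
# strategic small clouds bid their claimed cost to serve a DaaS request.
# This is a generic textbook mechanism, not the paper's proposed one.
def reverse_vickrey(bids):
    """bids: dict cloud_id -> claimed cost. Returns (winner, payment).
    The lowest bidder wins and is paid the second-lowest bid, so truthful
    cost reporting is a dominant strategy for each strategic cloud."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner, _ = ranked[0]
    payment = ranked[1][1]  # second-lowest claimed cost
    return winner, payment

winner, payment = reverse_vickrey({"c1": 5.0, "c2": 3.0, "c3": 4.0})
print(winner, payment)  # c2 wins and is paid 4.0
```

    Analysing such a mechanism means checking exactly the two properties the abstract mentions: correctness (truthfulness here) and computational complexity (sorting dominates, O(n log n)).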

    Specific Electronic Platform to Test the Influence of Hypervisors on the Performance of Embedded Systems

    Some complex digital circuits must host various operating systems in a single electronic platform to make real-time and non-real-time tasks compatible, or to assign different priorities to current applications. For this purpose, hardware–software techniques called virtualization must be integrated so that the operating systems run independently, as if isolated in different processors: virtual machines. These are monitored and managed by a software tool called a hypervisor, which is in charge of allowing each operating system to take control of the hardware resources. The hypervisor therefore determines the effectiveness of the system when reacting to events. To measure, estimate, or compare the performance of different virtualization configurations, our research team has designed and implemented a specific testbench: an electronic system, based on a complex System on Chip with a processing system and programmable logic, that configures the hardware–software partition and reports figures of merit to evaluate the performance of the different options, a field that has received insufficient attention so far. In this way, the fabric of the Field Programmable Gate Array (FPGA) can be exploited for measurements and instrumentation. The platform has been validated with two hypervisors, Xen and Jailhouse, in a multiprocessor System-on-Chip, by executing real-time operating systems and application programs in different contexts. This work has been supported by the Basque Government within the project HAZITEK ZE-2020/00022, as well as by the Ministerio de Ciencia e Innovación of Spain through the Centro para el Desarrollo Tecnológico Industrial (CDTI) within the project IDI-20201264 and FEDER funds.

    Addressing big data analytics for classification intrusion detection system

    With the rapid development of communication technologies, a large number of trustworthy online systems and facilities have been introduced, while security threats from unauthorized access are on the rise; such threats can be detected by an intrusion detection system. Enhancing the intrusion detection system is thus a main objective of many researchers and developers for monitoring network security. Addressing the challenges of big data in intrusion detection is one issue faced by researchers and developers, owing to the high dimensionality of network data. In this paper, a hybrid model is proposed to handle dimensionality reduction in an intrusion detection system. A genetic algorithm was applied as a preprocessing step to select the most significant features from the entire big network dataset, generating a subset of relevant features and thereby handling the dimensionality reduction. A Support Vector Machine (SVM) then processed the relevant features to detect intrusions. The standard NSL-KDD dataset was used to test the performance of the hybrid model, and standard evaluation metrics were employed to present the results. The empirical results show that the hybrid model outperformed existing systems.
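    The GA-then-SVM pipeline can be sketched as below; this is a deliberately minimal genetic algorithm (random initialization, elitist selection, uniform crossover, bit-flip mutation) run on a synthetic dataset rather than NSL-KDD, with all population sizes and rates chosen arbitrarily for illustration.

```python
# Hedged sketch: genetic-algorithm feature selection followed by an SVM,
# mirroring the hybrid model described above on toy data (not NSL-KDD).
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

random.seed(0)
rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

def fitness(mask):
    """Fitness of a boolean feature mask = cross-validated SVM accuracy."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

pop = [rng.random(20) < 0.5 for _ in range(10)]  # random boolean masks
for _ in range(5):  # a few GA generations
    parents = sorted(pop, key=fitness, reverse=True)[:5]  # keep best half
    children = []
    for _ in range(5):
        a, b = random.sample(range(5), 2)
        cross = rng.random(20) < 0.5                # uniform crossover
        child = np.where(cross, parents[a], parents[b])
        child = np.logical_xor(child, rng.random(20) < 0.05)  # mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(int(best.sum()), fitness(best))  # selected-feature count, CV accuracy
```

    The selected mask plays the role of the "subset of relevant features" fed to the SVM in the paper's pipeline.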

    Class-Level Refactoring Prediction by Ensemble Learning with Various Feature Selection Techniques

    Background: Refactoring is changing a software system without affecting its functionality. Current research aims to identify the appropriate method(s) or class(es) that need to be refactored in object-oriented software. Ensemble learning helps to reduce prediction errors by amalgamating different classifiers and their respective performances over the original feature data. This paper additionally considers several ensemble learners, error measures, sampling techniques, and feature selection techniques for refactoring prediction at the class level. Objective: This work aims to develop an ensemble-based refactoring prediction model with structural identification of source code metrics, using different feature selection techniques and data sampling techniques to distribute the data uniformly. Our model finds the best classifier after achieving fewer errors during refactoring prediction at the class level. Methodology: First, our proposed model extracts a total of 125 software metrics computed from object-oriented software systems, which are processed by a robust multi-phased feature selection method encompassing the Wilcoxon significance test, the Pearson correlation test, and principal component analysis (PCA). The proposed multi-phased feature selection method retains the optimal features characterizing inheritance, size, coupling, cohesion, and complexity. After obtaining the optimal set of software metrics, a novel heterogeneous ensemble classifier is developed using ANN variants (Gradient Descent, Levenberg–Marquardt, GDX, and Radial Basis Function), least-squares support vector machines with different kernel functions (LSSVM-Linear, LSSVM-Polynomial, and LSSVM-RBF), the Decision Tree algorithm, the Logistic Regression algorithm, and an extreme learning machine (ELM) model as base classifiers.
    We calculated four different errors: Mean Absolute Error (MAE), Mean Magnitude of Relative Error (MORE), Root Mean Square Error (RMSE), and Standard Error of the Mean (SEM). Result: In our proposed model, the maximum voting ensemble (MVE) achieves better accuracy, recall, precision, and F-measure values (99.76, 99.93, 98.96, and 98.44, respectively) than the base trained ensemble (BTE), and it exhibits lower errors (MAE = 0.0057, MORE = 0.0701, RMSE = 0.0068, and SEM = 0.0107) during its implementation in the refactoring model. Conclusions: Our experimental results suggest that MVE with upsampling can be implemented to improve the performance of the refactoring prediction model at the class level. Furthermore, the performance of our model with different data sampling techniques and feature selection techniques is shown in the form of boxplot diagrams of accuracy, F-measure, precision, recall, and area under the curve (AUC).
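    A maximum-voting ensemble of heterogeneous base classifiers, analogous in spirit to the MVE above, can be sketched with scikit-learn stand-ins on synthetic data; the paper's actual base learners are ANN/LSSVM/ELM variants, which are replaced here with readily available equivalents.

```python
# Hedged sketch: hard (majority/"maximum") voting over heterogeneous base
# classifiers, illustrating the MVE idea on toy data. The base learners are
# generic sklearn stand-ins, not the paper's ANN/LSSVM/ELM configurations.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

mve = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("svm", SVC()),
        ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
    ],
    voting="hard",  # each base classifier casts one vote per sample
)
mve.fit(Xtr, ytr)
acc = mve.score(Xte, yte)
print(round(acc, 3))
```

    The same scaffold extends naturally to the upsampling and feature-selection variants the paper compares.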

    Intelligent Algorithm for Enhancing MPEG-DASH QoE in eMBMS

    Multimedia streaming is the most demanding and bandwidth-hungry application on today's Internet. MPEG-DASH is a video technology standard designed for delivering live or on-demand streams over the Internet with the best quality content, the fewest dropouts, and the least possible buffering. The hybrid architecture of DASH and eMBMS has attracted great attention from the telecommunication industry and multimedia services, and it is deployed in response to the immense demand for multimedia traffic. However, handover and the limited available resources of the system cause segments of the adaptive video stream to be dropped in eMBMS, which adversely impacts Quality of Experience (QoE) and creates trouble for service providers and network providers in delivering the service. In this paper, we present a case study evaluating MPEG-DASH QoE in eMBMS: we define the metrics that influence QoE in eMBMS, such as bandwidth and packet loss, and observe objective metrics such as stalling (number, duration, and position), buffer length, and cumulative video time. Moreover, we build a smart algorithm to predict the rate of lost segments in multicast adaptive video streaming; the algorithm makes an estimation-based decision on how to recover the lost segments. According to the results obtained with our proposed algorithm, the rate of lost segments is greatly reduced compared to the traditional MPEG-DASH multicast and unicast approaches for a high number of users. This work has been partially supported by the Postdoctoral Scholarship Contratos Postdoctorales UPV 2014 (PAID-10-14) of the Universitat Politècnica de València, by the Programa para la Formación de Personal Investigador (FPI-2015-S2-884) of the Universitat Politècnica de València, and by the Ministerio de Economía y Competitividad, through the Convocatoria 2014 Proyectos I+D - Programa Estatal de Investigación Científica y Técnica de Excelencia, Subprograma Estatal de Generación de Conocimiento, project TIN2014-57991-C3-1-P, and through the Convocatoria 2017 - Proyectos I+D+I - Programa Estatal de Investigación, Desarrollo e Innovación, convocatoria excelencia (Project TIN2017-84802-C2-1-P). Abdullah, MT.; Jimenez, JM.; Canovas Solbes, A.; Lloret, J. (2017). Intelligent Algorithm for Enhancing MPEG-DASH QoE in eMBMS. Network Protocols and Algorithms. 9(3-4):94-114. https://doi.org/10.5296/npa.v9i3-4.12573
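    One of the objective metrics observed in such studies, stalling, can be computed from segment arrival times with a simple playback-buffer model; the timing numbers below are invented, and this simplified sequential-playback model is an illustration, not the paper's measurement method.

```python
# Hedged sketch: deriving stalling events (number and duration) from segment
# download-completion times, one of the objective QoE metrics mentioned
# above. Assumes strictly sequential playback of fixed-duration segments.
def stall_events(download_times, seg_duration):
    """download_times: completion time of each segment, in order.
    Returns the list of stall durations (empty list = no rebuffering)."""
    stalls = []
    playhead = download_times[0]       # playback starts after segment 0
    for t in download_times[1:]:
        ready = playhead + seg_duration  # buffer runs dry at this instant
        if t > ready:
            stalls.append(t - ready)     # stall until the segment arrives
            playhead = t
        else:
            playhead = ready
    return stalls

# Segment 2 arrives 1 s after the buffer empties -> one 1-second stall.
stalls = stall_events([1.0, 2.0, 6.0, 7.0], seg_duration=2.0)
print(len(stalls), stalls)
```

    Counting these events per user is what lets an algorithm like the paper's quantify the QoE impact of lost multicast segments.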

    A Hybrid Feature Extraction Method With Regularized Extreme Learning Machine for Brain Tumor Classification

    Brain cancer classification is an important step that depends on the physician's knowledge and experience. An automated tumor classification system is essential to support radiologists and physicians in identifying brain tumors; however, the accuracy of current systems needs to be improved for suitable treatments. In this paper, we propose a hybrid feature extraction method with a regularized extreme learning machine (RELM) for developing an accurate brain tumor classification approach. The approach starts by preprocessing the brain images using a min–max normalization rule to enhance the contrast of brain edges and regions. Then, the brain tumor features are extracted with a hybrid feature extraction method. Finally, an RELM classifies the type of brain tumor. To evaluate and compare the proposed approach, a set of experiments was conducted on a new public dataset of brain images. The experimental results show that the approach is more effective than existing state-of-the-art approaches, with classification accuracy improving from 91.51% to 94.233% in the random holdout experiment.
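    The min–max normalization preprocessing step is straightforward to state precisely; the tiny 2×2 array below stands in for a grayscale MRI slice purely for illustration.

```python
# Hedged sketch: the min-max normalization rule used as the preprocessing
# step above, rescaling image intensities to [0, 1] so that edge and region
# contrast spans the full range. The toy array stands in for an MRI slice.
import numpy as np

def min_max_normalize(img):
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)  # (x - min) / (max - min)

img = np.array([[30.0, 60.0],
                [90.0, 150.0]])
norm = min_max_normalize(img)
print(norm)  # [[0.   0.25], [0.5  1.  ]]
```

    A production version would also guard against the degenerate case `hi == lo` (a constant-intensity image).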

    A Study of the Moderating Effect of Respondent Locality on M-Commerce Adoption Intention

    Introduction: The present research was conducted at the University of Delhi in 2018. Problem: With the increase in usage of internet technology through wireless devices, the relevance of m-commerce has amplified. In a developing country like India, the rural and urban populations are not equally divided on the use of m-commerce, and this demands a detailed study. Objective: The study aims to determine the factors that influence customers' m-commerce adoption intention and how the effect varies between rural and urban populations. Methodology: This study combines the TAM and UTAUT models, considering as determinants perceived ease of use, perceived usefulness, perceived risk, perceived cost, social interaction, and facilitating conditions, with intention to adopt m-commerce as the endogenous variable. Results: The results of PLS-SEM supported the hypotheses underlying the model and also validated the moderating role played by a respondent's locality on the intention to adopt m-commerce. Conclusion: The proposed model was validated using the PLS-SEM approach on a sample of 200 respondents collected from the urban and rural areas of Delhi NCR. Moreover, the moderating effect of a respondent's locality on adoption intention was observed. Originality: With the advancement of technological infrastructure and the improvement of mobile data facilities, customers have shown enthusiasm for making online transactions using their phones. The advantage of mobile commerce over computer-based electronic commerce is its mobility. Extant research has studied the adoption intention of mobile commerce based on determinants from the TAM or UTAUT model or their combinations; this study combines both models to choose the determinants of mobile adoption intention. Limitation: Further studies can be conducted by considering other combinations of determinants and extending the model to incorporate loyalty measures.

    Evaluating Latency in Multiprocessing Embedded Systems for the Smart Grid

    Smart grid endpoints need two environments within a processing system (PS): one with a Linux-type operating system (OS) using the Arm Cortex-A53 cores for management tasks, and the other with a standalone execution or a real-time OS using the Arm Cortex-R5 cores. The Xen hypervisor and the OpenAMP framework allow this, but they may introduce a delay into the system, and some messages in the smart grid need a latency lower than 3 ms. In this paper, the Linux thread latencies are characterized using the Cyclictest tool. It is shown that the scenario with the Xen hypervisor is not suitable for the smart grid, as it does not meet the 3 ms timing constraint. Standalone execution as the real-time part is then evaluated by measuring the delay in handling an interrupt created in programmable logic (PL). The standalone application was run on the A53 and R5 cores, with the Xen hypervisor and the OpenAMP framework; all of these scenarios met the 3 ms constraint. The main contribution of the present work is the detailed characterization of each real-time execution, in order to facilitate selecting the most suitable one for each application. This work has been supported by the Ministerio de Economía y Competitividad of Spain within the project TEC2017-84011-R and FEDER funds, as well as by the Department of Education of the Basque Government within the fund for research groups of the Basque university system IT978-16. It has also been supported by the Basque Government within the project HAZITEK ZE-2020/00022, as well as by the Ministerio de Ciencia e Innovación of Spain through the Centro para el Desarrollo Tecnológico Industrial (CDTI) within the project IDI-20201264; in both cases, they have been financed through the Fondo Europeo de Desarrollo Regional 2014-2020 (FEDER funds). It has also been supported by the University of the Basque Country within the scholarship for training of research staff with code PIF20/135.
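    Cyclictest itself is a C tool run from the shell on the target; as a rough illustration of what its measurement loop does, the following pure-Python analogue records the wake-up latency of a periodic sleep. It is only a conceptual sketch of the metric, with none of the real-time scheduling or precision of the actual tool.

```python
# Hedged sketch: a crude pure-Python analogue of the Cyclictest measurement
# idea -- request a fixed-interval sleep and record by how much the actual
# wake-up overshoots it. Not a substitute for the real tool.
import time

def wakeup_latencies(n=200, interval=0.001):
    """Return n wake-up latency samples (seconds) for a periodic sleep."""
    lats = []
    for _ in range(n):
        t0 = time.perf_counter()
        time.sleep(interval)               # ask to wake after `interval`
        lats.append(time.perf_counter() - t0 - interval)  # overshoot
    return lats

lats = wakeup_latencies()
print(max(lats))  # worst-case wake-up latency observed
```

    On the platforms studied above, the worst-case value of such a distribution is what must stay under the 3 ms smart-grid constraint.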

    EOG-Based Human–Computer Interface: 2000–2020 Review

    Electro-oculography (EOG)-based brain-computer interface (BCI) is a relevant technology influencing physical medicine, daily life, gaming, and even aeronautics. EOG-based BCI systems record activity related to users' intention, perception, and motor decisions, convert the bio-physiological signals into commands for external hardware, and execute the operation expected by the user through the output device. The EOG signal is used to identify and classify eye movements through active or passive interaction, and both types of interaction can control the output device, enabling the user's communication with the environment. In the aeronautical field, EOG-BCI systems are being explored as a tool to replace manual commands and as a communication tool that accelerates the execution of the user's intention. This paper reviews the last two decades of EOG-based BCI studies and provides a structured design space with a large set of representative papers. Our purpose is to introduce the existing BCI systems based on EOG signals and to inspire the design of new ones. First, we highlight the basic components of EOG-based BCI studies, including EOG signal acquisition, EOG device particularities, extracted features, translation algorithms, and interaction commands. Second, we provide an overview of EOG-based BCI applications in real and virtual environments, along with the aeronautical application. We conclude with a discussion of the current limits of EOG devices in existing systems, and finally provide suggestions to gain insight for future design inquiries.