44 research outputs found

    A Service Based Adaptive U-Learning System Using UX

    In recent years, traditional development techniques for e-learning systems have evolved to become more convenient and efficient. Two newer technologies in the development of application systems are cloud computing and ubiquitous computing. Cloud computing can support learning-system processes through services, while ubiquitous computing can provide system operation and management via high-performance technical processes and networks. In a cloud computing environment, a learning service application can deliver a business module or process to the user via the Internet. This research focuses on providing course materials and processes by learning unit, using services in a ubiquitous computing environment. We also investigate functions that tailor materials to each user's learning style: we analyzed users' data and characteristics according to their user experience, and then adapted the learning process to fit their learning performance and preferences. Finally, we demonstrate that the proposed system improves learning outcomes for learners compared with existing techniques.

    A Novel Method for Functional Annotation Prediction Based on Combination of Classification Methods

    Automated protein function prediction assigns functions to uncharacterized proteins using computational methods. This technique is useful for automatically assigning gene functional annotations to undefined sequences in next-generation sequencing (NGS) analysis. NGS is a popular research method because high-throughput technologies such as DNA sequencing and microarrays have produced large sets of genes, and these huge numbers of sequences have greatly increased the need for analysis. Previous research has been based on sequence similarity, which is strongly related to functional homology. In contrast, this study designates protein functions by automatically predicting the function of the genome using InterPro (IPR), which can represent the properties of protein families and groups of protein functions. Moreover, we used the Gene Ontology (GO), a controlled vocabulary used to comprehensively describe protein function. To define the relationship between IPR and GO terms, three pattern-recognition techniques were employed under different conditions, such as feature selection and weighted feature values instead of binary ones.
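    The mapping from IPR domains to GO terms with weighted rather than binary features can be illustrated with a minimal co-occurrence model. This is an illustrative sketch, not the paper's three pattern-recognition techniques: the function names are hypothetical, and the normalized co-occurrence weighting merely stands in for the weighted feature values the abstract mentions.

    ```python
    from collections import defaultdict

    def train_ipr_to_go(annotations):
        """Count IPR/GO co-occurrences from (ipr_set, go_set) training pairs."""
        cooc = defaultdict(lambda: defaultdict(int))   # ipr -> go -> count
        ipr_count = defaultdict(int)                   # ipr -> occurrences
        for iprs, gos in annotations:
            for ipr in iprs:
                ipr_count[ipr] += 1
                for go in gos:
                    cooc[ipr][go] += 1
        return cooc, ipr_count

    def predict_go(iprs, cooc, ipr_count, threshold=0.5):
        """Score each GO term by the average co-occurrence frequency of the
        protein's IPR domains (a weighted vote rather than a binary one)."""
        scores = defaultdict(float)
        for ipr in iprs:
            if ipr_count[ipr] == 0:        # unseen domain: no evidence
                continue
            for go, n in cooc[ipr].items():
                scores[go] += n / ipr_count[ipr] / len(iprs)
        return {go for go, s in scores.items() if s >= threshold}
    ```

    In a real pipeline these raw counts would be replaced by trained classifiers with feature selection, as the abstract describes.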

    PoGO: Prediction of Gene Ontology terms for fungal proteins

    BACKGROUND: Automated protein function prediction methods are the only practical approach for assigning functions to genes obtained from model organisms. Many previously reported function annotation methods are of limited utility for fungal protein annotation: they are often trained on only one species, are not available for high-volume data processing, or require data derived from experiments such as microarray analysis. To meet the increasing need for high-throughput, automated annotation of fungal genomes, we have developed a tool for annotating fungal protein sequences with terms from the Gene Ontology. RESULTS: We describe a classifier called PoGO (Prediction of Gene Ontology terms) that uses statistical pattern-recognition methods to assign Gene Ontology (GO) terms to proteins from filamentous fungi. PoGO is organized as a meta-classifier in which each evidence source (sequence similarity, protein domains, protein structure, and biochemical properties) is used to train an independent base-level classifier. The outputs of the base classifiers are used to train a meta-classifier, which provides the final assignment of GO terms. An independent classifier is trained for each GO term, making the system amenable to updating without re-training the whole system. The resulting system is robust: it provides better accuracy and can assign GO terms to a higher percentage of unannotated protein sequences than the other methods we tested. CONCLUSIONS: Our annotation system overcomes many of the shortcomings that we found in other methods. We also provide a web server where users can submit protein sequences to be annotated.
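    The meta-classifier architecture can be sketched in miniature: one independent stacked classifier per GO term, with a base learner per evidence source feeding a second-level combiner. This is a simplified stand-in, not PoGO itself — the base learners are arbitrary scoring callables, and the meta level here is an accuracy-weighted vote rather than a trained meta-model.

    ```python
    class StackedGoClassifier:
        """One stacked classifier per GO term, mirroring the per-term design
        that lets individual terms be re-trained without touching the rest."""

        def __init__(self, base_learners):
            # base_learners: evidence-source name -> callable(features) -> score in [0, 1]
            self.base_learners = base_learners
            self.weights = {name: 1.0 for name in base_learners}

        def fit(self, samples):
            """samples: list of (features, label) with label in {0, 1}.
            Weight each base learner by its training accuracy."""
            for name, learner in self.base_learners.items():
                correct = sum((learner(f) >= 0.5) == bool(y) for f, y in samples)
                self.weights[name] = correct / len(samples)

        def predict(self, features):
            """Meta level: accuracy-weighted vote over base-learner scores."""
            total = sum(self.weights.values())
            score = sum(w * self.base_learners[n](features)
                        for n, w in self.weights.items())
            return score / total >= 0.5
    ```

    A learner that is unreliable on the training data (here, the hypothetical "dom" evidence source) is automatically down-weighted, so the ensemble follows the evidence sources that actually discriminate.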

    Numeric Analysis for Relationship-Aware Scalable Streaming Scheme

    Frequent packet loss of media data is a critical problem that degrades the quality of streaming services over mobile networks. Packet loss invalidates frames containing lost packets and other related frames at the same time, and this indirect loss decreases the quality of streaming. A scalable streaming service can decrease the amount of multimedia data dropped as a result of a single packet loss. Content providers typically divide one large media stream into several layers and provide each scalable layer to the user depending on the mobile network. A scalable streaming service also makes it possible to decode partial multimedia data depending on the relationships between frames and layers, and thus provides a way to reduce wasted multimedia data when a packet is lost. However, the hierarchical structure between frames and layers of scalable streams determines the service quality: even if all packets of a layer are transmitted successfully, the layer cannot be decoded in the absence of its reference frames and layers. A complicated relationship between frames and layers therefore increases the volume of abandoned layers. To provide a high-quality scalable streaming service, both the relationship between scalable layers and the amount of transmitted multimedia data must be chosen according to the network situation. We prove that a simple scalable scheme outperforms a complicated scheme in an error-prone network. We propose an adaptive set-top box (AdaptiveSTB) to lower the dependency between scalable layers in a scalable stream, and we provide a numerical model for the indirect loss of multimedia data, which we apply to various multimedia streams. Our AdaptiveSTB enhances the quality of a scalable streaming service by removing indirect loss.
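    The notion of indirect loss can be sketched as a walk over the layer-dependency graph: losing one layer invalidates every layer that transitively references it. The child-to-parent `deps` map and the byte sizes below are illustrative assumptions, not the paper's numerical model.

    ```python
    from collections import defaultdict

    def indirect_loss(deps, sizes, lost_layer):
        """deps: child layer -> parent layer it references; sizes: layer -> bytes.
        Returns the bytes of otherwise-received layers that become undecodable
        when lost_layer is dropped (the 'indirect loss')."""
        children = defaultdict(list)
        for child, parent in deps.items():
            children[parent].append(child)
        wasted, seen, stack = 0, set(), [lost_layer]
        while stack:
            layer = stack.pop()
            for c in children[layer]:
                if c not in seen:
                    seen.add(c)
                    wasted += sizes[c]
                    stack.append(c)
        return wasted
    ```

    Comparing a deep dependency chain with a flat scheme (every enhancement layer referencing only the base layer) reproduces the abstract's point: the simpler relationship wastes less data per packet loss.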

    An Optimized Prediction Model Based on Feature Probability for Functional Identification of Large-Scale Ubiquitous Data

    Recently, there has been growing interest in sequence analysis. In particular, the next-generation sequencing (NGS) technique fragments the base sequence and analyzes its functions; its essential role is to assemble pieces of the base sequence and to define their functions. The assembly of unarranged sequence fragments is an active research area, and the automatic definition of gene function is a popular research topic. Previous studies on automatic gene function annotation have mainly used methods that define protein functions from base-sequence similarity, public databases, protein interactions, or context-free methods. This study aims to predict the functional category of proteins whose function is undefined by learning automatically with GO and extracting the characteristics of proteins within a cluster. We perform clustering using protein interactions generated from base-sequence similarities, under the assumption that proteins within the same cluster have similar functions. The proposed method searches for the option values that yield the best GO prediction, which classifies functions based on the IPR terms and keywords within the same cluster as distinctive features, and then reports the optimized result for those options.

    Exploiting Maximum Value and Energy Awareness in Real-Time Scheduling

    The power budget of electronic devices is limited by rechargeable electrochemical batteries, so energy awareness combined with high performance has become an important issue for embedded systems. We present a scheduling algorithm that maximizes the value obtained from multiple task sets by adapting dynamically at run time to cope with changes in the execution environment. To apply the scheduling algorithm to each task set as it leaves the queue, we modified the REW-pack and REW-unpack algorithms, and we also investigated an admission algorithm that decides whether a new task set is accepted into the system.
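    The run-time admission decision can be illustrated with a greedy value-density policy under an energy budget. This is a simplified stand-in for the modified REW-pack/REW-unpack and admission algorithms, not the paper's method; the task-set tuple layout and the single-budget model are assumptions.

    ```python
    def admit_task_sets(task_sets, energy_budget):
        """Greedy admission by value density (value per unit of energy).
        Each task set is (name, value, energy); a set is admitted only if
        its energy cost still fits within the remaining budget."""
        order = sorted(task_sets, key=lambda t: t[1] / t[2], reverse=True)
        admitted, used, total_value = [], 0.0, 0.0
        for name, value, energy in order:
            if used + energy <= energy_budget:
                admitted.append(name)
                used += energy
                total_value += value
        return admitted, total_value
    ```

    With task sets A(value 10, energy 5), B(6, 2), and C(4, 4) under a budget of 7, the policy admits B and then A for a total value of 16, rejecting C; a real scheduler would additionally check timing feasibility at each admission.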

    Asymmetric Block Design-Based Neighbor Discovery Protocol in Sensor Networks

    Neighbor discovery is one of the emerging research areas in wireless sensor networks. After sensors are distributed, neighbor discovery is the first process in setting up a communication channel with neighboring sensors. This paper proposes a new block design–based asymmetric neighbor discovery protocol for sensor networks, borrowing the concept of combinatorial block designs for its block combination scheme. First, we introduce the asymmetric neighbor discovery problem and define the target research question. Second, we propose a new asymmetric block design–based neighbor discovery protocol and explain how it works. Third, we numerically analyze the worst-case neighbor discovery latency of our protocol and of well-known protocols in the literature, and compare and evaluate their performance. Our analysis reveals that the worst-case latency of our protocol is much lower than that of Disco and U-Connect. Finally, we conclude that the minimum number of slots per neighbor schedule yields the lowest discovery time in terms of discovery latency and energy consumption.
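    The role of block designs in bounding discovery latency can be sketched with cyclic difference sets as wake-up schedules. The (7,3,1) and (13,4,1) difference sets below are standard textbook examples, not the paper's block combination scheme; the asymmetric case simply gives the two nodes different cycle lengths and duty cycles.

    ```python
    def schedule(block, cycle):
        """Awake-slot predicate: a node is awake in slot t iff t mod cycle
        falls in its block (here, a cyclic difference set)."""
        return lambda t: (t % cycle) in block

    def worst_case_latency(block_a, cycle_a, block_b, cycle_b):
        """Worst, over all clock offsets, of the first slot in which both
        nodes are awake simultaneously (the discovery slot)."""
        a = schedule(block_a, cycle_a)
        b = schedule(block_b, cycle_b)
        horizon = cycle_a * cycle_b
        worst = 0
        for offset in range(horizon):
            first = next(t for t in range(2 * horizon) if a(t) and b(t + offset))
            worst = max(worst, first)
        return worst
    ```

    The difference-set property guarantees that any two rotations of the (7,3,1) block {0, 1, 3} overlap, so two symmetric nodes discover each other within one 7-slot cycle regardless of clock offset; with coprime cycles of 7 and 13 slots, discovery is guaranteed within one 91-slot hyperperiod.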

    Neighbor Discovery Optimization for Big Data Analysis in Low-Power, Low-Cost Communication Networks

    Big data analysis generally consists of gathering and processing raw data and producing meaningful information from it. These days, large collections of sensors, smart phones, and electronic devices are all connected to the network, and their primary features are low power consumption and low cost. Power consumption is an important research concern in low-power, low-cost communication networks such as sensor networks. A primary feature of sensor networks is that they are distributed, autonomous systems: all network devices maintain connectivity by themselves using limited energy resources. When the devices are deployed in the area of interest, the first step, neighbor discovery, is to identify neighboring nodes for connection and communication. Most wireless sensors save power by alternately switching the radio on and off, so neighbor discovery becomes a power-consuming task when two neighboring nodes do not know when their partner wakes and sleeps. In this paper, we consider optimizing neighbor discovery to reduce power consumption in wireless sensor networks and propose an energy-efficient neighbor discovery scheme that adapts symmetric block designs, combines blocks, and activates nodes based on the multiples of a specific number. The performance evaluation, based on a numerical analysis of wasted awakening slots, demonstrates that the proposed neighbor discovery algorithm outperforms other competitive approaches.
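    The "wasted awakening slots" metric can be made concrete with a toy version of the multiples-based activation rule: each node wakes in slots that are multiples of its own number, and every awake slot before the first simultaneous wake-up is wasted energy. The function names and the choice of coprime numbers 3 and 5 are illustrative assumptions, not the paper's scheme.

    ```python
    def prime_schedule(p):
        """Activation rule from 'multiples of a specific number':
        a node is awake in slot t iff t is a multiple of p."""
        return lambda t: t % p == 0

    def wasted_slots(p_a, p_b, offset):
        """Simulate two nodes with activation numbers p_a, p_b and a clock
        offset; return (discovery slot, total awake slots spent before it).
        Coprime activation numbers guarantee discovery within p_a * p_b
        slots, by the Chinese remainder theorem."""
        a, b = prime_schedule(p_a), prime_schedule(p_b)
        wasted_a = wasted_b = 0
        for t in range(p_a * p_b):
            if a(t) and b(t + offset):
                return t, wasted_a + wasted_b
            wasted_a += a(t)
            wasted_b += b(t + offset)
        return None  # unreachable when p_a and p_b are coprime
    ```

    With p_a = 3, p_b = 5, and an offset of one slot, the nodes first coincide in slot 9, having spent four awake slots between them beforehand; optimizing the schedule means driving that wasted count down across all offsets.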

    Comparative Analysis Review of Pioneering DBSCAN and Successive Density-Based Clustering Algorithms

    The density-based spatial clustering of applications with noise (DBSCAN) is regarded as the pioneering algorithm of the density-based clustering technique. It can handle outlier objects, detect clusters of different shapes, and does not require prior knowledge of the clusters in a dataset. These features, along with its simple approach, have made it widely applicable in many areas of science. For all its accolades, however, DBSCAN still has limitations in performance, in its ability to detect clusters of varying densities, and in its dependence on user-supplied input parameters. Multiple DBSCAN-inspired algorithms have subsequently been proposed to alleviate these and other problems. In this paper, the implementation, features, strengths, and drawbacks of DBSCAN are thoroughly examined. The successive algorithms proposed to improve on the original DBSCAN are classified by their motivations and discussed. Experimental tests were conducted to understand and compare the behavior of C++ implementations of these algorithms alongside the original DBSCAN. Finally, an analytical evaluation is presented based on the results.
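    To make the algorithm under review concrete, here is a minimal Python rendering of the original DBSCAN (the survey's experiments used C++; this sketch keeps the same O(n²) region-query structure, with `eps` as the neighborhood radius and `min_pts` as the core-point density threshold, neighborhood counts including the point itself).

    ```python
    def dbscan(points, eps, min_pts):
        """Minimal DBSCAN: returns labels where labels[i] is a cluster id
        (0, 1, ...) or -1 for noise."""
        def neighbors(i):
            # Brute-force region query: all points within eps of points[i].
            return [j for j, q in enumerate(points)
                    if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps * eps]

        labels = [None] * len(points)
        cluster = -1
        for i in range(len(points)):
            if labels[i] is not None:
                continue
            seeds = neighbors(i)
            if len(seeds) < min_pts:
                labels[i] = -1          # noise (may later become a border point)
                continue
            cluster += 1                # i is a core point: start a new cluster
            labels[i] = cluster
            queue = [j for j in seeds if j != i]
            while queue:                # expand the cluster from core points
                j = queue.pop()
                if labels[j] == -1:
                    labels[j] = cluster  # border point: claim it, don't expand
                if labels[j] is not None:
                    continue
                labels[j] = cluster
                j_neighbors = neighbors(j)
                if len(j_neighbors) >= min_pts:
                    queue.extend(j_neighbors)
        return labels
    ```

    This sketch makes the surveyed limitations visible: the nested region queries give quadratic cost without a spatial index, a single global `eps` cannot fit clusters of varying densities, and results hinge entirely on the user's choice of `eps` and `min_pts`.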