    Watcher: Cloud-Based Coding Activity Tracker for Fair Evaluation of Programming Assignments

    Online learning has made it possible to attend programming classes without the constraint that all students be gathered in a classroom. However, it has also made it easier for students to cheat on assignments, so a system is needed to detect and deter such cheating. This study presents Watcher, an automated cloud-based software platform for impartial and convenient online hands-on programming education. The primary features of Watcher are as follows. First, Watcher offers a web-based integrated development environment (Web-IDE) that allows students to start programming immediately, without additional installation or configuration. Second, Watcher automatically collects and monitors the coding activity of students in real time. Because Watcher provides the history of this coding activity to instructors as log files, instructors can investigate suspicious activity such as plagiarism, even in short source code. Third, Watcher provides facilities for remotely managing and evaluating students' hands-on programming assignments. We evaluated Watcher in a Unix system programming class with 96 students. The results show that Watcher improves the quality of the coding experience for students through its Web-IDE and offers instructors valuable data for analyzing the coding activities of individual students.
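
    The abstract does not describe Watcher's logging mechanism in detail. As a minimal, hypothetical sketch of the underlying idea, the following Python script polls a per-student workspace and appends timestamped file events to a log that an instructor could later inspect (all paths and names are invented for illustration):

        import os
        import time
        from datetime import datetime, timezone

        WORKSPACE = "/tmp/student42/workspace"    # hypothetical per-student directory
        LOG_PATH = "/tmp/student42/activity.log"  # hypothetical activity log

        def snapshot(root):
            """Map every file under root to its (size, mtime) pair."""
            state = {}
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        st = os.stat(path)
                        state[path] = (st.st_size, st.st_mtime)
                    except OSError:
                        pass  # file vanished between walk and stat
            return state

        def watch(interval=1.0):
            """Append one log line per created, deleted, or modified file."""
            before = snapshot(WORKSPACE)
            while True:
                time.sleep(interval)
                after = snapshot(WORKSPACE)
                now = datetime.now(timezone.utc).isoformat()
                with open(LOG_PATH, "a") as log:
                    for path in after.keys() - before.keys():
                        log.write(f"{now} CREATE {path}\n")
                    for path in before.keys() - after.keys():
                        log.write(f"{now} DELETE {path}\n")
                    for path in after.keys() & before.keys():
                        if after[path] != before[path]:
                            log.write(f"{now} MODIFY {path}\n")
                before = after

        if __name__ == "__main__":
            watch()

    A real deployment would more likely use filesystem notifications (e.g., inotify) than polling, but the log conveys the kind of evidence trail the abstract describes.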

    The Impact of Container Virtualization on Network Performance of IoT Devices

    Container-based virtualization offers advantages such as high performance, resource efficiency, and an agile environment, which make Internet of Things (IoT) device management easy. Although container-based virtualization has already been introduced to IoT devices, the different network modes of containers and their performance issues have not been addressed. Since network performance is an important factor in IoT, analysis of container network performance is essential. In this study, we analyze the network performance of containers on an IoT device, the Raspberry Pi 3. The results show that the network performance of containers is lower than that of native Linux, with average performance differences of 6% for TCP and 18% for UDP. In addition, the network performance of containers varies with the network mode: when a single container runs, bridge mode achieves 25% higher performance than host mode, whereas in a multi-container environment host mode outperforms bridge mode by 45%.
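
    The abstract does not detail the measurement setup beyond TCP and UDP on a Raspberry Pi 3. A rough way to reproduce the bridge-versus-host comparison is to run an iperf3 client inside a container under each network mode; the sketch below assumes a reachable iperf3 server and a container image bundling iperf3 (both are placeholders):

        import json
        import subprocess

        IPERF_SERVER = "192.168.0.10"   # placeholder: iperf3 server on the LAN
        IMAGE = "networkstatic/iperf3"  # placeholder: any image with iperf3

        def run_iperf(network_mode):
            """Run an iperf3 TCP client inside a container; return Mbit/s."""
            cmd = [
                "docker", "run", "--rm", "--network", network_mode, IMAGE,
                "-c", IPERF_SERVER, "-t", "10", "-J",  # 10 s run, JSON output
            ]
            out = subprocess.run(cmd, capture_output=True, text=True, check=True)
            report = json.loads(out.stdout)
            return report["end"]["sum_received"]["bits_per_second"] / 1e6

        if __name__ == "__main__":
            for mode in ("bridge", "host"):
                print(f"{mode:6s}: {run_iperf(mode):8.1f} Mbit/s")

    Adding the "-u" flag (with a target bandwidth) would yield the corresponding UDP figures.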

    Enhancing the Isolation and Performance of Control Planes for Fog Computing

    Fog computing, which places computing resources close to IoT devices, can offer low-latency data processing for IoT applications. With software-defined networking (SDN), fog computing can make network control logic programmable and run it on a decoupled control plane rather than on a physical switch; network switches are then controlled via the control plane. However, existing control planes have limitations in providing isolation and high performance, both of which are crucial for supporting multi-tenancy and scalability in fog computing. In this paper, we present optimization techniques for Linux that provide isolation and high performance for the SDN control plane: (1) separate execution environment (SE2), which separates the execution environments of multiple control planes, and (2) separate packet processing (SP2), which reduces the complexity of the existing network stack in Linux. We evaluate the proposed techniques on commodity hardware and show that the maximum performance of a control plane increases fourfold compared to native Linux while providing strong isolation.
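
    The abstract does not spell out how SE2 separates execution environments. One standard Linux building block for this kind of isolation is network namespaces; the hypothetical sketch below gives each control plane its own namespace and virtual link, so one controller cannot observe or disturb another's traffic (names and addresses are invented; requires root):

        import subprocess

        def sh(cmd):
            """Run a shell command, echoing it first."""
            print("+", cmd)
            subprocess.run(cmd, shell=True, check=True)

        for i, ns in enumerate(("ctrl0", "ctrl1")):
            sh(f"ip netns add {ns}")
            # veth pair: one end stays on the host, the peer moves into ns
            sh(f"ip link add veth-{ns} type veth peer name eth0 netns {ns}")
            sh(f"ip netns exec {ns} ip addr add 10.0.{i}.1/24 dev eth0")
            sh(f"ip netns exec {ns} ip link set eth0 up")
            sh(f"ip link set veth-{ns} up")

        # Each SDN controller can then be launched in its own namespace:
        #   ip netns exec ctrl0 <controller command>

    This illustrates only the separation aspect; SP2 additionally streamlines the kernel's packet-processing path, which has no short user-level equivalent.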

    ANCS: Achieving QoS through Dynamic Allocation of Network Resources in Virtualized Clouds

    To meet the various requirements of cloud computing users, research on guaranteeing Quality of Service (QoS) is gaining widespread attention in the field of cloud computing. However, as cloud computing platforms adopt virtualization as an enabling technology, it becomes challenging to distribute system resources to each user according to their diverse requirements. Although ample research has been conducted to meet QoS requirements, existing solutions lack simultaneous support for multiple policies, degrade the aggregate throughput of network resources, and incur CPU overhead. In this paper, we propose a new mechanism, called ANCS (Advanced Network Credit Scheduler), that guarantees QoS through dynamic allocation of network resources in virtualized clouds. To meet the various network demands of cloud users, ANCS concurrently provides multiple performance policies: weight-based proportional sharing, minimum bandwidth reservation, and maximum bandwidth limitation. In addition, ANCS implements an efficient work-conserving scheduling method that maximizes network resource utilization. Finally, ANCS achieves low CPU overhead through its lightweight design, which is important for practical deployment.
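
    The exact credit-allocation algorithm is not given in the abstract. The sketch below shows one plausible way to combine the three policies: reserve each VM's minimum first, then distribute the remainder by weight, re-offering whatever capped VMs cannot use so the link stays fully utilized (work conserving). All names and numbers are illustrative:

        from dataclasses import dataclass

        @dataclass
        class VM:
            name: str
            weight: int                     # proportional-share weight
            min_mbps: float = 0.0           # minimum bandwidth reservation
            max_mbps: float = float("inf")  # maximum bandwidth limitation
            alloc: float = 0.0              # computed allocation

        def allocate(vms, link_mbps):
            """Split link_mbps across VMs by weight, honoring min/max."""
            for vm in vms:                  # pass 1: reserved minimums
                vm.alloc = min(vm.min_mbps, vm.max_mbps)
            remaining = link_mbps - sum(vm.alloc for vm in vms)
            active = [vm for vm in vms if vm.alloc < vm.max_mbps]
            while active and remaining > 1e-9:  # pass 2: weighted shares
                total_w = sum(vm.weight for vm in active)
                leftover, still = 0.0, []
                for vm in active:
                    share = remaining * vm.weight / total_w
                    take = min(share, vm.max_mbps - vm.alloc)
                    vm.alloc += take
                    leftover += share - take    # returned by capped VMs
                    if vm.alloc < vm.max_mbps:
                        still.append(vm)
                remaining, active = leftover, still
            return vms

        if __name__ == "__main__":
            vms = [VM("vm1", weight=3),
                   VM("vm2", weight=1, min_mbps=200),
                   VM("vm3", weight=1, max_mbps=100)]
            for vm in allocate(vms, link_mbps=1000):
                print(f"{vm.name}: {vm.alloc:.0f} Mbit/s")

    On a 1 Gbit/s link this prints 525, 375, and 100 Mbit/s: vm3 is capped at its limit and its surplus flows back to the other VMs.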

    NetAP-ML: Machine Learning-Assisted Adaptive Polling Technique for Virtualized IoT Devices

    To maximize the performance of IoT devices in edge computing, an adaptive polling technique is required that efficiently and accurately searches for the workload-optimized polling interval. In this paper, we propose NetAP-ML, which uses machine learning to shrink the search space for finding an optimal polling interval. NetAP-ML minimizes performance degradation during the search and finds a more accurate polling interval with the random forest regression algorithm. We implement and evaluate NetAP-ML in a Linux system. Our experimental setup consists of a varying number of virtual machines (2–4) and threads (1–5). We demonstrate that NetAP-ML provides up to 23% higher bandwidth than the state-of-the-art technique.
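
    The abstract identifies random forest regression as the model but not the feature set. A minimal sketch of the idea, with synthetic data standing in for real measurements: train a regressor that predicts bandwidth from (VM count, thread count, polling interval), then measure only the few intervals the model ranks highest, which is how the search space shrinks:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # Synthetic history: (num_vms, num_threads, polling_interval_us)
        # with a made-up bandwidth response peaking near 300 us.
        X = rng.uniform([2, 1, 10], [4, 5, 1000], size=(500, 3))
        y = 100 + 50 * X[:, 0] + 20 * X[:, 1] - 0.05 * np.abs(X[:, 2] - 300)

        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X, y)

        def candidate_intervals(num_vms, num_threads, top_k=5):
            """Return the top_k intervals by predicted bandwidth; only
            these few candidates then need to be measured online."""
            grid = np.arange(10, 1001, 10, dtype=float)
            feats = np.column_stack([
                np.full_like(grid, num_vms),
                np.full_like(grid, num_threads),
                grid,
            ])
            preds = model.predict(feats)
            return grid[np.argsort(preds)[::-1][:top_k]]

        print(candidate_intervals(num_vms=3, num_threads=2))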

    qCon: QoS-Aware Network Resource Management for Fog Computing

    Fog computing is a new computing paradigm that employs computation and network resources at the edge of a network to build small clouds, which act as small data centers. In fog computing, lightweight virtualization (e.g., containers) has been widely used to achieve low overhead on performance-limited fog devices such as WiFi access points (APs) and set-top boxes. Unfortunately, containers are weak at controlling network bandwidth for outbound traffic, which poses a challenge to fog computing. Existing solutions for containers fail to achieve desirable network bandwidth control, causing bandwidth-sensitive applications to suffer unacceptable network performance. In this paper, we propose qCon, a QoS-aware network resource management framework that limits the rate of containers' outbound traffic in fog computing. qCon aims to provide both proportional share scheduling and bandwidth shaping to satisfy various performance demands from containers while remaining a lightweight framework. To this end, qCon supports three scheduling policies that can be applied to containers simultaneously: proportional share scheduling, minimum bandwidth reservation, and maximum bandwidth limitation. For a lightweight implementation, qCon builds its own scheduling framework on the Linux bridge by interposing qCon's scheduling interface on the frame-processing function of the bridge. To show qCon's effectiveness in a real fog computing environment, we implement qCon in a Docker container infrastructure on a performance-limited fog device, a Raspberry Pi 3 Model B board.
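
    qCon itself hooks the bridge's frame-processing path inside the kernel, which cannot be reproduced in a few lines. Its maximum-bandwidth-limitation policy, however, is essentially a token bucket per container port, which the following user-level sketch illustrates (container names and rates are arbitrary):

        import time

        class TokenBucket:
            """Admit a frame only if enough byte-tokens have accrued."""
            def __init__(self, rate_mbps, burst_bytes=64 * 1024):
                self.rate = rate_mbps * 1e6 / 8  # bytes per second
                self.burst = burst_bytes
                self.tokens = float(burst_bytes)
                self.stamp = time.monotonic()

            def admit(self, frame_len):
                now = time.monotonic()
                self.tokens = min(self.burst,
                                  self.tokens + (now - self.stamp) * self.rate)
                self.stamp = now
                if self.tokens >= frame_len:
                    self.tokens -= frame_len
                    return True   # forward the frame
                return False      # drop or queue the frame

        # One shaper per container attached to the bridge:
        shapers = {"web": TokenBucket(rate_mbps=50),
                   "db": TokenBucket(rate_mbps=10)}

        def on_frame(container, frame_len):
            return shapers[container].admit(frame_len)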

    Monolithic 1 × 8 DWDM Silicon Optical Transmitter Using an Arrayed-Waveguide Grating and Electro-Absorption Modulators for Switch Fabrics in Intra-Data-Center Interconnects

    In this study, we propose an eight-channel monolithic optical transmitter using silicon electro-absorption modulators (EAMs) based on free-carrier injection by Schottky junctions. The transmitter consists of a 1 × 8 silicon arrayed-waveguide grating (AWG) and eight 500-μm-long EAMs on a 5.41 × 2.84 mm² footprint. It generates eight-channel dense wavelength-division multiplexing (DWDM) outputs with 1.33 nm channel spacing (Δλ) in the C-band from a single broadband light source and modulates each channel with over 3 dB modulation depth at 6 V peak-to-peak. The experimental results demonstrate the feasibility of a homogeneous silicon DWDM transmitter with a single light source for switch fabrics in intra-data-center interconnects, offering greater complementary metal–oxide–semiconductor (CMOS) compatibility than heterogeneous integration.
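
    As a back-of-the-envelope check on the channel plan (taking a nominal C-band wavelength of 1550 nm, which the abstract does not state explicitly), the 1.33 nm spacing corresponds to roughly a 166 GHz frequency grid, and the eight channels span about 9.3 nm:

        \Delta f \approx \frac{c\,\Delta\lambda}{\lambda^{2}}
                 = \frac{(3\times 10^{8}\,\mathrm{m/s})(1.33\,\mathrm{nm})}{(1550\,\mathrm{nm})^{2}}
                 \approx 166\ \mathrm{GHz},
        \qquad
        \lambda_{8}-\lambda_{1} = (8-1)\times 1.33\ \mathrm{nm} \approx 9.3\ \mathrm{nm}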

    Resource Analysis of Blockchain Consensus Algorithms in Hyperledger Fabric

    In a blockchain network, the consensus algorithm is used to tolerate node faults while preserving data consistency and integrity, so it is vital to all blockchain services. Previous studies on consensus algorithms have the following limitations: (1) no resource consumption analysis was done, (2) performance analysis was not comprehensive in terms of blockchain parameters (e.g., number of orderer nodes, number of fault nodes, batch size, payload size), and (3) practical fault scenarios were not evaluated. In other words, the resource provisioning of consensus algorithms in clouds has not been addressed adequately. As many blockchain services are deployed in the form of blockchain-as-a-service (BaaS), how to provision consensus algorithms becomes a key question to be answered. This study presents a kernel-level analysis of resource consumption and comprehensive performance evaluations of three major consensus algorithms (Kafka, Raft, and PBFT). Our experiments reveal that resource consumption differs by up to seven times across algorithms, which demonstrates the importance of proper resource provisioning for BaaS.
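
    The paper's kernel-level measurement method is not described in the abstract. As a rough user-level approximation, one can sample each consensus node's CPU time and resident memory from /proc (the PIDs below are placeholders for the orderer processes):

        import os
        import time

        CLK_TCK = os.sysconf("SC_CLK_TCK")
        PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")

        def cpu_seconds(pid):
            """User + system CPU time of a process, in seconds."""
            with open(f"/proc/{pid}/stat") as f:
                # Skip past the ')' ending the command name; utime and
                # stime are then the 12th and 13th remaining fields.
                rest = f.read().rsplit(")", 1)[1].split()
            return (int(rest[11]) + int(rest[12])) / CLK_TCK

        def rss_bytes(pid):
            """Resident set size of a process, in bytes."""
            with open(f"/proc/{pid}/statm") as f:
                return int(f.read().split()[1]) * PAGE_SIZE

        def sample(pids, interval=5.0):
            """Print CPU utilization and memory for each node."""
            prev = {pid: cpu_seconds(pid) for pid in pids}
            while True:
                time.sleep(interval)
                for pid in pids:
                    cur = cpu_seconds(pid)
                    util = 100.0 * (cur - prev[pid]) / interval
                    mib = rss_bytes(pid) / 2**20
                    print(f"pid {pid}: {util:5.1f}% CPU, {mib:7.1f} MiB")
                    prev[pid] = cur

        # sample([12345, 12346])  # placeholder orderer PIDs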

    Kafe: Can OS Kernels Forward Packets Fast Enough for Software Routers?
