
    ACTS in Need: Automatic Configuration Tuning with Scalability Guarantees

    To support the variety of Big Data use cases, many Big Data systems expose a large number of user-specifiable configuration parameters. As highlighted in our experiments, a MySQL deployment with well-tuned configuration parameters achieves a peak throughput 12 times that of one with the default settings. However, finding the best setting for tens or hundreds of configuration parameters is practically impossible for ordinary users. Worse still, many Big Data applications require multiple systems co-deployed in the same cluster. As these co-deployed systems interact and affect overall performance, they must be tuned together. Automatic configuration tuning with scalability guarantees (ACTS) is needed to help system users. Solutions to ACTS must scale to various systems, workloads, deployments, parameters, and resource limits. By proposing and implementing an ACTS solution, we demonstrate that ACTS can benefit users not only by improving system performance and resource utilization, but also by saving costs and enabling fairer benchmarking.

    Evaluation Model for IP Network Routing Decision based on PCA-ANN

    As the act of moving information across an internetwork from a source to a destination, routing is vital to network activity. The IP (Internet Protocol) network is a so-called "best-effort" communication network, in which best-effort delivery is provided for hosts. Accordingly, routing decisions are closely related to a wide variety of network applications. This study presents a PCA ("principal component analysis")-ANN ("artificial neural networks") evaluation model for routing decisions. PCA is used to reduce the dimensionality of the route measurement data, extracting its most informative structure for the ANN to evaluate further. The proposed PCA-based model is evaluated by experiments, and the results show its potential.
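    The PCA dimensionality-reduction step described above can be sketched in plain NumPy. This is an illustrative sketch, not the paper's implementation: the data is synthetic, and the downstream ANN evaluator is omitted.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the k leading principal components."""
    X_centered = X - X.mean(axis=0)            # center each measurement feature
    cov = np.cov(X_centered, rowvar=False)     # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]          # sort by explained variance, descending
    components = eigvecs[:, order[:k]]         # top-k principal directions
    return X_centered @ components

# Synthetic stand-in for "route measurement data": 200 routes, 8 metrics each.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
Z = pca_reduce(X, k=3)
print(Z.shape)   # (200, 3)
```

    The reduced matrix `Z` would then be the input to the ANN evaluator; `pca_reduce` and the data shapes here are assumptions for illustration only.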

    BestConfig: Tapping the Performance Potential of Systems via Automatic Configuration Tuning

    An ever-increasing number of configuration parameters are provided to system users. But many users apply one configuration setting across different workloads, leaving the performance potential of systems untapped. A good configuration setting can greatly improve the performance of a deployed system under certain workloads, but with tens or hundreds of parameters, deciding which configuration setting leads to the best performance becomes a highly costly task. While such a task requires strong expertise in both the system and the application, users commonly lack such expertise. To help users tap the performance potential of systems, we present BestConfig, a system for automatically finding a best configuration setting within a resource limit for a deployed system under a given application workload. BestConfig is designed with an extensible architecture to automate configuration tuning for general systems. To tune system configurations within a resource limit, we propose the divide-and-diverge sampling method and the recursive bound-and-search algorithm. BestConfig can improve the throughput of Tomcat by 75%, that of Cassandra by 63%, and that of MySQL by 430%, and reduce the running time of a Hive join job by about 50% and that of a Spark join job by about 80%, solely through configuration adjustment.
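    A minimal sketch of the divide-and-diverge sampling idea named above, under our own simplifying assumptions (the function name and interval-pairing details are ours, not BestConfig's): each parameter's range is divided into n intervals, and intervals are paired across parameters via independent random permutations so that every interval of every parameter is sampled exactly once.

```python
import random

def divide_and_diverge(param_ranges, n, seed=0):
    """param_ranges: list of (lo, hi) per parameter. Returns n sample points."""
    rng = random.Random(seed)
    samples = [[None] * len(param_ranges) for _ in range(n)]
    for j, (lo, hi) in enumerate(param_ranges):
        width = (hi - lo) / n
        order = list(range(n))
        rng.shuffle(order)                 # "diverge": decouple the parameters
        for i, interval in enumerate(order):
            # pick a random point inside the interval assigned to sample i
            samples[i][j] = lo + (interval + rng.random()) * width
    return samples

# Two hypothetical parameters, e.g. a thread count and a cache size.
points = divide_and_diverge([(0, 100), (1, 64)], n=4)
print(len(points))   # 4 sample points covering every interval of each axis
```

    Each resulting point would then be benchmarked, with bound-and-search recursing into the most promising region; that recursive step is not shown here.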

    On the security of a provably secure, efficient, and flexible authentication scheme for ad hoc wireless sensor networks

    In a recent paper, Chang and Le proposed an efficient smart card-based authenticated key exchange protocol (referred to as the CL scheme) for heterogeneous ad hoc wireless sensor networks. However, we found that the CL scheme is subject to a sensor capture attack that breaks its session key security. An improved protocol is proposed to fix this problem.

    Estimation of the User Contribution Rate by Leveraging Time Sequence in Pairwise Matching function-point between Users Feedback and App Updating Log

    Mobile applications have become an inseparable part of people's daily lives. Nonetheless, market competition is extremely fierce, and apps lacking recognition among users are susceptible to market elimination. To this end, developers must swiftly and accurately apprehend the requirements of the wider user base to effectively plan and promote the orderly, healthy evolution of their apps. The rate at which general user requirements are adopted by developers, or the user contribution rate, is a valuable metric that can help app developers and software engineering researchers measure or gain insight into the evolution of app requirements and predict the evolution of app software. Regrettably, the field lacks refined quantitative analysis approaches and tools for this pivotal indicator. To address this problem, this paper proposes an exploratory quantitative analysis approach based on the temporal correlation between app update logs and user reviews, which provides a feasible solution for quantitatively measuring user contribution. The main idea is to treat valid user reviews as user requirements and app update logs as developer responses, and to mine and analyze the pairwise and chronological relationships between the two through text computing, thereby constructing a feasible approach for quantitatively calculating user contribution. To demonstrate the feasibility of the approach, this paper collects data from four Chinese apps in the App Store in mainland China and one English app in the U.S. region, comprising 2,178 update logs and 4,236,417 user reviews. The experimental results show that 16.6%-43.2% of the features of these apps are related to popular online user requirements.
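    The pairwise, time-ordered matching of reviews to update-log entries described above can be illustrated with a toy sketch. This is not the paper's method: it uses simple word-overlap (Jaccard) similarity instead of the paper's text computing, and all names, thresholds, and data below are hypothetical.

```python
def jaccard(a, b):
    """Word-set overlap between two texts, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def match_reviews(reviews, updates, threshold=0.2):
    """reviews/updates: lists of (timestamp, text). Pair each review with the
    first sufficiently similar update that comes AFTER it (temporal constraint)."""
    pairs = []
    for rt, rtext in reviews:
        for ut, utext in updates:
            if rt < ut and jaccard(rtext, utext) >= threshold:
                pairs.append((rtext, utext))
                break   # earliest responding update wins
    return pairs

reviews = [(1, "please add dark mode"), (2, "app crashes on login")]
updates = [(3, "fixed crashes on login screen"), (4, "added dark mode theme")]
print(len(match_reviews(reviews, updates)))   # 2 matched requirement/response pairs
```

    The user contribution rate would then be the fraction of update-log features that have such a matched earlier review; the similarity function and threshold here are placeholders.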

    Mass segmentation using a combined method for cancer detection

    Background: Breast cancer is one of the leading causes of cancer death for women worldwide, and mammography is regarded as one of the main tools for its early detection. Computer-aided technology has been introduced for this purpose, and in computer-aided cancer detection the detection and segmentation of masses are very important, as the shape of a mass can be one of the factors used to determine whether it is malignant or benign. However, many current methods are semi-automatic. In this paper, we investigate a fully automatic segmentation method.

    Results: A new mass segmentation algorithm is proposed, in which a fully automatic marker-controlled watershed transform segments the mass region roughly and a level set then refines the segmentation. For the over-segmentation caused by the watershed, we also investigated different noise reduction technologies. Images from DDSM were used in the experiments, and the results show that the new algorithm can improve the accuracy of mass segmentation.

    Conclusions: The new algorithm combines the advantages of both methods. Combining watershed-based segmentation with the level set method can improve the efficiency of segmentation, and the introduction of noise reduction technologies can reduce over-segmentation.
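    As a toy stand-in for the marker step described above, the following sketch thresholds a tiny image grid and labels connected bright regions by flood fill, which is how rough mass markers could arise. The paper's actual pipeline (marker-controlled watershed refined by a level set) is far richer; this example and its data are purely illustrative.

```python
from collections import deque

def label_regions(img, thresh):
    """Label 4-connected regions of pixels with value >= thresh."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] >= thresh and labels[sy][sx] == 0:
                next_label += 1                  # found a new region seed
                labels[sy][sx] = next_label
                q = deque([(sy, sx)])
                while q:                         # breadth-first flood fill
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and img[ny][nx] >= thresh and labels[ny][nx] == 0:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
    return next_label, labels

img = [[0, 9, 9, 0],
       [0, 9, 0, 0],
       [0, 0, 0, 8],
       [0, 0, 8, 8]]
n, _ = label_regions(img, thresh=5)
print(n)   # 2 candidate mass regions
```

    In a real watershed pipeline, each labeled region would seed the transform and the level set would then refine the resulting boundary.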