11 research outputs found

    Towards Optimizing Storage Costs on the Cloud

    Full text link
    We study the problem of optimizing data storage and access costs on the cloud while ensuring that the desired performance or latency is unaffected. First, we propose an optimizer that selects the cloud data placement tier and the compression scheme to apply for given data partitions with temporal access predictions. Second, we propose a model that learns the compression performance of multiple algorithms across data partitions in different formats, generating compression performance predictions on the fly as inputs to the optimizer. Third, we approach the data partitioning problem fundamentally differently from the current default in most data lakes, where partitioning follows ingestion batches: we propose access-pattern-aware data partitioning and formulate an optimization problem that optimizes the size and reading costs of partitions subject to access patterns. We study these optimization problems both theoretically and empirically, providing theoretical bounds as well as hardness results. We propose a unified cost-minimization pipeline, called SCOPe, that combines the different modules. We extensively compare the performance of our methods with related baselines from the literature on TPC-H data as well as enterprise datasets (ranging from GB to PB in volume) and show that SCOPe substantially improves over the baselines, with cost savings of the order of 50% to 83% over platform baselines on enterprise Data Lake datasets that range from terabytes to petabytes in volume.
    Comment: The first two authors contributed equally. 12 pages. Accepted to the International Conference on Data Engineering (ICDE) 202
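    The tier-and-compression selection described in the abstract can be sketched as a joint cost minimization over candidate (tier, codec) pairs under a latency budget. The tiers, codecs, prices, and latencies below are hypothetical illustrations, not figures from the SCOPe paper:

```python
from itertools import product

# Hypothetical tiers: (storage $/GB-month, access $/GB, access latency ms/GB)
TIERS = {"hot": (0.023, 0.00, 1), "cool": (0.010, 0.01, 5), "archive": (0.002, 0.05, 50)}
# Hypothetical codecs: (compression ratio, decompression latency ms/GB)
CODECS = {"none": (1.0, 0), "lz4": (2.0, 3), "zstd": (3.0, 8)}

def best_plan(size_gb, monthly_reads_gb, latency_budget_ms_per_gb):
    """Pick the (tier, codec) pair minimizing monthly cost under a latency bound."""
    best, best_cost = None, float("inf")
    for (tier, (store, access, t_lat)), (codec, (ratio, c_lat)) in product(
            TIERS.items(), CODECS.items()):
        if t_lat + c_lat > latency_budget_ms_per_gb:
            continue  # violates the performance constraint
        stored = size_gb / ratio  # compressed footprint on the chosen tier
        # Reads fetch the compressed bytes, so access cost also shrinks by ratio.
        cost = stored * store + (monthly_reads_gb / ratio) * access
        if cost < best_cost:
            best, best_cost = (tier, codec), cost
    return best, best_cost
```

    For a 1 TB partition read 100 GB/month under a 10 ms/GB budget, this toy model rules out the archive tier (too slow) and picks the cheapest feasible tier/codec pair. The real optimizer additionally uses learned compression predictions and temporal access forecasts per partition.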

    Exploring Automatic Search in Digital Libraries – A Caution Guide for Systematic Reviewers

    No full text
    Digital Libraries (DLs) such as IEEE Xplore, ACM DL, and SpringerLink are used quite frequently for executing the Automatic Search phase of secondary studies such as Systematic Literature Reviews and Mapping Studies. However, the execution behavior of the various search-related features of these DLs is not as consistent as such secondary studies require. This repository intends to make a continuous effort toward reporting the current state of the execution behavior of the various search features of DLs used in secondary studies in the software engineering domain. In particular, the repository targets behavioral inconsistencies with respect to the feature set required to conduct the search process of secondary studies. The repository will be updated with every recent change to the search features of the respective DLs, and hence will provide secondary-study researchers with up-to-date information about the robustness and reliability of such DLs. To start with, we target five DLs: IEEE Xplore, ACM DL, SpringerLink, ScienceDirect, and Wiley, and report their search features with appropriate scores extracted through a formal feature analysis.

    Explore the paper preprint and other relevant materials at https://github.com/pv-singh/DLs-for-SLRs

    Buckling Load Maximization of Curvilinearly Stiffened Tow-Steered Laminates

    No full text

    Hybrid Optimization of Curvilinearly Stiffened Shells Using Parallel Processing

    No full text

    Lightweight Chassis Design of Hybrid Trucks Considering Multiple Road Conditions and Constraints

    No full text
    The paper describes a fully automated process for generating a shell-based finite element model of a large hybrid truck chassis and performing mass optimization under multiple load cases and multiple constraints. A truck chassis consists of different parts that can be optimized using shape and size optimization. The cross members are represented by beams, and other components of the truck (batteries, engine, fuel tanks, etc.) are represented by appropriate point masses attached to the rail using multi-point constraints, creating the mathematical model. Medium-fidelity finite element models are developed for the front and rear suspensions and attached to the chassis using multi-point constraints, yielding the finite element model of the complete truck. In the optimization problem, a set of five load conditions, each corresponding to a road event, is considered, and constraints are imposed on the maximum allowable von Mises stress and the first vertical bending frequency. The structure is optimized with a particle swarm optimization algorithm implemented using parallel processing. A mass reduction of about 13.25% with respect to the baseline model is achieved.
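    The constrained mass minimization described above follows the standard particle swarm pattern: candidate designs are particles, constraint violations are folded into the objective as penalties, and each particle is pulled toward its personal best and the global best. This is a minimal serial sketch on a toy two-variable problem, not the chassis model or its parallel implementation; the stand-in objective, penalty weight, and PSO coefficients are illustrative assumptions:

```python
import random

def pso(objective, penalty, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5):
    """Minimize objective(x) + penalty(x) with a basic particle swarm."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # each particle's best position
    pcost = [objective(p) + penalty(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]            # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))      # social pull
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)  # clamp to bounds
            cost = objective(pos[i]) + penalty(pos[i])
            if cost < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], cost
                if cost < gcost:
                    gbest, gcost = pos[i][:], cost
    return gbest, gcost

# Toy stand-in: minimize "mass" (sum of thicknesses) subject to a "stress" limit,
# enforced as a quadratic-free penalty requiring x0 * x1 >= 2.
mass = lambda x: sum(x)
stress_penalty = lambda x: 1e3 * max(0.0, 2.0 - x[0] * x[1])
```

    In the paper's setting, evaluating one particle means a full finite element analysis of the truck, which is why the swarm is evaluated in parallel; the penalty terms would correspond to the von Mises stress and bending-frequency constraints.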