162 research outputs found

    Parameter identification of JONSWAP spectrum acquired by airborne LIDAR

    In this study, we developed the first linear form of the Joint North Sea Wave Project (JONSWAP) spectrum (JS), which involves transforming the JS solution to the natural logarithmic scale. This transformation is convenient for defining the least-squares function in terms of the scale and shape parameters. We identified these two wind-dependent parameters to better understand the wind effect on surface waves. Because of its efficiency and high resolution, we employed an airborne Light Detection and Ranging (LIDAR) system for our measurements. In the absence of actual data, we simulated ocean waves in the MATLAB environment, which can be easily translated into an industrial programming language. We used the Longuet-Higgins (LH) random-phase method to generate the time series of wave records and the fast Fourier transform (FFT) technique to compute the power spectral density. After validating these procedures, we identified the JS parameters by minimizing the mean-square error between the target spectrum and the spectrum estimated by FFT. We found that the estimation error depends on the amount of available wave record data. Finally, we found the inverse computation of the wind factors (wind speed and wind fetch length) to be robust and sufficiently precise for wave forecasting.
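    As an illustration of the pipeline described above, the following Python sketch simulates a wave record from a target JONSWAP spectrum with the Longuet-Higgins random-phase method, estimates the power spectral density with an FFT-based (Welch) estimator, and recovers the scale and shape parameters by least squares on the log spectrum. All parameter values, the fixed peak frequency, and the use of SciPy routines are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: synthesize a wave record from a target JONSWAP spectrum,
# estimate its PSD with an FFT-based method, and fit the scale/shape parameters
# by least squares on the natural-log spectrum.
import numpy as np
from scipy.signal import welch
from scipy.optimize import least_squares

g = 9.81  # gravitational acceleration [m/s^2]

def jonswap(f, alpha, gamma, fp):
    """JONSWAP spectral density S(f); alpha = scale, gamma = peak-enhancement (shape)."""
    sigma = np.where(f <= fp, 0.07, 0.09)
    r = np.exp(-((f - fp) ** 2) / (2.0 * sigma ** 2 * fp ** 2))
    pm = alpha * g ** 2 * (2 * np.pi) ** -4 * f ** -5 * np.exp(-1.25 * (fp / f) ** 4)
    return pm * gamma ** r

# Target parameters (assumed for illustration); fp is the peak frequency in Hz
alpha_true, gamma_true, fp_true = 0.0081, 3.3, 0.1
fs, T = 2.0, 1800.0                       # sampling rate [Hz], record length [s]
t = np.arange(0.0, T, 1.0 / fs)

# Longuet-Higgins random-phase superposition of the surface elevation
f = np.arange(1.0 / T, fs / 2, 1.0 / T)   # frequency grid, df = 1/T
S = jonswap(f, alpha_true, gamma_true, fp_true)
amp = np.sqrt(2.0 * S / T)                # component amplitudes from S(f) df
phi = np.random.uniform(0.0, 2.0 * np.pi, f.size)
eta = np.zeros_like(t)
for a_i, f_i, p_i in zip(amp, f, phi):
    eta += a_i * np.cos(2.0 * np.pi * f_i * t + p_i)

# FFT-based PSD estimate (Welch averaging) and fitting band around the peak
f_hat, S_hat = welch(eta, fs=fs, nperseg=512)
band = (f_hat > 0.5 * fp_true) & (f_hat < 4.0 * fp_true)

# Least squares on the log spectrum; fp is assumed known here for simplicity
def resid(p):
    a, gma = p
    return np.log(jonswap(f_hat[band], a, gma, fp_true)) - np.log(S_hat[band] + 1e-12)

fit = least_squares(resid, x0=[0.01, 2.0], bounds=([1e-4, 1.0], [1.0, 7.0]))
print("estimated (alpha, gamma):", fit.x)
```

    Longer simulated records (larger T) give smoother PSD estimates and hence smaller fitting error, which mirrors the observation above that the estimation error depends on the amount of available wave data.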

    Toward Efficient Automated Feature Engineering

    Automated Feature Engineering (AFE) refers to automatically generating and selecting optimal feature sets for downstream tasks, and it has achieved great success in real-world applications. Current AFE methods mainly focus on improving the effectiveness of the produced features but ignore the low-efficiency issue that hinders large-scale deployment. In this work, we therefore propose a generic framework to improve the efficiency of AFE. Specifically, we construct the AFE pipeline in a reinforcement learning setting, where each feature is assigned an agent to perform feature transformation and selection, and the evaluation score of the produced features on downstream tasks serves as the reward to update the policy. We improve the efficiency of AFE from two perspectives. On the one hand, we develop a Feature Pre-Evaluation (FPE) model to reduce the sample size and feature size, the two main factors that undermine the efficiency of feature evaluation. On the other hand, we devise a two-stage policy training strategy that runs FPE on a pre-evaluation task to initialize the policy, avoiding training the policy from scratch. We conduct comprehensive experiments on 36 datasets covering both classification and regression tasks. The results show 2.9% higher performance on average and 2x higher computational efficiency compared with state-of-the-art AFE methods.
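    The sketch below illustrates only the pre-evaluation idea in plain Python/scikit-learn: candidate feature transformations are first scored cheaply on a small subsample, and only the survivors are evaluated on the full data. The transformation set, models, sample sizes, and acceptance rule are illustrative assumptions; the paper's agents, FPE model, and RL policy are not reproduced here.

```python
# Minimal sketch (not the paper's implementation): cheap pre-evaluation on a
# subsample filters candidate features before the expensive full evaluation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

def candidate_transforms(X):
    """A few unary transforms per feature; a real AFE search explores many more."""
    for j in range(X.shape[1]):
        yield f"log1p|x{j}|", np.log1p(np.abs(X[:, j]))
        yield f"square(x{j})", X[:, j] ** 2

def score(X, y, extra=None, cv=3):
    """Cross-validated accuracy of a base model, optionally with one extra feature."""
    Xe = X if extra is None else np.column_stack([X, extra])
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, Xe, y, cv=cv).mean()

# Stage 1: pre-evaluation on a small subsample (reduced sample size)
rng = np.random.default_rng(0)
idx = rng.choice(len(y), size=500, replace=False)
base_small = score(X[idx], y[idx])
shortlist = [(name, col) for name, col in candidate_transforms(X)
             if score(X[idx], y[idx], col[idx]) > base_small]

# Stage 2: full evaluation only for features that passed pre-evaluation
base_full = score(X, y)
for name, col in shortlist:
    gain = score(X, y, col) - base_full
    print(f"{name}: gain = {gain:+.4f}")
```

    The design point this sketch makes concrete is that most of the evaluation cost is spent rejecting weak candidates, so rejecting them on a small subsample first saves the bulk of that cost.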

    ShenZhen transportation system (SZTS): a novel big data benchmark suite

    Data analytics is at the core of the supply chain for both products and services in modern economies and societies. Big data workloads, however, are placing unprecedented demands on computing technologies, calling for a deep understanding and characterization of these emerging workloads. In this paper, we propose the ShenZhen Transportation System (SZTS), a novel big data Hadoop benchmark suite composed of real-life transportation analysis applications with real-life input data sets from Shenzhen, China. SZTS uniquely focuses on a specific, real-life application domain, whereas other existing Hadoop benchmark suites, such as HiBench and CloudRank-D, consist of generic algorithms with synthetic inputs. We perform a cross-layer workload characterization at the microarchitecture level, the operating system (OS) level, and the job level, revealing unique characteristics of SZTS compared to existing Hadoop benchmarks as well as the general-purpose multi-core PARSEC benchmarks. We also study the sensitivity of workload behavior with respect to input data size, and we propose a methodology for identifying representative input data sets.
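    One generic way to pick representative inputs from cross-layer characterization data is to normalize the per-input metric vectors, cluster them, and keep the member closest to each cluster centroid, as in the sketch below. The metric values, cluster count, and clustering method are made-up illustrative assumptions and are not necessarily the methodology proposed in the paper.

```python
# Generic sketch (assumptions, not SZTS's actual methodology): cluster inputs by
# their characterization metrics and pick one representative input per cluster.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Rows = candidate input data sets, columns = example cross-layer metrics
# (IPC, LLC miss rate, I/O wait fraction, job runtime in seconds) -- made up here.
names = ["1GB", "2GB", "4GB", "8GB", "16GB", "32GB"]
metrics = np.array([
    [1.10, 0.12, 0.05,   60],
    [1.08, 0.13, 0.06,  118],
    [1.02, 0.15, 0.09,  240],
    [0.95, 0.18, 0.14,  470],
    [0.93, 0.19, 0.15,  930],
    [0.92, 0.19, 0.16, 1850],
])

Z = StandardScaler().fit_transform(metrics)      # put metrics on a common scale
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Z)

for c in range(km.n_clusters):
    members = np.where(km.labels_ == c)[0]
    # representative = cluster member closest to the cluster centroid
    d = np.linalg.norm(Z[members] - km.cluster_centers_[c], axis=1)
    rep = names[members[np.argmin(d)]]
    print(f"cluster {c}: members={[names[i] for i in members]}, representative={rep}")
```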

    ChainsFormer: A Chain Latency-aware Resource Provisioning Approach for Microservices Cluster

    The trend of transitioning from monolithic applications to microservices has been widely embraced in modern distributed systems and applications. This shift has resulted in lightweight, fine-grained, and self-contained microservices. Multiple microservices can be linked together via calls and inter-dependencies to form complex functionality. One of the challenges in managing microservices is provisioning the optimal amount of resources for the microservices in a chain to ensure application performance while improving resource usage efficiency. This paper presents ChainsFormer, a framework that analyzes microservice inter-dependencies to identify critical chains and nodes and provisions resources based on reinforcement learning. To analyze chains, ChainsFormer utilizes lightweight machine learning techniques to address the dynamic nature of microservice chains and workloads. For resource provisioning, a reinforcement learning approach combines vertical and horizontal scaling to determine the amount of allocated resources and the number of replicas. We evaluate the effectiveness of ChainsFormer using realistic applications and traces on a real testbed based on Kubernetes. Our experimental results demonstrate that ChainsFormer can reduce response time by up to 26% and improve processed requests per second by 8% compared with state-of-the-art techniques.
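    As a toy illustration of critical-chain analysis (not ChainsFormer's actual algorithm), the sketch below treats the microservice call graph as a DAG with per-service latencies and reports the root-to-leaf chain with the highest accumulated latency; such a chain is the natural target when a scaling policy decides where to add resources or replicas. The call graph and latency figures are made up for the example.

```python
# Illustrative sketch: find the heaviest-latency chain in a microservice call DAG.
from functools import lru_cache

# call graph: service -> downstream services; latency: per-service p95 latency (ms)
calls = {
    "frontend": ["cart", "catalog"],
    "cart": ["redis"],
    "catalog": ["search", "db"],
    "search": ["db"],
    "redis": [],
    "db": [],
}
latency = {"frontend": 12, "cart": 8, "catalog": 15, "search": 30, "redis": 3, "db": 20}

@lru_cache(maxsize=None)
def critical(node):
    """Return (total latency, path) of the heaviest chain starting at `node`."""
    best = (0, ())
    for child in calls[node]:
        best = max(best, critical(child))
    return latency[node] + best[0], (node,) + best[1]

total, chain = critical("frontend")
print(f"critical chain: {' -> '.join(chain)}  ({total} ms)")
# A provisioning policy would then prioritize the services on this chain,
# e.g. raising CPU limits (vertical scaling) or replica counts (horizontal scaling).
```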