
    Computational meta-heuristics based on Machine Learning to optimize fuel consumption of vessels using diesel engines

    With the expansion of river transportation, especially for small and medium-sized vessels that cover longer routes, fuel cost becomes a primary factor for the profit margin if it is not already taken as an analysis criterion, since diesel for internal combustion engines is expensive. Tools that assist decision-making therefore become necessary. This research contributes a computational model for predicting and optimizing the best speed to reduce fuel cost, based on the characteristics of the SCANIA 315 propulsion engine of a vessel from the river port of Manaus that carries out river transport to several municipalities in Amazonas. According to the simulation results, the best training algorithm for the Artificial Neural Network (ANN) was BFGS Quasi-Newton, and the engine characteristics were then used for optimization with a Genetic Algorithm (GA).
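    The abstract names the two stages (an ANN fitted with a BFGS quasi-Newton solver, then a Genetic Algorithm searching over speed) without showing them. Below is a minimal Python sketch of that predict-then-optimize loop; the synthetic consumption curve, the L-BFGS solver standing in for BFGS, and all variable names are our assumptions, since the paper's SCANIA 315 measurements and exact GA settings are not given here.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Synthetic stand-in for logged engine data: fuel rate (L/h) vs. speed (km/h).
        speed = rng.uniform(5, 30, 200)
        fuel_rate = 0.9 + 0.015 * speed**2 + rng.normal(0, 0.3, 200)  # assumed curve shape

        # ANN fitted with a quasi-Newton solver (L-BFGS here, in place of full BFGS).
        ann = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs", max_iter=2000)
        ann.fit(speed.reshape(-1, 1), fuel_rate)

        def fuel_per_km(v):
            # Fitness: predicted litres consumed per kilometre at speed v.
            return ann.predict(np.atleast_2d(v).T) / v

        # Tiny real-coded GA over the admissible speed range [5, 30] km/h.
        pop = rng.uniform(5, 30, 40)
        for _ in range(60):
            parents = pop[np.argsort(fuel_per_km(pop))[:20]]   # truncation selection
            kids = (parents[rng.integers(0, 20, 40)] +
                    parents[rng.integers(0, 20, 40)]) / 2      # arithmetic crossover
            kids += rng.normal(0, 0.5, 40)                     # Gaussian mutation
            pop = np.clip(kids, 5, 30)

        best = pop[np.argmin(fuel_per_km(pop))]
        print(f"speed minimising predicted fuel per km: {best:.1f} km/h")

    With real logged data, the fitness function would fold in the actual fuel price and route length rather than the litres-per-kilometre proxy used above.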

    The computational complexity of ReLU network training parameterized by data dimensionality

    Understanding the computational complexity of training simple neural networks with rectified linear units (ReLUs) has recently been a subject of intensive research. Closing gaps and complementing results from the literature, we present several results on the parameterized complexity of training two-layer ReLU networks with respect to various loss functions. After a brief discussion of other parameters, we focus on analyzing the influence of the dimension d of the training data on the computational complexity. We provide running time lower bounds in terms of W[1]-hardness for parameter d and prove that known brute-force strategies are essentially optimal (assuming the Exponential Time Hypothesis). In comparison with previous work, our results hold for a broad(er) range of loss functions, including ℓp-loss for all p ∈ [0, ∞]. In particular, we improve a known polynomial-time algorithm for constant d and convex loss functions to a more general class of loss functions, matching our running time lower bounds also in these cases.
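    For orientation, the training problem being parameterized can be stated in the form usual for this line of work (our paraphrase, not quoted from the paper): given n data points (x_i, y_i) ∈ R^d × R, fit a two-layer ReLU network with k hidden units by minimizing the ℓp loss,

        \[
          \min_{a,\,w,\,b}\ \sum_{i=1}^{n} \Bigl|\, \sum_{j=1}^{k} a_j \max\bigl(0,\ w_j \cdot x_i + b_j\bigr) - y_i \,\Bigr|^{p},
        \]

    with the usual conventions that p = 0 counts the indices i with nonzero error and p = ∞ takes the maximum error over i. The parameter d in the hardness results above is the dimension of the points x_i.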

    Interpreting and Disentangling Feature Components of Various Complexity from DNNs

    This paper aims to define, quantify, and analyze the feature complexity that is learned by a DNN. We propose a generic definition for the feature complexity. Given the feature of a certain layer in the DNN, our method disentangles feature components of different complexity orders from the feature. We further design a set of metrics to evaluate the reliability, the effectiveness, and the significance of over-fitting of these feature components. Furthermore, we successfully discover a close relationship between the feature complexity and the performance of DNNs. As a generic mathematical tool, the feature complexity and the proposed metrics can also be used to analyze the success of network compression and knowledge distillation.
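    The disentangling idea can be made concrete with a short sketch: approximate the target feature with reconstructor networks of increasing depth, and read off the order-l component as what depth l captures beyond depth l-1. This is only a schematic reading of the abstract in PyTorch, with placeholder data and our own network and loss choices, not the paper's exact procedure.

        import torch
        import torch.nn as nn

        def disentangler(depth, dim):
            # ReLU MLP with `depth` hidden layers, used to approximate the target feature.
            layers = []
            for _ in range(depth):
                layers += [nn.Linear(dim, dim), nn.ReLU()]
            layers.append(nn.Linear(dim, dim))
            return nn.Sequential(*layers)

        def fit(net, x, feat, steps=500):
            # Least-squares regression of the target feature onto the network's output.
            opt = torch.optim.Adam(net.parameters(), lr=1e-3)
            for _ in range(steps):
                opt.zero_grad()
                ((net(x) - feat) ** 2).mean().backward()
                opt.step()
            return net

        x = torch.randn(512, 64)                    # placeholder inputs
        feat = torch.tanh(x @ torch.randn(64, 64))  # placeholder DNN feature to analyse

        approx = [torch.zeros_like(feat)]           # order 0: nothing reconstructed yet
        for depth in (1, 2, 3):
            g = fit(disentangler(depth, 64), x, feat)
            approx.append(g(x).detach())

        # Order-l component: what depth-l networks capture beyond depth l-1;
        # the residual is the part of the feature beyond the deepest reconstructor.
        components = [approx[l] - approx[l - 1] for l in range(1, len(approx))]
        residual = feat - approx[-1]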