A Human-on-the-Loop Optimization Autoformalism Approach for Sustainability
This paper outlines a natural conversational approach to solving personalized
energy-related problems using large language models (LLMs). We focus on
customizable optimization problems that necessitate repeated solving with
slight variations in modeling and are user-specific, hence posing a challenge
to devising a one-size-fits-all model. We put forward a strategy that augments
an LLM with an optimization solver, enhancing its proficiency in understanding
and responding to user specifications and preferences while providing nonlinear
reasoning capabilities. Our approach pioneers the novel concept of human-guided
optimization autoformalism, translating a natural language task specification
automatically into an optimization instance. This enables LLMs to analyze,
explain, and tackle a variety of instance-specific energy-related problems,
pushing beyond the limits of current prompt-based techniques.
Our research encompasses various commonplace tasks in the energy sector, from
electric vehicle charging and Heating, Ventilation, and Air Conditioning (HVAC)
control to long-term planning problems such as cost-benefit evaluations for
installing rooftop solar photovoltaics (PVs) or heat pumps. This pilot study
marks an essential stride towards the context-based formulation of optimization
using LLMs, with the potential to democratize optimization processes. As a
result, stakeholders are empowered to optimize their energy consumption,
promoting sustainable energy practices customized to personal needs and
preferences.
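The abstract does not show what an "autoformalized" optimization instance looks like. As a toy illustration of the concept, the sketch below maps a structured task specification (of the kind an LLM might emit from a natural-language EV-charging request) to an optimization instance and solves it. The spec fields and the greedy solver are illustrative assumptions, not the paper's pipeline; for this particular box-constrained LP, filling the cheapest hours first happens to be optimal.

```python
# Toy sketch: "autoformalized" EV-charging task -> optimization instance.
# The spec format is hypothetical; an LLM would produce it from natural language.
spec = {
    "energy_needed_kwh": 10.0,          # total charge required
    "max_rate_kw": 4.0,                 # charger limit per hour
    "prices": [0.30, 0.12, 0.10, 0.25], # $/kWh for each hour
}

def solve_charging(spec):
    """Minimize cost: sum_t p_t * c_t  s.t.  sum_t p_t = E, 0 <= p_t <= pmax.
    For this LP, filling the cheapest hours first is optimal."""
    E, pmax = spec["energy_needed_kwh"], spec["max_rate_kw"]
    prices = spec["prices"]
    schedule = [0.0] * len(prices)
    remaining = E
    for t in sorted(range(len(prices)), key=lambda t: prices[t]):
        schedule[t] = min(pmax, remaining)
        remaining -= schedule[t]
        if remaining <= 0:
            break
    cost = sum(p * c for p, c in zip(schedule, prices))
    return schedule, cost

schedule, cost = solve_charging(spec)
# the two cheapest hours are filled to the rate limit, the rest tops up
```

An LLM front end would only need to produce the `spec` dictionary; the downstream solver stays fixed, which is the division of labor the autoformalism strategy relies on.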
Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models
Current literature, aiming to surpass the "Chain-of-Thought" approach, often
resorts to an external modus operandi involving halting, modifying, and then
resuming the generation process to boost Large Language Models' (LLMs)
reasoning capacities. This mode escalates the number of query requests, leading
to increased costs, memory, and computational overheads. Addressing this, we
propose the Algorithm of Thoughts -- a novel strategy that propels LLMs through
algorithmic reasoning pathways, pioneering a new mode of in-context learning.
By employing algorithmic examples, we exploit the innate recurrence dynamics of
LLMs, expanding their idea exploration with merely one or a few queries. Our
technique outperforms earlier single-query methods and stands on par with a
recent multi-query strategy that employs an extensive tree search algorithm.
Intriguingly, our results suggest that instructing an LLM using an algorithm
can lead to performance surpassing that of the algorithm itself, hinting at
an LLM's inherent ability to weave its intuition into optimized searches. We
probe into the underpinnings of our method's efficacy and its nuances in
application.
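The abstract does not reproduce a concrete algorithmic exemplar. As a hedged illustration, the kind of trace an Algorithm-of-Thoughts prompt spells out resembles a depth-first search with backtracking, written step by step so the model can continue the pattern within a single generation. The subset-sum task and trace format below are illustrative assumptions, not the paper's benchmark:

```python
# Illustrative only: a DFS-with-backtracking trace of the kind an
# Algorithm-of-Thoughts exemplar might spell out in-context.
def dfs_subset_sum(nums, target, partial=(), trace=None):
    """Find a subset of nums summing to target, recording each explored step."""
    if trace is None:
        trace = []
    s = sum(partial)
    trace.append(f"try {list(partial)} (sum={s})")
    if s == target:
        return list(partial), trace
    if s > target or not nums:
        trace.append("backtrack")
        return None, trace
    # branch 1: include nums[0]; branch 2: exclude it
    found, trace = dfs_subset_sum(nums[1:], target, partial + (nums[0],), trace)
    if found is not None:
        return found, trace
    return dfs_subset_sum(nums[1:], target, partial, trace)

solution, trace = dfs_subset_sum([5, 3, 8, 2], 10)
# `trace` holds the explored/backtracked steps an AoT prompt would verbalize
```

The point of the exemplar is that the model, having seen such a trace, continues exploring and pruning candidate "thoughts" inside one response rather than requiring a fresh query per node.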
GLSDC Based Parameter Estimation Algorithm for a PMSM Model
In this study, a GLSDC (Gaussian Least Squares Differential Correction) based parameter estimation algorithm is used to identify a PMSM (Permanent Magnet Synchronous Motor) model. In this method, a nonlinear model is assumed to be the correct representation of the underlying state dynamics, and the output signals are assumed to be measured in a noisy environment. Using the noisy input and output signals, the parameters that form the coefficients of the nonlinear state and input terms are estimated via the state transition matrix, which is computed by the numerical means detailed in the paper. Since the GLSDC algorithm requires a correct initial state value, this term is estimated in addition to the unknown coefficients, whose bounds are assumed to be known, as is usually the case in industrial applications. The batch input and output signals are used to iteratively estimate the parameter set before and after convergence, and to recover the filtered state trajectories. Several scenarios are tested by means of numerical simulations and the results are discussed, and different methods for computing better initial estimates and shortening the convergence time are examined.
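The core differential-correction step behind GLSDC is a Gauss-Newton update on a nonlinear least-squares residual, iterated over a batch of measurements. The pure-Python sketch below applies that step to a scalar toy model x(t) = x0 · exp(-a·t), jointly estimating the coefficient a and the initial state x0 from noisy samples; the toy dynamics, noise values, and initial guess are assumptions for illustration, not the paper's PMSM model.

```python
import math

# Toy GLSDC-style sketch: batch Gauss-Newton differential correction that
# jointly estimates a coefficient `a` and the initial state `x0` of
# x(t) = x0 * exp(-a * t) from noisy measurements.
t_grid = [0.0, 0.5, 1.0, 1.5, 2.0]
a_true, x0_true = 1.2, 3.0
noise = [0.01, -0.02, 0.015, -0.005, 0.01]       # fixed "measurement noise"
y = [x0_true * math.exp(-a_true * t) + n for t, n in zip(t_grid, noise)]

a_est, x0_est = 1.0, 2.5                         # rough initial guess
for _ in range(50):                              # Gauss-Newton iterations
    JtJ = [[0.0, 0.0], [0.0, 0.0]]               # normal-equation accumulators
    Jtr = [0.0, 0.0]
    for t, yi in zip(t_grid, y):
        e = math.exp(-a_est * t)
        r = yi - x0_est * e                      # residual
        J = [x0_est * t * e, -e]                 # d r/d a, d r/d x0
        for i in range(2):
            Jtr[i] += J[i] * r
            for j in range(2):
                JtJ[i][j] += J[i] * J[j]
    # solve the 2x2 normal equations JtJ * delta = -Jtr by Cramer's rule
    det = JtJ[0][0] * JtJ[1][1] - JtJ[0][1] * JtJ[1][0]
    da = (-Jtr[0] * JtJ[1][1] + Jtr[1] * JtJ[0][1]) / det
    dx0 = (-JtJ[0][0] * Jtr[1] + JtJ[1][0] * Jtr[0]) / det
    a_est, x0_est = a_est + da, x0_est + dx0
# a_est and x0_est approach the true values, up to the injected noise
```

The full GLSDC algorithm replaces the closed-form Jacobian here with sensitivities propagated through the state transition matrix, but the correction step has the same normal-equation shape.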
SOS-Based Nonlinear Observer Design for Simultaneous State and Disturbance Estimation Designed for a PMSM Model
In this study, a type of nonlinear observer design is studied for a class of nonlinear systems. For the construction of the nonlinear observer, SOS (sum-of-squares) based optimization tools are utilized, which, for some nonlinear dynamical systems, have the advantage of transforming the problem into a more tractable one. The general problem of nonlinear observer design is translated into an SOS polynomial optimization, which can in turn be cast as an SDP (semidefinite programming) problem. As a study problem, simultaneous state and disturbance estimation is considered, a cascaded nonlinear observer using a certain parameterization is constructed, and computation techniques are discussed. The cascaded nonlinear observer structure is a design strategy that decomposes the problem into its components, resulting in dimension reduction. In this paper, SOS-based methods using the cascaded design technique are presented, and an online algorithm for simultaneous estimation of the state and the disturbance signal is constructed. The method and its smaller components are given in detail, the efficacy of the method is demonstrated by means of numerical simulations performed in MATLAB, and the observer is designed using the numerical optimization tools YALMIP, MOSEK, and PENLAB.
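The SOS-to-SDP translation invoked above follows a standard pattern; the sketch below is textbook material, not the paper's specific observer program.

```latex
% A polynomial p(x) of degree 2d is a sum of squares iff
\[
  p(x) = z(x)^{\top} Q\, z(x), \qquad Q \succeq 0,
\]
% where z(x) stacks the monomials of degree at most d. Matching the
% coefficients of p imposes linear constraints on Q, so checking SOS
% feasibility is a semidefinite program (SDP).
% Example: for p(x) = x^4 + 2x^2 + 1 with z(x) = (1,\; x,\; x^2)^{\top},
\[
  Q = \operatorname{diag}(1,\, 2,\, 1) \succ 0
\]
% certifies p(x) = 1^2 + (\sqrt{2}\,x)^2 + (x^2)^2 \ge 0.
```

Tools such as YALMIP automate exactly this reduction, handing the resulting SDP to a solver like MOSEK or PENLAB.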
On Solution Functions of Optimization: Universal Approximation and Covering Number Bounds
We study the expressibility and learnability of solution functions of convex optimization and their multi-layer architectural extension. The main results are: (1) the class of solution functions of linear programming (LP) and quadratic programming (QP) is a universal approximant for the smooth model class or some restricted Sobolev space, and we characterize the rate-distortion, (2) the approximation power is investigated through a viewpoint of regression error, where information about the target function is provided in terms of data observations, (3) compositionality in the form of deep architecture with optimization as a layer is shown to reconstruct some basic functions used in numerical analysis without error, which implies that (4) a substantial reduction in rate-distortion can be achieved with a universal network architecture, and (5) we discuss the statistical bounds of empirical covering numbers for LP/QP, as well as a generic optimization problem (possibly nonconvex) by exploiting tame geometry. Our results provide the first rigorous analysis of the approximation and learning-theoretic properties of solution functions, with implications for algorithmic design and performance guarantees.
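Claim (3), that optimization layers can reconstruct basic functions exactly, has a simple one-dimensional instance: the ReLU is the solution function of a tiny LP. The construction below is an illustrative example of that phenomenon, not the paper's general architecture.

```python
# Illustration of "optimization as a layer" reproducing a basic function
# exactly: relu(x) = max(x, 0) is the solution of the one-variable LP
#   min_t t   s.t.   t >= x,  t >= 0.
def lp_relu(x, eps=1e-12):
    """Solve the LP by vertex enumeration: the feasible set is
    [max(x, 0), inf), and its single vertex minimizes t."""
    vertices = [x, 0.0]  # points where one of the two constraints is active
    feasible = [t for t in vertices if t >= x - eps and t >= -eps]
    return min(feasible)

# the LP's solution function coincides with ReLU pointwise
```

Because the optimum tracks whichever constraint is active, the solution function is exactly piecewise linear, which is what lets compositions of such layers reproduce standard numerical-analysis primitives without approximation error.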