Hysteresis and Post Walrasian Economics
Keywords: macroeconomics, hysteresis
The "new consensus" DSGE (dynamic stochastic general equilibrium) macroeconomic model has microfoundations provided by a single representative agent. In this model, shocks to the economic environment have no lasting effects. In reality, adjustments at the micro level are made by heterogeneous agents, and the aggregation problem cannot be assumed away. In this paper we show that the discontinuous adjustments made by heterogeneous agents at the micro level mean that shocks have lasting effects, with aggregate variables containing a selective, erasable memory of the shocks experienced. This hysteresis framework provides foundations for the post-Walrasian analysis of macroeconomic systems.
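The "selective, erasable memory" in this abstract is characteristic of Preisach-type hysteresis. As a toy illustration only (not the paper's model), the sketch below simulates heterogeneous agents as non-ideal relays with random switching thresholds: a transient shock leaves a permanent mark on the aggregate, while a sufficiently large opposite shock erases that memory. The agent count, threshold distribution, and shock sizes are all illustrative assumptions.

```python
import random

random.seed(0)

# Each agent is a non-ideal relay: it switches "on" when the input rises
# above beta_up and "off" when it falls below beta_down (beta_down < beta_up).
# Between the two thresholds it keeps its previous state: micro-level
# discontinuous adjustment with memory.
class Agent:
    def __init__(self, beta_down, beta_up):
        self.beta_down, self.beta_up = beta_down, beta_up
        self.state = 0

    def update(self, x):
        if x >= self.beta_up:
            self.state = 1
        elif x <= self.beta_down:
            self.state = 0
        return self.state

# Heterogeneous population: thresholds drawn uniformly from [-1, 1]
agents = []
for _ in range(5000):
    a, b = random.uniform(-1, 1), random.uniform(-1, 1)
    agents.append(Agent(min(a, b), max(a, b)))

def aggregate(x):
    """Fraction of agents 'on' after every agent has seen input x."""
    return sum(ag.update(x) for ag in agents) / len(agents)

y0 = aggregate(0.0)        # baseline state
aggregate(0.8)             # transient positive shock ...
y_after = aggregate(0.0)   # ... leaves a remanence: y_after > y0
aggregate(-1.1)            # a large opposite shock wipes all relays ...
y_erased = aggregate(0.0)  # ... erasing the memory: back to the baseline
```

Because each relay only remembers the last threshold it crossed, the aggregate stores a selective memory of past shock extrema, and dominated shocks can be erased — the wiping property of Preisach systems.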
A Deep Learning Approach to Structured Signal Recovery
In this paper, we develop a new framework for sensing and recovering
structured signals. In contrast to compressive sensing (CS) systems that employ
linear measurements, sparse representations, and computationally complex
convex/greedy algorithms, we introduce a deep learning framework that supports
both linear and mildly nonlinear measurements, that learns a structured
representation from training data, and that efficiently computes a signal
estimate. In particular, we apply a stacked denoising autoencoder (SDA) as an
unsupervised feature learner. The SDA enables us to capture statistical
dependencies between the different elements of certain signals and to improve
signal recovery performance compared with the CS approach.
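The core training step of a denoising autoencoder can be sketched in a few lines. The toy below is a single tied-weight layer in NumPy (a stand-in for the stacked architecture in the abstract); the sparse synthetic signals, layer sizes, corruption level, and learning rate are all illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "structured" training signals: 3-sparse vectors in R^20
n, hidden, n_train = 20, 10, 500
X = np.zeros((n_train, n))
for i in range(n_train):
    idx = rng.choice(n, size=3, replace=False)
    X[i, idx] = rng.normal(size=3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tied-weight denoising autoencoder: encode a corrupted input,
# decode with the transposed weights, reconstruct the CLEAN signal.
W = rng.normal(scale=0.1, size=(n, hidden))
b1, b2 = np.zeros(hidden), np.zeros(n)
lr = 0.1

def loss(Xc, Xclean):
    H = sigmoid(Xc @ W + b1)          # encoder
    R = H @ W.T + b2                  # decoder (tied weights)
    return np.mean((R - Xclean) ** 2)

Xc0 = X + rng.normal(scale=0.2, size=X.shape)   # fixed probe corruption
init_loss = loss(Xc0, X)

for epoch in range(200):
    Xc = X + rng.normal(scale=0.2, size=X.shape)  # fresh corruption each epoch
    H = sigmoid(Xc @ W + b1)
    R = H @ W.T + b2
    E = R - X                                     # reconstruction error
    dH = (E @ W) * H * (1 - H)                    # backprop into the encoder
    # Gradient of W has two paths: encoder (Xc W) and decoder (H W^T)
    gW = (Xc.T @ dH + E.T @ H) / n_train
    W -= lr * gW
    b1 -= lr * dH.mean(axis=0)
    b2 -= lr * E.mean(axis=0)

final_loss = loss(Xc0, X)   # lower than init_loss after training
```

Training to denoise (reconstruct the clean signal from a corrupted input) is what forces the hidden layer to learn statistical dependencies among signal elements rather than the identity map.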
Ku-band system design study and TDRSS interface analysis
The capabilities of the Shuttle/TDRSS link simulation program (LinCsim) were expanded to account for radio-frequency interference (RFI) effects on the Shuttle S-band links, and the channel models were updated to reflect the RFI-related hardware changes. In addition, the ESTL hardware modeling of the TDRS communication payload was reviewed and evaluated, Shuttle/TDRSS signal acquisition was modeled in LinCsim, LinCsim was upgraded, and possible Shuttle on-orbit navigation techniques were evaluated.
Generalization in Deep Learning
This paper provides theoretical insights into why and how deep learning can
generalize well, despite its large capacity, complexity, possible algorithmic
instability, nonrobustness, and sharp minima, responding to an open question in
the literature. We also discuss approaches to provide non-vacuous
generalization guarantees for deep learning. Based on theoretical observations,
we propose new open problems and discuss the limitations of our results.
Comment: To appear in Mathematics of Deep Learning, Cambridge University
Press. All previous results remain unchanged.
Stochastic Inverse Reinforcement Learning
The goal of the inverse reinforcement learning (IRL) problem is to recover
the reward functions from expert demonstrations. However, like any ill-posed
inverse problem, the IRL problem suffers from a congenital defect: a policy may
be optimal for many reward functions, and expert demonstrations may be optimal
for many policies. In this work, we generalize the IRL problem to a well-posed
expectation optimization problem, stochastic inverse reinforcement learning
(SIRL), to recover the probability distribution over reward functions. We adopt
the Monte Carlo expectation-maximization (MCEM) method to estimate the
parameter of the probability distribution as the first solution to the SIRL
problem. The solution is succinct, robust, and transferable for a learning task
and can generate alternative solutions to the IRL problem. Through our
formulation, it is possible to observe the intrinsic properties of the IRL
problem from a global viewpoint, and our approach achieves considerable
performance on the Objectworld environment.
Comment: 8+2 pages, 5 figures, under review.
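The Monte Carlo EM loop at the heart of such an approach can be illustrated on a toy Gaussian model (not the paper's SIRL setting): each observed score y_i comes from a latent reward parameter w_i ~ N(mu, tau^2) seen through noise, and MCEM estimates the distribution parameter mu. The E-step draws importance samples of the latent w_i; the M-step updates mu. All constants and the model itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy generative model: latent reward parameter w_i ~ N(mu, tau^2),
# observation y_i = w_i + noise, noise ~ N(0, sigma^2)
tau, sigma, true_mu = 1.0, 0.5, 2.0
y = true_mu + rng.normal(scale=np.sqrt(tau**2 + sigma**2), size=200)

mu = 0.0  # initial guess for the distribution parameter
for _ in range(40):
    # E-step (Monte Carlo): importance-sample the latent w_i from the
    # current prior N(mu, tau^2), weight each draw by the likelihood
    # of the corresponding observation.
    w = rng.normal(mu, tau, size=(len(y), 500))
    logp = -0.5 * ((y[:, None] - w) / sigma) ** 2
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    # M-step: for a Gaussian prior, the maximizer of the expected
    # complete-data log-likelihood is the mean of the posterior draws.
    mu = (p * w).sum(axis=1).mean()
# mu converges toward the sample mean of y (the marginal MLE of mu)
```

The fixed point of this iteration is the maximum-likelihood estimate of mu under the marginal model; the Monte Carlo step replaces the (here analytically available) posterior expectation, which is exactly the role it plays when the posterior over reward functions is intractable.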
Dimensional hyper-reduction of nonlinear finite element models via empirical cubature
We present a general framework for the dimensional reduction, in terms of number of degrees of freedom as well as number of integration points ("hyper-reduction"), of nonlinear parameterized finite element (FE) models. The reduction process is divided into two sequential stages. The first stage consists of a common Galerkin projection onto a reduced-order space, together with the condensation of boundary conditions and external forces. For the second stage (reduction in the number of integration points), we present a novel cubature scheme that efficiently determines optimal points and associated positive weights so that the error in integrating the reduced internal forces is minimized. The distinguishing features of the proposed method are: (1) the minimization problem is posed in terms of orthogonal basis vectors (obtained via a partitioned singular value decomposition) rather than in terms of snapshots of the integrand; (2) the volume of the domain is integrated exactly; (3) the selection algorithm need not solve a nonnegative least-squares problem at every iteration to enforce the positiveness of the weights. Furthermore, we show that the proposed method converges to the absolute minimum (zero integration error) when the number of selected points equals the number of internal force modes included in the objective function. We illustrate this model reduction methodology with two nonlinear structural examples (quasi-static bending and resonant vibration of elastoplastic composite plates). In both examples, the number of integration points is reduced by three orders of magnitude (with respect to FE analyses) without significantly sacrificing accuracy.
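The ingredients of such an empirical cubature scheme — SVD modes of integrand snapshots, greedy point selection, and nonnegative weights — can be sketched on a one-dimensional toy problem. This is a simplified illustration in the spirit of the method, not the paper's algorithm: the integrand family, grid, mode count, and the per-iteration NNLS refit (which the paper's scheme specifically avoids) are all assumptions made for brevity.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Candidate integration points: uniform grid on [0, 1] with full
# trapezoidal weights playing the role of the FE quadrature.
N = 200
x = np.linspace(0.0, 1.0, N)
w_full = np.full(N, 1.0 / (N - 1))
w_full[[0, -1]] *= 0.5

# Snapshots of parameterized integrands (toy stand-in for internal forces)
params = rng.uniform(1.0, 6.0, size=30)
S = np.stack([np.exp(-x) * np.cos(p * x) for p in params])   # (30, N)

# Orthogonal integrand modes from the SVD of the snapshot matrix
r = 6
U = np.linalg.svd(S, full_matrices=False)[2][:r]             # (r, N)
b = U @ w_full          # exact integrals of the modes under full quadrature

# Greedy point selection: add the candidate whose mode column best matches
# the current residual, then refit all selected weights with NNLS (>= 0).
sel, w = [], np.array([])
res = b.copy()
while np.linalg.norm(res) > 1e-8 and len(sel) < 50:
    scores = U.T @ res
    scores[sel] = -np.inf                  # never pick the same point twice
    sel.append(int(np.argmax(scores)))
    w, _ = nnls(U[:, sel], b)              # weights constrained nonnegative
    res = b - U[:, sel] @ w
# A handful of points with positive weights now reproduces the integrals
# of all r integrand modes to near machine precision.
```

The positivity of the weights, enforced here by NNLS, is what preserves the stability properties of the reduced quadrature, mirroring the "positive weights" requirement stressed in the abstract.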