
    Essays in New Keynesian Monetary Policy

    The dissertation consists of three chapters. I consider New Keynesian models that involve tradeoffs between output-gap and inflation variances; such a policy strategy is often referred to as flexible inflation targeting (e.g., Lars Svensson 2011, pp. 1238-95). Taylor rules, in general, have the form i_t = φ_x x_t + φ_π π_t + φ_g g_t, where i_t is the nominal interest rate at period t, x_t is the target variable output gap at period t, π_t is the target variable inflation rate at period t, g_t is the realized shock to the output gap at period t, and φ_x, φ_π, and φ_g are coefficients. This three-term Taylor rule is the most efficient Taylor rule in terms of social welfare loss: the minimized social welfare loss under the three-term rule is smaller than the minimized loss under a one-term rule (i_t = φ_π π_t) or a two-term rule (i_t = φ_x x_t + φ_π π_t). Thus, the three-term Taylor rule is used as the benchmark for comparing the performance of Taylor rules in the dissertation. Chapter 1 argues that the dynamic interpretation most authors have put on the "stability and uniqueness" (determinacy) condition of the New Keynesian monetary policy model is inappropriate. The literature maintains that when monetary policy operates through a Taylor rule, stability and uniqueness of the model require the real interest rate to move in the same direction as inflation (the Taylor Principle). This chapter shows that the determinacy condition does not necessarily require the Taylor Principle to hold; the Taylor Principle and the determinacy condition are two different kettles of fish. Although the three-term Taylor rule is applied in Chapter 1, one may object that it is impractical or "unrealistic" to expect the central bank ("the Fed") to base a rule on a shock term (g_t). Thus, in Chapters 2 and 3, I examine two-term ("simple") Taylor rules without the shock term, i.e., i_t = φ_x x_t + φ_π π_t. Chapter 2 is a study of the linear relationship between the coefficients φ_x and φ_π in Taylor rules, where φ_x is the coefficient on the target variable output gap (x_t) and φ_π is the coefficient on the target variable inflation rate (π_t). Furthermore, since the Taylor rules respond to x_t and π_t rather than to x_t and the price-level difference p_t − p_{t−1} (i.e., the difference between price levels in two periods), the rules do not induce optimal inertia. In other words, the Fed makes a once-and-for-all response to each new development in x_t or π_t, or both, where such developments come from realized output-gap shocks, inflation-rate shocks, or both. The monetary policy objective function is then treated as a per-period quadratic social welfare loss function in the two target variables and their coefficients, because the expected solution for all periods is the same as the solution for period t. Optimal policy implies, in particular, that the coefficients φ_x and φ_π must produce the minimum social welfare loss to the economy when the Fed's monetary policy target is based on the tradeoff between the two target variables, the inflation rate π_t (not the price level) and the output gap x_t. For the policy-rate paths (expressed by Taylor rules) under which the minimum social welfare loss is guaranteed, I use the term optimal Taylor rules, and the coefficient values satisfying this purpose I call optimal coefficients, or the optimal linear relationship among those coefficients.
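Chapter 1's claim can be illustrated with a small numerical check. Below is a minimal sketch, assuming the textbook three-equation New Keynesian model with an IS elasticity σ, Phillips-curve slope κ, and discount factor β (parameter values are illustrative, not the dissertation's calibration): determinacy requires both eigenvalues of the forward-looking system to lie outside the unit circle, and this can hold even when φ_π < 1, so the Taylor Principle fails, provided φ_x is large enough.

```python
import numpy as np

# Determinacy check for the textbook New Keynesian model (illustrative
# parameters, not from the dissertation):
#   x_t  = E_t x_{t+1} - (1/sigma) * (i_t - E_t pi_{t+1})
#   pi_t = beta * E_t pi_{t+1} + kappa * x_t
#   i_t  = phi_x * x_t + phi_pi * pi_t
# Stacked as A @ E_t z_{t+1} = B @ z_t with z = (x, pi). Both variables are
# non-predetermined, so determinacy requires both eigenvalues of inv(A) @ B
# to lie outside the unit circle.
sigma, beta, kappa = 1.0, 0.99, 0.3

def determinate(phi_x, phi_pi):
    A = np.array([[1.0, 1.0 / sigma],
                  [0.0, beta]])
    B = np.array([[1.0 + phi_x / sigma, phi_pi / sigma],
                  [-kappa, 1.0]])
    eig = np.linalg.eigvals(np.linalg.solve(A, B))
    return bool(np.all(np.abs(eig) > 1.0))

print(determinate(0.5, 1.5))   # Taylor Principle holds -> True (determinate)
print(determinate(5.0, 0.9))   # phi_pi < 1, yet still determinate -> True
```

The second case violates the Taylor Principle yet satisfies the known analytic determinacy condition κ(φ_π − 1) + (1 − β)φ_x > 0, which is exactly the separation between the two concepts that the chapter emphasizes.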
The natural optimum Taylor rule, as pointed out by Woodford (2001), would include the shock term (φ_g g_t), but for the reason given in the previous paragraph I examine only the case of a simpler Taylor rule, i_t = φ_x x_t + φ_π π_t (hereafter called the simple Taylor rule or the simple TR), when the rule is specified as the optimal interest rate rule for governing the optimal paths of the output gap and the inflation rate. Global-type solutions with "optimal inertia" are not considered in any chapter. The first part of Chapter 2 develops an approach to obtain the linear relationship between φ_x and φ_π that constitutes the first-order condition for minimum social welfare loss, L = (1/2) E[λ x_t² + π_t²], where L denotes the social welfare loss, E is the expectations operator, and λ is the weight on the output gap. The second part of Chapter 2 discusses two properties of the linear relationship between φ_x and φ_π, observed by comparison with the three-term Taylor rule: (a) the linear relationship governing the optimal paths of x_t and π_t is the same whether or not g-shocks are nullified by an offsetting φ_g g_t term in the baseline New Keynesian model; (b) the social welfare loss under the simple Taylor rule (i_t = φ_x x_t + φ_π π_t) approaches its minimum as φ_x and φ_π become very large (approach infinity), and that limiting minimum equals the social welfare loss under the three-term Taylor rule. This implies that the three-term Taylor rule with the shock term suggested by Woodford (2001), whose model has a different setup but works out to the same result, is more efficient than the simple (two-term) Taylor rule. In Chapter 3, using the method developed in Chapter 2 and the two properties discovered there, I propose a combination monetary policy rule for the case in which the Fed sets the interest rate before observing the current output gap (x_t) and inflation rate (π_t). The missing information lies in the shock processes, g_t = μ g_{t−1} + ĝ_t, where ĝ_t ~ i.i.d.(0, σ_g²), and u_t = ρ u_{t−1} + û_t, where û_t ~ i.i.d.(0, σ_u²). The Fed cannot adjust its interest rate for these shocks because it cannot observe ĝ_t and û_t. On the other hand, information about money is immediately available to the Fed; using a model as an abstract representation of the Fed's observation of the money surprise, the Fed can use signals about money to adjust its interest rate. My model of how the Fed observes the money surprise is a simplified model for making a theoretical point, not an attempt to improve on what the Fed is actually doing. A combination policy of a Taylor rule and a money signal can reduce the social welfare loss when the Fed sets monetary policy under unobservable shocks. Chapter 3 uses an inverted version of Poole's (1970) combination policy analysis and shows that the social welfare loss is improved by the money signals.
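Property (b) can likewise be checked numerically. Below is a minimal Monte Carlo sketch under a deliberately simplified static version of the model (i.i.d. shocks and zero expected future variables, with an assumed IS elasticity σ and Phillips-curve slope κ; the dissertation's dynamic setup differs): holding the coefficient ratio at φ_x/φ_π = λ/κ and scaling both coefficients up, the simple rule's loss falls toward the benchmark loss attained when g-shocks are fully offset.

```python
import numpy as np

# Static toy model (an illustrative assumption, not the dissertation's setup):
# with expectations of future variables set to zero,
#   x  = -(1/sigma) * i + g
#   pi = kappa * x + u
#   i  = phi_x * x + phi_pi * pi        (simple two-term rule)
# which solves to
#   x = (g - (phi_pi/sigma) * u) / (1 + (phi_x + phi_pi*kappa)/sigma).
rng = np.random.default_rng(0)
sigma, kappa, lam = 1.0, 0.3, 0.25
g = rng.normal(size=500_000)
u = rng.normal(size=500_000)

def loss(phi_x, phi_pi):
    x = (g - (phi_pi / sigma) * u) / (1 + (phi_x + phi_pi * kappa) / sigma)
    pi = kappa * x + u
    return 0.5 * np.mean(lam * x**2 + pi**2)

# Benchmark: once g is fully offset (the three-term rule's job), the minimized
# static loss is 0.5 * lam * Var(u) / (lam + kappa**2).
print(0.5 * lam / (lam + kappa**2))
for scale in (1, 10, 100, 1000):    # simple rule, ratio phi_x/phi_pi = lam/kappa
    print(scale, loss(scale * lam / kappa, scale))
```

In this toy economy the simple rule's loss indeed declines toward the benchmark as the coefficients grow, and λ/κ is the loss-minimizing coefficient ratio in the limit, consistent with the linear relationship the chapter studies.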

    Simulation analysis of manipulating light propagation through turbid media

    We model light propagation through turbid media by employing the pseudospectral time-domain (PSTD) simulation technique. With specific amplitude and phase profiles, light can be manipulated to propagate through turbid media via multiple scattering. By exploiting the flexibility of the PSTD simulation, we analyze the factors that contribute to enhanced light penetration. The findings suggest that light prepared with a specific amplitude and phase can indeed be propagated through turbid media. The reported simulation analysis enables quantitative study of directing light through turbid media.
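A minimal 1D sketch of the PSTD idea may help: spatial derivatives are taken spectrally with the FFT, and the field is advanced with an explicit time step. This toy assumes a homogeneous medium; the paper's turbid-medium scattering geometry and amplitude/phase shaping are beyond it.

```python
import numpy as np

# 1D scalar-wave PSTD sketch: FFT-based spatial derivatives + leapfrog in time.
N, L = 256, 1.0
dx = L / N
c = 1.0                                  # wave speed (homogeneous toy medium)
dt = 0.2 * dx / c                        # well under the PSTD stability limit
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)  # spectral wavenumbers

x = np.arange(N) * dx
u = np.exp(-((x - 0.5) / 0.05) ** 2)     # Gaussian pulse initial condition
u_prev = u.copy()

def laplacian(f):
    # Spectral second derivative: accurate to machine precision for smooth f.
    return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(f)))

for _ in range(500):                     # leapfrog update of the wave equation
    u_next = 2 * u - u_prev + (c * dt) ** 2 * laplacian(u)
    u_prev, u = u, u_next
```

In a full scattering simulation, the wave speed becomes a spatially varying map of the turbid medium and absorbing boundary layers are added; the spectral-derivative time-stepping core stays the same.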

    Actualizing the affordance of mobile technology for classroom orchestration: A main path analysis of mobile learning

    Ubiquitous and increasingly accessible, mobile technology enhances learning and the management of the learning process, referred to as classroom orchestration, and is inspiring a growing number of studies that examine mobile learning from various perspectives. Nonetheless, educators find themselves confronted by the ever-evolving features of mobile technology and by challenges in the implementation context. This study therefore surveys the research literature on mobile learning using main path analysis and adopts affordance actualization (Strong et al. 2014) as a theoretical lens to identify the research themes found along the main paths and to develop a "mobile learning actualization" framework. This framework integrates several research themes, ranging from system features, the educator and the learner, and the goal of mobile technology adoption, to the implementation context and the outcomes of mobile learning. These insights can help educators adapt mobile technology to a learning environment and thus successfully achieve classroom orchestration.
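For readers unfamiliar with main path analysis, the sketch below shows one common variant of the mechanics on a citation network: Search Path Count (SPC) weights on edges followed by a greedy forward traversal. The study's exact procedure may differ, and the node names are hypothetical.

```python
import networkx as nx

def main_path(G: nx.DiGraph):
    # G: citation DAG where an edge u -> v means "v builds on u".
    order = list(nx.topological_sort(G))
    down = {v: 1 if G.in_degree(v) == 0 else 0 for v in G}  # paths from sources
    for v in order:
        for w in G.successors(v):
            down[w] += down[v]
    up = {v: 1 if G.out_degree(v) == 0 else 0 for v in G}   # paths to sinks
    for v in reversed(order):
        for w in G.successors(v):
            up[v] += up[w]
    # SPC of an edge = number of source-to-sink paths passing through it
    spc = {(a, b): down[a] * up[b] for a, b in G.edges}
    # start at the highest-SPC edge leaving a source, then extend greedily
    a, b = max(((s, w) for s in G if G.in_degree(s) == 0
                for w in G.successors(s)), key=lambda e: spc[e])
    path = [a, b]
    while G.out_degree(b) > 0:
        b = max(G.successors(b), key=lambda w: spc[(b, w)])
        path.append(b)
    return path

G = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")])
print(main_path(G))   # ['A', 'B', 'D', 'E'] (ties broken by iteration order)
```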

    Comfort-Centered Design of a Lightweight and Backdrivable Knee Exoskeleton

    This paper presents design principles for comfort-centered wearable robots and their application in a lightweight and backdrivable knee exoskeleton. The mitigation of discomfort is treated as a mechanical design and control problem, and three solutions are proposed: 1) a new wearable structure optimizes the strap attachment configuration and suit layout to ameliorate the excessive shear forces of conventional wearable structure designs; 2) rolling knee joint and double-hinge mechanisms reduce misalignment in the sagittal and frontal planes, respectively, without increasing mechanical complexity or inertia; 3) a low-impedance mechanical transmission reduces the inertia and damping of the actuator reflected to the human, making the exoskeleton highly backdrivable. Kinematic simulations demonstrate that misalignment between the robot joint and the knee joint can be reduced by 74% at maximum knee flexion. In experiments, the exoskeleton in unpowered mode exhibits a low resistive torque of 1.03 Nm (RMS). Torque control experiments with three human subjects demonstrate an RMS torque tracking error of 0.31 Nm.
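The backdrivability claim in solution 3 rests on how a transmission scales the actuator's impedance: inertia and damping reflected to the output grow with the square of the gear ratio. A minimal sketch with hypothetical motor values (not taken from the paper):

```python
# Impedance reflected through a gear reduction of ratio N scales as N**2,
# which is why a low-ratio, low-impedance transmission stays backdrivable.
# The motor values below are hypothetical, for illustration only.
J_motor = 1.2e-4   # rotor inertia, kg*m^2
b_motor = 1.0e-4   # viscous damping, N*m*s/rad

for N in (100, 10):   # conventional high-ratio drive vs low-ratio drive
    print(f"N={N:3d}: J_reflected={N**2 * J_motor:.4f} kg*m^2, "
          f"b_reflected={N**2 * b_motor:.4f} N*m*s/rad")
```

Dropping the ratio by a factor of ten cuts the reflected inertia and damping a hundredfold, which is what lets a wearer move the joint against the unpowered device with little resistance.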

    RNAMST: efficient and flexible approach for identifying RNA structural homologs

    RNA molecules fold into characteristic secondary structures required for their diverse functional activities, such as post-transcriptional regulation of gene expression. Searching for homologs of a pre-defined RNA structural motif, which may be a known functional element or a putative RNA structural motif, can provide useful information for deciphering RNA regulatory mechanisms. Since searching for RNA structural homologs among the numerous available RNA sequences is extremely time-consuming, this work develops a data preprocessing strategy to enhance search efficiency and presents RNAMST, an efficient and flexible web server for rapidly identifying homologs of a pre-defined RNA structural motif among numerous RNA sequences. An intuitive user interface is provided on the web server to facilitate the predictive analysis. Compared with previously developed tools, RNAMST performs remarkably more efficiently and provides more effective and flexible functions. RNAMST is now available on the web.
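To make the idea of structural-motif search concrete, here is a toy matcher for one motif class, a hairpin described jointly by an IUPAC loop pattern and a complementary stem. RNAMST's actual descriptor format and preprocessing strategy are more general; this only illustrates combined sequence/structure constraints.

```python
# Toy hairpin-motif search: a stem of 5 Watson-Crick/wobble pairs enclosing a
# loop that matches an IUPAC pattern (here GNRA). Illustrative only.
IUPAC = {"A": "A", "C": "C", "G": "G", "U": "U",
         "R": "AG", "Y": "CU", "W": "AU", "S": "CG", "N": "ACGU"}
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def matches(seq, pattern):
    return len(seq) == len(pattern) and all(s in IUPAC[p] for s, p in zip(seq, pattern))

def find_hairpins(rna, stem=5, loop_pattern="GNRA"):
    hits, span = [], stem + len(loop_pattern) + stem
    for i in range(len(rna) - span + 1):
        left = rna[i:i + stem]
        loop = rna[i + stem:i + stem + len(loop_pattern)]
        right = rna[i + stem + len(loop_pattern):i + span]
        if matches(loop, loop_pattern) and all(
                (a, b) in PAIRS for a, b in zip(left, reversed(right))):
            hits.append(i)
    return hits

print(find_hairpins("AAGGCACGGAAGUGCCUAA"))   # -> [2]
```

A naive scan like this must be repeated for every motif and every sequence, which is why a preprocessing and indexing strategy matters at database scale.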

    Quantum correlation generation capability of experimental processes

    Einstein-Podolsky-Rosen (EPR) steering and Bell nonlocality illustrate two different kinds of correlations predicted by quantum mechanics. They not only motivate exploration of the foundations of quantum mechanics, but also serve as important resources for quantum information processing in the presence of untrusted measurement apparatuses. Herein, we introduce a method for characterizing the creation of EPR steering and Bell nonlocality by dynamical processes in experiments. We show that the capability of an experimental process to create quantum correlations can be quantified and identified simply by preparing separable states as test inputs to the process and then performing local measurements on single qubits of the corresponding outputs. This finding enables the construction of objective benchmarks for the two-qubit controlled operations used to perform universal quantum computation. We demonstrate this utility by examining the experimental capability of creating quantum correlations with controlled-phase operations on the IBM Quantum Experience and Amazon Braket Rigetti superconducting quantum computers. The results show that our method provides a useful diagnostic tool for evaluating the primitive operations of nonclassical-correlation creation in noisy intermediate-scale quantum devices.
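The test-input idea can be reproduced in miniature with a noiseless simulation: feed a separable state through a controlled-phase (CZ) process and check whether the output violates the CHSH Bell inequality via the Horodecki criterion. This is only a toy check of the principle, not the paper's full quantification method.

```python
import numpy as np

# Horodecki criterion: for a two-qubit state rho with correlation matrix
# T_ij = Tr[rho (sigma_i (x) sigma_j)], the maximal CHSH value is
# 2*sqrt(t1 + t2), where t1 >= t2 are the two largest eigenvalues of T^T T.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [X, Y, Z]

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # separable |+>|+> input
CZ = np.diag([1, 1, 1, -1]).astype(complex)           # controlled-phase process

psi = CZ @ np.kron(plus, plus)                        # process output
rho = np.outer(psi, psi.conj())
T = np.array([[np.trace(rho @ np.kron(si, sj)).real for sj in paulis]
              for si in paulis])
t = np.sort(np.linalg.eigvalsh(T.T @ T))[::-1]
print(2 * np.sqrt(t[0] + t[1]))   # ~2.828 > 2: the process created Bell nonlocality
```

A CHSH value above 2 certifies that the process turned a separable input into a Bell-nonlocal output; on real hardware, noise pulls this value down, which is what the benchmarking in the paper quantifies.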

    DEXON: A Highly Scalable, Decentralized DAG-Based Consensus Algorithm

    A blockchain system is a replicated state machine that must be fault tolerant. When designing a blockchain system, there is usually a trade-off between decentralization, scalability, and security. In this paper, we propose a novel blockchain system, DEXON, which achieves high scalability while remaining decentralized and robust in a real-world environment. We make two main contributions. First, we present a highly scalable sharding framework for blockchains. This framework takes an arbitrary number of single chains and transforms them into the blocklattice data structure, enabling high scalability and low transaction confirmation latency with asymptotically optimal communication overhead. Second, we propose a single-chain protocol based on our novel verifiable random function and a new Byzantine agreement protocol that achieves high decentralization and low latency.
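The blocklattice can be pictured as several parallel chains whose blocks ack blocks on other chains, after which every node derives the same total order deterministically. Below is a toy sketch of that ordering step; DEXON's actual acking rules, Byzantine agreement, and timestamping are far more involved.

```python
import heapq

def total_order(blocks):
    # blocks: {(chain, height): [acked (chain, height), ...]} -- a toy
    # blocklattice; each block also implicitly follows its own-chain parent.
    deps = {b: set(acks) for b, acks in blocks.items()}
    for (c, h) in deps:
        if (c, h - 1) in deps:
            deps[(c, h)].add((c, h - 1))
    ready = [b for b, d in deps.items() if not d]
    heapq.heapify(ready)
    order = []
    while ready:
        b = heapq.heappop(ready)          # deterministic tie-breaking
        order.append(b)
        for other, d in deps.items():
            if b in d:
                d.discard(b)
                if not d:                 # all dependencies delivered
                    heapq.heappush(ready, other)
    return order

lattice = {(0, 0): [], (1, 0): [], (0, 1): [(1, 0)], (1, 1): [(0, 1)]}
print(total_order(lattice))   # [(0, 0), (1, 0), (0, 1), (1, 1)]
```

Because the order is a pure function of the delivered blocks and their ack edges, every honest node that sees the same lattice computes the same transaction sequence without further coordination.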

    dbPTM: an information repository of protein post-translational modification

    dbPTM is a database that compiles information on protein post-translational modifications (PTMs), such as catalytic sites, solvent accessibility of amino acid residues, protein secondary and tertiary structure, protein domains, and protein variations. The database includes all of the experimentally validated PTM sites from Swiss-Prot, PhosphoELM and O-GLYCBASE. Only a small fraction of Swiss-Prot proteins are annotated with experimentally verified PTMs. Although Swiss-Prot provides rich information about PTMs, other structural properties and functional information of proteins are also essential for elucidating protein mechanisms. dbPTM systematically identifies sites of three major types of protein PTM (phosphorylation, glycosylation and sulfation) in Swiss-Prot proteins by refining our previously developed prediction tool, KinasePhos. Solvent accessibility and secondary structure of residues are also computationally predicted and mapped to the PTM sites. The resource is now freely available on the web.
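The per-site integration the database performs can be pictured as joining experimentally annotated PTM positions with computed per-residue features. A minimal sketch follows; the field names and schema are illustrative, not dbPTM's actual ones.

```python
from dataclasses import dataclass

@dataclass
class PTMSite:
    protein: str               # e.g. a Swiss-Prot accession
    position: int              # 1-based residue index
    ptm_type: str              # "phosphorylation", "glycosylation", "sulfation"
    residue: str
    solvent_acc: float = None  # predicted relative solvent accessibility
    sec_struct: str = None     # predicted secondary structure (H/E/C)

def annotate(sites, rsa, ss):
    # rsa, ss: per-residue predictions keyed by (protein, position)
    for s in sites:
        s.solvent_acc = rsa.get((s.protein, s.position))
        s.sec_struct = ss.get((s.protein, s.position))
    return sites

# Toy usage: p53 Ser15 phosphorylation with made-up predicted feature values.
sites = [PTMSite("P04637", 15, "phosphorylation", "S")]
print(annotate(sites, {("P04637", 15): 0.42}, {("P04637", 15): "C"})[0])
```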
