37 research outputs found
Stochastic quantization and holographic Wilsonian renormalization group of scalar theory with generic mass, self-interaction and multiple trace deformation
We explore the mathematical relationship between the holographic Wilsonian
renormalization group (HWRG) and the stochastic quantization (SQ) of a scalar field
theory with generic mass, self-interaction and multiple-trace
deformation on the conformal boundary of AdS
spacetime. We find that once the Euclidean action is defined appropriately,
the stochastic process reconstructs the holographic
Wilsonian renormalization group data by solving the Langevin equation and
computing stochastic correlation functions. The Euclidean action is composed of the boundary counter term together with the boundary deformation, which supplies a boundary condition. In our
study, we choose the boundary condition by adding a (marginal) multiple-trace
deformation to the holographic dual field theory. In this theory, we establish
maps between the fictitious-time evolution of the stochastic multi-point
correlation functions and the (AdS) radial evolution of the corresponding
multiple-trace deformations, once suitable identifications are made between
constants appearing on both sides. Comment: 41 pages and 2 figures
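(For orientation, the formulas elided in the abstract above refer to the standard stochastic quantization setup; the following is a generic sketch of the Langevin equation and noise average, not the paper's specific action or identifications.)

```latex
% Langevin equation for the scalar field \phi, with fictitious time t and Euclidean action S_E
\frac{\partial \phi(t,x)}{\partial t}
  = -\left.\frac{\delta S_E[\phi]}{\delta \phi(x)}\right|_{\phi=\phi(t,x)} + \eta(t,x),
\qquad
\langle \eta(t,x)\,\eta(t',x') \rangle = 2\,\delta(t-t')\,\delta^{d}(x-x').
% Stochastic correlation functions are noise averages such as
% \langle \phi(t,x_1)\cdots\phi(t,x_n) \rangle_{\eta};
% their fictitious-time evolution is what the paper maps to the (AdS) radial
% evolution of the multiple-trace deformations.
```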
Holographic entanglement entropy probe on spontaneous symmetry breaking with vector order
We study holographic entanglement entropy in a 5-dimensional charged black
brane geometry obtained from Einstein-SU(2) Yang-Mills theory defined in
asymptotically AdS space. This gravity system undergoes a second-order phase
transition near its critical point, driven by a spatial component of the
Yang-Mills field that is a normalizable mode of the solution; this is known as
the phase transition between isotropic and anisotropic phases. We obtain analytic
solutions for the holographic entanglement entropies by utilizing the solution of
the bulk spacetime geometry given in arXiv:1109.4592, where we consider subsystems
defined on the AdS boundary whose shapes are wide, thin slabs and a cylinder.
It turns out that the entanglement entropies near the critical point show
scaling behavior with a common critical exponent for both the slabs and the
cylinder, where the scaling quantity is the difference between the entanglement
entropy in the isotropic phase and that in the anisotropic phase. We suggest,
as a new order parameter near the critical point, the difference between the
entanglement entropy of a slab oriented perpendicular to the direction of the
vector order and that of a slab oriented parallel to it. This quantity vanishes
in the isotropic phase, but in the anisotropic phase the order parameter becomes
non-zero and shows the same scaling behavior. Finally, we show that the first
law of entanglement entropy holds even near the critical point. In particular,
we find that the entanglement temperature for the cylinder is expressed in terms
of the radius of the cylinder. Comment: 1+29 pages, 4 figures
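(The elided relations above involve the first law of entanglement entropy; its standard form, without the paper's specific coefficients, is recalled below.)

```latex
% First law of entanglement entropy for a small subsystem A
\Delta \langle H_A \rangle = T_{\mathrm{ent}}\, \Delta S_A ,
% where \Delta S_A is the change of the entanglement entropy of A,
% \Delta\langle H_A\rangle is the change of the modular-Hamiltonian expectation
% value (the excitation energy inside A for small perturbations), and the
% entanglement temperature T_{\mathrm{ent}} is fixed by the characteristic size
% of the subsystem (here, the width of the slab or the radius of the cylinder).
```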
Nanoscale Perovskite‐Sensitized Solar Cell Revisited: Dye‐Cell or Perovskite‐Cell?
A general and straightforward way of preparing few-nanometer-sized, well-separated MAPbIxBr3-x perovskite photosensitizers on the
surface of a ~1 μm thick mesoporous TiO2 photoanode is suggested via two-step sequential deposition of low-concentration lead halides
(0.10-0.30 M PbI2 or PbBr2) and methylammonium iodide/bromide (MAI/MABr). When these nanoscale MAPbIxBr3-x perovskites are
incorporated as photosensitizers in typical solid-state dye-sensitized solar cells (ss-DSSCs), capacitance analysis clearly verifies
that the nano-particulate MAPbI3 perovskites play the same role as a typical dye sensitizer (MK-2 molecule), even though their
size, composition, and structure are different.
Preparation of nanoscale inorganic CsPbIxBr3-x perovskite photosensitizers on the surface of mesoporous TiO2 film for solid-state sensitized solar cells
Metal chalcogenide quantum dot (QD)-like all-inorganic nanoscale perovskite photosensitizers of CsPbIxBr3-x were prepared on the surface of a mesoscopic TiO2 film by a direct two-step spin-coating of lead and cesium halide precursors for application in solid-state dye-sensitized solar cells (DSSCs), as confirmed by impedance frequency response analysis. Few-nanometer-sized, hemisphere-shaped dots of CsPbIxBr3-x perovskites were deposited and distributed separately on the TiO2, as verified by scanning and transmission electron microscopy (SEM and TEM). The as-deposited CsPbIxBr3-x perovskites were stable only when the halide composition included about 20% or more bromide. As the bromide content in the halide ratio of CsPbIxBr3-x increased, a gradual decrease in lattice spacing and a blue-shift of the emission peaks were observed in X-ray diffraction (XRD) and photoluminescence (PL) measurements, respectively. These well-defined nano-particulate CsPbIxBr3-x perovskites were incorporated into solid-state DSSCs and tested as a new type of photosensitizer. The initial power conversion efficiency (PCE) of ca. 1.0–3.5%, obtained on a relatively thin mesoporous TiO2 film (~1 μm), looks promising, with many parameters remaining for further optimization. The best result, 3.79%, was obtained from CsPbI2.2Br0.80 25 days after the initial measurement. These CsPbIxBr3-x-sensitized cells displayed a stable PCE record over ~2 months and no hysteresis in the current-voltage traces.
Solving support vector machine classification problems and their applications to supplier selection
Doctor of Philosophy, Department of Industrial & Manufacturing Systems Engineering, Chih-Hang Wu
Recently, interdisciplinary (management, engineering, science, and economics) collaborative research has been growing to achieve synergy and to compensate for the weaknesses of each individual discipline. Along this trend, this research combines three topics: mathematical programming, data mining, and supply chain management. A new pegging algorithm is developed for solving the continuous nonlinear knapsack problem. An efficient solution approach is proposed for the ν-support vector machine classification problem in the field of data mining; the new pegging algorithm is used to solve the subproblem of the support vector machine problem. For supply chain management, this research proposes an efficient integrated solution approach for the supplier selection problem. The support vector machine is applied to the problem of selecting potential suppliers within the integrated solution approach.
In the first part of this research, a new pegging algorithm solves the continuous nonlinear knapsack problem with box constraints. The problem is to minimize a convex, differentiable nonlinear function subject to one equality constraint and box constraints. A pegging algorithm needs to calculate the primal variables at each iteration in order to check the bounds on the variables, which is frequently a time-consuming task. The newly proposed dual bound algorithm instead checks bounds on the Lagrange multipliers without explicitly calculating the primal variables at each iteration. In addition, the calculation of the dual solution at each iteration can be reduced by a newly proposed method for updating the solution.
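(The dual bound algorithm itself is not given in the abstract. The sketch below only illustrates the general idea of searching over the Lagrange multiplier of the equality constraint instead of repeatedly pegging primal variables, specialized to a separable quadratic objective with made-up data; it is not the dissertation's algorithm.)

```python
import numpy as np

def quad_knapsack(a, c, l, u, b, tol=1e-10, max_iter=200):
    """Minimize sum(0.5*a_i*x_i**2 - c_i*x_i)  s.t.  sum(x) = b,  l <= x <= u.

    Continuous quadratic knapsack via bisection on the Lagrange multiplier of
    the equality constraint: for a fixed multiplier lam, the box-constrained
    minimizer is x_i(lam) = clip((c_i + lam)/a_i, l_i, u_i), and sum(x(lam))
    is nondecreasing in lam, so a simple root search recovers the solution.
    """
    a, c, l, u = map(np.asarray, (a, c, l, u))
    assert np.all(a > 0) and l.sum() <= b <= u.sum(), "infeasible or non-convex"

    def x_of(lam):
        return np.clip((c + lam) / a, l, u)

    lo = np.min(a * l - c)   # multiplier at which every variable sits at its lower bound
    hi = np.max(a * u - c)   # multiplier at which every variable sits at its upper bound
    for _ in range(max_iter):
        lam = 0.5 * (lo + hi)
        gap = x_of(lam).sum() - b
        if abs(gap) < tol:
            break
        if gap < 0:
            lo = lam
        else:
            hi = lam
    return x_of(lam)

# Hypothetical example: allocate a total of 10 units across 4 activities.
x = quad_knapsack(a=[1.0, 2.0, 1.5, 3.0], c=[4.0, 6.0, 2.0, 8.0],
                  l=[0.0, 0.0, 0.0, 0.0], u=[5.0, 5.0, 5.0, 5.0], b=10.0)
print(x, x.sum())
```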
In the second part, this research proposes several streamlined solution procedures for the ν-support vector machine for classification. The main solution procedure is a matrix splitting method; the method proposed in this research is a specialized matrix splitting method combined with a gradient projection method, a line search technique, and the incomplete Cholesky decomposition. The proposed method can use a variety of techniques for line search and parameter updating. Moreover, large-scale problems are solved with the incomplete Cholesky decomposition and several efficient implementation techniques.
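(The matrix-splitting solver itself is not reproduced in the abstract. As a point of reference, the ν-SVM classification problem it targets can be set up and solved with an off-the-shelf implementation, shown here on synthetic data; this is not the thesis's solver.)

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import NuSVC

# Synthetic two-class data standing in for, e.g., supplier records (hypothetical).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# In the nu-parametrization, nu upper-bounds the fraction of margin errors and
# lower-bounds the fraction of support vectors.
clf = NuSVC(nu=0.2, kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
print("fraction of support vectors:", clf.support_.size / X_tr.shape[0])
```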
To apply the research findings to real-world problems, this research develops an efficient integrated approach for supplier selection problems using the support vector machine and mixed integer programming. Supplier selection is an essential step in procurement processes. For companies seeking to maximize profits and reduce costs, supplier selection requires identifying satisfactory suppliers and allocating proper orders to the selected suppliers. In the early stage of supplier selection, a company can use support vector machine classification to choose potentially qualified suppliers according to specific criteria. However, the company may not need to purchase from all qualified suppliers. Once the company determines the amounts of raw materials and components to purchase, it selects the final suppliers, and the optimal order quantities from them, at the final stage of the process. A mixed integer programming model is then used to determine the final suppliers and allocate the optimal orders at this stage.
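(A toy sketch of the final-stage order-allocation model described above, using a generic fixed-cost/capacity formulation with made-up numbers rather than the dissertation's exact model.)

```python
import pulp

# Hypothetical data: unit prices, capacities, and fixed ordering costs for
# three suppliers that passed the SVM screening stage, plus a total demand.
suppliers = ["S1", "S2", "S3"]
price    = {"S1": 10.0, "S2": 9.0, "S3": 11.0}
capacity = {"S1": 400,  "S2": 300, "S3": 500}
fixed    = {"S1": 500,  "S2": 800, "S3": 300}
demand   = 700

model = pulp.LpProblem("supplier_selection", pulp.LpMinimize)
x = pulp.LpVariable.dicts("order", suppliers, lowBound=0)     # order quantity per supplier
y = pulp.LpVariable.dicts("use", suppliers, cat="Binary")     # is the supplier selected?

# Objective: purchasing cost plus fixed cost of each selected supplier.
model += pulp.lpSum(price[s] * x[s] + fixed[s] * y[s] for s in suppliers)
model += pulp.lpSum(x[s] for s in suppliers) == demand        # meet total demand
for s in suppliers:
    model += x[s] <= capacity[s] * y[s]                       # order only from selected suppliers

model.solve(pulp.PULP_CBC_CMD(msg=False))
for s in suppliers:
    print(s, y[s].value(), x[s].value())
```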
On the Effect of Traffic Self-Similarity on Network Performance
Recent measurements of network traffic have shown that self-similarity is a ubiquitous phenomenon, present in both local area and wide area traffic traces. In previous work, we have shown a simple, robust application-layer causal mechanism of traffic self-similarity, namely, the transfer of files in a network system where the file size distributions are heavy-tailed. In this paper, we study the effect of scale-invariant burstiness on network performance when the functionality of the transport layer and the interaction of traffic sources sharing bounded network resources are incorporated. First, we show that transport layer mechanisms are important factors in translating the application-layer causality into link traffic self-similarity. Network performance, as captured by throughput, packet loss rate, and packet retransmission rate, degrades gradually with increased heavy-tailedness, while queueing delay, response time, and fairness deteriorate more drastically. The degree to which heavy-tailedness affects self-similarity is determined by how well congestion control is able to shape source traffic into an on-average constant output stream while conserving information. Second, we show that increasing network resources such as link bandwidth and buffer capacity results in a superlinear improvement in performance. When large file transfers occur with nonnegligible probability, the incrementa...
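(The heavy-tailed file-size mechanism cited above can be illustrated with a short sketch; the parameters are made up, and the Pareto shape alpha < 2 corresponds to the infinite-variance, heavy-tailed regime that the paper varies.)

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto_file_sizes(n, alpha, x_min=1_000):
    """Draw n file sizes (bytes) from a Pareto distribution with shape alpha.

    For alpha < 2 the variance is infinite; transferring such files is the
    application-layer mechanism for self-similar traffic described above.
    """
    return x_min * (1.0 + rng.pareto(alpha, size=n))

for alpha in (1.2, 1.6, 2.5):          # heavier ... lighter tails
    sizes = pareto_file_sizes(100_000, alpha)
    print(f"alpha={alpha}: mean={sizes.mean():.0f} B, "
          f"99.9th percentile={np.percentile(sizes, 99.9):.0f} B")
```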
Preserving Bandwidth Using A Lazy Packet Discard Policy in ATM Networks
A number of recent studies have pointed out that TCP's performance over ATM networks tends to suffer, especially under congestion and switch buffer limitations. Switch-level enhancements and link-level flow control have been proposed to improve TCP's performance in ATM networks. Selective Cell Discard (SCD) and Early Packet Discard (EPD) ensure that partial packets are discarded from the network "as early as possible", thus reducing wasted bandwidth. While such techniques improve the achievable throughput, their effectiveness tends to degrade in multi-hop networks. In this paper, we introduce Lazy Packet Discard (LPD), an AAL-level enhancement that improves effective throughput, reduces response time, and minimizes wasted bandwidth for TCP/IP over ATM. In contrast to the SCD and EPD policies, LPD delays as much as possible the removal from the network of cells belonging to a partially communicated packet. LPD preserves network bandwidth by keeping such cells alive and by ensuring that ..
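(A toy illustration, under simplifying assumptions, of the per-cell decision difference described above: a made-up stream of (packet, cell) pairs seen at a single congested hop. The helper `discard_decisions` is hypothetical and is not the paper's AAL-level mechanism.)

```python
def discard_decisions(cells, dropped_cells, policy):
    """Decide forward/drop for each (packet_id, cell_index) in arrival order.

    `dropped_cells` are the cells lost to buffer overflow.  Under an
    early-discard policy (SCD/EPD style), the remaining cells of a damaged
    packet are discarded to avoid carrying useless data; under the lazy
    policy they are kept alive, since the receiver may still make use of
    the partially delivered packet.
    """
    damaged = set()
    decisions = []
    for pkt, idx in cells:
        if (pkt, idx) in dropped_cells:
            damaged.add(pkt)
            decisions.append((pkt, idx, "lost"))
        elif policy == "early" and pkt in damaged:
            decisions.append((pkt, idx, "discard"))   # partial-packet discard
        else:
            decisions.append((pkt, idx, "forward"))   # lazy policy keeps cells alive
    return decisions

cells = [(1, i) for i in range(4)] + [(2, i) for i in range(4)]
lost = {(1, 1)}
print(discard_decisions(cells, lost, "early"))
print(discard_decisions(cells, lost, "lazy"))
```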
Implementation and Performance Evaluation of TCP Boston A Fragmentation-tolerant TCP Protocol for ATM Networks
In this paper, we overview the implementation of TCP Boston, a novel fragmentation-tolerant transport protocol especially suited for ATM's 53-byte cell-oriented switching architecture. TCP Boston integrates a standard TCP/IP protocol, such as Reno or Vegas, with a powerful redundancy control mechanism based on AIDA, an adaptive version of Rabin's IDA dispersal and reconstruction algorithms. Our results show that TCP Boston improves TCP/IP's performance over ATM for both network-centric metrics (e.g., effective throughput) and application-centric metrics (e.g., response time). 1 Introduction: In the last few years, the Transmission Control Protocol (TCP) [15], a reliable transport protocol that uses a window-based flow and error control algorithm on top of the Internet Protocol (IP) layer, has emerged as the standard in data communication. However, the introduction of the Asynchronous Transfer Mode (ATM) technology and attempts to integrate that technology with IP protocols ha...
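(For orientation, the trade-off provided by Rabin's IDA, on which AIDA builds, can be summarized as follows; these are the standard properties of the dispersal algorithm, not details specific to TCP Boston.)

```latex
% A packet F of length |F| is dispersed into n pieces of length |F|/m each,
% chosen so that any m of the n pieces suffice to reconstruct F.  Hence
\text{bandwidth overhead} = \frac{n}{m},
\qquad
\text{tolerable piece (cell) losses} = n - m ,
% and AIDA's dynamic redundancy control amounts to adjusting this ratio on the
% fly as loss conditions change, trading bandwidth for reliability.
```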
Exploiting Redundancy for Timeliness in TCP Boston
While ATM bandwidth-reservation techniques are able to offer the guarantees necessary for the delivery of real-time streams in many applications (e.g., live audio and video), they suffer from many disadvantages that make them unattractive (or impractical) for many others. These limitations, coupled with the flexibility and popularity of TCP/IP as a best-effort transport protocol, have prompted the network research community to propose and implement a number of techniques that adapt TCP/IP to the Available Bit Rate (ABR) and Unspecified Bit Rate (UBR) services in ATM network environments. This allows these environments to smoothly integrate (and make use of) currently available TCP-based applications and services without much (if any) modification. However, recent studies have shown that TCP/IP, when implemented over ATM networks, is susceptible to serious performance limitations. In a recently completed study, we have unveiled a new transport protocol, TCP Boston, that turns ATM's 53-byte cell-oriented switching architecture into an advantage for TCP/IP. In this paper, we demonstrate the real-time features of TCP Boston that allow communication bandwidth to be traded off for timeliness. We start with an overview of the protocol. Next, we analytically characterize the dynamic redundancy control features of TCP Boston. We then present detailed simulation results that show the superiority of our protocol when compared to other adaptations of TCP/IP over ATM. In particular, we show that TCP Boston improves TCP/IP's performance over ATM for both network-centric metrics (e.g., effective throughput and percent of missed deadlines) and real-time application-centric metrics (e.g., response time and jitter).