
    Dispersive calculation of B_7^{3/2} and B_8^{3/2} in the chiral limit

    We show how the isospin vector and axial-vector current spectral functions rho_V and rho_A can be used to determine in leading chiral order the low energy constants B_7^{3/2} and B_8^{3/2}. This is accomplished by matching the Operator Product Expansion to the dispersive analysis of vacuum polarization functions. The data for the evaluation of these dispersive integrals have recently been enhanced by the ALEPH measurement of spectral functions in tau decay, and we update our previous phenomenological determination. In the NDR renormalization scheme and at renormalization scale mu = 2 GeV, our calculation yields the values B_7^{3/2} = 0.55 +- 0.07 +- 0.10 and B_8^{3/2} = 1.11 +- 0.16 +- 0.23 for the quark mass value m_s + m = 0.1 GeV.
    Comment: 16 pages, 1 figure
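    In leading chiral order, the matching described above amounts to expressing the matrix elements behind B_7^{3/2} and B_8^{3/2} as weighted dispersive integrals over the V-A spectral difference. A schematic sketch (the exact weight w_i(s, mu) and normalization are fixed by the OPE matching in the paper itself):

    \[
    \langle O_i \rangle(\mu) \;\propto\; \int_0^{\infty} \mathrm{d}s\, w_i(s,\mu)\,\bigl[\rho_V(s) - \rho_A(s)\bigr], \qquad i = 7, 8,
    \]

    with the ALEPH tau-decay data entering through the measured spectral functions rho_V and rho_A.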

    Direct determination of the strange and light quark condensates from full lattice QCD

    We determine the strange quark condensate from lattice QCD for the first time and compare its value to that of the light quark and chiral condensates. The results come from a direct calculation of the expectation value of the trace of the quark propagator followed by subtraction of the appropriate perturbative contribution, derived here, to convert the non-normal-ordered mψ̅ ψ to the MS̅ scheme at a fixed scale. This is then a well-defined physical “nonperturbative” condensate that can be used in the operator product expansion of current-current correlators. The perturbative subtraction is calculated through O(αs) and estimates of higher order terms are included through fitting results at multiple lattice spacing values. The gluon field configurations used are “second generation” ensembles from the MILC collaboration that include 2+1+1 flavors of sea quarks implemented with the highly improved staggered quark action and including u/d sea quarks down to physical masses. Our results are ⟨s̅ s⟩MS̅ (2  GeV)=-(290(15)  MeV)3, ⟨l̅ l⟩MS̅ (2  GeV)=-(283(2)  MeV)3, where l is a light quark with mass equal to the average of the u and d quarks. The strange to light quark condensate ratio is 1.08(16). The light quark condensate is significantly larger than the chiral condensate, in line with expectations from chiral analyses. We discuss the implications of these results for other calculations.
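    Schematically, the direct determination described above takes the form (a sketch; the precise O(αs) subtraction term is what the paper derives):

    \[
    \langle \bar{s} s \rangle_{\overline{\mathrm{MS}}}(\mu) \;=\; -\,\frac{1}{V}\,\bigl\langle \operatorname{Tr} M_s^{-1} \bigr\rangle_{\mathrm{latt}} \;-\; \bigl[\text{perturbative subtraction through } \mathcal{O}(\alpha_s)\bigr],
    \]

    where M_s is the lattice Dirac operator at the strange quark mass and V the lattice volume; the subtraction removes the power-divergent mixing with the identity so that the remainder is the well-defined condensate usable in the OPE.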

    The influence of banks on auditor choice and auditor reporting in Japan

    Debt as opposed to equity as the major source of financing and the influence of banks on the corporate governance of listed companies are unique features of the Japanese business environment. This thesis investigates how these features affect the choice of auditor by Japanese listed companies and auditor reporting by Japanese CPA firms on those companies. Pong and Kita (2006) provided some univariate analyses and indicated that Japanese companies tended to select the same external auditors as their main banks to reduce agency costs. In this thesis, I further examine the influence of main banks on auditor selection by logistic regression and also investigate the influence of main banks on auditor reporting quality after controlling for self-selection bias. Using data from Japanese companies listed on the Tokyo Stock Exchange over the 2002-2008 period, I provide empirical evidence that companies with more reliance on main bank loans are more likely to choose their main banks’ external auditors. Using the Propensity Score Matching method and the Heckman two-step binary probit model to control for self-selection bias, the empirical results support the hypothesis that main bank auditors are more likely to issue modified opinions to the borrowing companies than non-main bank auditors, providing evidence of higher audit quality from main bank auditors. As a sensitivity test, I also use discretionary accruals as a measure of audit quality. The results indicate that companies that choose the same auditors as their main banks have higher audit quality than companies that choose different auditors from their main banks. My thesis contributes to the existing auditing literature in several ways. First, by studying the influence of debt financing on auditor choice and auditor reporting, this thesis extends the auditor market research that focuses mainly on the role of auditors in equity markets to the bank-based market. Furthermore, this thesis also complements auditing research on the influence of institutions on audit quality.

    QUALITATIVE ASSESSMENT OF ECONOMIC EFFECTS OF INTEGRATION IN EU: THE CASE OF ALBANIA

    Everybody is aware of the importance of Albanian integration in the EU. But we must also be aware that this outstanding process is accompanied by benefits as well as costs, which have to be borne by our society. The main objective of this study is to qualitatively assess the economic impact of Albanian integration in the EU. The first section is dedicated to a theoretical explanation of the types of effects of integration, which may be: direct or indirect ones, micro- or macroeconomic effects, and short-run or long-run effects. In the second part I focus mainly on a qualitative assessment of the effects of Albanian economic integration in the EU. The main benefits relate to the new opportunities that will be ushered in, principally arising from better access of Albanian exports to foreign markets, enhanced competitive structures and improvements in efficiency, which in the long run will strengthen the Albanian economy. The costs arise mostly from the change-over from a protected economy to an open competitive one, which could lead to a loss of income and employment. The third part introduces three main quantitative methods recommended for a further step of quantifying the economic effects of Albanian integration in the EU.
    Keywords: Integration, Effects, Assessment, Qualitative, Quantitative

    A Probabilistic One-Step Approach to the Optimal Product Line Design Problem Using Conjoint and Cost Data

    Designing and pricing new products is one of the most critical activities for a firm, and it is well-known that taking into account consumer preferences for design decisions is essential for products later to be successful in a competitive environment (e.g., Urban and Hauser 1993). Consequently, measuring consumer preferences among multiattribute alternatives has been a primary concern in marketing research as well, and among many methodologies developed, conjoint analysis (Green and Rao 1971) has turned out to be one of the most widely used preference-based techniques for identifying and evaluating new product concepts. Moreover, a number of conjoint-based models with special focus on mathematical programming techniques for optimal product (line) design have been proposed (e.g., Zufryden 1977, 1982, Green and Krieger 1985, 1987b, 1992, Kohli and Krishnamurti 1987, Kohli and Sukumar 1990, Dobson and Kalish 1988, 1993, Balakrishnan and Jacob 1996, Chen and Hausman 2000). These models are directed at determining optimal product concepts using consumers' idiosyncratic or segment level part-worth preference functions estimated previously within a conjoint framework. Recently, Balakrishnan and Jacob (1996) have proposed the use of Genetic Algorithms (GA) to solve the problem of identifying a share maximizing single product design using conjoint data. In this paper, we follow Balakrishnan and Jacob's idea and employ and evaluate the GA approach with regard to the problem of optimal product line design. Similar to the approaches of Kohli and Sukumar (1990) and Nair et al. (1995), product lines are constructed directly from part-worths data obtained by conjoint analysis, which can be characterized as a one-step approach to product line design. 
In contrast, a two-step approach would start by first reducing the total set of feasible product profiles to a smaller set of promising items (reference set of candidate items) from which the products that constitute a product line are selected in a second step. Two-step approaches or partial models for either the first or second stage in this context have been proposed by Green and Krieger (1985, 1987a, 1987b, 1989), McBride and Zufryden (1988), Dobson and Kalish (1988, 1993) and, more recently, by Chen and Hausman (2000). Heretofore, with the only exception of Chen and Hausman's (2000) probabilistic model, all contributors to the literature on conjoint-based product line design have employed a deterministic, first-choice model of idiosyncratic preferences. Accordingly, a consumer is assumed to choose from her/his choice set the product with maximum perceived utility with certainty. However, the first choice rule seems to be an assumption too rigid for many product categories and individual choice situations, as the analyst often won't be in a position to control for all relevant variables influencing consumer behavior (e.g., situational factors). Therefore, in agreement with Chen and Hausman (2000), we incorporate a probabilistic choice rule to provide a more flexible representation of the consumer decision making process and start from segment-specific conjoint models of the conditional multinomial logit type. Favoring the multinomial logit model doesn't imply rejection of the widespread max-utility rule, as the MNL includes the option of mimicking this first choice rule. We further consider profit as a firm's economic criterion to evaluate decisions and introduce fixed and variable costs for each product profile. However, the proposed methodology is flexible enough to accommodate other goals such as market share (as well as any other probabilistic choice rule). 
This model flexibility is provided by the implemented Genetic Algorithm as the underlying solver for the resulting nonlinear integer programming problem. Genetic Algorithms merely use objective function information (in the present context on expected profits of feasible product line solutions) and are easily adjustable to different objectives without the need for major algorithmic modifications. To assess the performance of the GA methodology for the product line design problem, we employ sensitivity analysis and Monte Carlo simulation. Sensitivity analysis is carried out to study the performance of the Genetic Algorithm w.r.t. varying GA parameter values (population size, crossover probability, mutation rate) and to fine-tune these values in order to provide near optimal solutions. Based on more than 1,500 sensitivity runs applied to different problem sizes ranging from 12,650 to 10,586,800 feasible product line candidate solutions, we can recommend: (a) as expected, that a larger problem size be accompanied by a larger population size, with a minimum popsize of 130 for small problems and a minimum popsize of 250 for large problems, (b) a crossover probability of at least 0.9 and (c) an unexpectedly high mutation rate of 0.05 for small/medium-sized problems and a mutation rate in the order of 0.01 for large problem sizes. Following the results of the sensitivity analysis, we evaluated the GA performance for a large set of systematically varying market scenarios and associated problem sizes. We generated problems using a 4-factorial experimental design which varied by the number of attributes, number of levels in each attribute, number of items to be introduced by a new seller and number of competing firms except the new seller. The results of the Monte Carlo study with a total of 276 data sets that were analyzed show that the GA works efficiently both in providing near-optimal product line solutions and in CPU time. 
In particular, (a) the worst-case performance ratio of the GA observed in a single run was 96.66%, indicating that the profit of the best product line solution found by the GA was never less than 96.66% of the profit of the optimal product line, (b) the hit ratio of identifying the optimal solution was 84.78% (234 out of 276 cases) and (c) it took at most 30 seconds for the GA to converge. Considering the option of Genetic Algorithms for repeated runs with (slightly) changed parameter settings and/or different initial populations (as opposed to many other heuristics) further improves the chances of finding the optimal solution.
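    The segment-level MNL evaluation and the GA search described above can be sketched in a few lines of Python. This is a minimal illustration under assumed data structures (part-worths as nested lists, a hypothetical per-profile margin table, elitist selection with one-point crossover), not the authors' implementation; the paper's operators, parameter settings, and cost treatment differ.

```python
import math
import random

def mnl_shares(utilities):
    """Multinomial-logit choice probabilities for one consumer segment."""
    m = max(utilities)                       # subtract max for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]

def expected_profit(line, segments, margin):
    """Expected profit of a candidate product line.

    line     -- list of profiles, each a tuple of attribute-level indices
    segments -- list of (size, partworths) pairs; partworths[attribute][level]
    margin   -- hypothetical per-unit contribution margin of each profile
    """
    total = 0.0
    for size, pw in segments:
        utils = [sum(pw[a][lvl] for a, lvl in enumerate(p)) for p in line]
        for p, share in zip(line, mnl_shares(utils)):
            total += size * share * margin[p]
    return total

def genetic_search(profiles, k, segments, margin, pop=30, gens=50,
                   p_cross=0.9, p_mut=0.05, rng=random.Random(0)):
    """Toy GA over product lines of k profiles (sketch, not the paper's exact setup)."""
    def fitness(line):
        return expected_profit(line, segments, margin)
    popn = [tuple(rng.sample(profiles, k)) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        nxt = popn[:2]                       # elitism: keep the two best lines
        while len(nxt) < pop:
            a, b = rng.sample(popn[:pop // 2], 2)
            child = list(a)
            if rng.random() < p_cross:       # one-point crossover between parents
                cut = rng.randrange(1, k) if k > 1 else 0
                child = list(a[:cut]) + list(b[cut:])
            if rng.random() < p_mut:         # mutate one slot of the line
                child[rng.randrange(k)] = rng.choice(profiles)
            nxt.append(tuple(child))
        popn = nxt
    return max(popn, key=fitness)
```

    Because the MNL reduces to the first-choice rule when utilities are scaled sharply, the same evaluator covers both choice models, which mirrors the flexibility argument made in the abstract; swapping `expected_profit` for a market-share objective requires no change to the GA itself.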

    Scattering of thermal He beams by crossed atomic and molecular beams. II. The He-Ar van der Waals potential

    Differential cross sections for He–Ar scattering at room temperature have been measured. The experimental consistency of these measurements with others performed in different laboratories is demonstrated. Despite this consistency, the present van der Waals well depth of 1.78 meV, accurate to 10%, is smaller by 20% to 50% than the experimental values obtained previously. These discrepancies are caused by differences between the assumed mathematical forms or between the assumed dispersion coefficients of the potentials used in the present paper and those of previous studies. Independent investigations have shown that the previous assumptions are inappropriate for providing accurate potentials from fits to experimental differential cross section data for He–Ar. We use two forms free of this inadequacy in the present analysis: a modified version of the Simons–Parr–Finlan–Dunham (SPFD) potential, and a double Morse–van der Waals (M^2SV) type of parameterization. The resulting He–Ar potentials are shown to be equal to within experimental error throughout the range of interatomic distances to which the scattering data are sensitive. The SPFD or M^2SV potentials are combined with a repulsive potential previously determined exclusively from fits to gas phase bulk properties. The resulting potentials, valid over the extended range of interatomic distances r ≳ 2.4 Å, are able to reproduce all these bulk properties quite well, without adversely affecting the quality of the fits to the differential cross sections.
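    For reference, the Morse building block used in parameterizations of this type has the standard form

    \[
    V_{\mathrm{Morse}}(r) \;=\; \epsilon \left[ e^{-2\alpha (r - r_m)} - 2\, e^{-\alpha (r - r_m)} \right],
    \]

    with well depth \epsilon (here about 1.78 meV) at the minimum position r = r_m; in a double Morse–van der Waals parameterization, Morse pieces of this kind are joined to an attractive long-range dispersion tail of the form -C_6/r^6 - C_8/r^8 (a schematic description; the exact joining conditions and parameter values are those given in the paper).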