3,543 research outputs found

    Income Distribution Effects of Water Quality Controls: An Econometric Approach

    Get PDF
    The imposition of water quality controls may affect the economy chiefly by altering aggregate production and changing factor payments. These two effects could not only reallocate resources among production possibilities, but also change the distribution of the benefits of production among members of society. This study attempted to provide a workable theory to establish an empirical test of the impacts of water quality controls on family income distribution. It consists of two separate areas: first, analyzing methodologies for measuring income distribution changes, and, second, developing a theoretical model that is useful for empirical tests of the impacts of different water quality controls. A number of alternative probability density functions have been proposed as models of personal income distribution. The lognormal, displaced lognormal, gamma, and beta distribution functions were considered as appropriate methodologies, since each offers predictive power for income distribution, as suggested in the past literature. Detailed information on income distribution can be extracted from the approximations of the distribution functions. One of the objectives of the research was to evaluate the different methodologies for usefulness. The Gastwirth bounds for the Gini coefficient were used as the test of goodness of fit; the beta density was clearly superior to the other densities for the SMSA data. Next, a theoretical model was constructed, emphasizing the production sector and the distribution sector. Water quality controls were introduced into the production process as a negative input. Water quality data were collected for all states, and indices of quality were estimated using analysis-of-variance techniques. The equilibrium conditions in commodity and factor markets generated the first impacts of water quality controls on total output and factor payments in the economy.
A specific assumption was made as a theoretical bridge connecting family income distribution and factor payments in the distribution sector. It was assumed that a family's income equals the total payments received from its labor and capital in the production process. Thus, changes in factor payments and total output were included in the distribution equations. Water quality controls would, therefore, affect family income distribution through changes in total output and changes in factor payments. The simultaneous-equation regression results for 72 SMSA's were not conclusive. It appeared that the water quality parameter may affect the wage rate and total output, if the parameter was not, in fact, a surrogate for other variables excluded from the system. The effect of wage changes on income distribution was not significant, but changes in total output appeared to be the most significant variable in the distribution equations. In an attempt to account for the many variables which might be expected to affect income distribution, factor analysis was performed on the SMSA's. Two groups of SMSA's were identified and regressions were performed for these groups. Results from these regressions were similar in sign to the results from the 172-observation regressions, although many of the coefficients were not significant. Interpreting the results of the research was somewhat difficult, even though some results did appear consistent across all regressions. There is some evidence to indicate that water quality controls lead to a less equal family income distribution. Better data are required for a more complete and accurate analysis. The principal thrust of the study was to develop a model to organize the complexity of economic causality with respect to income distribution change and water quality policy. It appeared that this type of systematic econometric approach can be fruitful in analyzing income distribution change.
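    The density-fitting step described above can be sketched in a few lines. This is only an illustration, using synthetic lognormal incomes rather than the study's SMSA data; it fits two of the candidate densities with SciPy and computes an empirical Gini coefficient directly (the study used the Gastwirth bounds; the mean-difference formula is used here for brevity).

```python
import numpy as np
from scipy import stats

def gini(incomes):
    """Empirical Gini coefficient via the sorted-index formula."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    idx = np.arange(1, n + 1)
    return np.sum((2 * idx - n - 1) * x) / (n * x.sum())

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10.0, sigma=0.6, size=5000)  # synthetic family incomes

# Fit two of the candidate densities to the sample.
ln_sigma, _, _ = stats.lognorm.fit(incomes, floc=0)          # shape parameter = sigma
gamma_shape, _, gamma_scale = stats.gamma.fit(incomes, floc=0)

# For a lognormal, the theoretical Gini is 2*Phi(sigma/sqrt(2)) - 1, about 0.33 here.
print(round(gini(incomes), 3), round(ln_sigma, 3))
```

    Comparing the empirical Gini against the value implied by each fitted density is one simple way to rank the candidate functional forms.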

    Multi-Factor Policy Evaluation and Selection in the One-Sample Situation

    Get PDF
    Firms nowadays need to make decisions while information rapidly becomes obsolete. In this paper I deal with one class of decision problems in this situation, called the "one-sample" problems: we have a finite set of options and a single sample of the multiple criteria used to evaluate those options. I develop evaluation procedures based on bootstrapping DEA (Data Envelopment Analysis) and related decision-making methods. This paper improves the bootstrap procedure proposed by Simar and Wilson (1998) and shows how to exploit information from bootstrap outputs for decision-making.

    Blast Load Input Estimation of the Medium Girder Bridgeusing Inverse Method

    Get PDF
    An innovative adaptive weighted input estimation inverse methodology for estimating the unknown time-varying blast loads on a truss structure system is presented. This method is based on the Kalman filter and the recursive least squares estimator (RLSE). The filter models the system dynamics in a linear set of state equations. The state equations of the truss structure are constructed using the finite element method. The input blast loads of the truss structure system are inverse-estimated from the system responses measured at two distinct nodes. This work presents an efficient weighting factor applied in the RLSE, which is capable of providing reasonable estimation results. The results obtained from the simulations show that the method is effective in estimating input blast loads and has good stability and precision. Defence Science Journal, 2008, 58(1), pp. 46-56, DOI: http://dx.doi.org/10.14429/dsj.58.162
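    The role of the weighting (forgetting) factor in the RLSE can be illustrated with a scalar sketch. This is not the paper's Kalman-filter/finite-element formulation: it assumes a direct noisy measurement of the unknown input, and only shows how a forgetting factor lets recursive least squares track a pulse-like load.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 400
t = np.arange(T)
u_true = np.where((t > 100) & (t < 140), 50.0, 0.0)  # pulse-like "blast" input
y = u_true + rng.normal(0.0, 1.0, size=T)            # noisy measurement of the input

lam = 0.9            # weighting (forgetting) factor: smaller -> faster tracking, noisier
u_hat, P = 0.0, 1e6  # initial estimate and covariance
est = []
for yk in y:
    K = P / (P + lam)                 # gain for measurement model y_k = u_k + v_k
    u_hat = u_hat + K * (yk - u_hat)  # correct the estimate with the innovation
    P = (1.0 - K) * P / lam           # covariance update with forgetting
    est.append(u_hat)
est = np.array(est)
```

    With lam = 1 the estimator averages all past data and barely reacts to the pulse; discounting old data is what makes time-varying input estimation possible.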

    Forgetful Large Language Models: Lessons Learned from Using LLMs in Robot Programming

    Full text link
    Large language models offer new ways of empowering people to program robot applications, namely code generation via prompting. However, the code generated by LLMs is susceptible to errors. This work reports a preliminary exploration that empirically characterizes common errors produced by LLMs in robot programming. We categorize these errors into two phases: interpretation and execution. In this work, we focus on errors in execution and observe that they are caused by LLMs being "forgetful" of key information provided in user prompts. Based on this observation, we propose prompt engineering tactics designed to reduce errors in execution. We then demonstrate the effectiveness of these tactics with three language models: ChatGPT, Bard, and LLaMA-2. Finally, we discuss lessons learned from using LLMs in robot programming and call for the benchmarking of LLM-powered end-user development of robot applications. Comment: 9 pages, 8 figures; accepted by the AAAI 2023 Fall Symposium Series.
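    One generic form of such a tactic, re-stating key information in every prompt so it cannot drop out of context, can be sketched as follows. The helper and the example facts are hypothetical, not the paper's actual prompts or tactics.

```python
def build_prompt(task, key_facts, history):
    """Re-state the key facts at the top of every prompt so the model
    cannot 'forget' them as the conversation grows (a generic sketch,
    not the paper's exact wording)."""
    header = "Constraints to respect in ALL code you write:\n" + "\n".join(
        f"- {fact}" for fact in key_facts
    )
    return "\n\n".join([header, *history, task])

# Hypothetical robot-programming facts that an LLM might drop mid-session.
key_facts = [
    "The robot arm's gripper is currently holding a cup.",
    "Use only the functions move_to(x, y) and release().",
]
prompt = build_prompt("Write code to place the cup on the table.", key_facts, history=[])
print(prompt)
```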

    Maximizing Friend-Making Likelihood for Social Activity Organization

    Full text link
    The social presence theory in social psychology suggests that computer-mediated online interactions are inferior to face-to-face, in-person interactions. In this paper, we consider the scenario of organizing in-person friend-making social activities via online social networks (OSNs) and formulate a new research problem, namely Hop-bounded Maximum Group Friending (HMGF), by modeling both existing friendships and the likelihood of new friend-making. To find a set of attendees for socialization activities, HMGF is unique and challenging due to the interplay of the group size, the constraint on existing friendships, and the objective function on the likelihood of friend-making. We prove that HMGF is NP-hard and that no approximation algorithm exists unless P = NP. We then propose an error-bounded approximation algorithm to efficiently obtain solutions very close to the optimal ones. We conduct a user study to validate our problem formulation and perform extensive experiments on real datasets to demonstrate the efficiency and effectiveness of our proposed algorithm.
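    The hop-bound constraint at the heart of HMGF can be illustrated with one small building block: a breadth-first search that returns the candidate attendees within h hops of an organizer in the friendship graph. This is only a sketch of the constraint, not the paper's error-bounded approximation algorithm.

```python
from collections import deque

def within_hops(adj, source, h):
    """Nodes reachable from `source` within h hops (BFS over the friendship graph)."""
    dist = {source: 0}
    q = deque([source])
    while q:
        v = q.popleft()
        if dist[v] == h:          # hop bound reached: do not expand further
            continue
        for w in adj.get(v, ()):
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return set(dist)

# A toy friendship graph (adjacency lists).
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(within_hops(adj, 0, 2))  # node 4 is 3 hops away, so it is excluded
```

    Restricting the search to this candidate set is what makes the hop bound interact with the group-size and friendship constraints in the full problem.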

    Probing triple-Higgs productions via $4b2\gamma$ decay channel at a 100 TeV hadron collider

    Full text link
    The quartic self-coupling of the Standard Model Higgs boson can only be measured by observing the triple-Higgs production process, but this is challenging for the Large Hadron Collider (LHC) Run 2 or an International Linear Collider (ILC) at a few TeV because of its extremely small production rate. In this paper, we present a detailed Monte Carlo simulation study of triple-Higgs production through gluon fusion at a 100 TeV hadron collider and explore the feasibility of observing this production mode. We focus on the decay channel $HHH \rightarrow b\bar{b}b\bar{b}\gamma\gamma$, investigating detector effects and optimizing the kinematic cuts to discriminate the signal from the backgrounds. Our study shows that, in order to observe the Standard Model triple-Higgs signal, the integrated luminosity of a 100 TeV hadron collider should be greater than $1.8\times 10^{4}$ ab$^{-1}$. We also explore the dependence of the cross section upon the trilinear ($\lambda_3$) and quartic ($\lambda_4$) self-couplings of the Higgs. We find that, through a search in triple-Higgs production, the parameters $\lambda_3$ and $\lambda_4$ can be restricted to the ranges $[-1, 5]$ and $[-20, 30]$, respectively. We also examine how new physics can change the production rate of triple-Higgs events. For example, in the singlet extension of the Standard Model, we find that the triple-Higgs production rate can be increased by a factor of $\mathcal{O}(10)$. Comment: 33 pages, 11 figures; added references, corrected typos, improved text, affiliation changed. This is the publication version.

    WildSpan: mining structured motifs from protein sequences

    Get PDF
    <p>Abstract</p> <p>Background</p> <p>Automatic extraction of motifs from biological sequences is an important research problem in the study of molecular biology. For proteins, it is desirable to discover sequence motifs containing a large number of wildcard symbols, as the residues associated with functional sites are usually largely separated in sequences. Discovering such patterns is time-consuming because abundant combinations exist when long gaps (a gap consists of one or more successive wildcards) are considered. Mining algorithms often employ constraints to narrow down the search space in order to increase efficiency. However, improper constraint models might degrade the sensitivity and specificity of the motifs discovered by computational methods. We previously proposed a new constraint model to handle large wildcard regions for discovering functional motifs of proteins. The patterns that satisfy the proposed constraint model are called W-patterns. A W-pattern is a structured motif that groups motif symbols into pattern blocks interleaved with large irregular gaps. Considering large gaps reflects the fact that functional residues are not always from a single region of protein sequences, and restricting motif symbols into clusters corresponds to the observation that short motifs are frequently present within protein families. To efficiently discover W-patterns for large-scale sequence annotation and function prediction, this paper first formally introduces the problem to be solved and proposes an algorithm named WildSpan (sequential pattern mining across large wildcard regions) that incorporates several pruning strategies to largely reduce the mining cost.</p> <p>Results</p> <p>WildSpan is shown to efficiently find W-patterns containing conserved residues that are far separated in sequences. We conducted experiments with two mining strategies, protein-based and family-based mining, to evaluate the usefulness of W-patterns and the performance of WildSpan. 
The protein-based mining mode of WildSpan is developed for discovering functional regions of a single protein by referring to a set of related sequences (e.g. its homologues). The discovered W-patterns are used to characterize the protein sequence, and the results are compared with the conserved positions identified by multiple sequence alignment (MSA). The family-based mining mode of WildSpan is developed for extracting sequence signatures for a group of related proteins (e.g. a protein family) for protein function classification. In this situation, the discovered W-patterns are compared with PROSITE patterns as well as the patterns generated by three existing methods performing a similar task. Finally, analysis of the execution time of WildSpan reveals that the proposed pruning strategy is effective in improving the scalability of the proposed algorithm.</p> <p>Conclusions</p> <p>The mining experiments conducted in this study reveal that WildSpan is efficient and effective in discovering functional signatures of proteins directly from sequences. The proposed pruning strategy is effective in improving the scalability of WildSpan. It is demonstrated in this study that the W-patterns discovered by WildSpan provide useful information for characterizing protein sequences. The WildSpan executable and open source codes are available on the web (<url>http://biominer.csie.cyu.edu.tw/wildspan</url>).</p>
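    The notion of a W-pattern, pattern blocks separated by bounded wildcard gaps, can be illustrated by matching one against a sequence. This sketch only shows matching via a regular expression (the sequence and the gap bounds are made up); it is not WildSpan's mining algorithm, which discovers such patterns from whole families of sequences.

```python
import re

def wpattern_regex(blocks, min_gap, max_gap):
    """Join pattern blocks with bounded wildcard gaps, e.g. GKS.{1,10}DE."""
    gap = ".{%d,%d}" % (min_gap, max_gap)
    return gap.join(re.escape(block) for block in blocks)

seq = "MKTAYIAKQRGKSAQWLDEVNPHTLVGGKSTTTTDEK"  # a made-up protein sequence
pat = wpattern_regex(["GKS", "DE"], min_gap=1, max_gap=10)
m = re.search(pat, seq)
print(pat, m.span() if m else None)
```

    Tightening the gap bounds is exactly the kind of constraint that trades sensitivity for specificity, as discussed in the Background above.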