
    Spartan Daily, April 26, 1990

    Volume 94, Issue 58

    Almost Optimal Distribution-Free Junta Testing

    We consider the problem of testing whether an unknown n-variable Boolean function is a k-junta in the distribution-free property testing model, where the distance between functions is measured with respect to an arbitrary and unknown probability distribution over {0,1}^n. Chen, Liu, Servedio, Sheng, and Xie [Zhengyang Liu et al., 2018] showed that distribution-free k-junta testing can be performed, with one-sided error, by an adaptive algorithm that makes O~(k^2)/epsilon queries. In this paper, we give a simple two-sided-error adaptive algorithm that makes only O~(k/epsilon) queries.
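    The property being tested can be made concrete with a toy sketch. The following is not the paper's algorithm (which is adaptive, distribution-free, and far more query-efficient); it is a naive uniform-sampling illustration that estimates which variables influence f by flipping one coordinate at a time, assuming only black-box query access to f:

```python
import random

def toy_junta_test(f, n, k, trials=200):
    """Accept iff at most k of the n variables are observed to change f.
    Naive uniform-sampling illustration, NOT the paper's distribution-free
    algorithm; query cost is trials * (n + 1)."""
    relevant = set()
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        fx = f(x)
        for i in range(n):
            y = x[:]
            y[i] ^= 1          # flip coordinate i
            if f(y) != fx:     # coordinate i changed the output
                relevant.add(i)
    return len(relevant) <= k, relevant

# Example: majority of the first 3 of 8 bits is a 3-junta.
def majority3(x):
    return 1 if x[0] + x[1] + x[2] >= 2 else 0
```

    A function that is far from every k-junta would, with enough samples, reveal more than k influential coordinates and be rejected.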

    A Decision-Support Framework For Using Value Capture to Fund Public Transit: Lessons From Project-Specific Analyses, Research Report 11-14

    Local and state governments provide 75 percent of transit funds in the United States. With all levels of government under significant fiscal stress, any new transit funding mechanism is welcome. Value capture (VC) is one such mechanism. Based on the “benefits received” principle, VC involves identifying and capturing the increase in land value generated by public infrastructure. While the literature has extensively demonstrated the property-value impacts of transit investments and has empirically simulated the potential magnitude of VC revenues for financing transit facilities, very little research has examined the suitability of VC mechanisms for specific transit projects. This report aims to fill this research gap by examining five VC mechanisms in depth: tax-increment financing (TIF), special assessment districts (SADs), transit impact fees, joint developments, and air rights. The report is intended to help practitioners gauge the legal, financial, and administrative suitability of VC mechanisms for meeting project-specific funding requirements.
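    To make the first of these mechanisms concrete, here is a minimal TIF arithmetic sketch with hypothetical numbers (not drawn from the report): a TIF district freezes the assessed-value base at formation, and only taxes on growth above that base flow to the district:

```python
def tif_revenue(base_value, assessed_value, tax_rate):
    """Only assessed-value growth above the frozen base (the 'increment')
    generates revenue for the TIF district."""
    increment = max(assessed_value - base_value, 0.0)
    return increment * tax_rate

# Hypothetical district: $100M frozen base, 3%/year appreciation,
# 1.5% property-tax rate, 10-year capture period.
base = 100e6
total = sum(
    tif_revenue(base, base * 1.03 ** year, 0.015) for year in range(1, 11)
)
```

    The sketch also shows the mechanism's main risk: if assessed values stagnate or fall, the increment, and hence the revenue stream, is zero.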

    Closed-form inverses for the mixed pixel/multipath interference problem in AMCW lidar

    We present two new closed-form methods for mixed pixel/multipath interference separation in AMCW lidar systems. The mixed pixel/multipath interference problem arises from the violation of a standard range-imaging assumption that each pixel integrates over only a single, discrete backscattering source. While a numerical inversion method has previously been proposed, no closed-form inverses have previously been posited. The first new method models reflectivity as a Cauchy distribution over range and uses four measurements at different modulation frequencies to determine the amplitude, phase, and reflectivity distribution of up to two component returns within each pixel. The second new method uses attenuation ratios to determine the amplitude and phase of up to two component returns within each pixel. The methods are tested on both simulated and real data and shown to produce a significant improvement in overall error. While this paper focuses on the AMCW mixed pixel/multipath interference problem, the algorithms contained herein are also applicable to the reconstruction of a sparse one-dimensional signal from an extremely limited number of discrete samples of its Fourier transform.
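    The closing observation, recovering a sparse signal from a few Fourier samples, can be illustrated with a Prony-style solver. This is a generic sketch of that idea, not the paper's Cauchy-distribution or attenuation-ratio inverses: given four complex measurements at harmonically related modulation frequencies, two component returns can be separated in closed form:

```python
import numpy as np

def separate_two_returns(m):
    """Closed-form (Prony-style) separation of two component returns from
    four complex measurements m[k] = a1*z1**(k+1) + a2*z2**(k+1), i.e.
    samples at modulation frequencies f0, 2*f0, 3*f0, 4*f0, where each
    z_j encodes the phase (range) and attenuation of return j."""
    m1, m2, m3, m4 = m
    # Linear prediction: m[k+2] = c1*m[k+1] + c0*m[k]
    c1, c0 = np.linalg.solve(np.array([[m2, m1], [m3, m2]]),
                             np.array([m3, m4]))
    # z1 and z2 are the roots of z**2 - c1*z - c0
    z = np.roots([1.0, -c1, -c0])
    # Complex amplitudes from the 2x2 Vandermonde system for m1, m2
    V = np.array([[z[0], z[1]], [z[0] ** 2, z[1] ** 2]])
    a = np.linalg.solve(V, np.array([m1, m2]))
    return a, z
```

    With only two backscattering sources per pixel, four Fourier samples suffice for exact recovery in the noiseless case; noise robustness is where the specific modeling choices in the paper matter.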

    Efficient Black-Box Identity Testing for Free Group Algebras

    Hrubes and Wigderson [Pavel Hrubes and Avi Wigderson, 2014] initiated the study of noncommutative arithmetic circuits with division, which compute noncommutative rational functions in the free skew field, and raised the question of rational identity testing. For noncommutative formulas with inverses, the problem can be solved in deterministic polynomial time in the white-box model [Ankit Garg et al., 2016; Ivanyos et al., 2018]. It can be solved in randomized polynomial time in the black-box model [Harm Derksen and Visu Makam, 2017], where the running time is polynomial in the size of the formula. The complexity of identity testing of noncommutative rational functions remains open in general for noncommutative circuits with inverses. We solve the problem for a natural special case: expressions in the free group algebra F(X, X^{-1}), where X = {x_1, x_2, ..., x_n}. Our main results are the following. 1) Given a degree-d expression f in F(X, X^{-1}) as a black box, we obtain a randomized poly(n, d) algorithm to check whether f is an identically zero expression. The technical contribution is an Amitsur-Levitzki-type theorem [A. S. Amitsur and J. Levitzki, 1950] for F(X, X^{-1}). This also yields a deterministic identity testing algorithm (and even an expression reconstruction algorithm) that runs in time polynomial in the sparsity of the input expression. 2) Given an expression f in F(X, X^{-1}) of degree D and sparsity s as a black box, we can check whether f is identically zero in randomized poly(n, log s, log D) time. This yields a randomized polynomial-time algorithm when D and s are exponential in n.
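    The flavor of result 1 can be illustrated with the classical ingredients it builds on. The sketch below is a generic Amitsur-Levitzki plus Schwartz-Zippel randomized zero test for an ordinary noncommutative polynomial given as a black box; unlike the paper's algorithm, it does not handle inverted variables X^{-1}:

```python
import numpy as np

_PRIME = 99991                     # modest prime field for Schwartz-Zippel
rng = np.random.default_rng(0)

def is_probably_zero(f, n, d, trials=5):
    """Randomized black-box zero test for a noncommutative polynomial
    f(X_1, ..., X_n) of degree <= d.  By the Amitsur-Levitzki theorem,
    a nonzero polynomial of degree d cannot vanish on all k x k matrices
    once k > d/2, so we substitute independent random matrices mod a
    prime and apply the Schwartz-Zippel bound per trial."""
    k = d // 2 + 1
    for _ in range(trials):
        mats = [rng.integers(0, _PRIME, size=(k, k)) for _ in range(n)]
        if np.any(f(mats) % _PRIME):
            return False           # found a nonzero evaluation
    return True                    # identically zero with high probability

# The commutator x1*x2 - x2*x1 is a nonzero noncommutative polynomial
# even though it vanishes identically on scalars (1x1 matrices).
commutator = lambda M: M[0] @ M[1] - M[1] @ M[0]
```

    The paper's technical contribution is, in effect, an analogue of the Amitsur-Levitzki step that remains valid when the expression may contain inverses of the variables.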

    Lifting with Inner Functions of Polynomial Discrepancy

    Lifting theorems are theorems that bound the communication complexity of a composed function f∘g^n in terms of the query complexity of f and the communication complexity of g. Such theorems constitute a powerful generalization of direct-sum theorems for g, and have seen numerous applications in recent years. We prove a new lifting theorem that works for every pair of functions f, g such that the discrepancy of g is at most inverse-polynomial in the input length of f. Our result is a significant generalization of the known direct-sum theorem for discrepancy, and it extends the range of inner functions g for which lifting theorems hold.
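    Discrepancy, the key parameter here, can be computed by brute force for tiny examples. A minimal sketch (exponential time, illustration only), using the inner-product function as a standard example of an inner function with small discrepancy:

```python
from itertools import product

def discrepancy(g, nx, ny):
    """Brute-force discrepancy of g: [nx] x [ny] -> {0,1} under the
    uniform distribution: the maximum over all combinatorial rectangles
    S x T of |sum over S x T of (-1)**g(x, y)| / (nx * ny).
    Exponential in nx + ny, so toy sizes only."""
    M = [[(-1) ** g(x, y) for y in range(ny)] for x in range(nx)]
    best = 0.0
    for S in product([0, 1], repeat=nx):        # indicator vector of S
        row = [sum(M[x][y] for x in range(nx) if S[x]) for y in range(ny)]
        for T in product([0, 1], repeat=ny):    # indicator vector of T
            val = abs(sum(row[y] for y in range(ny) if T[y]))
            best = max(best, val / (nx * ny))
    return best

# Inner product mod 2 on 3+3 bits; its discrepancy decays exponentially
# in the input length, comfortably inside the theorem's regime.
ip = lambda x, y: bin(x & y).count("1") % 2
```

    A function like equality, by contrast, contains large monochromatic rectangles and hence has high discrepancy, which is why the choice of inner function g matters for lifting.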

    Average-Case Hardness of NP and PH from Worst-Case Fine-Grained Assumptions

    What is a minimal worst-case complexity assumption that implies non-trivial average-case hardness of NP or PH? This question is well motivated by the theory of fine-grained average-case complexity and fine-grained cryptography. In this paper, we show that several standard worst-case complexity assumptions are sufficient to imply non-trivial average-case hardness of NP or PH:
    - NTIME[n] cannot be solved in quasi-linear time on average if UP ⊄ DTIME[2^{Õ(√n)}].
    - Σ₂TIME[n] cannot be solved in quasi-linear time on average if Σ_kSAT cannot be solved in time 2^{Õ(√n)} for some constant k. Previously, it was not known whether even the average-case hardness of Σ₃SAT implies the average-case hardness of Σ₂TIME[n].
    - Under the Exponential-Time Hypothesis (ETH), there is no average-case n^{1+ε}-time algorithm for NTIME[n] whose running time can be estimated in time n^{1+ε} for some constant ε > 0.
    Our results are given by generalizing the non-black-box worst-case-to-average-case connections presented by Hirahara (STOC 2021) to the setting of fine-grained complexity. To do so, we construct quite efficient complexity-theoretic pseudorandom generators under the assumption that nondeterministic linear time is easy on average, which may be of independent interest.