
    On Bernoulli Decompositions for Random Variables, Concentration Bounds, and Spectral Localization

    As was noted already by A. N. Kolmogorov, any random variable has a Bernoulli component. This observation provides a tool for extending results known for Bernoulli random variables to arbitrary distributions. Two applications are provided here: i. an anti-concentration bound for a class of functions of independent random variables, where probabilistic bounds are extracted from combinatorial results, and ii. a proof, based on the Bernoulli case, of spectral localization for random Schrödinger operators with arbitrary probability distributions for the single-site coupling constants. For a general random variable, the Bernoulli component may be defined so that its conditional variance is uniformly positive. The natural maximization problem is an optimal transport question, which is also addressed here.
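    To make the idea concrete, here is one elementary such decomposition (a sketch via the inverse CDF; the paper's construction is finer, arranging uniformly positive conditional variance):

```latex
% Let F be the CDF of X and F^{-1} its generalized inverse. Write
% U = (t + \varepsilon)/2 with t ~ Uniform(0,1) and \varepsilon ~ Bernoulli(1/2)
% independent, so that U ~ Uniform(0,1) and X =_d F^{-1}(U). Then
\[
X \;\overset{d}{=}\; Y(t) + \delta(t)\,\varepsilon ,
\qquad
Y(t) = F^{-1}\!\bigl(\tfrac{t}{2}\bigr),
\quad
\delta(t) = F^{-1}\!\bigl(\tfrac{t+1}{2}\bigr) - F^{-1}\!\bigl(\tfrac{t}{2}\bigr) \;\ge\; 0,
\]
% i.e., conditionally on t, X is a shifted and scaled Bernoulli variable.
```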

    An asymptotic formula for the maximum size of an h-family in products of partially ordered sets

    An $h$-family of a partially ordered set $P$ is a subset of $P$ such that no $h + 1$ elements of the $h$-family lie on any single chain. Let $S_1, S_2, \ldots$ be a sequence of partially ordered sets which are not antichains and have cardinality less than a given finite value. Let $P_n$ be the direct product of $S_1, \ldots, S_n$. An asymptotic formula for the maximum size of an $h$-family in $P_n$ is given, where $h = o(n)$ and $n \to \infty$.
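    As a classical point of reference (Erdős's theorem, the special case where each $S_i$ is a two-element chain, not the paper's general formula): in the Boolean lattice $\{0,1\}^n$, the largest $h$-family is the union of the $h$ middle levels.

```latex
% Erdos (1945): in {0,1}^n, the maximum size of an h-family equals
% the sum of the h largest binomial coefficients:
\[
\max_{\mathcal{F}\ \text{an } h\text{-family}} |\mathcal{F}|
 \;=\; \sum_{i=1}^{h} \binom{n}{\left\lfloor \frac{n-h}{2} \right\rfloor + i},
\]
% for h = 1 this is Sperner's theorem: \binom{n}{\lfloor n/2 \rfloor}.
```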

    A Mathematical Framework of Human Thought Process: Rectifying Software Construction Inefficiency and Identifying Characteristic Efficiencies of Networked Systems Via Problem-solution Cycle

    Problem: The lack of a theory explaining the human thought process latently affects the general perception of problem-solving activities. The present study set out to theorize the human thought process (HTP) in order to ascertain, in general, the effect of problem-solving inadequacy on efficiency.

    Method: To theorize HTP, basic human problem-solving activities were investigated through the problem-solution cycle (PSC). The PSC investigation focused on the inefficiency problem in software construction and the latent characteristic efficiencies of a similar networked system. To analyze these PSC activities, three mathematical quotients and a messaging wavefunction model, similar to Schrödinger's electronic wavefunction model, were derived for four intrinsic brain traits: intelligence, imagination, creativity and language. These were substantiated using appropriate empirical verifications. First, statistical analysis of the intelligence, imagination and creativity quotients was carried out on empirical data with global statistical views from:
    1. the 1994–2004 CHAOS reports (Standish Group International's surveys of software development project successes and failures);
    2. 2000–2009 Global Creativity Index (GCI) data, based on the 3Ts of economic development (technology, talent and tolerance indices) from 82 nations;
    3. other varied localized success surveys (1994–2009) and failure surveys (1998–2010).
    These statistical analyses were done using a spliced decision Sperner system (SDSS), showing that the averages of all empirical scientific data on software production successes and failures within the specified periods are in excellent agreement with the theoretically derived values. Further, the catalytic effect of creativity (thought catalysis) in the human thought process is outlined and shown to agree with the newly discovered branch-like nerve cells in the brains of mice (similar to the human brain). Second, the networked communication activities of the language trait during the PSC were scrutinized statistically using journal-journal citation data from 13 randomly selected major chemistry journals of 1984. With the aid of the aforementioned messaging wave formulation, computer simulations of message-phase "thermograms" and "chromatograms" were generated to provide messaging line spectra for the behavioral messaging activities of the network under study.

    Results: Theoretical computations stipulated that 66.67% of efficiency is due to interactions of the intelligence, imagination and creativity traits (multi-computational skills) and 33.33% is due to networked linkages of the language trait (aggregated language skills). The worldwide software production and economic data used were normally distributed with a significance level α of 0.005; accordingly, a permissible error of 1% is attributed to the significance level of the normally distributed data. Of the brain-trait quotient statistics, the imagination quotient (IMGQ) score was 52.53% from the 1994–2004 CHAOS data and 54.55% from the 2010 GCI data; their average reasonably approximated the 50th percentile of the cumulative distribution of problem-solving skills. The creativity quotient (CRTQ) score was 0.99% from the 1994–2004 CHAOS data and 1.17% from the 2010 GCI data, averaging to nearly 1%. The chance of creativity and intelligence working together as joint problem-solving skills was consistently found to average 11.32% (1994–2004 CHAOS: 10.95%; 2010 GCI: 11.68%). The empirical data analysis also showed that the language inefficiency of thought flow η′(τ) was 35.0977% for the 1994–2004 CHAOS data and 34.9482% for the 2010 GCI data, averaging around 35%. On the success and failure of software production, statistical analysis of the empirical data showed a 63.2% average efficiency for successful software production (1994–2012) and a 33.94% average inefficiency for failed software production (1998–2010). On the whole, software production projects had a bound efficiency approach level (BEAL) of 94.8%. In the messaging wave analysis of the 13 journal-journal citation datasets, the messaging phase-space graphs indicated a fundamental frequency (probable minimum message state) of 11.

    Conclusions: By comparison, using the cutoff level of printed editions of Journal Citation Reports to substitute for missing data values is inappropriate; values from optimizing methods, however, harmonized with the fundamental frequency inferred from message wave analysis using informatics wave equation analysis (IWEA). Owing to its evenly spaced chronological data snapshots, the SDSS technique inherently diminishes the difficulty of handling large data volumes (big data) for analysis. From the CHAOS and GCI data analyses, the averaged CRTQ scores indicate that, on average, only 1 percent of the entire human race can be considered exceptionally creative. In the art of software production, however, the siphoning effect of the existing latent language inefficiency suffocates the process of solution creation, bounding its efficiency at 66.67%. With a BEAL value of 94.8% and a basic human error of 5.2%, it can reasonably be said that software production projects have delivered efficiently within the existing latent inefficiency. Consequently, by inference from the average language inefficiency of thought flow, an average language efficiency of 65% exists in the process of software production worldwide. This correlates very strongly with the existing average software production efficiency of 63.2%, around which the software crisis has on average stagnated since the inception of software creation. The persistent dismal performance of software production is attributable to the existing central focus on the use of a multiplicity of programming languages. Acting as an "efficiency buffer", this multiplicity minimizes changes to efficiency in software production, limiting software production efficiency theoretically to 66.67%. From both theoretical and empirical perspectives, this latently shrouds software production in a deficit maximum attainable efficiency (DMAE). The software crisis can only be improved drastically through policy-driven adoption of a universal standard supporting a very minimal number of programming languages. On average, the proposed universal standardization could save the world an estimated 6 trillion US dollars per year now lost through the inefficient software industry.
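    As a quick arithmetic check on the averages quoted above (a minimal script that merely recomputes the figures the abstract itself reports; the variable names are mine):

```python
# Recompute the averaged figures reported in the abstract from its own numbers.
imgq = (52.53 + 54.55) / 2        # imagination quotient (CHAOS, GCI)
crtq = (0.99 + 1.17) / 2          # creativity quotient (CHAOS, GCI)
joint = (10.95 + 11.68) / 2       # creativity + intelligence jointly
lang = (35.0977 + 34.9482) / 2    # language inefficiency of thought flow
beal = 100.0 - 5.2                # BEAL implied by 5.2% basic human error

print(f"IMGQ average:          {imgq:.3f}%  (quoted: ~50th percentile)")
print(f"CRTQ average:          {crtq:.3f}%   (quoted: near 1%)")
print(f"Joint skills average:  {joint:.3f}%  (quoted: 11.32%)")
print(f"Language inefficiency: {lang:.4f}%  (quoted: ~35%)")
print(f"BEAL:                  {beal:.1f}%    (quoted: 94.8%)")
```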

    Acta Cybernetica : Tomus 8. Fasciculus 3.


    Conjecturally Superpolynomial Lower Bound for Share Size

    Information ratio, which measures the maximum/average share size per shared bit, is a criterion of the efficiency of a secret sharing scheme. It is generally believed that there exists a family of access structures such that the information ratio of any secret sharing scheme realizing it is $2^{\Omega(n)}$, where the parameter $n$ stands for the number of participants. The best known lower bound, due to Csirmaz (1994), is $\Omega(n/\log n)$. Closing this gap is a long-standing open problem in cryptology. In this paper, using a technique called \emph{substitution}, we recursively construct a family of access structures having information ratio $n^{\frac{\log n}{\log \log n}}$, assuming a well-stated information-theoretic conjecture is true. Our conjecture emerges after introducing the notion of \emph{convec set} for an access structure, a subset of $n$-dimensional real space. We prove some topological properties of convec sets and raise several open problems.
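    To fix the notion being bounded (standard secret-sharing definitions, not anything specific to this paper): for a scheme with secret $S$ and shares $S_1, \dots, S_n$ realizing an access structure $\Gamma$, the (max) information ratio is

```latex
\[
\sigma(\Gamma) \;=\; \inf_{\substack{\text{schemes}\\ \text{realizing } \Gamma}}\;
  \max_{1 \le i \le n} \frac{H(S_i)}{H(S)},
\]
% where H is Shannon entropy; replacing the max by an average over i
% gives the average information ratio. Ideal schemes such as Shamir's
% threshold scheme achieve sigma = 1.
```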

    A Candidate Access Structure for Super-polynomial Lower Bound on Information Ratio

    The contribution vector (convec) of a secret sharing scheme is the vector of all share sizes divided by the secret size. A measure on the convec (e.g., its maximum or average) is considered a criterion of efficiency of secret sharing schemes, referred to as the information ratio. It is generally believed that there exists a family of access structures such that the information ratio of any secret sharing scheme realizing it is $2^{\Omega(n)}$, where the parameter $n$ stands for the number of participants. The best known lower bound, due to Csirmaz (1994), is $\Omega(n/\log n)$. Closing this gap is a long-standing open problem in cryptology. Using a technique called \emph{substitution}, we recursively construct a family of access structures, starting from that of Csirmaz, which might be a candidate for super-polynomial information ratio. We provide support for this possibility by showing that our family has information ratio $n^{\Omega(\frac{\log n}{\log \log n})}$, assuming the truth of a well-stated information-theoretic conjecture, called the \emph{substitution conjecture}. The substitution method is a technique for composition of access structures, similar to the so-called block composition of Boolean functions, and the substitution conjecture is reminiscent of the Karchmer-Raz-Wigderson conjecture on the depth complexity of Boolean functions. It emerges after introducing the notion of convec set for an access structure, a subset of $n$-dimensional real space which includes all achievable convecs. We prove some topological properties of convec sets and raise several open problems.
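    For scale (a routine rewrite of the exponent, not a claim from the paper): the conditional bound is super-polynomial yet quasi-polynomial, so it remains far below the conjectured $2^{\Omega(n)}$ barrier.

```latex
\[
n^{\frac{\log n}{\log\log n}}
 \;=\; 2^{\frac{(\log n)^2}{\log\log n}}
 \;=\; 2^{o(n)},
\qquad
\frac{\log n}{\log\log n} \;\xrightarrow[n \to \infty]{}\; \infty,
\]
% so the exponent of n grows without bound (super-polynomial growth),
% while the total remains 2^{o(n)}, well short of 2^{\Omega(n)}.
```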

    Disordered Systems: Random Schrödinger Operators and Random Matrices

    [No abstract available.]

    A Vector Monotonicity Assumption for Multiple Instruments

    When a researcher wishes to use multiple instrumental variables for a single binary treatment, the familiar LATE monotonicity assumption can become restrictive: it requires that all units share a common direction of response even when different instruments are shifted in opposing directions. What I call vector monotonicity, by contrast, simply restricts treatment status to be monotonic in each instrument separately. This is a natural assumption in many contexts, capturing the intuitive notion of "no defiers" for each instrument. I show that in a setting with a binary treatment and multiple discrete instruments, a class of causal parameters is point identified under vector monotonicity, including the average treatment effect among units that are responsive to any particular subset of the instruments. I propose a simple "2SLS-like" estimator for the family of identified treatment effect parameters. An empirical application revisits the labor market returns to college education. (Comment: 56 pages, 6 figures)
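    As a rough illustration of the estimation step (a generic 2SLS sketch in numpy under assumed variable names, not the paper's estimator, which targets effects among instrument-responsive subgroups):

```python
import numpy as np

def tsls(y, d, z):
    """Plain two-stage least squares for a binary treatment d with a
    matrix of instruments z (hypothetical helper, NOT the paper's
    estimator). Returns the coefficient on d."""
    n = len(y)
    X = np.column_stack([np.ones(n), d])   # regressors: intercept + treatment
    Z = np.column_stack([np.ones(n), z])   # instruments: intercept + z's
    # First stage: fitted values of X from a regression on Z.
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # Second stage: regress the outcome on the fitted regressors.
    return np.linalg.lstsq(X_hat, y, rcond=None)[0][1]

# Simulated check: two binary instruments, a confounded binary treatment
# monotone in each instrument separately, true effect 2.0.
rng = np.random.default_rng(0)
n = 100_000
z = rng.integers(0, 2, size=(n, 2)).astype(float)
u = rng.normal(size=n)                                  # unobserved confounder
d = (0.4 * z[:, 0] + 0.4 * z[:, 1] + u > 0.4).astype(float)
y = 2.0 * d + u + rng.normal(size=n)
print(f"2SLS estimate of the treatment effect: {tsls(y, d, z):.3f}")  # ~2.0
```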