
    PORNOGRAPHY and TECHNOLOGY (Commodification and Restriction of Access to Pornographic Material)

    The politics of the body turn a person's body into an object of treatment. In a patriarchal culture, women become objects in the depiction of male sexual desire. For capitalists, a body that has undergone a process of modification with the aid of technology becomes a commodity that can be sold. Restrictions on access to and dissemination of pornographic material are a visible reality in the regulations of various states. The regulations that exist in Indonesia express not only the development of culture and morality but also the growth of the technology those regulations are meant to cover. In this era of information technology, restricting access to and dissemination of pornographic material cannot be fully effective if it relies on government regulation alone, which gives rise to the idea of government without governance: handing part of the task back to self-regulation by the owners, users, and entrepreneurs in the field of information technology.
    Keywords: pornography, technology, commodification, patriarchy, law

    A Polynomial Time Algorithm for Lossy Population Recovery

    We give a polynomial time algorithm for the lossy population recovery problem. In this problem, the goal is to approximately learn an unknown distribution on binary strings of length $n$ from lossy samples: for some parameter $\mu$, each coordinate of the sample is preserved with probability $\mu$ and otherwise is replaced by a '?'. The running time and number of samples needed for our algorithm are polynomial in $n$ and $1/\varepsilon$ for each fixed $\mu > 0$. This improves on the algorithm of Wigderson and Yehudayoff, which runs in quasi-polynomial time for any $\mu > 0$, and the polynomial time algorithm of Dvir et al., which was shown to work for $\mu \gtrapprox 0.30$ by Batman et al. In fact, our algorithm also works in the more general framework of Batman et al., in which there is no a priori bound on the size of the support of the distribution. The algorithm we analyze is implicit in previous work; our main contribution is to analyze the algorithm by showing (via linear programming duality and connections to complex analysis) that a certain matrix associated with the problem has a robust local inverse even though its condition number is exponentially small. A corollary of our result is the first polynomial time algorithm for learning DNFs in the restriction access model of Dvir et al.
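    As a minimal illustration of the lossy sampling channel described above, the Python sketch below (the function name and example values are illustrative, not from the paper) keeps each coordinate of a sample with probability $\mu$ and masks it with '?' otherwise.

        import random

        def lossy_sample(x: str, mu: float) -> str:
            """Pass a binary string through the lossy channel:
            each coordinate is kept with probability mu and
            otherwise replaced by '?'."""
            return "".join(c if random.random() < mu else "?" for c in x)

        # Example: one lossy observation of a sample from the unknown distribution.
        print(lossy_sample("10110", mu=0.3))  # e.g. '?0?1?'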

    Learning using Local Membership Queries

    We introduce a new model of membership query (MQ) learning, where the learning algorithm is restricted to query points that are \emph{close} to random examples drawn from the underlying distribution. The learning model is intermediate between the PAC model (Valiant, 1984) and the PAC+MQ model (where the queries are allowed to be arbitrary points). Membership query algorithms are not popular among machine learning practitioners. Apart from the obvious difficulty of adaptively querying labelers, it has also been observed that querying \emph{unnatural} points leads to increased noise from human labelers (Lang and Baum, 1992). This motivates our study of learning algorithms that make queries that are close to examples generated from the data distribution. We restrict our attention to functions defined on the $n$-dimensional Boolean hypercube and say that a membership query is local if its Hamming distance from some example in the (random) training data is at most $O(\log(n))$. We show the following results in this model: (i) The class of sparse polynomials (with coefficients in $\mathbb{R}$) over $\{0,1\}^n$ is polynomial time learnable under a large class of \emph{locally smooth} distributions using $O(\log(n))$-local queries. This class also includes the class of $O(\log(n))$-depth decision trees. (ii) The class of polynomial-sized decision trees is polynomial time learnable under product distributions using $O(\log(n))$-local queries. (iii) The class of polynomial size DNF formulas is learnable under the uniform distribution using $O(\log(n))$-local queries in time $n^{O(\log(\log(n)))}$. (iv) In addition we prove a number of results relating the proposed model to the traditional PAC model and the PAC+MQ model.
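    To make the locality condition concrete, here is a small Python sketch that tests whether a query lies within Hamming distance $c \log(n)$ of some training example. The identifiers and the constant $c$ are assumptions chosen for illustration; the paper only requires an $O(\log(n))$ bound.

        import math

        def hamming(x: str, y: str) -> int:
            """Hamming distance between two equal-length binary strings."""
            return sum(a != b for a, b in zip(x, y))

        def is_local_query(q: str, examples: list[str], c: float = 1.0) -> bool:
            """A query q is 'local' if it lies within c*log(n) of some
            training example (c is illustrative; the model asks O(log n))."""
            threshold = c * math.log(len(q))
            return any(hamming(q, x) <= threshold for x in examples)

        # Example: a query one bit-flip away from a training point is local.
        print(is_local_query("10110", ["10100", "01010"]))  # True: distance 1 <= log(5)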

    Noisy population recovery in polynomial time

    In the noisy population recovery problem of Dvir et al., the goal is to learn an unknown distribution $f$ on binary strings of length $n$ from noisy samples. For some parameter $\mu \in [0,1]$, a noisy sample is generated by flipping each coordinate of a sample from $f$ independently with probability $(1-\mu)/2$. We assume an upper bound $k$ on the size of the support of the distribution, and the goal is to estimate the probability of any string to within some given error $\varepsilon$. It is known that the algorithmic complexity and sample complexity of this problem are polynomially related to each other. We show that for $\mu > 0$, the sample complexity (and hence the algorithmic complexity) is bounded by a polynomial in $k$, $n$ and $1/\varepsilon$, improving upon the previous best result of $\mathsf{poly}(k^{\log\log k},n,1/\varepsilon)$ due to Lovett and Zhang. Our proof combines ideas from Lovett and Zhang with a \emph{noise attenuated} version of Möbius inversion. In turn, the latter crucially uses the construction of a \emph{robust local inverse} due to Moitra and Saks.
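    A minimal Python sketch of this noise channel (identifiers illustrative): each bit of a sample from $f$ is flipped independently with probability $(1-\mu)/2$, so $\mu = 1$ yields a noiseless sample and $\mu = 0$ a uniformly random one.

        import random

        def noisy_sample(x: str, mu: float) -> str:
            """Pass a binary string through the noise channel:
            each bit is flipped independently with probability (1 - mu) / 2."""
            p_flip = (1.0 - mu) / 2.0
            return "".join(
                ("1" if c == "0" else "0") if random.random() < p_flip else c
                for c in x
            )

        # Example: observe a sample from f at mu = 0.4 (flip probability 0.3 per bit).
        print(noisy_sample("10110", mu=0.4))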

    New Networks, Competition and Regulation

    We consider a model with two firms operating their individual networks. Each firm can choose its price as well as its investment to build up its network. Assuming a skewed distribution of consumers, our model leads to an asymmetric market structure with one firm choosing higher investments. While access regulation imposed on the dominant firm leads to lower prices, positive welfare effects are diminished by strategic investment decisions of the firms. Within a dynamic game with indirect network effects leading to potentially increased demand, regulation can substantially lower aggregate social welfare. Conditional access holidays can alleviate regulatory failure.
    Keywords: regulation, network effects, natural monopoly

    THE ROMANIAN MIGRATIONAL EVOLUTION PHENOMENON

    In our contemporary democratic society, the migration phenomenon takes on valences unknown in any previous society. Free will and the right to self-determination, much exploited by twentieth-century society, raised the possibility of new interpretations of migration.
    Keywords: migration, unemployment, effects of migration, labor market

    Efficient Average-Case Population Recovery in the Presence of Insertions and Deletions

    A number of recent works have considered the trace reconstruction problem, in which an unknown source string $x \in \{0,1\}^n$ is transmitted through a probabilistic channel which may randomly delete coordinates or insert random bits, resulting in a trace of $x$. The goal is to reconstruct the original string $x$ from independent traces of $x$. While the asymptotically best algorithms known for worst-case strings use $\exp(O(n^{1/3}))$ traces [De et al., 2017; Fedor Nazarov and Yuval Peres, 2017], several highly efficient algorithms are known [Yuval Peres and Alex Zhai, 2017; Nina Holden et al., 2018] for the average-case version of the problem, in which the source string $x$ is chosen uniformly at random from $\{0,1\}^n$. In this paper we consider a generalization of the above-described average-case trace reconstruction problem, which we call average-case population recovery in the presence of insertions and deletions. In this problem, rather than a single unknown source string there is an unknown distribution over $s$ unknown source strings $x^1,\ldots,x^s \in \{0,1\}^n$, and each sample given to the algorithm is independently generated by drawing some $x^i$ from this distribution and returning an independent trace of $x^i$. Building on the results of [Yuval Peres and Alex Zhai, 2017] and [Nina Holden et al., 2018], we give an efficient algorithm for the average-case population recovery problem in the presence of insertions and deletions. For any support size $1 \le s \le \exp(\Theta(n^{1/3}))$, for a $1-o(1)$ fraction of all $s$-element support sets $\{x^1,\ldots,x^s\} \subset \{0,1\}^n$, for every distribution $D$ supported on $\{x^1,\ldots,x^s\}$, our algorithm can efficiently recover $D$ up to total variation distance at most $\epsilon$ with high probability, given access to independent traces of independent draws from $D$. The running time of our algorithm is $\mathrm{poly}(n,s,1/\epsilon)$ and its sample complexity is $\mathrm{poly}(s,1/\epsilon,\exp(\log^{1/3} n))$. This polynomial dependence on the support size $s$ is in sharp contrast with the worst-case version of the problem (when $x^1,\ldots,x^s$ may be any strings in $\{0,1\}^n$), in which the sample complexity of the most efficient known algorithm [Frank Ban et al., 2019] is doubly exponential in $s$.
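    For intuition, here is a Python sketch of one common insertion-deletion channel. The parameterization (independent per-bit deletion, geometric bursts of uniformly random insertions) is an assumption chosen for illustration; the cited papers define related but not necessarily identical channels.

        import random

        def trace(x: str, p_delete: float = 0.1, p_insert: float = 0.1) -> str:
            """Generate one trace of x: before each source bit, insert
            uniformly random bits with probability p_insert each; then
            transmit the source bit unless it is deleted with p_delete."""
            out = []
            for c in x:
                while random.random() < p_insert:   # geometric number of insertions
                    out.append(random.choice("01"))
                if random.random() >= p_delete:     # keep the source bit
                    out.append(c)
            return "".join(out)

        # Example: two independent traces of the same source string.
        x = "1011001110"
        print(trace(x), trace(x))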

    The economics of Information Technologies Standards &

    This research investigates the problem of Information Technologies Standards or Recommendations from an economic point of view. In our competitive economy, most enterprises have adopted standardization processes, following the recommendations of specialized organisations such as ISO (International Organisation for Standardization), W3C (World Wide Web Consortium) and ISOC (Internet Society), in order to reassure their customers. But with the development of new and open internet standards, enterprises from the same sectors have decided to develop their own IT standards for their activities. We therefore hypothesize that the development of a professional IT standard requires a network of enterprises, but also financial support, a particular organizational form, and a precise activity to describe. In order to test this hypothesis and understand how professionals organise themselves to develop and finance IT standards, we take financial IT standards as an example. After a short and general presentation of IT standards for the financial market, based on XML technologies, we describe how professional IT standards can be created (nearly ten professional norms or recommendations appeared at the beginning of this century). We examine why these standards are developed outside the classical circles of standardisation organisations, and what the “key factors of success” might be for the best IT standards in finance. We use a descriptive and analytical method to evaluate the financial support and to understand these actors’ strategies and the various economic models behind them. We then explain why and how these standards have emerged and been developed. We conclude this paper with a prospective view on the future development of standards and recommendations.
    Keywords: information technologies, financial standards, development of standards, evaluation of the economic costs of standards