Centrality of Trees for Capacitated k-Center
There is a large discrepancy in our understanding of uncapacitated and
capacitated versions of network location problems. This is perhaps best
illustrated by the classical k-center problem: there is a simple tight
2-approximation algorithm for the uncapacitated version, whereas the first
constant-factor approximation algorithm for the general version with capacities
was only recently obtained by using an intricate rounding algorithm that
achieves an approximation guarantee in the hundreds.
Our paper aims to bridge this discrepancy. For the capacitated k-center
problem, we give a simple algorithm with a clean analysis that allows us to
prove an approximation guarantee of 9. It uses the standard LP relaxation and
comes close to settling the integrality gap (after necessary preprocessing),
which is narrowed down to either 7, 8, or 9. The algorithm proceeds by first
reducing to special tree instances and then solving such instances optimally.
Our concept of tree instances is quite versatile, and applies to natural
variants of the capacitated k-center problem for which we also obtain improved
algorithms. Finally, we give evidence to show that more powerful preprocessing
could lead to better algorithms, by giving an approximation algorithm that
beats the integrality gap for instances where all non-zero capacities are
uniform.
Comment: 21 pages, 2 figures
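For context, the standard LP relaxation referred to above is commonly written as follows (a hedged sketch of the textbook formulation, not necessarily the exact LP used in the paper): after guessing the optimal radius and forming the graph that joins points within that radius, let N[j] denote the closed neighborhood of j and L_i the capacity of i; opening variables y_i and assignment variables x_{ij} then satisfy

```latex
\begin{align*}
&\sum_{i \in V} y_i \le k, \\
&\sum_{i \in N[j]} x_{ij} = 1 && \forall j \in V, \\
&\sum_{j \in N[i]} x_{ij} \le L_i \, y_i && \forall i \in V, \\
&0 \le x_{ij} \le y_i \le 1 && \forall i \in V,\ j \in N[i].
\end{align*}
```

The integrality-gap bounds mentioned in the abstract (between 7 and 9) are with respect to this LP after the necessary preprocessing.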
Provable Bounds for Learning Some Deep Representations
We give algorithms with provable guarantees that learn a class of deep nets
in the generative model view popularized by Hinton and others. Our generative
model is an n-node multilayer neural net that has degree at most n^{\gamma} for
some \gamma < 1, and each edge has a random edge weight in [-1,1]. Our
algorithm learns almost all networks in this class with polynomial
running time. The sample complexity is quadratic or cubic depending upon the
details of the model.
The algorithm uses layerwise learning. It is based upon a novel idea of
observing correlations among features and using these to infer the underlying
edge structure via a global graph recovery procedure. The analysis of the
algorithm reveals interesting structure of neural networks with random edge
weights.
Comment: The first 18 pages serve as an extended abstract, and a 36-page technical appendix follows.
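To make the correlation idea concrete, here is a toy sketch (our illustration, not the paper's algorithm; all sizes and thresholds are arbitrary demo choices) of how correlations among observed features flag pairs that share a hidden parent, the raw material for a graph-recovery step:

```python
import numpy as np

# Toy illustration of the layerwise idea: in a sparse random net, observed
# units that share a hidden parent have noticeably correlated activations,
# so thresholding pairwise correlations yields candidate common-parent pairs.

rng = np.random.default_rng(0)
n_hidden, n_obs, n_samples = 20, 100, 5000

# Random sparse bipartite layer with +/-1 weights on present edges.
mask = rng.random((n_hidden, n_obs)) < 0.05
W = np.where(mask, rng.choice([-1.0, 1.0], size=mask.shape), 0.0)

h = (rng.random((n_samples, n_hidden)) < 0.2).astype(float)  # sparse hidden units
x = (h @ W > 0).astype(float)                                # observed layer

corr = np.nan_to_num(np.corrcoef(x, rowvar=False))  # feature-feature correlations
np.fill_diagonal(corr, 0.0)
guess = np.abs(corr) > 0.3                          # candidate common-parent pairs

truth = (mask.astype(int).T @ mask.astype(int)) > 0  # pairs truly sharing a parent
np.fill_diagonal(truth, False)
precision = (guess & truth).sum() / max(guess.sum(), 1)
print(f"precision of the correlation test: {precision:.2f}")
```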
Non-Negative Sparse Regression and Column Subset Selection with L1 Error
We consider the problems of sparse regression and column subset selection under L1 error. For both problems, we show that in the non-negative setting it is possible to obtain tight and efficient approximations, without any additional structural assumptions (such as restricted isometry, incoherence, expansion, etc.). For sparse regression, given a matrix A and a vector b with non-negative entries, we give an efficient algorithm to output a vector x of sparsity O(k), for which |Ax - b|_1 is comparable to the smallest error possible using a non-negative k-sparse x. We then use this technique to obtain our main result: an efficient algorithm for column subset selection under L1 error for non-negative matrices.
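As a point of reference for the problem setup, the following is a naive greedy baseline (our sketch; it carries none of the paper's guarantees) that builds an O(k)-sparse non-negative x by repeatedly adding the column whose best non-negative scaling most reduces |Ax - b|_1; the one-dimensional subproblem has an exact weighted-median solution:

```python
import numpy as np

# min_{c >= 0} ||res - c*a||_1 restricted to coordinates with a_i > 0 equals
# sum_i a_i |res_i/a_i - c| plus a constant, so the optimum is a weighted
# median of the ratios res_i/a_i with weights a_i, clipped at zero.
def best_scale_l1(res, a):
    idx = a > 0
    if not idx.any():
        return 0.0
    ratios, weights = res[idx] / a[idx], a[idx]
    order = np.argsort(ratios)
    ratios, weights = ratios[order], weights[order]
    cum = np.cumsum(weights)
    return max(ratios[np.searchsorted(cum, cum[-1] / 2)], 0.0)

def greedy_l1(A, b, k):
    m, n = A.shape
    x, res = np.zeros(n), b.copy()
    for _ in range(k):  # forward selection: best column per step
        best = min(range(n),
                   key=lambda j: np.abs(res - best_scale_l1(res, A[:, j]) * A[:, j]).sum())
        c = best_scale_l1(res, A[:, best])
        x[best] += c
        res = res - c * A[:, best]
    return x

rng = np.random.default_rng(1)
A = rng.random((50, 30))                                    # non-negative entries
b = A[:, :3] @ np.array([1.0, 2.0, 0.5]) + 0.05 * rng.random(50)
x = greedy_l1(A, b, k=6)
print("sparsity:", int((x > 0).sum()), " L1 error:", np.abs(A @ x - b).sum())
```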
VISUAL COMMUNICATION DESIGN FOR AN ENVIRONMENTAL CARE EDUCATION CAMPAIGN ON HANDLING PLASTIC WASTE IN BALI
ABSTRACT
DESIGN OF VISUAL COMMUNICATION MEDIA AS A MEANS OF AN ENVIRONMENTAL CARE EDUCATION CAMPAIGN ON HANDLING PLASTIC WASTE IN BALI
Efforts to make Bali a green and clean province still face many challenges given the island's current condition, particularly that of its environment. As is widely known, its most prominent environmental problem is waste: daily waste production is very high, while waste-recycling activity is far from optimal. A culture of cleanliness and of disposing of waste in its proper place is not yet evident in our society, let alone the habit of, and awareness needed for, separating organic from inorganic waste. In a campaign, each visual communication medium has its own role and function, and this holds for a campaign strategy aimed at handling plastic waste, which is a product of everyday public consumption. Planning is therefore needed, both conceptual and visual, that fits the conventions of an educational campaign. This design aims to produce visual communication media that are effective, communicative, and consistent with design criteria, in support of the campaign activities of the Badan Lingkungan Hidup of Bali Province.
Following the research method, data obtained through observation, interviews, literature study, and documentation at the Badan Lingkungan Hidup of Bali Province were matched against the campaign strategy. The theory used in this study is mind mapping, which is then processed through qualitative descriptive analysis and synthesis to obtain the basic design concept.
"Greeneration" is the basic concept found relevant for the visual communication design process of the Environmental Care Education Campaign on Handling Plastic Waste in Bali. The concept is in line with the Bali governor's Clean & Green declaration, and the theme adopted for this campaign is "Diet Kantong Plastik" (Plastic Bag Diet). In the design process, the appropriate media were determined to be posters, flyers, T-shirts, canvas bags, billboards, roll-up banners, mobile advertising, banners, pennants, and a catalogue.
Keywords: plastic waste, visual communication media, Greeneration.
ABSTRACT
DESIGN OF VISUAL COMMUNICATION MEDIA AS A MEANS OF AN ENVIRONMENTAL CARE EDUCATION CAMPAIGN ON HANDLING PLASTIC WASTE IN BALI.
Bali's efforts to become a clean and green province still face many challenges given the island's current condition, especially its environment. As is widely known, the most prominent environmental problem is waste: daily waste production is very high, while waste-recycling activity is not yet optimal. A culture of cleanliness and of disposing of waste in its proper place is not yet visible in our society, let alone the habit of and awareness for separating waste types, such as organic from inorganic. In a campaign effort, every visual communication medium has a different role and function, as does a campaign strategy for managing plastic waste, which is a product of public consumption. Planning is therefore needed, both conceptual and visual, that fits the conventions of an educational campaign. The design aims to obtain visual communication media that are effective, communicative, and appropriate to design criteria, to complement the campaign activities of the provincial Badan Lingkungan Hidup.
Data obtained through observation, interviews, literature study, and documentation at the Badan Lingkungan Hidup of Bali Province were matched against the campaign strategy. The theory used in this study is mind mapping, which is then processed through qualitative descriptive analysis and synthesis to obtain the basic design concept.
"Greeneration" is the basic concept found relevant in the visual communication design process for the Environmental Care Education Campaign on Handling Plastic Waste in Bali. The concept is in line with the Bali governor's declaration of the Clean and Green concept, and the theme taken up in this campaign is "Diet Kantong Plastik" (Plastic Bag Diet). In the design process, the proper and appropriate media were determined to be posters, flyers, T-shirts, canvas bags, billboards, roll-up banners, mobile advertising, banners, pennants, and a catalogue.
Keywords: plastic waste, visual communication media, Greeneration.
On Quadratic Programming with a Ratio Objective
Quadratic Programming (QP) is the well-studied problem of maximizing over
{-1,1} values the quadratic form \sum_{i \ne j} a_{ij} x_i x_j. QP captures
many known combinatorial optimization problems, and assuming the unique games
conjecture, semidefinite programming techniques give optimal approximation
algorithms. We extend this body of work by initiating the study of Quadratic
Programming problems where the variables take values in the domain {-1,0,1}.
The specific problems we study are
QP-Ratio: \max_{x \in \{-1,0,1\}^n} \frac{\sum_{i \ne j} a_{ij} x_i x_j}{\sum_i x_i^2}, and
Normalized QP-Ratio: \max_{x \in \{-1,0,1\}^n} \frac{\sum_{i \ne j} a_{ij} x_i x_j}{\sum_i d_i x_i^2}, where d_i = \sum_j |a_{ij}|.
We consider an SDP relaxation obtained by adding constraints to the natural
eigenvalue (or SDP) relaxation for this problem. Using this, we obtain an
\tilde{O}(n^{1/3}) approximation algorithm for QP-Ratio. We also obtain an
\tilde{O}(n^{1/4}) approximation for bipartite graphs, and better algorithms
for special cases. As with other problems with ratio objectives (e.g. uniform
sparsest cut), it seems difficult to obtain inapproximability results based on
P!=NP. We give two results that indicate that QP-Ratio is hard to approximate
to within any constant factor. We also give a natural distribution on instances
of QP-Ratio for which an n^\epsilon approximation (for \epsilon roughly 1/10)
seems out of reach of current techniques.
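For intuition about the objective, here is a brute-force check on tiny instances (our illustration, not a method from the paper), comparing the optimal QP-Ratio value with the eigenvalue bound \lambda_{max}(A), which upper-bounds the ratio when A is symmetric with zero diagonal:

```python
import itertools
import numpy as np

# Enumerate x in {-1,0,1}^n, skip x = 0, and maximize the ratio. With
# zero diagonal, sum_{i != j} a_ij x_i x_j = x^T A x <= lambda_max ||x||^2,
# so the top eigenvalue is a valid (eigenvalue-relaxation) upper bound.

rng = np.random.default_rng(2)
n = 8
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)

best = -np.inf
for x in itertools.product([-1, 0, 1], repeat=n):
    x = np.array(x, dtype=float)
    norm2 = x @ x
    if norm2 == 0:
        continue
    best = max(best, (x @ A @ x) / norm2)

print(f"opt QP-Ratio = {best:.3f}, eigenvalue bound = {np.linalg.eigvalsh(A)[-1]:.3f}")
```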
Smoothed Analysis in Unsupervised Learning via Decoupling
Smoothed analysis is a powerful paradigm in overcoming worst-case
intractability in unsupervised learning and high-dimensional data analysis.
While polynomial time smoothed analysis guarantees have been obtained for
worst-case intractable problems like tensor decompositions and learning
mixtures of Gaussians, such guarantees have been hard to obtain for several
other important problems in unsupervised learning. A core technical challenge
in analyzing algorithms is obtaining lower bounds on the least singular value
for random matrix ensembles with dependent entries that are given by
low-degree polynomials of a few base underlying random variables.
In this work, we address this challenge by obtaining high-confidence lower
bounds on the least singular value of new classes of structured random matrix
ensembles of the above kind. We then use these bounds to design algorithms with
polynomial time smoothed analysis guarantees for the following three important
problems in unsupervised learning:
1. Robust subspace recovery, when the fraction \alpha of inliers in the
d-dimensional subspace T \subset \mathbb{R}^n is at least \alpha > (d/n)^\ell
for any constant integer \ell > 0. This contrasts with the known
worst-case intractability when \alpha < d/n, and the previous smoothed
analysis result, which needed \alpha > d/n (Hardt and Moitra, 2013).
2. Learning overcomplete hidden markov models, where the size of the state
space is any polynomial in the dimension of the observations. This gives the
first polynomial time guarantees for learning overcomplete HMMs in a smoothed
analysis model.
3. Higher order tensor decompositions, where we generalize the so-called
FOOBI algorithm of Cardoso to find order-\ell rank-one tensors in a subspace.
This allows us to obtain polynomially robust decomposition algorithms for
2\ell'th order tensors with rank O(n^{\ell}).
Comment: 44 pages
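A small experiment in the spirit of the least-singular-value question (our illustration; the matrix family, sizes, and perturbation scale are arbitrary demo choices): the columns below are degree-2 polynomials of a few base vectors, nearly singular for an adversarial base, and a small random perturbation of the base lifts the least singular value:

```python
import numpy as np

# Columns are flattened outer products u_i u_i^T, i.e. degree-2 polynomials
# of the base vectors u_i; entries are therefore dependent. An adversarial
# base (nearly repeated directions) makes the matrix almost singular, while
# smoothing the base with O(rho) Gaussian noise lifts sigma_min.

rng = np.random.default_rng(3)
n, r, rho = 10, 15, 0.1  # overcomplete: r > n, columns live in R^{n^2}

def khatri_rao_squares(U):
    # Column i is the flattened rank-one matrix u_i u_i^T.
    return np.stack([np.outer(u, u).ravel() for u in U.T], axis=1)

# Adversarial-ish base: nearly identical directions, poorly conditioned.
base = rng.standard_normal((n, 1))
U = base + 1e-6 * rng.standard_normal((n, r))

M_worst = khatri_rao_squares(U)
M_smooth = khatri_rao_squares(U + rho * rng.standard_normal((n, r)))

print("sigma_min before smoothing:", np.linalg.svd(M_worst, compute_uv=False)[-1])
print("sigma_min after  smoothing:", np.linalg.svd(M_smooth, compute_uv=False)[-1])
```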
- ā¦