168 research outputs found

    Analysis of Roller Unit Assembly of Calcination Drum

    Solid waste management is becoming a serious problem as the population grows. Municipal solid waste must be disposed of properly; otherwise it leads to air pollution and serious effects on human health. The most common practice adopted is generating methane from municipal solid waste. For this purpose, the waste sent for further processing must form a homogeneous mass, which requires first separating it into degradable and non-degradable fractions; the degradable waste is then reduced to fine particles. To achieve this task, engineers have designed a special instrument named the Calcination Drum, also known as a rotary kiln. When the Calcination Drum was first implemented, several problems were encountered. The main aim of this project is to solve the problems related to the guide ring and the support roller shaft of the Calcination Drum. The shaft is redesigned by changing its material and considering all forces acting on it; a modified fabrication process is also suggested for the guide ring. In this paper, analysis of the roller shaft, roller, and Calcination Drum is carried out, and the stress and deformation are determined under static and dynamic conditions. For the theoretical calculations of the shaft, standard data are taken from a design data book, and for the guide ring, Hertz contact stress theory is used.
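    The guide ring calculation mentioned above rests on Hertz contact stress theory. As a point of reference (not the paper's own derivation), the standard Hertzian result for two cylinders in line contact, such as a roller pressed against a ring, gives the maximum contact pressure as

```latex
p_{\max} = \sqrt{\frac{F\,E^{*}}{\pi\, L\, R^{*}}}, \qquad
\frac{1}{E^{*}} = \frac{1-\nu_1^{2}}{E_1} + \frac{1-\nu_2^{2}}{E_2}, \qquad
\frac{1}{R^{*}} = \frac{1}{R_1} + \frac{1}{R_2},
```

    where $F$ is the normal load, $L$ the contact length, $E_i$ and $\nu_i$ the elastic moduli and Poisson ratios, and $R_i$ the radii of the two bodies (a concave contacting surface contributes a negative $1/R_i$ term).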

    Finding Connected Dense $k$-Subgraphs

    Given a connected graph $G$ on $n$ vertices and a positive integer $k \le n$, a subgraph of $G$ on $k$ vertices is called a $k$-subgraph in $G$. We design combinatorial approximation algorithms for finding a connected $k$-subgraph in $G$ such that its density is at least a factor $\Omega(\max\{n^{-2/5},k^2/n^2\})$ of the density of the densest $k$-subgraph in $G$ (which is not necessarily connected). These particularly provide the first non-trivial approximations for the densest connected $k$-subgraph problem on general graphs.
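    To make the objective concrete, here is a small, hypothetical Python sketch that finds a densest connected $k$-subgraph by exhaustive search, taking density to be $|E(S)|/|S|$. This brute force is exponential in $n$ and only illustrates the problem the approximation algorithms above address, not their method.

```python
from itertools import combinations

def is_connected(vertices, edges):
    """BFS/DFS connectivity check on the induced subgraph."""
    vs = set(vertices)
    adj = {v: [] for v in vs}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(vs))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == vs

def densest_connected_k_subgraph(n, edges, k):
    """Exhaustive search: return the connected k-subgraph maximizing
    density |E(S)|/|S| (exponential time; for illustration only)."""
    best, best_density = None, -1.0
    for S in combinations(range(n), k):
        sub = [(u, v) for u, v in edges if u in S and v in S]
        if not sub or not is_connected(S, sub):
            continue
        d = len(sub) / k
        if d > best_density:
            best, best_density = S, d
    return best, best_density
```

    On a triangle with a pendant path, the triangle is the densest connected 3-subgraph, with density 1.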

    Approximation Algorithms for Connected Maximum Cut and Related Problems

    An instance of the Connected Maximum Cut problem consists of an undirected graph $G = (V, E)$, and the goal is to find a subset of vertices $S \subseteq V$ that maximizes the number of edges in the cut $\delta(S)$ such that the induced graph $G[S]$ is connected. We present the first non-trivial $\Omega(1/\log n)$-approximation algorithm for the connected maximum cut problem in general graphs using novel techniques. We then extend our algorithm to the edge-weighted case and obtain a poly-logarithmic approximation algorithm. Interestingly, in stark contrast to the classical max-cut problem, we show that the connected maximum cut problem remains NP-hard even on unweighted, planar graphs. On the positive side, we obtain a polynomial-time approximation scheme for the connected maximum cut problem on planar graphs and, more generally, on graphs with bounded genus. Comment: 17 pages, conference version to appear in ESA 201
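    For intuition, the objective can be stated in a few lines of hypothetical Python: enumerate vertex subsets, keep those inducing a connected subgraph, and maximize the cut size. This exponential-time sketch only illustrates the problem; it is unrelated to the $\Omega(1/\log n)$ algorithm of the paper.

```python
from itertools import combinations

def connected_max_cut(n, edges):
    """Exhaustive search for Connected Maximum Cut: over all non-empty
    S with G[S] connected, maximize |delta(S)| (illustration only)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def connected(S):
        S = set(S)
        start = next(iter(S))
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for w in adj[u] & S:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen == S

    best, best_cut = None, -1
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            if not connected(S):
                continue
            cut = sum(1 for u, v in edges if (u in S) != (v in S))
            if cut > best_cut:
                best, best_cut = set(S), cut
    return best, best_cut
```

    On a star, taking $S$ to be the center alone cuts every edge, and $G[S]$ is trivially connected.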

    Near-optimal asymmetric binary matrix partitions

    We study the asymmetric binary matrix partition problem that was recently introduced by Alon et al. (WINE 2013) to model the impact of asymmetric information on the revenue of the seller in take-it-or-leave-it sales. Instances of the problem consist of an $n \times m$ binary matrix $A$ and a probability distribution over its columns. A partition scheme $B=(B_1,\ldots,B_n)$ consists of a partition $B_i$ for each row $i$ of $A$. The partition $B_i$ acts as a smoothing operator on row $i$ that distributes the expected value of each partition subset proportionally to all its entries. Given a scheme $B$ that induces a smooth matrix $A^B$, the partition value is the expected maximum column entry of $A^B$. The objective is to find a partition scheme such that the resulting partition value is maximized. We present a $9/10$-approximation algorithm for the case where the probability distribution is uniform and a $(1-1/e)$-approximation algorithm for non-uniform distributions, significantly improving results of Alon et al. Although our first algorithm is combinatorial (and very simple), the analysis is based on linear programming and duality arguments. In our second result we exploit a nice relation of the problem to submodular welfare maximization. Comment: 17 pages
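    The smoothing operator and partition value defined above can be computed directly. In the following hypothetical Python sketch, a scheme is, for each row, a list of column-index subsets; each entry is replaced by the probability-weighted average over its subset (its conditional expectation), and the partition value is the expected maximum column entry of the smoothed matrix.

```python
def smooth_matrix(A, p, scheme):
    """Apply partition scheme B: for each row i, every column j in a
    subset S of B_i is replaced by the probability-weighted average of
    row i's entries over S (the conditional expectation on S)."""
    n, m = len(A), len(A[0])
    AB = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for S in scheme[i]:
            mass = sum(p[j] for j in S)
            val = sum(p[j] * A[i][j] for j in S) / mass
            for j in S:
                AB[i][j] = val
    return AB

def partition_value(A, p, scheme):
    """Expected maximum column entry of the smoothed matrix A^B."""
    AB = smooth_matrix(A, p, scheme)
    n, m = len(A), len(A[0])
    return sum(p[j] * max(AB[i][j] for i in range(n)) for j in range(m))
```

    For the identity matrix under the uniform distribution, keeping all columns as singletons leaves the matrix unchanged and yields partition value 1, while merging the two columns of row 0 dilutes its entry to 0.5.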

    Approximating k-Forest with Resource Augmentation: A Primal-Dual Approach

    In this paper, we study the $k$-forest problem in the model of resource augmentation. In the $k$-forest problem, given an edge-weighted graph $G(V,E)$, a parameter $k$, and a set of $m$ demand pairs $\subseteq V \times V$, the objective is to construct a minimum-cost subgraph that connects at least $k$ demands. The problem is hard to approximate: the best-known approximation ratio is $O(\min\{\sqrt{n}, \sqrt{k}\})$. Furthermore, $k$-forest is as hard to approximate as the notoriously hard densest $k$-subgraph problem. While the $k$-forest problem is hard to approximate in the worst case, we show that with the use of resource augmentation, we can efficiently approximate it up to a constant factor. First, we restate the problem in terms of the number of demands that are {\em not} connected. In particular, the objective of the $k$-forest problem can be viewed as removing at most $m-k$ demands and finding a minimum-cost subgraph that connects the remaining demands. We use this perspective of the problem to explain the performance of our algorithm (in terms of the augmentation) in a more intuitive way. Specifically, we present a polynomial-time algorithm for the $k$-forest problem that, for every $\epsilon>0$, removes at most $m-k$ demands and has cost no more than $O(1/\epsilon^{2})$ times the cost of an optimal algorithm that removes at most $(1-\epsilon)(m-k)$ demands.
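    The reformulated objective, removing at most $m-k$ demands, suggests checking how many demand pairs a candidate subgraph fails to connect. A minimal union-find sketch (illustrative, not the paper's primal-dual algorithm):

```python
def unconnected_demands(n, subgraph_edges, demands):
    """Count demand pairs not connected by the chosen subgraph, via
    union-find with path halving; the k-forest objective requires that
    at most m - k demands remain unconnected."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in subgraph_edges:
        parent[find(u)] = find(v)
    return sum(1 for s, t in demands if find(s) != find(t))
```

    For example, a subgraph containing only edge (0, 1) connects the demand (0, 1) but leaves demands (2, 3) and (0, 2) unconnected.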

    Fast Distributed Approximation for Max-Cut

    Finding a maximum cut is a fundamental task in many computational settings. Surprisingly, it has been insufficiently studied in the classic distributed settings, where vertices communicate by synchronously sending messages to their neighbors according to the underlying graph, known as the $\mathcal{LOCAL}$ or $\mathcal{CONGEST}$ models. We amend this by obtaining almost optimal algorithms for Max-Cut on a wide class of graphs in these models. In particular, for any $\epsilon > 0$, we develop randomized approximation algorithms achieving a ratio of $(1-\epsilon)$ to the optimum for Max-Cut on bipartite graphs in the $\mathcal{CONGEST}$ model, and on general graphs in the $\mathcal{LOCAL}$ model. We further present efficient deterministic algorithms, including a $1/3$-approximation for Max-Dicut in our models, thus improving the best known (randomized) ratio of $1/4$. Our algorithms make non-trivial use of the greedy approach of Buchbinder et al. (SIAM Journal on Computing, 2015) for maximizing an unconstrained (non-monotone) submodular function, which may be of independent interest.
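    For contrast with the distributed algorithms above, the classical centralized baseline for Max-Cut assigns each vertex a side uniformly at random, so each edge is cut with probability $1/2$ and the expected cut is at least half the optimum. A hypothetical Python sketch of this textbook baseline (not the paper's algorithm):

```python
import random

def random_cut(n, edges, seed=0):
    """Classical randomized 1/2-approximation baseline for Max-Cut:
    each vertex picks a side uniformly at random, so each edge is cut
    with probability 1/2 in expectation."""
    rng = random.Random(seed)
    side = [rng.randrange(2) for _ in range(n)]
    cut = sum(1 for u, v in edges if side[u] != side[v])
    return side, cut
```

    On a 4-cycle (optimum cut 4), averaging over many seeds should give roughly 2 cut edges, matching the expectation of half the edges.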

    SPATIOTEMPORAL CHANGES IN VELOCITY OF MELLOR GLACIER, EAST ANTARCTICA USING LANDSAT-8 DATA

    Glaciers all over the world are experiencing changes at varying rates due to changing climatic conditions. Even minuscule changes in the glaciers of Antarctica can thus have major implications. The velocity of glaciers is important in several aspects of glaciology. A glacier's movement is caused by different factors such as gravity, internal deformation of the ice, pressure caused by accumulation of snow, basal sliding, etc. The velocity of a glacier is an important factor governing mass balance and the stability of the glacier. A glacier which moves fast generally brings more ice towards the terminus than a slow-moving glacier. Thus, glacier velocity can determine its load-carrying capacity and gives an indication of the 'health' of the glacier. Measurement of the ice flow velocity can help model glacier dynamics and thus provide increasing insight into different glacier subtleties. However, field measurements of velocity are limited in the spatial and temporal domains because these operations are manual, tedious, and logistically expensive. Remote sensing is a tool to monitor and generate such data without the need for physical expeditions. This study uses optical satellite imagery to understand the mechanisms involved in the movement of a glacier. The optical image correlation method (the COSI-Corr module) is chosen here as a promising method to derive the displacement of a moving glacier. The principle of this technique is that two images acquired at different times are correlated to find the shift in the position of moving ice, which is then treated as the displacement over the time interval. Employing this technique, we estimated the velocity of Mellor glacier (73°30′S, 66°30′E), a tributary glacier of the Amery Ice Shelf, Antarctica, over a span of four years from 2014 to 2017. Correlation is performed using Landsat-8 panchromatic images of 15 m resolution. 
Optical images from Landsat-8 often contain noise due to atmospheric conditions such as cloud cover, so we used only those images with cloud cover less than 10%. The glacier is covered by Landsat-8 in path 128, row 112. The correlation was calculated using the frequency correlator engine. The window size taken here is 256 and the step size is 64 for both the x and y dimensions. Once the correlation is calculated for an image pair over a specific time period, we obtain three different outputs: two of them indicate displacement (one in the x direction and one in the y direction) and the remaining output provides the signal-to-noise ratio. Velocity calculations were performed on the displacement outputs using the band math tool in ENVI software. This gives a raster image showing the velocity at each point or pixel. Some errors such as noise persist, and their correction is performed in ArcGIS software. In order to retain only pure signals, we removed all signals with a signal-to-noise ratio less than 0.9, using the raster calculator tool. All the resultant velocity rasters were interpolated, and the bias was calculated between the same seasons of two consecutive years. Two maps were generated for each year using the resultant velocity rasters, one for early summer (January to April) and one for September to December. The mean velocities found for Mellor glacier for Jan-April 2014, 2015, 2016, and 2017 were 374.06 m a⁻¹, 413.59 m a⁻¹, 278.62 m a⁻¹, and 406.66 m a⁻¹, respectively. Velocities for September-December 2014, 2015, 2016, and 2017 were found to be 334.63 m a⁻¹, 334.43 m a⁻¹, 367.37 m a⁻¹, and 381.31 m a⁻¹, respectively. Biases were computed between all the corresponding seasons of the four years and root mean square error (RMSE) values were estimated. These RMSE values signify the season-wise variations in the velocities. RMSE values for the Jan-April seasons of 2014-15, 2015-16, and 2016-17 were 75.92 m a⁻¹, 147.82 m a⁻¹, and 133.33 m a⁻¹, respectively. 
Similarly, RMSE values for the September-December seasons of 2014-15, 2015-16, and 2016-17 are 35.7 m a⁻¹, 51.29 m a⁻¹, and 35.84 m a⁻¹, respectively. The results showed variations in velocities across seasons. We plan to integrate this data with discharge rates to estimate the mass balance and melting rates of the glacier and decipher the mechanisms at work for the Mellor glacier.
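    The per-pixel processing described above (displacement pair to speed, then SNR masking) can be sketched with NumPy. The 0.9 threshold is the one quoted in the text, while the function name and array layout are illustrative assumptions.

```python
import numpy as np

def velocity_from_displacement(dx, dy, snr, dt_years, snr_threshold=0.9):
    """Convert a pair of correlation displacement maps (metres) into an
    annual velocity raster, masking low-SNR pixels as NaN."""
    dx, dy, snr = (np.asarray(a, dtype=float) for a in (dx, dy, snr))
    speed = np.hypot(dx, dy) / dt_years        # magnitude in m/yr per pixel
    speed[snr < snr_threshold] = np.nan        # drop noisy pixels
    return speed
```

    For a pixel that moved 300 m east and 400 m north in one year with good SNR, the speed is 500 m a⁻¹; a low-SNR pixel is masked out.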


    Parameterized Inapproximability of Independent Set in $H$-Free Graphs

    We study the Independent Set (IS) problem in $H$-free graphs, i.e., graphs excluding some fixed graph $H$ as an induced subgraph. We prove several inapproximability results both for polynomial-time and parameterized algorithms. Halld\'orsson [SODA 1995] showed that for every $\delta>0$, IS has a polynomial-time $(\frac{d-1}{2}+\delta)$-approximation in $K_{1,d}$-free graphs. We extend this result by showing that $K_{a,b}$-free graphs admit a polynomial-time $O(\alpha(G)^{1-1/a})$-approximation, where $\alpha(G)$ is the size of a maximum independent set in $G$. Furthermore, we complement the result of Halld\'orsson by showing that for some $\gamma=\Theta(d/\log d)$, there is no polynomial-time $\gamma$-approximation for these graphs, unless NP = ZPP. Bonnet et al. [IPEC 2018] showed that IS parameterized by the size $k$ of the independent set is W[1]-hard on graphs which do not contain (1) a cycle of constant length at least $4$, (2) the star $K_{1,4}$, and (3) any tree with two vertices of degree at least $3$ at constant distance. We strengthen this result by proving three inapproximability results under different complexity assumptions for almost the same class of graphs (we weaken condition (2) to $G$ not containing $K_{1,5}$). First, under the ETH, there is no $f(k)\cdot n^{o(k/\log k)}$ algorithm for any computable function $f$. Then, under the deterministic Gap-ETH, there is a constant $\delta>0$ such that no $\delta$-approximation can be computed in $f(k) \cdot n^{O(1)}$ time. Also, under the stronger randomized Gap-ETH, there is no such approximation algorithm with runtime $f(k)\cdot n^{o(k)}$. Finally, we consider the parameterization by the excluded graph $H$, and show that under the ETH, IS has no $n^{o(\alpha(H))}$ algorithm in $H$-free graphs, and under Gap-ETH there is no $d/k^{o(1)}$-approximation for $K_{1,d}$-free graphs with runtime $f(d,k)\cdot n^{O(1)}$. Comment: Preliminary version of the paper in WG 2020 proceedings
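    As a concrete reference point for $\alpha(G)$, the quantity in which the approximation guarantees above are stated, here is a hypothetical exhaustive-search sketch. It runs in exponential time and bears no relation to the algorithms or hardness proofs of the paper.

```python
from itertools import combinations

def max_independent_set(n, edges):
    """Exhaustive search for a maximum independent set, whose size is
    alpha(G); exponential in n, only to make the objective concrete."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):
        for S in combinations(range(n), size):
            if all(frozenset((u, v)) not in edge_set
                   for u, v in combinations(S, 2)):
                return set(S)
    return set()
```

    On the 5-cycle, whose independence number is 2, the search returns a pair of non-adjacent vertices.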

    Mechanism Design for Perturbation Stable Combinatorial Auctions

    Motivated by recent research on combinatorial markets with endowed valuations by (Babaioff et al., EC 2018) and (Ezra et al., EC 2020), we introduce a notion of perturbation stability in Combinatorial Auctions (CAs) and study the extent to which stability helps in social welfare maximization and mechanism design. A CA is $\gamma$\textit{-stable} if the optimal solution is resilient to inflation, by a factor of $\gamma \geq 1$, of any bidder's valuation for any single item. On the positive side, we show how to compute efficiently an optimal allocation for 2-stable subadditive valuations and that a Walrasian equilibrium exists for 2-stable submodular valuations. Moreover, we show that a Parallel 2nd Price Auction (P2A) followed by a demand query for each bidder is truthful for general subadditive valuations and results in the optimal allocation for 2-stable submodular valuations. To highlight the challenges behind optimization and mechanism design for stable CAs, we show that a Walrasian equilibrium may not exist for $\gamma$-stable XOS valuations for any $\gamma$, that a polynomial-time approximation scheme does not exist for $(2-\epsilon)$-stable submodular valuations, and that any DSIC mechanism that computes the optimal allocation for stable CAs and does not use demand queries must use exponentially many value queries. We conclude with analyzing the Price of Anarchy of P2A and Parallel 1st Price Auctions (P1A) for CAs with stable submodular and XOS valuations. Our results indicate that the quality of equilibria of simple non-truthful auctions improves only for $\gamma$-stable instances with $\gamma \geq 3$.