
    Surface roughness during depositional growth and sublimation of ice crystals

    Full version of an earlier discussion paper (Chou et al. 2018). Ice surface properties can modify the scattering properties of atmospheric ice crystals and therefore affect the radiative properties of mixed-phase and cirrus clouds. The Ice Roughness Investigation System (IRIS) is a new laboratory setup designed to investigate the conditions under which roughness develops on single ice crystals, based on their size, morphology and growth conditions (relative humidity and temperature). Ice roughness is quantified through the analysis of speckle in 2-D light-scattering patterns. Characterization of the setup shows that a supersaturation of 20 % with respect to ice and a temperature at the sample position as low as -40 °C could be achieved within IRIS. Investigations of the influence of humidity show that higher supersaturations with respect to ice lead to enhanced roughness and irregularities of ice crystal surfaces. Moreover, relative humidity oscillations lead to a gradual ratcheting-up of roughness and irregularities as the crystals undergo repeated growth-sublimation cycles. This memory effect also appears to result in reduced growth rates in later cycles. Thus, growth history, as well as supersaturation and temperature, influences ice crystal growth and properties; future atmospheric models may benefit from including growth history in the cloud evolution process, allowing a more accurate representation not only of roughness but also of crystal size, and possibly of electrification properties. Peer reviewed
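    The roughness metric described above relies on speckle in 2-D light-scattering patterns. A common, simple way to quantify speckle is the contrast (standard deviation of the intensity divided by its mean); the sketch below is a minimal Python illustration of that idea under this assumption, not the actual IRIS analysis pipeline, and the array names and synthetic test patterns are invented for the example.

```python
import numpy as np

def speckle_contrast(pattern):
    """Speckle contrast C = std(I) / mean(I) of a 2-D intensity pattern.

    A more strongly structured (speckled) scattering pattern gives a higher
    contrast, which serves here as a simple proxy for surface roughness.
    """
    intensity = np.asarray(pattern, dtype=float)
    mean = intensity.mean()
    return 0.0 if mean == 0.0 else intensity.std() / mean

# Synthetic test patterns (placeholders for measured scattering patterns).
rng = np.random.default_rng(0)
smooth = np.full((256, 256), 100.0)               # uniform intensity -> contrast ~ 0
rough = rng.exponential(100.0, size=(256, 256))   # fully developed speckle -> contrast ~ 1
print(speckle_contrast(smooth), speckle_contrast(rough))
```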

    Results and recommendations from an intercomparison of six Hygroscopicity-TDMA systems

    The performance of six custom-built Hygroscopicity-Tandem Differential Mobility Analyser (H-TDMA) systems was investigated in the framework of an international calibration and intercomparison workshop held in Leipzig, February 2006. The goal of the workshop was to harmonise H-TDMA measurements and develop recommendations for atmospheric measurements and their data evaluation. The H-TDMA systems were compared in terms of the sizing of dry particles, relative humidity (RH) uncertainty, and consistency in the determination of number fractions of different hygroscopic particle groups. The experiments were performed in an air-conditioned laboratory using ammonium sulphate particles or an external mixture of ammonium sulphate and soot particles. The sizing of dry particles by the six H-TDMA systems was within 0.2 to 4.2% of the selected particle diameter, depending on the investigated size and the individual system. Measurements of ammonium sulphate aerosol found deviations equivalent to 4.5% RH from the set point of 90% RH compared to results from previous experiments in the literature. The number fractions of particles within the clearly separated growth factor modes of a laboratory-generated, externally mixed aerosol were also evaluated. The data from the H-TDMAs were analysed with a single fitting routine to investigate differences caused by the different data evaluation procedures used for each H-TDMA. The differences between the H-TDMAs were reduced from +12/-13% to +8/-6% when the same analysis routine was applied. We conclude that a common data evaluation procedure to determine number fractions of externally mixed aerosols will improve the comparability of H-TDMA measurements. It is recommended to ensure proper calibration of all flow, temperature and RH sensors in the systems. It is most important to thermally insulate the aerosol humidification unit and the second DMA and to monitor these temperatures to an accuracy of 0.2 °C. For the correct determination of external mixtures, it is necessary to take into account size-dependent losses due to diffusion in the plumbing between the DMAs and in the aerosol humidification unit. Peer reviewed
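    The determination of number fractions from clearly separated growth-factor modes, mentioned above, can be illustrated with a simple fit. The sketch below fits a two-mode Gaussian to a measured growth-factor distribution and returns the relative mode areas; the mode shapes, initial guesses and variable names are assumptions for illustration, not the workshop's common analysis routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_modes(gf, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian modes in growth-factor (GF) space."""
    mode = lambda a, mu, s: a * np.exp(-0.5 * ((gf - mu) / s) ** 2)
    return mode(a1, mu1, s1) + mode(a2, mu2, s2)

def number_fractions(gf, counts):
    """Fit two modes (assumed near GF ~ 1.0 for soot-like particles and
    GF ~ 1.7 for ammonium sulphate at 90% RH) and return their number fractions."""
    p0 = [counts.max(), 1.0, 0.05, counts.max(), 1.7, 0.05]   # assumed initial guess
    (a1, _, s1, a2, _, s2), _ = curve_fit(two_modes, gf, counts, p0=p0)
    n1, n2 = a1 * abs(s1), a2 * abs(s2)   # a Gaussian's area is proportional to amplitude * width
    return n1 / (n1 + n2), n2 / (n1 + n2)
```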

    The simplicity project: easing the burden of using complex and heterogeneous ICT devices and services

    As of today, to exploit the variety of different "services", users need to configure each of their devices using different procedures and need to explicitly select among heterogeneous access technologies and protocols. In addition, users are authenticated and charged by different means. The lack of implicit human-computer interaction, context-awareness and standardisation places an enormous burden of complexity on the shoulders of the final users. The IST-Simplicity project aims at alleviating these problems by: i) automatically creating and customizing a user communication space; ii) adapting services to user terminal characteristics and to user preferences; iii) orchestrating network capabilities. The aim of this paper is to present the technical framework of the IST-Simplicity project. The paper provides a thorough analysis and qualitative evaluation of the different technologies, standards and works presented in the literature related to the Simplicity system to be developed.

    Vertex Cover Kernelization Revisited: Upper and Lower Bounds for a Refined Parameter

    An important result in the study of polynomial-time preprocessing shows that there is an algorithm which, given an instance (G,k) of Vertex Cover, outputs an equivalent instance (G',k') in polynomial time with the guarantee that G' has at most 2k' vertices (and thus O((k')^2) edges) with k' <= k. Using the terminology of parameterized complexity, we say that k-Vertex Cover has a kernel with 2k vertices. There is complexity-theoretic evidence that both 2k vertices and Theta(k^2) edges are optimal for the kernel size. In this paper we consider the Vertex Cover problem with a different parameter, the size fvs(G) of a minimum feedback vertex set for G. This refined parameter is structurally smaller than the parameter k associated with the vertex covering number vc(G), since fvs(G) <= vc(G) and the difference can be arbitrarily large. We give a kernel for Vertex Cover with a number of vertices that is cubic in fvs(G): an instance (G,X,k) of Vertex Cover, where X is a feedback vertex set for G, can be transformed in polynomial time into an equivalent instance (G',X',k') such that |V(G')| <= 2k and |V(G')| <= O(|X'|^3). A similar result holds when the feedback vertex set X is not given along with the input. In sharp contrast, we show that the Weighted Vertex Cover problem does not have a polynomial kernel when parameterized by the cardinality of a given vertex cover of the graph unless NP is in coNP/poly and the polynomial hierarchy collapses to the third level. Comment: Published in "Theory of Computing Systems" as an Open Access publication.
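    For context on kernel sizes for Vertex Cover, the sketch below shows the classic Buss-style reduction rules (delete isolated vertices; any vertex of degree greater than k must be in the cover), which give a kernel with O(k^2) vertices. This is a simpler illustration of kernelization than the 2k-vertex kernel or the feedback-vertex-set-parameterized kernel of the paper; the adjacency-dictionary representation is an assumption.

```python
def buss_kernelize(adj, k):
    """Buss reduction rules for k-Vertex Cover.

    adj: dict mapping each vertex to a set of neighbours (simple undirected graph).
    Returns (reduced adjacency, remaining budget) or None if (G, k) has no cover of size k.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if not adj[v]:                 # Rule 1: isolated vertices are never needed
                del adj[v]
                changed = True
            elif len(adj[v]) > k:          # Rule 2: a vertex of degree > k must be in the cover
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                k -= 1
                changed = True
                if k < 0:
                    return None
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    if edges > k * k:                      # each remaining vertex covers at most k edges
        return None
    return adj, k
```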

    The power of linear-time data reduction for matching.

    Finding maximum-cardinality matchings in undirected graphs is arguably one of the most central graph primitives. For m-edge and n-vertex graphs, it is well known to be solvable in O(m\sqrt{n}) time; however, for several applications this running time is still too slow. We investigate how linear-time (and almost linear-time) data reduction (used as preprocessing) can alleviate the situation. More specifically, we focus on linear-time kernelization. We initiate a deeper and more systematic study both for general graphs and for bipartite graphs. Our data reduction algorithms easily comply (in the form of preprocessing) with every solution strategy (exact, approximate, heuristic), thus making them attractive in various settings.
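    The flavour of data reduction for matching can be conveyed by two folklore linear-time rules: discard isolated vertices, and greedily match a degree-1 vertex to its unique neighbour (some maximum matching contains that edge). The sketch below applies these rules exhaustively under an assumed adjacency-set representation; it is an illustration, not the paper's kernelization.

```python
from collections import deque

def reduce_matching(adj):
    """Exhaustively apply degree-0 and degree-1 reduction rules for maximum matching.

    adj: dict mapping each vertex to a set of neighbours (simple undirected graph).
    Returns (reduced adjacency, forced) where forced lists edges that belong to
    some maximum matching of the original graph.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    forced = []
    queue = deque(v for v in adj if len(adj[v]) <= 1)
    while queue:
        v = queue.popleft()
        if v not in adj:
            continue
        if not adj[v]:                    # degree 0: irrelevant to any matching
            del adj[v]
            continue
        u = next(iter(adj[v]))            # degree 1: match v with its only neighbour u
        forced.append((v, u))
        for w in adj[u]:                  # deleting u lowers the degree of its neighbours
            if w != v:
                adj[w].discard(u)
                if len(adj[w]) <= 1:
                    queue.append(w)
        del adj[v]
        del adj[u]
    return adj, forced
```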

    Systems of Linear Equations over F_2 and Problems Parameterized Above Average

    In the problem Max Lin, we are given a system Az = b of m linear equations with n variables over F_2 in which each equation is assigned a positive weight, and we wish to find an assignment of values to the variables that maximizes the excess, which is the total weight of satisfied equations minus the total weight of falsified equations. Using an algebraic approach, we obtain a lower bound for the maximum excess. Max Lin Above Average (Max Lin AA) is a parameterized version of Max Lin introduced by Mahajan et al. (Proc. IWPEC'06 and J. Comput. Syst. Sci. 75, 2009). In Max Lin AA all weights are integral and we are to decide whether the maximum excess is at least k, where k is the parameter. It is not hard to see that we may assume that no two equations in Az = b have the same left-hand side and that n = rank A. Using our maximum excess results, we prove that, under these assumptions, Max Lin AA is fixed-parameter tractable for a wide special case: m <= 2^{p(n)} for an arbitrary fixed function p(n) = o(n). Max r-Lin AA is a special case of Max Lin AA, where each equation has at most r variables. In Max Exact r-SAT AA we are given a multiset of m clauses on n variables such that each clause has r variables, and asked whether there is a truth assignment to the n variables that satisfies at least (1 - 2^{-r})m + k2^{-r} clauses. Using our maximum excess results, we prove that for each fixed r >= 2, Max r-Lin AA and Max Exact r-SAT AA can be solved in time 2^{O(k log k)} + m^{O(1)}. This improves the 2^{O(k^2)} + m^{O(1)}-time algorithms for the two problems obtained by Gutin et al. (IWPEC 2009) and Alon et al. (SODA 2010), respectively.
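    The excess defined above is easy to state in code: for a 0/1 assignment z, it is the total weight of the equations of Az = b over F_2 that z satisfies minus the total weight of those it falsifies. The sketch below evaluates this quantity with numpy and includes a brute-force maximizer for tiny instances only; it is not the algebraic lower-bound method of the paper.

```python
import itertools
import numpy as np

def excess(A, b, w, z):
    """Excess of assignment z for the weighted system Az = b over F_2.

    A: (m, n) 0/1 matrix; b: length-m 0/1 vector; w: length-m array of positive
    weights; z: length-n 0/1 assignment. All are numpy arrays.
    """
    satisfied = (A.dot(z) % 2) == (b % 2)
    return w[satisfied].sum() - w[~satisfied].sum()

def max_excess_bruteforce(A, b, w):
    """Maximum excess by trying all 2^n assignments (illustration only)."""
    n = A.shape[1]
    return max(excess(A, b, w, np.array(z))
               for z in itertools.product([0, 1], repeat=n))
```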

    Parameterized complexity of the MINCCA problem on graphs of bounded decomposability

    In an edge-colored graph, the cost incurred at a vertex on a path when two incident edges with different colors are traversed is called reload or changeover cost. The "Minimum Changeover Cost Arborescence" (MINCCA) problem consists of finding an arborescence with a given root vertex such that the total changeover cost of the internal vertices is minimized. It has recently been proved by Gözüpek et al. [TCS 2016] that the problem is FPT when parameterized by the treewidth and the maximum degree of the input graph. In this article we present the following results for the MINCCA problem:
    - the problem is W[1]-hard parameterized by the treedepth of the input graph, even on graphs of average degree at most 8; in particular, it is W[1]-hard parameterized by the treewidth of the input graph, which answers the main open problem of Gözüpek et al. [TCS 2016];
    - it is W[1]-hard on multigraphs parameterized by the tree-cutwidth of the input multigraph;
    - it is FPT parameterized by the star tree-cutwidth of the input graph, which is a slightly restricted version of tree-cutwidth; this result strictly generalizes the FPT result given in Gözüpek et al. [TCS 2016];
    - it remains NP-hard on planar graphs even when restricted to instances with at most 6 colors and 0/1 symmetric costs, or when restricted to instances with at most 8 colors, maximum degree bounded by 4, and 0/1 symmetric costs.
    Comment: 25 pages, 11 figures
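    To make the objective concrete: in an arborescence, the changeover (reload) cost is paid at each internal vertex, between the colour of its incoming arc and the colour of each outgoing arc. The sketch below computes that total for a given arborescence under an assumed representation (parent pointers, arc colours and a cost table); it only evaluates the MINCCA objective and is not a solver.

```python
def total_changeover_cost(parent, colour, cost):
    """Total changeover cost of an arborescence.

    parent: dict child -> parent (the root does not appear as a key).
    colour: dict (tail, head) -> colour of that arc.
    cost:   dict (colour_in, colour_out) -> reload cost incurred at a vertex.
    """
    total = 0
    for child, v in parent.items():
        if v in parent:                        # v is internal: it has an incoming arc
            c_in = colour[(parent[v], v)]      # colour of the arc entering v
            c_out = colour[(v, child)]         # colour of the arc leaving v towards child
            total += cost[(c_in, c_out)]
    return total

# Tiny example: r -> a (red), a -> b (blue); unit cost for a colour change at a vertex.
parent = {"a": "r", "b": "a"}
colour = {("r", "a"): "red", ("a", "b"): "blue"}
cost = {("red", "red"): 0, ("blue", "blue"): 0, ("red", "blue"): 1, ("blue", "red"): 1}
print(total_changeover_cost(parent, colour, cost))   # 1
```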

    Parameterized Complexity of the k-anonymity Problem

    The problem of publishing personal data without giving up privacy is becoming increasingly important. An interesting formalization that has recently been proposed is k-anonymity. This approach requires that the rows of a table are partitioned into clusters of size at least k and that all the rows in a cluster become the same tuple after the suppression of some entries. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is known to be APX-hard even when the record values are over a binary alphabet and k = 3, and when the records have length at most 8 and k = 4. In this paper we study how the complexity of the problem is influenced by different parameters. We first show that the problem is W[1]-hard when parameterized by the size of the solution (and the value k). Then we exhibit a fixed-parameter algorithm when the problem is parameterized by the size of the alphabet and the number of columns. Finally, we investigate the computational (and approximation) complexity of the k-anonymity problem when restricting the instance to records having length bounded by 3 and k = 3. We show that such a restriction is APX-hard. Comment: 22 pages, 2 figures
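    As a concrete reminder of the model: a table is k-anonymous if, after suppressing some entries, every row coincides with at least k - 1 other rows, and the objective is the number of suppressed entries. The sketch below checks the property and counts suppressions; the '*' suppression marker and the list-of-tuples table representation are assumptions for illustration.

```python
from collections import Counter

SUPPRESSED = "*"   # assumed marker for a suppressed entry

def is_k_anonymous(rows, k):
    """True iff every row of the (already suppressed) table occurs at least k times."""
    counts = Counter(tuple(row) for row in rows)
    return all(c >= k for c in counts.values())

def suppression_cost(rows):
    """Number of suppressed entries, i.e. the quantity to be minimised."""
    return sum(entry == SUPPRESSED for row in rows for entry in row)

table = [("a", "*"), ("a", "*"), ("a", "*"), ("b", "c")]
print(is_k_anonymous(table, 3))   # False: the row ('b', 'c') occurs only once
print(suppression_cost(table))    # 3
```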

    Locality and Bounding-Box Quality of Two-Dimensional Space-Filling Curves

    Space-filling curves can be used to organise points in the plane into bounding-box hierarchies (such as R-trees). We develop measures of the bounding-box quality of space-filling curves that express how effective different space-filling curves are for this purpose. We give general lower bounds on the bounding-box quality measures and on locality according to Gotsman and Lindenbaum for a large class of space-filling curves. We describe a generic algorithm to approximate these and similar quality measures for any given curve. Using our algorithm we find good approximations of the locality and the bounding-box quality of several known and new space-filling curves. Surprisingly, some curves with relatively bad locality by Gotsman and Lindenbaum's measure have good bounding-box quality, while the curve with the best-known locality has relatively bad bounding-box quality. Comment: 24 pages, full version of paper to appear in ESA. Difference with first version: minor editing; Fig. 2(m) corrected.
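    One way to get a feel for bounding-box quality is to order points along a space-filling curve and look at the bounding box spanned by a contiguous run of them. The sketch below uses the standard Hilbert-curve index on a 2^order x 2^order grid and computes the bounding-box area of a run of consecutively ordered points; it is an empirical illustration, not the paper's formal quality measures.

```python
def hilbert_index(order, x, y):
    """Index of cell (x, y) along the Hilbert curve on a 2**order x 2**order grid
    (the standard xy-to-index conversion)."""
    side = 1 << order
    d = 0
    s = side >> 1
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                       # rotate/reflect the quadrant onto the base pattern
            if rx == 1:
                x, y = side - 1 - x, side - 1 - y
            x, y = y, x
        s >>= 1
    return d

def run_bbox_area(points, order, start, length):
    """Area of the bounding box of `length` consecutive points in Hilbert order."""
    ordered = sorted(points, key=lambda p: hilbert_index(order, *p))
    xs, ys = zip(*ordered[start:start + length])
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

pts = [(0, 0), (1, 0), (3, 3), (0, 3), (2, 1)]
print(sorted(pts, key=lambda p: hilbert_index(2, *p)))
print(run_bbox_area(pts, order=2, start=0, length=3))
```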