Parameterizing by the Number of Numbers
The usefulness of parameterized algorithmics has often depended on what
Niedermeier has called "the art of problem parameterization". In this paper we
introduce and explore a novel but general form of parameterization: the number
of numbers. Several classic numerical problems, such as Subset Sum, Partition,
3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with
Target Sums, have multisets of integers as input. We initiate the study of
parameterizing these problems by the number of distinct integers in the input.
We rely on an FPT result for Integer Linear Programming Feasibility (ILPF) to
show that all the above-mentioned problems are fixed-parameter tractable when
parameterized in this way. In various
applied settings, problem inputs often consist in part of multisets of integers
or multisets of weighted objects (such as edges in a graph, or jobs to be
scheduled). Such number-of-numbers parameterized problems often reduce to
subproblems about transition systems of various kinds, parameterized by the
size of the system description. We consider several core problems of this kind
relevant to number-of-numbers parameterization. Our main hardness result
considers the problem: given a non-deterministic Mealy machine M (a finite
state automaton outputting a letter on each transition), an input word x, and a
census requirement c for the output word specifying how many times each letter
of the output alphabet should be written, decide whether there exists a
computation of M reading x that outputs a word y that meets the requirement c.
We show that this problem is hard for W[1]. If the question is whether there
exists an input word x such that a computation of M on x outputs a word that
meets c, the problem becomes fixed-parameter tractable.
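To make the census requirement concrete, here is a brute-force sketch of the decision problem in Python (the machine encoding and the exhaustive search are our illustrative assumptions; the search is exponential in the input word, which is consistent with the W[1]-hardness shown in the paper):

```python
from collections import Counter

def census_run_exists(delta, start, x, census):
    """Is there a computation of the non-deterministic Mealy machine that
    reads the input word x and outputs a word meeting the census exactly?

    delta:  maps (state, input_letter) -> iterable of (next_state, output_letter)
    census: maps output_letter -> required number of occurrences (positive)

    Exhaustive search, exponential in len(x); for illustration only.
    """
    target = Counter(census)

    def dfs(state, i, counts):
        if i == len(x):
            return +counts == +target  # unary + drops zero entries before comparing
        for nxt, out in delta.get((state, x[i]), ()):
            if counts[out] >= target[out]:
                continue  # prune: this output letter's quota is already used up
            counts[out] += 1
            if dfs(nxt, i + 1, counts):
                counts[out] -= 1
                return True
            counts[out] -= 1
        return False

    return dfs(start, 0, Counter())
```

For instance, a one-state machine that may output either '0' or '1' on each 'a' it reads can meet the census {'0': 1, '1': 1} on input "aa", but no run can meet a census demanding three '0's over a two-letter input.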
Explaining Snapshots of Network Diffusions: Structural and Hardness Results
Much research has studied the diffusion of ideas or technologies on social
networks, including the Influence Maximization problem and many of its
variations. Here, we investigate a type of inverse
problem. Given a snapshot of the diffusion process, we seek to understand if
the snapshot is feasible for a given dynamic, i.e., whether there is a limited
number of nodes whose initial adoption can result in the snapshot in finite
time. While similar questions have been considered for epidemic dynamics, here,
we consider this problem for variations of the deterministic Linear Threshold
Model, which is more appropriate for modeling strategic agents. Specifically,
we consider both sequential and simultaneous dynamics when deactivations are
allowed and when they are not. Even though we show hardness results for all
variations we consider, we show that the case of sequential dynamics with
deactivations allowed is significantly harder than all others. In contrast,
sequential dynamics make the problem trivial on cliques, even though its
complexity for simultaneous dynamics is unknown. We complement our hardness
results with structural insights that can help better understand diffusions on
social networks under various dynamics.
Comment: 14 pages, 3 figures
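As a point of reference for the dynamics discussed above, the following is a minimal sketch of the deterministic Linear Threshold Model with simultaneous updates and no deactivations; the graph encoding, weight convention, and feasibility check are our illustrative assumptions, not the paper's exact formalization:

```python
def simulate_ltm(neighbors, weights, thresholds, seeds, max_rounds=100):
    """Deterministic Linear Threshold Model, simultaneous updates, no
    deactivations: a node activates once the total weight of its active
    neighbors reaches its threshold. Returns the final active set.

    neighbors:  maps node -> list of its neighbors
    weights:    maps directed pair (u, v) -> influence weight of u on v
    thresholds: maps node -> activation threshold
    """
    active = set(seeds)
    for _ in range(max_rounds):
        newly = {
            v for v in neighbors
            if v not in active
            and sum(weights[(u, v)] for u in neighbors[v] if u in active)
                >= thresholds[v]
        }
        if not newly:
            break  # dynamics have converged
        active |= newly  # simultaneous update: all eligible nodes flip together
    return active

def explains_snapshot(neighbors, weights, thresholds, seeds, snapshot):
    """Feasibility check for one candidate seed set: does the dynamic,
    started from `seeds`, reach exactly the observed snapshot?"""
    return simulate_ltm(neighbors, weights, thresholds, seeds) == set(snapshot)
```

The inverse problem studied in the paper asks whether any small seed set makes `explains_snapshot` true; iterating this check over all candidate seed sets is exactly the kind of brute force the hardness results rule out improving substantially.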
Band gap bowing in NixMg1-xO
Epitaxial transparent oxide NixMg1-xO (0 ≤ x ≤ 1) thin films were grown on MgO(100) substrates by pulsed laser deposition. High-resolution synchrotron X-ray diffraction and high-resolution transmission electron microscopy analysis indicate that the thin films are compositionally and structurally homogeneous, forming a completely miscible solid solution. Nevertheless, the composition dependence of the NixMg1-xO optical band gap shows a strong non-parabolic bowing with a discontinuity at dilute NiO concentrations of x ≈ 0.074, which accounts for the anomalously large band gap narrowing in the NixMg1-xO solid solution system.
Results and recommendations from an intercomparison of six Hygroscopicity-TDMA systems
The performance of six custom-built Hygroscopicity-Tandem Differential Mobility Analyser (H-TDMA) systems was investigated in the frame of an international calibration and intercomparison workshop held in Leipzig in February 2006. The goal of the workshop was to harmonise H-TDMA measurements and develop recommendations for atmospheric measurements and their data evaluation. The H-TDMA systems were compared in terms of the sizing of dry particles, relative humidity (RH) uncertainty, and consistency in the determination of number fractions of different hygroscopic particle groups. The experiments were performed in an air-conditioned laboratory using ammonium sulphate particles or an external mixture of ammonium sulphate and soot particles. The sizing of dry particles by the six H-TDMA systems was within 0.2 to 4.2% of the selected particle diameter, depending on the investigated size and the individual system. Measurements of ammonium sulphate aerosol found deviations equivalent to 4.5% RH from the set point of 90% RH compared to results from previous experiments in the literature. The number fractions of particles within the clearly separated growth-factor modes of a laboratory-generated, externally mixed aerosol were also evaluated. The data from the H-TDMAs were analysed with a single fitting routine to investigate differences caused by the different data evaluation procedures used for each H-TDMA. The differences between the H-TDMAs were reduced from +12/-13% to +8/-6% when the same analysis routine was applied. We conclude that a common data evaluation procedure to determine number fractions of externally mixed aerosols will improve the comparability of H-TDMA measurements. It is recommended to ensure proper calibration of all flow, temperature and RH sensors in the systems. It is most important to thermally insulate the aerosol humidification unit and the second DMA and to monitor these temperatures to an accuracy of 0.2 degrees C.
For the correct determination of external mixtures, it is necessary to take into account size-dependent losses due to diffusion in the plumbing between the DMAs and in the aerosol humidification unit.
Peer reviewed
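The number-fraction evaluation discussed above boils down to binning particles by their hygroscopic growth factor g = D_wet/D_dry; a minimal sketch, with the mode boundaries and data layout chosen purely for illustration:

```python
def number_fractions(dry_diameter_nm, wet_diameters_nm, mode_edges):
    """Classify particles by hygroscopic growth factor g = D_wet / D_dry
    and return the number fraction falling in each growth-factor mode.

    mode_edges: sorted bin edges, e.g. [1.0, 1.1, 2.0] defines a
    'nearly hydrophobic' mode (1.0-1.1) and a 'hygroscopic' mode (1.1-2.0).
    """
    growth_factors = [d / dry_diameter_nm for d in wet_diameters_nm]
    counts = [0] * (len(mode_edges) - 1)
    for g in growth_factors:
        for i in range(len(mode_edges) - 1):
            if mode_edges[i] <= g < mode_edges[i + 1]:
                counts[i] += 1
                break
    total = len(growth_factors)
    return [c / total for c in counts]
```

A shared routine of this kind, applied to every instrument's data, is the sort of common evaluation procedure the abstract recommends for making H-TDMA number fractions comparable.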
Cell wall characteristics during sexual reproduction of Mougeotia sp. (Zygnematophyceae) revealed by electron microscopy, glycan microarrays and Raman spectroscopy
Mougeotia spp. collected from field samples were investigated for their conjugation morphology by light, fluorescence, scanning and transmission electron microscopy. During scalariform conjugation, the extragametangial zygospores were initially surrounded by a thin cell wall that developed into a multi-layered zygospore wall. Maturing zygospores turned dark brown and were filled with storage compounds such as lipids and starch. While M. parvula had a smooth surface, M. disjuncta had a punctate surface structure and a prominent suture. The zygospore wall consisted of a polysaccharide-rich endospore, followed by a thin layer with a lipid-like appearance, a massive electron-dense mesospore and a very thin exospore composed of polysaccharides. Glycan microarray analysis of zygospores at different developmental stages revealed the occurrence of pectins and hemicelluloses, mostly composed of homogalacturonan (HG), xyloglucans, xylans, arabinogalactan proteins and extensins. In situ localization by the probe OG7-13AF 488 labelled HG in young zygospore walls, vegetative filaments and, most prominently, in conjugation tubes and cross walls. Raman imaging showed the distribution of proteins, lipids, carbohydrates and aromatic components in the mature zygospore with a spatial resolution of ~250 nm. The carbohydrate nature of the endo- and exospore was confirmed, with an enrichment of lipids and aromatic components in between, probably algaenan or a sporopollenin-like material. Taken together, these results indicate that during zygospore formation, reorganizations of the cell walls occurred, leading to a resistant and protective structure.
Improved FPT algorithms for weighted independent set in bull-free graphs
Very recently, Thomassé, Trotignon and Vuskovic [WG 2014] have given an FPT
algorithm for Weighted Independent Set in bull-free graphs parameterized by the
weight of the solution, running in time . In this article
we improve this running time to . As a byproduct, we also
improve the previous Turing-kernel for this problem from to .
Furthermore, for the subclass of bull-free graphs without holes of length at
most for , we speed up the running time to . As grows, this running time is
asymptotically tight in terms of , since we prove that for each integer , Weighted Independent Set cannot be solved in time in the class of -free graphs unless the
ETH fails.
Comment: 15 pages
The simplicity project: easing the burden of using complex and heterogeneous ICT devices and services
As of today, to exploit the variety of different "services", users need to configure each of their devices by using different procedures and need to explicitly select among heterogeneous access technologies and protocols. In addition to that, users are authenticated and charged by different means. The lack of implicit human-computer interaction, context-awareness and standardisation places an enormous burden of complexity on the shoulders of the final users. The IST-Simplicity project aims at alleviating these problems by: i) automatically creating and customizing a user communication space; ii) adapting services to user terminal characteristics and to users' preferences; iii) orchestrating network capabilities. The aim of this paper is to present the technical framework of the IST-Simplicity project. The paper also provides a thorough analysis and qualitative evaluation of the different technologies, standards and works presented in the literature related to the Simplicity system to be developed.
Systems of Linear Equations over F_2 and Problems Parameterized Above Average
In the problem Max Lin, we are given a system of m linear equations in n
variables over F_2 in which each equation is assigned a positive weight, and
we wish to find an assignment of values to the variables that maximizes the
excess, which is the total weight of satisfied equations minus the total
weight of falsified equations. Using an algebraic approach, we obtain a
lower bound for the maximum excess.
Max Lin Above Average (Max Lin AA) is a parameterized version of Max Lin
introduced by Mahajan et al. (Proc. IWPEC'06 and J. Comput. Syst. Sci. 75,
2009). In Max Lin AA all weights are integral and we are to decide whether the
maximum excess is at least k, where k is the parameter.
It is not hard to see that we may assume that no two equations in the system
have the same left-hand side and . Using our maximum excess results,
we prove that, under these assumptions, Max Lin AA is fixed-parameter
tractable for a wide special case: m ≤ 2^{p(n)} for an arbitrary fixed
function p(n) = o(n).
Max r-Lin AA is a special case of Max Lin AA, where each equation has at
most r variables. In Max Exact r-SAT AA we are given a multiset of m
clauses on n variables such that each clause has r variables, and we are
asked whether there is a truth assignment to the n variables that satisfies
at least (1 - 2^{-r})m + k 2^{-r} clauses. Using our maximum excess
results, we prove that for each fixed r, Max r-Lin AA and Max Exact r-SAT
AA can be solved in time 2^{O(k log k)} (nm)^{O(1)}. This improves
2^{O(k^2)}-time algorithms for the two problems obtained by Gutin et al.
(IWPEC 2009) and Alon et al. (SODA 2010), respectively.
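As a concrete reference for the excess objective, here is a brute-force sketch over F_2 assignments (illustrative only and exponential in the number of variables, whereas the paper's interest is in parameterized algorithms; the encoding of the system is our assumption):

```python
from itertools import product

def max_excess(equations, n):
    """Brute-force maximum excess of a weighted system of linear equations
    over F_2 in n variables.

    equations: list of (support, b, w), where `support` is a set of variable
    indices, the equation is sum_{i in support} z_i = b (mod 2), and w > 0
    is its weight.
    """
    best = None
    for z in product((0, 1), repeat=n):
        excess = 0
        for support, b, w in equations:
            lhs = sum(z[i] for i in support) % 2
            # satisfied equations add their weight, falsified ones subtract it
            excess += w if lhs == b else -w
        best = excess if best is None else max(best, excess)
    return best
```

Max Lin AA then asks whether this quantity is at least the parameter k, which the above loop answers only in exponential time.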
Particle characterization at the Cape Verde atmospheric observatory during the 2007 RHaMBLe intensive
The chemical characterization of PM from high-volume (HV) filter and Berner impactor (BI) samples during RHaMBLe (Reactive Halogens in the Marine Boundary Layer) 2007 shows that the Cape Verde aerosol particles are mainly composed of sea salt, mineral dust and associated water. Minor components are nss-salts, OC and EC. The influence of the African continent on the aerosol composition was generally small, but air masses that came from south-western Europe crossing the Canary Islands transported dust to the sampling site together with other loadings. The mean PM10 mass concentration was determined as 17 μg/m3 from impactor samples and 24.2 μg/m3 from HV filter samples. Non-sea-salt (nss) components of PM were found in the submicron fractions and nitrate in the coarse-mode fraction. Bromide was found in all samples at strongly depleted concentrations in the range 1–8 ng/m3 compared to fresh sea salt aerosol, indicating intense atmospheric halogen chemistry. Loss of bromide by reaction with ozone during the long sampling times is presumed; it amounted to bromide deficits of 82±12% in coarse-mode impactor samples and 88±6% in filter samples. The chloride deficit was determined as 8% and 1% for the coarse-mode particles (3.5–10 μm; 1.2–3.5 μm) and 21% for filter samples.
On 14 May, with high mineral dust loads, the maxima of OC (1.71 μg/m3) and EC (1.25 μg/m3) were also measured. The minimum of TC (0.25 μg/m3) was detected during the period 25 to 27 May, when pure marine air masses arrived. The fraction of carbonaceous material decreases with increasing particle size, from 60% for the ultrafine particles to 2.5% in coarse-mode PM.
Total iron (dust vs. non-dust periods: 0.53 vs. 0.06 μg/m3), calcium (0.22 vs. 0.03 μg/m3) and potassium (0.33 vs. 0.02 μg/m3) were found to be good indicators of dust periods because of their strongly increased concentrations in the 1.2–3.5 μm fraction compared to the non-dust periods. Among the organic constituents, oxalate (78–151 ng/m3) and methanesulfonic acid (MSA, 25–100 ng/m3) were the major compounds identified. A good correlation between nss-sulphate and MSA was found for the majority of days, indicating active DMS chemistry and low anthropogenic influence.
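The chloride and bromide deficits quoted above are conventionally computed against the halide-to-sodium mass ratio of fresh seawater; a minimal sketch, where the seawater ratios are approximate literature values and the function name is ours:

```python
# Halide depletion relative to fresh sea salt, inferred from sodium.
# Approximate seawater mass ratios from the literature: Cl/Na ~ 1.80, Br/Na ~ 0.0062.
SEAWATER_RATIO = {"Cl": 1.80, "Br": 0.0062}

def halide_deficit_percent(halide_measured, na_measured, halide="Cl"):
    """Percent of the halide lost relative to fresh sea-salt aerosol:

        deficit = (expected - measured) / expected * 100,

    where expected = Na * (halide/Na mass ratio in seawater), and the
    measured concentrations share the same units (e.g. ug/m3)."""
    expected = na_measured * SEAWATER_RATIO[halide]
    return (expected - halide_measured) / expected * 100.0
```

For example, measuring half the chloride that the sodium concentration predicts corresponds to a 50% chloride deficit.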