Local search for stable marriage problems
The stable marriage (SM) problem has a wide variety of practical
applications, ranging from matching resident doctors to hospitals, to matching
students to schools, or more generally to any two-sided market. In the
classical formulation, n men and n women express their preferences (via a
strict total order) over the members of the other sex. Solving a SM problem
means finding a stable marriage where stability is an envy-free notion: no man
and woman who are not married to each other would both prefer each other to
their partners or to being single. We consider both the classical stable
marriage problem and one of its useful variations (denoted SMTI) where the men
and women express their preferences in the form of an incomplete preference
list with ties over a subset of the members of the other sex. Matchings are
permitted only with people who appear in these lists, and we try to find a
stable matching that marries as many people as possible. While the SM problem
is solvable in polynomial time, the SMTI problem is NP-hard. We propose to tackle both
problems via a local search approach, which exploits properties of the problems
to reduce the size of the neighborhood and to make local moves efficiently. We
empirically evaluate our algorithm for SM problems by measuring its runtime
behaviour and its ability to sample the lattice of all possible stable
marriages. We evaluate our algorithm for SMTI problems in terms of both its
runtime behaviour and its ability to find a maximum cardinality stable
marriage. For SM problems, the number of steps of our algorithm grows only as
O(n log n), and it samples the set of all stable marriages very well. It
is thus a fair and efficient approach to generating stable marriages. Furthermore,
our approach for SMTI problems is able to solve large problems, quickly
returning stable matchings of large and often optimal size despite the
NP-hardness of this problem.
Comment: 12 pages, Proc. COMSOC 2010 (Third International Workshop on Computational Social Choice)
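The local search scheme described above can be sketched in a few lines; this is an illustrative minimal version only, which starts from a random matching and repairs randomly chosen blocking pairs, and omits the authors' neighborhood-reduction techniques (all names are hypothetical, not the paper's code):

```python
import random

def blocking_pairs(matching, men_pref, women_pref):
    """Return every (man, woman) pair who would both prefer each
    other to their current partners -- the classical stability test."""
    husband = {w: m for m, w in matching.items()}
    pairs = []
    for m, prefs in men_pref.items():
        for w in prefs:
            if w == matching[m]:
                break  # m prefers his wife to all remaining women
            # m prefers w to his wife; does w prefer m to her husband?
            if women_pref[w].index(m) < women_pref[w].index(husband[w]):
                pairs.append((m, w))
    return pairs

def local_search(men_pref, women_pref, max_steps=10000, seed=0):
    """Start from a random matching and repeatedly repair a randomly
    chosen blocking pair by swapping the two couples involved."""
    rng = random.Random(seed)
    men, women = list(men_pref), list(women_pref)
    matching = dict(zip(men, rng.sample(women, len(women))))
    for _ in range(max_steps):
        bps = blocking_pairs(matching, men_pref, women_pref)
        if not bps:
            return matching  # no blocking pair left: stable
        m, w = rng.choice(bps)
        # swap partners: m marries w, and w's old husband takes m's old wife
        old_m = next(m2 for m2, w2 in matching.items() if w2 == w)
        matching[m], matching[old_m] = w, matching[m]
    return None  # step budget exhausted
```

Randomizing both the start matching and the choice of blocking pair is what makes repeated runs sample different stable marriages from the lattice.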
ProLuCID: An improved SEQUEST-like algorithm with enhanced sensitivity and specificity
ProLuCID, a new algorithm for peptide identification using tandem mass spectrometry and protein sequence databases, has been developed. This algorithm uses a three-tier scoring scheme. First, a binomial probability is used as a preliminary scoring scheme to select candidate peptides. The binomial probability scores generated by ProLuCID minimize molecular weight bias and are independent of database size. A modified cross-correlation score is calculated for each candidate peptide identified by the binomial probability. This cross-correlation scoring function models the isotopic distributions of fragment ions of candidate peptides, which ultimately results in higher sensitivity and specificity than that obtained with the SEQUEST XCorr. Finally, ProLuCID uses the distribution of XCorr values for all of the selected candidate peptides to compute a Z score for the peptide hit with the highest XCorr. The ProLuCID Z score combines the discriminative power of XCorr and DeltaCN, the standard parameters for assessing the quality of peptide identification using SEQUEST, and displays significant improvement in specificity over ProLuCID XCorr alone. ProLuCID is also able to take advantage of high-resolution MS/MS spectra, leading to further improvements in specificity when compared to low-resolution tandem MS data. A comparison of filtered data searched with SEQUEST and ProLuCID, using the same false discovery rate as estimated by a target-decoy database strategy, shows that ProLuCID was able to identify as many as 25% more proteins than SEQUEST. ProLuCID is implemented in Java and can be easily installed on a single computer or a computer cluster. This article is part of a Special Issue entitled: Computational Proteomics.
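The first and third scoring tiers can be illustrated with short sketches; these are generic textbook formulas under assumed inputs (per-ion match probability, candidate XCorr lists), not ProLuCID's actual implementation:

```python
from math import comb, log10
from statistics import mean, stdev

def binomial_pscore(n_fragments, n_matched, p_match):
    """Tier 1 (sketch): probability of matching at least `n_matched`
    of `n_fragments` theoretical fragment ions purely by chance,
    given a per-ion match probability `p_match`; returned as
    -log10 so that a higher score means less likely by chance."""
    tail = sum(comb(n_fragments, k)
               * p_match ** k * (1 - p_match) ** (n_fragments - k)
               for k in range(n_matched, n_fragments + 1))
    return -log10(max(tail, 1e-300))  # guard against underflow

def zscore_top(xcorrs):
    """Tier 3 (sketch): Z score of the best cross-correlation value
    against the distribution of all candidate XCorr values."""
    return (max(xcorrs) - mean(xcorrs)) / stdev(xcorrs)
```

Because the binomial tail depends only on counts and a match probability, a score of this form is insensitive to database size, which is the property the abstract highlights.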
Volume-energy correlations in the slow degrees of freedom of computer-simulated phospholipid membranes
Constant-pressure molecular-dynamics simulations of phospholipid membranes in
the fluid phase reveal strong correlations between equilibrium fluctuations of
volume and energy on the nanosecond time-scale. The existence of strong
volume-energy correlations was previously deduced indirectly by Heimburg from
experiments focusing on the phase transition between the fluid and the ordered
gel phases. The correlations, which are reported here for three different
membranes (DMPC, DMPS-Na, and DMPSH), have volume-energy correlation
coefficients ranging from 0.81 to 0.89. The DMPC membrane was studied at two
temperatures showing that the correlation coefficient increases as the phase
transition is approached.
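The reported correlation coefficients are ordinary Pearson coefficients between the per-frame volume and energy time series; a minimal computation looks like this (a generic sketch, not the simulation analysis code):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equally sampled
    time series, e.g. per-frame volume and energy fluctuations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)
```

A coefficient of 0.81 to 0.89, as found for the three membranes, means the two fluctuation signals track each other closely but not perfectly.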
Measurements of Humidity in the Atmosphere and Validation Experiments (MOHAVE)-2009: overview of campaign operations and results
The Measurements of Humidity in the Atmosphere and Validation Experiment (MOHAVE) 2009 campaign took place on 11-27 October 2009 at the JPL Table Mountain Facility (TMF) in California. The main objectives of the campaign were to (1) validate the water vapor measurements of several instruments, including three Raman lidars, two microwave radiometers, two Fourier-Transform spectrometers, and two GPS receivers (column water), (2) cover water vapor measurements from the ground to the mesopause without gaps, and (3) study upper tropospheric humidity variability at timescales varying from a few minutes to several days. A total of 58 radiosondes and 20 frost-point hygrometer sondes were launched. Two types of radiosondes were used during the campaign. Non-negligible differences between the readings of the two radiosonde types used (Vaisala RS92 and InterMet iMet-1) had a small but measurable impact on the derivation of water vapor mixing ratio by the frost-point hygrometers. As observed in previous campaigns, the RS92 humidity measurements remained within 5 % of the frost-point in the lower and mid-troposphere, but were too dry in the upper troposphere. Over 270 h of water vapor measurements from three Raman lidars (JPL and GSFC) were compared to RS92, CFH, and NOAA-FPH. The JPL lidar profiles reached 20 km when integrated all night, and 15 km when integrated for 1 h. Excellent agreement between this lidar and the frost-point hygrometers was found throughout the measurement range, with only a 3 % (0.3 ppmv) mean wet bias for the lidar in the upper troposphere and lower stratosphere (UTLS). The other two lidars provided satisfactory results in the lower and mid-troposphere (2-5 % wet bias over the range 3-10 km), but suffered from contamination by fluorescence (wet bias ranging from 5 to 50 % between 10 km and 15 km), preventing their use as an independent measurement in the UTLS.
The comparison between all available stratospheric sounders allowed only the largest biases to be identified, in particular a 10 % dry bias of the Water Vapor Millimeter-wave Spectrometer compared to the Aura-Microwave Limb Sounder. No other large, or at least statistically significant, biases could be observed. Total Precipitable Water (TPW) measurements from six different co-located instruments were available. Several retrieval groups provided their own TPW retrievals, resulting in the comparison of 10 different datasets. Agreement within 7 % (0.7 mm) was found between all datasets. Such good agreement illustrates the maturity of these measurements and raises confidence levels for their use as an alternate or complementary source of calibration for the Raman lidars. Tropospheric and stratospheric ozone and temperature measurements were also available during the campaign. The water vapor and ozone lidar measurements, together with the advected potential vorticity results from the high-resolution transport model MIMOSA, allowed the identification and study of a deep stratospheric intrusion over TMF. These observations demonstrated the strong potential of lidar for future long-term monitoring of water vapor in the UTLS.
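The percentage biases quoted above (e.g. the 3 % lidar wet bias against the frost-point hygrometers) are mean relative differences between coincident profiles; a generic sketch of that computation, not the campaign's processing code, is:

```python
def mean_relative_bias(test_profile, ref_profile):
    """Mean relative difference (in %) of a test profile against a
    coincident reference profile, compared level by level.
    A positive result is a wet bias, a negative one a dry bias."""
    diffs = [(t - r) / r * 100.0
             for t, r in zip(test_profile, ref_profile) if r > 0]
    return sum(diffs) / len(diffs)
```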
Conceptual Model Interoperability: a Metamodel-driven Approach
Linking, integrating, or converting conceptual data models represented in different modelling languages is a common aspect of the design and maintenance of complex information systems. While such languages seem similar, they are known to be distinct, and no unifying framework exists that respects all of their language features in either model transformations or inter-model assertions to relate them. We aim to address this issue using an approach where the rules are enhanced with a logic-based metamodel. We present the main approach and some essential metamodel-driven rules for the static, structural components of ER, EER, UML v2.4.1, ORM, and ORM2. The transformations for model elements and patterns are used with the metamodel to verify correctness of inter-model assertions across models in different languages.
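The metamodel-driven idea can be illustrated with a toy check: each language-specific construct maps to a shared metamodel element, and an inter-model assertion is admissible only when both sides map to the same element. The mapping table and function below are hypothetical illustrations, not the paper's actual metamodel or rules:

```python
# Hypothetical fragment of a unifying metamodel: each
# (language, construct) pair maps to a shared metamodel element.
METAMODEL = {
    ("ER", "EntityType"): "ObjectType",
    ("UML", "Class"): "ObjectType",
    ("ORM", "EntityType"): "ObjectType",
    ("ER", "Attribute"): "ValueProperty",
    ("UML", "Attribute"): "ValueProperty",
    ("ORM", "ValueType"): "ValueProperty",
}

def assertion_well_typed(construct_a, construct_b):
    """An inter-model equivalence assertion between two constructs is
    only admissible when both map to the same metamodel element."""
    ma = METAMODEL.get(construct_a)
    mb = METAMODEL.get(construct_b)
    return ma is not None and ma == mb
```

Under this scheme, equating an ER entity type with a UML class passes, while equating an ER entity type with an ORM value type is rejected, mirroring the kind of cross-language verification the abstract describes.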
Behavior and Impact of Zirconium in the Soil–Plant System: Plant Uptake and Phytotoxicity
Because of the large number of sites they pollute, toxic metals that contaminate terrestrial ecosystems are increasingly of environmental and sanitary concern (Uzu et al. 2010, 2011; Shahid et al. 2011a, b, 2012a). Among such metals is zirconium (Zr), which has the atomic number 40 and is a transition metal that resembles titanium in physical and chemical properties (Zaccone et al. 2008). Zr is widely used in many chemical industry processes and in nuclear reactors (Sandoval et al. 2011; Kamal et al. 2011), owing to useful properties such as hardness, corrosion resistance, and permeability to neutrons (Mushtaq 2012). Hence, the recent increased use of Zr by industry, and the occurrence of the Chernobyl and Fukushima catastrophes, have enhanced environmental Zr levels in soils and waters (Yirchenko and Agapkina 1993; Mosulishvili et al. 1994; Kruglov et al. 1996).