
    Passive water control at the surface of a superhydrophobic lichen

    Some lichens have a superhydrophobic upper surface, which repels water drops, keeping the surface dry but probably preventing water uptake. Spore ejection requires water and is most efficient just after rainfall. This study investigated how superhydrophobic lichens manage water uptake and repellence at their fruiting bodies, or podetia. Drops of water were placed onto separate podetia of Cladonia chlorophaea and observed using optical microscopy and cryo-scanning electron microscopy (cryo-SEM) to determine the structure of the podetia and to visualise their interaction with water droplets. SEM and optical microscopy revealed that the surface of the podetia is constructed in a three-level structural hierarchy. Cryo-SEM of water-glycerol droplets placed on the upper part of the podetium showed pinning of the droplet to specific, hydrophilic spots (pycnidia/apothecia). The results suggest a highly sophisticated mechanism for water uptake that uses surface wettability to generate a passive response to different types of precipitation, in a manner similar to the Namib Desert beetle. This mechanism is likely to be found in other organisms, as it offers passive but selective water control.

    On the phase transitions of graph coloring and independent sets

    We study combinatorial indicators related to the characteristic phase transitions associated with coloring a graph optimally and finding a maximum independent set. In particular, we investigate the role of the acyclic orientations of the graph in the hardness of finding the graph's chromatic number and independence number. We provide empirical evidence that, along a sequence of increasingly denser random graphs, the fraction of acyclic orientations that are 'shortest' peaks when the chromatic number increases, and that such maxima tend to coincide with locally easiest instances of the problem. Similar evidence is provided concerning the 'widest' acyclic orientations and the independence number.
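    The link this abstract relies on is the Gallai-Roy theorem: the chromatic number equals one plus the minimum, over all acyclic orientations, of the length of a longest directed path, so the 'shortest' acyclic orientations are exactly those achieving that minimum. As a hedged sketch (not the authors' code; all names are illustrative), the following brute-force enumeration computes, for a small graph, the chromatic number and the fraction of acyclic orientations that are shortest:

    ```python
    from itertools import product

    def is_acyclic(n, arcs):
        # Kahn's algorithm: the orientation is acyclic iff a full
        # topological order of the n vertices exists
        indeg = [0] * n
        adj = [[] for _ in range(n)]
        for u, v in arcs:
            adj[u].append(v)
            indeg[v] += 1
        stack = [v for v in range(n) if indeg[v] == 0]
        seen = 0
        while stack:
            u = stack.pop()
            seen += 1
            for w in adj[u]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    stack.append(w)
        return seen == n

    def longest_path(n, arcs):
        # length (in edges) of a longest directed path in a DAG,
        # via memoized depth-first dynamic programming
        adj = [[] for _ in range(n)]
        for u, v in arcs:
            adj[u].append(v)
        memo = {}
        def dp(u):  # number of vertices on a longest path starting at u
            if u not in memo:
                memo[u] = 1 + max((dp(w) for w in adj[u]), default=0)
            return memo[u]
        return max(dp(v) for v in range(n)) - 1

    def shortest_fraction(n, edges):
        # enumerate all 2^m orientations; among the acyclic ones, report
        # the chromatic number (Gallai-Roy) and the fraction that are 'shortest'
        lengths = []
        for bits in product([0, 1], repeat=len(edges)):
            arcs = [(u, v) if b else (v, u) for (u, v), b in zip(edges, bits)]
            if is_acyclic(n, arcs):
                lengths.append(longest_path(n, arcs))
        m = min(lengths)
        return 1 + m, sum(1 for x in lengths if x == m) / len(lengths)
    ```

    On a triangle every acyclic orientation is a transitive tournament, so all 6 are shortest and the chromatic number is 3; on a 4-cycle only the 2 bipartite orientations (of 14 acyclic ones) are shortest, giving chromatic number 2.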

    The dynamic mass spectrometry probe (DMSP) - Advanced process analytics for therapeutic cell manufacturing, health monitoring and biomarker discovery

    Spatially and temporally resolved in situ monitoring of biochemical cell culture environments, e.g. in therapeutic cell bioreactors, is critically important for developing new and reliable quality control methodologies for cell therapies. Identifying and monitoring secreted biomolecular critical quality attributes (CQAs) to enable online feedback control will enable large-scale, cost-effective manufacturing of therapeutic cells. These CQA biomarkers have varying concentrations within a bioreactor, both in time and in space. Current methods for monitoring these diverse biomolecules are generally ex situ, time-consuming, destructive, provide only bulk measurements, or lack the ability to reveal the complete secretome/metabolome composition. The Dynamic Mass Spectrometry Probe (DMSP) synergistically incorporates a sampling interface for localized intake of a small fluid volume of the cellular content, a microfabricated mass exchanger for sample conditioning and inline separation, and an integrated electrospray ionization (ESI) emitter for soft ionization (i.e., with preserved biochemical structure) of extracted biomolecules for mass spectrometry (MS). ESI-MS via DMSP treatment enables both biomarker discovery and transient (~1 min) analysis of biochemical information indicative of cell health and potency. The DMSP is manufactured using advanced batch microfabrication techniques, which minimize dead volume (~20 nL) and ensure repeatable operation and precise geometry of each device. DMSP treatment removes 99% of compounds that interfere with mass spectrometry analysis, such as inorganic salts, while retaining biomolecules of interest within the sample for ESI-MS analysis. The DMSP has demonstrated the ability to substantially increase the signal-to-noise ratio in MS detection of biomolecules, and to further enhance sensitivity for probing lower biomarker concentrations via introduction of ESI-MS-enhancing molecules (i.e., proton-donating chemicals, protein-denaturing solvents, and supercharging agents) into the sample within the integrated mass exchanger. As an example of the DMSP's unique capabilities, Fig. 1 demonstrates detection of multiple low-concentration protein biomarkers sampled from a biochemically complex cell media solution serving as a proxy for samples taken directly from cell growth bioreactors [1].

    Network conduciveness with application to the graph-coloring and independent-set optimization transitions

    We introduce the notion of a network's conduciveness, a probabilistically interpretable measure of how the network's structure allows it to be conducive, under certain conditions, to roaming agents moving from one portion of the network to another. We exemplify its use through an application to the two problems in combinatorial optimization that, given an undirected graph, ask that its so-called chromatic and independence numbers be found. Though the problems are NP-hard, when they are solved on sequences of expanding random graphs, marked transitions appear at which optimal solutions can be obtained substantially more easily than just before them. We demonstrate that these phenomena can be understood by resorting to the network that represents the solution space of the problems for each graph and examining its conduciveness between the non-optimal solutions and the optimal ones. At these transitions, this network becomes strikingly more conducive in the direction of the optimal solutions than it was just before them, while at the same time becoming less conducive in the opposite direction. We believe that, besides being useful in other areas in which network theory has a role to play, network conduciveness may become instrumental in helping clarify further issues related to NP-hardness that remain poorly understood.

    Iterative focused screening with biological fingerprints identifies selective Asc-1 inhibitors distinct from traditional high throughput screening

    N-methyl-d-aspartate receptors (NMDARs) mediate glutamatergic signaling that is critical to cognitive processes in the central nervous system, and NMDAR hypofunction is thought to contribute to the cognitive impairment observed in both schizophrenia and Alzheimer's disease. One approach to enhancing NMDAR function is to increase the concentration of an NMDAR coagonist, such as glycine or d-serine, in the synaptic cleft. Inhibition of the alanine-serine-cysteine transporter-1 (Asc-1), the primary transporter of d-serine, is attractive because the transporter is localized to neurons in brain regions critical to cognitive function, including the hippocampus and cortical layers III and IV, and is colocalized with d-serine and NMDARs. To identify novel Asc-1 inhibitors, two different screening approaches were performed with whole-cell amino acid uptake in heterologous cells stably expressing human Asc-1: (1) a high-throughput screen (HTS) of 3 million compounds measuring ³⁵S-l-cysteine uptake into cells attached to scintillation proximity assay beads in a 1536-well format, and (2) an iterative focused screen (IFS) of a 45,000-compound diversity set using a ³H-d-serine uptake assay with a liquid scintillation plate reader in a 384-well format. Critically important for both screening approaches was the implementation of counter-screens to remove nonspecific inhibitors of radioactive amino acid uptake. Furthermore, a 15,000-compound expansion step incorporating both on- and off-target data into chemical and biological fingerprint-based models for the selection of additional hits enabled the identification of novel Asc-1-selective chemical matter from the IFS that was not identified in the full-collection HTS.

    Recognizing Treelike k-Dissimilarities

    A k-dissimilarity D on a finite set X, |X| >= k, is a map from the set of size-k subsets of X to the real numbers. Such maps naturally arise from edge-weighted trees T with leaf-set X: given a subset Y of X of size k, D(Y) is defined to be the total length of the smallest subtree of T with leaf-set Y. In the case k = 2, it is well known that 2-dissimilarities arising in this way can be characterized by the so-called "4-point condition". However, for k > 2, Pachter and Speyer recently posed the following question: given an arbitrary k-dissimilarity, how do we test whether this map comes from a tree? In this paper, we provide an answer to this question, showing that for k >= 3 a k-dissimilarity on a set X arises from a tree if and only if its restriction to every 2k-element subset of X arises from some tree, and that 2k is the least possible subset size to ensure that this is the case. As a corollary, we show that there exists a polynomial-time algorithm to determine when a k-dissimilarity arises from a tree. We also give a 6-point condition for determining when a 3-dissimilarity arises from a tree, similar to the aforementioned 4-point condition. Comment: 18 pages, 4 figures.
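    The k = 2 case the abstract refers to can be made concrete: the 4-point condition requires that, for every four leaves x, y, z, w, the two largest of the three sums d(x,y)+d(z,w), d(x,z)+d(y,w), d(x,w)+d(y,z) be equal. The sketch below (illustrative code, not from the paper; the example tree and all names are assumptions) computes leaf-to-leaf distances on a small edge-weighted tree and checks that condition:

    ```python
    from itertools import combinations

    def tree_distances(adj, leaves):
        # all leaf-to-leaf path lengths; in a tree, a plain DFS from each
        # leaf finds the unique (hence shortest) path to every other node
        dist = {}
        for s in leaves:
            d = {s: 0.0}
            stack = [s]
            while stack:
                u = stack.pop()
                for v, w in adj[u]:
                    if v not in d:
                        d[v] = d[u] + w
                        stack.append(v)
            for t in leaves:
                dist[(s, t)] = d[t]
        return dist

    def four_point(dist, leaves):
        # a 2-dissimilarity is treelike iff, for every 4 leaves,
        # the two largest of the three pairwise sums coincide
        for x, y, z, w in combinations(leaves, 4):
            sums = sorted([dist[(x, y)] + dist[(z, w)],
                           dist[(x, z)] + dist[(y, w)],
                           dist[(x, w)] + dist[(y, z)]])
            if abs(sums[1] - sums[2]) > 1e-9:
                return False
        return True

    # quartet tree: leaves a,b,c,d hang off internal vertices u,v
    adj = {
        'a': [('u', 1.0)], 'b': [('u', 2.0)],
        'c': [('v', 1.0)], 'd': [('v', 4.0)],
        'u': [('a', 1.0), ('b', 2.0), ('v', 3.0)],
        'v': [('c', 1.0), ('d', 4.0), ('u', 3.0)],
    }
    leaves = ['a', 'b', 'c', 'd']
    ```

    Here d(a,b)+d(c,d) = 8 while the other two sums both equal 14, so the condition holds; perturbing any one distance generally breaks it.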

    A comparison of methods for the determination of dissolved oxygen in seawater

    An intercalibration of dissolved oxygen methods was conducted at 2 stations in the Sargasso Sea between April 28 and May 3, 1990. The experiment compared three techniques using automated endpoint detection with the manual Winkler method using a starch endpoint. The institutions participating in the intercomparison were the Bedford Institute of Oceanography (automated photometric titration), the University of Delaware (automated amperometric titration), the Scripps Institution of Oceanography (manual titration), and the Woods Hole Oceanographic Institution (automated amperometric titration). Differences in measured oxygen concentrations between institutions were encouragingly small. However, small, systematic differences in dissolved oxygen between institutions did exist. The range between the highest and lowest oxygen values reported by the 4 institutions never exceeded 0.6% over the entire concentration range studied (3.4 to 6.2 ml l⁻¹). The good agreement is probably due to the use of the essentials of Carpenter's (1965) modification of the Winkler method by all institutions. The intercalibration revealed several aspects of dissolved oxygen measurements that require further research: (1) the intercalibration should be extended to very low oxygen concentrations; (2) procedures for measuring and applying corrections for the seawater blank need to be formalized; (3) a simple procedure to measure the temperature of seawater at the time of sampling needs to be developed; and (4) the solubility of atmospheric oxygen in the Winkler reagents must be measured as a function of temperature. The intercalibration also revealed that analytical techniques required for precise and accurate volumetric measurements were often not applied, even by experienced analysts. It was found that uncalibrated pipets were used to dispense standards, that the volumes of oxygen flasks were not corrected for buoyancy, and that corrections for the thermal expansion of aqueous solutions were often not applied. This research was supported by National Science Foundation Grants OCE 88-22542, OCE 88-21977, and OCE 89-07815. Preparation and distribution of this report by the WHP Office, Woods Hole Oceanographic Institution, Woods Hole, MA 02543, USA, was supported by NSF Grant OCE 89-07815.

    Causal Set Dynamics: A Toy Model

    We construct a quantum measure on the power set of non-cyclic oriented graphs of N points, drawing inspiration from 1-dimensional directed percolation. Quantum interference patterns lead to properties which do not appear to have any analogue in classical percolation. Most notably, instead of the single phase transition of classical percolation, the quantum model displays two distinct crossover points. Between these two points, spacetime questions such as "does the network percolate?" have no definite or probabilistic answer. Comment: 28 pages, incl. 5 figures.

    GraphCombEx: A Software Tool for Exploration of Combinatorial Optimisation Properties of Large Graphs

    We present a prototype of a software tool for exploration of multiple combinatorial optimisation problems in large real-world and synthetic complex networks. Our tool, called GraphCombEx (an acronym of Graph Combinatorial Explorer), provides a unified framework for scalable computation and presentation of high-quality suboptimal solutions and bounds for a number of widely studied combinatorial optimisation problems. Efficient representation and applicability to large-scale graphs and complex networks are particularly considered in its design. The problems currently supported include maximum clique, graph colouring, maximum independent set, minimum vertex clique covering, minimum dominating set, as well as the longest simple cycle problem. Suboptimal solutions and intervals for optimal objective values are estimated using scalable heuristics. The tool is designed with extensibility in mind, with a view to adding further problems and both fast and high-performance heuristics in the future. GraphCombEx has already been successfully used as a support tool in a number of recent research studies using combinatorial optimisation to analyse complex networks, indicating its promise as a research software tool.
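    As a hedged illustration of how such a tool can bracket an optimal objective value with scalable heuristics (this is not GraphCombEx's actual code; the function names are illustrative), a greedy colouring yields an upper bound on the chromatic number while a greedily grown clique yields a lower bound, and together they give an interval:

    ```python
    def greedy_colouring(adj):
        # largest-degree-first greedy colouring; the number of colours
        # used is an upper bound on the chromatic number
        order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
        colour = {}
        for v in order:
            used = {colour[u] for u in adj[v] if u in colour}
            c = 0
            while c in used:
                c += 1
            colour[v] = c
        return colour

    def greedy_clique(adj):
        # grow a clique greedily by degree; any clique's size is a
        # lower bound on the chromatic number
        order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
        clique = []
        for v in order:
            if all(v in adj[u] for u in clique):
                clique.append(v)
        return clique
    ```

    On a 5-cycle, for example, the greedy clique finds an edge (lower bound 2) and the greedy colouring uses 3 colours (upper bound), bracketing the true chromatic number 3; on large sparse networks both heuristics run in near-linear time, which is the kind of scalability the abstract emphasises.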