93 research outputs found

    Quantitative CT analysis in patients with pulmonary emphysema: is lung function influenced by concomitant unspecific pulmonary fibrosis?

    Purpose: Quantitative analysis of CT scans has proven to be a reproducible technique, which might help to understand the pathophysiology of chronic obstructive pulmonary disease (COPD) and combined pulmonary fibrosis and emphysema. The aim of this retrospective study was to find out if the lung function of patients with COPD with Global Initiative for Chronic Obstructive Lung Disease (GOLD) stages III or IV and pulmonary emphysema is measurably influenced by high attenuation areas as a correlate of concomitant unspecific fibrotic changes of lung parenchyma. Patients and methods: Eighty-eight patients with COPD GOLD stage III or IV underwent CT and pulmonary function tests. Quantitative CT analysis was performed to determine low attenuation volume (LAV) and high attenuation volume (HAV), which are considered to be equivalents of fibrotic (HAV) and emphysematous (LAV) changes of lung parenchyma. Both parameters were determined for the whole lung, as well as peripheral and central lung areas only. Multivariate regression analysis was used to correlate HAV with different parameters of lung function. Results: Unlike LAV, HAV did not show significant correlation with parameters of lung function. Even in patients with a relatively high HAVof more than 10%, in contrast to HAV (p=0.786) only LAV showed a significantly negative correlation with forced expiratory volume in 1 second (r=−0.309, R2=0.096, p=0.003). A severe decrease of DLCO% was associated with both larger HAV (p=0.045) and larger LAV (p=0.001). Residual volume and FVC were not influenced by LAV or HAV. Conclusion: In patients with COPD GOLD stage III-IV, emphysematous changes of lung parenchyma seem to have such a strong influence on lung function, which is a possible effect of concomitant unspecific fibrosis is overwhelmed

    Comparison of distinctive models for calculating an interlobar emphysema heterogeneity index in patients prior to endoscopic lung volume reduction

    Background: The degree of interlobar emphysema heterogeneity is thought to play an important role in the outcome of endoscopic lung volume reduction (ELVR) therapy in patients with advanced COPD. Interlobar emphysema heterogeneity can be defined in multiple ways, and there is no standardized definition. Purpose: The aim of this study was to derive a formula for calculating an interlobar emphysema heterogeneity index (HI) when evaluating a patient for ELVR. Furthermore, an attempt was made to identify a threshold for relevant interlobar emphysema heterogeneity with regard to ELVR. Patients and methods: We retrospectively analyzed 50 patients who had undergone technically successful ELVR with placement of one-way valves at our institution and had received lung function tests and computed tomography scans before and after treatment. The predictive accuracy of the different methods for HI calculation was assessed with receiver-operating characteristic curve analysis, assuming a minimum difference in forced expiratory volume in 1 second of 100 mL to indicate a clinically important change. Results: The HI defined as the emphysema score of the targeted lobe (TL) minus the emphysema score of the ipsilateral nontargeted lobe, disregarding the middle lobe, yielded the best predictive accuracy (AUC = 0.73, P = 0.008). The HI defined as the emphysema score of the TL minus the emphysema score of the lung without the TL showed a similarly good predictive accuracy (AUC = 0.72, P = 0.009). Subgroup analysis suggests that the impact of interlobar emphysema heterogeneity is of greater importance in patients with upper lobe predominant emphysema than in patients with lower lobe predominant emphysema. Conclusion: This study identifies the most appropriate ways of calculating an interlobar emphysema heterogeneity index with regard to ELVR.
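    As a concrete illustration of the two best-performing HI definitions described above, the following sketch computes both from hypothetical per-lobe emphysema scores (e.g., LAV% per lobe). The lobe labels, the example scores, and the use of a mean over the non-targeted lobes are assumptions for illustration, not the study's exact implementation.

```python
# Hypothetical per-lobe emphysema scores; lobe labels follow common radiology shorthand.
def hi_ipsilateral(scores: dict, target: str) -> float:
    """Targeted lobe minus ipsilateral non-targeted lobe(s), middle lobe disregarded."""
    side = target[0]  # 'R' or 'L'
    others = [v for k, v in scores.items()
              if k.startswith(side) and k not in (target, "RML")]
    return scores[target] - sum(others) / len(others)

def hi_whole_lung(scores: dict, target: str) -> float:
    """Targeted lobe minus the mean score of the rest of the lung."""
    others = [v for k, v in scores.items() if k != target]
    return scores[target] - sum(others) / len(others)

scores = {"RUL": 45.0, "RML": 20.0, "RLL": 18.0, "LUL": 30.0, "LLL": 15.0}
print(hi_ipsilateral(scores, "RUL"))  # heterogeneity vs. the ipsilateral lower lobe
print(hi_whole_lung(scores, "RUL"))   # heterogeneity vs. the remaining lung
```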

    Superhydrophobic Terrestrial Cyanobacteria and Land Plant Transition

    Plants and other organisms have evolved structures and mechanisms for colonizing land since the Early Ordovician. In this context, their surfaces, the crucial physical interface with the environment, are mainly considered barriers against water loss. It is suggested that extreme water repellency (superhydrophobicity) was an additional key innovation for the transition of algae from water to land some 400 mya. Superhydrophobicity enhances gas exchange on land and excludes aquatic competitors in water films. In a different context, in materials science and surface technology, superhydrophobicity has also become one of the most important bioinspired innovations, enabling the avoidance of water films and contamination. Here, we present data for an extremely water-repellent cyanobacterial biofilm of the desiccation-tolerant Hassallia byssoidea, providing evidence for a much earlier prokaryotic Precambrian (ca. 1–2 bya) origin of superhydrophobicity and of the chemical heterogeneities associated with the land transition. The multicellular cyanobacterium is functionally differentiated into a submerged basal hydrophilic absorbing portion resembling a "rhizoid" and an upright emersed superhydrophobic "phyllocauloid" filament for assimilation, nitrogen fixation, and splash-dispersed diaspores. Additional data are provided for superhydrophobic surfaces in terrestrial green algae and in virtually all ancestral land plants (bryophytes, ferns and allies, Amborella, Nelumbo), slime molds, and fungi. A rethinking of superhydrophobicity as an essential first step for life in terrestrial environments is suggested.

    Learning Coupled Forward-Inverse Models with Combined Prediction Errors

    Challenging tasks in unstructured environments require robots to learn complex models. Given a large amount of information, learning multiple simple models can offer an efficient alternative to a monolithic complex network. Training multiple models, that is, learning their parameters and their responsibilities, has been shown to be prohibitively hard, as the optimization is prone to local minima. To efficiently learn multiple models for different contexts, we thus develop a new algorithm based on expectation maximization (EM). In contrast to comparable approaches, this algorithm trains multiple modules of paired forward-inverse models by using the prediction errors of both the forward and the inverse models simultaneously. In particular, we show that our method yields a substantial improvement over considering only the errors of the forward models on tasks where the inverse space contains multiple solutions.
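    The sketch below illustrates the general idea of EM-style training with combined prediction errors: module responsibilities are computed from the sum of the forward and inverse model errors, and each module's paired models are then refit with those responsibilities as weights. Linear models and the Gaussian error assumption are simplifications chosen for illustration; this is not the authors' implementation.

```python
# Toy EM over paired forward-inverse modules with combined prediction errors.
import numpy as np

def fit_weighted_linear(X, Y, w):
    """Weighted least squares returning a coefficient matrix mapping X -> Y."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X + 1e-6 * np.eye(X.shape[1]), X.T @ W @ Y)

def em_modules(actions, outcomes, K=2, iters=20, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    N = actions.shape[0]
    resp = rng.dirichlet(np.ones(K), size=N)  # soft module responsibilities
    for _ in range(iters):
        # M-step: refit each module's forward (action -> outcome) and
        # inverse (outcome -> action) model using the responsibilities as weights.
        fwd = [fit_weighted_linear(actions, outcomes, resp[:, k]) for k in range(K)]
        inv = [fit_weighted_linear(outcomes, actions, resp[:, k]) for k in range(K)]
        # E-step: responsibilities from the *combined* forward and inverse errors.
        log_lik = np.zeros((N, K))
        for k in range(K):
            e_fwd = np.sum((outcomes - actions @ fwd[k]) ** 2, axis=1)
            e_inv = np.sum((actions - outcomes @ inv[k]) ** 2, axis=1)
            log_lik[:, k] = -(e_fwd + e_inv) / (2.0 * sigma ** 2)
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)
    return fwd, inv, resp

# Synthetic 1-D task with two contexts (two different linear mappings)
rng = np.random.default_rng(1)
actions = rng.uniform(-1.0, 1.0, size=(400, 1))
context = rng.random((400, 1)) < 0.5
outcomes = np.where(context, 2.0 * actions, -1.5 * actions) + 0.01 * rng.normal(size=(400, 1))
fwd, inv, resp = em_modules(actions, outcomes)
```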

    Demonstration based trajectory optimization for generalizable robot motions

    Learning motions from human demonstrations provides an intuitive way for non-expert users to teach tasks to robots. In particular, intelligent robotic co-workers should not only mimic human demonstrations but should also be able to adapt them to varying application scenarios. As such, robots must have the ability to generalize motions to different workspaces, e.g., to avoid obstacles not present during the original demonstrations. Towards this goal, our work proposes a unified method to (1) generalize robot motions to different workspaces, using a novel formulation of trajectory optimization that explicitly incorporates human demonstrations, and (2) locally adapt and reuse the optimized solution in the form of a distribution of trajectories. This optimized distribution can be used, online, to quickly satisfy via-points and goals of a specific task. We validate the method using a 7-degrees-of-freedom (DoF) lightweight arm that grasps and places a ball into different boxes while avoiding obstacles that were not present during the original human demonstrations.
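    To make the first component of the method more concrete, here is a hedged sketch of a trajectory optimization that keeps the solution close to a demonstrated path while penalizing proximity to an obstacle that was not present in the demonstration. The quadratic demonstration-tracking term, the simple repulsive obstacle penalty, and the weights are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative trajectory optimization: stay near a demonstration, avoid a new obstacle.
import numpy as np
from scipy.optimize import minimize

def optimize_trajectory(demo, obstacle, radius=0.3, w_demo=1.0, w_obs=50.0, w_smooth=10.0):
    T, d = demo.shape
    def cost(flat):
        traj = flat.reshape(T, d)
        c_demo = np.sum((traj - demo) ** 2)                  # stay close to the demonstration
        dist = np.linalg.norm(traj - obstacle, axis=1)
        c_obs = np.sum(np.maximum(0.0, radius - dist) ** 2)  # penalize entering the obstacle radius
        c_smooth = np.sum(np.diff(traj, axis=0) ** 2)        # keep consecutive waypoints close
        return w_demo * c_demo + w_obs * c_obs + w_smooth * c_smooth
    result = minimize(cost, demo.ravel(), method="L-BFGS-B")
    return result.x.reshape(T, d)

# A straight-line 2-D demonstration and an obstacle placed on its path
demo = np.stack([np.linspace(0.0, 1.0, 30), np.zeros(30)], axis=1)
adapted = optimize_trajectory(demo, obstacle=np.array([0.5, 0.0]))
```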

    Liquid polystyrene: a room-temperature photocurable soft lithography compatible pour-and-cure-type polystyrene

    Materials matter in microfluidics. Since the introduction of soft lithography as a prototyping technique and polydimethylsiloxane (PDMS) as the material of choice, the microfluidics community has settled on using this material almost exclusively. However, for many applications PDMS is not an ideal material, given its limited solvent resistance and its hydrophobicity, which make it especially disadvantageous for certain cell-based assays. For these applications, polystyrene (PS) would be a better choice. PS has been used in biology research and analytics for decades, and numerous protocols have been developed and optimized for it. However, PS has not found widespread use in microfluidics, mainly because, being a thermoplastic material, it is typically structured using industrial polymer replication techniques. This makes PS unsuitable for prototyping. In this paper, we introduce a new structuring method for PS which is compatible with soft lithography prototyping. We develop a liquid PS prepolymer, which we term "Liquid Polystyrene" (liqPS). liqPS is a viscous, free-flowing liquid which can be cured by visible light exposure using soft replication templates, e.g., made from PDMS. Using liqPS, prototyping microfluidic systems in PS is as easy as prototyping them in PDMS. We demonstrate that cured liqPS is (chemically and physically) identical to commercial PS. Comparative studies on mouse fibroblasts L929 showed that liqPS cannot be distinguished from commercial PS in such experiments. Researchers can develop and optimize microfluidic structures using liqPS and soft lithography. Once a device is to be commercialized, it can be manufactured in PS using scalable industrial polymer replication techniques; the material is the same in both cases. Therefore, liqPS effectively closes the gap between "microfluidic prototyping" and "industrial microfluidics" by providing a common material.

    The Complexity of Computing Minimal Unidirectional Covering Sets

    Given a binary dominance relation on a set of alternatives, a common thread in the social sciences is to identify subsets of alternatives that satisfy certain notions of stability. Examples can be found in areas as diverse as voting theory, game theory, and argumentation theory. Brandt and Fischer [BF08] proved that it is NP-hard to decide whether an alternative is contained in some inclusion-minimal upward or downward covering set. For both problems, we raise this lower bound to the Theta_{2}^{p} level of the polynomial hierarchy and provide a Sigma_{2}^{p} upper bound. Relatedly, we show that a variety of other natural problems regarding minimal or minimum-size covering sets are hard or complete for NP, coNP, or Theta_{2}^{p}. An important consequence of our results is that neither minimal upward nor minimal downward covering sets (even when guaranteed to exist) can be computed in polynomial time unless P=NP. This sharply contrasts with Brandt and Fischer's result that minimal bidirectional covering sets (i.e., sets that are both minimal upward and minimal downward covering sets) are polynomial-time computable. Comment: 27 pages, 7 figures.

    Proteomic-based stratification of intermediate-risk prostate cancer patients

    Gleason grading is an important prognostic indicator for prostate adenocarcinoma and is crucial for patient treatment decisions. However, intermediate-risk patients diagnosed as Gleason grade group (GG) 2 or GG3 can harbour either aggressive or non-aggressive disease, resulting in under- or overtreatment of a significant number of patients. Here, we performed proteomic, differential expression, machine learning, and survival analyses of 1,348 matched tumour and benign sample runs from 278 patients. Three proteins (F5, TMEM126B, and EARS2) were identified as candidate biomarkers in patients with biochemical recurrence. Multivariate Cox regression yielded 18 proteins, from which a risk score was constructed to dichotomize prostate cancer patients into low- and high-risk groups. This 18-protein signature is prognostic for the risk of biochemical recurrence and completely independent of the intermediate GG. Our results suggest that markers generated by computational proteomic profiling have the potential for clinical applications, including integration into prostate cancer management.
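    As an illustration of how such a protein-based risk score can dichotomize patients, the sketch below computes a linear Cox-type risk score and splits a cohort at its median. The protein names reuse the three candidate biomarkers mentioned above purely as placeholders; the coefficients, the median cut-off, and the three-protein panel are hypothetical, not the study's 18-protein signature.

```python
# Hypothetical coefficients and cut-off; the real signature comes from the study's Cox model.
import numpy as np
import pandas as pd

coefficients = pd.Series({"F5": 0.42, "TMEM126B": -0.31, "EARS2": 0.27})  # toy weights

def risk_groups(expression: pd.DataFrame, coefs: pd.Series) -> pd.Series:
    """Linear risk score per patient, dichotomized at the cohort median."""
    score = expression[coefs.index].mul(coefs, axis=1).sum(axis=1)
    return pd.Series(np.where(score > score.median(), "high", "low"), index=score.index)

expression = pd.DataFrame(np.random.default_rng(2).normal(size=(10, 3)),
                          columns=["F5", "TMEM126B", "EARS2"])
print(risk_groups(expression, coefficients))
```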

    A metapopulation model to assess the capacity of spread of meticillin-resistant Staphylococcus aureus ST398 in humans.

    The emergence of the livestock-associated clone of meticillin-resistant Staphylococcus aureus (MRSA) ST398 is a serious public health issue throughout Europe. In the Netherlands, a stringent 'search-and-destroy' policy has been adopted, keeping MRSA prevalence low. However, reports have recently emerged of transmission events between humans with no links to livestock, contradicting the belief that MRSA ST398 is poorly transmissible in humans. The question of the transmissibility of MRSA ST398 in humans therefore remains of great interest. Here, we investigated the capacity of MRSA ST398 to spread in an entirely susceptible human population subject to the effect of a single MRSA-positive commercial pig farm. Using a stochastic, discrete-time metapopulation model, we explored the effect of varying both the probability of persistent carriage and that of acquiring MRSA through contact with pigs on the transmission dynamics of MRSA ST398 in humans. In particular, we assessed the value and key determinants of the basic reproduction ratio (R0) for MRSA ST398. Simulations showed that recurrent exposure to pigs in at-risk populations allows MRSA ST398 to persist in the metapopulation and transmission events to occur beyond the farming community, even when the probability of persistent carriage is low. We further showed that persistent carriage should occur less than 10% of the time for MRSA ST398 to retain epidemiological characteristics similar to those previously reported. These results indicate that a control policy targeting only human carriers may not be sufficient to control MRSA ST398 in the community if the bacterium remains present in pigs. We argue that farm-level control measures should be implemented if an eradication programme is to be considered.
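    To give a flavour of the kind of stochastic, discrete-time dynamics involved, the sketch below simulates a highly simplified two-group version: a farm-exposed group that can acquire carriage from pigs and a community group reached only through human-to-human transmission. Group sizes, probabilities, and the homogeneous mixing are illustrative assumptions and not the study's parameterization or full metapopulation structure.

```python
# Toy two-group, discrete-time stochastic sketch; parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
N = np.array([50, 950])        # [farm-exposed, community] population sizes
carriers = np.array([0, 0])    # current carriers in each group
p_pig = 0.05                   # per-step acquisition from pigs (farm-exposed group only)
p_contact = 0.002              # per-carrier, per-step human-to-human acquisition probability
p_clear = 0.3                  # per-step probability that a carrier clears carriage

prevalence = []
for step in range(365):
    force = 1.0 - (1.0 - p_contact) ** carriers.sum()           # human-to-human force of infection
    p_acquire = np.array([1.0 - (1.0 - p_pig) * (1.0 - force),  # pigs or humans (farm group)
                          force])                               # humans only (community)
    new_carriers = rng.binomial(N - carriers, p_acquire)
    cleared = rng.binomial(carriers, p_clear)
    carriers = carriers + new_carriers - cleared
    prevalence.append(carriers.sum() / N.sum())

print(f"prevalence after one year: {prevalence[-1]:.3f}")
```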