Evaluation of hydrodynamic scaling in porous media using finger dimensions
The use of dimensionless scaling is ubiquitous in hydrodynamic analysis, providing a powerful method of extending limited experimental results and generalizing theories. Miller and Miller [1956] contributed a scaling framework for immiscible fluid flow through porous media that relies on consistency of the contact angle between the systems being compared. It is common to assume that the effective contact angle will be zero in clean sand material where water is the wetting liquid. The well-documented unstable wetting process of fingered flow is used here as a diagnostic tool for testing scaling relationships for infiltration into sandy media. Comparison of finger cross sections produced using three liquids, as well as various concentrations of anionic surfactant, shows that the zero-contact-angle assumption is very poor even for laboratory-cleaned silica sand: experimental results demonstrate effective contact angles approaching 60°. Scaling was effective for a given liquid between sands of differing particle size. These results suggest that caution should be exercised when applying scaling theory to the initial wetting of porous media by liquids of differing gas-liquid interfacial tensions.
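A minimal numerical sketch of why this matters, assuming a cylindrical-pore (Young-Laplace) approximation for capillary entry pressure; the tension and pore-radius values below are illustrative, not taken from the study:

    import math

    def capillary_pressure(sigma, theta_deg, r_pore):
        """Young-Laplace entry pressure Pc = 2*sigma*cos(theta)/r, in Pa."""
        return 2.0 * sigma * math.cos(math.radians(theta_deg)) / r_pore

    sigma_water = 0.072   # N/m, gas-liquid interfacial tension of water
    r_pore = 1e-4         # m, hypothetical effective pore radius for a sand

    pc_assumed = capillary_pressure(sigma_water, 0.0, r_pore)    # zero-angle assumption
    pc_observed = capillary_pressure(sigma_water, 60.0, r_pore)  # ~60 deg, as reported above
    print(f"Pc at theta = 0:      {pc_assumed:.0f} Pa")   # 1440 Pa
    print(f"Pc at theta = 60 deg: {pc_observed:.0f} Pa")  # 720 Pa

Because sigma*cos(theta) enters the similitude directly, an effective angle of 60° halves the capillary driving pressure that a zero-angle assumption predicts.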
A phase II study of temozolomide vs. procarbazine in patients with glioblastoma multiforme at first relapse.
A randomized, multicentre, open-label, phase II study compared temozolomide (TMZ), an oral second-generation alkylating agent, with procarbazine (PCB) in 225 patients with glioblastoma multiforme at first relapse. Primary objectives were to determine progression-free survival (PFS) at 6 months and safety for TMZ and PCB in adult patients who had failed conventional treatment. Secondary objectives were to assess overall survival and health-related quality of life (HRQL). TMZ was given orally at 200 mg/m²/day, or 150 mg/m²/day for patients with prior chemotherapy, for 5 days, repeated every 28 days. PCB was given orally at 150 mg/m²/day, or 125 mg/m²/day for patients with prior chemotherapy, for 28 days, repeated every 56 days. HRQL was assessed using the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire (EORTC QLQ-C30 [+3]) and the Brain Cancer Module 20 (BCM20). The 6-month PFS rate for patients who received TMZ was 21%, which met the protocol objective, vs. 8% for those who received PCB (P = 0.008). Overall PFS was significantly improved with TMZ, with a median PFS of 12.4 weeks in the TMZ group vs. 8.32 weeks in the PCB group (P = 0.0063). The 6-month overall survival rate was 60% for TMZ patients vs. 44% for PCB patients (P = 0.019). Freedom from disease progression was associated with maintenance of HRQL, regardless of treatment received. TMZ had an acceptable safety profile; most adverse events were mild or moderate in severity.
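For concreteness, the per-cycle doses these schedules imply for a hypothetical patient; the body-surface area below is an assumed example value, not from the study:

    bsa = 1.8  # m^2, hypothetical body-surface area

    # TMZ: 200 mg/m^2/day for 5 days of each 28-day cycle
    tmz_cycle_mg = 200 * bsa * 5    # 1800 mg per cycle
    # PCB: 150 mg/m^2/day for 28 days of each 56-day cycle
    pcb_cycle_mg = 150 * bsa * 28   # 7560 mg per cycle

    print(f"TMZ per 28-day cycle: {tmz_cycle_mg:.0f} mg")
    print(f"PCB per 56-day cycle: {pcb_cycle_mg:.0f} mg")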
Probabilistic classification of acute myocardial infarction from multiple cardiac markers
Logistic regression and Gaussian mixture model (GMM) classifiers have been trained to estimate the probability of acute myocardial infarction (AMI) in patients from the concentrations of a panel of cardiac markers. The panel consists of two new markers, fatty acid binding protein (FABP) and glycogen phosphorylase BB (GPBB), in addition to the traditional markers cardiac troponin I (cTnI), creatine kinase MB (CKMB) and myoglobin. The effect of preprocessing the marker concentrations with principal component analysis (PCA) and Fisher discriminant analysis (FDA) was also investigated. The need for classifiers to give an accurate estimate of the probability of AMI is argued, and three categories of performance measure are described: discriminatory ability, sharpness, and reliability. Numerical performance measures for each category are given and applied. The optimum classifier, based solely upon the samples taken on admission, was the logistic regression classifier with FDA preprocessing. It gave an accuracy of 0.85 (95% confidence interval: 0.78–0.91) and a normalised Brier score of 0.89. When samples taken both at admission and at a further time point, 1–6 h later, were included, performance increased significantly, showing that logistic regression classifiers can indeed use the information from the five cardiac markers to estimate the probability of AMI accurately and reliably.
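A minimal sketch of this kind of pipeline, assuming scikit-learn (with LinearDiscriminantAnalysis standing in for the FDA preprocessing step); the marker data are synthetic stand-ins, not the study's admission samples:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import brier_score_loss

    rng = np.random.default_rng(0)
    # Five markers per sample: cTnI, CKMB, myoglobin, FABP, GPBB (synthetic)
    X = rng.lognormal(mean=0.0, sigma=1.0, size=(200, 5))
    y = rng.integers(0, 2, size=200)   # 1 = AMI, 0 = no AMI

    # Fisher discriminant preprocessing, then logistic regression to output
    # a probability of AMI rather than a hard class label.
    clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                        LogisticRegression())
    clf.fit(X, y)

    p_ami = clf.predict_proba(X)[:, 1]                  # estimated P(AMI)
    print("Brier score:", brier_score_loss(y, p_ami))   # lower = better calibrated

(The normalised Brier score reported above is a rescaled variant; the raw score computed here is a loss, so lower is better.)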
A distributed stream temperature model using high resolution temperature observations
Distributed temperature data are used both as input and as calibration data for an energy-based temperature model of a first-order stream in Luxembourg. A DTS (Distributed Temperature Sensing) system with a 1500 m fiber-optic cable was used to measure stream water temperature at 1 m spatial resolution every 2 min. Four groundwater inflows were identified and quantified (both temperature and relative discharge). The temperature model calculates the total energy balance, including solar radiation (with shading effects), longwave radiation, latent heat, sensible heat, and river-bed conduction. The simulated temperature is compared with the observed temperature at all points along the stream. Knowledge of the lateral inflows proves crucial for simulating the temperature distribution; conversely, stream temperature can be used successfully to identify sources of lateral inflow. The DTS fiber-optic system is an excellent tool for providing this knowledge.
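A schematic of the reach-by-reach bookkeeping such a model performs; all flux terms, depth, and discharge values below are placeholders, not the Luxembourg data:

    RHO_W, CP_W = 1000.0, 4181.0   # water density (kg/m^3) and heat capacity (J/kg/K)

    def reach_temperature_step(T, fluxes_wm2, depth_m, dt_s):
        """Advance stream temperature T (deg C) over one time step.

        fluxes_wm2: solar (with shading), longwave, latent, sensible and
        river-bed conduction terms in W/m^2 (positive = warming).
        """
        return T + sum(fluxes_wm2) * dt_s / (RHO_W * CP_W * depth_m)

    def mix_lateral_inflow(T_stream, Q_stream, T_gw, Q_gw):
        """Fully mixed temperature after a groundwater inflow joins the stream."""
        return (Q_stream * T_stream + Q_gw * T_gw) / (Q_stream + Q_gw)

    # One 2-min step for a partly shaded reach, then one groundwater inflow
    T = reach_temperature_step(15.0, [250.0, -80.0, -60.0, 20.0, -10.0],
                               depth_m=0.3, dt_s=120.0)
    T = mix_lateral_inflow(T, Q_stream=0.05, T_gw=10.0, Q_gw=0.005)  # m^3/s
    print(f"Reach temperature: {T:.2f} deg C")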
Residual Votes Attributable to Technology: An Assessment of the Reliability of Existing Voting Equipment
American elections are conducted using a hodge-podge of different voting technologies: paper ballots, lever machines, punch cards, optically scanned ballots, and electronic machines. And the technologies we use change frequently. Over the last two decades, counties have moved away from paper ballots and lever machines and toward optically scanned ballots and electronic machines. The changes have resulted not from a concerted initiative but from local experimentation. Some local governments have even opted to go back to the older methods of paper and levers.
RIPCAL: a tool for alignment-based analysis of repeat-induced point mutations in fungal genomic sequences
Background
Repeat-induced point mutation (RIP) is a fungal-specific genome defence mechanism that alters the sequences of repetitive DNA, thereby inactivating coding genes. Repeated DNA sequences align between mating and meiosis and both sequences undergo C:G to T:A transitions. In most fungi these transitions preferentially affect CpA di-nucleotides thus altering the frequency of certain di-nucleotides in the affected sequences. The majority of previously published in silico analyses were limited to the comparison of ratios of pre- and post-RIP di-nucleotides in putatively RIP-affected sequences – so-called RIP indices. The analysis of RIP is significantly more informative when comparing sequence alignments of repeated sequences. There is, however, a dearth of bioinformatics tools available to the fungal research community for alignment-based RIP analysis of repeat families.
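As an illustration of the index approach, a minimal sketch of two commonly used RIP indices, TpA/ApT and (CpA+TpG)/(ApC+GpT); the sequence is a toy example, and real analyses apply thresholds calibrated per genome:

    def dinuc(seq, dn):
        """Count occurrences of di-nucleotide dn in seq."""
        seq = seq.upper()
        return sum(1 for i in range(len(seq) - 1) if seq[i:i + 2] == dn)

    def rip_indices(seq):
        """TpA/ApT rises and (CpA+TpG)/(ApC+GpT) falls in RIP-affected DNA."""
        product = dinuc(seq, "TA") / max(1, dinuc(seq, "AT"))
        substrate = ((dinuc(seq, "CA") + dinuc(seq, "TG"))
                     / max(1, dinuc(seq, "AC") + dinuc(seq, "GT")))
        return product, substrate

    print(rip_indices("CCGACATGCAATGCATTA"))  # toy sequence, not a real repeat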
Results
We present RIPCAL (http://www.sourceforge.net/projects/ripcal), a software tool for the automated analysis of RIP in fungal genomic DNA repeats, which performs both RIP index and alignment-based analyses. We demonstrate the ability of RIPCAL to detect RIP within known RIP-affected sequences of Neurospora crassa and other fungi. We also predict and delineate the presence of RIP in the genome of Stagonospora nodorum – a Dothideomycete pathogen of wheat. We show that RIP has affected different members of the S. nodorum rDNA tandem repeat to different extents depending on their genomic contexts.
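A sketch of the alignment-based idea (as described here, not RIPCAL's actual implementation): count C→T and G→A transitions between a model sequence and an aligned repeat copy, binned by di-nucleotide context so that a CpA preference stands out:

    from collections import Counter

    def rip_transitions(model, copy):
        """model, copy: equal-length aligned sequences (gaps as '-')."""
        model, copy = model.upper(), copy.upper()
        contexts = Counter()
        for i, (m, c) in enumerate(zip(model, copy)):
            if m == "C" and c == "T":                  # C->T on this strand
                nxt = model[i + 1] if i + 1 < len(model) else "N"
                contexts["C" + nxt] += 1               # e.g. CpA -> TpA
            elif m == "G" and c == "A":                # C->T on the reverse strand
                prv = model[i - 1] if i > 0 else "N"
                contexts[prv + "G"] += 1
        return contexts

    print(rip_transitions("ACGTCAGCAC", "ACGTTAGTAC"))  # Counter({'CA': 2})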
Conclusion
The RIPCAL alignment-based method has considerable advantages over RIP indices for the analysis of whole genomes. We demonstrate its application to the recently published genome assembly of S. nodorum.
A Proposal for Integrated Efficacy-to-Effectiveness (E2E) Clinical Trials
We propose an “efficacy-to-effectiveness” (E2E) clinical trial design, in which an effectiveness trial would commence seamlessly upon completion of the efficacy trial. Efficacy trials use inclusion/exclusion criteria to produce relatively homogeneous samples of participants with the target condition, conducted in settings that foster adherence to rigorous clinical protocols. Effectiveness trials use inclusion/exclusion criteria that generate heterogeneous samples that are more similar to the general patient spectrum, conducted in more varied settings, with protocols that approximate typical clinical care. In E2E trials, results from the efficacy trial component would be used to design the effectiveness trial component, to confirm and/or discern associations between clinical characteristics and treatment effects in typical care, and potentially to test new hypotheses. An E2E approach may improve the evidentiary basis for selecting treatments, expand understanding of the effectiveness of treatments in subgroups with particular clinical features, and foster incorporation of effectiveness information into regulatory processes.
Funding: National Center for Research Resources (U.S.) (Grant UL1 RR025752); National Center for Advancing Translational Sciences (U.S.) (Grant UL1 TR000073).
- …