
    Effects of high versus low flux membranes on O2 saturation in hemodialysis patients

    Background: Several studies have evaluated the effects of dialysis on O2 saturation. While the dialysis procedure may lead to hypoxia under different circumstances, few studies are available on the effects of membrane type on O2 saturation in these patients. Objectives: This study aimed to appraise the effects of high- and low-flux membranes on pulse oximetry in dialysis patients. Patients and Methods: In a cross-sectional evaluation, 43 hemodialysis patients without pulmonary disease were enrolled. In this group, dialysis was performed with low- and high-flux membranes, and pulse oximetry was applied before and after the procedures. Results: The mean age of the patients was 56.34 years. Of these patients, 23 (53.5%) were women and 20 (46.5%) were men. The type of membrane (high flux vs. low flux) did not show any significant effect on pulse oximetry results (P > 0.05). Conclusions: Given the lack of a significant difference in pulse oximetry readings and in the occurrence of hypoxia between the two types of membranes in hemodialysis patients, as well as the higher cost of high-flux membranes compared with low-flux membranes, we do not suggest the use of high-flux membranes in dialysis. © 2013, Kowsar Corp.; Published by Kowsar Corp.

    Treatment of Linear and Nonlinear Dielectric Property of Molecular Monolayer and Submonolayer with Microscopic Dipole Lattice Model: I. Second Harmonic Generation and Sum-Frequency Generation

    In the currently accepted models of nonlinear optics, the nonlinear radiation is treated as the result of an infinitesimally thin polarization sheet layer, and a three-layer model is generally employed. The direct consequence of this approach is that an a priori dielectric constant, which still does not have a clear definition, has to be assigned to this polarization layer. Because Second Harmonic Generation (SHG) and Sum-Frequency Generation vibrational Spectroscopy (SFG-VS) have been proven to be sensitive probes for interfaces with submonolayer coverage, a treatment based on the more realistic discrete induced dipole model needs to be developed. Here we show that, following the molecular optics theory approach, the SHG, as well as the SFG-VS, radiation from a monolayer or submonolayer at an interface can be rigorously treated as the radiation from an induced dipole lattice at the interface. In this approach, the introduction of the polarization sheet is no longer necessary. Therefore, the ambiguity of the unaccounted-for dielectric constant of the polarization layer is no longer an issue. Moreover, the anisotropic two-dimensional microscopic local field factors can be explicitly expressed with the linear polarizability tensors of the interfacial molecules. Based on the planewise dipole sum rule in the molecular monolayer, crucial experimental tests of this microscopic treatment with SHG and SFG-VS are discussed. Many puzzles in the literature of surface SHG and SFG spectroscopy studies can also be understood or resolved in this framework. This new treatment may provide a solid basis for quantitative analysis in surface SHG and SFG studies. Comment: 23 pages, 3 figures
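    The planewise dipole sums that enter such local field factors can be illustrated numerically. The sketch below is an illustrative calculation, not taken from the paper: it assumes a square lattice with unit lattice constant and a finite truncation radius, and evaluates the in-plane sum Σ' 1/r³ (origin excluded), which converges to Topping's constant of roughly 9.03 in units of the inverse lattice constant cubed.

```python
def square_lattice_dipole_sum(n_max=300):
    """Planewise dipole sum S = sum' 1/r^3 over a square lattice with unit
    lattice constant, excluding the origin. Truncated at |i|, |j| <= n_max;
    the tail falls off as ~2*pi/n_max, so the truncated value slightly
    underestimates the converged constant (~9.03)."""
    s = 0.0
    for i in range(-n_max, n_max + 1):
        for j in range(-n_max, n_max + 1):
            if i == 0 and j == 0:
                continue
            s += (i * i + j * j) ** -1.5
    return s
```

A run with `n_max=300` already lands within about one percent of the converged value, illustrating why nearest neighbours dominate the local field correction.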

    Design of an electrochemical micromachining machine

    Electrochemical micromachining (μECM) is a non-conventional machining process based on the phenomenon of electrolysis. μECM has become an attractive area of research because the process does not create a defective layer after machining and because there is growing demand for better surface integrity in various micro applications, including microfluidic systems and stress-free drilled holes with complex shapes in automotive and aerospace manufacturing. This work presents the design of a next-generation μECM machine for the automotive, aerospace, medical and metrology sectors. It has three axes of motion (X, Y, Z) and a spindle allowing the tool-electrode to rotate during machining. The linear slides for each axis use air bearings with linear DC brushless motors and 2-nm resolution encoders for ultra-precise motion. The control system is based on the Power PMAC motion controller from Delta Tau. The electrolyte tank is located at the rear of the machine and allows the electrolyte to be changed quickly. The machine features two process control algorithms: fuzzy logic control and adaptive feed rate. A self-developed pulse generator has been mounted and interfaced with the machine, and a wire ECM grinding device has been added. The pulse generator can reverse the pulse polarity for on-line tool fabrication. The research reported in this paper is supported by the European Commission within the project “Minimizing Defects in Micro-Manufacturing Applications (MIDEMMA)” (FP7-2011-NMP-ICT-FoF-285614).
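    The adaptive feed rate strategy mentioned above can be sketched as a simple proportional controller acting on the machining current. This is a hypothetical toy model, not the machine's actual control law: the target current, gain, nominal feed and clamp limits are all illustrative values.

```python
def adaptive_feed(current_A, target_A=0.5, k_p=4.0, nominal_um_s=1.0,
                  feed_min=-2.0, feed_max=2.0):
    """Toy proportional feed-rate controller for the inter-electrode gap.
    A current below target suggests the gap is too wide (advance faster);
    a current above target suggests the gap is closing and a short circuit
    is imminent (slow down or retract). A negative feed rate means the
    tool-electrode is retracted."""
    error = (target_A - current_A) / target_A   # normalised current error
    feed = nominal_um_s * (1.0 + k_p * error)
    return max(feed_min, min(feed_max, feed))   # clamp to axis limits
```

In practice such a loop would run at the servo rate of the motion controller, with the fuzzy logic layer adjusting the gain and target as machining conditions change.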

    Simulating quantum statistics with entangled photons: a continuous transition from bosons to fermions

    In contrast to classical physics, quantum mechanics divides particles into two classes, bosons and fermions, whose exchange statistics dictate the dynamics of systems at a fundamental level. In two dimensions, quasi-particles known as 'anyons' exhibit fractional exchange statistics intermediate between these two classes. The ability to simulate and observe behaviour associated with fundamentally different quantum particles is important for simulating complex quantum systems. Here we use the symmetry and quantum correlations of entangled photons subjected to multiple copies of a quantum process to directly simulate quantum interference of fermions, bosons and a continuum of the fractional behaviour exhibited by anyons. We observe an average similarity of 93.6 ± 0.2% between the ideal model and experimental observation. The approach generalises to an arbitrary number of particles and is independent of the statistics of the particles used, indicating applicability to other quantum systems and at large scale. Comment: 10 pages, 5 figures
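    The continuous boson-to-fermion transition can be illustrated with the textbook two-particle interference formula at a balanced beamsplitter, where the coincidence probability for particles carrying exchange phase φ interpolates between the bosonic Hong-Ou-Mandel dip (φ = 0) and complete fermionic antibunching (φ = π). This is a generic sketch of the underlying statistics, not a model of the paper's photonic apparatus.

```python
import math

def coincidence_probability(phi):
    """Coincidence probability when two indistinguishable particles with
    exchange phase phi meet at a balanced (50/50) beamsplitter:
      phi = 0      -> bosons,   P = 0 (Hong-Ou-Mandel bunching)
      phi = pi     -> fermions, P = 1 (complete antibunching)
      0 < phi < pi -> anyon-like intermediate statistics"""
    return (1.0 - math.cos(phi)) / 2.0
```

Sweeping φ from 0 to π traces out exactly the kind of continuous interpolation between bosonic and fermionic interference that the experiment probes.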

    Hot Streaks in Artistic, Cultural, and Scientific Careers

    A hot streak, loosely defined as a period in which winning begets more winning, marks a specific stretch during which an individual's performance is substantially higher than his or her typical performance. While widely debated in sports, gambling, and financial markets over the past several decades, little is known about whether hot streaks apply to individual careers. Here, building on the rich literature on the life cycle of creativity, we collected large-scale career histories of individual artists, movie directors and scientists, tracing the artworks, movies, and scientific publications they produced. We find that, across all three domains, hit works within a career show a high degree of temporal regularity, with each career characterized by bursts of high-impact works occurring in sequence. We demonstrate that these observations can be explained by a simple hot-streak model we developed, allowing us to probe quantitatively the hot-streak phenomenon governing individual careers, which we find to be remarkably universal across the diverse domains we analyzed: hot streaks are ubiquitous yet unique across different careers. While the vast majority of individuals have at least one hot streak, hot streaks are most likely to occur only once. A hot streak emerges randomly within an individual's sequence of works, is temporally localized, and is unassociated with any detectable change in productivity. We show that, since works produced during hot streaks garner significantly more impact, the uncovered hot streaks fundamentally drive the collective impact of an individual, and ignoring them leads us to systematically over- or under-estimate the future impact of a career. These results not only deepen our quantitative understanding of the patterns governing individual ingenuity and success, they may also have implications for decisions and policies involving the prediction and nurturing of individuals with lasting impact.
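    A minimal version of such a hot-streak model can be simulated directly. The sketch below is illustrative only: the log-normal impact distribution, streak length and the baseline/elevated parameters are assumptions for demonstration, not the authors' fitted model. It places a single elevated-impact streak at a random position within a career, leaving productivity unchanged.

```python
import math
import random
import statistics

def simulate_career(n_works=200, streak_len=40, gamma0=1.0, gamma1=3.0,
                    sigma=0.5, seed=7):
    """Generate a sequence of work impacts with one randomly placed hot
    streak: impacts are log-normal, with a higher log-mean (gamma1)
    inside the streak than outside it (gamma0). Returns the impact
    sequence and the index where the streak starts."""
    rng = random.Random(seed)
    start = rng.randrange(0, n_works - streak_len)
    impacts = []
    for i in range(n_works):
        mu = gamma1 if start <= i < start + streak_len else gamma0
        impacts.append(math.exp(rng.gauss(mu, sigma)))
    return impacts, start

impacts, start = simulate_career()
inside = impacts[start:start + 40]
outside = impacts[:start] + impacts[start + 40:]
# works produced during the streak carry substantially higher mean impact
assert statistics.mean(inside) > statistics.mean(outside)
```

Because the streak location is drawn uniformly at random and the output rate is constant, the model reproduces the paper's qualitative picture: temporally localized bursts of impact with no accompanying change in productivity.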

    Regulatory control and the costs and benefits of biochemical noise

    Experiments in recent years have vividly demonstrated that gene expression can be highly stochastic. How protein concentration fluctuations affect the growth rate of a population of cells is, however, a wide-open question. We present a mathematical model that makes it possible to quantify the effect of protein concentration fluctuations on the growth rate of a population of genetically identical cells. The model predicts that the population's growth rate depends on how the growth rate of a single cell varies with protein concentration, on the variance of the protein concentration fluctuations, and on the correlation time of these fluctuations. The model also predicts that when the average concentration of a protein is close to the value that maximizes the growth rate, fluctuations in its concentration always reduce the growth rate. However, when the average protein concentration deviates sufficiently from the optimal level, fluctuations can enhance the growth rate of the population, even when the growth rate of a cell depends linearly on the protein concentration. The model also shows that the ensemble or population average of a quantity, such as the average protein expression level or its variance, is in general not equal to its time average as obtained from tracing a single cell and its descendants. We apply our model to perform a cost-benefit analysis of gene regulatory control. Our analysis predicts that the optimal expression level of a gene regulatory protein is determined by the trade-off between the cost of synthesizing the regulatory protein and the benefit of minimizing the fluctuations in the expression of its target gene. We discuss possible experiments that could test our predictions. Comment: Revised manuscript; 35 pages, 4 figures, REVTeX4; to appear in PLoS Computational Biology
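    The prediction that fluctuations around the optimal expression level reduce growth can be illustrated with a toy stochastic simulation. The sketch below is an illustration under assumed forms, not the paper's model: the protein level follows an Ornstein-Uhlenbeck process with relaxation time tau (the correlation time), and the single-cell growth rate is a quadratic function peaked at an optimum concentration.

```python
import math
import random

def mean_growth(x_mean, noise_sigma, tau=1.0, dt=0.01, steps=50000, seed=0,
                g_max=1.0, k=0.5, x_opt=1.0):
    """Time-averaged growth rate of a cell whose protein concentration x
    follows an Ornstein-Uhlenbeck process around x_mean (stationary std
    noise_sigma, correlation time tau), with single-cell growth rate
    g(x) = g_max - k * (x - x_opt)**2."""
    rng = random.Random(seed)
    x = x_mean
    total = 0.0
    for _ in range(steps):
        # Euler-Maruyama step of the OU process
        x += (-(x - x_mean) / tau) * dt \
             + noise_sigma * math.sqrt(2.0 * dt / tau) * rng.gauss(0.0, 1.0)
        total += g_max - k * (x - x_opt) ** 2
    return total / steps
```

With the mean held at the optimum (`x_mean = x_opt`), any nonzero noise amplitude lowers the time-averaged growth rate, since the quadratic growth function is concave around its peak; this reproduces only the "fluctuations at the optimum are costly" part of the model, not the population-level enhancement effect.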

    Behavioural stress responses predict environmental perception in European sea bass (Dicentrarchus labrax)

    Individual variation in the response to environmental challenges depends partly on innate reaction norms and partly on experience-based cognitive/emotional evaluations that individuals make of the situation. The goal of this study was to investigate whether pre-existing differences in behaviour predict the outcome of such assessment of environmental cues, using a conditioned place preference/avoidance (CPP/CPA) paradigm. A comparative vertebrate model (European sea bass, Dicentrarchus labrax) was used, and ninety juvenile individuals were initially screened for behavioural reactivity using a net restraining test. Thereafter, each individual was tested in a choice tank using net chasing as an aversive stimulus or exposure to familiar conspecifics as an appetitive stimulus, applied in the preferred or non-preferred side, respectively (hereafter called the stimulation side). Locomotor behaviour (i.e. time spent, distance travelled and swimming speed in each tank side) of each individual was recorded and analysed with video software. The results showed that fish previously exposed to the appetitive stimulus significantly increased the time spent on the stimulation side, while the aversive stimulus led to a strong decrease in time spent on the stimulation side. Moreover, this study showed clearly that proactive fish were characterised by a stronger preference for the social stimulus and, when placed in a putatively aversive environment, showed lower physiological stress responses than reactive fish. In conclusion, this study showed for the first time in sea bass that the CPP/CPA paradigm can be used to assess the valence (positive vs. negative) that fish attribute to different stimuli, and that individual behavioural traits are predictive of how stimuli are perceived and thus of the magnitude of preference or avoidance behaviour. European Commission [265957]; Portuguese Fundação para a Ciência e a Tecnologia (FCT) [FRH/BPD/72952/2010]; FCT [SFRH/BD/80029/2011]

    Single Step Solution Processed GaAs Thin Films from GaMe3 and tBuAsH2 under Ambient Pressure

    This article reports on the possibility of low-cost GaAs formed under ambient pressure via a single-step solution-processed route from readily available precursors, tBuAsH2 and GaMe3. The thin films of GaAs on glass substrates were found to have good crystallinity, with crystallites as large as 150 nm and low contamination, and the experimental results match well with theoretical density-of-states calculations. These results open up a route to efficient and cost-effective scale-up of GaAs thin films with material properties suitable for widespread industrial use. Film quality was confirmed using XRD, Raman, EDX mapping, SEM, HRTEM, XPS, and SIMS.

    Calculating Unknown Eigenvalues with a Quantum Algorithm

    Quantum algorithms can solve particular problems exponentially faster than conventional algorithms when implemented on a quantum computer. However, all demonstrations to date have required already knowing the answer in order to construct the algorithm. We have implemented the complete quantum phase estimation algorithm for a single-qubit unitary in which the answer is calculated by the algorithm. We use a new approach to implementing the controlled-unitary operations that lie at the heart of the majority of quantum algorithms; it is more efficient and does not require the eigenvalues of the unitary to be known. These results point the way to efficient quantum simulations and quantum metrology applications in the near term, and to factoring large numbers in the longer term. The approach is architecture-independent and can thus be used in other physical implementations.
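    The logic of phase estimation for a single-qubit unitary can be sketched classically. The toy below is a classical simulation of textbook iterative (Kitaev-style) phase estimation, not the photonic implementation reported here: it recovers the eigenphase φ of U = diag(1, e^{2πiφ}) acting on its eigenstate, one bit per round from least significant to most, assuming φ is an exact n-bit binary fraction.

```python
import math

def iterative_phase_estimation(phi, n_bits):
    """Recover n_bits of the eigenphase phi of U = diag(1, e^{2 pi i phi}).
    Each round applies controlled-U^(2^(j-1)) to the eigenstate, rotates
    the ancilla by the phase feedback accumulated from previously measured
    (less significant) bits, and reads out one bit at the final Hadamard,
    where P(measure 1) = sin^2(theta / 2)."""
    bits = [0] * n_bits
    for j in range(n_bits, 0, -1):          # j indexes the bit, LSB first
        feedback = sum(bits[j + m - 1] / 2 ** (m + 1)
                       for m in range(1, n_bits - j + 1))
        theta = 2 * math.pi * ((2 ** (j - 1)) * phi - feedback)
        p1 = math.sin(theta / 2) ** 2
        bits[j - 1] = 1 if p1 > 0.5 else 0
    # reassemble phi = 0.b1 b2 ... bn in binary
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))
```

Because only one ancilla-style measurement is simulated per round, the sketch mirrors why phase estimation needs no prior knowledge of the eigenvalue: the feedback rotation uses only bits already measured.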

    Biopsy confirmation of metastatic sites in breast cancer patients: clinical impact and future perspectives

    Determination of hormone receptor (estrogen receptor and progesterone receptor) and human epidermal growth factor receptor 2 status in the primary tumor is clinically relevant for defining breast cancer subtypes, clinical outcome, and the choice of therapy. Retrospective and prospective studies suggest that there is substantial discordance in receptor status between primary and recurrent breast cancer. Despite this evidence and current recommendations, the acquisition of tissue from metastatic deposits is not routine practice. As a consequence, therapeutic decisions for treatment in the metastatic setting are based on the features of the primary tumor. Reasons for this include the invasiveness of the procedure and the unreliable outcome of biopsy, in particular for lesions at complex visceral sites. Improvements in interventional radiology techniques mean that most metastatic sites are now accessible by minimally invasive methods, including surgery. In our opinion, since biopsies are diagnostic and changes in biological features between the primary and secondary tumors can occur, routine biopsy of metastatic disease should be performed. In this review, we discuss the rationale for biopsy of suspected breast cancer metastases, review issues and caveats surrounding discordance of biomarker status between primary and metastatic tumors, and provide insights for deciding when to perform biopsy of suspected metastases and which one(s) to biopsy. We also speculate on the future translational implications of biopsy of suspected metastatic lesions in the context of clinical trials and the establishment of bio-banks of biopsy material taken from metastatic sites. We believe that such bio-banks will be important for exploring mechanisms of metastasis. In the future, advances in targeted therapy will depend on the availability of metastatic tissue.