
    Acute toxicity of four heavy metals to Sphaeroma walkeri and Cirolana bovina (Crustacea: Isopoda)

    The median lethal concentrations (LC50) of two isopod species exposed to each tested metal (Cu, Co, Cd and Zn) in static tests for different exposure periods varied considerably with the metal tested. The LC50 values for Sphaeroma walkeri after 24 hours of exposure to Cu and Co were estimated graphically to be 11.20 and 7.00 mg/l respectively. The corresponding values for Cirolana bovina exposed to Cu, Co, Cd and Zn were 3.60, 11.0, 3.80 and 4.80 mg/l respectively. The 2-day LC50 of S. walkeri exposed to Cd was 5.60 mg/l, while the 3-day LC50 for Zn was 10.10 mg/l. With prolonged exposure, the LC50 values decreased in proportion to the exposure duration, and analysis of variance (ANOVA) identified concentration as the main factor behind the progressive decrease in the percentage of surviving animals. The sensitivity of adult S. walkeri to the four heavy metals across exposure times ranked Cd > Co > Zn > Cu. Cirolana bovina appeared more sensitive to Cu, Cd and Zn than to Co, and was overall the more sensitive of the two species.
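The graphical LC50 estimate described above can be reproduced numerically. The sketch below fits a logit-transformed dose-response line and solves it for 50% mortality; the dose-response data are hypothetical, not taken from the paper:

```python
import math

def estimate_lc50(concentrations, mortality_fractions):
    """Estimate LC50 by least-squares regression of logit(mortality)
    on log10(concentration), solving the fitted line for 50% mortality."""
    xs = [math.log10(c) for c in concentrations]
    # Clip to avoid infinite logits at 0% or 100% mortality.
    ps = [min(max(p, 0.01), 0.99) for p in mortality_fractions]
    ys = [math.log(p / (1.0 - p)) for p in ps]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    # The logit is zero at 50% mortality: solve slope*x + intercept = 0.
    return 10.0 ** (-intercept / slope)

# Hypothetical 24-hour dose-response data (mg/l, fraction dead):
conc = [2.0, 4.0, 8.0, 16.0, 32.0]
dead = [0.05, 0.20, 0.50, 0.80, 0.95]
lc50 = estimate_lc50(conc, dead)
```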

    Does the Superior Colliculus Control Perceptual Sensitivity or Choice Bias during Attention? Evidence from a Multialternative Decision Framework

    Distinct networks in the forebrain and the midbrain coordinate to control spatial attention. The critical involvement of the superior colliculus (SC)—the central structure in the midbrain network—in visuospatial attention has been shown by four seminal published studies in monkeys (Macaca mulatta) performing multialternative tasks. However, due to the lack of a mechanistic framework for interpreting behavioral data in such tasks, the nature of the SC's contribution to attention remains unclear. Here we present and validate a novel decision framework for analyzing behavioral data in multialternative attention tasks. We apply this framework to re-examine the behavioral evidence from these published studies. Our model is a multidimensional extension to signal detection theory that distinguishes between two major classes of attentional mechanisms: those that alter the quality of sensory information or “sensitivity,” and those that alter the selective gating of sensory information or “choice bias.” Model-based simulations and model-based analyses of data from these published studies revealed a converging pattern of results indicating that choice-bias changes, rather than sensitivity changes, were the primary outcome of SC manipulation. Our results suggest that the SC contributes to attentional performance predominantly by generating a spatial choice bias for stimuli at a selected location, and that this bias operates downstream of forebrain mechanisms that enhance sensitivity. The findings lead to a testable mechanistic framework of how the midbrain and forebrain networks interact to control spatial attention.
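The sensitivity/bias distinction at the heart of this framework can be illustrated with the classic one-dimensional signal detection indices. The paper's model is a multidimensional extension of these; the rates below are illustrative numbers, not data from the studies:

```python
from statistics import NormalDist

def sdt_indices(hit_rate, false_alarm_rate):
    """Classic one-dimensional signal detection theory indices:
    sensitivity d' and decision criterion (choice bias) c."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Hypothetical rates: a manipulation that leaves d' almost unchanged
# while shifting c is a bias effect, not a sensitivity effect.
d1, c1 = sdt_indices(0.80, 0.20)   # baseline condition
d2, c2 = sdt_indices(0.90, 0.35)   # after a hypothetical SC manipulation
```

Here d' stays near 1.68 in both conditions while the criterion moves from 0 to about -0.45, the signature of a pure choice-bias change.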

    Rhegmatogenous retinal detachment: a review of current practice in diagnosis and management.

    Rhegmatogenous retinal detachment (RRD) is a common condition with an increasing incidence, related to the ageing demographics of many populations and the rising global prevalence of myopia, both well-known risk factors. Previously untreatable, RRD now achieves primary surgical success rates of over 80%–90%, with complex cases also amenable to treatment. The optimal management of RRD attracts much debate, with the main options of pneumatic retinopexy, scleral buckling and vitrectomy all having their proponents based on surgeon experience and preference, case mix and equipment availability. The aim of this review is to provide an overview for the non-retina specialist that will aid and inform their understanding and discussions with patients. We review the incidence and pathogenesis of RRD, present a systematic approach to diagnosis and treatment with special consideration to managing the fellow eye, and summarise surgical success and visual recovery following the different surgical options.

    Inert gas clearance from tissue by co-currently and counter-currently arranged microvessels

    To elucidate the clearance of dissolved inert gas from tissues, we have developed numerical models of gas transport in a cylindrical block of tissue supplied by one or two capillaries. With two capillaries, attention is given to the effects of co-current and counter-current flow on tissue gas clearance. Clearance by counter-current flow is compared with clearance by a single capillary or by two co-currently arranged capillaries. Effects of the blood velocity, solubility, and diffusivity of the gas in the tissue are investigated using parameters with physiological values. It is found that under the conditions investigated, almost identical clearances are achieved by a single capillary as by a co-current pair when the total flow per tissue volume in each unit is the same (i.e., flow velocity in the single capillary is twice that in each co-current vessel). For both co-current and counter-current arrangements, approximate linear relations exist between the tissue gas clearance rate and tissue blood perfusion rate. However, the counter-current arrangement of capillaries results in less-efficient clearance of the inert gas from tissues. Furthermore, this difference in efficiency increases at higher blood flow rates. At a given blood flow, the simple conduction-capacitance model, which has been used to estimate tissue blood perfusion rate from inert gas clearance, underestimates the gas clearance rates predicted by the numerical models for a single vessel or for two vessels with co-current flow. This difference is accounted for in the discussion, which also considers the choice of parameters and possible effects of microvascular architecture on the interpretation of tissue inert gas clearance.
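The conduction-capacitance model mentioned above treats the tissue as a single well-stirred compartment, giving a single-exponential washout against which the numerical capillary models are compared. A minimal sketch with illustrative parameter values (not the paper's):

```python
import math

def kety_clearance(c0, perfusion, partition, t):
    """Conduction-capacitance (Kety) washout: dC/dt = -(f/lam) * C,
    where f is blood flow per unit tissue volume and lam is the
    tissue-blood partition coefficient. Returns the concentration C(t)."""
    return c0 * math.exp(-(perfusion / partition) * t)

# Illustrative values: f = 0.5 ml blood / (ml tissue * min), lam = 1.
half_time = math.log(2) / 0.5          # washout half-time, ~1.39 min
c_half = kety_clearance(1.0, 0.5, 1.0, half_time)
```

The clearance rate constant f/lam is linear in perfusion, matching the approximate linear clearance-perfusion relation the numerical models also exhibit.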

    Appraising Forgeability and Surface Cracking in New Generation Cast and Wrought Superalloys

    Surface cracking poses a major problem in industrial forging, but scientific understanding of the phenomenon is hampered by the difficulty of replicating it in a laboratory setting. In this work, a novel laboratory-scale experimental method is presented to investigate forgeability in new generation cast and wrought superalloys. This new approach makes it possible to appraise the prevalence and severity of surface cracking by mimicking the die-chilling effects characteristic of hot die forging. Two high γ′-reinforced alloys are used to explore this methodology. A Gleeble thermo-mechanical simulator is used to conduct hot compression tests following a non-isothermal cycle, with the aim of simulating the cooling of the near-surface regions during the forging process. FEA simulations, sample geometry design, and heat treatments are used to ensure the correspondence between laboratory and real-scale forging. A wide range of surface cracking results are obtained for different forging temperatures and cooling rates, proving the soundness of the method. Surprisingly, samples heated to higher initial temperatures typically show more extensive surface cracking. These findings indicate that, along with the local mechanical conditions of the forging, die-chilling effects and forging temperatures are paramount in controlling surface cracking, as they dictate the key variables governing the distribution and kinetics of γ′ formation.
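The die-chilling of the near-surface region can be caricatured by lumped Newtonian cooling. This is only a back-of-the-envelope sketch with invented parameter values; the paper itself uses FEA simulations for this step:

```python
import math

def surface_temperature(t, t_billet, t_die, tau):
    """Lumped Newtonian-cooling estimate of the near-surface temperature
    of a hot billet in contact with a colder die. tau is the thermal
    time constant of the surface layer; all values are illustrative."""
    return t_die + (t_billet - t_die) * math.exp(-t / tau)

# A billet forged at 1100 C against a 300 C die, tau = 20 s:
t_after_10s = surface_temperature(10.0, 1100.0, 300.0, 20.0)
```

Even this crude model shows why die contact time matters: the surface loses several hundred degrees within seconds, moving it through the temperature window where γ′ precipitation kinetics change.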

    Risk scoring models for trade credit in small and medium enterprises

    Trade credit refers to providing goods and services on a deferred payment basis. Commercial credit management is a matter of great importance for most small and medium enterprises (SMEs), since it represents a significant portion of their assets. Commercial lending involves assuming some credit risk due to exposure to default. Thus, the management of trade credit and payment delays is strongly related to the liquidation and bankruptcy of enterprises. In this paper we study the relationship between trade credit management and the level of risk in SMEs. Despite its relevance for most SMEs, this problem has not been sufficiently analyzed in the existing literature. After a brief literature review, we use a large database of enterprises to analyze the data and propose a multivariate decision-tree model that aims to explain the level of risk as a function of several variables, both financial and non-financial in nature. Decision trees replace the equation in parametric regression models with a set of rules. This feature is an important aid for the decision process of risk experts, as it allows them to reduce the time, and hence the economic cost, of their decisions.
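A fitted decision tree reduces to exactly the kind of if-then rule set described above. The toy rules below use invented variables and thresholds, not the model from the paper, purely to show the readable form a risk expert would work with:

```python
def risk_level(days_past_due, debt_ratio, years_trading):
    """Toy rule set mimicking a fitted trade-credit decision tree.
    Variables and thresholds are illustrative, not from the paper."""
    if days_past_due > 60:                  # severely late payers
        return "high"
    if debt_ratio > 0.7:                    # heavily leveraged firms
        return "high" if years_trading < 3 else "medium"
    return "medium" if days_past_due > 30 else "low"

example = risk_level(days_past_due=10, debt_ratio=0.3, years_trading=5)
```

Unlike a regression coefficient, each path through the rules is directly interpretable, which is the time-saving property the paper highlights.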

    How often should we monitor for reliable detection of atrial fibrillation recurrence? Efficiency considerations and implications for study design

    OBJECTIVE: Although atrial fibrillation (AF) recurrence is unpredictable in terms of onset and duration, current intermittent rhythm monitoring (IRM) diagnostic modalities are short-term and discontinuous. The aim of the present study was to investigate the IRM frequency required to reliably detect recurrence of various AF recurrence patterns. METHODS: The rhythm histories of 647 patients (mean AF burden: 12±22% of monitored time; 687 patient-years) with implantable continuous monitoring devices were reconstructed and analyzed. With the use of computationally intensive simulation, we evaluated the IRM frequency necessary to reliably detect AF recurrence of various AF phenotypes using IRM of various durations. RESULTS: The IRM frequency required for reliable AF detection depends on the amount and temporal aggregation of the AF recurrence (p<0.0001) as well as the duration of the IRM (p<0.001). Reliable detection (>95% sensitivity) of AF recurrence required higher IRM frequencies (>12 24-hour; >6 7-day; >4 14-day; >3 30-day IRM per year; p<0.0001) than currently recommended. Lower IRM frequencies will under-detect AF recurrence and introduce significant bias in the evaluation of therapeutic interventions. More frequent but shorter-duration IRMs (24-hour) are significantly more time-effective (sensitivity per monitored time) than fewer, longer IRMs (p<0.0001). CONCLUSIONS: Reliable AF recurrence detection requires higher IRM frequencies than currently recommended. Current IRM frequency recommendations will fail to diagnose a significant proportion of patients. Shorter-duration but more frequent IRM strategies are significantly more efficient than longer IRM durations. CLINICAL TRIAL REGISTRATION URL: Unique identifier: NCT00806689
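The effect of IRM frequency and duration on detection sensitivity can be sketched with a small Monte Carlo simulation. This toy version assumes a single contiguous AF episode and randomly scheduled monitoring windows, far simpler than the study's reconstruction of real rhythm histories, but it reproduces the qualitative efficiency result:

```python
import random

def detection_rate(af_days, n_monitors, monitor_days,
                   year=365, trials=2000, seed=1):
    """Monte Carlo estimate of the probability that at least one of
    n_monitors randomly scheduled monitoring windows of monitor_days
    overlaps a single contiguous AF episode lasting af_days."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        af_start = rng.uniform(0, year - af_days)
        detected = False
        for _ in range(n_monitors):
            m_start = rng.uniform(0, year - monitor_days)
            # Two intervals overlap iff each starts before the other ends.
            if (m_start < af_start + af_days
                    and af_start < m_start + monitor_days):
                detected = True
                break
        hits += detected
    return hits / trials

rate_12x24h = detection_rate(30, 12, 1)   # twelve 24-hour monitors
rate_3x7d = detection_rate(30, 3, 7)      # three 7-day monitors
```

With these illustrative numbers, twelve 24-hour monitors (12 monitored days in total) detect the episode more often than three 7-day monitors (21 monitored days), echoing the paper's finding that frequent short IRMs are more time-effective.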

    Renormalization Group Approach to Cosmological Back Reaction Problems

    We investigated the back reaction of cosmological perturbations on the evolution of the universe using the second order perturbation of the Einstein equation. To incorporate the back reaction effect due to the inhomogeneity into the framework of cosmological perturbation theory, we used the renormalization group method. Regarding the second order zero-mode solution, which arises from the non-linearities of the Einstein equation, as a secular term of the perturbative expansion, we renormalized a constant of integration contained in the background solution and absorbed the secular term into this constant. For a dust dominated universe, using the second order gauge invariant quantity, we derived the renormalization group equation that determines the effective dynamics of the Friedmann-Robertson-Walker universe with the back reaction effect in a gauge invariant manner. We obtained the solution of the renormalization group equation and found that perturbations of the scalar mode and the long wavelength tensor mode act as positive spatial curvature, and the short wavelength tensor mode as a radiation fluid.
    Comment: 18 pages, revtex, to appear in Phys. Rev.
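The secular-term renormalization used here can be illustrated on a toy equation unrelated to the paper (a standard textbook example of the renormalization group method, not the cosmological calculation itself):

```latex
% Naive perturbative expansion of a toy equation about an arbitrary time t_0:
\dot{y} = -\epsilon y, \qquad
y(t) \simeq A(t_0)\bigl[\,1 - \epsilon\,(t - t_0)\,\bigr] ,
% The term proportional to (t - t_0) grows secularly. Absorbing it into
% the integration constant A(t_0), and demanding that y not depend on the
% arbitrary point t_0 (dy/dt_0 = 0), yields the renormalization group equation
\frac{dA}{dt_0} = -\epsilon A
\quad\Longrightarrow\quad
A(t_0) = A(0)\, e^{-\epsilon t_0} ,
% which, upon setting t_0 = t, resums the secular series into the exact
% solution y(t) = A(0)\, e^{-\epsilon t}.
```

In the paper the same logic is applied to the second order zero-mode: the secular term is absorbed into a constant of integration of the background Friedmann solution, and the resulting RG equation gives the effective back-reacted dynamics.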