
    Determination of aflatoxin M1 levels in white cheese samples by ELISA in Gilan province, Iran

    Aflatoxin M1 (AFM1) in milk and milk products is considered to pose hygienic risks to human health, since these metabolites are not destroyed by pasteurization or heating. This study was undertaken to determine the presence and levels of AFM1 in Iranian white cheese consumed in Gilan province (northern Iran). A total of 90 cheese samples were randomly obtained from retail outlets, and an ELISA technique was used to determine the presence and level of AFM1. AFM1 was detected in 78 of the 90 samples examined (86.66%), at concentrations between 7.2 and 413 ng/l; the mean level in positive samples was 151.97 ng/l. AFM1 levels in 21 samples (23.33%) exceeded the maximum tolerance limit (250 ng/l) accepted by European countries. High aflatoxin concentrations in milk and milk products have a widespread negative impact on public health and cause considerable economic losses for producers. It is therefore necessary to establish strategies for reducing aflatoxin levels in animal feed and milk products. © IDOSI Publications, 2012
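    The reported percentages follow directly from the sample counts; a quick arithmetic check (the counts come from the abstract, the variable names are ours):

```python
# Re-deriving the summary percentages from the counts given in the abstract.
total_samples = 90
positive = 78          # samples in which AFM1 was detected
above_eu_limit = 21    # samples exceeding the 250 ng/l EU tolerance limit

positive_pct = 100 * positive / total_samples
exceeding_pct = 100 * above_eu_limit / total_samples
print(f"AFM1-positive: {positive_pct:.2f}%, above limit: {exceeding_pct:.2f}%")
```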

    Patient level analytics using self-organising maps: a case study on type-1 diabetes self-care survey responses

    Survey questionnaires are often heterogeneous: they contain both quantitative (numeric) and qualitative (text) responses, as well as missing values. While traditional model-based methods are commonly used by clinicians, we deploy Self-Organising Maps (SOMs) as a means to visualise the data. In a survey study aimed at understanding the self-care behaviour of 611 patients with Type-1 Diabetes, we show that SOMs can be used to (1) identify co-morbidities, (2) link self-care factors that are dependent on each other, and (3) visualise individual patient profiles. In evaluation with clinicians and experts in Type-1 Diabetes, the knowledge and insights extracted using SOMs corresponded well to clinical expectations. Furthermore, the output of the SOM in the form of a U-matrix offers an interesting alternative to the usual tabular form for visualising patient profiles.
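    A minimal sketch of the SOM/U-matrix machinery the abstract describes, run on synthetic data. The study's actual encoding of mixed numeric/text survey answers is not reproduced; the grid size, learning schedule, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 5))   # stand-in for encoded survey responses

rows, cols, dim = 8, 8, data.shape[1]
weights = rng.normal(size=(rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                # decaying learning rate
    sigma = max(1.0, 3.0 * (1 - epoch / 20))   # shrinking neighbourhood radius
    for x in data:
        # best-matching unit: grid cell whose weight vector is closest to x
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighbourhood update pulls nearby units towards x
        dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)

# U-matrix: mean distance from each unit's weight vector to its grid neighbours;
# large values mark cluster boundaries on the map.
umatrix = np.zeros((rows, cols))
for i in range(rows):
    for j in range(cols):
        neigh = [weights[a, b] for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                 if 0 <= a < rows and 0 <= b < cols]
        umatrix[i, j] = np.mean([np.linalg.norm(weights[i, j] - n) for n in neigh])
```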

    Disposition kinetics and dosage regimen of levofloxacin on concomitant administration with paracetamol in crossbred calves

    The disposition kinetics of levofloxacin was investigated in six male crossbred calves following a single intravenous administration, at a dose of 4 mg/kg body weight, into the jugular vein subsequent to a single intramuscular injection of paracetamol (50 mg/kg). At 1 min after the injection of levofloxacin, the concentration of levofloxacin in plasma was 17.2 ± 0.36 µg/ml, which rapidly declined to 6.39 ± 0.16 µg/ml at 10 min. The drug level in plasma remained above the MIC90 for up to 10 h. Levofloxacin was rapidly distributed from blood to the tissue compartment, as evidenced by the high values of the distribution coefficient α (17.3 ± 1.65 /h) and the ratio K12/K21 (1.83 ± 0.12). The values of AUC and Vd(area) were 12.7 ± 0.12 µg·h/ml and 0.63 ± 0.01 l/kg, respectively. The high AUC/MIC ratio (126.9 ± 1.18) obtained in this study indicated the excellent antibacterial activity of levofloxacin in calves. The elimination half-life, MRT, and total body clearance were 1.38 ± 0.01 h, 1.88 ± 0.01 h, and 0.32 ± 0.003 l/kg/h, respectively. Based on the pharmacokinetic parameters, an appropriate intravenous dosage regimen for levofloxacin would be 5 mg/kg repeated at 24 h intervals when prescribed with paracetamol in calves.
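    The reported parameters can be cross-checked with the standard relations Cl = Dose / AUC and Vd(area) = Cl / β, where β = ln 2 / t½. A sketch using the values from the abstract:

```python
import math

# Values taken from the abstract (single i.v. dose in calves).
dose = 4.0      # mg/kg
auc = 12.7      # µg·h/ml (numerically equal to mg·h/l per kg dose)
t_half = 1.38   # h, elimination half-life

cl = dose / auc                  # total body clearance, l/kg/h
beta = math.log(2) / t_half      # terminal elimination rate constant, 1/h
vd_area = cl / beta              # volume of distribution, l/kg

# Both values are close to the reported 0.32 l/kg/h and 0.63 l/kg.
print(f"Cl = {cl:.3f} l/kg/h, Vd(area) = {vd_area:.3f} l/kg")
```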

    Massively parallel finite element simulation of compressible and incompressible flows

    We present a review of where our research group stands in parallel finite element simulation of flow problems on the Connection Machines, an effort that started for our group in the fourth quarter of 1991. This review includes an overview of our work on computation of flow problems involving moving boundaries and interfaces, such as free surfaces, two-liquid interfaces, and fluid-structure and fluid-particle interactions. With numerous examples, we demonstrate that, with these new computational capabilities, today we are at a point where we routinely solve practical flow problems, including those in 3D and those involving moving boundaries and interfaces. We solve these problems with unstructured grids and implicit methods, with some of the problem sizes exceeding 5 000 000 equations, and with computational speeds up to two orders of magnitude higher than what was previously available to us on the traditional vector supercomputers

    Parallel finite element simulation of large ram-air parachutes

    In the near future, large ram-air parachutes are expected to provide the capability of delivering 21-ton payloads from altitudes as high as 25,000 ft. In the development, test, and evaluation of these parachutes, the size of the parachute needed and the deployment stages involved make high-performance computing (HPC) simulations a desirable alternative to costly airdrop tests. Although computational simulations based on realistic, 3D, time-dependent models will continue to be a major computational challenge, advanced finite element simulation techniques recently developed for this purpose, and the execution of these techniques on HPC platforms, are significant steps towards meeting this challenge. In this paper, two approaches for the analysis of the inflation and gliding of ram-air parachutes are presented. In the first, the point-mass flight mechanics equations are solved with the time-varying drag and lift areas obtained from empirical data; this approach is limited to parachutes with configurations similar to those for which data are available. The second approach consists of 3D finite element computations based on the Navier-Stokes equations governing the airflow around the parachute canopy and Newton's law of motion governing the 3D dynamics of the canopy, with the forces acting on the canopy calculated from the simulated flow field. At the earlier stages of canopy inflation the parachute is modelled as an expanding box; at the later stages, as it expands, the box transforms into a parafoil and glides. These finite element computations are carried out on the massively parallel supercomputers CRAY T3D and Thinking Machines CM-5, typically with millions of coupled, non-linear finite element equations solved simultaneously at every time step or pseudo-time step of the simulation.
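    The point-mass approach with a time-varying drag area can be sketched as follows. This is an illustrative toy, not the paper's model: the mass, air density, inflation time, and drag area below are invented numbers, and only vertical motion is integrated.

```python
# Point-mass vertical descent with a drag area CdS(t) that grows during
# inflation, integrated by explicit Euler. Downward velocity is positive.
g = 9.81        # m/s^2
rho = 1.0       # kg/m^3, assumed air density
mass = 19000.0  # kg, roughly a 21-ton payload class (assumed)
cds_full = 900.0  # m^2, fully inflated drag area (hypothetical)

def cds(t, t_inflate=10.0):
    """Drag area ramping up linearly during inflation (hypothetical)."""
    return cds_full * min(1.0, t / t_inflate)

v, dt = 0.0, 0.01
for step in range(20000):            # 200 s of simulated descent
    t = step * dt
    drag = 0.5 * rho * cds(t) * v * v
    v += dt * (g - drag / mass)

# After inflation, v approaches the terminal velocity sqrt(2*m*g / (rho*CdS)).
v_term = (2 * mass * g / (rho * cds_full)) ** 0.5
```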

    A new mixed preconditioning method based on the clustered element-by-element preconditioners

    We describe a new mixed preconditioning method for finite element computations. In clustered element-by-element (CEBE) preconditioning, the elements are merged into clusters, and the preconditioners are defined as series products of cluster-level matrices. The cluster companion (CC) preconditioners are based on companion meshes associated with different levels of clustering. For each level of clustering, we construct a CEBE preconditioner and an associated CC preconditioner. Because these two preconditioners complement each other, mixing them gives better performance. Our numerical tests, for two- and three-dimensional problems governed by the Poisson equation, show that the mixed CEBE/CC preconditioning results in convergence rates significantly better than those obtained with the best of the CEBE and CC methods.
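    A deliberately simplified illustration of the clustering idea: group unknowns into clusters and precondition conjugate gradients with the inverse of each cluster's diagonal block (block Jacobi). This is not the series-product CEBE form from the paper; it only shows, on a 1D Poisson problem, why cluster-level preconditioning beats pointwise preconditioning.

```python
import numpy as np

n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson stiffness matrix
b = np.ones(n)

def pcg(A, b, Minv, tol=1e-8, maxit=500):
    """Preconditioned conjugate gradients; returns (solution, iterations)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv(r)
    p = z.copy()
    for it in range(maxit):
        rz = r @ z
        alpha = rz / (p @ A @ p)
        x += alpha * p
        r -= alpha * (A @ p)
        if np.linalg.norm(r) < tol:
            return x, it + 1
        z = Minv(r)
        beta = (r @ z) / rz
        p = z + beta * p
    return x, maxit

cluster = 8   # unknowns per cluster
blocks = [np.linalg.inv(A[i:i + cluster, i:i + cluster]) for i in range(0, n, cluster)]

def block_jacobi(r):
    # apply each cluster block inverse to its slice of the residual
    return np.concatenate([Bi @ r[i * cluster:(i + 1) * cluster]
                           for i, Bi in enumerate(blocks)])

x_blk, it_blk = pcg(A, b, block_jacobi)                 # clustered preconditioner
x_pt, it_pt = pcg(A, b, lambda r: r / np.diag(A))       # pointwise Jacobi
```

On this problem the clustered variant converges in no more iterations than the pointwise one, mirroring the benefit of working at the cluster level.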

    A novel multi-fidelity modelling-based framework for reliability-based design optimisation of composite structures

    A new multi-fidelity modelling-based probabilistic optimisation framework for composite structures is presented in this paper. The multi-fidelity formulation developed herein significantly reduces the required computational time, allowing more design variables to be considered early in the design stage. Multi-fidelity models are created from finite element models, surrogate models, and response correction surfaces. The accuracy and computational efficiency of the proposed optimisation methodology are demonstrated in two engineering examples involving composite structures: a reliability analysis and a reliability-based design optimisation. In these two benchmark examples, each random design variable is assigned an expected level of uncertainty. Monte Carlo Simulation (MCS), the First-Order Reliability Method (FORM), and the Second-Order Reliability Method (SORM) are used within the multi-fidelity framework to calculate the probability of failure. The reliability optimisation is a multi-objective problem that finds the optimal front providing both the maximum linear buckling load and the minimum mass. The results show that multi-fidelity models provide high levels of accuracy while drastically reducing computation time.
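    The MCS and FORM probability-of-failure computations can be sketched on a toy limit state (not the paper's composite-structure models): failure occurs when a load L exceeds a resistance R, both assumed normally distributed, so Pf = P(R − L < 0). For this linear Gaussian case FORM is exact, giving a closed-form check on the sampling estimate.

```python
import numpy as np
from math import erf, sqrt

# Monte Carlo estimate of the failure probability (distributions assumed).
rng = np.random.default_rng(42)
n = 200_000
R = rng.normal(loc=10.0, scale=1.0, size=n)   # resistance
L = rng.normal(loc=6.0, scale=1.5, size=n)    # load
g = R - L                                      # limit-state function
pf_mcs = np.mean(g < 0)

# FORM for a linear Gaussian limit state: reliability index beta = mu_g / sigma_g,
# failure probability Pf = Phi(-beta).
beta = (10.0 - 6.0) / sqrt(1.0**2 + 1.5**2)
pf_form = 0.5 * (1 - erf(beta / sqrt(2)))
```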

    The impact of computer game-making on the creativity of elementary students

    Background and Objectives: Computer games are today among the most effective of the various educational media, and playing games can serve as a training method in the educational process. In the opinion of many researchers, however, having students make games themselves can have deeper effects, and it is an instructive and enjoyable experience, especially for students. Computer game-making stimulates thinking and creativity because it engages the individual in multi-dimensional activities. Because game-making by students is a new phenomenon and research on its impact on creativity is scarce, more work in this area is needed. The main purpose of this study was therefore to investigate the effect of computer game-making on the creativity of male elementary students. The sub-objectives were: (1) to investigate the effect of computer game-making on the fluidity dimension of creativity; (2) on the originality dimension; (3) on the flexibility dimension; and (4) on the expansion dimension. Materials and Methods: The study used a quasi-experimental pre-test/post-test design with a control group. The statistical population comprised all boys' primary schools in the city of Islamshahr, from which one school was selected. The sample consisted of 40 students from the fourth to sixth grades, selected by simple random sampling and randomly assigned to control and experimental groups. The data collection instrument was the Torrance Creativity Test (Form B), which includes four subscales: fluidity, flexibility, originality, and expansion. Data were collected by survey and analysed with a difference t-test.
Findings: Data analysis showed that computer game-making had a positive effect on creativity and its dimensions. Regarding the sub-hypotheses, the mean score of the fluidity dimension of creativity was 9.21 (SD = 4.52) in the experimental group and 2.25 (SD = 1.43) in the control group. The experimental group thus showed greater change in the fluidity dimension than the control group, and its fluidity increased compared with before the intervention (p = 0.001); the corresponding research hypothesis was confirmed. Conclusion: Game-making involves the user in a real, multidimensional problem and can provide a context for fostering creativity. Game-making required presenting new ideas during the design and production of games; students saw the results of their work objectively, had an intrinsic motivation to keep working, and continued both to complete their ideas and to come up with new ones, a point that has been emphasised in theories of creativity.

COPYRIGHTS: ©2020 The author(s). This is an open access article distributed under the terms of the Creative Commons Attribution licence (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, as long as the original authors and source are cited. No permission is required from the authors or the publishers.
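    The group comparison for the fluidity dimension can be re-computed from the reported summary statistics with Welch's t-test. The group sizes of 20/20 are an assumption (the abstract reports 40 students split randomly into two groups), and the study itself used a difference t-test on gain scores, so this is only a consistency sketch.

```python
from math import sqrt

# Summary statistics from the abstract (fluidity dimension).
m1, s1, n1 = 9.21, 4.52, 20   # experimental group (n assumed)
m2, s2, n2 = 2.25, 1.43, 20   # control group (n assumed)

# Welch's t statistic from means and standard deviations.
se = sqrt(s1**2 / n1 + s2**2 / n2)
t = (m1 - m2) / se
print(f"t = {t:.2f}")   # a large t, consistent with the reported p = 0.001
```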

    Synthesis, Cytotoxicity Assessment, and Molecular Docking of 4-Substituted-2-p-tolylthiazole Derivatives as Probable c-Src and erb Tyrosine Kinase Inhibitors

    In the current project we focused on the synthesis of 4-substituted-2-p-tolylthiazole derivatives. The cytotoxicity of the synthesized compounds was evaluated against the T47D breast cancer cell line, and all of the final compounds 3−7 were docked into the active sites of the c-Src and erb tyrosine kinases. Compound 4 was the most potent derivative in the cytotoxicity assay (IC50 = 2.5 µg/mL) and was also the most potent inhibitor of erb tyrosine kinase (binding free energy: −10.18 kcal/mol). (doi: 10.5562/cca1939)
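    A docking score can be turned into a predicted inhibition constant via the standard relation ΔG = RT ln Ki. This is a back-of-the-envelope interpretation of the reported binding free energy, not a value from the abstract, and it assumes room temperature.

```python
from math import exp

R = 1.987e-3   # kcal/(mol*K), gas constant
T = 298.15     # K, assumed room temperature
dG = -10.18    # kcal/mol, compound 4 vs. erb tyrosine kinase (from the abstract)

Ki = exp(dG / (R * T))   # predicted inhibition constant in mol/l
print(f"predicted Ki = {Ki * 1e9:.1f} nM")
```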

    Coordination of smart home energy management systems in neighborhood areas: A systematic review

    High penetration of selfish Home Energy Management Systems (HEMSs) causes adverse effects such as rebound peaks, instabilities, and contingencies in different regions of the distribution grid. To avoid these effects and relieve power grid stress, the concept of HEMS coordination has been suggested. In particular, this concept can be employed to fulfil important grid objectives in neighborhood areas, such as flattening the aggregated load profile, decreasing electricity bills, facilitating energy trading, diminishing reverse power flow, managing distributed energy resources, and modifying consumers' consumption/generation patterns. This paper reviews the latest investigations into coordinated HEMSs. The steps required to implement these systems, accounting for coordination topologies and techniques, are thoroughly explored, mainly by classifying coordination approaches according to their use of decomposition algorithms. Furthermore, the major features, advantages, and disadvantages of the methods are examined: coordination process characteristics, their mathematical issues and essential prerequisites, and players' concerns are analysed. Specific applications of coordination designs are then discussed and categorized. Through a comprehensive investigation, this work highlights critical gaps in existing studies on the way to a useful coordination structure for practical HEMS implementations. Unlike other reviews, the present survey focuses on effective frameworks that determine future opportunities for making coordinated HEMSs feasible. Providing effective studies on HEMS coordination benefits both consumers and service providers, since, as reported, these systems can lead to a 5% to 30% reduction in electricity bills.
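    The contrast between selfish and coordinated HEMSs can be sketched with a toy scheduler: uncoordinated homes all shift their deferrable load to the same "cheap" hour and create a rebound peak, while a coordinator assigns each task to the currently least-loaded hour, flattening the aggregate profile. All numbers are invented; real coordination uses far richer models and constraints.

```python
hours = 24
# Baseline neighbourhood load in kW, with an evening peak (invented profile).
base_load = [30.0 if 17 <= h <= 21 else 10.0 for h in range(hours)]

tasks = [3.0] * 40   # 40 homes, one 3 kW deferrable task each (invented)

# Uncoordinated: every selfish HEMS picks the same off-peak hour -> rebound peak.
uncoordinated = base_load[:]
for task in tasks:
    uncoordinated[22] += task

# Coordinated: a central scheduler places each task in the least-loaded hour.
coordinated = base_load[:]
for task in tasks:
    h = min(range(hours), key=lambda i: coordinated[i])
    coordinated[h] += task

print(f"peak uncoordinated: {max(uncoordinated):.0f} kW, "
      f"coordinated: {max(coordinated):.0f} kW")
```

The greedy fill levels the off-peak hours without touching the evening peak, so the coordinated aggregate peak stays at the baseline maximum while the uncoordinated one balloons.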