
    Three-dimensional finite element modelling of stack pollutant emissions

    In this paper we propose a finite element approach for modelling air quality at the local scale over complex terrain. The area of interest extends up to tens of kilometres and includes pollutant sources. The proposed methodology involves the generation of an adaptive tetrahedral mesh, the computation of an ambient wind field, the inclusion of the plume rise effect in the wind field, and the simulation of transport and reaction of pollutants. The methodology is used to simulate a fictitious pollution episode on the island of La Palma (Canary Islands, Spain).
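    The plume rise step can be illustrated with the classic Briggs formulas for a buoyant plume in neutral conditions. The abstract does not say which parameterization the model uses, so this is only a sketch under that assumption; the function names and stack parameters are illustrative.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def buoyancy_flux(v_s, r_s, t_s, t_a):
    """Briggs buoyancy flux F (m^4/s^3) for a stack with exit velocity
    v_s (m/s), radius r_s (m), gas temperature t_s (K) and ambient
    temperature t_a (K)."""
    return G * v_s * r_s**2 * (t_s - t_a) / t_s

def briggs_final_rise(f, u):
    """Final plume rise (m) in neutral conditions for wind speed u (m/s).
    The distance to final rise x_f depends on whether F is below 55."""
    x_f = 49.0 * f**0.625 if f < 55.0 else 119.0 * f**0.4
    return 1.6 * f**(1.0 / 3.0) * x_f**(2.0 / 3.0) / u
```

    The computed rise would be added to the stack height before the emission is injected into the mass-consistent wind field; note that the rise decreases as the ambient wind speed grows.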

    Wind Field Diagnostic Model

    This chapter describes Wind3D, a mass-consistent diagnostic model with an updated vertical wind profile and atmospheric parameterization. First, a description of Wind3D is provided, along with its governing equations. Next, the finite element formulation of the model and a description of the solver for the corresponding linear system are presented. The model requires an initial wind field, interpolated from data obtained at a few points of the domain. It is constructed using a logarithmic wind profile that considers the effects of both the stable boundary layer (SBL) and the convective boundary layer (CBL). One important aspect of mass-consistent models is that they are quite sensitive to the values of some of their parameters. To deal with this problem, a strategy for parameter estimation based on a memetic algorithm is presented. Finally, a numerical experiment over complex terrain is presented, along with some concluding remarks.
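    The kind of stability-aware logarithmic profile described above can be sketched with the standard Monin-Obukhov log law. Wind3D's exact parameterization may differ; here the Businger-Dyer correction stands in for the SBL/CBL treatment, so treat this as an illustration, not the model's actual formula.

```python
import math

KAPPA = 0.4  # von Karman constant

def psi_m(zeta):
    """Businger-Dyer stability correction for momentum,
    with zeta = z / L (Monin-Obukhov stability parameter)."""
    if zeta >= 0:  # stable boundary layer (or neutral)
        return -5.0 * zeta
    x = (1.0 - 16.0 * zeta) ** 0.25  # unstable / convective boundary layer
    return (2.0 * math.log((1.0 + x) / 2.0)
            + math.log((1.0 + x * x) / 2.0)
            - 2.0 * math.atan(x) + math.pi / 2.0)

def log_wind(z, u_star, z0, L=float("inf")):
    """Wind speed at height z (m) from the log profile, given friction
    velocity u_star (m/s), roughness length z0 (m) and Obukhov length
    L (m); L = inf gives the neutral profile."""
    zeta = 0.0 if math.isinf(L) else z / L
    return (u_star / KAPPA) * (math.log(z / z0) - psi_m(zeta))
```

    Evaluating this profile at the mesh nodes would provide the interpolated initial field that the mass-consistent adjustment then corrects.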

    Non-parametric three-way mixed ANOVA with aligned rank tests

    Research problems that require a non-parametric analysis of multifactor designs with repeated measures arise in the behavioural sciences. There is, however, a lack of available procedures in commonly used statistical packages. In the present study, a generalization of the aligned rank test for the two-way interaction is proposed for the analysis of the typical sources of variation in a three-way analysis of variance (ANOVA) with repeated measures. It can be implemented in the usual statistical packages. Its statistical properties are tested by using simulation methods with two sample sizes (n = 30 and n = 10) and three distributions (normal, exponential and double exponential). Results indicate substantial increases in power for non-normal distributions in comparison with the usual parametric tests. Similar levels of Type I error for both parametric and aligned rank ANOVA were obtained with non-normal distributions and large sample sizes. Degrees-of-freedom adjustments for Type I error control in small samples are proposed. The procedure is applied to a case study with 30 participants per group, where it detects gender differences in linguistic abilities in blind children not shown previously by other methods. We would like to thank Robert Steiner and David W. Smith of New Mexico State University for their support in conducting this study.
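    The alignment idea at the core of the procedure can be sketched for the simpler two-way case: subtract the estimated main effects from each observation, rank the aligned residuals, and then run a standard ANOVA on the ranks. The function below is a minimal illustration of that first step only (it assumes 0-based integer factor codes and does not handle ties), not the paper's three-way mixed procedure.

```python
import numpy as np

def align_and_rank(y, a, b):
    """Align observations y for the A x B interaction by removing the
    grand mean and the estimated main effects of factors a and b
    (0-based integer level codes), then rank the aligned values."""
    y = np.asarray(y, dtype=float)
    grand = y.mean()
    a_eff = np.array([y[a == lv].mean() - grand for lv in np.unique(a)])
    b_eff = np.array([y[b == lv].mean() - grand for lv in np.unique(b)])
    aligned = y - a_eff[a] - b_eff[b]          # remove main effects
    return aligned.argsort().argsort() + 1.0   # ranks 1..n (no tie handling)
```

    A parametric ANOVA applied to these ranks then tests the interaction; the published procedure extends this alignment to each source of variation in the three-way design.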

    Artificial Intelligence for breast cancer detection: Technology, challenges, and prospects

    Purpose: This review provides an overview of the current state of artificial intelligence (AI) technology for automated detection of breast cancer in digital mammography (DM) and digital breast tomosynthesis (DBT). It aims to discuss the technology, available AI systems, and the challenges faced by AI in breast cancer screening. Methods: The review examines the development of AI technology in breast cancer detection, focusing on deep learning (DL) techniques and their differences from traditional computer-aided detection (CAD) systems. It discusses data pre-processing, learning paradigms, and the need for independent validation approaches. Results: DL-based AI systems have shown significant improvements in breast cancer detection. They have the potential to enhance screening outcomes, reduce false negatives and positives, and detect subtle abnormalities missed by human observers. However, challenges like the lack of standardised datasets, potential bias in training data, and regulatory approval hinder their widespread adoption. Conclusions: AI technology has the potential to improve breast cancer screening by increasing accuracy and reducing radiologist workload. DL-based AI systems show promise in enhancing detection performance and eliminating variability among observers. Standardised guidelines and trustworthy AI practices are necessary to ensure fairness, traceability, and robustness. Further research and validation are needed to establish clinical trust in AI. Collaboration between researchers, clinicians, and regulatory bodies is crucial to address challenges and promote AI implementation in breast cancer screening.

    Gender differences in the perception of facial attractiveness for faces of both sexes

    Quartes Jornades de Foment de la Investigació de la FCHS (1998-1999). In the study of facial attractiveness perception, little research includes both sexes in both the face sample and the observer sample. The present work aims to describe the attractiveness profiles for each sex and the differences between raters' genders in the perception of facial attractiveness. Attractiveness ratings were collected from a sample of 20 participants (10 male, 10 female) for two groups of 32 faces, each presented for 1 s on a computer screen. The faces were constructed by combining the features jaw length and jaw width, hair type, eye size and lip shape, with two levels per feature. Attractiveness was rated on a 5-point scale (not at all attractive to very attractive). A reliability analysis of the attractiveness ratings was performed. The results show that, for both genders, the attractive profile for female faces corresponds to a short jaw, light straight hair and full lips, with men adding a narrow jaw. For male faces, men perceive a long jaw and full lips as attractive, whereas women perceive the attractive male profile as full lips and light, straight hair. Gender differences are found only for lip shape. Facial attractiveness has been approached through numerous related topics: cultural differences (Cunningham, Roberts, Barbee, Druen and Wu, 1995; Buss, 1989; Zebrowitz, 1993), the influence of facial asymmetry (Gangestad, Thornhill and Yeo, 1994; Grammer and Thornhill, 1994), averaged faces as a criterion of attractiveness (Perrett, May and Yoshikawa, 1994; Johnston and Franklin, 1993), and face recognition as a function of attractiveness (e.g., Sarno and Alley, 1997). For a good review of research on face perception, the work of V. Bruce is recommended (especially Bruce and Young, 1998).
    Regarding the analysis of the features relevant to an attractive face, two kinds of theories can be distinguished: descriptive and explanatory. The descriptive theories are represented by Cunningham (1986; 1995) through the Multiple Adaptive Model proposed in his work. Cunningham and colleagues present a descriptive pattern of features along five parameters: neonate features, which would include large eyes, a small nose, a short jaw, and smooth, soft skin; sexual-maturity features, which would relate to prominent cheekbones in women and a long jaw in men; senescence features, associated with white hair and baldness, for example; expressive features, corresponding to full lips and high eyebrows; and, finally, grooming features, which refer to hairstyle, weight, figure, use of cosmetics, tattoos, and so on. According to the Multiple Adaptive Model, the first three groups of features depend on biological factors, and the rest on personal and social factors. In short, Cunningham postulates that the interaction between neonate, sexual-maturity and expressive features is associated with a perception of greater attractiveness. The explanatory theories are headed by the Sexual Selection Hypothesis of Johnston and Franklin (1993), which postulates that attractive features, such as a short jaw, function as indicators of high fertility. This reproductive value is linked to the concept of natural selection: beauty is a functional attribute that contributes to the survival of the individual's genes. A review of the literature on facial attractiveness reveals a widespread omission of male faces as rated stimuli.
    A study linking attractiveness to the P300 component of the evoked brain potential allows conclusions about the attractive male profile in a North American sample (Oliver, Guan and Johnston, 1999), in which gender differences also appear in the perception of hair, lips, and jaw length and width. The present research aims to contrast the attractiveness judgments of that study using a Spanish sample and the same set of stimuli. Here, the profiles of male and female faces will be described through separate analyses for men and women, rather than a joint analysis with gender as a factor. The stability of the measures will also be assessed through a reliability analysis. The hypotheses for the first objective are defined by an attractive female-face profile with large eyes, a short jaw (Cunningham, 1986; 1995) and full lips (Johnston, 1993). For male faces, the expected profile is limited to a long jaw (Cunningham, 1986; 1995). The hypotheses concerning gender differences follow those of Oliver and colleagues (1999); that is, the perception of attractiveness in hair, lips and jaw length will differ significantly depending on the rater's gender.

    On the Numerical Modelling of Machining Processes via the Particle Finite Element Method (PFEM)

    Metal cutting, or machining, is a process in which a thin layer of metal, the chip, is removed by a wedge-shaped tool from a larger body. Metal cutting processes are present in large industries (automotive, aerospace, home appliances, etc.) that manufacture large products, but also in high-tech industries where small but high-precision parts are needed. Machining is so important that it is the most common manufacturing process for producing parts with specified geometrical dimensions and surface finish; its cost represents 15% of the value of all manufactured products in industrialized countries. Cutting is a complex physical phenomenon in which friction, adiabatic shear bands, excessive heating, large strains and high strain rates are present. Tool geometry, rake angle and cutting speed play an important role in chip morphology, cutting forces, energy consumption and tool wear. The study of metal cutting is difficult from an experimental point of view because of the high speed at which it takes place under industrial machining conditions (experiments are difficult to carry out), the small scale of the phenomena to be observed, and the continuous development of tool and workpiece materials and of tool geometries, among other reasons. Simulating machining processes, in which the workpiece material is highly deformed, is a major challenge for the finite element method (FEM). The principal problem with a conventional FE model using a Lagrangian mesh is mesh distortion under high deformation: traditional Lagrangian approaches such as the FEM cannot resolve large deformations very well, and element distortion has always been a matter of concern, limiting some studies to incipient chip formation. Conversely, FEM with an Eulerian formulation requires the chip geometry to be known in advance, which undoubtedly restricts the range of cutting conditions that can be analyzed.
    Furthermore, serrated and discontinuous chip formation cannot be simulated. The main objective of this work is precisely to contribute to solving some of the problems described above through the extension of the Particle Finite Element Method (PFEM) to thermo-mechanical problems in solid mechanics that involve large strains and rotations, multiple contacts and the generation of new surfaces, with the main focus on the numerical simulation of metal cutting processes. In this work, we exploit the particle and Lagrangian nature of the PFEM and the advantages of finite element discretization to simulate the different chip shapes (continuous and serrated) that appear when cutting materials such as steel and titanium at different cutting speeds. The new ingredients of the PFEM are focused on the insertion and removal of particles, the use of a constrained Delaunay triangulation, and a novel transfer operator for the internal variables. The removal and insertion of particles circumvent the difficulties associated with element distortion, allowing the separation of chip and workpiece without using a physical or geometrical criterion. The constrained Delaunay triangulation improves mass conservation and the chip shape throughout the simulation, and the transfer operator allows us to minimize the error due to numerical diffusion. The thermo-mechanical problem, formulated in the framework of continuum mechanics, is integrated using an isothermal split in conjunction with implicit, semi-explicit and IMPLEX schemes. The tool has been discretized using a standard three-node triangular finite element. The workpiece has been discretized using a mixed displacement-pressure finite element to deal with the incompressibility constraint imposed by plasticity. The mixed finite element has been stabilized using the Polynomial Pressure Projection (PPP), initially applied in the literature to the Stokes equation in the field of fluid mechanics. The behavior of the tool is described using a Neo-Hookean hyperelastic constitutive model.
    The behavior of the workpiece is described using a rate-dependent, isotropic, finite-strain J2 elastoplasticity model with three different yield functions (Simo, Johnson-Cook, Baker) used to describe the strain hardening, strain-rate hardening and thermal softening of different materials under a wide variety of cutting conditions. The friction at the tool-chip interface is modeled using the Norton-Hoff friction law. The heat transfer at the tool-chip interface includes heat transfer due to conduction and friction. To validate the proposed mixed displacement-pressure formulation, we present three benchmark problems: the plane strain Cook's membrane, the Taylor impact test and a thermo-mechanical traction test. The isothermal-IMPLEX split presented in this work has been validated using a thermo-mechanical traction test. In addition, in order to explore the possibilities of the numerical model as a tool for assisting in the design and analysis of metal cutting processes, a set of representative numerical simulations is presented, among them: cutting using a rate-independent yield function, cutting using different rake angles, cutting with a deformable tool and a frictionless approach, cutting with a deformable tool including friction and heat transfer, and the transition from continuous to serrated chip formation with increasing cutting speed. We have assembled several numerical techniques that enable the simulation of orthogonal cutting processes. Our simulations demonstrate the ability of the PFEM to predict chip morphologies consistent with experimental observations. Our results also show that a suitable selection of the global time integration scheme can reduce computation time by a factor of up to 9. Furthermore, this work presents a sensitivity analysis of cutting conditions by means of a Design of Experiments (DoE).
    The Design of Experiments carried out with the PFEM has been compared with DoEs carried out with AdvantEdge, Deform and Abaqus, and with experiments. The results obtained with the PFEM and the other numerical simulations are very similar, whereas a comparison of numerical simulations and experiments shows some differences in the output variables that depend on friction phenomena. The results suggest that it is necessary to improve the modelling of friction at the tool-chip interface.
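    Of the three yield functions mentioned, the Johnson-Cook flow stress is the most widely documented and can be sketched directly: a strain-hardening term times a strain-rate term times a thermal-softening term. The material constants below are illustrative placeholders, not the values used in the thesis.

```python
import math

def johnson_cook_stress(eps, eps_dot, temp, *,
                        A, B, n, C, m, eps_dot0, t_room, t_melt):
    """Johnson-Cook flow stress:
    (A + B*eps^n) * (1 + C*ln(eps_dot/eps_dot0)) * (1 - T*^m),
    with T* the homologous temperature. Strain rates below the
    reference eps_dot0 are clamped so the log term stays >= 0."""
    t_star = (temp - t_room) / (t_melt - t_room)
    return ((A + B * eps**n)
            * (1.0 + C * math.log(max(eps_dot / eps_dot0, 1.0)))
            * (1.0 - t_star**m))
```

    The competition between the rate-hardening and thermal-softening factors is what drives the transition from continuous to serrated chips as the cutting speed increases.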

    The subcommissural organ of the rat secretes Reissner's fiber glycoproteins and CSF-soluble proteins reaching the internal and external CSF compartments

    Background: The subcommissural organ (SCO) is a highly conserved brain gland present throughout the vertebrate phylum; it secretes glycoproteins into the cerebrospinal fluid (CSF), where they aggregate to form Reissner's fiber (RF). SCO-spondin is the major constituent protein of RF. Evidence exists that the SCO also secretes proteins that remain soluble in the CSF. The aims of the present investigation were: (i) to identify and partially characterize the SCO-secretory compounds present in the SCO gland itself and in the RF of the Sprague-Dawley rat and non-hydrocephalic hyh mouse, and in the CSF of the rat; (ii) to make a comparative analysis of the proteins present in these three compartments; (iii) to identify the proteins secreted by the SCO into the CSF at different developmental periods.
    Methods: The proteins of the SCO secreted into the CSF were studied (i) by injecting specific antibodies into the ventricular CSF in vivo; (ii) by immunoblots of SCO, RF and CSF samples, using specific antibodies against the SCO secretory proteins (AFRU and anti-P15). In addition, the glycosylated nature of SCO compounds was analysed by concanavalin A and wheat germ agglutinin binding. To analyse RF glycoproteins, RF was extracted from the central canal of juvenile rats and mice; to investigate the CSF-soluble proteins secreted by the SCO, CSF samples were collected from the cisterna magna of rats at different stages of development (from E18 to PN30).
    Results: Five glycoproteins were identified in the rat SCO, with apparent molecular weights of 630, 450, 390, 320 and 200 kDa. With the exception of the 200-kDa compound, all compounds present in the rat SCO were also present in the mouse SCO. The 630- and 390-kDa compounds of the rat SCO have affinity for concanavalin A but not for wheat germ agglutinin, suggesting that they correspond to precursor forms. Four of the AFRU-immunoreactive compounds present in the SCO (630, 450, 390, 320 kDa) were absent from the RF and CSF; these may be precursor and/or partially processed forms. Two other compounds (200 and 63 kDa) were present in the SCO, RF and CSF and may be processed forms. The presence of these proteins in both RF and CSF suggests a steady-state RF/CSF equilibrium for these compounds. Eight AFRU-immunoreactive bands were consistently found in CSF samples from rats at E18, E20 and PN1. Only four of these compounds were detected in the cisternal CSF of PN30 rats. The 200-kDa compound appears to be a key compound in rats, since it was consistently found in all samples of SCO, RF and embryonic and juvenile CSF.
    Conclusion: It is concluded that (i) during late embryonic life, the rat SCO secretes compounds that remain soluble in the CSF and reach the subarachnoid space; (ii) during postnatal life, there is a reduction in the number and concentration of CSF-soluble proteins secreted by the SCO. The molecular structure and functional significance of these proteins remain to be elucidated. The possibility that they are involved in brain development is discussed.

    Urban Air Quality Modelling using Finite Elements

    Urban air quality simulation requires models with different characteristics from those used at the mesoscale or microscale. The spatial discretisation resolution is one of them: urban geometries require smaller elements than those of other scales. Meshes for this kind of geometry are generated using the Meccano method, a mesh generator that has produced high-quality meshes of complex geometries [1]. In this work, we have added capabilities to insert buildings into the mesh while maintaining element quality. The wind field should also be suitable for the urban scale; to this end, we use a mass-consistent model [2]. This approximation has performed efficiently in microscale problems, coupled with mesoscale numerical weather prediction models. Finally, an adaptive finite element method is used to simulate the convection-diffusion-reaction equation [3, 4]. The problem can be convection-dominant, so it is stabilised using a least-squares finite element method. The resulting matrix is symmetric and is solved using the Conjugate Gradient method preconditioned with an incomplete Cholesky factorisation. The model is applied to the city of Las Palmas de Gran Canaria.
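    The solver stage described above can be sketched with a preconditioned conjugate gradient loop on a small SPD model problem. This is a NumPy-only illustration: a Jacobi (diagonal) preconditioner stands in for the incomplete Cholesky factorisation used in the paper, and a 1-D Laplacian stands in for the stabilised least-squares FEM matrix.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient for a symmetric positive
    definite matrix A. M_inv_diag holds the inverse diagonal of the
    (Jacobi) preconditioner, a simple stand-in for an incomplete
    Cholesky factor."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD model problem: 1-D Laplacian as a stand-in for the symmetric
# least-squares FEM system matrix
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
```

    In the actual model the matrix would be sparse and the incomplete Cholesky factor would be applied by forward/backward substitution, but the iteration structure is the same.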

    Practical values and uncertainty in regulatory decision making

    Regulatory science, which generates knowledge relevant for regulatory decision-making, is different from standard academic science in that it is oriented mainly towards the attainment of non-epistemic (practical) aims. The role of uncertainty and the limits to the relevance of academic science are being recognized more and more explicitly in regulatory decision-making. This has led to the introduction of regulation-specific scientific methodologies in order to generate decision-relevant data. However, recent practical experience with such non-standard methodologies indicates that they, too, may be subject to important limitations. We argue that the attainment of non-epistemic values and aims (like the protection of human health and the environment) requires not only control of the quality of the data and the methodologies, but also the selection of the level of regulation deemed adequate in each specific case (including a decision about which of the two, under-regulation or over-regulation, would be more acceptable).