12 research outputs found

    RTVE’s transmedia strategy aimed at young audiences: the case of Playz (2017-2020)

    Get PDF
    Generally speaking, audio-visual consumption is changing. More specifically, in recent years young people have increased their viewing of content through the Internet, often supplied through online platforms. This article focuses on one such platform, Playz, which is one of the key strategies being used by Spanish public television (RTVE) as it aims to reconnect with the new generations through transmedia narratives. Based on studies undertaken by Costa Sánchez (2013) and Cascajosa-Virino (2018), a content analysis of the series broadcast on Playz between 2017 and 2020 has been carried out, taking into account the theme, duration, year of release, and number of episodes or seasons of each of them. This study confirms a clear interest by this platform in generating products that are innovative from their very conception. In fact, a large number of transmedia strategies have been identified, such as episodes turned into films, a high degree of interactivity with the audience, original music videos, promotional events and more, which is in line with Playz’s public service obligation to reach out to all types of audiences through all the platforms available to them.

    Design and execution of a verification, validation, and uncertainty quantification plan for a numerical model of left ventricular flow after LVAD implantation

    Get PDF
    BACKGROUND: Left ventricular assist devices (LVADs) are implantable pumps that act as a life support therapy for patients with severe heart failure. Despite improving the survival rate, LVAD therapy can carry major complications. In particular, the flow distortion introduced by the LVAD in the left ventricle (LV) may induce thrombus formation. While previous works have used numerical models to study the impact of multiple variables on the intra-LV stagnation regions, a comprehensive validation analysis has never been executed. The main goal of this work is to present a model of the LV-LVAD system and to design and follow a verification, validation and uncertainty quantification (VVUQ) plan based on the ASME V&V40 and V&V20 standards to ensure credible predictions.
    METHODS: The experiment used to validate the simulation is the SDSU cardiac simulator, a bench mock-up of the cardiovascular system that allows mimicking multiple operating conditions for the heart-LVAD system. The numerical model is based on Alya, the BSC’s in-house platform for numerical modelling. Alya solves the Navier-Stokes equations with an Arbitrary Lagrangian-Eulerian (ALE) formulation in a deformable ventricle and includes pressure-driven valves, a 0D Windkessel model for the arterial output and an LVAD boundary condition modelled through a dynamic pressure-flow performance curve. The designed VVUQ plan involves: (a) a risk analysis and the associated credibility goals; (b) a verification stage to ensure correctness of the numerical solution procedure; (c) a sensitivity analysis to quantify the impact of the inputs on the four quantities of interest (QoIs): average aortic root flow, maximum aortic root flow, average LVAD flow, and maximum LVAD flow; (d) an uncertainty quantification using six validation experiments that include extreme operating conditions.
    RESULTS: Numerical code verification tests ensured correctness of the solution procedure, and numerical calculation verification showed a grid convergence index (GCI95%) below 3.3%. The total Sobol indices obtained during the sensitivity analysis demonstrated that the ejection fraction, the heart rate, and the pump performance curve coefficients are the most impactful inputs for the analysed QoIs. The Minkowski norm is used as the validation metric for the uncertainty quantification; it shows that the midpoint cases yield more accurate results than the extreme cases. The total computational cost of the simulations was above 100 core-years, executed over a span of around three weeks on the MareNostrum IV supercomputer.
    CONCLUSIONS: This work details a novel numerical model of the LV-LVAD system, supported by the design and execution of a VVUQ plan created following recognised international standards. We present a methodology demonstrating that stringent VVUQ according to ASME standards is feasible but computationally expensive.
    This project was funded in part by the FDA Critical Path Initiative and by an appointment to the Research Participation Program at the Division of Biomedical Physics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and FDA to RAG. MV and AS acknowledge funding from the projects CompBioMed2 (H2020-EU.1.4.1.3, grant number 823712), SilicoFCM (H2020-EU.3.1.5, grant number 777204), and NEOTEC 2019 - "Generador de Corazones Virtuales" (“Ministerio de Economía y Competitividad”, EXP - 00123159 / SNEO-20191113). AS’s salary is partially funded by the “Ministerio de Economía y Competitividad” under the Torres Quevedo Program (grant number PTQ2019-010528). CB’s salary is partially funded by the Torres Quevedo Program (grant number PTQ2018-010290). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Peer Reviewed. Postprint (published version).
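
    The numerical calculation verification above is summarized by a grid convergence index. Purely as an illustration of how such a metric is obtained, the following Python sketch estimates an observed order of accuracy and a fine-grid GCI via Richardson extrapolation from three systematically refined meshes; the function, the flow values, and the refinement ratio are invented placeholders, not data or code from the study.

```python
import math

def grid_convergence_index(f_coarse, f_medium, f_fine, r, safety_factor=1.25):
    """Observed order of accuracy and fine-grid GCI from three solutions on
    systematically refined meshes (constant refinement ratio r)."""
    # Observed order of accuracy via Richardson extrapolation
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    # Relative difference between the two finest solutions
    e_fine = abs((f_medium - f_fine) / f_fine)
    # GCI for the fine mesh; 1.25 is the usual safety factor with three meshes
    gci_fine = safety_factor * e_fine / (r**p - 1.0)
    return p, gci_fine

# Hypothetical values of one quantity of interest (e.g. mean flow in L/min)
p, gci = grid_convergence_index(f_coarse=4.62, f_medium=4.71, f_fine=4.74, r=2.0)
print(f"observed order p = {p:.2f}, fine-grid GCI = {100.0 * gci:.2f}%")
```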

    Domain decomposition methods for domain composition purpose: Chimera, overset, gluing and sliding mesh methods

    Get PDF
    Domain composition methods (DCM) consist in obtaining a solution to a problem from the formulations of the same problem expressed on various subdomains. These methods therefore have the opposite objective to domain decomposition methods (DDM). Indeed, in contrast to DCM, these latter techniques are usually applied to matching meshes, as their purpose consists mainly in distributing the work in parallel environments. However, they are sometimes based on the same methodology since, after decomposing, DDM have to recompose. As a consequence, in the literature the term DDM has often been substituted for DCM. DCM are powerful techniques that can be used for different purposes: to simplify the meshing of a complex geometry by decomposing it into different meshable pieces; to perform local refinement to adapt to local mesh requirements; to treat subdomains in relative motion (Chimera, sliding mesh); to solve multiphysics or multiscale problems, etc. The term DCM is generic and does not give any clue about how the fragmented solutions on the different subdomains are composed into a global one. In the literature, many methodologies have been proposed: they are mesh-based, equation-based, or algebraic-based. In mesh-based formulations, the coupling is achieved at the mesh level, before the governing equations are assembled into an algebraic system (mesh conforming, Shear-Slip Mesh Update, HERMESH). The equation-based counterpart recomposes the solution from the strong or weak formulation itself, and is implemented during the assembly of the algebraic system on the subdomain meshes. The different coupling techniques can be formulated for the strong formulation at the continuous level, or for the weak formulation either at the continuous or at the discrete level (iteration-by-subdomains, mortar element, mesh-free interpolation). Although the different methods usually lead to the same solutions at the continuous level, which usually coincide with the solution of the problem on the original domain, they have very different behaviors at the discrete level and can be implemented in many different ways. Finally, algebraic-based formulations treat the composition of the solutions directly on the matrix and right-hand side of the individual subdomain algebraic systems. The present work introduces mesh-based, equation-based and algebraic-based DCM. However, it focuses on algebraic-based domain composition methods, which have many advantages with respect to the others: they are relatively problem independent; their implicit implementation can be hidden in the iterative solver operations, which enables one to avoid intensive code rewriting; and they can be implemented in a multi-code environment.
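
    A minimal way to see what an iteration-by-subdomains composition does is to alternate subdomain solves of a toy problem while exchanging interface values. The Python sketch below applies such an alternating, Schwarz-type scheme to a 1D Poisson problem split into two overlapping finite-difference meshes; it is only a didactic sketch under these assumptions, not the algebraic implementation discussed above, and all names in it are illustrative.

```python
import numpy as np

def poisson_matrix(n, h):
    """Tridiagonal finite-difference matrix for -u'' on n interior nodes."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

# Global problem: -u'' = 1 on (0, 1), u(0) = u(1) = 0, exact u = x(1 - x)/2
N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.ones(N)

# Two overlapping subdomains sharing the nodes in [0.4, 0.6]
i1_end = 60   # interface node of subdomain 1 (x = 0.6)
i2_beg = 40   # interface node of subdomain 2 (x = 0.4)

u = np.zeros(N)                            # composed global solution
A1 = poisson_matrix(i1_end - 1, h)         # interior nodes 1 .. i1_end-1
A2 = poisson_matrix(N - i2_beg - 2, h)     # interior nodes i2_beg+1 .. N-2

for it in range(50):
    # Subdomain 1 solve, Dirichlet data at its interface taken from subdomain 2
    b1 = f[1:i1_end].copy()
    b1[-1] += u[i1_end] / h**2
    u[1:i1_end] = np.linalg.solve(A1, b1)

    # Subdomain 2 solve, Dirichlet data at its interface taken from subdomain 1
    b2 = f[i2_beg + 1:N - 1].copy()
    b2[0] += u[i2_beg] / h**2
    u[i2_beg + 1:N - 1] = np.linalg.solve(A2, b2)

exact = 0.5 * x * (1.0 - x)
print("max error of composed solution:", np.max(np.abs(u - exact)))
```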

    Postverbal subjects in Romance and German: Some notes on the Unaccusative Hypothesis

    No full text
    This study deals with the problems presented by postverbal subjects in constructions with unaccusative verbs, since they are apparent counterexamples to the explanatory power of the Unaccusative Hypothesis. The paper defends the position that such examples do not weaken the hypothesis, since it is assumed that such postverbal subjects ought to leave the original object position and move to a position where nominative case is regularly assigned. This account is supported by two kinds of considerations. On the one hand, there exists no inherent partitive case to justify the lack of movement; on the other, agreement facts and the distribution of expletive pronouns in different Romance languages and in German show that the postverbal position of subjects is nothing but the result of movements compatible with the assignment of nominative case.

    RTVE’s transmedia strategy aimed at young audiences: the case of Playz (2017-2020)

    No full text
    Generally speaking, audio-visual consumption is changing. More specifically, in recent years young people have increased their viewing of content through the Internet, often supplied through online platforms. This article focuses on one such platform, Playz, which is one of the key strategies being used by Spanish public television (RTVE) as it aims to reconnect with the new generations through transmedia narratives. Based on studies undertaken by Costa Sánchez (2013) and Cascajosa-Virino (2018), a content analysis of the series broadcast on Playz between 2017 and 2020 has been carried out, taking into account the theme, duration, year of release, and number of episodes or seasons of each of them. This study confirms a clear interest by this platform in generating products that are innovative from their very conception. In fact, a large number of transmedia strategies have been identified, such as episodes turned into films, a high degree of interactivity with the audience, original music videos, promotional events and more, which is in line with Playz’s public service obligation to reach out to all types of audiences through all the platforms available to them.

    Alya: computational solid mechanics for supercomputers

    Get PDF
    While solid mechanics codes are now conventional tools both in industry and research, the increasingly demanding requirements of both sectors are fuelling the need for more computational power and more advanced algorithms. For obvious reasons, commercial codes are lagging behind academic codes, which are often dedicated either to the implementation of one new technique or to the upscaling of current conventional codes to tackle massively large-scale computational problems. Only in a few cases have both approaches been followed simultaneously. In this article, a solid mechanics simulation strategy for parallel supercomputers based on a hybrid approach is presented. Hybrid parallelization exploits the thread-level parallelism of multicore architectures, combining MPI tasks with OpenMP threads. This paper describes the proposed strategy, programmed in Alya, a parallel multi-physics code. Hybrid parallelization is especially well suited to the current trend of supercomputers, namely large clusters of multicores. The strategy is assessed through transient non-linear solid mechanics problems, both for explicit and implicit schemes, running on thousands of cores. In order to demonstrate the flexibility of the proposed strategy under the advanced algorithmic evolution of computational mechanics, a non-local parallel overset mesh method (Chimera-like) is implemented and the preservation of scalability is demonstrated.
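
    The hybrid strategy described above combines MPI tasks across mesh partitions with OpenMP threads inside each task. The Python sketch below mimics that pattern, with mpi4py standing in for the MPI tasks and a thread pool standing in for the OpenMP threads over chunks of an element loop; it assumes mpi4py is available, the per-element "residual" is a made-up stand-in for assembly work, and none of it is Alya code.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_elements_global = 100_000
# Each MPI task owns a contiguous chunk of elements (its mesh partition)
local = np.array_split(np.arange(n_elements_global), size)[rank]

def element_residual(elems):
    """Stand-in for the per-element assembly work done by each thread."""
    return np.sum(np.sin(elems * 1.0e-3))

# Thread-level parallelism inside the task, analogous to the OpenMP layer
n_threads = 4
chunks = np.array_split(local, n_threads)
with ThreadPoolExecutor(max_workers=n_threads) as pool:
    local_residual = sum(pool.map(element_residual, chunks))

# The MPI layer reduces the partial residuals across tasks
global_residual = comm.allreduce(local_residual, op=MPI.SUM)
if rank == 0:
    print("global residual:", global_residual)
```

    Run, for example, with mpirun -n 4 python hybrid_sketch.py (the file name is arbitrary); each rank then splits its local elements across its own threads, mirroring the tasks-times-threads layout used on large clusters of multicores.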

    Concomitant respiratory failure can impair myocardial oxygenation in patients with acute cardiogenic shock supported by VA-ECMO

    Get PDF
    Venous-arterial extracorporeal membrane oxygenation (VA-ECMO) treatment for acute cardiogenic shock in patients who also have acute lung injury predisposes them to the development of a serious complication called “north-south syndrome” (NSS), which causes cerebral hypoxia. NSS is poorly characterized, and hemodynamic studies have focused on cerebral perfusion while ignoring the heart. We hypothesized that in NSS the heart would be more likely than the brain to receive hypoxemic blood, owing to the proximity of the coronary arteries to the aortic annulus. To test this, we conducted a computational fluid dynamics simulation of blood flow in a human supported by VA-ECMO. The simulations quantified the fraction of blood at each aortic branching vessel originating from the residual native cardiac output versus VA-ECMO. As residual cardiac function was increased, the simulations demonstrated that myocardial hypoxia would develop prior to cerebral hypoxia. These results illustrate the conditions under which NSS will develop and the relative cardiac function that will lead to organ-specific hypoxia. This work was supported in part by the University of Minnesota’s Medical School Academic Investment Education Program grant and the Institute for Engineering in Medicine. We also acknowledge the Partnership for Advanced Computing in Europe (PRACE) for awarding us access to the Joliot-Curie Rome supercomputer at Bruyères-le-Châtel, under the project Cardiovascular-COVID. Additionally, we would like to acknowledge the Torres Quevedo Program, the Ramón y Cajal Program, and the European Institute of Innovation and Technology for support. Peer Reviewed. Postprint (published version).
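
    The simulation above quantifies, for each aortic branch, the fraction of flow originating from the residual native output versus the VA-ECMO return. A crude way to build intuition for that competition, without any CFD, is a mass-balance "watershed" estimate in which branches nearest the aortic root are fed by the antegrade native (potentially hypoxemic) output until it is exhausted, after which the retrograde ECMO flow supplies the rest. The Python sketch below implements only that toy estimate; the branch names, flows, and splits are invented placeholders, not values from the study.

```python
def native_fraction_per_branch(native_output, branch_flows):
    """Crude watershed estimate of the share of each branch's flow that comes
    from the residual native cardiac output (entering antegrade at the aortic
    root) rather than the retrograde VA-ECMO return. Branches are ordered from
    the aortic root (coronaries first) outward."""
    fractions = {}
    remaining = native_output
    for name, q in branch_flows:
        supplied = min(remaining, q)    # native blood reaches proximal branches first
        fractions[name] = supplied / q
        remaining -= supplied
    return fractions

# Hypothetical branch flows in L/min, ordered from the aortic root outward
branches = [("coronary", 0.25), ("brachiocephalic", 0.9),
            ("left_carotid", 0.45), ("left_subclavian", 0.45)]

# As the residual native output grows, the coronaries are the first vessels
# fed almost entirely by (possibly hypoxemic) native blood
for native in (0.2, 0.8, 1.5):
    print(f"native output {native} L/min:", native_fraction_per_branch(native, branches))
```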

    Performance assessment of an electrostatic filter-diverter stent cerebrovascular protection device. Is it possible not to use anticoagulants in atrial fibrillation elderly patients?

    No full text
    Stroke is the second leading cause of death worldwide. Nearly two-thirds of strokes are produced by cardioembolisms, and half of cardioembolic strokes are triggered by Atrial Fibrillation (AF), the most common type of arrhythmia. A more recent cause of cardioembolisms is Transcatheter Aortic Valve Replacement (TAVR), which may give rise to post-procedural adverse events such as stroke and Silent Brain Infarcts (SBIs), for which no definitive treatment exists and which will only get worse as TAVRs are implanted in younger and lower-risk patients. It is well known that some specific characteristics of elderly patients may lower the safety and efficacy of anticoagulation therapy, which makes finding alternative therapies a real urgency. This work analyzes a design based on a patented medical device, consisting of a strut structure placed at the base of the treated artery and intended to block cardioembolisms from entering the cerebrovascular system, with a particular focus on AF, and potentially TAVR, patients; we model the potential risk of cerebral embolisms caused by dislodged debris of varying sizes. The study has been carried out in two stages, both based on computational fluid dynamics (CFD) coupled with a Lagrangian particle-tracking method. The first stage evaluates a variety of strut thicknesses and inter-strut spacings, contrasting them with the device-free baseline geometry. The analysis is carried out by imposing flowrate waveforms characteristic of both healthy and AF patients, with boundary conditions calibrated to reproduce physiological flowrates and pressures in a patient's aortic arch. In the second stage, the optimal geometric design from the first stage was employed, with the addition of lateral struts to prevent particles from filtering through and of electronegatively charged strut surfaces, in order to study the effect of electrical forces on the clots when they are considered charged. Flowrate boundary conditions were again used to emulate both healthy and AF conditions. Results from the numerical simulations of the first stage indicate that the device blocks particles of sizes larger than the inter-strut spacing; it was found that the lateral strut space had the highest impact on efficacy. Based on the results of the second stage, deploying the electronegatively charged device in all three aortic arch arteries reduced the number of particles entering these arteries on average by 62.6% and 51.2% for the healthy and diseased models, respectively, matching or surpassing current oral anticoagulant efficacy. In conclusion, the device demonstrated a two-fold mechanism for filtering emboli: the smallest particles are deflected by electrostatic repulsion, avoiding microembolisms that could lead to cognitive impairment, while the largest ones are mechanically filtered since they cannot fit between the struts, effectively blocking the full range of particle sizes analyzed in this study. The device presented in this manuscript offers an anticoagulant-free method to prevent stroke and SBIs, which is imperative given the growing population of AF and elderly patients.
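
    The two filtering mechanisms described above, mechanical blocking by the inter-strut gap and electrostatic deflection of small charged clots, can be caricatured with a one-dimensional force balance on a single particle carried toward a charged strut plane by Stokes drag. The Python sketch below does exactly that and nothing more; the geometry, charges, flow speed, and clot properties are invented placeholders, and this is not the CFD and Lagrangian model used in the study.

```python
import numpy as np

# Illustrative physical constants and material values (not from the study)
MU = 3.5e-3          # blood dynamic viscosity [Pa s]
K_COULOMB = 8.99e9   # Coulomb constant [N m^2 / C^2]
RHO_CLOT = 1.1e3     # clot density [kg/m^3]

def is_blocked(diameter, gap, q_clot, q_strut, u_flow=0.3,
               start_dist=5e-3, dt=1e-6, t_max=0.05):
    """Toy 1D check of the two filtering mechanisms: mechanical blocking when
    the clot cannot fit through the inter-strut gap, and electrostatic
    deflection when like charges repel it before it reaches the struts."""
    if diameter >= gap:
        return "mechanical"                     # cannot fit between struts
    radius = diameter / 2
    mass = RHO_CLOT * 4.0 / 3.0 * np.pi * radius**3
    x, v = -start_dist, u_flow                  # strut plane sits at x = 0
    for _ in range(int(t_max / dt)):
        dist = -x                               # distance to the strut plane
        if dist <= radius:
            return "not blocked"                # clot reached the struts
        drag = 6 * np.pi * MU * radius * (u_flow - v)        # Stokes drag
        repulsion = -K_COULOMB * q_clot * q_strut / dist**2  # pushes clot back
        v += dt * (drag + repulsion) / mass
        x += dt * v
        if v <= 0.0:
            return "electrostatic"              # repelled before contact
    return "not blocked"

# Hypothetical geometry and charges: 0.5 mm gap, negatively charged surfaces
for d in (0.1e-3, 0.3e-3, 0.8e-3):
    print(f"clot {d*1e3:.1f} mm:", is_blocked(d, gap=0.5e-3,
                                              q_clot=-2e-11, q_strut=-2e-11))
```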

    Domain Decomposition Methods for Domain Composition Purpose: Chimera, Overset, Gluing and Sliding Mesh Methods

    No full text
    The final publication is available at link.springer.com via http://dx.doi.org/10.1007/s11831-016-9198-8
    Domain composition methods (DCM) consist in obtaining a solution to a problem from the formulations of the same problem expressed on various subdomains. These methods therefore have the opposite objective to domain decomposition methods (DDM). Indeed, in contrast to DCM, these latter techniques are usually applied to matching meshes, as their purpose consists mainly in distributing the work in parallel environments. However, they are sometimes based on the same methodology since, after decomposing, DDM have to recompose. As a consequence, in the literature the term DDM has often been substituted for DCM. DCM are powerful techniques that can be used for different purposes: to simplify the meshing of a complex geometry by decomposing it into different meshable pieces; to perform local refinement to adapt to local mesh requirements; to treat subdomains in relative motion (Chimera, sliding mesh); to solve multiphysics or multiscale problems, etc. The term DCM is generic and does not give any clue about how the fragmented solutions on the different subdomains are composed into a global one. In the literature, many methodologies have been proposed: they are mesh-based, equation-based, or algebraic-based. In mesh-based formulations, the coupling is achieved at the mesh level, before the governing equations are assembled into an algebraic system (mesh conforming, Shear-Slip Mesh Update, HERMESH). The equation-based counterpart recomposes the solution from the strong or weak formulation itself, and is implemented during the assembly of the algebraic system on the subdomain meshes. The different coupling techniques can be formulated for the strong formulation at the continuous level, or for the weak formulation either at the continuous or at the discrete level (iteration-by-subdomains, mortar element, mesh-free interpolation). Although the different methods usually lead to the same solutions at the continuous level, which usually coincide with the solution of the problem on the original domain, they have very different behaviors at the discrete level and can be implemented in many different ways. Finally, algebraic-based formulations treat the composition of the solutions directly on the matrix and right-hand side of the individual subdomain algebraic systems. The present work introduces mesh-based, equation-based and algebraic-based DCM. However, it focuses on algebraic-based domain composition methods, which have many advantages with respect to the others: they are relatively problem independent; their implicit implementation can be hidden in the iterative solver operations, which enables one to avoid intensive code rewriting; and they can be implemented in a multi-code environment.