241 research outputs found

    Models of collective cell motion for cell populations with different aspect ratio: diffusion, proliferation & travelling waves

    Continuum partial differential equation models are often used to describe the collective motion of cell populations, with various types of motility represented by the choice of diffusion coefficient, and cell proliferation captured by the source terms. Previously, the choice of diffusion coefficient has been largely arbitrary, with the decision to choose a particular linear or nonlinear form generally based on calibration arguments rather than on any physical connection with the underlying individual-level properties of the cell motility mechanism. In this work we provide a new link between individual-level models, which account for important cell properties such as varying cell shape and volume exclusion, and population-level partial differential equation models. We work in an exclusion process framework, considering aligned, elongated cells that may occupy more than one lattice site, in order to represent populations of agents with different sizes. Three different idealisations of the individual-level mechanism are proposed, and these are connected to three different partial differential equations, each with a different diffusion coefficient: one linear, one nonlinear and degenerate, and one nonlinear and nondegenerate. We test the ability of these three models to predict the population-level response of a cell spreading problem for both proliferative and nonproliferative cases. We also explore the potential of our models to predict long time travelling wave invasion rates and extend our results to two-dimensional spreading and invasion. Our results show that each model can accurately predict density data for nonproliferative systems, but that only one does so for proliferative systems. Hence, great care must be taken when predicting density data for populations with varying cell shape.
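The population-level models compared above can be explored numerically. A minimal sketch in Python, assuming a one-dimensional reaction-diffusion equation u_t = (D(u) u_x)_x + lam*u*(1-u) solved with an explicit finite-difference scheme; the three diffusion coefficients below are illustrative stand-ins for the linear, degenerate nonlinear, and nondegenerate nonlinear forms, not the coefficients derived in the paper:

```python
import numpy as np

def spread(D, lam=0.0, L=100.0, N=201, T=10.0, dt=0.001):
    """Explicit finite-difference solve of u_t = (D(u) u_x)_x + lam*u*(1-u)
    on [0, L], starting from a central block of cells at density 0.5."""
    x = np.linspace(0.0, L, N)
    dx = x[1] - x[0]
    u = np.where(np.abs(x - L / 2.0) < 10.0, 0.5, 0.0)  # initial cell density
    for _ in range(int(T / dt)):
        # diffusivity evaluated at cell faces (arithmetic average of neighbours)
        Df = D(0.5 * (u[1:] + u[:-1]))
        flux = Df * (u[1:] - u[:-1]) / dx        # D(u) u_x at each face
        du = np.zeros_like(u)
        du[1:-1] = (flux[1:] - flux[:-1]) / dx   # divergence of the flux
        u = u + dt * (du + lam * u * (1.0 - u))  # diffusion + logistic growth
    return x, u

# Three illustrative diffusion coefficients (placeholders, not the paper's
# derived forms): linear, nonlinear degenerate, nonlinear nondegenerate.
linear     = lambda u: 1.0
degenerate = lambda u: u          # D(0) = 0 gives sharp-fronted spreading
nondegen   = lambda u: 1.0 + u
```

Comparing the spreading profiles produced by the three coefficients, with and without the proliferation term (lam > 0), mirrors the kind of model comparison described in the abstract.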

    SAM-SoS: A stochastic software architecture modeling and verification approach for complex System-of-Systems

    A System-of-Systems (SoS) is a complex, dynamic system whose Constituent Systems (CSs) are not known precisely at design time, and the environment in which they operate is uncertain. SoS behavior is unpredictable due to underlying architectural characteristics such as autonomy and independence. Although the stochastic composition of CSs is vital to achieving SoS missions, their unknown behaviors and impact on system properties are unavoidable. Moreover, unknown conditions and volatility have significant effects on crucial Quality Attributes (QAs) such as performance, reliability and security. Hence, the structure and behavior of an SoS must be modeled and validated quantitatively to foresee any potential impact on the properties critical for achieving the missions. Current modeling approaches lack the essential syntax and semantics required to model and verify SoS behaviors at design time and cannot offer alternative design choices for better design decisions. Therefore, the majority of existing techniques fail to provide qualitative and quantitative verification of SoS architecture models. Consequently, we have proposed an approach to model and verify Non-Deterministic (ND) SoS in advance by extending the current algebraic notations for the formal models into a hybrid stochastic formalism to specify and reason about architectural elements with the required semantics. A formal stochastic model is developed using a hybrid approach for architectural descriptions of SoS with behavioral constraints. Through a model-driven approach, stochastic models are then translated into PRISM using formal verification rules. The effectiveness of the approach has been tested with an end-to-end case study design of an emergency response SoS for dealing with a fire situation. Architectural analysis is conducted on the stochastic model, using various qualitative and quantitative measures for SoS missions.
Experimental results reveal critical aspects of the SoS architecture model that, under the proposed approach, facilitate better achievement of missions and QAs through improved design.
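The kind of quantitative query that the PRISM translation enables can be illustrated with a toy model. The sketch below uses a hypothetical discrete-time Markov chain loosely inspired by the fire-response case study (the states and probabilities are invented for illustration, not taken from the paper) and computes the probability of eventually reaching the mission-success state, the analogue of a PRISM reachability property such as `P=? [ F "contained" ]`:

```python
import numpy as np

# Toy discrete-time Markov chain for a fire-response SoS (illustrative
# states and numbers only): 0 = fire detected, 1 = units dispatched,
# 2 = fire contained (mission success), 3 = mission failed.
# P[i, j] is the probability of moving from state i to state j.
P = np.array([
    [0.05, 0.90, 0.00, 0.05],   # detection: dispatch usually succeeds
    [0.00, 0.20, 0.70, 0.10],   # dispatched: contain, retry, or fail
    [0.00, 0.00, 1.00, 0.00],   # contained (absorbing)
    [0.00, 0.00, 0.00, 1.00],   # failed (absorbing)
])

def reach_prob(P, target, iters=200):
    """Fixed-point iteration for the probability of eventually reaching
    `target` from each state (what a probabilistic model checker computes
    for an unbounded reachability property)."""
    p = np.zeros(len(P))
    p[target] = 1.0
    for _ in range(iters):
        p = P @ p
        p[target] = 1.0   # the target remains reached with probability 1
    return p

p = reach_prob(P, target=2)   # p[0] = mission-success probability from start
```

Tools such as PRISM solve the same equations exactly (and over nondeterministic MDP models, which this scalar iteration does not cover), but the sketch shows the quantity being verified.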

    The influence of metabolic profile of obese men on the severity of erectile dysfunction: Are metabolically healthy obese individuals protected?

    Objective: To determine the prevalence of erectile dysfunction (ED) in metabolically healthy obese (MHO) individuals, and to compare ED severity and hypogonadism prevalence in MHO, metabolically unhealthy obese (MUO) and metabolically healthy non-obese individuals. Materials and methods: ED patients (n=460) were evaluated by a standardized protocol that included clinical evaluation, the abridged 5-item version of the International Index of Erectile Function (IIEF-5) questionnaire, and a Penile Duplex Doppler Ultrasound (PDDU) exam. Patients were classified as obese [body mass index (BMI) ≥30.0 kg/m²] or non-obese (BMI <30.0 kg/m²), and metabolic health status was defined by National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) criteria. Statistical significance was set at p<0.05. Results: The mean age of the subjects was 56.2±10.5 years. MHO was present in 40% of obese individuals (n=37). MUO patients had a lower mean peak systolic velocity (mPSV) than MHO patients (28.1 cm/s vs. 36.9 cm/s; p=0.005), and IIEF-5 scores were also lower in MUO than in MHO patients (10.2 vs. 13.1; p=0.018). No statistical differences in IIEF-5 score, mPSV or hypogonadism prevalence were observed between MHO and metabolically healthy non-obese (MHNO) patients. Conclusion: Our results lead us to conclude that a healthy metabolic profile protects obese individuals from severe ED. The strong association between obesity and ED may otherwise be attributed to the metabolic abnormalities present in the obese.
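The grouping used in the study (obesity by BMI cut-off, metabolic health by NCEP ATP III criteria) can be sketched in code. The thresholds below are the commonly cited ATP III values for men and the usual fewer-than-three-criteria definition of metabolic health; they are assumptions for illustration, since the paper's exact operational definitions are not reproduced in the abstract:

```python
# Illustrative classifier following the study's grouping.  The five
# criteria and thresholds are the commonly used NCEP ATP III values for
# men; they are assumptions, not taken verbatim from the paper.

def atp3_criteria_met(waist_cm, trig_mgdl, hdl_mgdl, sbp, dbp, glucose_mgdl):
    """Count how many of the five ATP III metabolic syndrome criteria are met."""
    return sum([
        waist_cm > 102,             # abdominal obesity (men)
        trig_mgdl >= 150,           # elevated triglycerides
        hdl_mgdl < 40,              # low HDL cholesterol (men)
        sbp >= 130 or dbp >= 85,    # elevated blood pressure
        glucose_mgdl >= 100,        # elevated fasting glucose
    ])

def classify(bmi, n_criteria):
    """Assign a study group from BMI and the number of ATP III criteria met."""
    obese = bmi >= 30.0
    healthy = n_criteria < 3        # fewer than 3 criteria: no metabolic syndrome
    if obese:
        return "MHO" if healthy else "MUO"
    return "MHNO" if healthy else "MUNO"
```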

    Dynamic wind turbine models in power system simulation tool DIgSILENT


    Engineering Resilient Space Systems

    Several distinct trends will influence space exploration missions in the next decade. Destinations are becoming more remote and mysterious, science questions more sophisticated, and, as mission experience accumulates, the most accessible targets are visited, advancing the knowledge frontier to more difficult, harsh, and inaccessible environments. This leads to new challenges including: hazardous conditions that limit mission lifetime, such as high radiation levels surrounding interesting destinations like Europa or toxic atmospheres of planetary bodies like Venus; unconstrained environments with navigation hazards, such as free-floating active small bodies; multielement missions required to answer more sophisticated questions, such as Mars Sample Return (MSR); and long-range missions, such as Kuiper belt exploration, that must survive equipment failures over the span of decades. These missions will need to be successful without a priori knowledge of the most efficient data collection techniques for optimum science return. Science objectives will have to be revised ‘on the fly’, with new data collection and navigation decisions on short timescales. Yet, even as science objectives are becoming more ambitious, several critical resources remain unchanged. Since physics imposes insurmountable light-time delays, anticipated improvements to the Deep Space Network (DSN) will only marginally improve the bandwidth and communications cadence to remote spacecraft. Fiscal resources are increasingly limited, resulting in fewer flagship missions, smaller spacecraft, and less subsystem redundancy. As missions visit more distant and formidable locations, the job of the operations team becomes more challenging, seemingly inconsistent with the trend of shrinking mission budgets for operations support. How can we continue to explore challenging new locations without increasing risk or system complexity? 
These challenges are present, to some degree, for the entire Decadal Survey mission portfolio, as documented in Vision and Voyages for Planetary Science in the Decade 2013–2022 (National Research Council, 2011), but are especially acute for the following mission examples, identified in our recently completed KISS Engineering Resilient Space Systems (ERSS) study: 1. A Venus lander, designed to sample the atmosphere and surface of Venus, would have to perform science operations as components and subsystems degrade and fail; 2. A Trojan asteroid tour spacecraft would spend significant time cruising to its ultimate destination (essentially hibernating to save on operations costs), then upon arrival, would have to act as its own surveyor, finding new objects and targets of opportunity as it approaches each asteroid, requiring response on short notice; and 3. An MSR campaign would not only be required to perform fast reconnaissance over long distances on the surface of Mars, interact with an unknown physical surface, and handle degradations and faults, but would also contain multiple components (launch vehicle, cruise stage, entry and landing vehicle, surface rover, ascent vehicle, orbiting cache, and Earth return vehicle) that dramatically increase the need for resilience to failure across the complex system. The concept of resilience and its relevance and application in various domains was a focus during the study, with several definitions of resilience proposed and discussed. While there was substantial variation in the specifics, there was a common conceptual core that emerged—adaptation in the presence of changing circumstances. These changes were couched in various ways—anomalies, disruptions, discoveries—but they all ultimately had to do with changes in underlying assumptions.
Invalid assumptions, whether due to unexpected changes in the environment, or an inadequate understanding of interactions within the system, may cause unexpected or unintended system behavior. A system is resilient if it continues to perform the intended functions in the presence of invalid assumptions. Our study focused on areas of resilience that we felt needed additional exploration and integration, namely system and software architectures and capabilities, and autonomy technologies. (While also an important consideration, resilience in hardware is being addressed in multiple other venues, including 2 other KISS studies.) The study consisted of two workshops, separated by a seven-month focused study period. The first workshop (Workshop #1) explored the ‘problem space’ as an organizing theme, and the second workshop (Workshop #2) explored the ‘solution space’. In each workshop, focused discussions and exercises were interspersed with presentations from participants and invited speakers. The study period between the two workshops was organized as part of the synthesis activity during the first workshop. The study participants, after spending the initial days of the first workshop discussing the nature of resilience and its impact on future science missions, decided to split into three focus groups, each with a particular thrust, to explore specific ideas further and develop material needed for the second workshop. The three focus groups and areas of exploration were: 1. Reference missions: address/refine the resilience needs by exploring a set of reference missions; 2. Capability survey: collect, document, and assess current efforts to develop capabilities and technology that could be used to address the documented needs, both inside and outside NASA; and 3. Architecture: analyze the impact of architecture on system resilience, and provide principles and guidance for architecting greater resilience in our future systems.
The key product of the second workshop was a set of capability roadmaps pertaining to the three reference missions selected for their representative coverage of the types of space missions envisioned for the future. From these three roadmaps, we have extracted several common capability patterns that would be appropriate targets for near-term technical development: one focused on graceful degradation of system functionality, a second focused on data understanding for science and engineering applications, and a third focused on hazard avoidance and environmental uncertainty. Continuing work is extending these roadmaps to identify candidate enablers of the capabilities from the following three categories: architecture solutions, technology solutions, and process solutions. The KISS study allowed a collection of diverse and engaged engineers, researchers, and scientists to think deeply about the theory, approaches, and technical issues involved in developing and applying resilience capabilities. The conclusions summarize the varied and disparate discussions that occurred during the study, and include new insights about the nature of the challenge and potential solutions: 1. There is a clear and definitive need for more resilient space systems. During our study period, the key scientists/engineers we engaged to understand potential future missions confirmed the scientific and risk reduction value of greater resilience in the systems used to perform these missions. 2. Resilience can be quantified in measurable terms—project cost, mission risk, and quality of science return. In order to consider resilience properly in the set of engineering trades performed during the design, integration, and operation of space systems, the benefits and costs of resilience need to be quantified.
We believe, based on the work done during the study, that appropriate metrics to measure resilience must relate to risk, cost, and science quality/opportunity. Additional work is required to explicitly tie design decisions to these first-order concerns. 3. There are many existing basic technologies that can be applied to engineering resilient space systems. Through the discussions during the study, we found many varied approaches and research that address the various facets of resilience, some within NASA, and many more beyond. Examples from civil architecture, Department of Defense (DoD) / Defense Advanced Research Projects Agency (DARPA) initiatives, ‘smart’ power grid control, cyber-physical systems, software architecture, and application of formal verification methods for software were identified and discussed. The variety and scope of related efforts is encouraging and presents many opportunities for collaboration and development, and we expect many collaborative proposals and joint research as a result of the study. 4. Use of principled architectural approaches is key to managing complexity and integrating disparate technologies. The main challenge inherent in considering highly resilient space systems is that the increase in capability can result in an increase in complexity, with all of the risks and costs associated with more complex systems. What is needed is a better way of conceiving space systems that enables incorporation of capabilities without increasing complexity. We believe principled architecting approaches provide the needed means to convey a unified understanding of the system to primary stakeholders, thereby controlling complexity in the conception and development of resilient systems, and enabling the integration of disparate approaches and technologies. A representative architectural example is included in Appendix F. 5. Developing trusted resilience capabilities will require a diverse yet strategically directed research program.
Despite the interest in, and benefits of, deploying resilient space systems, to date there has been a notable lack of meaningful demonstrated progress in systems capable of working in hazardous, uncertain situations. The roadmaps completed during the study, and documented in this report, provide the basis for a realistic, funded plan that considers the required fundamental work and evolution of needed capabilities. Exploring space is a challenging and difficult endeavor. Future space missions will require more resilience in order to perform the desired science in new environments under constraints of development and operations cost, acceptable risk, and communications delays. Development of space systems with resilient capabilities has the potential to expand the limits of possibility, revolutionizing space science by enabling as yet unforeseen missions and breakthrough science observations. Our KISS study provided an essential venue for the consideration of these challenges and goals. Additional work and future steps are needed to realize the potential of resilient systems; this study provided the necessary catalyst to begin this process.

    Dynamic Influences of Wind Power on The Power System


    Development of high efficiency dye sensitized solar cells: novel conducting oxides, tandem devices and flexible solar cells

    Photovoltaic technologies use light from the sun to create electricity, using a wide range of materials and mechanisms. The generation of clean, renewable energy using this technology must become price competitive with conventional power generation if it is to succeed on a large scale. The field of photovoltaics can be split into many sub-groups; however, the overall aim of each is to reduce the cost per watt of the produced electricity. One solar cell which has the potential to reduce the cost significantly is the dye sensitised solar cell (DSC), which utilises cheap materials and processing methods. The reduction in cost of the generated electricity is largely dependent on two parameters: firstly, the efficiency with which the solar cell converts light into electricity, and secondly, the cost of depositing the solar cell. This thesis aims to address both factors, specifically looking at altering the transparent conducting oxide (TCO) and substrate in the solar cell. One method to improve the overall conversion efficiency of the device is to implement the DSC as the top cell in a tandem structure, with a bottom infra-red-absorbing solar cell. The top solar cell in such a structure must not needlessly absorb photons which the bottom solar cell can utilise, as can be the case in solar cells utilising standard transparent contacts such as fluorine-doped tin oxide. In this work, transparent conducting oxides with high mobility, such as titanium-doped indium oxide (ITiO), have been used to successfully increase the number of photons transmitted through a DSC and made available to a bottom infra-red-sensitive solar cell such as Cu(In,Ga)Se2 (CIGS). Although this material is of very high electrical and optical quality, the production of DSCs on it is difficult due to the heat and chemical instability of the film, as well as the poor adhesion of TiO2 on the ITiO surface.
The deposition of an interfacial SnO2 layer and a post-deposition annealing treatment in vacuum aided the deposition process, and transparent DSCs with efficiencies of 7.4% have been fabricated. The deposition of a high quality TCO utilising cheap materials is another method to improve the cost/watt ratio. Aluminium-doped zinc oxide (AZO) is a TCO which offers very high optical and electronic quality, whilst avoiding the high cost of indium-based TCOs. However, the chemical and thermal instability of AZO films presents a problem due to the processing steps used in DSC fabrication. Such films etch very easily in slightly acidic environments, and are susceptible to a loss of conductivity upon annealing in air, so steps have to be taken to fabricate intact devices. In this work, thick layers of SnO2 have been used to reduce the amount of etching on the surface of the film, whilst careful control of the deposition parameters can produce AZO films of high stability. Devices with efficiencies close to 9% have been fabricated using these stacked layers. Finally, transferring solar cells from rigid to flexible substrates offers cost advantages, since the price of the glass substrate is a significant part of the final cost of the cell. Also, the savings associated with roll-to-roll deposition of solar cells are large, since production does not rely on a batch process using heavy glass substrates, but on a fast, continuous process. This work has explored using the high-temperature-stable polymer polyimide, commonly used in CIGS and CdTe solar cells. AZO thin films have been deposited on 7.5 µm thick polyimide foils, and DSCs with efficiencies over 4% have been fabricated on these substrates, using standard processing methods.
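The two cost levers named above, conversion efficiency and deposition cost, combine into the cost-per-watt figure of merit. A minimal sketch, with all monetary values purely illustrative assumptions rather than measurements from the thesis:

```python
# Back-of-the-envelope cost-per-watt calculation.  The module costs and
# the pairing of efficiencies with costs below are illustrative
# assumptions, not figures from the thesis.

STC_IRRADIANCE = 1000.0  # W/m^2, standard test conditions (1 sun)

def cost_per_watt(module_cost_per_m2, efficiency):
    """Module cost divided by electrical power produced per m^2 at 1 sun."""
    return module_cost_per_m2 / (efficiency * STC_IRRADIANCE)

# e.g. a 9% stacked-layer AZO device at a hypothetical 60 $/m^2 versus a
# 7.4% tandem-top DSC at a hypothetical 45 $/m^2
baseline   = cost_per_watt(60.0, 0.090)   # $/W
tandem_top = cost_per_watt(45.0, 0.074)   # $/W
```

The point of the sketch is only that halving deposition cost and raising efficiency act multiplicatively on the same figure of merit, which is why the thesis attacks both.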

    Integrating knowledge about complex adaptive systems: insights from modelling the Eastern Baltic cod

    Currently, the Eastern Baltic cod (EBC) is in continuing decline. Supporting management efforts to assist in its recovery will require a functional understanding of the dynamics of the EBC and the Baltic ecosystem. However, aquatic environments are challenging to research as they are elusive, encompass many scientific disciplines and are complex adaptive systems. This thesis explores how modelling and simulation methods can be applied and adapted to meet the specific needs of fisheries biology's current challenges regarding the EBC, and potentially those of other stocks in similar situations.