A sensitivity analysis of the PAWN sensitivity index
The PAWN index is gaining traction in the modelling community as a sensitivity measure. However, its robustness to its own design parameters has not yet been scrutinized: the size (N) and sampling (ε) of the model output, the number of conditioning intervals (n) and the summary statistic (θ). Here we fill this gap by running a sensitivity analysis of a PAWN-based sensitivity analysis. We compare the results with the design uncertainties of the Sobol’ total-order index (S*Ti). Unlike for S*Ti, the design uncertainties in PAWN create non-negligible chances of producing biased results when ranking or screening inputs. The dependence of PAWN upon (N, n, ε, θ) is difficult to tame, as these parameters interact with one another. Even in an ideal setting in which the optimal choice of (N, n, ε, θ) is known in advance, PAWN might not allow one to distinguish an influential, non-additive model input from a truly non-influential model input.
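The abstract does not reproduce the estimator itself, but the basic PAWN recipe can be sketched as follows: partition the conditioning input into n intervals, take the Kolmogorov–Smirnov distance between each conditional output sample and the unconditional one, and summarize the n distances with a statistic θ. The function and toy model below are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.stats import ks_2samp

def pawn_index(x, y, n_intervals=10, stat=np.median):
    """Minimal sketch of a PAWN index for one input x against output y:
    split x into n_intervals equal-frequency bins, compute the KS distance
    between each conditional output sample and the unconditional sample,
    then summarize the distances with `stat` (the theta parameter)."""
    edges = np.quantile(x, np.linspace(0, 1, n_intervals + 1))
    ks = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x <= hi)
        if mask.sum() > 1:
            ks.append(ks_2samp(y[mask], y).statistic)
    return stat(ks)

# Toy model: y depends on x1 only, so x2 is truly non-influential.
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(size=2000), rng.uniform(size=2000)
y = np.sin(2 * np.pi * x1)
print(pawn_index(x1, y), pawn_index(x2, y))  # index for x1 is markedly larger
```

The choices of N (here 2000), n and θ are exactly the design parameters whose interaction the paper scrutinizes: changing any of them shifts the estimated indices.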
Quantitative storytelling in the making of a composite indicator
The reasons for and against composite indicators are briefly reviewed, as well as the available theories for their construction. After noting the strong normative dimension of these measures (which ultimately aim to ‘tell a story’, e.g. to promote the social discovery of a particular phenomenon), we inquire whether a less partisan use of a composite indicator can be proposed by allowing more latitude in the framing of its construction. We thus explore whether a composite indicator can be built to tell ‘more than one story’ and test this in practical contexts. These include measures used in convergence analysis in the field of cohesion policies and a recent case involving the World Bank’s Doing Business Index. Our experiments are built to imagine different constituencies and stakeholders who agree on the use of evidence and of statistical information while differing on the interpretation of what is relevant and vital.
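The normative latitude the abstract describes can be made concrete with a toy example: the same data tell different ‘stories’ depending on the weighting scheme. All numbers below are hypothetical; the construction (min–max normalisation plus a weighted arithmetic mean) is one common recipe, not the one used in the paper.

```python
import numpy as np

# Hypothetical scores of four units (e.g. regions) on three sub-indicators.
scores = np.array([[0.9, 0.2, 0.5],
                   [0.4, 0.8, 0.6],
                   [0.6, 0.6, 0.4],
                   [0.3, 0.5, 0.9]])

def composite(weights, scores=scores):
    """Weighted arithmetic mean after min-max normalisation of each column."""
    norm = (scores - scores.min(0)) / (scores.max(0) - scores.min(0))
    return norm @ (np.asarray(weights) / np.sum(weights))

# Two plausible 'stories': equal weights vs. emphasis on the first pillar.
rank_equal = np.argsort(-composite([1, 1, 1]))
rank_pillar1 = np.argsort(-composite([3, 1, 1]))
print(rank_equal, rank_pillar1)  # the rankings differ on identical data
```

A different constituency, by choosing different weights, promotes a different unit to the top, which is precisely the framing sensitivity the study exploits.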
B.I.M. and cultural heritage: multi-scalar and multi-dimensional analysis and representation of an historical settlement. The case study of Montemagno, a New Village in Piedmont
Among its many and varied potentialities, B.I.M. includes the diachronic and multi-scalar management of digital models. These potentialities have been tested in an innovative way on the case study of a medieval settlement in the Asti territory, of particular morphology for its comb-shaped layout interspersed with narrow alleys and placed on a system of three ridges. This interdisciplinary research has involved analyses at the urban, micro-urban and architectural levels. The methodology uses tools whose application to historical buildings is currently the object of study by scholars in the field of H.B.I.M. Compared to existing studies, however, the present approach exploits functionality for the control of the time dimension that has not yet been properly explored, interlacing it with the archaeology of elevations and the stratigraphic Harris Diagram (Matrix), in order to facilitate the analysis.
Analysis of Advanced Air and Fuel Management Systems for Future Automotive Diesel Engine Generations
The increasing stringency of pollutant emissions regulations, aiming at fuel-neutral NOx limits, is expected to foster the implementation of new technologies in aftertreatment, air management and fuel injection systems. In this field, modern diesel engines are equipped with electronically controlled flexible fuel injection systems and air/gas/EGR control valves. The only part of the air system still open to revolutionary change is the valvetrain, and fully flexible Variable Valve Actuation (VVA) is nowadays becoming highly desirable for modern diesel engines. In this context, the purpose of the research activity was, on one hand, to evaluate and identify, through numerical simulation, the best VVA strategies to be implemented in a passenger car diesel engine, weighing the benefits of each VVA strategy against its drawbacks. On the other hand, the best injection pattern for BSFC, emissions and NVH improvements was defined through the adoption of a genetic algorithm. The EURO VI medium diesel engine (1.6 l 4L) developed by General Motors Global Propulsion Systems was selected as the case study.
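The genetic-algorithm search over injection patterns can be sketched generically. The cost function, parameter names and GA settings below are all hypothetical stand-ins for the authors' proprietary model; the block only shows the shape of a real-coded GA (tournament selection, blend crossover, Gaussian mutation) minimizing a scalarized objective such as a weighted sum of BSFC, emissions and noise.

```python
import numpy as np

def genetic_minimize(cost, bounds, pop=40, gens=60, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    blend crossover and Gaussian mutation over box-bounded parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        f = np.array([cost(x) for x in X])
        # Tournament selection: keep the better of random pairs.
        i, j = rng.integers(pop, size=(2, pop))
        parents = np.where((f[i] < f[j])[:, None], X[i], X[j])
        # Blend crossover with a shuffled copy, then Gaussian mutation.
        alpha = rng.uniform(size=(pop, 1))
        X = alpha * parents + (1 - alpha) * parents[rng.permutation(pop)]
        X += rng.normal(0, 0.05 * (hi - lo), size=X.shape)
        X = np.clip(X, lo, hi)
    f = np.array([cost(x) for x in X])
    return X[f.argmin()], f.min()

# Toy 3-parameter 'injection pattern' (e.g. pilot timing, dwell, rail
# pressure, all normalised to [0, 1]) with a quadratic stand-in cost.
best, fbest = genetic_minimize(lambda x: np.sum((x - [0.3, 0.5, 0.7]) ** 2),
                               bounds=[(0, 1)] * 3)
```

In the study the cost would instead be evaluated by the engine simulation, which is why a derivative-free population method like a GA is attractive there.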
Silver as a constraint for a large-scale development of solar photovoltaics? Scenario-making to the year 2050 supported by expert engagement and global sensitivity analysis
In this study we assess whether the availability of silver could constrain a large-scale deployment of solar photovoltaics (PV). While silver-paste use in photovoltaic cell metallization is becoming more efficient, solar photovoltaic power capacity installation is growing at an exponential pace. Alongside photovoltaics, silver is also employed in an array of industrial and non-industrial applications. The trends of these uses are examined up to the year 2050. The technical coefficients for the expansion in photovoltaic power capacity and the contraction in silver-paste use have been assessed through an expert-consultation process. The trend of use in the non-PV sectors has been estimated through an ARIMA (auto-regressive integrated moving average) model. The yearly and cumulative silver demand are evaluated against the technological potential for increasing silver mining and the estimates of its global natural availability, respectively. The model implemented is tested with a quasi-random Monte Carlo variance-based global sensitivity analysis. The result of our inquiry is that silver may not represent a constraint for a very-large-scale deployment of photovoltaics (up to tens of TW in installed power capacity), provided the present decreasing trend in the use of silver paste in the photovoltaic sector continues at an adequate pace. Silver use in non-photovoltaic sectors also exerts a tangible influence on potential constraints. In terms of natural constraints, most of the uncertainty depends on the actual estimates of the natural silver budget.
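A quasi-random Monte Carlo variance-based global sensitivity analysis of the kind mentioned above can be sketched with a Saltelli-style estimator over a Sobol' low-discrepancy sequence. The toy model below (additive, with one inert input) stands in for the silver-demand model, whose actual inputs and structure are not given in the abstract.

```python
import numpy as np
from scipy.stats import qmc

def sobol_indices(model, d, m=12, seed=0):
    """Minimal Saltelli-style estimator of first- and total-order Sobol'
    indices using quasi-random (Sobol' sequence) sampling.
    `model` maps an (n, d) array of inputs in [0, 1) to n outputs."""
    n = 2 ** m  # powers of two preserve the sequence's balance properties
    base = qmc.Sobol(d=2 * d, scramble=True, seed=seed).random(n)
    A, B = base[:, :d], base[:, d:]
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # A with column i taken from B
        yABi = model(ABi)
        S1[i] = np.mean(yB * (yABi - yA)) / var        # first-order index
        ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / var  # total-order index
    return S1, ST

# Toy stand-in: output driven by x0 and x1; x2 has no influence.
S1, ST = sobol_indices(lambda x: 4 * x[:, 0] + x[:, 1], d=3)
```

For this additive model the analytic first-order indices are 16/17, 1/17 and 0, so the estimates can be checked directly; in the study the same machinery apportions the uncertainty in projected silver demand among the model's uncertain inputs.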
Modelling of combustion and knock onset risk in a high-performance turbulent jet ignition engine
The reduction of CO2 emissions, and hence of fuel consumption, is currently a key driver for the development of innovative SI engines for passenger car applications. In recent years, motorsport technical regulations in the highest categories have introduced limits on the fuel flow rate and the total amount of fuel per race, thus driving engine development toward further reduction of specific fuel consumption. Among the different techniques that can be shared between conventional and high-performance SI engines, turbocharging, compression-ratio increase and Turbulent Jet Ignition (TJI) have shown significant potential for fuel consumption reduction. The combination of turbocharging and compression-ratio increase, however, can promote the onset of knocking combustion, with detrimental effects on engine efficiency and durability. Additionally, engines equipped with TJI systems show unusual combustion development and knock onset.
In this study a methodology for the 3D-CFD modelling of combustion and knock-onset risk was developed for a high-performance turbocharged engine featuring a passive TJI system. First, a comprehensive numerical study was carried out in a commercially available software, CONVERGE 2.4, in order to develop a 3D-CFD model able to reproduce the available experimental data. The resulting 3D-CFD model was then validated under different working conditions featuring different spark advances. Lastly, a methodology for the assessment of knock-onset risk was developed, which led to the definition of two novel knock-risk indexes based on the progress of chemical reactions within the combustion chamber. The proposed knock-risk indexes showed good agreement with the experimental data.
Senior Recital: Andrea Lucas, Soprano; Grace Eom, Piano; November 5, 2022
Kemp Recital Hall; November 5, 2022; Saturday afternoon, 4:30 p.m.
Mythopoiesis and collective imagination in videogames
As videogames become more and more popular, their ability to generate and communicate mythologies (mythopoiesis) appears ever clearer. Pokémon, The Legend of Zelda, and Halo are just a few of the distinctive transmedial storyworlds created through (relatively few) years of reiteration. At the same time, recent examples of massively diffused products also present remediations of heritage, folk tales, architecture, and other cultural elements, reaching users of any background.
Franchises like Assassin’s Creed, God of War, or Final Fantasy draw substantial inspiration from various cultural heritages. In doing so, video-ludic remediations add peculiar interactive (ergodic) features to previously shared imaginaries: since videogames have specific features that imply interaction by (and with) the user, the remediated cultural elements acquire properties that were not present in any previous representation. The aim of this study is to illuminate how blockbuster videogames can build on previous archetypes and imaginaries, creating common knowledge about certain cultural objects, myths, and figures among players on a global scale. The main focus of this research is the representation of Japanese cultural heritage in recent popular videogames such as Nioh, Ghost of Tsushima, and Sekiro: Shadows Die Twice. Through a comparative analysis of these products, the study will try to underline the common elements of blockbuster remediations, while exploring the emerging interactive (ergodic) features that the mentioned videogames add to the previously shared imaginary of the portrayed cultural elements. Any emerging evidence will then serve to build a tentative framework or method for remediating and representing any given cultural element in future videogame projects that aim to properly communicate heritage on a large scale, such as in the global digital game market.
Graduate Recital: Andrea Steele, Clarinet; Lu Liu, Piano; March 18, 2010
Kemp Recital Hall; March 18, 2010; Thursday evening, 7:00 p.m.