Numerical studies of the thermal design sensitivity calculation for a reaction-diffusion system with discontinuous derivatives
The aim of this study is to find a reliable numerical algorithm for calculating thermal design sensitivities of a transient problem with discontinuous derivatives. The thermal system of interest is a transient heat conduction problem related to the curing process of a composite laminate. A logical function which can smoothly approximate the discontinuity is introduced to modify the system equation. Two commonly used methods, the adjoint variable method and the direct differentiation method, are then applied to find the design derivatives of the modified system. Comparisons of the numerical results obtained by these two methods demonstrate that the direct differentiation method is the better choice for calculating thermal design sensitivities.
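The two ideas in this abstract (smoothing a discontinuity with a logical function, then applying direct differentiation) can be illustrated with a minimal sketch. The toy model below is hypothetical, not the paper's reaction-diffusion system: a logistic function replaces a Heaviside switch, and the sensitivity dy/dp is integrated alongside the state rather than estimated by finite differences.

```python
import numpy as np

def smooth_step(x, k=50.0):
    # Logistic approximation of the Heaviside step; larger k -> sharper switch.
    return 1.0 / (1.0 + np.exp(-k * x))

def solve_with_sensitivity(p, t_c=0.5, dt=1e-3, t_end=1.0, k=50.0):
    """Forward Euler for the toy model y' = p * H_k(t - t_c).
    Direct differentiation: the sensitivity s = dy/dp obeys s' = H_k(t - t_c),
    so it is propagated alongside the state in the same time loop."""
    y = s = 0.0
    for t in np.arange(0.0, t_end, dt):
        f = smooth_step(t - t_c, k)
        y += dt * p * f   # state update
        s += dt * f       # sensitivity update (d/dp of the state update)
    return y, s
```

Because the smoothed model is differentiable everywhere, the directly computed sensitivity agrees with a finite-difference estimate; across a true discontinuity the finite-difference estimate would be unreliable, which is the motivation for the smoothing.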
Spatiotemporal and temporal forecasting of ambient air pollution levels through data-intensive hybrid artificial neural network models
Outdoor air pollution (AP) is a serious public health threat which has been linked to severe respiratory and cardiovascular illnesses, and premature deaths, especially among those residing in highly urbanised cities. As such, there is a need to develop early-warning and risk management tools to alleviate its effects. The main objective of this research is to develop AP forecasting models based on Artificial Neural Networks (ANNs) according to a model-building protocol identified from existing related works. Plain, hybrid and ensemble ANN model architectures were developed to estimate the temporal and spatiotemporal variability of hourly NO2 levels in several locations in the Greater London area. Wavelet decomposition was integrated with Multilayer Perceptron (MLP) and Long Short-Term Memory (LSTM) models to address the high variability of AP data and improve the estimation of peak AP levels. Block-splitting and cross-validation procedures were adapted to validate the models based on Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Willmott's index of agreement (IA). The proposed models performed better than the benchmark models. For instance, the proposed wavelet-based hybrid approach provided 39.15% and 28.58% reductions in the RMSE and MAE indices, respectively, relative to the benchmark MLP model for the temporal forecasting of NO2 levels. The same approach reduced the RMSE and MAE indices of the benchmark LSTM model by 12.45% and 20.08%, respectively, for the spatiotemporal estimation of NO2 levels at one site in Central London. The proposed hybrid deep learning approach offers great potential to be operational in providing air pollution forecasts in areas without a reliable database.
The model-building protocol adapted in this thesis can also be applied to studies using measurements from other sites.
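The three evaluation metrics named in the abstract, plus the percentage-reduction comparison against a benchmark, can be written compactly. This is a minimal sketch with function names of my own choosing, using the standard definitions of RMSE, MAE, and Willmott's index of agreement.

```python
import numpy as np

def rmse(obs, pred):
    # Root Mean Squared Error between observed and predicted series.
    return float(np.sqrt(np.mean((np.asarray(obs, float) - np.asarray(pred, float)) ** 2)))

def mae(obs, pred):
    # Mean Absolute Error.
    return float(np.mean(np.abs(np.asarray(obs, float) - np.asarray(pred, float))))

def willmott_ia(obs, pred):
    # Willmott's index of agreement: 1 is perfect, 0 is no agreement.
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    om = obs.mean()
    return float(1.0 - np.sum((obs - pred) ** 2)
                 / np.sum((np.abs(pred - om) + np.abs(obs - om)) ** 2))

def pct_reduction(benchmark, proposed):
    # e.g. the reported "39.15% reduction in RMSE" relative to the MLP benchmark.
    return 100.0 * (benchmark - proposed) / benchmark
```

A hybrid model's error can then be compared to a benchmark with, for example, `pct_reduction(rmse(obs, mlp_pred), rmse(obs, hybrid_pred))`.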
On processing development for fabrication of fiber reinforced composite, part 2
Fiber-reinforced composite laminates are used in many aerospace and automobile applications. The magnitudes and durations of the cure temperature and the cure pressure applied during the curing process have significant consequences for the performance of the finished product. The objective of this study is to exploit the potential of applying optimization techniques to the cure cycle design. Using the compression molding of a filled polyester sheet molding compound (SMC) as an example, a unified Computer Aided Design (CAD) methodology, consisting of three uncoupled modules (i.e., optimization, analysis and sensitivity calculations), is developed to systematically generate optimal cure cycle designs. Various optimization formulations for the cure cycle design are investigated. The uniformities in the distributions of the temperature and the degree of cure are compared with those resulting from conventional isothermal processing conditions with pre-warmed platens. Recommendations with regard to further research in the computerization of the cure cycle design are also addressed.
Detecting Mutations in the Mycobacterium tuberculosis Pyrazinamidase Gene pncA to Improve Infection Control and Decrease Drug Resistance Rates in Human Immunodeficiency Virus Coinfection.
Hospital infection control measures are crucial to tuberculosis (TB) control strategies within settings caring for human immunodeficiency virus (HIV)-positive patients, as these patients are at heightened risk of developing TB. Pyrazinamide (PZA) is a potent drug that effectively sterilizes persistent Mycobacterium tuberculosis bacilli. However, PZA resistance associated with mutations in the nicotinamidase/pyrazinamidase coding gene, pncA, is increasing. A total of 794 patient isolates obtained from four sites in Lima, Peru, underwent spoligotyping and drug resistance testing. In one of these sites, the HIV unit of Hospital Dos de Mayo (HDM), an isolation ward for HIV/TB coinfected patients opened during the study as an infection control intervention: circulating genotypes and drug resistance pre- and postintervention were compared. All other sites cared for HIV-negative outpatients: genotypes and drug resistance rates from these sites were compared with those from HDM. HDM patients showed high concordance between multidrug resistance, PZA resistance according to the Wayne method, the two most common genotypes (spoligotype international type [SIT] 42 of the Latin American-Mediterranean (LAM)-9 clade and SIT 53 of the T1 clade), and the two most common pncA mutations (G145A and A403C). These associations were absent among community isolates. The infection control intervention was associated with 58-92% reductions in TB caused by SIT 42 or SIT 53 genotypes (odds ratio [OR] = 0.420, P = 0.003); multidrug-resistant TB (OR = 0.349, P < 0.001); and PZA-resistant TB (OR = 0.076, P < 0.001). In conclusion, pncA mutation typing, with resistance testing and spoligotyping, was useful in identifying a nosocomial TB outbreak and demonstrating its resolution after implementation of infection control measures.
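The odds ratios reported above follow the standard 2x2 contingency-table calculation. A minimal sketch, with purely hypothetical counts (the study's actual counts are not given in this abstract):

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds ratio for a 2x2 table: OR = (a/b) / (c/d) = (a * d) / (b * c).
    An OR below 1 (as in the abstract's 0.420, 0.349, 0.076) indicates
    lower odds of the outcome after the intervention."""
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Hypothetical illustration: 10/100 resistant isolates post-intervention
# versus 20/100 pre-intervention.
or_example = odds_ratio(10, 90, 20, 80)  # (10*80)/(90*20) ~ 0.44
```

In practice a confidence interval and P value would accompany the point estimate, as in the abstract's reported statistics.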
AI-Based Innovation in B2B Marketing: An Interdisciplinary Framework Incorporating Academic and Practitioner Perspectives
Artificial intelligence (AI) rests at the frontier of technology, service, and industry. AI research is helping to reconfigure innovative businesses in the consumer marketplace. This paper addresses the existing literature on AI and presents an emergent B2B marketing framework for AI innovation as a cycle of the critical elements identified in cross-functional studies that represent both academic and practitioner strategic orientations. We contextualize the prevalence of AI-based innovation themes by utilizing bibliometric and semantic content analysis methods across two studies, drawing data from two distinct sources: academics and industry practitioners. Our findings reveal four key analytical components: (1) IT tools and resource environment, (2) innovative actors and agents, (3) marketing knowledge and innovation, and (4) communications and exchange relationships. The academic literature and industry material analyzed in our studies imply that as markets integrate AI technology into their offerings and services, a governing opportunity emerges to better foster and encourage mutually beneficial co-creation in the AI innovation process.
Small Differences in Experience Bring Large Differences in Performance
In many life situations, people choose sequentially between repeating a past action in expectation of a familiar outcome (exploitation), or choosing a novel action whose outcome is largely uncertain (exploration). For instance, in each quarter, a manager can budget advertising for an existing product, earning a predictable boost in sales. Or she can spend to develop a completely new product, whose prospects are more ambiguous. Such decisions are central to economics, psychology, business, and innovation; and they have been studied mostly by modelling in agent-based simulations or examining statistical relationships in archival or survey data. Using experiments across cultures, we add unique evidence about causality and variations. We find that exploration is boosted by three past experiences: when decision-makers fall below top performance, undergo performance stability, or suffer low overall performance. In contrast, individual-level variables, including risk and ambiguity preferences, are poor predictors of exploration. The results provide insights into how decisions are made, substantiating the micro-foundations of strategy and assisting in balancing exploration with exploitation.
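The agent-based simulations mentioned above typically model exploration as a trigger condition on past outcomes. The sketch below is an invented illustration, not the study's experimental design: an agent repeats its current action until a payoff falls below its best performance so far, at which point it explores a randomly chosen alternative.

```python
import random

def simulate(trials=1000, seed=0):
    """Minimal explore/exploit agent (illustrative parameters, all assumed):
    exploitation repeats the current action; exploration is triggered when a
    payoff falls below the agent's top performance to date."""
    rng = random.Random(seed)
    arms = [rng.uniform(0.0, 1.0) for _ in range(10)]  # latent mean payoffs
    current = 0          # index of the currently exploited action
    best_seen = 0.0      # top performance experienced so far
    explorations = 0
    for _ in range(trials):
        payoff = arms[current] + rng.gauss(0.0, 0.1)   # noisy outcome
        if payoff < best_seen:
            current = rng.randrange(len(arms))         # explore a novel action
            explorations += 1
        best_seen = max(best_seen, payoff)
    return explorations
```

Varying the trigger (e.g. a stability or low-overall-performance condition instead of the below-top-performance one) changes the exploration rate, which is the kind of comparative question the experiments address causally.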
Internal lee wave closures: Parameter sensitivity and comparison to observations
This is the final version, available from AGU via the DOI in this record. The SOFine and DIMES data analyzed in this paper can be obtained through the British Oceanographic Data Centre (BODC) at http://archive.noc.ac.uk/SOFINE/ and http://dimes.ucsd.edu/en/data/, respectively.

This paper examines two internal lee wave closures that have been used together with ocean models to predict the time-averaged global energy conversion rate into lee waves and dissipation rate associated with lee waves and topographic blocking: the Garner (2005) scheme and the Bell (1975) theory. The closure predictions in two Southern Ocean regions where geostrophic flows dominate over tides are examined and compared to microstructure profiler observations of the turbulent kinetic energy dissipation rate, where the latter are assumed to reflect the dissipation associated with topographic blocking and generated lee wave energy. It is shown that when applied to these Southern Ocean regions, the two closures differ most in their treatment of topographic blocking. For several reasons, pointwise validation of the closures is not possible using existing observations, but horizontally averaged comparisons between closure predictions and observations are made. When anisotropy of the underlying topography is accounted for, the two horizontally averaged closure predictions near the seafloor are approximately equal. The dissipation associated with topographic blocking is predicted by the Garner (2005) scheme to account for the majority of the depth-integrated dissipation over the bottom 1000 m of the water column, where the horizontally averaged predictions lie well within the spatial variability of the horizontally averaged observations. Simplifications made by the Garner (2005) scheme that are inappropriate for the oceanic context, together with imperfect observational information, can partially account for the prediction-observation disagreement, particularly in the upper water column.

D. S. Trossman and B. K. Arbic gratefully acknowledge support from National Science Foundation (NSF) grant OCE-0960820 and Office of Naval Research (ONR) grant N00014-11-1-0487. S. Waterman gratefully acknowledges support from the Australian Research Council (grants DE120102927 and CE110001028) and the Natural Sciences and Engineering Research Council of Canada (grant 22R23085).
GASP II. A MUSE view of extreme ram-pressure stripping along the line of sight: kinematics of the jellyfish galaxy JO201
This paper presents a spatially-resolved kinematic study of the jellyfish galaxy JO201, one of the most spectacular cases of ram-pressure stripping (RPS) in the GASP (GAs Stripping Phenomena in Galaxies with MUSE) survey. By studying the environment of JO201, we find that it is moving through the dense intra-cluster medium of Abell 85 at supersonic speeds along our line of sight, and that it is likely accompanied by a small group of galaxies. Given the density of the intra-cluster medium and the galaxy's mass, projected position and velocity within the cluster, we estimate that JO201 must so far have lost ~50% of its gas during infall via RPS. The MUSE data indeed reveal a smooth stellar disk, accompanied by large projected tails of ionised (Halpha) gas, composed of kinematically cold (velocity dispersion <40 km/s) star-forming knots and very warm (>100 km/s) diffuse emission which extend out to at least ~50 kpc from the galaxy centre. The ionised Halpha-emitting gas in the disk rotates with the stars out to ~6 kpc but in the disk outskirts becomes increasingly redshifted with respect to the (undisturbed) stellar disk. The observed disturbances are consistent with the presence of gas trailing behind the stellar component, resulting from intense face-on RPS happening along the line of sight. Our kinematic analysis is consistent with the estimated fraction of lost gas, and reveals that stripping of the disk happens outside-in, causing shock heating and gas compression in the stripped tails.

Comment: ApJ, revised version after referee comments, 15 pages, 16 figures. The interactive version of Figure 9 can be viewed at web.oapd.inaf.it/gasp/publications.htm