Digital transformation and profit growth: a configurational analysis of regional dynamics
This study adopts Configuration Theory to explore how diverse combinations of regional factors contribute to profitability, emphasizing the principle of equifinality, which posits that multiple, equally effective configurations can lead to similar outcomes. This study examines the interplay of multiple factors—enterprise informatization, digital infrastructure, e-commerce, technological investment, innovation, hardware, and software—across four key themes: Digital Readiness and Technological Integration, Market and Economic Enablers, Innovation Capacity and Activity, and Foundational Artifacts and Resources. Using data from 31 provinces in China from 2015 to 2022, this study employs fuzzy-set Qualitative Comparative Analysis (fsQCA) to uncover pathways to regional profit growth. The study identifies five distinct configurations contributing to profit growth across China's provinces. In most configurations, e-commerce and technological investment emerge as central drivers. However, in less developed regions, profit growth relies more on improvements in digital infrastructure and hardware, with innovation and enterprise informatization playing a less significant role. The findings also reveal that profit growth requires addressing the weakest elements in the ecosystem—whether digital infrastructure, technological capabilities, or other factors. Strategies tailored to regional conditions must prioritize improving these weaker components to achieve sustained growth, as ignoring them can limit overall success.
IEEE Transactions on Engineering Management
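The fsQCA workflow described in the abstract above begins by calibrating raw regional indicators into fuzzy-set membership scores before truth-table minimisation. The snippet below is a minimal sketch of the widely used direct calibration method (a logistic transform around three anchors); the anchor values and the e-commerce example are hypothetical illustrations, not thresholds from the study.

```python
# Minimal sketch of fuzzy-set calibration (the "direct method"), the step that
# precedes a fuzzy-set Qualitative Comparative Analysis (fsQCA).
# The anchors and example data below are hypothetical, not the study's values.
import math

def calibrate(value, full_non_membership, crossover, full_membership):
    """Map a raw provincial indicator to a fuzzy membership score in [0, 1]."""
    if value >= crossover:
        # log-odds reaches +3 at the full-membership anchor (membership ~0.95)
        log_odds = 3.0 * (value - crossover) / (full_membership - crossover)
    else:
        # log-odds reaches -3 at the full-non-membership anchor (membership ~0.05)
        log_odds = 3.0 * (value - crossover) / (crossover - full_non_membership)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Example: calibrate hypothetical e-commerce penetration rates (in %).
for raw in (5.0, 20.0, 45.0, 80.0):
    score = calibrate(raw, full_non_membership=10, crossover=40, full_membership=70)
    print(f"raw {raw:5.1f}%  ->  membership {score:.3f}")
```

The calibrated scores for each condition would then feed the truth-table and minimisation steps that yield configurations such as the five reported in the paper.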
Chip away everything that doesn't look like an elephant
This paper addresses the question of how conceptual models are created in a simulation modelling activity. Assuming an entity-based approach to simulation, some techniques for discovering good entity classes are considered, including personation. Also considered are the notations by which a conceptual model can be represented, and the modes of thought required for good conceptual modelling. Specifically excluded from consideration is the idea of applying a cut-and-dried method. The shortcomings of computers for conceptual modelling are remarked upon.
12th Simulation Workshop (SW25)
Securing UAV flying ad hoc wireless networks: authentication development for robust communications
Unmanned Aerial Vehicles (UAVs) have revolutionized numerous domains by introducing exceptional capabilities and efficiencies. As UAVs become increasingly integrated into critical operations, ensuring the security of their communication channels emerges as a paramount concern. This paper investigates the importance of safeguarding UAV communication against cyber threats, considering both intra-UAV and UAV–ground station interactions within the scope of Flying Ad Hoc Networks (FANETs). To leverage advancements in security methodologies, particularly Physical Unclonable Functions (PUFs), this paper proposes a novel authentication framework tailored for UAV networking systems. Surveying the existing literature, we categorize related studies by authentication strategy, illuminating the evolving landscape of UAV security. The proposed framework demonstrated a high level of security with lower communication and computation costs than selected studies addressing similar types of attacks. This paper highlights the urgent need for strong security measures to mitigate the increasing threats that UAVs encounter and to ensure their sustained effectiveness in a variety of applications. The results indicate that the proposed protocol is sufficiently secure and, in terms of communication cost, achieves an 18% improvement compared to the best protocol in the referenced studies.
Sensors
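As a rough illustration of the challenge–response idea behind PUF-based authentication (not the specific protocol proposed in the paper), the sketch below simulates a PUF with a keyed hash: the ground station stores pre-enrolled challenge–response pairs and accepts a UAV only if its fresh response matches. The device secret, single-use challenges and message handling are hypothetical simplifications.

```python
# Sketch of a PUF-style challenge-response check between a ground station
# (verifier) and a UAV (prover). A real PUF derives responses from device-unique
# silicon variations; here it is simulated with HMAC-SHA256 over a device secret.
import hashlib
import hmac
import os
import secrets

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    """Stand-in for the physical PUF: deterministic, device-unique response."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

# Enrolment phase (secure environment): record challenge-response pairs (CRPs).
uav_secret = os.urandom(32)              # models the unclonable device behaviour
crp_store = {}
for _ in range(4):
    c = os.urandom(16)
    crp_store[c] = puf_response(uav_secret, c)

# Authentication phase (in the field): verifier issues an unused challenge.
challenge = next(iter(crp_store))
response_from_uav = puf_response(uav_secret, challenge)   # computed on the UAV
expected = crp_store.pop(challenge)                       # each CRP is used once
authenticated = secrets.compare_digest(response_from_uav, expected)
print("UAV authenticated:", authenticated)
```

Discarding each challenge after use is what gives this style of scheme its resistance to replay; the paper's framework adds further protections and cost optimisations beyond this toy exchange.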
International interlaboratory study to normalize liquid chromatography-based mycotoxin retention times through implementation of a retention index system
Monitoring for mycotoxins in food or feed matrices is necessary to ensure the safety and security of global food systems. Due to a lack of standardized methods and individual laboratory priorities, most institutions have developed their own methods for mycotoxin determinations. Given the diversity of mycotoxin chemical structures and physicochemical properties, searching databases and comparing data between institutions is complicated. We previously introduced incorporating a retention index (RI) system into liquid chromatography mass spectrometry (LC-MS) based mycotoxin determinations. To validate this concept, we designed an interlaboratory study in which each participating laboratory was sent N-alkylpyridinium-3-sulfonates (NAPS) RI standards and 36 mycotoxin standards for analysis using their pre-optimized LC-MS methods. Data from 44 analytical methods were submitted by 24 laboratories representing various manufacturer platforms, LC columns, and mobile phase compositions. Mycotoxin retention times (tR) were converted to RI values based on their elution relative to the NAPS standards. Trichothecenes (deoxynivalenol, 3-acetyldeoxynivalenol, 15-acetyldeoxynivalenol) showed tR consistency (± 20–50 RI units, 1–5 % median RI) regardless of mobile phase or type of chromatography column in this study. For the remaining mycotoxins tested, the RI values were strongly impacted by the mobile phase composition and column chemistry. The ability to predict tR was evaluated based on the median mycotoxin RI values and the NAPS tR. These values were corrected using Tanimoto coefficients to investigate whether structurally similar compounds could be used as anchors to further improve accuracy. This study demonstrated the power of employing an RI system for mycotoxin determinations, further enhancing the confidence of identifications.
Genome Canada, FWF Austrian Science Fund, Agriculture and Agri-Food Canada, Ministry of Education, Universities and Research, National Research Council Canada, Mitacs. This research was supported by the NRC (Biotoxin Metrology, Nova Scotia), the ALIFAR project (Italian Ministry of University, Dipartimenti di Eccellenza 2023–2027), a Genome Canada Technology Development Grant and a MITACS scholarship, with resources provided by the VetCore Facility (Mass Spectrometry) of the University of Veterinary Medicine Vienna. Moreover, this research was supported by the Austrian Science Fund (FWF, P33188), the Mass Spectrometry Centre of the Faculty of Chemistry and the Exposome Austria Research Infrastructure at the University of Vienna.
Journal of Chromatography
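To illustrate the retention-time-to-RI conversion the study relies on, here is a minimal sketch that interpolates an analyte's retention time linearly between the two bracketing NAPS standards, in the spirit of a Kovats-style index; the retention times and the RI values assigned to each NAPS standard are hypothetical placeholders, not the study's calibration.

```python
# Sketch: convert an LC retention time (tR) to a retention index (RI) by linear
# interpolation between the two bracketing NAPS reference standards.
# All numbers below are hypothetical placeholders.

# (assigned RI, observed tR in minutes) for the NAPS ladder on one LC-MS method
naps_ladder = [(100, 1.2), (200, 2.8), (300, 5.1), (400, 8.4), (500, 12.0)]

def retention_index(tr: float) -> float:
    """Return the RI of an analyte eluting at tr, given the NAPS ladder."""
    for (ri_lo, tr_lo), (ri_hi, tr_hi) in zip(naps_ladder, naps_ladder[1:]):
        if tr_lo <= tr <= tr_hi:
            return ri_lo + (ri_hi - ri_lo) * (tr - tr_lo) / (tr_hi - tr_lo)
    raise ValueError("retention time falls outside the NAPS ladder")

# Example: a mycotoxin eluting at 6.3 min on this (hypothetical) method
print(round(retention_index(6.3), 1))
```

Because the RI is anchored to the co-injected NAPS standards rather than to absolute retention times, values computed this way can be compared across instruments and gradients, which is the interoperability the interlaboratory study set out to test.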
Total pressure distortion reconstruction methods from velocimetry data within an aero-engine intake at crosswind
The integration of Very High Bypass Ratio (VHBR) turbofan engines with short intakes may present challenges due to increased total pressure distortion, particularly under crosswind conditions. Current industrial practices rely on a limited number of intrusive pressure sensors arranged on rakes at the Aerodynamic Interface Plane (AIP) to characterise this total pressure distortion. However, non-intrusive measurement techniques provide a more effective way to capture the complex, unsteady flow fields within the intake, offering higher spatial resolution compared to conventional methods. In this study, velocity data obtained from Stereoscopic Particle Image Velocimetry (S-PIV) during wind tunnel tests of a short intake configuration were employed to reconstruct the instantaneous total pressure fields at the AIP within the intake. Two reconstruction methods were used: Direct Spatial Integration (DSI) of the momentum equation and the Poisson Pressure Equation (PPE). These methods were first applied to numerical data from RANS simulations. The results of the reconstruction of the total pressure field based on the S-PIV data were compared against rake measurements. The methods enabled a more comprehensive assessment of total pressure distortion, offering improvements over conventional sensor-based approaches in identifying and characterising total pressure non-uniformities within an intake.
This work was conducted under the NIFTI project, which received funding from the Clean Sky 2 Joint Undertaking (JU) under Grant Agreement No 864911.
16th European Turbomachinery Conference (ETC16)
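For readers unfamiliar with the second reconstruction route, the sketch below solves a 2-D Poisson pressure equation on a synthetic velocity field with a simple Jacobi iteration. It only illustrates the PPE idea under steady, inviscid, incompressible assumptions; it is not the authors' S-PIV processing chain, and the grid, density and boundary conditions are hypothetical.

```python
# Sketch: recover pressure from a velocity field via the Poisson pressure
# equation (PPE), laplacian(p) = -rho * (u_x**2 + 2*u_y*v_x + v_y**2),
# valid for steady, incompressible, inviscid 2-D flow. Synthetic data only.
import numpy as np

n, L, rho = 64, 1.0, 1.225
x = np.linspace(0.0, L, n)
y = np.linspace(0.0, L, n)
X, Y = np.meshgrid(x, y, indexing="ij")
dx = x[1] - x[0]

# Hypothetical smooth, divergence-free velocity field standing in for PIV data
u = np.sin(np.pi * X) * np.cos(np.pi * Y)
v = -np.cos(np.pi * X) * np.sin(np.pi * Y)

ux, uy = np.gradient(u, dx, dx)
vx, vy = np.gradient(v, dx, dx)
rhs = -rho * (ux**2 + 2.0 * uy * vx + vy**2)   # PPE source term

p = np.zeros((n, n))
for _ in range(5000):                           # Jacobi iterations
    p_new = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0)
                    + np.roll(p, 1, 1) + np.roll(p, -1, 1)
                    - dx**2 * rhs)
    p_new[0, :] = p_new[-1, :] = p_new[:, 0] = p_new[:, -1] = 0.0  # Dirichlet edges
    p = p_new

# Total pressure (static + dynamic) over the plane
p_total = p + 0.5 * rho * (u**2 + v**2)
print("total pressure range:", float(p_total.min()), float(p_total.max()))
```

In the actual application, the source term would come from the measured S-PIV velocity gradients at the AIP and the boundary conditions from reference pressure data, which is where most of the practical difficulty lies.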
Techno-economic study for degraded gas turbine on pipeline application in the oil and gas industry.
Gas compression through pipelines is a capital intensive project. Therefore, it is
imperative to investigate the viability of investing in such a project. Thus, the techno-
economic and environmental risk assessment (TERA) tool to rapidly evaluate the entire
natural gas pipeline project becomes vital. This research has investigated the impacts of
gas turbine (GT) degradation in the application of TERA for a natural gas pipeline, taking
into account the equipment selection, ambient conditions and periodic engine overhaul.
Three scenarios (optimistic, medium and pessimistic) defining different levels of
deterioration of the GT in comparison with the clean condition were examined in each
season of the years (rainy, dry and hot season) based on the location of Trans-Saharan
gas pipeline with 18 compression stations. The developed TERA model considered
different modules such as the pipeline/gas compressor, performance, emission, a
simplified lifing and economic module.
The pipeline/gas compressor module evaluated the performance of the 4180km pipeline
and gas compressor power across all compression stations in both isothermal and non-
isothermal conditions. Aspen-Hysys/micro-soft excel and MATLAB were used to develop
the model. The result showed that for every 1% increase in pipe exit pressure resulted
in a 1.8% increase in the volume of the gas flow in the pipeline. Having evaluated the
gas compressor (GC) power across the 18 compressor station, the investigation also
revealed that for every 1% rise in the gas temperature resulted in a 3.4% rise in the
power required by the gas compressor to move the gas. The GT performance was
modelled using TURBOAMATCH at fixed power of the engine with respect to the
different scenarios under investigation. The performance result was linked with the
developed emission, lifing and economic model in MATLAB. The result revealed that for
every 1% degradation (reduction in flow capacity and isentropic efficiency) at a constant
power of engine operation, between an ambient temperature of 16.2ᴼC and 29ᴼC, CO₂
emission increases between 0.71% and 0.78% when compared with the clean condition.
Also, at the same operating condition, the NOx emission increases between 1.66% and
1.8%. However, NOx emission at different compressor station varies from one station to
another due to the influence of different ambient conditions, engine power settings and
number of engines used. Lifing result showed that as the engine degrades, its creep life
reduces at high TET to deliver the same power at a fixed number of engines
Net present value (NPV) at different discount rates (DR) (0%, 5%, 10% and 15%) were
used to evaluate the economic viability of the project, taking into account engine
divestment and leasing for the redundant fleets after overhaul. The study further
investigated how Rescheduling of GT Overhaul (ROH) from the baseline condition
affects the economic viability of the pipeline project. The result showed that implementing
the ROH reduces the number of GT used for the optimistic, medium and pessimistic
scenarios by 8%, 2% and 4% respectively, for the same number of the compressor
station and at the same operating conditions when compared with the baseline condition.
The result also showed that running the engine on degraded mode increases the life
cycle cost while the NPV reduces as the degradation increases. For instance, at 10%
DR, the baseline NPV for the clean, optimistic, medium, and pessimistic scenarios were
19.6, 17.1 billion, respectively showing that the NPV decreases with
increase in degradation, unlike other studies that analysed the NPV on clean engine
operation only. Remarkably, the NPV for engine divestment was 0.2% to 20.3% lower
than the NPV for leasing depending on the different scenarios and DR, indicating that
NPV leasing gives better benefits than that of engine divestment.
Furthermore, the implementation of on-line compressor washing to investigate the
impacts on the pipeline project and emission reduction using TURBOMATCH and
MATLAB for the developed model revealed that the CO₂ emission and cost of CO₂ for
the optimistic, medium and pessimistic scenarios had a reduction of 5.8%, 6.1% and
6.5% respectively when compared with the baseline condition. Also, at 5% DR, the NPV
for the three scenarios after compressor washing increase by 6%, 5.2% and 4.8%,
respectively when compared with the baseline case. The proposed methods and result
in this research will offer a useful decision-making guide for all pipeline investors to invest
in a natural gas pipeline business, taking into account different operating conditions and
the impacts of engine degradation.PhD in Aerospac
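As a pointer to the discounted-cash-flow arithmetic behind the NPV comparisons above, the following minimal Python sketch evaluates NPV at the four discount rates the thesis considers (0%, 5%, 10% and 15%); the capital outlay and yearly cash flows are hypothetical placeholders, not figures from the study.

```python
# Sketch: net present value (NPV) of a pipeline project at several discount rates.
# NPV = sum over years t of cash_flow[t] / (1 + r)**t, with t = 0 the upfront outlay.
# The cash flows below are hypothetical placeholders (in billions).

def npv(rate: float, cash_flows: list) -> float:
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

cash_flows = [-12.0] + [2.5] * 20          # year 0 capital cost, then 20 years of net revenue
for rate in (0.00, 0.05, 0.10, 0.15):      # the discount rates examined in the thesis
    print(f"DR {rate:.0%}: NPV = {npv(rate, cash_flows):6.2f} bn")
```

In the thesis, the cash flows themselves are driven by the degradation scenarios (fuel burn, emissions cost, overhaul and leasing or divestment decisions), which is why the NPV falls as degradation deepens.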
Enzymes targeting distinct hydrolysis blind-spots of thermal and biological pre-treatments significantly uplift biogas production
Thermal hydrolysis process (THP) and biological hydrolysis (BH) are key pre-treatment technologies for anaerobic digestion (AD), termed advanced anaerobic digesters (AADs). They target the rate-limiting hydrolysis step in AD. This study evaluates full-scale pre-treatments for macromolecule bias and the implementation of hydrolysis enzymes to enhance biogas yield. Findings show THP significantly improves protein and carbohydrate solubilisation by 30% and 25%, respectively, but fully hydrolyses only carbohydrates. In contrast, BH targets fibres and proteins, achieving 35% and 23% solubilisation, and only partially hydrolyses carbohydrates. Biomethane potential (BMP) tests indicate that protease enzymes raise biomethane yield by 20-30% for AAD with THP pre-treatment. In comparison, α-amylase increases it by over 30% for AAD with BH pre-treatment. This study tailors enzyme selection and dosage to specifically address the unique "hydrolysis blind spot" of each pre-treatment, providing a strategic framework to enhance AD technologies by an improved understanding of macromolecule selectivity and their transformation pathways.
Bioresource Technology
The interplay of agile capabilities in crisis response
Purpose
Large-scale disruptions that lead to extreme environmental uncertainty, combined with perceived threats and time pressure, have prompted some organizations to rapidly form new networks. This research aims to focus on how actors in these newly formed networks leverage their agile capabilities in response to extreme disruptions.
Design/methodology/approach
Grounded in the agility literature, this study employs an abductive research approach and a multi-case design. Data were collected from 18 actors embedded in four newly formed networks located in the United Kingdom, Italy, Colombia and the USA.
Findings
Through six propositions and an empirically derived model of supply chain agility under extreme uncertainty, the findings reveal a dynamic interplay among agile capabilities. They also illustrate how the utilization of these capabilities shifts in environments characterized by severe unpredictability.
Practical implications
The research underscores the importance of allocating equal attention to both cognitive and physical dimensions of agility. Under conditions of extreme uncertainty, firms may need to adopt more entrepreneurial behaviors to enhance agility; however, this can increase risk exposure, which must be managed proactively.
Originality/value
This study contributes to the body of knowledge on supply chain agility by identifying the interrelationships between agility dimensions and demonstrating how extreme uncertainty influences their practical application.
International Journal of Operations & Production Management
Numerical analysis of crack path effects on the vibration behaviour of aluminium alloy beams and its identification via artificial neural networks
Understanding and predicting the behaviour of fatigue cracks are essential for ensuring safety, optimising maintenance strategies, and extending the lifespan of critical components in industries such as aerospace, automotive, civil engineering and energy. Traditional methods using vibration-based dynamic responses have provided effective tools for crack detection but often fail to predict crack propagation paths accurately. This study focuses on identifying crack propagation paths in an aluminium alloy 2024-T42 cantilever beam using dynamic response through numerical simulations and artificial neural networks (ANNs). A unified damping ratio of the specimens was measured using an ICP® accelerometer vibration sensor for the numerical simulation. Through systematic investigation of 46 crack paths of varying depths and orientations, it was observed that the crack propagation path significantly influenced the beam’s natural frequencies and resonance amplitudes. The results indicated a decreasing frequency trend and an increasing amplitude trend as the propagation angle changed from vertical to inclined. A similar trend was observed when the crack path changed from a predominantly vertical orientation to a more complex path with varying angles. Using ANNs, a model was developed to predict natural frequencies and amplitudes from the given crack paths, achieving a high accuracy with a mean absolute percentage error of 1.564%.
Sensors
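As a rough illustration of the regression task described above (mapping crack-path parameters to modal quantities), the sketch below trains a small scikit-learn MLP on synthetic data and reports the mean absolute percentage error. The feature encoding (crack depth and angle), the synthetic frequency relation and the network size are hypothetical, not the configuration used in the paper.

```python
# Sketch: a small feed-forward ANN mapping crack-path descriptors
# (depth ratio, propagation angle) to a natural frequency, scored with the
# mean absolute percentage error (MAPE). Data and network size are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
n = 500
depth = rng.uniform(0.05, 0.5, n)            # crack depth / beam height
angle = rng.uniform(0.0, 60.0, n)            # propagation angle in degrees

# Hypothetical trend: frequency drops with depth and with inclination
f1 = 120.0 * (1.0 - 0.6 * depth) * (1.0 - 0.001 * angle) + rng.normal(0, 0.3, n)

X = np.column_stack([depth, angle])
X_train, X_test, y_train, y_test = train_test_split(X, f1, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

mape = mean_absolute_percentage_error(y_test, model.predict(X_test))
print(f"MAPE: {100 * mape:.3f}%")
```

The paper works in the opposite direction as well, using the trained relationship to infer likely crack paths from measured frequencies and amplitudes; the sketch only shows the forward regression and error metric.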
Evaluating the scope of peer review in digital forensics: insights from Norway and the U.K.
This paper investigates the implementation and utilisation of peer review practices in digital forensics (DF) within Norway and the U.K. Through a comprehensive survey of 113 DF practitioners and managers, we explore the extent to which peer review is integrated into DF investigations and the variations in practices between these two countries. Our findings reveal that while both Norway and the U.K. recognize the importance of peer review in ensuring the integrity and accuracy of DF work, there is a tendency to limit peer reviews to the examination of reports, rather than extending them to more thorough verification of results and methodologies. Utilising the Peer Review Hierarchy for DF as an analytical framework, our study highlights a significant gap in the depth of peer review practices, with both countries primarily focusing on lower-level reviews that are less likely to detect critical errors. The paper discusses the implications of these findings in the field of DF, emphasising the need for more robust and comprehensive peer review mechanisms to enhance the quality and reliability of digital evidence. Furthermore, we discuss the systemic and resource-related challenges that may hinder the implementation of more extensive peer review practices.
Science & Justice