Missouri University of Science and Technology
Missouri University of Science and Technology (Missouri S&T): Scholars' Mine
Scaling Analysis of Electrodeposited Copper and the Influence of a Modified Polysaccharide on Surface Roughness
The influence of a commercially available modified polysaccharide (HydroStar®, Chemstar Chemical Products) on the roughness of short-term and small-scale copper electrodeposits was investigated using Atomic Force Microscopy (AFM), linear scan profilometry, and scaling analysis. Copper was deposited on a 316L stainless steel cathode at 40 °C and 300 A m⁻² from an electrolyte containing 40 g L⁻¹ Cu²⁺, 170 g L⁻¹ H₂SO₄, 1.5 g L⁻¹ Fe³⁺, 15 mg L⁻¹ Cl⁻, and either 0, 10, 50, or 100 mg L⁻¹ of HydroStar. Copper deposits produced over 15 to 25 min were imaged using AFM, and 2D linear scan profilometry was used to capture the surface features of copper samples produced over 120 to 240 min. Scaling analysis was applied to quantify the limiting roughness (δ) and critical length (LC), from which δ/LC was computed and related to the aspect ratio of surface features. All copper deposits showed a general rise in δ and LC with deposition time, but the growth rates decreased when HydroStar was included in the electrolyte, indicating that the additive lowers both the vertical heights and the widths of surface features. Furthermore, copper deposits were more consistently produced in the presence of HydroStar and, for a given value of limiting roughness, had surface features with wider bases than those created in the absence of the additive. The results show that the modified polysaccharide acts to create smooth copper deposits by generating surface features with lower aspect ratios.
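As a rough illustration of the scaling-analysis step (not the authors' code), the sketch below estimates the limiting roughness δ and critical length LC from a synthetic line profile via windowed RMS roughness; the profile, step size, and crossover heuristic are all hypothetical.

```python
# Hypothetical sketch of profile scaling analysis: RMS roughness w(L) over
# windows of increasing length L; delta is taken as the large-L (plateau)
# roughness and Lc as a crude crossover length where growth levels off.
import numpy as np

def window_roughness(profile, dx, window_lengths):
    """RMS roughness within windows of length L, averaged over windows."""
    w = []
    for L in window_lengths:
        n = max(int(L / dx), 2)          # points per window
        m = len(profile) // n            # number of whole windows
        rms = [np.std(profile[i * n:(i + 1) * n]) for i in range(m)]
        w.append(np.mean(rms))
    return np.array(w)

# Synthetic stand-in for an AFM/profilometry line scan (units: um).
rng = np.random.default_rng(0)
x = np.arange(0, 100, 0.1)                        # 100 um scan, 0.1 um step
profile = np.cumsum(rng.normal(0, 0.02, x.size))  # random-walk-like surface

Ls = np.logspace(-0.5, 1.5, 20)                   # window lengths, um
w = window_roughness(profile, 0.1, Ls)

delta = w[-1]                        # limiting roughness estimate
Lc = Ls[np.argmax(w > 0.9 * delta)]  # crude crossover estimate
print(f"delta ~ {delta:.3f} um, Lc ~ {Lc:.2f} um, delta/Lc ~ {delta/Lc:.3f}")
```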
A Gaze-driven Manufacturing Assembly Assistant System with Integrated Step Recognition, Repetition Analysis, and Real-time Feedback
Modern manufacturing faces significant challenges, including efficiency bottlenecks and high error rates in manual assembly operations. To address these challenges, we propose a gaze-driven assembly assistant system that leverages artificial intelligence (AI) for human-centered smart manufacturing. Our system processes video inputs of assembly activities using a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) network for assembly step recognition, a Transformer network for repetitive action counting, and a gaze tracker for eye gaze estimation. The system integrates the outputs of these tasks to deliver real-time visual assistance through a software interface that displays relevant tools, parts, and procedural instructions based on recognized steps and gaze data. Experimental results demonstrate the system's high performance, achieving 98.36% accuracy in assembly step recognition, a mean absolute error (MAE) of 4.37%, and an off-by-one accuracy (OBOA) of 95.88% in action counting. Compared to existing solutions, our gaze-driven assistant offers superior precision and efficiency, providing a scalable and adaptable framework suitable for complex and large-scale manufacturing environments.
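For readers who want a concrete picture of the CNN + LSTM step-recognition component, here is a minimal PyTorch sketch; the frame resolution, feature width, and 10-step class count are invented placeholders, and the paper's actual architecture is not reproduced.

```python
# Minimal sketch of CNN + LSTM assembly step recognition: a per-frame CNN
# encoder feeds an LSTM that classifies the step from a video clip.
# All layer sizes and the class count are illustrative assumptions.
import torch
import torch.nn as nn

class StepRecognizer(nn.Module):
    def __init__(self, num_steps=10, feat_dim=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                  # tiny per-frame encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_steps)   # step logits per clip

    def forward(self, clips):                      # clips: (B, T, 3, H, W)
        B, T = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(B, T, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])               # classify from last state

model = StepRecognizer()
logits = model(torch.randn(2, 16, 3, 64, 64))      # 2 clips of 16 frames
print(logits.shape)                                # torch.Size([2, 10])
```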
Assessing The Impact of Radar-Rainfall Uncertainty on Streamflow Simulation
Hydrological models and quantitative precipitation estimation (QPE) are critical elements of flood forecasting systems. Both are subject to considerable uncertainties, and quantifying their relative contributions to forecasted streamflow and flood uncertainty has remained challenging. Past work documented in the literature focused on one of these elements separately from the other. With this in mind, we present a systematic approach to assess the impact of QPE uncertainty on streamflow forecasting. Our approach examines the performance of the operational Iowa Flood Center (IFC) hydrological model after altering two radar-based QPE products. We ran the Hillslope Link Model (HLM) for Iowa between 2015 and 2020, altering the Multi-Radar/Multi-Sensor (MRMS) system product and the specific attenuation-based (IFCA) IFC radar-derived product with a multiplicative error term. We assessed the forecasting system's performance at 112 USGS streamflow gauges using the altered QPE products. Our results suggest that addressing rainfall uncertainty has the potential for much-improved flood forecasting spatially and seasonally. We identified spatial patterns linking prediction improvements to the radar's location and the magnitude of rainfall. We also observed seasonal trends suggesting underestimation during the cold season (October–April). The patterns for the different radar products are generally similar but show some differences, implying that the QPE algorithm plays a role. This study's results are a step toward separating modeling and QPE uncertainties. Future work involving larger areas and different hydrological and error models is essential to improve our understanding of the impact of QPE uncertainty.
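A minimal sketch of the kind of multiplicative-error perturbation described above, assuming a lognormal error model and synthetic rainfall; the bias and spread values are illustrative choices, not the study's actual error model.

```python
# Hypothetical sketch of a multiplicative-error perturbation on a radar
# QPE field. The lognormal error model and all parameter values are
# illustrative assumptions, not the study's calibrated error term.
import numpy as np

def apply_multiplicative_error(qpe, bias=1.2, sigma=0.3, seed=0):
    """Scale each rainfall estimate by a lognormal multiplicative error."""
    rng = np.random.default_rng(seed)
    eps = rng.lognormal(mean=np.log(bias), sigma=sigma, size=qpe.shape)
    return qpe * eps

# Synthetic stand-in for a daily QPE grid (365 days x 100 cells, mm/day).
qpe = np.random.default_rng(1).gamma(shape=2.0, scale=1.5, size=(365, 100))
qpe_alt = apply_multiplicative_error(qpe)
print(f"mean QPE {qpe.mean():.2f} mm/day -> altered {qpe_alt.mean():.2f} mm/day")
```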
Ceno: Non-uniform, Segment and Parallel Zero-Knowledge Virtual Machine
In this paper, we explore a novel Zero-knowledge Virtual Machine (zkVM) framework leveraging succinct, non-interactive zero-knowledge proofs for verifiable computation over any code. Our approach divides the proof of program execution into two stages. In the first stage, program execution is broken down into segments, with identical sections identified and grouped; these segments are then proved through data-parallel circuits that allow for varying amounts of duplication. In the second stage, the verifier examines these segment proofs, reconstructing the program's control and data flow from the segments' duplication numbers and the original program. This second stage can be further attested by a uniform recursive proof. We propose two specific designs of this concept, in which segmentation and parallelization occur at two levels: opcode and basic block. Both designs minimize the control flow that affects circuit size and support dynamic copy numbers, ensuring that computational costs correlate directly with the code actually executed (i.e., you only pay for what you use). In our second design in particular, an innovative data-flow reconstruction technique in the second stage drastically cuts down on stack operations, even compared to the original program execution. Note that the two designs are complementary rather than mutually exclusive; integrating both approaches in the same zkVM could unlock greater potential to accommodate various program patterns. We present an asymmetric GKR scheme to implement our designs, pairing a non-uniform prover with a uniform verifier to generate proofs for dynamic-length data-parallel circuits. The use of a GKR prover also significantly reduces the size of the commitment: GKR allows us to commit only to the circuit's input and output, whereas in Plonkish-based solutions the prover must commit to all the witnesses.
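The segment-and-group idea in the first stage can be pictured with a toy example (an analogy in ordinary Python, not a proof system): identical basic blocks in an execution trace are deduplicated, and each distinct block gets one circuit with a dynamic copy count.

```python
# Toy illustration of the segmentation idea: group an execution trace by
# repeated basic blocks so each distinct block is "proved" once by a
# data-parallel circuit with a dynamic copy number. Not a real zkVM.
from collections import Counter

trace = [            # (block_id, opcodes) pairs from a hypothetical run
    ("loop_body", ("load", "add", "store")),
    ("loop_body", ("load", "add", "store")),
    ("loop_body", ("load", "add", "store")),
    ("cond",      ("cmp", "jnz")),
    ("loop_body", ("load", "add", "store")),
]

copies = Counter(block for block, _ in trace)
for block, n in copies.items():
    print(f"segment {block!r}: one circuit, {n} data-parallel copies")
# Cost scales with the copies actually executed ("pay as much as you use");
# the verifier later re-links segments using these copy counts.
```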
DisTGranD: Granular Event/sub-event Classification For Disaster Response
Efficient crisis management relies on prompt and precise analysis of disaster data from various sources, including social media. Fine-grained, class-labeled annotated data provides a more diversified range of information than high-level label datasets. In this study, we introduce a dataset richly annotated at a low level to more accurately classify crisis-related communication. To this end, we first present DisTGranD, an extensively annotated dataset of over 47,600 tweets related to earthquakes and hurricanes. The dataset uses the Automatic Content Extraction (ACE) standard to provide a detailed dual-layer annotation of events and sub-events and to identify critical triggers and supporting arguments. The inter-annotator evaluation of DisTGranD demonstrated high agreement, with Fleiss' kappa scores of 0.90 and 0.93 for event and sub-event types, respectively. Moreover, in a transformer-based embedded phrase extraction comparison, XLNet achieved an impressive 96% intra-label similarity score for event types and 97% for sub-event types. We further propose a novel deep learning classification model, RoBiCCus, which achieved ≥90% accuracy and F1-score on the event and sub-event type classification tasks on our DisTGranD dataset and outperformed other models on publicly available disaster datasets. The DisTGranD dataset represents a nuanced class-labeled framework for detecting and classifying disaster-related social media content, which can significantly aid decision-making in disaster response. This robust dataset enables deep-learning models to provide insightful, actionable data during crises. Our annotated dataset and code are publicly available on GitHub.
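As a pointer to how agreement figures like those above can be computed in principle, here is a hedged sketch of a Fleiss' kappa calculation using statsmodels; the label matrix is invented, and the real DisTGranD annotation files are not used.

```python
# Sketch of an inter-annotator agreement check via Fleiss' kappa over
# categorical event-type labels. The labels below are made up; statsmodels
# is assumed to be installed (pip install statsmodels).
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = tweets, columns = annotators, values = assigned event-type id
labels = np.array([
    [0, 0, 0],
    [1, 1, 1],
    [0, 0, 1],
    [2, 2, 2],
    [1, 1, 1],
])
table, _ = aggregate_raters(labels)   # subjects x categories count table
print(f"Fleiss' kappa: {fleiss_kappa(table):.2f}")
```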
Effect Of Thermomechanical Processing And Heat Treatment On The Microstructure And Mechanical Properties Of Ultra-High Strength Steels
A martensite-based steel and an austenite-based Fe-Mn-Al steel were investigated with the aim of pilot-scale production of steels that meet military specifications for Rolled Homogeneous Armor (RHA) and High-Hardness Armor (HHA) plates. Each material underwent specific thermomechanical processing, and the hardness, Charpy V-notch (CVN) impact toughness at -40 °C, and room-temperature tensile properties were then correlated with the resulting microstructures.
Specimens of the martensitic steel were austenitized at 1010 °C, quenched, and tempered at 150, 175, 200, 225, and 250 °C for times up to four hours. As for the Fe-Mn-Al steel, its hot rolling schedule was designed to promote strain accumulation in the as-rolled matrix, which is conducive to the precipitation of NiAl during annealing heat treatments. Specimens of this steel were annealed at 700, 750, 800, and 900 °C for times up to one hour.
The results show evidence of tempered martensite embrittlement (TME) in the martensite-based steel, especially when tempered for four hours across the investigated temperature range.
Regarding the Fe-Mn-Al steel, electron backscatter diffraction (EBSD) analyses revealed strain accumulation in the as-rolled austenitic matrix. Annealing at 750 °C for 30 min strengthened the steel through intra-granular precipitation of NiAl, accompanied by a drop in CVN impact toughness. Nevertheless, a literature review showed that the investigated material outperformed similar austenite-based Fe-Mn-Al steels strengthened through κ-carbides.
Some New Hardy-Type Inequalities with Negative Parameters on Time Scales
In this paper, we present new Hardy-type inequalities with negative parameters on a time scale T. The adopted approach draws upon a reversed Hölder dynamic inequality, a chain rule, and the integration by parts rule on time scales. In the continuous case, our results contain integral inequalities due to Benaissa and Budak, while in the discrete case, the obtained inequalities are essentially new. Additionally, we demonstrate the applicability of our results in the quantum case.
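For background (standard textbook material, not the paper's new time-scale results), the classical continuous Hardy inequality and its commonly stated reversal for negative exponents read as follows; time-scale versions replace these integrals with delta integrals over T.

```latex
% Classical Hardy inequality (p > 1, f \ge 0 measurable):
\int_0^\infty \left( \frac{1}{x} \int_0^x f(t)\,dt \right)^{p} dx
  \;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^\infty f(x)^{p}\, dx .
% For a negative parameter p < 0 (and f > 0), the inequality reverses:
\int_0^\infty \left( \frac{1}{x} \int_0^x f(t)\,dt \right)^{p} dx
  \;\ge\; \left( \frac{p}{p-1} \right)^{p} \int_0^\infty f(x)^{p}\, dx .
```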
Porosity Prediction in LPBF of AISI 316L Stainless Steel: Refined Volumetric Energy Density and FEM Simulation Approach
Porosity in laser powder bed fusion (LPBF) additive manufacturing significantly affects the mechanical properties and performance of produced parts. The traditional volumetric energy density (VED) model has limitations in accurately predicting porosity, as it does not account for material-specific properties and thermal dynamics. This study presents a comparative analysis of porosity formation in LPBF of AISI 316L stainless steel through experiments, finite element (FE) simulation, and analytical modeling. For the analytical model, a modified VED (MVED) relationship is proposed that incorporates material properties and thermo-physical characteristics to address the shortcomings of conventional VED approaches. LPBF experiments were conducted to print samples under varying process parameters, and X-ray computed tomography was used to characterize the porosity within the fabricated samples. FEM simulations were also conducted to predict thermal distributions, melt pool dimensions, and the corresponding porosity. The MVED analytical model demonstrated improved empirical correlation with experimental porosity compared to the traditional VED, with an R-squared value of 0.88 versus 0.75 for the traditional model. This improvement highlights the importance of considering material-specific properties in energy density calculations. FEM results showed good agreement with experimental porosity trends across different processing conditions, accurately predicting thermal distributions and melt pool dimensions. The presented approach provides insight into porosity formation mechanisms and offers potential for optimizing LPBF processing parameters to minimize defects while addressing the limitations of traditional VED models.
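For reference, the conventional VED that the abstract contrasts against is usually written VED = P / (v · h · t), with laser power P, scan speed v, hatch spacing h, and layer thickness t. The sketch below evaluates it for a made-up 316L parameter set; the MVED's additional material and thermal terms are the paper's contribution and are not reproduced here.

```python
# Conventional volumetric energy density for LPBF, in J/mm^3.
# Parameter values below are illustrative only, not the study's settings.
def ved(power_w, speed_mm_s, hatch_mm, layer_mm):
    """VED = P / (v * h * t) in J/mm^3."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# Example (hypothetical) parameter set: 200 W, 800 mm/s, 0.10 mm hatch,
# 0.03 mm layer -> 200 / 2.4 ~ 83.3 J/mm^3.
print(f"VED = {ved(200, 800, 0.10, 0.03):.1f} J/mm^3")
```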
Variable Selection in Mixture Cure Models using Elastic Net Penalty: Application to COVID-19 Data
In survival analysis, it is often assumed that all individuals will eventually experience the event of interest if followed long enough. However, in many real-world scenarios, a subset of individuals remains event-free indefinitely. For instance, in clinical studies, some patients never relapse and are considered cured rather than censored. Traditional survival models are inadequate for capturing this heterogeneity. Mixture cure models address this limitation by distinguishing between cured and susceptible individuals while modeling the survival of the latter. A key challenge in mixture cure modeling is selecting relevant covariates, particularly in the presence of time-varying effects. This study develops a penalized logistic/Cox proportional hazards mixture cure model incorporating time-varying covariates for both the incidence and latency components. The model is implemented using the smoothly clipped absolute deviation (SCAD) penalty to facilitate variable selection and improve model interpretability. To achieve this, we modified the penPHcure package to accommodate SCAD regularization and generate time-varying covariates. The proposed approach is applied to real-world data on the time to death of hospitalized COVID-19 patients in Limpopo Province, South Africa, demonstrating its practical applicability in survival analysis.
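The SCAD penalty named in the abstract has a standard closed form (Fan and Li's, with the conventional a = 3.7); the generic sketch below implements that form and is not the authors' modified penPHcure code (penPHcure itself is an R package).

```python
# Standard SCAD penalty, elementwise: linear near zero, quadratic taper,
# then constant, so large coefficients are not over-shrunk. Generic sketch.
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """Elementwise SCAD penalty p_lambda(|theta|)."""
    t = np.abs(theta)
    small = lam * t                                          # |t| <= lam
    mid = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))  # lam < |t| <= a*lam
    large = (a + 1) * lam**2 / 2                             # |t| > a*lam
    return np.where(t <= lam, small, np.where(t <= a * lam, mid, large))

print(scad_penalty(np.array([0.1, 1.0, 5.0]), lam=0.5))
```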
Tensile and Fatigue Properties of Haynes® 233 Manufactured by Wire-arc Additive Manufacturing
Haynes® 233 is a newly developed nickel-based superalloy currently in the early stages of commercial adoption. With growing interest in fabricating large and complex components using wire-arc additive manufacturing (WAAM), this alloy presents a promising option for industrial applications. This study investigates the microstructure and the tensile and fatigue properties of heat-treated (HT) WAAM Haynes® 233 and compares them to those of its wrought counterpart. The yield strength (YS), ultimate tensile strength (UTS), and fatigue strength of WAAM Haynes® 233 are 709.4 MPa, 890.1 MPa, and 253.8 MPa, respectively. These values represent a 63.8% increase in YS, a 1.11% decrease in UTS, and a 21.7% increase in fatigue strength compared to the wrought material. The slight decrease in UTS is attributed to process-induced defects, while the improved fatigue strength of the HT material is due to the increased volume fraction and spatial distribution of carbides in the microstructure. Fractography of the tensile fracture surfaces indicated ductile failure in the wrought material and brittle failure in the HT WAAM material, with defects acting as initiation sites for fatigue failure in both materials.
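As a quick arithmetic cross-check of the percentage changes quoted above, one can back-compute the wrought baselines they imply; these baseline values are inferred for illustration, not reported in the abstract.

```python
# Back-compute the wrought baselines implied by the WAAM values and the
# quoted percent changes (MPa). Inferred figures, not reported data.
waam = {"YS": 709.4, "UTS": 890.1, "fatigue": 253.8}
change = {"YS": +0.638, "UTS": -0.0111, "fatigue": +0.217}

for prop, v in waam.items():
    wrought = v / (1 + change[prop])
    print(f"{prop}: WAAM {v} MPa -> implied wrought ~ {wrought:.0f} MPa")
# YS ~ 433 MPa, UTS ~ 900 MPa, fatigue ~ 209 MPa (implied).
```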