Improvement of modal scaling factors using mass additive technique
A general investigation into the improvement of modal scaling factors of an experimental modal model using the mass additive technique is discussed. The database required by the proposed method consists of an experimental modal model (a set of complex eigenvalues and eigenvectors) of the original structure and a corresponding set of complex eigenvalues of the mass-added structure. Three analytical methods, i.e., first-order and second-order perturbation methods and the local eigenvalue modification technique, are proposed to predict the improved modal scaling factors. Difficulties encountered in scaling closely spaced modes are discussed. Methods to compute the necessary rotational modal vectors at the mass additive points are also proposed to increase the accuracy of the analytical prediction.
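As a point of reference, a minimal first-order sketch of the underlying mass-change scaling idea, assuming an undamped system with real modes, a known mass addition ΔM, and a measured (unscaled) mode ψ_r whose mass-normalized counterpart is φ_r = α_r ψ_r; the methods in the abstract extend this to complex modes and higher-order corrections:

```latex
% First-order perturbation sketch (illustrative; real modes assumed).
\[
  \lambda_r^{m} \;\approx\; \frac{\lambda_r}{1+\phi_r^{T}\,\Delta M\,\phi_r}
  \;\approx\; \lambda_r\bigl(1-\alpha_r^{2}\,\psi_r^{T}\,\Delta M\,\psi_r\bigr)
  \quad\Longrightarrow\quad
  \alpha_r^{2} \;\approx\; \frac{\lambda_r-\lambda_r^{m}}{\lambda_r\,\psi_r^{T}\,\Delta M\,\psi_r},
\]
% where \lambda_r and \lambda_r^{m} are the eigenvalues of the original and
% mass-added structure for mode r.
```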
A new method to real-normalize measured complex modes
A time-domain subspace iteration technique is presented to compute a set of normal modes from the measured complex modes. By using the proposed method, a large number of physical coordinates are reduced to a smaller number of modal or principal coordinates. Subspace free-decay time responses are computed using properly scaled complex modal vectors. The companion matrix for the general case of non-proportional damping is then derived in the selected vector subspace. Subspace normal modes are obtained through the eigenvalue solution of the $M_N^{-1} K_N$ matrix and transformed back to the physical coordinates to obtain a set of normal modes. A numerical example is presented to demonstrate the outlined theory.
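A minimal numerical sketch of the final step, assuming the reduced matrices $M_N$, $K_N$ and the subspace basis V are already available; the function and variable names below are illustrative, not taken from the paper:

```python
# Minimal sketch: normal modes from the reduced eigenproblem M_N^{-1} K_N,
# assuming M_N, K_N and the subspace basis V are already computed.
import numpy as np

def subspace_normal_modes(M_N: np.ndarray, K_N: np.ndarray, V: np.ndarray):
    """Solve the reduced eigenproblem and map the modes back to physical coordinates."""
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(M_N, K_N))  # M_N^{-1} K_N
    order = np.argsort(eigvals.real)                  # sort by squared natural frequency
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    return eigvals.real, V @ eigvecs.real             # physical-coordinate normal modes

# toy usage: 3 reduced coordinates spanned by a 10-DOF orthonormal basis V
rng = np.random.default_rng(0)
V = np.linalg.qr(rng.standard_normal((10, 3)))[0]
M_N, K_N = np.eye(3), np.diag([1.0, 4.0, 9.0])
freqs_sq, modes = subspace_normal_modes(M_N, K_N, V)
```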
Application of the Financial Industry Business Ontology (FIBO) for development of a financial organization ontology
The article considers an approach to the formalized description and harmonization of the meaning of financial terms by means of semantic modelling. Ontologies for the semantic models are described with the help of special languages developed for the Semantic Web. Results of applying FIBO to different tasks in the Russian financial sector are presented.
Dynamic similarity design method for an aero-engine dual-rotor test rig
This paper presents a dynamic similarity design method for designing a scaled dynamic similarity model (DSM) of a dual-rotor test rig for an aero-engine. Such a test rig is usually used to investigate the major dynamic characteristics of the full-size model (FSM) and to reduce the cost and time of experiments on practical aero-engine structures. Firstly, the dynamic equivalent model (DEM) of a dual-rotor system is built from its FSM using parametric modelling, and the first 10 frequencies and mode shapes of the DEM are updated to agree with the FSM by modifying the geometrical shapes of the DEM. Then, the scaling laws for the relevant parameters (such as the geometric dimensions of the rotors, the stiffness of the supports, and the inherent properties) between the DEM and its scaled DSM are derived from their equations of motion, and the scaling factors of the above-mentioned parameters are determined by the theory of dimensional analysis. After that, the corresponding parameters of the scaled DSM of the dual-rotor test rig can be determined by using the scaling factors. In addition, the scaled DSM is further updated by considering the coupling effect between the disks and shafts. Finally, critical-speed and unbalance-response analyses of the FSM and the updated scaled DSM are performed to validate the proposed method.
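As an illustrative single-degree-of-freedom example of such a scaling law (not the paper's full dual-rotor derivation), if the stiffness and mass of the DSM scale relative to the DEM by factors λ_k and λ_m, the natural frequency scales as:

```latex
% Illustrative single-DOF relation; the paper derives the complete set of
% scaling laws from the dual-rotor equations of motion and dimensional analysis.
\[
  \omega=\sqrt{k/m}
  \quad\Longrightarrow\quad
  \lambda_\omega=\frac{\omega^{\mathrm{DSM}}}{\omega^{\mathrm{DEM}}}
  =\sqrt{\frac{\lambda_k}{\lambda_m}},
  \qquad
  \lambda_k=\frac{k^{\mathrm{DSM}}}{k^{\mathrm{DEM}}},\quad
  \lambda_m=\frac{m^{\mathrm{DSM}}}{m^{\mathrm{DEM}}}.
\]
```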
Particle swarm optimization with sequential niche technique for dynamic finite element model updating
Increasing the LLM Accuracy for Question Answering: Ontologies to the Rescue!
There is increasing evidence that question-answering (QA) systems with Large Language Models (LLMs), which employ a knowledge graph/semantic representation of an enterprise SQL database (i.e. Text-to-SPARQL), achieve higher accuracy compared to systems that answer questions directly on SQL databases (i.e. Text-to-SQL). Our previous benchmark research showed that by using a knowledge graph, the accuracy improved from 16% to 54%. The question remains: how can we further improve the accuracy and reduce the error rate? Building on the observations of our previous research, where the inaccurate LLM-generated SPARQL queries followed incorrect paths, we present an approach that consists of 1) Ontology-Based Query Check (OBQC), which detects errors by leveraging the ontology of the knowledge graph to check whether the LLM-generated SPARQL query matches the semantics of the ontology, and 2) LLM Repair, which uses the error explanations with an LLM to repair the SPARQL query. Using the chat-with-the-data benchmark, our primary finding is that our approach increases the overall accuracy to 72%, including an additional 8% of "I don't know" unknown results. Thus, the overall error rate is 20%. These results provide further evidence that investing in knowledge graphs, namely the ontology, provides higher accuracy for LLM-powered question answering systems.
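A minimal sketch of the ontology-based check, assuming the ontology is available as a Turtle string and the LLM-generated SPARQL query as plain text; the namespace, property names, and the declared-property lookup below are illustrative simplifications of OBQC, which also validates domains, ranges, and query paths:

```python
# Minimal OBQC-style sketch: flag example-namespace IRIs used in a SPARQL query
# that are not declared as properties in the ontology. Illustrative only.
import re
from rdflib import Graph, RDF, OWL

ONTOLOGY_TTL = """
@prefix : <http://example.org/ins#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
:hasPolicy     a owl:ObjectProperty .
:premiumAmount a owl:DatatypeProperty .
"""

def undeclared_properties(sparql: str, ontology: Graph) -> list[str]:
    """Return example-namespace IRIs in the query that are not declared properties."""
    declared = {str(s) for s in ontology.subjects(RDF.type, OWL.ObjectProperty)}
    declared |= {str(s) for s in ontology.subjects(RDF.type, OWL.DatatypeProperty)}
    used = set(re.findall(r"<(http://example\.org/ins#[^>]+)>", sparql))
    return sorted(used - declared)

ontology = Graph().parse(data=ONTOLOGY_TTL, format="turtle")
query = ("SELECT ?amt WHERE { ?c <http://example.org/ins#hasPolicy> ?p . "
         "?p <http://example.org/ins#premiumAmout> ?amt }")  # misspelled property
print(undeclared_properties(query, ontology))  # explanation handed to the repair step
```

In the approach described above, such error explanations are passed back to the LLM, which attempts to repair the query; after repeated failures the system answers "I don't know".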
Effects of source and receiver locations in predicting room transfer functions by a phased beam tracing method
Using parametric model order reduction for inverse analysis of large nonlinear cardiac simulations
Predictive high-fidelity finite element simulations of human cardiac mechanics commonly require a large number of structural degrees of freedom. Additionally, these models are often coupled with lumped-parameter models of hemodynamics. High computational demands, however, slow down model calibration and therefore limit the use of cardiac simulations in clinical practice. As cardiac models rely on several patient-specific parameters, a single solution corresponding to one specific parameter set does not meet clinical demands. Moreover, while solving the nonlinear problem, 90% of the computation time is spent solving linear systems of equations. We propose to reduce the structural dimension of a monolithically coupled structure-Windkessel system by projection onto a lower-dimensional subspace. We obtain a good approximation of the displacement field as well as of key scalar cardiac outputs even with very few reduced degrees of freedom, while achieving considerable speedups. For subspace generation, we use proper orthogonal decomposition of displacement snapshots. Following a brief comparison of subspace interpolation methods, we demonstrate how projection-based model order reduction can be easily integrated into a gradient-based optimization. We demonstrate the performance of our method in a real-world multivariate inverse analysis scenario. Using the presented projection-based model order reduction approach can significantly speed up model personalization and could be used for many-query tasks in a clinical setting.
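A minimal numpy sketch of the projection step, assuming displacement snapshots are stacked as columns of a matrix; the linear system K u = f stands in for the paper's nonlinear, coupled cardiac problem and is illustrative only:

```python
# Projection-based model order reduction with a POD basis (illustrative sketch).
import numpy as np

def pod_basis(snapshots: np.ndarray, r: int) -> np.ndarray:
    """First r left singular vectors of the snapshot matrix (the POD basis)."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def reduced_solve(K: np.ndarray, f: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Galerkin projection: solve the r x r reduced system, lift back to full size."""
    K_r = V.T @ K @ V
    f_r = V.T @ f
    return V @ np.linalg.solve(K_r, f_r)

# toy usage: 1000 structural DOFs, 20 snapshots, 5 reduced DOFs
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((1000, 20))
V = pod_basis(snapshots, r=5)
K = np.eye(1000) + 0.01 * np.diag(np.arange(1000))
f = np.ones(1000)
u_approx = reduced_solve(K, f, V)   # approximate full-order displacement
```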
A Benchmark to Understand the Role of Knowledge Graphs on Large Language Model's Accuracy for Question Answering on Enterprise SQL Databases
Enterprise applications of Large Language Models (LLMs) hold promise for question answering on enterprise SQL databases. However, the extent to which LLMs can accurately respond to enterprise questions in such databases remains unclear, given the absence of suitable Text-to-SQL benchmarks tailored to enterprise settings. Additionally, the potential of Knowledge Graphs (KGs) to enhance LLM-based question answering by providing business context is not well understood. This study aims to evaluate the accuracy of LLM-powered question answering systems in the context of enterprise questions and SQL databases, while also exploring the role of knowledge graphs in improving accuracy. To achieve this, we introduce a benchmark comprising an enterprise SQL schema in the insurance domain, a range of enterprise queries encompassing reporting to metrics, and a contextual layer incorporating an ontology and mappings that define a knowledge graph. Our primary finding reveals that question answering using GPT-4, with zero-shot prompts directly on SQL databases, achieves an accuracy of 16%. Notably, this accuracy increases to 54% when questions are posed over a Knowledge Graph representation of the enterprise SQL database. Therefore, investing in Knowledge Graphs provides higher accuracy for LLM-powered question answering systems.
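A minimal sketch of what a "Knowledge Graph representation" of an SQL database can look like, assuming rows are mapped to triples via an ontology-aligned mapping; the table, column, and IRI names below are illustrative, not the benchmark's actual schema:

```python
# Illustrative mapping of one SQL row to RDF triples (not the benchmark's mapping).
from rdflib import Graph, Literal, Namespace, RDF

INS = Namespace("http://example.org/ins#")

def row_to_triples(graph: Graph, row: dict) -> None:
    """Map one row of a hypothetical `policy` table to triples."""
    policy = INS[f"policy/{row['policy_id']}"]
    graph.add((policy, RDF.type, INS.Policy))
    graph.add((policy, INS.premiumAmount, Literal(row["premium"])))
    graph.add((policy, INS.heldBy, INS[f"customer/{row['customer_id']}"]))

kg = Graph()
row_to_triples(kg, {"policy_id": 42, "customer_id": 7, "premium": 1200.0})
print(kg.serialize(format="turtle"))
```

Text-to-SPARQL questions are then posed against graphs built in this way rather than directly against the SQL tables.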
