2,169 research outputs found
Hydrodynamic effects of a cannula in a USP dissolution testing apparatus 2
Dissolution testing is routinely used in the pharmaceutical industry to provide in vitro drug release information for drug development and quality control purposes. The USP Testing Apparatus 2 is the most common dissolution testing system for solid dosage forms. Usually, sampling cannulas are used to take samples manually from the dissolution medium. However, the inserted cannula can alter the normal fluid flow within the vessel and produce different dissolution testing results.
The hydrodynamic effects introduced by a permanently inserted cannula in a USP Dissolution Testing Apparatus 2 were evaluated by two approaches. Firstly, dissolution tests were conducted with two dissolution systems, the testing system (with cannula) and the standard system (without cannula), for nine different tablet positions using non-disintegrating salicylic acid calibrator tablets. The dissolution profiles at each tablet location in the two systems were compared using statistical tools. Secondly, Particle Image Velocimetry (PIV) was used to experimentally obtain velocity vector maps and velocity profiles in the vessel for the two systems and to quantify changes in the velocities on selected horizontal iso-surfaces.
The results show that the system with the cannula produced higher dissolution profiles than the system without the cannula, and that the magnitude of the difference between dissolution profiles in the two systems depended on tablet location. However, in most dissolution tests, the changes in dissolution profile due to the cannula were small enough to satisfy the FDA criteria for similarity between dissolution profiles (f1 and f2 values).
PIV measurements showed slight changes in the velocities of the fluid flow in the vessel when the cannula was inserted. The most significant velocity changes were observed closest to the cannula. However, the hydrodynamic effect generated by the cannula generally did not appear to be particularly strong, which was consistent with the dissolution test results.
It can be concluded that the hydrodynamic effects generated by the inserted cannula are real and observable. Such effects result in slight modifications of the fluid flow in the dissolution vessel and in detectable differences in the dissolution profiles, which, although limited, can introduce variations in test results, possibly leading to failure of routine dissolution tests.
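The FDA f1 (difference) and f2 (similarity) factors used as acceptance criteria above can be computed directly from two mean dissolution profiles. A minimal sketch follows; the profile values are illustrative only, not data from the study:

```python
import math

def f1_f2(R, T):
    """FDA difference (f1) and similarity (f2) factors for two mean
    dissolution profiles sampled at the same n time points.
    The usual criteria for similarity are f1 <= 15 and f2 >= 50."""
    assert len(R) == len(T)
    n = len(R)
    f1 = 100.0 * sum(abs(r - t) for r, t in zip(R, T)) / sum(R)
    msd = sum((r - t) ** 2 for r, t in zip(R, T)) / n  # mean squared difference
    f2 = 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))
    return f1, f2

# Illustrative profiles (% dissolved at successive sampling times)
reference = [18, 35, 52, 66, 77, 85]
test = [20, 38, 55, 68, 78, 86]
f1, f2 = f1_f2(reference, test)
print(f"f1 = {f1:.1f}, f2 = {f2:.1f}")  # similar if f1 <= 15 and f2 >= 50
```

For identical profiles f1 = 0 and f2 = 100; f2 shrinks toward 50 as the profiles diverge.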
Computationally efficient methods for solving time-variable-order time-space fractional reaction-diffusion equation
Fractional differential equations are becoming more widely accepted as a powerful tool for modelling anomalous diffusion, which is exhibited by various materials and processes. Recently, researchers have suggested that rather than using constant-order fractional operators, some processes are more accurately modelled using fractional orders that vary with time and/or space. In this paper we develop computationally efficient techniques for solving time-variable-order time-space fractional reaction-diffusion equations (TSFRDEs) using a finite difference scheme. We adopt the Coimbra variable-order time fractional operator and a variable-order fractional Laplacian operator in space, where both orders are functions of time. Because the fractional operator is nonlocal, it is challenging to deal efficiently with its long-range dependence when using classical numerical techniques to solve such equations. The novelty of our method is that the numerical solution of the time-variable-order TSFRDE is written in terms of a matrix function vector product at each time step. This product is approximated efficiently by the Lanczos method, which is a powerful iterative technique for approximating the action of a matrix function by projecting onto a Krylov subspace. Furthermore, an adaptive preconditioner is constructed that dramatically reduces the size of the required Krylov subspaces and hence the overall computational cost. Numerical examples, including the variable-order fractional Fisher equation, are presented to demonstrate the accuracy and efficiency of the approach.
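The computational kernel described here, approximating the action of a matrix function f(A) on a vector via a Krylov subspace, can be sketched as follows for symmetric A. This is a generic Lanczos sketch with names of my choosing; it omits the paper's adaptive preconditioner and variable-order discretization:

```python
import numpy as np

def lanczos_matfunc(A, b, f, m=30):
    """Approximate f(A) @ b for symmetric A with an m-step Lanczos process:
    project onto the Krylov subspace span{b, Ab, ..., A^(m-1) b}, apply f to
    the small tridiagonal projection T_m, and lift back: ||b|| * Q f(T_m) e_1."""
    n = len(b)
    m = min(m, n)
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    beta0 = np.linalg.norm(b)
    Q[:, 0] = b / beta0
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w = w - alpha[j] * Q[:, j]
        if j > 0:
            w = w - beta[j - 1] * Q[:, j - 1]
        # full reorthogonalization for stability (fine at this sketch scale)
        w = w - Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:          # breakdown: exact invariant subspace
                Q, alpha, beta = Q[:, : j + 1], alpha[: j + 1], beta[:j]
                break
            Q[:, j + 1] = w / beta[j]
    # apply f to the small tridiagonal matrix via its eigendecomposition
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)
    fT_e1 = evecs @ (f(evals) * evecs[0])   # f(T_m) e_1
    return beta0 * (Q @ fT_e1)

# demo: a diffusion-like step exp(-A) @ b with A a 1D discrete Laplacian
n = 60
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD tridiagonal
b = np.ones(n)
approx = lanczos_matfunc(A, b, lambda x: np.exp(-x), m=40)
```

The point of the method is that only matrix-vector products with A are needed, so the dense nonlocal operator never has to be factorized.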
Maximizing the minimum and maximum forcing numbers of perfect matchings of graphs
Let G be a simple graph with a perfect matching. The forcing number f(G, M) of a perfect matching M of G is the smallest cardinality of a subset of M that is contained in no other perfect matching of G. Among all perfect matchings M of G, the minimum and maximum values of f(G, M) are called the minimum and maximum forcing numbers of G, denoted by f(G) and F(G), respectively. Then f(G) ≤ F(G). Che and Chen (2011) proposed an open problem: how to characterize the graphs G with f(G) = F(G). Later they showed that for bipartite graphs G, f(G) = F(G) if and only if G is a complete bipartite graph K_{n,n}. In this paper, we solve the problem for general graphs and obtain that f(G) = F(G) if and only if G is a complete multipartite graph or K_{n,n}^+ (K_{n,n} with arbitrary additional edges in the same partite set). For a larger class of graphs, we show that G is 2-connected and a brick (a 3-connected and bicritical graph), except for K_{n,n}^+. In particular, we prove that the forcing spectrum of each such graph is continuous, realized by matching 2-switches, and the minimum forcing numbers of all such graphs form an integer interval.
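For small graphs, f(G) and F(G) as defined above can be checked by brute force: enumerate all perfect matchings, then for each matching find the smallest forcing subset. A sketch (exponential time, illustration only; names are mine), verified on C_4 = K_{2,2}, where both forcing numbers equal 1:

```python
from itertools import combinations

def perfect_matchings(vertices, edges):
    """Enumerate all perfect matchings by always matching the smallest
    unmatched vertex (each matching is generated exactly once)."""
    def rec(remaining, chosen):
        if not remaining:
            yield frozenset(chosen)
            return
        v = min(remaining)
        for e in edges:
            if v in e and e - {v} <= remaining:
                yield from rec(remaining - e, chosen + [e])
    yield from rec(set(vertices), [])

def forcing_number(M, all_matchings):
    """Smallest |S| with S a subset of M contained in no other perfect matching."""
    others = [N for N in all_matchings if N != M]
    for k in range(len(M) + 1):
        for S in combinations(M, k):
            if not any(set(S) <= N for N in others):
                return k
    return len(M)

# C4 = K_{2,2}: two perfect matchings, each forced by any single edge,
# consistent with f(G) = F(G) for complete bipartite graphs
edges = [frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]]
pms = list(perfect_matchings(range(4), edges))
forcing = [forcing_number(M, pms) for M in pms]
```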
Quasi-maximum Likelihood Inference for Linear Double Autoregressive Models
This paper investigates the quasi-maximum likelihood inference including
estimation, model selection and diagnostic checking for linear double
autoregressive (DAR) models, where all asymptotic properties are established
under only fractional moment of the observed process. We propose a Gaussian
quasi-maximum likelihood estimator (G-QMLE) and an exponential quasi-maximum
likelihood estimator (E-QMLE) for the linear DAR model, and establish the
consistency and asymptotic normality for both estimators. Based on the G-QMLE
and E-QMLE, two Bayesian information criteria are proposed for model selection,
and two mixed portmanteau tests are constructed to check the adequacy of fitted
models. Moreover, we compare the proposed G-QMLE and E-QMLE with the existing
doubly weighted quantile regression estimator in terms of the asymptotic
efficiency and numerical performance. Simulation studies illustrate the
finite-sample performance of the proposed inference tools, and a real example
on the Bitcoin return series shows the usefulness of the proposed inference
tools.
Comment: 8 tables and 8 figures
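A linear DAR(1) process of the kind studied here is commonly written y_t = φ y_{t-1} + η_t (ω + α|y_{t-1}|) with i.i.d. noise η_t, and the Gaussian QMLE minimizes the corresponding quasi-log-likelihood. A simulation sketch under that assumed form (parameter values are illustrative, and this is not the paper's implementation):

```python
import numpy as np

def simulate_linear_dar1(n, phi, omega, alpha, seed=0):
    """Simulate y_t = phi*y_{t-1} + eta_t*(omega + alpha*|y_{t-1}|),
    with eta_t iid standard normal (the assumed linear DAR(1) form)."""
    rng = np.random.default_rng(seed)
    eta = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + eta[t] * (omega + alpha * abs(y[t - 1]))
    return y

def gaussian_qmle_negloglik(theta, y):
    """Negative Gaussian quasi-log-likelihood (additive constants dropped);
    the G-QMLE is the minimizer of this criterion over theta."""
    phi, omega, alpha = theta
    scale = omega + alpha * np.abs(y[:-1])      # conditional scale
    resid = y[1:] - phi * y[:-1]                # conditional mean residual
    return np.sum(np.log(scale) + 0.5 * resid**2 / scale**2)

y = simulate_linear_dar1(1000, phi=0.2, omega=0.5, alpha=0.3)
nll_true = gaussian_qmle_negloglik((0.2, 0.5, 0.3), y)
nll_bad = gaussian_qmle_negloglik((0.2, 1.0, 0.6), y)   # scale misspecified
```

With a long enough sample, the criterion evaluated at the true parameters is smaller than at a clearly misspecified point, which is the intuition behind consistency of the QMLE.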
Surface wrinkling of a hyperelastic half-space coated by a liquid crystal elastomer film
We consider the stability of a hyperelastic substrate coated by a liquid crystal elastomer film and
subjected to compressive forces. In this problem, the liquid crystal elastomer directors are free to
evolve and this possible variation needs to be included in the stability analysis. We consider the
case where the initial directors are aligned either in the horizontal or in the vertical direction and
obtain an exact bifurcation condition for surface wrinkling. We show that director reorientation
increases both the critical compressive strain and the critical wavenumber, hence stabilizing the
material. In the small wavenumber limit we carry out an asymptotic analysis and obtain analytical
solutions for the critical stretch and the critical wavenumber, which can be useful in applications.
Hybrid Augmented Automated Graph Contrastive Learning
Graph augmentations are essential for graph contrastive learning. Most
existing works use pre-defined random augmentations, which are usually unable
to adapt to different input graphs and fail to consider the impact of different
nodes and edges on graph semantics. To address this issue, we propose a
framework called Hybrid Augmented Automated Graph Contrastive Learning (HAGCL).
HAGCL consists of a feature-level learnable view generator and an edge-level
learnable view generator. The view generators are end-to-end differentiable to
learn the probability distribution of views conditioned on the input graph. This ensures that the most semantically meaningful structures are learned in terms of features and topology, respectively. Furthermore, we propose an improved joint training strategy, which achieves better results than previous works without resorting to any weak label information from the downstream tasks or extensive additional evaluation work.
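Learnable edge-level view generators of the kind described are typically parameterized with a relaxed Bernoulli (Gumbel-sigmoid) sample per edge, so the generated view stays differentiable with respect to the generator's parameters. A minimal numpy sketch; HAGCL's actual architecture is not specified here, so all names and details are assumptions:

```python
import numpy as np

def gumbel_sigmoid(logits, tau=0.5, rng=None):
    """Differentiable relaxed Bernoulli sample per edge (binary concrete
    distribution): as tau -> 0 the weights approach hard 0/1 keep/drop."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(1e-9, 1 - 1e-9, size=np.shape(logits))
    g = np.log(u) - np.log(1.0 - u)                 # logistic noise
    return 1.0 / (1.0 + np.exp(-(np.asarray(logits) + g) / tau))

# In a full model, edge_logits would come from a learnable network over
# the endpoint features of each edge; here they are fixed toy values.
edge_index = np.array([[0, 1], [1, 2], [2, 0]])     # toy 3-edge graph
edge_logits = np.array([2.0, -2.0, 0.0])            # assumed generator output
edge_weights = gumbel_sigmoid(edge_logits)          # soft keep-probabilities
```

The contrastive encoder then consumes the reweighted edges, and gradients flow through `edge_weights` back into the view generator.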
- …