Structural studies on the interactions of a P2N tridentate ligand with copper(I), silver(I) and sulfur : a dissertation presented in partial fulfilment of the degree of Master of Philosophy at Massey University
This thesis presents a study of the coordination chemistry, chemical reactivity, spectroscopy, structure and bonding of the hybrid polydentate ligand 2-(diphenylphosphino)-N-[2-(diphenylphosphino)benzylidene]benzeneamine (PNCP) with copper(I), silver(I) and sulfur. The PNCP ligand contains two inequivalent (soft) phosphorus donor atoms and one (hard) nitrogen donor atom. Chapter One is a brief overview of tertiary phosphines used as monodentate, bidentate, tridentate and polydentate ligands with transition metals. In Chapter Two, the preparation, structure and characterisation of PNCP are described. Reactions of PNCP with sulfur have been investigated, and a small site selectivity for one of the P atoms was noted. Experiments also included the selective synthesis of the unsymmetrical mono-sulfide tertiary phosphine ligands SPNCP and PNCPS and of the di-sulfide ligand SPNCPS, as well as a study of the molecular structure of the three-coordinate complex [Cu(SPNCPS)]ClO₄. In Chapter Three, the preparation of a series of copper(I) complexes of the general formulae [Cu(PNCP)ClO₄] and [Cu(PNCP)L]ClO₄ (L = ligands containing S or N donor atoms) is reported. The crystal structure of [Cu(PNCP)ClO₄] has been determined and shows that PNCP acts as a tridentate ligand coordinated to copper(I) via two phosphorus and one nitrogen donor atoms. The copper(I) atom has a distorted tetrahedral environment with two short Cu–P bonds and a slightly longer Cu–N bond. In Chapter Four, studies on the preparation of the mononuclear complex [Ag(PNCP)ClO₄] and the dinuclear complex [Ag(PNCP)(SCN)]₂ are presented. Both complexes were characterised by a variety of physicochemical techniques. The tridentate behaviour of PNCP in [Ag(PNCP)ClO₄] was established, but the Ag–N bond was long and weak. In [Ag(PNCP)(SCN)]₂ no Ag–N bond exists, and PNCP acts as a bidentate ligand.
Estimation of Copula-Based Semiparametric Time Series Models
This paper studies the estimation of a class of copula-based semiparametric stationary Markov models. These models are characterized by nonparametric invariant (or marginal) distributions and parametric copula functions that capture the temporal dependence of the processes; the implied transition distributions are all semiparametric. Models in this class are easy to simulate and can be expressed as semiparametric regression transformation models. One advantage of this copula approach is that it separates the temporal dependence (such as tail dependence) from the marginal behavior (such as fat-tailedness) of a time series. We present conditions under which processes generated by models in this class are β-mixing; naturally, these conditions depend only on the copula specification. Simple estimators of the marginal distribution and the copula parameter are provided, and their asymptotic properties are established under easily verifiable conditions. Estimators of important features of the transition distribution, such as the (nonlinear) conditional moments and conditional quantiles, are easily obtained from estimators of the marginal distribution and the copula parameter; their consistency and asymptotic normality can be obtained using the Delta method. In addition, the semiparametric conditional quantile estimators are automatically monotonic across quantiles.
Keywords: Copula; Nonlinear Markov models; Semiparametric estimation; Conditional quantile
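The two-step procedure described above can be sketched as follows: rank-transform the series with the rescaled empirical CDF, then maximize the copula pseudo-likelihood over consecutive pairs (U_{t-1}, U_t). The Clayton copula family and the AR(1) example series below are illustrative assumptions for the sketch, not choices made in the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def clayton_loglik(theta, u, v):
    """Log-likelihood of the Clayton copula density at pairs (u, v)."""
    s = u ** (-theta) + v ** (-theta) - 1.0
    return np.sum(np.log(1.0 + theta)
                  - (theta + 1.0) * (np.log(u) + np.log(v))
                  - (2.0 + 1.0 / theta) * np.log(s))

def fit_markov_copula(x):
    """Two-step estimation: rescaled empirical CDF for the marginal,
    then pseudo-MLE of theta over consecutive pairs of the chain."""
    n = len(x)
    u = rankdata(x) / (n + 1.0)           # step 1: nonparametric marginal
    u_lag, u_now = u[:-1], u[1:]          # consecutive pairs (U_{t-1}, U_t)
    res = minimize_scalar(lambda t: -clayton_loglik(t, u_lag, u_now),
                          bounds=(0.05, 20.0), method="bounded")
    return res.x                          # step 2: copula parameter estimate

rng = np.random.default_rng(0)
# A positively dependent stationary series (AR(1)) as a stand-in example
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
theta_hat = fit_markov_copula(x)
```

Because the marginal is estimated by ranks, the copula step is invariant to monotone transformations of the data, which is exactly the separation of dependence from marginal behavior that the abstract highlights.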
Estimation and Model Selection of Semiparametric Multivariate Survival Functions under General Censorship
Many models of semiparametric multivariate survival functions are characterized by nonparametric marginal survival functions and parametric copula functions, where different copulas imply different dependence structures. This paper considers estimation and model selection for these semiparametric multivariate survival functions, allowing for misspecified parametric copulas and data subject to general censoring. We first establish convergence of the two-step estimator of the copula parameter to the pseudo-true value, defined as the value of the parameter that minimizes the KLIC between the parametric-copula-induced multivariate density and the unknown true density. We then derive its root-n asymptotically normal distribution and provide a simple consistent asymptotic variance estimator that accounts for the impact of the nonparametric estimation of the marginal survival functions. These results are used to establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application of the model selection test to the Loss-ALAE insurance data set is provided.
Keywords: Multivariate survival models; Misspecified copulas; Penalized pseudo-likelihood ratio; Fixed or random censoring; Kaplan-Meier estimator
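The nonparametric marginal survival functions in this setting are typically estimated from right-censored data by the Kaplan-Meier estimator mentioned in the keywords. A minimal sketch of that estimator:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    events = 1 for an observed event, 0 for a right-censored observation.
    Returns a list of (event_time, S(t)) pairs."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    uniq = np.unique(times[events == 1])  # distinct observed event times
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(times >= t)                  # still under observation
        d = np.sum((times == t) & (events == 1))      # events at time t
        s *= 1.0 - d / at_risk                        # product-limit update
        surv.append((t, s))
    return surv

# With no censoring, KM reduces to the empirical survival function
km = kaplan_meier([1, 2, 3, 4], [1, 1, 1, 1])
```

In the two-step estimator above, each margin would first be estimated this way and then plugged into the copula pseudo-likelihood.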
PRISTA-Net: Deep Iterative Shrinkage Thresholding Network for Coded Diffraction Patterns Phase Retrieval
Phase retrieval (PR) involves recovering an unknown image from limited amplitude measurement data and is a challenging nonlinear inverse problem in computational imaging and image processing. However, many PR methods are based either on black-box network models that lack interpretability, or on plug-and-play (PnP) frameworks that are computationally complex and require careful parameter tuning. To address this, we have developed PRISTA-Net, a deep unfolding network (DUN) based on the first-order iterative shrinkage-thresholding algorithm (ISTA). This network uses a learnable nonlinear transformation to solve the proximal-point mapping sub-problem associated with the sparse prior, and an attention mechanism to focus on phase information containing image edges, textures, and structures. Additionally, the fast Fourier transform (FFT) is used to learn global features that enhance local information, and the designed logarithmic loss function yields significant improvements when the noise level is low. All parameters in the proposed PRISTA-Net framework, including the nonlinear transformation, threshold parameters, and step size, are learned end-to-end instead of being set manually. The method combines the interpretability of traditional methods with the fast inference of deep learning, and it handles noise at each iteration of the unfolding stage, improving recovery quality. Experiments on Coded Diffraction Patterns (CDPs) measurements demonstrate that our approach outperforms existing state-of-the-art methods in both qualitative and quantitative evaluations. Our source codes are available at \emph{https://github.com/liuaxou/PRISTA-Net}.
Comment: 12 pages
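The ISTA iteration that PRISTA-Net unfolds alternates a gradient step on the data-fidelity term with soft thresholding, the proximal operator of the ℓ1 prior. A minimal classical-ISTA sketch on a generic sparse linear inverse problem (the random measurement setup below is illustrative, not the CDP phase-retrieval model):

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, n_iter=2000):
    """ISTA for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200)) / np.sqrt(80)   # underdetermined system
x_true = np.zeros(200)
x_true[[5, 40, 123]] = [2.0, -1.5, 3.0]            # 3-sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
```

A deep unfolding network such as PRISTA-Net replaces the fixed soft-threshold and step size with learnable modules, one per unrolled iteration.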
Laboratory measurements and model sensitivity studies of dust deposition ice nucleation
We investigated the ice nucleating properties of mineral dust particles to understand the sensitivity of simulated cloud properties to two different representations of contact angle in Classical Nucleation Theory (CNT). These contact angle representations are based on two sets of laboratory deposition ice nucleation measurements: Arizona Test Dust (ATD) particles of 100, 300 and 500 nm sizes were tested at three different temperatures (−25, −30 and −35 °C), and 400 nm ATD and kaolinite dust species were tested at two different temperatures (−30 and −35 °C). These measurements were used to derive the onset relative humidity with respect to ice (RH_ice) required to activate 1% of dust particles as ice nuclei, from which the onset single contact angles were then calculated based on CNT. For the probability density function (PDF) representation, parameters of the log-normal contact angle distribution were determined by fitting the CNT-predicted activated fraction to the measurements at different RH_ice. Results show that onset single contact angles vary from ~18 to 24 degrees, while the PDF parameters are sensitive to the measurement conditions (i.e. temperature and dust size). Cloud modeling simulations were performed to understand the sensitivity of cloud properties (i.e. ice number concentration, ice water content, and cloud initiation times) to the contact angle representation and the PDF parameters. The model simulations show that cloud properties are sensitive to both. The comparison of our experimental results with other studies shows that under similar measurement conditions the onset single contact angles are consistent within ±2.0 degrees, while our derived PDF parameters show larger discrepancies.
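In CNT, the contact angle θ enters the heterogeneous nucleation barrier through the geometric compatibility factor f(θ) = (2 + cos θ)(1 − cos θ)² / 4, which multiplies the homogeneous Gibbs energy barrier; smaller onset angles therefore mean more efficient ice nuclei. A minimal sketch of this factor over the onset-angle range reported above (the study's full CNT parameterization may differ):

```python
import math

def compatibility_factor(theta_deg):
    """CNT geometric factor f(theta) = (2 + cos t)(1 - cos t)^2 / 4.
    f -> 0 as theta -> 0 (perfect nucleator) and f = 1 at theta = 180 deg
    (no reduction relative to homogeneous nucleation)."""
    t = math.radians(theta_deg)
    c = math.cos(t)
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

# Onset single contact angles reported above fall around 18-24 degrees
f18 = compatibility_factor(18.0)
f24 = compatibility_factor(24.0)
```

Because f is steeply increasing at small angles, a shift of a few degrees in the onset contact angle translates into a large change in the nucleation barrier, which is consistent with the modeled cloud properties being sensitive to this parameter.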
Nest-DGIL: Nesterov-optimized Deep Geometric Incremental Learning for CS Image Reconstruction
Proximal gradient-based optimization is one of the most common strategies for solving image inverse problems, and it is easy to implement. However, these techniques often generate heavy artifacts in image reconstruction. One of the most popular refinement methods is to fine-tune the regularization parameter to alleviate such artifacts, but this may not always be sufficient or applicable because of the increased computational cost. In this work, we propose a deep geometric incremental learning framework based on the second Nesterov proximal gradient optimization. The proposed end-to-end network not only has a powerful learning ability for high- and low-frequency image features, but can also theoretically guarantee that geometric texture details are reconstructed from the preliminary linear reconstruction. Furthermore, it avoids the risk of intermediate reconstruction results falling outside the geometric decomposition domains and achieves fast convergence. Our reconstruction framework is decomposed into four modules: general linear reconstruction, cascade geometric incremental restoration, Nesterov acceleration, and post-processing. In the image restoration step, a cascade geometric incremental learning module is designed to compensate for the texture information missing from the different geometric spectral decomposition domains. Inspired by the overlap-tile strategy, we also develop a post-processing module to remove the block effect in patch-wise natural image reconstruction. All parameters in the proposed model are learnable; an adaptive initialization technique for the physical parameters is also employed to keep the model flexible and ensure smooth convergence. We compare the reconstruction performance of the proposed method with existing state-of-the-art methods to demonstrate its superiority. Our source codes are available at https://github.com/fanxiaohong/Nest-DGIL.
Comment: 15 pages
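Nesterov acceleration augments the basic proximal gradient step with a momentum extrapolation between iterates; for an ℓ1-regularized problem this is the classical FISTA scheme, sketched below as a generic illustration (not the Nest-DGIL network itself, whose proximal step is learned):

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(A, b, lam, n_iter=500):
    """Nesterov-accelerated proximal gradient (FISTA) for
    min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[[10, 60, 150]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = fista(A, b, lam=0.01)
```

The momentum step is what improves the worst-case convergence rate from O(1/k) for plain proximal gradient to O(1/k²), which is the "fast convergence" property the abstract refers to.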
A Multi-scale Generalized Shrinkage Threshold Network for Image Blind Deblurring in Remote Sensing
Remote sensing images are essential for many earth-science applications, but their quality can be degraded by limitations in sensor technology and complex imaging environments. To address this, various remote sensing image deblurring methods have been developed to restore sharp, high-quality images from degraded observational data. However, most traditional model-based deblurring methods require predefined hand-crafted prior assumptions, which are difficult to design for complex applications, while most deep learning-based deblurring methods are designed as black boxes, lacking transparency and interpretability. In this work, we propose a novel blind deblurring learning framework based on alternating iterations of shrinkage thresholds, alternately updating the blur kernel and the image, with a theoretical foundation for the network design. We propose a learnable blur kernel proximal mapping module to improve the blur kernel estimate in the kernel domain. We then propose a deep proximal mapping module in the image domain that combines a generalized shrinkage threshold operator with a multi-scale prior feature extraction block. This module also introduces an attention mechanism to adaptively adjust the importance of the priors, thus avoiding the drawbacks of hand-crafted image prior terms. The resulting multi-scale generalized shrinkage threshold network (MGSTNet) is designed to focus specifically on learning deep geometric prior features to enhance image restoration. Experiments demonstrate the superiority of our MGSTNet framework on remote sensing image datasets compared to existing deblurring methods.
Comment: 12 pages
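A generalized shrinkage threshold operator extends ordinary soft thresholding beyond the ℓ1 prior; one common form is the p-shrinkage operator sketched below, which shrinks small coefficients more aggressively for p < 1 (this is an illustrative choice, not necessarily the exact operator learned by MGSTNet):

```python
import numpy as np

def generalized_shrink(x, tau, p=0.5):
    """p-shrinkage: sign(x) * max(|x| - tau * |x|**(p - 1), 0).
    For p = 1 this reduces to ordinary soft thresholding."""
    ax = np.abs(x)
    safe = np.where(ax > 0, ax, 1.0)   # avoid 0 ** (p - 1) at zero entries
    shrunk = np.maximum(ax - tau * safe ** (p - 1.0), 0.0)
    return np.sign(x) * np.where(ax > 0, shrunk, 0.0)

# Large coefficients are barely shrunk, small ones are zeroed outright
out = generalized_shrink(np.array([3.0, 0.05, -2.0]), tau=0.5, p=0.5)
```

Sparsity-inducing operators of this kind preserve strong edges and textures (large coefficients) while suppressing noise-like small coefficients, which is why they are attractive for blind deblurring.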
Automated and Context-Aware Repair of Color-Related Accessibility Issues for Android Apps
Approximately 15% of the world's population lives with some form of disability or impairment. However, many mobile UX designers and developers disregard the significance of accessibility for people with disabilities when developing apps. A large number of studies have been conducted, and some effective detection tools proposed, to mitigate this problem; compared with detection, however, repair work clearly lags behind. This is especially true for color-related accessibility issues, which are among the most frequent issues in apps and have a strongly negative impact on vision and user experience. Apps with such issues are difficult to use for people with low vision and for the elderly, and this issue type cannot be directly fixed by existing repair techniques. To this end, we propose Iris, an automated and context-aware repair method that fixes color-related accessibility issues (i.e., text contrast issues and image contrast issues) in apps. By leveraging a novel context-aware technique that resolves optimal colors and a vital attribute-to-repair localization phase, Iris not only repairs color contrast issues but also preserves the consistency of the design style between the original and repaired UI pages. Our experiments showed that Iris achieves a 91.38% repair success rate with high effectiveness and efficiency. The usefulness of Iris has also been confirmed by a user study with a high satisfaction rate, as well as by developers' positive feedback: 9 of 40 pull requests submitted to GitHub repositories have been accepted and merged into the projects by app developers, and another 4 developers are actively discussing further repairs with us. Iris is publicly available to facilitate this new research direction.
Comment: 11 pages plus 2 additional pages for references
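Text contrast issues of the kind Iris targets are conventionally measured by the WCAG contrast ratio between foreground and background colors. A minimal sketch of that standard check (the WCAG formula itself is standard; whether Iris uses exactly this criterion is an assumption):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color (0-255 channels)."""
    def lin(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, ranging from 1:1 to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background gives the maximum ratio of 21:1
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
passes_aa = ratio >= 4.5   # WCAG AA threshold for normal-size text
```

A contrast-repair tool then searches for replacement colors that push this ratio above the threshold while staying close to the original hue, which is the design-consistency constraint the abstract emphasizes.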