A GCV based Arnoldi-Tikhonov regularization method
For the solution of linear discrete ill-posed problems, in this paper we
consider the Arnoldi-Tikhonov method coupled with Generalized Cross
Validation (GCV) for computing the regularization parameter at each
iteration. We study the convergence behavior of the Arnoldi method and its
properties for the approximation of the (generalized) singular values, under
the hypothesis that the Picard condition is satisfied. Numerical experiments on
classical test problems and on image restoration are presented.
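As a concrete illustration of the scheme described above, the following is a minimal numpy sketch (not the authors' implementation): the Arnoldi process projects the problem onto a Krylov subspace, and at each step the regularization parameter is chosen by minimizing the GCV function of the projected Tikhonov problem over a simple grid. Function names and the lambda grid are illustrative.

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi process: A V_m = V_{m+1} H_m, with H_m of size (m+1) x m."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                 # lucky breakdown
            return V[:, :j + 2], H[:j + 2, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def gcv_tikhonov(H, beta, lambdas=np.logspace(-10, 2, 300)):
    """Minimize the GCV function of the projected problem
    min_y ||H y - beta*e1||^2 + lam^2 ||y||^2 over the grid `lambdas`."""
    U, s, Vt = np.linalg.svd(H)                 # H: (m+1) x m, s: m entries
    c = beta * U[0, :]                          # U^T (beta * e1)
    mp1 = H.shape[0]
    def gcv(lam):
        phi = s**2 / (s**2 + lam**2)            # Tikhonov filter factors
        resid2 = np.sum(((1 - phi) * c[:-1])**2) + c[-1]**2
        return resid2 / (mp1 - phi.sum())**2
    lam = min(lambdas, key=gcv)
    y = Vt.T @ (s / (s**2 + lam**2) * c[:-1])   # projected Tikhonov solution
    return lam, y
```

At iteration m, the regularized iterate is recovered from the projected solution as `x_m = V[:, :m] @ y`.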
Embedded techniques for choosing the parameter in Tikhonov regularization
This paper introduces a new strategy for setting the regularization parameter
when solving large-scale discrete ill-posed linear problems by means of the
Arnoldi-Tikhonov method. The new rule is essentially based on the discrepancy
principle, although no initial knowledge of the norm of the error affecting
the right-hand side is assumed; an increasingly accurate approximation of
this quantity is recovered during the Arnoldi algorithm. Some theoretical
estimates are derived to motivate the approach. Numerical experiments on
classical test problems as well as on image deblurring are presented.
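For context, the classical discrepancy principle on which this rule builds selects the parameter so that the residual matches the noise level; in the embedded variant sketched above, the unknown noise norm is replaced by an estimate refined during the Arnoldi iterations (the specific update is the paper's contribution and is not stated here). In standard notation, with safety factor eta > 1 and noise-level estimate delta:

```latex
x_\lambda = \arg\min_x \,\|Ax-b\|_2^2 + \lambda^2\|x\|_2^2,
\qquad \text{choose } \lambda \text{ such that } \|Ax_\lambda - b\|_2 = \eta\,\delta .
```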
A deep representation for depth images from synthetic data
Convolutional Neural Networks (CNNs) trained on large-scale RGB databases
have become the secret sauce in the majority of recent approaches for object
categorization from RGB-D data. Thanks to colorization techniques, these
methods exploit the filters learned from 2D images to extract meaningful
representations in 2.5D. Still, the perceptual signatures of these two kinds of
images are very different, with the first usually strongly characterized by
textures and the second mostly by silhouettes of objects. Ideally, one would
like to have two CNNs, one for RGB and one for depth, each trained on a
suitable data collection and able to capture the perceptual properties of each
channel for the task at hand. This has not been possible so far, due to the
lack of a suitable depth database. This paper addresses this issue, proposing
to opt for synthetically generated images rather than collecting a large-scale
2.5D database by hand. While clearly a proxy for real data, synthetic
images allow one to trade quality for quantity, making it possible to generate a
virtually infinite amount of data. We show that training the very same
architecture typically used on visual data on such a collection yields very
different filters, resulting in depth features that (a) better characterize the
different facets of depth images and (b) are complementary to those derived
from CNNs pre-trained on 2D datasets. Experiments
on two publicly available databases show the power of our approach.
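A minimal PyTorch sketch of the two-stream idea, under the assumption that depth maps are rendered as 3-channel images: one backbone keeps ImageNet (RGB) weights, the other would load weights trained on a synthetic depth collection, and their pooled features are concatenated. Module names, the checkpoint, and shapes are illustrative, not the authors' code.

```python
import torch
import torchvision.models as models

# One backbone per modality; the depth weights are a placeholder for a network
# trained on a synthetic depth collection (hypothetical checkpoint).
rgb_net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
depth_net = models.resnet18(weights=None)   # depth_net.load_state_dict(...) in practice

def pooled_features(net, x):
    # Drop the final fully connected layer; keep the 512-d global-average feature.
    backbone = torch.nn.Sequential(*list(net.children())[:-1])
    return backbone(x).flatten(1)

rgb = torch.randn(4, 3, 224, 224)           # RGB batch
depth = torch.randn(4, 3, 224, 224)         # colorized depth batch
fused = torch.cat([pooled_features(rgb_net, rgb),
                   pooled_features(depth_net, depth)], dim=1)   # (4, 1024)
```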
From source to target and back: symmetric bi-directional adaptive GAN
The effectiveness of generative adversarial approaches in producing images
according to a specific style or visual domain has recently opened new
directions for solving the unsupervised domain adaptation problem. It has been
shown that labeled source images can be modified to mimic target samples,
making it possible to train a classifier directly in the target domain, despite
the original lack of annotated data. Inverse mappings from the target to the
source domain have also been evaluated, but only through adapted feature
spaces, thus without new image generation. In this paper we propose to better
exploit the potential of generative adversarial networks for adaptation by
introducing a novel symmetric mapping between domains. We jointly optimize
bi-directional image transformations, combining them with target self-labeling.
Moreover, we define a new class consistency loss that aligns the generators in
the two directions by requiring that an image passing through both domain
mappings conserve its class identity. A detailed qualitative and quantitative
analysis of the reconstructed images confirms the power of our approach. By
integrating the two domain-specific classifiers obtained with our
bi-directional network, we exceed previous state-of-the-art unsupervised
adaptation results on four different benchmark datasets.
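As a hedged sketch of the class consistency idea (our names, not the paper's code): an image mapped source-to-target and back to the source should keep its label under a source-domain classifier.

```python
import torch.nn.functional as F

def class_consistency_loss(G_st, G_ts, clf_src, x_src, y_src):
    """Penalize label changes along the source -> target -> source round trip.
    G_st, G_ts are the two generators, clf_src a source-domain classifier."""
    x_fake_tgt = G_st(x_src)        # source image rendered in the target style
    x_back_src = G_ts(x_fake_tgt)   # mapped back to the source domain
    return F.cross_entropy(clf_src(x_back_src), y_src)
```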
Alien Registration- Russo, Maria (Portland, Cumberland County)
https://digitalmaine.com/alien_docs/25377/thumbnail.jp
How toxic are gold nanoparticles? The state-of-the-art.
With the growing interest in biotechnological applications of gold nanoparticles and the effects they exert on the body, their possible toxicity is becoming an increasingly important issue. Numerous investigations carried out in recent years, under different experimental conditions and following different protocols, have produced partly conflicting results, which have led to differing views about the actual safety of gold nanoparticles in human applications.
This work is intended to provide an overview of the most recent experimental results and to summarize the current state of the art. Rather than presenting a comprehensive review of the vast literature in this field, we have selected representative examples of both in vivo and in vitro investigations, with the aim of offering a scenario from which the need for urgent standardization of experimental protocols clearly emerges. To date, despite the great potential, the safety of gold nanoparticles remains highly controversial, and important concerns have been raised that need to be properly addressed. Factors such as shape, size, surface charge, surface coating, and surface functionalization are expected to influence interactions with biological systems to different extents and with different outcomes, as far as the potential of gold nanoparticles in biomedical applications is concerned. Moreover, despite continuous attempts to establish a correlation between structure and interactions with biological systems, we are still far from assessing the toxicological profile of gold nanoparticles in an unquestionable manner. This review is intended to contribute in this direction, offering suggestions for systematizing data over the most relevant physico-chemical parameters that govern and control toxicity at different cellular and organismal levels.
Chemiresistive polyaniline-based gas sensors: a mini review
This review focuses on some recent advances in the field of gas sensors based on polyaniline (PANI), a conducting polymer with excellent electronic conductivity and electrochemical properties. Conducting polymers represent an important class of organic materials whose electrical resistivity responds to external stimuli. Among them, PANI has attracted wide interest because of its versatility, combined with ease of synthesis, high yield, and good environmental stability, together with a favorable response to guest molecules at room temperature. Moreover, PANI can be shaped into various structures with different morphologies, and the possibility of obtaining nanofibers, in addition to thin films, has enabled the rapid development of ultrasensitive chemical sensors with improved processability and functionality. This review provides a brief description of the current status of chemiresistive gas sensors based on polyaniline and highlights the properties of these devices across a diverse range of applications.
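For readers new to chemiresistive sensing, the quantity typically reported is the relative resistance change upon analyte exposure; one common convention (conventions vary across the studies reviewed) is

```latex
S(\%) = \frac{R_g - R_0}{R_0} \times 100 ,
```

where R_0 is the baseline resistance in air and R_g the resistance in the presence of the target gas.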
Our products are safe (don't tell anyone!). Why don't supermarkets advertise their private food safety standards?
Large retail chains have spent considerable resources to promote production protocols and traceability across the supply chain, aiming to increase food safety. Yet the majority of consumers are unaware of these private food safety standards (PFSS), and retailers are not informing them. This behavior denotes a pooling paradox: supermarkets spend large amounts of money on food safety and yet neglect to inform consumers. The result is a pooling equilibrium where consumers cannot discriminate between high-quality and low-quality products, and supermarkets give up the potential price premium. This paper provides an economic explanation for the paradox using a contract-theory model. We find that PFSS implementation may be rational even if consumers have no willingness to pay for safety, because the standard can be used as a tool to resolve asymmetric information along the supply chain. Using the PFSS, supermarkets can achieve a separating equilibrium in which opportunistic suppliers have no incentive to accept the contract. Even if consumers exhibit a limited (but strictly positive) willingness to pay for safety, advertising may be profit-reducing. If the expected price margin is high enough, supermarkets have an incentive to supply both certified and uncertified products. In this case, we show that, if consumers perceive undifferentiated products as 'reasonably safe', supermarkets may maximize profits by pooling the goods and selling them as undifferentiated. This result is not driven by advertising costs, as we derive it assuming free advertising.
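A minimal formal illustration of the separating mechanism, in our own notation rather than the paper's: suppose a compliant supplier bears certification cost c_H and an opportunistic one a higher cost c_L > c_H (for instance because cheating on the standard is detected and sanctioned). A contract paying premium p screens out opportunists whenever

```latex
p - c_H \ge 0 > p - c_L \quad\Longleftrightarrow\quad c_H \le p < c_L ,
```

so only compliant suppliers accept the contract and a separating equilibrium obtains.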