Generative Models as Distributions of Functions
Generative models are typically trained on grid-like data such as images. As
a result, the size of these models usually scales directly with the underlying
grid resolution. In this paper, we abandon discretized grids and instead
parameterize individual data points by continuous functions. We then build
generative models by learning distributions over such functions. By treating
data points as functions, we can abstract away from the specific type of data
we train on and construct models that are agnostic to discretization. To train
our model, we use an adversarial approach with a discriminator that acts on
continuous signals. Through experiments on a wide variety of data modalities
including images, 3D shapes and climate data, we demonstrate that our model can
learn rich distributions of functions independently of data type and
resolution.
Comment: Added experiments for learning distributions of functions on
manifolds. Added more 3D experiments and comparisons to baselines.
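The core idea, representing a data point as a function of continuous coordinates rather than as a fixed grid, can be sketched as follows. This is a toy illustration only: a hand-written analytic function stands in for a learned implicit representation, and all names are hypothetical.

```python
import numpy as np

def sample_signal(fn, resolution):
    """Evaluate a continuous signal fn on a uniform grid over [0, 1]^2.

    Because the data point is a function of continuous coordinates,
    the same representation can be rendered at any resolution.
    """
    xs = np.linspace(0.0, 1.0, resolution)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    return fn(X, Y)

# A toy "data point" expressed as a continuous function of (x, y):
# a Gaussian bump centered at (0.5, 0.5).
blob = lambda x, y: np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)

low = sample_signal(blob, 16)    # coarse rendering
high = sample_signal(blob, 256)  # fine rendering of the *same* function
```

A generative model in this spirit would learn a distribution over such functions, so that sampling yields a function that can then be rendered at whatever discretization the downstream task needs.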
Integrating electromagnetic and hydrodynamic models for the characterization of radar targets in marine environment
The paper describes a simulation methodology for radar targets in a marine environment. Our approach is based on a
new model, called "Scattering Center Set Unified Representation", which approximates the backscattered radar
target echo for any aspect angle. This model is fast to compute and has the advantage of taking into account the partial
or total concealment of the target by sea waves. It associates a scattering center set with each target aspect. For each
scattering center, an amplitude map accounts for its anisotropy and geometrical visibility as a function of viewing angle.
A combined sea-ship virtual model is then used to describe the target motion and its concealment by the sea waves. The
influence of sea clutter is also taken into account. The radar signatures used in our simulations were measured in the
anechoic chamber of ENSIETA for four scale-reduced naval targets. The paper also presents some radar imagery and
classification results, which illustrate the inverse side of the problem of characterizing naval targets in their environment.
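A scattering-center model of this general kind can be sketched as a coherent sum of point responses whose amplitudes depend on the aspect angle. The sketch below is a minimal illustration under stated assumptions: the function names and the particular amplitude map are invented for the example and are not the EPB-RU model itself.

```python
import numpy as np

def backscattered_echo(positions, amplitudes, aspect_deg, wavelength):
    """Toy scattering-center echo: coherent sum of point responses.

    positions  : (N, 2) scatterer coordinates in meters
    amplitudes : callable mapping aspect angle (deg) to an (N,) array,
                 standing in for a per-scatterer amplitude map that
                 encodes anisotropy and visibility (0 = concealed)
    """
    theta = np.deg2rad(aspect_deg)
    los = np.array([np.cos(theta), np.sin(theta)])  # line-of-sight unit vector
    ranges = positions @ los                        # projected down-range distances
    k = 2.0 * np.pi / wavelength
    # Two-way phase delay per scatterer, summed coherently.
    return np.sum(amplitudes(aspect_deg) * np.exp(-2j * k * ranges))

pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
# Hypothetical amplitude map: the third center is masked beyond 45 degrees.
amps = lambda a: np.array([1.0, 0.5, 0.0 if a > 45 else 0.8])
echo = backscattered_echo(pos, amps, aspect_deg=30.0, wavelength=0.03)
```

Concealment by sea waves would enter such a sketch through the amplitude map: a scatterer hidden behind a wave crest for a given aspect simply contributes zero amplitude at that aspect.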
Optical computed tomography for spatially isotropic four-dimensional imaging of live single cells
Quantitative three-dimensional (3D) computed tomography (CT) imaging of living single cells enables orientation-independent morphometric analysis of the intricacies of cellular physiology. Since its invention, x-ray CT has become indispensable in the clinic for diagnostic and prognostic purposes, owing to its quantitative absorption-based imaging in true 3D that allows objects of interest to be viewed and measured from any orientation. However, x-ray CT has not been useful at the level of single cells because there is insufficient contrast to form an image. Recently, optical CT has been developed successfully for fixed cells, but this technology, called Cell-CT, is incompatible with live-cell imaging due to its use of stains, such as hematoxylin, that are not compatible with cell viability. We present a novel development of optical CT for quantitative, multispectral, functional 4D (three spatial + one spectral dimension) imaging of living single cells. Applied to immune system cells, the method offers truly isotropic 3D spatial resolution and enables time-resolved imaging studies of cells suspended in aqueous medium. Using live-cell optical CT, we found a heterogeneous response to mitochondrial fission inhibition in mouse macrophages, and differential basal remodeling of small (0.1 to 1 fl) and large (1 to 20 fl) nuclear and mitochondrial structures on a 20- to 30-s time scale in human myelogenous leukemia cells. Because of its robust 3D measurement capabilities, live-cell optical CT represents a powerful new tool in biomedical research.
Sub-aperture SAR Imaging with Uncertainty Quantification
In the problem of spotlight mode airborne synthetic aperture radar (SAR)
image formation, it is well-known that data collected over a wide azimuthal
angle violate the isotropic scattering property typically assumed. Many
techniques have been proposed to account for this issue, including both
full-aperture and sub-aperture methods based on filtering, regularized least
squares, and Bayesian methods. A full-aperture method that uses a hierarchical
Bayesian prior to incorporate appropriate speckle modeling and reduction was
recently introduced to produce samples of the posterior density rather than a
single image estimate. This uncertainty quantification is more robust than a
point estimate, since a variety of statistics can be generated for the scene.
As proposed, however, the method was ill-suited to large problems because the
sampling was inefficient. Moreover, it was not explicitly designed to mitigate the
effects of the faulty isotropic scattering assumption. In this work we
therefore propose a new sub-aperture SAR imaging method that uses a sparse
Bayesian learning-type algorithm to more efficiently produce approximate
posterior densities for each sub-aperture window. These estimates may be useful
in and of themselves, or when of interest, the statistics from these
distributions can be combined to form a composite image. Furthermore, unlike
the often-employed lp-regularized least squares methods, no user-defined
parameters are required. Application-specific adjustments are made to reduce
the typically burdensome runtime and storage requirements so that appropriately
large images can be generated. Finally, this paper focuses on incorporating
these techniques into the SAR image formation process itself, that is, starting
from the SAR phase history data, so that no additional processing errors are
incurred.
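The sub-aperture workflow, splitting the azimuth pulses into windows, reconstructing each window separately, and then combining the per-window statistics into a composite image, can be sketched roughly as follows. This is a toy stand-in under stated assumptions: a 2-D FFT replaces the sparse-Bayesian-learning reconstruction described in the abstract, and a simple magnitude average replaces the combination of posterior statistics.

```python
import numpy as np

def subaperture_images(phase_history, n_windows):
    """Split azimuth pulses into sub-apertures and image each separately.

    phase_history : (n_pulses, n_range) complex array
    Returns one coarse image per sub-aperture window; in the paper each
    window would instead yield an approximate posterior density whose
    statistics can later be combined.
    """
    windows = np.array_split(phase_history, n_windows, axis=0)
    # Toy imaging operator: a 2-D FFT of each window, standing in for
    # the sparse-Bayesian-learning reconstruction.
    return [np.fft.fft2(w) for w in windows]

def composite(images):
    """Combine per-window estimates, here by averaging magnitudes."""
    mags = [np.abs(im) for im in images]
    # Windows can have unequal pulse counts, so crop to the smallest.
    rows = min(m.shape[0] for m in mags)
    return np.mean([m[:rows] for m in mags], axis=0)
```

The appeal of the per-window view is that each sub-aperture covers a narrow azimuthal extent, over which the isotropic scattering assumption is closer to holding, and the per-window estimates remain available even if no composite is formed.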
Multitemporal and multispectral data fusion for super-resolution of Sentinel-2 images
Multispectral Sentinel-2 images are a valuable source of Earth observation
data; however, the spatial resolution of their spectral bands, limited to 10 m,
20 m, and 60 m ground sampling distance, remains insufficient in many cases. This
problem can be addressed with super-resolution, aimed at reconstructing a
high-resolution image from a low-resolution observation. For Sentinel-2,
spectral information fusion allows for enhancing the 20 m and 60 m bands to the
10 m resolution. There have also been attempts to combine multitemporal stacks
of individual Sentinel-2 bands; however, these two approaches have not been
combined so far. In this paper, we introduce DeepSent, a new deep network for
super-resolving multitemporal series of multispectral Sentinel-2 images. It is
underpinned with information fusion performed simultaneously in the spectral
and temporal dimensions to generate an enlarged multispectral image. In our
extensive experimental study, we demonstrate that our solution outperforms
other state-of-the-art techniques that realize either multitemporal or
multispectral data fusion. Furthermore, we show that the advantage of DeepSent
results from how these two fusion types are combined in a single architecture,
which is superior to performing such fusion in a sequential manner.
Importantly, we have applied our method to super-resolve real-world Sentinel-2
images, enhancing the spatial resolution of all the spectral bands to 3.3 m
nominal ground sampling distance, and we compare the outcome with very
high-resolution WorldView-2 images. We will publish our implementation upon
paper acceptance, and we expect it will increase the possibilities of
exploiting super-resolved Sentinel-2 images in real-life applications.
Comment: Submitted to IEEE Transactions on Geoscience and Remote Sensing
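For context, the simplest non-learned baselines for the two fusion directions can be sketched as follows. These are purely illustrative numpy stand-ins, not part of DeepSent: temporal averaging for the multitemporal side, and nearest-neighbour enlargement for the spatial side.

```python
import numpy as np

def naive_temporal_fusion(series):
    """Naive multitemporal baseline: average co-registered acquisitions.

    series : (T, H, W, B) stack of T Sentinel-2 acquisitions with B bands.
    Averaging over time suppresses noise but, unlike a learned model,
    cannot recover detail beyond the native grid; it serves only as a
    reference point for joint spectro-temporal fusion.
    """
    return series.mean(axis=0)

def upscale_nearest(image, factor):
    """Nearest-neighbour enlargement toward a finer nominal ground
    sampling distance (e.g. 3x from 10 m toward ~3.3 m), a trivial
    stand-in for a learned super-resolution step."""
    return image.repeat(factor, axis=0).repeat(factor, axis=1)
```

Performing the two steps sequentially, as in the pipelines the paper compares against, would simply chain these operators; the paper's claim is that carrying out both fusions jointly inside one architecture outperforms any such sequential composition.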
Transformation vs Tradition: Artificial General Intelligence (AGI) for Arts and Humanities
Recent advances in artificial general intelligence (AGI), particularly large
language models and creative image generation systems, have demonstrated
impressive capabilities on diverse tasks spanning the arts and humanities.
However, the swift evolution of AGI has also raised critical questions about
its responsible deployment in these culturally significant domains
traditionally seen as profoundly human. This paper provides a comprehensive
analysis of the applications and implications of AGI for text, graphics, audio,
and video pertaining to arts and the humanities. We survey cutting-edge systems
and their usage in areas ranging from poetry to history, marketing to film, and
communication to classical art. We outline substantial concerns pertaining to
factuality, toxicity, biases, and public safety in AGI systems, and propose
mitigation strategies. The paper argues for multi-stakeholder collaboration to
ensure AGI promotes creativity, knowledge, and cultural values without
undermining truth or human dignity. Our timely contribution summarizes a
rapidly developing field, highlighting promising directions while advocating
for responsible progress centering on human flourishing. The analysis lays the
groundwork for further research on aligning AGI's technological capacities with
enduring social goods.