A study of remote sensing as applied to regional and small watersheds. Volume 1: Summary report
The accuracy of remotely sensed measurements to provide inputs to hydrologic models of watersheds is studied. A series of sensitivity analyses on continuous simulation models of three watersheds determined: (1) optimal values and permissible tolerances of inputs needed to achieve accurate simulation of streamflow from the watersheds; (2) which model inputs can be quantified from remote sensing, directly, indirectly, or by inference; and (3) how accurate remotely sensed measurements (from spacecraft or aircraft) must be to provide a basis for quantifying model inputs within permissible tolerances.
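The input-tolerance idea above can be illustrated with a minimal one-at-a-time sensitivity sweep. The toy runoff function and its parameter names (`infiltration_rate`, `soil_capacity`) are our own illustrative stand-ins for the study's continuous simulation models, not the actual models used:

```python
import numpy as np

# Toy stand-in for a continuous streamflow simulation model; the parameters
# are illustrative assumptions, not the study's real model inputs.
def simulate_streamflow(rainfall, infiltration_rate, soil_capacity):
    # Runoff = rainfall minus what infiltrates, capped by soil storage.
    infiltrated = np.minimum(rainfall * infiltration_rate, soil_capacity)
    return np.maximum(rainfall - infiltrated, 0.0)

def sensitivity(base_inputs, name, perturbation=0.10):
    """Relative change in total simulated flow for a +10% change in one input."""
    base = simulate_streamflow(**base_inputs).sum()
    perturbed_inputs = dict(base_inputs)
    perturbed_inputs[name] = perturbed_inputs[name] * (1 + perturbation)
    perturbed = simulate_streamflow(**perturbed_inputs).sum()
    return (perturbed - base) / base

inputs = {
    "rainfall": np.array([10.0, 20.0, 5.0]),
    "infiltration_rate": 0.3,
    "soil_capacity": 4.0,
}
for p in ("infiltration_rate", "soil_capacity"):
    print(p, sensitivity(inputs, p))
```

An input whose sweep produces a large relative change in simulated flow must be measured tightly; one with a small effect tolerates a coarser remote-sensing estimate.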
Changes from Classical Statistics to Modern Statistics and Data Science
Coordinate systems are foundational to every quantitative science, as well as
to engineering and medicine. Classical physics and statistics are based on the
Cartesian coordinate system, and classical probability and hypothesis-testing
theory can only be applied to Euclidean data. However, modern real-world data
arise from natural language processing, mathematical formulas, social networks,
transportation and sensor networks, computer vision, automation, and biomedical
measurements, and the Euclidean assumption is not appropriate for such
non-Euclidean data. This perspective addresses the urgent need to overcome
those fundamental limitations and encourages extending classical probability
theory, hypothesis testing, diffusion models, and stochastic differential
equations from Euclidean to non-Euclidean spaces. Artificial intelligence,
including natural language processing, computer vision, graph neural networks,
manifold regression and inference theory, manifold learning, and compositional
diffusion models for automatically composing concepts and demystifying machine
learning systems, has developed rapidly. Differential manifold theory is also
the mathematical foundation of deep learning and data science. We urgently
need to shift the paradigm of data analysis from classical Euclidean analysis
to both Euclidean and non-Euclidean analysis, and to develop ever more
innovative methods for describing, estimating, and inferring the non-Euclidean
geometry of modern real-world datasets. A general framework for the integrated
analysis of both Euclidean and non-Euclidean data, together with composite AI,
decision intelligence, and edge AI, provides powerful ideas and strategies for
fundamentally advancing AI. We should marry statistics with AI, develop a
unified theory of modern statistics, and drive the next generation of AI and
data science.
Comment: 37 pages
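A tiny example of why the Euclidean assumption fails on non-Euclidean data: on the unit sphere, the chordal (straight-line) distance understates the geodesic distance, and the ordinary Euclidean mean of on-sphere points does not even lie on the sphere. A minimal sketch, our own illustration rather than anything from the paper:

```python
import numpy as np

# Two points on the unit sphere S^2, a simple non-Euclidean space.
p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])

# The Euclidean (chordal) distance ignores the geometry of the sphere ...
chordal = np.linalg.norm(p - q)

# ... while the geodesic (great-circle) distance respects it.
geodesic = np.arccos(np.clip(p @ q, -1.0, 1.0))

# The Euclidean mean of on-sphere points falls off the manifold, so
# classical averaging is not directly meaningful here.
euclidean_mean = (p + q) / 2
print(chordal, geodesic, np.linalg.norm(euclidean_mean))
```

Manifold statistics replaces such naive averages with intrinsic constructions (e.g. the Fréchet mean, which minimizes summed squared geodesic distances).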
A hybrid algorithm for Bayesian network structure learning with application to multi-label learning
We present a novel hybrid algorithm for Bayesian network structure learning,
called H2PC. It first reconstructs the skeleton of a Bayesian network and then
performs a Bayesian-scoring greedy hill-climbing search to orient the edges.
The algorithm is based on divide-and-conquer constraint-based subroutines to
learn the local structure around a target variable. We conduct two series of
experimental comparisons of H2PC against Max-Min Hill-Climbing (MMHC), which is
currently the most powerful state-of-the-art algorithm for Bayesian network
structure learning. First, we use eight well-known Bayesian network benchmarks
with various data sizes to assess the quality of the learned structure returned
by the algorithms. Our extensive experiments show that H2PC outperforms MMHC in
terms of goodness of fit to new data and quality of the network structure with
respect to the true dependence structure of the data. Second, we investigate
H2PC's ability to solve the multi-label learning problem. We provide
theoretical results to characterize and identify graphically the so-called
minimal label powersets that appear as irreducible factors in the joint
distribution under the faithfulness condition. The multi-label learning problem
is then decomposed into a series of multi-class classification problems, where
each multi-class variable encodes a label powerset. H2PC is shown to compare
favorably to MMHC in terms of global classification accuracy over ten
multi-label data sets covering different application domains. Overall, our
experiments support the conclusions that local structural learning with H2PC in
the form of local neighborhood induction is a theoretically well-motivated and
empirically effective learning framework that is well suited to multi-label
learning. The source code (in R) of H2PC, as well as all data sets used for
the empirical tests, is publicly available.
Comment: arXiv admin note: text overlap with arXiv:1101.5184 by other authors
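The label-powerset decomposition mentioned above can be sketched in a few lines: each distinct label combination observed in the data becomes one class of a multi-class problem. This toy transform, on a made-up dataset, illustrates only the basic powerset idea, not H2PC's identification of minimal label powersets:

```python
# Minimal label-powerset transformation; the dataset is invented for
# illustration and is not one of the paper's benchmarks.
Y = [
    frozenset({"sports"}),
    frozenset({"sports", "politics"}),
    frozenset(),
    frozenset({"sports"}),
]

# Map every observed label combination to one multi-class index.
classes = {labels: i for i, labels in enumerate(sorted(set(Y), key=sorted))}
y_multiclass = [classes[labels] for labels in Y]
print(classes, y_multiclass)
```

A standard multi-class classifier can then be trained on `y_multiclass`; the paper's contribution is to factor the joint label distribution so that this is done per irreducible label subset rather than over the full (exponentially large) powerset.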
Learn the Force We Can: Enabling Sparse Motion Control in Multi-Object Video Generation
We propose a novel unsupervised method to autoregressively generate videos
from a single frame and a sparse motion input. Our trained model can generate
unseen realistic object-to-object interactions. Although our model has never
been given the explicit segmentation and motion of each object in the scene
during training, it is able to implicitly separate their dynamics and extents.
Key components of our method are the randomized conditioning scheme, the
encoding of the input motion control, and the randomized, sparse sampling that
enables generalization to out-of-distribution but realistic correlations. Our
model, which we call YODA, therefore has the ability to move objects without
physically touching them. Through extensive qualitative and quantitative
evaluations on several datasets, we show that YODA is on par with or better
than prior state-of-the-art video generation work in terms of both
controllability and video quality.
Comment: Accepted to AAAI 2024. Project website:
https://araachie.github.io/yod
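As a rough illustration of what a sparse motion input could look like, one plausible encoding is a mostly-zero displacement field plus a validity mask marking the few user-specified controls. The names and tensor layout below are our assumptions, not YODA's actual interface:

```python
import numpy as np

# Hedged sketch of one plausible sparse motion encoding (illustrative only).
H, W = 8, 8
flow = np.zeros((H, W, 2), dtype=np.float32)   # (dy, dx) per pixel
mask = np.zeros((H, W), dtype=np.float32)      # 1 where a control is given

# A single user-provided "drag": move the pixel at (2, 3) by (dy=1, dx=2).
controls = [((2, 3), (1.0, 2.0))]
for (y, x), (dy, dx) in controls:
    flow[y, x] = (dy, dx)
    mask[y, x] = 1.0

# A conditioning tensor a generator could consume: flow plus a mask channel,
# so the model can distinguish "zero motion" from "no control given".
conditioning = np.concatenate([flow, mask[..., None]], axis=-1)
print(conditioning.shape)
```

The mask channel is the important design point: without it, an unspecified pixel and a pixel explicitly told not to move would be indistinguishable.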
Pathway to Future Symbiotic Creativity
This report presents a comprehensive view of our vision on the development
path of human-machine symbiotic art creation. We propose a classification of
the creative system into a hierarchy of five classes, showing the pathway of
creativity evolving from mimic-human artists (Turing Artists) to Machine
Artists in their own right. We begin with an overview of the limitations of
Turing Artists, then focus on the top two levels of the hierarchy, Machine
Artists,
emphasizing machine-human communication in art creation. For machines to take
part in art creation, they must understand humans' mental states, including
desires, appreciation, and emotions, while humans in turn need to understand
machines' creative capabilities and limitations. The rapid development of
immersive environments, and their further evolution into the new concept of
the metaverse, enables symbiotic art creation through unprecedented
flexibility of bi-directional communication between artists and art
manifestation environments. By examining the latest sensor and XR
technologies, we illustrate a novel way of collecting art data that forms the
basis of a new kind of human-machine bidirectional
communication and understanding in art creation. Based on such communication
and understanding mechanisms, we propose a novel framework for building future
Machine Artists, guided by the philosophy that a human-compatible AI
system should be based on the "human-in-the-loop" principle rather than the
traditional "end-to-end" dogma. By proposing a new form of inverse
reinforcement learning model, we outline the platform design for Machine
Artists, demonstrate its functions, and showcase some examples of technologies
we have developed. We also provide a systematic exposition of the ecosystem for
AI-based symbiotic art form and community with an economic model built on NFT
technology. Ethical issues in the development of Machine Artists are also
discussed.
DreamLLM: Synergistic Multimodal Comprehension and Creation
This paper presents DreamLLM, a learning framework that first achieves
versatile Multimodal Large Language Models (MLLMs) empowered with the
frequently overlooked synergy between multimodal comprehension and creation.
DreamLLM
operates on two fundamental principles. The first focuses on the generative
modeling of both language and image posteriors by direct sampling in the raw
multimodal space. This approach circumvents the limitations and information
loss inherent to external feature extractors like CLIP, yielding a more
thorough multimodal understanding. Second, DreamLLM fosters the generation
of raw, interleaved documents, modeling both text and image contents, along
with unstructured layouts. This allows DreamLLM to learn all conditional,
marginal, and joint multimodal distributions effectively. As a result, DreamLLM
is the first MLLM capable of generating free-form interleaved content.
Comprehensive experiments highlight DreamLLM's superior performance as a
zero-shot multimodal generalist, reaping the benefits of the enhanced learning
synergy.
Comment: see project page at https://dreamllm.github.io