15 research outputs found
Streaming Support for Data Intensive Cloud-Based Sequence Analysis
Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of “resources-on-demand” and “pay-as-you-go”, scientists with no or limited infrastructure can access scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client's site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks where the NGS sequences can be processed independently of one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation.
Multi-Spectral Remote Sensing Image Retrieval Using Geospatial Foundation Models
Image retrieval enables an efficient search through vast amounts of satellite
imagery and returns similar images to a query. Deep learning models can
identify images across various semantic concepts without the need for
annotations. This work proposes to use Geospatial Foundation Models, like
Prithvi, for remote sensing image retrieval with multiple benefits: i) the
models encode multi-spectral satellite data and ii) generalize without further
fine-tuning. We introduce two datasets to the retrieval task and observe a
strong performance: Prithvi processes six bands and achieves a mean Average
Precision of 97.62% on BigEarthNet-43 and 44.51% on ForestNet-12, outperforming
other RGB-based models. Further, we evaluate three compression methods that use
binarized embeddings to balance retrieval speed and accuracy. They match the
retrieval speed of much shorter hash codes while maintaining the same accuracy
as floating-point embeddings but with a 32-fold compression. The code is
available at https://github.com/IBM/remote-sensing-image-retrieval.
Comment: Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS)
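The binarized-embedding idea behind the 32-fold compression can be sketched in a few lines of NumPy (an illustration with synthetic data, not the paper's pipeline; the embedding dimension and database size are arbitrary): sign-binarize float32 embeddings to 1 bit per dimension and retrieve by Hamming distance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical float32 embeddings from an encoder such as Prithvi:
# 1000 database images, 768 dimensions each.
db = rng.standard_normal((1000, 768)).astype(np.float32)
# A query close to database item 42 (slightly perturbed copy).
query = db[42] + 0.01 * rng.standard_normal(768).astype(np.float32)

def binarize(x):
    """Sign-binarize an embedding: 1 bit per dimension instead of a
    32-bit float, i.e. a 32-fold compression once bit-packed."""
    return (x > 0).astype(np.uint8)

db_bits = binarize(db)
q_bits = binarize(query)

# Pack 8 bits per byte: 768 dims -> 96 bytes per code (vs 3072 bytes
# for the float32 embedding).
packed = np.packbits(db_bits, axis=1)

# Retrieval: Hamming distance between the query code and every
# database code, then take the closest item.
hamming = (db_bits != q_bits).sum(axis=1)
nearest = int(np.argmin(hamming))
```

Comparing codes with Hamming distance is what lets binarized embeddings match the speed of short hash codes, while the sign pattern preserves enough of the float embedding to keep retrieval accuracy.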
CLAIMED -- the open source framework for building coarse-grained operators for accelerated discovery in science
In modern data-driven science, reproducibility and reusability are key
challenges. Scientists are well skilled in the process from data to
publication. Although some publication channels require source code and data to
be made accessible, rerunning and verifying experiments is usually hard due to
a lack of standards. Therefore, reusing existing scientific data processing
code from state-of-the-art research is hard as well. This is why we introduce
CLAIMED, which has a proven track record in scientific research for addressing
the repeatability and reusability issues in modern data-driven science. CLAIMED
is a framework for building reusable operators and scalable scientific workflows,
helping scientists draw on previous work by re-composing workflows from existing
libraries of coarse-grained scientific operators. Although
various implementations exist, CLAIMED is programming language, scientific
library, and execution environment agnostic.
Comment: Received IEEE OSS Award 2023 - https://conferences.computer.org/services/2023/symposia/oss.htm
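In the spirit described above, a coarse-grained operator is a self-contained, parameterized unit with explicit inputs and outputs, and a workflow is a re-composition of such units. A minimal Python sketch (the operators and data below are hypothetical, not from the CLAIMED library):

```python
# Operator 1: a self-contained, parameterized filtering step.
def filter_rows(rows, column, threshold):
    """Keep rows whose value in `column` exceeds `threshold`."""
    return [r for r in rows if r[column] > threshold]

# Operator 2: a self-contained aggregation step.
def mean_of(rows, column):
    """Average a column over the given rows."""
    vals = [r[column] for r in rows]
    return sum(vals) / len(vals)

# Re-composing a workflow from the operator library: each operator
# can be reused and verified independently of the others.
data = [{"temp": 1.0}, {"temp": 3.0}, {"temp": 5.0}]
result = mean_of(filter_rows(data, "temp", 2.0), "temp")
```

Because each operator exposes only data in and data out, the same units can be rewired into new workflows without touching their internals, which is what makes such workflows repeatable and reusable.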
TensorBank: Tensor Lakehouse for Foundation Model Training
Storing and streaming high-dimensional data for foundation model training has
become a critical requirement with the rise of foundation models beyond natural
language. In this paper we introduce TensorBank, a petabyte scale tensor
lakehouse capable of streaming tensors from Cloud Object Store (COS) to GPU
memory at wire speed based on complex relational queries. We use Hierarchical
Statistical Indices (HSI) for query acceleration. Our architecture allows
tensors to be addressed directly at the block level using HTTP range reads. Once
in GPU memory, data can be transformed using PyTorch transforms. We provide a
generic PyTorch dataset type with a corresponding dataset factory that
translates a relational query and requested transformations into a dataset
instance. By making use
of the HSI, irrelevant blocks can be skipped without reading them as those
indices contain statistics on their content at different hierarchical
resolution levels. This is an opinionated architecture powered by open
standards that makes heavy use of open-source technology. Although hardened for
production use with geospatial-temporal data, this architecture generalizes to
other use cases such as computer vision, computational neuroscience, biological
sequence analysis, and more.
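Block-level addressing via HTTP range reads can be sketched as follows (an illustrative layout, not TensorBank's actual on-object format: we assume fixed-shape blocks stored contiguously in one object):

```python
import numpy as np

def block_byte_range(block_index, block_shape, dtype):
    """Byte range of one tensor block inside a flat object, assuming
    blocks of identical shape are stored back to back (an assumed
    layout for illustration only)."""
    block_bytes = int(np.prod(block_shape)) * np.dtype(dtype).itemsize
    start = block_index * block_bytes
    return start, start + block_bytes - 1

def range_header(start, end):
    """HTTP Range header selecting exactly one block, so only the
    blocks the index deems relevant are read from object store."""
    return {"Range": f"bytes={start}-{end}"}

# Example: 256x256 float32 blocks; address the third block (index 2).
start, end = block_byte_range(2, (256, 256), np.float32)
hdr = range_header(start, end)
# A real client would now issue e.g. requests.get(url, headers=hdr)
# and hand the returned buffer to PyTorch transforms on the GPU.
```

Skipping a block then costs nothing: if the HSI statistics rule it out, the corresponding range request is simply never issued.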
Lossy neural compression for geospatial analytics: a review
Over the past decades, there has been an explosion in the amount of available Earth observation (EO) data. The unprecedented coverage of Earth’s surface and atmosphere by satellite imagery has resulted in large volumes of data that must be transmitted to ground stations, stored in data centers, and distributed to end users. Modern Earth system models (ESMs) face similar challenges, operating at high spatial and temporal resolutions and producing petabytes of data per simulated day. Data compression has gained relevance over the past decade, with neural compression (NC) emerging from the intersection of deep learning and information theory; the abundance of unlabeled data in EO archives and ESM outputs makes them ideal candidates for NC.
In this review, we outline recent developments in NC applied to geospatial data. We introduce the fundamental concepts of NC, including seminal works in its traditional applications to image and video compression domains with a focus on lossy compression. We discuss the unique characteristics of EO and ESM data, contrasting them with “natural images,” and we explain the additional challenges and opportunities they present. Additionally, we review current applications of NC across various EO modalities and explore the limited efforts in ESM compression to date. The advent of self-supervised learning (SSL) and foundation models (FMs) has advanced methods to efficiently distill representations from vast amounts of unlabeled data. We connect these developments to NC for EO, highlighting the similarities between the two fields and elaborating on the potential of transferring compressed feature representations for machine-to-machine communication. Based on insights drawn from this review, we devise future directions relevant to applications in EO and ESMs.
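The lossy NC methods surveyed here typically optimize a rate-distortion trade-off. In the standard formulation from the NC literature (generic symbols, not tied to any specific method), an encoder $f_\theta$ produces a quantized latent $\hat{y}$, a decoder $g_\phi$ reconstructs $\hat{x}$, and training balances distortion against the bits needed to entropy-code $\hat{y}$:

```latex
\mathcal{L}
  = \underbrace{\mathbb{E}_{x}\, d\!\left(x, \hat{x}\right)}_{\text{distortion } D}
  + \lambda \underbrace{\mathbb{E}_{x}\!\left[-\log_2 p_{\hat{y}}\!\left(\hat{y}\right)\right]}_{\text{rate } R},
\qquad
\hat{y} = \big\lfloor f_\theta(x) \big\rceil,
\quad
\hat{x} = g_\phi(\hat{y}).
```

The multiplier $\lambda$ traces out the rate-distortion curve: larger $\lambda$ favors smaller compressed representations at the cost of reconstruction fidelity, which is the central dial when compressing EO imagery or ESM output.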
Prithvi WxC: Foundation Model for Weather and Climate
Triggered by the realization that AI emulators can rival the performance of traditional numerical weather prediction models running on HPC systems, there is now an increasing number of large AI models that address use cases such as forecasting, downscaling, or nowcasting. While the parallel developments in the AI literature focus on foundation models -- models that can be effectively tuned to address multiple, different use cases -- the developments on the weather and climate side largely focus on single use cases with particular emphasis on mid-range forecasting. We close this gap by introducing Prithvi WxC, a 2.3 billion parameter foundation model developed using 160 variables from the Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA-2). Prithvi WxC employs an encoder-decoder-based architecture, incorporating concepts from various recent transformer models to effectively capture both regional and global dependencies in the input data. The model has been designed to accommodate large token counts to model weather phenomena in different topologies at fine resolutions. Furthermore, it is trained with a mixed objective that combines the paradigms of masked reconstruction with forecasting. We test the model on a set of challenging downstream tasks, namely: autoregressive rollout forecasting, downscaling, gravity wave flux parameterization, and extreme events estimation. The pretrained model with 2.3 billion parameters, along with the associated fine-tuning workflows, has been publicly released as an open-source contribution via Hugging Face.
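The shape of such a mixed objective can be sketched in NumPy (an illustration of the idea only: the equal weighting, the MSE choice, and the toy grid below are assumptions, not the paper's exact formulation): reconstruction error is scored only on masked input positions, while forecasting error is scored on the predicted next state.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_objective(x_t, x_t1, recon, forecast, mask, alpha=0.5):
    """Combine masked reconstruction with forecasting.
    `mask` marks the input positions hidden from the model;
    `alpha` (an assumed weight) trades off the two terms."""
    # Reconstruction error only where the input was masked out...
    recon_loss = np.mean((recon[mask] - x_t[mask]) ** 2)
    # ...plus forecasting error on the full next state.
    forecast_loss = np.mean((forecast - x_t1) ** 2)
    return alpha * recon_loss + (1 - alpha) * forecast_loss

# Toy "weather states": 4 variables on an 8x8 grid at times t and t+1.
x_t = rng.standard_normal((4, 8, 8))
x_t1 = x_t + 0.1 * rng.standard_normal((4, 8, 8))
mask = rng.random((4, 8, 8)) < 0.5  # positions hidden from the model

# A perfect model (recon == x_t, forecast == x_t1) attains zero loss.
loss = mixed_objective(x_t, x_t1, recon=x_t, forecast=x_t1, mask=mask)
```

Training on both terms at once is what lets a single pretrained backbone serve both reconstruction-style downstream tasks (e.g. downscaling) and rollout forecasting.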
