Domain Adaptation for Sustainable Soil Management using Causal and Contrastive Constraint Minimization
Monitoring organic matter is pivotal for maintaining soil health and can help inform sustainable soil management practices. While sensor-based soil information offers high-fidelity, reliable insight into organic matter changes, sampling and measuring sensor data is cost-prohibitive. We propose a multi-modal, scalable framework that can estimate organic matter from remote sensing data, a more readily available data source, while leveraging sparse soil information to improve generalization. Using the sensor data, we preserve the underlying causal relations among sensor attributes and organic matter. Simultaneously, we leverage the inherent structure in the data and train the model to discriminate among domains using contrastive learning. This causal and contrastive constraint minimization ensures improved generalization and adaptation to other domains. We also shed light on the interpretability of the framework by identifying attributes that are important for improving generalization. Identifying the key soil attributes that affect organic matter will aid efforts to standardize data collection.
Comment: NeurIPS workshop on Tackling Climate Change 202
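The causal and contrastive constraint minimization described above can be sketched as a combined training objective. The NumPy sketch below is hypothetical, not the authors' implementation: `info_nce_loss` treats embeddings from the same domain as positives and other domains as negatives, `causal_constraint` is an illustrative stand-in for the causal term (the abstract does not specify its exact form), and the weights `lam` and `mu` are made up.

```python
import numpy as np

def info_nce_loss(z, domains, temperature=0.5):
    """Contrastive term: rows of z (embeddings) from the same domain are
    pulled together; rows from different domains are pushed apart."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity space
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z)
    loss = 0.0
    for i in range(n):
        pos = domains == domains[i]
        pos[i] = False                                 # positives: same domain, not self
        log_den = np.log(np.exp(sim[i]).sum())         # log-sum-exp over all candidates
        loss += -np.mean(sim[i][pos] - log_den)
    return loss / n

def causal_constraint(pred, sensor_pred):
    """Illustrative causal term: penalize disagreement between the
    remote-sensing prediction and the prediction implied by the sparse
    sensor attributes (hypothetical formulation)."""
    return np.mean((pred - sensor_pred) ** 2)

def total_loss(z, domains, pred, sensor_pred, lam=0.1, mu=0.1):
    """Combined objective; the supervised organic-matter loss would be
    added as the first term in a full training loop."""
    task = 0.0
    return task + lam * info_nce_loss(z, domains) + mu * causal_constraint(pred, sensor_pred)
```

As a sanity check, embeddings that cluster by domain should score a lower contrastive loss than the same embeddings with scrambled domain labels.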
DBO: Response Time Fairness for Cloud-Hosted Financial Exchanges
In this paper, we consider the problem of hosting financial exchanges in the cloud. Financial exchanges require predictable, equal latency to all market participants to ensure fairness for tasks such as high-speed trading. However, it is extremely difficult to ensure equal latency to all market participants in existing cloud deployments because of congestion and unequal network paths. In this paper, we address the unfairness that stems from the lack of determinism in cloud networks. We argue that predictable or bounded latency is not necessary to achieve fairness. Inspired by the use of logical clocks in distributed systems, we present Delivery Based Ordering (DBO), a new approach that ensures fairness by instead correcting for differences in latency to the participants. We evaluate DBO both on our hardware testbed and in a public cloud deployment, and demonstrate that it is feasible to achieve guaranteed fairness and sub-100-microsecond latency while operating at high transaction rates.
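The core idea of correcting for latency differences rather than bounding latency can be sketched in a few lines: sequence each order by its response time (how long the participant took to react after the triggering market data was delivered to it), instead of by raw arrival time at the exchange. The field names below are hypothetical; the paper's actual mechanism determines delivery times with logical clocks at per-participant gateways.

```python
def dbo_sequence(orders):
    """Sketch of delivery-based ordering (hypothetical API): each order
    records the wall-clock time it arrived at the exchange and the time
    the triggering market-data update was delivered to its sender.
    Sorting by (arrival - delivered) ranks orders by response time, so a
    participant on a slower network path is not penalized for latency
    it does not control."""
    return sorted(orders, key=lambda o: o["arrival"] - o["delivered"])


# Illustrative numbers (microseconds): A gets the update first and reacts
# in 8 us; B gets it 50 us later but reacts in 3 us. FIFO would rank A
# first; DBO ranks B first because B responded faster.
orders = [
    {"id": "A", "delivered": 0, "arrival": 8},
    {"id": "B", "delivered": 50, "arrival": 53},
]
ranked = [o["id"] for o in dbo_sequence(orders)]
```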
Machine learning can guide experimental approaches for protein digestibility estimations
Food protein digestibility and bioavailability are critical aspects in
addressing human nutritional demands, particularly when seeking sustainable
alternatives to animal-based proteins. In this study, we propose a machine
learning approach to predict the true ileal digestibility coefficient of food
items. The model makes use of a unique curated dataset that combines
nutritional information from different foods with FASTA sequences of some of
their protein families. We extracted the biochemical properties of the proteins
and combined these properties with embeddings from a Transformer-based protein
Language Model (pLM). In addition, we used SHAP to identify features that
contribute most to the model prediction and provide interpretability. This
first AI-based model for predicting food protein digestibility has an accuracy
of 90% compared to existing experimental techniques. With this accuracy, our
model can eliminate the need for lengthy in vivo or in vitro experiments, making the process of creating new foods faster, cheaper, and more ethical.
Comment: 50 pages, submitted to Nature Foo
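The feature pipeline described above, concatenating pLM embeddings with biochemical descriptors and then fitting a predictor, can be sketched as follows. Everything here is a hypothetical stand-in: least squares replaces the paper's actual model, and permutation importance replaces SHAP as the interpretability step, purely to keep the sketch dependency-free.

```python
import numpy as np

def build_features(embedding, biochem):
    """Concatenate a Transformer pLM embedding with hand-crafted
    biochemical descriptors (both arrays are illustrative stand-ins)."""
    return np.concatenate([embedding, biochem])

def fit_digestibility(X, y):
    """Least-squares stand-in for the paper's model, predicting the
    true ileal digestibility coefficient from the combined features."""
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return w

def predict(w, X):
    return np.c_[X, np.ones(len(X))] @ w

def permutation_importance(w, X, y, j, rng):
    """Interpretability stand-in (the paper uses SHAP): the increase in
    squared error when feature column j is shuffled."""
    base = np.mean((predict(w, X) - y) ** 2)
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    return np.mean((predict(w, Xp) - y) ** 2) - base
```

On synthetic data, an informative feature should show a larger permutation importance than an irrelevant one; on real data this role is played by SHAP values over the combined feature set.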
Opportunistic Use of Client Repeaters to Improve Performance of WLANs
Currently deployed IEEE 802.11 WLANs (Wi-Fi networks) share access point (AP) bandwidth on a per-packet basis. However, the various stations communicating with the AP often have different signal qualities, resulting in different transmission rates. This induces a phenomenon known as the rate anomaly problem, in which stations with lower signal quality transmit at lower rates and consume a significant majority of airtime, thereby dramatically reducing the throughput of stations transmitting at high rates. We propose a practical, deployable system, called SoftRepeater, in which stations cooperatively address the rate anomaly problem. Specifically, higher-rate Wi-Fi stations opportunistically transform themselves into repeaters for low-rate stations when transmitting to/from the AP. The key challenge is to determine when it is beneficial to enable the repeater functionality. In this paper, we propose an initiation protocol that ensures repeater functionality is enabled only when appropriate. Our system can run directly on top of today's 802.11 infrastructure networks. We also describe a novel, zero-overhead network coding scheme that further alleviates undesirable symptoms of the rate anomaly problem. We evaluate our system using simulation and a testbed implementation, and find that SoftRepeater can improve cumulative throughput by up to 200%.
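The airtime reasoning behind the initiation decision can be made concrete with a simplified model (this is a sketch, not the paper's protocol, and it ignores MAC overhead and the network-coding optimization): relaying a frame over two fast hops is worthwhile only when the combined airtime per bit of the two hops is less than that of the single slow direct hop.

```python
def repeater_beneficial(r_direct, r_src_to_rep, r_rep_to_ap):
    """Simplified airtime check: a frame of B bits takes B/r seconds at
    rate r, so relaying (two transmissions) beats the direct low-rate
    path when 1/r1 + 1/r2 < 1/r_direct. Rates are in the same units,
    e.g. Mbps."""
    return 1 / r_src_to_rep + 1 / r_rep_to_ap < 1 / r_direct
```

For example, a station stuck at 1 Mbps that can reach a repeater and the AP at 11 Mbps each spends about 0.18 units of airtime per bit over two hops versus 1.0 directly, so repeating wins; a station already at 11 Mbps gains nothing from two 12 Mbps hops.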
Towards Large-Scale Learned Solvers for Parametric PDEs with Model-Parallel Fourier Neural Operators
Fourier neural operators (FNOs) are a recently introduced neural network
architecture for learning solution operators of partial differential equations
(PDEs), which have been shown to perform significantly better than comparable
approaches based on convolutional networks. Once trained, FNOs can achieve
speed-ups of multiple orders of magnitude over conventional numerical PDE
solvers. However, due to the high dimensionality of their input data and
network weights, FNOs have so far only been applied to two-dimensional or small
three-dimensional problems. To remove this limited problem-size barrier, we
propose a model-parallel version of FNOs based on domain-decomposition of both
the input data and network weights. We demonstrate that our model-parallel FNO
is able to predict time-varying PDE solutions of over 3.2 billion variables on Summit using up to 768 GPUs, and show an example of training a distributed FNO on the Azure cloud for simulating multiphase CO2 dynamics in the Earth's subsurface.
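The domain-decomposition idea can be illustrated with a slab partition of the input field. The sketch below is a single-process stand-in: in the real model-parallel FNO the slabs live on different GPUs, the gather is an all-to-all transpose, and the network weights are partitioned as well.

```python
import numpy as np

def split_slabs(field, n_ranks):
    """Partition the input along the first spatial axis, slab-style,
    as if distributing it across n_ranks workers."""
    return np.array_split(field, n_ranks, axis=0)

def distributed_fft(slabs):
    """Stand-in for the communication step of the spectral layer: a
    Fourier transform along the partitioned axis needs the whole axis,
    so the slabs must be gathered (an all-to-all transpose on a real
    multi-GPU run) before the FFT is applied."""
    full = np.concatenate(slabs, axis=0)
    return np.fft.fftn(full)
```

The key property is that the decomposed pipeline reproduces the undecomposed spectral transform exactly; parallelism changes where the data lives, not the operator being learned.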
On the Energy Overhead of Mobile Storage Systems
Secure digital (SD) cards and embedded multimedia cards (eMMC) are pervasively used as secondary storage devices in portable electronics, such as smartphones and tablets. These devices cost under 70 cents per gigabyte. They deliver more than 4000 random IOPS and 70 MBps of sequential access bandwidth, and they operate at a peak power lower than 250 milliwatts. However, the software storage stack above the device level on most existing mobile platforms is not optimized to exploit the low-energy characteristics of such devices. This paper examines the energy consumption of the storage stack on mobile platforms. We conduct several experiments on mobile platforms to analyze the energy requirements of their respective storage stacks. We find that the software storage stack consumes up to 200 times more energy than the storage hardware, and that the security and privacy requirements of mobile apps are a major cause. We propose a storage energy model for mobile platforms to help developers optimize the energy requirements of storage-intensive applications. Finally, we propose a few optimizations to reduce the energy consumption of storage systems on these platforms.
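A minimal energy model in the spirit of the one proposed above splits an I/O operation's energy into the device's active energy and the CPU energy spent in the software stack. The formulation and all numbers below are illustrative, not the paper's measured model.

```python
def storage_energy(device_power_w, device_time_s, cpu_power_w, stack_time_s):
    """Illustrative two-term energy model for one storage operation:
    energy = device power x device active time
           + CPU power x time spent in the software stack above it."""
    return device_power_w * device_time_s + cpu_power_w * stack_time_s


# Illustrative figures consistent with the abstract: a device drawing
# 250 mW for 10 ms of activity, while the stack keeps a 2 W CPU busy
# for 250 ms, makes the software stack cost 200x the hardware energy.
device_only = storage_energy(0.25, 0.01, 0.0, 0.0)
total = storage_energy(0.25, 0.01, 2.0, 0.25)
```

A model of this shape lets a developer ask where added time (e.g., per-file encryption in the stack) costs energy, even when device power is already low.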