89 research outputs found
The Application of the Internet of Things to Enhance Urban Sustainability
This article examines opportunities and challenges faced by planners when applying the Internet of Things (IoT) as a tool to facilitate sustainable urban development in the context of the Smart Cities movement. As an important element of the Smart Cities concept, IoT is expected to enhance urban sustainability through sensor networks that detect and transmit environmental data. However, various challenges still add a layer of difficulty to the process of using IoT to achieve this goal. The article first identifies the concepts of, and relationships among, three key background issues: Smart Cities, the Internet of Things, and sustainability. It then investigates the challenges of using IoT technology to assist urban sustainability in various aspects. Next, it proposes possible responses to those challenges in three fields of application: waste management, smart streetlights, and smart homes. It is of great importance for urban planners to understand the complexity of these challenges due to the interdisciplinary nature of such applications. Therefore, it is essential for the field of urban planning to collaborate with other sectors to better utilize IoT technologies towards sustainability.
https://deepblue.lib.umich.edu/bitstream/2027.42/136581/1/Zhang_TheApplicationOfTheInternetOfThingsToEnhanceUrbanSustainability.pd
DProvSQL: Accuracy-Aware Privacy Provenance Framework for Differentially Private SQL Engine
Recent years have witnessed the adoption of differential privacy (DP) in practical database query systems. Such systems, like PrivateSQL and FLEX, allow data analysts to query sensitive data while providing a rigorous and provable privacy guarantee. However, existing systems may consume more privacy budget than necessary in certain cases where different data analysts with different privilege levels ask a similar or even the same query. In light of this deficiency, we propose DProvSQL, a fine-grained privacy provenance framework that tracks the privacy loss of each individual data analyst, and we build algorithms that use this framework to maximize the number of queries that can be answered. We implement DProvSQL as a middleware between the data analysts and existing differentially private SQL query answering systems. The empirical results on the TPC-H dataset show that our approach can answer around 4x more queries than the baseline approach on average, with marginal performance overhead.
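The per-analyst accounting idea in the abstract above can be sketched in a few lines. This is an illustrative toy, not DProvSQL's actual algorithm: the class name, the per-analyst budget table, and the basic sequential-composition accounting are all assumptions for exposition.

```python
# Toy sketch of fine-grained privacy-loss tracking per data analyst.
# Basic sequential composition: epsilons of answered queries add up,
# and a query is denied if it would push an analyst past their budget.

class PrivacyProvenance:
    def __init__(self, budgets):
        # budgets: analyst -> total epsilon that analyst may consume
        self.budgets = dict(budgets)
        self.spent = {a: 0.0 for a in budgets}

    def try_answer(self, analyst, epsilon):
        """Permit a query costing `epsilon` only if the analyst's
        remaining budget allows it; otherwise deny it."""
        if self.spent[analyst] + epsilon > self.budgets[analyst]:
            return False  # deny: budget would be exceeded
        self.spent[analyst] += epsilon
        return True

    def remaining(self, analyst):
        return self.budgets[analyst] - self.spent[analyst]
```

A system like the one described would go further, e.g. reusing noisy answers across analysts with similar queries rather than simply denying, but the ledger above is the core bookkeeping.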
Preventing Inferences through Data Dependencies on Sensitive Data
Simply restricting computation to the non-sensitive part of the data may still lead to inferences on sensitive data through data dependencies. Inference control from data dependencies has been studied in prior work. However, existing solutions either detect and deny queries that may lead to leakage, resulting in poor utility, or only protect against exact reconstruction of the sensitive data, resulting in poor security. In this paper, we present a novel security model called full deniability. Under this stronger security model, any information inferred about sensitive data from non-sensitive data is considered a leakage. We describe algorithms for efficiently implementing full deniability on a given database instance with a set of data dependencies and sensitive cells. Using experiments on two different datasets, we demonstrate that our approach protects against realistic adversaries while hiding only a minimal number of additional non-sensitive cells, and scales well with database size and amount of sensitive data.
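The kind of leakage the abstract above describes is easy to demonstrate with a toy example. The table, the functional dependency zip -> city, and the cell names below are hypothetical, purely to show how a visible dependency reconstructs a hidden cell:

```python
# Toy illustration of inference through a data dependency.
# 'city' is sensitive and hidden for row r2, but the functional
# dependency zip -> city lets an adversary recover it from r1.

rows = [
    {"name": "r1", "zip": "92617", "city": "Irvine"},
    {"name": "r2", "zip": "92617", "city": None},  # sensitive cell hidden
]

def infer_hidden_city(rows):
    # Learn zip -> city from the visible cells, then apply it to hidden ones.
    mapping = {r["zip"]: r["city"] for r in rows if r["city"] is not None}
    return {r["name"]: mapping.get(r["zip"]) for r in rows if r["city"] is None}

print(infer_hidden_city(rows))  # → {'r2': 'Irvine'}
```

Under full deniability, the cuckoo here is that r1's city (a non-sensitive cell) would additionally be hidden so that no such inference channel remains, which is exactly the "minimal number of additional non-sensitive cells" trade-off the paper measures.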
Recovery from Non-Decomposable Distance Oracles
A line of work has looked at the problem of recovering an input from distance queries. In this setting, there is an unknown sequence x, and one chooses a set of queries y and receives d(x, y) for a distance function d. The goal is to make as few queries as possible to recover x. Although this problem is well-studied for decomposable distances, i.e., distances of the form d(x, y) = Σ_i f(x_i, y_i) for some function f, which includes the important cases of Hamming distance, ℓ_p-norms, and M-estimators, to the best of our knowledge this problem has not been studied for non-decomposable distances, for which there are important special cases such as edit distance, dynamic time warping (DTW), Fréchet distance, earth mover's distance, and so on. We initiate the study and develop a general framework for such distances. Interestingly, for some distances such as DTW or Fréchet, exact recovery of the sequence is provably impossible, and so we show that by allowing the characters in the recovered sequence to be drawn from a slightly larger alphabet, recovery becomes possible. In a number of cases we obtain optimal or near-optimal query complexity. We also study the role of adaptivity for a number of different distance functions. One motivation for understanding non-adaptivity is that the query sequence can be fixed, and the distances of the input to the queries provide a non-linear embedding of the input, which can be used in downstream applications involving, e.g., neural networks for natural language processing.

Comment: This work was presented at The 14th Innovations in Theoretical Computer Science conference (ITCS 2023) and accepted for publication in the journal IEEE Transactions on Information Theory.
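For intuition on the decomposable baseline mentioned above, here is a classic non-adaptive scheme for recovering a binary string from Hamming-distance queries. The query set (the all-zeros string plus each unit vector) is a standard textbook choice, not taken from this paper:

```python
# Recover an unknown binary string x of length n from n+1 non-adaptive
# Hamming-distance queries: the all-zeros string gives the weight |x|,
# and each unit vector e_i satisfies d(x, e_i) = |x| + 1 - 2*x_i.

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def recover(oracle, n):
    w = oracle([0] * n)               # d(x, 0) = |x|
    x = []
    for i in range(n):
        e = [0] * n
        e[i] = 1
        d = oracle(e)                 # d(x, e_i) = |x| + 1 - 2*x_i
        x.append((w + 1 - d) // 2)    # solve for the bit x_i
    return x

secret = [1, 0, 1, 1, 0]
assert recover(lambda y: hamming(secret, y), 5) == secret
```

Because the queries do not depend on the answers, the vector of oracle responses is exactly the kind of fixed non-linear embedding of the input that the abstract cites as a motivation for studying non-adaptivity; the paper's contribution is handling distances where no such per-coordinate decomposition exists.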
Finite Volume Graph Network (FVGN): Predicting unsteady incompressible fluid dynamics with finite volume informed neural network
In recent years, the development of deep learning has noticeably influenced the progress of computational fluid dynamics. Numerous researchers have undertaken flow field predictions on a variety of grids, such as MAC grids, structured grids, unstructured meshes, and pixel-based grids, which have been the focus of many works. However, predicting unsteady flow fields on unstructured meshes remains challenging. When employing graph neural networks (GNNs) for these predictions, the message-passing mechanism can become inefficient, especially on denser unstructured meshes. Furthermore, unsteady flow field predictions often rely on autoregressive neural networks, which are susceptible to error accumulation during extended predictions. In this study, we integrate the traditional finite volume method to devise a spatial integration strategy that enables the formulation of a physically constrained loss function. This aims to counter the error accumulation that emerges in autoregressive neural networks during long-term predictions. Concurrently, we merge vertex-centered and cell-centered grids from the finite volume method, introducing a dual message-passing mechanism within a single GNN layer to enhance message-passing efficiency. We benchmark our approach against MeshGraphNets for unsteady flow field predictions on unstructured meshes. Our findings indicate that the methodologies combined in this study significantly enhance the precision of flow field predictions while substantially reducing the training time cost. We offer a comparative analysis of flow field predictions, focusing on cylindrical, airfoil, and square column obstacles in two-dimensional incompressible fluid dynamics scenarios. This analysis encompasses comparisons of lift coefficient, drag coefficient, and pressure coefficient distributions on the boundary layers.
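The "physically constrained loss" idea above can be illustrated with a deliberately simplified sketch. This assumes a 1-D uniform mesh rather than the paper's 2-D unstructured setting, and the function names are invented: the point is only that a finite-volume update is conservative by construction, so penalizing the network's deviation from it discourages drift during autoregressive rollouts.

```python
# Illustrative 1-D finite-volume consistency loss (not the paper's scheme).
# Each cell average must change by exactly the net flux through its faces;
# the loss penalizes predictions that violate this local conservation.

import numpy as np

def conservation_loss(u_old, u_pred, face_flux, dt, dx):
    """u_old, u_pred: cell averages (n,); face_flux: fluxes at the n+1 faces,
    where face_flux[i] is the flux through the face between cells i-1 and i."""
    net = face_flux[:-1] - face_flux[1:]          # inflow minus outflow per cell
    u_fv = u_old + (dt / dx) * net                # finite-volume update
    return float(np.mean((u_pred - u_fv) ** 2))   # deviation from conservation
```

A prediction that exactly matches the finite-volume update incurs zero loss, so during training this term pulls the autoregressive network toward locally conservative rollouts.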
Achieving Adversarial Robustness via Sparsity
Network pruning is known to produce compact models without much accuracy degradation. However, how the pruning process affects a network's robustness, and the working mechanism behind this effect, remain unresolved. In this work, we theoretically prove that the sparsity of network weights is closely associated with model robustness. Through experiments on a variety of adversarial pruning methods, we find that weight sparsity does not hurt but rather improves robustness, where both weight inheritance from the lottery ticket and adversarial training improve model robustness in network pruning. Based on these findings, we propose a novel adversarial training method called inverse weights inheritance, which imposes a sparse weight distribution on a large network by inheriting weights from a small network, thereby improving the robustness of the large network.
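For readers unfamiliar with the sparsification the findings above refer to, here is the most common pruning primitive, global magnitude pruning. This is a generic sketch, not the paper's adversarial pruning method or its inverse weights inheritance procedure:

```python
# Global magnitude pruning: zero out the given fraction of weights
# with the smallest absolute value, yielding a sparse weight tensor.

import numpy as np

def prune_by_magnitude(w, sparsity):
    """Return a copy of w with the `sparsity` fraction of smallest-magnitude
    entries set to zero."""
    k = int(round(sparsity * w.size))
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w).ravel())[k - 1]    # k-th smallest magnitude
    return np.where(np.abs(w) <= thresh, 0.0, w)  # zero at or below threshold
```

The paper's claim is that sparsity of this kind correlates with robustness; its proposed method goes in the opposite direction, transplanting a small network's weights into a large one so the large network starts from a sparse distribution.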
Photooxidation of a twisted isoquinolinone
Understanding the oxidation mechanism and positions of twistacenes and twistheteroacenes under ambient conditions is very important because such knowledge can guide the design and synthesis of novel, larger, stable analogues. Herein, we demonstrate for the first time that a twisted isoquinolinone can decompose under oxygen and light at room temperature. The as-decomposed product 1 was fully characterized through conventional methods as well as single-crystal structure analysis. Moreover, the physical properties of the as-obtained product were carefully investigated and a possible formation mechanism was proposed.
Measuring and Mitigating Constraint Violations of In-Context Learning for Utterance-to-API Semantic Parsing
In executable task-oriented semantic parsing, the system aims to translate
users' utterances in natural language to machine-interpretable programs (API
calls) that can be executed according to pre-defined API specifications. With
the popularity of Large Language Models (LLMs), in-context learning offers a
strong baseline for such scenarios, especially in data-limited regimes.
However, LLMs are known to hallucinate and therefore pose a formidable
challenge in constraining generated content. Thus, it remains uncertain if LLMs
can effectively perform task-oriented utterance-to-API generation where
respecting the API's structural and task-specific constraints is crucial.
In this work, we seek to measure, analyze, and mitigate such constraint
violations. First, we identify the categories of various constraints in
obtaining API-semantics from task-oriented utterances, and define fine-grained
metrics that complement traditional ones. Second, we leverage these metrics to
conduct a detailed error analysis of constraint violations seen in
state-of-the-art LLMs, which motivates us to investigate two mitigation
strategies: Semantic-Retrieval of Demonstrations (SRD) and API-aware
Constrained Decoding (API-CD). Our experiments show that these strategies are
effective at reducing constraint violations and improving the quality of the
generated API calls, but require careful consideration given their
implementation complexity and latency.
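The structural constraints discussed above are easy to make concrete with a post-hoc validator. The API names, argument fields, and spec format below are hypothetical, not the paper's benchmark; constrained decoding (API-CD) would enforce such checks during generation rather than after it:

```python
# Toy validator for generated API calls against a pre-defined spec.
# It surfaces the typical constraint violations an LLM can produce:
# hallucinated API names, missing required args, and unknown args.

API_SPEC = {
    "CreateReminder": {"required": {"time", "text"}, "optional": {"repeat"}},
    "GetWeather":     {"required": {"location"},     "optional": {"date"}},
}

def violations(call):
    api, args = call["api"], set(call["args"])
    if api not in API_SPEC:
        return ["unknown API: " + api]            # hallucinated API name
    spec = API_SPEC[api]
    errs = ["missing required arg: " + a
            for a in sorted(spec["required"] - args)]
    errs += ["unknown arg: " + a
             for a in sorted(args - spec["required"] - spec["optional"])]
    return errs
```

Counting such violations over a test set yields exactly the kind of fine-grained, structure-aware metric the abstract proposes alongside traditional exact-match measures.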
- …