222 research outputs found
A study into the sustainable system between the wind and the villages in Rincón de Ademuz. Spain
The aim of the study is to analyse the sustainable system of Rincón de Ademuz, where a settlement has remained in place for two thousand years.
Ji, W. (2014). A study into the sustainable system between the wind and the villages in Rincón de Ademuz. Spain [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/39001
Wind and the villages in Rincón de Ademuz, Spain
[EN] This study focuses on a sustainable system which has made it possible for the villages in the region of Rincón de Ademuz to stand within their natural environment for over two thousand years. The analysis focuses specifically on the wind factor: the dry weather and the wind trajectory make it possible to create a comfortable living environment in the villages. The research analyses the position of a building unit in order to offer a clear representation of the relationship between wind and these villages.
Ji, W.; Mileto, C.; Vegas López-Manzanares, F. (2022). Wind and the villages in Rincón de Ademuz, Spain. In Proceedings HERITAGE 2022 - International Conference on Vernacular Heritage: Culture, People and Sustainability. Editorial Universitat Politècnica de València. 111-117. https://doi.org/10.4995/HERITAGE2022.2022.15702
Semantically Controllable Generation of Physical Scenes with Explicit Knowledge
Deep Generative Models (DGMs) are known for their superior capability in
generating realistic data. Extending purely data-driven approaches, recent
specialized DGMs may satisfy additional controllable requirements such as
embedding a traffic sign in a driving scene, by manipulating patterns
implicitly at the neuron or feature level. In this paper, we introduce
a novel method to incorporate domain knowledge explicitly in the
generation process to achieve semantically controllable scene generation. We
categorize our knowledge into two types to be consistent with the composition
of natural scenes, where the first type represents the property of objects and
the second type represents the relationship among objects. We then propose a
tree-structured generative model to learn complex scene representation, whose
nodes and edges naturally correspond to the two types of knowledge
respectively. Knowledge can be explicitly integrated to enable semantically
controllable scene generation by imposing semantic rules on properties of nodes
and edges in the tree structure. We construct a synthetic example to illustrate
the controllability and explainability of our method in a clean setting. We
further extend the synthetic example to realistic autonomous vehicle driving
environments and conduct extensive experiments to show that our method
efficiently identifies adversarial traffic scenes against different
state-of-the-art 3D point cloud segmentation models while satisfying the traffic
rules specified as the explicit knowledge.
Comment: 14 pages, 6 figures. Under review
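As an illustration only (the class, rule, and values below are hypothetical, not the authors' code), the two knowledge types can be attached to a tree whose nodes carry object properties and whose edges carry relations, with explicit semantic rules checked over the whole structure:

```python
# Hypothetical sketch: nodes hold object properties (first knowledge
# type), the edge to the parent holds a relation (second type), and
# explicit semantic rules are verified over the entire tree.
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    category: str                 # object property, e.g. "traffic_sign"
    attrs: dict                   # e.g. {"height_m": 2.1}
    relation_to_parent: str = ""  # edge knowledge, e.g. "beside"
    children: list = field(default_factory=list)

def satisfies_rules(node, rules):
    """True iff every node in the tree passes every semantic rule."""
    return all(rule(node) for rule in rules) and \
           all(satisfies_rules(c, rules) for c in node.children)

# Example rule: traffic signs must stand between 1.8 m and 2.5 m tall.
def sign_height_rule(n):
    if n.category != "traffic_sign":
        return True
    return 1.8 <= n.attrs.get("height_m", 0.0) <= 2.5

root = SceneNode("road", {"lanes": 2})
root.children.append(
    SceneNode("traffic_sign", {"height_m": 2.1}, relation_to_parent="beside"))
print(satisfies_rules(root, [sign_height_rule]))  # True
```

Rejecting or resampling subtrees that violate a rule is one simple way such explicit constraints can steer generation.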
Non-Autoregressive Sentence Ordering
Existing sentence ordering approaches generally employ encoder-decoder
frameworks with a pointer network to recover coherence by recurrently
predicting each sentence step by step. Such an autoregressive manner only
leverages unilateral dependencies during decoding and cannot fully explore the
semantic dependency between sentences for ordering. To overcome these
limitations, in this paper, we propose a novel Non-Autoregressive Ordering
Network, dubbed NAON, which explores bilateral dependencies between
sentences and predicts the sentence for each position in parallel. We claim
that the non-autoregressive manner is not just applicable but also particularly
suitable for the sentence ordering task because of two characteristics
of the task: 1) each generation target has a deterministic length, and 2) the
sentences and positions should match exclusively. Furthermore, to address the
repetition issue of the naive non-autoregressive Transformer, we introduce an
exclusive loss to constrain the exclusiveness between positions and sentences.
To verify the effectiveness of the proposed model, we conduct extensive
experiments on several commonly used datasets, and the results show
that our method outperforms all the autoregressive approaches and yields
competitive performance compared with state-of-the-art methods. The code is
available at:
https://github.com/steven640pixel/nonautoregressive-sentence-ordering
Comment: Accepted at Findings of EMNLP202
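At inference time, the exclusiveness between positions and sentences amounts to a one-to-one assignment over the parallel score matrix. A toy sketch (the scores are hypothetical and this is not the NAON implementation; brute force suffices for three sentences, while a real decoder would use the Hungarian algorithm):

```python
# A non-autoregressive decoder scores every (position, sentence) pair
# in parallel. Naive per-position argmax can repeat a sentence; an
# exclusive one-to-one assignment cannot.
from itertools import permutations

scores = [                 # scores[position][sentence], hypothetical values
    [0.1, 0.7, 0.2],
    [0.8, 0.1, 0.1],
    [0.6, 0.3, 0.1],       # positions 1 and 2 both prefer sentence 0
]

naive = [max(range(3), key=lambda s: row[s]) for row in scores]
best = max(permutations(range(3)),
           key=lambda p: sum(scores[i][p[i]] for i in range(3)))

print(naive)       # [1, 0, 0] -- sentence 0 repeated
print(list(best))  # [1, 0, 2] -- every sentence used exactly once
```

The exclusive loss described in the abstract pushes the score matrix toward such permutation-like structure during training, so the repetition issue rarely arises at decoding time.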
Enhancing the acoustic-to-electrical conversion efficiency of nanofibrous membrane-based triboelectric nanogenerators by nanocomposite composition
Acoustic energy is difficult to capture and utilise in general. The current work proposes a novel nanofibrous membrane-based (NFM) triboelectric nanogenerator (TENG) that can harvest acoustic energy from the environment. The device is ultra-thin, lightweight, and compact. The electrospun NFM used in the TENG contains three nanocomponents: polyacrylonitrile (PAN), polyvinylidene fluoride (PVDF), and multi-walled carbon nanotubes (MWCNTs). The optimal concentration ratio of the three nanocomponents has been identified for the first time, resulting in higher electric output than a single-component NFM TENG. For an incident sound pressure level of 116 dB at 200 Hz, the optimised NFM TENG can output a maximum open-circuit voltage of over 120 V and a short-circuit current of 30 μA, corresponding to a maximum areal power density of 2.25 W/m². The specific power reached 259 μW/g. The ability to power digital devices is illustrated by lighting up 62 light-emitting diodes in series and powering other devices. The findings may inspire the design of acoustic NFM TENGs comprising multiple nanocomponents, and show that the NFM TENG can promote the utilisation of acoustic energy for many applications, such as microelectronic devices and the Internet of Things.
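As a unit check on the reported figures (only the 2.25 W/m² areal power density is quoted in the abstract; the 16 cm² area and 3.6 mW peak power below are hypothetical values chosen merely to be consistent with it):

```python
# Hypothetical unit-check sketch: areal power density and specific
# power as defined from a peak power, an active area, and a mass.
def areal_power_density(peak_power_w, area_m2):
    """Peak electrical power per unit membrane area, in W/m^2."""
    return peak_power_w / area_m2

def specific_power(peak_power_w, mass_g):
    """Peak electrical power per unit device mass, in uW/g."""
    return peak_power_w * 1e6 / mass_g

# e.g. a hypothetical 3.6 mW peak over a hypothetical 16 cm^2 membrane:
print(round(areal_power_density(3.6e-3, 16e-4), 2))  # 2.25 (W/m^2)
```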
Observing Exoplanets with High-Dispersion Coronagraphy. II. Demonstration of an Active Single-Mode Fiber Injection Unit
High-dispersion coronagraphy (HDC) optimally combines high-contrast imaging
techniques, such as adaptive optics/wavefront control and coronagraphy, with
high spectral resolution spectroscopy. HDC is a critical pathway towards fully
characterizing exoplanet atmospheres across a broad range of masses from giant
gaseous planets down to Earth-like planets. In addition to determining the
molecular composition of exoplanet atmospheres, HDC also enables Doppler
mapping of atmosphere inhomogeneities (temperature, clouds, wind), as well as
precise measurements of exoplanet rotational velocities. Here, we demonstrate
an innovative concept for injecting the directly-imaged planet light into a
single-mode fiber, linking a high-contrast adaptively-corrected coronagraph to
a high-resolution spectrograph (diffraction-limited or not). Our laboratory
demonstration includes three key milestones: close-to-theoretical injection
efficiency; accurate pointing and tracking; and on-fiber coherent modulation and
speckle nulling of spurious starlight signal coupling into the fiber. Using the
extreme modal selectivity of single-mode fibers, we also demonstrated speckle
suppression gains that outperform conventional image-based speckle nulling by
at least two orders of magnitude.
Comment: 10 pages, 7 figures, accepted by Ap
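The injection-efficiency milestone is commonly quantified by the normalised overlap integral between the focal-plane field and the fiber's fundamental mode. A textbook-style sketch (not the authors' setup; both fields are idealised Gaussians on a numerical grid):

```python
# Coupling efficiency as a normalised overlap integral: a 10% waist
# mismatch between the incoming beam and the fiber mode costs ~1% of
# the light. Fields here are idealised, not measured beams.
import numpy as np

x = np.linspace(-8.0, 8.0, 256)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2

mode = np.exp(-R2 / 2.0)           # fiber fundamental mode, unit waist
beam = np.exp(-R2 / (2 * 1.1**2))  # slightly mismatched incoming beam

def coupling_efficiency(field, mode):
    """|<field, mode>|^2 / (<field, field> <mode, mode>)."""
    num = abs(np.sum(field * np.conj(mode))) ** 2
    den = np.sum(np.abs(field) ** 2) * np.sum(np.abs(mode) ** 2)
    return num / den

eta = coupling_efficiency(beam, mode)
print(round(eta, 3))  # ~0.991 for this 10% waist mismatch
```

The same figure of merit explains why single-mode fibers filter speckles so strongly: starlight speckles overlap poorly with the fundamental mode and couple only weakly.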
UATVR: Uncertainty-Adaptive Text-Video Retrieval
With the explosive growth of web videos and emerging large-scale
vision-language pre-training models, e.g., CLIP, retrieving videos of interest
with text instructions has attracted increasing attention. A common practice is
to transfer text-video pairs to the same embedding space and craft cross-modal
interactions with certain entities in specific granularities for semantic
correspondence. Unfortunately, the intrinsic uncertainties of optimal entity
combinations in appropriate granularities for cross-modal queries are
understudied, which is especially critical for modalities with hierarchical
semantics, e.g., video, text, etc. In this paper, we propose an
Uncertainty-Adaptive Text-Video Retrieval approach, termed UATVR, which models
each look-up as a distribution matching procedure. Concretely, we add
additional learnable tokens in the encoders to adaptively aggregate
multi-grained semantics for flexible high-level reasoning. In the refined
embedding space, we represent text-video pairs as probabilistic distributions
where prototypes are sampled for matching evaluation. Comprehensive experiments
on four benchmarks demonstrate the superiority of our UATVR, which achieves new
state-of-the-art results on MSR-VTT (50.8%), VATEX (64.5%), MSVD (49.7%), and
DiDeMo (45.8%). The code is available at https://github.com/bofang98/UATVR.
Comment: To appear at ICCV202
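A minimal sketch of distribution-based matching in this spirit (the shapes, values, and sampling scheme are illustrative assumptions, not the released UATVR code): each text or video is modelled as a Gaussian in the embedding space, prototypes are sampled from it, and the best prototype pair scores the match.

```python
# Each modality is a (mean, std) Gaussian over embeddings; prototypes
# sampled from the two distributions are compared by cosine similarity
# and the maximum over prototype pairs is the match score.
import numpy as np

rng = np.random.default_rng(0)

def sample_prototypes(mean, std, k=8):
    return mean + std * rng.standard_normal((k, mean.size))

def match_score(text_dist, video_dist):
    t = sample_prototypes(*text_dist)
    v = sample_prototypes(*video_dist)
    t = t / np.linalg.norm(t, axis=1, keepdims=True)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    return float((t @ v.T).max())   # best cosine over prototype pairs

text        = (np.array([1.0, 0.0, 0.0]), 0.1)
close_video = (np.array([0.9, 0.1, 0.0]), 0.1)
far_video   = (np.array([0.0, 0.0, 1.0]), 0.1)
print(match_score(text, close_video) > match_score(text, far_video))  # True
```

Modelling each look-up as a distribution rather than a point lets the score reflect uncertainty about which granularity of entities should drive the match.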