Partial Synchronization on Complex Networks
Network topology plays an important role in governing collective dynamics. Partial synchronization (PaS) on regular networks with a few non-local links is explored. Different PaS patterns arising from symmetry breaking are observed for different arrangements of the non-local couplings. A criterion for the emergence of PaS is studied: the emergence of PaS is related to the loss of degeneracy in the Lyapunov exponent spectrum. Theoretical and numerical analyses indicate that non-local coupling may drastically change the dynamical features of the network, emphasizing the strong dependence of collective dynamics on the topology of complex networks.
Comment: 4 pages, 4 figures
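The setting the abstract describes, a regular ring whose collective dynamics change when a few non-local links are added, can be illustrated with a minimal sketch. The paper's actual oscillator model and parameters are not given here, so the Kuramoto-type coupling, the coupling strength, and the shortcut placement below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def simulate_ring(N=50, K=1.5, shortcuts=((0, 25),), steps=2000, dt=0.05, seed=0):
    """Euler-integrate identical Kuramoto-type oscillators on a ring whose
    nearest-neighbour coupling is augmented by a few non-local shortcut links."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, N)
    # adjacency matrix: nearest-neighbour ring plus the given non-local links
    A = np.zeros((N, N))
    for i in range(N):
        A[i, (i - 1) % N] = A[i, (i + 1) % N] = 1.0
    for i, j in shortcuts:
        A[i, j] = A[j, i] = 1.0
    for _ in range(steps):
        # element [i, j] is A_ij * sin(theta_j - theta_i); sum over j
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * K * coupling
    # Kuramoto order parameter r in [0, 1]: r = 1 means full synchronization,
    # intermediate values indicate partially synchronized patterns
    return np.abs(np.exp(1j * theta).mean())
```

Comparing the order parameter with and without the shortcut links gives a crude, numerical handle on how a few non-local couplings reshape the synchronization pattern.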
Using lambda networks to enhance performance of interactive large simulations
The ability to use a visualisation tool to steer large simulations enables innovative and novel usage scenarios, e.g. new algorithms for the computation of free energy profiles along a nanopore [1]. However, we find that the performance of interactive simulations is sensitive to the quality of service of the network, with variable latency and packet loss in particular having a detrimental effect. The use of dedicated networks (provisioned in this case as a circuit-switched point-to-point optical lightpath, or lambda) can lead to significant (50% or more) performance enhancement. When running on, say, 128 or 256 processors of a high-end supercomputer, this saving has significant value. We perform experiments to understand the impact of network characteristics on the performance of a large parallel classical molecular dynamics simulation when coupled interactively to a remote visualisation tool. This paper discusses the experiments performed and presents the results of the systematic studies. © 2006 IEEE.
Published version
Efficient Keyword Spotting by capturing long-range interactions with Temporal Lambda Networks
Models based on attention mechanisms have shown unprecedented speech
recognition performance. However, they are computationally expensive and
unnecessarily complex for keyword spotting, a task targeted to small-footprint
devices. This work explores the application of Lambda networks, an alternative
framework for capturing long-range interactions without attention, for the
keyword spotting task. We propose a novel ResNet-based model by replacing the residual blocks with temporal Lambda layers. Furthermore, the proposed architecture is built upon uni-dimensional temporal convolutions that further reduce its complexity. The presented model not only reaches state-of-the-art accuracy on the Google Speech Commands dataset, but is also 85% and 65% lighter than its Transformer-based (KWT) and convolutional (Res15) counterparts while being up to 100 times faster. To the best of our knowledge, this is the first attempt to explore the Lambda framework within the speech domain, and it therefore opens the door to further research on new interfaces based on this architecture.
Comment: speech recognition, keyword spotting, lambda networks
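The core idea of a lambda layer, summarizing the whole sequence into a small matrix that every query reuses instead of building a T-by-T attention map, can be sketched in a few lines. This is a simplified, content-only numpy illustration, not the authors' implementation: it omits the position lambdas, batching, and heads, and the weight names are placeholders:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_lambda_layer(x, W_q, W_k, W_v):
    """Content-only lambda layer over a 1-D (temporal) sequence.

    x: (T, d_in). The keys and values are contracted into a single
    (d_k, d_v) 'content lambda' shared by all queries, so the cost is
    linear in T rather than quadratic as in attention."""
    q = x @ W_q                    # (T, d_k) queries
    k = softmax(x @ W_k, axis=0)   # (T, d_k) keys, normalized over time steps
    v = x @ W_v                    # (T, d_v) values
    lam = k.T @ v                  # (d_k, d_v) content lambda
    return q @ lam                 # (T, d_v) output, linear in T
```

The linear-in-T cost is what makes this style of layer attractive for small-footprint keyword spotting compared with full attention.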
CASPR: Judiciously Using the Cloud for Wide-Area Packet Recovery
We revisit a classic networking problem -- how to recover from lost packets
in the best-effort Internet. We propose CASPR, a system that judiciously
leverages the cloud to recover from lost or delayed packets. CASPR supplements
and protects best-effort connections by sending a small number of coded packets
along the highly reliable but expensive cloud paths. When receivers detect
packet loss, they recover packets with the help of the nearby data center, not
the sender, thus providing quick and reliable packet recovery for
latency-sensitive applications. Using a prototype implementation and its
deployment on the public cloud and the PlanetLab testbed, we quantify the
benefits of CASPR in providing fast, cost-effective packet recovery. Using
controlled experiments, we also explore how these benefits translate into
improvements up and down the network stack.
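The principle behind sending a small number of coded packets, namely that one parity packet lets a receiver rebuild any single lost packet of a group without going back to the sender, can be sketched with byte-wise XOR. CASPR's actual coding scheme is not specified in the abstract, so this is only an illustration of the idea:

```python
def xor_parity(packets):
    """One coded packet for a group: byte-wise XOR of equal-length packets."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

def recover(received, parity):
    """Rebuild the single missing packet of a group: XOR-ing the surviving
    packets with the parity packet cancels them out, leaving the lost one."""
    return xor_parity(list(received) + [parity])
```

In the CASPR setting, the parity packet would travel over the reliable (but expensive) cloud path, while the group of data packets takes the best-effort Internet path; the nearby data center, not the sender, supplies the recovery.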
Conditional Positional Encodings for Vision Transformers
We propose a conditional positional encoding (CPE) scheme for vision
Transformers. Unlike previous fixed or learnable positional encodings, which
are pre-defined and independent of input tokens, CPE is dynamically generated
and conditioned on the local neighborhood of the input tokens. As a result, CPE
readily generalizes to input sequences longer than those the model has seen
during training. Moreover, CPE preserves the desired
translation-invariance in the image classification task, resulting in improved
classification accuracy. CPE can be effortlessly implemented with a simple
Position Encoding Generator (PEG), and it can be seamlessly incorporated into
the current Transformer framework. Built on PEG, we present Conditional
Position encoding Vision Transformer (CPVT). We demonstrate that CPVT has
visually similar attention maps compared to those with learned positional
encodings. Benefiting from the conditional positional encoding scheme, we
obtain state-of-the-art results on the ImageNet classification task among
vision Transformers to date. Our code will be made available at
https://github.com/Meituan-AutoML/CPVT
Comment: A general-purpose conditional position encoding for vision transformers
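A Position Encoding Generator of the kind described, a depthwise convolution over the 2-D token grid whose zero padding makes the output position-dependent near the image borders, can be sketched as follows. The 3x3 kernel size and the residual form are assumptions based on the abstract, not taken from the released code:

```python
import numpy as np

def peg(tokens, H, W, kernels):
    """Position Encoding Generator sketch: a depthwise 3x3 convolution over
    the token grid. Zero padding breaks translation symmetry at the borders,
    which is what injects (conditional) positional information.

    tokens: (H*W, C) token sequence; kernels: (C, 3, 3), one per channel."""
    C = tokens.shape[1]
    grid = tokens.reshape(H, W, C)
    padded = np.pad(grid, ((1, 1), (1, 1), (0, 0)))  # zero-pad spatial dims
    pe = np.zeros_like(grid)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 3, j:j + 3, :]          # (3, 3, C) window
            pe[i, j] = np.einsum('hwc,chw->c', patch, kernels)
    return tokens + pe.reshape(H * W, C)                 # residual connection
```

Because the encoding is computed from the tokens themselves, the same function applies unchanged to a longer sequence (a larger H x W grid), which is the generalization property the abstract highlights.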
On the Performance of One-Stage and Two-Stage Object Detectors in Autonomous Vehicles Using Camera Data
Object detection using remote sensing data is a key task of the perception systems of
self-driving vehicles. While many generic deep learning architectures have been proposed for this
problem, there is little guidance on their suitability when using them in a particular scenario such
as autonomous driving. In this work, we aim to assess the performance of existing 2D detection
systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the
on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3)
and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions
and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are
trained using transfer learning and compared in terms of both precision and efficiency, with special
attention to the real-time requirements of this context. For the experimental study, we use the Waymo
Open Dataset, which is the largest existing benchmark. Despite the rising popularity of one-stage
detectors, our findings show that two-stage detectors still provide the most robust performance.
Faster R-CNN models outperform one-stage detectors in accuracy, being also more reliable in the
detection of minority classes. Faster R-CNN with Res2Net-101 achieves the best speed/accuracy
trade-off but needs lower-resolution images to reach real-time speed. Furthermore, the anchor-free FCOS
detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.
Funding: Ministerio de Economía y Competitividad TIN2017-88209-C2-2-R; Junta de Andalucía US-1263341; Junta de Andalucía P18-RT-277