Multi-trajectories automatic planner for StereoElectroEncephaloGraphy (SEEG)
E. De Momi; C. Caborni; F. Cardinale; G. Casaceli; L. Castana; M. Cossu; R. Mai; F. Gozzo; S. Francione; L. Tassi; G. Lo Russo; L. Antiga; G. Ferrigno
Efficient Parallel Random Sampling: Vectorized, Cache-Efficient, and Online
We consider the problem of sampling n numbers from the range {1, ..., N}
without replacement on modern architectures. The main result
is a simple divide-and-conquer scheme that makes sequential algorithms more
cache-efficient and leads to a parallel algorithm running in expected time
O(n/p + log p) on p processors, i.e., it scales to massively parallel
machines even for moderate values of n. The amount of communication between
the processors is very small and independent of
the sample size. We also discuss modifications needed for load balancing,
online sampling, sampling with replacement, Bernoulli sampling, and
vectorization on SIMD units or GPUs.
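The divide-and-conquer idea can be sketched as follows: split the range in half, draw how many of the n samples fall into the left half from a hypergeometric distribution, and recurse on each half. Below is a minimal sequential sketch of that recursion (the base-case cutoff of 64 and the RNG setup are illustrative choices, not details from the paper, which targets a parallel, vectorized implementation):

```python
import numpy as np

rng = np.random.default_rng()

def sample_wor(n, lo, hi):
    """Sample n distinct integers uniformly from [lo, hi) by splitting
    the range in half, drawing the left half's share of samples from a
    hypergeometric distribution, and recursing (requires n <= hi - lo)."""
    N = hi - lo
    if n == 0:
        return []
    if N <= 64:  # small base case: sample directly
        return list(lo + rng.choice(N, size=n, replace=False))
    mid = lo + N // 2
    # how many of the n samples land in the left half [lo, mid)
    n_left = rng.hypergeometric(mid - lo, hi - mid, n)
    return sample_wor(n_left, lo, mid) + sample_wor(n - n_left, mid, hi)
```

Because the two recursive calls are independent, they can be assigned to different processors; only the hypergeometric split counts need to be communicated, which is why the communication volume is independent of the sample size.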
RTS3D: Real-time Stereo 3D Detection from 4D Feature-Consistency Embedding Space for Autonomous Driving
Although recent image-based 3D object detection methods using the
Pseudo-LiDAR representation have shown great capabilities, a notable gap in
efficiency and accuracy still exists compared with LiDAR-based methods. Moreover,
over-reliance on a stand-alone depth estimator, which requires a large number of
pixel-wise annotations during training and extra computation at
inference, limits real-world scalability.
In this paper, we propose an efficient and accurate 3D object detection
method from stereo images, named RTS3D. Unlike the 3D occupancy space
used in Pseudo-LiDAR-style methods, we design a novel 4D feature-consistency
embedding (FCE) space as the intermediate representation of the 3D scene,
without depth supervision. The FCE space encodes the object's structural and
semantic information by exploiting the multi-scale feature consistency warped
from the stereo pair. Furthermore, a semantic-guided RBF (Radial Basis Function)
and a structure-aware attention module are devised to reduce the influence of
noise in the FCE space without instance-mask supervision. Experiments on the KITTI
benchmark show that RTS3D is the first true real-time system (>24 FPS) for
stereo-image 3D detection, while achieving an improvement in average
precision compared with the previous state-of-the-art method. The code will be
available at https://github.com/Banconxuan/RTS3D
Comment: 9 pages, 6 figures
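The core intuition behind feature consistency can be illustrated with a toy sketch: a candidate 3D location is projected into both images of the stereo pair, and the agreement between the features sampled at the two projections scores how plausible that location is. All names, the nearest-neighbor sampling, and the negative-L2 scoring below are illustrative simplifications; the paper's actual FCE construction is multi-scale and learned end-to-end:

```python
import numpy as np

def consistency_score(feat_l, feat_r, u_l, u_r, v):
    """Score a 3D hypothesis by comparing left/right feature maps of
    shape (C, H, W) at its projections (u_l, v) and (u_r, v).
    Higher score = more consistent features = more plausible point."""
    f_l = feat_l[:, v, u_l]   # (C,) feature at the left projection
    f_r = feat_r[:, v, u_r]   # (C,) feature at the right projection
    return -float(np.linalg.norm(f_l - f_r))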
LOCUS: A Multi-Sensor Lidar-Centric Solution for High-Precision Odometry and 3D Mapping in Real-Time
A reliable odometry source is a prerequisite for complex autonomy
behaviour in next-generation robots operating in extreme environments. In this
work, we present a high-precision lidar odometry system that achieves robust,
real-time operation under challenging perceptual conditions. LOCUS (Lidar
Odometry for Consistent operation in Uncertain Settings) provides an accurate
multi-stage scan-matching unit equipped with a health-aware sensor-integration
module for seamless fusion of additional sensing modalities. We evaluate the
performance of the proposed system against state-of-the-art techniques in
perceptually challenging environments and demonstrate top-class localization
accuracy along with substantial improvements in robustness to sensor failures.
We then demonstrate real-time performance of LOCUS on various types of robotic
mobility platforms involved in the autonomous exploration of the Satsop power
plant in Elma, WA, where the proposed system was a key element of the CoSTAR
team's solution that won first place in the Urban Circuit of the DARPA
Subterranean Challenge.
Comment: Accepted for publication at IEEE Robotics and Automation Letters, 202
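The health-aware integration idea can be sketched as a priority-ordered fallback: prefer the highest-priority odometry source whose health checks currently pass. The sensor names, health criterion, and thresholds below are illustrative, not LOCUS internals:

```python
from dataclasses import dataclass

@dataclass
class OdomSource:
    name: str
    priority: int       # lower value = preferred source
    rate_hz: float      # observed output rate of the sensor
    min_rate_hz: float  # health threshold on the output rate

    def healthy(self) -> bool:
        # toy health check: the source must publish fast enough
        return self.rate_hz >= self.min_rate_hz

def select_source(sources):
    """Return the highest-priority healthy odometry source, or None
    if every source fails its health check."""
    for s in sorted(sources, key=lambda s: s.priority):
        if s.healthy():
            return s
    return None
```

A real system would monitor richer health signals (dropout duration, covariance, saturation) and blend sources rather than hard-switch, but the fallback structure is the part that makes the fusion robust to individual sensor failures.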
Deferred Maintenance of Disk-Based Random Samples
Random sampling is a well-known technique for approximate processing of large datasets. We introduce a set of algorithms for incremental maintenance of large random samples on secondary storage. We show that the sample-maintenance cost can be reduced by refreshing the sample in a deferred manner. We introduce a novel type of log file, which follows the intuition that only a “sample” of the operations on the base data has to be considered to maintain a random sample in a statistically correct way. Additionally, we develop a deferred refresh algorithm that updates the sample using fast sequential disk access only and does not require any main memory. We conducted an extensive set of experiments and found that our algorithms reduce maintenance cost by several orders of magnitude.
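The deferral idea can be sketched in memory with classic reservoir sampling: insertions to the base data go into a cheap append-only log, and the sample itself is only touched when the log is replayed in one sequential pass. This is a simplified in-memory sketch of deferred maintenance, not the paper's disk-based algorithm:

```python
import random

class DeferredSample:
    """Reservoir sample of size k over a growing base dataset.
    Inserts are logged (cheap, append-only); the sample is updated
    lazily when refresh() replays the log sequentially."""
    def __init__(self, k):
        self.k = k
        self.sample = []
        self.n = 0       # base-data size already reflected in the sample
        self.log = []    # deferred insert operations

    def insert(self, item):
        self.log.append(item)   # no sample I/O on the fast path

    def refresh(self):
        # one sequential pass over the log keeps the sample uniform
        for item in self.log:
            self.n += 1
            if len(self.sample) < self.k:
                self.sample.append(item)
            else:
                j = random.randrange(self.n)
                if j < self.k:          # item replaces a random slot
                    self.sample[j] = item
        self.log.clear()
```

The intuition from the abstract shows up in the replay loop: for most logged operations the test `j < self.k` fails, so only a "sample" of the operations ever touches the stored sample.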
Affirmative sampling: theory and applications
Affirmative Sampling is a practical and efficient novel algorithm for obtaining random samples of distinct elements from a data stream. Its most salient feature is that the size S of the sample will, in expectation, grow with the (unknown) number n of distinct elements in the data stream. As every distinct element has the same probability of being sampled, and the sample size is larger when the “diversity” (the number of distinct elements) is greater, the samples that Affirmative Sampling delivers are more representative than those produced by any scheme where the sample size is fixed a priori; hence its name. Our algorithm is straightforward to implement, and several implementations already exist.
This work has been supported by funds from the MOTION Project (Project PID2020-112581GB-C21) of the Spanish Ministry of Science & Innovation MCIN/AEI/10.13039/501100011033, and by Princeton University and its Department of Computer Science.
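One way to obtain a sample whose size grows with the number of distinct elements is hash-based "record" sampling: keep every distinct element whose hash ranks among the k largest seen at its arrival time, so that the sample grows roughly like k·ln(n/k). The sketch below captures that spirit but is a simplified reading, not a faithful reproduction of the published Affirmative Sampling algorithm (in particular, the `seen` set used here for distinctness is itself a simplification):

```python
import hashlib
import heapq

def affirmative_style_sample(stream, k):
    """Keep each distinct element whose hash is among the k largest
    hashes seen so far at its arrival; the sample grows with the
    number of distinct elements. Simplified illustrative sketch."""
    def h(x):
        d = hashlib.sha256(str(x).encode()).digest()
        return int.from_bytes(d[:8], "big")

    topk = []      # min-heap of the k largest hashes seen so far
    sample = []
    seen = set()   # simplification: real streaming code avoids this
    for x in stream:
        if x in seen:
            continue
        seen.add(x)
        hx = h(x)
        if len(topk) < k:
            heapq.heappush(topk, hx)
            sample.append(x)
        elif hx > topk[0]:
            heapq.heapreplace(topk, hx)
            sample.append(x)
    return sample
```

Because hash "records" among the k largest become rarer as more distinct elements arrive, the sample keeps growing, but only logarithmically in n.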
Efficient Update of Indexes for Dynamically Changing Web Documents
The original publication is available at www.springerlink.com.
Recent work on incremental crawling has enabled the indexed document collection of a
search engine to be more synchronized with the changing World Wide Web. However, this
synchronized collection is not immediately searchable, because the keyword index is rebuilt
from scratch less frequently than the collection can be refreshed. An inverted index is usually
used to index documents crawled from the web. Complete index rebuilds at high frequency are
expensive. Previous work on incremental inverted-index updates has been restricted to adding
and removing documents. Updating the inverted index for previously indexed documents that
have changed has not been addressed.
In this paper, we propose an efficient method to update the inverted index for previously
indexed documents whose contents have changed. Our method uses the idea of landmarks
together with the diff algorithm to significantly reduce the number of postings in the inverted
index that need to be updated. Our experiments verify that our landmark-diff method results
in significant savings in the number of update operations on the inverted index.
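The landmark idea can be illustrated with a toy index: instead of storing absolute term positions, each posting stores (landmark, offset) relative to a landmark inside the document. When an edit grows or shrinks one region of the document, only the affected landmark bases shift; postings after the edit point stay untouched. The block size and class layout below are illustrative:

```python
class LandmarkIndex:
    """Toy positional index using landmarks: postings are
    (landmark_id, offset) pairs, so a local edit shifts a few
    landmark base positions instead of rewriting every later posting."""
    def __init__(self, block=16):
        self.block = block
        self.base = {}      # landmark_id -> absolute start position
        self.postings = {}  # term -> list of (landmark_id, offset)

    def index(self, terms):
        self.base = {i: i * self.block
                     for i in range(len(terms) // self.block + 1)}
        self.postings = {}
        for pos, t in enumerate(terms):
            lm, off = divmod(pos, self.block)
            self.postings.setdefault(t, []).append((lm, off))

    def positions(self, term):
        return [self.base[lm] + off
                for lm, off in self.postings.get(term, [])]

    def shift_after(self, landmark_id, delta):
        """An edit inside `landmark_id` grew/shrank it by `delta`
        terms: shift later landmark bases; postings are untouched."""
        for lm in self.base:
            if lm > landmark_id:
                self.base[lm] += delta
```

In the landmark-diff method, a diff between the old and new document versions determines which landmarks were actually edited, so only postings inside those landmarks need individual updates; everything else is handled by the cheap base shifts shown here.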
Multi-dimensional data stream compression for embedded systems
The rise of embedded systems and wireless technologies led to the emergence of
the Internet of Things (IoT). Connected objects in IoT communicate with each
other by transferring data streams over the network. For instance, in Wireless
Sensor Networks (WSNs), sensor-equipped devices capture physical quantities,
such as temperature or acceleration, and send 1D or nD data streams
to a host system. Power consumption is a critical problem for connected objects
that have to work for a long time without being recharged, as it greatly affects
their lifetime and usability. Data summarization is key for energy-constrained
connected devices, as transmitting fewer data can reduce energy usage during
transmission. Data compression, in particular, can compress the data stream
while preserving information to a great extent. Many compression methods have
been proposed in previous research. However, most of them are either not
applicable to connected objects, due to resource limitation, or only handle
one-dimensional streams while data acquired in connected objects are often
multi-dimensional. Lightweight Temporal Compression (LTC) is among the lossy
stream compression methods that provide the highest compression rate for the
lowest CPU and memory consumption. In this thesis, we investigate the extension
of LTC to multi-dimensional streams. First, we provide a formulation of the
algorithm in an arbitrary vectorial space of dimension n. Then, we implement the
algorithm for the infinity and Euclidean norms, in spaces of dimension 2D+t and
3D+t. We evaluate our implementation on 3D acceleration streams of human
activities, on Neblina, a module integrating multiple sensors developed by our
partner Motsai. Results show that the 3D implementation of LTC can save up to
20% in energy consumption for slow-paced activities, with a memory usage of
about 100 B. Finally, we compare our method with polynomial regression
compression methods in different dimensions. Our results show that our extension
of LTC gives a higher compression ratio than the polynomial regression method,
while using less memory and CPU
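The 1D algorithm that the thesis generalizes can be sketched compactly: from the last transmitted point, LTC maintains a cone of admissible slopes such that a single line still approximates every buffered point within a tolerance eps; a point is transmitted only when the cone becomes empty. This is a minimal 1D sketch (the thesis's contribution replaces the scalar tolerance interval with norm balls in nD):

```python
def ltc_compress(points, eps):
    """1D sketch of Lightweight Temporal Compression (LTC): keep a
    cone of admissible slopes from the last transmitted point and
    emit a point only when the cone empties. points = [(t, v), ...]."""
    if len(points) < 2:
        return list(points)
    out = [points[0]]
    t0, v0 = points[0]
    lo, hi = float("-inf"), float("inf")  # admissible slope interval
    last = points[0]
    for t, v in points[1:]:
        dt = t - t0
        cand_lo = (v - eps - v0) / dt
        cand_hi = (v + eps - v0) / dt
        if max(lo, cand_lo) > min(hi, cand_hi):
            # no single line fits all buffered points: transmit `last`
            out.append(last)
            t0, v0 = last
            dt = t - t0
            lo, hi = (v - eps - v0) / dt, (v + eps - v0) / dt
        else:
            lo, hi = max(lo, cand_lo), min(hi, cand_hi)
        last = (t, v)
    out.append(last)
    return out
```

On a slowly varying signal, long stretches collapse to their endpoints, which is where the reported energy savings come from: fewer points transmitted means less radio time.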