995 research outputs found
Random sensory networks: a delay analysis
A fundamental function performed by a sensory network is the retrieval of the data gathered collectively by its sensor nodes. The metrics that measure the efficiency of this data collection process are time and energy. In this paper, we study, via simple discrete mathematical models, the statistics of the data collection time in sensory networks. Specifically, we analyze the average minimum delay in collecting data from randomly located/distributed sensors for networks of various topologies as the number of nodes becomes large. Furthermore, we analyze the impact of parameters such as packet size, transmission range, and channel erasure probability on the optimal time performance. Our analysis applies to directional antenna systems as well as omnidirectional ones. This paper focuses on directional antenna systems and briefly presents results on omnidirectional antenna systems. Finally, a simple comparative analysis shows the respective advantages of the two systems.
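The delay statistics discussed above can be explored numerically. Below is a minimal Monte Carlo sketch under a toy TDMA-style model (one packet per sensor, one transmission attempt per slot, independent erasures); this model and its parameters are illustrative assumptions, not the paper's actual topologies.

```python
import random

def collection_time(n_sensors, erasure_p, rng):
    """Slots needed for every sensor to deliver one packet to the sink,
    one transmission attempt per slot, each attempt erased independently
    with probability erasure_p (toy TDMA model, not the paper's)."""
    slots = 0
    for _ in range(n_sensors):
        # geometric number of attempts until the first successful slot
        while True:
            slots += 1
            if rng.random() > erasure_p:
                break
    return slots

rng = random.Random(0)
trials = [collection_time(100, 0.2, rng) for _ in range(2000)]
avg = sum(trials) / len(trials)
# analytically, expected slots = n / (1 - p) = 100 / 0.8 = 125
print(round(avg, 1))
```

In this toy model the average collection delay scales as n/(1 - p), which is the kind of large-n asymptotic behaviour the abstract refers to.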
CED: Color Event Camera Dataset
Event cameras are novel, bio-inspired visual sensors, whose pixels output
asynchronous and independent timestamped spikes at local intensity changes,
called 'events'. Event cameras offer advantages over conventional frame-based
cameras in terms of latency, high dynamic range (HDR) and temporal resolution.
Until recently, event cameras have been limited to outputting events in the
intensity channel, however, recent advances have resulted in the development of
color event cameras, such as the Color-DAVIS346. In this work, we present and
release the first Color Event Camera Dataset (CED), containing 50 minutes of
footage with both color frames and events. CED features a wide variety of
indoor and outdoor scenes, which we hope will help drive forward event-based
vision research. We also present an extension of the event camera simulator
ESIM that enables simulation of color events. Finally, we present an evaluation
of three state-of-the-art image reconstruction methods that can be used to
convert the Color-DAVIS346 into a continuous-time, HDR, color video camera to
visualise the event stream, and for use in downstream vision applications.
Comment: Conference on Computer Vision and Pattern Recognition Workshop
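For readers new to event data, a common way to visualise a stream is to accumulate signed polarities into an image over a time window. The sketch below assumes a hypothetical per-event tuple (timestamp, x, y, polarity, channel) purely for illustration; it is not the actual CED/Color-DAVIS346 storage format.

```python
import numpy as np

# Hypothetical event layout: (timestamp, x, y, polarity, channel);
# the real CED / Color-DAVIS346 format may differ.
H, W = 260, 346  # DAVIS346 sensor resolution

def accumulate(events, t0, t1):
    """Sum signed polarities per pixel over [t0, t1), producing a crude
    'event image' often used to visualise the stream."""
    img = np.zeros((H, W), dtype=np.int32)
    for t, x, y, pol, _ch in events:
        if t0 <= t < t1:
            img[y, x] += 1 if pol else -1
    return img

events = [(0.01, 10, 20, 1, 0), (0.02, 10, 20, 1, 1), (0.03, 5, 5, 0, 2)]
img = accumulate(events, 0.0, 0.05)
print(img[20, 10], img[5, 5])  # 2 -1
```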
Discrete quantum dot like emitters in monolayer MoSe2: Spatial mapping, Magneto-optics and Charge tuning
Transition metal dichalcogenide monolayers such as MoSe2, MoS2 and WSe2 are
direct bandgap semiconductors with original optoelectronic and spin-valley
properties. Here we report spectrally sharp, spatially localized emission in
monolayer MoSe2. We find this quantum dot like emission in samples exfoliated
onto gold substrates and also suspended flakes. Spatial mapping shows a
correlation between the location of emitters and the existence of wrinkles
(strained regions) in the flake. We tune the emission properties in magnetic
and electric fields applied perpendicular to the monolayer plane. We extract an
exciton g-factor of the discrete emitters close to -4, as for 2D excitons in
this material. In a charge tunable sample we record discrete jumps on the meV
scale as charges are added to the emitter when changing the applied voltage.
The control of the emission properties of these quantum dot like emitters paves
the way for further engineering of the light matter interaction in these
atomically thin materials.
Comment: 5 pages, 2 figures
The BRaliBase Dent – a Tale of Benchmark Design and Interpretation
Löwes B, Chauve C, Ponty Y, Giegerich R. The BRaliBase Dent – a Tale of Benchmark Design and Interpretation. Briefings in Bioinformatics. 2017;18(2):306-311.
BRaliBase is a widely used benchmark for assessing the accuracy of RNA secondary structure alignment
methods. In most case studies based on the BRaliBase benchmark, one can observe a puzzling drop
in accuracy in the 40%-60% sequence identity range, the so-called "BRaliBase Dent". In the present
note, we show this dent is due to a bias in the composition of the BRaliBase benchmark, namely the
inclusion of a disproportionate number of tRNAs, which exhibit a very conserved secondary structure.
Our analysis, aside from its interest regarding the specific case of the BRaliBase benchmark, also raises
important questions regarding the design and use of benchmarks in computational biology.
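The composition effect behind such a dent can be illustrated with invented numbers (this is a toy, not the paper's analysis): when an over-represented, easy family dominates the outer identity bins but not the middle one, the per-bin average dips even though no method performs worse on any individual family.

```python
# Toy illustration of composition bias in binned benchmark averages;
# the family names, bin edges and accuracies are all invented.
def bin_average(scores):
    return sum(acc for _family, acc in scores) / len(scores)

# (family, accuracy) entries per sequence-identity bin
bins = {
    "20-40%": [("tRNA", 0.90)] * 8 + [("other", 0.50)] * 2,
    "40-60%": [("tRNA", 0.90)] * 2 + [("other", 0.60)] * 8,
    "60-80%": [("tRNA", 0.95)] * 8 + [("other", 0.70)] * 2,
}
for name, scores in bins.items():
    print(name, round(bin_average(scores), 2))
```

The middle bin's average is lowest purely because fewer easy tRNA instances fall into it, mimicking a "dent" that reflects benchmark composition rather than method difficulty.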
An Asynchronous Kalman Filter for Hybrid Event Cameras
Event cameras are ideally suited to capture HDR visual information without
blur but perform poorly on static or slowly changing scenes. Conversely,
conventional image sensors measure absolute intensity of slowly changing scenes
effectively but do poorly on high dynamic range or quickly changing scenes. In
this paper, we present an event-based video reconstruction pipeline for High
Dynamic Range (HDR) scenarios. The proposed algorithm includes a frame
augmentation pre-processing step that deblurs and temporally interpolates frame
data using events. The augmented frame and event data are then fused using a
novel asynchronous Kalman filter under a unifying uncertainty model for both
sensors. Our experimental results are evaluated on both publicly available
datasets with challenging lighting conditions and fast motions and our new
dataset with HDR reference. The proposed algorithm outperforms state-of-the-art
methods in both absolute intensity error (48% reduction) and image similarity
indexes (average 11% improvement).
Comment: 12 pages, 6 figures, published in International Conference on Computer Vision (ICCV) 202
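The fusion idea above can be sketched for a single pixel as a scalar Kalman filter: events integrate brightness change and grow the state uncertainty, while each frame acts as a noisy absolute measurement. The noise parameters below are invented for illustration; the paper's uncertainty model is more elaborate.

```python
# One-pixel sketch of asynchronous event/frame Kalman fusion
# (invented noise parameters, not the paper's exact model).
class PixelKF:
    def __init__(self, x0=0.0, p0=1.0, q_event=0.01, r_frame=0.1):
        self.x, self.p = x0, p0          # state estimate and its variance
        self.q, self.r = q_event, r_frame

    def on_event(self, delta):
        # prediction step: integrate the brightness change, inflate variance
        self.x += delta
        self.p += self.q

    def on_frame(self, z):
        # update step: standard scalar Kalman gain
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)

kf = PixelKF()
for _ in range(5):
    kf.on_event(0.2)      # events say the pixel brightened by ~1.0
kf.on_frame(1.1)          # the frame roughly agrees
print(round(kf.x, 3))
```

The estimate lands between the event-integrated value (1.0) and the frame measurement (1.1), weighted by the two variances, which is the essence of fusing the sensors under a shared uncertainty model.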
An Asynchronous Linear Filter Architecture for Hybrid Event-Frame Cameras
Event cameras are ideally suited to capture High Dynamic Range (HDR) visual
information without blur but provide poor imaging capability for static or
slowly varying scenes. Conversely, conventional image sensors measure absolute
intensity of slowly changing scenes effectively but do poorly on HDR or quickly
changing scenes. In this paper, we present an asynchronous linear filter
architecture, fusing event and frame camera data, for HDR video reconstruction
and spatial convolution that exploits the advantages of both sensor modalities.
The key idea is the introduction of a state that directly encodes the
integrated or convolved image information and that is updated asynchronously as
each event or each frame arrives from the camera. The state can be read-off
as-often-as and whenever required to feed into subsequent vision modules for
real-time robotic systems. Our experimental results are evaluated on both
publicly available datasets with challenging lighting conditions and fast
motions, along with a new dataset with HDR reference that we provide. The
proposed AKF pipeline outperforms other state-of-the-art methods in both
absolute intensity error (69.4% reduction) and image similarity indexes
(average 35.5% improvement). We also demonstrate the integration of image
convolution with linear spatial kernels Gaussian, Sobel, and Laplacian as an
application of our architecture.
Comment: 17 pages, 10 figures, Accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) in August 202
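The asynchronous-state idea, updating on every event or frame and reading the state off at any query time, can be sketched per pixel as a simple continuous-time complementary filter. The leak rate and the filter form below are illustrative assumptions, not the paper's exact architecture.

```python
import math

# Per-pixel sketch of an asynchronous linear filter state: it tracks
# events at high rate and leaks toward the latest frame value, and can
# be read off at any time (illustrative constants, not the paper's).
class AsyncPixel:
    def __init__(self, alpha=5.0):
        self.alpha = alpha      # leak rate toward the frame value (1/s)
        self.x = 0.0            # current log-intensity estimate
        self.frame = 0.0        # latest absolute frame value
        self.t = 0.0            # time of last update

    def _advance(self, t):
        # closed-form exponential decay of (x - frame) between updates
        dt = t - self.t
        self.x = self.frame + (self.x - self.frame) * math.exp(-self.alpha * dt)
        self.t = t

    def on_event(self, t, delta):
        self._advance(t)
        self.x += delta

    def on_frame(self, t, value):
        self._advance(t)
        self.frame = value

    def read(self, t):
        # the state is readable as-often-as and whenever required
        self._advance(t)
        return self.x

px = AsyncPixel()
px.on_frame(0.0, 1.0)
px.on_event(0.1, 0.5)     # brightening event between frames
estimate = px.read(0.5)   # query at an arbitrary time
print(round(estimate, 3))
```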
Phonon-assisted Photoluminescence from Dark Excitons in Monolayers of Transition Metal Dichalcogenides
The photoluminescence (PL) spectrum of transition metal dichalcogenides
(TMDs) shows a multitude of emission peaks below the bright exciton line and
not all of them have been explained yet. Here, we study the emission traces of
phonon-assisted recombinations of momentum-dark excitons. To this end, we
develop a microscopic theory describing simultaneous exciton, phonon and photon
interaction and including consistent many-particle dephasing. We explain the
drastically different PL below the bright exciton in tungsten- and
molybdenum-based materials as result of different configurations of bright and
dark states. In good agreement with experiments, we show that WSe2 exhibits
clearly visible low-temperature PL signals stemming from the phonon-assisted
recombination of momentum-dark excitons.
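Schematically, a phonon-assisted sideband appears one phonon quantum below the dark-exciton energy; since in tungsten-based monolayers the momentum-dark states themselves lie below the bright exciton, the sideband emerges below the bright line. The relation below is a simplified energy-balance sketch, not the paper's full many-particle result:

```latex
% emitted photon energy for phonon-assisted recombination of a
% momentum-dark exciton, neglecting linewidth and thermal corrections
E_{\mathrm{PL}} \;\approx\; E_{X}^{\mathrm{dark}} \;-\; \hbar\Omega_{\mathrm{phonon}}
```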