Architecture to Detect, Track, and Classify Objects using LiDAR Measurements in Highway Scenarios
Self-driving cars require a holistic perception of their environment. To meet this requirement, a plethora of sensor technologies exists, e.g. RGB cameras, ultrasonic sensors, and radar. These technologies differ in range and resolution and behave differently under varying weather conditions. Another technology is Light Detection and Ranging (LiDAR), which enables precise distance measurements. In combination with RGB cameras, ultrasonic sensors, and radar, LiDAR closes the gap towards a holistic perception of the environment. Due to limited experience with LiDAR sensors, there is a lack of understanding of how to detect, track, and classify objects (e.g. cars, guardrails) using LiDAR data. In this paper, we propose an architecture to detect, track, and classify objects based on LiDAR measurements in highway scenarios. We evaluate our architecture using preliminary sensor data obtained from a setup comprising six Ibeo Lux sensors and an additional roof-mounted Velodyne HDL-64E.
Impact of the antenna orientation for distance estimation
Indoor localization is important for a wide range of use cases, including industrial, medical, and scientific applications. The localization accuracy is affected by the localization algorithm and by the quality of the measurements that serve as its input. Many indoor localization systems employ ultra-wideband distance measurements, as they offer high accuracy and are cost-effective. One of the methods for distance measurement is two-way ranging. This paper investigates the impact of the antenna orientation on distance measurements based on symmetrical double-sided two-way ranging. We show that up to 0.25 m of the measurement error is attributable to the orientation of the antennas. We provide explanations and suggest solutions to reduce this effect.
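The abstract does not spell out the symmetrical double-sided two-way ranging computation itself. As a minimal sketch of the underlying time-of-flight estimate (with illustrative function names and timestamp values, not data from the paper), the distance follows from two round-trip/reply time pairs measured on the two devices:

```python
# Sketch of symmetric double-sided two-way ranging (SDS-TWR).
# All timing values below are hypothetical, chosen only to illustrate the formula.

C = 299_792_458.0  # speed of light, m/s

def sds_twr_distance(t_round1, t_reply1, t_round2, t_reply2):
    """Estimate distance from two round-trip/reply time pairs (in seconds).

    The double-sided exchange largely cancels the clock-offset error
    that a single two-way ranging exchange would accumulate.
    """
    tof = (t_round1 * t_round2 - t_reply1 * t_reply2) / (
        t_round1 + t_round2 + t_reply1 + t_reply2
    )
    return tof * C

# Example: 10 m true distance, 100 us processing delay on each side.
tof = 10.0 / C
t_reply = 100e-6
d = sds_twr_distance(2 * tof + t_reply, t_reply, 2 * tof + t_reply, t_reply)
print(d)  # recovers ~10 m
```

With ideal timestamps the formula recovers the true time of flight exactly; in practice, effects such as the antenna orientation studied in the paper perturb the measured timestamps and bias the estimate.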
A new localization algorithm based on neural networks
Indoor localization plays a major role in a wide range of applications. To determine the location of a tag, a localization algorithm is required. In the past, machine learning algorithms were difficult to implement on consumer hardware, but with the advent of tensor processing units, even smartphones are capable of using artificial intelligence to solve complex problems. In this paper, we investigate a machine learning algorithm based on neural networks and compare its results to a linear least squares estimator. We design and evaluate different neural networks. Based on our observations, the neural networks deliver poor performance compared to the linear least squares estimator.
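The linear least squares baseline is a standard lateration technique, but the abstract does not give its exact form. The following is a sketch of one common formulation (anchor layout and function name are illustrative assumptions): squaring the range equations and subtracting a reference anchor's equation linearizes the problem.

```python
import numpy as np

def lls_position(anchors, distances):
    """Linear least squares lateration (baseline estimator, sketch).

    Linearizes ||x - a_i||^2 = d_i^2 by subtracting the last anchor's
    equation, then solves the resulting overdetermined linear system.
    """
    a = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    ref, d_ref = a[-1], d[-1]
    # 2 (a_ref - a_i) . x = d_i^2 - d_ref^2 - ||a_i||^2 + ||a_ref||^2
    A = 2.0 * (ref - a[:-1])
    b = d[:-1] ** 2 - d_ref ** 2 - np.sum(a[:-1] ** 2, axis=1) + np.sum(ref ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Example with exact (noise-free) distances from a tag at (3, 4).
anchors = [[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]]
true = np.array([3.0, 4.0])
dists = [np.linalg.norm(true - np.array(a)) for a in anchors]
est = lls_position(anchors, dists)
```

With noise-free distances the estimator recovers the tag position; with noisy ranges it returns the least squares compromise, which is the baseline the neural networks are compared against.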
ImageCLEF 2019: Multimedia Retrieval in Lifelogging, Medical, Nature, and Security Applications
This paper presents an overview of the foreseen ImageCLEF 2019 lab that will be organized as part of the Conference and Labs of the Evaluation Forum - CLEF Labs 2019. ImageCLEF is an ongoing evaluation initiative (started in 2003) that promotes the evaluation of technologies for the annotation, indexing, and retrieval of visual data, with the aim of providing information access to large collections of images in various usage scenarios and domains. In 2019, the 17th edition of ImageCLEF will run four main tasks: (i) a Lifelog task (videos, images, and other sources) about daily activity understanding, retrieval, and summarization, (ii) a Medical task that groups three previous tasks (caption analysis, tuberculosis prediction, and medical visual question answering) with newer data, (iii) a new Coral task about segmenting and labeling collections of coral images for 3D modeling, and (iv) a new Security task addressing the problems of automatically identifying forged content and retrieving hidden information. The strong participation in 2018, with over 100 research groups registering and 31 submitting results, shows an important interest in this benchmarking campaign, and we expect the new tasks to attract at least as many researchers for 2019.
ImageCLEF 2020: Multimedia Retrieval in Lifelogging, Medical, Nature, and Security Applications
This paper presents an overview of the 2020 ImageCLEF lab that will be organized as part of the Conference and Labs of the Evaluation Forum - CLEF Labs 2020 in Thessaloniki, Greece. ImageCLEF is an ongoing evaluation initiative (run since 2003) that promotes the evaluation of technologies for the annotation, indexing, and retrieval of visual data, with the aim of providing information access to large collections of images in various usage scenarios and domains. In 2020, the 18th edition of ImageCLEF will organize four main tasks: (i) a Lifelog task (videos, images, and other sources) about daily activity understanding, retrieval, and summarization, (ii) a Medical task that groups three previous tasks (caption analysis, tuberculosis prediction, and medical visual question answering) with new data and adapted tasks, (iii) a Coral task about segmenting and labeling collections of coral images for 3D modeling, and (iv) a new Web user interface task addressing the problems of detecting and recognizing hand-drawn website UIs (user interfaces) for automatic code generation. The strong participation in 2019, with over 235 research groups registering and 63 submitting over 359 runs, shows an important interest in this benchmarking campaign. We expect the new tasks to attract at least as many researchers for 2020.
ImageCLEF 2019: Multimedia Retrieval in Medicine, Lifelogging, Security and Nature
This paper presents an overview of the ImageCLEF 2019 lab, organized as part of the Conference and Labs of the Evaluation Forum - CLEF Labs 2019. ImageCLEF is an ongoing evaluation initiative (started in 2003) that promotes the evaluation of technologies for the annotation, indexing, and retrieval of visual data, with the aim of providing information access to large collections of images in various usage scenarios and domains. In 2019, the 17th edition of ImageCLEF runs four main tasks: (i) a medical task that groups three previous tasks (caption analysis, tuberculosis prediction, and medical visual question answering) with new data, (ii) a lifelog task (videos, images, and other sources) about daily activity understanding, retrieval, and summarization, (iii) a new security task addressing the problems of automatically identifying forged content and retrieving hidden information, and (iv) a new coral task about segmenting and labeling collections of coral images for 3D modeling. The strong participation, with 235 research groups registering and 63 submitting over 359 runs, shows an important interest in this benchmark campaign.
Iterative approach for anchor configuration of positioning systems
Using anchor positions and measurements of the distances between an object and the anchors, positioning algorithms calculate the position of the object, e.g. via lateration. Positioning systems require calibration and configuration prior to operation. In the past, approaches have employed reference nodes with GPS or other reference location systems to determine anchor positions. In this article, we propose an approach to determine anchor positions without prior knowledge. We evaluate our approach with simulations and with real data based on the Decawave DW1000 radio, and we show that the error is proportional to the mean error of the distance estimation.
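The iterative approach itself is not detailed in the abstract. As a hedged illustration of the general idea of recovering anchor geometry from distance measurements alone (this is a standard alternative technique, classical multidimensional scaling, not necessarily the paper's method), relative anchor coordinates can be reconstructed, up to rotation, translation, and reflection, from a complete inter-anchor distance matrix:

```python
import numpy as np

def classical_mds(dist_matrix, dim=2):
    """Recover relative coordinates from pairwise distances (classical MDS)."""
    D2 = np.asarray(dist_matrix, dtype=float) ** 2
    n = D2.shape[0]
    J = np.eye(n) - np.full((n, n), 1.0 / n)  # centering matrix
    B = -0.5 * J @ D2 @ J                     # double-centered Gram matrix
    w, v = np.linalg.eigh(B)                  # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]           # keep the `dim` largest
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Example: four anchors at hypothetical positions; reconstruct from distances only.
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [5.0, 5.0]])
D = np.linalg.norm(anchors[:, None] - anchors[None, :], axis=-1)
coords = classical_mds(D)
# The pairwise distances of `coords` match `D`, though the frame differs.
```

Because the reconstruction is only defined up to a rigid transform, any such scheme still needs a convention (e.g. fixing one anchor at the origin) to pin down a usable coordinate frame, and noisy ranges degrade the result, consistent with the paper's finding that the error scales with the mean distance-estimation error.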
Wireless medical sensors – context, robustness and safety
Wireless medical sensors are an emerging technology. Wireless sensors form networks and are placed in unknown environments. For indoor scenarios, context detection for medical sensors, e.g. the removal of a sensor from a specific room, is important. Current algorithms for context detection of wireless sensors are based on RF signals, but RF signal propagation and room location show only a weak correlation. Recent approaches with RSSI measurements are based on prior fingerprinting and are therefore costly. In our approach, we equip wireless sensor nodes with a barometric sensor to measure the pressure disturbances that occur when room doors are opened or closed. By signal processing of these disturbances, our proposed algorithm detects rooms and estimates distances without prior knowledge in an unknown environment. Based on these measurements, we automatically build a topology graph representing the room context and distances for indoor environments in a building model. We evaluate our algorithm within a wireless sensor network and show the performance of our solution.
(Sub-)Picosecond Surface Correlations of Femtosecond Laser Excited Al-Coated Multilayers Observed by Grazing-Incidence X-ray Scattering
Femtosecond high-intensity laser pulses at intensities surpassing 10¹⁴ W/cm² can generate a diverse range of functional surface nanostructures. Achieving precise control over the production of these functional structures necessitates a thorough understanding of the surface morphology dynamics with nanometer-scale spatial resolution and picosecond-scale temporal resolution. In this study, we show that single XFEL pulses can elucidate structural changes on surfaces induced by laser-generated plasmas using grazing-incidence small-angle X-ray scattering (GISAXS). Using aluminium-coated multilayer samples, we distinguish between sub-picosecond (ps) surface morphology dynamics and subsequent multi-ps subsurface density dynamics with nanometer-depth sensitivity. The observed subsurface density dynamics serve to validate advanced simulation models representing matter under extreme conditions. Our findings promise to open new avenues for laser material nanoprocessing and high-energy-density science.
Overview of the ImageCLEF 2020: Multimedia Retrieval in Medical, Lifelogging, Nature, and Internet Applications
This paper presents an overview of the ImageCLEF 2020 lab that was organized as part of the Conference and Labs of the Evaluation Forum - CLEF Labs 2020. ImageCLEF is an ongoing evaluation initiative (first run in 2003) that promotes the evaluation of technologies for the annotation, indexing, and retrieval of visual data, with the aim of providing information access to large collections of images in various usage scenarios and domains. In 2020, the 18th edition of ImageCLEF ran four main tasks: (i) a medical task that groups three previous tasks, i.e., caption analysis, tuberculosis prediction, and medical visual question answering and question generation, (ii) a lifelog task (videos, images, and other sources) about daily activity understanding, retrieval, and summarization, (iii) a coral task about segmenting and labeling collections of coral reef images, and (iv) a new Internet task addressing the problems of identifying hand-drawn user interface components. Despite the current pandemic situation, the benchmarking campaign received a strong participation, with over 40 groups submitting more than 295 runs.