Real-time object detection using monocular vision for low-cost automotive sensing systems
This work addresses the problem of real-time object detection in automotive environments
using monocular vision. The focus is on real-time feature detection,
tracking, depth estimation and, finally, object detection by
fusing visual saliency and depth information.
Firstly, a novel feature detection approach is proposed for extracting stable and
dense features even in images with very low signal-to-noise ratio. This methodology
is based on image gradients, which are redefined to take account of noise as
part of their mathematical model. Each gradient is based on a vector connecting a
negative to a positive intensity centroid, where both centroids are symmetric about
the centre of the area for which the gradient is calculated. Multiple gradient vectors
define a feature with its strength being proportional to the underlying gradient
vector magnitude. The evaluation of the Dense Gradient Features (DeGraF) shows
superior performance over other contemporary detectors in terms of keypoint density,
tracking accuracy, illumination invariance, rotation invariance, noise resistance
and detection time.
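The centroid-based gradient described above can be sketched in a few lines. This is a minimal illustration only, assuming the positive centroid is the intensity-weighted mean position within the window and the negative centroid its complement-weighted counterpart; the thesis's exact DeGraF formulation may differ.

```python
import numpy as np

def degraf_gradient(window):
    """Gradient for one window as the vector from a negative to a positive
    intensity centroid (hypothetical reading of the DeGraF idea)."""
    h, w = window.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    pos_w = window.astype(float)
    neg_w = window.max() - pos_w  # complement weighting for the negative centroid

    def centroid(weights):
        total = weights.sum()
        if total == 0:
            return np.array([(h - 1) / 2, (w - 1) / 2])
        return np.array([(ys * weights).sum() / total,
                         (xs * weights).sum() / total])

    vec = centroid(pos_w) - centroid(neg_w)  # negative -> positive centroid
    strength = np.linalg.norm(vec)           # feature strength ~ magnitude
    return vec, strength
```

On a horizontal intensity ramp the two centroids separate along the x axis, so the gradient vector points in the direction of increasing intensity, with magnitude proportional to the slope.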
The DeGraF features form the basis for two new approaches that perform dense
3D reconstruction from a single vehicle-mounted camera. The first approach tracks
DeGraF features in real-time while performing image stabilisation with minimal
computational cost. This means that despite camera vibration the algorithm can
accurately predict the real-world coordinates of each image pixel in real-time by comparing
each motion-vector to the ego-motion vector of the vehicle. The performance
of this approach has been compared to different 3D reconstruction methods in order
to determine their accuracy, depth-map density, noise-resistance and computational
complexity. The second approach proposes the use of local frequency analysis of
gradient features for estimating relative depth. This novel method is based on the
fact that DeGraF gradients can accurately measure local image variance with subpixel
accuracy. It is shown that the local frequency by which the centroid oscillates
around the gradient window centre is proportional to the depth of each gradient
centroid in the real world. The lower computational complexity of this methodology
comes at the expense of depth map accuracy as the camera velocity increases, but
it is at least five times faster than the other evaluated approaches.
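The frequency-to-depth idea can be sketched as follows: estimate the dominant frequency with which a centroid's offset from the window centre oscillates over time, then scale it by a calibration constant. The constant `k` and the spectrum-based frequency estimator are assumptions for illustration; the thesis's exact method may differ.

```python
import numpy as np

def dominant_frequency(centroid_offsets, fps):
    """Dominant oscillation frequency (Hz) of a centroid's offset from the
    window centre, estimated from the magnitude spectrum."""
    offsets = np.asarray(centroid_offsets, dtype=float)
    offsets = offsets - offsets.mean()           # drop the DC component
    spectrum = np.abs(np.fft.rfft(offsets))
    freqs = np.fft.rfftfreq(len(offsets), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]    # skip the zero-frequency bin

def relative_depth(centroid_offsets, fps, k=1.0):
    # Depth assumed proportional to oscillation frequency; k is a
    # hypothetical calibration constant, not taken from the thesis.
    return k * dominant_frequency(centroid_offsets, fps)
```

A pure FFT sweep like this is cheap (one transform per gradient track), which is consistent with the reported speed advantage of the frequency-based approach.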
This work also proposes a novel technique for deriving visual saliency maps by
using Division of Gaussians (DIVoG). In this context, saliency maps express how
different each image pixel is from its surrounding pixels across multiple pyramid
levels. This approach is shown to be both fast and accurate when evaluated against
other state-of-the-art approaches. Subsequently, the saliency information is combined
with depth information to identify salient regions close to the host vehicle.
The fused map allows faster detection of high-risk areas where obstacles are likely
to exist. As a result, existing object detection algorithms, such as the Histogram of
Oriented Gradients (HOG) can execute at least five times faster.
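A division-of-Gaussians saliency measure can be sketched as the pixel-wise ratio of the image to progressively coarser Gaussian blurs. This is a hypothetical reconstruction of the idea, not the thesis's exact DIVoG formulation; the scale set and the deviation measure are illustrative choices.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with an explicit kernel (no SciPy needed)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)

def divog_saliency(img, sigmas=(1, 2, 4)):
    """Saliency as deviation of the centre/surround ratio from 1, averaged
    over several Gaussian scales (a pyramid stand-in)."""
    img = img.astype(float) + 1e-6               # avoid division by zero
    sal = np.zeros_like(img)
    for s in sigmas:
        blurred = gaussian_blur(img, s) + 1e-6
        sal += np.abs(img / blurred - 1.0)       # how much a pixel differs
    return sal / len(sigmas)                     # from its surround
```

Division keeps the computation to one blur and one element-wise ratio per scale, which is what makes this style of saliency map fast enough to gate a heavier detector such as HOG.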
In conclusion, through a step-wise approach, computationally expensive algorithms
have been optimised or replaced by novel methodologies to produce a fast object
detection system that is aligned with the requirements of the automotive domain.
Deep Learning from Label Proportions for Emphysema Quantification
We propose an end-to-end deep learning method that learns to estimate
emphysema extent from proportions of the diseased tissue. These proportions
were visually estimated by experts using a standard grading system, in which
grades correspond to intervals (label example: 1-5% of diseased tissue). The
proposed architecture encodes the knowledge that the labels represent a
volumetric proportion. A custom loss is designed to learn with intervals. Thus,
during training, our network learns to segment the diseased tissue such that
its proportions fit the ground truth intervals. Our architecture and loss
combined improve the performance substantially (8% ICC) compared to a more
conventional regression network. We outperform traditional lung densitometry
and two recently published methods for emphysema quantification by a large
margin (at least 7% AUC and 15% ICC), and achieve near-human-level performance.
Moreover, our method generates emphysema segmentations that predict the spatial
distribution of emphysema at human level. Comment: Accepted to MICCAI 201
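A loss for interval labels of the kind described can be sketched as a penalty that is zero whenever the predicted diseased-tissue proportion falls inside the expert's grade interval and grows quadratically with the distance to the nearest bound outside it. This is a minimal sketch of the idea; the paper's actual loss may differ.

```python
import numpy as np

def interval_loss(pred_proportion, low, high):
    """Zero inside the ground-truth interval [low, high]; squared distance
    to the nearest interval bound outside it (hypothetical formulation)."""
    p = np.asarray(pred_proportion, dtype=float)
    below = np.clip(low - p, 0.0, None)   # how far below the interval
    above = np.clip(p - high, 0.0, None)  # how far above the interval
    return (below**2 + above**2).mean()
```

For a grade meaning "1-5% diseased tissue", a predicted proportion of 3% incurs no loss, while 10% is penalised by its distance to the 5% upper bound, so the network is free to place the segmented tissue anywhere that keeps the volume fraction inside the labelled interval.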
Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors
Adversarial attacks are considered a potentially serious security threat for
machine learning systems. Medical image analysis (MedIA) systems have recently
been argued to be vulnerable to adversarial attacks due to strong financial
incentives and the associated technological infrastructure.
In this paper, we study previously unexplored factors affecting adversarial
attack vulnerability of deep learning MedIA systems in three medical domains:
ophthalmology, radiology, and pathology. We focus on adversarial black-box
settings, in which the attacker does not have full access to the target model
and usually uses another model, commonly referred to as surrogate model, to
craft adversarial examples. We consider this to be the most realistic scenario
for MedIA systems.
Firstly, we study the effect of weight initialization (ImageNet vs. random)
on the transferability of adversarial attacks from the surrogate model to the
target model. Secondly, we study the influence of differences in development
data between target and surrogate models. We further study the interaction of
weight initialization and data differences with differences in model
architecture. All experiments were done with a perturbation degree tuned to
ensure maximal transferability at minimal visual perceptibility of the attacks.
Our experiments show that pre-training may dramatically increase the
transferability of adversarial examples, even when the target and surrogate's
architectures are different: the larger the performance gain using
pre-training, the larger the transferability. Differences in the development
data between target and surrogate models considerably decrease the performance
of the attack; this decrease is further amplified by difference in the model
architecture. We believe these factors should be considered when developing
security-critical MedIA systems planned to be deployed in clinical practice. Comment: First three authors contributed equally.
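The black-box transfer setting can be illustrated with a toy model: craft a perturbation with gradient access to a surrogate, then apply it to a different target. The logistic-regression "models" and FGSM step below are deliberately simplified stand-ins for the paper's deep networks, showing the transfer mechanism only in spirit.

```python
import numpy as np

def fgsm_perturbation(x, w, b, y, eps):
    """FGSM on a logistic-regression surrogate: step the input by eps times
    the sign of the cross-entropy gradient w.r.t. x."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))      # surrogate's predicted probability
    grad_x = (p - y) * w              # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

# Surrogate and target have similar weights (as if trained on similar
# data), so an attack crafted on one may transfer to the other.
rng = np.random.default_rng(0)
w_surrogate = np.array([1.0, -2.0, 0.5])
w_target = w_surrogate + 0.1 * rng.standard_normal(3)  # hypothetical target
x = np.array([0.2, -0.1, 0.4])
y = 1.0                               # true label of the input
x_adv = fgsm_perturbation(x, w_surrogate, 0.0, y, eps=0.1)
```

Because the target's decision boundary is close to the surrogate's, the perturbation crafted without any access to the target still lowers the target's score for the true class, which is the transferability phenomenon the experiments measure.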
Big Image Data and Artificial Intelligence: Abridged version of inaugural speech delivered on 22 March 2018
The field of data science and artificial intelligence (AI) is growing at an
unprecedented rate. Manual tasks that for thousands of years could
only be performed by humans are increasingly being taken over by
intelligent machines. But, more importantly, tasks that could never be
performed manually by humans, such as analysing big data, can now
be automated while generating valuable knowledge for humankind.
Monitoring the Effect of Metal Ions on the Mobility of Artemia salina Nauplii
This study aims to measure the effect of toxic aqueous solutions of metals on the mobility of Artemia salina nauplii by using digital image processing. The instrument consists of a camera with a macro lens, a dark chamber, a light source and a laptop computer. Four nauplii were inserted into a macro cuvette, which contained copper, cadmium, iron and zinc ions at various concentrations. The nauplii were then filmed inside the dark chamber for two minutes and the video sequence was processed by a motion tracking algorithm that estimated their mobility. The results obtained by this system were compared to the mortality assay of the Artemia salina nauplii. Despite the small number of tested organisms, this system demonstrates great sensitivity in quantifying the mobility of the nauplii, which leads to significantly lower EC50 values than those of the mortality assay. Furthermore, concentrations of parts per trillion of toxic compounds could be detected for some of the metals. The main novelty of this instrument lies in the sub-pixel accuracy of the tracking algorithm, which enables robust measurement of the deterioration of the mobility of Artemia salina even at very low concentrations of toxic metals.
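A sub-pixel mobility measure of this kind can be sketched as the mean frame-to-frame displacement of intensity-weighted centroids. This is a minimal illustration, not the study's actual tracker, which handles four animals simultaneously.

```python
import numpy as np

def subpixel_centroid(frame):
    """Intensity-weighted centroid: position estimates fall between pixel
    centres, giving sub-pixel accuracy."""
    frame = frame.astype(float)
    ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    total = frame.sum()
    return np.array([(ys * frame).sum(), (xs * frame).sum()]) / total

def mobility(frames):
    """Mean frame-to-frame centroid displacement in pixels per frame,
    as a simple mobility score for one tracked organism."""
    cents = [subpixel_centroid(f) for f in frames]
    steps = [np.linalg.norm(b - a) for a, b in zip(cents, cents[1:])]
    return float(np.mean(steps))
```

A declining mobility score over increasing metal concentration is the kind of dose-response signal the instrument converts into EC50 estimates.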
Smart Insect Cameras
Recent studies have shown a worrying decline in the quantity and diversity of insects at a number of locations in Europe (Hallmann et al. 2017) and elsewhere (Lister and Garcia 2018). Although the downward trend that these studies show is clear, they are limited to certain insect groups and geographical locations. Most available studies (see overview in Sánchez-Bayo and Wyckhuys 2019) were performed in nature reserves, leaving rural and urban areas largely understudied. Most studies are based on the long-term collaborative efforts of entomologists and volunteers performing labor-intensive repeat measurements, inherently limiting the number of locations that can be monitored.
We propose a monitoring network for insects in the Netherlands, consisting of a large number of smart insect cameras spread across nature, rural, and urban areas. The aim of the network is to provide continuous, low-labor monitoring of different insect groups. In addition, we aim to develop the cameras at a relatively low price point, so that they can be installed at a large number of locations and encourage participation by citizen-science enthusiasts. The cameras are made smart with image processing, consisting of image enhancement, insect detection and species identification, performed using deep-learning-based algorithms. The cameras take pictures of a screen, measuring ca. 30×40 cm, every 10 seconds, capturing insects that have landed on the screen (Fig. 1). Several screen setups were evaluated. Vertical screens were used to attract flying insects, with different screen colors and lighting at night to attract night-flying insects such as moths. In addition, two horizontal screen orientations were used: (1) to emulate pan traps that attract several pollinator species (bees and hoverflies) and (2) to capture ground-based insects and arthropods such as beetles and spiders.
Time sequences of images were analyzed semi-automatically, in the following way. First, single insects are outlined and cropped using boxes at every captured image. Then the cropped single insects in every image were preliminarily identified, using a previously developed deep-learning-based automatic species identification software, Nature Identification API (https://identify.biodiversityanalysis.nl). In the next step, single insects were linked between consecutive images using a tracking algorithm that uses screen position and the preliminary identifications. This step yields for every individual insect a linked series of outlines and preliminary identifications. The preliminary identifications for individual insects can differ between multiple captured images and were therefore combined into one identification using a fusing algorithm. The result of the algorithm is a series of tracks of individual insects with species identifications, which can be subsequently translated into an estimate of the counts of insects per species or species complexes.
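The final fusing step, which combines possibly conflicting per-frame identifications of one tracked insect into a single label, can be sketched as a majority vote. The vote and its tie-breaking rule are illustrative assumptions; the actual fusing algorithm is not specified in the text.

```python
from collections import Counter

def fuse_identifications(track_ids):
    """Fuse per-frame species identifications for one tracked insect into a
    single label by majority vote, breaking ties by first appearance."""
    counts = Counter(track_ids)
    best = max(counts.values())
    for species in track_ids:          # first-seen order breaks ties
        if counts[species] == best:
            return species
```

Applied to every track, this yields one species label per individual, which can then be aggregated into the per-species counts mentioned above.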
Here we show the first set of results acquired during the spring and summer of 2019. We will discuss practical experiences with setting up cameras in the field, including the effectiveness of the different set-ups. We will also show the effectiveness of using automatic species identification in the type of images that were acquired (see attached figure) and discuss to what extent individual species can be identified reliably. Finally, we will discuss the ecological information that can be extracted from the smart insect cameras
CASS•E: Cranfield astrobiological stratospheric sampling experiment
CASS•E is a life detection experiment that aims to be capable of collecting
microorganisms in Earth's stratosphere. The experiment will be launched on
a stratospheric balloon in collaboration with Eurolaunch through the BEXUS
(Balloon-borne Experiments for University Students) program from Esrange, Sweden, in
October 2010. It essentially consists of a pump which draws air from the
stratosphere through a collection filter mechanism. Due to the low number
density of microbes in the stratosphere compared to the known levels of
contamination at ground level, the experiment incorporated Planetary Protection
and Contamination Control (PP&CC) protocols in its design and construction in
order to confirm that any microbes detected are truly stratospheric in origin.
Space-qualified cleaning and sterilisation techniques were employed throughout
Assembly, Integration and Testing (AIT), as well as biobarriers which were
designed to open only in the stratosphere and so prevent recontamination of the
instrument after sterilisation. The material presented here covers the design
and AIT of CASS•E. Copyright ©2010 by the International Astronautical
Federation. All rights reserved.
Final report of the Sensequake project
When an earthquake has occurred, it is important that emergency services and building managers quickly obtain a clear picture of the damage to buildings and the risks in the area. Sensor technology, combined with data on buildings and occupant numbers, can be of great value for rapidly establishing this risk picture. Most sensors, however, deliver technical data that is difficult for crisis managers to interpret. Engineers and observers in the field must first translate these data, costing valuable time. The goal of the Sensequake project was to develop a Decision Support System (DSS) that, after an earthquake, quickly translates sensor data into clear information about risks. During the project, the DSS was developed using an iterative approach. Every two months, the results of the various partners were demonstrated. The Veiligheidsregio Groningen (VRG) acted as the main supplier of user requirements. In addition, a series of interviews was held with other potential users, such as Groningen Seaports and the province of Groningen. The consortium developed an advanced version of the DSS and demonstrated it at the VRG. The various partners contributed partial results to this, as described in 4.3. The VRG confirms that, in the event of a severe earthquake, using the DSS will save considerable time: instead of inspecting an area of 80 km², the survey teams can be sent directly to specific geocells flagged as high-risk. To take the DSS into production, a company must be founded that can continue its development and guarantee the required service levels. Lector Ihsan Bal is currently investigating the possibilities of founding a startup in the field of structural monitoring. He has received the green light for this from the other consortium partners.
The final phase of the project was carried out during the COVID-19 measures. Fortunately, the impact remained limited. The final demo at the Veiligheidsregio Groningen was held online. The partner Target withdrew at a late stage because the project no longer fitted its changed strategic course.