217 research outputs found
Internet tomography: network topology discovery and network performance evaluation
Master's dissertation in Communication Networks and Services

Due to security threats and the complexity of network services such as video conferencing, Internet telephony or online gaming, which require high QoS guarantees, the need for monitoring and evaluating network performance, in order to promptly detect and counter security threats and malfunctions, is crucial to the correct operation of networks and network-based services. As the Internet evolves in size and diversity, these tasks become difficult and demanding. Moreover, administrative limitations can restrict the position and scope of the links to be monitored, while legislation imposes limitations on the information that can be collected and exported for monitoring purposes. As a result, almost no organization can monitor or evaluate the performance of the entire network; each can only do so for the part that corresponds to its own network.
In this thesis, we propose the use of tomographic techniques for network topology discovery and performance evaluation. Network tomography studies the internal characteristics of the network using end-to-end probes, i.e., it does not need the cooperation of the internal nodes of the network and can be successfully adopted in almost all scenarios. It thus becomes possible to learn the characteristics of the network beyond administrative borders.
In this thesis we propose a new approach to the Packet Sandwich probe, in which we use TTL-limited probes to infer the delay of a path hop by hop. We show that this approach is more effective than existing ones.
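The abstract does not detail the inference step, but the general idea behind TTL-limited probing can be sketched as follows: a probe sent with TTL = k expires at the k-th router, which answers with an ICMP Time Exceeded message, so the measured round-trip time covers the path up to hop k; differencing consecutive RTTs then estimates each hop's contribution. The RTT values below are invented for illustration, and real measurements would need averaging over many probes.

```python
# Sketch: per-hop delay estimation from TTL-limited probes.
# rtts_ms[k-1] is the (hypothetical) RTT measured for a probe with TTL=k;
# the difference between consecutive RTTs estimates the delay added by hop k.

def hop_delays(rtts_ms):
    """Given cumulative RTTs for TTL = 1..n, return per-hop delay estimates."""
    deltas = []
    prev = 0.0
    for rtt in rtts_ms:
        deltas.append(round(rtt - prev, 3))  # rounded for readability
        prev = rtt
    return deltas

# Example with made-up measurements (milliseconds):
rtts = [1.8, 4.9, 10.2]      # RTTs for TTL = 1, 2, 3
print(hop_delays(rtts))      # [1.8, 3.1, 5.3]
```

In practice each RTT would be the median of repeated probes, since queueing noise easily exceeds the per-hop differences.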
This work was developed under the ERASMUS student mobility program, in the
Telecommunication Networks Research Group, Dept. of Information Engineering,
University of Pisa.
Compact mixed integer linear programming models to the Minimum Weighted Tree Reconstruction problem
The Minimum Weighted Tree Reconstruction (MWTR) problem consists of finding a minimum-length weighted tree connecting a set of terminal nodes in such a way that the length of the path between each pair of terminal nodes is greater than or equal to a given distance between the considered pair of terminal nodes. This problem has applications in several areas, namely the inference of phylogenetic trees, the modeling of traffic networks and the analysis of internet infrastructures. In this paper, we investigate the MWTR problem and we present two compact mixed-integer linear programming models to solve the problem. Computational results using two different sets of instances, one from the phylogenetic area and another from the telecommunications area, show that the best of the two models is able to solve instances of the problem having up to 15 terminal nodes.
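As a concrete reading of the MWTR constraint, the sketch below (not taken from the paper; the tree and the distance matrix are invented) checks whether a candidate weighted tree satisfies the requirement that the tree path length between every pair of terminals is at least the given distance:

```python
from itertools import combinations

# Hypothetical weighted tree as an adjacency map {node: {neighbor: length}}.
# Three terminals a, b, c hang off a single internal node x.
tree = {
    "a": {"x": 2.0},
    "b": {"x": 3.0},
    "c": {"x": 4.0},
    "x": {"a": 2.0, "b": 3.0, "c": 4.0},
}

def path_length(tree, src, dst):
    """Length of the unique src-dst path in the tree, found by DFS."""
    stack = [(src, 0.0, None)]
    while stack:
        node, dist, parent = stack.pop()
        if node == dst:
            return dist
        for nxt, w in tree[node].items():
            if nxt != parent:
                stack.append((nxt, dist + w, node))
    raise ValueError("nodes are not connected")

def is_feasible(tree, terminals, dist):
    """MWTR feasibility: each terminal pair's path length >= given distance."""
    return all(path_length(tree, u, v) >= dist[(u, v)]
               for u, v in combinations(terminals, 2))

dist = {("a", "b"): 5.0, ("a", "c"): 6.0, ("b", "c"): 7.0}
print(is_feasible(tree, ["a", "b", "c"], dist))  # True
```

The MILP models in the paper additionally choose the tree topology and edge lengths to minimize total length; this check only verifies a given solution.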
Network-provider-independent overlays for resilience and quality of service.
PhD thesis

Overlay networks are viewed as one of the solutions addressing the inefficiency and slow
evolution of the Internet and have been the subject of significant research. Most existing
overlays providing resilience and/or Quality of Service (QoS) need cooperation among
different network providers, but this raises a trust issue between providers that cannot be easily solved.
In this thesis, we mainly focus on network-provider-independent overlays and investigate
their performance in providing two different types of service. Specifically, this thesis
addresses the following problems:
Provider-independent overlay architecture: A provider-independent overlay
framework named Resilient Overlay for Mission-Critical Applications (ROMCA)
is proposed. We describe its structure, including component composition and
functions, and also provide several operational examples.
Overlay topology construction for providing resilience service: We investigate the topology design problem of provider-independent overlays aiming to provide resilience service. To be more specific, based on the ROMCA framework, we
formulate this problem mathematically and prove its NP-hardness. Three heuristics are proposed and extensive simulations are carried out to verify their effectiveness.
Application mapping with resilience and QoS guarantees: Assuming application mapping is the targeted service for ROMCA, we formulate this problem as
an Integer Linear Program (ILP). Moreover, a simple but effective heuristic is
proposed to address this issue in a time-efficient manner. Simulations with both
synthetic and real networks prove the superiority of both solutions over existing
ones.
Substrate topology information availability and the impact of its accuracy on overlay performance: Based on our survey of the methodologies available for inferring the selective substrate topology formed among a group of nodes through active probing, we find that such information is usually inaccurate and that additional mechanisms are needed to obtain a better inferred topology. Therefore, we examine the impact of inferred substrate topology accuracy on overlay performance given only inferred substrate topology information.
Whole Brain Vessel Graphs: A Dataset and Benchmark for Graph Learning and Neuroscience (VesselGraph)
Biological neural networks define the brain function and intelligence of humans and other mammals, and form ultra-large, spatial, structured graphs. Their neuronal organization is closely interconnected with the spatial organization of the brain's microvasculature, which supplies oxygen to the neurons and builds a complementary spatial graph. This vasculature (or the vessel structure) plays an important role in neuroscience; for example, the organization of (and changes to) vessel structure can represent early signs of various pathologies, e.g. Alzheimer's disease or stroke. Recently, advances in tissue clearing have enabled whole brain imaging and segmentation of the entirety of the mouse brain's vasculature. Building on these advances in imaging, we present an extendable dataset of whole-brain vessel graphs based on specific imaging protocols. Specifically, we extract vascular graphs using a refined graph extraction scheme leveraging the volume rendering engine Voreen and provide them in an accessible and adaptable form through the OGB and PyTorch Geometric dataloaders. Moreover, we benchmark numerous state-of-the-art graph learning algorithms on the biologically relevant tasks of vessel prediction and vessel classification using the introduced vessel graph dataset. Our work paves a path towards advancing graph learning research into the field of neuroscience. Complementarily, the presented dataset raises challenging graph learning research questions for the machine learning community, in terms of incorporating biological priors into learning algorithms, or in scaling these algorithms to handle sparse, spatial graphs with millions of nodes and edges. All datasets and code are available for download at this https URL.
Descoberta da topologia de rede (Network topology discovery)
PhD in Mathematics

Monitoring and evaluating the performance of a network is essential
to detect and resolve network failures. In order to carry out this monitoring, it is essential to know the topology of the network, which is often unknown. Many of the techniques used to discover the topology require the cooperation of all network devices, which is almost impossible due to security and policy issues. It is therefore necessary
to use techniques that collect, passively and without the cooperation
of intermediate devices, the necessary information to allow the inference
of the network topology. This can be done using tomography
techniques, which use end-to-end measurements, such as the packet
delays.
In this thesis, we used some integer linear programming theory and
methods to solve the problem of inferring a network topology using
only end-to-end measurements. We present two compact mixed integer
linear programming (MILP) formulations to solve the problem. Computational
results showed that, as the number of end devices grows, the time needed by the two compact MILP formulations to solve the problem also grows rapidly. Therefore, we developed two heuristics based on the Feasibility Pump and Local Branching methods. Since the packet delay
measurements have associated errors, we developed two robust approaches, one to control the maximum number of deviations and the other to reduce the risk of high cost. We also created a system that measures the packet delays between computers on a network and displays the topology of that network.
Machine Learning based Probability Density Functions of photometric redshifts and their application to cosmology in the era of dark Universe
The advent of wide, multiband, multiepoch digital surveys of the sky has pushed astronomy into the big data era. Instruments such as the Large Synoptic Survey Telescope (LSST) are in fact capable of producing up to 30 Terabytes of data per night. Such data streams imply that data acquisition, data reduction, data analysis and data interpretation cannot be performed with traditional methods, and that automatic procedures need to be implemented. In other words, Astronomy, like many other sciences, needs to adopt what has been defined as the fourth paradigm of modern science: the so-called "data driven" or "Knowledge Discovery in Databases (KDD)" paradigm (after the three older paradigms: theory, experimentation and simulations). By "Knowledge discovery" or "Data mining" we mean the extraction of useful information from a very large amount of data using automatic or semi-automatic techniques based on Machine Learning, i.e., on algorithms built to teach machines how to perform specific tasks typical of the human brain.
This methodological revolution has led to the birth of the new discipline of Astroinformatics, which, besides the algorithms used to extract knowledge from data, also covers the proper acquisition and storage of the data, their pre-processing and analysis, as well as their distribution to the community of users.
This thesis takes place within the framework defined by this new discipline, since it describes the implementation and application of a new machine learning method to the evaluation of photometric redshifts for the large samples of galaxies produced by the ongoing and future digital surveys of the extragalactic sky. Photometric redshifts (described in Section 1.1) are in fact essential for a huge variety of fundamental topics, such as: fixing constraints on the dark matter and energy content of the Universe, mapping the galaxy color-redshift relationships, classifying astronomical sources, and reconstructing the Large Scale Structure of the Universe through weak lensing, to quote just a few. Therefore, it comes as no surprise that in recent years a plethora of methods capable of calculating photo-z's has been implemented, based either on template model fitting or on empirical explorations of the photometric parameter space. Among the latter, many are based on machine learning, but only a few allow the characterization of the results in terms of a reliable Probability Distribution Function (PDF).
In fact, while machine learning based techniques are not explicitly dependent on physical priors and are capable of producing accurate photo-z estimates within the photometric ranges covered by a spectroscopic training set, they are not easy to characterize in terms of a photo-z PDF, because the analytical relation mapping the photometric parameters onto the redshift space is virtually unknown. In the course of my thesis I contributed to the design, implementation and testing of the innovative procedure METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), capable of providing reliable PDFs of the error distribution for empirical techniques. METAPHOR is implemented as a modular workflow, whose internal engine for photo-z estimation makes use of the MLPQNA neural network (Multi Layer Perceptron with Quasi Newton learning rule), with the possibility to easily replace the specific machine learning model chosen to predict photo-z's, and of an algorithm for the calculation of PDFs for individual sources as well as for stacked object samples. More in detail, my work in this context has been:
i) the creation of software modules providing some of the functionalities of the entire method
and finalised to obtain and analyze the results on all the datasets used so far (see the list of
publications) and for the EUCLID contest (see below), ii) fixing algorithms to improve some workflow facilities, and iii) debugging the whole procedure. The first
application of METAPHOR was in the framework of the second internal Photo-z challenge
of the Euclid consortium: a contest among different teams, aimed at establishing the best
SED fitting and/or empirical methods, to be included in the official data flow processing
pipelines for the mission. This contest lasted from September 2015 until the end of January 2016, and concluded with the release of the results on the participants' performances in the middle of May 2016.
Finally, the original workflow has been improved by adding other statistical estimators in
order to better quantify the significance of the results. Through a comparison of the results
obtained by METAPHOR and by the SED template fitting method Le-Phare on the SDSS-
DR9 (Sloan Digital Sky Survey - Data Release 9) we verified the reliability of our PDF
estimates using three different self-adaptive techniques, namely: MLPQNA, Random Forest
and the standard K-Nearest Neighbors models.
In order to further explore ways to improve the overall performances of photo-z methods,
I also contributed to the implementation of a hybrid procedure based on the combination
of SED template fitting estimates obtained with Le-Phare and of METAPHOR using as test
data those extracted from the ESO (European Southern Observatory) KiDS (Kilo Degree
Survey) Data Release 2.
Still in the context of the KiDS survey, I was involved in the creation of a catalogue of ML photo-z's and relative PDFs for the KiDS-DR3 (Data Release 3) survey, widely and exhaustively described in de Jong et al. (2017). A further work on KiDS DR3 data, Amaro et al. (2017), has been submitted to MNRAS. The main topic of this last work is to achieve a
deeper analysis of photo-z PDFs obtained using different methods, two machine learning
models (METAPHOR and ANNz2) and one based on SED fitting techniques (BPZ), through
a direct comparison of both cumulative (stacked) and individual PDFs. The comparison has
been made by discriminating between quantitative and qualitative estimators and using a
special dummy PDF as benchmark to assess their capability to measure the quality of error
estimation and invariance with respect to any type of error source. In fact, it is well known that, in the absence of systematics, there are several factors affecting photo-z reliability, such as photometric and internal errors of the methods, as well as statistical biases. For the first time, we implemented an ML-based procedure capable of also taking into account the intrinsic photometric uncertainties.
By modifying the METAPHOR internal mechanism, I derived a dummy PDF method in which each individual PDF, called a dummy, is made up of a single number, e.g. 1 (the maximum probability), associated with the redshift bin of chosen accuracy into which the only photo-z estimate for that source falls. All the other redshift bins of a dummy PDF are characterized by a probability identically equal to zero. Due to its intrinsic invariance to different sources of error, the dummy method makes it possible to compare PDF methods independently of the statistical estimator adopted.
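The dummy PDF construction described above can be sketched directly; the redshift range and bin width below are illustrative choices, not the thesis settings:

```python
def dummy_pdf(photo_z, z_min=0.0, z_max=1.0, bin_width=0.01):
    """One-hot 'dummy' PDF: probability 1 in the redshift bin containing
    the point photo-z estimate, 0 in every other bin. Estimates beyond
    z_max are clamped to the last bin (an assumption of this sketch)."""
    n_bins = int(round((z_max - z_min) / bin_width))
    idx = min(int((photo_z - z_min) / bin_width), n_bins - 1)
    return [1.0 if i == idx else 0.0 for i in range(n_bins)]

p = dummy_pdf(0.345)
print(sum(p), p.index(1.0))  # 1.0 34
```

Because the dummy PDF depends only on the point estimate, any statistical estimator evaluated on it reflects the estimator itself rather than the error sources, which is what makes it usable as a benchmark.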
The results of this comparison, along with a discussion of the statistical estimators, have
allowed us to conclude that, in order to assess the objective validity and quality of any photo-z
PDF method, a combined set of statistical estimators is required.
Finally, a natural application of photo-z PDFs is that involving the measurements of Weak
Lensing (WL), i.e. the weak distortion of galaxy images due to the inhomogeneities of
the Universe Large Scale Structure (LSS, made up of voids, filaments, halos) along the line of
sight. The shear or distortion of the galaxy shapes (ellipticities) due to the presence of matter
between the observer and the lensed sources, is evaluated through the tangential component
of the shear. The Excess Surface Density (i.e. a measurement of density distribution of the
lenses), is proportional to the tangential shear, through a geometrical factor, which takes into
account the angular diameter distances among observer, lens, and lensed galaxy source. Such
distances in the geometrical factor are measured through photometric redshifts, or better
through their full posterior probability distributions.
Up to now, such distributions have been measured with template fitting methods: our Machine Learning method METAPHOR has been employed to make a preliminary comparative study on WL ESD with respect to the SED fitter results. Furthermore, a comparison between the ESD estimates obtained by using both METAPHOR PDFs and photo-z point estimates has been performed.
The WL study outcome is very promising, since we found that the use of point estimates and the relative PDFs leads to indistinguishable results, at least to the required accuracy. Most importantly, we found a similar trend for the ESD results in the comparison of our Machine Learning method with a template fitter's performance, despite all the limits of Machine Learning techniques (incompleteness of the training dataset, low reliability for results extrapolated outside the knowledge base), which become particularly relevant in WL studies.
Medical Image Segmentation: Thresholding and Minimum Spanning Trees
In image segmentation, an image is divided into separate objects or regions. It is an essential step in image processing to define areas of interest for further processing or analysis.
The segmentation process reduces the complexity of an image to simplify the analysis of the attributes obtained after segmentation. It changes the representation of the information in the original image and presents the pixels in a way that is more meaningful and easier to understand.
Image segmentation has various applications. For medical images, the segmentation process aims to extract the image data set to identify areas of the anatomy relevant to a particular study or diagnosis of the patient. For example, one can locate affected or abnormal parts of the body. Segmentation of follow-up data and baseline lesion segmentation is also very important to assess the treatment response.
There are different methods used for image segmentation. They can be classified based on how they are formulated and how the segmentation process is performed. The methods include those based on threshold values, graph-based, edge-based, cluster-based, model-based and hybrid methods, and methods based on machine learning and deep learning. Other methods are based on growing, splitting and merging regions, finding discontinuities in the edge, watershed segmentation, active contours and graph-based methods.
In this thesis, we have developed methods for segmenting different types of medical images. We tested the methods on datasets for white blood cells (WBCs) and magnetic resonance images (MRI). The developed methods and the analysis performed on the image data set are presented in three articles.
In Paper A we proposed a method for segmenting nuclei and cytoplasm from white blood cells. The method estimates the threshold for segmentation of nuclei automatically based on local minima. The method segments the WBCs before segmenting the cytoplasm depending on the complexity of the objects in the image. For images where the WBCs are well separated from red blood cells (RBCs), the WBCs are segmented by taking the average of images that were already filtered with a threshold value. For images where RBCs overlap the WBCs, the entire WBCs are segmented using simple linear iterative clustering (SLIC) and watershed methods. The cytoplasm is obtained by subtracting the segmented nucleus from the segmented WBC. The method is tested on two different publicly available datasets, and the results are compared with state-of-the-art methods.
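Paper A's automatic threshold is described only at a high level; a generic valley-seeking sketch, which picks a threshold at a local minimum of a (pre-smoothed) gray-level histogram, might look like the following. The histogram values are synthetic, and this is an illustration of the general idea, not Paper A's exact procedure:

```python
def local_minimum_threshold(hist):
    """Return the gray level of the first interior local minimum of a
    pre-smoothed histogram -- a common valley-seeking threshold choice
    between the dark and bright intensity peaks."""
    for g in range(1, len(hist) - 1):
        if hist[g] < hist[g - 1] and hist[g] <= hist[g + 1]:
            return g
    return None  # no interior valley (e.g. unimodal histogram)

# Synthetic bimodal histogram: dark peak near level 2, bright peak near
# level 7, valley at level 4 -> threshold 4.
hist = [5, 30, 80, 30, 4, 25, 60, 90, 40, 10]
print(local_minimum_threshold(hist))  # 4
```

Pixels below the returned level would go to one class (e.g. nucleus) and the rest to the other; real histograms need smoothing first, or every noise dip becomes a candidate valley.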
In Paper B, we proposed a method for segmenting brain tumors based on minimum spanning tree (MST) concepts. The method performs interactive segmentation based on the MST. In this paper, the image is loaded in an interactive window for segmenting the tumor. The region of interest and the background are selected by clicking to split the MST into two trees. One of these trees represents the region of interest and the other represents the background. The proposed method was tested by segmenting two different 2D brain T1-weighted magnetic resonance image data sets. The method is simple to implement and the results indicate that it is accurate and efficient.
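The MST-based two-region split of Paper B can be sketched in miniature: build an MST over a weighted graph, then disconnect it by cutting the heaviest edge on the path between the two user-selected seeds. Here a tiny hand-made graph stands in for the pixel grid, and the interactive clicks are replaced by two seed arguments; this is a toy illustration of the concept, not the paper's implementation:

```python
import heapq

def mst_edges(graph, start):
    """Prim's algorithm: return the MST edge list of a connected weighted graph."""
    visited = {start}
    edges = []
    heap = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(heap)
    while heap:
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        edges.append((w, u, v))
        for nxt, w2 in graph[v].items():
            if nxt not in visited:
                heapq.heappush(heap, (w2, v, nxt))
    return edges

def split_by_seeds(graph, fg_seed, bg_seed):
    """Cut the heaviest MST edge on the fg-bg path; return the fg component."""
    adj = {v: {} for v in graph}
    for w, u, v in mst_edges(graph, fg_seed):
        adj[u][v] = w
        adj[v][u] = w

    def path(u, dst, parent):  # unique MST path between the seeds
        if u == dst:
            return [u]
        for nxt in adj[u]:
            if nxt != parent:
                p = path(nxt, dst, u)
                if p:
                    return [u] + p
        return None

    p = path(fg_seed, bg_seed, None)
    cut = max(zip(p, p[1:]), key=lambda e: adj[e[0]][e[1]])
    del adj[cut[0]][cut[1]]
    del adj[cut[1]][cut[0]]
    fg, stack = set(), [fg_seed]  # collect the foreground component
    while stack:
        u = stack.pop()
        if u not in fg:
            fg.add(u)
            stack.extend(adj[u])
    return fg

# Toy 'image': edge weights model intensity differences; the weight-9 edge
# is the natural boundary between the two regions.
g = {
    "a": {"b": 1, "c": 2},
    "b": {"a": 1, "c": 2, "d": 9},
    "c": {"a": 2, "b": 2},
    "d": {"b": 9, "e": 1},
    "e": {"d": 1},
}
print(sorted(split_by_seeds(g, "a", "e")))  # ['a', 'b', 'c']
```

Because similar pixels are joined by light edges, the heaviest edge between the seeds tends to sit on the region boundary, which is why cutting it yields a plausible object/background split.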
In Paper C, we propose a method that processes a 3D MRI volume and partitions it into brain, non-brain tissues, and background segments. It is a graph-based method that uses the MST to separate the 3D MRI into the brain, non-brain, and background regions. The graph is made from a preprocessed 3D MRI volume, followed by the construction of the MST. The segmentation process produces three labeled connected components, which are reshaped back to the shape of the 3D MRI. The labels are used to segment the brain, non-brain tissues, and the background. The method was tested on three different publicly available data sets and the results were compared to different state-of-the-art methods.

Doctoral dissertation
Enabling Scalable Neurocartography: Images to Graphs for Discovery
In recent years, advances in technology have enabled researchers to ask new questions predicated on the collection and analysis of big datasets that were previously too large to study. More specifically, many fundamental questions in neuroscience require studying brain tissue at a large scale to discover emergent properties of neural computation, consciousness, and etiologies of brain disorders. A major challenge is to construct larger, more detailed maps (e.g., structural wiring diagrams) of the brain, known as connectomes.
Although raw data exist, obstacles remain in both algorithm development and scalable image analysis to enable access to the knowledge within these data volumes. This dissertation develops, combines and tests state-of-the-art algorithms to estimate graphs and glean other knowledge across six orders of magnitude, from millimeter-scale magnetic resonance imaging to nanometer-scale electron microscopy.
This work enables scientific discovery across the community and contributes to the tools and services offered by NeuroData and the Open Connectome Project. Contributions include creating, optimizing and evaluating the first known fully-automated brain graphs in electron microscopy data and magnetic resonance imaging data; pioneering approaches to generate knowledge from X-Ray tomography imaging; and identifying and solving a variety of image analysis challenges associated with building graphs suitable for discovery. These methods were applied across diverse datasets to answer questions at scales not previously explored.
Fast algorithm for real-time rings reconstruction
The GAP project is dedicated to studying the application of GPUs in several contexts in which real-time response is important for taking decisions. The definition of real-time depends on the application under study, ranging from response times of microseconds up to several hours in the case of very computing-intensive tasks. During this conference we presented our work on low-level triggers [1] [2] and high-level triggers [3] in high energy physics experiments, and specific applications for nuclear magnetic resonance (NMR) [4] [5] and cone-beam CT [6]. Apart from the study of dedicated solutions to decrease the latency due to data transport and preparation, the computing algorithms play an essential role in any GPU application. In this contribution, we show an original algorithm, developed for trigger applications, to accelerate ring reconstruction in RICH detectors when it is not possible to obtain seeds for the reconstruction from external trackers.
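The abstract does not spell out the fitting step, but a standard building block for ring reconstruction is an algebraic least-squares circle fit. The sketch below uses the classic Kåsa linearization on invented hit coordinates; the GAP trigger algorithm itself is more elaborate and GPU-parallel, so this is only a serial illustration of the underlying fit:

```python
import math

def fit_circle(points):
    """Kåsa algebraic circle fit: linearize (x-cx)^2 + (y-cy)^2 = r^2 as
    2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2 and solve the
    3x3 normal equations with Gauss-Jordan elimination."""
    # Accumulate A^T A | A^T b for rows [x, y, 1] and b = x^2 + y^2.
    M = [[0.0] * 4 for _ in range(3)]
    for x, y in points:
        row = [x, y, 1.0]
        b = x * x + y * y
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            M[i][3] += row[i] * b
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    p = [M[i][3] / M[i][i] for i in range(3)]
    cx, cy = p[0] / 2.0, p[1] / 2.0
    r = math.sqrt(p[2] + cx * cx + cy * cy)
    return cx, cy, r

# Hypothetical hits lying exactly on a ring of radius 2 centred at (1, -1).
hits = [(1 + 2 * math.cos(t), -1 + 2 * math.sin(t)) for t in (0.3, 1.2, 2.5, 4.0)]
cx, cy, r = fit_circle(hits)
print(round(cx, 6), round(cy, 6), round(r, 6))  # 1.0 -1.0 2.0
```

The fit is linear, so it maps well to massively parallel hardware; seedless triggers must additionally decide which hits belong to which ring before (or while) fitting.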