Representation of Imprecise Digital Objects
In this paper, we investigate a new framework to handle noisy digital objects. We consider digital closed simple 4-connected curves that result from an imperfect digital conversion (scan, photograph, etc.), and call such curves, for which an imprecision value is known at each point, imprecise digital contours. This imprecision value stands for the radius of a ball around each point, such that the result of a perfect digitization lies in the union of all the balls. In the first part, we show how to define an imprecise digital object from such an imprecise digital contour. To do so, we define three classes of pixels: inside, outside, and uncertain pixels. In the second part of the paper, we build on this definition for a volumetric analysis (as opposed to contour analysis) of imprecise digital objects. From so-called toleranced balls, a filtration of objects, called λ-objects, is defined. We show how to define a set of sites to encode this filtration of objects.
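As a rough illustration of the first part, the three pixel classes can be sketched as follows. This is a hypothetical Python sketch, not the paper's algorithm: the contour is simplified to a polygon of sample points, pixel centers falling inside any imprecision ball are marked uncertain, and the remaining pixels are split by an even-odd point-in-polygon test.

```python
import math

def classify_pixels(contour, radii, grid_w, grid_h):
    """Classify grid pixels against an imprecise digital contour.

    contour: list of (x, y) contour sample points (closed curve)
    radii:   per-point imprecision values (ball radii)
    Returns {(x, y): 'inside' | 'outside' | 'uncertain'}.
    """
    labels = {}
    for py in range(grid_h):
        for px in range(grid_w):
            # A pixel covered by any imprecision ball is 'uncertain':
            # a perfect digitization could pass through it.
            if any(math.hypot(px - cx, py - cy) <= r
                   for (cx, cy), r in zip(contour, radii)):
                labels[(px, py)] = 'uncertain'
            else:
                # Outside all balls: a point-in-polygon test against
                # the contour decides between 'inside' and 'outside'.
                labels[(px, py)] = ('inside' if _in_polygon(px, py, contour)
                                    else 'outside')
    return labels

def _in_polygon(x, y, poly):
    """Even-odd ray-casting point-in-polygon test."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            xt = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xt:
                inside = not inside
    return inside
```

The λ-object filtration of the second part can then be read off the same picture: growing or shrinking the radii moves pixels between the uncertain class and the two certain classes.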
Efficient Algorithms for Coastal Geographic Problems
The increasing performance of computers has made it possible to solve algorithmically problems for which manual and possibly inaccurate methods have previously been used. Nevertheless, one must still pay attention to the performance of an algorithm if huge datasets are used or if the problem is computationally difficult.
Two geographic problems are studied in the articles included in this thesis. In the first problem the goal is to determine distances from points, called study points, to shorelines in predefined directions. Together with other information, mainly related to wind, these distances can be used to estimate wave exposure in different areas. In the second problem the input consists of a set of sites where water quality observations have been made and of the results of the measurements at the different sites. The goal is to select a subset of the observational sites in such a manner that water quality is still measured with sufficient accuracy when monitoring at the other sites is stopped to reduce economic cost.
Most of the thesis concentrates on the first problem, known as the fetch length problem. The main challenge is that the two-dimensional map is represented as a set of polygons with millions of vertices in total, and the distances may also be computed for millions of study points in several directions. Efficient algorithms are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of them are targeted for serial operation or for a small number of CPU cores, whereas one, together with its further developments, is suitable also for parallel machines such as GPUs.
The growth in the performance of computers has made it possible to solve algorithmically problems that were previously examined with labor-intensive, possibly inaccurate methods. However, attention must occasionally still be paid to the performance of algorithms because of the large amount of source material or the computational difficulty of the problem.
Two geographic problems are examined in the articles included in this dissertation. In the first, distances must be determined from points at sea to the nearest shoreline in predetermined directions. Together with information on wind strength, these distances make it possible to estimate, for example, the strength of the waves. In the second problem, the input is a set of monitoring stations and data previously collected at them on various water-quality parameters, such as turbidity and nutrient concentrations. The task is to select a subset of the stations such that water quality can still be monitored with sufficient accuracy when measurements at the other observation sites are stopped to save costs.
The dissertation concentrates mainly on solving the first problem, that of directed distances. The challenge is that the two-dimensional map typically represents the shoreline as a set of polygons consisting of millions of vertices, and distances must be computed for millions of study points in dozens of different directions. Efficient solution methods are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of the methods are designed to run serially or on a small number of CPU cores, whereas one method and its further developments are also suitable for highly parallel devices such as GPUs.
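A minimal sketch of the geometric primitive underlying the fetch length problem, assuming a ray-versus-segment intersection formulation; the names and the brute-force scan over all shoreline segments are illustrative, not the thesis's optimized algorithms:

```python
import math

def fetch_length(study_pt, direction_deg, shoreline_segments, max_fetch=1e9):
    """Distance from study_pt to the nearest shoreline in one direction.

    study_pt: (x, y); direction_deg: direction angle in degrees;
    shoreline_segments: iterable of ((x1, y1), (x2, y2)) segments.
    Returns the smallest positive ray parameter t (the fetch length),
    or max_fetch if no shoreline is hit in that direction.
    """
    ox, oy = study_pt
    dx = math.cos(math.radians(direction_deg))
    dy = math.sin(math.radians(direction_deg))
    best = max_fetch
    for (x1, y1), (x2, y2) in shoreline_segments:
        ex, ey = x2 - x1, y2 - y1
        denom = dx * ey - dy * ex
        if abs(denom) < 1e-12:          # ray parallel to segment
            continue
        # Solve origin + t*d == p1 + s*e for t (ray) and s (segment).
        t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom
        s = ((x1 - ox) * dy - (y1 - oy) * dx) / denom
        if t > 0 and 0 <= s <= 1:
            best = min(best, t)
    return best
```

The thesis's contribution lies precisely in avoiding this per-query scan over every segment; the sketch only fixes the quantity being computed.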
In the water-quality problem, the given set of stations has a very large number of possible subsets. In addition, the task uses time-consuming operations such as linear regression, which further limits how many subsets can be examined. The solution therefore uses heuristics that do not necessarily produce an optimal result.
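The kind of regression-based heuristic used for site selection could look like the following greedy backward elimination. This is a hypothetical sketch, not the thesis's method: at each step it drops the station whose measurement series is best reproduced by ordinary least squares from the stations still kept.

```python
import numpy as np

def select_stations(data, keep):
    """Greedily drop stations whose measurements are well predicted
    (by ordinary least squares) from the remaining stations.

    data: (n_times, n_stations) matrix of water-quality measurements
    keep: number of stations to retain
    Returns the indices of the retained stations.
    """
    remaining = list(range(data.shape[1]))
    while len(remaining) > keep:
        best_err, best_idx = None, None
        for i in remaining:
            others = [j for j in remaining if j != i]
            # Regress station i on the other stations plus an intercept.
            X = np.column_stack([data[:, others], np.ones(len(data))])
            y = data[:, i]
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            err = np.sum((X @ coef - y) ** 2)   # residual sum of squares
            if best_err is None or err < best_err:
                best_err, best_idx = err, i
        remaining.remove(best_idx)              # drop the most redundant site
    return remaining
```

Each elimination step runs one regression per candidate station, which is exactly the cost that limits how many subsets an exhaustive search could examine.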
Engineering Art Galleries
The Art Gallery Problem is one of the most well-known problems in
Computational Geometry, with a rich history in the study of algorithms,
complexity, and variants. Recently there has been a surge in experimental work
on the problem. In this survey, we describe this work, show the chronology of
developments, and compare current algorithms, including two unpublished
versions, in an exhaustive experiment. Furthermore, we show what core
algorithmic ingredients have led to recent successes.
Geometric uncertainty models for correspondence problems in digital image processing
Many recent advances in technology rely heavily on the correct interpretation of an enormous amount of visual information. All available sources of visual data (e.g. cameras in surveillance networks, smartphones, game consoles) must be adequately processed to retrieve the most interesting user information. Therefore, computer vision and image processing techniques are gaining significant interest at the moment, and will continue to do so in the near future.
Most commonly applied image processing algorithms require a reliable solution for correspondence problems. The solution involves, first, the localization of corresponding points (those visualizing the same 3D point in the observed scene) in the different images from distinct sources, and second, the computation of consistent geometric transformations relating correspondences on scene objects.
This PhD thesis presents a theoretical framework for solving correspondence problems with geometric features (such as points and straight lines) representing rigid objects in image sequences of complex scenes with static and dynamic cameras. The research focuses on localization uncertainty due to errors in feature detection and measurement, and on its effect on each step in the solution of a correspondence problem.
Whereas most other recent methods apply statistical models for spatial localization uncertainty, this work considers a novel geometric approach. Localization uncertainty is modeled as a convex polygonal region in the image space. This model can be efficiently propagated throughout the correspondence-finding procedure. It allows for an easy extension toward transformation uncertainty models, and for inferring confidence measures to verify the reliability of the outcome of the correspondence framework. Our procedure aims at finding reliable consistent transformations in sets of few and ill-localized features, possibly containing a large fraction of false candidate correspondences.
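One reason convex polygonal regions propagate efficiently is that intersecting two of them is a simple linear-time operation: the intersection of the uncertainty regions of two candidate localizations is the region consistent with both, and an empty intersection flags an inconsistent correspondence. The following is a generic Sutherland-Hodgman clipping sketch, not the author's implementation:

```python
def clip_convex(subject, clipper):
    """Intersect two convex polygons via Sutherland-Hodgman clipping.

    Both polygons are lists of (x, y) vertices in counter-clockwise
    order. An empty result means the regions are disjoint.
    """
    def left_of(p, a, b):
        # True if p lies on or to the left of the directed edge a -> b.
        return ((b[0] - a[0]) * (p[1] - a[1])
                - (b[1] - a[1]) * (p[0] - a[0])) >= 0

    def line_intersect(p, q, a, b):
        # Intersection of segment p-q with the infinite line through a-b.
        d1 = (q[0] - p[0], q[1] - p[1])
        d2 = (b[0] - a[0], b[1] - a[1])
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        t = ((a[0] - p[0]) * d2[1] - (a[1] - p[1]) * d2[0]) / denom
        return (p[0] + t * d1[0], p[1] + t * d1[1])

    output = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        polygon, output = output, []
        for j in range(len(polygon)):
            p, q = polygon[j], polygon[(j + 1) % len(polygon)]
            if left_of(q, a, b):
                if not left_of(p, a, b):
                    output.append(line_intersect(p, q, a, b))
                output.append(q)
            elif left_of(p, a, b):
                output.append(line_intersect(p, q, a, b))
        if not output:              # regions are disjoint
            return []
    return output
```

Because convexity is preserved under intersection, the result can be fed unchanged into the next propagation step.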
The evaluation of the proposed procedure on practical correspondence problems shows that correct consistent correspondence sets are returned in over 95% of the experiments for small sets of 10-40 features contaminated with up to 400% false positives and 40% false negatives. The presented techniques prove to be beneficial in typical image processing applications, such as image registration and rigid object tracking.
The role and use of Sketchpad as a modeling tool in secondary schools.
Thesis (Ph.D.)-University of Durban-Westville, 2004.
Over the last decade or two, there has been a discernible move to include modeling in the
mathematics curricula in schools. This has come as the result of the demand that society is
making on educational institutions to provide workers who are capable of relating
theoretical knowledge to that of the real world. Successful industries are those that are able
to effectively overcome the complexities of real world problems they encounter on a daily
basis.
This research study focused, to some extent, on the different definitions of modeling and
some of the processes involved. Various examples are given to illustrate some of the
methods employed in the process of modeling. More importantly, this work attempted to
build on existing research and tested some of these ideas in a teaching environment. This
was done in order to investigate the feasibility of introducing mathematical concepts within
the context of dynamic geometry. Learners, who had not been introduced to specific
concepts, such as concurrency, equidistance, and so on, were interviewed using Sketchpad
and their responses were analyzed. The research focused on a few aspects. It attempted to
determine whether learners were able to use modeling to solve a given real world problem.
It also attempted to establish whether learners developed a better understanding when using
Sketchpad.
Several useful implications have evolved from this work that may influence both the
teaching and learning of geometry in school. Initially these learners showed that, to a large
extent, they could not relate mathematics to the real world and vice versa. But a pertinent
finding of this research showed that, with guidance, these learners could apply themselves
creatively. Furthermore it reaffirmed the idea that learners can be taught from the general
to the more specific, enabling them to develop a better understanding of concepts being
taught.
Perhaps the findings and suggestions may be useful to pre-service and in-service educators,
as well as curriculum developers
NASA space station automation: AI-based technology review
Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for EVA, increase crew productivity through the reduction of routine operations, increase Space Station autonomy, and augment Space Station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.