1,751 research outputs found

    Optimal locating-total dominating sets in strips of height 3

    A set C of vertices in a graph G = (V,E) is total dominating in G if every vertex of V is adjacent to a vertex of C. Furthermore, if a total dominating set C in G has the additional property that for any distinct vertices u, v ∈ V \ C the subsets of C adjacent to u and to v are different, then we say that C is a locating-total dominating set in G. Locating-total dominating sets in strips were previously studied by Henning and Jafari Rad (2012), who determined the sizes of the smallest locating-total dominating sets in the finite strips of height 2 for all lengths, and who posed the analogous problem for strips of height 3 as an open question. In this paper, we answer that question by determining the smallest sizes of locating-total dominating sets in the finite strips of height 3, as well as the smallest density in the infinite strip of height 3.
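    The two definitions above can be made concrete with a short Python sketch; the toy path graph and the helper names below are illustrative choices, not taken from the paper:

```python
def is_total_dominating(adj, C):
    # C is total dominating if every vertex of the graph has a neighbor in C
    return all(adj[v] & C for v in adj)

def is_locating_total_dominating(adj, C):
    # additionally, vertices outside C must have pairwise distinct C-neighborhoods
    if not is_total_dominating(adj, C):
        return False
    codes = [frozenset(adj[v] & C) for v in adj if v not in C]
    return len(codes) == len(set(codes))

# toy graph: the path 0 - 1 - 2 - 3 (not one of the paper's strip graphs)
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(is_locating_total_dominating(adj, {1, 2}))  # → True
```

    Here {1, 2} works because vertices 0 and 3 see the distinct subsets {1} and {2} of C, while every vertex of the path has a neighbor in C.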

    The k-visibility Localization Game

    We study a variant of the Localization game in which the cops have limited visibility, along with the corresponding optimization parameter, the k-visibility localization number ζ_k, where k is a non-negative integer. We give bounds on k-visibility localization numbers related to domination, maximum degree, and isoperimetric inequalities. For all k, we give a family of trees with unbounded ζ_k values. Extending results known for the localization number, we show that for k ≥ 2, every tree contains a subdivision with ζ_k = 1. For many n, we give the exact value of ζ_k for the n × n Cartesian grid graphs, with the remaining cases being one of two values as long as n is sufficiently large. These examples also illustrate that ζ_i ≠ ζ_j for all distinct choices of i and j.

    On location, domination and information retrieval

    The thesis is divided into two main branches: identifying and locating-dominating codes, and information retrieval. The former topics are motivated by the aim to locate objects in sensor networks (or other similar applications), and the latter by the need to retrieve information from memories such as DNA data storage systems. Despite the underlying applications, the study of these topics belongs mainly to discrete mathematics; more specifically, to the fields of coding and graph theory. Sensor networks are usually represented by graphs in which vertices represent the monitored locations and edges the connections between the locations. The locations of the sensors are determined by a code, and the desired properties of the sensor network are deeply linked with the properties of the underlying code. Errors in reading data are abundant in DNA data storage systems; in particular, more errors can occur than a reasonable error-correcting code can handle. However, this problem is somewhat offset by the possibility of obtaining multiple approximations of the same information from the data storage. Hence, the information retrieval process can be modelled by Levenshtein's channel model, in which a message is sent through multiple noisy channels and multiple outputs are received.
    In the first two papers of the thesis, we introduce and study the new concepts of self- and solid-locating-dominating codes as a natural analogy to self-identifying codes with respect to locating-dominating codes. The first paper introduces these new codes and considers them in some graphs such as the Hamming graphs. In the second paper, we broaden our view of the topic by considering graph-theoretical questions. We give optimal codes in multiple different graph classes and some more general results using concepts such as the Dilworth number and graph complements. The third paper focuses on the q-ary Hamming spaces. In particular, we disprove a conjecture proposed by Goddard and Wash related to identifying codes. In the fourth paper, we return to self- and solid-locating-dominating codes, give optimal codes in some graph classes, and consider their densities in infinite graphs. In the fifth paper, we consider information retrieval in memories; in particular, Levenshtein's channel model. In the channel model, we transmit a codeword belonging to the binary Hamming space through multiple identical channels. With the help of the multiple different outputs, we give a list of codewords which may have been sent. In the paper, we study the number of channels required to have a rather small (constant) list size when the properties of the channels, the code, and the dimension of the Hamming space are fixed. In particular, we give an exact relation between the number of channels and the asymptotic value of the maximum list size.
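    The list-decoding idea in the channel model can be sketched in a few lines of Python. For simplicity the sketch assumes substitution (bit-flip) errors only, although Levenshtein's model also covers insertions and deletions; the toy code, outputs, and error radius are illustrative assumptions, not the thesis's parameters:

```python
def hamming(a, b):
    # Hamming distance between equal-length binary strings
    return sum(x != y for x, y in zip(a, b))

def list_decode(outputs, code, max_errors):
    # keep every codeword compatible with all channel outputs
    return [c for c in code if all(hamming(c, y) <= max_errors for y in outputs)]

code = ["0000", "1111", "1010"]      # toy code, not from the thesis
outputs = ["1110", "1101", "0111"]   # three noisy copies of "1111", one flip each
print(list_decode(outputs, code, 1))  # → ['1111']
```

    With more channels, more candidates are ruled out, which is the intuition behind relating the number of channels to the maximum list size.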

    Mapping historical forest biomass for stock-change assessments at parcel to landscape scales

    Understanding historical forest dynamics, specifically changes in forest biomass and carbon stocks, has become critical for assessing current forest climate benefits and projecting future benefits under various policy, regulatory, and stewardship scenarios. Carbon accounting frameworks based exclusively on national forest inventories are limited to broad-scale estimates, but model-based approaches that combine these inventories with remotely sensed data can yield contiguous fine-resolution maps of forest biomass and carbon stocks across landscapes over time. Here we describe a fundamental step in building a map-based stock-change framework: mapping historical forest biomass at fine temporal and spatial resolution (annual, 30 m) across all of New York State (USA) from 1990 to 2019, using freely available data and open-source tools. Using Landsat imagery, US Forest Service Forest Inventory and Analysis (FIA) data, and off-the-shelf LiDAR collections, we developed three modeling approaches for mapping historical forest aboveground biomass (AGB): training on FIA plot-level AGB estimates (direct), training on LiDAR-derived AGB maps (indirect), and an ensemble averaging predictions from the direct and indirect models. Model prediction surfaces (maps) were tested against FIA estimates at multiple scales. All three approaches produced viable outputs, yet tradeoffs were evident in terms of model complexity, map accuracy, saturation, and fine-scale pattern representation. The resulting map products can help identify where, when, and how forest carbon stocks are changing as a result of both anthropogenic and natural drivers. These products can thus serve as inputs to a wide range of applications including stock-change assessments; monitoring, reporting, and verification frameworks; and prioritizing parcels for protection or enrollment in improved management programs.
    Comment: Manuscript: 24 pages, 7 figures; Supplements: 12 pages, 5 figures; Submitted to Forest Ecology and Management
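    The ensemble approach mentioned above amounts to combining the two prediction maps pixel by pixel. A minimal NumPy sketch with an unweighted mean follows; the arrays and AGB values are hypothetical stand-ins, not the paper's data, and the paper does not specify that its ensemble is unweighted:

```python
import numpy as np

# hypothetical per-pixel AGB predictions (Mg/ha) from the two models
direct = np.array([[120.0, 95.0], [60.0, 0.0]])
indirect = np.array([[110.0, 105.0], [70.0, 10.0]])

# pixel-wise mean of the direct and indirect predictions
ensemble = (direct + indirect) / 2.0
print(ensemble)
```

    In practice each array would be a 30 m raster band covering the study area, and the averaging would be applied per year.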

    Studies for the Commissioning of the CERN CMS Silicon Strip Tracker

    In 2008 the Large Hadron Collider (LHC) at CERN will start producing proton-proton collisions of unprecedented energy. One of its main experiments is the Compact Muon Solenoid (CMS), a general-purpose detector optimized for the search for the Higgs boson and supersymmetric particles. The discovery potential of the CMS detector relies on a high-precision tracking system, made of a pixel detector and the largest silicon strip Tracker ever built. In order to operate successfully a device as complex as the CMS silicon strip Tracker, and to fully exploit its potential, the properties of the hardware need to be characterized as precisely as possible, and the reconstruction software needs to be commissioned with physics signals. A number of issues were identified and studied to commission the detector, some of which concern the entire Tracker, while some are specific to the Tracker Outer Barrel (TOB):
    - the time evolution of the signals in the readout electronics needs to be precisely measured and correctly simulated, as it affects the expected occupancy and the data volume, critical issues in high-luminosity running;
    - the electronics coupling between neighbouring channels affects the cluster size and hence the hit resolution, the tracking precision, the occupancy, and the data volume;
    - the mechanical structure of the Rods (the sub-assemblies of the TOB) is mostly made of carbon fiber elements; aluminum inserts glued to the carbon fiber frame provide efficient cooling contacts between the silicon detectors and the thin cooling pipe, made of a copper-nickel alloy; the different thermal expansion coefficients of the various components induce stresses on the structure when it is cooled down to the operating temperature, possibly causing small deformations; a detailed characterization of the geometrical precision of the Rods and of its possible evolution with temperature is a valuable input for track reconstruction in CMS.
    These and other issues were studied in this thesis. For this purpose, a large-scale test setup, designed to study the detector performance by tracking cosmic muons, was operated over several months. A dedicated trigger system was set up to select tracks synchronous with the fast readout electronics, and to enable a precise measurement of the time evolution of the front-end signals. Data collected at room temperature and at the Tracker operating temperature of -10°C were used to test reconstruction and alignment algorithms for the Tracker, as well as to perform a detailed qualification of the geometry and the functionality of the structures at different temperatures.

    Source apportionment of aerosol measured in the northern South China Sea during springtime

    2012 Fall. Includes bibliographical references.
    Large sources of aerosol are known to exist in Asia, but the nature of these sources and their impacts on surface particulate matter concentrations are presently not well understood, due in part to the complex meteorology of the region and the lack of speciated aerosol observations. This work presents findings from a pilot study aimed at improving knowledge in these areas. Aerosol was collected at a sea-level surface site using an 8-stage DRUM cascade impactor during an approximately six-week study at Dongsha Island in the northern South China Sea in the spring of 2010. The samples were analyzed by X-ray fluorescence (XRF) for selected elemental concentrations, and factor analysis was performed on the results using principal component analysis (PCA). The six factors extracted by PCA were identified as various dust, pollution, and sea salt aerosol types. A refined coarse-mode-only factor analysis yielded three coarse factors identified as dust, pollution-laden dust, and sea salt. Back-trajectory analysis with the HYSPLIT trajectory model indicated the likely source regions for the dust factors to be western and northern China and Mongolia, consistent with the known dust sources in the Gobi and Taklimakan Deserts. Pollution factors tended to be associated with transport from coastal China, where large population and industrial centers exist, while sea salt sources indicated more diffuse marine regions. The results were generally consistent with observations from a co-located three-wavelength nephelometer and AERONET radiometer, along with model predictions from the Navy Aerosol Analysis and Prediction System (NAAPS). Back-trajectories indicated that transport of aerosol to the surface at Dongsha was occurring primarily within the boundary layer from regions generally to the north, an observation consistent with the dominance of pollution and dust aerosol in the ground-based data set. In contrast, more westerly flow aloft transported air from regions to the south and west, where biomass burning was a more significant aerosol source; however, this particle type was not clearly identified in the surface aerosol composition, consistent with it remaining primarily aloft and not mixing strongly to the surface during the study. Significant vertical wind shear and temperature inversions in the region support this conceptual understanding and suggest the potential for considerable vertical inhomogeneity in the SCS aerosol environment.
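    The PCA step of such a factor analysis can be sketched with plain NumPy; the synthetic concentration matrix below is a stand-in for the XRF elemental data, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic samples-by-elements concentration matrix (stand-in for XRF data)
X = rng.lognormal(size=(50, 4))

# standardize columns, then eigendecompose the correlation matrix
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(Z.T @ Z / len(Z))
order = np.argsort(eigvals)[::-1]           # largest variance first
loadings = eigvecs[:, order]                # factor loadings per element
explained = eigvals[order] / eigvals.sum()  # variance fraction per factor
print(explained)
```

    In a source apportionment study, each retained factor's loadings are then inspected for marker elements (e.g., crustal elements for dust, Na and Cl for sea salt) to identify the aerosol type it represents.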

    Facial analysis in video : detection and recognition

    Biometric authentication systems automatically identify or verify individuals using physiological (e.g., face, fingerprint, hand geometry, retina scan) or behavioral (e.g., speaking pattern, signature, keystroke dynamics) characteristics. Among these biometrics, facial patterns have the major advantage of being the least intrusive. Automatic face recognition systems thus have great potential in a wide spectrum of application areas. Focusing on facial analysis, this dissertation presents a face detection method and several feature extraction methods for face recognition. Concerning face detection, a video-based frontal face detection method has been developed that uses motion analysis and color information to derive fields of interest, and distribution-based distance (DBD) and support vector machines (SVM) for classification. When applied to 92 still images (containing 282 faces), this method achieves a 98.2% face detection rate with two false detections, a performance comparable to the state-of-the-art face detection methods; when applied to video streams, this method detects faces reliably and efficiently. Regarding face recognition, extensive assessments of face recognition performance in twelve color spaces have been performed, and a color feature extraction method defined by color component images across different color spaces is shown to improve the baseline performance on the Face Recognition Grand Challenge (FRGC) problems. The experimental results show that some color configurations, such as YV in the YUV color space and YJ in the YIQ color space, help improve face recognition performance. Based on these improved results, a novel feature extraction method implementing genetic algorithms (GAs) and the Fisher linear discriminant (FLD) is designed to derive the optimal discriminating features that lead to an effective image representation for face recognition. This method noticeably improves the FRGC ver1.0 Experiment 4 baseline recognition rate from 37% to 73%, and significantly elevates the FRGC xxxx Experiment 4 baseline verification rate from 12% to 69%. Finally, four two-dimensional (2D) convolution filters are derived for feature extraction, and a 2D+3D face recognition system implementing both 2D and 3D imaging modalities is designed to address the FRGC problems. This method improves the FRGC ver2.0 Experiment 3 baseline performance from 54% to 72%.
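    The Fisher linear discriminant at the core of the feature extraction above can be sketched for the two-class case; the synthetic 2-D features below are illustrative, and the dissertation's GA-driven search over color features is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
# two synthetic classes of 2-D feature vectors
A = rng.normal([0.0, 0.0], 0.5, size=(30, 2))
B = rng.normal([2.0, 2.0], 0.5, size=(30, 2))

mA, mB = A.mean(axis=0), B.mean(axis=0)
# within-class scatter matrix (sum of the two class scatters)
Sw = (A - mA).T @ (A - mA) + (B - mB).T @ (B - mB)
# Fisher direction: maximizes between- over within-class scatter
w = np.linalg.solve(Sw, mA - mB)
print((mA - mB) @ w)  # positive: the class means separate along w
```

    Projecting features onto w (or, for many classes, onto several such discriminant directions) yields the low-dimensional representation used for recognition.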

    Evaluation of a steep curved rotorcraft IFR procedure in a helicopter-ATC integrated simulation test


    Resource ecology of the Bolinao coral reef system

    Resource/Energy Economics and Policy