Automated Identification and Differentiation of Spectrally Similar Hydrothermal Minerals on Mars
Early telescopic observations corroborated hydration-related absorptions on Mars in the infrared. Images from the Viking missions led to speculation about hydrothermal alteration and were followed by two missions that mapped the spatial variability of the ~3 μm hydration feature. Since then, the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) has provided high-spatial-resolution (up to ~18 m/pixel) spectral identification of a suite of hydrothermal and diagenetic minerals, which has illuminated a range of formation mechanisms. Presence/absence and spatial segregation or mixing of minerals like prehnite, epidote, chlorite, amphiboles, and mixed-layer Fe/Mg smectite-chlorite provide valuable evidence for the geologic setting of deposits on Earth, and these phases are often used as temperature and aqueous chemistry indicators in terrestrial systems. Mapping the distribution of these phases will help to answer whether Mars had widespread conditions favorable for low-grade metamorphism and diagenesis, or only focused hydrothermal systems in areas of high heat flow. Further characterizing the chemistry and structure of these phases will then help to answer how most of the widespread Fe/Mg phyllosilicates formed, further constraining early geochemical cycling and climate. A fully automated approach for accurate mapping of important hydrothermal mineral phases on Mars has remained a challenge. Due to overlapping features in the metal-OH (M-OH) region (~2.2-2.4 μm), the strongest absorption features of chlorite, prehnite, and epidote in the short-wave infrared are difficult to distinguish from one another and from the most commonly occurring hydrated silicates on Mars, Fe/Mg smectites. Weaker absorptions present in both prehnite and epidote help to distinguish them from chlorite and smectites, but their relative strength in the presence of noise and spatial mixing is often too low to identify them confidently without the noise suppression and feature enhancement methods described here. The spectral signatures of mixed-layer Fe/Mg smectite-chlorite and partially chloritized Fe/Mg smectites have not yet been adequately assessed. Here we evaluate the effectiveness of two empirical and statistical methods for identifying and differentiating these phases using CRISM data.
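As a rough illustration of the kind of spectral criterion involved, the sketch below computes a continuum-removed band depth, the standard measure of absorption strength in imaging spectroscopy. The shoulder and center wavelengths and the simple linear continuum are illustrative assumptions and do not reproduce the noise suppression and feature enhancement methods the abstract refers to.

```python
import numpy as np

def band_depth(wavelengths, reflectance, left, center, right):
    """Continuum-removed band depth at `center`, with the continuum taken as a
    straight line between the shoulder wavelengths `left` and `right` (in um).
    Deeper absorptions give values closer to 1."""
    rl = np.interp(left, wavelengths, reflectance)    # reflectance at the left shoulder
    rr = np.interp(right, wavelengths, reflectance)   # reflectance at the right shoulder
    rc = np.interp(center, wavelengths, reflectance)  # reflectance at the band center
    continuum = rl + (rr - rl) * (center - left) / (right - left)
    return 1.0 - rc / continuum

# Hypothetical use: compare a ~2.35 um Fe/Mg-OH band (chlorite/prehnite-like)
# against the ~2.30 um band typical of Fe/Mg smectites.
# bd_235 = band_depth(wl, spectrum, left=2.26, center=2.35, right=2.43)
# bd_230 = band_depth(wl, spectrum, left=2.24, center=2.30, right=2.38)
```

Distinguishing prehnite or epidote from chlorite and smectites requires combining several such weak-band criteria, which is exactly where noise and spatial mixing make a fully automated approach difficult.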
A general methodology for comparing regional flood estimation models
The flood quantile Q_T with return period T at a site is generally estimated by fitting a statistical distribution to the annual maximum discharge data of that site. However, estimation at a site with few or no hydrological data must be carried out with regional methods, which use the information available at sites that are hydrologically similar to the target site. This procedure involves two steps: (a) determination of the hydrologically similar sites; (b) regional estimation. For a given delineation (step a), we propose three methodological approaches for comparing different regional estimation methods. These approaches, described in detail in this work, are: bootstrap simulation, regression analysis (or empirical Bayes), and a hierarchical Bayesian method.

Estimation of design flows with a given return period is a common problem in hydrologic practice. At sites where data have been recorded during a number of years, such an estimation can be accomplished by fitting a statistical distribution to the series of annual maximum floods and then computing the (1 - 1/T)-quantile of the estimated distribution. However, frequently there are no, or only few, data available at the site of interest, and flood estimation must then be based on regional information. In general, regional flood frequency analysis involves two major steps:
- determination of a set of gauging stations that are assumed to contain information pertinent to the site of interest; this is referred to as delineation of homogeneous regions;
- estimation of the design flood at the target site based on information from the sites of the homogeneous region.
The merits of regional flood frequency analysis, at ungauged sites as well as at sites where some local information is available, are increasingly being acknowledged, and many research papers have addressed the issue. New methods for delineating regions and for estimating floods based on regional information have been proposed in the last decade, but scientists tend to focus on the development of new techniques rather than on testing existing ones. The aim of this paper is to suggest methodologies for comparing different regional estimation alternatives. The concept of homogeneous regions has been employed for a long time in hydrology, but a rigorous definition of it has never been given. Usually, the homogeneity concerns dimensionless statistical characteristics of hydrological variables such as the coefficient of variation (Cv) and the coefficient of skewness (Cs) of annual flood series. A homogeneous region can then be thought of as a collection of stations with flood series whose statistical properties, except for scale, are not significantly different from the regional mean values. Tests based on L-moments are at present widely applied for validating the homogeneity of a given region. Early approaches to regional flood frequency analysis were based on geographical regions, but recent tendencies are to define homogeneous regions from the similarity of basins in the space of catchment characteristics that are related to hydrologic characteristics. Cluster analysis can be used to group similar sites, but has the disadvantage that a site in the vicinity of the cluster border may be closer to sites in other clusters than to those of its own group.
Burn (1990a, b) has recently suggested a method where each site has its own homogeneous region (or region of influence) in which it is located at the centre of gravity. Once a homogeneous region has been delineated, a regional estimation method must be selected. The index flood method, proposed by Dalrymple (1960), and the direct regression method are among the most commonly used procedures. Cunnane (1988) provides an overview of several other methods. The general performance of a regional estimation method depends on the amount of regional information (hydrological as well as physiographical and climatic), and the size and homogeneity of the region considered relevant to the target site. Being strongly data-dependent, comparisons of regional models will be valid on a local scale only. Hence, one cannot expect to reach a general conclusion regarding the relative performance of different models, although some insight may be gained from case studies. Here, we present methodologies for comparing regional flood frequency procedures (combinations of homogeneous regions and estimation methods) for ungauged sites. Hydrological, physiographical and climatic data are assumed to be available at a large number of sites, because a comparison of regional models must be based on real data. The premises of these methodologies are that at each gauged site in the collection of stations considered, one can obtain an unbiased at-site estimate of a given flood quantile, and that the variance of this estimate is known. Regional estimators, obtained by ignoring the hydrological data at the target site, are then compared to the at-site estimate. Three different methodologies are considered in this study:

A) Bootstrap simulation of hydrologic data. In order to preserve the spatial correlation of hydrologic data (which may have an important impact on regional flood frequency procedures), we suggest performing bootstrap simulation of vectors rather than scalar values. Each vector corresponds to a year for which data are available at one or more sites in the considered selection of stations; the elements of the vectors are the different sites. For a given generated data scenario, an at-site estimate and a regional estimate can be calculated at each site considered. As a performance index for a given regional model, one can use, for example, the average (over sites and bootstrap scenarios) relative deviation of the regional estimator from the at-site estimator.

B) Regression analysis. The key idea in this methodology is to perform a regression analysis with a regional estimator as the explanatory variable and the unknown quantile, estimated by the at-site method, as the dependent variable. It is reasonable to assume a linear relation between the true quantiles and the regional estimators. The estimated regression coefficients express the systematic error, or bias, of a given regional procedure, and the model error, estimated for instance by the method of moments, is a measure of its variance. It is preferable that the bias and the variance be as small as possible, suggesting that these quantities be used to rank different regional procedures.

C) Hierarchical Bayes analysis. The regression method employed in (B) can also be regarded as the result of an empirical Bayes analysis in which point estimates of the regression coefficients and model error are obtained.
For several reasons, it may be advantageous to proceed with a complete Bayesian analysis, in which bias and model error are considered as uncertain quantities described by a non-informative prior distribution. Combining the prior distribution and the likelihood function yields, through Bayes' theorem, the posterior distribution of bias and model error. In order to compare different regional models, one can then calculate, for example, the mean or the mode of this distribution and use these values as performance indices, or one can compute the posterior loss.
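To make the vector-bootstrap idea in (A) concrete, here is a minimal sketch in Python. The function name, the crude empirical at-site quantiles and the simple index-flood style regional estimator are illustrative assumptions, not the estimators compared in the paper; the point is only to show how resampling whole years preserves the spatial correlation between sites.

```python
import numpy as np

def bootstrap_relative_deviation(q, T=100, n_boot=500, seed=0):
    """Toy performance index for a regional procedure.

    q : (n_years, n_sites) array of annual maximum flows, NaN where a site
        was not gauged that year.  Whole years (rows) are resampled so that
        the cross-site correlation within a given year is kept intact.
    """
    rng = np.random.default_rng(seed)
    n_years, n_sites = q.shape
    p = 1.0 - 1.0 / T                      # non-exceedance probability of the T-year flood
    devs = []
    for _ in range(n_boot):
        sample = q[rng.integers(0, n_years, n_years), :]      # bootstrap whole years
        at_site = np.nanquantile(sample, p, axis=0)           # crude at-site estimate
        scaled = sample / np.nanmean(sample, axis=0)          # dimensionless flows
        for j in range(n_sites):
            growth = np.nanquantile(np.delete(scaled, j, axis=1), p)  # regional growth factor, target site ignored
            regional = growth * np.nanmean(sample[:, j])              # rescale by the site's index flood
            devs.append(abs(regional - at_site[j]) / at_site[j])
    return float(np.mean(devs))            # average relative deviation over sites and scenarios
```

In a real comparison, the at-site and regional values would come from the fitted distributions and the delineated homogeneous regions discussed above; the year-wise resampling is the essential ingredient.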
Khovanov homology is an unknot-detector
We prove that a knot is the unknot if and only if its reduced Khovanov
cohomology has rank 1. The proof has two steps. We show first that there is a
spectral sequence beginning with the reduced Khovanov cohomology and abutting
to a knot homology defined using singular instantons. We then show that the
latter homology is isomorphic to the instanton Floer homology of the sutured
knot complement: an invariant that is already known to detect the unknot.
Comment: 124 pages, 13 figures
Nonparametric estimation of flood quantiles by the kernel method
Determining the flood discharge for a given return period requires estimating the distribution of annual floods. The use of nonparametric distributions, as an alternative to parametric statistical distributions, is examined in this work. The main challenge in kernel estimation lies in computing the parameter that controls the degree of smoothing of the nonparametric density. We compared several methods and retained the plug-in method and least-squares cross-validation as the most promising. Several interesting conclusions were drawn from this study. Among others, for the estimation of flood quantiles, it appears preferable to use estimators based directly on the distribution function rather than on the density function. A comparison of the plug-in method with the fitting of three statistical distributions led to the conclusion that the kernel method is an attractive alternative to traditional parametric methods.

Traditional flood frequency analysis involves the fitting of a statistical distribution to observed annual peak flows. The choice of statistical distribution is crucial, since it can have a significant impact on design flow estimates. Unfortunately, it is often difficult to determine in an objective way which distribution is the most appropriate. To avoid the inherent arbitrariness associated with the choice of distribution in parametric frequency analysis, one can employ a method based on nonparametric density estimation. Although potentially subject to a larger standard error of quantile estimates, the use of nonparametric densities eliminates the need to select a particular distribution and the potential bias associated with a wrong choice. The kernel method is a conceptually simple approach, similar in nature to a smoothed histogram. The critical parameter in kernel estimation is the smoothing parameter that determines the degree of smoothing. Methods for estimating the smoothing parameter have already been compared in a number of statistical papers. The novelty of our work is the particular emphasis on quantile estimation, in particular the estimation of quantiles outside the range of observed data. The flood estimation problem is unique in this sense and has been the motivating factor for this study. Seven methods for estimating the smoothing parameter are compared in the paper. All are based on some goodness-of-fit measure. More specifically, we considered the least-squares cross-validation method, the maximum likelihood cross-validation method, Adamowski's (1985) method, a plug-in method developed by Altman and Leger (1995) and modified by the authors (Faucher et al., 2001), Breiman's goodness-of-fit criterion method (Breiman, 1977), the variable-kernel maximum likelihood method, and the variable-kernel least-squares cross-validation method. The estimation methods can be classified according to whether they are based on fixed or variable kernels, and whether they are based on the goodness-of-fit of the density function or of the cumulative distribution function. The quality of the different estimation methods was explored in a Monte Carlo study. One hundred (100) samples of sizes 10, 20, 50, and 100 were simulated from an LP3 distribution. The nonparametric estimation methods were then applied to each of the simulated samples, and quantiles with return periods of 10, 20, 50, 100, 200, and 1000 years were estimated.
Bias and root-mean-square error of the quantile estimates were the key figures used to compare methods. The results of the study can be summarized as follows:

1. Comparison of kernels. The literature reports that the kernel choice is relatively unimportant compared to the choice of the smoothing parameter. To determine whether this assertion also holds for the estimation of large quantiles outside the range of the data, we compared six kernel candidates. We found no major differences between the biweight, the Normal, the Epanechnikov, and the EV1 kernels. However, the rectangular and the Cauchy kernels should be avoided.

2. Comparison of sample sizes. The quality of estimates, whether parametric or nonparametric, deteriorates as sample size decreases. To examine the degree of sensitivity to sample size, we compared estimates of the 200-year event obtained by assuming a GEV distribution with those from a nonparametric density estimated by maximum likelihood cross-validation. The main conclusion is that the root-mean-square error of the parametric model (GEV) is more sensitive to sample size than that of the nonparametric model.

3. Comparison of estimators of the smoothing parameter. Among the methods considered in the study, the plug-in method, developed by Altman and Leger (1995) and modified by the authors (Faucher et al., 2001), performed best, along with the least-squares cross-validation method, which had a similar performance. Adamowski's method had to be excluded because it consistently failed to converge. The methods based on variable kernels generally did not perform as well as the fixed-kernel methods.

4. Comparison of density-based and cumulative-distribution-based methods. The only cumulative-distribution-based method considered in the comparison study was the plug-in method. Adamowski's method is also based on the cumulative distribution function, but was rejected for the reasons mentioned above. Although the plug-in method did well in the comparison, it is not clear whether this can be attributed to the fact that it is based on estimation of the cumulative distribution function. One could nevertheless hypothesize that when the objective is to estimate quantiles, a method that emphasizes the cumulative distribution function rather than the density should have certain advantages.

5. Comparison of parametric and nonparametric methods. Nonparametric methods were compared with conventional parametric methods. The LP3, the 2-parameter lognormal, and the GEV distributions were used to fit the simulated samples. It was found that nonparametric methods perform quite similarly to the parametric methods. This is a significant result: because the data were generated from an LP3 distribution, one would intuitively expect the LP3 model to be superior, which, however, was not the case. In actual applications, flood distributions are often irregular, and in such cases nonparametric methods would likely be superior to parametric methods.
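For readers unfamiliar with the kernel approach, the sketch below estimates a flood quantile by inverting a Gaussian-kernel estimate of the cumulative distribution function. The rule-of-thumb bandwidth stands in for the plug-in and cross-validation selectors compared in the paper, and the Gaussian kernel is only one of the candidates mentioned above; treat this as a minimal sketch, not the authors' procedure.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def kernel_quantile(annual_maxima, T, h=None):
    """T-year flood from a Gaussian-kernel CDF estimate
    F_hat(x) = mean(Phi((x - x_i) / h)), solved for F_hat(x) = 1 - 1/T."""
    x = np.asarray(annual_maxima, dtype=float)
    if h is None:
        # Silverman-style rule of thumb; a plug-in or cross-validation
        # bandwidth would replace this in a real study.
        h = 1.06 * x.std(ddof=1) * x.size ** (-0.2)
    p = 1.0 - 1.0 / T
    f = lambda q: norm.cdf((q - x) / h).mean() - p
    lo, hi = x.min() - 10 * h, x.max() + 10 * h
    while f(hi) < 0:          # widen the bracket for large return periods
        hi += 10 * h
    return brentq(f, lo, hi)  # root of F_hat(q) = p

# q100 = kernel_quantile(observed_annual_maxima, T=100)
```

Working with the kernel CDF rather than the kernel density mirrors conclusion 4 above: when the target is a quantile, it is natural to smooth the distribution function directly.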
Extreme events in discrete nonlinear lattices
We perform statistical analysis on discrete nonlinear waves generated through
modulational instability in the context of the Salerno model that interpolates
between the integrable Ablowitz-Ladik (AL) equation and the nonintegrable
discrete nonlinear Schrodinger (DNLS) equation. We focus on extreme events in
the form of discrete rogue or freak waves that may arise as a result of rapid
coalescence of discrete breathers or other nonlinear interaction processes. We
find power law dependence in the wave amplitude distribution accompanied by an
enhanced probability for freak events close to the integrable limit of the
equation. A characteristic peak in the extreme event probability appears that
is attributed to the onset of interaction of the discrete solitons of the AL
equation and the accompanied transition from the local to the global
stochasticity monitored through the positive Lyapunov exponent of a nonlinear
map.
Comment: 5 pages, 4 figures; reference added, figure 2 corrected
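For reference, the Salerno interpolation mentioned above is commonly written in the form below; sign and normalization conventions vary between papers, so this should be read as one standard choice rather than the exact equation used in this work.

```latex
% Salerno lattice: \nu = 0 recovers the integrable Ablowitz-Ladik equation,
% \mu = 0 the nonintegrable discrete nonlinear Schrodinger (DNLS) equation.
\begin{equation}
  i\,\frac{d\psi_n}{dt}
  + \bigl(\psi_{n+1} + \psi_{n-1}\bigr)\bigl(1 + \mu\,|\psi_n|^2\bigr)
  + 2\nu\,|\psi_n|^2\,\psi_n = 0
\end{equation}
```

Tuning the ratio of \mu to \nu moves the lattice between the two limits, which is what allows the extreme-event statistics to be tracked as a function of the distance from integrability.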
A literature review of streamflow forecasting methods
A wide variety of methods are available for streamflow forecasting: stochastic and conceptual models, but also more novel approaches such as artificial neural networks, fuzzy rule-based models, the k-nearest neighbor method, fuzzy regression and regression splines. After a detailed review of these methods and their recent applications, we propose a classification that highlights both the differences and the similarities between these approaches. They are then compared for the distinct problems of short-, medium- and long-term forecasting. Our recommendations also vary with the level of prior information. For example, when long stationary time series are available, we recommend the nonparametric k-nearest neighbor method for short- and medium-term forecasts. Conversely, for longer-term forecasting from a limited number of observations, we suggest a conceptual model coupled with a meteorological model based on the historical record. Although the emphasis is on streamflow forecasting, much of this review, particularly the part dealing with empirical models, is also relevant to the forecasting of other variables.

A large number of models are available for streamflow forecasting. In this paper we classify and compare nine types of models for short-, medium- and long-term flow forecasting, according to six criteria: 1. validity of underlying hypotheses, 2. difficulties encountered when building and calibrating the model, 3. difficulties in computing the forecasts, 4. uncertainty modeling, 5. information required by each type of model, and 6. parameter updating. We first distinguish between empirical and conceptual models, the difference being that conceptual models correspond to simplified representations of the watershed, while empirical models only try to capture the structural relationships between inputs to the watershed and outputs, such as streamflow. Amongst empirical models, we distinguish between stochastic models, i.e. models based on the theory of probability, and non-stochastic models. Three types of stochastic models are presented: statistical regression models, Box-Jenkins models, and the nonparametric k-nearest neighbor method. Statistical linear regression is only applicable to long-term forecasting (monthly flows, for example), since it requires independent and identically distributed observations. It is a simple method of forecasting, and its hypotheses can be validated a posteriori if sufficient data are available. Box-Jenkins models include linear autoregressive models (AR), linear moving average models (MA), linear autoregressive moving average models (ARMA), periodic ARMA models (PARMA) and ARMA models with auxiliary inputs (ARMAX). They are better suited to weekly or daily flow forecasting, since they allow for the explicit modeling of time dependence. Efficient methods are available for designing the model and updating the parameters as more data become available. For both statistical linear regression and Box-Jenkins models, the inputs must be uncorrelated and linearly related to the output. Furthermore, the process must be stationary. When it is suspected that the inputs are correlated or have a nonlinear effect on the output, the k-nearest neighbor method may be considered.
This data-based nonparametric approach simply consists in looking, among past observations of the process, for the k events which are most similar to the present situation. A forecast is then built from the flows which were observed for these k events. Obviously, this approach requires a large database and a stationary process. Furthermore, the time required to calibrate the model and compute the forecasts increases rapidly with the size of the database. A clear advantage of stochastic models is that forecast uncertainty may be quantified by constructing a confidence interval. Three types of non-stochastic empirical models are also discussed: artificial neural networks (ANN), fuzzy linear regression and multivariate adaptive regression splines (MARS). ANNs were originally designed as simple conceptual models of the brain. However, for forecasting purposes, these models can be thought of simply as a subset of nonlinear empirical models. In fact, the ANN model most commonly used in forecasting, a multi-layer feed-forward network, corresponds to a nonlinear autoregressive model (NAR). To capture the moving average components of a time series, it is necessary to use recurrent architectures. ANNs are difficult to design and calibrate, and the computation of forecasts is also complex. Fuzzy linear regression makes it possible to extract linear relationships from small data sets, with fewer hypotheses than statistical linear regression. It does not require the observations to be uncorrelated, nor does it require the error variance to be homogeneous. However, the model is very sensitive to outliers. Furthermore, a posteriori validation of the hypothesis of linearity is not possible for small data sets. MARS models are based on the hypothesis that time series are chaotic rather than stochastic. The main advantage of the method is its ability to model non-stationary processes. The approach is nonparametric and therefore requires a large data set. Amongst conceptual models, we distinguish between physical models, hydraulic machines, and fuzzy rule-based systems. Most conceptual hydrologic models are hydraulic machines, in which the watershed is considered to behave like a network of reservoirs. Physical modeling of a watershed would imply using fundamental physical equations at a small scale, such as the law of conservation of mass. Given the complexity of a watershed, this can be done in practice only for water routing. Consequently, only short-term flow forecasts can be obtained from a physical model, since the effects of precipitation, infiltration and evaporation must be negligible. Fuzzy rule-based systems make it possible to model the water cycle using fuzzy IF-THEN rules, such as: IF it rains a lot in a short period of time, THEN there will be a large flow increase following the concentration time. Each fuzzy quantifier is modeled using a fuzzy number to take into account the uncertainty surrounding it. When sufficient data are available, the fuzzy quantifiers can be constructed from the data. In general, conceptual models require more effort to develop than empirical models. However, for exceptional events, conceptual models can often provide more realistic forecasts, since empirical models are not well suited for extrapolation. A fruitful approach is to combine conceptual and empirical models.
One way of doing this, called extended streamflow prediction (ESP), is to combine a stochastic model for generating meteorological scenarios with a conceptual model of the watershed. Based on this review of flow forecasting models, we recommend for short-term forecasting (hourly and daily flows) the use of the k-nearest neighbor method, Box-Jenkins models, water routing models or hydraulic machines. For medium-term forecasting (weekly flows, for example), we recommend the k-nearest neighbor method and Box-Jenkins models, as well as fuzzy rule-based and ESP models. For long-term forecasting (monthly flows), we recommend statistical and fuzzy regression, Box-Jenkins, MARS and ESP models. It is important to choose a type of model that is appropriate for the problem at hand and for which the available information is sufficient. Since each type of model has its own advantages, it can be more efficient to combine different approaches when forecasting streamflow.
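As a concrete illustration of the k-nearest neighbor forecast described above, the sketch below matches the current feature vector (for example, the last few flows and recent precipitation) against past situations and averages the flows that followed the k most similar ones. The feature choice, the Euclidean metric and the plain average are simplifying assumptions; distance-weighted averages and richer feature sets are common in practice.

```python
import numpy as np

def knn_flow_forecast(past_features, past_next_flow, current_features, k=5):
    """Nonparametric k-nearest neighbor forecast of the next flow value.

    past_features   : (n, d) array, one past 'situation' per row.
    past_next_flow  : (n,) array, flow observed right after each past situation.
    current_features: (d,) array describing the present situation.
    """
    X = np.asarray(past_features, dtype=float)
    y = np.asarray(past_next_flow, dtype=float)
    x0 = np.asarray(current_features, dtype=float)
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-12   # standardize so no variable dominates
    dist = np.linalg.norm((X - mu) / sd - (x0 - mu) / sd, axis=1)
    nearest = np.argsort(dist)[:k]                   # the k most similar past events
    return float(y[nearest].mean())                  # forecast = average of their observed flows
```

As noted above, the approach needs a long stationary record, and the search cost grows with the size of the database.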
Use of historical information in hydrological frequency analysis
Using historical information in a frequency analysis makes better use of the information actually available and should therefore improve the estimation of quantiles with large return periods. Historical information here means information on large floods that occurred before the start of the measurement period (the period of systematic gauging) of lake and river levels and discharges. In general, the use of historical information reduces both the influence of singular values in the systematic records and the standard deviation of the estimates. This paper presents the statistical methods that allow historical information to be modelled.

Use of information about historical floods, i.e. extreme floods that occurred prior to systematic gauging, can often substantially improve the precision of flood quantile estimates. Such information can be retrieved from archives, newspapers, interviews with local residents, or by use of paleohydrologic and dendrohydrologic traces. Various statistical techniques for incorporating historical information into frequency analyses are discussed in this review paper. The basic hypothesis in the statistical modeling of historical information is that a certain perception water level exists and that, during a given historical period preceding the period of gauging, all exceedances of this level have been recorded, be it in newspapers, in people's memory, or through traces in the catchment such as sediment deposits or traces on trees. No information is available on floods that did not exceed the perception threshold. It is further assumed that a period of systematic gauging is available. Figure 1 illustrates this situation. The U.S. Water Resources Council (1982) recommended the use of the method of adjusted moments for fitting the log Pearson type III distribution. A weighting factor is applied to the data below the threshold observed during the gauged period to account for the missing data below the threshold in the historical period. Several studies have pointed out that the method of adjusted moments is inefficient. Maximum likelihood estimators based on partially censored data have been shown to be much more efficient and to provide a practical framework for incorporating imprecise and categorical data. Unfortunately, for some of the most common 3-parameter distributions used in hydrology, the maximum likelihood method poses numerical problems. Recently, some authors have proposed the method of expected moments, a variant of the method of adjusted moments which gives less weight to observations below the threshold. According to preliminary studies, estimators based on expected moments are almost as efficient as maximum likelihood estimators, but have the advantage of avoiding the numerical problems related to the maximization of likelihood functions. Several studies have emphasized the potential gain in estimation accuracy from the use of historical information. Because historical floods are by definition large, their inclusion in a flood frequency analysis can have a major impact on estimates of rare floods. This is particularly true when 3-parameter distributions are considered. Moreover, the use of historical information is a means to increase the representativeness of an outlier in the systematic data.
For example, an extreme outlier will not get the same weight in the analysis if one can state with certainty that it is the largest flood in, say, 200 years, rather than merely the largest flood in, say, 20 years of systematic gauging. Historical data are generally imprecise, and their inaccuracy should be properly accounted for in the analysis. However, even with substantial uncertainty in the data, the use of historical information is a viable means to improve estimates of rare floods.
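A minimal sketch of the censored-data likelihood behind the maximum likelihood approach mentioned above: with a perception threshold, a historical period during which only exceedances of that threshold were recorded, and a systematically gauged series, the log-likelihood combines the density at every observed flood with the distribution function at the threshold raised to the number of historical years without a recorded exceedance. The Gumbel distribution is used purely for illustration in place of the three-parameter distributions discussed in the paper, and all names below are hypothetical.

```python
import numpy as np
from scipy.stats import gumbel_r
from scipy.optimize import minimize

def censored_negloglik(params, gauged, hist_floods, n_hist_years, threshold):
    """Negative log-likelihood with historical information:
    full density for gauged years and recorded historical floods,
    plus F(threshold)**m for the m historical years below the perception level."""
    loc, scale = params
    if scale <= 0:
        return np.inf
    ll = gumbel_r.logpdf(gauged, loc, scale).sum()
    ll += gumbel_r.logpdf(hist_floods, loc, scale).sum()
    m = n_hist_years - len(hist_floods)              # historical years with no recorded flood
    ll += m * gumbel_r.logcdf(threshold, loc, scale)
    return -ll

# Hypothetical use: a 200-year historical period in which only the floods in
# hist_floods exceeded the perception level.
# fit = minimize(censored_negloglik,
#                x0=[np.mean(gauged), np.std(gauged)],
#                args=(gauged, hist_floods, 200, perception_level),
#                method="Nelder-Mead")
```

The expected-moments variant mentioned above replaces this likelihood with moment equations that account for the censoring in a similar way, avoiding the numerical difficulties of maximizing it for some three-parameter distributions.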
Solvable Critical Dense Polymers
A lattice model of critical dense polymers is solved exactly for finite
strips. The model is the first member of the principal series of the recently
introduced logarithmic minimal models. The key to the solution is a functional
equation in the form of an inversion identity satisfied by the commuting
double-row transfer matrices. This is established directly in the planar
Temperley-Lieb algebra and holds independently of the space of link states on
which the transfer matrices act. Different sectors are obtained by acting on
link states with s-1 defects where s=1,2,3,... is an extended Kac label. The
bulk and boundary free energies and finite-size corrections are obtained from
the Euler-Maclaurin formula. The eigenvalues of the transfer matrix are
classified by the physical combinatorics of the patterns of zeros in the
complex spectral-parameter plane. This yields a selection rule for the
physically relevant solutions to the inversion identity and explicit finitized
characters for the associated quasi-rational representations. In particular, in
the scaling limit, we confirm the central charge c=-2 and conformal weights
Delta_s=((2-s)^2-1)/8 for s=1,2,3,.... We also discuss a diagrammatic
implementation of fusion and show with examples how indecomposable
representations arise. We examine the structure of these representations and
present a conjecture for the general fusion rules within our framework.
Comment: 35 pages, v2: comments and references added
Superconductivity-enhanced bias spectroscopy in carbon nanotube quantum dots
We study low-temperature transport through carbon nanotube quantum dots in
the Coulomb blockade regime coupled to niobium-based superconducting leads. We
observe pronounced conductance peaks at finite source-drain bias, which we
ascribe to elastic and inelastic cotunneling processes enhanced by the
coherence peaks in the density of states of the superconducting leads. The
inelastic cotunneling lines display a marked dependence on the applied gate
voltage which we relate to different tunneling-renormalizations of the two
subbands in the nanotube. Finally, we discuss the origin of an especially
pronounced sub-gap structure observed in every fourth Coulomb diamond.
W-Extended Fusion Algebra of Critical Percolation
Two-dimensional critical percolation is the member LM(2,3) of the infinite
series of Yang-Baxter integrable logarithmic minimal models LM(p,p'). We
consider the continuum scaling limit of this lattice model as a `rational'
logarithmic conformal field theory with extended W=W_{2,3} symmetry and use a
lattice approach on a strip to study the fundamental fusion rules in this
extended picture. We find that the representation content of the ensuing closed
fusion algebra contains 26 W-indecomposable representations with 8 rank-1
representations, 14 rank-2 representations and 4 rank-3 representations. We
identify these representations with suitable limits of Yang-Baxter integrable
boundary conditions on the lattice and obtain their associated W-extended
characters. The latter decompose as finite non-negative sums of W-irreducible
characters of which 13 are required. Implementation of fusion on the lattice
allows us to read off the fusion rules governing the fusion algebra of the 26
representations and to construct an explicit Cayley table. The closure of these
representations among themselves under fusion is remarkable confirmation of the
proposed extended symmetry.
Comment: 30 pages