
    An investigation into inter- and intragenomic variations of graphic genomic signatures

    We provide, on an extensive dataset and using several different distances, confirmation of the hypothesis that CGR patterns are preserved along a genomic DNA sequence, and are different for DNA sequences originating from genomes of different species. This finding lends support to the theory that CGRs of genomic sequences can act as graphic genomic signatures. In particular, we compare the CGR patterns of over five hundred different 150,000 bp genomic sequences originating from the genomes of six organisms, each belonging to one of the kingdoms of life: H. sapiens, S. cerevisiae, A. thaliana, P. falciparum, E. coli, and P. furiosus. We also provide preliminary evidence of this method's applicability to closely related species by comparing H. sapiens (chromosome 21) sequences and over one hundred and fifty genomic sequences, also 150,000 bp long, from P. troglodytes (Animalia; chromosome Y), for a total length of more than 101 million base pairs analyzed. We compute pairwise distances between CGRs of these genomic sequences using six different distances, and construct Molecular Distance Maps that visualize all sequences as points in a two-dimensional or three-dimensional space, to simultaneously display their interrelationships. Our analysis confirms that CGR patterns of DNA sequences from the same genome are in general quantitatively similar, while being different for DNA sequences from genomes of different species. Our analysis of the performance of the assessed distances uses three different quality measures and suggests that several distances outperform the Euclidean distance, which has so far been almost exclusively used for such studies. In particular we show that, for this dataset, DSSIM (Structural Dissimilarity Index) and the descriptor distance (introduced here) are best able to classify genomic sequences. (Comment: 14 pages, 6 figures, 5 tables)
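    As a concrete illustration of the construction: a CGR plots each nucleotide as the midpoint between the previous point and the corner assigned to that base, and binning the resulting points into a 2^k × 2^k grid yields a frequency matrix on which pairwise distances can be computed. A minimal sketch follows; the corner assignment A=(0,0), C=(0,1), G=(1,1), T=(1,0) is one common convention and may differ in detail from the paper's.

    ```python
    # Sketch of a Chaos Game Representation (CGR) and its k-mer frequency
    # matrix (FCGR). Corner assignment is an assumed convention, not
    # necessarily the one used in the paper.
    CORNERS = {'A': (0.0, 0.0), 'C': (0.0, 1.0), 'G': (1.0, 1.0), 'T': (1.0, 0.0)}

    def cgr_points(seq):
        """Iterate the chaos game: each point is the midpoint between the
        previous point and the corner of the current base."""
        x, y = 0.5, 0.5  # start at the centre of the unit square
        pts = []
        for base in seq:
            cx, cy = CORNERS[base]
            x, y = (x + cx) / 2, (y + cy) / 2
            pts.append((x, y))
        return pts

    def fcgr(seq, k):
        """Bin CGR points into a 2^k x 2^k grid; cell counts correspond to
        k-mer frequencies of the sequence."""
        n = 2 ** k
        grid = [[0] * n for _ in range(n)]
        for x, y in cgr_points(seq)[k - 1:]:  # first k-1 points are transient
            grid[min(int(y * n), n - 1)][min(int(x * n), n - 1)] += 1
        return grid
    ```

    Pairwise distances (Euclidean, DSSIM, and so on) are then taken between such matrices for different genomic sequences.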

    TESTING THE CAPM AND THE FAMA-FRENCH 3-FACTOR MODEL ON U.S. HIGH-TECH STOCKS

    This master’s thesis tests the capital asset pricing model (CAPM) and the Fama-French 3-factor model (FF3FM) for the U.S. high-tech industry. For a total sample of 120 U.S. high-tech companies we run OLS time-series regressions for both models, using return and accounting data from 2002 to 2016. It is found that, on average, the CAPM is not sufficient in explaining average excess returns for our sample of U.S. high-tech stocks, indicated by significant abnormal returns in the time-series regressions. The FF3FM, however, eliminates or at least lowers the significance of the time-series regression intercepts. Hence, it is found that the latter model outperforms the traditional CAPM for our sample of U.S. high-tech stocks, indicated by lower significance of the alpha terms as well as higher adjusted R2 values in the time-series regressions. The higher explanatory power of the FF3FM compared to the CAPM is mainly driven by the high significance of the size factor measured by the SMB (small-minus-big) variable, which confirms a negative size premium for U.S. high-tech stocks. The book-to-market factor, represented by the HML (high-minus-low) variable, does not appear to help explain average excess returns, which is concluded from an insignificant average regression coefficient. Since the FF3FM proves to be an improvement over the traditional CAPM, it can be recommended to apply the former model as a valuation and decision-making tool for U.S. high-tech stocks. However, these results only hold for stable economic periods, as the research results show that during economic turmoil neither model is sufficient in explaining the respective average excess returns.
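    The time-series regression underlying both models is an OLS fit of a stock's excess returns on the factor returns; for the FF3FM, R_it − R_ft = α_i + b_i·MKT_t + s_i·SMB_t + h_i·HML_t + ε_it, with the CAPM as the special case keeping only the market factor. A minimal sketch with synthetic data (the factor loadings below are hypothetical, not the thesis's estimates):

    ```python
    import numpy as np

    # Synthetic monthly data standing in for a 2002-2016 sample; the
    # loadings (1.1, -0.4, 0.05) and alpha are illustrative assumptions.
    rng = np.random.default_rng(0)
    T = 180  # 15 years of monthly observations

    mkt = rng.normal(0.005, 0.04, T)  # market excess return (MKT-RF)
    smb = rng.normal(0.002, 0.03, T)  # size factor (small-minus-big)
    hml = rng.normal(0.001, 0.03, T)  # value factor (high-minus-low)
    excess = (0.001 + 1.1 * mkt - 0.4 * smb + 0.05 * hml
              + rng.normal(0.0, 0.02, T))  # stock excess return + noise

    # OLS time-series regression: R_i - R_f = alpha + b*MKT + s*SMB + h*HML
    X = np.column_stack([np.ones(T), mkt, smb, hml])
    coef, *_ = np.linalg.lstsq(X, excess, rcond=None)
    alpha, beta_mkt, beta_smb, beta_hml = coef
    ```

    A significant alpha indicates abnormal returns the factors fail to capture; in the thesis's setup, the FF3FM's extra SMB and HML regressors are what reduce that significance relative to the CAPM.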

    Methods for relativizing properties of codes

    The usual setting for information transmission systems assumes that all words over the source alphabet need to be encoded. The demands on encodings of messages with respect to decodability, error-detection, etc. are thus relative to the whole set of words. In reality, depending on the information source, far fewer messages are transmitted, all belonging to some specific language. Hence the original demands on encodings can be weakened if only the words in that language are to be considered. This leads one to relativize the properties of encodings or codes to the language at hand. We analyse methods of relativization in this sense. There appear to be four equally convincing notions of relativization; we compare them, clarify the differences between the four approaches, and show that each has its own merits for specific code properties. We also consider the decidability of relativized properties. If P is a property defining a class of codes and L is a language, one asks, for a given language C, whether C satisfies P relative to L. We show that in the realm of regular languages this question is mostly decidable.
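    To make the idea concrete, one natural relativization of unique decodability requires only that every word of the language L have at most one factorization over the code C. The following is an illustrative brute-force check for a finite L (an assumed formulation for illustration, not the paper's definitions or decision procedure):

    ```python
    def factorizations(word, code):
        """Count distinct factorizations of `word` into codewords,
        by dynamic programming over prefixes."""
        counts = [1] + [0] * len(word)  # counts[i]: factorizations of word[:i]
        for i in range(1, len(word) + 1):
            for c in code:
                if word[:i].endswith(c):
                    counts[i] += counts[i - len(c)]
        return counts[len(word)]

    def uniquely_decodable_rel(code, language):
        """Unique decodability relativized to a finite language L:
        every word of L has at most one factorization over the code."""
        return all(factorizations(w, code) <= 1 for w in language)
    ```

    For example, C = {0, 01, 10} is not uniquely decodable in general (010 factors as 0·10 and as 01·0), yet it is uniquely decodable relative to a language such as L = {01, 10, 0110}, whose words each admit only one factorization.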