Transculturating the modern : Zhang Ruogu's literary life in 1930s Shanghai
Zhang Ruogu 張若谷 (1905-1967), now known primarily as an urban writer, began his literary career by translating and commenting on French literature in newspapers and journals and subsequently published several urban stories. He was also an active advocate of urban life and played an important role in introducing the idea of the French-style café gathering and European cultural trends. Zhang Ruogu's literary activities, his admiration of French literature and culture, and his works inspired by the French literary works that he admired or translated are the focus of this paper. From Paris to Shanghai, and from nineteenth-century French literature to the urban stories of 1930s Shanghai, these literary trends and works advanced through a process of translation and interpretation. What occurred during this process of translation and interpretation? How does examining it aid us in understanding Zhang Ruogu's own work and modern Chinese literature? This paper addresses Zhang Ruogu's translations, adaptations, and interpretations of French literature, as well as his own literary works.
Manufacturing the classic : the cultural politics of Jin Yong studies (經典製造 : 金庸研究的文化政治)
Jin Yong began serializing his first wuxia novel, The Book and the Sword (書劍恩仇錄), in the New Evening Post (新晚報) in 1955, and laid down his pen with The Deer and the Cauldron (鹿鼎記) in 1970, producing fifteen novels in all. These fifteen novels have enjoyed lasting popularity across the Chinese-speaking world and have generated a great deal of discussion. In 1984, Taiwan's Yuanjing Publishing (遠景出版社) issued a series of "Jin studies" research volumes (金學研究叢書), and the term "Jin studies" (金學) formally entered circulation. Early discussion of Jin Yong's works consisted mainly of short, note-like essays centered on readers' impressions and on the plots and characters of the books. From the late 1980s onward, "Jin studies" became a popular research topic in academia; universities even hosted academic conferences on Jin Yong's fiction, attended largely by renowned scholars, and more recently the term "Jinyongology" (金庸學) has appeared. Wang Rongwen (王榮文), publisher of the Jin studies research series, holds that Jin Yong "is very likely to see the canonization of his works completed within his own lifetime, a rare stroke of fortune for a writer in any time or place." David Der-wei Wang (王德威) likewise remarks in his preface to the proceedings of the International Conference on Jin Yong's Fiction: "The publication of this volume undoubtedly advances the canonization of Jin Yong's works one step further."
Such a situation is extremely rare in contemporary Chinese-language writing and is a cultural phenomenon well worth reflecting on. That Jin Yong's novels have come to be regarded as "classics" is, of course, due in part to the intrinsic quality of the novels themselves, which are genuinely outstanding works. Yet this is not the only reason, nor even the most important one; external factors in the cultural context have played a greater role. As the author, Jin Yong devoted considerable effort to packaging his novels as "classic literature" for presentation to readers; in addition, through sustained discussion in the academic and cultural circles of mainland China, Hong Kong, and Taiwan, his novels finally became "classics." However, because the political and cultural environments of the three regions differ, the paths of canonization that Jin Yong's novels have taken in mainland China, Hong Kong, and Taiwan also differ considerably. The aims of this thesis are, first, to analyze how Jin Yong manufactured a "classic" image for his novels, and then to trace the different processes through which his novels were canonized, so as to explore the cultural-political significance embedded in the phenomenon of Jin studies across the three regions.
Eye detection using discriminatory features and an efficient support vector machine
Accurate and efficient eye detection has broad applications in computer vision, machine learning, and pattern recognition. This dissertation presents a number of accurate and efficient eye detection methods using various discriminatory features and a new efficient Support Vector Machine (eSVM).
This dissertation first introduces five popular image representations: the gray-scale representation, the color representation, the 2D Haar wavelet representation, the Histograms of Oriented Gradients (HOG) representation, and the Local Binary Patterns (LBP) representation. It then applies these representations to derive five types of discriminatory features, and presents comparative assessments of their performance on the eye detection problem.
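As a rough illustration of how such representations can be computed in practice, the following sketch derives gray-scale, 2D Haar wavelet, HOG, and LBP-histogram features from a gray-scale eye patch using NumPy, PyWavelets, and scikit-image. It is a minimal sketch under assumed parameters and patch sizes, not the dissertation's actual pipeline.

```python
# Illustrative sketch only: standard library implementations of several of the
# representations named above, applied to a gray-scale eye candidate window.
# Parameters and patch size are assumptions, not the dissertation's settings.
import numpy as np
import pywt                                   # PyWavelets
from skimage.feature import hog, local_binary_pattern

def eye_patch_features(patch):
    """patch: 2-D float array holding a gray-scale eye candidate window."""
    # Gray-scale representation: the raw pixel values themselves.
    gray_feat = patch.ravel()

    # 2D Haar wavelet representation: one level of decomposition.
    cA, (cH, cV, cD) = pywt.dwt2(patch, 'haar')
    haar_feat = np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

    # HOG representation.
    hog_feat = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm='L2-Hys')

    # LBP representation, summarized as a histogram of uniform patterns.
    lbp = local_binary_pattern(patch, P=8, R=1, method='uniform')
    lbp_feat, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)

    return gray_feat, haar_feat, hog_feat, lbp_feat
```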
This dissertation further proposes two discriminatory feature extraction (DFE) methods for eye detection. The first DFE method, discriminant component analysis (DCA), improves upon the popular principal component analysis (PCA) method. PCA can derive the optimal features for data representation but not for classification. In contrast, DCA, which applies a new criterion vector defined on two novel measure vectors, derives the optimal discriminatory features in the whitened PCA space for two-class classification problems. The second DFE method, clustering-based discriminant analysis (CDA), improves upon the popular Fisher linear discriminant (FLD) method. A major disadvantage of the FLD is that it may not be able to extract enough features to achieve satisfactory performance, especially for two-class problems. To address this problem, three CDA models (CDA-1, -2, and -3) are proposed that take advantage of clustering. For every CDA model, a new between-cluster scatter matrix is defined. The CDA method can thus derive adequate features to achieve satisfactory performance for eye detection. Furthermore, the clustering nature of the three CDA models and the nonparametric nature of the CDA-2 and -3 models further improve detection performance over the conventional FLD method.
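To make the clustering idea concrete, the hypothetical sketch below shows one plausible clustering-based discriminant projection in the spirit of CDA: each class is sub-clustered with k-means, a between-cluster scatter matrix is built from the sub-cluster means, and a generalized eigen-problem is solved. The exact CDA-1/-2/-3 criteria and the DCA criterion vector are not reproduced here.

```python
# Hypothetical sketch of a clustering-based discriminant projection in the
# spirit of CDA; the actual CDA-1/-2/-3 definitions in the dissertation differ
# in their scatter-matrix criteria.
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def cda_like_projection(X, y, n_clusters=3, n_components=2, reg=1e-6):
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    Sb = np.zeros((d, d))          # between-cluster scatter
    Sw = np.zeros((d, d))          # within-cluster scatter
    for c in np.unique(y):
        Xc = X[y == c]
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Xc)
        for k in range(n_clusters):
            Xk = Xc[labels == k]
            if Xk.shape[0] == 0:
                continue
            mk = Xk.mean(axis=0)
            Sb += Xk.shape[0] * np.outer(mk - mean_all, mk - mean_all)
            Sw += (Xk - mk).T @ (Xk - mk)
    # Directions maximizing between-cluster relative to within-cluster scatter.
    vals, vecs = eigh(Sb, Sw + reg * np.eye(d))
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_components]]   # projection matrix W (d x n_components)
```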
This dissertation finally presents a new efficient Support Vector Machine (eSVM) for eye detection that improves the computational efficiency of the conventional Support Vector Machine (SVM). The eSVM first defines a Θ set that consists of the training samples on the wrong side of their margin derived from the conventional soft-margin SVM. The Θ set plays an important role in controlling the generalization performance of the eSVM. The eSVM then introduces only a single slack variable for all the training samples in the Θ set, and as a result, only a very small number of those samples in the Θ set become support vectors. The eSVM hence significantly reduces the number of support vectors and improves the computational efficiency without sacrificing the generalization performance. A modified Sequential Minimal Optimization (SMO) algorithm is then presented to solve the large Quadratic Programming (QP) problem defined in the optimization of the eSVM.
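A minimal sketch of the first step described above, identifying the Θ set of training samples that lie on the wrong side of their margin under a conventional soft-margin SVM, is given below using scikit-learn. The eSVM's single shared slack variable and its modified SMO solver are not reproduced; the function name and parameters are illustrative assumptions.

```python
# Illustrative only: locate the Theta set described above, i.e. training
# samples with y_i * f(x_i) < 1 under a conventional soft-margin SVM.
import numpy as np
from sklearn.svm import SVC

def theta_set(X, y, C=1.0, kernel='rbf'):
    """X: (n, d) feature matrix; y: labels in {-1, +1}. Returns Theta-set indices."""
    svm = SVC(C=C, kernel=kernel).fit(X, y)       # conventional soft-margin SVM
    margins = y * svm.decision_function(X)        # functional margin of each sample
    return np.where(margins < 1.0)[0]             # samples violating their margin
```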
Three large-scale face databases, the Face Recognition Grand Challenge (FRGC) version 2 database, the BioID database, and the FERET database, are used to evaluate the proposed eye detection methods. Experimental results show the effectiveness of the proposed methods, which improve upon some state-of-the-art eye detection methods.
Moments of Being, A Real Trauma Truth of Virginia Woolf?
Master’s thesis in English Literature ENG-399
Introduction and Exemplars of Uncertainty Decomposition
Uncertainty plays a crucial role in machine learning. Both model trustworthiness and performance require an understanding of uncertainty, especially for models used in high-stakes applications where errors can have cataclysmic consequences, such as medical diagnosis and autonomous driving. Accordingly, uncertainty decomposition and quantification have attracted increasing attention in recent years. This short report aims to demystify the notion of uncertainty decomposition through an introduction to two types of uncertainty and several decomposition exemplars, including maximum likelihood estimation, Gaussian processes, deep neural networks, and ensemble learning. Finally, cross-connections to other topics in this seminar and two conclusions are provided.
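As one concrete exemplar of the ensemble case, the sketch below applies the standard entropy-based decomposition (an assumption about which formulation the report uses): total predictive uncertainty splits into aleatoric uncertainty (the expected entropy of the individual members' predictions) and epistemic uncertainty (the mutual-information remainder).

```python
# Sketch of the standard entropy-based uncertainty decomposition for an
# ensemble of classifiers; one common choice, not necessarily the exact
# formulation used in the report.
import numpy as np

def decompose_uncertainty(member_probs, eps=1e-12):
    """member_probs: array of shape (n_members, n_samples, n_classes)
    holding each ensemble member's predicted class probabilities."""
    mean_probs = member_probs.mean(axis=0)
    # Total uncertainty: entropy of the averaged predictive distribution.
    total = -np.sum(mean_probs * np.log(mean_probs + eps), axis=-1)
    # Aleatoric uncertainty: average entropy of the individual members.
    aleatoric = -np.sum(member_probs * np.log(member_probs + eps), axis=-1).mean(axis=0)
    # Epistemic uncertainty: mutual information between prediction and model choice.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic
```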