Adaptive Testing for Alphas in High-dimensional Factor Pricing Models
This paper proposes a new procedure to validate multi-factor pricing
theory by testing for the presence of alphas in linear factor pricing models with a
large number of assets. Because inefficient market pricing is likely to
occur in only a small fraction of exceptional assets, we develop a testing procedure
that is particularly powerful against sparse signals. Drawing on
high-dimensional Gaussian approximation theory, we propose a simulation-based
approach to approximate the limiting null distribution of the test. Our
numerical studies show that the new procedure delivers reasonable size and
achieves substantial power improvements over existing tests under
sparse alternatives, especially for weak signals.
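The abstract's procedure can be illustrated with a toy version: a max-type test over per-asset alpha t-statistics, with the null distribution approximated by a Gaussian multiplier bootstrap. This is a minimal sketch under assumed simplifications (plain OLS per asset, a simple multiplier scheme), not the paper's exact procedure; `max_alpha_test` and its arguments are hypothetical names.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_alpha_test(R, F, B=2000):
    """Max-type test of H0: all alphas are zero in R_t = alpha + beta'F_t + eps_t.

    R: (T, N) excess returns; F: (T, K) factor returns.
    The null distribution of the max statistic is approximated by a
    Gaussian multiplier bootstrap, in the spirit of high-dimensional
    Gaussian approximation theory.
    """
    T, N = R.shape
    X = np.column_stack([np.ones(T), F])          # design with intercept
    coef, *_ = np.linalg.lstsq(X, R, rcond=None)  # (K+1, N): row 0 = alphas
    resid = R - X @ coef
    # weights mapping residuals to the intercept estimate of each asset
    h = X @ np.linalg.inv(X.T @ X)[:, 0]          # (T,)
    # heteroskedasticity-robust s.e. of each alpha
    se = np.sqrt((h[:, None] ** 2 * resid ** 2).sum(axis=0))
    stat = np.abs(coef[0] / se).max()             # max |t_i| over assets
    # multiplier bootstrap: perturb residuals by i.i.d. N(0,1) weights
    null_stats = np.empty(B)
    for b in range(B):
        w = rng.standard_normal(T)
        alpha_b = (h[:, None] * w[:, None] * resid).sum(axis=0)
        null_stats[b] = np.abs(alpha_b / se).max()
    pval = (null_stats >= stat).mean()
    return stat, pval
```

Under the null (zero alphas), the returned p-value should be roughly uniform; a large max statistic relative to the bootstrap draws signals a sparse violation of the pricing model.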
Subsidiary Entrepreneurial Alertness: Antecedents and Outcomes
This thesis brings together concepts from international business and entrepreneurship to develop a framework of the facilitators of subsidiary innovation and performance. It proposes that Subsidiary Entrepreneurial Alertness (SEA) facilitates the recognition of opportunities (the origin of subsidiary initiatives). First introduced by Kirzner (1979) in the context of the individual, entrepreneurial alertness (EA) is the ability to notice an opportunity without actively searching. Similarly to entrepreneurial alertness at the individual level, this study argues that SEA enables the subsidiary to select the opportunities best suited to the resources available. The research further develops the conceptualisation of SEA by drawing on work by Tang et al. (2012), which identifies three distinct activities of EA: scanning and search (identifying opportunities unseen by others because of their awareness gaps), association and connection of information, and evaluation and judgement to interpret or anticipate the future viability of opportunities. The study then hypothesises that SEA leads to opportunity recognition at the subsidiary level, and further hypothesises innovation and performance as outcomes of opportunity recognition. These arguments are brought together to develop and test a comprehensive theoretical model.
The theoretical model is tested through a mail survey of the CEOs/MDs of foreign subsidiaries in the Republic of Ireland (an innovative hub for foreign subsidiaries). This method was selected as the best way to reach the targeted respondents and, given the depth of knowledge those respondents hold, to answer the research questions substantively. The results were examined using partial least squares structural equation modelling (PLS-SEM). The findings confirm that two critical aspects of subsidiary context, subsidiary brokerage and subsidiary credibility, are positively related to SEA. The study also establishes a positive link between SEA and both the generation of innovation and the subsidiary's performance. The thesis makes three significant contributions to the subsidiary literature: it 1) introduces and develops the concept of SEA, 2) identifies the antecedents of SEA, and 3) demonstrates the impact of SEA on subsidiary opportunity recognition. Implications for subsidiaries, headquarters, and policy makers are discussed, along with the limitations of the study.
On Monte Carlo methods for the Dirichlet process mixture model, and the selection of its precision parameter prior
Two issues commonly faced by users of Dirichlet process mixture models are: 1) how to appropriately select a hyperprior for its precision parameter alpha, and 2) the typically slow mixing of the MCMC chain produced by conditional Gibbs samplers based on its stick-breaking representation, as opposed to marginal collapsed Gibbs samplers based on the Polya urn, which have smaller integrated autocorrelation times.
In this thesis, we analyse the most common approaches to hyperprior selection for alpha, we identify their limitations, and we propose a new methodology to overcome them.
To address slow mixing, we first revisit three label-switching Metropolis moves from the literature (Hastie et al., 2015; Papaspiliopoulos and Roberts, 2008), improve them, and introduce a fourth move. Second, we revisit two i.i.d. sequential importance samplers which operate in the collapsed space (Liu, 1996; S. N. MacEachern et al., 1999), and we develop a new sequential importance sampler for the stick-breaking parameters of Dirichlet process mixtures, which operates in the stick-breaking space and has minimal integrated autocorrelation time. Third, we introduce the i.i.d. transcoding algorithm which, conditional on a partition of the data, can infer which specific stick in the stick-breaking construction each observation originated from. We use it as a building block to develop the transcoding sampler, which removes the need for label-switching Metropolis moves in the conditional stick-breaking sampler: it uses the better-performing marginal sampler (or any other sampler) to drive the MCMC chain and augments its exchangeable partition posterior with conditional i.i.d. stick-breaking parameter inferences after the fact, thereby inheriting the shorter autocorrelation times.
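The two representations contrasted in this abstract can be sketched in a few lines: the conditional (stick-breaking) view samples explicit mixture weights, while the marginal (Polya urn) view samples a partition of the data directly. This is a minimal illustration of the constructions themselves, not of the samplers developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def stick_breaking(alpha, K):
    """Truncated stick-breaking weights of a Dirichlet process DP(alpha, G0)."""
    v = rng.beta(1.0, alpha, size=K)   # stick proportions v_k ~ Beta(1, alpha)
    v[-1] = 1.0                        # truncate so the weights sum to one
    # w_k = v_k * prod_{j<k} (1 - v_j)
    w = v * np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return w

def polya_urn_partition(alpha, n):
    """Sample a partition of n items from the Chinese restaurant process,
    the marginal (collapsed) view of the same Dirichlet process."""
    labels = np.zeros(n, dtype=int)
    counts = [1]                       # first item seeds the first cluster
    for i in range(1, n):
        # join cluster k with prob. counts[k]/(i+alpha), new one w.p. alpha/(i+alpha)
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
        labels[i] = k
    return labels
```

The transcoding idea described above is, informally, the reverse direction: given a partition such as the one produced by `polya_urn_partition`, infer which stick of the stick-breaking construction each cluster corresponds to.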
Examples of works for practicing staccato technique on the clarinet
The stages of strengthening staccato technique on the clarinet were worked through by means of repertoire studies. Rhythm and nuance exercises designed to speed up staccato passages were included. The most important aim of the study was not staccato practice alone, but also attention to the precision of simultaneous finger–tongue coordination. To make the staccato work more productive, etude practice was incorporated alongside the repertoire studies. Meticulous attention to these exercises, together with the inspiring effect of staccato practice, added a new dimension to the player's musical identity. Each stage of eight original repertoire studies is described, with each stage intended to reinforce the next level of performance and technique. The study reports in which areas the staccato technique is used and what results were obtained, and sets out how the notes take shape through finger and tongue coordination and within what kind of practice discipline this happens. Reed, notation, diaphragm, fingers, tongue, nuance, and discipline were found to form an inseparable whole in staccato technique. A literature review of studies on staccato was carried out; it showed that repertoire studies employing staccato in clarinet technique are scarce, while the survey of method books found that etudes predominate. Exercises to speed up and strengthen the clarinet's staccato technique are therefore presented. It was observed that interspersing repertoire work among the staccato etudes relaxes the mind and increases motivation. The choice of a suitable reed for staccato practice was also emphasized: a suitable reed was found to increase tongue speed, a good reed choice depends on the reed speaking easily, and if the reed does not support the power of the tongue stroke, a better reed must be selected. Interpreting a piece from beginning to end in staccato can be difficult; in this respect, the study showed that observing the written musical nuances eases tonguing performance. Passing the acquired knowledge and experience on to future generations in a way that fosters development is encouraged. The study explains how forthcoming pieces can be worked out and how the staccato technique can be mastered, with the aim of resolving staccato problems in a shorter time. Committing the exercises to memory is as important as teaching the fingers their places. A work that emerges as the result of such determination and patience will raise success to even higher levels.
Model Diagnostics meets Forecast Evaluation: Goodness-of-Fit, Calibration, and Related Topics
Principled forecast evaluation and model diagnostics are vital in fitting probabilistic models and forecasting outcomes of interest. A common principle is that fitted or predicted distributions ought to be calibrated, ideally in the sense that the outcome is indistinguishable from a random draw from the posited distribution. Much of this thesis is centered on calibration properties of various types of forecasts.
In the first part of the thesis, a simple algorithm for exact multinomial goodness-of-fit tests is proposed. The algorithm computes exact p-values based on various test statistics, such as the log-likelihood ratio and Pearson's chi-square. A thorough analysis shows improvement on extant methods. However, the runtime of the algorithm grows exponentially in the number of categories, so its use is limited.
In the second part, a framework rooted in probability theory is developed, which gives rise to hierarchies of calibration, and applies to both predictive distributions and stand-alone point forecasts. Based on a general notion of conditional T-calibration, the thesis introduces population versions of T-reliability diagrams and revisits a score decomposition into measures of miscalibration, discrimination, and uncertainty. Stable and efficient estimators of T-reliability diagrams and score components arise via nonparametric isotonic regression and the pool-adjacent-violators algorithm. For in-sample model diagnostics, a universal coefficient of determination is introduced that nests and reinterprets the classical R² in least squares regression.
In the third part, probabilistic top lists are proposed as a novel type of prediction in classification, which bridges the gap between single-class predictions and predictive distributions. The probabilistic top list functional is elicited by strictly consistent evaluation metrics, based on symmetric proper scoring rules, which admit comparison of various types of predictions.
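The pool-adjacent-violators algorithm mentioned in this abstract is short enough to sketch. The following is a minimal illustration of PAVA-based isotonic recalibration of probability forecasts against binary outcomes, in the spirit of the reliability-diagram estimators described; the function names are hypothetical.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y."""
    out = []  # stack of blocks, each stored as [block mean, block size]
    for v in y:
        out.append([float(v), 1])
        # merge adjacent blocks while they violate monotonicity
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, n2 = out.pop()
            m1, n1 = out.pop()
            out.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    return np.concatenate([[m] * n for m, n in out])

def reliability_curve(forecasts, outcomes):
    """Isotonic recalibration: sort by forecast value, then fit a
    nondecreasing curve to the binary outcomes with PAVA.  The pairs
    (sorted forecast, recalibrated value) trace a reliability diagram."""
    order = np.argsort(forecasts)
    recal = pava(np.asarray(outcomes, dtype=float)[order])
    return np.asarray(forecasts)[order], recal
```

For a calibrated forecaster the recalibrated curve lies close to the diagonal; deviations between `forecasts[order]` and `recal` visualize miscalibration.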
Decoding spatial location of attended audio-visual stimulus with EEG and fNIRS
When analyzing complex scenes, humans often focus their attention on an object at a particular spatial location in the presence of background noise and irrelevant visual objects. The ability to decode the attended spatial location would facilitate brain-computer interfaces (BCI) for complex scene analysis. Here, we tested two different neuroimaging technologies and investigated their capability to decode audio-visual spatial attention in the presence of competing stimuli from multiple locations. For functional near-infrared spectroscopy (fNIRS), we targeted the dorsal frontoparietal network, including the frontal eye field (FEF) and intra-parietal sulcus (IPS), as well as the superior temporal gyrus/planum temporale (STG/PT). All of these were shown in previous functional magnetic resonance imaging (fMRI) studies to be activated by auditory, visual, or audio-visual spatial tasks. We found that fNIRS provides robust decoding of attended spatial locations for most participants and correlates with behavioral performance. Moreover, we found that FEF makes a large contribution to decoding performance. Surprisingly, performance was significantly above chance level 1 s after cue onset, which is well before the peak of the fNIRS response.
For electroencephalography (EEG), while there are several successful EEG-based algorithms, to date all of them have focused exclusively on the auditory modality, where eye-related artifacts are minimized or controlled. Successful integration into more ecologically typical usage requires careful consideration of eye-related artifacts, which are inevitable. We showed that fast and reliable decoding can be achieved with or without an ocular-artifact removal algorithm. Our results show that EEG and fNIRS are promising platforms for compact, wearable technologies that could be applied to decode attended spatial location and reveal contributions of specific brain regions during complex scene analysis.
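A cross-validated decoder of the kind used in such studies can be sketched with a simple regularized linear classifier. This is an illustrative stand-in, not the decoding pipeline of the thesis: feature extraction from the EEG/fNIRS signals is assumed to have been done already, and `cv_decode` and its parameters are hypothetical names.

```python
import numpy as np

rng = np.random.default_rng(2)

def cv_decode(X, y, folds=5, ridge=1.0):
    """Cross-validated decoding accuracy for binary attended-location
    labels y (array of 0/1) from neural features X (n_trials, n_features),
    using a ridge regression classifier with a sign read-out."""
    n = len(y)
    idx = rng.permutation(n)           # shuffle trials before folding
    accs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        Xtr = X[train]
        ytr = y[train] * 2.0 - 1.0     # map labels {0,1} -> {-1,+1}
        # ridge solution: (X'X + lambda I)^{-1} X'y
        w = np.linalg.solve(Xtr.T @ Xtr + ridge * np.eye(X.shape[1]),
                            Xtr.T @ ytr)
        pred = (X[test] @ w > 0).astype(int)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))
```

Accuracy well above the 50% chance level across folds would indicate that the features carry information about the attended location, mirroring the above-chance decoding reported in the abstract.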
Assessing Consistency in Single-Case Data Features Using Modified Brinley Plots
The current text deals with the assessment of consistency of data features from experimentally similar phases, and of consistency of effects, in single-case experimental designs. Although consistency is frequently mentioned as a critical feature, few quantifications have been proposed so far, namely under the acronyms CONDAP (consistency of data patterns in similar phases) and CONEFF (consistency of effects). Whereas CONDAP allows assessing the consistency of data patterns, the proposals made here focus on the consistency of data features such as level, trend, and variability, as represented by summary measures (mean, ordinary least squares slope, and standard deviation, respectively). The assessment of consistency of effects is also made in terms of these three data features, while also including the study of the consistency of an immediate effect (if expected). The summary measures are represented as points on a modified Brinley plot, and their similarity is assessed via quantifications of distance. Both absolute and relative measures of consistency are proposed: the former expressed in the same measurement units as the outcome variable and the latter as a percentage. Illustrations with real data sets (multiple baseline, ABAB, and alternating treatments designs) show the wide applicability of the proposals. We developed a user-friendly website that offers both the graphical representations and the quantifications.
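The summary measures and the distance-based consistency quantifications described in this abstract can be illustrated as follows. The exact formulas are a hypothetical sketch: in particular, the relative (percentage) measure below normalizes by the pooled feature magnitude, which is an illustrative choice and not necessarily the paper's definition.

```python
import numpy as np

def phase_features(y):
    """Summary features of one phase of single-case data:
    level (mean), trend (OLS slope over time), variability (SD)."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    slope = np.polyfit(t, y, 1)[0]     # ordinary least squares slope
    return {"level": y.mean(),
            "trend": slope,
            "variability": y.std(ddof=1)}

def consistency(feature_a, feature_b):
    """Consistency of one feature across two experimentally similar
    phases: absolute distance from the diagonal of a modified Brinley
    plot, and (illustratively) that distance as a percentage of the
    pooled feature magnitude."""
    diff = abs(feature_a - feature_b)
    denom = (abs(feature_a) + abs(feature_b)) / 2 or 1.0  # guard zero
    return diff, 100.0 * diff / denom
```

On a modified Brinley plot, each pair `(feature_a, feature_b)` is one point; points on the identity diagonal indicate perfect consistency, and the distances returned by `consistency` quantify departures from it.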