Examples of works to practice staccato technique in clarinet instrument
The stages of strengthening the clarinet's staccato technique were applied through repertoire studies. Rhythm and nuance exercises to speed up staccato transitions were included. The most important aim of the study is not staccato practice alone, but also an emphasis on the precision of simultaneous finger-tongue coordination. To make staccato practice more productive, étude work was incorporated into the repertoire studies. Careful, meticulous attention to these exercises, together with the inspiring effect of staccato practice, added a new dimension to musical identity. Each stage of eight original repertoire studies is described, with each stage designed to strengthen the next level of performance and technique. The study reports in which areas the staccato technique is used and what results were obtained. It plans how notes take shape through finger and tongue coordination and within what kind of practice discipline this occurs. It was found that the concepts of reed, notation, diaphragm, finger, tongue, nuance, and discipline form an inseparable whole in staccato technique. A literature review was conducted to survey existing work on staccato. The survey found that repertoire studies employing staccato in clarinet technique are scarce, while a survey of method books showed that études predominate. Accordingly, exercises for speeding up and strengthening the clarinet's staccato technique are presented. It was observed that interspersing repertoire work among staccato études relaxes the mind and increases motivation. Choosing the right reed for staccato practice is also emphasized: a suitable reed was found to increase tongue speed, and a good reed choice depends on the reed producing sound easily. If the reed does not support the power of tonguing, the need to select a more suitable reed is stressed. Interpreting a piece from beginning to end during staccato practice can be difficult; in this respect, the study showed that observing the given musical nuances eases tonguing performance. Passing the acquired knowledge and experience on to future generations, in a way that fosters development, is encouraged. How upcoming pieces can be worked out and how the staccato technique can be mastered is explained, with the aim of resolving the staccato technique in a shorter time. It is as important to commit the exercises to memory as it is to teach the fingers their places. A work that emerges as the result of such determination and patience will raise achievement to even higher levels
Image classification over unknown and anomalous domains
A longstanding goal in computer vision research is to develop methods that are simultaneously applicable to a broad range of prediction problems. In contrast to this, models often perform best when they are specialized to some task or data type. This thesis investigates the challenges of learning models that generalize well over multiple unknown or anomalous modes and domains in data, and presents new solutions for learning robustly in this setting.
Initial investigations focus on normalization for distributions that contain multiple sources (e.g. images in different styles like cartoons or photos). Experiments demonstrate the extent to which existing modules, batch normalization in particular, struggle with such heterogeneous data, and a new solution is proposed that can better handle data from multiple visual modes, using differing sample statistics for each.
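The module itself is not specified in this summary; a minimal sketch of the underlying idea, normalizing each visual mode with its own sample statistics rather than with statistics pooled over a mixed batch, might look like the following (the function name and NumPy formulation are illustrative assumptions, not the thesis's implementation):

```python
import numpy as np

def modewise_normalize(x, modes, eps=1e-5):
    """Normalize each sample with the mean/variance of its own mode,
    rather than with statistics pooled over the whole (mixed) batch.

    x:     (N, D) array of features
    modes: (N,) integer mode assignment per sample (e.g. photo=0, cartoon=1)
    """
    out = np.empty_like(x, dtype=float)
    for m in np.unique(modes):
        idx = modes == m
        mu = x[idx].mean(axis=0)       # per-mode mean
        var = x[idx].var(axis=0)       # per-mode variance
        out[idx] = (x[idx] - mu) / np.sqrt(var + eps)
    return out
```

With a heterogeneous batch (say, photos centred far from cartoons), shared batch statistics shift every sample toward the pooled mean; per-mode statistics keep each mode centred and unit-scaled, which is the failure mode the experiments above probe.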
While ideas to counter the overspecialization of models have been formulated in sub-disciplines of transfer learning, e.g. multi-domain and multi-task learning, these usually rely on the existence of meta information, such as task or domain labels. Relaxing this assumption gives rise to a new transfer learning setting, called latent domain learning in this thesis, in which training and inference are carried out over data from multiple visual domains, without domain-level annotations. Customized solutions are required for this, as the performance of standard models degrades: a new data augmentation technique that interpolates between latent domains in an unsupervised way is presented, alongside a dedicated module that sparsely accounts for hidden domains in data, without requiring domain labels to do so.
In addition, the thesis studies the problem of classifying previously unseen or anomalous modes in data, a fundamental problem in one-class learning, and in anomaly detection in particular. While recent work has focused on developing self-supervised solutions for the one-class setting, this thesis formulates new methods based on transfer learning. Extensive experimental evidence demonstrates that a transfer-based perspective benefits new problems recently proposed in the anomaly detection literature, in particular challenging semantic detection tasks
AIUCD 2022 - Proceedings
The eleventh edition of the National Conference of the AIUCD-Associazione di Informatica Umanistica is titled Culture digitali. Intersezioni: filosofia, arti, media. The title explicitly calls for methodological and theoretical reflection on the interrelation between digital technologies, information sciences, philosophical disciplines, the world of the arts, and cultural studies
The syntax of negative polarity items in Syrian Arabic based on the dialect of Deir Ezzor
Negative Polarity Items (NPIs) are pervasive among languages. Cross-linguistic examination of NPIs continues to shed light on the complexity of this phenomenon. One unfortunate fact is that NPIs in Arabic dialects have received relatively little examination in comparison with NPIs in other languages, such as English, Dutch, and Greek. The present study aims to contribute to filling this lacuna. It is a descriptive and analytical study of the syntax of negative polarity items in the Arabic dialect of Deir Ezzor, a city on the Euphrates in the north-eastern part of Syria; this dialect is Mesopotamian, not Levantine. The thesis contributes to the study of NPIs by providing an extensive inventory of these items in an Arabic dialect and a deeper analysis of their behaviour and licensing conditions. It moves beyond the already known negative polarity pronouns and determiners to discuss negative polarity auxiliary verbs and negative polarity lexical verbs, and it expands the discussion of idiomatic NPIs to minimisers and maximisers. This thesis discusses the largest number of NPIs in any Arabic dialect. It also sheds light on areas where a contribution is needed, such as a thorough examination of the licensing contexts, e.g., the subjunctive and comparatives. The study examines existing licensing proposals and concludes that Giannakidou's nonveridicality theory offers the needed account. It proposes new ways to examine the contexts where licensing is possible, e.g., considering the details of comparative structures and what makes them licensing environments for NPIs. The study concludes that further research is needed and that researchers should not limit their exploration to testing the proposals that account for the licensing problem. Details do matter, and the details are what we should be looking for
Graphical scaffolding for the learning of data wrangling APIs
In order for students across the sciences to avail themselves of modern data streams, they must first know how to wrangle data: how to reshape ill-organised, tabular data into another format, and how to do this programmatically, in languages such as Python and R. Despite the cross-departmental demand and the ubiquity of data wrangling in analytical workflows, research on how to optimise its instruction has been minimal. Although data wrangling as a programming domain presents distinctive challenges - characterised by on-the-fly syntax lookup and code example integration - it also presents opportunities. One such opportunity is that tabular data structures are easily visualised. To leverage the inherent visualisability of data wrangling, this dissertation evaluates three types of graphics that could be employed as scaffolding for novices: subgoal graphics, thumbnail graphics, and parameter graphics. Using a specially built e-learning platform, this dissertation documents a multi-institutional, randomised, and controlled experiment that investigates the pedagogical effects of these graphics. Our results indicate that the graphics are well received, that subgoal graphics boost the completion rate, and that thumbnail graphics improve navigability within a command menu. We also obtained several non-significant results, and indications that parameter graphics are counter-productive. We discuss these findings in the context of general scaffolding dilemmas, and how they fit into a wider research programme on data wrangling instruction
Songs without borders: complex interpretative song worlds and the audiences that inhabit them
The genre of music commonly referred to as art song often elicits emotionally charged responses in accounts of audience experiences. However, scholarship has largely neglected the object of inquiry where these responses and experiences materialise: the live art song event. The principal research task in this study is to investigate audience experience of live art song events in the UK. The audiences and events at the centre of this inquiry coalesce around the work of the art song promoter Oxford Lieder. Using a mixed-method approach (questionnaire, diary methods and guided interviews), statistical and thematic analysis and Interpretative Phenomenological Analysis are applied to a dataset drawing on 82 individual participants' experiences of live art song events, including regular attendees and those experiencing live art song for the first time.
To frame the findings of this inquiry, this study establishes the concept of complex interpretative song worlds: defined as a collection of interactions that audience members draw upon to construct their experience of live art song events, through a dynamic and multi-faceted interplay with the system of possibilities afforded by live art song environments. In this study, complex interpretative song world theorising takes place across three levels of audiencing:
(1) Interactions with the live art song domain (the norms, behaviours, and conventions of live art song environments) are gathered under three themes. Collecting activity sees a desire for participants to scrutinise song objects, embrace familiar artists and repertoire, and adopt a connoisseur-like approach to knowledge acquisition. Connecting activity reveals a prized sense of close psycho-social resonance, which takes place between songs, performers, spaces and everyday experiences. Venerating activity foregrounds a view of songs as inviolable objects, where perceived changes to songs are deemed heretical by some, examined through the (re)introduction of sung English translations into the live art song corpus.
(2) Interactions with live art song objects (the lexical and musical features that make up songs) reveal the ways audience members process words and music, and prioritise either, or both features during live art song events. The presentation of these materials in ways that blur senses (sights and sounds), and time (before, during, and after performances), are shown to be as additive to audience member conceptualisations of the nature of lexical-musical relationships as they are disruptive.
(3) Interactions with live art song actors (performers, producers, and audiences) reveal processes of role formation at work, where vocal acts, non-vocal acts, and fixed and non-fixed traits complicate the way audience members derive impressions of performers. Art song’s hybridity as a genre, which is not a dramatic form, yet ‘not not’ a dramatic form, reveals the imbricated way audience members construct identities of performers: as professional musicians; as human beings; and as inhabitants of roles defined textually through a song’s poetic content.
This interdisciplinary study draws predominantly on three overlapping areas of scholarship, and makes new contributions to knowledge in all three. For musicology, this inquiry develops deeper understandings of live art song objects to complement the hegemony of hermeneutic, musico-analytical and historiographical research that typifies much of the existing art song literature. For audience studies, these findings provide new audiencing insights, by examining an art form not yet analysed by empirical audience research methods, and one that simultaneously combines both words and music as a mode of expression. For translation theory, this inquiry responds to calls within the existing literature for more research to understand the reception of translation in music. This study also generates dividends outside of the academy, providing new insights for performers and promoters of art song to inform approaches to programming, presentation, production, marketing and audience development
Few-Shot Natural Language Processing by Meta-Learning Without Labeled Data
Humans show a remarkable capability to solve a wide range of problems accurately and efficiently, utilizing a limited amount of computation and experience. Deep learning models, by stark contrast, can be trained to be highly accurate on a narrow task while being highly inefficient in terms of the amount of compute and data required to reach that accuracy. Within natural language processing (NLP), recent breakthroughs in unsupervised pretraining have enabled reusable models that can be applied to many NLP tasks; however, learning new tasks is still inefficient. This has led to research on few-shot learning, where the goal is to generalize to new tasks from very few labeled instances. Meta-learning, or learning to learn, treats the learning process itself as a learning problem, with the goal of producing systems that can generalize to new tasks efficiently. This has the potential to produce few-shot learners that can accurately solve a wide range of new tasks. However, meta-learning requires a distribution over tasks with relevant labeled data, which can be difficult to obtain, severely limiting the practical utility of meta-learning methods. In this dissertation, we develop methods to enable large-scale meta-learning from unlabeled text data and improve the few-shot generalization ability of NLP models.
We contribute methods that synthetically create tasks from unlabeled text, providing a large task distribution for meta-learning. Meta-learning from millions of such self-supervised tasks enables rapid learning of new tasks, and minimizes the train-test mismatch in few-shot learning by optimizing pre-training directly for future fine-tuning with a few examples. Since real-world applications of NLP require learning diverse tasks with different numbers of classes, we first introduce an optimization-based meta-learning method that can learn from multiple NLP classification tasks with any number of classes. We then leverage the proposed self-supervised approach to create meta-training tasks with a diverse number of classes, and meta-train models for few-shot learning using this task distribution. This yields better representation learning, learns key hyper-parameters such as learning rates, can be combined with supervised tasks to regularize supervised meta-learning, and leads to accurate few-shot learning on a diverse set of NLP classification tasks. We further explore the space of self-supervised tasks for meta-learning by considering important aspects such as task diversity, difficulty, type, domain, and curriculum, and investigate how they affect meta-learning performance. Our analysis shows that all these factors meaningfully alter the task distribution, with some inducing significant improvements in the downstream few-shot accuracy of the meta-learned models.
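The abstract does not give the task-construction procedure; a toy sketch of one common self-supervised recipe, turning unlabeled sentences into a classification task by masking words and using the masked words as class labels, might look like the following (the function and corpus are hypothetical illustrations, not the dissertation's method):

```python
import random

def make_cloze_task(sentences, num_classes=3, shots=2, seed=0):
    """Build one synthetic few-shot classification task from unlabeled text.

    Pick `num_classes` words that occur in at least `shots` sentences; each
    class's examples are sentences containing that word with the word masked
    out. A learner must recover which word was removed.
    """
    rng = random.Random(seed)
    # group sentences by candidate label word
    by_word = {}
    for s in sentences:
        for w in set(s.lower().split()):
            by_word.setdefault(w, []).append(s)
    # keep only words with enough support to form `shots` examples
    candidates = [w for w, ss in by_word.items() if len(ss) >= shots]
    labels = rng.sample(candidates, num_classes)
    task = []
    for y, w in enumerate(labels):
        for s in rng.sample(by_word[w], shots):
            masked = " ".join("[MASK]" if t.lower() == w else t for t in s.split())
            task.append((masked, y))
    return labels, task
```

Sampling a fresh task this way on every meta-training step gives the large task distribution the paragraph above describes, without any human labeling.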
Our findings yield accurate and efficient meta-learning methods that improve few-shot generalization to diverse tasks and should enable many future applications of meta-learning in NLP, such as hyper-parameter optimization, continual learning, efficient learning, learning in low-resource languages, and more
“I Can See the Forest for the Trees”: Examining Personality Traits with Transformers
Our understanding of Personality and its structure is rooted in linguistic studies operating under the assumptions made by the Lexical Hypothesis: personality characteristics that are important to a group of people will at some point be codified in their language, with the number of encoded representations of a personality characteristic indicating their importance. Qualitative and quantitative efforts in the dimension reduction of our lexicon throughout the mid-20th century have played a vital role in the field’s eventual arrival at the widely accepted Five Factor Model (FFM). However, there are a number of presently unresolved conflicts regarding the breadth and structure of this model (c.f., Hough, Oswald, & Ock, 2015). The present study sought to address such issues through previously unavailable language modeling techniques. The Distributional Semantic Hypothesis (DSH) argues that the meaning of words may be formed through some function of their co-occurrence with other words. There is evidence that DSH-based techniques are cognitively valid, serving as a proxy for learned associations between stimuli (Günther et al., 2019). Given that Personality is often measured through self-report surveys, the present study proposed that a Personality measure be created directly from this source data, using large pre-trained Transformers (a type of neural network that is adept at encoding and decoding semantic representations from natural language). An inventory was constructed, administered, and response data was analyzed using partial correlation networks. This exploratory study identifies differences in the internal structure of trait-domains, while simultaneously demonstrating a quantitative approach to item creation and survey development
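The estimation details are not given in the abstract; as a minimal illustration of how a partial correlation network can be computed from item-response data via the precision (inverse covariance) matrix, an assumed textbook formulation rather than the study's exact procedure:

```python
import numpy as np

def partial_correlations(data):
    """Partial correlation matrix of item responses.

    Off-diagonal entry (i, j) is the correlation between items i and j
    after controlling for all other items, derived from the precision
    matrix P = inv(cov): pcor_ij = -P_ij / sqrt(P_ii * P_jj).
    """
    p = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.outer(np.diag(p), np.diag(p)))
    pcor = -p / d
    np.fill_diagonal(pcor, 1.0)
    return pcor
```

The resulting matrix is read as a network: items are nodes, and a non-zero off-diagonal entry is an edge whose weight is the association remaining between two items once all other items are held constant, which is what distinguishes this from an ordinary correlation network.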
Great expectations: unsupervised inference of suspense, surprise and salience in storytelling
Stories interest us not because they are a sequence of mundane and predictable events but because they have drama and tension. Crucial to creating dramatic and exciting stories are surprise and suspense. Likewise, certain events are key to the plot and more important than others. Importance is referred to as salience. Inferring suspense, surprise and salience are highly challenging for computational systems. It is difficult because all these elements require a strong comprehension of the characters and their motivations, places, changes over time, and the cause/effect of complex interactions.
Recent advances in machine learning (often called deep learning) have substantially improved performance on many language-related tasks, including story comprehension and story writing. Most of these systems rely on supervision: huge numbers of people need to tag large quantities of data to tell the system what to learn, for example tagging which events are suspenseful. This is highly inflexible and costly.
Instead, the thesis trains a series of deep learning models only by reading stories, a self-supervised (or unsupervised) approach. Narrative theory methods (rules and procedures) are applied to the knowledge built into the deep learning models to directly infer suspense, surprise, and salience in stories. Extensions add memory and external knowledge from story plots and from Wikipedia to infer salience in novels such as Great Expectations and plays such as Macbeth. Other work adapts the models as a planning system for generating new stories.
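The inference rules are not spelled out in this summary; one simple formalisation consistent with the definitions above treats surprise as the jump between consecutive story states and suspense as expected surprise over possible continuations (the embedding-distance formulation here is an illustrative assumption, not the thesis's exact measure):

```python
import numpy as np

def surprise(states):
    """Surprise at each step: how far the story state jumps from the
    previous one (L2 distance between consecutive state embeddings)."""
    states = np.asarray(states, dtype=float)
    return np.linalg.norm(states[1:] - states[:-1], axis=1)

def suspense(current, continuations, probs):
    """Suspense now: expected surprise over possible next states,
    sum_i p_i * ||c_i - current||, under the model's own predictions."""
    current = np.asarray(current, dtype=float)
    dists = np.array([np.linalg.norm(np.asarray(c) - current) for c in continuations])
    return float(np.dot(probs, dists))
```

Under this reading, a self-supervised story model supplies both the state embeddings and the continuation probabilities, so no human tagging of suspenseful events is needed.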
The thesis finds that applying narrative theory to deep learning models can align with the judgements of a typical reader. In follow-up work, these insights could help improve computer models for tasks such as automatic story writing, writing assistance, and summarising or editing stories. Moreover, applying narrative theory to the qualities inherent in a system that learns by itself (self-supervised) from reading books, watching videos, or listening to audio is much cheaper and more adaptable to other domains and tasks. Progress in improving self-supervised systems is swift. As such, the thesis's relevance is that applying domain expertise to these systems may be a more productive approach in many areas of interest for machine learning