
    Genetic control of plasticity of oil yield for combined abiotic stresses using a joint approach of crop modeling and genome-wide association

    Understanding the genetic basis of phenotypic plasticity is crucial for predicting and managing climate change effects on wild plants and crops. Here, we combined crop modeling and quantitative genetics to study the genetic control of oil yield plasticity for multiple abiotic stresses in sunflower. First, we developed stress indicators to characterize 14 environments for three abiotic stresses (cold, drought and nitrogen) using the SUNFLO crop model and the phenotypic variations of three commercial varieties. The computed plant stress indicators explain yield variation better than descriptors at the climatic or crop levels. In those environments, we measured the oil yield of 317 sunflower hybrids and regressed it against three selected stress indicators. The slopes of the cold stress reaction norms were used as plasticity phenotypes in the subsequent genome-wide association study. Among the 65,534 tested SNPs, we identified nine QTLs controlling oil yield plasticity to cold stress. The associated SNPs are localized in genes previously shown to be involved in cold stress responses: oligopeptide transporters, LTP, cystatin, alternative oxidase, or root development. This novel approach opens new perspectives for identifying genomic regions involved in the genotype-by-environment interaction of complex traits under multiple stresses in realistic natural or agronomic conditions. Comment: 12 pages, 5 figures, Plant, Cell and Environment
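    As a rough illustration of the two-step logic (per-hybrid reaction-norm slopes, then single-marker association tests on those slopes), here is a minimal Python sketch. All data, dimensions and the plain linear-regression test are hypothetical stand-ins; this is not the authors' SUNFLO-based pipeline or their GWAS model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical dimensions mirroring the study: 317 hybrids, 14 environments.
n_hybrids, n_envs, n_snps = 317, 14, 1000

cold_stress = rng.uniform(0, 1, n_envs)                # per-environment stress indicator
oil_yield = rng.normal(2.0, 0.3, (n_hybrids, n_envs))  # observed oil yield

# Step 1: reaction norms -- regress each hybrid's yield on the stress
# indicator; the slope is that hybrid's plasticity phenotype.
slopes = np.array([
    stats.linregress(cold_stress, oil_yield[i]).slope
    for i in range(n_hybrids)
])

# Step 2: single-marker association -- test each SNP (coded 0/1/2)
# against the slope phenotype with a simple linear regression.
genotypes = rng.integers(0, 3, (n_hybrids, n_snps))
pvalues = np.array([
    stats.linregress(genotypes[:, j], slopes).pvalue
    for j in range(n_snps)
])

print("smallest association p-value:", pvalues.min())
```

    A real analysis would also correct for population structure and multiple testing; the sketch only shows where the plasticity phenotype comes from.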

    SUNLAB: a Functional-Structural Model for Genotypic and Phenotypic Characterization of the Sunflower Crop

    A new functional-structural model, SUNLAB, is developed for the sunflower crop (Helianthus annuus L.). It is dedicated to simulating organogenesis, morphogenesis, biomass accumulation and biomass partitioning to organs during sunflower growth. It is adapted to modeling phenotypic responses to diverse environmental factors, including temperature stress and water deficiency, and to different genotypic variants. The model is confronted with experimental data, and estimated parameter values for two genotypes, "Melody" and "Prodisol", are presented. SUNLAB parameters seem to show genotypic variability, which potentially makes the model an interesting intermediate for discriminating between genotypes. Statistical tests on the estimated parameter values suggest that some parameters are common between genotypes while others are genotype-specific. Since SUNLAB simulates individual leaf area and biomass as two state variables, an interesting corollary is that it also dynamically simulates the specific leaf area (SLA) variable. Further studies are being performed to evaluate model performance with more genotypes and more discriminating environments, to test and expand the model's adaptability and usability.
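    Since the SLA corollary is just the ratio of the two state variables, a minimal sketch makes it concrete (illustrative numbers, not SUNLAB output):

```python
# Specific leaf area (SLA) falls out of any model that tracks leaf area
# and leaf biomass as separate state variables: SLA = area / dry mass.
leaf_area_cm2 = [12.0, 55.0, 140.0, 210.0]   # simulated area per time step
leaf_biomass_g = [0.05, 0.30, 0.95, 1.60]    # simulated dry mass per time step

sla = [a / b for a, b in zip(leaf_area_cm2, leaf_biomass_g)]  # cm^2 / g
print([round(x, 1) for x in sla])
```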

    Musical audio-mining


    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:
    - Objective: what musical content is to be generated (melody, polyphony, accompaniment or counterpoint, for example), and for what destination and use: to be performed by a human (a musical score) or by a machine (an audio file)?
    - Representation: what concepts are to be manipulated (waveform, spectrogram, note, chord, meter, beat)? What format is to be used (MIDI, piano roll, text)? How will the representation be encoded (scalar, one-hot, many-hot)?
    - Architecture: what type(s) of deep neural network are to be used (feedforward network, recurrent network, autoencoder, generative adversarial network)?
    - Challenge: what are the limitations and open challenges (variability, interactivity, creativity)?
    - Strategy: how do we model and control the process of generation (single-step feedforward, iterative feedforward, sampling, input manipulation)?
    For each dimension, we conduct a comparative analysis of various models and techniques and propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and prospects. Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 2019.
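    The encoding vocabulary (one-hot versus many-hot) is easy to make concrete. The sketch below is a generic illustration over the 128-value MIDI pitch range, not code from any surveyed system:

```python
import numpy as np

PITCHES = 128  # MIDI pitch range

def one_hot(pitch: int) -> np.ndarray:
    """One-hot: exactly one active pitch per time step (a melody note)."""
    v = np.zeros(PITCHES)
    v[pitch] = 1.0
    return v

def many_hot(pitches: list[int]) -> np.ndarray:
    """Many-hot: several simultaneous pitches per time step (a chord)."""
    v = np.zeros(PITCHES)
    v[pitches] = 1.0
    return v

melody_step = one_hot(60)            # middle C
chord_step = many_hot([60, 64, 67])  # C major triad
print(int(melody_step.sum()), int(chord_step.sum()))  # 1 active bit vs. 3
```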

    Varying Degrees of Difficulty in Melodic Dictation Examples According to Intervallic Content

    Melodic dictation has long been a daunting task for students in aural skills training. Research has found that interval identification is a factor in taking melodic dictation, and that some intervals are easier to identify than others. The goal of this thesis is to determine whether the difficulty of melodic dictation examples can be categorized by their intervallic content. A popular aural skills text was used as the source for the melodic dictation examples. The adjacent intervals in each melodic dictation example were counted and recorded by interval type. The analysis of the melodic dictation examples according to their intervallic content was then performed using an SPSS two-step cluster analysis. Two clusters emerged, indicating that there were natural groupings within the data. Cluster 1 examples contained mostly conjunct motion, i.e., intervals of a m2 to M3, while cluster 2 examples were characterized by their disjunct intervallic content, i.e., intervals of a m6 to M7. Melodic dictation examples of both clusters were found to appear throughout the textbook organization, with the exception that no cluster 2 examples were found in the beginning units of the text. Other variables tracked were whether an example was composed (C) for the text or derived from music literature (L), the unit and melody number, and the total number of intervals per melody. Rhythm was not observed.
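    The clustering step can be illustrated in miniature. SPSS's two-step procedure has no direct Python equivalent, so the sketch below substitutes k-means with two clusters on synthetic interval-count features; the feature layout and all counts are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Rows are dictation melodies; columns count adjacent intervals by type.
interval_types = ["m2", "M2", "m3", "M3", "P4", "TT", "P5", "m6", "M6", "m7", "M7"]
conjunct = rng.poisson([6, 5, 2, 2, 1, 0, 1, 0, 0, 0, 0], (30, 11))  # stepwise melodies
disjunct = rng.poisson([1, 1, 1, 1, 2, 1, 2, 3, 3, 2, 2], (30, 11))  # leapy melodies
X = np.vstack([conjunct, disjunct]).astype(float)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for k in range(2):
    mean_counts = X[labels == k].mean(axis=0).round(1)
    print(f"cluster {k}:", dict(zip(interval_types, mean_counts)))
```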

    Final Research Report on Auto-Tagging of Music

    The deliverable D4.7 concerns the work achieved by IRCAM until M36 for the "auto-tagging of music". The deliverable is a research report. The software libraries resulting from the research have been integrated into the Fincons/HearDis! Music Library Manager or are used by TU Berlin. The final software libraries are described in D4.5. The research work on auto-tagging has concentrated on four aspects:
    1) Further improving IRCAM's machine-learning system ircamclass. This has been done by developing the new MASSS audio features and integrating audio augmentation and audio segmentation into ircamclass. The system has then been applied to train HearDis! "soft" features (Vocals-1, Vocals-2, Pop-Appeal, Intensity, Instrumentation, Timbre, Genre, Style). This is described in Part 3.
    2) Developing two sets of "hard" features (i.e., related to musical or musicological concepts) as specified by HearDis! (for integration into the Fincons/HearDis! Music Library Manager) and TU Berlin (as input for the prediction model of the GMBI attributes). Such features are either derived from previously estimated higher-level concepts (such as structure, key or succession of chords) or obtained by developing new signal-processing algorithms (such as HPSS) or main melody estimation. This is described in Part 4.
    3) Developing audio features to characterize the audio quality of a music track. The goal is to describe the quality of the audio independently of its apparent encoding. This is then used to estimate audio degradation or music decade, and to ensure that playlists contain tracks with similar audio quality. This is described in Part 5.
    4) Developing innovative algorithms to extract specific audio features to improve music mixes. So far, innovative techniques (based on various blind audio source separation algorithms and convolutional neural networks) have been developed for singing voice separation, singing voice segmentation, music structure boundary estimation, and DJ cue-region estimation. This is described in Part 6.
    EC/H2020/688122/EU/Artist-to-Business-to-Business-to-Consumer Audio Branding System/ABC DJ
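    As one concrete example from the signal-processing side, harmonic/percussive source separation (HPSS) is available off the shelf in librosa. The sketch below applies it to a synthetic signal (a steady tone plus periodic clicks); this is an illustrative stand-in, not IRCAM's implementation or data:

```python
import numpy as np
import librosa

# Synthetic stand-in for a music track: a harmonic tone plus clicks.
sr = 22050
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440.0 * t)   # sustained 440 Hz sine (harmonic)
clicks = np.zeros_like(t)
clicks[:: sr // 4] = 1.0                     # impulse every 250 ms (percussive)
y = tone + clicks

# HPSS median-filters the spectrogram along time (harmonic component) and
# along frequency (percussive component), then reconstructs both signals.
y_harm, y_perc = librosa.effects.hpss(y)
print(y_harm.shape, y_perc.shape)
```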

    09051 Abstracts Collection -- Knowledge representation for intelligent music processing

    From the twenty-fifth to the thirtieth of January, 2009, the Dagstuhl Seminar 09051 on "Knowledge representation for intelligent music processing" was held in Schloss Dagstuhl, Leibniz Centre for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations and demos given during the seminar, as well as plenary presentations, reports of workshop discussions, results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general; it is followed by plenary 'stimulus' papers, then by reports and abstracts arranged by workshop, and finally by some concluding materials providing views both of the seminar itself and forward to the longer-term goals of the discipline. Links to extended abstracts, full papers and supporting materials are provided where available. The organisers thank David Lewis for editing these proceedings.