A fuzzy rule model for high level musical features on automated composition systems
Algorithmic composition systems are now well understood. However, when they are used for specific tasks, such as creating material for part of a piece, it is common to prefer, among all possible outputs, those exhibiting specific properties. Even though the number of valid outputs is huge, the selection is often performed manually, whether by drawing on expertise in the algorithmic model, by means of sampling techniques, or sometimes even by chance. This process has traditionally been automated using machine learning techniques. However, whether these techniques can truly capture the human rationality through which the selection is made remains an open question. The present work discusses a possible approach, combining expert opinion with a fuzzy methodology for rule extraction, to model high-level features. An early implementation able to explore the universe of outputs of a particular algorithm by means of the extracted rules is discussed. The rules search for objects similar to those having a desired, pre-identified feature. In this sense, the model can be seen as a finder of objects with specific properties.
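To make the idea of fuzzy rules selecting algorithmic outputs concrete, here is a minimal sketch, not the authors' implementation: a single rule combining two hypothetical musical features (note density, pitch range) through triangular membership functions, with fuzzy AND taken as the minimum. All feature names, membership parameters, and the selection threshold are invented for illustration.

```python
# Illustrative sketch of fuzzy-rule-based selection of candidate outputs.
# Feature names and parameters are assumptions, not from the paper.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def rule_score(candidate):
    """IF note density is 'moderate' AND pitch range is 'wide' THEN feature present."""
    density = triangular(candidate["note_density"], 0.2, 0.5, 0.8)
    rng = triangular(candidate["pitch_range"], 12, 24, 36)
    return min(density, rng)  # min acts as the fuzzy AND

candidates = [
    {"note_density": 0.5, "pitch_range": 24},  # matches the rule well
    {"note_density": 0.9, "pitch_range": 24},  # too dense: zero membership
]
selected = [c for c in candidates if rule_score(c) > 0.5]
```

Scoring every output of the generator with such rules and keeping the high-scoring ones is one way to realise the "finder of objects with specific properties" described above.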
Towards automated composition of convergent services: a survey
A convergent service is defined as a service that exploits the convergence of communication networks while also taking advantage of features of the Web. Nowadays, building a convergent service is not trivial: although there are significant approaches that aim to automate service composition at different levels in the Web and Telecom domains, selecting the most appropriate approach for a specific case study is complex due to the large amount of information involved and the lack of technical considerations. Thus, in this paper, we identify the relevant phases of convergent service composition and explore the existing approaches and their associated technologies for automating each phase. For each technology, the maturity and results are analysed, as well as the elements that must be considered prior to application in real scenarios. Furthermore, we provide research directions related to the convergent service composition phases.
AUTOMATED COMPOSITION OF SEMANTIC WEB SERVICES INTO EXECUTABLE PROCESSES (TRANSLATION OF THE ARTICLE; TRANSLATED BY S. REMAROVYCH)
A planning method is proposed for the automated composition of Web services described in OWL-S process models that can deal effectively with nondeterminism, partial observability, and complex goals.

Abstract
Different planning techniques have been applied to the problem of automated composition of web services. However, in realistic cases this planning problem is far from trivial: the planner needs to deal with the nondeterministic behavior of web services, the partial observability of their internal status, and complex goals expressing temporal conditions and preference requirements. We propose a planning technique for the automated composition of web services described in OWL-S process models that can deal effectively with nondeterminism, partial observability, and complex goals. The technique allows for the synthesis of plans that encode compositions of web services with the usual programming constructs, such as conditionals and iterations. The generated plans can thus be translated into executable processes, e.g., BPEL4WS programs. We implement our solution in a planner and perform preliminary experimental evaluations that show the potential of our approach, as well as the performance gain of automating composition at the semantic level with respect to automated composition at the level of executable processes.
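As a purely illustrative sketch of the kind of plan such a planner synthesises (not the paper's planner or output format), the following models a composition of two invented web services with a conditional branch on a nondeterministically observed outcome, i.e. the conditional construct the abstract mentions. Service names and outcomes are assumptions.

```python
# Hypothetical composed plan: invoke a service, observe its outcome,
# and branch on it. Services and outcomes are invented for illustration.

def check_stock(item):
    """Stand-in for a web service whose result is only known after invocation."""
    stock = {"widget": 3}
    return "in_stock" if stock.get(item, 0) > 0 else "out_of_stock"

def place_order(item):
    """Stand-in for a second service invoked on the success branch."""
    return f"ordered:{item}"

def notify_unavailable(item):
    """Stand-in for the service invoked on the failure branch."""
    return f"unavailable:{item}"

def composed_plan(item):
    """The synthesised composition: invoke, observe, then branch."""
    outcome = check_stock(item)   # partial observability: outcome seen at run time
    if outcome == "in_stock":     # conditional construct in the generated plan
        return place_order(item)
    return notify_unavailable(item)
```

A real synthesised plan of this shape would then be serialised into an executable process language such as BPEL4WS rather than Python.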
Automated Composition of Picture-Synched Music Soundtracks for Movies
We describe the implementation of, and early results from, a system that automatically composes picture-synched musical soundtracks for videos and movies. We use the phrase "picture-synched" to mean that the structure of the automatically composed music is determined by visual events in the input movie, i.e. the final music is synchronised to visual events and features such as cut transitions or within-shot key-frame events. Our system combines automated video analysis and computer-generated music-composition techniques to create unique soundtracks in response to the video input, and can be thought of as an initial step towards a computerised replacement for a human composer writing music to fit the picture-locked edit of a movie. Working only from the video information in the movie, key features are extracted from the input video using video analysis techniques and fed into a machine-learning-based music generation tool to compose a piece of music from scratch. The resulting soundtrack is tied to video features, such as scene transition markers and scene-level energy values, and is unique to the input video. Although the system we describe here is only a preliminary proof-of-concept, user evaluations of its output have been positive.

Comment: To be presented at the 16th ACM SIGGRAPH European Conference on Visual Media Production, London, England, 17th-18th December 2019. 10 pages, 9 figures.
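One common way to extract the "cut transition" markers such a system relies on is frame differencing. The sketch below is an assumption about one plausible technique, not the paper's pipeline: it thresholds the mean absolute difference between consecutive grayscale frames, and the resulting cut indices could then drive musical section boundaries. Frames are flat pixel lists and the threshold is arbitrary.

```python
# Minimal shot-cut detector via frame differencing (illustrative only).
# Frames are flat lists of grayscale pixel values in [0, 255].

def mean_abs_diff(f1, f2):
    """Average per-pixel absolute difference between two frames."""
    return sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)

def detect_cuts(frames, threshold=50.0):
    """Return frame indices where a new shot begins (difference spike)."""
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) > threshold]

# Two shots: dark frames followed by bright frames, so one cut at index 2.
frames = [[10] * 4, [12] * 4, [200] * 4, [198] * 4]
```

In a full pipeline the detected cut times would be passed, together with features such as per-scene energy, to the music generation stage so that musical transitions align with visual ones.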