171 research outputs found

    Event Recognition in Laparoscopic Gynecology Videos with Hybrid Transformers

    Full text link
    Analyzing laparoscopic surgery videos presents a complex and multifaceted challenge, with applications that include surgical training, intra-operative surgical complication prediction, and post-operative surgical assessment. Identifying crucial events within these videos is a prerequisite for most of these applications. In this paper, we introduce a comprehensive dataset tailored for relevant event recognition in laparoscopic gynecology videos. Our dataset includes annotations for critical events associated with major intra-operative challenges and post-operative complications. To validate the precision of our annotations, we assess event recognition performance using several CNN-RNN architectures. Furthermore, we introduce and evaluate a hybrid transformer architecture coupled with a customized training-inference framework to recognize four specific events in laparoscopic surgery videos. Leveraging Transformer networks, our proposed architecture harnesses inter-frame dependencies to counteract the adverse effects of relevant content occlusion, motion blur, and surgical scene variation, thus significantly enhancing event recognition accuracy. Moreover, we present a frame sampling strategy designed to manage variations in surgical scenes and in surgeons' skill levels, resulting in event recognition with high temporal resolution. We empirically demonstrate the superiority of our proposed methodology over conventional CNN-RNN architectures through a series of extensive experiments.
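As a rough illustration of the kind of hybrid architecture this abstract describes, the sketch below combines a per-frame CNN feature extractor with a Transformer encoder that models inter-frame dependencies before a four-way event classification head. This is a minimal PyTorch sketch under assumed dimensions; the backbone choice (ResNet-18), encoder depth, and pooling are illustrative stand-ins, not the paper's exact configuration or its customized training-inference framework.

```python
import torch
import torch.nn as nn
from torchvision import models

class HybridEventRecognizer(nn.Module):
    """Per-frame CNN features fused by a Transformer encoder.

    Illustrative dimensions only; the paper's exact backbone, depth,
    and training/inference framework are not reproduced here.
    """
    def __init__(self, num_events: int = 4, d_model: int = 512):
        super().__init__()
        self.backbone = models.resnet18(weights=None)   # frame-level CNN
        self.backbone.fc = nn.Identity()                # expose 512-d features
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_events)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, 3, H, W) -- a sampled window of frames
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1))       # (b*t, 512)
        feats = feats.view(b, t, -1)                    # (b, t, 512)
        fused = self.encoder(feats)                     # inter-frame attention
        return self.head(fused.mean(dim=1))             # clip-level logits

logits = HybridEventRecognizer()(torch.randn(2, 16, 3, 224, 224))
```

In this arrangement only the attention layers see the temporal axis, which is what lets such a model look past a blurred or occluded frame to its neighbors.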

    A framework for clustering and adaptive topic tracking on evolving text and social media data streams.

    Get PDF
    Recent advances in and widespread use of online web services and social media platforms, coupled with ubiquitous low-cost devices, mobile technologies, and the increasing capacity of low-cost storage, have led to a proliferation of Big Data, ranging from news, e-commerce clickstreams, and online business transactions to continuous event logs and social media expressions. These large amounts of online data, often referred to as data streams because they are generated at extremely high throughput or velocity, can render conventional and classical data analytics methodologies obsolete. For these reasons, the management and analysis of data streams have been researched extensively in recent years. The special case of social media Big Data brings additional challenges, particularly because of the unstructured nature of the data, specifically free text. One classical approach to mining text data has been Topic Modeling. Topic models are statistical models that can be used to discover the abstract "topics" that may occur in a corpus of documents. They have emerged as a powerful technique in machine learning and data science, providing a great balance between simplicity and complexity, and sophisticated insight without the need for true natural language understanding. However, they were not designed to cope with the type of text data that is abundant on social media platforms, but rather for traditional medium-sized corpora consisting of longer documents that adhere to a specific language and typically span a stable set of topics. Unlike traditional document corpora, social media messages tend to be very short, sparse, and noisy, and do not adhere to a standard vocabulary, linguistic patterns, or stable topic distributions. They are also generated at a high velocity that imposes high demands on topic modeling, and their evolving, dynamic nature quickly makes any set of topic modeling results stale in the face of changes in the textual content and topics discussed within social media streams. In this dissertation, we propose an integrated topic modeling framework built on top of an existing stream-clustering framework called Stream-Dashboard, which can extract, isolate, and track topics over any given time period. In this new framework, Stream-Dashboard first clusters the data stream points into homogeneous groups; data from each group is then passed to the topic modeling component, which extracts finer topics from the group. The proposed framework tracks the evolution of the clusters over time to detect milestones corresponding to changes in topic evolution, and to trigger an adaptation of the learned groups and topics at each milestone. The proposed approach differs from generic Topic Modeling because it works in a compartmentalized fashion: the input document stream is split into distinct compartments, and Topic Modeling is applied to each compartment separately. Furthermore, we propose extensions to existing topic modeling and stream clustering methods, including: an adaptive query reformulation approach to help focus topic discovery over time; a topic modeling extension with adaptive hyper-parameters and an infinite vocabulary; and an adaptive stream clustering algorithm incorporating automated estimation of dynamic, cluster-specific temporal scales for adaptive forgetting, to facilitate clustering in a fast-evolving data stream.
Our experimental results show that the proposed adaptive-forgetting clustering algorithm mines clusters of better quality; that the proposed compartmentalized framework mines topics of better quality than competitive baselines; and that the framework can automatically adapt its focus to changing topics using the proposed query reformulation strategy.
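To make the compartmentalized idea concrete, here is a minimal sketch using off-the-shelf scikit-learn components as stand-ins: a batch of stream documents is first clustered into homogeneous groups, then a separate topic model is fit inside each group. Stream-Dashboard's actual density-based tracking, milestone detection, and the proposed adaptive extensions are not reproduced; `MiniBatchKMeans` and `LatentDirichletAllocation` are assumed substitutes.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import MiniBatchKMeans
from sklearn.decomposition import LatentDirichletAllocation

def compartmentalized_topics(batch, n_groups=3, n_topics=2):
    """Cluster a batch of stream documents, then fit a topic model per cluster."""
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(batch)
    # compartment step: split the batch into homogeneous groups
    groups = MiniBatchKMeans(n_clusters=n_groups, n_init=3,
                             random_state=0).fit_predict(X)
    vocab = np.array(vec.get_feature_names_out())
    topics = {}
    for g in np.unique(groups):
        # topic-modeling step: finer topics inside each compartment
        lda = LatentDirichletAllocation(n_components=n_topics,
                                        random_state=0).fit(X[groups == g])
        topics[g] = [vocab[comp.argsort()[-5:]] for comp in lda.components_]
    return topics

docs = ["stream clustering of event logs", "topic models for short text",
        "noisy social media posts", "adaptive clustering of data streams",
        "lda topics in tweets", "web clickstream analytics"]
print(compartmentalized_topics(docs))
```

Splitting first means each topic model faces a narrower, more homogeneous vocabulary, which is the intuition behind mining finer topics per compartment.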

    A framework for building and deploying real-time processing systems

    Get PDF
    Master's thesis in Technologies for Massive Data Analysis: Big Data, academic year 2017-2018. In recent years, numerous technologies for massive data processing have been developed, many of them open source and free to use. These platforms focus on horizontal scalability, which means that to process a larger amount of data without major slowdowns it is not necessary to increase or upgrade the resources of a single machine (vertical scalability); it suffices to add more nodes with similar characteristics to a cluster. The proliferation of such open-source technologies has democratized and shaped the large number of applications that use these platforms in many settings, both professional and academic. Focusing on processing frameworks, we find an important limitation: the data must be divisible into independent groups, so that the work can be parallelized across different machines even when points of sequential processing exist. There are two major families of such processing technologies: batch processing and stream processing. In the first case, the final results are obtained together once the processing of a data batch, composed of one or more stages, has finished. To define the work to be performed, a processing topology is specified that describes the flow of the data through the different stages. Each node (physical or virtual) can run an instance of the topology (isolated from the other instances), with the data distributed evenly among the existing instances. In stream processing technologies, the stages of a topology are independent and do not belong to a particular instance; the stages can therefore be parallelized individually, without increasing the degree of parallelism of the entire topology. These technologies are well suited to applications that receive information in real time and must respond immediately, since as soon as a record completes its path through the stages, its result is available instantly; with batch processing, by contrast, results are obtained only when a batch of data has been fully processed. One application of real-time processing, and the main objective of this project, is the analysis of social media content for early risk detection.
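As a toy illustration of the stream-processing model described above, where each record is emitted as soon as it clears the topology instead of waiting for a batch boundary, here is a minimal Python pipeline of chained generator stages. The record fields and the risk rule are invented for the example; a real deployment would use a framework such as Apache Storm or Flink, where each stage can additionally be parallelized on its own.

```python
import time
from typing import Iterable, Iterator

def source() -> Iterator[dict]:
    # hypothetical social-media feed; stands in for a real connector
    for i in range(5):
        yield {"id": i, "text": f"message {i} alert", "ts": time.time()}

def tokenize(records: Iterable[dict]) -> Iterator[dict]:
    for r in records:                       # stage 1 of the topology
        yield {**r, "tokens": r["text"].split()}

def flag_risk(records: Iterable[dict]) -> Iterator[dict]:
    for r in records:                       # stage 2: toy risk rule
        yield {**r, "risky": "alert" in r["tokens"]}

# each record traverses the whole topology and is emitted immediately,
# rather than waiting for a batch boundary as in batch processing
for result in flag_risk(tokenize(source())):
    print(result["id"], result["risky"])
```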

    Statistical distribution of common audio features: encounters in a heavy-tailed universe

    Get PDF
    In the last few years, some Music Information Retrieval (MIR) researchers have identified important drawbacks in applying standard, successful-in-monophonic algorithms to polyphonic music classification and similarity assessment. Notably, these so-called "Bag-of-Frames" (BoF) algorithms share a common set of assumptions: that the numerical descriptions extracted from short-time audio excerpts (or frames) suffice to capture the information relevant for the task at hand, that these frame-based audio descriptors are time independent, and that descriptor frames are well described by Gaussian statistics. Thus, if we want to improve current BoF algorithms we could: i) improve current audio descriptors, ii) include temporal information within algorithms working with polyphonic music, and iii) study and characterize the real statistical properties of these frame-based audio descriptors. A literature review shows that many works focus on the first two improvements but, surprisingly, there is a lack of research on the third. Therefore, in this thesis we analyze and characterize the statistical distribution of common audio descriptors of timbral, tonal, and loudness information. Contrary to what is usually assumed, our work shows that the studied descriptors are heavy-tailed and thus do not belong to a Gaussian universe. This new knowledge led us to propose new algorithms that improve on the BoF approach in current MIR tasks such as genre classification, instrument detection, and automatic tagging of music. Furthermore, we also address new MIR tasks such as measuring the temporal evolution of Western popular music. Finally, we highlight some promising paths for future audio-content MIR research that will inhabit a heavy-tailed universe.
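The kind of distributional check this thesis performs can be sketched in a few lines: fit both a Gaussian and a heavy-tailed model (here Student's t) to a frame-level descriptor by maximum likelihood and compare the fits. Synthetic heavy-tailed data stands in for the real audio descriptors, and the AIC comparison below is a generic model-selection device, not the thesis's exact methodology.

```python
import numpy as np
from scipy import stats

# synthetic stand-in for a frame-level descriptor (e.g. one MFCC band);
# drawn heavy-tailed on purpose so the comparison has a known answer
frames = stats.t.rvs(df=3, size=20_000, random_state=42)

# fit both candidate models by maximum likelihood
mu, sigma = stats.norm.fit(frames)
df, loc, scale = stats.t.fit(frames)

# compare via log-likelihood and AIC: lower AIC = better fit
ll_norm = stats.norm.logpdf(frames, mu, sigma).sum()
ll_t = stats.t.logpdf(frames, df, loc, scale).sum()
def aic(ll, k):
    return 2 * k - 2 * ll
print("Gaussian AIC: ", aic(ll_norm, 2))
print("Student-t AIC:", aic(ll_t, 3))   # heavy-tailed model wins here
```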

    Swarm intelligence for clustering dynamic data sets for web usage mining and personalization.

    Get PDF
    Swarm Intelligence (SI) techniques were inspired by bee swarms, ant colonies, and, most recently, bird flocks. Flock-based Swarm Intelligence (FSI) has several unique features, namely decentralized control, collaborative learning, high exploration ability, and inspiration from dynamic social behavior; FSI thus offers a natural choice for modeling dynamic social data and solving problems in such domains. One particular case of dynamic social data is online/web usage data, which is rich in information about user activities, interests, and choices. This natural analogy between SI and social behavior is the main motivation for the topic of investigation in this dissertation, with a focus on flock-based systems, which have not been well investigated for this purpose. More specifically, we investigate the use of flock-based SI to solve two related and challenging problems by developing algorithms that form critical building blocks of intelligent personalized websites: (i) providing a better understanding of online users and their activities or interests, for example using clustering techniques that can discover the groups hidden within the data; and (ii) reducing information overload by providing guidance to users on websites and services, typically through web personalization techniques such as recommender systems, which aim to recommend items that a user will potentially like. To support a better understanding of online user activities, we developed clustering algorithms that address two challenges of mining online usage data: the need to scale to large data and the need to adapt clustering to dynamic data sets. To address the scalability challenge, we developed new clustering algorithms by hybridizing traditional flock-based clustering with faster K-Means-based partitional clustering algorithms. We tested our algorithms on synthetic data, real UCI Machine Learning Repository benchmark data, and a data set consisting of real web user sessions. Having linear complexity with respect to the number of data records, the resulting algorithms are considerably faster than traditional flock-based clustering (which has quadratic complexity); moreover, our experiments demonstrate that scalability was gained without sacrificing quality. To address the challenge of adapting to dynamic data, we developed a dynamic clustering algorithm that can handle the following dynamic properties of online usage data: (1) new data records can be added at any time (for example, a new user joins the site); (2) existing data records can be removed at any time (for example, an existing user who no longer subscribes to a service, or whose account is terminated for violating policies); (3) new parts of existing records can arrive at any time, or old parts of an existing record can change (a user's record can change as a result of additional activity such as purchasing new products, returning a product, rating new products, or modifying an existing rating). We tested our dynamic clustering algorithm on synthetic dynamic data and on a data set consisting of real online user ratings for movies. Our algorithm was shown to handle the dynamic nature of the data without sacrificing quality, compared to a traditional flock-based clustering algorithm re-run from scratch with each change in the data.
To support reducing online information overload, we developed a flock-based recommender system to predict the interests of users, focusing in particular on collaborative filtering, or social, recommender systems. Our flock-based recommender algorithm (FlockRecom) iteratively adjusts the position and speed of dynamic flocks of agents on a visualization panel, such that each agent represents a user; it then generates the top-n recommendations for a user based on the ratings of the users represented by its neighboring agents. Our recommendation system was tested on a real data set consisting of online user ratings for a set of jokes, and compared to traditional user-based Collaborative Filtering (CF). Our results demonstrated that our recommender system starts at the same level of quality as traditional CF and then, with more iterations for exploration, surpasses CF's recommendation quality in terms of precision and recall. Another unique advantage of our recommendation system compared to traditional CF is its ability to generate more variety, or diversity, in the set of recommended items. Our contributions advance the state of the art in flock-based SI for clustering and making predictions in dynamic web usage data, and therefore have an impact on improving the quality of online services.
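The recommendation step of a FlockRecom-style system can be sketched as follows: assuming agent positions produced by a flock simulation that has already pulled similar users together on the panel, recommendations for a user are drawn from the ratings of its nearest neighbors. The flock dynamics themselves (the position and speed updates) are not implemented here; the positions and ratings below are random placeholders.

```python
import numpy as np

def flock_recommend(positions, ratings, user, k=3, n=2):
    """Top-n items for `user` from its k nearest flock neighbors.

    positions: (n_users, 2) agent coordinates, assumed to come from a
               flock simulation that co-locates similar users.
    ratings:   (n_users, n_items) matrix, 0 = unrated.
    """
    d = np.linalg.norm(positions - positions[user], axis=1)
    neighbors = np.argsort(d)[1:k + 1]          # skip the user itself
    scores = ratings[neighbors].mean(axis=0)    # average neighbor rating
    scores[ratings[user] > 0] = -np.inf         # drop already-rated items
    return np.argsort(scores)[::-1][:n]

pos = np.random.default_rng(1).random((6, 2))   # placeholder agent panel
R = np.random.default_rng(2).integers(0, 6, (6, 8))  # placeholder ratings
print(flock_recommend(pos, R, user=0))
```

Because the neighborhood keeps shifting as the flock explores, different neighbors contribute over time, which is one plausible reading of why the abstract reports more diversity in the recommended items than static user-based CF.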

    Modelling Web Usage in a Changing Environment

    Get PDF
    Eiben, A.E. [Promotor]; Kowalczyk, W. [Copromotor]

    China’s Charm Offensive in Latin America and the Caribbean: A Comprehensive Analysis of China’s Strategic Communication Strategy Across the Region.

    Get PDF
    China’s control over strategic digital sectors has given it access to personal, corporate, and government data through telecommunications companies such as Huawei.

    China’s Charm Offensive in Latin America and the Caribbean: A Comprehensive Analysis of China’s Strategic Communication Strategy Across the Region [Part III: Image, Academia, and Technology]

    Get PDF
    This paper explores China’s public relations strategy in Latin America and the Caribbean (LAC) through diplomacy, promoting study networks, cooperation among academies, and establishing a significant number of Confucius Institutes. This is supported by a vast network of print, audiovisual, and digital media owned by China or LAC groups. Yet, among the LAC population, knowledge of China is minimal. In sectors dedicated to research, politics, and the economy and finance, there is a slightly favorable image of China due to economic interest.

    Redefining the anthology: forms and affordances in digital culture

    Get PDF
    As the long-dominant U.S. television business model has been challenged in various ways by industrial and technological changes in recent years, increasingly heterogeneous narrative forms have emerged alongside the original serial structures. The diversity of televisual forms became particularly evident once national and local television landscapes started opening up to foreign markets outside the U.S., finally embracing a transnational, global perspective and tracing alternative value chains.
The transition to internet-distributed television played a pivotal role in this formal fragmentation, and the new dynamics of online streaming opened up another path for understanding the flow of television content, which today reflects a highly interconnected, networked media and digital environment. Indeed, the proliferation of video-on-demand services is forcing seriality to adapt to the contemporary mediascape, giving rise to audiovisual products that can be transferred online and that present specificities in production, distribution, and reception. One of the outcomes of such changes in U.S. television series at the dawn of the twenty-first century is the anthology series divided into different seasons with separate stories, yet linked by tone and style. My research positions itself in this technological, industrial, and cultural context, where television content is increasingly fragmented. Given such fragmentation, this thesis considers the ways contemporary television content is distributed in the interaction between algorithm-driven recommendation processes and more traditional editorial practices. The aim of the project is to investigate how certain narrative structures typical of the anthology form emerge in the context of U.S. television seriality, starting from specific conditions of production, distribution, and consumption in the media industry. By focusing on the evolution (temporal, historical dimension) and the digital circulation (spatial, geographic dimension) of U.S. anthology series, and by observing the peculiarities of their production and style, as well as their distribution networks and the consumption patterns they foster, this thesis ultimately inserts itself into a larger conversation on digital-cultural studies. The final purpose is to examine the relation between anthological forms, distribution platforms, and consumption models by proposing a comparative approach to the anthology that is at once cross-cultural, cross-historical, and cross-genre, and that accounts for both pre- and post-digital practices for organizing cultural content.