
    Data-Driven Methods for Data Center Operations Support

    During the last decade, cloud technologies have evolved at an impressive pace, to the point that we now live in a cloud-native era where developers can leverage an unprecedented landscape of (possibly managed) services for orchestration, compute, storage, load balancing, monitoring, and more. On-demand access to a diverse set of configurable virtualized resources makes it possible to build more elastic, flexible and highly resilient distributed applications. Behind the scenes, cloud providers carry the heavy burden of maintaining the underlying infrastructures: large-scale distributed systems, partitioned and replicated across many geographically dispersed data centers to guarantee scalability, robustness to failures, high availability and low latency. The larger the scale, the more cloud providers have to deal with complex interactions among the various components, to the point that monitoring, diagnosing and troubleshooting issues become daunting tasks.

    To keep up with these challenges, development and operations practices have undergone significant transformations, especially in terms of the automations that make releasing new software, and responding to unforeseen issues, faster and sustainable at scale. The resulting paradigm is nowadays referred to as DevOps. However, while such automations can be very sophisticated, traditional DevOps practices fundamentally rely on reactive mechanisms that typically require careful manual tuning and supervision from human experts. To minimize the risk of outages, and the related costs, it is crucial to provide DevOps teams with suitable tools that enable a proactive approach to data center operations.

    This work presents a comprehensive data-driven framework to address the most relevant problems that arise in large-scale distributed cloud infrastructures. These environments are characterized by a large availability of diverse data, collected at every level of the stack: time-series (e.g., physical host measurements, virtual machine or container metrics, networking component logs, application KPIs); graphs (e.g., network topologies, fault graphs reporting dependencies among hardware and software components, performance-issue propagation networks); and text (e.g., source code, system logs, version control history, code review feedback). Such data are typically updated at high frequency and are subject to distribution drifts caused by continuous configuration changes to the underlying infrastructure. In such a highly dynamic scenario, traditional model-driven approaches alone may be inadequate to capture the complexity of the interactions among system components. DevOps teams would therefore benefit from robust data-driven methods that support their decisions with historical information. For instance, effective anomaly detection capabilities can enable more precise and efficient root-cause analysis, while accurate forecasting and intelligent control strategies can improve resource management.

    Given their ability to deal with high-dimensional, complex data, Deep Learning-based methods are the most straightforward option for realizing the aforementioned support tools. On the other hand, because of their complexity, such models often require substantial processing power, and suitable hardware, to be operated effectively at scale. These aspects must be carefully addressed when applying such methods in the context of data center operations: automated operations approaches must be dependable and cost-efficient, so as not to degrade the very services they are built to improve.
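    As a concrete illustration of the gap between reactive and data-driven operations, the sketch below flags anomalous points in an infrastructure metric using a simple rolling z-score baseline. This is not the framework described above (which targets Deep Learning-based methods); the window size, threshold, and synthetic CPU series are all illustrative assumptions.

```python
# A minimal sketch, NOT the framework described in the abstract: flag
# points in an infrastructure time-series that deviate sharply from a
# trailing-window baseline. Window and threshold are arbitrary choices.
import numpy as np

def rolling_zscore_anomalies(series, window=60, threshold=3.0):
    """Return indices where the value lies more than `threshold`
    standard deviations from the mean of the preceding `window` points."""
    series = np.asarray(series, dtype=float)
    anomalies = []
    for t in range(window, len(series)):
        past = series[t - window:t]
        mu, sigma = past.mean(), past.std()
        if sigma > 0 and abs(series[t] - mu) / sigma > threshold:
            anomalies.append(t)
    return anomalies

# Synthetic CPU-utilisation trace with one injected spike at index 200.
rng = np.random.default_rng(0)
cpu = np.concatenate([rng.normal(40, 2, 200), [95], rng.normal(40, 2, 50)])
print(rolling_zscore_anomalies(cpu))  # -> [200]
```

    Even this trivial detector highlights the point made above: its fixed threshold is exactly the kind of manual tuning that a proactive, learned approach aims to replace.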

    Exploratory visual text analytics in the scientific literature domain


    Representación computacional del lenguaje natural escrito (Computational Representation of Written Natural Language)

    When humans read or hear words, they immediately relate them to concepts. This is possible thanks to the information already stored in the brain and to the human ability to select, process, and associate such information with words. For a computer, however, natural language text is only a sequence of bits that conveys no meaning on its own unless properly processed. A computer interprets this bit sequence by modeling the processing that takes place in human minds, namely structuring and linking the text with previously stored information. During this process, as well as when describing its results, the text is represented using various formal structures that permit automatic processing, interpretation, and comparison of information. In this paper, we present a detailed description of these structures.
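    To make the notion of a formal structure concrete, here is a minimal, hypothetical example (not taken from the paper) of one of the simplest such representations: mapping raw text to a bag-of-words term-frequency vector that a machine can process and compare.

```python
# Minimal sketch: turn raw text -- to a computer, just a sequence of
# bits -- into a formal structure (a term-frequency vector) that
# supports automatic processing and comparison.
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Lowercase the text, split on whitespace, count term occurrences."""
    return Counter(text.lower().split())

print(bag_of_words("The cat sat on the mat"))
# Counter({'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1})
```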

    Incorporating semantic and syntactic information into document representation for document clustering

    Document clustering is a widely used strategy for information retrieval and text data mining. In traditional document clustering systems, documents are represented as a bag of independent words. In this project, we propose to enrich the representation of a document by incorporating semantic and syntactic information, identified by performing semantic and syntactic analysis on the raw text. A detailed survey of current research in natural language processing, syntactic analysis, and semantic analysis is provided. Our experimental results demonstrate that incorporating semantic and syntactic information can improve the performance of our document clustering system on most of our datasets, and a statistically significant improvement can be achieved when both kinds of information are combined. Our experiments with compound words show that using compound words alone does not improve clustering performance on our datasets. When compound words are combined with the original single words, the combined feature set performs slightly better on most datasets, but this improvement is not statistically significant. To select the best clustering algorithm for our document clustering system, we also compare several widely used clustering algorithms: although the bisecting K-means method has advantages on large datasets, a traditional hierarchical clustering algorithm still achieves the best performance on our small datasets.
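    The sketch below illustrates the kind of comparison the abstract describes, using scikit-learn (version 1.1 or later for BisectingKMeans). It is not the project's actual code: bigrams stand in for the paper's compound-word features, and the four-document corpus is invented.

```python
# Hypothetical sketch of the experiment described above: cluster
# documents with plain single-word features, then with single words
# combined with bigrams (a stand-in for compound words), using both a
# hierarchical algorithm and bisecting K-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering, BisectingKMeans

docs = [
    "machine learning improves text clustering",
    "deep learning models cluster documents",
    "stock markets fell sharply on friday",
    "investors reacted to falling markets",
]

for ngram_range in [(1, 1), (1, 2)]:  # single words, then words + bigrams
    X = TfidfVectorizer(ngram_range=ngram_range).fit_transform(docs)
    hier = AgglomerativeClustering(n_clusters=2).fit(X.toarray())
    bisect = BisectingKMeans(n_clusters=2, random_state=0).fit(X)
    print(ngram_range, hier.labels_, bisect.labels_)
```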

    The Hermeneutics Of The Hard Drive: Using Narratology, Natural Language Processing, And Knowledge Management To Improve The Effectiveness Of The Digital Forensic Process

    In order to protect the safety of our citizens and to ensure a civil society, we ask our law enforcement, judiciary and intelligence agencies, under the rule of law, to seek probative information which can be acted upon for the common good. This information may be used in court to prosecute criminals, or it can be used to conduct offensive or defensive operations to protect our national security. As the citizens of the world store more and more information in digital form, and as they live an ever-greater portion of their lives online, law enforcement, the judiciary and the Intelligence Community will continue to struggle with finding, extracting and understanding the data stored on computers. Yet this trend also affords greater opportunity for law enforcement. This dissertation describes how several disparate approaches (knowledge management, content analysis, narratology, and natural language processing) can be combined in an interdisciplinary way to address the growing difficulty of developing useful, actionable intelligence from the ever-increasing corpus of digital evidence. After exploring how these techniques might apply to the digital forensic process, I suggest two new theoretical constructs, the Hermeneutic Theory of Digital Forensics and the Narrative Theory of Digital Forensics, linking existing theories of forensic science, knowledge management, content analysis, narratology, and natural language processing in order to identify and extract narratives from digital evidence. An experimental approach is described and prototyped, and the results of these experiments demonstrate the potential of natural language processing techniques for digital forensics.
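    As a hypothetical illustration of the final claim (none of this code is from the dissertation), standard named-entity recognition can already pull the basic elements of a narrative, the who, where and when, out of text recovered from a drive. The sketch assumes spaCy and its small English model are installed; the recovered text is invented.

```python
# Hypothetical sketch: extract narrative elements (actors, places,
# times) from text recovered during a forensic examination.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
recovered_text = "On March 3rd, Alice emailed Bob from Chicago about the transfer."

# PERSON, GPE and DATE entities map onto who, where and when -- the raw
# material from which a narrative of events can be assembled.
for ent in nlp(recovered_text).ents:
    print(ent.label_, ent.text)
```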

    B!SON: A Tool for Open Access Journal Recommendation

    Finding a suitable open access journal in which to publish scientific work is a complex task: researchers have to navigate a constantly growing number of journals, institutional agreements with publishers, funders' conditions, and the risk of predatory publishers. To help with these challenges, we introduce a web-based journal recommendation system called B!SON. It was developed based on a systematic requirements analysis, is built on open data, gives publisher-independent recommendations, and works across domains. It suggests open access journals based on the title, abstract and references provided by the user. The recommendation quality has been evaluated using a large test set of 10,000 articles. Development by two German scientific libraries ensures the longevity of the project.
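    The abstract does not detail B!SON's ranking method, so the sketch below shows just one plausible way title/abstract-based journal recommendation can work: represent each journal by the text of its past articles and rank journals by cosine similarity to the submission. The journal names and corpora here are invented.

```python
# Hypothetical sketch of similarity-based journal recommendation; NOT
# B!SON's actual implementation. Each journal is represented by the
# concatenated text of its past articles (invented here).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

journal_corpora = {
    "Journal of Cloud Computing": "cloud virtualization container orchestration scaling",
    "Computational Linguistics": "parsing semantics corpora language model evaluation",
}

submission = "We present a transformer model for semantic parsing."

names = list(journal_corpora)
X = TfidfVectorizer().fit_transform(list(journal_corpora.values()) + [submission])
scores = cosine_similarity(X[-1], X[:-1]).ravel()

for name, score in sorted(zip(names, scores), key=lambda p: -p[1]):
    print(f"{score:.3f}  {name}")
```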