
    Filtering and scalability in the ECO distributed event model

    Event-based communication is useful in many application domains, ranging from small, centralised applications to large, distributed systems. Many different event models have been developed to address the requirements of different application domains. One such model is the ECO model, which was designed to support distributed virtual world applications. Like many other event models, ECO has event filtering capabilities meant to improve scalability by decreasing network traffic in a distributed implementation. Our recent work in event-based systems has included building a fully distributed version of the ECO model, including event filtering capabilities. This paper describes the results of our evaluation of filters as a means of achieving increased scalability in the ECO model. The evaluation is empirical, using real data gathered from an actual event-based system.
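    A minimal sketch of the kind of subscriber-side event filtering the abstract refers to, in the spirit of a publish/subscribe event model (the class names and predicate-based filter form are illustrative assumptions, not the ECO API):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Event:
    type: str
    attrs: Dict[str, float]

# A filter is a predicate over events; only matching events are delivered,
# so non-matching events need not cross the network to reach a subscriber.
Filter = Callable[[Event], bool]

@dataclass
class Subscriber:
    name: str
    filters: List[Filter] = field(default_factory=list)
    inbox: List[Event] = field(default_factory=list)

    def wants(self, event: Event) -> bool:
        return all(f(event) for f in self.filters)

class EventChannel:
    def __init__(self) -> None:
        self.subscribers: List[Subscriber] = []

    def announce(self, event: Event) -> None:
        # Filtering at (or near) the source is what reduces traffic:
        # only interested subscribers receive the event.
        for sub in self.subscribers:
            if sub.wants(event):
                sub.inbox.append(event)

# Example: a virtual-world subscriber only interested in nearby "move" events.
channel = EventChannel()
nearby = Subscriber("avatar", filters=[lambda e: e.type == "move" and e.attrs.get("dist", 1e9) < 50.0])
channel.subscribers.append(nearby)
channel.announce(Event("move", {"dist": 10.0}))   # delivered
channel.announce(Event("move", {"dist": 500.0}))  # filtered out
```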

    Scalable Bayesian Non-Negative Tensor Factorization for Massive Count Data

    We present a Bayesian non-negative tensor factorization model for count-valued tensor data, and develop scalable inference algorithms (both batch and online) for dealing with massive tensors. Our generative model can handle overdispersed counts as well as infer the rank of the decomposition. Moreover, leveraging a reparameterization of the Poisson distribution as a multinomial facilitates conjugacy in the model and enables simple and efficient Gibbs sampling and variational Bayes (VB) inference updates, with a computational cost that depends only on the number of nonzeros in the tensor. The model also provides nice interpretability for the factors; in our model, each factor corresponds to a "topic". We develop a set of online inference algorithms that allow further scaling up the model to massive tensors, for which batch inference methods may be infeasible. We apply our framework to diverse real-world applications, such as multiway topic modeling on a scientific publications database, analyzing a political science data set, and analyzing a massive household transactions data set.
    Comment: ECML PKDD 201
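    The Poisson-to-multinomial reparameterization mentioned in the abstract is the standard additive-Poisson augmentation, stated here in generic notation (not necessarily the paper's exact parameterization):

\[
x_k \sim \mathrm{Poisson}(\lambda_k),\;\; x = \sum_{k=1}^{K} x_k
\quad\Longleftrightarrow\quad
x \sim \mathrm{Poisson}\!\Big(\sum_{k=1}^{K}\lambda_k\Big),\;\;
(x_1,\dots,x_K)\,\big|\,x \sim \mathrm{Multinomial}\!\Big(x,\ \tfrac{(\lambda_1,\dots,\lambda_K)}{\sum_k \lambda_k}\Big).
\]

    Because the latent counts are all zero whenever the observed count is zero, augmentation-based Gibbs and VB updates only need to visit the nonzero entries of the tensor, which is what keeps the cost proportional to the number of nonzeros.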

    Monitoring in fog computing: state-of-the-art and research challenges

    Fog computing has rapidly become a widely accepted computing paradigm to mitigate cloud computing-based infrastructure limitations such as scarcity of bandwidth, large latency, security, and privacy issues. Fog computing resources and applications vary dynamically at run-time; they are highly distributed, mobile, and may appear and disappear rapidly at any time over the internet. Therefore, to ensure quality of service and experience for end-users, it is necessary to adopt a comprehensive monitoring approach. However, the volatility and dynamism of fog resources make the monitoring design complex and cumbersome. The aim of this article is therefore three-fold: 1) to analyse fog computing-based infrastructures and existing monitoring solutions; 2) to highlight the main requirements and challenges based on a taxonomy; 3) to identify open issues and potential future research directions.
    This work has been (partially) funded by H2020 EU/TW 5G-DIVE (Grant 859881) and H2020 5Growth (Grant 856709). It has also been funded by the Spanish State Research Agency (TRUE5G project, PID2019-108713RB-C52 / AEI / 10.13039/501100011033).

    A Hierarchical Filtering-Based Monitoring Architecture for Large-scale Distributed Systems

    On-line monitoring is essential for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, large numbers of events are generated by system components during their execution and interaction with external objects (e.g. users or processes). These events must be monitored to accurately determine the run-time behavior of an LSD system and to obtain status information that is required for debugging and steering applications. However, the manner in which events are generated in an LSD system is complex and presents a number of challenges for an on-line monitoring system. Correlated events are generated concurrently and can occur at multiple locations distributed throughout the environment. This makes monitoring an intricate task and complicates the management decision process. Furthermore, the large number of entities and the geographical distribution inherent with LSD systems increases the difficulty of addressing traditional issues, such as performance bottlenecks, scalability, and application perturbation. This dissertation proposes a scalable, high-performance, dynamic, flexible and non-intrusive monitoring architecture for LSD systems. The resulting architecture detects and classifies interesting primitive and composite events and performs either a corrective or steering action. When appropriate, information is disseminated to management applications, such as reactive control and debugging tools. The monitoring architecture employs a novel hierarchical event filtering approach that distributes the monitoring load and limits event propagation. This significantly improves scalability and performance while minimizing the monitoring intrusiveness. The architecture provides dynamic monitoring capabilities through: subscription policies that enable application developers to add, delete and modify monitoring demands on-the-fly, an adaptable configuration that accommodates environmental changes, and a programmable environment that facilitates development of self-directed monitoring tasks. Increased flexibility is achieved through a declarative and comprehensive monitoring language, a simple code instrumentation process, and automated monitoring administration. These elements substantially relieve the burden imposed by using on-line distributed monitoring systems. In addition, the monitoring system provides techniques to manage the trade-offs between various monitoring objectives. The proposed solution offers improvements over related works by presenting a comprehensive architecture that considers the requirements and implied objectives for monitoring large-scale distributed systems. This architecture is referred to as the HiFi monitoring system. To demonstrate effectiveness at debugging and steering LSD systems, the HiFi monitoring system has been implemented at Old Dominion University for monitoring the Interactive Remote Instruction (IRI) system. The results from this case study validate that the HiFi system achieves the objectives outlined in this thesis.
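    A minimal sketch of hierarchical event filtering of the kind the abstract describes, where local agents filter at the source and forward only matching events upward (class and filter names are illustrative assumptions, not HiFi's actual interfaces):

```python
from typing import Callable, Dict, List, Optional

Event = Dict[str, object]
Filter = Callable[[Event], bool]

class MonitoringAgent:
    """One node in the filtering hierarchy; forwards only events its filter accepts."""

    def __init__(self, name: str, event_filter: Filter,
                 parent: Optional["MonitoringAgent"] = None) -> None:
        self.name = name
        self.event_filter = event_filter
        self.parent = parent
        self.delivered: List[Event] = []

    def observe(self, event: Event) -> None:
        # Filtering close to the source limits propagation up the hierarchy,
        # which is what improves scalability and reduces intrusiveness.
        if not self.event_filter(event):
            return
        self.delivered.append(event)
        if self.parent is not None:
            self.parent.observe(event)

# The root agent only cares about high-severity events; leaf agents pre-filter by subsystem.
root = MonitoringAgent("manager", lambda e: e.get("severity", 0) >= 3)
leaf = MonitoringAgent("host-42", lambda e: e.get("subsystem") == "network", parent=root)

leaf.observe({"subsystem": "network", "severity": 5})  # reaches the manager
leaf.observe({"subsystem": "network", "severity": 1})  # dropped at the root
leaf.observe({"subsystem": "disk", "severity": 5})     # dropped at the leaf
```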

    Evaluation of low-complexity supervised and unsupervised NILM methods and pre-processing for detection of multistate white goods

    According to recent studies by the BBC and the Scottish Fire and Rescue Service, malfunctioning appliances, especially white goods, were responsible for almost 12,000 fires in Great Britain in just over three years, and for fires almost every day in 2019. The top three "offenders" are washing machines, tumble dryers and dishwashers, hence we focus on these generally challenging-to-disaggregate appliances in this paper. The first step towards remotely assessing safety in the house, e.g., due to appliances not being switched off or appliance malfunction, is detecting appliance state and consumption from the NILM result generated from smart meter data. While supervised NILM methods are expected to perform best on the house they were trained on, this is not necessarily the case with transfer learning on unseen houses; unsupervised NILM may be a better option. However, unsupervised methods in general tend to be affected by noise in the form of unknown appliances, varying power levels and signatures. We evaluate the robustness of three well-performing (based on prior studies) low-complexity NILM algorithms for determining appliance state and consumption: Decision Tree and KNN (supervised) and DBSCAN (unsupervised), as well as different pre-processing algorithms to mitigate the effect of noisy data. These are tested on two datasets with different levels of noise, namely the REFIT and REDD datasets, resampled to 1-min resolution.
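    A minimal sketch of the unsupervised side of such a pipeline, resampling an appliance power trace to 1-minute resolution and clustering power levels with DBSCAN to recover multistate operation (column names and parameter values are assumptions for illustration, not the paper's settings):

```python
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN

def appliance_states(power: pd.Series, eps_watts: float = 20.0, min_samples: int = 10) -> pd.Series:
    """Resample a power trace to 1-min means and label each sample with a power-state cluster."""
    one_min = power.resample("1min").mean().dropna()
    labels = DBSCAN(eps=eps_watts, min_samples=min_samples).fit_predict(
        one_min.to_numpy().reshape(-1, 1)
    )
    return pd.Series(labels, index=one_min.index, name="state")

# Toy example: an appliance alternating between ~0 W (off), ~150 W and ~2000 W states.
idx = pd.date_range("2019-01-01", periods=3600, freq="s")
watts = pd.Series(
    np.r_[np.zeros(1200), 150 + np.random.randn(1200) * 5, 2000 + np.random.randn(1200) * 30],
    index=idx,
)
states = appliance_states(watts)
print(states.value_counts())  # cluster label per 1-min sample; -1 marks noise
```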

    NILM techniques for intelligent home energy management and ambient assisted living: a review

    The ongoing deployment of smart meters and different commercial devices has made electricity disaggregation feasible in buildings and households, based on a single measurement of the current and, sometimes, of the voltage. Energy disaggregation is intended to separate the total power consumption into specific appliance loads, which can be achieved by applying Non-Intrusive Load Monitoring (NILM) techniques with a minimum invasion of privacy. NILM techniques have become more and more widespread in recent years, as a consequence of the interest companies and consumers have in efficient energy consumption and management. This work presents a detailed review of NILM methods, focusing particularly on recent proposals and their applications, especially in the areas of Home Energy Management Systems (HEMS) and Ambient Assisted Living (AAL), where the ability to determine the on/off status of certain devices can provide key information for making further decisions. As well as complementing previous reviews on the NILM field and providing a discussion of the applications of NILM in HEMS and AAL, this paper provides guidelines for future research in these topics.
    Funding: Programa Operacional Portugal 2020 and Programa Operacional Regional do Algarve (01/SAICT/2018/39578); Fundação para a Ciência e Tecnologia through IDMEC, under LAETA (SFRH/BSAB/142998/2018, SFRH/BSAB/142997/2018, UID/EMS/50022/2019); Junta de Comunidades de Castilla-La-Mancha, Spain (SBPLY/17/180501/000392); Spanish Ministry of Economy, Industry and Competitiveness, SOC-PLC project (TEC2015-64835-C3-2-R, MINECO/FEDER).

    Leveraging Node Attributes for Incomplete Relational Data

    Relational data are usually highly incomplete in practice, which inspires us to leverage side information to improve the performance of community detection and link prediction. This paper presents a Bayesian probabilistic approach that incorporates various kinds of node attributes, encoded in binary form, in relational models with Poisson likelihood. Our method works flexibly with both directed and undirected relational networks. The inference can be done by efficient Gibbs sampling which leverages the sparsity of both networks and node attributes. Extensive experiments show that our models achieve state-of-the-art link prediction results, especially with highly incomplete relational data.
    Comment: Appearing in ICML 201
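    For context, a generic relational model with Poisson likelihood takes a form like the following (illustrative notation only; the paper's actual model additionally incorporates the binary node attributes):

\[
A_{ij} \sim \mathrm{Poisson}\!\Big(\sum_{k,l} \phi_{ik}\,\Lambda_{kl}\,\phi_{jl}\Big),
\]

    where \(\phi_i\) is the latent membership vector of node \(i\) and \(\Lambda\) holds block interaction rates. Since the Poisson rate decomposes additively, latent-count augmentation lets Gibbs sampling touch only the nonzero entries of \(A\), which is the sparsity the abstract leverages.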

    Simulating social relations in multi-agent systems

    Open distributed systems are composed of a large number of heterogeneous nodes with disparate requirements and objectives, a number of which may not conform to the system specification. This thesis argues that activity in such systems can be regulated by using distributed mechanisms inspired by social science theories regarding similarity/kinship, trust, reputation, recommendation and economics. This makes it possible to create scalable and robust agent societies which can adapt to overcome structural impediments and provide inherent defence against malicious and incompetent action, without detriment to system functionality and performance. In particular this thesis describes:
    • an agent-based simulation and animation platform (PreSage), which offers the agent developer and society designer a suite of powerful tools for creating, simulating and visualising agent societies from both a local and a global perspective;
    • a social information dissemination system (SID), based on principles of self-organisation, which personalises recommendation and directs information dissemination;
    • a computational socio-cognitive and economic framework (CScEF), which integrates and extends socio-cognitive theories of trust, reputation and recommendation with basic economic theory;
    • results from two simulation studies investigating the performance of SID and the CScEF.
    The results show the production of a generic, reusable and scalable platform for developing and animating agent societies, and its contribution to the community as an open source tool. Secondly, specific results regarding the application of SID and CScEF show that revealing outcomes of using socio-technical mechanisms to condition agent interactions can be demonstrated and identified using PreSage.
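    A minimal sketch of how a trust/reputation score might condition agent interactions in such a society (a toy beta-reputation style update, purely illustrative and not the thesis's actual CScEF):

```python
from dataclasses import dataclass, field
from typing import Dict
import random

@dataclass
class TrustModel:
    """Toy beta-reputation style trust: track good/bad interaction outcomes per peer."""
    good: Dict[str, int] = field(default_factory=dict)
    bad: Dict[str, int] = field(default_factory=dict)

    def record(self, peer: str, success: bool) -> None:
        bucket = self.good if success else self.bad
        bucket[peer] = bucket.get(peer, 0) + 1

    def trust(self, peer: str) -> float:
        # Expected value of a Beta(good + 1, bad + 1) posterior.
        g, b = self.good.get(peer, 0), self.bad.get(peer, 0)
        return (g + 1) / (g + b + 2)

    def should_interact(self, peer: str, threshold: float = 0.5) -> bool:
        # Future interactions are conditioned on accumulated reputation.
        return self.trust(peer) >= threshold

model = TrustModel()
for _ in range(20):
    model.record("reliable-agent", success=random.random() < 0.9)
    model.record("malicious-agent", success=random.random() < 0.1)
print(model.should_interact("reliable-agent"), model.should_interact("malicious-agent"))
```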