
    A Massively Scalable Architecture For Instant Messaging & Presence

    This paper analyzes the scalability of Instant Messaging & Presence (IM&P) architectures. We take a queueing-based modelling and analysis approach to find the bottlenecks of the current IM&P architecture at the Dutch social network Hyves, as well as of alternative architectures. We use the Hierarchical Evaluation Tool (HIT) to create and analyse models analytically. Based on these results, we recommend a new architecture that provides better scalability than the current one.
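    The abstract does not include the Hyves models themselves. Purely as an illustration of the kind of result a queueing-based bottleneck analysis produces, the sketch below computes utilization and mean response time for a single M/M/1 server; the arrival and service rates are made-up placeholders, not measurements from the paper.

```python
# Minimal M/M/1 illustration of queueing-based bottleneck analysis.
# The rates below are placeholders, not Hyves measurements.

def mm1_metrics(arrival_rate: float, service_rate: float):
    """Return (utilization, mean response time) for an M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    rho = arrival_rate / service_rate                     # server utilization
    response_time = 1.0 / (service_rate - arrival_rate)   # mean time in system
    return rho, response_time

# Example: presence updates arriving at 800 msg/s, server handling 1000 msg/s.
util, resp = mm1_metrics(800.0, 1000.0)
print(f"utilization = {util:.0%}, mean response time = {resp * 1000:.1f} ms")
```

    A component whose utilization approaches 100% under the projected load is a scalability bottleneck in this style of analysis; HIT supports richer hierarchical models, but the underlying quantities are of this kind.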

    RELEASE: A High-level Paradigm for Reliable Large-scale Server Software

    Erlang is a functional language with a much-emulated model for building reliable distributed systems. This paper outlines the RELEASE project and describes its progress in the first six months. The project aims to scale Erlang's radical concurrency-oriented programming paradigm to build reliable general-purpose software, such as server-based systems, on massively parallel machines. Erlang's computation and reliability models are inherently scalable, but in practice scalability is constrained by aspects of the language and virtual machine. We are working at three levels to address these challenges: evolving the Erlang virtual machine so that it can work effectively on large-scale multicore systems; evolving the language to Scalable Distributed (SD) Erlang; and developing a scalable Erlang infrastructure to integrate multiple, heterogeneous clusters. We are also developing state-of-the-art tools that allow programmers to understand the behaviour of massively parallel SD Erlang programs. We will demonstrate the effectiveness of the RELEASE approach using demonstrators and two large case studies on a Blue Gene
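    RELEASE itself targets Erlang and the BEAM virtual machine; purely as an analogy for the concurrency-oriented, message-passing style the project scales up, the sketch below mimics Erlang's process-and-mailbox model in Python using threads and queues. It is an illustration of the paradigm, not SD Erlang or RELEASE code.

```python
# Actor-style sketch: each "process" owns a mailbox and communicates only by
# sending messages, loosely mirroring Erlang's model. Illustrative only.
import threading
import queue

def echo_server(mailbox: queue.Queue):
    """Receive (reply_to, payload) messages until told to stop."""
    while True:
        msg = mailbox.get()
        if msg == "stop":
            break
        reply_to, payload = msg
        reply_to.put(f"echo: {payload}")

server_box = queue.Queue()
threading.Thread(target=echo_server, args=(server_box,), daemon=True).start()

# "Client process": sends a message and waits for the reply in its own mailbox.
client_box = queue.Queue()
server_box.put((client_box, "hello"))
print(client_box.get())   # -> echo: hello
server_box.put("stop")
```

    In Erlang the equivalent processes are far lighter than OS threads and carry supervision and fault-isolation guarantees, which is what makes the model attractive for reliable large-scale servers.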

    Middleware for Large-scale Distributed Systems

    In recent years, the exponential growth in the use of mobile devices and of services offered in the cloud has changed the way systems are designed and implemented, in an attempt to meet requirements that until then were not essential. With the enormous increase in mobile devices such as smartphones and tablets, the design and implementation of distributed systems became even more important in this area, in an effort to build systems and applications that are more flexible, robust, scalable and, above all, interoperable. The limited processing and storage capacity of these devices made the emergence and growth of technologies that promise to solve many of the identified problems essential. The concept of middleware aims to fill these gaps in more advanced distributed systems, providing a solution for the organisation and design of system architectures while offering extremely fast, secure and reliable communication. A middleware-based architecture equips systems with a communication channel that provides strong interoperability, scalability and security in message exchange, among other advantages. In this thesis, several types and examples of distributed systems are described and analysed, together with a detailed description of three communication protocols (XMPP, AMQP and DDS), two of which (XMPP and AMQP) are used in real projects described throughout the thesis. The main objective of this thesis is to present a study and survey of the state of the art of middleware applied to large-scale distributed systems, showing that the use of middleware can ease and speed up the design and development of a distributed system and brings great advantages in the near future.

    Over the last few years, the design and implementation of applications has evolved towards a new breed of applications that are used by a huge number of users at the same time and can be executed on up to thousands of physically, even geographically, distributed machines, such as cloud computing systems, the new concept of "big data" and smart cities. The existence of several components of these systems, distributed across independent machines, inevitably raises issues in designing and implementing them so as to achieve flexible, scalable, robust, reliable and interoperable systems. It is extremely important to design and implement systems capable of providing communication and coordination among all the components of the system. Implementing a middleware seems to be a good option to solve most of these issues, allowing a system to communicate with other systems in a fast, robust and secure way. This thesis aims to demonstrate that using middleware technologies for communication in distributed systems brings a large number of advantages, such as interoperability between systems, robustness of the communication layer, scalability and high-speed communications
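    Two of the surveyed protocols, XMPP and AMQP, were used in real projects. As a minimal, hedged example of the decoupled messaging a middleware layer provides, the snippet below publishes and then consumes one message over AMQP with the pika client against a local RabbitMQ broker; the broker address, queue name and payload are assumptions made only for illustration.

```python
# Minimal AMQP example with pika: one producer, one consumer, one queue.
# Assumes a RabbitMQ broker on localhost; queue name and payload are illustrative.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="demo.events", durable=True)

# Publish: the sender does not need to know who (or how many) will consume.
channel.basic_publish(
    exchange="",                     # default exchange routes by queue name
    routing_key="demo.events",
    body=b'{"type": "sensor-reading", "value": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)

# Consume one message and acknowledge it.
method, properties, body = channel.basic_get(queue="demo.events", auto_ack=False)
if method is not None:
    print("received:", body)
    channel.basic_ack(method.delivery_tag)

connection.close()
```

    The point of the middleware layer is visible even in this toy case: producer and consumer share only a queue name and a broker, not each other's addresses, lifetimes or implementation languages.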

    Learning from Digital Natives: Bridging Formal and Informal Learning. Final Report

    Overview: This report suggests that students increasingly make use of a variety of e-tools (such as mobile phones, email, MSN, digital cameras, games consoles and social networking sites) to support their informal learning within formalised educational settings, and that they use the tools they have available if none are provided for them. Higher education institutions should therefore encourage the use of these tools.

    Aims and background: This study aimed to explore how e-tools (such as mobile phones, email, MSN, digital cameras, games consoles and social networking sites) and the processes that underpin their use can support learning within educational institutions and help improve the quality of students’ experiences of learning in higher education (pgs 9-11).

    Methodology: The study entailed: (i) desk research to identify related international research and practice and examples of the integration of e-tools and learning processes in formal educational settings; (ii) a survey of 160 engineering and social work students across two contrasting Scottish universities (pre- and post-1992) – the University of Strathclyde and Glasgow Caledonian University – and follow-up interviews with eight students across the two subject areas, to explore which technologies students were using for both learning and leisure activities within and outside the formal educational settings and how they would like to use such technologies to support their learning in both formal and informal settings; and (iii) interviews with eight members of staff from across the institutions and two subject areas to identify their perceptions of the educational value of the e-tools (pgs 24-27).

    Key findings:
    • Students reported making extensive use of a variety of both e-tools (such as mobile phones, email, MSN, digital cameras) and social networking tools (such as Bebo, MySpace, Wikipedia and YouTube) for informal socialisation, communication, information gathering, content creation and sharing, alongside using the institutionally provided technologies and learning environments.
    • Most of the students owned their own computer or had access to a sibling’s or parent’s computer. Many students owned a laptop but preferred not to bring it onto campus due to security concerns and because they found it too heavy to carry about.
    • Ownership of mobile phones was ubiquitous.
    • Whilst the students’ information-searching literacy seemed adequate, their ability to harness the power of social networking tools and informal processes for their learning was low. Staff reported using a few Web 2.0 and social software tools but were generally less familiar with how these could be used to support learning and teaching. There were misconceptions surrounding the affordances of the tools, and fears were expressed about security and invasion of personal space. Considerations of cost and of the time it would take staff to develop their skills meant there was a reluctance to take up new technologies at an institutional level.
    • Subject differences emerged in both staff and student perceptions of which type of tools they would find most useful, and attitudes to Web 2.0 tools differed: engineers were concerned with reliability, use of institutional systems and interoperability, whereas social workers were more flexible because they were focused on communication and professional needs.
    • The study concluded that digital tools, personal devices, social networking software and many of the other tools explored all have large educational potential to support learning processes and teaching practices; the use of these tools and processes within institutions, amongst staff and students, should therefore be encouraged.
    • The report goes on to suggest ways in which the use of such technologies can help strengthen the links between informal and formal learning in higher education. The recommendations are grouped under four areas – pedagogical, socio-cultural, organisational and technological

    DojoAnalytics: A Learning Analytics interoperable component for DojoIBL

    DojoIBL is a cloud-based platform that provides flexible support for collaborative inquiry-based learning processes. It expands the learning process beyond the classroom walls and brings it into an online setting. Such a transition requires teachers and learners to have more means to track and follow up on their progress. Learning Analytics dashboards provide such functionality in the form of meaningful visualizations. In this paper, we present DojoAnalytics, a new module of DojoIBL that enables connections with third-party Learning Analytics dashboards. To demonstrate interoperability with external dashboards, two use-case implementations are described
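    The abstract does not specify the wire format DojoAnalytics uses to talk to external dashboards. As a purely hypothetical sketch of that kind of interoperability, the snippet below posts one learning-activity event to an external dashboard's HTTP endpoint with the requests library; the endpoint URL, token and event fields are invented for illustration and are not part of DojoIBL's actual API.

```python
# Hypothetical sketch: push one inquiry-activity event to an external
# Learning Analytics dashboard over HTTP. URL, token and fields are invented.
import requests

DASHBOARD_URL = "https://dashboard.example.org/api/events"   # placeholder
API_TOKEN = "replace-with-real-token"                        # placeholder

event = {
    "actor": "student-42",
    "verb": "completed",
    "object": "inquiry-phase/hypothesis",
    "timestamp": "2017-05-04T10:15:00Z",
}

response = requests.post(
    DASHBOARD_URL,
    json=event,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
response.raise_for_status()
print("event accepted:", response.status_code)
```

    Keeping the event payload small and self-describing is what lets different dashboards consume the same stream, which is the interoperability property the paper is after.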

    Supporting Massive Mobility with stream processing software

    The goal of this project is to design a solution for massive mobility using the LISP protocol and scalable distributed messaging systems such as Apache Kafka. The project consists of three steps: first, understanding the requirements of the massive mobility scenario; second, designing a solution based on stream processing software that integrates with OOR (an open-source LISP implementation); and third, building a prototype with OOR and a stream processing software (or a similar technology) and evaluating its performance. Our objectives are: to understand the requirements of a massive mobility environment; to learn and evaluate the architecture of Apache Kafka and similar message brokers to see whether these tools could satisfy the requirements; to propose an architecture for massive mobility using the LISP protocol with Kafka as the mapping system; and finally, to evaluate the performance of Apache Kafka in such an architecture.

    In chapters 3 and 4 we provide a summary of the LISP protocol, Apache Kafka and other message brokers. In these chapters we describe the components of these tools and how we can use them to achieve our objective. We evaluate the different mechanisms for 1) authenticating users, 2) access control lists, 3) protocols that assure message delivery, 4) integrity and 5) communication patterns. Because we are interested only in the last message of the queue, it is very important that the message broker provides a way to obtain this message.

    Regarding the proposed architecture, we show how we adapted Kafka to store the information managed by the mapping system in LISP. EIDs in LISP are represented by topics in Apache Kafka, and the publish-subscribe pattern is used to spread notifications to all subscribers. xTRs or mobile devices can play the role of consumers and publishers of the message broker. Every topic uses only one partition, and every subscriber has its own consumer group to avoid competing to consume the messages.

    Finally, we evaluate the performance of Apache Kafka. As we will see, Kafka scales linearly in the following cases: the number of packets in the network in relation to the number of topics; the number of packets in the network in relation to the number of subscribers; the number of files opened by the server in relation to the number of topics; and the time elapsed between the moment the publisher sends a message and the subscriber receives it, in relation to the number of topics. In the conclusion we explain which objectives were achieved and why there are challenges for Kafka, especially on two points: 1) we need only the last location (message) stored in the broker, and Kafka does not provide an out-of-the-box mechanism to obtain such messages, and 2) the number of open files that have to be managed simultaneously by the server. Further study is required to compare the performance of Kafka against other tools
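    As a rough sketch of the mapping described above, the snippet below uses the kafka-python client to publish a location update to a topic named after an EID and to read it back with a subscriber that has its own consumer group, mirroring the one-topic-per-EID, one-group-per-subscriber layout. The broker address, topic naming scheme and payload format are assumptions for illustration, not the thesis's exact configuration.

```python
# Sketch of the proposed EID-to-topic mapping with kafka-python.
# Broker address, topic name and payload are illustrative assumptions.
import json
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"
EID_TOPIC = "eid-192.0.2.10"   # one Kafka topic per LISP EID (assumed naming)

# Publisher role (e.g. an xTR announcing a new RLOC for this EID).
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(EID_TOPIC, {"rloc": "203.0.113.7", "priority": 1})
producer.flush()

# Subscriber role: each subscriber uses its own consumer group so that all of
# them receive every update (publish-subscribe, no competition for messages).
consumer = KafkaConsumer(
    EID_TOPIC,
    bootstrap_servers=BROKER,
    group_id="xtr-site-A",                  # unique group per subscriber
    auto_offset_reset="latest",             # only the newest mapping matters
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    consumer_timeout_ms=5000,
)
for record in consumer:
    print("latest mapping for", EID_TOPIC, "->", record.value)
    break
```

    The sketch also makes the first limitation noted in the conclusion visible: Kafka hands the consumer a stream from an offset rather than "the last value for this key", so retrieving only the most recent location requires extra work (for example seeking to the end of the partition or compacting the topic).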