25,113 research outputs found

    FOTE 2008 Conference Report

    A report prepared by JA.Net and ULCC on the Future of Technology in Education (FOTE 2008) conference, held at Imperial College on 3rd October 2008. It covers the main speakers, themes and presentations: Cloud Computing, Second Life, Portability, Personalisation, Shared Services, Campus of the Future, Mobile Technology, Creativity and Media Production, and Social Collaboration Tools for Staff and Students.

    Media Culture 2020: collaborative teaching and blended learning using social media and cloud-based technologies

    The Media Culture 2020 project was considered a great success by all the partners, academics and especially the students who took part. It is a true example of an intercultural, multidisciplinary, blended learning experience in higher education that achieved its goals of breaking down classroom walls and bridging geographical distance and cultural barriers. Students with different skills, coming from different countries and cultures and interacting with one another, enlarge the possibilities for creativity, collaboration and quality work. The blend of synchronous and asynchronous teaching methods fostered an open, blended learning environment, one that extended the traditional boundaries of the classroom in time and space. The interactive and decentralised nature of digital tools enabled staff and students to communicate and strengthen social ties, alongside participating in the production of new knowledge and media content. For students and lecturers, the implementation of social media and cloud platforms offered an innovative solution to teaching and learning in a collaborative manner. By leveraging the interactive and decentralised capabilities of a range of technologies in an educational context, this model of digital scholarship facilitates an open and dynamic working environment. Blended teaching methods allow for expansive collaboration, whereby information and knowledge can be accessed and disseminated across a number of networked devices.

    The Internet-of-Things Meets Business Process Management: Mutual Benefits and Challenges

    The Internet of Things (IoT) refers to a network of connected devices collecting and exchanging data over the Internet. These things can be artificial or natural, and they interact as autonomous agents forming a complex system. In turn, Business Process Management (BPM) was established to analyze, discover, design, implement, execute, monitor and evolve collaborative business processes within and across organizations. While the IoT and BPM have been regarded as separate topics in research and practice, we strongly believe that, on the one hand, the management of IoT applications will benefit from BPM concepts, methods and technologies; on the other hand, the IoT poses challenges that will require enhancements and extensions of the current state of the art in the BPM field. In this paper, we question to what extent these two paradigms can be combined and we discuss the emerging challenges.
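
    The sketch below is a minimal, hypothetical illustration of one way the two paradigms might meet: an IoT sensor event is routed into a running business process instance, which records the transition for later monitoring. All names (SensorEvent, ProcessInstance, the tasks and the threshold) are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: an IoT event advancing a BPM process instance.
from dataclasses import dataclass, field


@dataclass
class SensorEvent:
    device_id: str
    kind: str        # e.g. "temperature_exceeded"
    value: float


@dataclass
class ProcessInstance:
    case_id: str
    current_task: str = "monitor_shipment"
    history: list = field(default_factory=list)

    def advance(self, next_task: str, reason: str) -> None:
        # Record the transition so the process can later be monitored or mined.
        self.history.append((self.current_task, next_task, reason))
        self.current_task = next_task


def route_event(event: SensorEvent, instance: ProcessInstance) -> None:
    # A trivial routing rule: an out-of-range reading escalates the process.
    if event.kind == "temperature_exceeded" and event.value > 8.0:
        instance.advance("inspect_goods",
                         f"{event.device_id} reported {event.value}")


if __name__ == "__main__":
    case = ProcessInstance(case_id="shipment-42")
    route_event(SensorEvent("sensor-7", "temperature_exceeded", 9.3), case)
    print(case.current_task)   # -> inspect_goods
    print(case.history)
```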

    Creating business value from big data and business analytics : organizational, managerial and human resource implications

    This paper reports on a research project, funded by the EPSRC’s NEMODE (New Economic Models in the Digital Economy, Network+) programme, that explores how organizations create value from their increasingly large volumes of data and the challenges they face in doing so. Three case studies are reported of large organizations with a formal business analytics group and data volumes that can be considered ‘big’. The case organizations are MobCo, a mobile telecoms operator, MediaCo, a television broadcaster, and CityTrans, a provider of transport services to a major city. Analysis of the cases is structured around a framework in which data and value creation are mediated by the organization’s business analytics capability. This capability is then studied through a sociotechnical lens of organization/management, process, people, and technology. From the cases twenty key findings are identified. In the area of data and value creation these are: 1. Ensure data quality, 2. Build trust and permissions platforms, 3. Provide adequate anonymization, 4. Share value with data originators, 5. Create value through data partnerships, 6. Create public as well as private value, 7. Monitor and plan for changes in legislation and regulation. In organization and management: 8. Build a corporate analytics strategy, 9. Plan for organizational and cultural change, 10. Build deep domain knowledge, 11. Structure the analytics team carefully, 12. Partner with academic institutions, 13. Create an ethics approval process, 14. Make analytics projects agile, 15. Explore and exploit in analytics projects. In technology: 16. Use visualization as story-telling, 17. Be agnostic about technology while the landscape is uncertain (i.e., maintain a focus on value). In people and tools: 18. Data scientist personal attributes (curious, problem-focused), 19. Data scientist as ‘bricoleur’, 20. Data scientist acquisition and retention through challenging work. With regard to what organizations should do if they want to create value from their data, the paper further proposes a model of the analytics eco-system that places the business analytics function in a broad organizational context, and a process model for analytics implementation together with a six-stage maturity model.

    Sustainability of systems interoperability in dynamic business networks

    Dissertation submitted for the degree of Doctor in Electrical and Computer Engineering. Collaborative networked environments emerged with the spread of the internet, helping to overcome past communication barriers and identifying interoperability as an essential property to support business development. When interoperability is achieved seamlessly, efficiency increases across the entire product life cycle. However, due to the different sources of knowledge, models and semantics, enterprise organisations are experiencing difficulties exchanging critical information, even when they operate in the same business environments. To solve this issue, most of them try to attain interoperability by establishing peer-to-peer mappings with different business partners, or they use neutral data and product standards as the core for information sharing in optimized networks. In current industrial practice, the model mappings that regulate enterprise communications are defined only once, and most of them are hardcoded in the information systems. This solution has been effective and sufficient for static environments, where enterprise and product models are valid for decades. However, more and more enterprise systems are becoming dynamic, adapting to meet further requirements; a trend that is causing new interoperability disturbances and reducing efficiency in existing partnerships. Enterprise Interoperability (EI) is a well-established area of applied research that studies these problems and proposes novel approaches and solutions. This PhD work contributes to that research by considering enterprises as complex and adaptive systems, swayed by factors that make interoperability difficult to sustain over time. It proposes the analysis of complexity as a neighbouring scientific domain in which features of interoperability can be identified and evaluated, as a benchmark for developing a new foundation of EI. This approach draws concepts from complexity science to analyse dynamic enterprise networks and proposes a framework for sustaining systems interoperability, enabling different organisations to evolve at their own pace, answering upcoming requirements while minimizing the negative impact these changes can have on their business environment.
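
    As a purely illustrative aside, and not the framework proposed in the thesis, the sketch below shows the neutral-standard idea mentioned in the abstract: two partners map their internal field names onto a shared neutral model, so each can change its own model without renegotiating a hardcoded peer-to-peer mapping. All field names are hypothetical.

```python
# Illustrative sketch: exchanging product data through a neutral model
# instead of a hardcoded peer-to-peer mapping (all field names invented).

NEUTRAL_FIELDS = ("part_number", "description", "unit_price_eur")

# Each partner maintains only its own mapping to/from the neutral model.
SUPPLIER_TO_NEUTRAL = {"ref": "part_number", "desc": "description", "preco": "unit_price_eur"}
BUYER_FROM_NEUTRAL = {"part_number": "item_id", "description": "item_text", "unit_price_eur": "cost"}


def to_neutral(record: dict, mapping: dict) -> dict:
    # Translate a partner-specific record into the neutral representation.
    return {mapping[k]: v for k, v in record.items() if k in mapping}


def from_neutral(record: dict, mapping: dict) -> dict:
    # Translate a neutral record into the receiving partner's representation.
    return {mapping[k]: v for k, v in record.items() if k in mapping}


if __name__ == "__main__":
    supplier_record = {"ref": "A-123", "desc": "Bearing", "preco": 4.20}
    neutral = to_neutral(supplier_record, SUPPLIER_TO_NEUTRAL)
    buyer_record = from_neutral(neutral, BUYER_FROM_NEUTRAL)
    print(neutral)        # shared representation
    print(buyer_record)   # buyer's internal representation
```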

    Microservice Transition and its Granularity Problem: A Systematic Mapping Study

    Microservices have gained wide recognition and acceptance in software industries as an emerging architectural style for autonomic, scalable, and more reliable computing. The transition to microservices has been highly motivated by the need for better alignment of technical design decisions with the improving value potentials of architectures. Despite microservices' popularity, research still lacks a disciplined understanding of the transition and consensus on the principles and activities underlying "micro-ing" architectures. In this paper, we report on a systematic mapping study that consolidates various views, approaches and activities that commonly assist in the transition to microservices. The study aims to provide a better understanding of the transition; it also contributes a working definition of the transition and the technical activities underlying it. We term the transition and technical activities leading to microservice architectures microservitization. We then shed light on a fundamental problem of microservitization: microservice granularity and reasoning about its adaptation as a first-class entity. This study reviews the state of the art and practice related to reasoning about microservice granularity; it reviews modelling approaches, aspects considered, and guidelines and processes used to reason about microservice granularity. This study identifies opportunities for future research and development related to reasoning about microservice granularity.
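
    To make the notion of reasoning explicitly about granularity concrete, the following sketch uses an invented toy metric (not one proposed in the study) that scores candidate decompositions by how much interaction weight stays inside a service versus crossing service boundaries; a real granularity adaptation process would weigh many more aspects.

```python
# Toy sketch: scoring candidate microservice decompositions by a simple
# cohesion-versus-coupling trade-off (metric and data are invented).
from itertools import combinations

# Call frequencies between operations of a hypothetical monolith.
CALLS = {
    ("create_order", "reserve_stock"): 120,
    ("create_order", "charge_card"): 100,
    ("reserve_stock", "charge_card"): 10,
    ("send_invoice", "charge_card"): 80,
}


def call_weight(a: str, b: str) -> int:
    return CALLS.get((a, b), 0) + CALLS.get((b, a), 0)


def score(decomposition: list[set[str]]) -> int:
    # Cohesion: calls kept inside a service. Coupling: calls crossing services.
    cohesion = sum(call_weight(a, b)
                   for service in decomposition
                   for a, b in combinations(sorted(service), 2))
    all_ops = sorted(set().union(*decomposition))
    total = sum(call_weight(a, b) for a, b in combinations(all_ops, 2))
    coupling = total - cohesion
    return cohesion - coupling   # higher is better under this toy metric


if __name__ == "__main__":
    coarse = [{"create_order", "reserve_stock", "charge_card", "send_invoice"}]
    fine = [{"create_order", "reserve_stock"}, {"charge_card", "send_invoice"}]
    print("coarse:", score(coarse), "fine:", score(fine))
```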

    TrIMS: Transparent and Isolated Model Sharing for Low Latency Deep Learning Inference in Function as a Service Environments

    Deep neural networks (DNNs) have become core computation components within low latency Function as a Service (FaaS) prediction pipelines, including image recognition, object detection, natural language processing, speech synthesis, and personalized recommendation pipelines. Cloud computing, as the de-facto backbone of modern computing infrastructure for both enterprise and consumer applications, has to be able to handle user-defined pipelines of diverse DNN inference workloads while maintaining isolation and latency guarantees and minimizing resource waste. The current solution for guaranteeing isolation within FaaS is suboptimal, suffering from "cold start" latency. A major cause of such inefficiency is the need to move large amounts of model data within and across servers. We propose TrIMS as a novel solution to address these issues. Our proposed solution consists of a persistent model store across the GPU, CPU, local storage, and cloud storage hierarchy, an efficient resource management layer that provides isolation, and a succinct set of application APIs and container technologies for easy and transparent integration with FaaS, Deep Learning (DL) frameworks, and user code. We demonstrate our solution by interfacing TrIMS with the Apache MXNet framework, achieving up to a 24x latency speedup for image classification models and up to a 210x speedup for large models. We also achieve up to an 8x improvement in system throughput.
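
    The sketch below is an invented approximation, not the TrIMS implementation or API: it caches model weights in a GPU/CPU/disk tier hierarchy so that repeated invocations are served from the fastest tier holding a copy, while only the first request pays the cold-start fetch from remote storage.

```python
# Invented approximation of a tiered model store (not the TrIMS API).

TIERS = ("gpu", "cpu", "disk")  # ordered fastest to slowest


class TieredModelStore:
    def __init__(self) -> None:
        self.tiers = {name: {} for name in TIERS}

    def _fetch_from_cloud(self, model_id: str) -> bytes:
        # Stand-in for the slow path: pulling weights from remote storage.
        return f"weights-of-{model_id}".encode()

    def _promote(self, model_id: str, weights: bytes, up_to: str) -> None:
        # Copy the weights into every tier at least as fast as `up_to`,
        # so later requests hit the warmest possible tier.
        for name in TIERS:
            self.tiers[name][model_id] = weights
            if name == up_to:
                break

    def get(self, model_id: str) -> bytes:
        # Check tiers from fastest to slowest before paying the cold start.
        for name in TIERS:
            if model_id in self.tiers[name]:
                return self.tiers[name][model_id]
        weights = self._fetch_from_cloud(model_id)   # cold-start path
        self._promote(model_id, weights, up_to=TIERS[-1])
        return weights


if __name__ == "__main__":
    store = TieredModelStore()
    store.get("resnet50")          # first call fetches from "cloud" storage
    print(store.get("resnet50"))   # second call is served from the GPU tier
```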