
    Authentication and authorisation in entrusted unions

    This paper reports on the status of a project whose aim is to implement and demonstrate in a real-life environment an integrated eAuthentication and eAuthorisation framework to enable trusted collaborations and delivery of services across different organisational/governmental jurisdictions. This aim will be achieved by designing a framework with assurance of claims, trust indicators, policy enforcement mechanisms and processing under encryption to address the security and confidentiality requirements of large distributed infrastructures. The framework supports collaborative secure distributed storage, secure data processing and management in both cloud and offline scenarios, and is intended to be deployed and tested in two pilot studies in two different domains, viz. bio-security incident management and Ambient Assisted Living (eHealth). Interim results in terms of security requirements, privacy-preserving authentication, and authorisation are reported.
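    To make the policy-enforcement idea concrete, the sketch below shows a minimal attribute-based authorisation check in C++. It is illustrative only: the framework's actual interfaces are not described in the abstract, and all names (Claims, Rule, authorise) are hypothetical.

    ```cpp
    // Hypothetical sketch of attribute-based policy enforcement: a subject's
    // vetted claims are checked against per-resource rules, deny by default.
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Claims as issued by a trusted eAuthentication provider.
    using Claims = std::map<std::string, std::string>;

    // A policy rule: a predicate over claims for one resource/action pair.
    struct Rule {
        std::string resource;
        std::string action;
        std::function<bool(const Claims&)> predicate;
    };

    // Grant access only if some rule for (resource, action) accepts the claims.
    bool authorise(const std::vector<Rule>& policy, const Claims& claims,
                   const std::string& resource, const std::string& action) {
        for (const auto& rule : policy) {
            if (rule.resource == resource && rule.action == action &&
                rule.predicate(claims))
                return true;
        }
        return false;  // deny by default
    }

    int main() {
        std::vector<Rule> policy{
            {"incident-report", "read", [](const Claims& c) {
                 auto it = c.find("role");
                 return it != c.end() && it->second == "bio-security-officer";
             }}};
        Claims alice{{"role", "bio-security-officer"}, {"jurisdiction", "NL"}};
        std::cout << std::boolalpha
                  << authorise(policy, alice, "incident-report", "read") << '\n';  // true
    }
    ```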

    Alternate Means of Digital Design Communication

    This thesis reconceptualises communication in digital design as an integrated social and technical process. The friction in the communicative processes pertaining to digital design can be traced to the fact that current research and practice emphasise technical concerns at the expense of social aspects of design communication. With the advent of BIM (Building Information Modelling), a code model of communication (machine-to-machine) is inadequately applied to design communication. This imbalance is addressed in this thesis by using inferential models of communication to capture and frame the psychological and social aspects behind the communicative contracts between people. Three critical aspects of the communicative act have been analysed, namely (1) data representation, (2) data classification and (3) data transaction, with the help of a new digital design communication platform, Speckle, which was developed during this research project for this purpose. By virtue of an applied living laboratory context, Speckle facilitated both qualitative and quantitative comparisons against existing methodologies with data from real-world settings. Regarding data representation (1), this research finds that the communicative performance of a low-level composable object model is better than that of a complete and universal one, as it enables a more dynamic process of ontological revision. This implies that current practice and research operate at an inappropriate level of abstraction. On data classification (2), this thesis shows that a curatorial object-based data sharing methodology, as opposed to the current file-based approaches, leads to increased relevancy and a reduction in noise (information without intent, or meaning). Finally, on data transaction (3), the analysis shows that an object-based data sharing methodology is technically better suited to enable communicative contracts between stakeholders. It allows for faster and more meaningful change-dependent transactions, and it allows traceable communicative networks to emerge outside of the predefined exchanges of current practice.
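    As an illustration of what a low-level composable object model looks like, the C++ sketch below models an object as a lightweight type tag plus an open-ended bag of properties. This is not Speckle's actual API; the types and field names are assumptions made for the example.

    ```cpp
    // Hypothetical sketch of a low-level composable object model: instead of
    // a complete, universal schema, each object carries an open-ended map of
    // properties, so stakeholders can revise the ontology per exchange.
    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>
    #include <variant>

    struct Object;  // forward declaration so properties can nest objects

    using Value = std::variant<double, std::string, std::shared_ptr<Object>>;

    struct Object {
        std::string type_tag;                // a label, not a binding schema
        std::map<std::string, Value> props;  // per-exchange, revisable ontology
    };

    int main() {
        auto wall = std::make_shared<Object>();
        wall->type_tag = "Wall";
        wall->props["height_m"] = 3.0;
        wall->props["fireRating"] = std::string("EI60");  // added ad hoc, no schema change

        Object room{"Room", {{"name", std::string("Lab")}, {"boundary", wall}}};
        std::cout << room.props.size() << " properties on room\n";
    }
    ```

    The design choice the thesis argues for is visible here: adding "fireRating" requires no agreement on a universal model, only a local, curatorial decision about what this exchange needs to carry.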

    An Adaptable Optimal Network Topology Model for Efficient Data Centre Design in Storage Area Networks

    In this research, we look at how different network topologies affect the energy consumption of modular data centre (DC) setups. We use a combined-input directed approach to assess the benefits of rack-scale and pod-scale fragmentation across a variety of electrical, optoelectronic, and composite network architectures in comparison to a conventional DC. When the optical transport architecture is implemented and the appropriate resource components are distributed, the findings reveal that fragmentation at the layer level is adequate, even compared to a pod-scale DC. Composable DCs can operate at peak efficiency because of the optical network topology. Logical separation of conventional DC servers across an optical network architecture is also investigated in this article. When compared to physical decentralisation at the rack scale, logical decomposition of data centres inside each rack offers a small decrease in overall DC energy usage thanks to better allocation of resource needs. This allows for a flexible, composable architecture that can accommodate performance-based in-memory applications. Moreover, we examine the fundamental model and its use in both static and dynamic data centres. According to our findings, typical DCs become more energy efficient as workload modularity increases, although excessive resource use still exists. By enabling optimal resource use and energy savings, disaggregation and micro-services were able to reduce the typical DC's energy consumption by up to 30%. Furthermore, we offer a heuristic that reproduces the mixed-integer model's output trends for energy-efficient allocation of workloads in modularised DCs.
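    The paper's heuristic itself is not given in the abstract; as a stand-in, the C++ sketch below uses first-fit-decreasing packing, a common greedy shape for energy-aware workload consolidation. All names and numbers are hypothetical.

    ```cpp
    // Illustrative stand-in for an energy-efficient allocation heuristic:
    // first-fit-decreasing packing consolidates workloads onto as few
    // resource pools as possible, so idle pools can be powered down.
    #include <algorithm>
    #include <functional>
    #include <iostream>
    #include <vector>

    struct Pool { double capacity; double used = 0.0; };

    // Sort workloads by demand (largest first) and place each into the
    // first pool with spare capacity; return how many pools stay active.
    int allocate(std::vector<double> demands, std::vector<Pool>& pools) {
        std::sort(demands.begin(), demands.end(), std::greater<>());
        for (double d : demands)
            for (auto& p : pools)
                if (p.used + d <= p.capacity) { p.used += d; break; }
        int active = 0;
        for (const auto& p : pools)
            if (p.used > 0.0) ++active;
        return active;
    }

    int main() {
        std::vector<Pool> racks(4, Pool{100.0});  // hypothetical rack capacities
        int active = allocate({60, 35, 30, 25, 20, 10}, racks);
        std::cout << active << " of " << racks.size() << " racks active\n";  // 2 of 4
    }
    ```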

    Software Frameworks for Model Composition

    A software framework is an architecture or infrastructure intended to enable the integration and interoperation of software components. Specialized types of software frameworks are those specifically intended to support the composition of models or other components within a simulation system. Such frameworks are intended to simplify the process of assembling a complex model or simulation system from simpler component models, as well as to promote the reuse of the component models. Several different types of software frameworks for model composition have been designed and implemented; those types include common library, product line architecture, interoperability protocol, object model, formal, and integrative environment. The various framework types have different components, processes for composing models, and intended applications. In this survey the fundamental terms and concepts of software frameworks for model composition are presented, the different types of such frameworks are explained and compared, and important examples of each type are described.
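    A minimal C++ sketch of the "common library" framework type named above: every component model implements one shared interface, and a composite model is itself a component built from parts, which is what makes composition and reuse cheap. All class names are illustrative, not taken from the survey.

    ```cpp
    // Hypothetical "common library" composition framework: one shared Model
    // contract, with composites that are themselves models built from parts.
    #include <iostream>
    #include <memory>
    #include <vector>

    // The shared contract every component model implements.
    struct Model {
        virtual ~Model() = default;
        virtual void step(double dt) = 0;
    };

    // A composite model composes components by stepping them in order.
    struct Composite : Model {
        std::vector<std::unique_ptr<Model>> parts;
        void step(double dt) override {
            for (auto& m : parts) m->step(dt);
        }
    };

    struct Engine : Model {
        void step(double dt) override { std::cout << "engine +" << dt << "s\n"; }
    };
    struct Airframe : Model {
        void step(double dt) override { std::cout << "airframe +" << dt << "s\n"; }
    };

    int main() {
        Composite aircraft;
        aircraft.parts.push_back(std::make_unique<Engine>());
        aircraft.parts.push_back(std::make_unique<Airframe>());
        aircraft.step(0.1);  // one simulation tick drives all components
    }
    ```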

    Teaching programming with computational and informational thinking

    Computers are the dominant technology of the early 21st century: pretty well all aspects of economic, social and personal life are now unthinkable without them. In turn, computer hardware is controlled by software, that is, codes written in programming languages. Programming, the construction of software, is thus a fundamental activity, in which millions of people are engaged worldwide, and the teaching of programming is long established in international secondary and higher education. Yet, going on 70 years after the first computers were built, there is no well-established pedagogy for teaching programming. There has certainly been no shortage of approaches. However, these have often been driven by fashion, an enthusiastic amateurism or a wish to follow best industrial practice, which, while appropriate for mature professionals, is poorly suited to novice programmers. Much of the difficulty lies in the very close relationship between problem solving and programming. Once a problem is well characterised it is relatively straightforward to realise a solution in software. However, teaching problem solving is, if anything, less well understood than teaching programming. Problem solving seems to be a creative, holistic, dialectical, multi-dimensional, iterative process. While there are well established techniques for analysing problems, arbitrary problems cannot be solved by rote, by mechanically applying techniques in some prescribed linear order. Furthermore, historically, approaches to teaching programming have failed to account for this complexity in problem solving, focusing strongly on programming itself and, if at all, only partially and superficially exploring problem solving. Recently, an integrated approach to problem solving and programming called Computational Thinking (CT) (Wing, 2006) has gained considerable currency. CT has the enormous advantage over prior approaches of strongly emphasising problem solving and of making explicit core techniques. Nonetheless, there is still a tendency to view CT as prescriptive rather than creative, engendering scholastic arguments about the nature and status of CT techniques. Programming at heart is concerned with processing information, but many accounts of CT emphasise processing over information rather than seeing them as intimately related. In this paper, while acknowledging and building on the strengths of CT, I argue that understanding the form and structure of information should be primary in any pedagogy of programming.
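    A small C++ example of the closing point, that the form and structure of information should drive the program: a tree-shaped type yields a recursive function whose cases mirror the data definition. The example is an editorial illustration, not the author's.

    ```cpp
    // "Information first": the shape of the data dictates the shape of the
    // program. A nested (tree-shaped) structure calls for recursion; had the
    // data been a flat sequence, a single loop would follow instead.
    #include <iostream>
    #include <memory>
    #include <vector>

    struct Folder {
        int files = 0;
        std::vector<std::unique_ptr<Folder>> subfolders;
    };

    // The function mirrors the type: one case for this node, one recursive
    // case per child, exactly as the data is defined.
    int total_files(const Folder& f) {
        int n = f.files;
        for (const auto& sub : f.subfolders) n += total_files(*sub);
        return n;
    }

    int main() {
        Folder root;
        root.files = 2;
        root.subfolders.push_back(std::make_unique<Folder>());
        root.subfolders.back()->files = 3;
        std::cout << total_files(root) << '\n';  // 5
    }
    ```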

    Hybrid Models as Transdisciplinary Research Enablers

    Modelling and simulation (M&S) techniques are frequently used in Operations Research (OR) to aid decision-making. With the growing complexity of systems to be modelled, an increasing number of studies now apply multiple M&S techniques or hybrid simulation (HS) to represent the underlying system of interest. A parallel but related theme of research extends the HS approach to the development of hybrid models (HM). HM extends the M&S discipline by combining theories, methods and tools from across disciplines and applying multidisciplinary, interdisciplinary and transdisciplinary solutions to practice. In the broader OR literature, there are numerous examples of cross-disciplinary approaches in model development. However, within M&S, there is limited evidence of the application of conjoined methods for building HM. Where a stream of such research does exist, the integration of approaches is mostly at a technical level. In this paper, we argue that HM requires cross-disciplinary research engagement and a conceptual framework. The framework will enable the synthesis of discipline-specific methods and techniques, further cross-disciplinary research within the M&S community, and serve as a transcending framework for the transdisciplinary alignment of M&S research with domain knowledge, hypotheses and theories from diverse disciplines. The framework will support the development of new composable HM methods, tools and applications. Although our framework is built around the M&S literature, it is generally applicable to other disciplines, especially those with a computational element. The objective is to motivate a transdisciplinarity-enabling framework that supports the collaboration of research efforts from multiple disciplines, allowing them to grow into transdisciplinary research.
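    As a concrete, if simplified, illustration of a hybrid model, the C++ sketch below couples a system-dynamics stock (a continuous queue of work) with agent-based rules (discrete servers), the kind of method pairing that HM generalises across disciplines. All names and rates are hypothetical.

    ```cpp
    // Hypothetical hybrid-simulation sketch: a continuous SD stock updated
    // by an aggregate flow, served by discrete agents with individual rules.
    #include <iostream>
    #include <vector>

    struct Agent { bool busy = false; };

    int main() {
        double queue = 0.0;               // SD stock: waiting work
        const double arrival_rate = 3.2;  // SD flow in, per tick
        std::vector<Agent> servers(3);    // ABM part: discrete servers

        for (int t = 0; t < 5; ++t) {
            queue += arrival_rate;        // continuous inflow updates the stock
            for (auto& s : servers) {     // each agent applies its own rule
                s.busy = queue >= 1.0;
                if (s.busy) queue -= 1.0; // a busy agent serves one unit per tick
            }
            std::cout << "t=" << t << " queue=" << queue << '\n';
        }
    }
    ```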

    A model based approach for complex dynamic decision-making

    Current state-of-the-practice and state-of-the-art decision-making aids are inadequate for modern organisations that deal with significant uncertainty and business dynamism. This paper highlights the limitations of prevalent decision-making aids and proposes a model-based approach that advances the modelling abstraction and analysis machinery for complex dynamic decision-making. In particular, the paper proposes a meta-model to comprehensively represent an organisation, establishes the relevance of model-based simulation techniques as a means of analysis, introduces advancements over actor technology to address analysis needs, and proposes a method to utilise the proposed modelling abstraction, analysis technique, and analysis machinery in an effective and convenient manner. The proposed approach is illustrated using a near real-life case study from a business process outsourcing organisation.
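    A minimal C++ sketch of what an organisation meta-model might capture: goals, organisational units, and a what-if simulation over them. The paper's actual meta-model and actor-based machinery are far richer; everything here is an editorial illustration.

    ```cpp
    // Hypothetical organisation meta-model: goals and units that can be
    // simulated for what-if analysis rather than only diagrammed.
    #include <iostream>
    #include <string>
    #include <vector>

    struct Goal { std::string name; double target; };

    // An organisational unit with an uncertain output range per period.
    struct Unit { std::string name; double output_low, output_high; };

    // Simulate one what-if scenario: assume mid-range output per unit.
    double simulate(const std::vector<Unit>& org) {
        double total = 0.0;
        for (const auto& u : org) total += (u.output_low + u.output_high) / 2.0;
        return total;
    }

    int main() {
        Goal g{"quarterly-throughput", 900.0};
        std::vector<Unit> org{{"team-a", 250, 350}, {"team-b", 300, 400},
                              {"team-c", 200, 260}};
        double result = simulate(org);
        std::cout << g.name << (result >= g.target ? " met" : " missed")
                  << " (" << result << ")\n";
    }
    ```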

    Revisiting Actor Programming in C++

    The actor model of computation has gained significant popularity over the last decade. Its high level of abstraction makes it appealing for concurrent applications in parallel and distributed systems. However, designing a real-world actor framework that subsumes full scalability, strong reliability, and high resource efficiency requires many conceptual and algorithmic additions to the original model. In this paper, we report on designing and building CAF, the "C++ Actor Framework". CAF aims to provide a concurrent and distributed native environment for scaling up to very large, high-performance applications, and equally well down to small, constrained systems. We present the key specifications and design concepts (in particular a message-transparent architecture, type-safe message interfaces, and pattern matching facilities) that make native actors a viable approach for many robust, elastic, and highly distributed developments. We demonstrate the feasibility of CAF in three scenarios: first for elastic, upscaling environments, second for including heterogeneous hardware like GPGPUs, and third for distributed runtime systems. Extensive performance evaluations indicate ideal runtime behaviour for up to 64 cores at very low memory footprint, or in the presence of GPUs. In these tests, CAF consistently outperforms the competing actor environments Erlang, Charm++, SalsaLite, Scala, ActorFoundry, and even OpenMPI.
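    The sketch below follows the style of CAF's published hello-world example: a mirror actor with a type-safe, pattern-matched message handler, and a second actor that sends it a request. CAF's API has evolved across releases, so treat the exact calls as indicative of one version rather than definitive.

    ```cpp
    // A minimal sketch in the style of CAF's documented hello-world example.
    #include <chrono>
    #include <string>

    #include "caf/all.hpp"

    using namespace caf;

    // An actor that prints what it receives and replies with the reversed string.
    behavior mirror(event_based_actor* self) {
      return {
        // a type-safe handler, matched against incoming messages
        [=](const std::string& what) -> std::string {
          aout(self) << what << std::endl;  // aout: actor-safe output stream
          return std::string{what.rbegin(), what.rend()};
        },
      };
    }

    // An actor that sends a request and prints the reply when it arrives.
    void hello_world(event_based_actor* self, const actor& buddy) {
      self->request(buddy, std::chrono::seconds(10), "Hello World!")
          .then([=](const std::string& what) {
            aout(self) << what << std::endl;
          });
    }

    void caf_main(actor_system& sys) {
      auto mirror_actor = sys.spawn(mirror);
      sys.spawn(hello_world, mirror_actor);
      // the system waits for both actors to finish before shutting down
    }

    CAF_MAIN()  // generates main() and initialises the actor system
    ```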