
    Virtual learning process environment (VLPE): a BPM-based learning process management architecture

    E-learning systems have significantly impacted the way that learning takes place within universities, particularly in providing self-learning support and flexibility of course delivery. Virtual Learning Environments facilitate the management of educational courses for students, in particular by assisting course designers in managing the learning itself. Current literature has shown that support for pedagogical modelling and learning process management is inadequate. In particular, the quantitative information on the process of learning that is needed to perform real-time or reflective monitoring and statistical analysis of students' learning process performance is deficient. Pedagogical evaluation and reform decisions can therefore be difficult for a course designer. This thesis presents an alternative e-learning systems architecture, the Virtual Learning Process Environment (VLPE), which uses the Business Process Management (BPM) conceptual framework to address the critical quantitative learning process information gaps associated with conventional VLE frameworks. Within VLPE, course designers can model desired education pedagogies in the form of learning process workflows using an intuitive graphical flow diagram user interface. Automated agents associated with BPM frameworks are employed to capture quantitative learning information from the learning process workflow. Consequently, course designers are able to monitor, analyse and re-evaluate in real time the effectiveness of their chosen pedagogy using live interactive learning process dashboards. Once a course delivery is complete, the collated quantitative information can also be used to make major revisions to the pedagogy design for the next iteration of the course. An additional contribution of this work is that the new architecture enables individual students to monitor and analyse their own learning performance in comparison to their peers, in real time and anonymously, through a personal learning analytics dashboard. A case scenario of the quantitative statistical analysis of a cohort of learners (10 participants) is presented. The analytical results of their learning processes, performances and progressions on a short Mathematics course over a five-week period are also presented, in order to demonstrate that the proposed framework can significantly help to advance learning analytics and the visualisation of real-time learning data.
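
    To make the kind of quantitative learning process information described above concrete, here is a minimal, hypothetical sketch (not taken from the thesis; all class and activity names are assumptions) of a workflow structure that records per-student activity timings, the raw material for a live learning process dashboard:

```python
# Hypothetical sketch: a BPM-style learning-process workflow that captures
# quantitative events per student, of the kind a VLPE-like architecture
# could feed into an interactive dashboard.
import time
from collections import defaultdict

class LearningWorkflow:
    """A linear sequence of learning activities with per-student timing."""

    def __init__(self, activities):
        self.activities = activities      # ordered activity names
        self.events = defaultdict(list)   # student -> [(activity, start, end)]

    def record(self, student, activity, start, end):
        if activity not in self.activities:
            raise ValueError(f"unknown activity: {activity}")
        self.events[student].append((activity, start, end))

    def activity_durations(self, activity):
        """Seconds each student spent on one activity."""
        return {s: end - start
                for s, evs in self.events.items()
                for a, start, end in evs if a == activity}

wf = LearningWorkflow(["watch_lecture", "quiz_1", "exercise_set"])
t0 = time.time()
wf.record("student_a", "quiz_1", t0, t0 + 540)
wf.record("student_b", "quiz_1", t0, t0 + 780)
print(wf.activity_durations("quiz_1"))   # data for a live dashboard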

    A Novel Method for Adaptive Control of Manufacturing Equipment in Cloud Environments

    The ability to adaptively control manufacturing equipment, both in local and distributed environments, is becoming increasingly important for many manufacturing companies. One important reason for this is that manufacturing companies face increasing levels of change, variation and uncertainty, caused by both internal and external factors, which can negatively impact their performance. Frequently changing consumer requirements and market demands usually lead to variations in manufacturing quantities, product design and shorter product life-cycles. Variations in manufacturing capability and functionality, such as equipment breakdowns, missing/worn/broken tools and delays, also contribute to a high level of uncertainty. The result is unpredictable manufacturing system performance, with an increased number of unforeseen events occurring in these systems, events which are difficult for traditional planning and control systems to manage satisfactorily. For manufacturing scenarios such as these, the use of real-time manufacturing information and intelligence is necessary to enable manufacturing activities to be performed according to actual manufacturing conditions and requirements, rather than according to a pre-determined process plan. Therefore, there is a need for an event-driven control approach to facilitate adaptive decision-making and dynamic control capabilities. Another driver for adaptive control of manufacturing equipment is the trend of increasing globalization, which forces manufacturing industry to focus on more cost-effective manufacturing systems and on collaboration within global supply chains and manufacturing networks. Cloud Manufacturing is evolving as a new manufacturing paradigm to match this trend, enabling the mutually advantageous sharing of resources, knowledge and information between distributed companies and manufacturing units. One of the crucial objectives for Cloud Manufacturing is the coordinated planning, control and execution of discrete manufacturing operations in collaborative and networked environments. Such an event-driven control approach therefore also needs to support the control of distributed manufacturing equipment. The aim of this research study is to define and verify a novel and comprehensive method for adaptive control of manufacturing equipment in cloud environments. The presented research follows the Design Science Research methodology. From a review of the research literature, problems regarding adaptive manufacturing equipment control have been identified. A control approach, building on a structure of event-driven Manufacturing Feature Function Blocks supported by an Information Framework, has been formulated. The Function Block structure is constructed to generate real-time control instructions, triggered by events from the manufacturing environment. The Information Framework uses the concepts of ontologies and the Semantic Web to enable the description and matching of manufacturing resource capabilities and manufacturing task requests in distributed environments, e.g. within Cloud Manufacturing. The suggested control approach has been designed and instantiated, implemented as prototype systems for both local and distributed manufacturing scenarios, in both real and virtual applications. In these systems, event-driven Assembly Feature Function Blocks for adaptive control of robotic assembly tasks have been used to demonstrate the applicability of the control approach. The utility and performance of these prototype systems have been tested, verified and evaluated for different assembly scenarios. The proposed control approach has many promising characteristics for use within both local and distributed environments, such as cloud environments. The biggest advantage compared to traditional control is that the required control is created at run time according to actual manufacturing conditions. The biggest obstacle to applying the approach to its full extent is manufacturing equipment controlled by proprietary control systems with native control languages. To take full advantage of the IEC Function Block control approach, controllers that can interface with, interpret and execute these Function Blocks directly are necessary.
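
    As a rough illustration of the event-driven idea, the following sketch shows a function-block-like component, loosely in the spirit of IEC 61499, whose control output is generated at run time from incoming events and actual conditions. All class, event and instruction names here are illustrative assumptions, not the thesis's actual design:

```python
# Hedged sketch of an event-driven "feature function block": an event input
# triggers an algorithm that emits a control instruction derived from the
# *actual* manufacturing conditions rather than a fixed plan.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AssemblyFeatureFB:
    """Generates a control instruction at run time when an event arrives."""
    handlers: dict = field(default_factory=dict)  # event name -> algorithm

    def on(self, event: str, algorithm: Callable[[dict], str]):
        self.handlers[event] = algorithm

    def fire(self, event: str, conditions: dict) -> str:
        # Dispatch the event to its algorithm with current conditions.
        return self.handlers[event](conditions)

fb = AssemblyFeatureFB()
fb.on("TOOL_BROKEN", lambda c: f"swap_tool(slot={c['spare_slot']})")
fb.on("PART_ARRIVED", lambda c: f"pick(pose={c['pose']})")

print(fb.fire("PART_ARRIVED", {"pose": (0.42, 0.10, 0.05)}))
print(fb.fire("TOOL_BROKEN", {"spare_slot": 3}))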

    Deriving Goal-oriented Performance Models by Systematic Experimentation

    Creating and maintaining performance models for software systems that build on existing software can require substantial effort. This thesis therefore addresses the challenge of performance prediction in such scenarios. It proposes a novel goal-oriented method for experimental, measurement-based performance modelling. We validated the approach in a number of case studies, including standard industry benchmarks as well as a real development scenario at SAP.
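
    As a hedged, miniature sketch of what experimental, measurement-based performance modelling can look like (the workload and the linear cost model below are assumptions for illustration, not the thesis's actual method), one can systematically measure an operation at several input sizes and fit a predictive model from the observations:

```python
# Sketch: run a target operation at several input sizes, take median timings,
# and fit a simple linear cost model t = a*n + b for prediction.
import time
import statistics

def measure(workload, sizes, repeats=5):
    """Median execution time of workload(n) for each size n."""
    results = {}
    for n in sizes:
        samples = []
        for _ in range(repeats):
            t0 = time.perf_counter()
            workload(n)
            samples.append(time.perf_counter() - t0)
        results[n] = statistics.median(samples)
    return results

def fit_linear(results):
    """Least-squares fit of t = a*n + b over the measured points."""
    xs, ys = zip(*sorted(results.items()))
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

obs = measure(lambda n: sorted(range(n, 0, -1)), [10_000, 50_000, 100_000])
a, b = fit_linear(obs)
print(f"predicted t(200000) = {a * 200_000 + b:.4f}s")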

    Adaptive monitoring and control framework in Application Service Management environment

    The economics of data centres and cloud computing services have pushed hardware and software requirements to the limits, leaving only a very small performance overhead before systems reach saturation. For Application Service Management (ASM), this carries a growing risk of impacting the execution times of various processes. In order to deliver a stable service at times of great demand for computational power, enterprise data centres and cloud providers must implement fast and robust control mechanisms that are capable of adapting to changing operating conditions while satisfying service-level agreements. In ASM practice, there are normally two methods for dealing with increased load: increasing computational power or releasing load. The first approach typically involves allocating additional machines, which must be available and waiting idle to deal with high-demand situations. The second approach is implemented by terminating incoming actions that are less important under new activity demand patterns, by throttling, or by rescheduling jobs. Although most modern cloud platforms and operating systems do not allow adaptive or automatic termination of processes, tasks or actions, it is common practice for administrators to manually end or stop tasks or actions at any level of the system, such as at the level of a node, function or process, or to kill a long session executing on a database server. In this context, adaptive control of action termination remains a significantly underutilised subject in Application Service Management and deserves further consideration. For example, this approach may be eminently suitable for systems with harsh execution-time service-level agreements, such as real-time systems, or for systems running under hard pressure on power supplies, under variable priority, or under constraints set by the green computing paradigm. Along this line of work, the thesis investigates the potential of dimension relevance and metric signal decomposition as methods that would enable more efficient action termination. These methods are integrated into adaptive control emulators and actuators powered by neural networks, which are used to adjust the operation of the system towards better conditions in environments with goals established from both system performance and economics perspectives. The behaviour of the proposed control framework is evaluated using complex load and service agreement scenarios for systems compatible with the requirements of on-premises and elastic compute cloud deployments, serverless computing, and microservices architectures.
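
    A minimal sketch of SLA-aware action termination follows, assuming a simple overrun-projection rule and priority scheme; the thesis itself uses neural-network-powered emulators and actuators, which are not reproduced here:

```python
# Hypothetical sketch: a controller scans running actions and selects for
# termination those already past the execution-time SLA, or low-priority
# actions projected to overrun it.
import time
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    started: float
    priority: int          # lower value = more important
    expected_s: float      # nominal execution time in seconds

def select_for_termination(actions, sla_s, now=None):
    """Return actions to terminate under the assumed rule."""
    now = now or time.time()
    victims = []
    for a in actions:
        elapsed = now - a.started
        projected = max(elapsed, a.expected_s)
        if elapsed > sla_s or (projected > sla_s and a.priority > 1):
            victims.append(a)
    return victims

t = time.time()
running = [Action("report_job", t - 30, priority=3, expected_s=45),
           Action("payment", t - 2, priority=1, expected_s=5)]
print([a.name for a in select_for_termination(running, sla_s=20)])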

    Network analysis of large scale object oriented software systems

    The evolution of software engineering knowledge, technology, tools and practices has seen the progressive adoption of new design paradigms. Currently, the predominant design paradigm is object oriented design. Despite the advocated and demonstrated benefits of object oriented design, there are known limitations of static software analysis techniques for object oriented systems, and there are many current and legacy object oriented software systems that are difficult to maintain using existing reverse engineering techniques and tools. Consequently, there is renewed interest in dynamic analysis of object oriented systems, and the emergence of large and highly interconnected systems has fuelled research into the development of new scalable techniques and tools to aid program comprehension and software testing. In dynamic analysis, a key research problem is the efficient interpretation and analysis of large volumes of precise program execution data to facilitate efficient handling of software engineering tasks. Some of the techniques employed to improve the efficiency of analysis are inspired by empirical approaches developed in other fields of science and engineering that face comparable data analysis challenges. This research is focused on the application of empirical network analysis measures to dynamic analysis data of object oriented software. The premise of this research is that the methods that contribute significantly to the object collaboration network's structural integrity are also important for the delivery of the software system's function. This thesis makes two key contributions. First, a definition is proposed for the concept of the functional importance of methods of object oriented software. Second, the thesis proposes and validates a conceptual link between object collaboration networks and the properties of a network model with a power law connectivity distribution. Results from empirical software engineering experiments on JHotdraw and Google Chrome are presented. The results indicate that the five standard centrality-based network measures considered can be used to predict functionally important methods with a significant level of accuracy. The search for the functional importance of software elements is an essential starting point for program comprehension and software testing activities. The proposed definition and application of network analysis has the potential to improve the efficiency of post-release software engineering activities by facilitating the rapid identification of potentially functionally important methods in object oriented software. These results, with some refinement, could be used to perform change impact prediction and a host of other potentially beneficial applications to improve software engineering techniques.
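
    For illustration, a centrality-based ranking over a dynamic call graph might look like the sketch below. The toy graph and method names are fabricated; the thesis works with execution traces of JHotdraw and Google Chrome and considers five standard centrality measures, of which betweenness is one:

```python
# Sketch: rank methods of an object collaboration network by betweenness
# centrality; high-scoring methods broker many collaborations and are
# candidates for functional importance.
import networkx as nx

# Directed call graph: edge u -> v means method u invoked method v at run time.
calls = [("Editor.run", "Canvas.draw"), ("Canvas.draw", "Figure.render"),
         ("Tool.click", "Canvas.draw"), ("Figure.render", "Geometry.clip"),
         ("Editor.run", "Tool.click")]
g = nx.DiGraph(calls)

ranking = sorted(nx.betweenness_centrality(g).items(),
                 key=lambda kv: kv[1], reverse=True)
for method, score in ranking:
    print(f"{score:.3f}  {method}")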

    A framework for knowledge discovery within business intelligence for decision support

    Business Intelligence (BI) techniques provide the potential not only to efficiently manage but also to analyse and apply the collected information in an effective manner. Benefiting from research both within industry and academia, BI provides functionality for accessing, cleansing, transforming, analysing and reporting organisational datasets. This provides further opportunities for the data to be explored and assists organisations in the discovery of correlations, trends and patterns hidden within the data. This hidden information can be employed to provide an insight into opportunities to make an organisation more competitive by allowing managers to make more informed decisions and, as a result, corporate resources to be optimally utilised. This potential insight provides organisations with an unrivalled opportunity to remain abreast of market trends. Consequently, BI techniques provide significant opportunity for integration with Decision Support Systems (DSS). The gap identified within the current body of knowledge, which motivated this research, is that no suitable framework for BI exists that can be applied at a meta-level and is therefore tool, technology and domain independent. To address this gap, this study proposes a meta-level framework, 'KDDS-BI', which can be applied at an abstract level to structure a BI investigation irrespective of the end user. KDDS-BI not only facilitates the selection of suitable techniques for BI investigations, reducing the reliance upon ad-hoc investigative approaches based on 'trial and error', but further integrates Knowledge Management (KM) principles to ensure the retention and transfer of knowledge through a structured approach to providing DSS based upon the principles of BI. In order to evaluate and validate the framework, KDDS-BI has been investigated through three distinct case studies. First, KDDS-BI facilitates the integration of BI within direct marketing to provide innovative solutions for analysis based upon the most suitable BI technique. Second, KDDS-BI is investigated within sales promotion, to facilitate the selection of tools and techniques for more focused in-store marketing campaigns and to increase revenue through the discovery of hidden data. Finally, operations management is analysed within the highly dynamic and unstructured environment of the London Underground Ltd. network through a unique BI solution for organising and managing resources, thereby increasing the efficiency of business processes. The three case studies provide insight not only into how KDDS-BI brings structure to the integration of BI within business processes, but also into the performance of KDDS-BI within three independent environments for distinct purposes, thereby validating and corroborating the proposed framework and adding value to business processes.
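
    As a very rough, hypothetical sketch of what a meta-level, tool-independent pipeline in the spirit of KDDS-BI could look like (the stage names follow the generic accessing/cleansing/transforming/analysing steps mentioned above; the mapping to the actual framework is an assumption):

```python
# Sketch: named, pluggable stages so that any concrete BI tool or technique
# can be slotted in at each step, independent of tool, technology or domain.
from typing import Any, Callable

class Stage:
    def __init__(self, name: str, fn: Callable[[Any], Any]):
        self.name, self.fn = name, fn

class Pipeline:
    def __init__(self, stages):
        self.stages = stages

    def run(self, data):
        for stage in self.stages:
            data = stage.fn(data)          # each stage transforms the data
            print(f"[{stage.name}] -> {data}")
        return data

pipeline = Pipeline([
    Stage("access",    lambda d: d),
    Stage("cleanse",   lambda d: [x for x in d if x is not None]),
    Stage("transform", lambda d: sorted(d)),
    Stage("analyse",   lambda d: {"n": len(d), "max": d[-1]}),
])
pipeline.run([3, None, 1, 2])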

    ERP implementation methodologies and frameworks: a literature review

    Enterprise Resource Planning (ERP) implementation is a complex and dynamic process, one that involves a combination of technological and organizational interactions. Often an ERP implementation project is the single largest IT project that an organization has ever launched, and it requires a mutual fit of system and organization. Moreover, an ERP implementation supporting business processes across many different departments is not a generic, rigid and uniform undertaking; it depends on a variety of factors. As a result, the issues surrounding the ERP implementation process have been one of the major concerns in industry. ERP implementation therefore receives attention from both practitioners and scholars, and both the business and the academic literature are abundant, though not always conclusive or coherent. However, research on ERP systems has so far been mainly focused on diffusion, use and impact issues. Less attention has been given to the methods used during the configuration and implementation of ERP systems; even though these methods are commonly used in practice, they remain largely unexplored and undocumented in Information Systems research. The academic relevance of this research is thus its contribution to the existing body of scientific knowledge. A brief annotated literature review is conducted in order to evaluate the current state of the academic literature. The purpose is to present a systematic overview of relevant ERP implementation methodologies and frameworks, with the aim of achieving a better taxonomy of ERP implementation methodologies. This paper is useful to researchers who are interested in ERP implementation methodologies and frameworks, and the results will serve as an input for a classification of the existing ERP implementation methodologies and frameworks. The paper is also aimed at the professional ERP community involved in the process of ERP implementation, promoting a better understanding of ERP implementation methodologies and frameworks, their variety and history.

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed at every step. The discussion also covers language and tool support, and challenges arising from the transformation.
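
    As a language-neutral illustration of cross-system orchestration (in the case study the processes are expressed in BPMN/BPEL over WSDL-described services; the Python modelling below is only a sketch with stubbed system adapters, and all task names are invented):

```python
# Sketch: a BPMN-style sequence of service tasks, each delegating to a
# (stubbed) adapter standing in for a WSDL-described operation on some
# participating system, with a shared process context threaded through.
from typing import Callable

class ServiceTask:
    """One service task bound to a system adapter."""
    def __init__(self, name: str, adapter: Callable[[dict], dict]):
        self.name, self.adapter = name, adapter

def run_process(tasks, context):
    """Execute service tasks in sequence, threading the context."""
    for task in tasks:
        context = task.adapter(context)
        print(f"completed {task.name}: {context}")
    return context

tasks = [
    ServiceTask("crm.GetCustomer",
                lambda c: {**c, "customer": f"cust-for-{c['order_id']}"}),
    ServiceTask("erp.PlaceOrder",
                lambda c: {**c, "confirmation": "OK"}),
]
run_process(tasks, {"order_id": "42"})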

    An evaluation of the challenges of Multilingualism in Data Warehouse development

    In this paper we discuss Business Intelligence and define what is meant by support for multilingualism in a Business Intelligence reporting context. We identify support for multilingualism as a challenging issue with implications for data warehouse design and reporting performance. Data warehouses are a core component of most Business Intelligence systems, and the star schema is the approach most widely used to develop data warehouses and dimensional data marts. We discuss the ways in which multilingualism can be supported in the star schema and identify that current approaches have serious limitations, including data redundancy and data manipulation, performance and maintenance issues. We propose a new approach to enable the optimal application of multilingualism in Business Intelligence. The proposed approach was found to produce satisfactory results when used in a proof-of-concept environment. Future work will include testing the approach in an enterprise environment.
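
    One widely discussed design point in this space, shown as a sketch below, keeps language-neutral keys in the dimension table and moves display names into a translation table keyed by language. This is illustrative of the trade-offs the paper analyses (the extra join per query is one of the performance issues mentioned) and is not necessarily the paper's proposed approach; all table and column names are invented:

```python
# Sketch: multilingual star schema via a translation table, using an
# in-memory SQLite database for a self-contained demonstration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY);
CREATE TABLE dim_product_translation (
    product_key INTEGER REFERENCES dim_product(product_key),
    lang        TEXT,                -- e.g. 'en', 'de'
    name        TEXT,
    PRIMARY KEY (product_key, lang)
);
CREATE TABLE fact_sales (product_key INTEGER, amount REAL);

INSERT INTO dim_product VALUES (1);
INSERT INTO dim_product_translation
    VALUES (1, 'en', 'Bicycle'), (1, 'de', 'Fahrrad');
INSERT INTO fact_sales VALUES (1, 199.0), (1, 250.0);
""")

# Report in the requested language; the extra join is the performance cost
# associated with this style of approach.
lang = "de"
for row in db.execute("""
    SELECT t.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product_translation t
      ON t.product_key = f.product_key AND t.lang = ?
    GROUP BY t.name""", (lang,)):
    print(row)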