520 research outputs found

    A systematic methodology to analyse the performance and design configurations of business interoperability in cooperative industrial networks

    Get PDF
    This thesis proposes a methodology for modelling business interoperability in the context of cooperative industrial networks. The purpose is to develop a methodology that enables both the design of cooperative industrial network platforms able to deliver business interoperability and the analysis of its impact on the performance of these platforms. To achieve this objective, two modelling tools are employed: Axiomatic Design Theory for the design of interoperable platforms, and Agent-Based Simulation for the analysis of the impact of business interoperability. The sequence in which the two tools are applied depends on the scenario under analysis, i.e. whether or not the cooperative industrial network platform already exists. If the platform does not exist, the methodology suggests first applying Axiomatic Design Theory to design different configurations of interoperable cooperative industrial network platforms, and then using Agent-Based Simulation to analyse or predict the business interoperability and operational performance of the designed configurations. Otherwise, one should start by analysing the performance of the existing platform and, based on the results, decide whether it is necessary to redesign it. If a redesign is needed, simulation is again used to predict the performance of the redesigned platform. To explain how these two modelling tools can be applied in practice, a theoretical modelling framework, a theoretical Axiomatic Design model and a theoretical Agent-Based Simulation model are proposed. To demonstrate the applicability of the proposed methodology and to validate the proposed theoretical models, two case studies are presented: a Portuguese reverse logistics cooperative network (the Valorpneu network) and a Portuguese construction project (the Dam Baixo Sabor network). The findings from applying the proposed methodology to these two case studies suggest that Axiomatic Design Theory can indeed contribute effectively to the design of interoperable cooperative industrial network platforms, and that Agent-Based Simulation provides an effective set of tools for analysing the impact of business interoperability on the performance of those platforms. However, these conclusions cannot be generalised, as only two case studies have been carried out. In terms of relevance to theory, this is the first time that the network effect is addressed in the analysis of the impact of business interoperability on the performance of networked companies, and also the first time that a holistic approach is proposed for designing interoperable cooperative industrial network platforms. Regarding practical implications, the proposed methodology is intended to provide industrial managers with a management tool that can guide them easily, and in a practical and systematic way, in the design of configurations of interoperable cooperative industrial network platforms and/or in the analysis of the impact of business interoperability on the performance of their companies and the networks in which they operate.
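
    The abstract names Agent-Based Simulation as the analysis tool. A minimal sketch in Python of how such a simulation might look; the FirmAgent class, the interop parameter and the rework counter are illustrative assumptions, not the thesis's actual model:

        import random

        class FirmAgent:
            """One firm in the network; `interop` in [0, 1] is the probability
            that a message it exchanges is understood without rework."""
            def __init__(self, name, interop):
                self.name = name
                self.interop = interop
                self.rework = 0

            def exchange(self, other):
                # a failed exchange models an interoperability breakdown
                if random.random() > min(self.interop, other.interop):
                    self.rework += 1

        def simulate(agents, steps=1000):
            for _ in range(steps):
                a, b = random.sample(agents, 2)
                a.exchange(b)
            return sum(ag.rework for ag in agents)

        network = [FirmAgent(f"firm{i}", interop=0.9) for i in range(10)]
        print("total rework events:", simulate(network))

    Re-running the simulation with different interop values is one way to study how a designed platform configuration translates into operational performance.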

    DHS-IME Procurement Project Iowa Medicaid Enterprise Medicaid Information Technology Architecture State Self-Assessment Report, June 1, 2009

    Get PDF
    Medicaid Information Technology Architecture (MITA) is a business initiative of the Centers for Medicare and Medicaid Services (CMS) in cooperation with State programs. It is intended to stimulate an integrated business and technological transformation of the Medicaid enterprise in all states. MITA can improve Medicaid program administration by aligning business processes and supporting technology with national guidelines. The MITA Framework is a consolidation of principles, business and technical models, and guidelines that provides a template for states to use in development of their individual enterprise architectures. It is utilized in a manner that is consistent with CMS' expectations. In the future, MITA guidelines will support states' requests for appropriate Federal financial participation (FFP) for their Medicaid Management Information Systems (MMIS).

    Credit Risk Management in Nigerian Banks (2005 – 2015)

    Get PDF
    This study examines credit risk management in Nigerian banks. A content analysis approach was used to examine 15 banks over a ten-year period. Findings from the study revealed that credit risk architecture significantly affects loan recovery in the selected Nigerian banks. GDP, non-performing loans (NPL), interest rates and unemployment also significantly affect the credit risk structure of banks in Nigeria, whereas inflation had an insignificant effect. We recommend that banks enhance their credit risk architecture to include collateral review and management, facility performance monitoring, quality reviews and classification, and risk portfolio reporting. Furthermore, banks' credit-granting decisions should be based on the results of risk assessments of the client's solvency, the available collateral, and the transaction's compliance with policies. Keywords: credit risk, credit structure, content analysis, credit risk architecture.
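
    The study itself used content analysis, so the following is only an illustration of how the macro-variable significance claims could be tested quantitatively: a minimal sketch regressing a bank's NPL ratio on the named macro variables with statsmodels, using entirely synthetic data:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 150  # e.g. 15 banks x 10 years, pooled
        df = pd.DataFrame({
            "gdp_growth":   rng.normal(3.0, 1.5, n),
            "interest":     rng.normal(15.0, 3.0, n),
            "unemployment": rng.normal(20.0, 4.0, n),
            "inflation":    rng.normal(12.0, 2.0, n),
        })
        # synthetic response: NPL ratio driven by the macro variables
        df["npl_ratio"] = (5 - 0.4 * df.gdp_growth + 0.3 * df.interest
                           + 0.2 * df.unemployment + rng.normal(0, 1, n))

        X = sm.add_constant(df[["gdp_growth", "interest",
                                "unemployment", "inflation"]])
        result = sm.OLS(df["npl_ratio"], X).fit()
        print(result.summary())  # p-values show which effects are significant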

    Compliance flow: an intelligent workflow management system to support engineering processes

    Get PDF
    This work is about extending the scope of current workflow management systems to support engineering processes. On the one hand, engineering processes are relatively dynamic; on the other, their specification and performance are constrained by industry standards and guidelines for the sake of product acceptability, such as IEC 61508 for safety and ISO 9001 for quality. A number of technologies have been proposed to increase the adaptability of current workflow systems to deal with dynamic situations. A primary concern is how to support open-ended processes that cannot be completely specified in detail prior to their execution. A survey of adaptive workflow systems is given and the enabling technologies are discussed. Engineering processes are studied and their characteristics are identified and discussed. Current workflow systems have been successfully used in managing "administrative" processes for some time, but they lack the flexibility to support dynamic, unpredictable, collaborative, and highly interdependent engineering processes. [Continues.]
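
    A minimal sketch of the "open-ended process" idea the abstract raises: a workflow whose task list can grow while it is executing, unlike a fully pre-specified administrative workflow. The Task and Workflow classes and the rework rule are illustrative assumptions, not the thesis's system:

        from dataclasses import dataclass, field

        @dataclass
        class Task:
            name: str
            done: bool = False

        @dataclass
        class Workflow:
            """Tasks can be appended while the process is running."""
            tasks: list = field(default_factory=list)

            def add_task(self, name):
                self.tasks.append(Task(name))

            def complete_next(self):
                for task in self.tasks:
                    if not task.done:
                        task.done = True
                        # an engineering review may reveal new work mid-process
                        if task.name == "design_review":
                            self.add_task("rework_design")
                        return task.name
                return None

        wf = Workflow()
        for name in ("draft_design", "design_review", "safety_case"):
            wf.add_task(name)
        while (done := wf.complete_next()) is not None:
            print("completed:", done)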

    Performance Analysis and Resource Optimisation of Critical Systems Modelled by Petri Nets

    Get PDF
    A critical system must accomplish its mission despite the presence of security issues. Such systems are usually deployed in heterogeneous environments, where they may be subject to intrusion attempts, theft of confidential information, or other kinds of attacks. Systems, in general, have to be redesigned after a security incident occurs, which can lead to severe consequences, such as the enormous cost of reimplementing or reprogramming the whole system, as well as potential economic losses. Security must therefore be conceived as an integral part of system development and as a singular need of what the system must accomplish (i.e., a non-functional requirement of the system). Thus, when designing critical systems, it is essential to study the attacks that may occur and to plan how to react to them, in order to keep the system's functional and non-functional requirements fulfilled. Even when security issues are considered, it is also necessary to take into account the costs incurred to guarantee a given security level in critical systems. In fact, security costs can be a highly relevant factor, since they can span several dimensions, such as budget, performance, and reliability. Many of the critical systems that incorporate fault-tolerance techniques (FT systems) to deal with security issues are complex systems that use resources which may be compromised (i.e., may fail) through the activation of faults and/or errors caused by possible attacks. These systems can be modelled as discrete event systems with shared resources, also called resource allocation systems. This thesis focuses on FT systems with shared resources modelled with Petri nets (PNs). Such systems are usually so large that the exact computation of their performance becomes a highly complex computational task, due to the state-space explosion problem. As a result, any task requiring exhaustive state-space exploration is uncomputable (in a reasonable time) for large systems. The main contributions of this thesis are threefold. First, we provide several models, using the Unified Modelling Language (UML) and Petri nets, that help to bring security and fault-tolerance issues to the foreground during the system design phase, thus enabling, for example, the analysis of the trade-off between security and performance. Second, we provide several algorithms to compute performance (also under failure conditions) by computing upper performance bounds, thus avoiding the state-space explosion problem. Finally, we provide algorithms to compute how to compensate for the performance degradation that arises when a fault-tolerant system faces an unexpected situation.
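
    For readers unfamiliar with the formalism, a minimal Petri-net firing sketch in Python (a hypothetical two-resource net, not one of the thesis's models): places hold tokens, and a transition is enabled when every input place holds enough tokens.

        def enabled(marking, transition):
            return all(marking[p] >= w for p, w in transition["in"].items())

        def fire(marking, transition):
            m = dict(marking)
            for p, w in transition["in"].items():
                m[p] -= w
            for p, w in transition["out"].items():
                m[p] = m.get(p, 0) + w
            return m

        # two shared resources, one task that acquires and releases them
        marking = {"idle": 3, "resource": 2, "busy": 0}
        acquire = {"in": {"idle": 1, "resource": 1}, "out": {"busy": 1}}
        release = {"in": {"busy": 1}, "out": {"idle": 1, "resource": 1}}

        marking = fire(marking, acquire)   # a client grabs a resource
        print(marking)                     # {'idle': 2, 'resource': 1, 'busy': 1}
        assert enabled(marking, release)

    Enumerating every reachable marking of a net like this is exactly the state space that explodes for large systems, which is why the thesis computes upper performance bounds from the net structure instead.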

    Evaluation of property management systems for use within the Social Housing Sector in South Africa

    Get PDF
    Student Number: 8605435T - MSc research report - School of Construction Economics and Management - Faculty of Engineering and the Built Environment. The purpose of this qualitative research project is to establish whether or not there is currently a property management system, or systems, available that meets the unique requirements of the overall ICT strategy for the Social Housing sector in South Africa. This included a detailed evaluation of candidate systems wherever possible. A generic functional specification was outlined in the report and this, together with other factors including conformance with the proposed strategic architecture, technology imperatives and vendor characteristics, formed the basis of the evaluation and recommendation that followed. The state of information technology within a sample group of housing institutions was determined, together with an evaluation of available skills. The JD Edwards Financial Real Estate system, owned by PeopleSoft and supported by Deloitte, stood out as the leading commercial software package to satisfy the requirements of the overall ICT strategy for the sector. The IFCA Property Plus system ranked a close second.
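
    A minimal sketch of the kind of weighted-criteria evaluation the report describes; the criteria, weights and scores below are illustrative assumptions, not figures from the study:

        criteria = {  # weights sum to 1.0
            "functional_fit": 0.40,
            "strategic_architecture": 0.25,
            "technology": 0.20,
            "vendor": 0.15,
        }
        candidates = {  # scores on a 1-5 scale
            "JD Edwards Financial Real Estate":
                {"functional_fit": 5, "strategic_architecture": 4,
                 "technology": 4, "vendor": 5},
            "IFCA Property Plus":
                {"functional_fit": 4, "strategic_architecture": 4,
                 "technology": 4, "vendor": 4},
        }
        for name, scores in candidates.items():
            total = sum(criteria[c] * scores[c] for c in criteria)
            print(f"{name}: {total:.2f}")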

    Latent deep sequential learning of behavioural sequences

    Get PDF
    The growing use of asynchronous online education (MOOCs and e-courses) in recent years has resulted in increased economic and scientific productivity, a trend that intensified during the coronavirus pandemic. The widespread usage of online learning environments (OLEs) has increased enrolment, including previously excluded students, resulting in a far higher dropout rate than in conventional classrooms. Dropouts are a significant problem, especially considering the pandemic-driven proliferation of online courses, from individual MOOCs to whole academic programmes. Increased efficiency in dropout prevention techniques is vital for institutions, students, and faculty members and must be prioritised. In response to the resurgence of interest in the student dropout prediction (SDP) issue, there has been a significant rise in contributions to the literature on this topic. An in-depth review of the current state-of-the-art literature on SDP is provided, with a special emphasis on machine learning prediction approaches; however, this is not the only focus of the thesis. We propose a complete hierarchical categorisation of the current literature that corresponds to the process of design decisions in SDP, and we demonstrate how it may be applied. To enable comparative analysis, we develop a formal notation for universally defining the multiple dropout models examined by scholars in the area, including online degrees and their attributes. We examine several other important factors that have received less attention in the literature, such as evaluation metrics, acquired data, and privacy concerns. We emphasise deep sequential machine learning approaches, which are considered to be among the most successful solutions available in this field of study. Most importantly, we present a novel technique, namely GRU-AE, for tackling the SDP problem using hidden spatial information and time-related data from student trajectories. Our method is capable of dealing with data imbalance and time-series sparsity challenges. The proposed technique outperforms current methods in various situations, including the complex scenario of full-length courses (such as online degrees); this scenario was thought to be less common before the outbreak, but is now deemed important. Finally, we extend our findings to different contexts with a similar characterisation (temporal sequences of behavioural labels). Specifically, we show that our technique can be used in real-world circumstances where the unbalanced nature of the data can be mitigated by a class-balancing technique (i.e. ADASYN), e.g. survival prediction in critical-care telehealth systems, where the balancing technique alleviates the problem of inter-activity reliance and sparsity, resulting in an overall improvement in performance.
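
    The abstract does not spell out the GRU-AE architecture, so the following is a minimal sketch of a GRU-based sequence autoencoder in PyTorch, with all layer sizes illustrative:

        import torch
        import torch.nn as nn

        class GRUAutoencoder(nn.Module):
            def __init__(self, n_features, hidden_size):
                super().__init__()
                self.encoder = nn.GRU(n_features, hidden_size, batch_first=True)
                self.decoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
                self.output = nn.Linear(hidden_size, n_features)

            def forward(self, x):
                _, h = self.encoder(x)      # h: (num_layers, batch, hidden)
                latent = h[-1]              # one latent code per sequence
                # repeat the latent code across time steps to seed the decoder
                dec_in = latent.unsqueeze(1).repeat(1, x.size(1), 1)
                dec_out, _ = self.decoder(dec_in)
                return self.output(dec_out), latent

        model = GRUAutoencoder(n_features=8, hidden_size=32)
        x = torch.randn(4, 20, 8)           # 4 behavioural sequences, 20 steps
        recon, latent = model(x)
        loss = nn.functional.mse_loss(recon, x)  # reconstruction objective

    The latent code can then feed a downstream dropout classifier; for the class-imbalance side, the ADASYN oversampler named in the abstract is available as imblearn.over_sampling.ADASYN, via its fit_resample(X, y) method.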

    Rule-based knowledge aggregation for large-scale protein sequence analysis of influenza A viruses

    Get PDF
    Background: The explosive growth of biological data provides opportunities for new statistical and comparative analyses of large information sets, such as alignments comprising tens of thousands of sequences. In such studies, sequence annotations frequently play an essential role, and reliable results depend on metadata quality. However, the semantic heterogeneity and annotation inconsistencies in biological databases greatly increase the complexity of aggregating and cleaning metadata. Manual curation of datasets, traditionally favoured by life scientists, is impractical for studies involving thousands of records. In this study, we investigate quality issues that affect major public databases, and quantify the effectiveness of an automated metadata extraction approach that combines structural and semantic rules. We applied this approach to more than 90,000 influenza A records, to annotate sequences with protein name, virus subtype, isolate, host, geographic origin, and year of isolation. Results: Over 40,000 annotated influenza A protein sequences were collected by combining information from more than 90,000 documents from NCBI public databases. Metadata values were automatically extracted, aggregated and reconciled from several document fields by applying user-defined structural rules. For each property, values were recovered from ≥88.8% of records, with accuracy exceeding 96% in most cases. Because of semantic heterogeneity, up to six different structural rules had to be combined for a single property. Significant quality differences between databases were found: GenBank documents yield values more reliably than documents extracted from GenPept. Using a simple set of semantic rules and a reasoner, we reconstructed relationships between sequences from the same isolate, thus identifying 7,640 isolates. Validation of isolate metadata against a simple ontology highlighted more than 400 inconsistencies, leading to over 3,000 property value corrections. Conclusion: To overcome the quality issues inherent in public databases, automated knowledge aggregation with embedded intelligence is needed for large-scale analyses. Our results show that user-controlled, intuitive approaches based on combinations of simple rules can reliably automate various curation tasks, reducing the need for manual corrections to approximately 5% of the records. Emerging semantic technologies possess desirable features to support today's knowledge aggregation tasks, with the potential to bring immediate benefits to this field.
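
    A minimal sketch of what ordered structural rules look like in practice; the study's actual rules are not given in the abstract, so the regular expressions and example definition lines below are illustrative:

        import re

        DEFINITIONS = [
            "Influenza A virus (A/Puerto Rico/8/1934(H1N1)) hemagglutinin (HA) gene",
            "Influenza A virus (A/chicken/Hong Kong/220/97(H5N1)) neuraminidase gene",
        ]

        # Ordered structural rules for one property: first match wins,
        # mirroring the "up to six rules per property" finding.
        SUBTYPE_RULES = [
            re.compile(r"\((H\d+N\d+)\)"),                 # e.g. "(H1N1)"
            re.compile(r"subtype[:\s]+(H\d+N\d+)", re.I),  # fallback wording
        ]

        def extract(field, rules):
            for rule in rules:
                m = rule.search(field)
                if m:
                    return m.group(1)
            return None  # left for manual curation (~5% of records)

        for definition in DEFINITIONS:
            print(extract(definition, SUBTYPE_RULES))
        # -> H1N1, H5N1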

    Workflow technology for complex socio-technical systems

    Full text link
    Thesis digitized by the Direction des bibliothèques de l'Université de Montréal