
    Advancing Carbon Sequestration through Smart Proxy Modeling: Leveraging Domain Expertise and Machine Learning for Efficient Reservoir Simulation

    Geological carbon sequestration (GCS) offers a promising route to managing excess carbon and mitigating the impact of climate change. This doctoral research introduces a Smart Proxy Modeling framework that integrates artificial neural networks (ANNs) with domain expertise to re-engineer numerical reservoir simulation for efficient modeling of CO2 sequestration, and demonstrates the predictive conformance and replicative capabilities of smart proxy modeling. Creating well-performing proxy models normally requires extensive human intervention and trial-and-error; moreover, a large training database is essential for the ANN in a task as complex as deep saline aquifer CO2 sequestration, since it supplies the network's input and output data. A major limitation in CCS programs is the scarcity of real field data, owing to the small number of field applications and to confidentiality constraints. Given these drawbacks, and the high-dimensional nonlinearity, heterogeneity, and coupling of multiple physical processes inherent in numerical reservoir simulation, novel research is needed to handle these complexities, as it allows plausible CO2 sequestration scenarios to be generated and used as a training set. This study develops several static and dynamic, realistic and practical, field-based data augmentation techniques spanning spatial complexity, spatio-temporal complexity, and heterogeneity of reservoir characteristics. By incorporating domain-expertise-based feature generation, the framework honors a precise representation of the reservoir while overcoming the computational challenges associated with numerical reservoir tools. The developed ANN accurately replicated fluid flow behavior at a fraction of the computational cost of traditional numerical simulation models, and all the ML models achieved high accuracy and efficiency. 
The findings revealed that the quality of the path between the focal cell and the injection wells was the most influential factor in both the CO2 saturation and pressure estimation models. These insights contribute significantly to our understanding of CO2 plume monitoring, paving the way for investigating reservoir behavior at minimal computational cost. The study's commitment to replicating numerical reservoir simulation results underscores the model's potential to offer valuable insight into the behavior and performance of CO2 sequestration systems, as a complementary tool to numerical reservoir simulation when no measured field data are available. This research has broad implications for advancing carbon storage modeling technologies: by addressing the computational limitations of traditional numerical reservoir models and harnessing the synergy between machine learning and domain expertise, it provides a practical workflow for efficient decision-making in sequestration projects.

    Guidelines for the user interface design of electronic medical records in optometry

    With the prevalence of digitalisation in the medical industry, e-health systems have largely replaced traditional paper-based recording methods. At the centre of these systems are Electronic Health Records (EHRs) and Electronic Medical Records (EMRs), whose benefits significantly improve physician workflows. However, provision for the user interface design (UID) of these systems has been so poor that it has severely hindered physician usability, disrupted workflows and risked patient safety. UID and usability guidelines exist, but they are high level and general, suited mostly to EHRs (which are used in general practices and hospitals). They have therefore been ineffective for EMRs, which are typically used in niche medical environments. Within the niche field of Optometry, physicians experience disrupted workflows as a result of poor EMR UID and usability, and EMR guidelines to address these challenges are scarce. Hence the need for this research, which aims to create UID guidelines for EMRs in Optometry to improve the usability of the optometrists' EMR. The main research question was successfully answered to produce the set of UID Guidelines for EMRs in Optometry, which includes guidelines built upon the literature and made contextually relevant, as well as some new, more patient-focused additions. Design Science Research (DSR) was chosen as a suitable approach, and the phased Design Science Research Process Model (DSRPM) guided the research. A literature review was conducted, covering EHRs and EMRs, usability, UIDs, Optometry and related fields, and previous studies that provide guidelines, frameworks and models. The review also examined usability problems reported for these systems and methods to overcome them. 
Task Analysis (TA) was used to observe and understand the optometrists' workflows and their interactions with their EMRs during patient appointments, and to identify EMR problem areas. To address these problems, Focus Groups (FGs) were used to brainstorm solutions in the form of EMR UID features that optometrists required to improve usability. From the literature review, TAs and FGs, proposed guidelines were created. These guidelines informed the UID of an EMR prototype, which was demonstrated to optometrists during Usability Testing sessions for evaluation; surveys were also used. The results showed the guidelines to be successful: usable, effective, efficient and of good quality. A revised, final set of guidelines was then presented. Future researchers and designers may benefit from the theoretical and practical contributions of this research.

    An Integrated Cybersecurity Risk Management (I-CSRM) Framework for Critical Infrastructure Protection

    Risk management plays a vital role in tackling cyber threats within Cyber-Physical Systems (CPS) and in overall system resilience. It enables the identification of critical assets, vulnerabilities and threats, and the selection of suitable proactive control measures to tackle the risks. However, the increased complexity of CPS makes cyber-attacks more sophisticated and less predictable, which makes the risk management task more challenging. This research aims to establish an effective Cyber Security Risk Management (CSRM) practice based on asset criticality, prediction of risk types and evaluation of the effectiveness of existing controls. The proposed unified approach combines several techniques: fuzzy set theory for asset criticality, machine learning classifiers for risk prediction, and the Comprehensive Assessment Model (CAM) for evaluating the effectiveness of existing controls. The approach considers relevant CSRM concepts such as threat actor attack patterns, Tactics, Techniques and Procedures (TTPs), controls and assets, and maps these concepts onto the VERIS community dataset (VCDB) features for the purpose of risk prediction. A supporting tool (i-CSRMT) serves as an additional component of the proposed framework, enabling asset criticality, risk and control-effectiveness calculation for continuous risk assessment. Lastly, the thesis employs a case study to validate the applicability of the proposed i-CSRM framework and i-CSRMT. Stakeholder feedback was collected and evaluated against criteria such as ease of use, relevance and usability. The analysis results illustrate the validity and acceptability of both the framework and the tool for effective risk management practice in a real-world environment. The experimental results reveal that using fuzzy set theory to assess asset criticality supports stakeholders in effective risk management practice. 
Furthermore, the machine learning classifiers showed exemplary performance in predicting different risk types, including denial of service, cyber espionage and crimeware. Accurate prediction can help organisations model uncertainty, detect frequent cyber-attacks and the assets they affect, identify risk types, and employ the necessary corrective actions for mitigation. Lastly, the CAM approach was used to evaluate the effectiveness of the existing controls; the results show that controls such as network intrusion detection, authentication and anti-virus have high efficacy in controlling or reducing risk. Evaluating control effectiveness lets organisations know how well their controls reduce or prevent risk before an attack occurs, and allows new controls to be implemented earlier. The main advantage of the CAM approach is that its parameters are objective, consistent and applicable to CPS.
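
The fuzzy-set step can be illustrated with a small sketch: triangular membership functions turn a crisp criticality score into linguistic grades, and the grade with the highest membership wins. The labels and breakpoints below are invented for the example, not taken from the i-CSRM framework.

```python
# Fuzzy asset-criticality rating sketch: map a crisp 0-10 score to a
# linguistic grade via triangular membership functions.

def tri(x, a, b, c):
    """Triangular membership: rises from a to peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

FUZZY_SETS = {            # hypothetical breakpoints, for illustration only
    "low":    (-1, 0, 4),
    "medium": (2, 5, 8),
    "high":   (5, 9, 11),
}

def criticality_grade(score):
    memberships = {name: tri(score, *abc) for name, abc in FUZZY_SETS.items()}
    grade = max(memberships, key=memberships.get)
    return grade, memberships

grade, m = criticality_grade(7.0)
print(grade)   # "high" has the largest membership for a score of 7
```

In a fuller treatment the memberships would feed fuzzy rules and a defuzzification step rather than a simple argmax, but the shape of the calculation is the same.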

    Exploring sequences of challenges and regulation in collaborative learning with process mining methodology

    Abstract. The present study investigated the sequential interplay between cognitive and emotional/motivational challenges and regulation in collaborative learning groups of two profiles: high and low performing groups. The 77 participants were higher education students who worked collaboratively on a computer-based simulation in groups of three. Approximately 34 hours of video data were coded at a fine-grained level, and sequential analysis was applied by means of process mining. The results show that in both groups cognitive regulation (i.e., planning, monitoring and controlling) has a stronger sequential relationship with emotional/motivational regulation than with cognitive challenges. Unlike low performing groups (LPGs), high performing groups (HPGs) triggered a strong sequential relationship between cognitive regulation and emotional/motivational regulation to tackle cognitive challenges. Moreover, both groups initiated the regulatory process of monitoring; however, for LPGs monitoring manifested more sequences of emotional/motivational challenges, which deterred them from running the regulatory process of controlling, whereas HPGs were active enough not only to monitor but also to control their learning by applying different strategies to progress in the task. Statistically, no difference was observed between HPGs and LPGs in the duration and frequency of each coding category. In addition, the process models of both groups demonstrate that one regulatory process (i.e., cognitive) can have a stronger sequential relationship with other regulatory processes (i.e., emotional/motivational) than with cognitive and emotional/motivational challenges. The study establishes theoretical grounding to advance understanding of the sequential relationship between challenges and regulation in low and high performing collaborative groups. 
On the practical front, it also provides empirical insights for developing pedagogical methodologies and designing tailored support to help collaborative groups deal with challenges by initiating regulatory processes to proceed in the learning task.
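
The core quantity behind the sequential analysis above can be sketched simply: count first-order transitions between coded events and convert them to conditional probabilities, the basic ingredient of process-mining dependency graphs. The event codes and sequence below are invented for the example.

```python
# Sequential-analysis sketch: estimate P(next event | current event) from a
# coded event stream, as a first-order approximation of a process model.
from collections import Counter, defaultdict

# Hypothetical coded session (codes are illustrative placeholders).
session = ["cog_challenge", "monitoring", "controlling", "monitoring",
           "emo_challenge", "emo_regulation", "monitoring", "controlling"]

transitions = Counter(zip(session, session[1:]))   # adjacent code pairs
totals = defaultdict(int)
for (src, _), n in transitions.items():
    totals[src] += n

def p(src, dst):
    """P(next = dst | current = src), estimated from the coded sequence."""
    return transitions[(src, dst)] / totals[src] if totals[src] else 0.0

print(p("monitoring", "controlling"))  # how often monitoring leads to control
```

Process-mining tools build on exactly these directly-follows counts, adding thresholds and dependency measures to filter weak arcs before drawing the model.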

    Decision Support Elements and Enabling Techniques to Achieve a Cyber Defence Situational Awareness Capability

    This doctoral thesis performs a detailed analysis of the decision elements necessary to improve cyber defence situational awareness, with special emphasis on the perception and understanding of the analyst in a cybersecurity operations centre (SOC). Two different architectures based on network flow forensics of data streams (NF3) are proposed. The first architecture uses Ensemble Machine Learning techniques, while the second is a Machine Learning variant of greater algorithmic complexity (lambda-NF3) that offers a more robust defence framework against adversarial attacks. Both proposals seek to effectively automate the detection of malware and its subsequent incident management, showing satisfactory results in approximating what has been called a next-generation cognitive computing SOC (NGC2SOC). The supervision and monitoring of events for the protection of an organisation's computer networks must be accompanied by visualisation techniques. In this regard, the thesis addresses the generation of three-dimensional representations based on mission-oriented metrics and procedures that use an expert system based on fuzzy logic. Indeed, the state of the art shows serious deficiencies when it comes to implementing cyber defence solutions that reflect the relevance of the mission, resources and tasks of an organisation for a better-informed decision. 
The research work finally provides two key areas to improve decision-making in cyber defence: a solid and complete verification and validation framework to evaluate solution parameters, and the development of a synthetic dataset that univocally references the phases of a cyber-attack to the Cyber Kill Chain and MITRE ATT&CK standards.
Llopis Sánchez, S. (2023). Decision Support Elements and Enabling Techniques to Achieve a Cyber Defence Situational Awareness Capability [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/19424
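
The ensemble idea behind the first NF3 architecture can be sketched as a majority vote of independent detectors over network-flow features. The detectors, thresholds and feature names below are invented placeholders, not the thesis architecture.

```python
# Ensemble-detection sketch: several weak rule-based detectors vote on
# whether a network flow looks malicious; the majority verdict wins.

def rule_high_rate(flow):      # many packets over a very short flow
    return flow["packets"] / max(flow["duration"], 1e-6) > 1000

def rule_tiny_payload(flow):   # scan-like: lots of packets, few bytes each
    return flow["bytes"] / max(flow["packets"], 1) < 40

def rule_odd_port(flow):       # traffic to an unusual destination port
    return flow["dst_port"] not in {22, 53, 80, 123, 443}

DETECTORS = [rule_high_rate, rule_tiny_payload, rule_odd_port]

def ensemble_verdict(flow):
    """Majority vote across the individual detectors."""
    votes = sum(d(flow) for d in DETECTORS)
    return "malicious" if votes * 2 > len(DETECTORS) else "benign"

scan = {"packets": 600, "bytes": 18000, "duration": 0.4, "dst_port": 4444}
web  = {"packets": 12,  "bytes": 9000,  "duration": 3.0, "dst_port": 443}
print(ensemble_verdict(scan), ensemble_verdict(web))
```

A production ensemble would replace the hand-written rules with trained classifiers, but the voting structure, and its robustness benefit against any single weak detector, is the same.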

    A Structured Methodology For Tailoring And Deploying Lean Manufacturing Systems

    The seminal works of Peter Drucker and James Womack in the 1990s outlined the lean manufacturing practices that made Toyota Motor Corporation (TMC) a world leader in manufacturing. These philosophies have since become the springboard for a significant paradigm shift in how manufacturing systems are approached and leveraged to optimize operational practices and gain competitive advantage. While there is no shortage of literature touting the benefits of Lean Manufacturing Systems (LMS), deploying them effectively enough to obtain and sustain the performance TMC has achieved has proven difficult. This body of work provides a novel methodology that breaks the deployment process into elements by assessing current business practices and interests and relating them to variables that support the philosophies of LMS. It also identifies the key areas of lean from an operational perspective and connects the tools to business requirements by guiding the selection process toward the tools and processes that best fit the business needs. Finally, the methodology examines the deployment variables to provide a structured approach to tailoring the deployment planning strategy, based on a better understanding of the interactions and requirements of LMS. The research also validates the proposed structured methodology, helping practitioners leverage the objective, quantitative information from the business assessment to coordinate the deployment planning effort. The framework additionally addresses the stage before deployment planning, offering a pre-deployment assessment that supplies critical input for tailoring the LMS deployment.
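
The tool-selection step described above, connecting lean tools to weighted business requirements, can be sketched as a simple weighted-scoring ranking. The tools, criteria, weights and scores below are illustrative assumptions, not values from the methodology itself.

```python
# Weighted-scoring sketch: rank candidate lean tools by how well they fit
# the business's weighted requirements.

WEIGHTS = {"lead_time": 0.4, "quality": 0.35, "flexibility": 0.25}

# How strongly each tool is judged to support each requirement (0-5 scale).
TOOL_SCORES = {
    "5S":        {"lead_time": 2, "quality": 3, "flexibility": 2},
    "Kanban":    {"lead_time": 5, "quality": 2, "flexibility": 4},
    "Poka-yoke": {"lead_time": 1, "quality": 5, "flexibility": 1},
}

def weighted_score(scores):
    return sum(WEIGHTS[c] * s for c, s in scores.items())

ranking = sorted(TOOL_SCORES,
                 key=lambda t: weighted_score(TOOL_SCORES[t]),
                 reverse=True)
print(ranking)  # tools ordered by fit to the weighted business needs
```

Changing the weights to reflect a different business priority reorders the ranking, which is the sense in which the methodology tailors tool selection to the assessed business needs.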

    A non-conformance classification and rapid control method for improved product validation

    Product quality is a topic of significant industrial importance and has been the subject of ongoing research for many years. However, the reduction of non-conformances in the pre-production stage of product development has received only limited attention. Although products undergo chronological and rigid assessments, some non-conformances are still detected late in the development stages, particularly in pre-production. These non-conformances become especially problematic when a rectification cannot be found rapidly and the problems are carried over into production. The research, which is based on a consumer electronics product, addresses product non-conformance in pre-production. The work reported in this thesis focuses on the identification and control of non-conformances to facilitate improved product validation, and aids the pre-production team in product assessment and decision making. [Continues.