3,396 research outputs found
Integrating expert-based objectivist and nonexpert-based subjectivist paradigms in landscape assessment
This thesis explores the integration of objective and subjective measures of landscape aesthetics, particularly focusing on crowdsourced geo-information. It addresses the increasing importance of considering public perceptions in national landscape governance, in line with the European Landscape Convention's emphasis on public involvement. Despite this, national landscape assessments often remain expert-centric and top-down, facing challenges in resource constraints and limited public engagement. The thesis leverages Web 2.0 technologies and crowdsourced geographic information, examining correlations between expert-based metrics of landscape quality and public perceptions. The Scenic-Or-Not initiative for Great Britain, GIS-based Wildness spatial layers, and LANDMAP dataset for Wales serve as key datasets for analysis.
The research investigates the relationships between objective measures of landscape wildness quality and subjective measures of aesthetics. Multiscale geographically weighted regression (MGWR) reveals significant correlations, with different wildness components exhibiting varying degrees of association. The study suggests the feasibility of incorporating wildness and scenicness measures into formal landscape aesthetic assessments. Comparing expert and public perceptions, the research identifies preferences for water-related landforms and variations in upland and lowland typologies. The study emphasizes the agreement between experts and non-experts on extreme scenic perceptions but notes discrepancies in mid-spectrum landscapes. To overcome limitations in systematic landscape evaluations, an integrative approach is proposed. Utilizing XGBoost models, the research predicts spatial patterns of landscape aesthetics across Great Britain, based on the Scenic-Or-Not initiative, Wildness spatial layers, and LANDMAP data. The models achieve accuracy comparable to traditional statistical models, offering insights for Landscape Character Assessment practices and policy decisions. While acknowledging data limitations and biases in crowdsourcing, the thesis discusses the necessity of an aggregation strategy to manage computational challenges. Methodological considerations include addressing the modifiable areal unit problem (MAUP) associated with aggregating point-based observations. The thesis comprises three studies published or submitted for publication, each contributing to the understanding of the relationship between objective and subjective measures of landscape aesthetics. The concluding chapter discusses the limitations of data and methods, providing a comprehensive overview of the research.
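The gradient-boosted prediction step described above can be illustrated with a small regression sketch. The abstract names XGBoost; scikit-learn's GradientBoostingRegressor is used here as a readily available stand-in, and the wildness features and scenicness scores are entirely invented for illustration:

```python
# Hypothetical sketch: predicting crowdsourced scenicness ratings from
# wildness-component predictors with a gradient-boosted tree model.
# (The thesis uses XGBoost; GradientBoostingRegressor is a stand-in.
# All feature meanings and data below are invented.)
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500

# Invented wildness components (e.g. remoteness, naturalness, ruggedness)
X = rng.uniform(0.0, 1.0, size=(n, 3))
# Invented scenicness score loosely driven by the components plus noise
y = 1.0 + 6.0 * X[:, 0] + 2.0 * X[:, 1] + 1.0 * X[:, 2] + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(X_tr, y_tr)

r2 = model.score(X_te, y_te)  # coefficient of determination on held-out data
print(f"held-out R^2: {r2:.2f}")
```

A feature-importance inspection (`model.feature_importances_`) is the kind of output that would let such a model indicate which wildness components matter most for perceived scenicness.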
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Integrating IoT Analytics into Marketing Decision Making: A Smart Data-Driven Approach
With the advent of the Internet of Things (IoT), businesses have gained access to vast amounts of data generated by interconnected devices. Leveraging IoT analytics and marketing intelligence, organizations can extract valuable insights from this data to enhance decision-making processes. This paper presents a comprehensive methodology for data-driven decision-making in the context of IoT analytics and marketing intelligence. A real-time example is used to illustrate the application of this methodology, followed by an inference and discussion of the results. The rise of IoT has enabled real-time data collection from a wide array of interconnected devices, offering unprecedented opportunities for businesses to gain actionable insights. This paper focuses on the intersection of IoT analytics and marketing intelligence, exploring how data-driven decision-making can empower organizations to optimize their marketing strategies, customer experiences, and overall business performance
A BIM - GIS Integrated Information Model Using Semantic Web and RDF Graph Databases
In recent years, 3D virtual indoor and outdoor urban modelling has become an essential geospatial information framework for civil and engineering applications such as emergency response, evacuation planning, and facility management. Multi-sourced, multi-scale 3D urban models are in high demand among architects, engineers, and construction professionals to achieve these tasks and provide relevant information to decision support systems. Spatial modelling technologies such as Building Information Modelling (BIM) and Geographical Information Systems (GIS) are frequently used to meet such demands. However, sharing data and information between these two domains is still challenging. At the same time, existing semantic and syntactic strategies for communication between BIM and GIS do not fully support rich semantic and geometric information exchange from BIM into GIS or vice versa. This research study proposes a novel approach for integrating BIM and GIS using semantic web technologies and Resource Description Framework (RDF) graph databases. The suggested solution's originality and novelty come from combining the advantages of integrating BIM and GIS models into a semantically unified data model using a semantic framework and ontology engineering approaches. The new model is named the Integrated Geospatial Information Model (IGIM). It is constructed in three stages. The first generates BIMRDF and GISRDF graphs from BIM and GIS datasets. The second integrates the BIM and GIS semantic graphs to create IGIMRDF. Lastly, the information in the unified IGIMRDF graph is filtered using a graph query language and graph data analytics tools. The linkage between BIMRDF and GISRDF is completed through SPARQL endpoints defined by queries using elements and entity classes with similar or complementary information from properties, relationships, and geometries from an ontology-matching process during model construction.
The resulting model (or sub-model) can be managed in a graph database system and used in the backend as a data-tier serving web services feeding a front-tier domain-oriented application. A case study was designed, developed, and tested using the semantic integrated information model for validating the newly proposed solution, architecture, and performance
Improving patient safety by learning from near misses – insights from safety-critical industries
Background
Patients are at risk of being harmed by the very processes meant to help them. To improve patient safety, healthcare organisations attempt to identify the factors that contribute to incidents and take action to optimise conditions to minimise repeats. However, improvements in patient safety have not matched those observed in other safety-critical industries.
One difference between healthcare and other safety-critical industries may be how they learn from near misses when seeking to make safety improvements. Near misses are incidents that almost happened, but for an interruption in the sequence of events. Management of near misses includes their identification, reporting and investigation, and the learning that results. Safety theory suggests that acting on near misses will lead to actions to help prevent incidents. However, evidence also suggests that healthcare has yet to embrace the learning potential that patient safety near misses offer.
The aims of this research, in support of this thesis, were to explore how best healthcare can learn from patient safety near misses to improve patient safety, and to identify what guidance non-healthcare safety-critical industries, which have implemented effective near-miss management systems, can offer healthcare. As this research progressed the aims were updated to include consideration of whether healthcare should seek to learn from patient safety near misses.
Methods
This research took a mixed-methods approach augmented by scoping reviews of the healthcare (study 1) and non-healthcare safety-critical industry (study 3) literature. A qualitative case study (study 2) was undertaken to explore the management of patient safety near misses in the English National Health Service. Seventeen interviews were undertaken with patient safety leads across acute hospitals, ambulance trusts, mental health trusts, primary care, and national bodies. A questionnaire was also used to help access the views of frontline staff.
A grounded theory study (study 4) was used to develop a set of principles, based on learning from non-healthcare safety-critical industries, around how near misses can best be managed. Thirty-five interviews were undertaken across aviation, maritime, and rail, with nuclear later added through theoretical sampling.
Results
The scoping reviews contributed 125 healthcare and 108 non-healthcare safety-critical industry academic articles, published internationally between 2000 and 2022, to the evidence gained from the qualitative case study and grounded theory. Safety cultures and maturity with safety management processes were found to vary in and across the different industries, and there was a reluctance for healthcare to learn about safety and near misses from other industries.
Healthcare has yet to establish effective processes to manage patient safety near misses. There is an absence of evidence that learning has led to improvements in patient safety. The definition of a patient safety near miss varies, and organisations focus their efforts on reporting and investigating incidents, with limited attention to patient safety near misses. In non-healthcare safety-critical industries, near-miss management is more established, but process maturity varies in and across industries. Near misses are often defined specifically for an industry, but there is limited evidence that learning from them has improved safety. Information about near misses is commonly aggregated and may contribute to company and industry safety management systems.
Exploration of the definition of a patient safety near miss led to the identification of the features of a near miss. The features have not been previously defined in the manner presented in this thesis. A patient safety near miss is context-specific and complex, involves interruptions, highlights system vulnerabilities, and is delineated from an incident by whether events reach a patient.
Across healthcare and non-healthcare safety-critical industries, the impact of learning from near misses is often assumed or extrapolated based on the common cause hypothesis. The hypothesis is regularly cited in safety literature and is used as the basis for justifying a focus on patient safety near misses. However, its validity has been questioned, and it has not been validated for different patient safety near-miss and incident types.
Conclusions
The research findings challenge long-held beliefs that learning from patient safety near misses will lead to improvements in patient safety. These beliefs are based on traditional safety theory that is unlikely to still hold in the complexity of modern-day systems, where incidents are the result of multiple factors and can emerge without apparent warning. Further research is required to understand the relationship between learning from patient safety near misses and patient safety, and whether the common cause hypothesis is valid for different types of healthcare safety event.
While there are questions about the value of learning directly from patient safety near misses, the contribution of near misses to safety management systems in non-healthcare safety-critical industries looks to be beneficial for safety improvement. Safety management systems have yet to be implemented in the National Health Service, and future research should seek to understand how best this may be achieved and what value they offer. In the meantime, patient safety near misses may help healthcare’s understanding of systems and their optimisation to create barriers to incidents and build resilience. This research offers an evidence-based definition of a patient safety near miss and describes principles to support identification, reporting, prioritisation, investigation, aggregation, learning, and action to help improve patient safety.
The Perception of K-12 Instrumental Directors in Low-Income Areas on Virtual Learning with Skill Development and Retention
Due to the extreme measures taken to protect students from COVID-19 during the pandemic, schools closed their doors, and educators struggled to continue teaching through virtual learning platforms. Performance-based classrooms were encouraged to discover new methods and strategies to motivate students to thrive even though face-to-face rehearsals were restricted. This study examined the experiences secondary music education instrumentalists faced while attempting to utilize synchronous and asynchronous instruction in a 100 percent virtual performance-based environment. This study aimed to understand the negative and positive effects on secondary instrumentalists’ performance abilities, fundamental development, and participation/retention since the introduction of virtual learning in low-income areas. The study also examined the possible benefits of enhancing pedagogical skills through technological advances to support instrumental instruction and performance at the secondary level. This study followed a qualitative hermeneutic phenomenology design. Music educators in low-income DeKalb County communities were interviewed for this study. Participants were requested to share their perspectives and experiences of performance-based virtual learning and its results. The study raised the need for future discussions to create and implement a state and national virtual music education guideline that would assist music educators in turning a devastating situation into a blessing for all art programs and their stakeholders.
Towards an integrated vulnerability-based approach for evaluating, managing and mitigating earthquake risk in urban areas
Tese de doutoramento em Civil Engineering (Doctoral thesis in Civil Engineering)
Strong seismic events like the ones of Türkiye-Syria (2023) or Mexico (2017) should guide our attention
to the design and implementation of proactive actions aimed at identifying vulnerable assets. This work proposes a practical, easy-to-implement workflow for performing large-scale seismic vulnerability assessments in historic environments by means of digital tools. A parameter-based vulnerability model is adopted given its affinity with the Mexican Catalogue of Historical Monuments. A first large-scale implementation of this method in the historic city of Atlixco (Puebla, Mexico) demonstrated its suitability and some limitations, which led to the development of a strategy for quantifying and incorporating the epistemic uncertainties found during data acquisition. Given the volume of data these analyses involve, it was necessary to develop robust data acquisition, storage, and management strategies. The use of Geographical Information System environments, together with customised Python-based programs and cloud-based file distribution, made it possible to assemble urban-scale databases for field data acquisition, vulnerability and damage calculations, and the representation of outcomes. This development was the basis for a second large-scale assessment in selected municipalities of the state of Morelos (Mexico). The characterisation of the seismic vulnerability of more than 160 buildings allowed the representativeness of the parametric vulnerability approach to be assessed by comparing theoretical damage estimates against the damage observed after the 2017 Puebla-Morelos earthquakes. This comparison is the basis for a Machine-Learning-assisted process of calibration and adjustment, offering a feasible strategy for calibrating such vulnerability models using Machine-Learning algorithms and empirical evidence of damage in post-seismic scenarios.
This work was partly financed by FCT/MCTES through national funds (PIDDAC) under the R&D Unit Institute for Sustainability and Innovation in Structural Engineering (ISISE), reference UIDB/04029/2020. This research had financial support provided by the Portuguese Foundation for Science and Technology (FCT) through the Analysis and Mitigation of Risks in Infrastructures (InfraRisk) program under the PhD grant PD/BD/150385/2019
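The calibration idea, adjusting a parameter-based vulnerability index so its predictions better match post-earthquake observations, can be sketched minimally. The survey scores, weights, and the simple least-squares "learning" step below are all invented stand-ins for the thesis's richer machine-learning models and real survey data:

```python
# Hedged sketch: recalibrate the weights of a parameter-based vulnerability
# index against observed damage. All data and weights are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_buildings, n_params = 160, 5

# Invented survey scores for five vulnerability parameters (0..3 each)
scores = rng.integers(0, 4, size=(n_buildings, n_params)).astype(float)

# Expert-assigned initial weights vs. the (unknown) weights implied by
# the post-earthquake damage observations
w_expert = np.ones(n_params)
w_true = np.array([1.5, 0.5, 2.0, 1.0, 0.8])
observed_damage = scores @ w_true + rng.normal(0, 0.3, n_buildings)

# Least-squares recalibration: the simplest possible "learning" step
w_fit, *_ = np.linalg.lstsq(scores, observed_damage, rcond=None)

rmse_before = np.sqrt(np.mean((scores @ w_expert - observed_damage) ** 2))
rmse_after = np.sqrt(np.mean((scores @ w_fit - observed_damage) ** 2))
print(f"RMSE expert weights: {rmse_before:.2f} -> calibrated: {rmse_after:.2f}")
```

The same comparison of theoretical versus observed damage drives the fit: whatever model family is used, the empirical post-seismic evidence supplies the target that the recalibrated weights must reproduce.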
Chatbots for Modelling, Modelling of Chatbots
Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defense: 28-03-202
Automatic Generation of Personalized Recommendations in eCoaching
This thesis concerns eCoaching for personalized, real-time lifestyle support using information and communication technology. The challenge is to design, develop, and technically evaluate a prototype of an intelligent eCoach that automatically generates personalized, evidence-based recommendations for a better lifestyle. The developed solution focuses on improving physical activity. The prototype uses wearable medical activity sensors. The collected data are represented semantically, and artificial-intelligence algorithms automatically generate meaningful, personalized, context-based recommendations for less sedentary time. The thesis uses the well-established design science research methodology to develop theoretical foundations and practical implementations. Overall, this research focuses on technological verification rather than clinical evaluation.
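The recommendation step, turning sensor-derived activity summaries into a personalized, context-based nudge for less sedentary time, can be sketched as a simple rule-based function. The thresholds, fields, and messages below are invented for illustration; the actual eCoach uses semantic data representation and AI-driven generation:

```python
# Hypothetical sketch of context-based recommendation generation from
# wearable activity data. All thresholds and messages are invented.
from dataclasses import dataclass

@dataclass
class ActivitySummary:
    sedentary_minutes: int   # consecutive minutes without movement
    steps_today: int
    daily_step_goal: int

def recommend(summary: ActivitySummary) -> str:
    """Return a context-based recommendation for less sedentary time."""
    if summary.sedentary_minutes >= 60:
        return "You've been inactive for over an hour - take a short walk."
    if summary.steps_today < summary.daily_step_goal // 2:
        remaining = summary.daily_step_goal - summary.steps_today
        return f"{remaining} steps to go today - a brisk 10-minute walk helps."
    return "Great pace today - keep it up!"

print(recommend(ActivitySummary(sedentary_minutes=75,
                                steps_today=3000,
                                daily_step_goal=10000)))
```

In the actual system, rules like these would be derived from evidence-based guidelines and applied to the semantically represented sensor stream rather than hard-coded.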