5,982 research outputs found

    A forensics and compliance auditing framework for critical infrastructure protection

    Get PDF
    Contemporary societies are increasingly dependent on products and services provided by Critical Infrastructure (CI) such as power plants, energy distribution networks, transportation systems and manufacturing facilities. Due to their nature, size and complexity, such CIs are often supported by Industrial Automation and Control Systems (IACS), which are in charge of managing assets and controlling everyday operations. As these IACS become larger and more complex, encompassing a growing number of processes and interconnected monitoring and actuating devices, the attack surface of the underlying CIs increases. This situation calls for new strategies to improve Critical Infrastructure Protection (CIP) frameworks, based on evolved approaches for data analytics, able to gather insights from the CI. In this paper, we propose an Intrusion and Anomaly Detection System (IADS) framework that adopts forensics and compliance auditing capabilities at its core to improve CIP. Adopted forensics techniques help to address, for instance, post-incident analysis and investigation, while the support of continuous auditing processes simplifies compliance management and service quality assessment. More specifically, after discussing the rationale for such a framework, this paper presents a formal description of the proposed components and functions and discusses how the framework can be implemented using a cloud-native approach, to address both functional and non-functional requirements. An experimental analysis of the framework scalability is also provided.
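To make the forensics and compliance-auditing idea more concrete, the following is a minimal, hypothetical sketch of one ingredient such a framework could build on: recording IACS events in an append-only, hash-chained audit log so that post-incident analysis can detect tampering. The hash chaining, class name, and event fields are illustrative assumptions, not details taken from the paper.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained event log (illustrative only)."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, source, event):
        """Record an event from an IACS component and chain it to the previous entry."""
        record = {
            "ts": time.time(),
            "source": source,
            "event": event,
            "prev": self._last_hash,
        }
        digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append((record, digest))
        self._last_hash = digest
        return digest

    def verify(self):
        """Recompute the chain; any modification of a past entry breaks a link."""
        prev = "0" * 64
        for record, digest in self.entries:
            if record["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest() != digest:
                return False
            prev = digest
        return True

log = AuditLog()
log.append("plc-17", {"type": "setpoint_change", "value": 42})
log.append("historian", {"type": "login", "user": "operator1"})
print(log.verify())  # True unless entries were altered after the fact
```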

    Serverless Strategies and Tools in the Cloud Computing Continuum

    Full text link
    Thesis by compendium of publications. In recent years, the popularity of Cloud computing has allowed users to access unprecedented compute, network, and storage resources under a pay-per-use model. This popularity has led to new services that solve specific large-scale computing challenges and simplify the development and deployment of applications. Among the most prominent services in recent years are FaaS (Function as a Service) platforms, whose primary appeal is the ease of deploying small pieces of code in certain programming languages to perform specific tasks on an event-driven basis. These functions are executed on the Cloud provider's servers without users worrying about their maintenance or elasticity management, always keeping a fine-grained pay-per-use model. FaaS platforms belong to the computing paradigm known as Serverless, which aims to abstract the management of servers from the users, allowing them to focus their efforts solely on the development of applications. The problem with FaaS is that it focuses mainly on microservices and tends to have limitations regarding execution time and computing capabilities (e.g., lack of support for acceleration hardware such as GPUs). However, it has been demonstrated that the self-provisioning capability and high degree of parallelism of these services can be well suited to a broader range of applications. In addition, their inherent event-driven triggering makes functions perfectly suitable to be defined as steps in file-processing workflows (e.g., scientific computing workflows). Furthermore, the rise of smart and embedded devices (IoT), innovations in communication networks and the need to reduce latency in challenging use cases have led to the concept of Edge computing. Edge computing consists of conducting the processing on devices close to the data sources to improve response times. The coupling of this paradigm with Cloud computing, involving architectures with devices at different levels depending on their proximity to the source and their compute capability, has been coined the Cloud Computing Continuum (or Computing Continuum). Therefore, this PhD thesis aims to apply different Serverless strategies to enable the deployment of generalist applications, packaged in software containers, across the different tiers of the Cloud Computing Continuum. To this end, multiple tools have been developed in order to: i) adapt FaaS services from public Cloud providers; ii) integrate different software components to define a Serverless platform on on-premises and Edge infrastructures; iii) leverage acceleration devices on Serverless platforms; and iv) facilitate the deployment of applications and workflows through user interfaces. Additionally, several use cases have been created and adapted to assess the developments achieved. Risco Gallardo, S. (2023). Serverless Strategies and Tools in the Cloud Computing Continuum [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/202013
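As an illustration of the event-driven, file-processing style of FaaS function the abstract refers to, here is a minimal, hypothetical handler sketch. The event shape, function name, and storage fields are assumptions for illustration; they do not correspond to the specific tools developed in the thesis or to any particular provider's API.

```python
import json

def handler(event, context=None):
    """Hypothetical FaaS entry point: triggered when a file lands in storage.

    The `event` here follows a generic object-storage notification shape;
    real providers and platforms use their own schemas.
    """
    record = event["records"][0]
    bucket = record["bucket"]
    key = record["key"]

    # Placeholder "processing" step: in a scientific workflow this could be
    # image analysis, format conversion, or invoking an accelerated model.
    result = {"input": f"{bucket}/{key}", "status": "processed"}

    # The output object would normally be written back to storage so that
    # its creation event can trigger the next step of the workflow.
    return json.dumps(result)

if __name__ == "__main__":
    # Local smoke test with a synthetic event.
    fake_event = {"records": [{"bucket": "input-data", "key": "sample.tif"}]}
    print(handler(fake_event))
```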

    Authentication enhancement in command and control networks: (a study in Vehicular Ad-Hoc Networks)

    Get PDF
    Intelligent transportation systems contribute to improved traffic safety by facilitating real-time communication between vehicles. Because they use wireless channels for communication, vehicular networks are susceptible to a wide range of attacks, such as impersonation, modification, and replay. In this context, securing data exchange between intercommunicating terminals, e.g., vehicle-to-everything (V2X) communication, constitutes a technological challenge that needs to be addressed. Hence, message authentication is crucial to safeguard vehicular ad-hoc networks (VANETs) from malicious attacks. The current state of the art for authentication in VANETs relies on conventional cryptographic primitives, introducing significant computation and communication overheads. In this challenging scenario, physical (PHY)-layer authentication has gained popularity; it leverages the inherent characteristics of wireless channels and hardware imperfections to discriminate between wireless devices. However, PHY-layer-based authentication cannot be an alternative to crypto-based methods, as the initial legitimacy detection must be conducted using cryptographic methods to extract the communicating terminal's secret features. Nevertheless, it can be a promising complementary solution for the re-authentication problem in VANETs, introducing what is known as "cross-layer authentication." This thesis focuses on designing efficient cross-layer authentication schemes for VANETs, reducing the communication and computation overheads associated with transmitting and verifying a crypto-based signature for each transmission. The following provides an overview of the methodologies employed in the contributions presented in this thesis.
    1. The first cross-layer authentication scheme: a four-step process comprising initial crypto-based authentication, shared key extraction, re-authentication via a PHY challenge-response algorithm, and adaptive adjustments based on channel conditions (a minimal illustrative sketch of the challenge-response step appears after this abstract). Simulation results validate its efficacy, especially in low signal-to-noise ratio (SNR) scenarios, while proving its resilience against active and passive attacks.
    2. The second cross-layer authentication scheme: leveraging the spatially and temporally correlated wireless channel features, this scheme extracts high-entropy shared keys that can be used to create dynamic PHY-layer signatures for authentication. A 3-dimensional (3D) scattering Doppler emulator is designed to investigate the scheme's performance at different speeds of a moving vehicle and different SNRs. Theoretical and hardware implementation analyses prove the scheme's capability to support a high detection probability (Pd) for an acceptable false alarm value ≤ 0.1 at SNR ≥ 0 dB and speed ≤ 45 m/s.
    3. The third proposal, reconfigurable intelligent surface (RIS) integration for improved authentication: focusing on enhancing PHY-layer re-authentication, this proposal explores integrating RIS technology to improve the SNR directed at designated vehicles. Theoretical analysis and practical implementation of the proposed scheme are conducted using a 1-bit RIS consisting of 64 × 64 reflective units. Experimental results show a significant improvement in Pd, increasing from 0.82 to 0.96 at SNR = −6 dB for multicarrier communications.
    4. The fourth proposal, RIS-enhanced vehicular communication security: tailored for challenging SNR in non-line-of-sight (NLoS) scenarios, this proposal optimises key extraction and defends against denial-of-service (DoS) attacks through selective signal strengthening. Hardware implementation studies prove its effectiveness, showcasing improved key extraction performance and resilience against potential threats.
    5. The fifth cross-layer authentication scheme: integrating PKI-based initial legitimacy detection and blockchain-based reconciliation techniques, this scheme ensures secure data exchange. Rigorous security analyses and performance evaluations using network simulators and computation metrics showcase its effectiveness, ensuring its resistance against common attacks and time efficiency in message verification.
    6. The final proposal, group key distribution: employing smart contract-based blockchain technology alongside PKI-based authentication, this proposal distributes group session keys securely. Its lightweight symmetric-key cryptography-based method maintains privacy in VANETs, validated via Ethereum's main network (MainNet) and comprehensive computation and communication evaluations.
    The analysis shows that the proposed methods yield a noteworthy reduction, ranging from approximately 70% to 99%, in both computation and communication overheads compared to the conventional approaches. This reduction pertains to the verification and transmission of 1000 messages in total.
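The shared-key challenge-response re-authentication step described in the first scheme can be illustrated, at a structural level, with a generic challenge-response exchange. In the sketch below the shared key is generated randomly and the response is an HMAC tag; in the actual schemes the key would be extracted from correlated wireless channel features and the response would be a PHY-layer signature, so this is only a hypothetical analogy, not the thesis's algorithm.

```python
import hmac
import hashlib
import secrets

# Hypothetical shared key; in the schemes described it would be extracted
# from correlated wireless channel features rather than generated like this.
shared_key = secrets.token_bytes(32)

def issue_challenge():
    """Verifier (e.g. a roadside unit) sends a fresh random nonce."""
    return secrets.token_bytes(16)

def respond(key, challenge):
    """Prover (vehicle) answers with a keyed tag over the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key, challenge, response):
    """Verifier recomputes the tag and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
tag = respond(shared_key, challenge)
print(verify(shared_key, challenge, tag))  # True for the legitimate vehicle
```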

    Configuration Management of Distributed Systems over Unreliable and Hostile Networks

    Get PDF
    Economic incentives of large criminal profits and the threat of legal consequences have pushed criminals to continuously improve their malware, especially command and control channels. This thesis applied concepts from successful malware command and control to explore the survivability and resilience of benign configuration management systems. This work expands on existing stage models of the malware life cycle to contribute a new model for identifying malware concepts applicable to benign configuration management. The Hidden Master architecture is a contribution to master-agent network communication. In the Hidden Master architecture, communication between master and agent is asynchronous and can operate through intermediate nodes. This protects the master secret key, which gives full control of all computers participating in configuration management. Multiple improvements to idempotent configuration were proposed, including the definition of the minimal base resource dependency model, simplified resource revalidation and the use of an imperative general-purpose language for defining idempotent configuration. Following the constructive research approach, the improvements to configuration management were designed into two prototypes. This allowed validation in laboratory testing, in two case studies and in expert interviews. In laboratory testing, the Hidden Master prototype was more resilient than leading configuration management tools in high-load and low-memory conditions, and against packet loss and corruption. Only the research prototype was adaptable to a network without a stable topology, due to the asynchronous nature of the Hidden Master architecture. The main case study used the research prototype in a complex environment to deploy a multi-room, authenticated audiovisual system for a client of an organization deploying the configuration. The case studies indicated that an imperative general-purpose language can be used for idempotent configuration in real life, for defining new configurations in unexpected situations using the base resources, and for abstracting those using standard language features; and that such a system seems easy to learn. Potential business benefits were identified and evaluated using individual semi-structured expert interviews. Respondents agreed that the models and the Hidden Master architecture could reduce costs and risks, improve developer productivity and allow faster time-to-market. Protection of master secret keys and the reduced need for incident response were seen as key drivers for improved security. Low-cost geographic scaling and leveraging the file-serving capabilities of commodity servers were seen to improve scaling and resiliency. Respondents identified jurisdictional legal limitations to encryption and requirements for cloud operator auditing as factors potentially limiting the full use of some concepts.
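The idea of expressing idempotent configuration in an imperative general-purpose language can be illustrated with a minimal, hypothetical resource: a function that converges a file to a desired content and reports whether anything changed, so repeated runs are safe no-ops. This sketch is not taken from the thesis prototypes; names, paths, and behaviour are assumptions for illustration.

```python
from pathlib import Path

def ensure_file(path, content):
    """Idempotent 'file' resource: converge the file at `path` to `content`.

    Returns True if a change was made, False if the system was already in
    the desired state.
    """
    target = Path(path)
    if target.exists() and target.read_text() == content:
        return False  # already converged; nothing to do
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content)
    return True

# Running the same declaration twice changes the system at most once.
changed_first = ensure_file("/tmp/demo.conf", "listen_port = 8080\n")
changed_second = ensure_file("/tmp/demo.conf", "listen_port = 8080\n")
print(changed_first, changed_second)  # e.g. True False on a fresh system
```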

    Converging organoids and extracellular matrix: New insights into liver cancer biology

    Get PDF
    Primary liver cancer, consisting primarily of hepatocellular carcinoma (HCC) and cholangiocarcinoma (CCA), is a heterogeneous malignancy with a dismal prognosis and is the third leading cause of cancer mortality worldwide [1, 2]. It is characterized by unique histological features, late-stage diagnosis, a highly variable mutational landscape, and high levels of heterogeneity in biology and etiology [3-5]. Treatment options are limited, with surgical intervention being the main curative option, although it is not available for the majority of patients, who are diagnosed at an advanced stage. Major contributing factors to the complexity and limited treatment options are the interactions between primary tumor cells, non-neoplastic stromal and immune cells, and the extracellular matrix (ECM). ECM dysregulation plays a prominent role in multiple facets of liver cancer, including initiation and progression [6, 7]. HCC often develops in already damaged environments containing large areas of inflammation and fibrosis, while CCA is commonly characterized by significant desmoplasia, the extensive formation of connective tissue surrounding the tumor [8, 9]. Thus, to gain a better understanding of liver cancer biology, sophisticated in vitro tumor models need to comprehensively incorporate the various aspects that together dictate liver cancer progression. Therefore, the aim of this thesis is to create in vitro liver cancer models through organoid technology approaches, allowing for novel insights into liver cancer biology and, in turn, providing potential avenues for therapeutic testing. To model primary epithelial liver cancer cells, organoid technology is employed in part I. To study and characterize the role of ECM in liver cancer, decellularization of tumor tissue, adjacent liver tissue, and distant metastatic organs (i.e. lung and lymph node) is described, characterized, and combined with organoid technology to create improved tissue-engineered models for liver cancer in part II of this thesis. Chapter 1 provides a brief introduction into the concepts of liver cancer, cellular heterogeneity, decellularization and organoid technology. It also explains the rationale behind the work presented in this thesis. Chapter 2 provides an in-depth analysis of organoid technology and contrasts it with the different in vitro cell culture systems employed for liver cancer modeling. Reliable establishment of liver cancer organoids is crucial for advancing translational applications of organoids, such as personalized medicine. Therefore, as described in chapter 3, a multi-center analysis was performed on the establishment of liver cancer organoids. This revealed a global establishment efficiency rate of 28.2% (19.3% for hepatocellular carcinoma organoids (HCCO) and 36% for cholangiocarcinoma organoids (CCAO)). Additionally, potential solutions and future perspectives for increasing establishment are provided. Liver cancer organoids consist solely of primary epithelial tumor cells. To engineer an in vitro tumor model with the possibility of immunotherapy testing, CCAO were combined with immune cells in chapter 4. Co-culture of CCAO with peripheral blood mononuclear cells and/or allogeneic T cells revealed an effective anti-tumor immune response, with distinct interpatient heterogeneity. These cytotoxic effects were mediated by cell-cell contact and release of soluble factors, although indirect killing through soluble factors was only observed in one organoid line.
Thus, this model provided a first step towards developing immunotherapy for CCA on an individual patient level. Personalized medicine success is dependent on an organoid's ability to recapitulate patient tissue faithfully. Therefore, in chapter 5 a novel organoid system was created in which branching morphogenesis was induced in cholangiocyte and CCA organoids. Branching cholangiocyte organoids self-organized into tubular structures with high similarity to primary cholangiocytes, based on single-cell sequencing and functionality. Similarly, branching CCAO obtain a morphology in vitro that is more similar to primary tumors. Moreover, these branching CCAO have a higher correlation to the transcriptomic profile of patient-paired tumor tissue and an increased drug resistance to gemcitabine and cisplatin, the standard chemotherapy regimen for CCA patients in the clinic. As discussed, CCAO represent the epithelial compartment of CCA. Proliferation, invasion, and metastasis of epithelial tumor cells are highly influenced by the interaction with their cellular and extracellular environment. The remodeling of various properties of the extracellular matrix (ECM), including stiffness, composition, alignment, and integrity, influences tumor progression. In chapter 6 the alterations of the ECM in solid tumors and the translational impact of our increased understanding of these alterations are discussed. The success of ECM-related cancer therapy development requires an intimate understanding of the malignancy-induced changes to the ECM. This principle was applied to liver cancer in chapter 7, whereby the dysregulation of liver cancer ECM was characterized through an integrative molecular and mechanical approach. An optimized agitation-based decellularization protocol was established for primary liver cancer (HCC and CCA) and paired adjacent tissue (HCC-ADJ and CCA-ADJ). Novel malignancy-related ECM protein signatures were found, which were previously overlooked in liver cancer transcriptomic data. Additionally, the mechanical characteristics were probed, which revealed divergent macro- and micro-scale mechanical properties and a higher alignment of collagen in CCA. This study provided a better understanding of ECM alterations during liver cancer as well as a potential scaffold for the culture of organoids. This was applied to CCA in chapter 8 by combining decellularized CCA tumor ECM and tumor-free liver ECM with CCAO to study cell-matrix interactions. Culture of CCAO in tumor ECM resulted in a transcriptome closely resembling in vivo patient tumor tissue, and was accompanied by an increase in chemoresistance. In tumor-free liver ECM, devoid of desmoplasia, CCAO initiated a desmoplastic reaction through increased collagen production. If desmoplasia was already present, distinct ECM proteins were produced by the organoids. These were tumor-related proteins associated with poor patient survival. To extend this method of studying cell-matrix interactions to a metastatic setting, lung and lymph node tissue was decellularized and recellularized with CCAO in chapter 9, as these are common locations of metastasis in CCA. Decellularization resulted in removal of cells while preserving ECM structure and protein composition, linked to tissue-specific functioning hallmarks. Recellularization revealed that lung and lymph node ECM induced different gene expression profiles in the organoids, related to cancer stem cell phenotype, cell-ECM integrin binding, and epithelial-to-mesenchymal transition.
Furthermore, the metabolic activity of CCAO in lung and lymph node was significantly influenced by the metastatic location, the original characteristics of the patient tumor, and the donor of the target organ. The previously described in vitro tumor models utilized decellularized scaffolds with native structure. Decellularized ECM can also be used for the creation of tissue-specific hydrogels through digestion and gelation procedures. These hydrogels were created from both porcine and human livers in chapter 10. The liver ECM-based hydrogels were used to initiate and culture healthy cholangiocyte organoids, which maintained cholangiocyte marker expression, thus providing an alternative for initiation of organoids in BME. Building upon this, in chapter 11 human liver ECM-based extracts were used in combination with a one-step microfluidic encapsulation method to produce size-standardized CCAO. The established system can facilitate the reduction of size variability conventionally seen in organoid culture by providing uniform scaffolding. Encapsulated CCAO retained their stem cell phenotype and were amenable to drug screening, showing the feasibility of scalable production of CCAO for high-throughput drug screening approaches. Lastly, chapter 12 provides a global discussion and future outlook on tumor tissue engineering strategies for liver cancer, using organoid technology and decellularization. Combining multiple aspects of liver cancer, both cellular and extracellular, with tissue engineering strategies provides advanced tumor models that can delineate fundamental mechanistic insights as well as provide a platform for drug screening approaches.

    The effect of autologous macrophage therapy in cirrhosis in response to individual immune reparative pathways: developing a novel therapy

    Get PDF
    BACKGROUND: Liver cirrhosis is the end stage of any injury process to the liver. Once established, it inevitably progresses to complications such as portal hypertension, cancer and death. There is no cure for liver cirrhosis besides liver transplantation; we face an unmet demand for treatment of this condition. The role of macrophages in fibrosis development and resolution in the liver has been extensively investigated. Prof Forbes' group invested in the development of an autologous macrophage product to promote fibrosis resolution and hence cirrhosis regression. This has demonstrated its efficacy and safety in animal models. From these encouraging pre-clinical data, a phase 1 first-in-human clinical trial of an autologous activated macrophage product for cirrhotic patients was developed. METHODS: Using an established 3+3 dose escalation model, we enrolled a total of 9 subjects in the phase 1 trial, reaching a maximum achieved and safe dose of 1x10^9 macrophages. In addition to adverse events, dose toxicity and macrophage activation syndrome (MAS) parameters, we evaluated a varied range of circulating cytokines and chemokines pre- and post-treatment using a commercial kit. Moreover, we developed a protocol for 31P magnetic resonance spectroscopy (MRS) for the analysis of the metabolically active liver parenchyma. Data from the phase 1 trial were used to improve the autologous cellular product and the phase 2 randomised controlled trial. RESULTS: The autologous activated macrophage product was demonstrated not to cause any toxicity in this first-in-human study of a cirrhotic population of different aetiologies. Cytokine and chemokine analysis supports these findings and specifically demonstrates low levels of IL-8, which is a cardinal feature of MAS. Other interesting cytokine signals may support an extracellular matrix remodelling effect of the autologous macrophage product infusion. In addition, we demonstrated a reproducible protocol for MRS in liver disease. DISCUSSION: Autologous activated macrophage infusion did not result in any toxicity in cirrhotic subjects taking part in this study and shows preliminary signs of efficacy in fibrosis resolution, both clinically and biochemically. This work lays the basis for the development of cellular products for the treatment of cirrhosis and fibrosis and provides invaluable insight into the immune response to cellular treatment.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Effects of municipal smoke-free ordinances on secondhand smoke exposure in the Republic of Korea

    Get PDF
    Objective: To reduce premature deaths due to secondhand smoke (SHS) exposure among non-smokers, the Republic of Korea (ROK) adopted changes to the National Health Promotion Act, which allowed local governments to enact municipal ordinances to strengthen their authority to designate smoke-free areas and levy penalty fines. In this study, we examined national trends in SHS exposure after the introduction of these municipal ordinances at the city level in 2010. Methods: We used interrupted time series analysis to assess whether the trends of SHS exposure in the workplace and at home, and the primary cigarette smoking rate, changed following the policy adjustment in the national legislation in the ROK. Population-standardized data for selected variables were retrieved from a nationally representative survey dataset and used to study the policy action's effectiveness. Results: Following the change in the legislation, SHS exposure in the workplace reversed course from an increasing (18% per year) trend prior to the introduction of these smoke-free ordinances to a decreasing (−10% per year) trend after adoption and enforcement of these laws (β2 = 0.18, p-value = 0.07; β3 = −0.10, p-value = 0.02). SHS exposure at home (β2 = 0.10, p-value = 0.09; β3 = −0.03, p-value = 0.14) and the primary cigarette smoking rate (β2 = 0.03, p-value = 0.10; β3 = 0.008, p-value = 0.15) showed no significant changes in the sampled period. Although analyses stratified by sex showed that the allowance of municipal ordinances resulted in reduced SHS exposure in the workplace for both males and females, they did not affect the primary cigarette smoking rate as much, especially among females. Conclusion: Strengthening the role of local governments by giving them the authority to enact and enforce penalties on SHS exposure violations helped the ROK to reduce SHS exposure in the workplace. However, smoking behaviors and related activities seemed to shift to less restrictive areas such as on the streets and in apartment hallways, negating some of the effects due to these ordinances. Future studies should investigate how smoke-free policies beyond public places can further reduce SHS exposure in the ROK.
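The interrupted time series (segmented regression) analysis described here can be sketched with hypothetical data: a level-change term for the 2010 adoption of the ordinances and a slope-change term for the trend thereafter, loosely corresponding to the β2 and β3 parameters reported. The numbers below are synthetic and for illustration only, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical yearly series of workplace SHS exposure (%), 2005-2015,
# with the municipal ordinances taking effect in 2010.
years = np.arange(2005, 2016)
exposure = np.array([30, 33, 36, 40, 44, 47, 43, 39, 36, 33, 30], dtype=float)

df = pd.DataFrame({
    "time": years - years[0],                     # years since start of series
    "post": (years >= 2010).astype(int),          # level change at adoption (beta2-style term)
    "time_post": np.clip(years - 2010, 0, None),  # slope change after adoption (beta3-style term)
    "exposure": exposure,
})

# Segmented (interrupted time series) regression:
# exposure = b0 + b1*time + b2*post + b3*time_post + error
model = smf.ols("exposure ~ time + post + time_post", data=df).fit()
print(model.params)
```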