26 research outputs found

    A Methodology for Assuring Privacy by Design in Information Systems

    There is no doubt that privacy by design (PbD) has become a structuring paradigm for personal data protection. The paradigm has been in use since 1995, and by making PbD a legal obligation in 2018, the General Data Protection Regulation (GDPR) confirmed the relevance of PbD's seven principles. Companies are therefore called on to put technical and organizational measures in place to integrate PbD into their operations, hence the need for a methodology that provides an exhaustive approach adapted to this implementation. Since the literature focuses on methodologies that embody PbD only in software systems, this article proposes the Information System Privacy Methodology (ISPM), which addresses the implementation of PbD across the enterprise architecture, specifically in information systems, taking into account all the technical and organizational aspects that must be adopted for this goal to succeed.

    Apply the LINDDUN framework for privacy requirement analysis

    LINDDUN is a framework for identifying privacy threats and eliciting privacy requirements for a system. It offers complete procedures and strong support for privacy requirements analysis. This research examines how the LINDDUN methodology can be applied in practice. The thesis applies LINDDUN to a case project named Rin-Tin-Tinder for privacy threat and privacy requirements analysis. The results are compared with the privacy requirements elicited by the project team in a workshop session, and are verified through a comparison with the Microsoft privacy guideline. The discussion and analysis of this comparison reveal strengths and weaknesses of the LINDDUN methodology. Compared to the workshop, LINDDUN led the analyst to identify more privacy threats and derive more privacy requirements, and it made the analysis process more predictable. At the same time, the methodology has a blind spot regarding users' unintentional false instructions. The thesis discusses possible directions for improving LINDDUN and summarizes guiding rules for assumption making, an important step in LINDDUN. These findings will be helpful for LINDDUN's further improvement.
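
    To make the procedure concrete, here is a minimal, hypothetical Python sketch (not taken from the thesis) of LINDDUN's first step: enumerating which threat categories may apply to each element of a system's data flow diagram (DFD). The element-to-category mapping below is simplified for illustration only; the official LINDDUN mapping table is the authoritative reference.

```python
# Illustrative sketch: enumerate candidate LINDDUN threats per DFD element.
# The APPLICABLE mapping is a simplified assumption, not the official table.

LINDDUN_CATEGORIES = [
    "Linking", "Identifying", "Non-repudiation", "Detectability",
    "Disclosure of information", "Unawareness", "Non-compliance",
]

# Simplified element-type -> applicable-category mapping (assumption):
# only external entities are examined for Unawareness here.
APPLICABLE = {
    "external_entity": {"Linking", "Identifying", "Unawareness"},
    "process": set(LINDDUN_CATEGORIES) - {"Unawareness"},
    "data_store": set(LINDDUN_CATEGORIES) - {"Unawareness"},
    "data_flow": set(LINDDUN_CATEGORIES) - {"Unawareness"},
}

def candidate_threats(dfd):
    """Yield (element, category) pairs the analyst should examine."""
    for name, element_type in dfd:
        for category in LINDDUN_CATEGORIES:
            if category in APPLICABLE[element_type]:
                yield name, category

# Toy DFD for a matchmaking app in the spirit of the case project:
dfd = [
    ("user", "external_entity"),
    ("match service", "process"),
    ("profile store", "data_store"),
    ("profile upload", "data_flow"),
]
for element, category in candidate_threats(dfd):
    print(f"Examine: {category} threat at '{element}'")
```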

    A semi-automated BPMN-based framework for detecting conflicts between security, data-minimization, and fairness requirements

    Requirements are inherently prone to conflicts, and security, data-minimization, and fairness requirements are no exception. Importantly, undetected conflicts between such requirements can lead to severe effects, including privacy infringement and legal sanctions. Detecting conflicts between security, data-minimization, and fairness requirements is a challenging task, as such conflicts are context-specific and their detection requires a thorough understanding of the underlying business processes. For example, a process may require anonymous execution of a task that writes data into a secure data storage, where the identity of the writer is needed for the purpose of accountability. Moreover, conflicts arise not only from trade-offs between requirements elicited from the stakeholders, but also from misinterpretation of elicited requirements while implementing them in business processes, leading to a misalignment between the data subjects' requirements and their specifications. Both types of conflicts are substantial challenges for conflict detection. To address these challenges, we propose a BPMN-based framework that supports: (i) the design of business processes considering security, data-minimization and fairness requirements, (ii) the encoding of such requirements as reusable, domain-specific patterns, (iii) the checking of alignment between the encoded requirements and annotated BPMN models based on these patterns, and (iv) the detection of conflicts between the specified requirements in the BPMN models based on a catalog of domain-independent anti-patterns. The security requirements were reused from SecBPMN2, a security-oriented BPMN 2.0 extension, while the fairness and data-minimization parts are new. For formulating our patterns and anti-patterns, we extended a graphical query language called SecBPMN2-Q. We report on the feasibility and the usability of our approach based on a case study featuring a healthcare management system, and an experimental user study. © 2020, The Author(s)
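
    As an illustration of the kind of anti-pattern check such a framework performs, the following hypothetical Python sketch flags the conflict from the example above: a task that must execute anonymously while writing to a data store whose accountability annotation requires the writer's identity. The real framework expresses such anti-patterns in SecBPMN2-Q over annotated BPMN 2.0 models; the classes and names here are illustrative assumptions.

```python
# Toy anti-pattern check over annotated process elements (assumption:
# the actual framework operates on BPMN models, not these dataclasses).

from dataclasses import dataclass, field

@dataclass
class DataStore:
    name: str
    annotations: set = field(default_factory=set)   # e.g. {"accountability"}

@dataclass
class Task:
    name: str
    annotations: set = field(default_factory=set)   # e.g. {"anonymity"}
    writes_to: list = field(default_factory=list)   # data stores written to

def anonymity_accountability_conflicts(tasks):
    """Flag tasks that must run anonymously yet write to a store whose
    accountability annotation requires the writer's identity."""
    conflicts = []
    for task in tasks:
        if "anonymity" in task.annotations:
            for store in task.writes_to:
                if "accountability" in store.annotations:
                    conflicts.append((task.name, store.name))
    return conflicts

records = DataStore("patient records", {"accountability"})
submit = Task("submit feedback", {"anonymity"}, writes_to=[records])
print(anonymity_accountability_conflicts([submit]))
# -> [('submit feedback', 'patient records')]
```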

    Domain- and Quality-aware Requirements Engineering for Law-compliant Systems

    The long-known credo of requirements engineering states that it is challenging to build the right system if you do not know what 'right' is. There is strong evidence that this credo exactly captures the necessity of requirements engineering: fixing a defect in software that is already fielded is reported to be up to eighty times more expensive than fixing the corresponding requirements defect early on. In general, conducting sufficient requirements engineering has been shown to be a crucial success factor for software development projects. In the progression from the stakeholders' initial wishes regarding the system-to-be to a specification for that system, requirements engineers must undergo a complex decision process that connects the stakeholder wishes to the final specification. Indeed, decision making is considered an inherent part of requirements engineering. In this thesis, we try to understand which activities and information are needed for selecting requirements, what the challenges are, how an ideal solution for selecting requirements would look, and where the current state of the art falls short of that ideal solution. We identify the information necessary for an informed requirements selection, present a process for collecting that information, highlight the challenges addressed by this process and its activities, and discuss a selection of methods for conducting the activities of the process. The collected information is then used for an automated requirements selection based on an optimization model, which is also part of the contribution of this thesis. Because we identified two major gaps in the state of the art concerning the proposed process and its activities, we also present two novel methods, for context elicitation and for legal compliance requirements elicitation, to fill these gaps as part of the main contribution. Our solution for context elicitation enables a domain-specific context establishment based on patterns for different domains. The context patterns allow a structured elicitation and documentation of the relevant stakeholders and technical entities for a system-to-be. Both the documentation, in the form of graphical pattern instances and textual template instances, and the method for collecting the necessary information are given explicitly in each context pattern.
Additionally, we provide the means to derive new context patterns and to extend the context pattern language presented in this thesis. Our solution for legal compliance requirements elicitation is a pattern-based, guided method that lets one identify the laws relevant to a system-to-be, described in terms of its functional requirements, and that intertwines the functional requirements with the corresponding legal requirements. The method relies on the collaboration of requirements engineers and legal experts, and bridges the gap between their distinct worlds. Our process is exemplified using a running example from the domain of service-oriented architectures. Additionally, we present the results of applying (parts of) the process to real-life cases from the smart grid and voting system domains, as well as all other results of the scientific means we used to ground and validate the proposed solutions.
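
    The abstract does not reproduce the thesis's optimization model, but the underlying idea of automated requirements selection can be sketched as a small constrained-optimization problem: maximize stakeholder value within a cost budget while always including legally mandated requirements. The Python sketch below, with invented names and numbers, is an assumption-laden illustration of that general idea, not the thesis's actual model.

```python
# Toy requirements selection as 0/1 optimization (brute force over the
# optional subset; all names, values, and costs are invented).

from itertools import combinations

# (name, stakeholder value, cost, legally required)
requirements = [
    ("encrypt personal data", 8, 5, True),
    ("audit log",             6, 4, True),
    ("dark mode",             3, 2, False),
    ("offline sync",          7, 6, False),
    ("export to CSV",         4, 3, False),
]
BUDGET = 15

mandatory = [r for r in requirements if r[3]]   # always selected
optional = [r for r in requirements if not r[3]]
base_cost = sum(r[2] for r in mandatory)

best_value, best_set = -1, None
for k in range(len(optional) + 1):
    for combo in combinations(optional, k):
        chosen = mandatory + list(combo)
        cost = base_cost + sum(r[2] for r in combo)
        value = sum(r[1] for r in chosen)
        if cost <= BUDGET and value > best_value:
            best_value, best_set = value, chosen

print([r[0] for r in best_set], "value:", best_value)
```

    Real instances would use an integer programming solver rather than enumeration, but the structure, mandatory compliance constraints plus a budgeted value maximization, is the same.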

    Thematic Collection of Papers of International Significance, Volume 3 / International Scientific Conference "Archibald Reiss Days" (Dani Arčibalda Rajsa), Belgrade, 3-4 March 2015

    In front of you is the Thematic Collection of Papers presented at the International Scientific Conference "Archibald Reiss Days", which was organized by the Academy of Criminalistic and Police Studies in Belgrade, in co-operation with the Ministry of Interior and the Ministry of Education, Science and Technological Development of the Republic of Serbia, National Police University of China, Lviv State University of Internal Affairs, Volgograd Academy of the Russian Internal Affairs Ministry, Faculty of Security in Skopje, Faculty of Criminal Justice and Security in Ljubljana, Police Academy "Alexandru Ioan Cuza" in Bucharest, Academy of Police Force in Bratislava and Police College in Banjaluka, and held at the Academy of Criminalistic and Police Studies on 3 and 4 March 2015. The International Scientific Conference "Archibald Reiss Days" was organized for the fifth time in a row, in memory of the founder and director of the first modern higher police school in Serbia, Rodolphe Archibald Reiss, PhD, after whom the Conference is named. The Thematic Collection of Papers contains 168 papers written by eminent scholars in the fields of law, security, criminalistics, police studies, forensics and informatics, as well as members of the national security system participating in the education of the police, army and other security services, from Spain, Russia, Ukraine, Belarus, China, Poland, Armenia, Portugal, Turkey, Austria, Slovakia, Hungary, Slovenia, Macedonia, Croatia, Montenegro, Bosnia and Herzegovina, Republic of Srpska and Serbia. Each paper has been reviewed by two reviewers, international experts competent in the field to which the paper relates, and the Thematic Conference Proceedings as a whole has been reviewed by five competent international reviewers. The papers published in the Thematic Collection of Papers survey contemporary trends in the development of the police education system, the development of the police, and contemporary security, criminalistic and forensic concepts. Furthermore, they provide an analysis of rule-of-law activities in crime suppression, the situation and trends in the above-mentioned fields, and suggestions on how to deal with these issues systematically. The Collection of Papers represents a significant contribution to the existing fund of scientific and expert knowledge in the fields of criminalistic, security, penal and legal theory and practice. Its publication contributes to improving mutual cooperation between educational, scientific and expert institutions at the national, regional and international levels.

    Problem-Based Privacy Analysis (ProPAn): A Computer-aided Privacy Requirements Engineering Method

    With the advancing digitalization of almost all parts of our daily life, e.g., electronic health records and smart homes, and the outsourcing of data processing, e.g., data storage in the cloud and data analysis services, computer-based systems process more and more data these days. Often the processed data originate from natural persons (called data subjects) and are hence personal data, possibly containing sensitive information about the individuals. Privacy in the context of personal data processing means that personal data are protected, e.g., against unwanted access and modification, that data subjects are aware of the processing practices of the controller that processes their data, and that data subjects keep control over the processing of their personal data. Privacy regulations, such as the EU General Data Protection Regulation (GDPR), aim at protecting data subjects by empowering them with rights and by putting obligations on controllers processing personal data. Not only are the administrative fines defined in regulations a driver for considering privacy in the development of a software-based system; several data breaches in recent years have also shown that poor consideration of privacy during system and software development may ultimately lead to a loss of trust in the controller and damage to its reputation. To avoid data breaches and to comply with privacy regulations, privacy should be considered in system and software development as a software quality from the beginning. This approach is also known as privacy-by-design. Several challenges for privacy-by-design methods are still not fully addressed by existing methods. First, diverse notions of privacy exist; most of them are non-technical and have to be refined into more technical privacy requirements that can be related to the system. Second, the system has to be analyzed for its personal data processing behavior, i.e., it has to be determined which personal data are collected, stored, and provided to others by the system. Third, the privacy requirements that are actually relevant for the system have to be elicited. Fourth, the privacy risks imposed by or existing in the system have to be identified and evaluated. Fifth, measures that implement the privacy requirements and mitigate the privacy risks of the system have to be selected and integrated into the system. Sixth, privacy regulations mandate assessing the impact of the personal data processing on the data subjects; such a privacy impact assessment (PIA) may be performed as part of a privacy-by-design method. Seventh, carrying out a privacy-by-design method should be supported as well as possible, e.g., by a systematic method, supportive material, and computer support. In this thesis, I propose the privacy requirements engineering method Problem-based Privacy Analysis (ProPAn). The ProPAn method aims to address the aforementioned challenges, starting with a system's functional requirements as input. As part of ProPAn, I provide a privacy requirements taxonomy that I derived from and mapped to various other privacy notions. This taxonomy addresses the first challenge mentioned above; the ProPAn method, the main contribution of my thesis, addresses the second to seventh challenges.
To address the fifth challenge within the ProPAn method, I propose an aspect-oriented requirements engineering framework that allows cross-cutting functionalities to be modeled and modularly integrated into a system's functional requirements. The seventh challenge is addressed by ProPAn's computer support for executing the method and for documenting and validating the method's artifacts in a machine-readable model.
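
    As a purely illustrative assumption (the abstract does not show ProPAn's actual metamodel), a machine-readable privacy requirement artifact might tie a refined privacy goal to the data subject, personal data, and counterstakeholder it concerns, so that tools can trace and validate it automatically. A minimal Python sketch of such a record:

```python
# Hypothetical machine-readable privacy requirement record (assumption,
# not ProPAn's real artifact format).

from dataclasses import dataclass
from enum import Enum

class PrivacyGoal(Enum):
    UNLINKABILITY = "unlinkability"
    TRANSPARENCY = "transparency"
    INTERVENABILITY = "intervenability"
    CONFIDENTIALITY = "confidentiality"

@dataclass(frozen=True)
class PrivacyRequirement:
    goal: PrivacyGoal
    data_subject: str        # whose data, e.g. "patient"
    personal_data: str       # which data, e.g. "health record"
    counterstakeholder: str  # against whom, e.g. "insurance company"

    def describe(self):
        return (f"{self.goal.value} of {self.data_subject}'s "
                f"{self.personal_data} w.r.t. {self.counterstakeholder}")

req = PrivacyRequirement(PrivacyGoal.UNLINKABILITY, "patient",
                         "health record", "insurance company")
print(req.describe())
```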

    Development and application of distributed computing tools for virtual screening of large compound libraries

    In the current drug discovery process, the identification of new target proteins and potential ligands is tedious, expensive and time-consuming. The use of in silico techniques is therefore of utmost importance and has proved to be a valuable strategy for detecting complex structural and bioactivity relationships. The increasing demand for computational power in scientific fields and the timely analysis of the generated piles of data require innovative strategies for the efficient utilization of distributed computing resources in the form of computational grids. Such grids add a new aspect to the emerging information technology paradigm by providing and coordinating heterogeneous resources such as various organizations, people, computing, storage and networking facilities, as well as data, knowledge, software and workflows. The aim of this study was to develop a university-wide applicable grid infrastructure, UVieCo (University of Vienna Condor pool), which can be used to implement standard structure- and ligand-based drug discovery applications using freely available academic software. Firewall and security issues were resolved with a virtual private network setup, whereas virtualization of computer hardware was achieved using the CoLinux concept, which allows Linux-executable jobs to run inside Windows machines. The effectiveness of the grid was assessed by performance measurement experiments using sequential and parallel tasks. Subsequently, the association of expression/sensitivity profiles of ABC transporters with activity profiles of anticancer compounds was analyzed by mining data from the NCI (National Cancer Institute). The datasets generated in this analysis were used with ligand-based computational methods, such as shape similarity and classification algorithms, to identify P-glycoprotein (P-gp) substrates and separate them from non-substrates. While developing predictive classification models, the problem of extremely imbalanced class distribution was addressed using the cost-sensitive bagging approach. Applicability domain experiments revealed that our model not only predicts NCI compounds well but can also be applied to drug-like molecules. The developed models were relatively simple yet precise enough to be applicable to the virtual screening of large chemical libraries for the early identification of P-gp substrates, which can be useful for removing compounds with poor ADMET properties in an early phase of drug discovery. Additionally, shape-similarity and self-organizing map techniques were used to screen an in-house database as well as a large vendor database to identify novel compounds similar to selective serotonin reuptake inhibitors (SSRIs) that can induce apoptosis. The retrieved hits possess novel chemical scaffolds and can be considered starting points for lead optimization studies. The work described in this thesis will be useful for creating a distributed computing environment that uses the available resources within an organization and can be applied to various tasks, such as the efficient handling of imbalanced data classification problems or multistep virtual screening.
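
    The abstract names cost-sensitive bagging as the remedy for the imbalanced substrate/non-substrate classification. A rough, generic sketch of that technique in scikit-learn follows; it is a minimal illustration on synthetic data, not the thesis's actual pipeline, and the dataset parameters are assumptions standing in for molecular descriptors.

```python
# Cost-sensitive bagging sketch for an imbalanced two-class problem
# (e.g., P-gp substrate vs. non-substrate), using scikit-learn.

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

# Synthetic stand-in for descriptors: roughly 1 positive per 9 negatives.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Each bagged tree is made cost-sensitive by weighting classes inversely
# to their frequency, so errors on the rare class cost more.
model = BaggingClassifier(
    DecisionTreeClassifier(class_weight="balanced"),
    n_estimators=50, random_state=0)
model.fit(X_tr, y_tr)

print("balanced accuracy:",
      balanced_accuracy_score(y_te, model.predict(X_te)))
```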

    CUSTOMIZED FINISHING TECHNIQUES ON ENTRY LEVEL FDM 3D PRINTED ARTEFACTS IN VISUAL ARTS: An explanatory sequential study.

    Published thesis. The aim of this study is to investigate ways to improve the quality of artefacts produced by entry-level fused deposition modelling (ELFDM), to make the technology accessible to a wider range of prosumers, and to address the scale limitations of produced components. The development of entry-level 3D printing (EL3DP) technology enhances art and design by providing techniques that were previously impossible; however, limitations such as poor surface finish quality and restricted build size are persistently observed. These limitations steer artists and designers away from the technology because of the poor aesthetic value of its outputs. It was necessary to construct this study within an explanatory sequential mixed-methods paradigm, as both quantitative and qualitative data were needed to sketch a broad overview and analyse abstract concepts like aesthetic value. Owing to the lack of recorded academic information, an experimental pilot study was first conducted to identify potential techniques, followed by quantitative (tensile tests and surface profile measurements) and qualitative (in-depth interviews and online surveys) phases; lastly, all the data were interpreted to cohesively substantiate the hypothesis. The pre-experimental pilot study identified potential techniques that were investigated in the phases that followed. Clear evidence supports the progression of ELFDM technique development through the application of post-production finishing techniques (PPFTs), and the results indicate that the aesthetic value of an artefact can be enhanced by applying surface finishing and assembly techniques. This study enables a larger range of entry-level prosumers to use cheaper alternatives to Additive Manufacturing (AM) technologies, which will narrow the gap between high-end and entry-level systems. Furthermore, by affecting the strength and surface texture of ELFDM 3D prints, post-production finishing directly influences the aesthetic value and functionality of EL3DP artefacts.

    Formulation and Assessment of Taste-masked Electrospun Fibre Mats for Paediatric Drug Delivery

    Since the European Medicines Agency's Paediatric Regulation came into force in 2007, the drive to formulate age-appropriate dosage forms has accelerated. The aim of this thesis was to develop new approaches to paediatric formulation design through the optimisation of novel taste-masking and taste-assessment methods. Electrospinning was demonstrated to be a suitable taste-masking technology, producing fibre mats that can be further formulated into easy-to-swallow oral films. The electrospinning of Eudragit E PO, a taste-masking polymer, was optimised using Quality by Design principles, in particular Design of Experiments. To further enhance the taste-masking capability of the electrospun mat, co-axial electrospinning was employed with another taste-masking polymer, Kollicoat Smartseal. The use of both polymers successfully taste-masked chlorpheniramine maleate, a known bitter antihistamine, as demonstrated using an electronic biosensor tasting system, or E-tongue. The E-tongue was used to assess the bitterness threshold of this model drug as well as of other standard bitter drugs for benchmarking. In addition, it was used to taste-assess various formulations, which aided in ranking and deselecting formulations. Electrospun fibre mats can be processed further into a number of different dosage forms for final presentation to the patient; here, the fibre mats were designed to be presented as an oral film, and a water-soluble outer layer was added to the films using multi-axial electrospinning. A human panel study was conducted to investigate the mouthfeel and overall acceptability of electrospun PVA films versus solvent-cast PVA films. The electrospun films were found to be as acceptable as the standard solvent-cast films, a very promising result for clinical translation. PVA and PVP were electrospun with the previously optimised polymers using tri-axial and tetra-axial electrospinning. The taste of the multi-axial electrospun fibre mats was assessed, and it was found that adding a water-soluble outer layer reduces the taste-masking ability. Overall, electrospinning a bitter drug with hydrophobic taste-masking polymers is very promising for the formulation of paediatric oral films.