42 research outputs found

    Design automation algorithms for advanced lithography

    In circuit manufacturing, as technology nodes keep shrinking, conventional 193 nm immersion lithography (193i) has reached its printability limit. To continue scaling with Moore's law, several kinds of advanced lithography have been proposed, such as multiple patterning lithography (MPL), extreme ultraviolet (EUV) lithography, electron beam lithography (EBL) and directed self-assembly (DSA). While these new technologies create enormous opportunities, they also pose great design challenges due to their unique process characteristics and stringent constraints. To adopt these advanced lithography technologies smoothly in integrated circuit (IC) fabrication, effective electronic design automation (EDA) algorithms must be designed and integrated into computer-aided design (CAD) tools to address the underlying design constraints and help circuit designers better accommodate the lithography process. In this thesis, we focus on the algorithmic design and efficient implementation of EDA algorithms for advanced lithography, including directed self-assembly (DSA) and self-aligned double patterning (SADP), to overcome the physical challenges and improve manufacturing yield.

The first advanced lithography technology we explore is self-aligned double patterning (SADP). SADP has a significant advantage over traditional litho-etch-litho-etch (LELE) double patterning in its ability to eliminate overlay, making it a preferable DPL choice for the 14 nm technology node. As in any DPL technology, layout decomposition is the key problem. While the layout decomposition problem for LELE DPL has been well studied in the literature, only a few attempts have been made at the SADP layout decomposition problem. This thesis studies the SADP decomposition problem in different scenarios. SADP has been successfully deployed for 1D patterns and has several applications; however, applying it to 2D patterns turns out to be much more difficult. All previous exact algorithms were based on computationally expensive methods such as SAT or ILP, while the remaining algorithms were heuristics with no guarantee that an overlay-free solution would be found even if one exists. The SADP decomposition problem on general 2D layouts is proven to be NP-complete. However, we show that if the overlay is restricted, the problem becomes polynomial-time solvable, and we present an exact algorithm to determine whether a given 2D layout has a no-overlay SADP decomposition.

When designing layout decomposition algorithms, it is usually useful to take the layout structure into consideration. As most current IC layouts adopt a row-based standard cell design style, we can exploit its characteristics and design more efficient algorithms than those for general 2D patterns. In particular, the fixed widths of standard cells and the power tracks at the top and bottom of the cells suggest that improvements can be made over the general decomposition algorithms. We present a shortest-path based polynomial-time SADP decomposition algorithm for row-based standard cell layouts that efficiently finds decompositions with minimum overlay violations. The proposed algorithm takes advantage of the fixed width of the cells and the alternating power tracks between the rows to limit the possible decompositions and thus achieve high efficiency.
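The abstract does not spell out the shortest-path formulation, so the following is only a minimal sketch of the general idea under assumed placeholders: each cell in a row has a small set of candidate decomposition choices, transitions between adjacent cells are weighted by the overlay violations they would introduce, and a dynamic-programming pass (equivalent to a shortest path through a layered graph) picks the cheapest assignment for the whole row. The state encoding and the `violations` cost function below are hypothetical, not the thesis's actual formulation.

```python
# Minimal sketch of a layered shortest-path / dynamic-programming pass for
# row-based decomposition. The state space and cost model are illustrative
# placeholders, not the thesis's actual SADP formulation.

def violations(left_choice, right_choice):
    """Hypothetical boundary cost: overlay violations introduced when two
    adjacent cells use these decomposition choices."""
    return sum(a != b for a, b in zip(left_choice, right_choice))

def decompose_row(cells):
    """cells[i] is a list of candidate decomposition choices for cell i.
    Returns (minimum total violations, chosen decomposition per cell)."""
    # dp[c]: best cost of decomposing cells[0..i] with cell i using choice c
    dp = {c: 0 for c in cells[0]}
    back = [dict()]                       # back[i][c]: best predecessor choice
    for i in range(1, len(cells)):
        ndp, nback = {}, {}
        for c in cells[i]:
            prev = min(dp, key=lambda p: dp[p] + violations(p, c))
            ndp[c] = dp[prev] + violations(prev, c)
            nback[c] = prev
        dp, back = ndp, back + [nback]
    last = min(dp, key=dp.get)            # cheapest state for the final cell
    choices = [last]
    for i in range(len(cells) - 1, 0, -1):
        choices.append(back[i][choices[-1]])
    return dp[last], list(reversed(choices))

if __name__ == "__main__":
    # Three cells, each with two hypothetical candidate decompositions encoded
    # as tuples of per-boundary mask assignments.
    row = [[(0, 1), (1, 0)], [(0, 1), (1, 1)], [(1, 0), (0, 0)]]
    print(decompose_row(row))
```

With k candidate choices per cell, such a pass runs in O(n·k²) time per row, which is the kind of polynomial bound that the fixed row structure makes possible.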
The next advanced lithography technology we discuss in the thesis is directed self-assembly (DSA). Block copolymer DSA is a promising technique for patterning contact holes and vias at the 7 nm technology node. To pattern contacts/vias with DSA, guiding templates are first printed with conventional lithography (193i), which has a coarser pitch resolution; the contact holes are then patterned with the DSA process. The guiding templates define the DSA patterns, which have a finer resolution than the templates themselves, so different patterns can be obtained by controlling the templates. DSA lithography is therefore very promising for patterning contacts/vias at the 7 nm node. However, to utilize DSA for full-chip manufacturing, EDA for DSA must be fully explored: EDA is the key enabler for manufacturing, and EDA research for DSA still lags behind.

To pattern the contact layer with DSA, we must ensure that all contacts in the layout require only feasible DSA templates. The original layout, however, may not be designed in a DSA-friendly way, and even with an optimized library, infeasible templates may be introduced after the physical design phase. We propose a simulated-annealing (SA) based scheme to perform full-chip contact layer optimization. According to the experimental results, the DSA conflicts in the contact layer are reduced by close to 90% on average after applying the proposed optimization algorithm.

The industry is currently transitioning from random 2D designs to highly regular 1D gridded designs for sub-20 nm nodes, fabricating circuits with print-cut technology. In this process, the randomly distributed cuts may be too dense to be printed by single patterning lithography. DSA has proven successful for contact hole patterning and can easily be extended to cut printing for 1D gridded designs. Nevertheless, the irregular distribution of cuts still presents a great challenge for DSA, as the self-assembly process usually forms regular patterns. As a result, the cut layer must be optimized for the DSA process. To address this problem, we propose an efficient algorithm that optimizes cut layers without affecting the original circuit logic. Our work utilizes a technique called 'line-end extension' to move the cuts and extend the functional wires without changing the original functionality of the circuit. Consequently, the cuts can be redistributed and grouped into valid DSA templates.

Multiple patterning lithography has been widely adopted for today's circuit manufacturing, but increasing the number of masks makes the manufacturing process more expensive. By incorporating DSA into the multiple patterning process, it is possible to reduce the number of masks and achieve a cost-effective solution. We study the decomposition problem for the contact layer in row-based standard cell layouts with DSA-MP complementary lithography. We explore several heuristic approaches, and propose an algorithm that decomposes a standard cell row optimally in polynomial time. Our experiments show that the algorithm is guaranteed to find a minimum-cost solution if one exists, whereas the heuristics either fail or find only a sub-optimal solution. These results show that the DSA-MP complementary approach is very promising for future advanced nodes.
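The abstract states only that the full-chip contact optimization above is simulated-annealing based, so here is a generic SA skeleton under assumed placeholders: a `cost` function standing in for the number of DSA conflicts (contacts that cannot be grouped into feasible templates) and a `random_move` that perturbs the layout. Both the toy cost model and the move set are hypothetical, not the thesis's actual formulation.

```python
# Generic simulated-annealing skeleton; the cost model and move set are
# illustrative placeholders, not the thesis's actual formulation.
import math
import random

def anneal(layout, cost, random_move, t0=1.0, t_min=1e-3, alpha=0.95, iters=200):
    """Minimize cost(layout) by accepting uphill moves with a
    temperature-dependent probability (Metropolis criterion)."""
    best = cur = layout
    best_c = cur_c = cost(cur)
    t = t0
    while t > t_min:
        for _ in range(iters):
            cand = random_move(cur)
            cand_c = cost(cand)
            if cand_c <= cur_c or random.random() < math.exp((cur_c - cand_c) / t):
                cur, cur_c = cand, cand_c
                if cur_c < best_c:
                    best, best_c = cur, cur_c
        t *= alpha              # geometric cooling schedule
    return best, best_c

if __name__ == "__main__":
    # Toy stand-in: "layout" is a list of contact x-coordinates; the cost counts
    # adjacent contacts closer than a made-up minimum DSA template pitch.
    random.seed(0)
    PITCH = 3
    cost = lambda xs: sum(1 for a, b in zip(sorted(xs), sorted(xs)[1:]) if b - a < PITCH)
    move = lambda xs: [x + random.choice((-1, 0, 1)) for x in xs]
    print(anneal([0, 1, 2, 8, 9, 15], cost, move))
```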
As in any lithography technique, process variation control and proximity correction are among the most important issues. Because the DSA templates are patterned by conventional lithography, the printed templates are prone to deviate from the mask shapes due to process variations, which ultimately affects the contacts produced by the DSA process, even for the same type of template. Therefore, to enable DSA technology for contact/via layer printing, it is extremely important to accurately model and detect hotspots, as well as to estimate the contact pitch and locations, during the verification phase. We propose a machine learning based design automation framework for DSA verification, including a novel DSA model and a set of features. We implemented the proposed ML-based flow and performed extensive experiments comparing the performance of learning algorithms and features. The experimental results show that our approach is much more efficient than the traditional approach and can produce highly accurate results.
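The abstract names neither the learning algorithm nor the feature set, so the sketch below only illustrates the shape of such a verification flow using scikit-learn on synthetic data: one geometric feature vector per template in, a hotspot / non-hotspot prediction out. The feature semantics, the label rule, and the random-forest choice are all assumptions, not the framework proposed in the thesis.

```python
# Illustrative ML-based hotspot classification flow on synthetic data.
# Feature design and model choice are assumptions, not the thesis's framework.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-template features, e.g. template width/height, neighbor
# spacing, local pattern density (4 numbers per sample here).
X = rng.normal(size=(2000, 4))
# Synthetic ground truth: call a template a hotspot when a made-up combination
# of the "spacing" and "density" features crosses a threshold.
y = ((X[:, 2] + 0.5 * X[:, 3]) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("hotspot detection accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```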

    Computer Science & Technology Series : XVI Argentine Congress of Computer Science - Selected papers

    CACIC’10 was the sixteenth Congress in the CACIC series. It was organized by the School of Computer Science of the University of Morón. The Congress included 10 Workshops with 104 accepted papers, 1 main Conference, 4 invited tutorials, different meetings related to Computer Science Education (Professors, PhD students, Curricula) and an International School with 5 courses (http://www.cacic2010.edu.ar/). CACIC 2010 was organized following the traditional Congress format, with 10 Workshops covering a diversity of dimensions of Computer Science Research. Each topic was supervised by a committee of three chairs from different Universities. The call for papers attracted a total of 195 submissions. An average of 2.6 review reports were collected for each paper, for a grand total of 507 review reports that involved about 300 different reviewers. A total of 104 full papers were accepted and 20 of them were selected for this book. Red de Universidades con Carreras en Informática (RedUNCI)

    Optimizations in Heterogeneous Mobile Networks


    Investigation of the distribution of HLA alleles in healthy and diseased populations through the application of new sequencing methodologies

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Medicina, Departamento de Inmunología, Oftalmología y ORL, defended on 09/03/2021. Increasing our knowledge of the HLA system, including both the complete sequence description and the assessment of its diversity at the worldwide human population level, is of great importance for elucidating the molecular functional mechanisms of the immune system and its regulation in health and disease. Furthermore, assessment of the HLA allelic and haplotypic diversity of each human population is essential in the clinical histocompatibility and transplantation setting as well as in the pharmacogenetics, immunotherapy and anthropology fields. Nevertheless, the inherent vast polymorphism and high complexity presented by the HLA system have been an important challenge for its unambiguous and in-depth (high-resolution) characterization by previously available legacy molecular HLA genotyping methods (e.g. SSP, SSO and even SBT). The recent application of novel next-generation sequencing (NGS) technology for high-resolution molecular HLA genotyping has made it possible to obtain, in a high-throughput mode and at larger scale, full-length and/or extended sequences and genotypes of all major HLA genes, thus overcoming most of these previous limitations. Objectives: I) Characterization of the HLA allele and haplotype diversity of all major classical HLA genes (HLA-A, -B, -C, -DPA1, -DPB1, -DQA1, -DQB1, -DRB1 and -DRB3/4/5) by application of NGS to a first representative cohort of the Spanish population that could also serve as a healthy control reference group. The respective statistical analyses were performed for these immunogenetic population data. II) Characterization of the HLA allele and haplotype diversity of all major classical HLA genes (HLA-A, -B, -C, -DPA1, -DPB1, -DQA1, -DQB1, -DRB1 and -DRB3/4/5) by application of NGS to a respective cohort of multiple sclerosis (MS) patients in the Spanish population (recruited at the Department of Neurology, Hospital Clínic, Barcelona, Catalonia, Spain). A first case-control study was carried out to examine HLA-disease associations with MS in these Spanish population cohorts, as well as to attempt a fine mapping of these allele and haplotype associations at full-gene resolution via NGS. In addition, a second analysis exercise (i.e. test case) of this case-control study was carried out using an alternative healthy control group dataset, in this second case exclusively from the Spanish northeastern region of Catalonia, to evaluate possible differences in the findings of HLA-disease association with MS due to plausible regional HLA genetic variation within mainland Spain (i.e. as a statistical way to try to control for any possible existing population stratification)...

    Security and Privacy Preservation in Mobile Crowdsensing

    Mobile crowdsensing (MCS) is a compelling paradigm that enables a crowd of individuals to cooperatively collect and share data to measure phenomena or record events of common interest using their mobile devices. Paired with their inherent mobility and intelligence, mobile users can collect, produce and upload large amounts of data to service providers based on crowdsensing tasks released by customers, ranging from general information, such as temperature, air quality and traffic conditions, to more specialized data, such as recommended places, health conditions and voting intentions. Compared with traditional sensor networks, MCS can support large-scale sensing applications, improve sensing data trustworthiness and reduce the cost of deploying expensive hardware or software to acquire high-quality data. Despite these appealing benefits, however, MCS is also confronted with a variety of security and privacy threats, which could impede its rapid development. Due to service providers' own incentives and vulnerabilities, data security and user privacy are being put at risk. The corruption of sensing reports may directly affect crowdsensing results and thereby mislead customers into making irrational decisions. Moreover, the content of crowdsensing tasks may expose the intentions of customers, and the sensing reports might inadvertently reveal sensitive information about mobile users. Data encryption and anonymization techniques provide straightforward solutions for data security and user privacy, but several issues of significant importance must be addressed to make MCS practical. First, to enhance data trustworthiness, service providers need to recruit mobile users based on their personal information, such as preferences, mobility patterns and reputation, resulting in privacy exposure to service providers. Second, replicate data in crowdsensing reports is inevitable and may consume large communication bandwidth, but traditional data encryption makes replicate data detection and deletion challenging. Third, crowdsensed data analysis is essential for generating crowdsensing reports in MCS, but the correctness of crowdsensing results in the presence of malicious mobile users and service providers becomes a huge concern for customers. Last but not least, even if user privacy is preserved during task allocation and data collection, it may still be exposed during reward distribution, which further discourages mobile users from participating in tasks. In this thesis, we explore approaches to resolve these challenges in MCS. Based on the architecture of MCS, we conduct our research with a focus on security and privacy protection without sacrificing data quality or users' enthusiasm. Specifically, the main contributions are: i) to enable privacy preservation and task allocation, we propose SPOON, a strong privacy-preserving mobile crowdsensing scheme supporting accurate task allocation. In SPOON, the service provider recruits mobile users based on their locations, and selects proper sensing reports according to their trust levels without invading user privacy. By utilizing blind signatures, sensing tasks are protected and reports are anonymized.
In addition, a privacy-preserving credit management mechanism is introduced to achieve decentralized trust management and secure credit proofs for mobile users; ii) to improve communication efficiency while guaranteeing data confidentiality, we propose a fog-assisted secure data deduplication scheme, in which a BLS-oblivious pseudo-random function is developed to enable fog nodes to detect and delete replicate data in sensing reports without exposing the content of the reports. Considering the privacy leakage of mobile users who report the same data, blind signatures are utilized to hide users' identities, and a chameleon hash function is leveraged to achieve contribution claims and reward retrieval for anonymous greedy mobile users; iii) to achieve data statistics with privacy preservation, we propose a privacy-preserving data statistics scheme that achieves end-to-end security and integrity protection while enabling the aggregation of data collected from multiple sources. Correctness verification is supported to prevent the corruption of aggregate results during data transmission, based on a homomorphic authenticator and proxy re-signature. A privacy-preserving verifiable linear statistics mechanism is developed to realize the linear aggregation of multiple crowdsensed data items from the same device and the verification of the correctness of the aggregate results; and iv) to encourage mobile users to participate in sensing tasks, we propose a dual-anonymous reward distribution scheme that offers incentives for mobile users and privacy protection for both customers and mobile users in MCS. Based on divisible cash, a new reward-sharing incentive mechanism is developed to encourage mobile users to participate in sensing tasks, and a randomization technique is leveraged to protect the identities of customers and mobile users during reward claim, distribution and deposit.
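The actual deduplication scheme relies on a BLS-oblivious pseudo-random function so that no single party can compute deduplication tags on its own; the sketch below replaces that primitive with a plain keyed hash (HMAC) purely to illustrate the deduplication idea at the fog node: identical reports map to identical tags, the fog node stores only tags and opaque ciphertexts, and duplicates are dropped before they consume uplink bandwidth. The single shared `TAG_KEY`, the class names and the toy reports are illustrative assumptions, not the thesis's construction.

```python
# Simplified stand-in for tag-based secure deduplication at a fog node.
# A single shared HMAC key replaces the BLS-oblivious PRF of the real scheme.
import hashlib
import hmac

TAG_KEY = b"shared-secret-for-illustration-only"   # hypothetical key

def dedup_tag(report: bytes) -> str:
    """Deterministic tag: identical reports yield identical tags, while the
    tag itself reveals nothing else about the report content."""
    return hmac.new(TAG_KEY, report, hashlib.sha256).hexdigest()

class FogNode:
    """Stores only tags and opaque ciphertexts, never plaintext reports."""
    def __init__(self):
        self.seen = {}                 # tag -> stored ciphertext

    def upload(self, ciphertext: bytes, tag: str) -> bool:
        if tag in self.seen:           # replicate report: drop it, save bandwidth
            return False
        self.seen[tag] = ciphertext
        return True

if __name__ == "__main__":
    fog = FogNode()
    r1 = b"temperature=21.5C;cell=A7"
    r2 = b"temperature=21.5C;cell=A7"  # duplicate report from another user
    print(fog.upload(b"<ciphertext-1>", dedup_tag(r1)))   # True  (stored)
    print(fog.upload(b"<ciphertext-2>", dedup_tag(r2)))   # False (deduplicated)
```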

    Exploring New Horizons and Challenges for Social Studies in a New Normal

    New standards and changes have emerged in social science studies. Covid-19, which appeared at the end of 2019, has had an impact on every sector of life, especially in Indonesia, and this change is a form of community adaptation. Therefore, this conference aims to explore theoretical and practical developments of the social sciences and to build academic networks by gathering academics from various research institutes and universities. This book presents the new standards and encourages further thought on theoretical and empirical studies in the social field. The scope covered by these standards includes patterns, opportunities, and challenges in social science; learning under the new standards; learning innovation; and the implementation of new learning standards in Indonesia, adopted in the form of the Merdeka Belajar program. The study results will fill gaps in knowledge about the new social life and social science. The book therefore aims to bring together researchers in the field to discuss and find solutions to current issues in the social field, and to build cooperation and synergy around creative ideas for joint research. This book will be of interest to students, scholars, and practitioners with a deep concern for social science. It is forward-looking, with many practical insights for students, faculty, and practitioners, and since the contributors are from across the globe, it is fascinating to see the global benchmarks it offers.

    Refactoring of Security Antipatterns in Distributed Java Components

    The importance of JAVA as a programming and execution environment has grown steadily over the past decade. The IT industry has adopted JAVA as a major building block for the creation of new middleware as well as a technology facilitating the migration of existing applications towards web-driven environments. In parallel, the role of security in distributed environments has gained attention, as a large number of middleware applications have replaced enterprise-level mainframe systems. The protection of confidentiality, integrity and availability is therefore critical for the market success of a product. The vulnerability level of every product is determined by its weakest embedded component, and selling vulnerable products can cause enormous economic damage to software vendors. An important goal of this work is to create the awareness that using a programming language designed to be secure is not sufficient to create secure and trustworthy distributed applications. Moreover, incorporating the threat model of the programming language improves the risk analysis by allowing a better definition of the application's attack surface. The evolution of a programming language leads towards common solution patterns for recurring quality aspects. Suboptimal solutions, also known as 'antipatterns', are typical causes of quality weaknesses such as security vulnerabilities. Moreover, the exposure to a specific environment is an important parameter for threat analysis, as code considered secure in one scenario can cause unexpected risks when the environment changes. Antipatterns are a well-established means, at the abstraction level of system modeling, of informing about the effects of incomplete solutions, which are also important in the later stages of the software development process. Especially at the implementation level, we see a deficit of helpful examples that would give programmers a better and more holistic understanding. Our basic assumption links the missing experience of programmers regarding the security properties of patterns within their code to the creation of software vulnerabilities. Traditional software development models address security properties only on the meta layer. To transfer these efficiently to the practical level, we provide a three-stage approach: First, we focus on typical security problems within JAVA applications and develop a standardized catalogue of 'antipatterns' with examples from standard software products. Detecting and avoiding these antipatterns positively influences software quality. As the second element of our methodology, we therefore focus on possible enhancements to common models of the software development process. These help to identify and control the occurrence of antipatterns during development activities, i.e. during the coding phase and during the phase of component assembly, when integrating one's own and third-party code. Within the third part, and emphasizing the practical focus of this research, we implement prototypical tools to support the software development phase. The practical findings of this research helped to enhance the security of the standard JAVA platforms and JEE frameworks. We verified the relevance of our methods and tools by applying them to standard software products, leading to a measurable reduction of vulnerabilities and an information exchange with middleware vendors (Sun Microsystems, JBoss) targeting runtime security.
Our goal is to enable software architects and developers of end-user applications to apply our findings to the standard components embedded in their environments. From a high-level perspective, software architects profit from this work through the projection of quality-of-service goals onto protection details, which supports their task of deriving security requirements when selecting standard components. To give implementation-oriented practitioners a helpful starting point for benefiting from our research, we provide tools and case studies for achieving security improvements within their own code bases.