
    Downstream processing of lentiviral vectors with focus on steric exclusion chromatography

    Lentiviral vectors (LV) are widely used to deliver therapeutic genes for gene therapy and gene-modified cell therapy and have shown success in chimeric antigen receptor (CAR) T cell therapies. Ongoing market growth leads to an increasing demand for purified LV, which requires efficient downstream processes. Because of the low stability of LV, new demands are placed on the process: existing unit operations must be substantially optimized, and research into new, alternative methods is essential. Taking a holistic approach, this work focuses on identified bottlenecks and presents new approaches for clarification, analytics, and chromatographic purification. In the first part of this work, a vacuum-based clarification method with diatomaceous earth was improved for LV produced in suspension cell culture. This clarification method allowed fast, high-throughput clarification and improved handling by eliminating the centrifugation step and increasing filter capacity; it thus laid the foundation for the subsequent chromatography studies. To improve analytical sample throughput and accelerate process development, the second part of this thesis deals with the development of a high-throughput assay with automated readout and analysis for determining the infectious titer, the key process variable for enveloped viral vectors. For this purpose, transduced cells are quantified by immunological detection in a real-time live-cell analysis system using software-based image evaluation. The third and fourth parts focus on steric exclusion chromatography (SXC). It was demonstrated that process parameters such as the buffer mixing strategy and flow rate are crucial for this thermodynamically driven process of depletion interaction between the LV and the membrane, and that a suitable PEG molecular weight and concentration must be identified. Visualization of the LV on the membrane showed that the LV were mainly retained on the first membrane layer after loading; the surface-area-specific flow rate was therefore crucial for scale-up. The mechanistic understanding of the process and the process optimizations enabled reproducibly high LV recoveries and removal of impurities.
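    The titer calculation underlying such an assay is simple once transduced cells have been counted. Below is a minimal sketch (not the thesis's actual pipeline; the function and parameter names are hypothetical), assuming the low-MOI regime in which each counted transduced cell corresponds to roughly one transducing unit:

    ```python
    def infectious_titer(transduced_cells: float, dilution_factor: float,
                         inoculum_volume_ml: float) -> float:
        """Estimate infectious titer in transducing units (TU) per mL.

        Assumes the low-MOI regime, where each transduced cell counted by
        the live-cell imaging readout corresponds to roughly one infectious
        particle; at higher MOI a Poisson correction would be needed.
        """
        return transduced_cells * dilution_factor / inoculum_volume_ml

    # Example: 4,200 marker-positive cells counted after transducing with
    # 0.1 mL of a 1:1000 dilution of the clarified LV harvest.
    print(f"{infectious_titer(4200, 1000, 0.1):.2e} TU/mL")  # ~4.2e7 TU/mL
    ```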

    The role of artificial intelligence-driven soft sensors in advanced sustainable process industries: a critical review

    With the predicted depletion of natural resources and alarming environmental issues, sustainable development has become a popular as well as a much-needed concept in modern process industries. Hence, manufacturers are quite keen on adopting novel process monitoring techniques to enhance product quality and process efficiency while minimizing possible adverse environmental impacts. Hardware sensors are employed in process industries to aid process monitoring and control, but they are associated with many limitations such as disturbances to the process flow, measurement delays, frequent need for maintenance, and high capital costs. As a result, soft sensors have become an attractive alternative for predicting quality-related parameters that are ‘hard-to-measure’ using hardware sensors. Due to their promising features over hardware counterparts, they have been employed across different process industries. This article explores the state-of-the-art artificial intelligence (AI)-driven soft sensors designed for process industries and their role in achieving the goal of sustainable development. First, a general introduction is given to soft sensors, their applications in different process industries, and their significance in achieving sustainable development goals. AI-based soft sensing algorithms are then introduced. Next, a discussion is provided on how AI-driven soft sensors contribute toward different sustainable manufacturing strategies of process industries. This is followed by a critical review of the most recent state-of-the-art AI-based soft sensors reported in the literature, discussing how powerful AI-based algorithms address the limitations of traditional algorithms that restrict soft sensor performance. Finally, the challenges and limitations associated with current soft sensor design, application, and maintenance are discussed, together with possible future directions for designing more intelligent and smart soft sensing technologies to cater to future industrial needs.
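    Conceptually, a soft sensor is an inferential model: easy-to-measure process variables in, a hard-to-measure quality variable out. The following is a minimal illustrative sketch with scikit-learn, using synthetic data and hypothetical variable meanings (not an example taken from the review):

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)

    # Synthetic plant data: e.g., temperature, flow rate, pressure readings.
    X = rng.normal(size=(2000, 3))
    # Hard-to-measure quality variable (e.g., product concentration),
    # a nonlinear function of the process variables plus measurement noise.
    y = 2.0 * X[:, 0] + np.sin(X[:, 1]) * X[:, 2] + 0.1 * rng.normal(size=2000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The "soft sensor": a data-driven model standing in for a hardware analyzer.
    sensor = GradientBoostingRegressor().fit(X_train, y_train)
    print("R^2 on held-out data:", r2_score(y_test, sensor.predict(X_test)))
    ```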

    DATA AUGMENTATION FOR SYNTHETIC APERTURE RADAR USING ALPHA BLENDING AND DEEP LAYER TRAINING

    Human-based object detection in synthetic aperture RADAR (SAR) imagery is complex and technical, laboriously slow but time critical: the perfect application for machine learning (ML). Training an ML network for object detection requires very large image datasets with embedded objects that are accurately and precisely labeled. Unfortunately, no such SAR datasets exist. Therefore, this paper proposes a method to synthesize wide field of view (FOV) SAR images by combining two existing datasets: SAMPLE, which is composed of both real and synthetic single-object chips, and MSTAR Clutter, which is composed of real wide-FOV SAR images. Synthetic objects are extracted from SAMPLE using threshold-based segmentation before being alpha-blended onto patches from MSTAR Clutter. To validate the novel synthesis method, individual object chips are created and classified using a simple convolutional neural network (CNN); testing is performed against the measured SAMPLE subset. A novel technique is also developed to investigate training activity in deep layers. The proposed data augmentation technique produces a 17% increase in the accuracy of measured SAR image classification. This improvement shows that any residual artifacts from segmentation and blending do not negatively affect ML, which is promising for future use in wide-area SAR synthesis. Outstanding Thesis. Major, United States Air Force. Approved for public release; distribution is unlimited.
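    The synthesis step described here, threshold-based segmentation followed by alpha blending, can be sketched in a few lines of NumPy. This is a simplified, hypothetical rendition using a hard binary mask as the alpha channel; the paper's actual pipeline may use a softer blend:

    ```python
    import numpy as np

    def blend_chip_onto_clutter(chip: np.ndarray, clutter: np.ndarray,
                                row: int, col: int, threshold: float) -> np.ndarray:
        """Paste a segmented SAR object chip onto a clutter patch.

        A binary mask is obtained by threshold-based segmentation of the chip
        and used as the alpha channel for blending, so only above-threshold
        object pixels replace the underlying clutter.
        """
        alpha = (chip > threshold).astype(float)   # crude segmentation mask
        h, w = chip.shape
        out = clutter.copy()
        region = out[row:row + h, col:col + w]
        out[row:row + h, col:col + w] = alpha * chip + (1 - alpha) * region
        return out

    # Toy example with random stand-ins for the chip and clutter imagery.
    rng = np.random.default_rng(1)
    chip = rng.rayleigh(1.0, (32, 32))      # SAR amplitudes are roughly Rayleigh
    clutter = rng.rayleigh(0.3, (256, 256))
    augmented = blend_chip_onto_clutter(chip, clutter, 100, 120, threshold=2.0)
    ```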

    Introduction to Facial Micro Expressions Analysis Using Color and Depth Images: A Matlab Coding Approach (Second Edition, 2023)

    The book offers a gentle introduction to the field of Facial Micro Expressions Recognition (FMER) using color and depth images, with the aid of the MATLAB programming environment. FMER is a subset of image processing and a multidisciplinary topic to analyze, so it requires familiarity with other topics of Artificial Intelligence (AI) such as machine learning, digital image processing, psychology, and more. It is therefore a great opportunity to write a book which covers all of these topics for beginner to professional readers in the field of AI, even those without a background in AI. Our goal is to provide a standalone introduction to the field of FMER analysis in the form of theoretical descriptions for readers with no background in image processing, with reproducible MATLAB practical examples. We also describe the basic definitions for FMER analysis and the MATLAB libraries used in the text, which helps the reader apply the experiments in real-world applications. We believe that this book is suitable for students, researchers, and professionals alike who need to develop practical skills, along with a basic understanding of the field. We expect that, after reading this book, the reader will feel comfortable with the key stages such as color and depth image processing, color and depth image representation, classification, machine learning, facial micro-expressions recognition, feature extraction, and dimensionality reduction. Comment: This is the second edition of the book.

    Corporate Social Responsibility: the institutionalization of ESG

    Understanding the impact of Corporate Social Responsibility (CSR) on firm performance as it relates to industries reliant on technological innovation is a complex and perpetually evolving challenge. To thoroughly investigate this topic, this dissertation adopts an economics-based structure to address three primary hypotheses. This structure allows each hypothesis to stand as essentially a standalone empirical paper, unified by an overall analysis of the nature of the impact that ESG has on firm performance. The first hypothesis explores how the evolution of CSR into the modern, quantified iteration of ESG has led to the institutionalization and standardization of the CSR concept. The second hypothesis fills gaps in the existing literature testing the relationship between firm performance and ESG by finding that the relationship is significantly positive for long-term, strategic metrics (ROA and ROIC) and that there is no correlation for short-term metrics (ROE and ROS). Finally, the third hypothesis states that if a firm has a long-term strategic ESG plan, as proxied by the publication of CSR reports, then it is more resilient to damage from controversies. This is supported by the finding that pro-ESG firms consistently fared better than their counterparts in both financial and ESG performance, even in the event of a controversy. However, firms with consistent reporting are also held to a higher standard than their non-reporting peers, suggesting a higher-risk, higher-reward dynamic. These findings support the theory of good management, in that long-term strategic planning is both immediately economically beneficial and serves as a means of risk management and social impact mitigation. Overall, this work contributes to the literature by filling gaps in the nature of the impact that ESG has on firm performance, particularly from a management perspective.
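    Mechanically, the second hypothesis amounts to correlating ESG scores with long-term versus short-term accounting metrics. A minimal sketch of that style of check in pandas follows, with hypothetical column names and illustrative random data rather than the dissertation's dataset:

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)
    n = 500  # hypothetical firm-year observations

    # Illustrative data only: an ESG score plus long-term (ROA, ROIC) and
    # short-term (ROE, ROS) performance metrics.
    esg = rng.uniform(0, 100, n)
    df = pd.DataFrame({
        "ESG": esg,
        "ROA": 0.02 + 0.0005 * esg + rng.normal(0, 0.03, n),   # built-in link
        "ROIC": 0.04 + 0.0004 * esg + rng.normal(0, 0.04, n),  # built-in link
        "ROE": rng.normal(0.10, 0.05, n),                      # no link
        "ROS": rng.normal(0.08, 0.05, n),                      # no link
    })

    # Pearson correlation of each performance metric with the ESG score.
    print(df.corr()["ESG"].drop("ESG"))
    ```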

    Examples of works for practicing staccato technique on the clarinet

    The stages of strengthening the clarinet's staccato technique were applied through the study of works. Rhythm and nuance exercises that accelerate staccato passages were included. The most important aim of the study is not staccato practice alone, but also an emphasis on the precision of simultaneous finger-tongue coordination. To make the staccato exercises more productive, etude practice was also included within the study of the works. Careful attention to these exercises, together with the inspiring effect of staccato practice, added a new dimension to musical identity. Every stage of the study of eight original works is described, with each stage intended to reinforce the next performance and technique. This study reports in which areas the staccato technique is used and what results were obtained. It lays out how the notes are shaped by finger and tongue coordination and within what practice discipline this takes place. It was found that the concepts of reed, note, diaphragm, finger, tongue, nuance, and discipline form an inseparable whole in staccato technique. A literature review was conducted and studies related to staccato were surveyed. The survey showed that few studies of works employ the clarinet's staccato technique, while the survey of methods found that etude studies are more common. Accordingly, exercises for accelerating and strengthening the clarinet's staccato technique are presented. It was observed that interspersing work on pieces among the staccato etudes relaxed the mind and increased motivation. Choosing the right reed for staccato practice was also emphasized: a suitable reed was found to increase tongue speed, and a good reed choice depends on the reed producing sound easily. If the reed does not support the tonguing force, the need to choose a more suitable reed is stressed. Interpreting a work from beginning to end during staccato practice can be difficult; in this respect, the study showed that observing the given musical nuances eases tonguing performance. Passing the acquired knowledge and experience on to future generations, and its formative value, is encouraged. How upcoming works can be worked out and how the staccato technique can be mastered is explained, with the aim of resolving the staccato technique in a shorter time. It is as important to commit the exercises to memory as it is to teach the fingers their places. The work that emerges as a result of the determination and patience shown will raise achievement to even higher levels.

    Data analytics in digital marketing for tracking the effectiveness of campaigns and inform strategy

    The purpose of the study is to present a digital marketing data analytics model that analyzes campaign efficacy and informs strategy based on website performance, social media metrics, email marketing performance, customer data for targeting and personalization, and customer journey analysis. The model defines campaign success criteria for strategy. A statistical analysis approach was used: data were gathered through a survey, demographic parameters were analyzed descriptively, and the relationships were tested with a structural equation model (SEM). Of the survey respondents, 125 were digital media subjects and 115 were online shop subjects, for a total sample of 240 people. According to the findings, social media data, customer journey research, effective advertising, and informed strategies are highly correlated. In contrast to previous work, website performance evaluation did not align with the success of the marketing plan. The model's results can be used by any company that communicates with clients online. © 2023 by the authors; licensee Growing Science, Canada. https://doi.org/10.5267/j.ijdns.2023.3.0157
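    As an illustration of the style of analysis described, a small path model can be fitted in Python, assuming the semopy package and entirely hypothetical variable names and data (not the paper's survey):

    ```python
    import numpy as np
    import pandas as pd
    from semopy import Model

    rng = np.random.default_rng(7)
    n = 240  # matches the total sample size reported in the abstract

    # Illustrative survey-style data with a built-in dependence structure.
    data = pd.DataFrame({
        "social_media": rng.normal(size=n),
        "customer_journey": rng.normal(size=n),
        "website_perf": rng.normal(size=n),
    })
    data["campaign_success"] = (0.5 * data["social_media"]
                                + 0.4 * data["customer_journey"]
                                + rng.normal(0, 0.5, n))

    # Path model: campaign success regressed on the marketing constructs.
    model = Model("campaign_success ~ social_media + customer_journey + website_perf")
    model.fit(data)
    print(model.inspect())  # path estimates, standard errors, p-values
    ```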

    Learning disentangled speech representations

    A variety of informational factors are contained within the speech signal, and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, methods will sometimes capture more than one informational factor at the same time, such as speaker identity, spoken content, and speaker prosody. The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstructing, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and be independent of the task at hand. The learned representations should also be able to answer counter-factual questions. In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed; in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks. This thesis explores a variety of use-cases for disentangled representations, including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or detecting deep fakes. The meaning of the term "disentanglement" is not well defined in previous work, and it has acquired several meanings depending on the domain (e.g. image vs. speech); sometimes it is used interchangeably with the term "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.
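    The voice-conversion example is the canonical use of disentangled representations: encode content and speaker identity separately, then recombine them across utterances. A minimal architectural sketch in PyTorch follows, with hypothetical modules and dimensions rather than the thesis's models:

    ```python
    import torch
    import torch.nn as nn

    class DisentangledSpeechAE(nn.Module):
        """Toy autoencoder that factors speech features into two codes."""
        def __init__(self, feat_dim=80, content_dim=64, speaker_dim=32):
            super().__init__()
            self.content_enc = nn.GRU(feat_dim, content_dim, batch_first=True)
            self.speaker_enc = nn.Linear(feat_dim, speaker_dim)
            self.decoder = nn.GRU(content_dim + speaker_dim, feat_dim,
                                  batch_first=True)

        def forward(self, feats, speaker_feats=None):
            # Frame-level content code; utterance-level speaker code.
            content, _ = self.content_enc(feats)
            spk_src = feats if speaker_feats is None else speaker_feats
            speaker = self.speaker_enc(spk_src.mean(dim=1))   # (B, speaker_dim)
            speaker = speaker.unsqueeze(1).expand(-1, feats.size(1), -1)
            out, _ = self.decoder(torch.cat([content, speaker], dim=-1))
            return out

    # Voice conversion: content from utterance A, speaker identity from B.
    model = DisentangledSpeechAE()
    utt_a = torch.randn(1, 200, 80)   # 200 frames of 80-dim features
    utt_b = torch.randn(1, 150, 80)
    converted = model(utt_a, speaker_feats=utt_b)  # A's words, B's voice
    ```

    Note that the architecture alone does not guarantee disentanglement; in practice the separation is enforced by the training objective, for example through information bottlenecks or adversarial losses.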

    Towards ultrasound full-waveform inversion in medical imaging

    Ultrasound imaging is a front-line clinical modality with a wide range of applications. However, there are limitations to conventional methods for some medical imaging problems, including the imaging of the intact brain. The goal of this thesis is to explore and build on recent technological advances in ultrasonics and related areas such as geophysics, including parallel ultrasound data acquisition hardware and advanced computational techniques for field modelling and inverse problem solving. With the significant increase in the computational power now available, a particular focus is put on exploring the potential of full-waveform inversion (FWI), a high-resolution image reconstruction technique which has shown significant success in seismic exploration, for medical imaging applications. In this thesis a range of technologies and systems have been developed in order to improve ultrasound imaging by taking advantage of these recent advances. In the first part of this thesis the application of dual frequency ultrasound for contrast enhanced imaging of neurovasculature in the mouse brain is investigated. Here we demonstrated a significant improvement in the contrast-to-tissue ratio that could be achieved by using a multi-probe, dual frequency imaging system when compared to a conventional approach using a single high frequency probe. However, without a sufficiently accurate calibration method to determine the positioning of these probes, the image resolution was found to be significantly reduced. To mitigate the impact of these positioning errors, a second study was carried out to develop a sophisticated dual probe ultrasound tomography acquisition system with a robust methodology for the calibration of transducer positions. This led to a greater focus on the development of ultrasound tomography applications in medical imaging using FWI. A 2.5D brain phantom was designed that consisted of a soft tissue brain model surrounded by a hard skull-mimicking material to simulate a transcranial imaging problem. This was used to demonstrate for the first time, as far as we are aware, the experimental feasibility of imaging the brain through the skull using FWI. Furthermore, to address the lack of broadband sensors available for medical FWI reconstruction applications, a deep learning neural network was proposed for the bandwidth extension of observed narrowband data. A demonstration of this proposed technique was then carried out by improving the FWI image reconstruction of experimentally acquired breast phantom imaging data. Finally, the FWI imaging method was expanded for 3D neuroimaging applications and the in silico feasibility of reconstructing the mouse brain with commercial transducers is demonstrated.
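    At its core, FWI iteratively updates a velocity model to minimize the misfit between simulated and observed waveforms. The toy 1D sketch below is illustrative only: a brute-force finite-difference gradient stands in for the adjoint-state method used in real FWI, and all numbers are invented:

    ```python
    import numpy as np

    def simulate(c, nt=400, dt=5e-4, dx=1.0):
        """1D acoustic finite-difference propagation; returns the trace
        recorded at the far end of the model for a source at cell 0."""
        n = len(c)
        u_prev, u_curr = np.zeros(n), np.zeros(n)
        trace = np.zeros(nt)
        for it in range(nt):
            lap = np.zeros(n)
            lap[1:-1] = u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]
            u_next = 2 * u_curr - u_prev + (c * dt / dx) ** 2 * lap
            u_next[0] += np.exp(-(((it * dt) - 0.05) / 0.01) ** 2)  # source pulse
            u_prev, u_curr = u_curr, u_next
            trace[it] = u_curr[-1]
        return trace

    def misfit(c, d_obs):
        r = simulate(c) - d_obs
        return 0.5 * np.sum(r ** 2)

    true_c = np.full(50, 1500.0)
    true_c[20:30] = 1600.0            # hidden fast inclusion to recover
    d_obs = simulate(true_c)          # "observed" data
    c = np.full(50, 1500.0)           # smooth starting model

    for it in range(10):
        J = misfit(c, d_obs)
        grad = np.empty_like(c)
        for i in range(len(c)):       # brute-force gradient: one sim per cell
            cp = c.copy()
            cp[i] += 1.0
            grad[i] = misfit(cp, d_obs) - J
        c -= 25.0 * grad / (np.abs(grad).max() + 1e-12)  # normalized descent step
        print(f"iteration {it}: misfit = {J:.4e}")
    ```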

    Micro-Electro Discharge Machining: Principles, Recent Advancements and Applications

    Micro electrical discharge machining (micro-EDM) is a thermo-electric and contactless process most suited for micro-manufacturing and high-precision machining, especially when difficult-to-cut materials, such as super alloys, composites, and electro-conductive ceramics, are processed. Many industrial domains exploit this technology to fabricate highly demanding components, such as high-aspect-ratio micro holes for fuel injectors, high-precision molds, and biomedical parts. Moreover, the continuous trend towards miniaturization and high-precision functional components has boosted the development of control strategies and optimization methodologies specifically suited to address the challenges in micro- and nano-scale fabrication. This Special Issue showcases 12 research papers and a review article focusing on novel methodological developments on several aspects of micro electrical discharge machining: machinability studies of hard materials (TiNi shape memory alloys, Si3N4–TiN ceramic composite, ZrB2-based ceramics reinforced with SiC fibers and whiskers, tungsten-cemented carbide, Ti-6Al-4V alloy, duplex stainless steel, and cubic boron nitride), process optimization adopting different dielectrics or electrodes, characterization of the mechanical performance of processed surfaces, process analysis and optimization via discharge pulse-type discrimination, hybrid processes, fabrication of molds for inflatable soft microactuators, and implementation of a low-cost desktop micro-EDM system.