Impact of Imaging and Distance Perception in VR Immersive Visual Experience
Virtual reality (VR) headsets have evolved to offer unprecedented viewing quality. Meanwhile, they have become lightweight, wireless, and low-cost, which has opened the door to new applications and a much wider audience. VR headsets can now provide users with a greater understanding of events and accuracy of observation, making decision-making faster and more effective. However, immersive technologies have seen slow take-up, with the adoption of virtual reality limited to a few applications, typically related to entertainment. This reluctance appears to be due to the often-necessary change of operating paradigm and to some scepticism towards the "VR advantage". The need therefore arises to evaluate the contribution that a VR system can make to user performance, for example in monitoring and decision-making. This will help system designers understand when immersive technologies can be proposed to replace or complement standard display systems such as a desktop monitor.
In parallel with the evolution of VR headsets, 360° cameras have evolved as well: they are now capable of instantly acquiring photographs and videos in stereoscopic 3D (S3D) at very high resolutions. 360° images are innately suited to VR headsets, where the captured view can be observed and explored through natural head rotation. Acquired views can even be experienced and navigated from the inside as they are captured.
The combination of omnidirectional images and VR headsets has opened up a new way of creating immersive visual representations, which we call photo-based VR. This new methodology combines traditional model-based rendering with high-quality omnidirectional texture mapping. Photo-based VR is particularly suitable for applications related to remote visits and realistic scene reconstruction, useful for monitoring and surveillance systems, control panels, and operator training.
The presented PhD study investigates the potential of photo-based VR representations. It starts by evaluating the role of immersion and user performance in today's graphics-based visual experience, which is then used as a reference to develop and evaluate new photo-based VR solutions. With the current literature on the photo-based VR experience and associated user performance being very limited, this study builds new knowledge from the proposed assessments.
We conduct five user studies on a few representative applications, examining how visual representations are affected by system factors (camera- and display-related) and how they influence human factors (such as realism, presence, and emotions). Particular attention is paid to realistic depth perception, for which we develop targeted photo-based VR solutions intended to provide users with a correct perception of space dimensions and object size; we call this true-dimensional visualization.
The presented work contributes to largely unexplored fields, including photo-based VR and true-dimensional visualization, offering immersive-system designers a thorough understanding of the benefits, the potential, and the types of applications in which these new methods can make a difference.
This thesis manuscript and its findings have been partly presented in scientific publications. In particular, five conference papers in Springer and IEEE symposia proceedings [1], [2], [3], [4], [5] and one journal article in an IEEE periodical [6] have been published.
Design of decorative 3D models: from geodesic ornaments to tangible assemblies
The aim of this thesis is to develop effective tools for creating digital decorative 3D artworks. Real-world art often involves the use of decorative patterns to enrich objects. These patterns can be painted on the base object or realized by applying small decorative elements. However, their creation in digital media is not trivial. On the one hand, expert users can manually texture-paint or sculpt each decoration, in a process that can take hours to produce a single piece and must be repeated from scratch for every model to be decorated. On the other hand, state-of-the-art automatic approaches rely on approximating these processes with procedural or by-example texturing, or with 3D reprojection. These approaches can introduce significant limitations in the models that can be used and in the quality of the results. Instead, our work exploits recent advances and performance improvements in the geometry processing field to create decorative patterns directly on surfaces. We present one pipeline for 2D patterns and one for 3D patterns, and demonstrate how each of them can recreate a variety of results with minimal parameter tweaking. Furthermore, we investigate the possibility of creating tangible decorative models. The 3D patterns we generate can be 3D printed and applied to previously scanned real-world objects. We also discuss the creation of models with standard building bricks and the possibility of mixing standard and custom 3D-printed bricks, which allows for a precise representation regardless of the coarseness of the voxelization. The main contributions of this thesis are the implementation of two different decorative pipelines, a heuristic approach to brick construction, and a dataset to test the latter.
Synthetic Aperture Radar (SAR) Meets Deep Learning
This reprint focuses on applications combining synthetic aperture radar and deep learning technology, and aims to further promote the development of intelligent SAR image interpretation. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day, all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in remote sensing, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address the above challenges and present their innovative and cutting-edge research results on applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports.
Machine learning for the sustainable energy transition: a data-driven perspective along the value chain from manufacturing to energy conversion
According to the IPCC special report Global Warming of 1.5 °C, climate action is not only necessary but more urgent than ever. The world is witnessing rising sea levels, heat waves, flooding, droughts, and desertification, resulting in the loss of lives and damage to livelihoods, especially in countries of the Global South. To mitigate climate change and honor the Paris Agreement, it is of the utmost importance to reduce greenhouse gas emissions from the most emitting sector, namely the energy sector. To this end, large-scale penetration of renewable energy systems into the energy market is crucial for the energy transition toward a sustainable future, replacing fossil fuels and improving access to energy with socio-economic benefits. With the advent of Industry 4.0, Internet of Things technologies have been increasingly applied to the energy sector, introducing the concepts of the smart grid and, more generally, the Internet of Energy. These paradigms are steering the energy sector towards more efficient, reliable, flexible, resilient, safe, and sustainable solutions, with huge potential environmental and social benefits. To realize these concepts, new information technologies are required; among the most promising are Artificial Intelligence and Machine Learning, which in many countries have already revolutionized the energy industry. This thesis presents different Machine Learning algorithms and methods for implementing new strategies to make renewable energy systems more efficient and reliable. It presents various learning algorithms, highlighting their advantages and limits, and evaluates their application to different tasks in the energy context. In addition, different techniques are presented for the preprocessing and cleaning of time series, nowadays collected by sensor networks mounted on every renewable energy system.
With the possibility of installing large numbers of sensors that collect vast amounts of time series, it is vital to detect and remove irrelevant, redundant, or noisy features and alleviate the curse of dimensionality, thus improving the interpretability of predictive models, speeding up their learning process, and enhancing their generalization properties. Therefore, this thesis discusses the importance of dimensionality reduction in sensor networks mounted on renewable energy systems and, to this end, presents two novel unsupervised algorithms. The first approach maps time series into the network domain through visibility graphs and uses a community detection algorithm to identify clusters of similar time series and select representative parameters. This method can group both homogeneous and heterogeneous physical parameters, even when related to different functional areas of a system. The second approach proposes the Combined Predictive Power Score, a feature selection method with a multivariate formulation that explores multiple expanding subsets of variables and identifies the combination of features with the highest predictive power over specified target variables. The method's selection algorithm converges to the smallest set of predictors with the highest predictive power. Once the combination of variables is identified, the most relevant parameters in a sensor network can be selected to perform dimensionality reduction. Data-driven methods open the possibility of supporting strategic decision-making, resulting in a reduction of Operation & Maintenance costs, machine faults, repair stops, and spare-parts inventory size. Therefore, this thesis presents two anomaly-detection-based approaches in the context of predictive maintenance to improve the lifetime and efficiency of the equipment.
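As an illustration of the first approach, the sketch below is an independent toy reconstruction, not the thesis' implementation: the natural visibility graph criterion is standard, but the degree-histogram features, the correlation-based similarity, and the reduction of community detection to connected components are simplifying assumptions made here. It maps each series to a visibility graph, compares degree distributions, and groups series with matching structure:

```python
import numpy as np

def visibility_graph_degrees(y):
    """Node degrees of the natural visibility graph of a time series:
    samples a and b are connected if every sample between them lies
    strictly below the straight line joining (a, y[a]) and (b, y[b])."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    deg = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            tc = np.arange(a + 1, b)
            line = y[b] + (y[a] - y[b]) * (b - tc) / (b - a)
            if np.all(y[a + 1:b] < line):  # vacuously true for neighbours
                deg[a] += 1
                deg[b] += 1
    return deg

def degree_histogram(y, bins=10):
    """Normalized degree histogram used as a per-series feature vector."""
    deg = visibility_graph_degrees(y)
    hist, _ = np.histogram(deg, bins=bins, range=(0, len(y)), density=True)
    return hist

def cluster_series(series, threshold=0.9):
    """Link two series when their degree histograms correlate above
    `threshold`, then return connected components as clusters (a
    simplified stand-in for a full community detection algorithm)."""
    feats = [degree_histogram(s) for s in series]
    n = len(series)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if np.corrcoef(feats[i], feats[j])[0, 1] > threshold:
                adj[i].append(j)
                adj[j].append(i)
    seen, clusters = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                comp.append(v)
                stack.extend(adj[v])
        clusters.append(sorted(comp))
    return clusters
```

Since the visibility criterion is invariant under positive affine transformations of the amplitude, series that differ only in scale and offset land in the same cluster, which hints at why visibility graphs can group heterogeneous physical parameters.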
The first approach proposes an anomaly detection model based on Principal Component Analysis that is robust to false alarms, can isolate anomalous conditions, and can anticipate equipment failures. The second approach has at its core a neural architecture, namely a Graph Convolutional Autoencoder, which models the sensor network as a dynamical functional graph by simultaneously considering the information content of individual sensor measurements (graph node features) and the nonlinear correlations existing between all pairs of sensors (graph edges). The proposed neural architecture can capture hidden anomalies even when the turbine continues to deliver the power requested by the grid, and can anticipate equipment failures. Since the model is unsupervised and completely data-driven, this approach can be applied to any wind turbine equipped with a SCADA system. When it comes to renewable energies, the unschedulable uncertainty due to their intermittent nature is an obstacle to the reliability and stability of energy grids, especially at large-scale integration. These challenges can be alleviated if the natural sources or the power output of renewable energy systems can be forecasted accurately, allowing power system operators to plan optimal power management strategies that balance intermittent power generation against the load demand. To this end, this thesis proposes a multi-modal spatio-temporal neural network for multi-horizon wind power forecasting. In particular, the model combines high-resolution Numerical Weather Prediction forecast maps with turbine-level SCADA data and explores how meteorological variables on different spatial scales, together with the turbines' internal operating conditions, impact wind power forecasts. The world is undergoing a third energy transition whose main goal is to tackle global climate change through decarbonization of energy supply and consumption patterns.
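A minimal sketch of a PCA-based detector of this kind follows; this is the generic textbook form using the squared prediction error, and the percentile threshold is an assumption made here, not the thesis' robust model:

```python
import numpy as np

class PCAAnomalyDetector:
    """Sketch of a PCA anomaly detector: fit a principal subspace on
    healthy data and flag samples whose squared prediction error
    (SPE / Q statistic) exceeds a percentile threshold estimated on
    the training set."""
    def __init__(self, n_components=2, quantile=99.0):
        self.k, self.q = n_components, quantile

    def fit(self, X):
        self.mu = X.mean(axis=0)
        # principal directions from the SVD of the centered data
        _, _, Vt = np.linalg.svd(X - self.mu, full_matrices=False)
        self.W = Vt[:self.k].T                # (d, k) loading matrix
        self.thresh = np.percentile(self._spe(X), self.q)
        return self

    def _spe(self, X):
        Xc = X - self.mu
        resid = Xc - Xc @ self.W @ self.W.T   # off-subspace residual
        return (resid ** 2).sum(axis=1)

    def predict(self, X):
        return self._spe(X) > self.thresh     # True = anomaly
```

An alarm fires when a sample leaves the correlation structure learned from healthy operation, which is what allows such detectors to anticipate failures before any individual monitored quantity goes out of range.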
This is not only possible thanks to global cooperation and agreements between parties, advances in power generation systems, and Internet of Things and Artificial Intelligence technologies, but also necessary to prevent the severe and irreversible consequences of climate change that threaten life on the planet as we know it. This thesis is intended as a reference for researchers who want to contribute to the sustainable energy transition and are approaching the field of Artificial Intelligence in the context of renewable energy systems.
Semiconductor-Superconductor Josephson Junctions in the Presence of Zeeman and Spin-Orbit Fields
Epitaxially grown Al-InAs hybrids have great potential for future applications. The most prominent incentive in this regard is the prospect of Majorana zero modes, which are believed to be ideal candidates for fault-tolerant quantum computers. With recent access to these novel materials, however, it is also possible to conduct experiments on a wide range of generic phenomena. With the help of top-down fabrication, individually designed Josephson junctions offer an unprecedented playground for experimentalists due to the unique combination of a two-dimensional electron gas (2DEG) and a superconductor.
This dissertation examines the fundamental building blocks of single Josephson junctions built on such a heterostructure. For this purpose, we developed a fabrication process and implemented a measurement technique based on a cold RLC resonator in the low-MHz regime, placed in series with the sample. In contrast to measuring the normal resistance, the resonator allows us to access the inductance of a superconducting system and thus to probe the supercurrent-carrying Andreev bound states (ABS).
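To illustrate why an inductance measurement probes the junction physics, consider the generic textbook relations below (not the analysis of this dissertation): for a sinusoidal CPR the Josephson inductance is L_J = Φ0/(2π·Ic·cos φ), so the junction shifts the resonance of the series tank circuit; the component values used are arbitrary examples chosen to land in the low-MHz regime:

```python
import numpy as np

PHI0 = 2.067833848e-15      # magnetic flux quantum (Wb)

def josephson_inductance(Ic, phi=0.0):
    """Josephson inductance L_J = Phi0 / (2*pi*Ic*cos(phi)) for a
    sinusoidal CPR (the CPR measured in this work is distorted)."""
    return PHI0 / (2 * np.pi * Ic * np.cos(phi))

def resonance_frequency(L_ext, C, L_junction):
    """Resonance of the external tank with the junction inductance in series."""
    return 1.0 / (2 * np.pi * np.sqrt((L_ext + L_junction) * C))
```

With example values L_ext = 100 nH and C = 2.5 nF (assumed, placing the bare resonance near 10 MHz), a junction with Ic = 1 µA contributes roughly 0.33 nH, and any change of the ABS occupation shifts the resonance correspondingly.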
The main discoveries of this work include a complete picture of the dependence of the ABS on various parameters, such as the charge carrier density, the dc current, the magnetic fields, the temperature, and the transparency of the junction, which is close to unity. In the heterostructure, we can break inversion and time-reversal symmetry simultaneously through the interplay of spin-orbit and Zeeman fields. This, in combination with the ballistic character of the Josephson device, leads to a non-reciprocal current that depends on the cross product of current and Zeeman field. Furthermore, we report a rectification effect of the supercurrent even far below the critical temperature of the superconductor. The observed non-reciprocal current is a consequence of a distorted current-phase relation (CPR). Using the inductance, we can resolve this distortion and derive the novel magnetochiral anisotropy (MCA) coefficient for supercurrents.
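The non-reciprocity can be illustrated with a toy CPR (an illustrative parametrization, not the fitted model of this work): adding a phase-shifted second harmonic makes the positive and negative critical currents differ:

```python
import numpy as np

def critical_currents(I1=1.0, I2=0.25, delta=np.pi / 2):
    """Forward and reverse critical currents of the toy CPR
    I(phi) = I1*sin(phi) + I2*sin(2*phi + delta)."""
    phi = np.linspace(0.0, 2 * np.pi, 20001)
    I = I1 * np.sin(phi) + I2 * np.sin(2 * phi + delta)
    return I.max(), -I.min()

Icp, Icm = critical_currents()
eta = (Icp - Icm) / (Icp + Icm)   # diode (rectification) efficiency
```

With delta = 0 the CPR is antisymmetric and the two critical currents coincide; a finite phase shift, as induced by combined Zeeman and spin-orbit fields, makes them differ, i.e. the junction rectifies the supercurrent.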
Moreover, from the MCA coefficient we extract the Dresselhaus component and furthermore observe a peculiar sign change of the MCA at the point where the Zeeman energy equals the induced gap. Finally, with the understanding and experience gained on single superconductor-semiconductor Josephson junctions, we create the basis for more complex devices, e.g. multiterminal Josephson junctions (MTJJs). Such junctions with multiple superconducting leads are predicted to host synthetic Weyl singularities in their ABS spectrum. In this work, we present the first results on this new topic and show that it is possible to fabricate such MTJJs and to measure their inductance.
Heterogeneous Photocatalysis
This reprint is a compilation of the articles submitted to the Special Issue entitled "Heterogeneous Photocatalysis: A Solution for a Greener Earth" of the journal Catalysts, presenting an overview of the latest advances in the development of innovative photocatalytic processes.
SuperCDMS HVeV Run 2 Low-Mass Dark Matter Search, Highly Multiplexed Phonon-Mediated Particle Detector with Kinetic Inductance Detector, and the Blackbody Radiation in Cryogenic Experiments
There is ample evidence of dark matter (DM), a phenomenon responsible for ≈ 85% of the matter content of the Universe that cannot be explained by the Standard Model (SM). One of the most compelling hypotheses is that DM consists of beyond-SM particle(s) that are nonluminous and nonbaryonic. So far, numerous efforts have been made to search for particle DM, and yet none has yielded an unambiguous observation of DM particles.
We present in Chapter 2 the SuperCDMS HVeV Run 2 experiment, where we search for DM in the mass ranges of 0.5--10⁴ MeV/c² for electron-recoil DM and 1.2--50 eV/c² for the dark photon and the axion-like particle (ALP). SuperCDMS utilizes cryogenic crystals as detectors to search for DM interactions with the crystal atoms. An interaction is detected in the form of recoil energy mediated by phonons. In the HVeV project, we look for electron recoils, enhancing the signal via the Neganov-Trofimov-Luke effect under high-voltage bias. This technique enabled us to detect quantized e⁻h⁺ creation at a 3% ionization energy resolution. Our work is the first DM search analysis considering charge trapping and impact ionization effects for solid-state detectors. We report our results as upper limits for the assumed particle models as functions of DM mass. Our results exclude the DM-electron scattering cross section, the dark photon kinetic mixing parameter, and the ALP axioelectric coupling above 8.4 x 10⁻³⁴ cm², 3.3 x 10⁻¹⁴, and 1.0 x 10⁻⁹, respectively.
Currently, every SuperCDMS detector is equipped with a few phonon sensors based on transition-edge sensor (TES) technology. In order to improve the background rejection performance of phonon-mediated particle detectors, we are developing highly multiplexed detectors utilizing kinetic inductance detectors (KIDs) as phonon sensors. This work is detailed in Chapters 3 and 4. We have improved our previous KID and readout line designs, which enabled us to produce our first ø3" detector with 80 phonon sensors. The detector yielded a frequency placement accuracy of 0.07%, indicating our capability of implementing hundreds of phonon sensors in a typical SuperCDMS-style detector. We detail our fabrication technique for simultaneously employing Al and Nb in the KID circuit. We explain our signal model, which includes extracting the RF signal, calibrating the RF signal into pair-breaking energy, and then detecting pulses. We summarize our noise conditions and develop models for the different noise sources. We combine the signal and noise models into an energy resolution model for KID-based phonon-mediated detectors. From this model, we propose strategies to further improve future detectors' energy resolution and introduce our ongoing implementations.
Blackbody (BB) radiation is one of the plausible background sources responsible for the low-energy background currently preventing low-threshold DM experiments from searching lower DM mass ranges. In Chapter 5, we present our study of this background for cryogenic experiments. We have developed physical models and, based on them, simulation tools for BB radiation propagation as photons or waves. We have also developed a theoretical model for BB photons' interaction with semiconductor impurities, which is one of the possible channels generating the leakage-current background in SuperCDMS-style detectors. We have planned an experiment to calibrate our simulation and leakage-current generation model. For the experiment, we have developed a specialized "mesh TES" photon detector inspired by cosmic microwave background experiments. We present its sensitivity model, the radiation source developed for the calibration, and the general plan of the experiment.
Comparative Analysis of Techniques Used to Detect Copy-Move Tampering for Real-World Electronic Images
The evolution of computationally powerful computers and the easy availability of innovative editing software and high-definition image-capture tools have made it effortless to produce image forgeries. Threats arising from the misinterpretation and insecure handling of digital images have been observed for a long time, and considerable research has gone into developing techniques to authenticate digital images. Research in this area is not limited to checking the validity of digital photos, but extends to detecting specific signs of distortion or forgery, without requiring prior knowledge of the image content or previously embedded watermarks. This paper reviews recent developments in digital image tampering detection and presents a benchmarking study with qualitative and quantitative results. Different applications of forgery detection, covering a variety of methodologies and concepts, are discussed together with their outcomes, with particular attention to machine- and deep-learning methods for building efficient automated forgery-detection systems. Future applications and the development of advanced soft-computing techniques for detecting digital image forgery are also discussed.
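As a concrete example of the simplest copy-move detection strategy covered by such surveys, the sketch below performs exact block matching; the block size and overlap filter are assumptions made here, and robust real-world detectors match on DCT, Zernike, or learned features rather than raw pixels:

```python
import numpy as np

def detect_copy_move(img, block=8):
    """Exact block matching: hash every block x block window of a
    grayscale image and report pairs of non-overlapping windows with
    identical pixel content."""
    h, w = img.shape
    seen = {}
    matches = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                y0, x0 = seen[key]
                # skip trivial matches between overlapping windows
                if abs(y - y0) >= block or abs(x - x0) >= block:
                    matches.append(((y0, x0), (y, x)))
            else:
                seen[key] = (y, x)
    return matches
```

On real photographs this exact matcher fails as soon as the copied region is compressed, rotated, or retouched, and it mis-fires on flat textureless areas; that is precisely why the surveyed literature moves to robust block features and, more recently, learned descriptors.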
Coherent and Holographic Imaging Methods for Immersive Near-Eye Displays
Near-eye displays are designed to provide realistic 3D viewing experiences, strongly demanded in applications such as remote machine operation, entertainment, and 3D design. However, contemporary near-eye displays still generate conflicting visual cues, which degrade the immersive experience and hinder comfortable use. Approaches using coherent light, e.g., laser light, for display illumination are considered promising for tackling these deficiencies. In particular, coherent illumination enables holographic imaging, whereby holographic displays can accurately recreate the true light waves of a desired 3D scene. However, using coherent light to drive displays introduces additional high-contrast noise in the form of speckle patterns, which must be mitigated. Furthermore, imaging methods for holographic displays are computationally demanding and pose new challenges in analysis, speckle noise, and light modelling.
This thesis examines computational methods for near-eye displays in the coherent imaging regime, using signal processing, machine learning, and geometrical (ray) and physical (wave) optics modelling. In the first part of the thesis, we concentrate on the analysis of holographic imaging modalities and develop corresponding computational methods. To tackle the high computational demands of holography, we adopt holographic stereograms as an approximate holographic data representation. We address the visual correctness of this representation by developing a framework for analyzing the accuracy of the accommodation cues provided by a holographic stereogram in relation to its design parameters. Additionally, we propose a signal processing solution for speckle noise reduction to overcome existing issues in light modelling that cause visual artefacts. We also develop a novel holographic imaging method to accurately model lighting effects in challenging conditions, such as mirror reflections.
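As background on the kind of wave-optics computation such methods build on, the angular spectrum method is a standard way to propagate a coherent field numerically (a generic textbook sketch, not the specific imaging method proposed in the thesis):

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate the sampled complex field u0 over a distance z by
    filtering its spatial spectrum with the free-space transfer
    function; evanescent components are suppressed."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```

When the sampling resolves only propagating spatial frequencies, |H| = 1 and the optical power is conserved; this kind of sanity check is routine when implementing holographic imaging pipelines.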
In the second part of the thesis, we approach the computational complexity of coherent display imaging through deep learning. We develop a coherent accommodation-invariant near-eye display framework that jointly optimizes static display optics and a display image pre-processing network. Finally, we accelerate the proposed holographic imaging method via deep learning for real-time applications. This includes developing an efficient procedure for generating functional random 3D scenes to form a large synthetic dataset of multiperspective images, and training a neural network to approximate the holographic imaging method under real-time processing constraints.
Altogether, the methods developed in this thesis are shown to be highly competitive with state-of-the-art computational methods for coherent-light near-eye displays. The results demonstrate two alternative approaches for resolving the existing near-eye display problems of conflicting visual cues, using either static or dynamic optics and computational methods suitable for real-time use. The presented results are therefore instrumental for next-generation immersive near-eye displays.
Electrical and Optical Modeling of Thin-Film Photovoltaic Modules
Today, numerous scientific studies have demonstrated that the Earth is already subject to climate change. All of humanity must therefore act in unison to prevent the worst catastrophe scenarios. A promising approach, if not the most promising of all, to meeting this greatest challenge in human history is to satisfy humanity's hunger for energy by generating renewable and inexhaustible energy. Photovoltaic (PV) technology is a promising contender to become the most powerful renewable energy source and, owing to its direct conversion of sunlight and its scalable applicability in the form of large-area solar modules, already plays a major role in renewable energy generation. In the PV sector, solar modules based on silicon wafers are currently the dominant technology. Emerging PV technologies such as thin-film technology, however, offer advantageous properties such as a very small carbon dioxide (CO2) footprint, a short energy payback time, and the potential for low-cost monolithic mass production, although the latter is not yet fully mature. To steer thin-film technology toward broad market readiness, numerical simulations are an important pillar for scientific understanding and technological optimization. While the traditional simulation literature frequently addresses material-specific challenges, this work focuses on industry-oriented challenges at the module level, without modifying the underlying material parameters.
To create an all-encompassing digital model of a solar module, this work combines several simulation approaches from different physical domains. Electrical effects, including the spatial voltage variation within the module, are captured with a finite element method (FEM) that solves the spatially discretized Poisson equation. Optical effects are accounted for with a generalized transfer matrix method (TMM). All simulation methods in this work were programmed from scratch to allow every simulation level to be coupled with the highest possible degree of customization. The simulation and the correctness of its parameters are verified against external quantum efficiency (EQE) measurements, experimental reflection data, and measured current-voltage (I-V) characteristics. The core of this work's approach is a holistic simulation methodology at the module level. It bridges the gap from material-level simulation, via the calculation of laboratory efficiencies, to the prediction of module performance in the field, which is influenced by numerous environmental factors. By linking cell simulation and system design in this way, the outdoor behavior of solar modules can be predicted from laboratory properties alone. It is even possible to work backwards from experimental measurements to material parameters using the reverse engineering fitting (REF) procedure developed in this work.
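The optical side of such a model rests on the transfer matrix method. As a minimal sketch of the underlying idea (not the generalized TMM of the thesis), the following computes the normal-incidence reflectance of a thin-film stack from the characteristic matrices of its layers; all material values are illustrative assumptions:

```python
import numpy as np

def tmm_reflectance(n_layers, d_layers, n_ambient, n_substrate, wavelength):
    """Normal-incidence reflectance of a thin-film stack via the classical
    transfer-matrix method. n_layers may be complex (absorbing films)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wavelength          # phase thickness of layer
        Mj = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                       [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ Mj                                       # accumulate stack matrix
    # Fresnel-like amplitude reflection coefficient of the whole stack
    B = M[0, 0] + M[0, 1] * n_substrate
    C = M[1, 0] + M[1, 1] * n_substrate
    r = (n_ambient * B - C) / (n_ambient * B + C)
    return abs(r) ** 2

# Example (assumed values): quarter-wave MgF2 (n = 1.38) on glass (n = 1.5)
# at 550 nm suppresses the ~4 % bare-glass reflectance to roughly 1.4 %.
R_coated = tmm_reflectance([1.38], [550 / (4 * 1.38)], 1.0, 1.5, 550)
R_bare = tmm_reflectance([], [], 1.0, 1.5, 550)
```

The same matrix product extends to arbitrarily many layers, which is what makes the method attractive for full module stacks (glass, TCO, buffer, absorber).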
The numerical framework developed in this work can be used for several applications. First, combining electrical and optical simulations enables holistic top-down loss analyses. These provide a scientific classification and a quantitative comparison of all power loss mechanisms at a glance, steering future research and development toward the technological weak points of solar modules. Moreover, the combination of electrics and optics makes it possible to detect losses that stem from the nonlinear interplay of these two domains and can be traced back to a spatial voltage distribution within the solar module.
This work also applies the developed numerical models to optimization problems carried out on digital models of real solar modules. Questions that frequently arise in solar module development concern, for example, the layer thickness of the front optically transparent, electrically conductive oxide (TCO) or the width of monolithically interconnected cells. Determining the optimum of this multidimensional trade-off between optical transparency, electrical conductivity, and geometrically inactive area between individual cells is a central feature of this work's methodology. The FEM approach of this work makes it possible to account for all mutual interactions across the different physical domains and to find a holistically optimized module design. Topologically more complex problems, such as finding a suitable design for the metallization grid, can also be solved on the basis of the simulation using topology optimization (TO). In this work, the TO procedure was applied to monolithically integrated cells for the first time. Furthermore, it was shown that both simple optimizations of the TCO layer thickness and topology optimizations depend strongly on the prevailing illumination conditions. For industry-oriented applications it is therefore far more meaningful to optimize for annual energy yield instead of laboratory efficiency, since mean annual irradiation deviates considerably from laboratory conditions. Using this yield optimization, this work calculates for copper indium gallium diselenide CuInGaSe (CIGS) technology a performance gain of more than 1 % in yield for some geographic locations, together with material savings of up to 50 % for the metallization and TCO layers.
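The cell-width trade-off mentioned above can be illustrated with a common first-order model (a sketch under assumed parameters, not the FEM of the thesis): the distributed TCO series-resistance loss grows with the square of the stripe width, while the dead-area fraction of the monolithic interconnect shrinks with it, so their sum has a clear minimum. All parameter values below are hypothetical:

```python
import numpy as np

# Assumed illustrative parameters (not from the thesis)
R_sheet = 10.0      # TCO sheet resistance [ohm/sq]
J_mpp   = 300.0     # current density at maximum power point [A/m^2]
V_mpp   = 0.6       # voltage at maximum power point [V]
w_dead  = 200e-6    # inactive interconnect width between cells [m]

def relative_loss(w):
    """Fractional power loss of one cell stripe of active width w:
    distributed TCO resistive loss plus geometric dead-area loss."""
    f_resistive = R_sheet * J_mpp * w**2 / (3.0 * V_mpp)  # lumped 1/3 factor
    f_dead_area = w_dead / (w + w_dead)
    return f_resistive + f_dead_area

widths = np.linspace(1e-3, 10e-3, 9001)     # candidate widths: 1 to 10 mm
w_opt = widths[np.argmin(relative_loss(widths))]
```

With these numbers the optimum lands at a few millimeters, consistent with typical thin-film cell widths; the full FEM treatment additionally captures the nonlinear coupling to the optics and the spatial voltage distribution that this scalar model ignores.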
With the numerical simulations of this work, any conceivable technological improvement at the module level can be incorporated into the model. In this way, the current technological limit of CIGS thin-film solar modules was calculated. Using the boundary conditions of currently available materials, technology and manufacturing tolerances, and the best CIGS material published in the literature to date, a theoretical maximum efficiency of 24 % results at the module level. The best module published to date under these restrictions has an efficiency of 19.2 % [1]. If the CIGS absorber improves to a recombination rate comparable to that of gallium arsenide (GaAs), the efficiency limit rises to about 28 %. For an ideal CIGS absorber without intrinsic recombination losses, this work calculates a maximum efficiency ceiling of 29 %.
- …