24 research outputs found

    Machine learning for accelerating the discovery of high-performance low-cost solar cells

    Solar energy can profoundly enhance the operation of electronic devices and addresses one of the most important challenges facing humanity today. Such devices primarily rely on rechargeable batteries to satisfy their energy needs, and since photovoltaic (PV) technology is a mature and reliable method for converting the Sun’s vast energy into electricity, innovation in new materials and solar cell architectures is becoming ever more important for increasing the penetration of PV technologies in wearable and IoT applications. Artificial intelligence (AI), moreover, is touted as a game changer in energy harvesting. This thesis aims to optimize solar cell performance using various computational methods, spanning solar irradiance, solar cell architecture, and cost analysis of the PV system. It explores PV cell architectures that offer optimized cost/efficiency trade-offs, and incorporates machine learning (ML) algorithms to develop reconfigurable PV cells based on switchable complementary metal-oxide-semiconductor (CMOS) addressable switches, so that output power can be optimized for different light patterns and shading. The first part of the thesis presents a critical literature review of ML techniques applied to estimating solar irradiance, followed by a review of methods for accurately predicting the levelized cost of electricity (LCOE) and return on investment (ROI) of a PV system, and lastly a systematic review (SR) of solar cell discovery. The systematic review reveals that ML techniques can speed up the discovery of new solar cell materials and architectures, covers a broad range of ML techniques focused on producing low-cost solar cells, and introduces a new classification based on data synthesis, ML algorithms, optimization, and fabrication process. It finds that Gaussian process regression (GPR) with Bayesian optimization (BO) is the most promising method for designing low-cost organic solar cell architectures. The first part of the thesis therefore critically evaluates existing ML techniques and guides researchers in discovering solar cells using them; it also discusses recent work on predicting solar irradiance and evaluating the LCOE and ROI of PV systems using various ML time-series forecasting techniques. The second part of the thesis addresses wireless sensor networks (WSNs), which conventionally rely on batteries that require constant replacement and end up as hazardous waste; WSNs with solar energy harvesters that scavenge energy from the Sun are proposed as an alternative. ML algorithms are presented that enable WSN nodes to accurately predict solar irradiance so that each node can intelligently manage its energy: the node uses the panel’s energy to power its internal electronic components, such as the processor and transmitter, and to charge its battery. Accurate irradiance predictions thus allow a node to plan its energy utilization more efficiently, adjusting its operation schedule according to the expected solar energy availability. The ML models were based on historical weather datasets from California, USA, and Delhi, India, covering 2010 to 2020.
    The data pre-processing pipeline is evaluated, including feature engineering, outlier identification, and a grid search to determine the best-tuned ML model. Compared with the linear regression (LR) model, the support vector regression (SVR) model forecast solar irradiance more accurately. Models trained on 1-year and 1-month windows also forecast much better than those trained on 10-year and 1-week windows, with both root mean square error (RMSE) and mean absolute error (MAE) below 7% for California, USA. The third part of the thesis evaluates the LCOE using demographic variables. The LCOE facilitates economic decisions and quantitative comparisons between energy generation technologies, but previous methods for calculating it were based on fixed singular input values that do not capture the uncertainty involved in determining the financial feasibility of a PV project. Instead, a dynamic model is proposed that considers important demographic, energy, and policy data, including interest rates, inflation rates, and energy yield; all these parameters will inevitably vary over a PV system’s lifetime, and accounting for them yields a more accurate LCOE value. Comparisons between ML algorithms revealed that the ARIMA model predicted the consumer price of electricity with 93.8% accuracy. The proposed model is evaluated in detail on two case studies, from the United States and the Philippines. The results show that LCOE values for the State of California can differ by almost 30% (5.03 ¢/kWh using singular values versus 7.09 ¢/kWh using our ML model), a gap that can distort the perceived risk or economic feasibility of a PV power plant. The ML model also predicts the ROI of a grid-connected PV plant in the Philippines at 5.37 years instead of 4.23 years, giving the client a clearer basis for an accurate cost analysis of a PV plant.
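
    The abstract describes the modelling pipeline without code. As a hedged illustration, the minimal Python sketch below reproduces the LR-versus-SVR comparison with grid search and RMSE/MAE scoring using scikit-learn on synthetic data; the features, target, and hyperparameter grid are assumptions made for the example, not the thesis's actual dataset or configuration.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_absolute_error, mean_squared_error
        from sklearn.model_selection import GridSearchCV, train_test_split
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        n = 1000
        # Hypothetical weather features: temperature, humidity, cloud cover.
        X = rng.uniform(size=(n, 3))
        # Synthetic irradiance target in W/m^2 (a stand-in for real data).
        y = 800 * (1 - X[:, 2]) + 50 * X[:, 0] + rng.normal(scale=30.0, size=n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

        lr = LinearRegression().fit(X_tr, y_tr)

        # Grid search over SVR hyperparameters, as described in the abstract.
        grid = GridSearchCV(
            SVR(kernel="rbf"),
            {"C": [1, 10, 100], "epsilon": [0.1, 1.0], "gamma": ["scale", 0.1]},
            cv=5,
        )
        svr = grid.fit(X_tr, y_tr).best_estimator_

        for name, model in [("LR", lr), ("SVR", svr)]:
            pred = model.predict(X_te)
            rmse = np.sqrt(mean_squared_error(y_te, pred))
            mae = mean_absolute_error(y_te, pred)
            print(f"{name}: RMSE={rmse:.1f} W/m^2, MAE={mae:.1f} W/m^2")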

    Integrated Chemical Processes in Liquid Multiphase Systems

    The essential principles of green chemistry are the use of renewable raw materials, highly efficient catalysts, and green solvents, linked with energy efficiency and real-time process optimization. Experts from different fields show how to examine every level, from molecular elementary steps up to the design and operation of an entire plant, in developing novel and efficient production processes.

    Machine Learning in Discrete Molecular Spaces

    The past decade has seen an explosion of machine learning in chemistry. Whether in property prediction, synthesis, molecular design, or any other subdivision, machine learning seems poised to become an integral, if not dominant, component of future research efforts. This extraordinary capacity rests on the interaction between machine learning models and the underlying chemical data landscape, commonly referred to as chemical space. Chemical space has multiple incarnations but is generally considered the space of all possible molecules. In this sense, it is one example of a molecular set: an arbitrary collection of molecules. This thesis is devoted to precisely these objects, and particularly to how they interact with machine learning models. The work is predicated on the idea that by better understanding the relationship between molecular sets and the models trained on them, we can improve models, achieve greater interpretability, and further break down the walls between data-driven and human-centric chemistry. The hope is that this enables the full predictive power of machine learning to be leveraged while continuing to build our understanding of chemistry. The first three chapters of this thesis introduce and review the necessary machine learning theory, particularly the tools specially designed for chemical problems. This is followed by an extensive literature review exploring the contributions of machine learning to multiple facets of chemistry over the last two decades. Chapters 4-7 present the research conducted throughout this PhD: how to meaningfully describe the properties of an arbitrary set of molecules through information theory; how to determine the most informative data points in a set of molecules; how graph signal processing can be used to understand the relationship between the chosen molecular representation, the property, and the machine learning model; and finally, how this approach can be brought to bear on protein space. Each sub-project briefly develops the necessary mathematical theory before leveraging it to resolve the posed problems. We conclude with a summary of the contributions of this work and outline fruitful avenues for further exploration.
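
    The thesis develops its information-theoretic machinery in detail; the toy Python sketch below illustrates just one plausible reading of "describing a set of molecules through information theory": the mean per-bit Shannon entropy of binary molecular fingerprints, under which diverse sets score higher than narrow, congeneric-like sets. The random fingerprints are placeholders (real ones would come from a cheminformatics toolkit such as RDKit), and this measure is not necessarily the one used in the thesis.

        import numpy as np

        def mean_bit_entropy(fps: np.ndarray) -> float:
            """Mean per-bit Shannon entropy (in bits) of binary fingerprints."""
            p = fps.mean(axis=0)                 # probability each bit is set
            p = np.clip(p, 1e-12, 1 - 1e-12)     # avoid log2(0)
            h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
            return float(h.mean())

        rng = np.random.default_rng(0)
        diverse = rng.integers(0, 2, size=(500, 1024))           # bits near p = 0.5
        narrow = (rng.random((500, 1024)) < 0.02).astype(int)    # near-constant bits
        print(mean_bit_entropy(diverse), mean_bit_entropy(narrow))  # higher => more diverse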

    Numerical and Evolutionary Optimization 2020

    This book grew out of the 8th International Workshop on Numerical and Evolutionary Optimization (NEO) and collects papers on the intersection of the two research areas covered at the workshop: numerical optimization and evolutionary search techniques. While focusing on the design of fast and reliable methods that lie across these two paradigms, the resulting techniques apply to a broad class of real-world problems, such as pattern recognition, routing, energy, production lines, prediction, and modeling, among others. The volume is intended as a useful reference for mathematicians, engineers, and computer scientists exploring current issues and solutions emerging from these mathematical and computational methods and their applications.

    Evolutionary Computation

    This book presents several recent advances in evolutionary computation, especially evolution-based optimization methods and hybrid algorithms for applications ranging from optimization and learning to pattern recognition and bioinformatics. It also presents new algorithms based on various analogies and metaphors, one of which draws on philosophy, specifically the philosophy of praxis and dialectics. The book further presents interesting bioinformatics applications, notably the use of particle swarms to discover gene expression patterns in DNA microarrays. It thus features representative work in the field of evolutionary computation and the applied sciences. The intended audience includes graduate and undergraduate students, researchers, and anyone who wishes to become familiar with the latest research in this field.
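
    As a concrete companion to the particle swarm mention above, here is a minimal, self-contained particle swarm optimization (PSO) sketch in Python minimizing the sphere function; the objective and all parameter values are illustrative choices, not taken from the book.

        import numpy as np

        def pso(f, dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(0)
            x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
            v = np.zeros_like(x)                             # particle velocities
            pbest = x.copy()                                 # personal bests
            pbest_val = np.apply_along_axis(f, 1, x)
            gbest = pbest[pbest_val.argmin()].copy()         # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                # Velocity update: inertia + cognitive pull + social pull.
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = x + v
                val = np.apply_along_axis(f, 1, x)
                better = val < pbest_val
                pbest[better], pbest_val[better] = x[better], val[better]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest, float(pbest_val.min())

        best, best_val = pso(lambda z: float(np.sum(z ** 2)))  # sphere function
        print(best_val)   # approaches 0 as the swarm converges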

    Personality Identification from Social Media Using Deep Learning: A Review

    Social media helps people scattered around the world share ideas and information, thereby creating communities, groups, and virtual networks. Identifying personality is significant in many applications, such as detecting a person's mental state or character, predicting job satisfaction and professional and personal relationship success, and building recommendation systems. Personality is also an important factor in determining individual variation in thoughts, feelings, and conduct. According to the 2018 Global social media research survey, there were approximately 3.196 billion social media users worldwide, a number expected to grow rapidly with the spread of mobile smart devices and advances in technology. Support vector machines (SVM), naive Bayes (NB), multilayer perceptron neural networks, and convolutional neural networks (CNN) are among the machine learning techniques used for personality identification in the literature. This paper surveys studies that identify the personality of social media users with machine learning approaches and reviews recent studies that aim to predict the personality of online social media (OSM) users.
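
    For concreteness, the sketch below shows the general shape of the SVM-on-text approach covered by the reviewed studies, using scikit-learn: TF-IDF features feeding a linear SVM that labels a Big Five-style trait. The four posts and their labels are fabricated placeholders, not data from any study.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Fabricated example posts, each labelled with one illustrative trait.
        posts = [
            "had a great time at the party with everyone",
            "spent the weekend reading alone at home",
            "can't wait for the big group trip next week",
            "quiet evening, just me and my journal",
        ]
        labels = ["extravert", "introvert", "extravert", "introvert"]

        # TF-IDF unigrams and bigrams feeding a linear SVM classifier.
        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
        model.fit(posts, labels)
        print(model.predict(["looking forward to meeting new people tonight"]))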

    Improved fragment-based protein structure prediction by redesign of search heuristics

    Difficulty in sampling large and complex conformational spaces remains a key limitation in fragment-based de novo prediction of protein structure. Our previous work has shown that, even for small-to-medium-sized proteins, some current methods inadequately sample alternative structures. We have developed two new conformational sampling techniques, one employing a bilevel optimisation framework and the other iterated local search. We combine strategies of forced structural perturbation (where some fragment insertions are accepted regardless of their impact on scores) and greedy local optimisation, allowing greater exploration of the available conformational space. Comparisons against the Rosetta Abinitio method indicate that our protocols more frequently generate native-like predictions for many targets, even following the low-resolution phase, using a given set of fragment libraries. By contrasting results across two different fragment sets, we show that our methods are better able to take advantage of high-quality fragments. These improvements also translate into more reliable identification of near-native structures in a simple clustering-based model selection procedure. We show that when fragment libraries are sufficiently well constructed, greater breadth of exploration within runs improves prediction accuracy. Our results also suggest that in benchmarking scenarios, totally excluding fragments drawn from homologous templates can make performance differences between methods appear less pronounced.
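
    The following abstract Python sketch illustrates the search strategy described above: greedy local optimisation by fragment insertion, plus forced perturbations that are accepted regardless of score. The toy scoring function and random "fragments" are stand-ins; they are not Rosetta's energy function or real fragment libraries.

        import random

        random.seed(0)
        L, F = 60, 3    # toy chain length and fragment length
        fragments = [[random.gauss(0, 1) for _ in range(F)] for _ in range(200)]

        def score(conf):
            """Toy score standing in for an energy function; lower is better."""
            return sum(c * c for c in conf)

        def insert(conf, pos, frag):
            """Fragment insertion: overwrite a window of the conformation."""
            new = list(conf)
            new[pos:pos + F] = frag
            return new

        def greedy_phase(conf, moves=500):
            """Greedy local optimisation: accept only score-improving insertions."""
            for _ in range(moves):
                cand = insert(conf, random.randrange(L - F), random.choice(fragments))
                if score(cand) < score(conf):
                    conf = cand
            return conf

        best = greedy_phase([random.gauss(0, 2) for _ in range(L)])
        for _ in range(20):    # iterated local search with forced perturbation
            perturbed = insert(best, random.randrange(L - F), random.choice(fragments))
            candidate = greedy_phase(perturbed)    # forced move, then re-optimise
            if score(candidate) < score(best):
                best = candidate
        print(score(best))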

    Optimization of bio-inspired algorithms on heterogeneous CPU-GPU systems

    The scientific challenges of the 21st century require processing and analysing an enormous amount of information in what is known as the Big Data era. Future advances in different sectors of society, such as medicine, engineering, or efficient energy production, to mention just a few examples, depend on the continuous growth of the computational power of modern computers. However, this computational growth, traditionally guided by the well-known "Moore's Law", has been compromised in recent decades mainly by the physical limitations of silicon. Computer architects have developed numerous contributions (multicore, manycore, heterogeneity, dark silicon, etc.) to try to mitigate this computational slowdown, leaving other factors fundamental to problem-solving, such as programmability, reliability, and precision, in the background. Software development, however, has followed the opposite path, where ease of programming through abstraction models, automatic code debugging to avoid undesired effects, and rapid deployment to production are key to the economic viability and efficiency of the digital business sector. This route often compromises the performance of the applications themselves, a consequence that is entirely unacceptable in a scientific context. This doctoral thesis starts from the hypothesis that narrowing the gap between the hardware and software fields can help solve the scientific challenges of the 21st century. Hardware development is marked by the consolidation of processors oriented toward massive data parallelism, mainly GPUs (Graphics Processing Units) and vector processors, which are combined to build heterogeneous processors or computers (HSA). Specifically, we focus on using GPUs to accelerate scientific applications. GPUs have become one of the most promising platforms for implementing algorithms that simulate complex scientific problems. Since their inception, the trajectory and history of graphics cards have been shaped by the video game world, reaching very high levels of popularity as more realism was achieved in that area. An important milestone occurred in 2006, when NVIDIA (the leading graphics card manufacturer) carved out a place in high-performance computing and research with the development of CUDA (Compute Unified Device Architecture). This architecture makes it possible to use the GPU for scientific applications in a versatile way. Despite the importance of the GPU, notable improvement can be achieved by using it jointly with the CPU, which brings us to the heterogeneous systems named in the title of this work. It is in heterogeneous CPU-GPU environments that performance reaches its peak, since it is not GPUs alone that support researchers' scientific computing; it is a heterogeneous system combining different types of processors that achieves the highest performance. In this environment processors do not compete with one another; on the contrary, each architecture specializes in the part where it can best exploit its capabilities.
    The highest performance is achieved in heterogeneous clusters, where multiple nodes are interconnected and may differ not only in their CPU-GPU architectures but also in the computational capabilities within those architectures. With such scenarios in mind, new challenges arise in making the software we have chosen as a candidate run as efficiently as possible and obtain the best possible results. These new platforms require redesigning software to make the most of the available computational resources. Existing algorithms must therefore be redesigned and optimized for the contributions in this field to be relevant, and we must find algorithms that, by their very nature, are candidates for optimal execution on such high-performance platforms. Here we find a family of algorithms called bio-inspired algorithms, which use collective intelligence as the core of their problem-solving. Precisely this collective intelligence makes them perfect candidates for implementation on these platforms under the new parallel computing paradigm, since solutions can be built from individuals that, through some form of communication, are able to jointly construct a common solution. This thesis focuses especially on one of these bio-inspired algorithms, which falls under the term metaheuristics within the Soft Computing paradigm: Ant Colony Optimization (ACO). The algorithm is contextualized, studied, and analysed; its most critical parts are identified and redesigned for optimization and parallelization, while maintaining or improving the quality of its solutions. The possible alternatives are then implemented and tested on various high-performance platforms. The knowledge acquired in this theoretical and practical study is applied to real cases, specifically to protein folding. All of this analysis is carried through to a concrete application: in this work, we bring together new high-performance hardware platforms and the software redesign and implementation of a bio-inspired algorithm applied to a highly complex scientific problem, protein folding. When implementing a solution to a real problem, a prior study is necessary to understand the problem in depth, since anyone new to the subject will encounter unfamiliar terminology and issues; in this case, amino acids, molecules, and simulation models that are unknown to individuals without a biomedical background.
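
    As a concrete reference point for readers unfamiliar with ACO, the minimal sequential Python sketch below applies it to a random travelling salesman instance. The problem choice, parameter values, and single-threaded design are illustrative assumptions; they do not reflect the thesis's GPU parallelizations or its protein folding application.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 20
        pts = rng.random((n, 2))    # random cities in the unit square
        dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)

        tau = np.ones((n, n))       # pheromone matrix
        alpha, beta, rho, n_ants = 1.0, 2.0, 0.5, 20
        best_tour, best_len = None, np.inf

        for _ in range(100):
            tours = []
            for _ant in range(n_ants):
                tour, unvisited = [0], set(range(1, n))
                while unvisited:    # build one tour probabilistically
                    i = tour[-1]
                    cand = np.array(sorted(unvisited))
                    w = tau[i, cand] ** alpha * (1.0 / dist[i, cand]) ** beta
                    tour.append(int(rng.choice(cand, p=w / w.sum())))
                    unvisited.remove(tour[-1])
                length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
                tours.append((tour, length))
                if length < best_len:
                    best_tour, best_len = tour, length
            tau *= 1.0 - rho                    # pheromone evaporation
            for tour, length in tours:          # pheromone deposit
                for k in range(n):
                    tau[tour[k], tour[(k + 1) % n]] += 1.0 / length
        print(best_len)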

    Optimal design of a brushless permanent magnet tachogenerator for high-reliability guided weapons

    Doctoral thesis, Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2016 (advisor: Hyun-Kyo Jung). Electric actuation systems driven by electric motors are widely used in defence and aerospace applications such as guided weapons and unmanned aerial vehicles, and a rotational speed sensor is essential for controlling such electric actuators. The DC permanent magnet tachogenerator is the speed sensor most commonly used in electric actuation systems for guided weapons that have been developed, or are under development, at home and abroad. Because the DC tachogenerator has a simple structure based on the DC generator principle, it can be built in a small form factor, requires no excitation voltage, and quickly and easily provides a voltage output proportional to speed. However, because it makes mechanical contact through brushes, it is disadvantaged in harsh military environments of sustained vibration and shock, is difficult to use with high-speed motors, and suffers from a limited service life due to mechanical brush wear and from signal noise caused by electromagnetic interference. This thesis therefore proposes a brushless permanent magnet tachogenerator that retains the advantages of the DC tachogenerator while overcoming the drawbacks of brushes, providing the high stability and reliability required for guided weapons, and presents its optimal design. Because the proposed brushless tachogenerator is based on the AC generator principle, it can be manufactured using the domestic motor and generator manufacturing base and can therefore be developed independently, free of the export restrictions that apply to imported components for military use. The thesis newly proposes a method for obtaining rotational speed and direction from the brushless tachogenerator, together with a fault-tolerance scheme by which the sensor itself recovers even if one of the three phase back-EMFs becomes undetectable due to a broken wire during operation, thereby improving the sensor's reliability and stability. In addition, a new surrogate-model-based multimodal optimization algorithm is proposed that can efficiently solve electric machine design problems with complex objective functions and long evaluation times, such as the brushless tachogenerator; the optimal design was carried out on this basis, prototypes were fabricated, and a range of tests verified the proposed design method and the prototypes' performance. Finally, a new control scheme using the proposed brushless tachogenerator is presented that effectively suppresses the aeroelastic vibration arising in a missile fin actuation system, and verification tests confirmed its excellent performance.
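
    A minimal Python sketch of the fault-tolerance idea described above, under the standard assumption that the phase back-EMFs of a balanced three-phase machine sum to zero: a phase lost to a broken wire can be reconstructed from the other two, and speed can still be estimated from the EMF amplitude. The machine constants here are illustrative placeholders, not the thesis's design values.

        import numpy as np

        KE = 0.05   # assumed back-EMF constant, V per (mech rad/s), per phase
        P = 4       # assumed pole-pair count

        def reconstruct_missing_phase(ea, eb):
            # For a balanced machine e_a + e_b + e_c = 0, so e_c = -(e_a + e_b).
            return -(ea + eb)

        def speed_from_emf(ea, eb, ec):
            # For balanced sinusoids of amplitude E: ea^2 + eb^2 + ec^2 == 1.5 E^2.
            E = np.sqrt((ea**2 + eb**2 + ec**2) * 2.0 / 3.0)
            return float(np.mean(E)) / KE    # mechanical speed in rad/s

        t = np.linspace(0.0, 0.02, 500)
        w_mech = 100.0                       # true mechanical speed, rad/s
        theta = P * w_mech * t               # electrical angle
        E = KE * w_mech
        ea = E * np.sin(theta)
        eb = E * np.sin(theta - 2 * np.pi / 3)
        ec = reconstruct_missing_phase(ea, eb)   # phase C wire assumed broken
        print(speed_from_emf(ea, eb, ec))        # ~100 rad/s despite the fault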