
    Investigating optical absorption in organic semiconductors using a coarse-grained approach

    Organic semiconductors have attracted great interest as candidate materials for solar cells, light-emitting diodes and other photonic applications. The performance of such devices depends upon the spectral range and strength of the optical absorption, as well as other material properties. Optical transition energies and strengths are controlled in turn by the chemical structure of the (macro)molecule and its conformation; however, the large conformational phase space of conjugated polymers means that the chemical structure–optical property relationship is not trivial to determine. In the solid state, the conformations and interactions of the molecules are a function of process conditions and hard to control, and yet they control the optoelectronic properties of the material. Whilst many studies have sought to develop and validate computationally efficient methods for prediction of transition energies, relatively few have addressed prediction of transition strength. Methods such as time-dependent density functional theory are widely used to complement experimental studies but are too computationally expensive to access the length scales required to fully explain the observed structure-property relationships, especially for polymers with large repeat units or complex phase behaviour. Coarse-grained models of optical properties such as tight-binding exciton models can be used to examine the structure-property relationships at longer length scales, but these use approximations to the electron-hole interactions that do not accurately reproduce the experimentally observed trends in absorption strength as a function of molecular structure. I therefore implement a coarse-grained model using parameters from ab initio calculations as a tool to further our understanding of the relationship between the conformation of pi-conjugated macromolecules and their optical properties. I apply this approach to study two conjugated polymers and show how effects of conformational changes on their optical absorption spectra can be linked to changes in the parameters that characterise the electronic intermonomer interactions in the model.
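
    As a rough illustration of the kind of coarse-grained model the abstract describes, the sketch below diagonalises a nearest-neighbour tight-binding (Frenkel-type) exciton Hamiltonian for a polymer chain and reads off transition energies and oscillator strengths. All parameter values, names, and the dipole treatment are illustrative assumptions, not the thesis's ab initio-derived parameters.

```python
# Minimal tight-binding exciton sketch: site energies on the diagonal,
# conformation-dependent transfer integrals on the off-diagonal.
import numpy as np

def exciton_spectrum(site_energy, transfer_integrals, monomer_dipoles):
    """Diagonalise a nearest-neighbour exciton Hamiltonian and return
    transition energies and (relative) oscillator strengths."""
    n = len(monomer_dipoles)
    H = np.diag(np.full(n, site_energy))
    for i, J in enumerate(transfer_integrals):   # coupling between monomers i, i+1
        H[i, i + 1] = H[i + 1, i] = J
    energies, states = np.linalg.eigh(H)
    dipoles = states.T @ monomer_dipoles          # transition dipole of each eigenstate
    strengths = energies * (dipoles ** 2).sum(axis=1)   # f proportional to E * |mu|^2
    return energies, strengths

# A 20-monomer chain with a conformational "kink" that weakens one coupling
n = 20
J = np.full(n - 1, -0.5)                          # placeholder transfer integral (eV)
J[9] = -0.1                                       # twisted bond -> weaker coupling
mu = np.tile([1.0, 0.0, 0.0], (n, 1))             # placeholder monomer dipoles
E, f = exciton_spectrum(2.5, J, mu)
print(f"lowest transition: {E[0]:.2f} eV, strength {f[0]:.2f}")
```

    Varying the weakened coupling and re-running shows how a single conformational change shifts both the transition energy and its strength, which is the structure-property link the abstract targets.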

    Full stack development toward a trapped ion logical qubit

    Quantum error correction is a key step toward the construction of a large-scale quantum computer, by preventing small infidelities in quantum gates from accumulating over the course of an algorithm. Detecting and correcting errors is achieved by using multiple physical qubits to form a smaller number of robust logical qubits. The physical implementation of a logical qubit requires multiple qubits, on which high fidelity gates can be performed. The project aims to realize a logical qubit based on ions confined on a microfabricated surface trap. Each physical qubit will be a microwave dressed state qubit based on 171Yb+ ions. Gates are intended to be realized through RF and microwave radiation in combination with magnetic field gradients. The project vertically integrates software down to hardware compilation layers in order to deliver, in the near future, a fully functional small device demonstrator. This thesis presents novel results on multiple layers of a full stack quantum computer model. On the hardware level a robust quantum gate is studied and ion displacement over the X-junction geometry is demonstrated. The experimental organization is optimized through automation and compressed waveform data transmission. A new quantum assembly language purely dedicated to trapped ion quantum computers is introduced. The demonstrator is aimed at testing implementation of quantum error correction codes while preparing for larger scale iterations.
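
    For readers unfamiliar with the logical-qubit idea, the toy sketch below illustrates it with the textbook three-qubit bit-flip repetition code: a classical simulation of syndrome extraction and majority-vote decoding. It is a generic textbook example, not the dressed-state 171Yb+ scheme or the error correction code used in the project.

```python
# Three-qubit bit-flip repetition code: one logical bit on three physical
# bits, with parity-check "syndromes" locating a single flip.
import random

def encode(bit):
    return [bit, bit, bit]

def noisy_channel(qubits, p_flip):
    return [q ^ (random.random() < p_flip) for q in qubits]

def syndrome(qubits):
    # Parity checks Z1Z2 and Z2Z3 locate a single flip without reading
    # out the logical value itself.
    return qubits[0] ^ qubits[1], qubits[1] ^ qubits[2]

def correct(qubits):
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(qubits))
    if flip is not None:
        qubits[flip] ^= 1
    return qubits

def decode(qubits):
    return int(sum(qubits) >= 2)            # majority vote

# With physical error rate p = 0.1, the logical rate drops to ~3p^2 = 0.028
trials = 100_000
errors = sum(decode(correct(noisy_channel(encode(0), 0.1))) for _ in range(trials))
print(errors / trials)                      # ~0.028, below the physical 0.1
```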

    Industry 4.0: product digital twins for remanufacturing decision-making

    Currently there is a desire to reduce natural resource consumption and expand circular business principles, whilst Industry 4.0 (I4.0) is regarded as the evolutionary and potentially disruptive movement of technology, automation, digitalisation, and data manipulation into the industrial sector. The remanufacturing industry is recognised as being vital to the circular economy (CE) as it extends the in-use life of products, but its synergy with I4.0 has had little attention thus far. This thesis documents the first investigation into I4.0 in remanufacturing for a CE, contributing the design and demonstration of a model that optimises remanufacturing planning using data from different instances in a product's life cycle. The initial aim of this work was to identify the I4.0 technology that would enhance stability in remanufacturing with a view to reducing resource consumption. As the project progressed it narrowed to focus on the development of a product digital twin (DT) model to support data-driven decision-making for operations planning. The model's architecture was derived using a bottom-up approach in which requirements were extracted from the identified complications in production planning and control that differentiate remanufacturing from manufacturing. Simultaneously, the benefits of enabling visibility of an asset's through-life health were obtained using a DT as the modus operandi. A product simulator and DT prototype were designed to use Internet of Things (IoT) components, a neural network for remaining-life estimation and a search algorithm for operational planning optimisation. The DT was iteratively developed using case studies to validate and examine the real opportunities that exist in deploying a business model that harnesses, and commodifies, early-life product data for end-of-life processing optimisation. Findings suggest that, using intelligent programming networks and algorithms, a DT can enhance decision-making if it has visibility of the product and access to reliable remanufacturing process information; existing IoT components provide rudimentary "smart" capabilities, but their integration is complex, and the durability of the systems over extended product life cycles needs further exploration.
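
    A minimal sketch of the decision pipeline the abstract outlines: a remaining-life estimate feeding a search over remanufacturing operations. The estimator below is a simple placeholder standing in for the thesis's neural network, and the operations, costs, and feature names are invented for illustration only.

```python
# Hedged sketch: remaining-life estimation + exhaustive search over
# remanufacturing plans. All numbers and names are illustrative.
from itertools import combinations

def estimate_remaining_life(usage_hours, vibration_rms):
    # Stand-in for the trained neural network: maps digital-twin sensor
    # features to an estimated remaining life in hours.
    return max(0.0, 10_000 - 0.8 * usage_hours - 1_500 * vibration_rms)

# Candidate operations: (name, cost, life restored in hours)
OPERATIONS = [("clean", 50, 500), ("replace_bearing", 400, 4_000),
              ("recoat", 250, 2_000), ("rewind_motor", 900, 6_000)]

def plan(remaining_life, required_life):
    """Search operation subsets for the cheapest plan meeting the life
    requirement (tiny search space here; the thesis's search algorithm
    works over a richer operational model)."""
    best = None
    for r in range(len(OPERATIONS) + 1):
        for ops in combinations(OPERATIONS, r):
            life = remaining_life + sum(gain for _, _, gain in ops)
            cost = sum(c for _, c, _ in ops)
            if life >= required_life and (best is None or cost < best[0]):
                best = (cost, [name for name, _, _ in ops])
    return best

life = estimate_remaining_life(usage_hours=9_000, vibration_rms=1.2)
print(plan(life, required_life=8_000))   # cheapest operation set meeting target
```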

    A Syntactical Reverse Engineering Approach to Fourth Generation Programming Languages Using Formal Methods

    Fourth-generation programming languages (4GLs) feature rapid development with minimal configuration required by developers. However, 4GLs can suffer from limitations such as high maintenance cost and legacy software practices. Reverse engineering an existing large legacy 4GL system into a currently maintainable programming language can be a cheaper and more effective solution than rewriting from scratch. Tools do not yet exist for reverse engineering proprietary XML-like and model-driven 4GLs whose full language specification is not in the public domain. This research has developed a novel method of reverse engineering some of the syntax of such 4GLs (with Uniface as an exemplar) derived from a particular system, with a view to providing a reliable method to translate/transpile that system's code and data structures into a modern object-oriented language (such as C#). The method was also applied, although only to a limited extent, to two other 4GLs, Informix and Apex, to show that it was in principle more broadly applicable. A novel method of testing that the syntax had been successfully translated was provided, using 'abstract syntax trees'. The method took manually crafted grammar rules, together with Encapsulated Document Object Model based data from the source language, and then used parsers to produce syntactically valid and equivalent code in the target/output language. This proof-of-concept research has provided a methodology plus sample code to automate part of the process. The methodology comprised a set of manual or semi-automated steps. Further automation is left for future research. In principle, the author's method could be extended to allow recovery of the syntax of systems developed in other proprietary 4GLs. This would reduce the time and cost of ongoing maintenance of such systems by enabling their software engineers to work using modern object-oriented languages, methodologies, tools and techniques.
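
    To illustrate the abstract-syntax-tree equivalence test in miniature, the hedged sketch below uses Python's own ast module as a stand-in for the Uniface-to-C# toolchain described in the work: two snippets that differ only in layout produce the same normalised syntax tree.

```python
# Toy AST-based equivalence check, with Python standing in for the
# source/target 4GL languages of the thesis.
import ast

def normalised_tree(source: str) -> str:
    """Parse source and dump its AST without layout-dependent details,
    so snippets differing only in formatting compare equal."""
    tree = ast.parse(source)
    return ast.dump(tree, annotate_fields=False, include_attributes=False)

def syntactically_equivalent(original: str, transpiled: str) -> bool:
    return normalised_tree(original) == normalised_tree(transpiled)

# Whitespace and comments differ, but the syntax trees match:
a = "total = price * qty  # compute"
b = "total=price*qty"
print(syntactically_equivalent(a, b))   # True
```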

    The problem of hyperbolic discounting


    Collected Papers (on various scientific topics), Volume XIII

    This thirteenth volume of Collected Papers is an eclectic tome of 88 papers in various fields of sciences, such as astronomy, biology, calculus, economics, education and administration, game theory, geometry, graph theory, information fusion, decision making, instantaneous physics, quantum physics, neutrosophic logic and set, non-Euclidean geometry, number theory, paradoxes, philosophy of science, scientific research methods, statistics, and others, structured in 17 chapters (Neutrosophic Theory and Applications; Neutrosophic Algebra; Fuzzy Soft Sets; Neutrosophic Sets; Hypersoft Sets; Neutrosophic Semigroups; Neutrosophic Graphs; Superhypergraphs; Plithogeny; Information Fusion; Statistics; Decision Making; Extenics; Instantaneous Physics; Paradoxism; Mathematica; Miscellanea), comprising 965 pages, published between 2005 and 2022 in different scientific journals, by the author alone or in collaboration with the following 110 co-authors (alphabetically ordered) from 26 countries: Abduallah Gamal, Sania Afzal, Firoz Ahmad, Muhammad Akram, Sheriful Alam, Ali Hamza, Ali H. M. Al-Obaidi, Madeleine Al-Tahan, Assia Bakali, Atiqe Ur Rahman, Sukanto Bhattacharya, Bilal Hadjadji, Robert N. Boyd, Willem K.M. Brauers, Umit Cali, Youcef Chibani, Victor Christianto, Chunxin Bo, Shyamal Dalapati, Mario Dalcín, Arup Kumar Das, Elham Davneshvar, Bijan Davvaz, Irfan Deli, Muhammet Deveci, Mamouni Dhar, R. Dhavaseelan, Balasubramanian Elavarasan, Sara Farooq, Haipeng Wang, Ugur Halden, Le Hoang Son, Hongnian Yu, Qays Hatem Imran, Mayas Ismail, Saeid Jafari, Jun Ye, Ilanthenral Kandasamy, W.B. Vasantha Kandasamy, Darjan Karabašević, Abdullah Kargın, Vasilios N. Katsikis, Nour Eldeen M. Khalifa, Madad Khan, M. Khoshnevisan, Tapan Kumar Roy, Pinaki Majumdar, Sreepurna Malakar, Masoud Ghods, Minghao Hu, Mingming Chen, Mohamed Abdel-Basset, Mohamed Talea, Mohammad Hamidi, Mohamed Loey, Mihnea Alexandru Moisescu, Muhammad Ihsan, Muhammad Saeed, Muhammad Shabir, Mumtaz Ali, Muzzamal Sitara, Nassim Abbas, Munazza Naz, Giorgio Nordo, Mani Parimala, Ion Pătrașcu, Gabrijela Popović, K. Porselvi, Surapati Pramanik, D. Preethi, Qiang Guo, Riad K. Al-Hamido, Zahra Rostami, Said Broumi, Saima Anis, Muzafer Saračević, Ganeshsree Selvachandran, Selvaraj Ganesan, Shammya Shananda Saha, Marayanagaraj Shanmugapriya, Songtao Shao, Sori Tjandrah Simbolon, Florentin Smarandache, Predrag S. Stanimirović, Dragiša Stanujkić, Raman Sundareswaran, Mehmet Șahin, Ovidiu-Ilie Șandru, Abdulkadir Șengür, Mohamed Talea, Ferhat Taș, Selçuk Topal, Alptekin Ulutaș, Ramalingam Udhayakumar, Yunita Umniyati, J. Vimala, Luige Vlădăreanu, Ştefan Vlăduţescu, Yaman Akbulut, Yanhui Guo, Yong Deng, You He, Young Bae Jun, Wangtao Yuan, Rong Xia, Xiaohong Zhang, Edmundas Kazimieras Zavadskas, Zayen Azzouz Omar, Xiaohong Zhang, Zhirou Ma.

    Development of a GPU-accelerated flow simulation method for wind turbine applications

    A novel GPU-accelerated method has been developed for solving the Navier-Stokes equations for bodies of arbitrary geometry in both 2D and 3D. The present method uses vortex particles to discretise the governing equations in the Lagrangian frame. These particles act as vorticity carriers which translate in accordance with the local velocity field; vorticity information is thus propagated from the vorticity source to the rest of the flow domain, mimicking the advection and diffusion processes of the real flow. In the high-fidelity method, vorticity generation can take place around the bodies: the no-slip condition produces a boundary flux which is subsequently diffused to the neighbouring particles. The new method has been successfully validated by simulating the flow field of an impulsively started cylinder; the calculated drag curve matches well with the theoretical prediction and other numerical results in the literature. To extend the applicability of the code to wind-turbine applications, a simplified re-meshing strategy is adopted, which is found to produce only small numerical inaccuracies. In the engineering method, a simplified hybrid approach has been developed which decouples the advection and diffusion processes: viscous effects are ignored on the bodies and are recovered in the wake. For this purpose, the Laplace equation resulting from the irrotational-flow assumption has been solved using the boundary element method. The solution produces a dipole distribution that is subsequently converted to viscous particles by employing Hess' equivalence principle. In addition, an accurate interpolation scheme has been developed to evaluate the dipole gradient across the distorted wake geometry. To reduce the simulation time, the fast multipole method has been implemented on the GPU in 2D and 3D; to parallelise the implementation, a novel data-construction algorithm has been proposed. Furthermore, an analytical expression for the velocity strain has been derived. The newly developed methods have been applied to problems involving aerofoils and vertical-axis wind turbines. Comparisons with experimental data have shown that the new techniques are accurate and can be used with confidence for a wide variety of wind-turbine applications.
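
    A minimal 2D sketch of the vortex-particle idea the abstract describes: particles carry circulation, advect with the regularised Biot-Savart velocity induced by the other particles, and diffuse by random walk. The direct O(N²) sum, parameters, and names below are illustrative assumptions; the thesis adds body boundaries, re-meshing, and a GPU fast multipole method.

```python
# Toy 2D vortex particle method: advection by induced velocity plus
# random-walk viscous diffusion.
import numpy as np

rng = np.random.default_rng(0)

def induced_velocity(pos, gamma, eps=1e-2):
    """Regularised Biot-Savart velocity at each particle from all others
    (direct O(N^2) sum; the thesis replaces this with a GPU FMM)."""
    dx = pos[:, None, :] - pos[None, :, :]        # pairwise offsets
    r2 = np.sum(dx**2, axis=-1) + eps**2          # regularised distance^2
    k = gamma[None, :] / (2 * np.pi * r2)
    u = -k * dx[..., 1]                           # u = -Gamma dy / (2 pi r^2)
    v = k * dx[..., 0]                            # v =  Gamma dx / (2 pi r^2)
    np.fill_diagonal(u, 0.0); np.fill_diagonal(v, 0.0)
    return np.stack([u.sum(axis=1), v.sum(axis=1)], axis=-1)

def step(pos, gamma, dt=0.01, nu=1e-3):
    pos = pos + dt * induced_velocity(pos, gamma)                       # advection
    return pos + rng.normal(scale=np.sqrt(2 * nu * dt), size=pos.shape) # diffusion

# Two counter-rotating vortex blobs propagate together as a dipole
pos = np.array([[0.0, 0.5], [0.0, -0.5]])
gamma = np.array([1.0, -1.0])
for _ in range(100):
    pos = step(pos, gamma)
print(pos)   # both particles have drifted in +x
```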

    Sustaining Glasgow's Urban Networks: the Link Communities of Complex Urban Systems

    As cities grow in population size and become more crowded (UN DESA, 2018), the main future challenge around the world will remain accommodating the growing urban population while drastically reducing environmental pressure. Contemporary urban agglomerations (large or small) constantly impose a burden on the natural environment by conveying ecosystem services to close and distant places through coupled human-nature [infrastructure] systems (CHANS). Tobler's first law of geography (1970), which states that "everything is related to everything else, but near things are more related than distant things", is now challenged by globalization. When this law was first established, the hypothesis referred to geological processes (Campbell and Shin, 2012, p.194) predominantly observed in a pre-globalized economy, where freight was costly and mainly localized (Zhang et al., 2018). With recent advances and modernisation in transport technologies, most of them in sea and air transportation (Zhang et al., 2018), and the growth of cities in population, natural resources and by-products now travel great distances to infiltrate cities (Neuman, 2006) and satisfy human demands. Technical modernisation and the global hyperconnectivity of human interaction and trade have, in the last thirty years alone, resulted in a staggering 94 per cent growth in resource extraction and consumption (Giljum et al., 2015). Local geographies (Kennedy, Cuddihy and Engel-Yan, 2007) will remain affected by global urbanisation (Giljum et al., 2015), and, as a corollary, the operational inefficiencies of their local infrastructure networks will contribute even more to the issues of environmental unsustainability on a global scale. Another challenge for future city-regions is the equity of public infrastructure services and the creation of policy that promotes it (Neuman and Hull, 2009). Public infrastructure services are services provisioned by networked infrastructure, which are subject to both public obligation and market rules; their accessibility to all citizens therefore needs to be safeguarded. The disparity of growth between networked infrastructure and socio-economic dynamics affects the sustainable assimilation of, and equal access to, infrastructure in various districts in cities, rendering it a privilege. Yet the empirical evidence of whether the place of residence acts as a disadvantage to public service access and use remains rather scarce (Clifton et al., 2016). The European Union recognized (EU, 2011) the issue of equality in accessibility (i.e. equity) as critical for territorial cohesion and sustainable development across districts, municipalities and regions with diverse economic performance. Territorial cohesion, formally incorporated into the Treaty of Lisbon, now steers the policy frameworks of territorial development within the Union. Subsequently, the European Union developed a policy paradigm guided by equal access (Clifton et al., 2016) to public infrastructure services, considering their accessibility an instrumental aspect of achieving territorial cohesion across and within its member states. A corollary of increasing the equity of public infrastructure services among a growing global population is the potential increase in the environmental pressure they impose, especially if this pressure is not decentralised and surges at an unsustainable rate (Neuman, 2006).
This danger varies across countries and continents, and is directly linked to the increase of urban population due to: [1] improved quality of life and increased life expectancy and/or [2] urban in-migration of rural population and/or [3] global political or economic immigration. These three rising urban trends demand new approaches that reimagine planning and design practices to foster infrastructure equity whilst delivering environmental justice. Therefore, this research explores in depth the nature of growth of networked infrastructure (Graham and Marvin, 2001) as a complex system and its disparity from the socio-economic growth (or decline) of the Glasgow and Clyde Valley city-region. The results of this research provide new understanding of the potential of emerging tools from network science for developing an optimization strategy that supports a more decentralized, efficient, fair and (as an outcome) sustainable enlargement of urban infrastructure, to accommodate new residents of the city and empower current ones. Applying the novel link-clustering community detection algorithm (Ahn et al., 2010), in this thesis I have presented the potential for better understanding the complexity behind the urban system of networked infrastructure, through discovering its overlapping communities. As I will show in the literature review (Chapter 2), the long-standing tradition of centralised planning practice, relying on zoning and infiltrating infrastructure, has left us with urban settlements that are failing to respond to environmental pressure and socio-economic inequalities. Building on the wealth of knowledge from planners, geographers, sociologists and computer scientists, I developed a new element (i.e. link communities) within the theory of urban studies that defines cities as complex systems. I then applied a method borrowed from the study of complex networks to unpack their basic elements. Knowing the link (i.e. functional, or overlapping) communities of metropolitan Glasgow enabled me to evaluate the current level of community interconnectedness and to reveal the gaps, as well as the potential, for improving the studied system's performance. The complex urban system in metropolitan Glasgow was represented by its networked infrastructure, which was essentially a system of distinct sub-systems, one mapped by a physical graph and the other by a social graph. The conceptual framework for this methodological approach was formalised from the extensively reviewed literature and methods utilising network science tools to detect community structure in complex networks. The literature review led to a hypothesis: that the efficiency of the physical network's topology is achieved by optimizing the number of nodes with high betweenness centrality, while the efficiency of the logical network's topology is achieved by optimizing the number of links with high edge betweenness. The conclusion from the literature review, presented through the discourse on the primal problem in 7.4.1, led to modelling the two network topologies as separate graphs. The bipartite graph of their primal syntax was mirrored to be symmetrical and converted to its dual. From the dual syntax I measured the complete accessibility (i.e. betweenness centrality) of the entire area and not only of the streets. The betweenness centrality of a node measures the number of shortest paths between pairs of nodes that pass through it.
Betweenness centrality is the same as street integration in space syntax, where streets are analysed in their dual-syntax representation. Street integration is the number of intersections a street shares with other streets, and a high value means high accessibility. Edges with high betweenness are shared between strong communities. Based on the theoretical underpinnings of network modularity and community structure analysed herein, it can be concluded that a complex network that is both robust and efficient (in urban planning terminology, 'sustainable') consists of numerous strong communities connected to each other by an optimal number of links with high edge betweenness. To obtain this insight, the study detected the edge cut-set and vertex cut-set of the complex network. The outcome was a statistical model developed in the open-source software R (Ihaka and Gentleman, 1996). The model empirically detects the network's overlapping communities, determining the current sustainability of its physical and logical topologies. Initially, the assumption was that the number of communities within the infrastructure (physical) network layer differed from that in the logical layer. They were detected using the Louvain method, which performs graph partitioning on the hierarchical street structure. Further, the number of communities in the relational network layer (i.e. accessibility to locations) was detected from the origin-destination (OD) accessibility matrix established from the functional dependency between household locations and predefined points of interest. The communities from the graph of the 'relational layer' were discovered with the single-link hierarchical clustering algorithm. The number of communities observed in the physical and the logical topologies of the eight shires deviated significantly.
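
A pocket-sized illustration of the network measures discussed above: the sketch below computes node betweenness, edge betweenness, and Louvain communities with networkx on a stand-in toy graph. The actual study works on Glasgow's physical and logical infrastructure layers and uses Ahn et al.'s link clustering for the overlapping communities, which networkx does not ship out of the box.

```python
# Betweenness and community measures on a toy graph (networkx >= 2.8).
import networkx as nx

G = nx.karate_club_graph()   # stand-in for a dual-syntax street network

# Node betweenness centrality: share of shortest paths through each node
# (analogous to street "integration" in the dual representation).
node_bc = nx.betweenness_centrality(G)

# Edge betweenness: high-value edges bridge strong communities and are
# candidates for the optimal inter-community links discussed above.
edge_bc = nx.edge_betweenness_centrality(G)

# Louvain partitioning, as used above for the physical layer's communities.
communities = nx.community.louvain_communities(G, seed=42)

print(max(node_bc, key=node_bc.get))    # most central node
print(max(edge_bc, key=edge_bc.get))    # highest-betweenness edge
print(len(communities))                 # number of detected communities
```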