
    Numerical Method for Calculation of Power Conversion Efficiency and Colorimetrics of Rectangular Luminescent Solar Concentrators

    Similar to conventional photovoltaics, the path toward higher efficiencies for luminescent solar concentrators (LSCs) has led to increased interest in tandem structures. Herein, a numerical calculation is proposed that allows for much faster estimates of fundamental LSC performance indicators (power conversion efficiency, average visible transmission, and color-rendering index) than ray-trace simulations. Both double and triple structures are assessed, taking into account concentrations, absorption and emission spectra, and quantum yield as luminophore inputs for rectangular LSCs of any size. The waveguide material is modeled using an absorption spectrum and a refractive index. Interactions between the first, second, and/or third LSCs are incorporated into the algorithm. A comparison with ray-trace results shows good correspondence. Supplementary Excel files with detailed calculations are available for future research and industry applications.
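    A zeroth-order flavor of such a calculation can be sketched in a few lines. The snippet below is an illustrative assumption, not the paper's algorithm: it estimates an LSC's optical efficiency as (absorbed fraction of incident flux) x (luminophore quantum yield) x (fraction of emission trapped by total internal reflection), ignoring reabsorption, waveguide absorption, and edge effects; all names and numbers are made up.

```python
import numpy as np

# Illustrative zeroth-order LSC estimate (an assumption, not the paper's
# algorithm): optical efficiency ~ absorbed fraction x quantum yield x
# trapped fraction, ignoring reabsorption and waveguide losses.

def trapping_efficiency(n_waveguide: float) -> float:
    """Fraction of isotropic emission trapped by total internal reflection."""
    return np.sqrt(1.0 - 1.0 / n_waveguide**2)

def optical_efficiency(solar_flux, absorbance, quantum_yield, n_waveguide):
    """Flux-weighted absorbed fraction times QY times trapping efficiency.
    solar_flux and absorbance are arrays on a common, uniform wavelength grid."""
    absorbed = 1.0 - 10.0 ** (-absorbance)          # Beer-Lambert per wavelength
    frac_absorbed = np.sum(absorbed * solar_flux) / np.sum(solar_flux)
    return frac_absorbed * quantum_yield * trapping_efficiency(n_waveguide)

# Made-up inputs: PMMA-like waveguide (n ~ 1.49), luminophore QY = 0.9
wl = np.linspace(350, 750, 401)                     # nm, uniform grid
flux = np.ones_like(wl)                             # flat stand-in for AM1.5G
absb = np.where((wl > 400) & (wl < 550), 0.5, 0.0)  # toy absorbance band
print(f"eta_opt ~ {optical_efficiency(flux, absb, 0.9, 1.49):.3f}")
```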

    Tradition and Innovation in Construction Project Management

    This book is a reprint of the Special Issue 'Tradition and Innovation in Construction Project Management' that was published in the journal Buildings.

    Complexity Science in Human Change

    This reprint encompasses fourteen contributions that offer avenues towards a better understanding of complex systems in human behavior. The phenomena studied here are generally pattern formation processes that originate in social interaction and psychotherapy. Several accounts are also given of the coordination in body movements and in physiological, neuronal, and linguistic processes. A common denominator of such pattern formation is that the complexity and entropy of the respective systems become reduced spontaneously, which is the hallmark of self-organization. The various methodological approaches to modeling such processes are presented in some detail. Results from the various methods are systematically compared and discussed. Among these approaches are algorithms for the quantification of synchrony by cross-correlational statistics, surrogate control procedures, recurrence mapping, and network models. This volume offers an informative and sophisticated resource for scholars of human change, as well as for students at advanced levels, from graduate to post-doctoral. The reprint is multidisciplinary in nature, binding together the fields of medicine, psychology, physics, and neuroscience.
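    As an illustration of the first of these approaches (a generic sketch, not the book's code), synchrony between two time series can be quantified as the peak lagged cross-correlation and tested against shuffle surrogates:

```python
import numpy as np

# Sketch of synchrony quantification via lagged cross-correlation, with a
# shuffle-surrogate control to test whether the observed synchrony exceeds
# chance. Signals and parameters are invented for illustration.

def max_lagged_xcorr(x, y, max_lag=10):
    """Peak absolute Pearson correlation over lags -max_lag..max_lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            r = np.corrcoef(x[:lag], y[-lag:])[0, 1]
        elif lag > 0:
            r = np.corrcoef(x[lag:], y[:-lag])[0, 1]
        else:
            r = np.corrcoef(x, y)[0, 1]
        best = max(best, abs(r))
    return best

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 20, 500)) + 0.3 * rng.normal(size=500)
y = np.roll(x, 3) + 0.3 * rng.normal(size=500)        # y lags x by 3 samples
observed = max_lagged_xcorr(x, y)
surrogates = [max_lagged_xcorr(rng.permutation(x), y) for _ in range(200)]
p = np.mean([s >= observed for s in surrogates])      # surrogate p-value
print(f"observed={observed:.2f}, surrogate p={p:.3f}")
```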

    Training Latency Minimization for Model-Splitting Allowed Federated Edge Learning

    To alleviate the shortage of computing power faced by clients in training deep neural networks (DNNs) using federated learning (FL), we leverage edge computing and split learning to propose a model-splitting allowed FL (SFL) framework, with the aim of minimizing the training latency without loss of test accuracy. Under the synchronized global update setting, the latency to complete a round of global training is determined by the maximum latency for the clients to complete a local training session. Therefore, the training latency minimization problem (TLMP) is modelled as a min-max problem. To solve this mixed integer nonlinear programming problem, we first propose a regression method to fit the quantitative relationship between the cut layer and other parameters of an AI model, and thus transform the TLMP into a continuous problem. Considering that the two subproblems involved in the TLMP, namely the cut-layer selection problem for the clients and the computing resource allocation problem for the parameter server, are relatively independent, an alternate-optimization-based algorithm with polynomial time complexity is developed to obtain a high-quality solution to the TLMP. Extensive experiments are performed on the popular DNN model EfficientNetV2 using the MNIST dataset, and the results verify the validity and improved performance of the proposed SFL framework.
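    The alternate-optimization idea can be sketched as follows. The latency model, the stand-in for the fitted cut-layer regression, and the proportional reallocation heuristic below are all assumptions for illustration, not the paper's formulation: each iteration optimizes the (continuous) cut point per client with the server allocation fixed, then reallocates server cycles with the cut points fixed.

```python
import numpy as np

# Hedged sketch of alternate optimization for a min-max round latency.
# Client i splits its model at continuous cut point s_i in (0, 1):
# client-side compute grows with s_i, server-side compute shrinks with it,
# and the server's cycles F are shared among clients (allocation f_i).

def latency(s, f, client_speed, W=1.0):
    comm = 0.2 * (1.0 - s) ** 0.5           # stand-in for fitted activation size
    return W * s / client_speed + comm + W * (1.0 - s) / f

def alternate_opt(client_speed, F=10.0, iters=50):
    n = len(client_speed)
    s = np.full(n, 0.5)                     # cut points
    f = np.full(n, F / n)                   # server cycle allocation
    for _ in range(iters):
        # Subproblem 1: best cut point per client via grid search (f fixed)
        grid = np.linspace(0.01, 0.99, 99)
        for i in range(n):
            s[i] = grid[np.argmin([latency(g, f[i], client_speed[i]) for g in grid])]
        # Subproblem 2: push server cycles toward slower clients (s fixed);
        # a heuristic proportional rule, not the paper's exact solver
        need = np.array([latency(s[i], 1.0, client_speed[i]) for i in range(n)])
        f = F * need / need.sum()
    return s, f, max(latency(s[i], f[i], client_speed[i]) for i in range(n))

s, f, round_latency = alternate_opt(client_speed=np.array([1.0, 2.0, 4.0]))
print(f"round latency ~ {round_latency:.3f}")   # max over clients (min-max objective)
```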

    Techno-optimization of CO2 transport networks with constrained pipeline parameters

    In planning large-scale carbon sequestration projects, one of the key parameters affecting project economics is the selection of optimal pipeline transportation networks connecting physical locations of carbon sources to sinks (or injection sites). This network is usually determined based on several limiting factors, including existing right-of-way, densely populated regions, topology, etc. Open-source tools such as SimCCS2.0 do an effective job of proposing provably optimal routes for construction of new pipelines but are unable to accommodate existing pipelines in techno-economic optimization. With the newly amended 45Q laws offering 70% more tax credits for carbon sequestration than the 2018 amendment did, energy companies are looking more into repurposing gas and liquid transportation lines for CO2 transportation to abandoned oil and gas wells for carbon storage, and this has further bolstered the need for a method to account for existing pipelines in sequestration economics. This project demonstrates a method to account for existing pipelines by (1) introducing zero-cost paths into the cost surface to represent pipelines, (2) allowing for tie points into the existing pipeline by use of cost exclusion zones around zero-cost paths, and then (3) calculating least-cost paths and defining transshipment nodes along pipeline intersections. Doing this allowed for a reformulation of the alternate network paths between sources and sinks, and the network was then solved as a minimum-cost network flow problem (MCNFP) modeled as a mixed integer programming problem. The solution was developed using the Python programming language, and demo test cases are shown to illustrate its effectiveness in assessing the cost reduction associated with CO2 transfer from sources tied into locations along existing transport pipelines to sinks. This solution has been packaged into a software tool named Sequestrix and has been made publicly available on GitHub for researchers and economic analysts to use in evaluating large-scale CCUS projects, and to encourage further development and collaboration.
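    The core idea of representing existing pipelines as zero-cost arcs and solving an MCNFP can be illustrated with a toy network (a sketch using networkx, not Sequestrix itself; all node names, costs, and capacities are invented):

```python
import networkx as nx

# Toy illustration of the approach described above: existing pipelines enter
# the network as zero-cost arcs, and source-to-sink routing is solved as a
# minimum-cost network flow problem.

G = nx.DiGraph()
# demand < 0 injects flow (CO2 source), demand > 0 absorbs it (storage sink)
G.add_node("source_A", demand=-5)   # 5 units of captured CO2
G.add_node("tie_in", demand=0)      # transshipment node on an existing line
G.add_node("sink_B", demand=5)

# New-build pipeline arcs carry construction cost per unit of flow
G.add_edge("source_A", "tie_in", weight=4, capacity=10)   # short new spur
G.add_edge("source_A", "sink_B", weight=20, capacity=10)  # long new line
# Repurposed existing pipeline: zero cost, finite capacity
G.add_edge("tie_in", "sink_B", weight=0, capacity=10)

flow = nx.min_cost_flow(G)  # dict of dicts: flow[u][v] = units shipped
print(flow)                 # routes all 5 units via the existing pipeline
print("total cost:", nx.cost_of_flow(G, flow))
```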

    Contributions to autonomous robust navigation of mobile robots in industrial applications

    An aspect in which current mobile platforms lag behind the level already reached elsewhere in industry is precision. The fourth industrial revolution brought with it the deployment of machinery in most industrial processes, and one of its strengths is repeatability. Autonomous mobile robots, which are the ones offering the greatest flexibility, lack this capability, mainly due to the noise inherent in the readings provided by their sensors and to the dynamism present in most environments. For this reason, a large part of this work focuses on quantifying the error committed by the main mapping and localization methods for mobile robots, offering several alternatives for improving positioning. Likewise, the main sources of information with which mobile robots are able to perform the functions described are exteroceptive sensors, which measure the environment rather than the state of the robot itself. For this same reason, some methods are highly dependent on the scenario in which they were developed and do not obtain the same results when it varies. Most mobile platforms generate a map representing the environment around them and base many of their calculations on it to carry out actions such as navigating. This map generation is a process that in most cases requires human intervention and has a great impact on the subsequent operation of the robot. In the last part of the present work, a method is proposed that aims to optimize this step so as to generate a richer model of the environment without requiring additional time to do so.
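    One common way to quantify the positioning error mentioned above is the absolute trajectory error (ATE); the metric choice here is an assumption for illustration, not necessarily the one used in the thesis:

```python
import numpy as np

# Illustrative localization-error metric (an assumption, not the thesis'
# exact method): absolute trajectory error (ATE), the RMSE between estimated
# and ground-truth positions, assumed already expressed in the same frame.

def ate_rmse(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
    """RMSE of position error; both arrays are (N, 2) x/y trajectories."""
    err = estimated - ground_truth
    return float(np.sqrt(np.mean(np.sum(err**2, axis=1))))

gt = np.stack([np.linspace(0, 10, 100), np.zeros(100)], axis=1)
est = gt + np.random.default_rng(1).normal(scale=0.05, size=gt.shape)
print(f"ATE RMSE = {ate_rmse(est, gt):.3f} m")
```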

    Piggyback on Idle Ride-Sourcing Drivers for Intracity Parcel Delivery

    This paper investigates the operational strategies for an integrated platform that provides both ride-sourcing services and intracity parcel delivery services over a transportation network, utilizing the idle time of ride-sourcing drivers. Specifically, the integrated platform simultaneously offers on-demand ride-sourcing services for passengers and multiple modes of parcel delivery services for customers, including: (1) on-demand delivery, where drivers immediately pick up and deliver parcels upon receiving a delivery request; and (2) flexible delivery, where drivers can pick up (or drop off) parcels only when they are idle and waiting for the next ride-sourcing request. A continuous-time Markov chain (CTMC) model is proposed to characterize the status changes of drivers under the joint movement of passengers and parcels over the transportation network with limited vehicle capacity, where the service quality of ride-sourcing, on-demand delivery, and flexible delivery services is rigorously quantified. Building on the CTMC model, incentives for ride-sourcing passengers, delivery customers, drivers, and the platform are captured through an economic equilibrium model, and the optimal operational decisions of the platform are derived by solving a non-convex profit-maximization problem. We prove the well-posedness of the model and develop a tailored algorithm to compute the optimal decisions of the platform at an accelerated speed. Furthermore, we validate the proposed model in a comprehensive case study for San Francisco, demonstrating that joint management of ride-sourcing services and intracity package delivery services can lead to a Pareto improvement that benefits all stakeholders in the integrated ride-sourcing and parcel delivery market.
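    The CTMC machinery underlying such a model can be sketched generically (this is standard continuous-time Markov chain math, not the paper's specific state space; the three-state example and its rates are invented): given a generator matrix Q, the long-run fraction of time drivers spend in each status solves pi Q = 0 with pi summing to one.

```python
import numpy as np

# Generic CTMC stationary distribution: Q has transition rates off-diagonal
# and rows summing to zero. In the paper's setting, states would encode
# driver status (e.g., idle, carrying a passenger, carrying a parcel).

def stationary_distribution(Q: np.ndarray) -> np.ndarray:
    n = Q.shape[0]
    # Stack the balance equations pi Q = 0 with the normalization sum(pi) = 1
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 3-state example: idle <-> serving a ride, idle <-> making a delivery
Q = np.array([[-1.0,  0.7,  0.3],   # idle: rate 0.7 to ride, 0.3 to delivery
              [ 2.0, -2.0,  0.0],   # ride completes at rate 2
              [ 1.0,  0.0, -1.0]])  # delivery completes at rate 1
print(stationary_distribution(Q))   # long-run fraction of time in each state
```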

    A Deep Learning Approach to Evaluating Disease Risk in Coronary Bifurcations

    Cardiovascular disease represents a large burden on modern healthcare systems, requiring significant resources for patient monitoring and clinical interventions. It has been shown that the blood flow through the coronary arteries, shaped by the artery geometry unique to each patient, plays a critical role in the development and progression of heart disease. However, popular and well-tested cardiovascular disease risk models such as Framingham and QRISK3 are not able to take these differences into account when predicting disease risk. Over the last decade, medical imaging and image processing have advanced to the point that non-invasive high-resolution 3D imaging is routinely performed for any patient suspected of coronary artery disease. This allows for the construction of virtual 3D models of the coronary anatomy and in-silico analysis of blood flow within the coronaries. However, several challenges still exist that preclude the large-scale patient-specific simulations necessary for incorporating haemodynamic risk metrics as part of disease risk prediction. In particular, despite a large amount of available coronary medical imaging, extraction of the structures of interest from medical images remains a manual and laborious task. There is significant variation in how geometric features of the coronary arteries are measured, which makes comparisons between different studies difficult. Modelling blood flow conditions in the coronary arteries likewise requires manual preparation of the simulations and significant computational cost. This thesis aims to solve these challenges. The "Automated Segmentation of Coronary Arteries (ASOCA)" challenge establishes a benchmark dataset of coronary arteries and their associated 3D reconstructions, which is currently the largest openly available dataset of coronary artery models and offers a wide range of applications such as computational modelling, 3D printing for experiments, development and testing of medical devices such as stents, and Virtual Reality applications for education and training. An automated computational modelling workflow is developed to set up, run, and postprocess simulations on the Left Main Bifurcation and calculate relevant shape metrics. A convolutional neural network model is developed to replace the computational fluid dynamics process; it can predict haemodynamic metrics such as wall shear stress in minutes, compared to several hours using traditional computational modelling, reducing the computation and labour cost involved in performing such simulations.
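    The surrogate-model idea in the last step can be sketched as follows; the architecture, input representation, and shapes below are assumptions for illustration, not the network used in the thesis:

```python
import torch
import torch.nn as nn

# Minimal sketch of a CFD surrogate: a 3D CNN maps a voxelized representation
# of a coronary bifurcation to a haemodynamic quantity such as wall shear
# stress at sampled wall points, replacing a multi-hour simulation with a
# single forward pass. All shapes and layer sizes are invented.

class WSSSurrogate(nn.Module):
    def __init__(self, in_channels=1, out_points=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, out_points)  # WSS at sampled wall points

    def forward(self, x):
        return self.head(self.encoder(x))

model = WSSSurrogate()
geometry = torch.randn(2, 1, 32, 32, 32)   # batch of voxelized artery segments
wss = model(geometry)                      # (2, 256) predicted WSS values
print(wss.shape)
```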