    NUMERICAL SIMULATION OF A FLAT-TUBE HIGH POWER DENSITY SOLID OXIDE FUEL CELL

    In recent years, fuel cells have come to be regarded as a high-efficiency, low-polluting power-generation technology, important for a potentially wide variety of applications. Among fuel cell types, solid oxide fuel cells (SOFCs) are recognized as one of the most promising distributed power generation technologies. Tubular SOFCs have evolved over the last two decades, and work is currently underway to reduce cell cost toward commercialization; further development is needed to achieve a commercially competitive cell and stack cost. A flat-tube high power density (HPD) SOFC is a newly designed cell with a geometry different from that of a tubular SOFC. It offers increased power density while retaining the tubular SOFC's beneficial feature of secure sealing. In this study, heat/mass transfer and fluid flow in a single flat-tube HPD SOFC are investigated using a self-developed FORTRAN code. The temperature, species concentration, and velocity fields in the different chambers of a flat-tube HPD SOFC are studied. Based on the temperature and species concentration fields, the overall electrical performance of the cell is evaluated using a commercial tool for electrical circuit analysis. The effects of the number of stack chambers, stack shape, and other stack features on the performance of the flat-tube HPD SOFC are also studied. The results show that a flat-tube HPD SOFC performs better than a tubular SOFC with the same active cell surface, and that increasing the chamber number improves the overall performance and power/volume rating. The study helps to design and optimize the flat-tube HPD SOFC for practical applications so as to achieve widespread utilization of SOFCs. An interesting application example for the SOFC is also presented.

    End-to-End Delay Analysis for Fixed Priority Scheduling in WirelessHART Networks

    The WirelessHART standard has been specifically designed for real-time communication between sensor and actuator devices for industrial process monitoring and control. End-to-end communication delay analysis for WirelessHART networks is required for the acceptance test of real-time data flows from sensors to actuators and for workload adjustment in response to network dynamics. In this paper, we map the scheduling of real-time periodic data flows in a WirelessHART network to real-time multiprocessor scheduling. We then exploit the response time analysis for multiprocessor scheduling and propose a novel method for the end-to-end delay analysis of real-time flows scheduled under a fixed priority policy in a WirelessHART network. Simulations based on both random topologies and real network topologies of a physical testbed demonstrate the efficacy of our end-to-end delay analysis in terms of acceptance ratio under various fixed priority scheduling policies.

    Convergence of flow-based generative models via proximal gradient descent in Wasserstein space

    Flow-based generative models enjoy certain advantages in computing the data generation and the likelihood, and have recently shown competitive empirical performance. Compared to the accumulating theoretical studies on related score-based diffusion models, analysis of flow-based models, which are deterministic in both the forward (data-to-noise) and reverse (noise-to-data) directions, remains sparse. In this paper, we provide a theoretical guarantee of generating the data distribution by a progressive flow model, the so-called JKO flow model, which implements the Jordan-Kinderlehrer-Otto (JKO) scheme in a normalizing flow network. Leveraging the exponential convergence of proximal gradient descent (GD) in Wasserstein space, we prove a Kullback-Leibler (KL) guarantee of data generation by a JKO flow model of O(ε²) when using N ≲ log(1/ε) JKO steps (N residual blocks in the flow), where ε is the error in the per-step first-order condition. The assumption on the data density is merely a finite second moment, and the theory extends to data distributions without density and to inversion errors in the reverse process, where we obtain mixed KL-W₂ error guarantees. The non-asymptotic convergence rate of the JKO-type W₂-proximal GD is proved for a general class of convex objective functionals that includes the KL divergence as a special case, which can be of independent interest.
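    For context, a single step of the scheme named above is the standard W₂-proximal update of the KL objective; the step size h and target distribution π below are notation assumed here, not taken from the abstract:

```latex
\rho_{k+1} \;=\; \operatorname*{arg\,min}_{\rho \in \mathcal{P}_2(\mathbb{R}^d)}
\;\mathrm{KL}(\rho \,\|\, \pi) \;+\; \frac{1}{2h}\, W_2^2(\rho, \rho_k)
```

    Iterating this proximal step is the Wasserstein-space proximal gradient descent whose exponential convergence underlies the O(ε²) KL guarantee.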

    End-to-End Communication Delay Analysis in WirelessHART Networks

    WirelessHART is a new standard specifically designed for real-time and reliable communication between sensor and actuator devices for industrial process monitoring and control applications. End-to-end communication delay analysis for WirelessHART networks is required to determine the schedulability of real-time data flows from sensors to actuators for the purpose of acceptance test or workload adjustment in response to network dynamics. In this paper, we map the scheduling of real-time periodic data flows in a WirelessHART network to real-time multiprocessor scheduling. We then exploit the response time analysis for multiprocessor scheduling and propose a novel method for the delay analysis that establishes an upper bound of the end-to-end communication delay of each real-time flow in a WirelessHART network. Simulation studies based on both random topologies and real network topologies of a 74-node physical wireless sensor network testbed demonstrate that our analysis provides safe and reasonably tight upper bounds of the end-to-end delays of real-time flows, and hence enables effective schedulability tests for WirelessHART networks.
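    The response-time analysis borrowed from multiprocessor scheduling can be illustrated, in its classic single-resource fixed-priority form, by the standard fixed-point iteration below. This is a minimal sketch under assumed flow parameters (and the simplifying assumption that each flow's deadline equals its period); the paper's actual analysis further accounts for channel and transmission conflicts in the WirelessHART mesh:

```python
import math

def response_time(flows, i):
    """Fixed-point response-time iteration for flow i.

    flows: list of (C, T) pairs, index 0 = highest priority,
           C = worst-case transmission time, T = period.
    Returns the worst-case delay bound, or None if flow i can
    miss its deadline (deadline assumed equal to period).
    """
    C_i, T_i = flows[i]
    R = C_i
    while True:
        # Delay = own time plus interference from higher-priority flows.
        R_next = C_i + sum(math.ceil(R / T_j) * C_j
                           for C_j, T_j in flows[:i])
        if R_next == R:
            return R          # fixed point reached: safe upper bound
        if R_next > T_i:
            return None       # exceeds deadline: unschedulable
        R = R_next
```

    For example, with flows (C, T) = (1, 4), (2, 6), (3, 16), the lowest-priority flow converges to a delay bound of 10 after a few iterations.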

    Distributed Channel Allocation Algorithms for Wireless Sensor Networks

    Interference between concurrent transmissions can cause severe performance degradation in wireless sensor networks (WSNs). While the multiple channels available in WSN technology such as IEEE 802.15.4 can be exploited to mitigate interference, channel allocation can have a significant impact on the performance of multi-channel communication. This paper proposes a set of distributed algorithms for near-optimal channel allocation in WSNs with theoretical bounds. We first consider the problem of minimizing the number of channels needed to remove interference in a WSN, and propose both receiver-based and link-based distributed channel allocation protocols. For WSNs with an insufficient number of channels, we formulate a fair channel allocation problem whose objective is to minimize the maximum interference (MinMax) experienced by any transmission link in the network. We prove that MinMax channel allocation is NP-hard and propose a distributed link-based MinMax channel allocation protocol. We also propose a distributed protocol for link scheduling based on MinMax channel allocation. Simulations based on real topologies and data traces collected from a WSN testbed consisting of 74 TelosB motes, as well as on random topologies, show that our channel allocation protocols significantly outperform a state-of-the-art channel allocation protocol.
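    The connection between interference-free channel allocation and graph coloring can be sketched as follows. This is a minimal, centralized illustration over an assumed interference-graph input; the protocols in the paper are distributed and additionally target the MinMax fairness objective when channels are scarce:

```python
def greedy_channel_allocation(conflicts):
    """Assign each node the lowest-numbered channel not used by any
    node it interferes with (greedy coloring of the conflict graph).

    conflicts: dict mapping node -> set of interfering nodes.
    Returns: dict mapping node -> channel index.
    """
    channel = {}
    # Color more-constrained (higher-degree) nodes first.
    for v in sorted(conflicts, key=lambda v: -len(conflicts[v])):
        used = {channel[u] for u in conflicts[v] if u in channel}
        c = 0
        while c in used:
            c += 1
        channel[v] = c
    return channel
```

    On a fully conflicting triple this uses three channels, while on a chain the two endpoints can reuse the same channel.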

    Theory and Algorithms for Partial Order Based Reduction in Planning

    Search is a major technique for planning. It amounts to exploring a state space of planning domains, typically modeled as a directed graph. However, the prohibitively large size of the search space makes search expensive. Developing better heuristic functions has been the main technique for improving search efficiency. Nevertheless, recent studies have shown that improving heuristics alone has fundamental limits. Recently, a new direction of research called partial order based reduction (POR) has been proposed as an alternative to improving heuristics, and it has shown promise in speeding up search. POR has been extensively studied in model checking, where it is a key enabling technique for the scalability of model checking systems, but it has never been developed systematically for planning. In addition, the conditions for POR in the model checking theory are abstract and not directly applicable to planning. Previous works on POR algorithms for planning did not establish the connection between these algorithms and the existing theory in model checking. In this paper, we develop a theory for POR in planning that connects the stubborn set theory in model checking with POR methods in planning. We show that previous POR algorithms in planning can be explained by the new theory. Based on the new theory, we propose a new, stronger POR algorithm. Experimental results on various planning domains show further search cost reduction using the new algorithm.

    An Integrated Data Mining Approach to Real-time Clinical Monitoring and Deterioration Warning

    Clinical studies have found that early detection and intervention are essential for preventing clinical deterioration in patients, both in intensive care units (ICUs) and in general wards under real-time data sensing (RDS). In this paper, we develop an integrated data mining approach to give early deterioration warnings for patients under real-time monitoring in ICU and RDS settings. Existing work on mining real-time clinical data often focuses on a single vital sign and a specific disease. Here, we consider an integrated data mining approach for general sudden deterioration warning. We synthesize a large feature set that includes first- and second-order time-series features, detrended fluctuation analysis (DFA), spectral analysis, approximate entropy, and cross-signal features. We then systematically apply and evaluate a series of established data mining methods, including forward feature selection, linear and nonlinear classification algorithms, and exploratory undersampling for class imbalance. An extensive empirical study is conducted on real patient data collected between 2001 and 2008 from a variety of ICUs. Results show the benefit of each of the proposed techniques, and the final integrated approach significantly improves the prediction quality. The proposed clinical warning system is currently being integrated with the electronic medical record system at Barnes-Jewish Hospital in preparation for a clinical trial. This work represents a promising step toward general early clinical warning, which has the potential to significantly improve the quality of patient care in hospitals.
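    Among the listed features, approximate entropy has a compact standard definition (Pincus's ApEn for a time series). The parameter choices m = 2 and tolerance r in this sketch are common defaults and are assumptions here, not values taken from the paper:

```python
import math

def approx_entropy(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r): lower values indicate a more
    regular (predictable) signal, higher values a more irregular one."""
    n = len(x)

    def phi(m):
        # All length-m templates and, for each, the fraction of
        # templates within Chebyshev distance r (self-match included).
        templates = [x[i:i + m] for i in range(n - m + 1)]
        fracs = [
            sum(1 for b in templates
                if max(abs(u - v) for u, v in zip(a, b)) <= r)
            / len(templates)
            for a in templates
        ]
        return sum(math.log(f) for f in fracs) / len(templates)

    return phi(m) - phi(m + 1)
```

    A strictly periodic series such as 1, 2, 1, 2, ... yields an ApEn close to zero, reflecting its regularity.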

    Designing Intelligent Software Agents for B2B Sequential Dutch Auctions: A Structural Econometric Approach

    We study multi-unit sequential Dutch auctions in a complex B2B context. Using a large real-world dataset, we apply structural econometric analysis to recover the parameters governing the distribution of bidders' valuations. The identification of these parameters allows us to simulate auction results under different designs and perform policy counterfactuals. We also develop a dynamic optimization approach to guide the setting of key auction parameters. Given the bounded rationality of human decision makers, we propose to augment auctioneers' capabilities with high-performance decision support tools in the form of software agents. Our paper contributes to both the theory and practice of auction design. From the theoretical perspective, this is the first study that explicitly models the sequential aspects of Dutch auctions using structural econometric analysis. From the managerial perspective, this paper offers useful implications for business practitioners facing complex decision making in B2B auctions.

    Information Transparency in Multi-Channel B2B Auctions: A Field Experiment

    With the large amount of data available via different channels, firms have increasingly viewed information transparency as an important component of their strategy. This paper examines how the disclosure of winners' information affects sellers' revenues in multi-channel B2B sequential auctions. Using a field experiment, we find that bidders tend to pay higher prices when winners' identities are concealed from public view. At first glance, this finding contradicts the prediction of the well-known linkage principle in auction theory. Our empirical analysis suggests that anonymizing winning bids might discourage tacit collusion and mitigate the declining price trend in these B2B sequential auctions. This paper contributes to the growing literature on information transparency in market design. It also provides valuable insights to practitioners in designing information revelation policies in complex B2B markets.