
    Development of an Integrated Process, Modeling and Simulation Platform for Performance-Based Design of Low-Energy and High IEQ Buildings

    The objective of this study was to develop a Virtual Design Studio (VDS): a software platform for the integrated, coordinated, and optimized design of green building systems with low energy consumption, high indoor environmental quality (IEQ), and a high level of sustainability. The VDS is intended to assist collaborating architects, engineers, and project management team members from the early phases through the detailed building design stages. It can be used to plan design tasks and workflow, and to evaluate the potential impacts of various green building strategies on building performance, using state-of-the-art simulation tools as well as industrial/professional standards and guidelines for green building system design.
    Based on a review and analysis of existing professional practices in building system design, particularly those used in the U.S., Germany, and the UK, a generic process for performance-based building design, construction, and operation was proposed. It includes the Assess, Define, Design, Apply, and Monitoring (ADDAM) stages; the current VDS focuses on the first three. The VDS treats building design as a multi-dimensional process involving multiple design teams, design factors, and design stages; the intersection of these three dimensions defines a specific design task in terms of who, what, and when. It also treats building design as a multi-objective process that aims to enhance five aspects of green building system performance: site sustainability, materials and resource efficiency, water utilization efficiency, energy efficiency and impacts on the atmospheric environment, and IEQ. The current VDS development has been limited to energy efficiency and IEQ performance, with particular focus on thermal, air quality, and lighting environmental quality because of their strong interaction with the energy performance of buildings.
    The VDS software framework contains four major functions:
    1) Design coordination: it enables users to define tasks using the Input-Process-Output flow approach, which specifies the anticipated activities (i.e., the process), the required input and output information, and the anticipated interactions with other tasks. It also allows task scheduling to define the workflow, and sharing of design data and information via the internet.
    2) Modeling and simulation: it enables users to perform building simulations to predict the energy consumption and IEQ conditions at any of the design stages, using EnergyPlus and a combined heat, air, moisture and pollutant simulation (CHAMPS) model. A co-simulation method was developed to allow both models to run at the same time step for combined energy and indoor air quality analysis.
    3) Results visualization: it enables users to display a 3-D geometric design of the building by reading a BIM (building information model) file generated by design software such as SketchUp, together with the predicted distributions of heat, air, moisture, pollutants, and light in the building.
    4) Performance evaluation: it enables users to compare the performance of a proposed building design against a reference building defined for the same building type under the same climate conditions, and to predict the percentage of improvement over the minimum requirements specified in ASHRAE Standards 55-2010, 62.1-2010, and 90.1-2010.
    An approach was developed to estimate the potential impact of a design factor on whole-building performance, and hence to assist the user in identifying the areas with the greatest payback on investment. The VDS software was developed in C++ with the conventional Model-View-Controller (MVC) software architecture. The software has been verified with a simple 3-zone case building, and the application of the VDS concepts and framework for building design and performance analysis has been illustrated with a medium-size, five-story office building that received LEED Platinum certification from the USGBC.
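
    As an illustration of the co-simulation approach described in function 2) above, the following is a minimal Python sketch of a fixed-time-step coupling loop in which an energy model and an indoor-environment model exchange boundary data at every step. The EnergyModel and ChampsModel classes, their step signatures, and the numbers are hypothetical placeholders; the actual VDS is implemented in C++ and couples EnergyPlus with CHAMPS directly.

        # Fixed-time-step co-simulation sketch: both models advance with the same
        # time step and exchange zone conditions each step. All classes and values
        # are hypothetical stand-ins, not the actual VDS (C++) interfaces.

        class EnergyModel:
            """Stand-in for an EnergyPlus-style whole-building energy model."""
            def step(self, t, zone_pollutant_loads):
                # Advance one step; return zone air temperatures and airflows.
                return {"zone_temps": {"z1": 22.5}, "airflows": {"z1": 0.12}}

        class ChampsModel:
            """Stand-in for a CHAMPS-style heat/air/moisture/pollutant model."""
            def step(self, t, zone_temps, airflows):
                # Advance one step; return zone pollutant loads for the energy side.
                return {"z1": 0.003}

        def co_simulate(hours=24, dt=600):
            """Run both models at the same time step, exchanging data each step."""
            energy, champs = EnergyModel(), ChampsModel()
            loads = {"z1": 0.0}                   # initial pollutant loads
            state = {}
            for step in range(int(hours * 3600 / dt)):
                t = step * dt
                state = energy.step(t, loads)     # energy side uses latest IAQ loads
                loads = champs.step(t, state["zone_temps"], state["airflows"])
            return state, loads

        if __name__ == "__main__":
            print(co_simulate(hours=1))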

    Advertising strategy for profit-maximization: a novel practice on Tmall's online ads manager platforms

    Ads manager platforms have gained popularity among numerous e-commerce vendors/advertisers: they help advertisers facilitate the process of displaying their ads to target customers. One of the main challenges faced by advertisers, especially small and medium-sized enterprises, is configuring their advertising strategy properly. An ineffective advertising strategy brings too many "just looking" clicks and eventually generates advertising expenditure disproportionate to the growth of sales. In this paper, we present a novel profit-maximization model for online advertising optimization. The optimization problem is constructed to find the optimal set of features that maximizes the probability that target customers buy the advertised products. We further reformulate the optimization problem as a knapsack problem with changeable parameters and introduce a self-adjusted algorithm for finding its solution. Numerical experiments based on statistical data from Tmall show that our proposed method can effectively optimize the advertising strategy under a given expenditure budget.
    Comment: online advertising campaign
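
    The knapsack reformulation above can be pictured with a small sketch: candidate targeting features carry an expected cost and an expected profit lift, and a subset is chosen under the expenditure budget. The feature names, numbers, and the greedy ratio heuristic below are illustrative assumptions only; they are not the paper's self-adjusted algorithm or its changeable parameters.

        # 0/1-knapsack-style feature selection under an advertising budget.
        # Greedy by profit-per-cost ratio, a standard knapsack heuristic; the
        # features and numbers are hypothetical.

        def select_features(features, budget):
            """features: list of (name, expected_cost, expected_profit_lift)."""
            ranked = sorted(features, key=lambda f: f[2] / f[1], reverse=True)
            chosen, spent = [], 0.0
            for name, cost, lift in ranked:
                if spent + cost <= budget:
                    chosen.append(name)
                    spent += cost
            return chosen, spent

        if __name__ == "__main__":
            candidates = [                     # hypothetical targeting features
                ("age_18_25", 120.0, 300.0),
                ("city_tier_1", 200.0, 380.0),
                ("interest_fashion", 90.0, 260.0),
                ("device_mobile", 60.0, 80.0),
            ]
            print(select_features(candidates, budget=300.0))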

    Stars Are All You Need: A Distantly Supervised Pyramid Network for Unified Sentiment Analysis

    Data for the Rating Prediction (RP) sentiment analysis task, such as star reviews, are readily available. However, data for aspect-category detection (ACD) and aspect-category sentiment analysis (ACSA) are often desired because of their fine-grained nature but are expensive to collect. In this work, we propose Unified Sentiment Analysis (Uni-SA) to understand aspect and review sentiment in a unified manner. Specifically, we propose a Distantly Supervised Pyramid Network (DSPN) to efficiently perform ACD, ACSA, and RP using only RP labels for training. We evaluate DSPN on multi-aspect review datasets in English and Chinese and find that, in addition to its sample-size efficiency, DSPN performs comparably to a variety of benchmark models. We also demonstrate the interpretability of DSPN's outputs on reviews to show the pyramid structure inherent in unified sentiment analysis.
    Comment: 15 pages, 3 figures, 5 tables
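
    The pyramid idea of training aspect-level components from review-level stars only can be sketched as follows. This is a minimal PyTorch illustration under assumed dimensions and a simple attention/pooling scheme; it is not the DSPN architecture itself.

        # Latent aspect-level sentiments are pooled into a review-level rating, so
        # only star (RP) labels are needed for training; the intermediate tensors
        # give ACD/ACSA-style outputs. Dimensions and pooling are illustrative.

        import torch
        import torch.nn as nn

        class PyramidSketch(nn.Module):
            def __init__(self, emb_dim=64, n_aspects=5, n_stars=5):
                super().__init__()
                self.aspect_queries = nn.Parameter(torch.randn(n_aspects, emb_dim))
                self.aspect_sentiment = nn.Linear(emb_dim, 3)   # neg/neutral/pos
                self.rating_head = nn.Linear(n_aspects * 3, n_stars)

            def forward(self, token_embs):                      # (batch, seq, emb_dim)
                # Bottom level: attend each aspect query over the review tokens.
                attn = torch.softmax(token_embs @ self.aspect_queries.t(), dim=1)
                aspect_reprs = attn.transpose(1, 2) @ token_embs            # (b, A, d)
                # Middle level: per-aspect sentiment distributions.
                aspect_sent = torch.softmax(self.aspect_sentiment(aspect_reprs), dim=-1)
                # Top level: review rating predicted from aspect sentiments only.
                return self.rating_head(aspect_sent.flatten(1)), aspect_sent

        # Training applies cross-entropy on the rating logits against star labels;
        # the aspect_sent tensor is read off as the distantly supervised output.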

    L^2R: Lifelong Learning for First-stage Retrieval with Backward-Compatible Representations

    First-stage retrieval is a critical task that aims to retrieve relevant document candidates from a large-scale collection. While existing retrieval models have achieved impressive performance, they are mostly studied on static datasets, ignoring that, in the real world, data on the Web is continuously growing, with potential distribution drift. Consequently, retrievers trained on static old data may not suit newly arriving data well and inevitably produce sub-optimal results. In this work, we study lifelong learning for first-stage retrieval, focusing especially on the setting where the emerging documents are unlabeled, since relevance annotation is expensive and may not keep up with data emergence. Under this setting, we aim to develop model updating with two goals: (1) to effectively adapt to the evolving distribution with the unlabeled newly arriving data, and (2) to avoid re-inferring all embeddings of old documents, so that the index can be updated efficiently each time the model is updated. We first formalize the task and then propose a novel Lifelong Learning method for first-stage Retrieval, namely L^2R. L^2R adopts the typical memory mechanism for lifelong learning and incorporates two crucial components: (1) selecting diverse support negatives for model training and memory updating, for effective model adaptation, and (2) a ranking alignment objective to ensure the backward compatibility of representations, saving the cost of index rebuilding without hurting model performance. For evaluation, we construct two new benchmarks from the LoTTE and Multi-CPR datasets to simulate document distribution drift in realistic retrieval scenarios. Extensive experiments show that L^2R significantly outperforms competitive lifelong learning baselines.
    Comment: accepted by CIKM202
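
    The backward-compatibility component can be pictured with a small sketch of a ranking-alignment style loss: the updated query encoder is trained so that its scores against the old, frozen document embeddings preserve the old model's pairwise ranking, which is what lets the existing index be reused without re-encoding. The hinge formulation and margin below are illustrative assumptions, not the exact L^2R objective.

        # Ranking-alignment sketch: penalize the new query encoder whenever it flips
        # a document pair that the old model separated, scored against the frozen
        # old document embeddings. Margin and pair sampling are illustrative.

        import torch
        import torch.nn.functional as F

        def ranking_alignment_loss(new_q, old_q, old_doc_embs, margin=0.0):
            """new_q, old_q: (batch, dim) queries from the new/old encoders.
            old_doc_embs: (n_docs, dim) frozen embeddings from the old index."""
            new_scores = new_q @ old_doc_embs.t()                 # (batch, n_docs)
            old_scores = old_q @ old_doc_embs.t()
            order = torch.sign(old_scores.unsqueeze(2) - old_scores.unsqueeze(1))
            diff = new_scores.unsqueeze(2) - new_scores.unsqueeze(1)
            # Hinge penalty on pairs whose old-model ordering is violated.
            return F.relu(margin - diff * order).mean()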

    Continual Learning for Generative Retrieval over Dynamic Corpora

    Generative retrieval (GR) directly predicts the identifiers of relevant documents (i.e., docids) based on a parametric model. It has achieved solid performance on many ad-hoc retrieval tasks. So far, these tasks have assumed a static document collection. In many practical scenarios, however, document collections are dynamic, where new documents are continuously added to the corpus. The ability to incrementally index new documents while preserving the ability to answer queries with both previously and newly indexed relevant documents is vital to applying GR models. In this paper, we address this practical continual learning problem for GR. We put forward a novel Continual-LEarner for generatiVE Retrieval (CLEVER) model and make two major contributions to continual learning for GR: (i) to encode new documents into docids with low computational cost, we present Incremental Product Quantization, which updates a partial quantization codebook according to two adaptive thresholds; and (ii) to memorize new documents for querying without forgetting previous knowledge, we propose a memory-augmented learning mechanism to form meaningful connections between old and new documents. Empirical results demonstrate the effectiveness and efficiency of the proposed model.
    Comment: Accepted by CIKM 202
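
    Contribution (i) can be sketched roughly as follows: only the sub-codebooks whose quantization error on the newly arriving documents exceeds adaptive thresholds are refreshed, so most of the existing codebook, and therefore most existing docids, stay intact. The two-threshold rule, the update steps, and all constants below are illustrative assumptions, not CLEVER's actual Incremental Product Quantization procedure.

        # Partial product-quantization codebook update driven by two thresholds.
        # codebooks: list of M arrays of shape (K, d_sub); new_vecs: (n, M * d_sub).
        # Thresholds, learning rate, and re-seeding strategy are hypothetical.

        import numpy as np

        def incremental_pq_update(codebooks, new_vecs, low=0.05, high=0.15, lr=0.3):
            m = len(codebooks)
            d_sub = new_vecs.shape[1] // m
            for i, cb in enumerate(codebooks):
                sub = new_vecs[:, i * d_sub:(i + 1) * d_sub]            # (n, d_sub)
                dists = np.linalg.norm(sub[:, None, :] - cb[None, :, :], axis=-1)
                nearest = dists.argmin(axis=1)
                err = dists[np.arange(len(sub)), nearest].mean()
                if err < low:
                    continue                        # codebook still fits; no update
                if err < high:
                    # Mild drift: nudge used centroids toward their new points.
                    for k in np.unique(nearest):
                        cb[k] += lr * (sub[nearest == k].mean(axis=0) - cb[k])
                else:
                    # Strong drift: re-seed this sub-codebook from new sub-vectors.
                    idx = np.random.choice(len(sub), len(cb), replace=len(sub) < len(cb))
                    codebooks[i] = sub[idx].copy()
            return codebooks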

    Learning to Truncate Ranked Lists for Information Retrieval

    Ranked list truncation is of critical importance in a variety of professional information retrieval applications, such as patent search or legal search. The goal is to dynamically determine the number of returned documents according to some user-defined objectives, in order to strike a balance between the overall utility of the results and user effort. Existing methods formulate this task as a sequential decision problem and take some pre-defined loss as a proxy objective, which suffers from the limitations of local decisions and indirect optimization. In this work, we propose a global-decision-based truncation model named AttnCut, which directly optimizes user-defined objectives for ranked list truncation. Specifically, we adopt the transformer architecture to capture the global dependency within the ranked list for the truncation decision, and employ reward augmented maximum likelihood (RAML) for direct optimization. We consider two types of user-defined objectives of practical use: one is a widely adopted metric such as F1, which acts as a balanced objective; the other is the best F1 under some minimal recall constraint, which represents a typical objective in professional search. Empirical results on the Robust04 and MQ2007 datasets demonstrate the effectiveness of our approach compared with state-of-the-art baselines.
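
    The RAML component can be illustrated with a small sketch: rather than training toward a single "gold" cut position, every position is weighted by its exponentiated reward (here, F1 at that cut), and the model's distribution over cut positions is pulled toward that reward-shaped target. The F1 reward, temperature, and KL formulation below are illustrative assumptions rather than AttnCut's exact training objective.

        # RAML-style target for list truncation: weight each cut position by
        # exp(F1 / tau) and fit the model's cut distribution to it. The reward
        # and temperature are illustrative.

        import torch
        import torch.nn.functional as F

        def f1_at_cut(labels, k):
            """F1 of returning the top-k documents; labels is a 0/1 relevance list."""
            rel_in_k, total_rel = sum(labels[:k]), sum(labels)
            if k == 0 or rel_in_k == 0 or total_rel == 0:
                return 0.0
            p, r = rel_in_k / k, rel_in_k / total_rel
            return 2 * p * r / (p + r)

        def raml_truncation_loss(cut_logits, labels, tau=0.1):
            """cut_logits: (list_len,) model scores for cutting after each position."""
            rewards = torch.tensor([f1_at_cut(labels, k + 1) for k in range(len(labels))])
            target = F.softmax(rewards / tau, dim=0)   # exponentiated-reward target
            return F.kl_div(F.log_softmax(cut_logits, dim=0), target, reduction="sum")

        if __name__ == "__main__":
            labels = [1, 1, 0, 1, 0, 0]                # hypothetical relevance labels
            logits = torch.randn(len(labels), requires_grad=True)
            print(raml_truncation_loss(logits, labels))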