Investigation on the heat extraction performance of deep closed-loop borehole heat exchanger system for building heating
In recent years, deep geothermal energy has been widely exploited through closed-loop borehole heat exchanger systems for building heating. To precisely evaluate the sustainable heat extraction capacity and the impact of different design and operating parameters, two heat transfer models are implemented in the open-source scientific software OpenGeoSys (OGS), covering the Deep Borehole Heat Exchanger (DBHE) and the Enhanced U-tube Borehole Heat Exchanger (EUBHE) systems. In addition, three types of boundary conditions are implemented: constant inflow temperature, constant heat extraction rate, and constant building thermal power, the last of which integrates a ground source heat pump (GSHP) module. By applying the two BHE models, the influence of different design and operating parameters on the GSHP system is evaluated. The sustainable heat extraction capacity and efficiency of a deep EUBHE system are predicted, and its performance and efficiency are further compared against a 2-DBHE array system with the same total borehole length.
It is found that soil thermal conductivity is the most important parameter in the design of DBHE and EUBHE systems. The sustainable specific heat extraction rate of the EUBHE system is 86.5 W/m higher than that of an array with 2 DBHEs. Under a building thermal load of 1.225 MW, the total electricity consumed by the EUBHE system is approximately 27% less than that of the 2-DBHE array over 10 years, and the average Coefficient of System Performance (CSP) of the EUBHE system is higher by 1.66 over 10 heating seasons. The two numerical models implemented in the OpenGeoSys software can be used to predict and optimize the thermal characteristics of closed-loop DBHE and EUBHE systems in real projects.
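The reported electricity saving follows directly from the definition of the CSP (heat delivered to the building divided by total electricity consumed), so electricity = Q_building / CSP. A minimal sketch of this relation, using the 1.225 MW load from the abstract but hypothetical CSP values (chosen only to differ by about 1.66, as reported):

```python
# Illustrative sketch (not from the paper): relating building thermal load,
# Coefficient of System Performance (CSP), and electricity use of a GSHP.
# CSP = heat delivered / electricity consumed, so electricity = Q / CSP.

def electricity_kw(q_building_kw: float, csp: float) -> float:
    """Electric power draw needed to deliver q_building_kw at a given CSP."""
    return q_building_kw / csp

q_building = 1225.0                # building thermal load from the abstract, kW
csp_eubhe, csp_array = 6.0, 4.34   # hypothetical CSPs differing by ~1.66

e_eubhe = electricity_kw(q_building, csp_eubhe)
e_array = electricity_kw(q_building, csp_array)
saving = 1.0 - e_eubhe / e_array   # fractional electricity saving of the EUBHE
print(round(saving, 3))            # → 0.277
```

With these assumed CSPs, the saving comes out near the ~27% figure quoted in the abstract; the actual values depend on the simulated heat pump operation.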
Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions
Deep neural networks are widely used for classification. These deep models
often suffer from a lack of interpretability -- they are particularly difficult
to understand because of their non-linear nature. As a result, neural networks
are often treated as "black box" models, and in the past, have been trained
purely to optimize the accuracy of predictions. In this work, we create a novel
network architecture for deep learning that naturally explains its own
reasoning for each prediction. This architecture contains an autoencoder and a
special prototype layer, where each unit of that layer stores a weight vector
that resembles an encoded training input. The encoder of the autoencoder allows
us to do comparisons within the latent space, while the decoder allows us to
visualize the learned prototypes. The training objective has four terms: an
accuracy term, a term that encourages every prototype to be similar to at least
one encoded input, a term that encourages every encoded input to be close to at
least one prototype, and a term that encourages faithful reconstruction by the
autoencoder. The distances computed in the prototype layer are used as part of
the classification process. Since the prototypes are learned during training,
the learned network naturally comes with explanations for each prediction, and
the explanations are faithful to what the network actually computes.
Comment: The first two authors contributed equally, 8 pages, accepted at AAAI 2018
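The four-term objective described above can be sketched numerically. The following NumPy snippet is an assumed illustration, not the authors' code: it uses random stand-ins for the encoder output and reconstruction errors, equal weights on the four terms (the paper would tune these), and squared latent-space distances between encoded inputs and prototypes.

```python
# Minimal NumPy sketch (assumed, not the authors' code) of the four-term
# objective: accuracy + prototype-to-input closeness + input-to-prototype
# closeness + autoencoder reconstruction.
import numpy as np

rng = np.random.default_rng(0)
n, d, p, k = 8, 4, 3, 2          # inputs, latent dim, prototypes, classes

z = rng.normal(size=(n, d))      # encoded inputs (stand-in encoder output)
x_hat_err = rng.normal(size=n)   # stand-in per-example reconstruction errors
protos = rng.normal(size=(p, d)) # learned prototype vectors
W = rng.normal(size=(p, k))      # weights from prototype distances to logits
y = rng.integers(0, k, size=n)   # class labels

# Squared distances between each encoded input and each prototype: (n, p)
dists = ((z[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)

# Accuracy term: cross-entropy on logits computed from the distances
logits = dists @ W
logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
ce = -logp[np.arange(n), y].mean()

r1 = dists.min(axis=0).mean()    # every prototype near some encoded input
r2 = dists.min(axis=1).mean()    # every encoded input near some prototype
recon = (x_hat_err ** 2).mean()  # faithful reconstruction by the autoencoder

loss = ce + r1 + r2 + recon      # equal weights here; a real model tunes them
print(loss > 0)
```

The two interpretability terms pull in opposite directions of the same distance matrix: minimizing over inputs keeps each prototype grounded in the data, while minimizing over prototypes keeps every input explainable by some prototype.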
Review: membrane tethers control plasmodesmal function and formation
Cell-to-cell communication is crucial for coordinating diverse biological processes in multicellular organisms. In plants, communication between adjacent cells occurs via nanotubular passages called plasmodesmata (PD). The PD passage is composed of an appressed endoplasmic reticulum (ER) internally and a plasma membrane (PM) externally; it traverses the cell wall and associates with the actin cytoskeleton. The coordination of the ER, PM, and cytoskeleton plays a potential role in maintaining the architecture and conductivity of PD. A growing body of data suggests that PD-associated proteins can serve as tethers that connect these structures in a functional PD to regulate cell-to-cell communication. In this review, we summarize the organization and regulation of PD activity via tethering proteins and discuss the importance of PD-mediated cell-to-cell communication in plant development and in defense against environmental stress.
Role Change of Developed Countries and Emerging Economic Entities in Global Governance
With the rapid development of globalization, numerous global problems have appeared, and global governance has emerged in response. For a long time, the G8, an elite club of developed countries, monopolized and led global governance, while most developing countries remained at the margins of the global governance stage. However, owing to increasingly complex and severe global problems, defects in the G8 mechanism, low execution efficiency, and questions of legitimacy, the governance capacity of the G8 has declined continuously. As emerging economic entities have risen, the G20 has gradually replaced the G8 and become the main mode of global governance, and the role played by emerging economic entities stands out increasingly. Moreover, the participation of emerging economic entities in global governance strongly reflects the interests and appeals of developing countries. In the future, emerging economic entities will become an important force in global political and economic affairs.
Interpretable Image Recognition with Hierarchical Prototypes
Vision models are interpretable when they classify objects on the basis of
features that a person can directly understand. Recently, methods relying on
visual feature prototypes have been developed for this purpose. However, in
contrast to how humans categorize objects, these approaches have not yet made
use of any taxonomical organization of class labels. With such an approach, for
instance, we may see why a chimpanzee is classified as a chimpanzee, but not
why it was considered to be a primate or even an animal. In this work we
introduce a model that uses hierarchically organized prototypes to classify
objects at every level in a predefined taxonomy. Hence, we may find distinct
explanations for the prediction an image receives at each level of the
taxonomy. The hierarchical prototypes enable the model to perform another
important task: interpretably classifying images from previously unseen classes
at the level of the taxonomy to which they correctly relate, e.g. classifying a
handgun as a weapon, when the only weapons in the training data are rifles.
With a subset of ImageNet, we test our model against its counterpart black-box
model on two tasks: 1) classification of data from familiar classes, and 2)
classification of data from previously unseen classes at the appropriate level
in the taxonomy. We find that our model performs approximately as well as its
counterpart black-box model while allowing for each classification to be
interpreted.
Comment: Published as a full paper at HCOMP 2019
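The per-level prediction scheme described above can be sketched with nearest-prototype classification at each level of a small taxonomy. This is an assumed illustration, not the paper's implementation; the taxonomy, prototype vectors, and test embedding below are all hypothetical.

```python
# Hedged sketch (assumed, not the paper's code): classify an embedding at
# every level of a small taxonomy using that level's labelled prototypes.
import numpy as np

# Hypothetical taxonomy levels, each with labelled prototype vectors.
levels = {
    "kingdom": {"animal": np.array([1.0, 0.0]), "vehicle": np.array([-1.0, 0.0])},
    "order":   {"primate": np.array([1.0, 1.0]), "carnivore": np.array([1.0, -1.0])},
    "species": {"chimpanzee": np.array([1.2, 1.1]), "gorilla": np.array([0.8, 0.9])},
}

def classify_per_level(z):
    """Return, for each taxonomy level, the label of the nearest prototype."""
    out = {}
    for level, protos in levels.items():
        out[level] = min(protos, key=lambda lbl: np.linalg.norm(z - protos[lbl]))
    return out

z = np.array([1.1, 1.0])   # embedding of a hypothetical test image
print(classify_per_level(z))
# → {'kingdom': 'animal', 'order': 'primate', 'species': 'chimpanzee'}
```

For a previously unseen class, one could accept the prediction only at levels where the nearest-prototype distance falls below a threshold, so a novel weapon is still labelled "weapon" even when its species-level distances are all large.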
This Looks Like Those: Illuminating Prototypical Concepts Using Multiple Visualizations
We present ProtoConcepts, a method for interpretable image classification
combining deep learning and case-based reasoning using prototypical parts.
Existing work in prototype-based image classification uses a "this looks like
that" reasoning process, which dissects a test image by finding prototypical
parts and combining evidence from these prototypes to make a final
classification. However, all of the existing prototypical part-based image
classifiers provide only one-to-one comparisons, where a single training image
patch serves as a prototype to compare with a part of our test image. With
these single-image comparisons, it can often be difficult to identify the
underlying concept being compared (e.g., "is it comparing the color or the
shape?"). Our proposed method modifies the architecture of prototype-based
networks to instead learn prototypical concepts which are visualized using
multiple image patches. Having multiple visualizations of the same prototype
allows us to more easily identify the concept captured by that prototype
(e.g., "the test image and the related training patches are all the same shade
of blue"), and allows our model to create richer, more interpretable visual
explanations. Our experiments show that our "this looks like those" reasoning
process can be applied as a modification to a wide range of existing
prototypical image classification networks while achieving comparable accuracy
on benchmark datasets.
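The multi-patch visualization idea reduces, at inference time, to retrieving the k training patches nearest to a prototype instead of a single one. A minimal sketch of that retrieval step, with hypothetical patch embeddings standing in for a trained network's features (this is an assumed illustration, not the ProtoConcepts code):

```python
# Illustrative sketch (assumed, not the ProtoConcepts code): visualizing one
# prototype with its k nearest training patches instead of a single patch.
import numpy as np

rng = np.random.default_rng(1)
patches = rng.normal(size=(100, 16))   # hypothetical training-patch embeddings
prototype = patches[:5].mean(axis=0)   # stand-in learned prototypical concept

def nearest_patches(proto, patch_bank, k=3):
    """Indices of the k patch embeddings closest to the prototype."""
    d = np.linalg.norm(patch_bank - proto, axis=1)
    return np.argsort(d)[:k]

idx = nearest_patches(prototype, patches, k=3)
print(len(idx))   # → 3
```

Showing several nearby patches lets a human triangulate the shared attribute (color, shape, texture) that a single exemplar patch leaves ambiguous.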