Block Selection Method for Using Feature Norm in Out-of-distribution Detection
Detecting out-of-distribution (OOD) inputs during the inference stage is crucial for deploying neural networks in the real world. Previous methods commonly relied on the output of a network derived from the highly activated feature map. In this study, we first reveal that the norm of a feature map obtained from a block other than the last block can be a better indicator for OOD detection. Motivated by this, we propose a simple framework consisting of FeatureNorm, the norm of a feature map, and NormRatio, the ratio of FeatureNorm for ID and OOD, which measures the OOD detection performance of each block. To select the block that provides the largest difference between FeatureNorm of ID and FeatureNorm of OOD, we create jigsaw-puzzle images from ID training samples as pseudo-OOD, calculate NormRatio, and select the block with the largest value. Once a suitable block is selected, OOD detection with FeatureNorm outperforms other OOD detection methods, reducing FPR95 by up to 52.77% on the CIFAR10 benchmark and by up to 48.53% on the ImageNet benchmark. We demonstrate that our framework generalizes to various architectures and show the importance of block selection, which can improve previous OOD detection methods as well.
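The block-selection idea above can be summarized in a short sketch. This is a reading of the abstract rather than the authors' released code: it assumes a PyTorch network split into sequential blocks, takes FeatureNorm as the L2 norm of each sample's feature map, and uses an illustrative 4x4 jigsaw shuffle to build pseudo-OOD images.

    # Minimal sketch (assumption-laden, not the authors' implementation) of
    # selecting the OOD-detection block via NormRatio on jigsaw pseudo-OOD images.
    import torch

    def feature_norm(fmap: torch.Tensor) -> torch.Tensor:
        # FeatureNorm here: per-sample L2 norm of the (B, C, H, W) feature map
        return fmap.flatten(1).norm(dim=1)

    def jigsaw(x: torch.Tensor, grid: int = 4) -> torch.Tensor:
        # Pseudo-OOD: shuffle grid x grid patches of each ID image
        B, C, H, W = x.shape
        ph, pw = H // grid, W // grid
        patches = x.unfold(2, ph, ph).unfold(3, pw, pw)        # (B, C, g, g, ph, pw)
        patches = patches.reshape(B, C, grid * grid, ph, pw)
        patches = patches[:, :, torch.randperm(grid * grid)]   # shuffle patch order
        patches = patches.reshape(B, C, grid, grid, ph, pw)
        return patches.permute(0, 1, 2, 4, 3, 5).reshape(B, C, H, W)

    @torch.no_grad()
    def select_block(blocks, id_loader):
        # NormRatio per block: FeatureNorm on ID divided by FeatureNorm on pseudo-OOD;
        # the block with the largest ratio gives the largest ID/OOD gap.
        ratios = torch.zeros(len(blocks))
        for x, _ in id_loader:
            h_id, h_ood = x, jigsaw(x)
            for i, block in enumerate(blocks):
                h_id, h_ood = block(h_id), block(h_ood)
                ratios[i] += (feature_norm(h_id) / feature_norm(h_ood)).mean()
        return int(ratios.argmax())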
Multi-decadal change in summer mean water temperature in Lake Konnevesi, Finland (1984–2021)
Depth-resolved water temperature data on the thermal environment of lakes are often hindered by sparse temporal frequency, limited depth resolution, or short duration, which creates many challenges for long-term analysis. Where high-frequency and depth-resolved data exist, they can provide a wealth of knowledge about how lakes are responding to a changing climate. In this study, we analyzed around 950 profiles of summer mean water temperature (July to September), comprising about 30,600 unique observations, from a subarctic lake (Lake Konnevesi, Finland) to understand the changes in lake surface water temperature (LSWT), lake deepwater temperature (LDWT), and lake volumetrically weighted mean temperature (LVWMT) from 1984 to 2021. Statistical analysis of this dataset revealed substantial warming of LSWT (0.41 °C decade⁻¹) and LVWMT (0.32 °C decade⁻¹), whilst LDWT remained unchanged (0.00 °C decade⁻¹). Our analysis using a generalized additive model suggested that the inter-annual variability in LSWT and LVWMT correlated significantly with the upward trends of summer mean air temperature and solar radiation, but indicated no significant effect of observed changes in ice departure dates and near-surface wind speed. None of the investigated predictors correlated with the change in LDWT. Given the contrasting responses of lake surface and bottom water temperature to climate change in this subarctic lake, our data suggest a substantial increase in lake thermal stability. Our study adds to the growing literature on lake thermal responses to climate change and illustrates the contrasting impacts of climate change at the surface and at depth in lake ecosystems, with deep waters acting as a potential thermal refuge for aquatic organisms in a warming world
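As a simple illustration of how a decadal trend such as 0.41 °C decade⁻¹ is obtained from yearly summer means, the sketch below fits an ordinary linear regression to synthetic data; the values are placeholders, and the study's attribution analysis relied on generalized additive models with air temperature, solar radiation, ice departure date, and wind speed as predictors.

    # Illustrative only: synthetic yearly summer-mean (Jul-Sep) surface temperatures
    # are regressed on year to recover a warming rate in degC per decade.
    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(0)
    years = np.arange(1984, 2022)
    lswt = 16.0 + 0.041 * (years - 1984) + rng.normal(0, 0.5, years.size)  # hypothetical degC

    res = linregress(years, lswt)
    print(f"LSWT trend: {res.slope * 10:.2f} degC per decade (p = {res.pvalue:.3f})")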
Fluvial bedload transport modelling: advanced ensemble tree-based models or optimized deep learning algorithms?
The potential of advanced tree-based models and optimized deep learning algorithms to predict fluvial bedload transport was explored, identifying the most flexible and accurate algorithm, and the optimum set of readily available and reliable inputs. Using 926 datasets for 20 rivers, the performance of three groups of models was tested: (1) standalone tree-based models Alternating Model Tree (AMT) and Dual Perturb and Combine Tree (DPCT); (2) ensemble tree-based models Iterative Absolute Error Regression (IAER), ensembled with AMT and DPCT; and (3) optimized deep learning models Long Short-Term Memory (LSTM) and Recurrent Neural Network (RNN) ensembled with the Grey Wolf Optimizer. Comparison of the predictive performance of the models with that of commonly used empirical equations and sensitivity analysis of the driving variables revealed that: (i) the coarse grain-size percentile D90 was the most effective variable in bedload transport prediction (where Dx is the xth percentile of the bed surface grain size distribution), followed by D84, D50, flow discharge, D16, and channel slope and width; (ii) all tree-based models and optimized deep learning algorithms displayed "very good" or "good" performance, outperforming empirical equations; and (iii) all algorithms performed best when all input parameters were used. Thus, a range of different input variable combinations must be considered in the optimization of these models. Overall, ensemble algorithms provided more accurate predictions of bedload transport than their standalone counterparts. In particular, the ensemble tree-based model IAER-AMT performed most strongly, displaying great potential to produce robust predictions of bedload transport in coarse-grained rivers based on a few readily available flow and channel variables
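The sketch below illustrates the general workflow only, not the study's models: the Weka-based AMT, DPCT, and IAER learners and the Grey-Wolf-optimized networks are replaced by a stand-in scikit-learn gradient-boosted ensemble, and the predictors named in the abstract are filled with synthetic placeholder data before their sensitivity is ranked by permutation importance.

    # Stand-in sketch of the fit-then-rank-sensitivity workflow on synthetic data.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    features = ["D90", "D84", "D50", "D16", "discharge", "slope", "width"]
    X = rng.lognormal(size=(926, len(features)))                   # placeholder predictors
    y = 0.8 * X[:, 0] + 0.3 * X[:, 4] + rng.normal(0, 0.1, 926)    # placeholder bedload rate

    model = GradientBoostingRegressor().fit(X, y)
    imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
        print(f"{name:10s} {score:.3f}")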
Uncertainty quantification of granular computing–neural network model for prediction of pollutant longitudinal dispersion coefficient in aquatic streams
Discharge of pollution loads into natural water systems remains a global challenge that threatens water and food supply, as well as endangering ecosystem services. Natural rehabilitation of contaminated streams is mainly influenced by the longitudinal dispersion coefficient, or the rate of longitudinal dispersion (Dx), a key parameter with large spatiotemporal fluctuations that characterizes pollution transport. The large uncertainty in the estimation of Dx in streams limits water quality assessment in natural streams and the design of water quality enhancement strategies. This study develops an artificial-intelligence-based predictive model, coupling granular computing and neural network models (GrC-ANN), to provide robust estimation of Dx and its uncertainty for a range of flow-geometric conditions with high spatiotemporal variability. Uncertainty analysis of Dx estimated from the proposed GrC-ANN model was performed by altering the training data used to tune the model. A modified bootstrap method was employed to generate different training patterns through resampling from a global database of tracer experiments in streams with 503 data points. Comparison of the Dx values estimated by GrC-ANN with those determined from tracer measurements shows the appropriateness and robustness of the proposed method in determining the rate of longitudinal dispersion. The GrC-ANN model with the narrowest bandwidth of estimated uncertainty (bandwidth factor = 0.56) that brackets the highest percentage of true Dx data (i.e., 100%) is the best model to compute Dx in streams. Considering the significant inherent uncertainty reported for previous Dx models, the GrC-ANN model developed in this study shows robust performance in evaluating pollutant mixing (Dx) in turbulent environmental flow systems
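A minimal sketch of this bootstrap-style uncertainty analysis follows, with a small scikit-learn MLP standing in for GrC-ANN and synthetic placeholders for the 503-point tracer database; the coverage and bandwidth measures are simple illustrative analogues of the paper's criteria, not its exact definitions.

    # Bootstrap ensemble: refit a stand-in model on resampled training sets and
    # use the spread of its predictions as an uncertainty band around Dx.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform(0.0, 1.0, size=(503, 4))                      # placeholder flow-geometric inputs
    y = 50 * X[:, 0] + 20 * X[:, 1] ** 2 + rng.normal(0, 2, 503)  # placeholder Dx values

    preds = []
    for _ in range(30):                                           # bootstrap resamples of the training set
        idx = rng.integers(0, len(X), len(X))
        m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X[idx], y[idx])
        preds.append(m.predict(X))
    lower, upper = np.percentile(np.array(preds), [2.5, 97.5], axis=0)

    coverage = np.mean((y >= lower) & (y <= upper))               # fraction of true Dx bracketed
    bandwidth = np.mean(upper - lower) / np.std(y)                # simple bandwidth factor
    print(f"coverage = {coverage:.2%}, bandwidth factor = {bandwidth:.2f}")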
Seamless monolithic three-dimensional integration of single-crystalline films by growth
The demand for the three-dimensional (3D) integration of electronic
components is on a steady rise. The through-silicon-via (TSV) technique emerges
as the only viable method for integrating single-crystalline device components
in a 3D format, despite encountering significant processing challenges. While
monolithic 3D (M3D) integration schemes show promise, the seamless connection
of single-crystalline semiconductors without intervening wafers has yet to be
demonstrated. This challenge arises from the inherent difficulty of growing
single crystals on amorphous or polycrystalline surfaces after the
back-end-of-the-line process, at temperatures low enough to preserve the underlying
circuitry. Consequently, a practical growth-based solution for M3D of single
crystals remains elusive. Here, we present a method for growing
single-crystalline channel materials, specifically composed of transition metal
dichalcogenides, on amorphous and polycrystalline surfaces at temperatures
lower than 400 °C. Building on this technique, we demonstrate
the seamless monolithic integration of vertical single-crystalline logic
transistor arrays. This accomplishment enables unprecedented vertical CMOS arrays, from which vertical inverters are constructed. Ultimately, this achievement paves the way for M3D integration of various electronic and optoelectronic hardware in the form of single crystals
Nanomaterials for Neural Interfaces
This review focuses on the application of nanomaterials for neural interfacing. The junction between nanotechnology and neural tissues is particularly worthy of scientific attention for several reasons: (i) Neural cells are electroactive, and the electronic properties of nanostructures can be tailored to match the charge transport requirements of electrical cellular interfacing. (ii) The unique mechanical and chemical properties of nanomaterials are critical for integration with neural tissue as long-term implants. (iii) Solutions to many critical problems in neural biology/medicine are limited by the availability of specialized materials. (iv) Neuronal stimulation is needed for a variety of common and severe health problems. This confluence of need, accumulated expertise, and potential impact on the well-being of people suggests that nanomaterials could revolutionize the field of neural interfacing. In this review, we begin with foundational topics, such as the current status of neural electrode (NE) technology, the key challenges facing the practical utilization of NEs, and the potential advantages of nanostructures as components of chronic implants. We then give a detailed account of the toxicology and biocompatibility of nanomaterials with respect to neural tissues. Next, we cover a variety of specific applications of nanoengineered devices, including drug delivery, imaging, topographic patterning, electrode design, nanoscale transistors for high-resolution neural interfacing, and photoactivated interfaces. We also critically evaluate the specific properties of particular nanomaterials (including nanoparticles, nanowires, and carbon nanotubes) that can be exploited in neuroprosthetic devices. The most promising future areas of research and practical device engineering are discussed as a conclusion to the review.
Global Surface Temperature: A New Insight
This paper belongs to our Special Issue "Application of Climate Data in Hydrologic Models" [...]
A parsimonious framework of evaluating WSUD features in urban flood mitigation
In this study, a parsimonious framework for supporting Water Sensitive Urban Design (WSUD) was proposed to seek a tradeoff between investment in WSUD features and mitigation of urban flood damage. A two-dimensional (2D) hydrological-hydraulic simulation model, PCSWMM, was adopted to simulate the rainfall-runoff process and inundation scenarios, and flood damage was evaluated based on inundated water depths and damage curves. The sensitivity of flood-control effects to the deployment of various design features was also tested, which provided useful information for identifying potential design parameters (such as conduit sizes and pond locations). The proposed framework was applied to a hypothetical case adapted from an urban district in a tropical region, considering various WSUD features (i.e., rainwater storage pond, rain garden, and conduit upgrading). The results showed that, as the gross investment in WSUD features increased from 0 to 1.19 million, a linear relationship (with an R-squared of 0.9) was found between the investment and the resulting flood damage. The proposed framework is effective in helping assess the balance between mitigation of urban flood damage and adoption of WSUD features, and could support urban water managers in more science-based decision making for flood risk management.
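A minimal sketch of the damage-evaluation step is given below, assuming a hypothetical depth-damage curve and a handful of simulated inundation depths; the curve points, depths, and asset value are illustrative placeholders rather than values from the study.

    # Map inundation depths (e.g. from the 2D model's cells) to monetary loss
    # through a depth-damage curve; all numbers below are hypothetical.
    import numpy as np

    curve_depth = np.array([0.0, 0.1, 0.3, 0.6, 1.0, 2.0])        # depth (m)
    curve_damage = np.array([0.0, 0.05, 0.20, 0.45, 0.70, 0.95])  # fraction of asset value lost

    def flood_damage(depths_m, asset_value):
        """Total damage over inundated cells that share a single asset value."""
        frac = np.interp(depths_m, curve_depth, curve_damage)
        return float(np.sum(frac * asset_value))

    simulated_depths = np.array([0.05, 0.25, 0.8, 1.4])           # placeholder simulated depths (m)
    print(flood_damage(simulated_depths, asset_value=10_000))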
Understanding Conflicting Interests of a Government and a Tobacco Manufacturer: A Game-Theoretic Approach
Rice is the staple food of nearly half of the population of the world, most of whom live in developing countries. Ensuring a domestic supply of rice from outside sources is difficult for developing countries, as less than 5% of the world's total production is available for international trade. Hence, in order to ensure domestic food security (e.g., food availability and access), governments provide subsidies in agriculture. On many occasions, public money used for the subsidy goes toward promoting undesirable crops such as tobacco. Although the strategic interaction between governments and manufacturers is critical, it has not been studied in the literature. This study fills this gap by considering a game between a government (of a developing country) and a tobacco manufacturer, in which the government decides on a mix of subsidies and the tobacco manufacturer decides on the purchase price of tobacco it declares. We provide a numerical study to show that controlling the output harvest price is more effective in reaching the desired end result for both the government and the tobacco manufacturer. A fertilizer subsidy results in a measurable increase in government spending but does not have a significant effect on reaching the production target. The fertilizer subsidy should therefore be provided only when the output price is too high to be affordable for the population
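The leader-follower structure of this game can be illustrated with a small grid-search sketch: the government (leader) picks a subsidy level, the manufacturer (follower) best-responds with a purchase price, and the government keeps the subsidy that maximizes its own payoff given that response. The payoff functions and parameters below are hypothetical stand-ins, not the paper's model.

    # Backward induction by grid search over hypothetical payoffs.
    import numpy as np

    subsidies = np.linspace(0.0, 1.0, 21)        # government's fertilizer subsidy level
    prices = np.linspace(0.5, 2.0, 31)           # manufacturer's declared purchase price

    def manufacturer_payoff(s, p):
        supply = 1.0 + 0.5 * s + 0.8 * p         # hypothetical tobacco supply response
        return (3.0 - p) * supply                # margin times quantity procured

    def government_payoff(s, p):
        tobacco_area = 1.0 + 0.5 * s + 0.8 * p   # land drawn away from rice
        rice_output = 3.0 - 0.6 * tobacco_area   # hypothetical food-security proxy
        return rice_output - 1.5 * s             # food security minus subsidy cost

    best = None
    for s in subsidies:
        p_star = prices[np.argmax([manufacturer_payoff(s, p) for p in prices])]
        g = government_payoff(s, p_star)
        if best is None or g > best[2]:
            best = (s, p_star, g)
    print(f"subsidy = {best[0]:.2f}, price response = {best[1]:.2f}, government payoff = {best[2]:.2f}")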