On Power Law Scaling Dynamics for Time-fractional Phase Field Models during Coarsening
In this paper, we study phase field models that are fractional-order in time.
The phase field models have been widely used to study coarsening dynamics of
material systems with microstructures. It is known that phase field models are
usually derived from energy variation so that they obey some energy dissipation
laws intrinsically. Recently, many works have been published on investigating
fractional-order phase field models, but little is known of the corresponding
energy dissipation laws. We focus on the time-fractional phase field models and
report that the effective free energy and roughness obey a universal power-law
scaling dynamics during coarsening. Specifically, the effective free energy and
roughness in the time-fractional phase field models follow a power law similar
to that of the integer-order phase field models, with the power linearly
proportional to the fractional order. This universal scaling law is
verified numerically against several phase field models, including the
Cahn-Hilliard equations with different variable mobilities and molecular beam
epitaxy models. This new finding sheds light on potential applications of
time-fractional phase field models in studying coarsening dynamics and crystal
growth.
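As a schematic illustration (the specific exponent is an assumption for the constant-mobility Cahn-Hilliard case, not quoted from the paper): if the integer-order effective free energy decays as $E(t)\sim t^{-1/3}$, the claimed linear dependence of the power on the fractional order $\alpha\in(0,1]$ would read

$$
E_\alpha(t) \sim t^{-\alpha/3}, \qquad 0 < \alpha \le 1,
$$

which recovers the classical $t^{-1/3}$ coarsening law at $\alpha = 1$.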
Unifying and Merging Well-trained Deep Neural Networks for Inference Stage
We propose a novel method to merge convolutional neural networks for the
inference stage. Given two well-trained networks that may have different
architectures and handle different tasks, our method aligns the layers of the
original networks and merges them into a unified model by sharing the
representative codes of weights. The shared weights are further re-trained to
fine-tune the performance of the merged model. The proposed method effectively
produces a compact model that may run original tasks simultaneously on
resource-limited devices. As it preserves the general architectures and
leverages the co-used weights of well-trained networks, a substantial training
overhead can be reduced to shorten the system development time. Experimental
results demonstrate satisfactory performance and validate the effectiveness
of the method.

Comment: To appear in the 27th International Joint Conference on Artificial
Intelligence and the 23rd European Conference on Artificial Intelligence,
2018 (IJCAI-ECAI 2018).
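The core idea of sharing representative weight codes between two networks can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the shared codes are obtained by 1-D k-means over the pooled weights of one layer from each network, and all names and parameters (`share_codes`, `n_codes`, etc.) are hypothetical. The paper's actual method also aligns layers and re-trains the shared weights afterward, which is omitted here.

```python
import numpy as np

def share_codes(w_a, w_b, n_codes=16, iters=20):
    """Quantize two weight matrices onto one shared codebook.

    Hypothetical sketch of weight-code sharing: both layers end up
    indexing into the same small set of representative values, so the
    merged model stores one codebook instead of two full weight sets.
    """
    flat = np.concatenate([w_a.ravel(), w_b.ravel()])
    # Initialize the codebook from quantiles of the pooled weights.
    codes = np.quantile(flat, np.linspace(0.0, 1.0, n_codes))
    for _ in range(iters):  # 1-D k-means (Lloyd's algorithm)
        idx = np.abs(flat[:, None] - codes[None, :]).argmin(axis=1)
        for k in range(n_codes):
            if np.any(idx == k):
                codes[k] = flat[idx == k].mean()
    # Replace each weight by its nearest shared code.
    idx = np.abs(flat[:, None] - codes[None, :]).argmin(axis=1)
    q = codes[idx]
    qa = q[:w_a.size].reshape(w_a.shape)
    qb = q[w_a.size:].reshape(w_b.shape)
    return codes, qa, qb

rng = np.random.default_rng(0)
w_a = rng.normal(size=(8, 8))  # a layer from network A
w_b = rng.normal(size=(8, 8))  # an aligned layer from network B
codes, qa, qb = share_codes(w_a, w_b)
```

After this step, `qa` and `qb` draw every entry from the same 16-value codebook; in the full method these shared values would then be fine-tuned jointly on both tasks.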