500 research outputs found
Improved quantum entropic uncertainty relations
We study entropic uncertainty relations by using stepwise linear functions
and quadratic functions. Two kinds of improved uncertainty lower bounds are
constructed: the state-independent one based on the lower bound of Shannon
entropy and the tighter state-dependent one based on the majorization
techniques. The analytical results for qubit and qutrit systems with two or
three measurement settings are explicitly derived, with detailed examples
showing that they outperform the existing bounds. The case with the presence of
quantum memory is also investigated.
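For context, the benchmark that improved relations of this kind are usually compared against is the Maassen-Uffink bound for two measurements X and Z with eigenbases {|x_i⟩} and {|z_j⟩}, together with its quantum-memory extension due to Berta et al.; the relations below are the standard baselines, not the improved bounds derived in this work:

    H(X) + H(Z) \ge -\log_2 c, \qquad c = \max_{i,j} \bigl|\langle x_i | z_j \rangle\bigr|^2 ,
    H(X|B) + H(Z|B) \ge -\log_2 c + S(A|B) \quad \text{(with quantum memory } B\text{)} .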
Melt crystallization and segmental dynamics of poly(ethylene oxide) confined in a solid electrolyte composite
The isothermal melt crystallization and the corresponding segmental dynamics of a high molecular weight poly(ethylene oxide) (PEO) confined by Li7La3Zr2O12 (LLZO) particles in solid electrolyte composites were monitored by differential scanning calorimetry (DSC) and dielectric relaxation spectroscopy (DRS), respectively. Our results show that the overall crystallinity is positively correlated with the surface area of the LLZO particles. The primary and secondary crystallization processes are identified by a modified Avrami equation, while two dynamic modes, the α relaxation and the α′ relaxation, were identified in the DRS measurements. The results reveal an unambiguous correlation between the primary crystallization and the α relaxation, while a correlation between the secondary crystallization and the α′ relaxation exists concurrently in the electrolyte composites. © 2020 Wiley Periodicals, Inc. J. Polym. Sci. 2020, 58, 466–477. In a representative polymer–ceramic composite solid electrolyte, segmental dynamics are closely related to the crystallization processes in the polymer matrix. This may significantly impact the performance of the electrolyte, as ionic conductivity in such materials relies on segmental motions of the polymer.
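For reference, the isothermal Avrami analysis that such DSC treatments build on starts from the classical relation below; the modified equation used in the study additionally separates the primary and secondary contributions, so the standard form is shown only as background:

    X(t) = 1 - \exp\!\bigl(-k\,t^{\,n}\bigr), \qquad \log\bigl[-\ln\bigl(1 - X(t)\bigr)\bigr] = \log k + n \log t ,

where X(t) is the relative crystallinity at time t, k the crystallization rate constant, and n the Avrami exponent.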
SdcNet: A Computation-Efficient CNN for Object Recognition
In many computer-vision systems, object recognition is one of the most commonly-used operations. The challenging task in this operation is to extract sufficient critical features related to the targets from diverse backgrounds. Convolutional neural networks (CNNs) can be used to meet this challenge, which, however, often requires a large amount of computation resources.
In this thesis, a computation-efficient CNN architecture for object recognition is proposed. It aims at achieving a good processing quality with the lowest computation volume. This is achieved by applying image filtering knowledge in the design of the CNN architecture. The work is composed of two parts: the design of a CNN module for feature extraction, and an end-to-end CNN architecture. In the module, in order to extract the maximum amount of high-density feature information from a given set of 2-D maps, successive depthwise convolutions are applied to the same group of data to produce feature elements of various filtering orders. Moreover, a particular pre- and post-convolution data control method is used to optimize the successive convolutions. The pre-convolution data control organizes the data to be convolved according to their nature. The post-convolution data control combines the critical feature elements of various filtering orders to enhance the quality of the convolved results. The CNN architecture is mainly composed of cascaded modules. The hyper-parameters in the architecture can be adjusted easily so that each module is tuned to suit its signals in order to optimize the processing quality. The simulation results demonstrate that the architecture gives a better processing quality at a significantly lower computation volume, compared with existing CNNs of a similar kind. The results also confirm the computation efficiency of the proposed module, which enables more object recognition applications on embedded devices
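A minimal PyTorch sketch of the successive-depthwise-convolution idea described above follows; the class name, the number of filtering orders, the ReLU nonlinearity and the 1x1 combining convolution are illustrative assumptions, not the exact SdcBlock design.

    # Hypothetical sketch of successive depthwise convolutions ("Sdc"):
    # feature elements of increasing filtering order are produced by
    # repeatedly applying a depthwise convolution to the same channel
    # group, then combined by a pointwise (1x1) convolution.
    import torch
    import torch.nn as nn

    class SdcBlockSketch(nn.Module):
        def __init__(self, channels, out_channels, orders=3):
            super().__init__()
            # one depthwise 3x3 convolution per filtering order
            self.depthwise = nn.ModuleList([
                nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                          groups=channels, bias=False)
                for _ in range(orders)
            ])
            # post-convolution control (assumed here to be a 1x1 conv):
            # combine the concatenated feature elements of all orders
            self.combine = nn.Conv2d(channels * orders, out_channels,
                                     kernel_size=1, bias=False)

        def forward(self, x):
            feats, h = [], x
            for dw in self.depthwise:
                h = torch.relu(dw(h))      # next filtering order
                feats.append(h)
            return self.combine(torch.cat(feats, dim=1))

    # usage: a 32-channel feature map passed through a 3-order block
    y = SdcBlockSketch(32, 64)(torch.randn(1, 32, 56, 56))
    print(y.shape)   # torch.Size([1, 64, 56, 56])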
Upcycling Steel Slag in Producing Eco-Efficient Iron–Calcium Phosphate Cement
In the present study, steel slag powder (SSP) was utilized as the raw material to prepare iron-calcium phosphate cement (ICPC) by reacting with ammonium dihydrogen phosphate (ADP). The influence of raw-material (SSP/ADP) mass ratios ranging from 2.0 to 7.0 on the properties and microstructures of ICPC pastes was investigated. The compressive strengths of ICPC pastes at all ages first increased and then decreased with increasing SSP/ADP, and an SSP/ADP of 6.0 gave the highest strength. Crystalline mundrabillaite and amorphous phases [i.e., Fe(OH)3, Al(OH)3 and H4SiO4] were formed as the dominant binding phases through the reactions of the calcium-containing compounds (brownmillerite, monticellite and srebrodolskite) in the steel slag with ADP. Further, ADP could also react with the free FeO contained in the steel slag to yield an amorphous iron phosphate phase. BSE analysis indicated that the hydration products formed and grew on the surface of the steel slag particles and connected them to form the continuous, dense microstructure of the ICPC paste. The utilization of high-volume steel slag as the base component will potentially bring great economic and environmental benefits to the manufacture of phosphate cement
Synchro-Transient-Extracting Transform for the Analysis of Signals with Both Harmonic and Impulsive Components
Time-frequency analysis (TFA) techniques play an increasingly important role
in the field of machine fault diagnosis owing to their superiority in
dealing with nonstationary signals. Synchroextracting transform (SET) and
transient-extracting transform (TET) are two newly emerging techniques that can
produce energy concentrated representation for nonstationary signals. However,
SET and TET are only suitable for processing harmonic signals and impulsive
signals, respectively. This poses a challenge for each of these two techniques
when a signal contains both harmonic and impulsive components. In this paper,
we propose a new TFA technique to solve this problem. The technique aims to
combine the advantages of SET and TET to generate energy concentrated
representations for both harmonic and impulsive components of the signal.
Furthermore, we theoretically demonstrate that the proposed technique retains
the signal reconstruction capability. The effectiveness of the proposed
technique is verified using numerical and real-world signals
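As a small illustration of the problem setting, and assuming NumPy/SciPy, the snippet below builds a signal containing both a frequency-modulated harmonic component and a periodic impulse train and computes a plain STFT; this is only the baseline time-frequency representation that SET- and TET-style methods post-process, not the proposed synchro-transient-extracting transform.

    # Build a signal with both a harmonic (frequency-modulated) and an
    # impulsive component, then compute a plain STFT as a baseline
    # time-frequency representation (NOT the proposed transform).
    import numpy as np
    from scipy.signal import stft

    fs = 4096                                   # sampling rate in Hz
    t = np.arange(0, 1.0, 1.0 / fs)
    harmonic = np.sin(2 * np.pi * (200 * t + 50 * t ** 2))  # chirp-like tone
    impulses = np.zeros_like(t)
    impulses[::512] = 5.0                       # periodic impulse train
    noise = 0.1 * np.random.default_rng(0).standard_normal(t.size)
    x = harmonic + impulses + noise

    f, tau, Zxx = stft(x, fs=fs, nperseg=256)
    print(Zxx.shape)   # SET/TET-style methods post-process |Zxx| for sharpness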
On local existence and blow-up of solution for the higher-order nonlinear Kirchhoff-type equation with nonlinear strongly damped terms
In this paper, we deal with the initial boundary value problem for a higher-order Kirchhoff-type equation with nonlinear strongly damped terms. First, we prove the local existence and uniqueness of the solution by the Galerkin method and the contraction mapping principle. Furthermore, we prove the global existence of the solution. Finally, we consider the blow-up of the solution in finite time under suitable conditions
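For orientation only, a representative higher-order Kirchhoff-type initial boundary value problem with strong damping (not necessarily the exact equation studied in this paper) reads

    u_{tt} + M\bigl(\|\nabla^{m} u\|^{2}\bigr)(-\Delta)^{m} u + (-\Delta)^{m} u_{t} = f(u), \qquad (x,t) \in \Omega \times (0,T),
    u = \frac{\partial u}{\partial \nu} = \cdots = \frac{\partial^{m-1} u}{\partial \nu^{m-1}} = 0 \ \text{on } \partial\Omega \times (0,T), \qquad u(x,0) = u_{0}(x), \quad u_{t}(x,0) = u_{1}(x),

where M is a nonnegative Kirchhoff function and the term (-\Delta)^{m} u_{t} supplies the strong damping.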
SdcNet: A Computation-Efficient CNN for Object Recognition
Extracting sufficient features from a huge amount of data for object recognition is a challenging task. Convolutional neural networks can be used to meet the challenge, but they often require a large amount of computation resources. In this paper, a computation-efficient convolutional module, named SdcBlock, is proposed, and based on it the convolutional network SdcNet is introduced for object recognition tasks. In the proposed module, optimized successive depthwise convolutions, supported by appropriate data management, are applied in order to generate vectors containing higher-density and more varied feature information. The hyperparameters can be easily adjusted to suit a variety of tasks under different computation restrictions without significantly jeopardizing the performance. The experiments have shown that SdcNet achieved an error rate of 5.60% on CIFAR-10 with only 55M FLOPs, and further reduced the error rate to 5.24% using a moderate volume of 103M FLOPs. The expected computation efficiency of SdcNet has been confirmed
PRE+: dual of proxy re-encryption for secure cloud data sharing service
With the rapid development of very large, diverse, complex, and distributed datasets generated from internet transactions, emails, videos, business information systems, the manufacturing industry, sensors, the internet of things, etc., cloud and big data computation have emerged as a cornerstone of modern applications. Indeed, on the one hand, cloud and big data applications are becoming a main driver of economic growth. On the other hand, cloud and big data techniques may threaten people's and enterprises' privacy and security due to the ever increasing exposure of their data to massive access. In this paper, aiming at providing secure cloud data sharing services in cloud storage, we propose a scalable and controllable cloud data sharing framework for cloud users (called Scanf). To this end, we introduce a new cryptographic primitive, namely PRE+, which can be seen as the dual of the traditional proxy re-encryption (PRE) primitive. All traditional PRE schemes to date require the delegator (or the delegator and the delegatee cooperatively) to generate the re-encryption keys. We observe that this is not the only way to generate re-encryption keys: the encrypter also has the ability to generate them. Based on this observation, we construct a new PRE+ scheme, which is almost the same as a traditional PRE scheme except that the re-encryption keys are generated by the encrypter. Compared with PRE, our PRE+ scheme can easily achieve the non-transferable property and message-level fine-grained delegation. Thus our Scanf framework based on PRE+ can also achieve these two properties, which is very important for users of a cloud storage sharing service. We also roughly evaluate our PRE+ scheme's performance and the results show that our scheme is efficient and practical for cloud data storage applications.
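As a purely conceptual illustration of the observation that the encrypter can generate re-encryption keys (because it knows the encryption randomness), the toy ElGamal-style sketch below lets the encrypter hand a proxy a value that converts Alice's ciphertext into one decryptable by Bob; it is a sketch of the idea only, not the PRE+ scheme constructed in the paper, and it makes no security claims.

    # Toy ElGamal-style sketch (assumed construction, no security claims):
    # the encrypter, who knows the randomness r, generates the
    # re-encryption key itself instead of the delegator.
    import secrets

    p = 2 ** 127 - 1              # toy prime modulus (Mersenne prime)
    g = 3                         # toy generator

    def keygen():
        sk = secrets.randbelow(p - 2) + 1
        return sk, pow(g, sk, p)

    sk_a, pk_a = keygen()         # Alice (delegator)
    sk_b, pk_b = keygen()         # Bob (delegatee)

    m = 123456789                 # message encoded as a group element
    r = secrets.randbelow(p - 2) + 1
    c1, c2 = pow(g, r, p), (m * pow(pk_a, r, p)) % p   # ciphertext for Alice

    # Encrypter-side re-encryption key: needs only r and the public keys.
    rk = (pow(pk_b, r, p) * pow(pow(pk_a, r, p), -1, p)) % p

    # The proxy re-encrypts without learning m or any secret key.
    c2_b = (c2 * rk) % p

    # Bob decrypts the transformed ciphertext with his own secret key.
    m_b = (c2_b * pow(pow(c1, sk_b, p), -1, p)) % p
    assert m_b == m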
SdcNet for Object Recognition
In this paper, a CNN architecture for object recognition is proposed, aiming at achieving a good processing quality at the lowest computation cost. The work includes the design of SdcBlock, a convolution module for feature extraction, and that of SdcNet, an end-to-end CNN architecture. The module is designed to extract the maximum amount of high-density feature information from a given set of data channels. To this end, successive depthwise convolutions (Sdc) are applied to each group of data to produce feature elements of different filtering orders. To optimize the functionality of these convolutions, a particular pre- and post-convolution data control is applied. The pre-convolution control organizes the input channels of the module so that the depthwise convolutions can be performed with a single data channel or multiple data channels, depending on the nature of the data. The post-convolution control combines the critical feature elements of different filtering orders to enhance the quality of the convolved results. The SdcNet is mainly composed of cascaded SdcBlocks. The hyper-parameters in the architecture can be adjusted easily so that each module can be tuned to suit its input signals in order to optimize the processing quality of the entire network. Three different versions of SdcNet have been proposed and tested using the CIFAR dataset, and the results demonstrate that the architecture gives a better processing quality at a significantly lower computation cost, compared with networks performing similar tasks. Two other versions have also been tested with samples from ImageNet to prove the applicability of SdcNet to object recognition with images in the ImageNet format. Also, an SdcNet for brain tumor detection has been designed and tested successfully to illustrate that SdcNet can effectively perform the detection with high computation efficiency
- …