Application of wavelet analysis in tool wear evaluation using image processing method
Tool wear plays a significant role in the proper planning and control of machining parameters to maintain product quality. However, existing tool wear monitoring methods using sensor signals still have limitations. Since the cutting tool operates directly on the workpiece during the machining process, the machined surface provides valuable information about the cutting tool condition. Therefore, the objective of the present study is to evaluate tool wear based on the workpiece profile signature by using wavelet analysis. The effects of wavelet family, wavelet scale, and the statistical features of the continuous wavelet coefficients on tool wear are studied. The surface profile of the workpiece was captured using a DSLR camera. An invariant moment method was applied to extract the surface profile to sub-pixel accuracy. The extracted surface profile was analyzed using a continuous wavelet transform (CWT) written in MATLAB. The results showed that the average, RMS, and peak-to-valley of the CWT coefficients at all scales increased with tool wear. Peak-to-valley at higher scales is more sensitive to tool wear. The Haar wavelet was found to correlate most effectively and significantly with tool wear, with the highest R² of 0.9301.
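The paper's full pipeline (DSLR capture, sub-pixel edge extraction, CWT in MATLAB) is not reproduced here, but the feature-extraction step can be sketched in Python. The sketch below implements a Haar CWT by direct convolution and computes the three statistics the abstract reports (average, RMS, peak-to-valley); the synthetic profile and the wear amplitudes are invented for illustration only.

```python
import numpy as np

def haar_cwt(profile, scale):
    """Haar wavelet coefficients of a 1-D profile at one integer scale.

    The Haar wavelet at scale s is +1 over the first s samples of a
    2*s-sample window and -1 over the last s, normalized to unit energy.
    """
    kernel = np.concatenate([np.ones(scale), -np.ones(scale)]) / np.sqrt(2 * scale)
    return np.convolve(profile, kernel, mode="valid")

def profile_features(profile, scales):
    """Average of |c|, RMS, and peak-to-valley of the coefficients per scale."""
    feats = {}
    for s in scales:
        c = haar_cwt(profile, s)
        feats[s] = {
            "average": np.mean(np.abs(c)),
            "rms": np.sqrt(np.mean(c ** 2)),
            "peak_to_valley": np.max(c) - np.min(c),
        }
    return feats

# Synthetic stand-in for an extracted workpiece profile: a smooth form
# plus a roughness term whose amplitude mimics increasing tool wear.
x = np.linspace(0, 10, 2000)
noise = np.random.default_rng(0).standard_normal(x.size)
features_by_wear = {
    wear: profile_features(np.sin(x) + wear * noise, scales=(4, 16, 64))
    for wear in (0.1, 0.5)
}
```

On this synthetic profile the statistics grow with the roughness amplitude, mirroring the reported trend of the CWT features with tool wear.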
Efficient UC Commitment Extension with Homomorphism for Free (and Applications)
Homomorphic universally composable (UC) commitments allow the sender to reveal the result of additions and multiplications of values contained in commitments without revealing the values themselves, while assuring the receiver of the correctness of such computations on committed values.
In this work, we construct essentially optimal additively homomorphic UC commitments from any (not necessarily UC or homomorphic) extractable commitment. We obtain amortized linear computational complexity in the length of the input messages and rate 1.
Next, we show how to extend our scheme to also obtain multiplicative homomorphism at the cost of asymptotic optimality but retaining low concrete complexity for practical parameters.
While the previously best constructions use UC oblivious transfer as the main building block, our constructions only require extractable commitments and PRGs, achieving better concrete efficiency and offering new insights into the sufficient conditions for obtaining homomorphic UC commitments.
Moreover, our techniques yield public coin protocols, which are compatible with the Fiat-Shamir heuristic.
These results come at the cost of realizing a restricted version of the homomorphic commitment functionality where the sender is allowed to perform any number of commitments and operations on committed messages but is only allowed to perform a single batch opening of a number of commitments.
Although this functionality seems restrictive, we show that it can be used as a building block for more efficient instantiations of recent protocols for secure multiparty computation and non-interactive zero-knowledge arguments of knowledge.
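The paper's construction (from extractable commitments and PRGs) is beyond a short sketch, but the additive homomorphism being discussed can be illustrated with a classic Pedersen-style commitment, which is unrelated to the paper's scheme and uses toy parameters far below cryptographic sizes:

```python
# Toy Pedersen-style commitment over a small prime-order subgroup.
# Parameters are illustrative only; real schemes use cryptographic sizes.
p, q = 23, 11          # p = 2q + 1, so the squares mod p form a subgroup of order q
g, h = 4, 9            # generators of the order-q subgroup mod p

def commit(m, r):
    """Com(m; r) = g^m * h^r mod p, with message and randomness taken mod q."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

# Additive homomorphism: the product of two commitments is a commitment
# to the sum of the messages (the randomness also adds).
c1 = commit(3, 5)
c2 = commit(4, 2)
c_sum = (c1 * c2) % p
assert c_sum == commit(3 + 4, 5 + 2)
```

The receiver can thus check an opened sum against the product of the commitments it already holds, without ever seeing the individual summands.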
Learning and Type Compatibility in Signaling Games
Which equilibria will arise in signaling games depends on how the receiver
interprets deviations from the path of play. We develop a micro-foundation for
these off-path beliefs, and an associated equilibrium refinement, in a model
where equilibrium arises through non-equilibrium learning by populations of
patient and long-lived senders and receivers. In our model, young senders are
uncertain about the prevailing distribution of play, so they rationally send
out-of-equilibrium signals as experiments to learn about the behavior of the
population of receivers. Differences in the payoff functions of the types of
senders generate different incentives for these experiments. Using the Gittins
index (Gittins, 1979), we characterize which sender types use each signal more
often, leading to a constraint on the receiver's off-path beliefs based on
"type compatibility" and hence a learning-based equilibrium selection.
Sub-Nyquist Sampling: Bridging Theory and Practice
Sampling theory encompasses all aspects related to the conversion of
continuous-time signals to discrete streams of numbers. The famous
Shannon-Nyquist theorem has become a landmark in the development of digital
signal processing. In modern applications, an increasing number of functions
is being pushed forward to sophisticated software algorithms, leaving only
those delicate, finely-tuned tasks for the circuit level.
In this paper, we review sampling strategies which target reduction of the
ADC rate below Nyquist. Our survey covers classic works from the early 1950s
through recent publications from the past several years.
The prime focus is bridging theory and practice, that is to pinpoint the
potential of sub-Nyquist strategies to emerge from the math to the hardware. In
that spirit, we integrate contemporary theoretical viewpoints, which study
signal modeling in a union of subspaces, together with a taste of practical
aspects, namely how the avant-garde modalities boil down to concrete signal
processing systems. We hope that this presentation style will attract the
interest of both researchers and engineers, promoting the
sub-Nyquist premise into practical applications, and encouraging further
research into this exciting new frontier.
Comment: 48 pages, 18 figures, to appear in IEEE Signal Processing Magazine
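As a minimal illustration of why sub-Nyquist sampling needs a signal model, the Python sketch below (not from the survey) samples a 90 Hz tone at only 100 Hz and locates the spectral peak: the tone folds down to |90 − 100| = 10 Hz, and only prior structure, such as the union-of-subspaces models the survey discusses, can disambiguate which band it came from.

```python
import numpy as np

# A 90 Hz tone sampled at 100 Hz (below its 180 Hz Nyquist rate)
# aliases to |90 - 100| = 10 Hz.
f0, fs, n = 90.0, 100.0, 1000
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f0 * t)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, d=1 / fs)
f_peak = freqs[np.argmax(spectrum)]
# f_peak lands near 10 Hz: without a signal model the original tone is
# unrecoverable; knowing it lives in a known union of narrow bands
# resolves the ambiguity and permits sub-Nyquist acquisition.
```

This is the elementary failure mode that structured sub-Nyquist strategies are designed to turn into a recoverable measurement.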
Sum-Rate Maximization for Linearly Precoded Downlink Multiuser MISO Systems with Partial CSIT: A Rate-Splitting Approach
This paper considers the Sum-Rate (SR) maximization problem in downlink
MU-MISO systems under imperfect Channel State Information at the Transmitter
(CSIT). Contrary to existing works, we consider a rather unorthodox
transmission scheme. In particular, the message intended for one of the users is
split into two parts: a common part, which can be recovered by all users, and a
private part, recovered only by the corresponding user. The remaining users
receive their information through private messages. This
Rate-Splitting (RS) approach was shown to boost the achievable Degrees of
Freedom (DoF) when CSIT errors decay with increased SNR. In this work, the RS
strategy is married with linear precoder design and optimization techniques to
achieve a maximized Ergodic SR (ESR) performance over the entire range of SNRs.
Precoders are designed based on partial CSIT knowledge by solving a stochastic
rate optimization problem by means of Sample Average Approximation (SAA)
coupled with the Weighted Minimum Mean Square Error (WMMSE) approach. Numerical
results show that in addition to the ESR gains, the benefits of RS also include
relaxed CSIT quality requirements and enhanced achievable rate regions compared
to conventional transmission without rate splitting (NoRS).
Comment: accepted to IEEE Transactions on Communications
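The SAA/WMMSE optimization itself is involved, but the rate expressions underlying the RS scheme are simple to evaluate. The sketch below (illustrative channels and precoders, not the paper's optimized design, and with the power budget ignored for clarity) computes a two-user RS sum rate: the common stream must be decodable by both users with the private streams treated as noise, and is removed by successive interference cancellation before private decoding.

```python
import numpy as np

def rs_sum_rate(H, Pc, P, sigma2=1.0):
    """Achievable sum rate of a 2-user rate-splitting scheme.

    H  : (2, Nt) channel matrix, row k belonging to user k.
    Pc : (Nt,) common-stream precoder.
    P  : (Nt, 2) private precoders, column k for user k.
    The common stream is decoded first (both private streams treated
    as noise), then removed before each user decodes its private stream.
    """
    g_c = np.abs(H @ Pc) ** 2                  # common-stream power at each user
    g_p = np.abs(H @ P) ** 2                   # g_p[k, j]: power of stream j at user k
    interf = g_p.sum(axis=1) + sigma2          # noise-plus-interference for the common stream
    R_c = np.min(np.log2(1 + g_c / interf))    # common rate: must be decodable by both
    cross = np.array([g_p[0, 1], g_p[1, 0]])   # residual inter-user interference
    R_k = np.log2(1 + np.diag(g_p) / (cross + sigma2))
    return R_c + R_k.sum()

rng = np.random.default_rng(1)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
Pc = np.ones(2) / np.sqrt(2)                   # illustrative, not optimized
P = np.eye(2)
sum_rate = rs_sum_rate(H, Pc, P)
```

Setting `Pc` to zero recovers the NoRS baseline (private streams only), which is why RS can only improve the rates computed by this model.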
Multiuser detection in a dynamic environment Part I: User identification and data detection
In random-access communication systems, the number of active users varies
with time, and has considerable bearing on the receiver's performance. Thus,
techniques aimed at identifying not only the information transmitted, but also
that number, play a central role in those systems. An example of application of
these techniques can be found in multiuser detection (MUD). In typical MUD
analyses, receivers are based on the assumption that the number of active users
is constant and known at the receiver, and coincides with the maximum number of
users entitled to access the system. This assumption is often overly
pessimistic, since many users might be inactive at any given time, and
detection under the assumption of a number of users larger than the real one
may impair performance.
The main goal of this paper is to introduce a general approach to the problem
of identifying active users and estimating their parameters and data in a
random-access system where users are continuously entering and leaving the
system. The tool whose use we advocate is Random-Set Theory: applying this, we
derive optimum receivers in an environment where the set of transmitters
comprises an unknown number of elements. In addition, we can derive
Bayesian-filter equations which describe the evolution with time of the a
posteriori probability density of the unknown user parameters, and use this
density to derive optimum detectors. In this paper we restrict ourselves to
interferer identification and data detection, while in a companion paper we
shall examine the more complex problem of estimating users' parameters.
Comment: To be published in IEEE Transactions on Information Theory
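The Random-Set-Theory machinery is not reproduced here, but the core inference task, jointly identifying the active-user set and detecting its data, can be illustrated by brute-force Bayesian enumeration for a toy synchronous system with three users and known signatures (all parameters below are invented for illustration):

```python
import numpy as np
from itertools import combinations, product

K, N = 3, 8                                    # users, signature length
rng = np.random.default_rng(2)
S = rng.standard_normal((N, K)) / np.sqrt(N)   # known user signature waveforms
sigma2, p_active = 0.01, 0.5                   # noise variance, activity prior

# True state: users 0 and 2 active, sending BPSK symbols +1 and -1.
active_true = (0, 2)
b_true = {0: 1.0, 2: -1.0}
y = sum(b_true[k] * S[:, k] for k in active_true)
y = y + np.sqrt(sigma2) * rng.standard_normal(N)

# Joint MAP over (active set, BPSK symbols), enumerated exhaustively.
best, best_logpost = None, -np.inf
for r in range(K + 1):
    for A in combinations(range(K), r):
        log_prior = r * np.log(p_active) + (K - r) * np.log(1 - p_active)
        for b in product([-1.0, 1.0], repeat=r):
            resid = y - sum(bi * S[:, k] for bi, k in zip(b, A))
            loglik = -np.dot(resid, resid) / (2 * sigma2)
            if log_prior + loglik > best_logpost:
                best_logpost = log_prior + loglik
                best = (A, b)

map_active, map_symbols = best
```

The exhaustive enumeration over 2^K subsets is only feasible for tiny K; the paper's point is precisely that a principled recursive formulation (Bayesian-filter equations over random sets) replaces this brute force in a dynamic environment.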