35 research outputs found
A Novel Non-Contact Heart Rate Estimation Method Based on the Sparsity of Heartbeat Signals in the Time and Frequency Domains
Purpose: This study aims to analyze the effect of green supply chain management (GSCM) and green marketing strategy (GMS) on green purchasing intention (GPI). The study is conducted on craft SMEs in the Special Region of Yogyakarta, Indonesia. Design/methodology/approach: This study uses primary data obtained through questionnaires. The units of analysis are organizations and individuals. The sampling technique is purposive sampling, with the criteria of SMEs that conduct environmentally friendly production processes and consumers who have bought green products. Data are analyzed using structural equation modeling. Findings: The data analysis shows that green supply chain management influences green marketing strategy, and that green marketing strategy influences green purchase intention. Research limitations/implications: This study is limited by its relatively small sample size, and the sample includes only environmentally oriented SMEs. Large companies that are also environmentally friendly were not sampled, so the results generalize only to SMEs. Future research should include both SMEs and large companies, making the findings easier to generalize and allowing separate tests of how GSCM applies to each. This study also analyzed GSCM from only two dimensions, GP and GCC; other variables that can be used to explain GSCM are internal environmental management, green information systems, and eco-design and packaging. Practical implications: GSCM can begin with conducting the right GP and with continual coordination with consumers regarding green products. GP (green purchasing) and GCC (green consumer cooperation), as GSCM elements, have a strong association in predicting the success of a green marketing strategy.
SMEs are expected to pay attention to raw material purchasing, so that genuinely environmentally friendly raw materials enter the production process and yield environmentally friendly products. Originality/value: This study analyzes the relationship between GSCM practices and organizational performance in the context of green marketing and business strategies, where studies are still scarce. In addition, although awareness of green operations and green marketing is increasing in Asia, relevant studies in Asian countries, especially in Southeast Asia, remain few. The results of this study show that the GSCM model can increase value along the supply chain by emphasizing green supply chain management and green marketing.
Translation-Invariant Shrinkage/Thresholding of Group Sparse Signals
This paper addresses signal denoising when large-amplitude coefficients form
clusters (groups). The L1-norm and other separable sparsity models do not
capture the tendency of coefficients to cluster (group sparsity). This work
develops an algorithm, called 'overlapping group shrinkage' (OGS), based on the
minimization of a convex cost function involving a group-sparsity promoting
penalty function. The groups are fully overlapping so the denoising method is
translation-invariant and blocking artifacts are avoided. Based on the
principle of majorization-minimization (MM), we derive a simple iterative
minimization algorithm that reduces the cost function monotonically. A
procedure for setting the regularization parameter, based on attenuating the
noise to a specified level, is also described. The proposed approach is
illustrated on speech enhancement, wherein the OGS approach is applied in the
short-time Fourier transform (STFT) domain. The denoised speech produced by OGS
does not suffer from musical noise. Comment: 33 pages, 7 figures, 5 tables
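The MM iteration at the heart of OGS can be sketched in a few lines. The following is a minimal illustration only: the group size, iteration count, and the small eps safeguard are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def ogs_denoise(y, lam, group_size=3, n_iter=25, eps=1e-12):
    """Sketch of overlapping group shrinkage (OGS): minimize
    0.5*||y - x||^2 + lam * sum_j ||x over group j||_2
    with fully overlapping groups of length `group_size`,
    via a majorization-minimization (MM) iteration."""
    x = y.astype(float).copy()
    K = group_size
    for _ in range(n_iter):
        # energy of every overlapping group: moving sum of x^2 (zero-padded)
        e = np.convolve(x**2, np.ones(K), mode="full")   # length N + K - 1
        r = 1.0 / np.sqrt(e + eps)                        # 1 / ||x_g||_2 per group
        # each sample belongs to K groups; sum those groups' weights
        w = np.convolve(r, np.ones(K), mode="full")[K - 1:K - 1 + len(x)]
        # MM update: minimize the quadratic majorizer coordinatewise
        x = y / (1.0 + lam * w)
    return x
```

Because the per-sample weight is strictly positive, every coefficient is shrunk toward zero, with isolated coefficients shrunk more than clustered ones.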
A unified approach to sparse signal processing
A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing and rate of innovation. The redundancy introduced by channel coding i
Structured Sub-Nyquist Sampling with Applications in Compressive Toeplitz Covariance Estimation, Super-Resolution and Phase Retrieval
Sub-Nyquist sampling has received a huge amount of interest in the past decade. In classical compressed sensing theory, if the measurement procedure satisfies a particular condition known as the Restricted Isometry Property (RIP), we can achieve stable recovery of signals with low-dimensional intrinsic structure using an order-wise optimal sample size. Such structures include sparsity and low rank, in both the vector and matrix cases. The main drawback of conventional compressed sensing theory is that random measurements are required to ensure the RIP. However, in many applications such as imaging and array signal processing, applying independent random measurements may not be practical because the systems are deterministic. Moreover, compressed sensing based on random measurements relies on convex programs for signal recovery even in the noiseless case, and solving those programs is computationally intensive if the ambient dimension is large, especially in the matrix case. The main contribution of this dissertation is a deterministic sub-Nyquist sampling framework for compressing structured signals, together with computationally efficient algorithms. Besides the widely studied sparse and low-rank structures, we particularly focus on cases where the signals of interest are stationary or the measurements are of Fourier type. The key difference between our work and classical compressed sensing theory is that we explicitly exploit the second-order statistics of the signals and study the equivalent quadratic measurement model in the correlation domain. The essential observation made in this dissertation is that a difference/sum coarray structure arises from the quadratic model if the measurements are of Fourier type. With these observations, we are able to achieve a better compression rate for covariance estimation, identify more sources in array signal processing, or recover signals of larger sparsity.
In this dissertation, we will first study the problem of Toeplitz covariance estimation. In particular, we will show how to achieve an order-wise optimal compression rate using the idea of sparse arrays in both the general and low-rank cases. Then, an analysis framework for super-resolution with a positivity constraint is established. We will present fundamental robustness guarantees, efficient algorithms, and applications in practice. Next, we will study the problem of phase retrieval, for which we successfully apply the sparse array ideas by fully exploiting the quadratic measurement model. We achieve near-optimal sample complexity for both the sparse and general cases with practical Fourier measurements and provide efficient, deterministic recovery algorithms. Finally, we will further elaborate on the essential role of the non-negativity constraint in underdetermined inverse problems. In particular, we will analyze the nonlinear coarray interpolation problem and develop a universal upper bound on the interpolation error. The bilinear problem with a non-negativity constraint is considered next, and an exact characterization of the ambiguous solutions is established for the first time in the literature. Lastly, we will show how to apply the nested array idea to solve real problems such as Kriging. Using spatial correlation information, we are able to obtain a stable estimate of the field of interest with fewer sensors than classical methodologies. Extensive numerical experiments are implemented to demonstrate our theoretical claims.
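The coarray idea above can be illustrated with a toy estimator: averaging sample-covariance entries over the difference coarray of a sparse array yields a Toeplitz (stationary) covariance estimate from fewer sensors than lags. This is only a sketch of the general principle; the sensor positions and function names are illustrative, not the dissertation's algorithms.

```python
import numpy as np

def coarray_toeplitz_estimate(snapshots, positions, n_lags):
    """Sketch: estimate a Toeplitz covariance from a sparse array by
    averaging sample-covariance entries over the difference coarray.
    `snapshots` is (num_sensors, num_snapshots); `positions` are the
    integer sensor locations (illustrative, e.g. a nested array)."""
    # sample covariance of the sparse-array observations
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    est = np.zeros(n_lags, dtype=complex)
    cnt = np.zeros(n_lags)
    for i, pi in enumerate(positions):
        for j, pj in enumerate(positions):
            lag = pi - pj
            if 0 <= lag < n_lags:     # entry (i, j) informs coarray lag pi - pj
                est[lag] += R[i, j]
                cnt[lag] += 1
    mask = cnt > 0
    est[mask] /= cnt[mask]            # average redundant coarray entries
    return est                        # est[k] ~ autocovariance at lag k
```

With positions {0, 1, 2, 5}, only 4 sensors cover all lags 0 through 5, which is the compression the coarray viewpoint buys.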
l0 Sparse signal processing and model selection with applications
Sparse signal processing has far-reaching applications including compressed sensing, media compression/denoising/deblurring, microarray analysis and medical imaging. The main reason for its popularity is that many signals have a sparse representation, provided the basis is suitably selected. However, the difficulty lies in developing an efficient method of recovering such a representation.
To this end, two efficient sparse signal recovery algorithms are developed in the first part of this thesis. The first is based on direct minimization of the l0 norm via cyclic descent and is called the L0LS-CD (l0 penalized least squares via cyclic descent) algorithm. The second minimizes smooth approximations of sparsity measures, including those of the l0 norm, via the majorization-minimization (MM) technique, and is called the QC (quadratic concave) algorithm.
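The cyclic-descent idea behind L0LS-CD can be sketched as follows. This is an illustrative reimplementation of the basic hard-threshold coordinate update, not the thesis's exact algorithm; names and defaults are assumptions.

```python
import numpy as np

def l0ls_cd(A, y, lam, n_sweeps=50):
    """Sketch of l0-penalized least squares via cyclic descent:
    minimize 0.5*||y - A b||^2 + lam*||b||_0 one coordinate at a time.
    Each coordinate update is a hard threshold: keep the unpenalized
    least-squares value only if its fit gain exceeds the l0 charge."""
    n = A.shape[1]
    b = np.zeros(n)
    col_sq = np.sum(A**2, axis=0)       # squared column norms
    r = y.astype(float).copy()          # residual y - A b
    for _ in range(n_sweeps):
        for j in range(n):
            r = r + A[:, j] * b[j]      # remove coordinate j's contribution
            t = A[:, j] @ r / col_sq[j] # unpenalized coordinate LS update
            # keep t only if the LS decrease 0.5*||a_j||^2*t^2 beats lam
            b[j] = t if 0.5 * col_sq[j] * t**2 > lam else 0.0
            r = r - A[:, j] * b[j]      # restore residual
    return b
```

On a noiseless over-determined system, the iteration typically drives the spurious coordinates below the threshold and recovers the true sparse coefficients.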
The L0LS-CD algorithm is developed further by extending it to its multivariate (V-L0LS-CD (vector L0LS-CD)) and group (gL0LS-CD (group L0LS-CD)) regression variants. Computational speed-ups to the basic cyclic descent algorithm are discussed and a greedy version of L0LS-CD is developed. The stability of these algorithms is analyzed, and the impact of the penalty parameter and of proper initialization on algorithm performance is highlighted. A suitable method for performance comparison of sparse approximation algorithms in the presence of noise is established. Simulations compare L0LS-CD and V-L0LS-CD with a range of alternatives on under-determined as well as over-determined systems.
The QC algorithm is applicable to a class of penalties that are neither convex nor concave but have what we call the quadratic concave property. Convergence proofs for this algorithm are presented, and it is compared with the Newton algorithm, the concave-convex (CC) procedure, and the class of proximity algorithms. Simulations focus on smooth approximations of the l0 norm and compare them with other l0 denoising algorithms.
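The MM treatment of a quadratic-concave penalty can be illustrated for scalar denoising with one smooth l0 surrogate, p(b) = 1 - exp(-b^2/gamma). Both the penalty and the update below are one illustrative instance under that assumption, not necessarily the thesis's exact formulation.

```python
import numpy as np

def qc_denoise(y, lam, gamma=0.5, n_iter=50):
    """Sketch of an MM scheme for a smooth l0 surrogate:
    minimize 0.5*(y - b)^2 + lam*(1 - exp(-b^2/gamma)) per sample.
    The penalty is concave in b^2, so its tangent in b^2 majorizes it,
    giving a reweighted-ridge update at each MM step."""
    b = y.astype(float).copy()
    for _ in range(n_iter):
        # weight = derivative of the penalty with respect to b^2
        w = np.exp(-b**2 / gamma) / gamma
        # minimize the quadratic surrogate 0.5*(y-b)^2 + lam*w*b^2
        b = y / (1.0 + 2.0 * lam * w)
    return b
```

Large coefficients see a vanishing weight and pass almost unshrunk, while small (noise-level) coefficients are strongly attenuated, which is the qualitative behavior an l0 surrogate should have.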
Next, two applications of sparse modeling are considered. In the first application the L0LS-CD algorithm is extended to recover a sparse transfer function in the presence of coloured noise. The second uses gL0LS-CD to recover the topology of a sparsely connected network of dynamic systems. Both applications use Laguerre basis functions for model expansion.
The role of model selection in sparse signal processing is widely neglected in the literature. The tuning/penalty parameter of a sparse approximation problem should be selected using a model selection criterion that minimizes a desired discrepancy measure. Compared to commonly used model selection methods, the SURE (Stein's unbiased risk estimator) approach stands out as one that does not suffer from the limitations of the others. Most model selection criteria are developed based on signal or prediction mean squared error. The last section of this thesis instead develops a SURE criterion for parameter mean squared error and applies this result to the l1 penalized least squares problem with grouped variables. Simulations based on topology identification of a sparse network are presented to illustrate the criterion and compare it with alternative model selection criteria.
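For the scalar-sequence (orthonormal) case, the classical SURE rule for soft thresholding shows how a tuning parameter can be chosen by minimizing an unbiased risk estimate. This standard SureShrink-style sketch is for illustration; it is not the thesis's parameter-MSE criterion.

```python
import numpy as np

def sure_soft_threshold(y, sigma):
    """Sketch: choose the soft-threshold level t minimizing Stein's
    unbiased risk estimate for y = theta + N(0, sigma^2) noise:
    SURE(t) = -d*sigma^2 + sum(min(y_i^2, t^2)) + 2*sigma^2*#{|y_i| > t}.
    Candidate thresholds are the sorted |y_i| (an illustrative grid)."""
    d = y.size
    cands = np.concatenate(([0.0], np.sort(np.abs(y))))
    best_t, best_sure = 0.0, np.inf
    for t in cands:
        sure = (-d * sigma**2
                + np.sum(np.minimum(y**2, t**2))
                + 2 * sigma**2 * np.count_nonzero(np.abs(y) > t))
        if sure < best_sure:
            best_t, best_sure = t, sure
    # apply the selected soft threshold
    theta_hat = np.sign(y) * np.maximum(np.abs(y) - best_t, 0.0)
    return best_t, theta_hat
```

No oracle knowledge of the true parameters is needed: SURE estimates the parameter MSE from the data alone, which is exactly the property the thesis exploits for grouped-variable problems.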
Sparse Signal Recovery Based on Compressive Sensing and Exploration Using Multiple Mobile Sensors
The work in this dissertation is focused on two areas within the general discipline of statistical signal processing. First, several new algorithms are developed and exhaustively tested for solving the inverse problem of compressive sensing (CS). CS is a recently developed sub-sampling technique for signal acquisition and reconstruction that is more efficient than traditional Nyquist sampling. It makes it possible to directly acquire just the important information of the signal of interest through compressed data acquisition. Many natural signals are sparse or compressible in some domain, such as the pixel domain of images, time, or frequency. Compressibility or sparsity here means that, in some domain, many coefficients of the signal of interest are either zero or of low amplitude while a few are dominant. Therefore, we may not need many direct or indirect samples of the signal or phenomenon to capture its important information. As a simple example, consider a system of linear equations with N unknowns. Traditional methods require N linearly independent equations to solve for the unknowns. However, if many of the variables are known to be zero or of low amplitude, then intuitively there is no need for N equations. Unfortunately, in many real-world problems, the number of non-zero (effective) variables is unknown. In these cases, CS is capable of solving for the unknowns efficiently. In other words, it enables us to collect the important information of the sparse signal with a low number of measurements. Then, given that the signal is sparse, extracting its important information is the challenge that needs to be addressed.
Since most existing recovery algorithms in this area need prior knowledge or parameter tuning, applying them to real-world problems with good performance is difficult. In this dissertation, several new CS algorithms are proposed for the recovery of sparse signals. The proposed algorithms mostly do not require any prior knowledge of the signal or its structure. In fact, these algorithms can learn the underlying structure of the signal from the collected measurements and successfully reconstruct the signal with high probability. Another merit of the proposed algorithms is that they are generally flexible in incorporating any prior knowledge of the noise, sparsity level, and so on.
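The dissertation's own algorithms are not detailed in this abstract. As a point of reference, a standard CS recovery baseline, ISTA for the l1-regularized least squares problem, can be sketched as follows; it is a generic method, not one of the proposed algorithms.

```python
import numpy as np

def ista(A, y, lam, n_iter=3000):
    """Sketch of iterative soft thresholding (ISTA) for
    min_x 0.5*||y - A x||^2 + lam*||x||_1, a standard baseline for
    recovering a sparse x from compressive measurements y = A x."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L        # gradient step on the LS term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x
```

Note that lam must be tuned by hand here, which is precisely the kind of prior knowledge the dissertation's algorithms aim to avoid.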
The second part of this study is devoted to the deployment of mobile sensors in circumstances where the number of sensors is inadequate to sample the entire region. Deciding where to deploy the sensors, so as to both explore new regions and refine knowledge in already visited areas, is therefore of high importance. Here, a new framework is proposed to decide on the trajectories of the sensors as they collect measurements. The proposed framework has two main stages. The first stage performs interpolation/extrapolation to estimate the phenomenon of interest at unseen locations, and the second stage decides on an informative trajectory based on the collected and estimated data. This framework can be applied to various problems, such as tuning the constellation of sensor-bearing satellites, robotics, or any type of adaptive sensor placement/configuration problem; depending on the problem, some modifications of the constraints in the framework may be needed. As an application of this work, the proposed framework is applied to a surrogate problem related to the constellation adjustment of sensor-bearing satellites.
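The two-stage idea can be caricatured with a toy sketch, using inverse-distance weighting as a stand-in for the interpolation stage and distance-to-data as an uncertainty proxy for the trajectory stage. Both choices are illustrative assumptions; the dissertation's actual stages are more sophisticated.

```python
import numpy as np

def next_waypoint(visited, values, candidates, power=2):
    """Toy sketch of the two-stage framework:
    stage 1 interpolates the field at each candidate location
    (inverse-distance weighting, IDW), stage 2 picks the candidate
    with the largest uncertainty proxy (distance to nearest datum)."""
    visited = np.asarray(visited, float)
    estimates, scores = [], []
    for c in candidates:
        d = np.linalg.norm(visited - np.asarray(c, float), axis=1)
        if np.any(d < 1e-9):                  # candidate coincides with a datum
            estimates.append(values[np.argmin(d)])
            scores.append(0.0)
            continue
        w = 1.0 / d**power
        estimates.append(np.dot(w, values) / w.sum())  # stage 1: IDW estimate
        scores.append(d.min())                          # stage 2: uncertainty proxy
    best = int(np.argmax(scores))
    return candidates[best], estimates[best]
```

The sensor is steered toward the least-sampled region while an estimate of the field there is carried along, mirroring the explore-while-refining trade-off described above.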