
    Is it time to withdraw from China?

    This research cross-employs Social Cognitive Theory (SCT) and three major labor theories, Maslow's, Alderfer's, and Herzberg's, together with Multiple Criteria Decision Making (MCDM) methods comprising Factor Analysis (FA), the Analytical Network Process (ANP), the Fuzzy Analytical Network Process (FANP), and Grey Relational Analysis (GRA), to evaluate four types of innovative investment strategies in China after the domino effect of China's labor revolution. The most significant conclusion is that "changing the original business under a raised compensation policy" (CBRCP) is the best choice for Taiwanese manufacturers operating in China, because CBRCP achieved the highest scores on all three assessed measurements. This conclusion further indicates that, in the interim, manufacturing enterprises have little leverage but to increase employee compensation and benefits to satisfy the demands of the ongoing Chinese labor revolution, even though doing so adds incrementally to their manufacturing costs. The next step beyond this research is therefore to collect additional empirical macroeconomic data and develop a more comprehensive evaluation model, with more in-depth vertical measurement and horizontal assessment methodologies, for devising comprehensive and effective managerial strategies for surviving in this momentous, dynamically changing, and lower-profit Chinese manufacturing market.
    Keywords: China labor revolution; Maslow's theory; Alderfer's theory; Herzberg's theory; multiple criteria decision making
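Of the MCDM methods the abstract names, Grey Relational Analysis (GRA) is the most directly computable: alternatives are scored against an ideal reference sequence and ranked by their mean grey relational coefficient. The sketch below illustrates that generic procedure only; the strategy names and score matrix are invented for illustration, not data from the paper.

```python
# A minimal Grey Relational Analysis (GRA) sketch. Strategy names and the
# alternatives-by-criteria score matrix below are hypothetical.

def gra_grades(matrix, zeta=0.5):
    """Rank alternatives (rows) against criteria (columns), larger-is-better."""
    cols = list(zip(*matrix))
    # Normalise each criterion to [0, 1].
    norm = [[(x - min(c)) / (max(c) - min(c)) for x, c in zip(row, cols)]
            for row in matrix]
    # Distance from the ideal reference sequence (all ones).
    deltas = [[1.0 - v for v in row] for row in norm]
    dmin = min(min(r) for r in deltas)
    dmax = max(max(r) for r in deltas)
    # Grey relational coefficients with distinguishing coefficient zeta.
    coeff = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in row]
             for row in deltas]
    # Grade of each alternative = mean coefficient across criteria.
    return [sum(row) / len(row) for row in coeff]

strategies = ["CBRCP", "keep-as-is", "relocate", "automate"]   # hypothetical
scores = [[9, 8, 7], [5, 6, 4], [6, 5, 8], [7, 7, 6]]          # hypothetical
grades = gra_grades(scores)
best = strategies[grades.index(max(grades))]
```

With these toy scores the first row dominates two of the three criteria, so its grade is highest and it is selected; in the paper's setting the matrix would come from the FA/ANP/FANP stages rather than being supplied by hand.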

    An empirical survey: Can green marketing really entice customers to pay more?

    This research integrated Social Cognitive Theory (SCT) and the Engel–Kollat–Blackwell consumer purchasing model (EKB model) to discuss three possible relations between green marketing and customers' purchasing and payment, with consideration given to environmental-protection issues: "does negatively entice", "does possibly entice", and "does positively entice". Based on the measured results, the main contribution of this research is not only that it cross-analysed three theories, SCT, fuzzy theory (FT), and the EKB model, together with the novel F-ANP method of the MCDM methodology, to evaluate the collected data, but also that it showed that green marketing does possibly entice customers to pay more (GMPECPM). These results challenge the fundamental assumption of the traditional green-marketing research field that customers are willing to pay more for green products and services because they are supporting green initiatives and helping environmental protection. Two major future research directions are also briefly outlined: (1) data collection should be strengthened to gather more empirical customer feedback, corporate management comments, and professional scholars' reports; and (2) enterprises should establish a green-branding initiative after successfully executing green-marketing strategies.
    Keywords: Green Marketing (G-marketing); Multiple Criteria Decision Making (MCDM); Fuzzy Analytical Network Process (F-ANP)

    Quality of Information in Mobile Crowdsensing: Survey and Research Challenges

    Smartphones have become the most pervasive devices in people's lives, and are clearly transforming the way we live and perceive technology. Today's smartphones benefit from almost ubiquitous Internet connectivity and come equipped with a plethora of inexpensive yet powerful embedded sensors, such as accelerometer, gyroscope, microphone, and camera. This unique combination has enabled revolutionary applications based on the mobile crowdsensing paradigm, such as real-time road traffic monitoring, air and noise pollution monitoring, crime control, and wildlife monitoring, just to name a few. Unlike prior sensing paradigms, humans are now the primary actors of the sensing process, since they become fundamental in retrieving reliable and up-to-date information about the event being monitored. As humans may behave unreliably or maliciously, assessing and guaranteeing Quality of Information (QoI) becomes more important than ever. In this paper, we provide a new framework for defining and enforcing the QoI in mobile crowdsensing, and analyze in depth the current state-of-the-art on the topic. We also outline novel research challenges, along with possible directions of future work.
    Comment: To appear in ACM Transactions on Sensor Networks (TOSN).
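One common building block for QoI assessment when "humans may behave unreliably or maliciously" is truth discovery: iteratively estimating the true value of a monitored quantity while down-weighting participants whose reports deviate from it. The sketch below shows a generic inverse-error scheme; the participant names, readings, and update rule are illustrative, not the framework proposed in the paper.

```python
# A toy truth-discovery sketch: weight crowdsensed reports by estimated
# reliability, where reliability is inversely proportional to a reporter's
# distance from the current consensus estimate. All values are invented.

def truth_discovery(readings, iters=10):
    """readings: {participant: value} for one monitored quantity."""
    estimate = sum(readings.values()) / len(readings)  # start: plain mean
    weights = {p: 1.0 for p in readings}
    for _ in range(iters):
        # Participants far from the current estimate get lower weight.
        errors = {p: abs(v - estimate) + 1e-9 for p, v in readings.items()}
        weights = {p: 1.0 / e for p, e in errors.items()}
        total = sum(weights.values())
        estimate = sum(weights[p] * readings[p] for p in readings) / total
    return estimate, weights

# Three consistent sensors near 42 and one unreliable outlier.
reports = {"alice": 41.8, "bob": 42.1, "carol": 42.0, "spam-bot": 90.0}
value, w = truth_discovery(reports)
```

After a few iterations the estimate settles inside the honest cluster and the outlier's weight collapses, which is the basic effect a QoI layer relies on before any richer trust model is applied.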

    Signature Verification Approach using Fusion of Hybrid Texture Features

    In this paper, a writer-dependent signature verification method is proposed. Two different types of texture features, namely Wavelet and Local Quantized Patterns (LQP) features, are employed to extract transform-based and statistical information from signature images. For each writer, two separate one-class support vector machines (SVMs), corresponding to the LQP and Wavelet feature sets, are trained to obtain two different authenticity scores for a given signature. Finally, a score-level classifier fusion method is used to integrate the scores obtained from the two one-class SVMs into the verification score. In the proposed method, only genuine signatures are used to train the one-class SVMs. The proposed signature verification method has been tested on four different publicly available datasets, and the results demonstrate the generality of the proposed method. The proposed system outperforms other existing systems in the literature.
    Comment: Neural Computing and Applications.
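The final stage the abstract describes, combining two per-writer authenticity scores into one decision, can be sketched independently of the feature extractors. Below, a weighted sum stands in for the score-level fusion step; the scores, the equal weighting, and the acceptance threshold are all illustrative assumptions, not values from the paper.

```python
# A minimal score-level fusion sketch in the spirit of the paper's pipeline:
# two authenticity scores (stand-ins for the LQP-based and Wavelet-based
# one-class SVM outputs, assumed normalised to [0, 1]) are combined by a
# weighted sum and thresholded. Weights and threshold are hypothetical.

def fuse_scores(score_lqp, score_wavelet, w=0.5):
    """Weighted-sum fusion of two normalised authenticity scores."""
    return w * score_lqp + (1.0 - w) * score_wavelet

def verify(score_lqp, score_wavelet, threshold=0.6):
    """Accept the signature as genuine if the fused score clears the threshold."""
    return fuse_scores(score_lqp, score_wavelet) >= threshold

genuine = verify(0.82, 0.74)   # both scorers agree: accept
forgery = verify(0.35, 0.41)   # both scorers agree: reject
```

In practice the weight and threshold would be tuned per writer on genuine signatures only, matching the one-class training regime the paper describes.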

    State of the art document clustering algorithms based on semantic similarity

    The constant growth of the Internet has hugely increased the number of text documents in electronic form, and techniques to group these documents into meaningful clusters have become critical. Traditional clustering methods were based on statistical features, clustering documents by syntactic rather than semantic notions. These techniques therefore gathered dissimilar documents into the same group because of polysemy and synonymy problems. The key solution to this issue is document clustering based on semantic similarity, in which documents are grouped according to their meaning rather than their keywords. In this research, eighty papers that use semantic similarity in different fields have been reviewed; forty of them, published between 2014 and 2020, that apply semantic similarity to document clustering were selected for deep study. A comprehensive literature review of all the selected papers is given, with detailed comparison of their clustering algorithms, the tools they utilize, and their methods of evaluation, which helps in the implementation and evaluation of document clustering. The surveyed research is used to guide the preparation of the proposed research. Finally, an intensive discussion comparing the works is presented, and the results of our research are shown in figures.
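The core idea the survey covers, grouping documents by meaning rather than keywords, is usually realised by comparing document embedding vectors with cosine similarity. The sketch below uses hand-made toy vectors and a deliberately simple greedy pass; real systems would use embeddings from word2vec, BERT, or similar, and a proper clustering algorithm.

```python
# A minimal sketch of semantic document clustering: each document is an
# embedding vector (toy, hand-made here), and documents whose cosine
# similarity to a cluster representative exceeds a threshold join that
# cluster. The vectors and threshold are illustrative assumptions.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cluster(vectors, threshold=0.8):
    """Greedy single-pass clustering: join the first similar-enough cluster,
    else start a new one (the first member serves as the representative)."""
    clusters = []  # list of (representative_vector, member_indices)
    for i, v in enumerate(vectors):
        for rep, members in clusters:
            if cosine(v, rep) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((v, [i]))
    return [members for _, members in clusters]

docs = [
    [0.9, 0.1, 0.0],   # toy "finance" topic vector
    [0.8, 0.2, 0.1],   # semantically close to the first
    [0.0, 0.1, 0.9],   # toy "sports" topic vector
]
groups = cluster(docs)
```

Because the comparison operates on meaning-bearing vectors rather than shared keywords, two documents with different surface vocabulary but the same topic land in the same group, which is exactly the polysemy/synonymy problem the surveyed methods address.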

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is reviewed, covering model-based approaches, 'programmed' AI, and machine-learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise.
    Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other, increasing detection rates and lowering false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation, and adaptation are more readily facilitated.
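The hybridisation the report recommends can be sketched as two cooperating components: a signature (rule) check for known misuse patterns, backed by an anomaly check against a learned normal profile for unseen ones. The event fields, rules, profile, and thresholds below are invented for illustration; no such system is specified in the report.

```python
# A toy hybrid misuse detector: a signature component flags known misuse
# patterns (low false negatives for what it encodes), and an anomaly
# component flags deviations from a learned normal profile (catches unseen
# cases, at the cost of false positives). All fields/rules are hypothetical.

KNOWN_MISUSE_RULES = [
    lambda e: e["calls_per_min"] > 100,            # known flooding pattern
    lambda e: e["destination"] == "premium-rate",  # known fraud pattern
]

def anomaly_score(event, profile):
    """Deviation of call duration from the learned normal mean, in std-devs."""
    return abs(event["duration"] - profile["mean"]) / profile["std"]

def detect(event, profile, anomaly_threshold=3.0):
    if any(rule(event) for rule in KNOWN_MISUSE_RULES):
        return "known-misuse"       # signature component fired
    if anomaly_score(event, profile) > anomaly_threshold:
        return "possible-misuse"    # anomaly component fired
    return "normal"

profile = {"mean": 180.0, "std": 60.0}  # toy profile learned from normal traffic
ok = detect({"calls_per_min": 2, "destination": "local", "duration": 200}, profile)
odd = detect({"calls_per_min": 2, "destination": "local", "duration": 2000}, profile)
bad = detect({"calls_per_min": 500, "destination": "local", "duration": 90}, profile)
```

The report's closing point maps onto this structure directly: the rule list is the component that is hard to maintain and cannot generalise, so the research challenge is letting the anomaly side (and feedback between the two) update it automatically.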