15 research outputs found

    “Professionalization” or “Proletarianization”: Which Concept Defines the Changes in Teachers’ Work?

    Abstract: In many parts of the world, particularly since 1980, the field of education has undergone a set of transformations. As these transformations change the meaning and content of education, teachers, the most basic actors in the field, are expected to keep pace with this change process and even to be its active agents. It is therefore possible to speak of significant recent changes in teachers’ education, employment, and working conditions. Discussions in the literature that seek to understand and explain the changes and transformations in the teaching profession usually rest on two main approaches. The first claims that teachers are being “professionalized” over time; the second claims that teachers are no longer professionals but, on the contrary, are increasingly being deskilled by the transformation process and hence “proletarianized”. The aim of this study is to establish a theoretical framework for understanding the changes in teachers’ work by focusing on the theoretical discussions of “professionalization” and “proletarianization” along the axis of the teaching profession.

    An empirical study on bug assignment automation using Chinese bug data

    Bug assignment is an important step in bug life-cycle management. In large projects, this task consumes a substantial amount of human effort. To compare with previous studies of automatic bug assignment in FOSS (Free/Open Source Software) projects, we conduct a case study on a proprietary software project in China. Our study consists of two experiments in automatic bug assignment, using the Chinese text and the non-text information of the bug data, respectively. In the first experiment, based on the text data of the bug repository, an SVM predicts bug assignments with accuracy close to that of human triagers. The second experiment explores the usefulness of non-text data in making such predictions. The main results of our study are that text data are the most useful data in the bug tracking system for triaging bugs, and that automation based on text data can effectively reduce the manual effort.
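    The text-based experiment described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the bug reports and assignee names are invented, English stands in for the Chinese text (which would first need word segmentation, e.g. with a tokenizer such as jieba), and a TF-IDF + linear SVM pairing is assumed as a standard realization of "SVM on bug-report text".

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    # Toy bug reports with known assignees (illustrative data only)
    reports = [
        "login page crashes after password reset",
        "password field rejects valid characters",
        "report export produces corrupted pdf",
        "pdf export missing table borders",
    ]
    assignees = ["alice", "alice", "bob", "bob"]

    # TF-IDF features fed to a linear SVM, trained on triaged history
    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    model.fit(reports, assignees)

    # A new report is routed to the developer with the most similar history
    print(model.predict(["pdf export fails with error"])[0])
    ```

    In practice the training set would be the project's historical, already-triaged bug repository, and accuracy would be measured against the human triagers' actual assignments.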

    Improving software testing process: feature prioritization to make winners of success-critical stakeholders

    For a successful software project, acceptable quality must be achieved at an acceptable cost, demonstrating business value to customers and satisfactorily meeting delivery deadlines. Testing is the most widely used approach for determining that the intended functionalities are performed correctly and that the desired level of service is achieved; however, it is also a labor-intensive and expensive process across the whole software life cycle. Most current testing processes are technique-centered rather than organized to maximize business value. In this article, we extend and elaborate the '4+1' theoretical lenses of the Value-based Software Engineering (VBSE) framework in the software testing process, and propose a multi-objective feature prioritization strategy for test planning and control that aligns the internal testing process with value objectives coming from customers and markets. Our case study in a real-life business project shows that this method allows reasoning about the software testing process in different dimensions: it helps manage the testing process effectively and efficiently, provides information for continuous internal software process improvement, and increases customer satisfaction, making winners of all success-critical stakeholders (SCSs) in the software testing process. © 2010 John Wiley & Sons, Ltd. National Natural Science Foundation of China 90718042, 60873072, 60903050; National Hi-Tech RD Plan of China 2007AA010303; National Basic Research Program 2007CB310802.
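    A multi-objective feature prioritization of this kind can be sketched as a weighted scoring over value dimensions. The dimension names, weights, and feature data below are purely illustrative assumptions, not the paper's actual criteria; the point is only the mechanism: score each feature against several stakeholder value objectives, then allocate testing effort by rank.

    ```python
    # Hypothetical value dimensions per feature (illustrative numbers)
    features = {
        "checkout": {"business_value": 9, "risk": 8, "cost": 3},
        "search":   {"business_value": 7, "risk": 4, "cost": 2},
        "profile":  {"business_value": 3, "risk": 2, "cost": 1},
    }

    # Stakeholder-derived weights; cost counts against priority
    weights = {"business_value": 0.5, "risk": 0.4, "cost": -0.1}

    def priority(scores):
        """Weighted sum of a feature's scores across all value dimensions."""
        return sum(weights[dim] * value for dim, value in scores.items())

    # Test the highest-priority features first
    ranked = sorted(features, key=lambda f: priority(features[f]), reverse=True)
    print(ranked)
    ```

    In a real VBSE setting, the weights would be negotiated with the success-critical stakeholders rather than fixed by the test team, and re-prioritization would happen continuously as market and customer objectives shift.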

    Mining Quantitative Associations in Large Database

    Association rule mining algorithms operate on a data matrix to derive association rules, discarding the quantities of the items, which contain valuable information. To make full use of the knowledge inherent in item quantities, an extension named Ratio Rules [6] has been proposed to capture quantitative associations. However, the approach addressed in [6] is mainly based on Principal Component Analysis (PCA) and, as a result, cannot guarantee that the ratio coefficients are non-negative. This may lead to serious problems in the application of the rules. In this paper, a new method, called Principal Non-negative Sparse Coding (PNSC), is provided for learning the associations between itemsets in the form of Ratio Rules. Experiments on several datasets illustrate that the proposed method performs well for discovering latent associations between itemsets in large datasets. © Springer-Verlag Berlin Heidelberg 2005.

    Mining ratio rules via principal sparse non-negative matrix factorization

    Association rules are traditionally designed to capture statistical relationships among itemsets in a given database. To additionally capture quantitative association knowledge, F. Korn et al. recently proposed a paradigm named Ratio Rules [4] for quantifiable data mining. However, their approach is mainly based on Principal Component Analysis (PCA) and, as a result, cannot guarantee that the ratio coefficients are non-negative. This may lead to serious problems in the rules’ application. In this paper, we propose a new method, called Principal Sparse Non-Negative Matrix Factorization (PSNMF), for learning the associations between itemsets in the form of Ratio Rules. In addition, we provide a support measurement to weigh the importance of each rule for the entire dataset.

    Learning quantifiable associations via principal sparse non-negative matrix factorization

    Association rules are traditionally designed to capture statistical relationships among itemsets in a given database. To additionally capture quantitative association knowledge, Korn et al. recently proposed a paradigm named Ratio Rules [6] for quantifiable data mining. However, their approach is mainly based on Principal Component Analysis (PCA) and, as a result, cannot guarantee that the ratio coefficients are non-negative. This may lead to serious problems in the rules' application. In this paper, we propose a new method, called Principal Sparse Non-negative Matrix Factorization (PSNMF), for learning the associations between itemsets in the form of Ratio Rules. In addition, we provide a support measurement to weigh the importance of each rule for the entire dataset. Experiments on several datasets illustrate that the proposed method performs well for discovering latent associations between itemsets in large datasets.
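    The core idea behind these three abstracts, extracting non-negative ratio associations from quantity data where PCA-based Ratio Rules can produce negative coefficients, can be illustrated with plain non-negative matrix factorization. The sketch below is not the papers' PSNMF/PNSC (it omits their sparsity term and support measurement); it only shows how an NMF of a transaction-quantity matrix yields rule-like components whose coefficients are non-negative by construction. The basket data are invented.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Toy transaction matrix: rows = baskets, columns = quantities of
    # (bread, butter, milk). Two latent buying patterns are embedded:
    # "2 bread : 1 butter" and "milk alone".
    X = np.array([
        [2.0, 1.0, 0.0],
        [4.0, 2.0, 0.0],
        [0.0, 0.0, 3.0],
        [0.0, 0.0, 5.0],
    ])

    # Plain NMF stands in for the papers' sparse variants; unlike PCA
    # eigenvectors, its factors cannot contain negative coefficients.
    nmf = NMF(n_components=2, init="nndsvd", random_state=0)
    W = nmf.fit_transform(X)   # basket-to-pattern weights
    H = nmf.components_        # each row is a rule-like item-ratio vector

    # Normalizing a component by its largest entry exposes the item ratios
    for row in H:
        print(np.round(row / row.max(), 2))
    ```

    Interpreting a component as a Ratio Rule then amounts to reading off the relative magnitudes within a row of `H` (e.g. a row proportional to (2, 1, 0) says "bread and butter are bought in a 2:1 quantity ratio"), which is only meaningful because every coefficient is guaranteed non-negative.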