From the Chair
The vote count for the proposed merger of SIGMIS and SIGCPR (Special Interest Group on Computer Personnel Research) has been completed. The merger received overwhelming approval. On behalf of SIGMIS, I would like to extend a warm welcome to the members of SIGCPR. Thank you for your vote of approval for the merger!

Please join me in welcoming Fred Niederman, past Chair of SIGCPR, as the newly appointed CPR Committee Chair for SIGMIS. Fred has graciously agreed to facilitate the strengthening of the bond between SIGMIS and SIGCPR.

I would like to encourage everyone to strengthen the bond between these two groups by getting involved in the annual conference. The next conference is scheduled for April 2003 in Philadelphia, PA, USA, with a broadened set of topics covering general areas of interest to the MIS community. Take my word for it: springtime in Philadelphia is quite pleasant. Please plan to get involved.
XP / Architecture (XA): A Collaborative Learning Process for Agile Methodologies When Teams Grow
Using Machine Learning to Predict Links and Improve Steiner Tree Solutions to Team Formation Problems
The team formation problem has existed for many years in various guises. One important variant is to produce small teams that collectively possess a required set of skills. We propose a framework that uses machine learning to predict unobserved links between collaborators, together with improved Steiner tree solutions, to form small teams covering given tasks. Our framework considers not only the size of the team but also how likely team members are to collaborate with each other. The results show that this model consistently returns smaller collaborative teams.
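The idea in this abstract can be illustrated with a toy sketch: predicted link scores act as edge weights in a collaboration graph, and a greedy set-cover-style heuristic (a simplification, not the paper's Steiner tree algorithm) picks a small team covering the required skills while preferring members likely to collaborate. All names, scores, and skills below are hypothetical.

```python
# Toy sketch (hypothetical data): form a small team covering required
# skills, preferring members with high predicted collaboration scores.

# Hypothetical predicted link scores (higher = more likely to collaborate).
link_score = {
    ("ana", "bo"): 0.9, ("ana", "cy"): 0.2,
    ("bo", "cy"): 0.7, ("ana", "dee"): 0.1,
    ("bo", "dee"): 0.3, ("cy", "dee"): 0.8,
}
# Hypothetical skill sets per candidate.
skills = {
    "ana": {"ml"}, "bo": {"ml", "graphs"},
    "cy": {"graphs", "databases"}, "dee": {"databases"},
}

def score(member, team):
    """Average predicted collaboration score between member and the team."""
    if not team:
        return 0.0
    pairs = [tuple(sorted((member, t))) for t in team]
    return sum(link_score.get(p, 0.0) for p in pairs) / len(team)

def form_team(required):
    """Greedy cover: repeatedly add the member covering the most missing
    skills, breaking ties by predicted collaboration with the team."""
    team, missing = [], set(required)
    while missing:
        best = max(
            (m for m in skills if skills[m] & missing),
            key=lambda m: (len(skills[m] & missing), score(m, team)),
        )
        team.append(best)
        missing -= skills[best]
    return team

print(form_team({"ml", "graphs", "databases"}))  # ['bo', 'cy']
```

Here the tie between two equally skilled candidates is broken in favor of the one the link predictor says is more likely to collaborate with the current team, which is the intuition behind weighting the Steiner tree by predicted links.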
Using Machine Learning Models to Predict the Length of Stay in a Hospital Setting
Proper prediction of Length Of Stay (LOS) has become increasingly important in recent years: accurate LOS prediction supports better services, helps manage hospital resources, and controls costs. In this paper, we implemented and compared two Machine Learning (ML) methods, Random Forest (RF) and Gradient Boosting (GB), using an openly available dataset. The data were first preprocessed by combining data transformation, data standardization, and data codification. Then, RF and GB were trained, with a hyperparameter-tuning phase to set optimal coefficients. Finally, the Mean Absolute Error (MAE), R-squared (R2), and Adjusted R-squared (Adjusted R2) metrics were selected to evaluate the models.
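The three evaluation metrics named in this abstract have standard definitions and can be computed directly; a minimal sketch on hypothetical length-of-stay values (not the paper's dataset or models):

```python
# Minimal sketch of the evaluation metrics named above, computed on
# hypothetical length-of-stay values (days); not the paper's data.

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def adjusted_r_squared(r2, n, k):
    """Adjusted R^2 for n samples and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Hypothetical true vs. predicted LOS (days).
y_true = [3, 5, 2, 8, 4]
y_pred = [2.5, 5.5, 2.0, 7.0, 4.5]

r2 = r_squared(y_true, y_pred)
print(round(mae(y_true, y_pred), 2))              # 0.5
print(round(r2, 3))                               # 0.917
print(round(adjusted_r_squared(r2, n=5, k=2), 3)) # 0.835
```

Adjusted R2 penalizes R2 for the number of predictors, which matters when comparing models such as RF and GB that may use different feature subsets after preprocessing.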
