
    Insights from Learning Analytics for Hands-On Cloud Computing Labs in AWS

    [EN] Cloud computing instruction requires hands-on experience with a myriad of distributed computing services from a public cloud provider. Tracking the progress of students, especially in online courses, requires automatically gathering evidence and producing learning analytics in order to further determine the behavior and performance of students. To this end, this paper describes the experience from an online course on cloud computing with Amazon Web Services, centered on the creation of an open-source data processing tool to systematically obtain learning analytics related to the hands-on activities carried out throughout the course. These data, combined with the data obtained from the learning management system, have allowed a better characterization of the behavior of students in the course. Insights from a population of more than 420 online students across three academic years have been assessed, and the dataset has been released for increased reproducibility. The results corroborate that course length has an impact on online student dropout. In addition, a gender analysis found no statistically significant differences in final marks between genders, but women showed an increased degree of commitment to the activities planned in the course.
    This research was funded by the Spanish "Ministerio de Economía, Industria y Competitividad" through grant number TIN2016-79951-R (BigCLOE), by the "Vicerrectorado de Estudios, Calidad y Acreditación" of the Universitat Politècnica de València (UPV) to develop the PIME B29 and PIME/19-20/166, and by the Conselleria d'Innovació, Universitat, Ciència i Societat Digital for the project "CloudSTEM" with reference number AICO/2019/313.
    Moltó, G.; Naranjo-Delgado, DM.; Segrelles Quilis, JD. (2020). Insights from Learning Analytics for Hands-On Cloud Computing Labs in AWS. Applied Sciences. 10(24):1-13. https://doi.org/10.3390/app10249148
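    A gender comparison like the one described is commonly carried out with a rank-based test that tolerates unequal group sizes. A minimal sketch of such a comparison follows, using a Mann-Whitney U test; the CSV path and column names are assumptions, not the released dataset's actual schema:

```python
# Hypothetical sketch, not the authors' released tool: a Mann-Whitney U
# test on final marks by gender, the kind of rank-based comparison the
# abstract reports. The CSV path and column names are assumptions.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("course_analytics.csv")  # assumed export, one row per student

marks_women = df.loc[df["gender"] == "F", "final_mark"].dropna()
marks_men = df.loc[df["gender"] == "M", "final_mark"].dropna()

# Two-sided rank test; robust to unequal sample sizes and non-normal marks.
stat, p_value = mannwhitneyu(marks_women, marks_men, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
print("significant at 0.05" if p_value < 0.05 else "no significant difference")
```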

    Selecting cash management models from a multiobjective perspective

    [EN] This paper addresses the problem of selecting cash management models under different operating conditions from a multiobjective perspective, considering not only cost but also risk. A number of models have been proposed to optimize corporate cash management policies, so the impact of different operating conditions on model performance becomes an important issue. Here, we provide a range of visual and quantitative tools imported from Receiver Operating Characteristic (ROC) analysis. More precisely, we show the utility of ROC analysis from a triple perspective as a tool for: (1) showing model performance; (2) choosing models; and (3) assessing the impact of operating conditions on model performance. We illustrate the selection of cash management models by means of a numerical example.
    Work partially funded by projects Collectiveware TIN2015-66863-C2-1-R (MINECO/FEDER) and 2014 SGR 118.
    Salas-Molina, F.; Rodríguez-Aguilar, JA.; Díaz-García, P. (2018). Selecting cash management models from a multiobjective perspective. Annals of Operations Research. 261(1-2):275-288. https://doi.org/10.1007/s10479-017-2634-9
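    For readers unfamiliar with the ROC machinery the paper adapts, the sketch below compares two candidate models by ROC curve and area under the curve (AUC) on synthetic data. This is plain binary-classification ROC, a simplifying assumption, not the paper's cost-risk formulation for cash management models:

```python
# Minimal sketch, assuming plain binary-classification ROC rather than the
# paper's cost-risk adaptation: two candidate models are compared by AUC,
# the basic selection step that ROC analysis supports.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(seed=0)
y_true = rng.integers(0, 2, size=200)            # synthetic outcomes
score_a = 0.6 * y_true + 0.8 * rng.random(200)   # model A: informative scores
score_b = rng.random(200)                        # model B: near-random scores

for name, score in (("A", score_a), ("B", score_b)):
    fpr, tpr, _ = roc_curve(y_true, score)       # full curve, e.g. for plotting
    print(f"model {name}: AUC = {roc_auc_score(y_true, score):.3f}")
# Under the AUC criterion, the model with the higher area is preferred
# on average across operating conditions.
```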

    Interactive Instructional: Theoretical Perspective and Its Potential Support in Stimulating Students’ Higher Order Thinking Skills (HOTS)

    In this disruptive era, the success of teaching approaches that encourage students' creativity and innovation is reflected in students' attainment of higher-order thinking skills (HOTS). Attaining HOTS helps individuals avoid negative influences, since they are capable of analyzing and evaluating the information they obtain. HOTS also supports students in acquiring knowledge, generating questions, properly interpreting information, and drawing conclusions on an issue with solid reasons, an open mind, and an effective means of communicating them. This article presents a theoretical study of the interactive instructional learning model and identifies its potential for stimulating students' HOTS. It aims to introduce the interactive instructional model in chemistry learning; the model can later be adopted in a study with a more intensive evaluation of its empirical contribution to chemistry learning. The learning syntax for this model has been formulated for the Basic Chemistry Class 1.

    The role of pedagogical tools in active learning: a case for sense-making

    Evidence from the research literature indicates that both audience response systems (ARS) and guided inquiry worksheets (GIW) can lead to greater student engagement, learning, and equity in the STEM classroom. We compare the use of these two tools in large-enrollment STEM courses delivered in different contexts, one in biology and one in engineering. The instructors studied utilized each of the active learning tools differently. In the biology course, ARS questions were used mainly to check in with students and assess whether they were correctly interpreting and understanding worksheet questions. The engineering course presented ARS questions that afforded students the opportunity to apply learned concepts to new scenarios, towards improving students' conceptual understanding. In the biology course, the GIWs were primarily used in stand-alone activities, and most of the information necessary for students to answer the questions was contained within the worksheet in a context that aligned with a disciplinary model. In the engineering course, the instructor intended for students to reference their lecture notes and rely on their conceptual knowledge of fundamental principles from the previous ARS class session in order to successfully answer the GIW questions. However, while their specific implementation structures and practices differed, both instructors used these tools to build towards the same basic disciplinary thinking and sense-making processes of conceptual reasoning, quantitative reasoning, and metacognitive thinking.
    Comment: 20 pages, 5 figures

    Conceptual Framework for the Use of Building Information Modeling in Engineering Education

    The objective of this paper is to present a critical literature review of the Building Information Modelling (BIM) methodology and to analyze whether BIM can be considered a Virtual Learning Environment. A conceptual framework is proposed for using BIM in a university context. A search of documents was carried out in the Core Collection of Web of Science; it was restricted to the last five years (2013–2017). A total of 95 documents were analyzed; all documents were written in English and peer reviewed. BIM meets all the characteristics of Virtual Learning Environments. The proposed framework has three dimensions (competencies, pedagogical approach and level of integration). It allows for the planning and analysis of future experiences of teaching BIM in a university context.
    Ministry of Economy and Competitiveness of Spain and AEI/FEDER, UE Projects EDU2016-77007-R
    Regional Government of Extremadura (Spain) IB 16068
    Regional Government of Extremadura (Spain) GR1800

    Application of mutual information-based sequential feature selection to ISBSG mixed data

    [EN] There is still little research work focused on feature selection (FS) techniques including both categorical and continuous features in the Software Development Effort Estimation (SDEE) literature. This paper addresses the problem of selecting the most relevant features from the ISBSG (International Software Benchmarking Standards Group) dataset for use in SDEE. The aim is to show the usefulness of splitting the ranked list of features provided by a mutual information-based sequential FS approach in two, with respect to categorical and continuous features. These lists are later recombined according to the accuracy of a case-based reasoning model. Thus, four FS algorithms are compared using a complete dataset with 621 projects and 12 features from ISBSG. On the one hand, two algorithms consider only relevance, while the remaining two follow the criterion of maximizing relevance while also minimizing redundancy between any independent feature and the already selected features. On the other hand, the algorithms that do not discriminate between continuous and categorical features consider just one list, whereas those that differentiate them use two lists that are later combined. As a result, the algorithms that use two lists perform better than those that use one list. It is therefore meaningful to consider two different lists of features, so that categorical features may be selected more frequently. We also suggest promoting the usage of the Application Group, Project Elapsed Time, and First Data Base System features with preference over the more frequently used Development Type, Language Type, and Development Platform.
    Fernández-Diego, M.; González-Ladrón-De-Guevara, F. (2018). Application of mutual information-based sequential feature selection to ISBSG mixed data. Software Quality Journal. 26(4):1299-1325. https://doi.org/10.1007/s11219-017-9391-5
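    As a rough illustration of the two-list idea, the sketch below ranks features by their mutual information with effort, keeping categorical and continuous features in separate lists. This is a simplified sketch, not the paper's algorithm: the field names mix ones mentioned in the abstract with assumed ones, and the paper's preprocessing and CBR-driven recombination step are not reproduced:

```python
# Hedged sketch of two-list mutual-information ranking, assuming an
# ISBSG-style CSV with the field names below (some taken from the
# abstract, others assumed, e.g. "Functional Size" and the target).
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_regression

df = pd.read_csv("isbsg_projects.csv")  # assumed export
categorical = ["Development Type", "Language Type",
               "Development Platform", "Application Group"]
continuous = ["Project Elapsed Time", "Functional Size"]

X = df[categorical + continuous].copy()
for col in categorical:                  # integer-encode categorical features
    X[col] = X[col].astype("category").cat.codes
y = df["Normalised Work Effort"]         # assumed target field

mask = np.array([c in categorical for c in X.columns])  # discrete-feature mask
mi = mutual_info_regression(X, y, discrete_features=mask, random_state=0)

ranking = pd.Series(mi, index=X.columns).sort_values(ascending=False)
print("categorical list:\n", ranking[ranking.index.isin(categorical)])
print("continuous list:\n", ranking[~ranking.index.isin(categorical)])
# In the paper, the two ranked lists would then be recombined according to
# the accuracy of a case-based reasoning estimation model.
```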

    Why not empower knowledge workers and lifelong learners to develop their own environments?

    In industrial and educational practice, learning environments are designed and implemented by experts from many different fields, ranging from traditional software development and product management to pedagogy and didactics. Workplace and lifelong learning, however, imply that learners are more self-motivated, capable, and self-confident in achieving their goals, which suggests that certain development tasks can be shifted to end-users in order to facilitate a more flexible, open, and responsive learning environment. Drawing on streams like end-user development and opportunistic design, this paper elaborates a methodology for user-driven environment design for action-based activities. Based on a former research approach named 'Mash-Up Personal Learning Environments' (MUPPLE), we demonstrate how workplace and lifelong learners can be empowered to develop their own environment for collaborating in learner networks, and which prerequisites and support facilities are necessary for this methodology.

    Learning theory and its application to female learner support in engineering

    The School of Engineering at Murdoch University is now in its fifth year: a new School sited on the new regional campus. This environment enabled the staff to take an innovative approach to the School's development. One key issue addressed from the outset was that of women in a non-traditional area. Positive action was taken to attract high-calibre female staff, and as a consequence over 50% of the School's staff, academic and non-academic, are female. From the student perspective, issues confronting females studying engineering, which are reflected internationally in low recruitment and retention, continue to be addressed. Individuals are different, and these differences affect how a student performs. In particular, gender differences in learning styles have been noted. This has directed us to administer learning style inventories to all first-year students as part of a first-year foundational unit; students then identify their self-reported learning styles. In this positive atmosphere, many varied and successful initiatives based on our learning style research are being trialled to encourage female students into our programs and then support and retain them throughout their four years of study. This research discusses the initial learning style results and their application to our initiatives.