974 research outputs found

    Macroeconomic Indicators as Potential Predictors of Construction Material Prices

    The prices of construction materials are subject to constant change. Unexpected price changes affect the pace at which projects are carried out and can even jeopardise the ability to complete them. Rapid, worldwide fluctuations in construction material prices affect the construction market value of each country. To mitigate this problem, contractors need a tool or method capable of predicting future material prices. Predicting variations in material prices is essential both during project execution and when preparing tenders. Material price prediction is an important function for handling projects effectively, allowing more accurate estimating, tracking and monitoring. Several tools can help construction contractors predict future material prices, including Artificial Neural Networks, Fuzzy Logic, statistical methods (regression analysis, the Monte Carlo method, ANOVA), and trend analysis. The predictors supplied to these tools can be any factors that tend to influence material prices. Macroeconomic indicators are one such factor, as they reflect a country's economic status. This is a pilot study conducted in India to determine the possible macroeconomic indicators that influence the prices of two building materials, Portland cement and steel. Keywords: Cost estimation, Artificial Neural Network, Macroeconomic indicators
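    As a rough illustration of the kind of statistical prediction the abstract refers to, the sketch below fits an ordinary least-squares regression of a material price index on a few macroeconomic indicators. The indicator names and all numbers are hypothetical placeholders, not data from the study.

        # Minimal sketch: regressing a material price index on macroeconomic indicators.
        # Indicator names and all numbers below are illustrative placeholders only.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Hypothetical monthly observations: [inflation %, GDP growth %, exchange rate]
        X = np.array([
            [5.2, 6.8, 74.1],
            [5.5, 6.5, 74.8],
            [6.1, 6.2, 75.3],
            [6.4, 5.9, 76.0],
            [6.0, 6.1, 75.6],
        ])
        # Hypothetical Portland cement price index for the same months
        y = np.array([102.0, 104.5, 107.8, 110.2, 108.9])

        model = LinearRegression().fit(X, y)
        next_month = np.array([[6.3, 6.0, 76.4]])
        print("Predicted price index:", model.predict(next_month)[0])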

    Job seeking and job application in social networking sites: predicting job seekers' behavioral intentions

    Social networking sites (SNSs) are revolutionizing the way in which employers and job seekers connect and interact with each other. Despite the reported benefits of SNSs with respect to finding a job, issues such as privacy concerns might be deterring job seekers from using these sites in their attempts to secure a job. It is therefore important to understand the factors that are salient in predicting job seekers' use of SNSs in applying for jobs. In this research, a theoretical model was developed to explicate job seekers' intentions to use SNSs to apply for jobs. Two aspects of intentions to use SNSs to apply for jobs were examined: (i) the likelihood of using these sites to submit applications, and (ii) the likelihood of sharing personal information requested by recruiters and potential employers using SNSs to recruit employees. Factors that could determine preference for the use of traditional job boards over SNSs in applying for jobs were also investigated. The initial theoretical model tested in this research was anchored on the Unified Theory of Acceptance and Use of Technology (UTAUT), and thus variables such as performance expectancy, effort expectancy and social influence were predicted to have an impact on job seekers' intentions. Other factors hypothesized as having an influence on job seekers' intentions to apply for jobs using SNSs were: privacy concerns; perceived justice (trust that the information revealed in SNSs will be used fairly in the job candidate selection process); perceived risks; and the provision of information on a distinctive function within some SNSs referred to, in this study, as the inside connections feature (which shows job seekers their social network connections to potential employers). Data for this study were gathered through an online survey of 490 registered users (alumni and students hoping to graduate soon) of career services databases managed by two universities in New Jersey, USA. The test of the measurement model of the initial research model suggested that survey respondents did not sufficiently distinguish performance expectancy from intention to apply for jobs using SNSs. Thus, an alternative model was developed with only intention to share information with recruiters and potential employers using SNSs to recruit employees as the dependent variable. The results of the test of the alternative model suggest that performance expectancy and privacy concerns are the most dominant direct predictors, and that social influence specific to image and perceived justice are indirect predictors. However, effort expectancy and risk beliefs did not directly influence the intention to share information with recruiters and potential employers using SNSs to recruit employees. The R² value for this alternative model was 37.3%. Exploratory analyses suggest that all of the model variables, except the provision of information on the inside connections feature, have a significant influence on intention to apply for jobs using SNSs and on preference for job boards over SNSs. The results of this study suggest that, in efforts to encourage the use of SNSs for securing a job, designers should pay significantly more attention to promoting the usefulness of these sites and to providing job seekers with more control in handling their personal information in order to alleviate privacy concerns. This study provides insights into predictors of job seekers' behavior in SNSs that can inform future research.

    Video Game Development in a Rush: A Survey of the Global Game Jam Participants

    Video game development is a complex endeavor, often involving complex software, large organizations, and aggressive release deadlines. Several studies have reported that periods of "crunch time" are prevalent in the video game industry, but there are few studies on the effects of time pressure. We conducted a survey with participants of the Global Game Jam (GGJ), a 48-hour hackathon. Based on 198 responses, the results suggest that: (1) iterative brainstorming is the most popular method for conceptualizing initial requirements; (2) continuous integration, minimum viable product, scope management, version control, and stand-up meetings are frequently applied development practices; (3) regular communication, internal playtesting, and dynamic and proactive planning are the most common quality assurance activities; and (4) familiarity with agile development has a weak correlation with perception of success in GGJ. We conclude that GGJ teams rely on ad hoc approaches to development and face-to-face communication, and recommend some complementary practices with limited overhead. Furthermore, as our findings are similar to recommendations for software startups, we posit that game jams and the startup scene share contextual similarities. Finally, we discuss the drawbacks of systemic "crunch time" and argue that game jam organizers are in a good position to problematize the phenomenon. Comment: Accepted for publication in IEEE Transactions on Games

    Exploratory multivariate longitudinal data analysis and models for multivariate longitudinal binary data

    Longitudinal data arise when repeated measurements from the same subject are observed over time. In this thesis, exploratory data analysis and models are used jointly to analyze longitudinal data, which leads to stronger and better-justified conclusions. The complex structure of longitudinal data with covariates requires new visual methods that enable interactive exploration. Here we catalog the general principles of exploratory data analysis for multivariate longitudinal data, and illustrate the use of the linked brushing approach for studying the mean structure over time. These methods make it possible to reveal the unexpected, explore the interaction between responses and covariates, observe individual variation, understand structure in multiple dimensions, and diagnose and fix models. We also propose models for multivariate longitudinal binary data that directly model marginal covariate effects while accounting for dependence across time via a transition structure and across responses within a subject at a given time via random effects. Markov chain Monte Carlo methods, specifically Gibbs sampling with hybrid steps, are used to sample from the posterior distribution of the parameters. Graphical and quantitative checks are used to assess model fit. The methods are illustrated on several real datasets, primarily the Iowa Youth and Families Project. (This dissertation is a compound document: it contains both a paper copy and a CD.)
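    As a hedged sketch of the kind of model described (marginal covariate effects, dependence across time via a transition term, and dependence across responses via subject-level random effects), one possible specification for binary response j of subject i at time t is

        \mathrm{logit}\,\Pr(Y_{itj} = 1 \mid x_{itj}, y_{i,t-1,j}, b_i) = x_{itj}^{\top}\beta_j + \gamma_j\, y_{i,t-1,j} + b_i, \qquad b_i \sim N(0, \sigma^2),

    where \beta_j are the covariate effects for response j, \gamma_j is the transition coefficient, and b_i is the random effect shared across responses within a subject. The thesis's exact formulation, in particular how the marginal interpretation of \beta_j is preserved after adding the transition and random-effect terms, may differ.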

    Classical test theory versus Rasch analysis for quality of life questionnaire reduction

    BACKGROUND: Although health-related quality of life (HRQOL) instruments may offer satisfactory results, their length often limits the extent to which they are actually applied in clinical practice. Efforts to develop short questionnaires have largely focused on reducing existing instruments. The approaches most frequently employed for this purpose rely on statistical procedures that are considered exponents of Classical Test Theory (CTT). Despite the popularity of CTT, two major conceptual limitations have been pointed out: the lack of an explicit ordered continuum of items that represent a unidimensional construct, and the lack of additivity of rating scale data. In contrast to the CTT approach, the Rasch model provides an alternative scaling methodology that enables the examination of the hierarchical structure, unidimensionality and additivity of HRQOL measures. METHODS: In order to empirically compare CTT and Rasch Analysis (RA) results, this paper presents the parallel reduction of a 38-item questionnaire, the Nottingham Health Profile (NHP), through the analysis of the responses of a sample of 9,419 individuals. RESULTS: CTT resulted in 20 items (4 dimensions), whereas RA resulted in 22 items (2 dimensions). Both instruments showed similar characteristics under CTT requirements: item-total correlations ranged from 0.45 to 0.75 for the NHP20 and from 0.46 to 0.68 for the NHP22, while reliability ranged from 0.82 to 0.93 and from 0.87 to 0.94, respectively. CONCLUSIONS: Despite the differences in content, NHP20 and NHP22 convergent scores also showed high degrees of association (0.78–0.95). Although the unidimensional view of health underlying the NHP20 and NHP22 composite scores was also confirmed by RA, the NHP20 dimensions failed to meet the goodness-of-fit criteria established by the Rasch model, precluding interval-level measurement of their scores.
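    For reference, the dichotomous Rasch model underlying RA (the NHP uses yes/no items) expresses the probability that person n endorses item i in terms of a person parameter \theta_n and an item difficulty b_i; the paper's exact parameterization may differ slightly:

        \Pr(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}

    Unidimensionality, the item hierarchy and additivity are then assessed by how well the observed responses fit this model.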

    Application of Predicted Models in Debt Management: Developing a Machine Learning Algorithm to Predict Customer Risk at EDP Comercial

    Internship Report presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science. This report is the result of a nine-month internship at EDP Comercial, where the main research project was the application of artificial intelligence tools in the field of debt management. Debt management involves a set of strategies and processes aimed at reducing or eliminating debt, and the use of artificial intelligence has shown great potential to optimize these processes and minimize the risk of debt for individuals and organizations. In terms of monitoring and controlling the creditworthiness and quality of clients, debt management has mainly been responsive and reactive, attempting to recover losses after a client has become delinquent. There is a gap in the knowledge of how to proactively identify at-risk accounts before they fall behind on payments. To move away from this constant reactive response, a machine-learning algorithm was developed that predicts the risk of a client falling into debt by analyzing their scorecard, which measures the quality of a client based on their infringement history. After preprocessing the data, XGBoost was applied to a dataset of 3 million customers with at least one active electricity or gas contract with EDP. Hyperparameter tuning was performed on the model, reaching an F1 score of 0.7850 on the training set and 0.7835 on the test set. The results were discussed and, based on them, recommendations and improvements were identified.
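    A minimal sketch of the kind of pipeline the report describes: an XGBoost binary classifier trained on customer features and evaluated with the F1 score. The feature set, the train/test split, and the hyperparameter values below are illustrative assumptions, not those used at EDP Comercial.

        # Illustrative sketch only: features, split and hyperparameters are assumptions.
        import numpy as np
        from xgboost import XGBClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import f1_score

        rng = np.random.default_rng(0)
        # Hypothetical customer features, e.g. [months as client, past infringements, average bill]
        X = rng.normal(size=(1000, 3))
        y = rng.integers(0, 2, size=1000)  # 1 = client later fell into debt, 0 = did not

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
        model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1, eval_metric="logloss")
        model.fit(X_train, y_train)

        print("Test F1:", f1_score(y_test, model.predict(X_test)))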

    Stream diatom community assembly processes in islands and continents: a global perspective

    Understanding the roles of deterministic and stochastic processes in community assembly is essential for gaining insights into the biogeographical patterns of biodiversity. However, the way community assembly processes operate is still not fully understood, especially on oceanic islands. In this study, we examine the importance of assembly processes in shaping diatom communities on islands and continents, while also investigating the influence of climate and local water chemistry variables on species distributions. Location: Global. Taxon: Stream benthic diatoms. Methods: We used diatom datasets from five continents and 19 islands and applied beta diversity analyses with a null model approach and hierarchical joint species distribution modelling. To facilitate comparisons with continents, we used continental area equivalents (CAEs), which represent continental subsets with areas comparable to, and the same number of study sites as, their corresponding island counterparts. Results: We found that homogeneous selection (i.e., communities being more similar than the random expectation) was the dominant assembly process within islands, whereas stochastic processes tended to be more important within continents. In addition, assembly processes were influenced by study scale and island isolation. Climatic variables showed a greater influence on species distributions than local factors. However, on islands, local environmental variables had a greater impact on the distributions of unique taxa than on those of non-unique taxa. Main Conclusions: We observed that the assembly processes of diatom communities are complex and influenced by a combination of deterministic and stochastic forces, which vary across spatial scales. On islands, there was no universal pattern of assembly processes, given that their influence depends on abiotic conditions such as area, isolation, and environmental heterogeneity. In addition, the sensitivity of species occurring uniquely on islands to local environmental variables suggests that they are perhaps less vulnerable to climatic changes but may be more influenced by changes in local physicochemistry. For financial support, the authors thank the Academy of Finland (grant nr. 346812 to JS); the Institut Francais de Finlande; the Embassy of France to Finland; the French Ministry of Education and Higher Education; and the Finnish Society of Sciences and Letters. J.J. Wang was further supported by the National Natural Science Foundation of China (91851117, 41871048), the CAS Key Research Program of Frontier Sciences (QYZDB-SSW-DQC043), and the National Key Research and Development Program of China (2019YFA0607100).
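    As a rough illustration of the null-model logic mentioned in the Methods (comparing observed community dissimilarity with that expected under random assembly), the sketch below computes a standardized deviation of mean pairwise Jaccard dissimilarity against randomized communities. The dissimilarity metric and the randomization scheme are assumptions for illustration; the study's actual null-model framework is more elaborate.

        # Illustrative null-model sketch: observed vs. randomized beta diversity.
        # The Jaccard metric and column-shuffle scheme are assumptions, not the study's exact method.
        import numpy as np

        def mean_jaccard(comm):
            """Mean pairwise Jaccard dissimilarity of a sites x species presence/absence matrix."""
            n = comm.shape[0]
            dissimilarities = []
            for i in range(n):
                for j in range(i + 1, n):
                    a, b = comm[i].astype(bool), comm[j].astype(bool)
                    union = np.logical_or(a, b).sum()
                    shared = np.logical_and(a, b).sum()
                    dissimilarities.append(1 - shared / union if union else 0.0)
            return np.mean(dissimilarities)

        rng = np.random.default_rng(1)
        observed = (rng.random((10, 50)) < 0.3).astype(int)  # hypothetical presence/absence data
        obs_beta = mean_jaccard(observed)

        # Null distribution: shuffle each species' occurrences across sites, keeping prevalence fixed
        null_betas = [mean_jaccard(np.column_stack([rng.permutation(col) for col in observed.T]))
                      for _ in range(200)]

        ses = (obs_beta - np.mean(null_betas)) / np.std(null_betas)
        print("Beta-diversity SES:", round(ses, 2))  # strongly negative would suggest homogeneous selection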

    The role of motivation in regulating the extent to which data visualisation literacy influences business intelligence and analytics use in organisations

    Dissertation (MCom (Informatics)), University of Pretoria, 2022. The ability to read and interpret visualised data is a critical skill in this information age, where business intelligence and analytics (BI&A) systems are increasingly used to support decision-making. Data visualisation literacy is seen as the foundation of analytics. Moreover, there is great hype about data-driven analytical culture and data democratisation, where users are encouraged to have wide access to data and to fully use BI&A to reap the benefits. Motivation is a stimulant to the richer use of any information system (IS), yet the literature provides a limited understanding of how data visualisation literacy is evaluated and of the effect of motivation in the BI&A context. This study therefore aims to explain the role of motivation in regulating the extent to which data visualisation literacy influences BI&A's exploitative and explorative use in organisations. Data visualisation literacy is measured using six data visualisations covering five basic cognitive intelligent analytical tasks that assess the user's ability to read and interpret visualised data. Two types of motivation are assessed: perceived enjoyment as an intrinsic motivator and perceived usefulness as an extrinsic motivator. The model is tested using quantitative data collected from 111 users, applying Structural Equation Modelling (SEM). The results indicate that intrinsic motivation exerts a positive effect on both BI&A exploitative and explorative use, while extrinsic motivation has a positive effect on exploitative use but a negative effect on explorative use, weakening innovation. The results further show an indirect relationship between data visualisation literacy and BI&A use through motivation. In addition, exploitative use is positively associated with explorative use, suggesting that exploitation leads to creativity.

    Quantitative analysis of optical coherence tomography for neovascular age-related macular degeneration using deep learning

    PURPOSE: To apply a deep learning algorithm for automated, objective, and comprehensive quantification of optical coherence tomography (OCT) scans to a large real-world dataset of eyes with neovascular age-related macular degeneration (AMD), and to make the raw segmentation output data openly available for further research. DESIGN: Retrospective analysis of OCT images from the Moorfields Eye Hospital AMD Database. PARTICIPANTS: 2473 first-treated eyes and a further 493 second-treated eyes that commenced therapy for neovascular AMD between June 2012 and June 2017. METHODS: A deep learning algorithm was used to segment all baseline OCT scans. Volumes were calculated for segmented features such as neurosensory retina (NSR), drusen, intraretinal fluid (IRF), subretinal fluid (SRF), subretinal hyperreflective material (SHRM), retinal pigment epithelium (RPE), hyperreflective foci (HRF), fibrovascular pigment epithelium detachment (fvPED), and serous PED (sPED). Analyses included comparisons between first- and second-treated eyes, by visual acuity (VA) and by race/ethnicity, and correlations between volumes. MAIN OUTCOME MEASURES: Volumes of segmented features (mm³) and central subfield thickness (CST) (μm). RESULTS: In first-treated eyes, the majority (54.7%) had both IRF and SRF. First-treated eyes had greater volumes for all segmented tissues, with the exception of drusen, which was greater in second-treated eyes. In first-treated eyes, older age was associated with lower volumes of RPE, SRF, NSR and sPED; in second-treated eyes, older age was associated with lower volumes of NSR, RPE, sPED, fvPED and SRF. Eyes from black individuals had higher SRF, RPE and serous PED volumes compared with other ethnic groups. Greater volumes of the vast majority of features were associated with worse VA. CONCLUSION: We report the results of large-scale automated quantification of a novel range of baseline features in neovascular AMD. Major differences between first- and second-treated eyes, with increasing age, and between ethnicities are highlighted. In the coming years, enhanced, automated OCT segmentation may assist personalization of real-world care and the detection of novel structure-function correlations. These data will be made publicly available for replication and future investigation by the AMD research community.
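    As a rough sketch of how per-feature volumes can be derived from a voxel-wise segmentation (the abstract does not detail the implementation), the example below counts labelled voxels and multiplies by the voxel volume. The label codes and the voxel spacing are hypothetical assumptions.

        # Illustrative only: label codes and voxel spacing are hypothetical assumptions.
        import numpy as np

        # Hypothetical OCT voxel spacing in millimetres (x, y, z)
        VOXEL_MM = (0.012, 0.047, 0.004)
        VOXEL_VOLUME_MM3 = VOXEL_MM[0] * VOXEL_MM[1] * VOXEL_MM[2]

        # Hypothetical label map produced by a segmentation model (0 = background)
        LABELS = {1: "IRF", 2: "SRF", 3: "SHRM", 4: "drusen"}
        seg = np.random.default_rng(0).integers(0, 5, size=(64, 64, 64))

        for code, name in LABELS.items():
            volume_mm3 = np.count_nonzero(seg == code) * VOXEL_VOLUME_MM3
            print(f"{name}: {volume_mm3:.3f} mm^3")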