
    An Evolutionary Based Algorithm for Resources System Selection Problem in Agile/Virtual Enterprises

    The problem of resources system selection plays an important role in Agile/Virtual Enterprise (A/VE) integration. However, the problem is difficult to solve in an A/VE because its resolution can be of exponential complexity, it can be a multi-criteria problem, and there are different types of A/VEs with different requirements, which has led to the development of a specific resources selection model for each of them. In this work we have made some progress in identifying the principal gaps to be solved. This paper presents one of those gaps, in the area of the algorithms to be applied for its resolution. In response to those gaps, we address the need to develop new algorithms, with more information made available for their selection by the Broker. In this paper we propose a genetic algorithm to deal with a specific case of the resources system selection problem in which the dimension of the solution space is high.
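The abstract names a genetic algorithm but does not spell out its encoding, so purely as an illustration of the technique it cites, here is a minimal GA for one plausible formulation: each task is assigned one resource, and fitness trades cost against quality with assumed weights. All names, parameters, and the fitness form are hypothetical, not the paper's model.

```python
import random

def fitness(assignment, costs, quality, w_cost=0.5, w_qual=0.5):
    """Weighted multi-criteria score (lower is better): cost minus quality."""
    total_cost = sum(costs[t][r] for t, r in enumerate(assignment))
    total_qual = sum(quality[t][r] for t, r in enumerate(assignment))
    return w_cost * total_cost - w_qual * total_qual

def genetic_select(costs, quality, pop_size=30, generations=60, seed=0):
    """Evolve assignments task -> resource index; elitist survival of top half."""
    rng = random.Random(seed)
    n_tasks, n_res = len(costs), len(costs[0])
    pop = [[rng.randrange(n_res) for _ in range(n_tasks)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: fitness(a, costs, quality))
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_tasks)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                # occasional mutation
                child[rng.randrange(n_tasks)] = rng.randrange(n_res)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: fitness(a, costs, quality))
```

On large solution spaces (the case the paper targets), the population only samples a tiny fraction of assignments, which is exactly where a GA is preferable to exhaustive search.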

    Dynamic Scheduling for Maintenance Tasks Allocation supported by Genetic Algorithms

    Since the first factories were created, man has always tried to maximize production and, consequently, profit. However, market demands have changed, and nowadays it is not so easy to obtain the maximum yield. Production lines are becoming more flexible and dynamic, and the amount of information flowing through the factory keeps growing. This leads to a scenario where errors in production scheduling may occur often. Several approaches have been used over time to plan and schedule shop-floor production. However, some of them do not consider factors present in real environments, such as the fact that machines are not available all the time and sometimes need maintenance. This increases the complexity of the system and makes it harder to allocate tasks competently, so more dynamic approaches should be used to explore the large search spaces more efficiently. In this work, an architecture and its implementation are proposed to produce a schedule that includes both production and maintenance tasks, the latter often being ignored in related work, while taking the available maintenance shifts into account. The proposed architecture was implemented using genetic algorithms, which have already proved to be good at solving combinatorial problems such as the Job-Shop Scheduling problem. The architecture considers the precedence order between tasks of the same product and the maintenance shifts available in the factory. It was tested in a simulated environment to check the algorithm's behavior, using a real data set of production tasks and working stations.
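A full GA for this problem is beyond a short sketch, but the feasibility rule at its core can be shown: a production task must wait for its predecessor and must not overlap a maintenance shift on its machine. The greedy placement below is an illustrative stand-in for the decoding step of such a scheduler; the function names and data layout are assumptions, not the paper's implementation.

```python
def earliest_start(machine_free, duration, maintenance, t0):
    """Earliest start >= t0 on a machine, skipping maintenance windows."""
    t = max(machine_free, t0)
    moved = True
    while moved:
        moved = False
        for m_start, m_end in maintenance:
            # the task interval [t, t+duration) must not overlap a shift
            if t < m_end and t + duration > m_start:
                t = m_end          # push the task past the maintenance window
                moved = True
    return t

def schedule(tasks, maintenance):
    """Greedily place (machine, duration, predecessor_index_or_None) tuples.

    `maintenance` maps machine -> list of (start, end) unavailability windows.
    Returns a (start, end) pair per task, respecting precedence order.
    """
    machine_free = {}
    placed = []
    for machine, duration, pred in tasks:
        ready = placed[pred][1] if pred is not None else 0
        start = earliest_start(machine_free.get(machine, 0), duration,
                               maintenance.get(machine, []), ready)
        placed.append((start, start + duration))
        machine_free[machine] = start + duration
    return placed
```

In a GA, a chromosome would typically be a task ordering, and a decoder like this would turn each ordering into a feasible schedule before evaluating makespan.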

    Entwicklung einer Methodik zur Unterstützung der Entscheidungsfindung auf der Grundlage heuristischer Entscheidungen in der Produktentwicklung und im Innovationsmanagement = Developing a method to support decision making based on heuristic decisions in product development and innovation management

    Decision making is an important factor that steers product development toward success or failure in the market. Most decisions are based on the available information, calculations, and analyses from various systematic methods. However, these methods cannot deliver a definitively correct and good decision, especially under uncertainty and with limited information. Decision making with simple strategies and mental processes, called heuristic decisions, is then applied automatically to find a solution. Although heuristics are helpful in many situations, they can lead to decision biases. This research aims to investigate the occurrence of decision biases when heuristics are used in decision making. Understanding and being aware of this is important for improving decision efficiency. In addition, de-biasing techniques were developed to address cognitive problems and reduce decision errors caused by decision biases. Evidence from the literature and from empirical studies shows many occurrences of heuristic decisions and biases during decision making in various product development activities, especially when prioritizing alternatives and selecting a solution. Methods for dealing with decision biases in product development decision making were then developed by modifying available techniques from other fields and applying them to the treatment of cognitive biases in product development decision making. Four types of de-biasing techniques were developed and proposed in a framework, and were applied and evaluated in experiments under simulated and real conditions.
The results show that decision biases can be treated with different de-biasing techniques, chosen according to the goals and the origin of the biases. These techniques can handle one type of bias but may also introduce another type of bias. Moreover, there is no evidence that decisions made with or without biases provide a correct solution. Therefore, the most suitable way of treating decision biases in product development is to raise decision makers' awareness of heuristic decisions and biases. The decision maker can then choose to accept or reject the decisions and solutions.

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology fundamentally transforming the way computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience: resources as services via the Internet. Because the cloud provides a finite pool of virtualized on-demand resources, scheduling them optimally has become an essential and rewarding topic, where a trend of using Evolutionary Computation (EC) algorithms is emerging rapidly. Through analyzing the cloud computing architecture, this survey first presents a taxonomy of cloud resource scheduling at two levels. It then paints a landscape of the scheduling problem and its solutions. According to the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are investigated, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.

    Towards using intelligent techniques to assist software specialists in their tasks

    Automation and intelligence constitute a major preoccupation in the field of software engineering. With the great evolution of Artificial Intelligence, researchers and industry have turned to Machine Learning and Deep Learning models to optimize tasks, automate pipelines, and build intelligent systems. The capabilities of Artificial Intelligence make it possible to imitate, and in some cases even outperform, human intelligence, as well as to automate manual tasks while raising accuracy, quality, and efficiency. In fact, accomplishing software-related tasks requires specific knowledge and skills. Thanks to the powerful capabilities of Artificial Intelligence, we can infer that expertise from historical experience using machine learning techniques. This alleviates the burden on software specialists and allows them to focus on more valuable tasks. In particular, Model-Driven Engineering is an evolving field that aims to raise the abstraction level of languages and to focus more on domain specificities. This allows shifting the effort put on implementation and low-level programming to a higher point of view focused on design, architecture, and decision making, thereby increasing the efficiency and productivity of creating applications. For its part, the design of metamodels is a substantial task in Model-Driven Engineering. Accordingly, it is important to maintain a high quality of metamodels, because they constitute a primary and fundamental artifact.
However, bad design choices, as well as repetitive design modifications due to the evolution of requirements, can deteriorate the quality of a metamodel. The accumulation of bad design choices and quality degradation can imply negative outcomes in the long term. Thus, refactoring metamodels is a very important task: it aims to improve and maintain good quality characteristics of metamodels such as maintainability, reusability, extendibility, etc. Moreover, the refactoring of metamodels is complex, especially when dealing with large designs. Therefore, automating this task and assisting architects in it is advantageous, since they can then focus on more valuable tasks that require human creativity, intuition, and intelligence. In this thesis, we propose a cartography of the tasks that could be either automated or improved using Artificial Intelligence techniques. Then, we select the metamodeling task and tackle the problem of metamodel refactoring. We suggest two different approaches: a first approach that uses a genetic algorithm to optimize a set of quality attributes and recommend candidate metamodel refactoring solutions, and a second approach, based on mathematical logic, that defines the specification of an input metamodel, encodes the quality attributes and the absence of design smells as a set of constraints, and finally satisfies these constraints using Alloy.
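The second approach, encoding quality attributes and smell absence as constraints to be satisfied, can be illustrated without Alloy. The toy sketch below brute-forces a redistribution of feature counts among classes until a "no overloaded class" constraint holds; the representation, threshold, and all names are hypothetical stand-ins for the thesis's Alloy specification, not its actual encoding.

```python
from itertools import product

def satisfies(metamodel, constraints):
    """A candidate is acceptable when every constraint predicate holds."""
    return all(c(metamodel) for c in constraints)

def search_refactoring(classes, max_features, constraints):
    """Try redistributing feature counts among classes so all constraints hold.

    `classes` maps class name -> feature count. Brute force stands in for the
    constraint solver; a refactoring moves features, it does not delete them,
    so the total feature count is preserved.
    """
    names = list(classes)
    total = sum(classes.values())
    for counts in product(range(max_features + 1), repeat=len(names)):
        if sum(counts) != total:
            continue
        candidate = dict(zip(names, counts))
        if satisfies(candidate, constraints):
            return candidate
    return None   # no smell-free redistribution exists under the bound
```

A real solver explores the same space symbolically instead of enumerating it, which is what makes the Alloy-based formulation scale beyond toy sizes.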

    Algorithm-aided Information Design: Hybrid Design approach on the edge of associative methodologies in AEC

    Master's dissertation in the European Master in Building Information Modelling. The last three decades have brought colossal progress to design methodologies, in the common pursuit of a seamless fusion between the digital and physical worlds, augmented by computation power and network coverage. In this historically short period, two generations of methodologies and tools have emerged: the Additive generation and the parametric Associative generation of CAD. Currently, designers worldwide are engaged in new forms of design exploration. From this race, two prominent methodologies have developed from the Associative Design approach – Object-Oriented Design (OOD) and Algorithm-Aided Design (AAD). The primary research objective is to investigate, examine, and push the boundaries between OOD and AAD to determine a new design space, where the advantages of both design methods are fused to produce a new-generation methodology, called in the present study AID (Algorithm-aided Information Design). The study methodology is structured into two flows. In the first flow, existing CAD methodologies are investigated, a conceptual framework is extracted based on state-of-the-art analysis, and the analysed data is synthesized into the subject proposal. In the second flow, tools and workflows are elaborated and examined in practice to confirm the subject proposal. Accordingly, the research consists of a theoretical and a practical part. In the theoretical part, a literature review is conducted, and assumptions are made about the AID methodology, its tools, and its possible advantages and drawbacks. Next, case studies are performed along the sequential stages of digital design, through the lens of practical AID methodology implementation. The case studies cover design aspects such as model and documentation generation, design automation, interoperability, manufacturing control, performance analysis, and optimization.
Ultimately, a set of test projects is developed with the AID methodology applied. After the practical part, the research returns to theory, where analytical information is gathered based on the literature review, the conceptual framework, and the experimental practice reports. In summary, the study synthesizes the AID methodology as part of Hybrid Design, which enables creative use of tools and the elaboration of agile design systems integrating the additive and associative methodologies of Digital Design. In general, the study is based on agile methods and cyclic research development, alternating between practice and theory to achieve a comprehensive vision of the subject.

    Management, Technology and Learning for Individuals, Organisations and Society in Turbulent Environments

    Get PDF
    This book presents the collection of fifty papers presented at the Second International Conference on BUSINESS SUSTAINABILITY 2011 – Management, Technology and Learning for Individuals, Organisations and Society in Turbulent Environments, held in Póvoa de Varzim, Portugal, from the 22nd to the 24th of June, 2011. The main motive for the meeting was the growing awareness of the importance of the sustainability issue. This importance has emerged from the growing uncertainty of market behaviour, which leads to the characterization of the market, i.e. the environment, as turbulent. Indeed, characterizing the environment as uncertain and turbulent reflects the fact that traditional technocratic and/or socio-technical approaches cannot deal effectively and efficiently with the present situation. In other words, the rise of the sustainability issue signals a quest for new instruments to deal with uncertainty and/or turbulence. The sustainability issue has a complex nature, and solutions are sought across a wide range of domains and instruments. The domains range from environmental sustainability (referring to the natural environment), through organisational and business sustainability, to social sustainability. The instruments for sustainability range from traditional engineering and management methodologies to "soft" instruments such as knowledge, learning, and creativity. The papers in this book address virtually the whole sustainability problem space to a greater or lesser extent. However, although uncertainty and/or turbulence – the dynamic properties – arise from the coupling of management, technology, learning, individuals, organisations, and society, meaning that everything is at the same time effect and cause, we wanted to put the emphasis on business, with the intention of addressing primarily companies and their businesses.
For this reason, the main title of the book is "Business Sustainability 2.0", with the approach of coupling Management, Technology and Learning for individuals, organisations and society in Turbulent Environments. The notation "2.0" also marks the publication as a step beyond our previous publication – "Business Sustainability I" – as it would for a new version of software. As for the Second International Conference on BUSINESS SUSTAINABILITY, its particularity was that it served primarily as a learning environment, in which the papers published in this book were the ground for further individual and collective growth in the understanding and perception of sustainability and in the capacity for building new instruments for business sustainability. In that respect, the methodology of the conference was basically dialogical, promoting dialogue on the papers while also including formal paper presentations. In this way, the conference presented a rich space for satisfying different authors' and participants' needs. Additionally, to promote the widest, global learning environment and participation, and in accordance with the Conference's assumed mission to promote Proactive Generative Collaborative Learning, the Conference Organisation makes the papers presented in this book, as well as those presented at the previous conference(s), openly available to the community. These papers can be accessed from the conference webpage (http://labve.dps.uminho.pt/bs11). In these terms, this book can also be understood as a complementary instrument for the Conference authors and participants, as well as for the wider readership interested in sustainability issues. The book brought together 107 authors from 11 countries, namely Australia, Belgium, Brazil, Canada, France, Germany, Italy, Portugal, Serbia, Switzerland, and the United States of America.
The authors ranged from senior and renowned scientists to young researchers, providing a rich learning environment. Finally, the editors hope that this book will be useful, meeting the expectations of the authors and the wider readership, serving to enhance individual and collective learning, and providing an incentive for further scientific development and the creation of new papers. The editors would also like to take this opportunity to announce their intention to continue with new editions of the conference and subsequent editions of the accompanying books on the subject of BUSINESS SUSTAINABILITY, the third of which is planned for 2013.

    Cloud Service Selection System Approach based on QoS Model: A Systematic Review

    The Internet of Things (IoT) has recently received a lot of interest from researchers. The IoT is seen as a component of the future Internet, which will include billions of intelligent, communicating "things" in the coming decades. The IoT is a diverse, multi-layer, wide-area network composed of a number of network links. Service discovery and on-demand provisioning are difficult in such networks, which are composed of a variety of resource-limited devices. The development of new IoT services will aid the growth of service-computing-related fields. Cloud service composition addresses this by providing significant services through the integration of single services. Because of the fast spread of cloud services and their differing Quality of Service (QoS), identifying the necessary tasks and putting together a service model that includes specific performance assurances has become a major technological problem and a widespread concern. Various strategies are used in service composition, e.g. clustering, fuzzy methods, deep learning, Particle Swarm Optimization, the Cuckoo Search Algorithm, and so on. Researchers have made significant efforts in this field, and computational intelligence approaches are thought to be useful in tackling such challenges. Even so, no systematic research on this topic has been done with specific attention to computational intelligence. This publication therefore provides a thorough overview of QoS-aware web service composition, covering QoS models and approaches and identifying future research directions.
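The weighted aggregation of QoS attributes that underlies many of the surveyed selection approaches can be sketched briefly. The fragment below is illustrative only: the attribute set (latency, availability), the min-max normalization, and all names are assumptions for the sketch, not a model taken from the review.

```python
def normalize(values, benefit=True):
    """Min-max normalize to [0, 1]; benefit attrs reward high values."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    if benefit:   # higher is better (e.g. availability)
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]  # lower is better (e.g. latency)

def rank_services(services, weights):
    """services: name -> (latency_ms, availability); weights: (w_lat, w_avail).

    Returns service names ordered best-first by the weighted QoS score.
    """
    names = list(services)
    lat = normalize([services[n][0] for n in names], benefit=False)
    avail = normalize([services[n][1] for n in names], benefit=True)
    scores = {n: weights[0] * l + weights[1] * a
              for n, l, a in zip(names, lat, avail)}
    return sorted(names, key=scores.get, reverse=True)
```

Composition approaches then optimize such scores over whole workflows rather than single services, which is where the metaheuristics listed above come in.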

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible, and reusable abstraction and one that is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error-prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together – data classes lack functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
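As a rough illustration of metrics-based smell detection (not the paper's Eclipse plug-in, and with arbitrary thresholds rather than Marinescu's calibrated ones), a simple method-count heuristic over Python sources might look like this:

```python
import ast

def detect_smells(source, god_methods=10, data_methods=2):
    """Heuristic detector over Python source text.

    A class with at least `god_methods` methods is flagged as a god class;
    one with at most `data_methods` methods (little behaviour, mostly state)
    as a data class. Both thresholds are illustrative assumptions.
    """
    smells = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            methods = [n for n in node.body if isinstance(n, ast.FunctionDef)]
            if len(methods) >= god_methods:
                smells[node.name] = "god class"
            elif len(methods) <= data_methods:
                smells[node.name] = "data class"
    return smells
```

Real detectors combine several metrics (cohesion, attribute access patterns, coupling) rather than a single count, precisely because single-metric rules produce many false positives.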

    Bridging different worlds: Using people analytics effectively for improving well-being and performance

    Sparked by an ever-increasing amount of data, organizations have begun to analyze the data of their workforce in hopes of improving their business outcomes (Cascio, Boudreau, & Fink, 2019; Levenson, 2005). This practice is called people analytics and refers to "the analysis of employee and workforce data to reveal insights and provide recommendations to improve business outcomes" (Ferrar & Green, 2021). People analytics can support any employee-related decision (Ellmer & Reichel, 2021; Huselid & Minbaeva, 2019), help the Human Resources Management (HRM) function become more strategic (Angrave, Charlwood, Kirkpatrick, Lawrence, & Stuart, 2016), and allow an organization to prepare for the future (Guenole, Ferrar, & Feinzig, 2017). Practically, people analytics can, for example, identify internal and external talents, create succession pipelines, predict which talents may be tempted to leave the organization, and provide recommendations on how they may be retained most efficiently (Minbaeva & Vardi, 2018; Rosenbaum, 2019; Yuan, Kroon, & Kramer, 2021). Due to these proposed benefits, organizations invest heavily in people analytics (Ledet, McNulty, Morales, & Shandell, 2021). Nevertheless, most organizations struggle to use it effectively (Ledet et al., 2020; Orgvue, 2019; Sierra-Cedar, Inc., 2019). Therefore, this dissertation aims to answer the following research question: How can people analytics be used to gain insights into, and provide recommendations to enhance, business outcomes? To answer this question, this dissertation addresses three challenges from the people analytics literature, after a general introduction of the topic and challenges in chapter 1. The challenges, their importance, and the results of the different chapters are briefly discussed in the following. How can an effective people analytics function be created? (challenge 1) This dissertation investigates what a people analytics function requires to be effective.
This is important, as there is rather limited understanding within the people analytics literature of how people analytics can be implemented effectively (Fernandez & Gallardo-Gallardo, 2020; Qamar & Samad, 2021). To address this issue, I conducted a narrative literature review (chapter 2) and follow-up qualitative research (chapter 3). For the literature review, the people analytics literature and the broader, more advanced business intelligence domain that people analytics is part of (Davenport & Harris, 2017; Holsapple et al., 2014) were investigated. Based upon this, a number of crucial elements for an effective people analytics function were identified. However, a number of gaps within the literature were also found. Specifically, the relationships between the different elements, and the processes a people analytics function requires to transform its inputs into outputs, remained unclear. To address these gaps, qualitative follow-up research was conducted (chapter 3), consisting of 36 in-depth interviews with members of nine people analytics functions and their stakeholders. Based on the findings, eight processes were identified that transform the inputs into outputs. Some of these are related to the projects of a people analytics function (i.e. project selection, management, execution, and the compliant and ethical behavior of people analytics experts) and others to their stakeholders (i.e. the attitude of stakeholders, collaborations, partnerships, and the transparency of the people analytics function towards its stakeholders). Furthermore, the "People Analytics Effectiveness Model" was developed, together with seven propositions to guide future research. On the one hand, these propositions illustrated the relative importance of the different elements a people analytics function requires. For example, having data was found to be more crucial than a specific organizational culture.
On the other hand, the propositions showed the relationships between the different elements: delivering high-quality people analytics products, for instance, increased the reputation of the people analytics function. Furthermore, as the reputation increased, people analytics functions were typically provided with more inputs and better contextual factors, such as access to new datasets and increased support from senior management. How can people analytics be used to enhance employee well-being and performance? (challenge 2) This dissertation demonstrates, through two use cases, how people analytics can be used to enhance employee well-being and performance. This is relevant, as organizations increasingly consider how the interests of managers and employees may be achieved in conjunction (Battilana, Obloj, Pache, & Sengul, 2020; Paauwe, 2004). However, there are few empirical studies on people analytics that demonstrate it can provide insights and recommendations that support employee well-being or performance (Margherita, 2021). In chapter 4, I therefore demonstrate how people analytics can be used to evaluate whether a company's decision to adopt the agile way of working is beneficial to employee well-being and performance. The agile way of working is an increasingly popular way of working among teams, characterized by self-management, face-to-face communication, reflexivity, a quick product turnaround, and customer interaction (Beck et al., 2001). To do this, I developed a survey focused on the agile way of working and tested, among 97 teams from one organization, whether the agile way of working leads to beneficial outcomes. The results indicated that this was indeed the case: the agile way of working was related to increased levels of team engagement and performance regardless of teams' functional domains. Moreover, these effects were found to be partially mediated by psychological safety climate.
Following this research, the company central to the study now has data-driven insights that support the decision to implement the agile way of working across a variety of functional domains. In chapter 5, I show how people analytics can be used to provide insights about employee well-being and performance and inform job design practices. Specifically, in line with the HRM literature (e.g. Ayala et al., 2017; Benitez et al., 2019; Tordera et al., 2020), I tested whether complex trade-off patterns may occur between employee well-being and performance. Based upon data of 5,729 employees working in a large financial organization, I find support for the notion that five well-being and performance profiles exist: 1. low well-being/low performance, 2. low well-being/medium performance, 3. high well-being/medium performance, 4. high well-being/high performance, and 5. high well-being/top performance. Furthermore, specific job demands and resources appeared to be related to these profiles. Specifically, more learning and development opportunities, more social support from colleagues, more autonomy, and less role conflict were related to the high well-being profiles. Additionally, more role clarity, more performance feedback, more autonomy, and less work pressure were related to the high- and top-performance profiles. Finally, communication and social support from the manager were found to be relatively weak antecedents of the different profiles.

How can people analytics departments benefit from a collaboration with academia? (challenge 3)

The final challenge this dissertation addresses is how people analytics departments may benefit from a collaboration with academia.
This is an important topic, as a competency gap among people analytics practitioners has been identified as one of the main obstacles for organizations to use people analytics effectively (Fernandez & Gallardo-Gallardo, 2020; McCartney et al., 2020). Specifically, Human Resource (HR) professionals usually fall short on statistical skills, while statistically strong individuals usually lack business acumen and HR knowledge (Andersen, 2016; McCartney et al., 2020; Rasmussen & Ulrich, 2015). As a potential solution, a collaboration with academia has been suggested (Simón & Ferreiro, 2018; Van der Togt & Rasmussen, 2017). In particular, the use of so-called "boundary spanners", in which, for example, PhD candidates bridge the gap between academia and a people analytics department, is frequently mentioned within the people analytics literature (Minbaeva, 2018; Van der Togt & Rasmussen, 2017). To illustrate how this may work in practice, chapter 6 of this dissertation discusses the benefits, the challenges and potential ways to navigate these challenges, based upon my own experience of working in a joint PhD trajectory for 4.5 years. In total, six benefits and five challenges are identified in this chapter. Among the benefits, the opportunity to conduct research that is relevant for both parties and the time and opportunity to identify and address real and pressing business needs are discussed. With regard to the challenges, topics such as the potentially diverging interests of both parties and limitations regarding the data are described.

Discussion

After addressing the challenges, the discussion follows in chapter 7. This chapter contains a summary of the main findings of this dissertation, their theoretical and practical contributions, strengths and limitations, and points of reflection. The primary contribution of this dissertation is to explore how people analytics can be used to gain insights into and provide recommendations to enhance business outcomes.
To this end, the discussion chapter describes what a people analytics function requires to be effective, investigates two potential use cases and shows how a collaboration with academics may be both beneficial and challenging. Furthermore, four points of reflection are discussed within this chapter. First, I describe how people analytics can contribute to and benefit from the employee experience. The employee experience is one of the current trends within the field of HRM and emphasizes that organizations need to consider the wants, needs and expectations of their employees from the moment of their recruitment all the way to the moment they leave the organization. Furthermore, as each employee is different, employee experience experts emphasize the need to offer a differentiated employee experience depending on the wants, needs and expectations of specific employees (Dye et al., 2020; Whitter, 2019). In this section, five concrete ways in which people analytics can support employee experience experts through data-driven insights are discussed. Furthermore, the reverse value of the employee experience for people analytics is also discussed. Specifically, whereas a substantial number of HR professionals are confused or skeptical about the use of people analytics (Guenole & Feinzig, 2018), the vast majority is enthusiastic about improving the employee experience (Dye et al., 2020). By offering insights and recommendations on a topic HR professionals are enthusiastic about, it is suggested that people analytics can increase the number of data-driven decisions taken within the HRM function and, through this, enhance employee well-being and performance. Second, I explore how data science and HRM research can become more intertwined. On the one hand, it is discussed how HRM scholars can utilize the data sources and analysis techniques used by data scientists to make new contributions to the HRM literature.
Specifically, the analysis of non-survey data, such as unstructured text and (HRM) system data, is highlighted as a method to unveil relevant insights into the sentiment, behavior, and perceptions of employees (e.g., Gloor et al., 2017; Yang et al., 2021). On the other hand, it is suggested that data scientists may benefit from using survey data, theories, and the analysis and interpretation techniques common among HRM scholars. This could help them avoid oversimplifying reality (e.g., human beings are more complex and unpredictable than the numbers captured in the HR information system or their model output may suggest) and, as a result, avoid misinterpretations, miscalculations and errors (Giermindl et al., 2021). Third, I discuss the topic of ethics within people analytics. Despite the benefits of people analytics that this dissertation highlights, people analytics has also been used by organizations for unethical purposes, such as intrusively tracking employees (Ajunwa et al., 2017; Tursunbayeva et al., 2021), (unintentional) discrimination (Dastin, 2018) or even firing employees (e.g., Business Internet Tech, 2021). Therefore, the ethical aspects of people analytics are highlighted in this section. On one side of the spectrum, I discuss that operating within the boundaries of the law is always necessary but not always sufficient, and explore the negative consequences of behaving unethically for the people analytics function itself. On the other end of the spectrum, I also discuss three examples in which I believe it is ethically just to push for the use of people analytics. Specifically, I advocate that data-driven insights can bring more equality and fairness to the workplace, increase the employability of employees and enhance employee well-being. Therefore, this section concludes that people analytics is not necessarily good or evil and that whether its use is ethical should be reviewed on a case-by-case basis.
Fourth, I discuss the governance of people analytics. Although various governance aspects are discussed in this dissertation (e.g., data governance, governance of the people analytics function), I suggest people analytics scholars and practitioners should also pay attention to the question of who owns people analytics. This is important, as software providers increasingly enable HR experts and line managers to run their own (semi-)automated advanced analytics models. However, as these professionals typically lack the required skills, there is a high risk of misinterpreting results, of incorrect findings arising purely by chance (e.g., as a result of the error margin inherent in all statistical models) and of statistical artifacts such as reverse causal relationships and spurious effects. I therefore recommend caution in enabling professionals who lack the capabilities to run advanced analytical models, for fear of wasting valuable organizational resources on the wrong actions, and recommend focusing on building their analytical capability first. Finally, I conclude this dissertation by emphasizing that the age of people analytics is just beginning. Continued attention from academics and practitioners will therefore be needed to ensure that the right bridges are built between the different worlds involved in being effective at people analytics: the worlds of HRM and technology; of academia and practice; of data science practitioners and HR practitioners; of subjectivity and objectivity; and of employee well-being and performance.