
    ILR Research in Progress 2013-14

    The production of scholarly research continues to be one of the primary missions of the ILR School. In a typical academic year, ILR faculty members publish or have accepted for publication over 25 books, edited volumes, and monographs; 170 articles and chapters in edited volumes; and numerous book reviews. In addition, a large number of manuscripts are submitted for publication, presented at professional association meetings, or circulated in working-paper form. Our faculty's research continues to find its way into the very best industrial relations, social science, and statistics journals.

    Initiating organizational memories using ontology network analysis

    One of the important problems with organizational memories is their initial set-up. It is difficult to choose the right information to include in an organizational memory, yet the right information is a prerequisite for maximizing the uptake and relevance of the memory's content. To tackle this problem, most developers adopt heavyweight solutions and rely on faithful, continuous interaction with users to create and improve the memory's content. In this paper, we explore an automatic, lightweight solution drawn from one of the underlying ingredients of an organizational memory: ontologies. We have developed an ontology-based network analysis method, which we applied to the problem of identifying communities of practice in an organization. We use ontology-based network analysis as a means of automatically providing content for the initial set-up of an organizational memory.
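
    A minimal sketch of the kind of pipeline the abstract describes, assuming the ontology has already been reduced to person-concept links (people tied to projects, documents, or topics). People become nodes, shared concepts become weighted edges, and a standard community-detection algorithm stands in for the paper's own technique to surface candidate communities of practice. All names, the sample data, and the networkx dependency are assumptions for illustration.

        # Sketch: communities of practice from ontology instance data (illustrative only).
        from collections import defaultdict
        from itertools import combinations

        import networkx as nx
        from networkx.algorithms import community

        # Hypothetical ontology extract: (person, concept) links.
        links = [
            ("alice", "ontology-engineering"), ("alice", "knowledge-management"),
            ("bob", "ontology-engineering"), ("bob", "semantic-web"),
            ("carol", "knowledge-management"), ("carol", "organizational-memory"),
            ("dave", "semantic-web"), ("dave", "ontology-engineering"),
        ]

        concepts = defaultdict(set)
        for person, concept in links:
            concepts[person].add(concept)

        # Weight an edge between two people by how many ontology concepts they share.
        G = nx.Graph()
        for a, b in combinations(concepts, 2):
            shared = len(concepts[a] & concepts[b])
            if shared:
                G.add_edge(a, b, weight=shared)

        # Modularity-based communities approximate communities of practice.
        for cop in community.greedy_modularity_communities(G, weight="weight"):
            print(sorted(cop))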

    Tracking Formation Changes and Their Effects on Soccer Using Position Data

    This study investigated the application of machine learning methods, specifically k-means clustering, k-Nearest Neighbors (kNN), and Support Vector Machines (SVM), to the analysis of player tracking data in soccer. The primary hypothesis was that such data can yield a standalone, in-depth understanding of soccer matches. The study revealed that while k-means and spatial analysis are promising for analyzing player positions, kNN and SVM show limitations without additional variables. The spatial analysis examined each team's convex hull and studied the correlation between team length, width, and surface area. Team length and surface area showed a strong positive correlation of 0.8954, suggesting that teams with a greater length play a more direct style, with players more spread out and hence larger surface areas. k-means clustering was performed with k values derived from different approaches: the silhouette method recommended k = 2, the elbow method recommended k = 4, and the context of the sport motivated additional analysis with k = 11. The k-means results suggested natural partitions in the data, highlighting distinct player roles and field positions. kNN was used to find similar players, with the k = 19 model showing the highest accuracy at 8.61%. The SVM model returned a classification over 55 classes, indicating a highly granular categorization of player roles. The kNN and SVM results pointed to the necessity of further contextual data for effective analysis and emphasized the need for balanced datasets and careful model evaluation to avoid bias and ensure practical applicability in real-world scenarios. In conclusion, each algorithm offers a unique perspective on player positioning and team formations. Combined with expert knowledge and additional contextual data, these algorithms can significantly enrich the scope of analysis in soccer. Future work should incorporate event data and additional variables to deepen the analytical insights and enable a more comprehensive understanding of how formations evolve in response to in-game situations.
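
    A compact sketch of the spatial-analysis and clustering steps described above, using random positions in place of real tracking data; numpy, scipy, and scikit-learn are assumed, the pitch is taken as 105 m by 68 m with x running along its length, and the seed and k values are only for demonstration (the 0.8954 correlation reported above came from actual match data).

        # Sketch: convex-hull spatial analysis and k-means on (x, y) player positions.
        import numpy as np
        from scipy.spatial import ConvexHull
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        rng = np.random.default_rng(0)
        # 200 synthetic frames of one team's 10 outfield players on a 105 x 68 pitch.
        frames = [rng.uniform([0, 0], [105, 68], size=(10, 2)) for _ in range(200)]

        lengths, areas = [], []
        for xy in frames:
            hull = ConvexHull(xy)
            lengths.append(np.ptp(xy[:, 0]))  # team length along the pitch
            areas.append(hull.volume)         # for 2-D hulls, .volume is the area
        print("corr(length, area) =", np.corrcoef(lengths, areas)[0, 1])

        # k-means on stacked positions; compare the candidate k values by silhouette.
        positions = np.vstack(frames)
        for k in (2, 4, 11):
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(positions)
            print(k, round(silhouette_score(positions, labels), 3))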

    Supporting the Production of High-Quality Data in Concurrent Plant Engineering Using a MetaDataRepository

    In recent years, several process models for data quality management have been proposed. Because data quality problems are highly application-specific, these models have to remain abstract, which leaves the question of what exactly to do in a given situation unanswered. The task of implementing a data quality process is usually delegated to data quality experts, who rely heavily on input from domain experts, especially regarding data quality rules. In large engineering projects, however, the number of rules is very large, and different domain experts may have different data quality needs, which considerably complicates the data quality experts' task. At the same time, the domain experts need quality measures to support their decisions about which data quality problems to solve most urgently. In this paper, we propose a MetaDataRepository architecture that allows domain experts to model their quality expectations without help from technical experts. It balances three conflicting goals: non-intrusiveness, simple and easy use for domain experts, and sufficient expressive power to handle the most common data quality problems in a large concurrent engineering environment.
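
    The paper's actual architecture is not reproduced here, but the core idea, letting domain experts state quality expectations declaratively while the repository counts violations so that the most urgent problems surface first, might be sketched as follows; all names, attributes, and thresholds are invented.

        # Sketch: declarative data quality rules evaluated by a repository (illustrative only).
        from dataclasses import dataclass
        from typing import Any, Callable

        @dataclass
        class QualityRule:
            name: str
            attribute: str
            check: Callable[[Any], bool]  # True if the value satisfies the rule

        def measure(records: list[dict], rules: list[QualityRule]) -> dict[str, int]:
            """Count violations per rule across an engineering data set."""
            return {
                rule.name: sum(1 for r in records if not rule.check(r.get(rule.attribute)))
                for rule in rules
            }

        # A domain expert declares expectations without writing repository code.
        rules = [
            QualityRule("pressure set", "design_pressure", lambda v: v is not None),
            QualityRule("pressure in range", "design_pressure",
                        lambda v: v is not None and 0 < v <= 400),
        ]
        records = [{"design_pressure": 250}, {"design_pressure": None}, {"design_pressure": 900}]
        print(measure(records, rules))  # {'pressure set': 1, 'pressure in range': 2}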

    On Subprocesses, Process Variants, and Their Interplay: An Integrated Divide-and-Conquer Method for Modeling Business Processes with Variation

    Every organization can be conceived as a system in which value is created by means of business processes. In large organizations, it is common for business processes to be represented by process models, which serve a range of purposes such as internal communication, training, process improvement, and information systems development. Given their multifunctional character, process models need to be captured in a way that facilitates understanding and maintenance by a variety of stakeholders. This thesis proposes an integrated decomposition-driven method for modeling business processes with variants. The core idea of the method is to incrementally construct a decomposition of a business process and its variants into subprocesses. At each level of the decomposition, and for each subprocess, the method determines whether the subprocess should be modeled in a consolidated manner (one subprocess model for all or several variants) or in a fragmented manner (one subprocess model per variant). In this way, a top-down approach of slicing and dicing the business process is taken: the process is first sliced into its variants and then diced (decomposed) into subprocesses. The decision is based on two parameters. The first is the business driver behind each variant: every variant of a business process has a root cause, a reason stemming from the business itself that makes the variants differ in how they are executed. These root causes fall into five categories: drivers rooted in the customer, the product, operational concerns, the market, and time. The second parameter is the degree of difference in the way the variants produce their outcomes (order of activities, values of the results, and so on). The method is validated by two real-life case studies: the first concerns the consolidation of existing process models, while the second deals with green-field process discovery. The method is thus applied in two different contexts (consolidation and discovery) on two clearly distinct cases. In both case studies, the method produced sets of process models with up to 50% less duplication than conventional methods, while keeping the complexity of the models roughly stable.
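
    A hypothetical encoding of the per-subprocess decision the method prescribes. The five driver categories come from the abstract; the numeric difference score, the threshold, and all names are assumptions standing in for the thesis's richer criteria.

        # Sketch: consolidated vs. fragmented modeling per subprocess (illustrative only).
        from dataclasses import dataclass

        DRIVERS = {"customer", "product", "operational", "market", "time"}

        @dataclass
        class Subprocess:
            name: str
            driver: str        # root cause of the variation, one of DRIVERS
            difference: float  # 0.0 = variants behave identically, 1.0 = fully disjoint

        def modeling_style(sp: Subprocess, threshold: float = 0.5) -> str:
            """One model for all variants, or one model per variant."""
            assert sp.driver in DRIVERS
            return "fragmented" if sp.difference > threshold else "consolidated"

        for sp in [Subprocess("Handle claim", "product", 0.2),
                   Subprocess("Assess risk", "customer", 0.7)]:
            print(sp.name, "->", modeling_style(sp))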