349 research outputs found

    Nonlinear Approach in Classification Visualization and Evaluation

    In this paper we propose a novel methodology to visualize a classification scheme in the informatics domain. We mapped a document collection from the ACM (Association for Computing Machinery) Digital Library onto a sphere surface. Two main stages of the visualization process complement one another: classification and clustering. Documents were first classified and visualized; their subsequent clustering by keywords was crucial to the evaluation process. Nonlinear digital filtering techniques were applied to analyse the clusters in the resulting visualization maps. The clusters of keywords were characterized by a local accuracy. The resulting semantic map was included in the validation process.

    Data Analytics for Effectiveness Evaluation of Islamic Higher Education using K-Means Algorithm

    The aim of this research is to apply data analytics to evaluating the implementation of the Indonesian national curriculum, which is based on the Indonesian National Qualification Framework, especially in universities. The research uses Exploratory Data Analysis (EDA) and several clustering methods: K-Means, K-Means++, MiniBatch K-Means, and MiniBatch K-Means++. The goal is not to measure clustering accuracy, but to discover insights and interpretable information in data related to the national curriculum in Indonesia. Based on EDA and the clustering methods, with 30 question variables and 67 student respondents, MiniBatch K-Means with 2 clusters yields the most reliable pattern, with the highest Silhouette Coefficient value. On average, however, K-Means++ gives a better interpretation than the other methods, with the highest mean Silhouette Coefficient. From these results, the research finds that around 77.67% of students generally understand and experience the application of the Indonesian national curriculum well, but only about 19.4% of students really understand and feel the impact of the curriculum very well. This is important for curriculum users, in this case students and tertiary educational institutions, to evaluate in order to improve the quality of academic services in applying the Indonesian national qualification framework.
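The cluster-count selection the abstract describes (MiniBatch K-Means evaluated by the Silhouette Coefficient) can be sketched as follows. This is a minimal illustration on synthetic data, not the survey dataset; the shapes (67 respondents, 30 variables) merely mirror the abstract.

```python
# Pick the number of clusters by the highest Silhouette Coefficient,
# as described for MiniBatch K-Means in the study (synthetic data).
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# 67 "respondents" with 30 "question variables", two latent groups.
X, _ = make_blobs(n_samples=67, n_features=30, centers=2, random_state=0)

scores = {}
for k in range(2, 6):
    labels = MiniBatchKMeans(n_clusters=k, n_init=10,
                             random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)  # in [-1, 1], higher is better

best_k = max(scores, key=scores.get)
print(best_k)  # the k with the best-separated, most cohesive clusters
```

With clearly separated groups, the silhouette peaks at the true cluster count; on noisier survey data the peak is flatter and should be read together with domain interpretation, as the authors do.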

    Cheating to achieve Formal Concept Analysis over a large formal context

    Researchers are facing one of the main problems of the Information Era: as more articles become electronically available, it gets harder to follow trends in the different domains of research. Knowledge models of research domains that are cheap, coherent and fast to construct will be much in demand as information becomes unmanageable. While Formal Concept Analysis (FCA) has been widely used in several areas to construct knowledge artifacts for this purpose (ontology development, information retrieval, software refactoring, knowledge discovery), the large number of documents and terms used in research domains makes it a poor option on its own, because of the high computational cost and humanly unprocessable output. In this article we propose a novel heuristic to create a taxonomy from a large term-document dataset using Latent Semantic Analysis and Formal Concept Analysis. We provide and discuss its implementation on a real dataset from the Software Architecture community obtained from the ISI Web of Knowledge (4400 documents).
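The first stage of such a pipeline can be sketched as below: Latent Semantic Analysis compresses the term-document matrix, and thresholding the reduced representation yields a small binary incidence table of the kind FCA takes as a formal context. The corpus, component count and threshold here are illustrative assumptions, not the paper's actual dataset or parameters.

```python
# LSA (truncated SVD on tf-idf) followed by thresholding to obtain a
# binary object-attribute table usable as an FCA formal context.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "software architecture style patterns",
    "architecture evaluation method",
    "formal concept analysis lattice",
    "concept lattice construction algorithm",
]
tfidf = TfidfVectorizer().fit_transform(docs)    # term-document matrix
svd = TruncatedSVD(n_components=2, random_state=0)
doc_topics = svd.fit_transform(tfidf)            # documents in LSA space

# Formal context: object = document, attribute = latent topic,
# incidence = |projection| above an (illustrative) threshold.
context = (np.abs(doc_topics) > 0.4).astype(int)
print(context.shape)  # rows: documents, columns: latent attributes
```

Reducing thousands of raw terms to a handful of latent attributes is what keeps the subsequent concept-lattice construction tractable, which is the "cheating" the title alludes to.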

    Automatic tolerance inspection through Reverse Engineering: a segmentation technique for plastic injection moulded parts

    This work studies segmentation procedures for recognising features in a Reverse Engineering (RE) application oriented to computer-aided tolerance inspection of injection-moulding die set-up, necessary to manufacture electromechanical components. It discusses all steps of the procedure, from the initial acquisition to the final management of measurement data, but the original developments focus on the RE post-processing method, which should solve the problem of automating surface recognition and hence the inspection process. As explained in the first two chapters, automation of the inspection process depends mainly on feature recognition after the segmentation step. This work presents a voxel-based approach aimed at reducing the computational effort of tessellation and curvature analysis, with or without filtering. A voxel structure approximates the shape through parallelepipeds that each enclose a small subset of points. In this sense it acts as a filter, since the number of voxels is smaller than the total number of points, but also as a local approximation of the surface, if proper fitting models are applied. Through sensitivity analyses and industrial applications, the limits and prospects of the proposed algorithms are discussed and validated in terms of accuracy and time savings. The validation case studies are taken from real applications at ABB Sace S.p.A., which promoted this research. Plastic injection moulding of electromechanical components requires a time-consuming die set-up, because dies must provide many cavities, which during the cooling phase may present different stamping conditions and hence defects, including lengths outside their dimensional tolerances and geometrical errors.
    To increase industrial efficiency, inspection automation requires not only automatic feature recognition but also a computer-aided inspection protocol (path planning and inspection data management). These steps are therefore also addressed, as the natural framework of the thesis research activity. The work is structured in six chapters. Chapter 1 introduces the whole procedure, focusing on the reasons for and benefits of applying RE techniques in industrial engineering. Chapter 2 analyses the acquisition issues and methods related to this application, describing (a) the selected hardware and (b) the adopted point-cloud acquisition strategy. Chapter 3 describes the proposed RE post-processing together with a state of the art on data segmentation and surface reconstruction. Chapter 4 discusses the proposed algorithms through sensitivity studies on the thresholds and parameters used in the segmentation and surface-reconstruction phases. Chapter 5 briefly explains the inspection workflow, the PDM requirements and solution, and a preliminary assessment of the measurements and their reliability. Chapters 3, 4 and 5 end with sections called “Discussion”, in which specific considerations are given. Finally, Chapter 6 gives examples of the proposed segmentation technique in industrial applications, through specific case studies.
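The voxel filter described above — parallelepipeds each enclosing a small subset of points, so that the voxel set is both a downsampling and a local surface approximation — can be sketched as follows. The grid resolution and the centroid-per-voxel representative are illustrative choices, not the thesis's exact fitting models.

```python
# Voxel downsampling: bin points into an axis-aligned grid and replace
# each occupied voxel's points by their centroid, reducing point count
# before tessellation / curvature analysis.
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Return one centroid per occupied voxel of the given edge length."""
    keys = np.floor(points / voxel_size).astype(int)   # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, points.shape[1]))
    counts = np.bincount(inverse, minlength=n_voxels)
    np.add.at(sums, inverse, points)                   # sum points per voxel
    return sums / counts[:, None]                      # centroid per voxel

rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))               # 1000 points in a unit cube
reduced = voxel_downsample(cloud, 0.25)     # 4x4x4 grid: at most 64 voxels
print(len(reduced))
```

The key property is the one the thesis exploits: the output size is bounded by the number of occupied voxels, not by the size of the acquired cloud, so downstream fitting cost becomes controllable via the voxel size.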

    The Role of Non-R&D Expenditures in Promoting Innovation in Europe

    In this article we estimate the value of “Non-R&D Innovation Expenditures” in Europe, using data from the European Innovation Scoreboard (EIS) of the European Commission for the period 2010-2019. We test the data with the following econometric models: Pooled OLS, Dynamic Panel, Panel Data with Fixed Effects, Panel Data with Random Effects, and WLS. We find that “Non-R&D Innovation Expenditures” is positively associated with, among others, “Innovation Index” and “Firm Investments”, and negatively associated with, among others, “Human Resources” and “Government Procurement of Advanced Technology Products”. Using the k-Means algorithm with both the Silhouette Coefficient and the Elbow Method, compared against a network analysis optimized with the Manhattan distance, we find that the optimal number of clusters is four. Furthermore, we compare eight machine learning algorithms for predicting the level of “Non-R&D Innovation Expenditures”, with both Original Data (OD) and Augmented Data (AD). We find that Gradient Boost Trees Regression is the best predictor for OD, while Tree Ensemble Regression is the best predictor for AD. Finally, we verify that prediction with AD is more efficient than with OD, with a 40.50% reduction in the average value of the statistical errors.
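The Elbow Method the article pairs with the Silhouette Coefficient can be sketched as below: compute k-Means inertia for increasing k and pick the point where the decrease flattens. The synthetic data and the "largest relative drop" elbow rule are illustrative assumptions; in practice the elbow is often read off a plot.

```python
# Elbow Method sketch: k-Means inertia versus k on synthetic data,
# with a crude automatic elbow pick (largest relative inertia drop).
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=4, random_state=1)

inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=1).fit(X).inertia_
            for k in range(1, 8)}

# Relative drop in inertia going from k to k+1 clusters.
drops = {k: (inertias[k] - inertias[k + 1]) / inertias[k] for k in range(1, 7)}
elbow_k = max(drops, key=drops.get) + 1
print(elbow_k)
```

Because inertia always decreases as k grows, the elbow criterion looks for where the marginal gain collapses; combining it with the Silhouette Coefficient, as the article does, guards against the two heuristics disagreeing.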

    Industry 4.0 & Servitization: Role and impact of digital servitization strategies in international industrial markets

    This research project investigates two main phenomena, digitalization and servitization, and the resulting ‘digital servitization’, within industrial markets. The main objective of this study is to contribute new knowledge about the phenomena investigated and to offer managers and practitioners sound suggestions on how to tackle them successfully. Digitalization and digital servitization are recent but fast-growing research areas, on which the attention of many academics and practitioners is converging. Despite the great dynamism that characterises these phenomena, industrial firms still face several challenges when trying to implement them; indeed, manufacturing companies perceive high barriers to investing in digital strategies. Adopting a problematization approach, it becomes clear that further knowledge about digitalization and servitization processes is needed in order to better understand the related benefits and challenges. Starting from the identification of a series of still under-studied research areas, this thesis empirically analyses the phenomena of digitalization and servitization. To this end, the research work is structured in five main chapters. Chapter I – Theoretical foundations and methodological notes – reviews the available literature on digitalization and digital servitization and clarifies the methodological choices adopted in this thesis; its purpose is to provide a preliminary theoretical analysis of digitalization and servitization. Chapter II – The diffusion mechanisms of Industry 4.0 knowledge in traditional industrial districts: evidence from Italy – empirically investigates digitalization at the contextual level of analysis.
    The chapter examines the context and the mechanisms through which Industry 4.0 technologies are spreading. Chapter III – Towards a multilevel perspective on digital servitization – empirically studies the digital servitization paths of two manufacturing companies at the cross-industry level. Chapter IV – Intra- and inter-organizational tensions of a digital servitization strategy: evidence from the mechatronics industry in Italy – is an empirical study of the emerging tensions related to digital servitization. In particular, the chapter carries out an in-depth investigation of one industrial company, longitudinally exploring the stages of its digital servitization journey in order to untangle its complexity. Chapter V – Concluding remarks and future research paths – draws the conclusions of the research and outlines future research directions. The main contributions of this research work can be summarised as follows. This thesis highlights the close connection between the phenomena of digitalization and servitization and shows that digitalization can be a ‘double-edged sword’. The empirical results describe digital servitization through its multilevel nature, which manifests itself on three levels: micro (individual), organizational and network. The thesis also underlines the impact of networking on digitalization and servitization processes: new actors join the value chain and can influence the course of the two phenomena. Finally, the complexity of digitalization and servitization processes is proved empirically, and evidence is provided of the difficulties manufacturing companies encounter when trying to implement them.

    The impact of patent applications on technological innovation in European countries

    We investigate the innovation-related determinants of “Patent Applications” in Europe, using data from the European Innovation Scoreboard (EIS) of the European Commission for 36 countries over the period 2010-2019. We use Panel Data with Fixed Effects, Panel Data with Random Effects, Pooled OLS, WLS and Dynamic Panel models. We find that the variables with the strongest positive association with “Patent Applications” are “Human Resources” and “Intellectual Assets”, while the variables with the most intense negative relation are “Employment Share in Manufacturing” and “Total Entrepreneurial Activity”. A cluster analysis with the k-Means algorithm, optimized with the Silhouette Coefficient, shows the presence of two clusters. A network analysis with the Manhattan distance reveals three different complex network structures. Finally, eight machine learning algorithms are compared for predicting the future value of “Patent Applications”. We find that PNN-Probabilistic Neural Network is the best-performing algorithm; using PNN, the mean future value of “Patent Applications” in the estimated countries is expected to decrease by 0.1%.
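A network analysis based on the Manhattan distance, as used in the last two articles, can be sketched as below: compute pairwise cityblock distances between country indicator vectors and connect pairs whose distance falls below a threshold. The feature vectors and the median threshold are illustrative assumptions, not the articles' data or cut-off.

```python
# Build an undirected network from Manhattan (cityblock) distances:
# nodes are "countries", edges link pairs closer than a threshold.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)
features = rng.random((10, 5))                 # 10 countries, 5 indicators
dist = cdist(features, features, metric="cityblock")

threshold = np.median(dist[dist > 0])          # illustrative cut-off
adjacency = (dist < threshold) & (dist > 0)    # boolean edge matrix
print(int(adjacency.sum()) // 2)               # number of undirected edges
```

Connected components of this adjacency matrix then give the distinct network structures; varying the threshold shows how the articles' "complex network structures" merge or split.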