13 research outputs found

    Analysis and Forecast of Mining Fatalities in Cherat Coal Field, Pakistan

    Get PDF
    Mineral exploitation contributes to the economic growth of developing countries. However, mineral production has also created a hazardous working environment, with worker casualties linked to deficiencies in the safety management system. One barrier to attaining an adequate safety management system is the unavailability of forward-looking information on accidents causing fatalities. Policymakers typically adjust the safety system only after each accident. Therefore, a precise forecast of the number of worker fatalities can provide significant insight for strengthening the safety management system. This study forecasts the number of mining worker fatalities in the Cherat coal mines using an Autoregressive Integrated Moving Average (ARIMA) model. Fatality records covering 1994 to 2018 were collected from the Mine Workers Federation, the Inspectorate of Mines and Minerals, and company records to evaluate the long-term forecast. Various diagnostic tests were used to select an optimal model. The results show that ARIMA (0, 1, 2) was the most appropriate model for worker fatalities. Based on this model, casualties from 2019 to 2025 have been forecasted. The results suggest that policymakers should systematically evaluate the risks associated with an increasing number of fatalities and develop a safe and effective working environment.
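    As a rough illustration of the forecasting step described above, the sketch below fits an ARIMA(0, 1, 2) model to an annual fatality series with statsmodels and forecasts 2019 to 2025. The fatality counts used here are placeholder values, not the study's data.

```python
# Hedged sketch: ARIMA(0, 1, 2) fitted to a yearly fatality series, then a 7-year forecast.
# The counts below are placeholders standing in for the 1994-2018 records described above.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical yearly fatality counts, 1994-2018 (25 values).
fatalities = pd.Series(
    [12, 9, 14, 11, 10, 13, 15, 12, 9, 11, 16, 14, 13, 12, 10,
     11, 15, 17, 14, 13, 12, 16, 18, 15, 14],
    index=pd.period_range("1994", periods=25, freq="Y"),
)

model = ARIMA(fatalities, order=(0, 1, 2))  # ARIMA(p=0, d=1, q=2), as selected in the study
fit = model.fit()
print(fit.summary())

forecast = fit.forecast(steps=7)            # forecast horizon 2019-2025
print(forecast)
```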

    A holistic team approach (HTA) model to curb machinery accidents in power plants.

    Get PDF
    Machinery accidents have required proper attention in all workplaces in recent years, especially power plants. A large number of accident cases were reported from 2018 to 2022. Accident reports from DOSH (Department of Safety and Health Malaysia) indicate that a significant number of machinery accidents occur in power plants, while PERKESO (Social Security Organisation Malaysia) has investigated and tabulated accidents by workplace area and injury. Research shows that most statistical studies do not include a preventive model, involving both employees and management, to curb machinery accidents. A model comprising the machinery or area of work (M) and the type of injury (I) is identified and expressed as an equation whose result is the possible accident type (α), i.e., the accident that occurred. A Holistic Team Approach (HTA) model is designed that assigns a team to each M and I element, comprising engineers, technicians, and operators working in the same equipment area together with a management representative. Each team is assigned to specific accidents according to the M and I elements, classified as α-combinations. Teams are sent for incident investigation, where preventive actions and reporting are discussed. A decision analysis is performed based on the model, emphasizing two Process Safety Management (PSM) elements: accident investigation and employee participation. The HTA model is able to reduce machinery accidents by involving the elements of machinery and injury types, and it is applicable to workplaces worldwide.
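    The following minimal sketch illustrates the α-combination idea: each pairing of a machinery/work-area element (M) with an injury-type element (I) maps to a possible accident type (α) and to a team responsible for its investigation. All category names and team assignments are illustrative, not taken from the paper.

```python
# Minimal sketch of the alpha-combination mapping described above. Every (M, I) pair
# yields a possible accident type (alpha) and is owned by a mixed HTA team; the
# categories and team labels here are hypothetical.
from itertools import product

machinery = ["turbine hall", "boiler", "conveyor"]   # M elements (hypothetical)
injuries = ["crush", "burn", "laceration"]           # I elements (hypothetical)

# alpha(M, I): the accident type implied by a given M-I pairing.
alpha = {(m, i): f"{i} accident at {m}" for m, i in product(machinery, injuries)}

# Each alpha-combination is assigned to a team of engineers, technicians, operators
# and a management representative responsible for investigation and prevention.
teams = {combo: f"HTA team {n + 1}" for n, combo in enumerate(alpha)}

incident = ("boiler", "burn")
print(alpha[incident], "->", teams[incident])
```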

    Medical disease prediction using Grey Wolf optimization and auto encoder based recurrent neural network

    Get PDF
    Big data development in biomedical and medical service networks enables research on the benefits of medical data, early disease detection, patient care, and network administration. e-Health applications are particularly important for patients who are unable to see a specialist or other health expert. The objective is to help clinicians and families predict disease using Machine Learning (ML) procedures. In addition, different regions show distinctive characteristics of certain regional ailments, which may hinder the forecasting of disease outbreaks. The objective of this work is to predict different kinds of diseases using Grey Wolf Optimization and an auto-encoder based Recurrent Neural Network (GWO+RNN). Features are selected using GWO and diseases are predicted using the RNN. First, the GWO algorithm removes irrelevant and redundant attributes; the selected features are then forwarded to the RNN classifier. The experimental results show that the GWO+RNN algorithm performs better than existing methods such as the Group Search Optimizer and Fuzzy Min-Max Neural Network (GFMMNN) approach. The GWO+RNN method was evaluated on UCI medical datasets such as Hungarian, Cleveland, PID, mammographic masses, and Switzerland, and performance was measured with metrics such as accuracy, sensitivity, and specificity. The proposed GWO+RNN method achieved a 16.82% improvement in prediction accuracy on the Cleveland dataset.
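    The sketch below illustrates GWO used as a wrapper feature selector in the spirit of the pipeline above; for brevity a logistic-regression classifier stands in for the paper's auto-encoder based RNN, and the data are synthetic, so this is not the authors' implementation.

```python
# Hedged sketch of Grey Wolf Optimization as a wrapper feature selector. A simple
# logistic-regression classifier replaces the paper's auto-encoder based RNN, and the
# dataset is synthetic; only the GWO selection idea is illustrated.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of the classifier on the selected features."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

n_wolves, n_iter, dim = 8, 20, X.shape[1]
wolves = rng.random((n_wolves, dim))            # continuous positions in [0, 1]

for t in range(n_iter):
    masks = wolves > 0.5                        # threshold positions to binary feature masks
    scores = np.array([fitness(m) for m in masks])
    order = np.argsort(scores)[::-1]
    alpha, beta, delta = wolves[order[:3]]      # three best wolves lead the pack
    a = 2 - 2 * t / n_iter                      # exploration factor decays to 0
    for i in range(n_wolves):
        new = np.zeros(dim)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            D = np.abs(C * leader - wolves[i])
            new += leader - A * D               # move toward each leader
        wolves[i] = np.clip(new / 3, 0, 1)

best_mask = alpha > 0.5
print("selected features:", np.flatnonzero(best_mask), "accuracy:", fitness(best_mask))
```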

    Beyond panoptic surveillance: On the ethical dilemmas of the connected workplace

    Get PDF
    Technological advances such as the Internet-of-Things, big data, and artificial intelligence have enabled new forms of managerial oversight, moving away from panoptic surveillance to what we call “connected surveillance”. The COVID-19 pandemic has accelerated the adoption of connected surveillance, whose purpose is to scrutinize not only employees’ work performance but also their health, personal beliefs, and other private matters. With the implementation of connected workplaces, therefore, various ethical dilemmas arise. We highlight four emerging dilemmas, namely: (1) the good of the individual versus the good of the community, (2) ownership versus information disclosure, (3) justice versus mercy, and (4) truth versus loyalty. We discuss these ethical dilemmas for the case of corporate wellness programs, which are frequently used as a guise to introduce connected surveillance. Following a socio-technical perspective, we discuss ethical responses that focus on people involvement and technology assessment. We highlight practical responses aimed at mitigating the dilemmas.

    Klasifikasi Laporan Keluhan Pelayanan Publik Berdasarkan Instansi Menggunakan Metode LDA-SVM

    Get PDF
    A service system named Lapor! allows the public to convey aspirations and complaints about Indonesian government services. The government has long used this system to address bureaucratic problems faced by the Indonesian people. However, the increasing volume of reports, combined with manual sorting in which operators read every complaint submitted through the system, frequently leads to reports being forwarded to the wrong agency. A solution is therefore needed that can automatically determine a report's context using Natural Language Processing techniques. This study aims to build an automatic report classifier that assigns reports to the authorized agency based on their topic by combining Latent Dirichlet Allocation (LDA) and Support Vector Machine (SVM). Topic modeling for each report is carried out with LDA, which extracts specific patterns from the documents and outputs topic-distribution values. The classification step that determines a report's destination agency is then performed with an SVM operating on the topic values extracted by LDA. The performance of the LDA-SVM model is measured with a confusion matrix by calculating accuracy, precision, recall, and F1 score. Test results using a 70:30 train-test split show that the model performs well, with 79.85% accuracy, 79.98% precision, 72.37% recall, and a 74.67% F1 score.
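    The sketch below shows one way the described LDA-SVM pipeline could be assembled with scikit-learn: LDA converts each complaint into a topic-distribution vector, and a linear SVM classifies that vector to a destination agency. The complaint texts and agency labels are toy placeholders, not the Lapor! data.

```python
# Hedged sketch of an LDA-SVM pipeline: bag-of-words counts -> LDA topic distributions ->
# SVM classification of the destination agency, evaluated on a 70:30 split as in the study.
# The texts and labels below are toy placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

texts = [
    "jalan rusak belum diperbaiki", "lampu jalan mati di perumahan",
    "antrian panjang di kantor pajak", "pelayanan pajak online error",
    "guru belum menerima tunjangan", "sekolah kekurangan ruang kelas",
] * 10
labels = ["pekerjaan umum", "pekerjaan umum", "pajak", "pajak",
          "pendidikan", "pendidikan"] * 10

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, random_state=42)   # 70:30 train-test split

model = make_pipeline(
    CountVectorizer(),                                          # bag-of-words counts
    LatentDirichletAllocation(n_components=5, random_state=42), # topic-distribution vectors
    SVC(kernel="linear"),                                       # classify on topic vectors
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```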

    Saf Sci

    Get PDF
    Big data and analytics have shown promise in predicting safety incidents and identifying preventative measures directed towards specific risk variables. However, the safety industry lags in big data utilization due to various obstacles, including a lack of data readiness (e.g., disparate databases, missing data, low validity) and gaps in personnel competencies. This paper provides a primer on the application of big data to safety. We then describe a safety analytics readiness assessment framework that highlights system requirements and the challenges that safety professionals may encounter in meeting these requirements. The proposed framework suggests that safety analytics readiness depends on (a) the quality of the data available, (b) organizational norms around data collection, scaling, and nomenclature, (c) foundational infrastructure, including the technological platforms and skills required for data collection, storage, and analysis of health and safety metrics, and (d) measurement culture, or the emergent social patterns between employees, data acquisition, and analytic processes. A safety-analytics readiness assessment can help organizations understand their current capabilities so measurement systems can be matured to accommodate more advanced analytics, with the ultimate purpose of improving decisions that mitigate injuries and incidents.
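    Purely as an illustration, the framework's four readiness dimensions could be captured as a simple scored checklist; the dimension names follow the abstract, while the 1-5 scale and the unweighted average are assumptions, not part of the paper.

```python
# Illustrative sketch only: the four readiness dimensions from the abstract expressed as a
# scored checklist. The 1-5 scale and the unweighted mean are simplifying assumptions.
from dataclasses import dataclass

@dataclass
class SafetyAnalyticsReadiness:
    data_quality: int          # (a) quality of the data available, scored 1-5
    organizational_norms: int  # (b) norms around data collection, scaling, nomenclature
    infrastructure: int        # (c) platforms and skills for collection, storage, analysis
    measurement_culture: int   # (d) social patterns around data acquisition and analytics

    def overall(self) -> float:
        """Unweighted mean of the four dimensions (an assumed aggregation)."""
        return (self.data_quality + self.organizational_norms
                + self.infrastructure + self.measurement_culture) / 4

print(SafetyAnalyticsReadiness(3, 2, 4, 3).overall())
```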

    A scientometric analysis of the emerging topics in general computer science

    Get PDF
    Citations are a widely accepted journal performance metric used by many indexing databases when deciding to include or discontinue journals in their lists. Therefore, editorial teams must maintain journal performance by increasing article citations to keep their content indexed in these databases. With this aim, this study intended to assist the editorial team of the Journal of Information and Communication Technology (JICT) in increasing the performance and impact of the journal. The journal currently suffers from a low citation count, which may jeopardise its sustainability. Past studies in library science have suggested a positive correlation between keywords and citations. Therefore, keyword and topic analyses could be a solution to the issue of journal citation. This article describes a scientometric analysis of emerging topics in general computer science, the Scopus subject area in which JICT is indexed. The study extracted bibliometric data of the top 10% of journals in the subject area to create a dataset of 5,546 articles. The results suggest ten emerging topics in computer science that the journal's editorial team can consider when selecting articles, along with a list of highly used keywords in articles published in 2019 and 2020 (as of 15 April 2020). The outcome of this study may be considered by the JICT editorial team and by other general computer science journals facing a similar issue.
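    A minimal sketch of the kind of keyword analysis described above is given below: counting author keywords in a bibliometric export to surface frequently used terms. The file name and the "Author Keywords" column mirror Scopus CSV exports but are assumptions here, not details from the study.

```python
# Hedged sketch: tally author keywords from a bibliometric CSV export to surface the most
# frequently used terms. "scopus_export.csv" and the "Author Keywords" column are assumed
# names modelled on Scopus exports, not artifacts of this study.
from collections import Counter
import csv

keyword_counts = Counter()
with open("scopus_export.csv", newline="", encoding="utf-8") as f:  # hypothetical export file
    for row in csv.DictReader(f):
        for kw in row.get("Author Keywords", "").split(";"):
            kw = kw.strip().lower()
            if kw:
                keyword_counts[kw] += 1

for keyword, count in keyword_counts.most_common(10):  # ten most frequent keywords
    print(f"{keyword}: {count}")
```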

    Predictive analytics in agribusiness industries

    Get PDF
    Agriculturally related industries are routinely among the most hazardous work environments. Workplace injuries directly affect labor-market outcomes, including income reduction, job loss, and the health of injured workers. In addition to medical and indemnity costs, workplace incidents carry indirect costs such as equipment damage and repair, incident investigation time, training new personnel to replace injured workers, increased insurance premiums in the year following the incident, slowed production schedules, damage to the company's reputation, and reduced worker motivation to return to work. The main purpose of incident analysis is to derive and develop preventative measures from injury data. Applying proper analytical tools to discover the causes of occupational incidents is essential for gaining information that contributes to preventing such incidents in the future. Insight gained from analyses of workers' compensation data can efficiently direct preventative activities at high-risk industries. Since incidents arise from a combination of factors rather than a single cause, research on occupational incidents must go deeper into identifying the underlying causes and their relationships through more comprehensive analyses. Therefore, this study aimed to identify underlying patterns in occupational injury occurrence and costs using data mining and predictive modeling techniques rather than traditional statistical methods. Using a workers' compensation claims dataset, the objectives of this study were to: investigate the use of predictive modeling techniques in forecasting future claims costs from historical data; identify distinctive patterns of high-cost occupational injuries; and examine how well machine learning methods find the predictive relationship between the factors influencing occupational injuries and the occurrence and severity of workers' compensation claims. The results lead to a better understanding of injury patterns and to the identification of prevalent causes of occupational injuries and of high-risk industries and occupations. Stakeholders such as policymakers, insurance companies, safety standard writers, and manufacturers of safety equipment can therefore use the findings to plan remedial actions and revise safety standards. The implementation of safety measures by agribusiness organizations can prevent occupational injuries, save lives, and reduce the occurrence and cost of such incidents in agricultural work environments.
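    As a hedged illustration of the predictive-modeling approach described above, the sketch below trains a gradient-boosting classifier on synthetic workers' compensation claims to flag likely high-cost injuries; the feature names and data are illustrative only and do not reflect the study's dataset or exact methods.

```python
# Hedged sketch: a gradient-boosting classifier that flags likely high-cost claims from
# claim attributes. The records and feature names below are synthetic placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical claim records: industry, worker age, body part injured, cause of injury.
claims = pd.DataFrame({
    "industry": ["grain handling", "meat processing", "dairy", "grain handling"] * 50,
    "age": [25, 42, 33, 57] * 50,
    "body_part": ["hand", "back", "shoulder", "knee"] * 50,
    "cause": ["caught in machine", "lifting", "fall", "struck by object"] * 50,
    "high_cost": [1, 0, 0, 1] * 50,   # whether the claim exceeded a chosen cost threshold
})

X = pd.get_dummies(claims.drop(columns="high_cost"))  # one-hot encode categorical features
y = claims["high_cost"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

model = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```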

    Tekoälyn hyödyntäminen työturvallisuusriskien arvioinnissa

    Get PDF
    Artificial intelligence (AI) can be defined as a branch of computer science that focuses on developing computer programs that act rationally with respect to the task being performed and the environment. AI covers several different technologies, the most central of which are machine learning, natural language processing, computer vision, expert systems, and robotics. Despite the high expectations placed on AI and the growing number of publications on the subject, the adoption rate of AI in organisations is still relatively low. Consequently, there is little prior research describing the adoption of AI applications or the opportunities for using AI in specific organisational contexts and functions, such as safety management. The aim of this work was to determine how occupational safety can be improved by using AI in risk assessment tasks, and to examine the adoption of AI applications in occupational safety risk assessment from the perspectives of individuals and the organisation. The study was a qualitative case study of the Neste Oyj refinery located in Porvoo. It comprised a literature review and an empirical interview study consisting of two rounds of semi-structured interviews. Following the IT-innovation adoption process presented by Hameed, Counsell et al. (2012), a model was constructed describing the adoption process of an AI application for occupational safety risk assessment at the individual and organisational levels. Based on the organisation's occupational safety development needs, the study identified 10 different AI applications suitable for occupational safety risk assessment that can support proactive and resilient safety work. AI was found to be suitable for occupational safety risk assessment tasks when the AI application effectively supports employees' work while leaving decision-making responsibility for risks with the employees. In particular, AI offers the possibility to address hazards the organisation is not yet aware of by developing real-time situational awareness of the refinery environment and broadening the view of the current state of the safety culture. AI was therefore perceived as useful for occupational safety risk assessment in the refinery environment at both the individual and organisational levels, provided that sufficient understanding of AI is ensured and that the use case and manner of use are compatible. The adoption of AI applications at the organisational level is influenced by various factors related to the AI itself, the organisation, the operating environment, and management. Based on these factors, the key challenges of AI adoption in the organisation studied were identified as the degree of digitalisation in safety management tasks, data resources, and data infrastructure. The organisation is therefore encouraged to advance digitalisation in the refinery environment and to adopt a strategic approach to data generation and management. AI adoption can begin by building understanding of AI and the data it requires, by examining in more detail the preconditions for adopting AI applications that are useful in the short term, and by ensuring well-planned and long-term change management for end users during the adoption process.