Towards Dynamic Criticality-Based Maintenance Strategy for Industrial Assets
An asset’s risk is a useful indicator for determining the optimal time of repair or replacement so as to minimise the operational cost of maintenance. For a successful asset management practice, asset-intensive organisations must understand the risk profile associated with their asset portfolio and how it will change over time. Unfortunately, in many risk-based asset management approaches, the only quantity in the risk profile assumed to change is the likelihood (or probability) of failure. The criticality (or consequence of failure) of an asset is treated as a more or less static quantity that is not updated with sufficient frequency as the operating environment changes. This paper proposes a dynamic criticality-based maintenance approach in which asset criticality is modelled as a dynamic quantity, and changes in an asset’s criticality are used to optimise maintenance plans (e.g. determining the optimal repair time or replacement age for an asset over its life cycle) for better risk management and cost savings. An illustrative example demonstrates the effect of implementing dynamic criticality in determining the optimal time of repair for a bridge infrastructure. It is shown that capturing changes in the criticality of the bridge over time, and using this understanding in the risk analysis of the bridge, provides the opportunity for better maintenance planning, resulting in a reduction of the total risk.
Requirements for an Intelligent Maintenance System for Industry 4.0
Service Oriented, Holonic and Multi-agent Manufacturing Systems for Industry of the Future. Recent advances in the development of technological devices
and software for Industry 4.0 have pushed a change in the maintenance
management systems and processes. Nowadays, to keep a
company competitive, a computerised management system is required
to help in its maintenance tasks. This paper presents an analysis of the
complexities and requirements for maintenance of Industry 4.0. It focuses
on intelligent systems that can help to improve the intelligent management of maintenance. Finally, it presents a summary of lessons learned
specified as guidelines for the design of such intelligent systems that can
be applied horizontally to any company in the industry. This work is supported by the FEDER/Ministry of Science, Innovation and Universities - State Research Agency, RTC-2017-6401-7. Garcia, E.; Araujo, A.; Palanca Cámara, J.; Giret Boggino, A.S.; Julian Inglada, V.J.; Botti, V. (2019). Requirements for an Intelligent Maintenance System for Industry 4.0. Springer, 340-351. https://doi.org/10.1007/978-3-030-27477-1_26
Strategic maintenance technique selection using combined quality function deployment, the analytic hierarchy process and the benefit of doubt approach
The business performance of manufacturing organizations depends on the reliability and productivity of equipment, machinery, and the manufacturing system as a whole. Therefore, the main role of maintenance and production managers is to keep the manufacturing system running by adopting the most appropriate maintenance methods. There are alternative maintenance techniques for each machine, and the selection among them depends on multiple factors. Contemporary approaches to maintenance technique selection emphasize operational needs and economic factors only. As the reliability of production systems is a strategic intent of manufacturing organizations, maintenance technique selection must consider the strategic factors of the organization concerned along with operational and economic criteria. The main aim of this research is to develop a method for selecting the most appropriate maintenance technique for the manufacturing industry that considers strategic, planning, and operational criteria through the involvement of relevant stakeholders. The proposed method combines quality function deployment (QFD), the analytic hierarchy process (AHP), and the benefit of doubt (BoD) approach. QFD links the strategic intents of the organization with planning and operational needs; the AHP helps to prioritize the selection criteria and rank the alternative maintenance techniques; and the BoD approach facilitates analysing the robustness of the method through sensitivity analysis, setting realistic limits for decision making. The proposed method has been applied to maintenance technique selection problems in three productive systems of a gear manufacturing organization in India to demonstrate its effectiveness.
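As an illustration of the AHP step alone (the QFD and BoD stages are omitted), the sketch below derives priority weights from a hypothetical 3x3 pairwise comparison matrix of three criteria groups via the principal eigenvector, together with Saaty's consistency ratio; the matrix entries are invented for the example, not taken from the paper.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty's 1-9 scale) for three
# criteria groups: strategic, planning, operational. Entries are made up.
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   2.0],
              [1/5.0, 1/2.0, 1.0]])

# Priority weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio: CR < 0.1 is conventionally acceptable (RI = 0.58 for n = 3).
n = A.shape[0]
consistency_index = (eigvals.real[k] - n) / (n - 1)
consistency_ratio = consistency_index / 0.58
```

The same eigenvector computation ranks the alternative maintenance techniques once a comparison matrix per criterion is elicited from the stakeholders.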
Influence of IL28B Polymorphisms on Response to a Lower-Than-Standard Dose peg-IFN-α 2a for Genotype 3 Chronic Hepatitis C in HIV-Coinfected Patients
Background: Data on which to base definitive recommendations on the doses and duration of therapy for genotype 3 HCV/HIV-coinfected patients are scarce. We evaluated the efficacy of a lower peginterferon-α 2a dose and a shorter duration of therapy than the current standard of care in genotype 3 HCV/HIV-coinfected patients. Methods and Findings: Pilot, open-label, single-arm clinical trial involving 58 Caucasian HCV/HIV-coinfected patients who received weekly 135 μg peginterferon-α 2a plus ribavirin 400 mg twice daily for 20 weeks after attaining undetectable viremia. The relationships between baseline patient-related variables, including IL28B genotype, plasma HCV-RNA, ribavirin dose/kg, peginterferon-α 2a and ribavirin levels, and virological responses were analyzed. Only 4 patients showed lack of response, and 5 patients dropped out due to adverse events related to the study medication. Overall, sustained virologic response (SVR) rates were 58.3% by intention-to-treat and 71.4% by per-protocol analysis. Among patients with rapid virologic response (RVR), SVR and relapse rates were 92.6% and 7.4%, respectively. No relationships were observed between viral responses and ribavirin dose/kg, peginterferon-α 2a concentrations, ribavirin levels or rs12979860 genotype. Conclusions: Weekly 135 μg pegIFN-α 2a could be as effective as the standard 180 μg dose, with a very low incidence of severe adverse events. A 24-week treatment duration appears to be appropriate in patients achieving RVR, but extending treatment to just 20 weeks beyond negativization of viremia is associated with a high relapse rate in those patients not achieving RVR. There was no influence of IL28B genotype on the virological responses. © 2012 López-Cortés et al. Funding provided by Fundación Pública Andaluza para la gestión de la Investigación en Salud de Sevilla, Hospitales Universitarios Virgen del Rocío, Seville, Spain. The enzyme-linked immunosorbent assay Hu-INF-α kits for determination of pegIFN-α-2a were financed by Roche Pharma, S.A. (Spain).
Life beyond 30: Probing the −20 < M_UV < −17 Luminosity Function at 8 < z < 13 with the NIRCam Parallel Field of the MIRI Deep Survey
We present the ultraviolet luminosity function and an estimate of the cosmic star formation rate density at 8 < z < 13. We select z > 8 galaxy candidates based on their dropout nature in the F115W and/or F150W filters, a high probability for their photometric redshifts (estimated with three different codes) being at z > 8, good fits based on χ² calculations, and predominant solutions compared to z < 8 alternatives. We find mild evolution in the luminosity function from z ∼ 13 to z ∼ 8, i.e., only a small increase in the average number density of ∼0.2 dex, while the faint-end slope and the absolute magnitude of the knee remain approximately constant, with values α = −2.2 ± 0.1 and M* = −20.8 ± 0.2 mag. Comparing our results with the predictions of state-of-the-art galaxy evolution models, we find two main results: (1) a slower increase with time in the cosmic star formation rate density compared to the steeper rise predicted by models; (2) nearly a factor of 10 higher star formation activity concentrated on scales around 2 kpc in galaxies with stellar masses ∼10^8 M_⊙ during the first 350 Myr of the universe (z ∼ 12), with models matching the luminosity density observational estimates better ∼150 Myr later, by z ∼ 9.
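The quoted knee and faint-end slope can be plugged into the standard Schechter parameterization of the luminosity function in magnitude form; in the sketch below the normalization φ* is an arbitrary placeholder, not a value from the paper.

```python
import math

# Schechter luminosity function in absolute-magnitude form:
#   phi(M) = 0.4 ln(10) * phi_star * x**(alpha + 1) * exp(-x),
#   with x = 10**(-0.4 * (M - M_star)).
# alpha and M_star are the values quoted in the abstract; phi_star is a placeholder.

def schechter_mag(M, phi_star=1e-4, M_star=-20.8, alpha=-2.2):
    """Number density per magnitude (arbitrary normalization) at UV magnitude M."""
    x = 10.0 ** (-0.4 * (M - M_star))
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1.0) * math.exp(-x)

# With alpha < -1, counts keep rising toward the faint end of the probed range,
# while the exponential cuts off the bright end beyond the knee M*.
faint = schechter_mag(-17.0)
bright = schechter_mag(-19.0)
```

This makes explicit why the −20 < M_UV < −17 range in the title is dominated by the faint-end power law rather than the exponential cutoff.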
Amphioxus functional genomics and the origins of vertebrate gene regulation.
Vertebrates have greatly elaborated the basic chordate body plan and evolved highly distinctive genomes that have been sculpted by two whole-genome duplications. Here we sequence the genome of the Mediterranean amphioxus (Branchiostoma lanceolatum) and characterize DNA methylation, chromatin accessibility, histone modifications and transcriptomes across multiple developmental stages and adult tissues to investigate the evolution of the regulation of the chordate genome. Comparisons with vertebrates identify an intermediate stage in the evolution of differentially methylated enhancers, and a high conservation of gene expression and its cis-regulatory logic between amphioxus and vertebrates that occurs maximally at an earlier mid-embryonic phylotypic period. We analyse regulatory evolution after whole-genome duplications, and find that, in vertebrates, over 80% of broadly expressed gene families with multiple paralogues derived from whole-genome duplications have members that restricted their ancestral expression, and underwent specialization rather than subfunctionalization. Counter-intuitively, paralogues that restricted their expression increased the complexity of their regulatory landscapes. These data pave the way for a better understanding of the regulatory principles that underlie key vertebrate innovations.
Multiwavelength study of the galactic PeVatron candidate LHAASO J2108+5157
Context. Several new ultrahigh-energy (UHE) γ-ray sources have recently been discovered by the Large High Altitude Air Shower Observatory (LHAASO) collaboration. These represent a step forward in the search for the so-called Galactic PeVatrons, the enigmatic sources of the Galactic cosmic rays up to PeV energies. However, it has been shown that multi-TeV γ-ray emission does not necessarily prove the existence of a hadronic accelerator in the source; indeed this emission could also be explained as inverse Compton scattering from electrons in a radiation-dominated environment. A clear distinction between the two major emission mechanisms would only be made possible by taking into account multi-wavelength data and the detailed morphology of the source. Aims. We aim to understand the nature of the unidentified source LHAASO J2108+5157, which is one of the few known UHE sources with no very high-energy (VHE) counterpart. Methods. We observed LHAASO J2108+5157 in the X-ray band with XMM-Newton in 2021 for a total of 3.8 hours, and at TeV energies with the Large-Sized Telescope prototype (LST-1), yielding 49 hours of good-quality data. In addition, we analyzed 12 years of Fermi-LAT data to better constrain the emission of its high-energy (HE) counterpart 4FGL J2108.0+5155. We used the naima and jetset software packages to examine the leptonic and hadronic scenarios of the multi-wavelength emission of the source. Results. We found an excess (3.7σ) in the LST-1 data at energies E > 3 TeV. Further analysis of the whole LST-1 energy range, assuming a point-like source, resulted in a hint (2.2σ) of hard emission, which can be described with a single power law with a photon index of Γ = 1.6 ± 0.2 in the 0.3-100 TeV range. We did not find any significant extended emission that could be related to a supernova remnant (SNR) or pulsar wind nebula (PWN) in the XMM-Newton data, which puts strong constraints on possible synchrotron emission of relativistic electrons.
We revealed a new potential hard source in the Fermi-LAT data with a significance of 4σ and a photon index of Γ = 1.9 ± 0.2, which is not spatially correlated with LHAASO J2108+5157, but including it in the source model improved the spectral representation of the HE counterpart 4FGL J2108.0+5155. Conclusions. The LST-1 and LHAASO observations can be explained as inverse Compton-dominated leptonic emission of relativistic electrons with a cutoff energy of 100 (+70, −30) TeV. The low magnetic field in the source imposed by the X-ray upper limits on synchrotron emission is compatible with the hypothesis of a PWN or a TeV halo. Furthermore, the spectral properties of the HE counterpart are consistent with a Geminga-like pulsar, which would be able to power the VHE-UHE emission. Nevertheless, the lack of a pulsar in the neighborhood of the UHE source is a challenge to the PWN/TeV-halo scenario. The UHE γ rays can also be explained as π0 decay-dominated hadronic emission due to the interaction of relativistic protons with one of the two known molecular clouds in the direction of the source. Indeed, the hard spectrum in the LST-1 band is compatible with protons escaping a shock around a middle-aged SNR because of their high low-energy cut-off, but the origin of the HE γ-ray emission remains an open question.
Observations of the Crab Nebula and Pulsar with the Large-sized Telescope Prototype of the Cherenkov Telescope Array
The Cherenkov Telescope Array (CTA) is a next-generation ground-based observatory for gamma-ray astronomy at very high energies. The Large-Sized Telescope prototype (LST-1) is located at the CTA-North site, on the Canary Island of La Palma. LSTs are designed to provide optimal performance in the lowest part of the energy range covered by CTA, down to ≃20 GeV. LST-1 started performing astronomical observations in 2019 November, during its commissioning phase, and it has been taking data ever since. We present the first LST-1 observations of the Crab Nebula, the standard candle of very-high-energy gamma-ray astronomy, and use them, together with simulations, to assess the performance of the telescope. LST-1 has reached the expected performance during its commissioning period; only a minor adjustment of the preexisting simulations was needed to match the telescope’s behavior. The energy threshold at trigger level is around 20 GeV, rising to ≃30 GeV after data analysis. Performance parameters depend strongly on energy, and on the strength of the gamma-ray selection cuts in the analysis: angular resolution ranges from 0.°12 to 0.°40, and energy resolution from 15% to 50%. Flux sensitivity is around 1.1% of the Crab Nebula flux above 250 GeV for a 50 hr observation (12% for 30 minutes). The spectral energy distribution (in the 0.03-30 TeV range) and the light curve obtained for the Crab Nebula agree with previous measurements, considering statistical and systematic uncertainties. A clear periodic signal is also detected from the pulsar at the center of the Nebula.
Observations of the Crab Nebula and Pulsar with the Large-Sized Telescope Prototype of the Cherenkov Telescope Array
CTA (Cherenkov Telescope Array) is the next-generation ground-based observatory for gamma-ray astronomy at very high energies. The Large-Sized Telescope prototype (LST-1) is located at the Northern site of CTA, on the Canary Island of La Palma. LSTs are designed to provide optimal performance in the lowest part of the energy range covered by CTA, down to ≃20 GeV. LST-1 started performing astronomical observations in November 2019, during its commissioning phase, and it has been taking data since then. We present the first LST-1 observations of the Crab Nebula, the standard candle of very-high-energy gamma-ray astronomy, and use them, together with simulations, to assess the basic performance parameters of the telescope. The data sample consists of around 36 hours of observations at low zenith angles collected between November 2020 and March 2022. LST-1 has reached the expected performance during its commissioning period; only a minor adjustment of the preexisting simulations was needed to match the telescope behavior. The energy threshold at trigger level is estimated to be around 20 GeV, rising to ≃30 GeV after data analysis. Performance parameters depend strongly on energy, and on the strength of the gamma-ray selection cuts in the analysis: angular resolution ranges from 0.12 to 0.40 degrees, and energy resolution from 15 to 50%. Flux sensitivity is around 1.1% of the Crab Nebula flux above 250 GeV for a 50-h observation (12% for 30 minutes). The spectral energy distribution (in the 0.03-30 TeV range) and the light curve obtained for the Crab Nebula agree with previous measurements, considering statistical and systematic uncertainties. A clear periodic signal is also detected from the pulsar at the center of the Nebula. Comment: Submitted to ApJ
Star tracking for pointing determination of Imaging Atmospheric Cherenkov Telescopes: Application to the Large-Sized Telescope of the Cherenkov Telescope Array
We present a novel approach to the determination of the pointing of Imaging Atmospheric Cherenkov Telescopes (IACTs) using the trajectories of the stars in their camera's field of view. The method starts with the reconstruction of the star positions from the Cherenkov camera data, taking into account the point spread function of the telescope, to achieve a satisfying reconstruction accuracy of the pointing position. A simultaneous fit of all reconstructed star trajectories is then performed with the orthogonal distance regression (ODR) method. ODR allows us to correctly include the star position uncertainties and to use time as an independent variable, which makes the fit better suited to a variety of star trajectories. This method can be applied to any IACT and requires no specific hardware, interface, or special data-taking mode. In this paper, we use Large-Sized Telescope (LST) data to validate it as a useful tool to improve the determination of the pointing direction during regular data taking. The simulation studies show that the accuracy and precision of the method are comparable with the design requirements on the pointing accuracy of the LST (≤14''). With the typical LST event acquisition rate of 10 kHz, the method can achieve up to a 50 Hz pointing monitoring rate, compared to the O(1) Hz achievable with standard techniques. The application of the method to the LST prototype (LST-1) commissioning data shows the stable pointing performance of the telescope.
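The fitting step can be illustrated with SciPy's ODR implementation on a toy, single-star trajectory; the numbers below (drift rate, uncertainties, jitter) are invented for the example, and the real method fits many star trajectories simultaneously rather than one.

```python
import numpy as np
from scipy import odr

# Toy version of the trajectory fit: one star drifting linearly across the
# camera plane, with time as the independent variable and uncertainties on
# both the time stamps and the reconstructed positions. All values are made up.

def linear_drift(beta, t):
    """Star position as a function of time: x(t) = x0 + v * t."""
    x0, v = beta
    return x0 + v * t

t = np.linspace(0.0, 100.0, 21)                  # seconds
true_x0, true_v = 1.5, 0.02                      # deg, deg/s
x = true_x0 + true_v * t + 0.001 * np.sin(t)     # small deterministic jitter

# Unlike ordinary least squares, ODR accounts for errors on both axes.
data = odr.RealData(t, x, sx=0.01, sy=0.002)
fit = odr.ODR(data, odr.Model(linear_drift), beta0=[1.0, 0.0]).run()
x0_hat, v_hat = fit.beta
```

Using time as the independent variable, as in the paper, means the same model applies whether the star crosses the field quickly or slowly.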