
    Complexity measurement in two supply chains with different competitive priorities

    Complexity measurement based on the Shannon information entropy is widely used to evaluate variety and uncertainty in supply chains. However, how to use a complexity measurement to support control actions is still an open issue. This article presents a method to calculate the relative complexity, i.e., the relationship between the current and the maximum possible complexity in a supply chain. The method relies on unexpected information requirements to mitigate uncertainty. The article studies two real-world supply chains in the footwear industry, one competing on cost and quality, the other on flexibility, dependability, and innovation. The second is twice as complex as the first, showing that competitive priorities influence the complexity of the system and that lower complexity does not ensure competitiveness.
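
As a minimal sketch of the underlying measure (assuming a discrete set of system states with observed probabilities; the paper's actual data and procedure are not reproduced here), the relative complexity can be computed as the ratio between the current Shannon entropy and the maximum entropy attainable with the same number of states:

```python
import math

def relative_complexity(probabilities):
    """Ratio of the current Shannon entropy to the maximum possible
    entropy (uniform distribution over the same number of states)."""
    h = -sum(p * math.log2(p) for p in probabilities if p > 0)
    h_max = math.log2(len(probabilities))  # entropy of the uniform case
    return h / h_max if h_max > 0 else 0.0

# A skewed state distribution sits well below its own maximum complexity;
# a uniform one reaches exactly 1.0.
print(relative_complexity([0.7, 0.1, 0.1, 0.1]))
print(relative_complexity([0.25, 0.25, 0.25, 0.25]))  # 1.0
```

A value near 1 indicates the system is close to its maximum possible complexity, which is the kind of signal the paper proposes to use for control actions.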

    Homogenization of magnitudes of the ISC Bulletin

    We implemented an automatic procedure to download the hypocentral data of the online Bulletin of the International Seismological Centre (ISC) in order to produce, in near real time, a homogeneous catalogue of the global and Euro-Mediterranean instrumental seismicity to be used for forecasting experiments and other statistical analyses. For the interval covered by the reviewed ISC Bulletin, we adopt the ISC locations and convert the surface-wave magnitude (Ms) and short-period body-wave magnitude (mb) as computed by the ISC to moment magnitude (Mw), using empirical relations. We merge the proxies so obtained with real Mw values provided by global and Euro-Mediterranean moment tensor catalogues. For the most recent time interval (about 2 yr), for which the reviewed ISC Bulletin is not available, we do the same but using the preferred (prime) location provided by the ISC Bulletin and converting to Mw the Ms and mb provided by some authoritative agencies. For computing magnitude conversion equations, we use curvilinear relations defined in a previous work and the chi-square regression method, which accounts for the uncertainties of both the x and y variables.
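
A common way to account for uncertainties on both variables, as the chi-square regression mentioned above does, is the effective-variance iteration. The sketch below fits a straight line to hypothetical mb-to-Mw pairs; the paper itself uses curvilinear relations, and all data values here are illustrative:

```python
import numpy as np

def chi2_line_fit(x, y, sx, sy, n_iter=20):
    """Fit y = a + b*x minimizing chi-square with errors on both axes
    (effective-variance method: var_eff = sy^2 + b^2 * sx^2)."""
    a, b = 0.0, 1.0  # initial guesses
    for _ in range(n_iter):
        w = 1.0 / (sy**2 + (b * sx)**2)          # effective weights
        xm = np.sum(w * x) / np.sum(w)
        ym = np.sum(w * y) / np.sum(w)
        b = np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm)**2)
        a = ym - b * xm
    return a, b

# Hypothetical mb/Mw pairs with typical magnitude-unit uncertainties
mb = np.array([4.2, 4.6, 5.0, 5.4, 5.8])
mw = np.array([4.4, 4.9, 5.3, 5.8, 6.2])
a, b = chi2_line_fit(mb, mw, sx=np.full(5, 0.15), sy=np.full(5, 0.1))
```

With uniform uncertainties the weights are constant and the result coincides with ordinary least squares; the iteration matters when the per-point errors differ.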

    Multispectral pansharpening with radiative transfer-based detail-injection modeling for preserving changes in vegetation cover

    Whenever vegetated areas are monitored over time, phenological changes in land cover should be decoupled from changes in acquisition conditions, such as atmospheric components, Sun and satellite elevations, and the imaging instrument. This especially holds when the multispectral (MS) bands are sharpened for spatial resolution enhancement by means of a panchromatic (Pan) image of higher resolution, a process referred to as pansharpening. In this paper, we provide evidence that pansharpening of visible/near-infrared (VNIR) bands benefits from a correction, during the fusion process, of the path radiance term introduced by the atmosphere. This holds whenever the fusion mechanism emulates the radiative transfer model ruling the acquisition of the Earth's surface from space, that is, for methods exploiting a multiplicative, or contrast-based, injection model of spatial details extracted from the Pan image into the interpolated MS bands. The path radiance should be estimated and subtracted from each band before the product by Pan is computed. Both empirical and model-based estimation techniques of MS path radiances are compared within the framework of optimized algorithms. Simulations carried out on two GeoEye-1 observations of the same agricultural landscape on different dates highlight that the de-hazing of MS data before fusion is beneficial to an accurate detection of seasonal changes in the scene, as measured by the normalized difference vegetation index (NDVI).
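
The multiplicative injection model with path-radiance correction can be sketched as below; `ms` is assumed to be already interpolated to the Pan grid, `pan_low` is a low-pass version of Pan at the MS resolution, and `path_radiance` is a per-band estimate (all names are illustrative):

```python
import numpy as np

def contrast_based_pansharpen(ms, pan, pan_low, path_radiance):
    """Contrast-based detail injection: subtract the path radiance before
    modulating each band by the ratio Pan / lowpass(Pan), then add it back,
    so the multiplicative model acts on surface-reflected radiance only."""
    ratio = pan / np.maximum(pan_low, 1e-6)  # guard against division by zero
    return (ms - path_radiance) * ratio + path_radiance
```

When Pan carries no extra detail (`pan == pan_low`) the band is returned unchanged; where Pan exceeds its low-pass version, only the de-hazed part of the signal is amplified, which is what preserves vegetation contrast.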

    Retrospective short-term forecasting experiment in Italy based on the occurrence of strong (fore) shocks

    In a recent work, we computed the relative frequencies with which strong shocks (4.0 ≤ Mw < 5.0), widely felt by the population, were followed in the same area by potentially destructive main shocks (Mw ≥ 5.0) in Italy. Assuming the stationarity of the seismic release properties, such frequencies can be tentatively used to estimate the probabilities of potentially destructive shocks after the occurrence of future strong shocks. This allows us to set up an alarm-based forecasting hypothesis related to strong foreshock occurrence. This hypothesis is tested retrospectively on the data of a homogenized seismic catalogue of the Italian area against a purely random hypothesis that simply forecasts the target main shocks proportionally to the space–time fraction occupied by the alarms. We compute the latter fraction in two ways: (i) as the ratio between the average time covered by the alarms in each area and the total duration of the forecasting experiment (60 yr), and (ii) as the same ratio but weighted by the past frequency of occurrence of earthquakes in each area. In both cases, the overall retrospective performance of our forecasting algorithm is definitely better than the random case. Considering an alarm duration of three months, the algorithm retrospectively forecasts more than 70 per cent of all shocks with Mw ≥ 5.5 that occurred in Italy from 1960 to 2019, with a total space–time fraction covered by the alarms of the order of 2 per cent. Considering the same space–time coverage, the algorithm is also able to retrospectively forecast more than 40 per cent of the first main shocks with Mw ≥ 5.5 of the seismic sequences that occurred in the same time interval. Given the good reliability of our results, the forecasting algorithm is set and ready to be tested prospectively as well, in parallel with other ongoing procedures operating on the Italian territory.
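
The two ways of computing the space–time fraction can be sketched as a small helper; per-area alarm durations and weights below are hypothetical, for illustration only:

```python
def space_time_fraction(alarm_days_per_area, experiment_days, weights=None):
    """Fraction of the space-time volume occupied by alarms.
    (i)  weights=None: plain average over areas of alarm time / total time.
    (ii) weights given: the same ratios weighted by each area's past
         frequency of earthquake occurrence."""
    fractions = [d / experiment_days for d in alarm_days_per_area]
    if weights is None:
        return sum(fractions) / len(fractions)
    return sum(f * w for f, w in zip(fractions, weights)) / sum(weights)

# Two areas, one with a single year of alarms over a 10-year experiment
print(space_time_fraction([365.0, 0.0], 3650.0))                 # (i)
print(space_time_fraction([365.0, 0.0], 3650.0, weights=[3, 1])) # (ii)
```

Weighting by past seismicity concentrates the budget on historically active areas, which is why variant (ii) gives a different (here larger) fraction for the same alarms.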

    A human-machine learning curve for stochastic assembly line balancing problems

    The Assembly Line Balancing Problem (ALBP) is one of the most explored research topics in manufacturing. However, only a few contributions have investigated the effect of the combined abilities of humans and machines in reaching a balancing solution. It is well recognized that human beings learn to perform assembly tasks over time, with the effect of reducing unit task times. This implies a need to re-balance assembly lines periodically, in accordance with the increased level of human experience. However, given an assembly task that is partially performed by automatic equipment, it can be argued that some subtasks are not subject to learning effects. Breaking up assembly tasks into human and automatic subtasks represents the first step towards more sophisticated approaches to the ALBP. In this paper, a learning curve is introduced that captures this disaggregation, and it is then applied to a stochastic ALBP. Finally, a numerical example shows how this learning curve affects balancing solutions.
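
A learning curve of this kind can be sketched as a Wright-type power law applied only to the human portion of the task, with the automatic portion held constant. This is a hypothetical formulation for illustration, not the paper's exact model:

```python
def task_time(n, t_auto, t_human_first, b):
    """Unit time for the n-th repetition of a mixed task: the automatic
    subtask time t_auto is constant (no learning), while the human subtask
    decays from t_human_first with power-law learning exponent b >= 0."""
    return t_auto + t_human_first * n ** (-b)

# First unit: full human time; with repetition, time approaches t_auto
print(task_time(1, t_auto=10.0, t_human_first=20.0, b=0.3))   # 30.0
print(task_time(100, t_auto=10.0, t_human_first=20.0, b=0.3))
```

Because `task_time` converges to `t_auto` rather than to zero, cycle-time estimates for re-balancing stay bounded below by the machine-paced work content.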

    Rain evaporation rate estimates from dual-wavelength lidar measurements and intercomparison against a model analytical solution

    Rain evaporation, while significantly contributing to cloud moisture and heat budgets, is still a poorly understood process with few measurements presently available. Multiwavelength lidars, widely employed in aerosol and cloud studies, can also provide useful information on the microphysical characteristics of light precipitation, for example, drizzle and virga. In this paper, lidar measurements of the median volume raindrop diameter and rain evaporation rate profiles are compared with a model analytical solution. The intercomparison reveals good agreement between the model and observations, with a correlation between the profiles of up to 65% and a root-mean-square error of up to 22% with a 5% bias. Larger discrepancies are attributed to radiosonde soundings sampling different air masses and to model assumptions that no longer hold along the profile, such as a non-steady atmosphere and/or the onset of collision–coalescence processes. Nevertheless, this study provides valuable information to better characterize rain evaporation processes.
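
The agreement metrics quoted above (correlation, relative RMSE, relative bias) can be computed for a pair of profiles as in this sketch; the function and profile names are illustrative:

```python
import numpy as np

def profile_agreement(model, obs):
    """Correlation, RMSE (relative to the mean observation), and relative
    bias between a modelled and an observed profile, e.g. rain evaporation
    rate as a function of height."""
    r = np.corrcoef(model, obs)[0, 1]
    rmse = np.sqrt(np.mean((model - obs) ** 2)) / np.mean(obs)
    bias = (np.mean(model) - np.mean(obs)) / np.mean(obs)
    return r, rmse, bias
```

Reporting RMSE and bias relative to the mean observation is what allows quoting them as percentages, as done in the abstract.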

    Machine learning for multi-criteria inventory classification applied to intermittent demand

    Multi-criteria inventory classification groups inventory items into classes, each of which is managed by a specific re-order policy according to its priority. However, the tasks of inventory classification and control are not carried out jointly if the classification criteria and the classification approach are not robustly established from an inventory-cost perspective. Exhaustive simulations at the single-item level of the inventory system would directly solve this issue by searching for the best re-order policy per item, thus achieving the subsequent optimal classification without resorting to any multi-criteria classification method. However, this would be very time-consuming in real settings, where a large number of items need to be managed simultaneously. In this article, a reduction in simulation effort is achieved by extracting from the population of items a sample on which to perform an exhaustive search of the best re-order policies per item; the lowest-cost classification of in-sample items is thereby achieved. Then, in line with the increasing need for ICT tools in the production management of Industry 4.0 systems, supervised classifiers from the machine learning research field (i.e. support vector machines with a Gaussian kernel and deep neural networks) are trained on these in-sample items to learn to classify the out-of-sample items solely based on the values they show on the features (i.e. classification criteria). The inventory system adopted here is suitable for intermittent demands, but it may also suit non-intermittent demands, thus providing great flexibility. The experimental analysis of two large datasets showed excellent accuracy, which suggests that machine learning classifiers could be implemented in advanced inventory classification systems.
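
The supervised step can be sketched with scikit-learn's Gaussian-kernel SVM. The features and labels below are synthetic stand-ins; in the article, the labels would come from the exhaustive in-sample simulation of re-order policies:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical classification criteria per item, e.g. unit cost,
# demand rate, lead time (standardized synthetic values)
X_in = rng.normal(size=(200, 3))
# Stand-in "best re-order policy" labels (0 or 1) for the in-sample items
y_in = (X_in[:, 0] + X_in[:, 1] > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X_in, y_in)                       # train on in-sample items

X_out = rng.normal(size=(10, 3))          # out-of-sample items
policies = clf.predict(X_out)             # predicted policy class per item
```

The scaler matters here: the Gaussian kernel is distance-based, so unscaled criteria with different units would dominate the similarity measure.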

    Renewable energy in eco-industrial parks and urban-industrial symbiosis: A literature review and a conceptual synthesis

    Replacing fossil fuels with renewable energy sources is considered an effective means to reduce carbon emissions at the industrial level, and it is often supported by local authorities. However, individual firms still encounter technical and financial barriers that hinder the installation of renewables. The eco-industrial park approach aims to create synergies among firms, thereby enabling them to share and efficiently use natural and economic resources. It also provides a suitable model to encourage the use of renewable energy sources in the industry sector. Synergies among eco-industrial parks and the adjacent urban areas can lead to the development of optimized energy production plants, so that the excess energy is available to cover some of the energy demands of nearby towns. This study thus provides an overview of the scientific literature on energy synergies within eco-industrial parks, which facilitate the uptake of renewable energy sources at the industrial level, potentially creating urban-industrial energy symbiosis. The literature analysis was conducted by arranging the energy-related content into thematic categories aimed at exploring energy symbiosis options within eco-industrial parks. It focuses on urban-industrial energy symbiosis solutions in terms of design and optimization models, technologies used, and organizational strategies. The study highlights four main pathways to implement energy synergies and demonstrates viable solutions to improve the uptake of renewable energy sources at the industrial level. A number of research gaps are also identified, revealing that energy symbiosis networks between industrial and urban areas that integrate renewable energy systems are under-investigated.

    High-intensity endurance capacity assessment as a tool for talent identification in elite youth female soccer.

    Talent identification and development programmes have received broad attention in recent decades, yet evidence regarding the predictive utility of physical performance in female soccer players is limited. Using a retrospective design, we appraised the predictive value of performance-related measures in a sample of 228 youth female soccer players previously involved in residential Elite Performance Camps (age range: 12.7–15.3 years). With 10-m sprinting, 30-m sprinting, counter-movement jump height, and Yo-Yo Intermittent Recovery Test Level 1 (IR1) distance as primary predictor variables, the Akaike Information Criterion (AIC) assessed the relative quality of four penalised logistic regression models for predicting future selection to competitive international squads at the U17–U20 level. The model including Yo-Yo IR1 was the best for predicting career outcome. Predicted probabilities of future selection to the international squad increased with higher Yo-Yo IR1 distances, from 4.5% (95% confidence interval, 0.8 to 8.2%) for a distance lower than 440 m to 64.7% (95% confidence interval, 47.3 to 82.1%) for a score of 2040 m. The present study highlights the predictive utility of high-intensity endurance capacity for informing career progression in elite youth female soccer and provides reference values for staff involved in the talent development of elite youth female soccer players.
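
For illustration only, a two-point logistic curve can be passed through the probabilities reported above (4.5% at 440 m, 64.7% at 2040 m). This is not the paper's penalised multi-predictor model, just an interpolation of its reported endpoints:

```python
import math

def logit(p):
    """Log-odds transform, the inverse of the logistic function."""
    return math.log(p / (1 - p))

# Solve intercept a and slope b so the logistic curve reproduces the
# two probabilities reported in the abstract for Yo-Yo IR1 distance.
b = (logit(0.647) - logit(0.045)) / (2040 - 440)
a = logit(0.045) - b * 440

def p_selection(distance_m):
    """Interpolated probability of future international-squad selection
    as a function of Yo-Yo IR1 distance (metres)."""
    return 1 / (1 + math.exp(-(a + b * distance_m)))

print(round(p_selection(440), 3))   # 0.045 by construction
print(round(p_selection(2040), 3))  # 0.647 by construction
```

By construction the curve passes through the two reported points and rises monotonically in between, matching the qualitative finding that selection probability increases with endurance capacity.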