    A domain transformation approach for addressing staff scheduling problems

    Staff scheduling is a complex combinatorial optimisation problem concerning the allocation of staff to duty rosters in a wide range of industries and settings. This thesis presents a novel approach to solving staff scheduling problems, and in particular nurse scheduling, by simplifying the problem space through information granulation. The complexity of the problem stems from a large solution space and the many constraints that need to be satisfied. Published research indicates that methods based on random searches of the solution space do not produce good-quality results consistently. In this study, we avoid random searching and propose a systematic hierarchical method of granulation of the problem domain through pre-processing of constraints. The approach is general and can be applied to a wide range of staff scheduling problems. It involves a simplification of the original problem by a judicious grouping of shift types and a grouping of individual shifts into weekly sequences. The schedule is constructed systematically, ensuring its feasibility and minimising the cost of the solution in the reduced problem space of weekly sequences. Subsequently, the schedules from the reduced problem space are translated back into the original problem space, taking into account the constraints that could not be represented in the reduced space. This two-stage approach to solving the scheduling problem is referred to here as a domain-transformation approach. The thesis reports computational results on both standard benchmark problems and a specific scheduling problem from Kajang Hospital in Malaysia. The results confirm that the proposed method consistently delivers high-quality results and is computationally efficient.
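
    As a rough, hedged sketch of the two-stage idea described above (not the thesis' implementation), the snippet below searches a reduced space of hypothetical weekly shift sequences and then expands the chosen sequences back into individual daily shifts; all sequence definitions, costs and demand figures are invented for illustration.

        # Hedged sketch of a domain-transformation (two-stage) scheduler.
        # Stage 1: search over pre-processed weekly shift sequences (the reduced space).
        # Stage 2: expand the chosen sequences back into individual daily shifts.
        # All data structures here are illustrative, not taken from the thesis.
        from itertools import product

        # Hypothetical granules: each weekly sequence is a tuple of shift types
        # ('D' = day, 'N' = night, '-' = off) assumed to satisfy within-week rules.
        WEEKLY_SEQUENCES = {
            "S1": ("D", "D", "D", "D", "D", "-", "-"),
            "S2": ("N", "N", "N", "-", "-", "D", "D"),
            "S3": ("-", "-", "D", "D", "N", "N", "-"),
        }
        SEQUENCE_COST = {"S1": 1.0, "S2": 2.5, "S3": 1.8}  # assumed preference costs

        def covers(assignment, demand):
            """Check that the chosen sequences meet the daily demand for 'D' shifts."""
            for day in range(7):
                worked = sum(1 for seq in assignment if WEEKLY_SEQUENCES[seq][day] == "D")
                if worked < demand[day]:
                    return False
            return True

        def build_reduced_roster(nurses, demand):
            """Stage 1: exhaustive search in the (small) reduced space of sequences."""
            best, best_cost = None, float("inf")
            for assignment in product(WEEKLY_SEQUENCES, repeat=len(nurses)):
                if not covers(assignment, demand):
                    continue
                cost = sum(SEQUENCE_COST[s] for s in assignment)
                if cost < best_cost:
                    best, best_cost = assignment, cost
            return best

        def expand(nurses, assignment):
            """Stage 2: translate weekly sequences back into individual shifts."""
            return {n: WEEKLY_SEQUENCES[s] for n, s in zip(nurses, assignment)}

        if __name__ == "__main__":
            nurses = ["n1", "n2", "n3"]
            demand = [1, 1, 2, 2, 1, 1, 1]  # required day-shift cover per weekday
            roster = build_reduced_roster(nurses, demand)
            if roster is not None:
                print(expand(nurses, roster))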

    Uncertainty and Interpretability Studies in Soft Computing with an Application to Complex Manufacturing Systems

    In systems modelling and control theory, the benefits of applying neural networks have been extensively studied, particularly in manufacturing processes such as the prediction of mechanical properties of heat-treated steels. However, modern industrial processes usually involve large amounts of data and a range of non-linear effects and interactions that can hinder model interpretation. For example, in steel manufacturing it is vital to understand the complex mechanisms through which the heat treatment process produces the mechanical properties. This knowledge is not available via numerical models, so an experienced metallurgist estimates the model parameters needed to obtain the required properties. This human knowledge and perception can be imprecise, leading to cognitive uncertainty such as vagueness and ambiguity when making decisions. In system classification, this may translate into a system deficiency - for example, small changes in system attributes may result in a sudden and inappropriate change of class assignment. To address this issue, practitioners and researchers have developed systems that are functionally equivalent to fuzzy systems and neural networks. Such systems provide a morphology that mimics the human ability to reason via the qualitative aspects of fuzzy information rather than by its quantitative analysis. Furthermore, these models are able to learn from data sets and to describe the associated interactions and non-linearities in the data. However, like neural networks, a neural fuzzy system may suffer from a loss of interpretability and transparency when making decisions, mainly due to the application of adaptive approaches for its parameter identification. Since the RBF-NN can be treated as a fuzzy inference engine, this thesis presents several methodologies that quantify different types of uncertainty and their influence on the interpretability and transparency of the RBF-NN during its parameter identification. In particular, three kinds of uncertainty sources in relation to the RBF-NN are studied, namely entropy, fuzziness and ambiguity. First, a methodology based on Granular Computing (GrC), neutrosophic sets and the RBF-NN is presented. Its objective is to quantify the hesitation produced during granular compression at the low level of interpretability of the RBF-NN via the use of neutrosophic sets. This study also aims to enhance the distinguishability, and hence the transparency, of the initial fuzzy partition. The effectiveness of the proposed methodology is tested against a real case study for the prediction of the properties of heat-treated steels. Secondly, a new Interval Type-2 Radial Basis Function Neural Network (IT2-RBF-NN) is introduced as a new modelling framework. The IT2-RBF-NN takes advantage of the functional equivalence between type-1 FLSs and the RBF-NN to construct an Interval Type-2 Fuzzy Logic System (IT2-FLS) that is able to deal with linguistic uncertainty and perceptions in the RBF-NN rule base. This gave rise to different combinations when optimising the IT2-RBF-NN parameters. Finally, a twofold study of uncertainty assessment at the high level of interpretability of the RBF-NN is provided. On the one hand, the first study proposes a new methodology to quantify (a) the fuzziness and (b) the ambiguity at each RU and during the formation of the rule base, via the use of neutrosophic set theory. The aim of this methodology is to calculate the fuzziness associated with each rule and then the ambiguity related to each normalised consequence of the fuzzy rules, resulting from rule overlapping and from one-to-many decision choices respectively. On the other hand, a second study proposes a new methodology to quantify the entropy and the fuzziness that arise from the redundancy phenomenon during parameter identification. To conclude this work, the experimental results obtained by applying the proposed methodologies to two well-known benchmark data sets and to the prediction of mechanical properties of heat-treated steels led to the publication of three articles in two peer-reviewed journals and one international conference.
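
    As a rough illustration of the kind of per-rule uncertainty measures discussed above, the sketch below computes a De Luca and Termini-style fuzziness index for a membership vector and the Shannon entropy of normalised rule firing strengths; the exact definitions used in the thesis (e.g. the neutrosophic formulation) may differ, and all numbers are examples only.

        # Illustrative uncertainty measures for fuzzy rule bases (not the thesis' exact definitions).
        import numpy as np

        def fuzziness(mu):
            """De Luca & Termini-style fuzziness of a membership vector (0 = crisp, 1 = maximal)."""
            mu = np.clip(np.asarray(mu, dtype=float), 1e-12, 1 - 1e-12)
            h = -(mu * np.log2(mu) + (1 - mu) * np.log2(1 - mu))
            return float(h.mean())

        def firing_entropy(strengths):
            """Shannon entropy of normalised rule firing strengths; higher values mean
            more ambiguity about which rule dominates the consequence."""
            p = np.asarray(strengths, dtype=float)
            p = p / p.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        # Example: memberships of one input to the fuzzy sets of a single rule,
        # and firing strengths of four overlapping rules for the same input.
        print(fuzziness([0.9, 0.8, 0.95]))             # fairly crisp rule -> low fuzziness
        print(firing_entropy([0.7, 0.2, 0.05, 0.05]))  # one dominant rule -> low entropy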

    Characterisation of Tablets and Roller-Compacted Ribbons with Terahertz Time-Domain Pulsed Imaging

    The pharmaceutical process of dry granulation using roller compaction (DG/RC) is effectively a non-batch procedure oriented towards delivering a continuous stream of material, free of a pre-defined batch size, with reduced plant equipment and scale-up R&D resources and enhanced throughput, making it particularly suitable for moisture-sensitive formulations. The desirable attributes of DG/RC are many; yet, being a more flexible approach than, for example, wet granulation, it must be closely monitored and controlled to achieve higher throughput rates and reduced ‘static’ material testing stages. To monitor the pre-granulated ribbons of RC (which correlate strongly with the post-milled granulates) rapidly and in line with production, terahertz time-domain spectroscopy (TDS) is used to elucidate the key physical attributes of post-compression density and thickness uniformity, both central to end-product consistency. Invariably a great number of conditions apply to DG/RC (viz. system design, material characteristics, environment and unit configuration), although roll pressure and roll gap are widely regarded as the key processing parameters (PPs) [1-4]. The target of the study is to derive a strategy to position TDS as a PAT for DG/RC. Two terahertz time-domain (TD) methods, a conventional transmission setup and reflection (TPI) THz analysis, are used on glass-slide standards to verify the interpretational foundations of the TD methods. Achieving RI/thickness error discrepancies of +2.2 to -0.4% compared with literature values [150] provides the foundation to test the solid-fraction ratios of pharmaceutical tablets, with RI acting as a surrogate for SF/path length (R2 = 1). Combining transmission principles with the portion of reflected EMR removes the prerequisite for RI or path-length knowledge, giving +1.5 to +2.4% RI agreement (vs. frequency-domain results) and thus enabling thickness estimations above 95% agreement with physical micrometer measurements in all models. Augmentation of the TD methods, refined in Experimental chapter 2, then focuses chiefly on TPI as the principal THz-TD method (the most suitable tool for PAT), adopting the RI measures for ribbon uniformity analysis in Experimental chapter 4 in an off-line environment, again resulting in RI and thicknesses within 5% error of the known thickness parameters, and in the further use of RI as a proxy for porosity, equivalent to gas pycnometry. The work also elucidates the limitations encountered with tablets and RCs, and the data-interpretation and industrial considerations involved. Experimental chapter 3 diverges from RI to differentiate thickness in order to assess FD transmission for non-destructive mechanical assessment. This demonstrates a clear relationship between compaction force and the surrogate value for density, following a linear trend below a certain threshold of force. The ‘threshold’ value is observed for less massive tablets, and it is concluded that the mechanistic interplay and permanent (plastic) consolidation are greater where compaction force increases proportionally with target fill weight, reflecting the varying behaviour of MCC under stress.
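
    The RI and thickness estimates described above rest on standard time-domain relations; the sketch below (illustrative values, textbook formulas rather than the thesis' exact processing chain) shows how the transmission delay and the reflection echo spacing can be combined so that neither the refractive index nor the path length needs to be known in advance.

        # Hedged sketch of textbook THz time-domain relations for RI/thickness estimation.
        C = 299_792_458.0  # speed of light, m/s

        def refractive_index_transmission(delta_t, thickness):
            """n from the extra delay of the transmitted pulse vs. a free-space reference."""
            return 1.0 + C * delta_t / thickness

        def thickness_reflection(delta_t_echoes, n):
            """Thickness from the round-trip delay between front- and back-surface echoes."""
            return C * delta_t_echoes / (2.0 * n)

        def combined(delta_t_trans, delta_t_echoes):
            """Combine both delays so neither n nor d must be known in advance:
            n - 1 = C*dt_trans/d and d = C*dt_echo/(2n) give n = 1 / (1 - 2*dt_trans/dt_echo)."""
            r = delta_t_trans / delta_t_echoes
            n = 1.0 / (1.0 - 2.0 * r)
            return n, thickness_reflection(delta_t_echoes, n)

        # Illustrative numbers: a 1 mm tablet with refractive index n = 1.6.
        d, n = 1.0e-3, 1.6
        dt_trans = (n - 1) * d / C      # ~2 ps extra transmission delay
        dt_echo = 2 * n * d / C         # ~10.7 ps echo spacing in reflection
        print(combined(dt_trans, dt_echo))  # recovers (1.6, 0.001)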

    Angular Momentum Transport in Stellar Interiors

    Stars lose a significant amount of angular momentum between birth and death, implying that efficient processes transporting it from the core to the surface are active. Space asteroseismology has delivered the interior rotation rates of more than a thousand low- and intermediate-mass stars, revealing that: 1) single stars rotate nearly uniformly during the core hydrogen and core helium burning phases; 2) stellar cores spin up to a factor of 10 faster than the envelope during the red giant phase; 3) the angular momentum of the helium-burning core of stars is in agreement with the angular momentum of white dwarfs. Observations reveal a strong decrease of core angular momentum when stars have a convective core. Current theory of angular momentum transport fails to explain this. We propose improving the theory with a data-driven approach, whereby angular momentum prescriptions derived from multi-dimensional (magneto)hydrodynamical simulations and theoretical considerations are continuously tested against modern observations. The TESS and PLATO space missions have the potential to derive the interior rotation of large samples of stars, including high-mass and metal-poor stars in binaries and clusters. This will provide the powerful observational constraints needed to improve theory and simulations.
    Comment: Manuscript submitted to Annual Reviews of Astronomy and Astrophysics for Volume 57. This is the authors' submitted version. Revisions and the final version will only become available from https://www.annualreviews.org/journal/astr

    EXPLOITING HIGHER ORDER UNCERTAINTY IN IMAGE ANALYSIS

    Soft computing is a group of methodologies that work synergistically to provide flexible information processing capability for handling real-life ambiguous situations. Its aim is to exploit the tolerance for imprecision, uncertainty, approximate reasoning and partial truth in order to achieve tractability, robustness and low-cost solutions. Soft computing methodologies (involving fuzzy sets, neural networks, genetic algorithms and rough sets) have been successfully employed in various image processing tasks, including image segmentation, enhancement and classification, both individually and in combination with other soft computing techniques. The reason for such success is that soft computing techniques provide powerful tools to describe the uncertainty naturally embedded in images, which can be exploited in various image processing tasks. The main contribution of this thesis is to present tools for handling uncertainty by means of a rough-fuzzy framework for exploiting feature-level uncertainty. The first contribution is the definition of a general framework based on the hybridization of rough and fuzzy sets, along with a new operator called the RF-product, as an effective solution to some problems in image analysis. The second and third contributions are devoted to demonstrating the effectiveness of the proposed framework, by presenting a compression method based on vector quantization, along with its compression capabilities, and an HSV color image segmentation technique.
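
    As an illustrative sketch of the general rough-fuzzy idea (the RF-product operator itself is specific to the thesis and not reproduced here), the snippet below assigns fuzzy memberships to a hue channel and then splits the fuzzy region into a rough lower approximation and a boundary region; the thresholds and data are hypothetical.

        # Hedged sketch of a rough-fuzzy split of an image region (illustrative only).
        import numpy as np

        def fuzzy_membership(values, centre, width):
            """Triangular fuzzy membership of pixel values to a region prototype."""
            return np.clip(1.0 - np.abs(values - centre) / width, 0.0, 1.0)

        def rough_fuzzy_regions(mu, lower_th=0.8, upper_th=0.2):
            """Rough-set-style split of a fuzzy region: a lower approximation
            (certainly in the region) and a boundary (possibly in the region).
            Thresholds are illustrative, not from the thesis."""
            lower = mu >= lower_th
            boundary = (mu > upper_th) & ~lower
            return lower, boundary

        # Example use on the hue channel of an HSV image, values in [0, 1]
        # (hue wrap-around is ignored here for simplicity).
        hue = np.random.rand(64, 64)
        mu_red = fuzzy_membership(hue, centre=0.0, width=0.1)
        certain, possible = rough_fuzzy_regions(mu_red)
        print(certain.sum(), possible.sum())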

    Oral formulations for children: the microstructure of functionalized calcium carbonate as key characteristic to develop age-appropriate and compliance enhanced formulations

    The development of age-appropriate formulations for children is a challenging task. Children cannot easily swallow a conventional tablet, so alternative dosage forms that can be administered orally are required, such as buccal tablets, oral films and orally disintegrating or rapidly disintegrating tablets. Age-appropriate formulations contribute enormously to compliance, as such formulations ensure acceptable palatability. There is therefore a need for suitable excipients. Functionalized calcium carbonate (FCC) has already been investigated for different applications. It was used to develop orally disintegrating tablets (ODTs) because the tablets were characterized by high physical stability at low compressive stress. To ensure acceptable palatability, a taste-masked and mouthfeel-enhanced formulation based on FCC granules was developed and tested for its acceptance in 20 healthy volunteers. This formulation was also analyzed with a novel in vitro model to determine rate constants for liquid sorption and disintegration as well as disintegration time. As a further step, the stability of the FCC-based granules combined with two model drugs was investigated in the form of tablets for oral suspension (TOS), assessing the influence of stress conditions on content, disintegration time and hardness. To understand and describe the distribution of drug at different drug loads, moxidectin-containing mini-tablets were analyzed with synchrotron X-ray micro-tomography. Moreover, a mineral-polymer composite material (FCC-PCL) was developed and investigated for use in a geometry-constrained sustained-release formulation in the form of a tablet-in-cup (TIC) device. The results show that the FCC-based ODTs with enhanced mouthfeel and taste masking have good acceptability in vivo, and the analysis with the in vitro model showed that the ODTs do not need more liquid to disintegrate completely than is available in the human mouth. The additional excipient in the formulation did not change the characteristics of the FCC under pressure. The TOS were found to be stable under stress conditions and no chemical degradation was detected. Humidity and temperature affected disintegration time, highlighting the importance of correct storage conditions. It was possible to analyze the content distribution based on the data obtained from synchrotron X-ray micro-tomography. The composite material was successfully used in the TIC device, providing a higher drug load than a commercial product while ensuring the same sustained-release kinetics. The FCC, with the unique lamellar structure on its surface, provides a novel formulation platform based on a ready-to-use granule that ensures fast disintegration times, whether formulated in ODTs, TOS or mini-tablets. It was also possible to compact mini-tablets with different drug loads. The composite material showed plastic flow under pressure, which is attributed to the FCC particles being embedded in the PCL; even though they were exposed to shear stress, the lamellae stayed intact and resulted in stable compacts, whereas the pure polymer PCL is not compactable. It can therefore be concluded that the microstructure is the key characteristic for the development of age-appropriate and compliance-enhanced formulations.
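
    The abstract mentions rate constants for liquid sorption and disintegration determined with a novel in vitro model; as a purely illustrative sketch (the thesis' actual kinetic model is not specified here), the snippet below fits a generic first-order uptake curve to hypothetical liquid-mass-versus-time data to extract such a rate constant.

        # Illustrative only: fit a generic first-order liquid-uptake model to extract a
        # sorption rate constant k. The thesis' in vitro model may use different kinetics.
        import numpy as np
        from scipy.optimize import curve_fit

        def uptake(t, m_max, k):
            """First-order sorption: absorbed liquid mass as a function of time."""
            return m_max * (1.0 - np.exp(-k * t))

        # Hypothetical measurements: time (s) vs. absorbed liquid mass (mg).
        t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
        m = np.array([0.0, 11.0, 19.0, 30.0, 38.0, 40.0])

        (m_max, k), _ = curve_fit(uptake, t, m, p0=(40.0, 0.3))
        print(f"m_max = {m_max:.1f} mg, k = {k:.2f} 1/s")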