
    Scheduling of unit-length jobs with bipartite incompatibility graphs on four uniform machines

    In this paper we consider the problem of scheduling $n$ identical jobs on 4 uniform machines with speeds $s_1 \geq s_2 \geq s_3 \geq s_4$, respectively. Our aim is to find a schedule of minimum possible length. We assume that jobs are subject to mutual exclusion constraints modeled by a bipartite incompatibility graph of degree $\Delta$, where two incompatible jobs cannot be processed on the same machine. We show that the problem is NP-hard even if $s_1 = s_2 = s_3$. If, however, $\Delta \leq 4$, $s_1 \geq 12 s_2$, and $s_2 = s_3 = s_4$, then the problem can be solved to optimality in $O(n^{1.5})$ time. The same algorithm returns a solution of value at most 2 times the optimum provided that $s_1 \geq 2 s_2$. Finally, we study the case $s_1 \geq s_2 \geq s_3 = s_4$ and give an $O(n^{1.5})$-time $32/15$-approximation algorithm for all such instances.
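    As a concrete illustration of the model (a minimal sketch of ours, not the paper's algorithm; all names are illustrative), the following Python code checks a job-to-machine assignment against an incompatibility graph and evaluates the schedule length: with unit-length jobs, a machine of speed s that receives k jobs finishes at time k/s.

    from collections import Counter

    def makespan(assignment, speeds, incompatible):
        """assignment: dict job -> machine; incompatible: iterable of job pairs."""
        for u, v in incompatible:
            if assignment[u] == assignment[v]:
                raise ValueError(f"incompatible jobs {u}, {v} share machine {assignment[u]}")
        loads = Counter(assignment.values())        # number of unit jobs per machine
        return max(loads[m] / speeds[m] for m in loads)

    speeds = [4.0, 2.0, 1.0, 1.0]                   # s1 >= s2 >= s3 >= s4
    edges = {(0, 1), (0, 2), (3, 4)}                # bipartite incompatibility graph, degree <= 2
    assignment = {0: 0, 1: 1, 2: 1, 3: 0, 4: 2, 5: 0}
    print(makespan(assignment, speeds, edges))      # 1.0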

    Scheduling of Identical Jobs with Bipartite Incompatibility Graphs on Uniform Machines. Computational Experiments

    In this paper we consider the problem of scheduling unit-length jobs on 3 or 4 uniform parallel machines to minimize schedule length or total completion time. We assume that jobs are subject to mutual exclusion constraints, modeled by a bipartite graph of bounded degree whose edges correspond to pairs of jobs that cannot be processed on the same machine. Although the problem is NP-hard in general, we show that under certain conditions on the machine speeds and the structure of the incompatibility graph it can be solved to optimality in polynomial time. Theoretical considerations are accompanied by computational experiments with a particular scheduling model.
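    For reference, under the unit-length assumption both objectives reduce to simple expressions in the machine loads (notation ours: machine $i$ with speed $s_i$ receives $k_i$ jobs and processes them back to back from time zero):

    \[
      C_{\max} = \max_i \frac{k_i}{s_i},
      \qquad
      \sum_j C_j = \sum_i \frac{1}{s_i}\left(1 + 2 + \cdots + k_i\right)
                 = \sum_i \frac{k_i (k_i + 1)}{2 s_i}.
    \]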

    Implementation and optimization of algorithms for the analysis of Biomedical Big Data

    Big Data analytics poses many challenges to the research community, which must handle numerous computational problems arising from the vast amount of data. Biomedical data are attracting increasing interest, with the aim of achieving so-called personalized medicine, in which therapy plans are designed around the specific genotype and phenotype of an individual patient; algorithm optimization plays a key role in this endeavor. In this work we discuss several topics in Biomedical Big Data analytics, with special attention to the numerical issues and algorithmic solutions they raise. We introduce a novel feature selection algorithm tailored to omics datasets and demonstrate its efficiency on synthetic and real high-throughput genomic datasets, obtaining better or comparable results against other state-of-the-art methods. We also implemented and optimized different types of deep learning models, testing their efficiency on biomedical image processing tasks. Three novel frameworks for developing deep learning neural network models are discussed and used to describe the proposed numerical improvements. In the first implementation we optimize two super-resolution models, showing their results on NMR images and proving their efficiency in generalization tasks without retraining. The second optimization involves a state-of-the-art object detection neural network architecture, obtaining a significant speedup in computational performance. In the third application we address the femur head segmentation problem on CT images using deep learning algorithms. The last section of this work covers the implementation of a novel biomedical database obtained by harmonizing multiple data sources, which provides network-like relationships between biomedical entities. Data related to diseases and other biological entities were mined using web-scraping methods, and a novel natural language processing pipeline was designed to maximize the overlap between the different data sources involved in this project.
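    As context for the feature selection theme (the thesis's novel algorithm is not specified in the abstract, so the sketch below is a generic stand-in, not the proposed method), a univariate ANOVA filter is a common baseline for omics data with many features and few samples:

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5000))      # 100 samples, 5000 gene-expression features
    y = rng.integers(0, 2, size=100)      # binary phenotype labels
    X[y == 1, :10] += 1.5                 # plant signal in the first 10 features

    selector = SelectKBest(f_classif, k=10).fit(X, y)
    print(np.sort(selector.get_support(indices=True)))   # recovers mostly the planted features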

    Design, development and construction of an ATEX compliant ISO 9001:2008 magnetic ink manufacturing facility

    This thesis charts the cradle-to-grave development of a chemical processing plant suitable for the manufacture of 160 tonnes per annum of magnetic ink, along with the associated in-line process, quality control, and quality assurance methodologies, developing innovations for the printing industry. The work was undertaken through Knowledge Transfer Partnership number 9576 between BemroseBooth Paragon, Ltd. and The University of Hull. First, the formulation of magnetic inks is described and characterized through a variety of physical and chemical measurements, and the magnetic properties of the development inks are presented. Thirteen different ink formulations were developed during the course of this work, all of which are now available on the global market, being sold in four continents to, amongst others, the Rail Delivery Group (RDG, formerly ATOC), Régie-Autonome des Transports Parisiens (RATP), all operators of the French motorway tolls (Sanef, Vinci, ASF, etc.), the New York Metropolitan and Casa da Moeda do Brasil (CMB). The design of the manufacturing process, including safety, health, and environmental considerations, is outlined, together with its realization within an ISO 9001:2008 quality management system. The process economics are rationalized, and pre-project estimates are contrasted with actual costs. Fast-moving manufacturing environments always require innovations to expand product ranges and to resolve issues associated with limited reverse supply chains and complications in the use of manufactured product. A variety of problems are presented, with realized and pragmatic pathways to their solution. In keeping with the spirit of environmental responsibility, innovations in the development of water-based magnetic inks are presented, together with routes to their low-cost, in situ process monitoring. Last, an entirely new electrochemical approach to the detection of security threats in a mass transit environment is demonstrated to proof of concept.

    Subject index volumes 1–92


    Carbon-profit-aware job scheduling and load balancing in geographically distributed cloud for HPC and web applications

    This thesis introduces two carbon-profit-aware control mechanisms that improve job scheduling and load balancing in an interconnected system of geographically distributed data centers for HPC and web applications. These control mechanisms consist of three primary components that perform: 1) measurement and modeling, 2) job planning, and 3) plan execution. The measurement and modeling component provides information on energy consumption and carbon footprint, as well as utilization, weather, and pricing information. The job planning component uses this information to suggest the best arrangement of applications as a candidate configuration, which the plan execution component then carries out on the system. For reporting and decision-making purposes, some metrics must be modeled from directly measured inputs, which raises two challenges: 1) feature selection and 2) curve fitting (regression). First, to improve the accuracy of power consumption models of underutilized servers, advanced fitting methodologies were applied to selected server features. The resulting model is evaluated on real servers and used as part of the load balancing mechanism for web applications. We also provide a comprehensive model of the data center cooling system, which the planning component uses to optimize the cooling power consumption. Furthermore, we introduce a model that calculates the profit of the system from the price of electricity, carbon tax, operational costs, sales tax, and corporation taxes; this model is used for optimized scheduling of HPC jobs. For the placement of web applications, a new heuristic algorithm is introduced for load balancing of virtual machines in a geographically distributed system in order to improve its carbon awareness. This heuristic is based on a genetic algorithm and is specifically tailored to optimization problems on interconnected systems of distributed data centers; a simple version has been implemented in the GSN project as a carbon-aware controller. Similarly, for scheduling HPC jobs on servers, two new metrics are introduced: 1) profit-per-core-hour-GHz and 2) virtual carbon tax. The HPC job scheduler uses these metrics to maximize profit and minimize the carbon footprint of the system, respectively. Once the application execution plan is determined, the plan execution component implements it on the system, using the hypervisors on physical servers to create, remove, and migrate virtual machines, and executing and controlling the HPC jobs or web applications on those virtual machines. To validate systems designed with the proposed modeling and planning components, a simulation platform using real system data was developed, and the new methodologies were compared with state-of-the-art methods under various scenarios. The experimental results show improved power modeling of servers, significant carbon reduction in load balancing of web applications, and significant profit-carbon improvement in HPC job scheduling.
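    To make the scheduling metric concrete, here is a hedged sketch of a profit-per-core-hour-GHz style computation; the thesis's exact profit model is not reproduced, and all parameter names and rates below are hypothetical.

    def profit_per_core_hour_ghz(revenue, energy_kwh, electricity_price,
                                 carbon_kg, carbon_tax_rate, operational_cost,
                                 cores, hours, freq_ghz):
        """Profit normalized by delivered compute capacity (illustrative only)."""
        cost = (energy_kwh * electricity_price      # electricity bill
                + carbon_kg * carbon_tax_rate       # (virtual) carbon tax
                + operational_cost)                 # other operational costs
        return (revenue - cost) / (cores * hours * freq_ghz)

    # A scheduler could rank candidate (job, data center) placements by this metric.
    print(profit_per_core_hour_ghz(revenue=12.0, energy_kwh=30.0, electricity_price=0.10,
                                   carbon_kg=15.0, carbon_tax_rate=0.05, operational_cost=2.0,
                                   cores=16, hours=1.0, freq_ghz=2.5))   # 0.15625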

    Cell Nuclear Morphology Analysis Using 3D Shape Modeling, Machine Learning and Visual Analytics

    Quantitative analysis of morphological changes in a cell nucleus is important for understanding nuclear architecture and its relationship with cell differentiation, development, proliferation, and disease. Changes in nuclear form are associated with reorganization of chromatin architecture related to altered functional properties such as gene regulation and expression. Understanding these processes through quantitative analysis of morphological changes is important not only for investigating nuclear organization but also has clinical implications, for example in the detection and treatment of pathological conditions such as cancer. While efforts have been made to characterize nuclear shapes in two or pseudo-three dimensions, several studies have demonstrated that three-dimensional (3D) representations provide better nuclear shape description, in part due to the high variability of nuclear morphologies. 3D shape descriptors that permit robust morphological analysis and facilitate human interpretation are still under active investigation. A few methods have been proposed to classify nuclear morphologies in 3D; however, there is a lack of publicly available 3D data for the evaluation and comparison of such algorithms, and there is a compelling need for robust 3D nuclear morphometric techniques to carry out population-wide analyses. In this work, we address a number of these existing limitations. First, we present the largest, to date, publicly available 3D microscopy imaging dataset for cell nuclear morphology analysis and classification. We provide a detailed description of the image analysis protocol, from segmentation to baseline evaluation of a number of popular classification algorithms using 2D and 3D voxel-based morphometric measures, and we propose a specific cross-validation scheme that accounts for possible batch effects in the data. Second, we propose a new technique that combines mathematical modeling, machine learning, and interpretation of morphometric characteristics of cell nuclei and nucleoli in 3D. Employing robust and smooth surface reconstruction methods to accurately approximate 3D object boundaries enables the establishment of homologies between different biological shapes. We then compute geometric morphological measures characterizing the form of cell nuclei and nucleoli, and combine these methods into a highly parallel computational pipeline for automated morphological analysis of thousands of nuclei and nucleoli in 3D. We also describe the use of visual analytics and deep learning techniques for the analysis of nuclear morphology data. Third, we evaluate the proposed methods for 3D surface morphometric analysis of our data. We improve the performance of morphological classification between epithelial and mesenchymal human prostate cancer cells compared to previously reported results, owing to a more accurate shape representation and the use of combined nuclear and nucleolar morphometry. We confirm previously reported relevant morphological characteristics and report new features that can provide insight into the underlying biological mechanisms of prostate cancer pathology. We also assess nuclear morphology changes associated with chromatin remodeling in drug-induced cellular reprogramming, computing temporal trajectories that reflect morphological differences in astroglial cell sub-populations administered two different treatments versus controls. We describe specific changes in nuclear morphology that are characteristic of chromatin re-organization under each treatment, which previously had only been tentatively hypothesized in the literature. Our approach demonstrates high classification performance on each of three different cell lines and reports the most salient morphometric characteristics. We conclude with a discussion of the potential impact of method development in nuclear morphology analysis on clinical decision-making and fundamental investigation of 3D nuclear architecture, and consider some open problems and future trends in this field.
    PhD, Bioinformatics, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147598/1/akalinin_1.pd
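    To illustrate the kind of voxel-based morphometric measures discussed above, the sketch below computes a few common baseline features with scikit-image; it is our example, not the dissertation's pipeline.

    import numpy as np
    from skimage import measure

    def nuclear_morphometry(mask, spacing=(1.0, 1.0, 1.0)):
        """mask: 3D boolean array of a segmented nucleus; spacing: voxel size."""
        volume = mask.sum() * np.prod(spacing)
        # Extract a surface mesh and measure its area.
        verts, faces, _, _ = measure.marching_cubes(mask.astype(np.uint8),
                                                    level=0.5, spacing=spacing)
        area = measure.mesh_surface_area(verts, faces)
        # Sphericity: 1.0 for a perfect ball, smaller for irregular shapes.
        sphericity = (np.pi ** (1 / 3)) * (6 * volume) ** (2 / 3) / area
        return {"volume": volume, "surface_area": area, "sphericity": sphericity}

    # Toy example: a solid ball of radius 10 voxels.
    z, y, x = np.ogrid[-15:16, -15:16, -15:16]
    ball = (x**2 + y**2 + z**2) <= 10**2
    print(nuclear_morphometry(ball))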

    I-Light Symposium 2005 Proceedings

    I-Light was made possible by a special appropriation by the State of Indiana. The research described at the I-Light Symposium has been supported by numerous grants from several sources. Any opinions, findings and conclusions, or recommendations expressed in the 2005 I-Light Symposium Proceedings are those of the researchers and authors and do not necessarily reflect the views of the granting agencies.
    Indiana University Office of the Vice President for Research and Information Technology; Purdue University Office of the Vice President for Information Technology and CI