    The Project Scheduling Problem with Non-Deterministic Activities Duration: A Literature Review

    Purpose: The goal of this article is to provide an extensive literature review of the models and solution procedures proposed by researchers interested in the Project Scheduling Problem with non-deterministic activity durations. Design/methodology/approach: This paper presents an exhaustive literature review that identifies the existing models in which activity durations are treated as uncertain or random parameters. The Scopus database was used to retrieve articles published since 1996. The articles were selected on the basis of reviews of abstracts, methodologies, and conclusions, and the results were classified according to the following characteristics: year of publication, mathematical representation of the activity durations, solution techniques applied, and type of problem solved. Findings: Genetic Algorithms (GA) were identified as the main solution technique employed by researchers, and the Resource-Constrained Project Scheduling Problem (RCPSP) as the most studied type of problem. The application of new solution techniques and the possibility of incorporating traditional methods into new PSP variants were identified as research trends. Originality/value: This literature review contains not only a descriptive analysis of the published articles but also a statistical information section that examines the state of the research activity on the Project Scheduling Problem with non-deterministic activity durations.
    Peer Reviewed
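    As a minimal illustration of the kind of non-deterministic activity-duration model the reviewed literature deals with, the following Python sketch estimates a project makespan by Monte Carlo sampling of triangular activity durations over a small precedence network; the activities, distributions and parameters are invented for the example and are not taken from any surveyed model.

```python
# Hypothetical sketch: Monte Carlo estimate of project makespan when activity
# durations are random (triangular here); all names and numbers are illustrative.
import random

# precedence graph: activity -> list of predecessors (listed in topological order)
predecessors = {
    "A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"],
}
# triangular duration parameters (low, mode, high) per activity; illustrative only
durations = {
    "A": (2, 3, 5), "B": (4, 6, 9), "C": (3, 4, 6), "D": (1, 2, 4),
}

def sample_makespan():
    finish = {}
    for act, preds in predecessors.items():   # dict preserves insertion (topological) order
        start = max((finish[p] for p in preds), default=0.0)
        lo, mode, hi = durations[act]
        finish[act] = start + random.triangular(lo, hi, mode)
    return max(finish.values())

samples = [sample_makespan() for _ in range(10_000)]
print("estimated mean makespan:", sum(samples) / len(samples))
```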

    Evaluation of 3D image-treatment algorithms applied to optical-sectioning microscopy

    Information extracted from biological specimens is inherently three-dimensional. Though it is sometimes harder to handle, three-dimensional (3D) data provides a better understanding of biological structures and events than its two-dimensional (2D) projections. This explains why optical-sectioning techniques are currently being investigated and enhanced. The main objective of the present work was to evaluate the relevance of image-treatment algorithms, which included preprocessing (such as image averaging, background correction and intensity normalization) and processing (deblurring and restoration deconvolution) methods. This was done by implementing a quantification algorithm based on the Laplacian and a bright-point detector. The algorithms were applied to a 3D cell-adhesion skin model, based upon a specimen commonly used by our research group. Results indicated that certain preprocessing methods are required to enhance the performance of the processing algorithms, while others must not be applied in order to ensure adequate and precise quantification.
    IV Workshop de Computación Gráfica, Imágenes y Visualización (WCGIV). Red de Universidades con Carreras en Informática (RedUNCI)
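    As a rough illustration (not the algorithm evaluated in the paper), the following Python sketch shows one common way to build a Laplacian-based bright-point detector for a 3D stack, using a Laplacian-of-Gaussian response and local maxima; the function name, sigma value and threshold are assumptions made for the example.

```python
# Minimal sketch of Laplacian-based bright-point detection in a 3D stack.
import numpy as np
from scipy import ndimage

def detect_bright_points(stack, sigma=2.0, threshold=0.1):
    """Return voxel coordinates (z, y, x) of bright spots in a 3D stack."""
    # negative LoG response gives positive peaks at bright blobs
    response = -ndimage.gaussian_laplace(stack.astype(float), sigma=sigma)
    # keep voxels that are local maxima of the response and sufficiently strong
    local_max = response == ndimage.maximum_filter(response, size=3)
    peaks = local_max & (response > threshold * response.max())
    return np.argwhere(peaks)

# usage on synthetic data: one blurred point source in an empty volume
stack = np.zeros((32, 64, 64))
stack[16, 32, 32] = 1.0
stack = ndimage.gaussian_filter(stack, sigma=1.5)
print(detect_bright_points(stack, sigma=1.5))
```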

    Enhancement of an automatic algorithm for deconvolution and quantification of three-dimensional microscopy images

    In previous work we designed and developed a software tool for the optimization of multidimensional-image processing, which consisted of an automatic restoration deconvolution method (positivity-constrained deconvolution) and three image-restoration indicators (Full Width at Half Maximum, Contrast-to-Noise Ratio and Signal-to-Noise Ratio) used to assess the quality of restoration quantitatively. Since the algorithm's design was implemented in uncoupled modules, we were able to introduce two new image-restoration parameters (two three-dimensional Tenengrad-based indicators) without major modifications to the code. The enhanced version of the algorithm was used to process raw three-dimensional images using several experimental Point Spread Functions; the raw images were obtained by fluorescence wide-field microscopy of epidermal E-cadherin expression in Rhinella arenarum embryos and of fluorescent microspheres. The image-restoration indicators and the performance of the previous and enhanced versions of the algorithm were compared. Results show that the Tenengrad-based indicators concur with the previously used ones and that the new modules do not increase processing time significantly.
    Workshop de Computación Gráfica, Imágenes y Visualización (WCGIV). Red de Universidades con Carreras en Informática (RedUNCI)
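    The Tenengrad measure mentioned above is classically defined as the sum of squared Sobel gradient magnitudes. A minimal sketch of a 3D variant is given below, assuming a straightforward extension with gradients along all three axes; the paper's exact indicators may be defined differently.

```python
# Minimal sketch of a 3D Tenengrad-style sharpness indicator (assumed variant).
import numpy as np
from scipy import ndimage

def tenengrad_3d(stack):
    """Sharpness indicator for a 3D stack; expected to increase after a good restoration."""
    stack = stack.astype(float)
    gx = ndimage.sobel(stack, axis=2)  # gradient along x
    gy = ndimage.sobel(stack, axis=1)  # gradient along y
    gz = ndimage.sobel(stack, axis=0)  # gradient along z
    return float(np.sum(gx**2 + gy**2 + gz**2))

# usage: compare a raw stack against a (hypothetically) deconvolved one
# print(tenengrad_3d(raw_stack), tenengrad_3d(restored_stack))
```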

    Solving the SUSY CP problem with flavor breaking F-terms

    Supersymmetric flavor models for the radiative generation of fermion masses offer an alternative way to solve the SUSY CP problem. We assume that the supersymmetric theory is flavor and CP conserving. CP-violating phases are associated with the vacuum expectation values of flavor-violating SUSY-breaking fields. As a consequence, phases appear at tree level only in the soft supersymmetry-breaking matrices. Using a U(2) flavor model as an example, we show that it is possible to generate radiatively the first and second generations of quark masses and mixings as well as the CKM CP phase. The one-loop supersymmetric contributions to EDMs are automatically zero, since all the relevant parameters in the Lagrangian are flavor conserving and, as a consequence, real. The size of the flavor and CP mixing in the SUSY-breaking sector is mostly determined by the fermion mass ratios and CKM elements. We calculate the contributions to epsilon, epsilon' and to the CP asymmetries in the B decays to psi Ks, phi Ks, eta' Ks and Xs gamma. We analyze a case study with maximal predictivity in the fermion sector. For this worst-case scenario the measurements of Delta mK, Delta mB and epsilon constrain the model, requiring extremely heavy squark spectra.
    Comment: 21 pages, RevTeX

    Feedback-control & queueing theory-based resource management for streaming applications

    Recent advances in sensor technologies and instrumentation have led to an extraordinary growth of data sources and streaming applications. A wide variety of devices, from smart phones to dedicated sensors, can collect and stream large amounts of data at unprecedented rates. A number of distinct streaming data models have been proposed. Typical applications include smart cities and built environments, for instance, where sensor-based infrastructures continue to increase in scale and variety. Understanding how such streaming content can be processed within some time threshold remains a non-trivial and important research topic. We investigate how a cloud-based computational infrastructure can autonomically respond to such streaming content, offering Quality of Service guarantees. We propose an autonomic controller (based on feedback control and queueing theory) to elastically provision virtual machines to meet performance targets associated with a particular data stream. Evaluation is carried out using a federated cloud-based infrastructure (implemented using CometCloud), where the allocation of new resources can be based on: (i) differences between sites, i.e. the types of resources supported (e.g. GPU vs. CPU only); (ii) the cost of execution; and (iii) the failure rate and likely resilience. In particular, we demonstrate how Little's Law, a widely used result in queueing theory, can be adapted to support dynamic control in the context of such resource provisioning.
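    As an illustration of how Little's Law (L = lambda * W) can drive elastic provisioning, the hypothetical sketch below converts a measured arrival rate and a target response time into an expected number of in-flight requests and hence a VM count; the parameter names and per-VM capacity are assumptions for the example, not the paper's controller.

```python
# Hypothetical sketch: sizing a VM pool from Little's Law (L = lambda * W).
import math

def vms_required(arrival_rate, target_response_s, capacity_per_vm):
    """arrival_rate: requests/s; capacity_per_vm: concurrent requests one VM can sustain."""
    in_flight = arrival_rate * target_response_s          # L = lambda * W
    return max(1, math.ceil(in_flight / capacity_per_vm))

# e.g. 120 req/s with a 0.5 s response target and 8 concurrent requests per VM
print(vms_required(arrival_rate=120.0, target_response_s=0.5, capacity_per_vm=8))  # -> 8
```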

    Mobility-aware application scheduling in fog computing

    Fog computing provides a distributed infrastructure at the edges of the network, resulting in low-latency access and faster responses to application requests compared to centralized clouds. With this new level of computing capacity introduced between users and the data-center-based clouds, new forms of resource allocation and management can be developed to take advantage of the Fog infrastructure. A wide range of applications with different requirements run on end-user devices, and with the popularity of cloud computing many of them rely on remote processing or storage. As clouds are primarily delivered through centralized data centers, such remote processing/storage usually takes place at a single location that hosts user applications and data. The distributed capacity provided by Fog computing allows execution and storage to be performed at different locations. The combination of distributed capacity, the range and types of user applications, and the mobility of smart devices requires resource management and scheduling strategies that take all of these factors into account. We analyze the scheduling problem in Fog computing, focusing on how user mobility can influence application performance and how three different scheduling policies, namely concurrent, FCFS, and delay-priority, can be used to improve execution based on application characteristics.
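    For illustration only, the toy Python sketch below contrasts two of the queue-ordering policies named above, FCFS and delay-priority; the task fields and the slack-based notion of delay priority are assumptions made for the example.

```python
# Toy sketch contrasting FCFS and a slack-based delay-priority ordering.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    arrival: float    # time the request reached the scheduler
    deadline: float   # latest acceptable completion time

def order_fcfs(tasks):
    return sorted(tasks, key=lambda t: t.arrival)           # first come, first served

def order_delay_priority(tasks, now=0.0):
    return sorted(tasks, key=lambda t: t.deadline - now)    # least slack served first

tasks = [Task("video", arrival=0.0, deadline=5.0),
         Task("sensor", arrival=1.0, deadline=2.0)]
print([t.name for t in order_fcfs(tasks)])             # ['video', 'sensor']
print([t.name for t in order_delay_priority(tasks)])   # ['sensor', 'video']
```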

    Current sensorless power factor correction based on digital current rebuilding

    A new digital control technique for power factor correction is presented. The main novelty of the method is that there is no current sensor; instead, the input current is digitally rebuilt, and the estimated input current is used for the current loop. In addition, the ADCs used for the acquisition of the input and output voltages have been designed ad hoc. Taking advantage of the slow dynamic behavior of these voltages, almost completely digital ADCs have been designed, leaving only a comparator and an RC filter in the analog part. The final objective is to obtain a low-cost digital controller that can be easily integrated in an ASIC along with the controllers of the paralleled and subsequent power sections.
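    As a hedged illustration of current rebuilding (assuming a boost PFC stage and an averaged switching model, which may differ from the estimator actually proposed), the sketch below reconstructs the inductor current sample by sample from the measured voltages and the applied duty cycle instead of sensing it.

```python
# Illustrative sketch: rebuild the inductor current of an assumed boost PFC
# stage from sampled voltages and duty cycles (averaged model, no current sensor).
def rebuild_current(v_in, v_out, duty, L=1e-3, Ts=20e-6, i0=0.0):
    """v_in, v_out, duty: one sample per switching period; returns estimated current."""
    i_est = [i0]
    for vin_k, vout_k, d_k in zip(v_in, v_out, duty):
        v_L = vin_k - (1.0 - d_k) * vout_k                 # average inductor voltage over one period
        i_est.append(max(0.0, i_est[-1] + (Ts / L) * v_L))  # clamp at zero (discontinuous conduction)
    return i_est

# usage: feed sampled rectified input voltage, DC bus voltage and duty cycles
# i_hat = rebuild_current(v_in_samples, v_out_samples, duty_samples)
```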

    Coordinating data analysis and management in multi-layered clouds

    We introduce an architecture for undertaking data processing across multiple layers of a distributed computing infrastructure, composed of edge devices (making use of Internet-of-Things (IoT) based protocols), intermediate gateway nodes and large-scale data centres. In this way, data processing that is intended to be carried out in the data centre can be pushed to the edges of the network, enabling more efficient use of data centre and in-network resources. We suggest the need for specialist data analysis and management algorithms that are resource-aware and able to split computation across these different layers. We propose a coordination mechanism that is able to combine different types of data processing capability, such as in-transit and in-situ processing. An application scenario is used to illustrate the concepts and is subsequently evaluated through a multi-site deployment.
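    A toy resource-aware placement sketch is given below: each processing step is pushed to the closest layer (edge, gateway or data centre) that still satisfies its memory and latency needs, echoing the in-situ / in-transit split described above; the layer names, capacities and thresholds are invented for the example.

```python
# Toy sketch: place a processing step on the closest layer that satisfies it.
LAYERS = [
    {"name": "edge",        "mem_mb": 256,    "latency_ms": 5},
    {"name": "gateway",     "mem_mb": 4096,   "latency_ms": 20},
    {"name": "data_centre", "mem_mb": 262144, "latency_ms": 80},
]

def place(step_mem_mb, max_latency_ms):
    """Pick the first (closest-to-source) layer with enough memory and acceptable latency."""
    for layer in LAYERS:
        if layer["mem_mb"] >= step_mem_mb and layer["latency_ms"] <= max_latency_ms:
            return layer["name"]
    return None  # no layer satisfies the constraints

print(place(step_mem_mb=128, max_latency_ms=50))    # -> 'edge'
print(place(step_mem_mb=8192, max_latency_ms=100))  # -> 'data_centre'
```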
