
    An (MI)LP-based Primal Heuristic for 3-Architecture Connected Facility Location in Urban Access Network Design

    We investigate the 3-architecture Connected Facility Location Problem arising in the design of urban telecommunication access networks. We propose an original optimization model for the problem that includes additional variables and constraints to take into account wireless signal coverage. Since the problem can prove challenging even for modern state-of-the-art optimization solvers, we propose to solve it by an original primal heuristic which combines a probabilistic fixing procedure, guided by peculiar Linear Programming relaxations, with an exact MIP heuristic based on a very large neighborhood search. Computational experiments on a set of realistic instances show that our heuristic can find solutions associated with much lower optimality gaps than a state-of-the-art solver.
    Comment: This is the authors' final version of the paper published in: Squillero G., Burelli P. (eds), EvoApplications 2016: Applications of Evolutionary Computation, LNCS 9597, pp. 283-298, 2016. DOI: 10.1007/978-3-319-31204-0_19. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-31204-0_1
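The probabilistic fixing idea described in the abstract can be illustrated with a minimal sketch: binary facility-opening variables whose LP-relaxation values are extreme are fixed with a probability tied to those values, and the ambiguous ones are left free for a later exact MIP search over the resulting (large) neighborhood. The variable names and relaxation values below are hypothetical, not the authors' instances or implementation.

```python
import random

def probabilistic_fixing(relaxation_values, threshold=0.9, rng=None):
    """Fix binary facility-opening variables whose LP relaxation value is
    extreme; leave the rest free for a later exact MIP search.
    `relaxation_values` maps variable name -> fractional value in [0, 1]."""
    rng = rng or random.Random(0)
    fixed, free = {}, []
    for var, val in relaxation_values.items():
        # Fix with probability proportional to how extreme the value is.
        if val >= threshold and rng.random() < val:
            fixed[var] = 1          # confidently open this facility
        elif val <= 1 - threshold and rng.random() < 1 - val:
            fixed[var] = 0          # confidently keep it closed
        else:
            free.append(var)        # ambiguous: decide in the MIP phase
    return fixed, free

# Hypothetical relaxation values for five candidate facilities.
lp_values = {"f1": 0.97, "f2": 0.02, "f3": 0.55, "f4": 1.0, "f5": 0.08}
fixed, free = probabilistic_fixing(lp_values, rng=random.Random(42))
print(fixed, free)
```

In the full heuristic, the free variables would be handed to an exact solver restricted to that neighborhood; here the split itself is the point.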

    Scatter search based metaheuristic for robust optimization of the deployment of DWDM technology on optical networks with survivability

    In this paper we discuss the application of a metaheuristic approach based on Scatter Search to deal with robust optimization of the planning problem in the deployment of Dense Wavelength Division Multiplexing (DWDM) technology on an existing optical fiber network, taking into account, in addition to the forecasted demands, the uncertainty in the survivability requirements.
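The Scatter Search template behind this kind of planner can be sketched in a few lines: keep a small reference set of good solutions, combine pairs of them, and update the set with any improvements. The toy objective and parameters below are illustrative only, not the paper's model.

```python
import itertools
import random

def scatter_search(objective, lower, upper, ref_size=5, iters=20, rng=None):
    """Minimal Scatter Search sketch for minimizing `objective` on a box.
    Keeps a reference set of good solutions, combines pairs by averaging,
    and updates the set with any improvements."""
    rng = rng or random.Random(0)
    dim = len(lower)
    # Diversification: random initial population; best ones seed the RefSet.
    pop = [[rng.uniform(lower[d], upper[d]) for d in range(dim)]
           for _ in range(10 * ref_size)]
    refset = sorted(pop, key=objective)[:ref_size]
    for _ in range(iters):
        # Combination: midpoints of every RefSet pair.
        candidates = [[(x + y) / 2 for x, y in zip(a, b)]
                      for a, b in itertools.combinations(refset, 2)]
        # Reference-set update: keep the best ref_size of old + new.
        refset = sorted(refset + candidates, key=objective)[:ref_size]
    return refset[0]

# Toy surrogate objective: sphere function, optimum at the origin.
best = scatter_search(lambda x: sum(v * v for v in x), [-5, -5], [5, 5],
                      rng=random.Random(1))
print(best)
```

A real planner would replace the toy objective with the network cost model and add problem-specific combination and repair operators.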

    Building the Evryscope: Hardware Design and Performance

    The Evryscope is a telescope array designed to open a new parameter space in optical astronomy, detecting short timescale events across extremely large sky areas simultaneously. The system consists of a 780 MPix 22-camera array with an 8150 sq. deg. field of view, 13" per pixel sampling, and the ability to detect objects down to Mg=16 in each 2 minute dark-sky exposure. The Evryscope, covering 18,400 sq. deg. with hours of high-cadence exposure time each night, is designed to find the rare events that require all-sky monitoring, including transiting exoplanets around exotic stars like white dwarfs and hot subdwarfs, stellar activity of all types within our galaxy, nearby supernovae, and other transient events such as gamma ray bursts and gravitational-wave electromagnetic counterparts. The system averages 5000 images per night with ~300,000 sources per image, and to date has taken over 3.0M images, totaling 250TB of raw data. The resulting light curve database has light curves for 9.3M targets, averaging 32,600 epochs per target through 2018. This paper summarizes the hardware and performance of the Evryscope, including the lessons learned during telescope design, electronics design, a procedure for the precision polar alignment of mounts for Evryscope-like systems, robotic control and operations, and safety and performance-optimization systems. We measure the on-sky performance of the Evryscope, discuss its data-analysis pipelines, and present some example variable star and eclipsing binary discoveries from the telescope. We also discuss new discoveries of very rare objects including 2 hot subdwarf eclipsing binaries with late M-dwarf secondaries (HW Vir systems), 2 white dwarf / hot subdwarf short-period binaries, and 4 hot subdwarf reflection binaries. We conclude with the status of our transit surveys, M-dwarf flare survey, and transient detection.
    Comment: 24 pages, 24 figures, accepted PAS
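The quoted field of view, pixel scale, and pixel count can be cross-checked with simple arithmetic: an 8150 sq. deg. field sampled at 13"/pixel implies a unique-sky pixel count of the same order as the quoted 780 MPix (the physical count is larger because camera fields overlap). This is a back-of-envelope check using only the abstract's numbers.

```python
# Unique-sky pixel count implied by the abstract's field of view and sampling.
ARCSEC_PER_DEG = 3600
field_sq_arcsec = 8150 * ARCSEC_PER_DEG ** 2     # field area in arcsec^2
pixels_needed = field_sq_arcsec / 13 ** 2        # at 13 arcsec per pixel
print(f"{pixels_needed / 1e6:.0f} MPix of unique sky")  # ~625 MPix
```

At roughly 625 MPix of unique sky against 780 MPix of detector, the numbers are mutually consistent with modest inter-camera overlap.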

    Design and fabrication of a long-life Stirling cycle cooler for space application. Phase 3: Prototype model

    A second-generation, Stirling-cycle cryocooler (cryogenic refrigerator) for space applications, with a cooling capacity of 5 watts at 65 K, was recently completed. The refrigerator, called the Prototype Model, was designed with a goal of 5 year life with no degradation in cooling performance. The free displacer and free piston of the refrigerator are driven directly by moving-magnet linear motors with the moving elements supported by active magnetic bearings. The use of clearance seals and the absence of outgassing material in the working volume of the refrigerator enable long-life operation with no deterioration in performance. Fiber-optic sensors detect the radial position of the shafts and provide a control signal for the magnetic bearings. The frequency, phase, stroke, and offset of the compressor and expander are controlled by signals from precision linear position sensors (LVDTs). The vibration generated by the compressor and expander is cancelled by an active counterbalance which also uses a moving-magnet linear motor and magnetic bearings. The driving signal for the counterbalance is derived from the compressor and expander position sensors, which have wide bandwidth for suppression of harmonic vibrations. The efficiency of the three active members, which operate in a resonant mode, is enhanced by a magnetic spring in the expander and by gas springs in the compressor and counterbalance. The specified cooling capacity was achieved with a total motor input power of 139 watts. The magnetic-bearing stiffness was significantly increased from the first-generation cooler to accommodate shuttle launch vibrations.
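The reaction-cancellation principle described above can be sketched numerically: the counterbalance is commanded from the compressor and expander position signals so that the net centre of mass of the three moving assemblies stays fixed. The masses, strokes, phase, and frequency below are hypothetical, chosen only to illustrate the idea, not taken from the paper.

```python
import math

# Illustrative (not actual) moving masses of the three active members, in kg.
m_comp, m_exp, m_bal = 1.2, 0.4, 0.8

def balance_position(x_comp, x_exp):
    """Counterbalance position that zeroes the net centre of mass,
    derived from the compressor and expander position-sensor signals."""
    return -(m_comp * x_comp + m_exp * x_exp) / m_bal

# Sample a few instants of one cycle at an assumed operating frequency.
f = 35.0                                               # Hz, illustrative
for t in (0.0, 0.25 / f, 0.5 / f):
    x_c = 0.005 * math.sin(2 * math.pi * f * t)        # 5 mm stroke
    x_e = 0.003 * math.sin(2 * math.pi * f * t + 1.0)  # phase-shifted
    x_b = balance_position(x_c, x_e)
    # Net mass-weighted displacement is zero, so no net reaction force.
    assert abs(m_comp * x_c + m_exp * x_e + m_bal * x_b) < 1e-9
```

The wide-bandwidth position sensors mentioned in the abstract matter here because the same cancellation must hold for the harmonics of the drive frequency, not just the fundamental.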

    Machine Learning Assisted Framework for Advanced Subsurface Fracture Mapping and Well Interference Quantification

    The oil and gas industry has historically spent a significant amount of capital to acquire large volumes of analog and digital data, often left unused due to a lack of digital awareness. It has instead relied on individual expertise and numerical modelling for reservoir development, characterization, and simulation, which is extremely time consuming and expensive and inevitably invites significant human bias and error into the equation. One of the major questions with significant impact on unconventional reservoir development (e.g., completion design, production, and well spacing optimization), CO2 sequestration in geological formations (e.g., well and reservoir integrity), and engineered geothermal systems (e.g., maximizing the fluid flow and capacity of the wells) is how to quantify and map subsurface natural fracture systems, both locally near the wellbore and more generally at the scale of the well pad or region. In this study, conventional near-wellbore natural fracture mapping techniques are first discussed and integrated with more advanced technologies, such as fiber optics, specifically Distributed Acoustic Sensing (DAS) and Distributed Strain Sensing (DSS), to upscale the fracture mapping to the region. Next, a physics-based automated machine learning (AutoML) workflow is developed that incorporates an advanced data acquisition system collecting high-resolution drilling acceleration data to infer near-wellbore natural fracture intensities. The new AutoML workflow aims to minimize human bias and accelerate near-wellbore natural fracture mapping in real time. It shows great promise, reducing fracture mapping time and cost tenfold and producing more accurate, robust, reproducible, and measurable results.
Finally, to completely remove human intervention and thus accelerate fracture mapping while drilling, computer vision and deep learning techniques were integrated into new workflows that automate the identification of natural fractures and other lithological features on borehole image logs. Different structures and workflows have been tested, and two specific workflows were designed for this purpose. In the first workflow, the fracture footprints on actual acoustic image logs (i.e., full or partial sigmoidal signatures with a range of amplitudes and vertical and horizontal displacements) are detected and classified into different categories with varying success. The second workflow uses the actual amplitude values recorded by the borehole image log, and the binary representation of the produced images, to detect and quantify the major fractures and beddings. The first workflow is more detailed and capable of identifying different classes of fractures, albeit computationally more expensive; the second is faster at detecting the major fractures and beddings. In conclusion, a regional subsurface natural fracture mapping technique using an integration of conventional logging, microseismic, and fiber optic data is presented. A new AutoML workflow, designed and tested in a Marcellus Shale gas reservoir, was used to predict near-wellbore fracture intensities from high-frequency drilling acceleration data. Two integrated workflows were designed and validated using 3 wells in the Marcellus Shale to extract natural fractures from acoustic image logs and amplitude recordings obtained during logging while drilling.
The new workflows have: i) minimized human bias in different aspects of fracture mapping, from image log analysis to machine learning model selection and hyperparameter optimization; ii) generated and quantified more accurate fracture predictions using different scoring metrics; iii) decreased the time and cost of fracture interpretation tenfold; and iv) produced more robust and reproducible results.
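The geometric fact underlying image-log fracture picking is that a planar fracture intersecting a cylindrical borehole traces a sinusoid on the unwrapped image, whose mean gives the fracture depth and whose amplitude relates to dip. The sketch below recovers these parameters from evenly spaced azimuthal depth picks via their first Fourier components; the synthetic pick is illustrative and this is not the thesis's deep-learning workflow.

```python
import math

def fit_fracture_sinusoid(depths):
    """A planar fracture crossing a borehole traces a sinusoid on the
    unwrapped image log: depth(az) = d0 + a*sin(az) + b*cos(az).
    For evenly spaced azimuth samples over a full turn, the coefficients
    are the first Fourier components of the picked depths."""
    n = len(depths)
    az = [2 * math.pi * i / n for i in range(n)]
    d0 = sum(depths) / n                                   # mean depth
    a = 2 / n * sum(d * math.sin(t) for d, t in zip(depths, az))
    b = 2 / n * sum(d * math.cos(t) for d, t in zip(depths, az))
    amplitude = math.hypot(a, b)                           # relates to dip
    return d0, amplitude

# Synthetic pick: fracture centred at 1500.0 m, 0.3 m sinusoid amplitude.
n = 36
picks = [1500.0 + 0.3 * math.sin(2 * math.pi * i / n + 0.7) for i in range(n)]
d0, amp = fit_fracture_sinusoid(picks)
print(round(d0, 3), round(amp, 3))
```

The learned workflows in the thesis effectively automate producing such picks from raw acoustic amplitudes before any geometric interpretation like this.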

    Nonlinear Interference Generation in Wideband and Disaggregated Optical Network Architectures

    The abstract is in the attachment.

    Design and evaluation of content distribution networks for multimedia streaming services

    Traditional Internet-based services for distributing files, such as Web browsing and e-mail, are offered through a single central server. More recent network services, however, such as interactive digital television or video-on-demand, require strong quality-of-service (QoS) guarantees, such as low and constant network delay, and consume a considerable amount of network bandwidth. Architectures with a single central server can hardly provide these guarantees and therefore no longer meet the high demands of next-generation multimedia applications. This research therefore studies new network architectures that can support such service quality. Both peer-to-peer mechanisms, as used for exchanging music files between end users, and server-based solutions, such as distributed caches and content distribution networks (CDNs), are considered. Depending on the service under study and the network technologies and architecture used, centralized network design algorithms are proposed. These algorithms optimize the placement of the servers or network caches and determine the required capacity of the servers and network links. The dynamic placement of the offered files in the various network elements is adapted to the current state of the network and to the varying request patterns of the end users. Server selection, rerouting of requests, and distributing the load across the whole network are also addressed.
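The cache/server placement problem described above is a facility-location-style optimization; a common baseline is a greedy heuristic that repeatedly adds the node yielding the largest reduction in total client-to-nearest-cache distance. The distance matrix below is hypothetical and the greedy rule is a standard sketch, not this thesis's algorithms.

```python
def greedy_cache_placement(distances, k):
    """Greedy sketch of CDN cache placement: repeatedly add the node that
    most reduces the total client-to-nearest-cache distance.
    `distances[i][j]` is the network distance from client i to node j."""
    clients = range(len(distances))
    nodes = range(len(distances[0]))

    def total_cost(placed):
        return sum(min(distances[c][n] for n in placed) for c in clients)

    chosen = []
    for _ in range(k):
        best = min((n for n in nodes if n not in chosen),
                   key=lambda n: total_cost(chosen + [n]))
        chosen.append(best)
    return sorted(chosen)

# Hypothetical 4-client x 4-candidate-node distance matrix.
D = [[1, 5, 9, 6],
     [4, 1, 7, 8],
     [9, 8, 1, 3],
     [7, 6, 2, 1]]
print(greedy_cache_placement(D, 2))
```

Greedy placement gives a reasonable starting point; the centralized design algorithms in the thesis additionally size server and link capacities under QoS constraints.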

    Electromagnetic interference protection techniques: modeling and experimental tests in a reverberation chamber

    Electromagnetic interference and compatibility are problems that demand increasing attention in many environments all over the world.

    Metaheuristic optimization for WDM network planning

    Current telecommunication network deployments cannot support the increase in bandwidth demand produced by the growth of data traffic in recent decades. The advent of optical fiber and the development of wavelength division multiplexing (WDM) technology make it possible to increase the capacity of existing telecommunication networks while minimizing costs. In this work, WDM optical networks are planned by solving the Provisioning and Routing Problem and the Survivability Problem in WDM networks. The Provisioning and Routing Problem consists of increasing, at minimum cost, the capacity of an existing network so that a set of demand requirements is satisfied. The Survivability Problem consists of guaranteeing traffic flow through a network in case of failure of one of its elements. The Provisioning and Routing Problem in WDM networks with demand uncertainty is also solved. Integer linear programming models are proposed for these problems. Metaheuristics provide a means of solving complex optimization problems, such as those arising in telecommunication network planning, obtaining high-quality solutions in reasonable computational time. Metaheuristics are strategies that guide and modify other heuristics to obtain solutions beyond those usually generated in the search for local optimality. They do not guarantee that the best solution found when the stopping criteria are met is a global optimum of the problem. However, experience with metaheuristic implementations shows that the search strategies embedded in such procedures are capable of finding high-quality solutions to difficult problems in industry, business, and science.
To solve the Provisioning and Routing Problem in WDM networks, a hybrid metaheuristic algorithm is developed that mainly combines ideas from the Scatter Search and Multistart metaheuristics, and adds a tabu component to one of the algorithm's procedures. The integer linear programming model proposed by other authors is used, and an alternative integer linear programming model is proposed that provides upper bounds for the problem but includes fewer variables and constraints, so it can be solved optimally for larger network sizes. The results obtained by the designed metaheuristic algorithm are compared with those obtained by a procedure based on demand permutations previously proposed by other authors, and with the two integer linear programming models used. Integer linear programming models are proposed to make the network survivable against single-link failures, covering shared link protection, shared path protection with disjoint links, and shared path protection without disjoint links. A metaheuristic solution method is proposed that obtains better overall costs than solving the problem in two phases, that is, first the provisioning problem and then the survivability problem. Integer programming models are also proposed to solve the provisioning problem in WDM networks with demand uncertainty.
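The core routing step inside any such provisioning heuristic can be sketched simply: route each demand on a shortest path over the topology and count how many wavelengths each fibre link must then carry. The topology and demand set below are hypothetical, and a real model (as in the thesis) would be an ILP minimizing upgrade cost rather than this shortest-path toy.

```python
from collections import deque

def shortest_path(adj, src, dst):
    """BFS hop-count shortest path on an undirected topology."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

def provision(adj, demands):
    """Route every demand on its shortest path and accumulate the number
    of wavelengths (lightpaths) each fibre link must carry."""
    load = {}
    for src, dst, lightpaths in demands:
        path = shortest_path(adj, src, dst)
        for a, b in zip(path, path[1:]):
            link = tuple(sorted((a, b)))
            load[link] = load.get(link, 0) + lightpaths
    return load

# Hypothetical 4-node ring with one chord, and three demands.
adj = {"A": ["B", "D"], "B": ["A", "C", "D"],
       "C": ["B", "D"], "D": ["A", "B", "C"]}
demands = [("A", "C", 2), ("B", "D", 1), ("A", "B", 3)]
print(provision(adj, demands))
```

The per-link loads are what the ILP's capacity-upgrade variables would have to cover; survivability schemes add backup paths on top of this working routing.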