
    A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing

    The emergence of cloud computing, built on virtualization technologies, brings huge opportunities to host virtual resources at low cost without the need to own any infrastructure. Virtualization technologies enable users to acquire and configure resources on demand and be charged on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially varying specifications and fluctuating resource usage, which may cause imbalanced resource utilization within servers and in turn lead to performance degradation and violations of service level agreements (SLAs). To achieve efficient scheduling, these challenges should be addressed with load balancing strategies; the underlying placement problem has been proved NP-hard. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to physical machines (PMs) in infrastructure clouds, with a particular focus on load balancing. A detailed classification of load balancing algorithms for VM placement in cloud data centers is proposed, and the surveyed algorithms are categorized accordingly. The goal of this paper is to provide a comprehensive and comparative understanding of the existing literature and to aid researchers by providing insight for potential future enhancements.
    Comment: 22 pages, 4 figures, 4 tables, in press
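As an illustration of the kind of heuristic the surveyed algorithms build on, the sketch below implements a generic first-fit-decreasing placement with a least-utilized-host tie-break. It is a minimal baseline for intuition only, not any specific algorithm from the survey; the function names and the single-resource (CPU-only) model are assumptions.

```python
def place_vms(vm_demands, host_capacities):
    """Assign each VM to the least-utilized host that can still fit it.

    Returns (assignment, loads): assignment[i] is the host index for VM i,
    and loads[h] is the total demand placed on host h.
    Illustrative sketch only -- real placement is multi-resource and NP-hard.
    """
    loads = [0.0] * len(host_capacities)
    assignment = [None] * len(vm_demands)
    # Place larger VMs first: first-fit decreasing tends to balance better.
    for i in sorted(range(len(vm_demands)), key=lambda i: -vm_demands[i]):
        feasible = [h for h in range(len(host_capacities))
                    if loads[h] + vm_demands[i] <= host_capacities[h]]
        if not feasible:
            raise ValueError(f"VM {i} (demand {vm_demands[i]}) cannot be placed")
        # Pick the host with the lowest relative utilization among those that fit.
        best = min(feasible, key=lambda h: loads[h] / host_capacities[h])
        loads[best] += vm_demands[i]
        assignment[i] = best
    return assignment, loads
```

For example, `place_vms([4, 2, 3, 1], [10, 10])` spreads the load evenly, ending with both hosts at load 5.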

    Throughput-driven floorplanning with wire pipelining

    The size of future high-performance SoCs is such that the time-of-flight of wires connecting distant pins in the layout can be much higher than the clock period. To keep the frequency as high as possible, the wires may be pipelined. However, the insertion of flip-flops may alter the throughput of the system due to the presence of loops in the logic netlist. In this paper, we address the problem of floorplanning a large design in which long interconnects are pipelined, by inserting the throughput into the cost function of a tool based on simulated annealing. The results obtained on a series of benchmarks are then validated using a simple router that breaks long interconnects by suitably placing flip-flops along the wires.
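The quantities involved can be sketched as follows, under assumed units (wire delay modeled as length divided by signal velocity) and assumed cost weights; the paper's annealer works on a real floorplan representation, whereas this only illustrates how pipeline depth and loop throughput can enter an annealing cost function.

```python
import math

def wire_flipflops(wire_length, velocity, clock_period):
    """Flip-flops needed so each wire segment's time-of-flight fits one
    clock period (time-of-flight = length / velocity). Units are assumed."""
    flight_time = wire_length / velocity
    return max(0, math.ceil(flight_time / clock_period) - 1)

def loop_throughput(loop_registers, inserted_flipflops):
    """Extra flip-flops lengthen a feedback loop without adding tokens,
    cutting sustained throughput to registers / (registers + inserted)."""
    return loop_registers / (loop_registers + inserted_flipflops)

def floorplan_cost(area, wirelength, worst_loop_throughput,
                   w_area=1.0, w_wire=0.1, w_tp=5.0):
    """Annealing cost: penalize area, wirelength, and throughput loss.
    The weights are illustrative assumptions, not the paper's values."""
    return w_area * area + w_wire * wirelength + w_tp * (1.0 - worst_loop_throughput)
```

For example, a wire whose time-of-flight spans three clock periods needs two pipeline flip-flops, and a two-register loop that absorbs two of them drops to half throughput.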

    Refined AFC-Enabled High-Lift System Integration Study

    A prior trade study established the effectiveness of using Active Flow Control (AFC) for reducing the mechanical complexities associated with a modern high-lift system without sacrificing aerodynamic performance at low-speed flight conditions representative of takeoff and landing. The current technical report expands on this prior work in two ways: (1) a refined conventional high-lift system based on the NASA Common Research Model (CRM) is presented that is more representative of modern commercial transport aircraft in terms of stall characteristics and maximum Lift/Drag (L/D) ratios at takeoff and landing-approach flight conditions; and (2) the design trade space for AFC-enabled high-lift systems is expanded to explore a wider range of options for improving their efficiency. The refined conventional high-lift CRM (HL-CRM) concept features leading-edge slats and slotted trailing-edge flaps with Fowler motion. For the current AFC-enhanced high-lift system trade study, the refined conventional high-lift system is simplified by substituting simply-hinged trailing-edge flaps for the slotted single-element flaps with Fowler motion. The high-lift performance of these two high-lift CRM variants is established using Computational Fluid Dynamics (CFD) solutions to the Reynolds-Averaged Navier-Stokes (RANS) equations. These CFD assessments identify the high-lift performance that needs to be recovered through AFC for the CRM variant with the lighter and mechanically simpler high-lift system to match the performance of the conventional high-lift system. In parallel with the conventional high-lift concept development, parametric studies using CFD guided the development of an effective and efficient AFC-enabled simplified high-lift system. This included parametric trailing-edge flap geometry studies addressing the effects of flap chord length and flap deflection.
    As for the AFC implementation, scaling effects (i.e., wind-tunnel versus full-scale flight conditions) are addressed, as are AFC architecture aspects such as AFC unit placement, number of AFC units, operating pressures, mass flow rates, and steady versus unsteady AFC applications. These efforts led to the development of a novel traversing AFC actuation concept that is efficient in that it reduces the AFC mass flow requirements by as much as an order of magnitude compared to previous AFC technologies, and is predicted to be effective in driving the aerodynamic performance of a mechanically simplified high-lift system close to that of the reference conventional high-lift system. Conceptual system integration studies were conducted for the AFC-enhanced high-lift concept applied to a NASA Environmentally Responsible Aviation (ERA) reference configuration, the so-called ERA-0003 concept. The results from these design integration assessments identify overall system performance improvement opportunities over conventional high-lift systems that suggest the viability of further technology maturation efforts for AFC-enabled high-lift flap systems. To that end, technical challenges associated with the application of AFC-enabled high-lift systems to modern transonic commercial transports are identified for future technology maturation efforts.

    On k-Convex Polygons

    We introduce a notion of k-convexity and explore polygons in the plane that have this property. Polygons which are k-convex can be triangulated with fast yet simple algorithms. However, recognizing them in general is a 3SUM-hard problem. We give a characterization of 2-convex polygons, a particularly interesting class, and show how to recognize them in O(n log n) time. A description of their shape is given as well, which leads to Erdős-Szekeres type results regarding subconfigurations of their vertex sets. Finally, we introduce the concept of generalized geometric permutations, and show that their number can be exponential in the number of 2-convex objects considered.
    Comment: 23 pages, 19 figures
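For intuition, ordinary convexity (the k = 1 base case of this hierarchy, where every line meets the polygon in at most one segment) can be checked in linear time by verifying that consecutive edges always turn the same way; the general k-convexity recognition problem is 3SUM-hard, as the abstract notes. A minimal sketch:

```python
def is_convex(polygon):
    """True iff the polygon (a list of (x, y) vertices in either
    orientation) is convex, i.e. 1-convex in the paper's hierarchy."""
    def cross(o, a, b):
        # z-component of (a - o) x (b - o): its sign gives the turn direction.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    n = len(polygon)
    turn_signs = set()
    for i in range(n):
        c = cross(polygon[i], polygon[(i + 1) % n], polygon[(i + 2) % n])
        if c != 0:  # ignore collinear triples
            turn_signs.add(c > 0)
    # Convex iff all non-degenerate turns share one orientation.
    return len(turn_signs) <= 1
```

A dent introduces turns of both signs, so `is_convex` rejects any non-convex polygon regardless of vertex orientation.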

    Embedding Graphs under Centrality Constraints for Network Visualization

    Visual rendering of graphs is a key task in the mapping of complex network data. Although most graph drawing algorithms emphasize aesthetic appeal, certain applications such as travel-time maps place more importance on visualizing structural network properties. The present paper advocates two graph embedding approaches that incorporate centrality constraints to comply with node hierarchy. The problem is first formulated as constrained multi-dimensional scaling (MDS), and it is solved via block coordinate descent iterations with successive approximations and guaranteed convergence to a KKT point. In addition, a regularization term enforcing graph smoothness is incorporated with the goal of reducing edge crossings. A second approach leverages the locally linear embedding (LLE) algorithm, which assumes that the graph encodes data sampled from a low-dimensional manifold. Closed-form solutions to the resulting centrality-constrained optimization problems are determined, yielding meaningful embeddings. Experimental results demonstrate the efficacy of both approaches, especially for visualizing large networks on the order of thousands of nodes.
    Comment: Submitted to IEEE Transactions on Visualization and Computer Graphics
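To make the objective concrete, here is a minimal sketch of unconstrained MDS stress minimization by plain gradient descent. The paper's centrality constraints, block-coordinate-descent solver, and smoothness regularizer are not reproduced; the function names, step size, and iteration budget are assumptions for illustration.

```python
import math
import random

def mds_stress(pos, dists):
    """Raw stress: squared mismatch between embedded and target distances."""
    n = len(pos)
    return sum((math.dist(pos[i], pos[j]) - dists[i][j]) ** 2
               for i in range(n) for j in range(i + 1, n))

def embed_graph(dists, dim=2, iters=300, lr=0.05, seed=0, init=None):
    """Minimize raw MDS stress by gradient descent (illustrative only)."""
    n = len(dists)
    rng = random.Random(seed)
    pos = ([list(p) for p in init] if init else
           [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n)])
    for _ in range(iters):
        grad = [[0.0] * dim for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                d = math.dist(pos[i], pos[j]) or 1e-9  # avoid divide-by-zero
                coef = 2.0 * (d - dists[i][j]) / d
                for k in range(dim):
                    grad[i][k] += coef * (pos[i][k] - pos[j][k])
        for i in range(n):
            for k in range(dim):
                pos[i][k] -= lr * grad[i][k]
    return pos
```

Given the all-ones distance matrix of a triangle, the iterates converge to a near-equilateral layout with near-zero stress; the paper's contribution is doing this subject to centrality constraints on node placement.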

    Outlier Detection for Mixed Model with Application to RNA-Seq Data

    Extracting messenger RNA (mRNA) molecules using oligo-dT probes targeting the Poly(A) tail is common in RNA-sequencing (RNA-seq) experiments. This approach, however, is limited when the specimen is profoundly degraded or formalin-fixed, such that either the majority of mRNAs have lost their Poly(A) tails or the oligo-dT probes do not anneal with the formalin-altered adenines. For this problem, a new protocol called capture RNA sequencing was developed using probes for target sequences, which gives unbiased estimates of RNA abundance even when the specimens are degraded. However, despite the effectiveness of capture sequencing, mRNA purification by the traditional Poly(A) protocol still underlies most reference libraries. A bridging mechanism that makes the two types of measurements comparable is needed for data integration and efficient use of information. In the first project, we developed an optimization algorithm that was later applied to outlier detection in a linear mixed model for data integration. In particular, we minimized the sum of truncated convex functions, a problem often encountered in models with an L0 penalty. The solution is exact in one-dimensional and two-dimensional spaces. For higher-dimensional problems, we applied the algorithm in a coordinate descent fashion. Although global optimality is compromised, this approach generates local solutions with much higher efficiency. In the second project, we investigated the differences between Poly(A) libraries and capture sequencing libraries. We showed that without conversion, directly merging the two types of measurements leads to biases in subsequent analyses. A practical solution was to use a linear mixed model to predict one type of measurement based on the other. The predicted values based on this approach have high correlations, low errors and high efficiency compared with those based on the fixed model.
    Moreover, the procedure eliminates false positive findings and biases introduced by the technology differences between the two measurements. In the third project, we noted outlying observations and outlying random effects when fitting the mixed model. Because they lead to the discovery of dysfunctional probes and batch effects, we developed an algorithm that screens for the outliers and provides a robust estimation. Specifically, we modified the mean-shift model with variable selection using L0 penalties, first introduced by Gannaz (2007), McCann and Welsch (2007) and She and Owen (2012). By incorporating the optimization method proposed in the first project, the algorithm becomes scalable and yields exact solutions for low-dimensional problems. In particular, under the assumption of normality, there exist analytic expressions for the penalty parameters. In simulation studies, we showed that the proposed algorithm attained reliable outlier detection, delivered robust estimation and achieved efficient computation.
    PhD, Biostatistics
    University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/147613/1/ltzuying_1.pdf
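The mean-shift idea can be sketched in its simplest form: alternate between estimating the mean from currently-unflagged points and hard-thresholding residuals (the action of an L0 penalty). This is a toy one-dimensional illustration only; the dissertation's algorithm handles mixed models, outlying random effects, and exact low-dimensional solutions, none of which appear here, and the function names and threshold are assumptions.

```python
import statistics

def mean_shift_outliers(y, threshold, max_iter=50):
    """Flag observations whose residual from the iteratively re-estimated
    mean exceeds `threshold`. Hard-thresholding plays the role of the
    L0 penalty on the mean-shift terms. Illustrative sketch only."""
    mu = statistics.fmean(y)
    outliers = set()
    for _ in range(max_iter):
        inliers = [v for i, v in enumerate(y) if i not in outliers]
        if not inliers:           # degenerate case: everything flagged
            break
        mu = statistics.fmean(inliers)      # refit mean on clean points
        flagged = {i for i, v in enumerate(y) if abs(v - mu) > threshold}
        if flagged == outliers:   # converged: flags stopped changing
            break
        outliers = flagged
    return mu, sorted(outliers)
```

On data clustered near 1 with a single gross value, the procedure flags the gross value and recovers a mean close to 1, whereas the plain sample mean would be pulled far off.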