29 research outputs found

    Stochastic Dynamic Programming and Stochastic Fluid-Flow Models in the Design and Analysis of Web-Server Farms

    Get PDF
    A Web-server farm is a specialized facility designed to house Web servers catering to one or more Internet-facing Web sites. In this dissertation, a stochastic dynamic programming technique is used to obtain the optimal admission control policy for different classes of customers, and stochastic fluid-flow models are used to compute the performance measures in the network. The two types of network traffic considered in this research are streaming (guaranteed bandwidth per connection) and elastic (shares the available bandwidth equally among connections). We first obtain the optimal admission control policy using stochastic dynamic programming, in which, based on the number of requests of each type being served, a decision is made whether to admit or deny an incoming request. In this subproblem, we consider a server with fixed bandwidth capacity, which allocates the requested bandwidth to the streaming requests and divides all of the remaining bandwidth equally among the elastic requests. The performance metric of interest in this case is the blocking probability of streaming traffic, which is computed in order to provide Quality of Service (QoS) guarantees. Next, we obtain bounds on the expected waiting time in the system for elastic requests that enter the system. This is done at the server level in such a way that the total available bandwidth for the requests is constant. Trace data are converted to an ON-OFF source, and fluid-flow models are used for this analysis. The results are compared with both the mean waiting time obtained by simulating real data and the expected waiting time obtained using traditional queueing models. Finally, we consider the network of servers and routers within the Web farm, where data from the servers flow and merge before being transmitted to the requesting users via the Internet. We compute the waiting time of the elastic requests at intermediate and edge nodes by obtaining the distribution of the outflow of the upstream node. This outflow distribution is obtained using a methodology that minimizes the deviations from the constituent inflows. This analysis also helps us compute waiting times at different bandwidth capacities, and hence obtain a suitable bandwidth that satisfies the QoS guarantees. This research helps in obtaining performance measures for different traffic classes at a Web-server farm so as to be able to promise or provide QoS guarantees, while at the same time utilizing the resources of the server farm efficiently, thereby reducing operational costs and increasing energy savings.
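The admission-control subproblem described above can be illustrated with a small value-iteration sketch. The model below is a deliberately simplified, hypothetical stand-in (a single streaming class with Poisson arrivals and exponential service, a per-connection penalty standing in for the congestion imposed on elastic traffic, and illustrative constants such as `ADMIT_REWARD`); it is not the dissertation's actual formulation, only a demonstration of how a state-dependent admit/deny policy falls out of a dynamic program.

```python
# Hypothetical sketch: admit/deny decisions for streaming requests via
# value iteration on a uniformized, discounted MDP. All parameters are
# illustrative, not taken from the dissertation.
CAPACITY = 5          # max concurrent streaming connections
LAM, MU = 2.0, 1.0    # Poisson arrival rate and exponential service rate
ADMIT_REWARD = 1.0    # reward for serving a streaming request
ELASTIC_COST = 0.3    # per-connection penalty for squeezing elastic traffic
GAMMA = 0.95          # discount factor

def value_iteration(iters=500):
    V = [0.0] * (CAPACITY + 1)        # V[n]: value with n connections active
    rate = LAM + CAPACITY * MU        # uniformization constant
    for _ in range(iters):
        newV = [0.0] * (CAPACITY + 1)
        for n in range(CAPACITY + 1):
            # On an arrival, choose the better of admitting or rejecting.
            admit = (ADMIT_REWARD - ELASTIC_COST * (n + 1) + V[n + 1]
                     if n < CAPACITY else float("-inf"))
            arrival = max(admit, V[n])
            # On a departure (rate n * MU), one connection leaves; the
            # remaining (CAPACITY - n) * MU is a fictitious self-loop.
            depart = V[max(n - 1, 0)]
            newV[n] = GAMMA * (LAM * arrival + n * MU * depart
                               + (CAPACITY - n) * MU * V[n]) / rate
        V = newV
    # Extract the policy: admit whenever the admit branch is no worse.
    policy = ["admit" if n < CAPACITY and
              ADMIT_REWARD - ELASTIC_COST * (n + 1) + V[n + 1] >= V[n]
              else "reject" for n in range(CAPACITY + 1)]
    return V, policy

V, policy = value_iteration()
print(policy)
```

The resulting policy is state-dependent: whether an incoming request is admitted depends on how many connections are already being served, which is the qualitative structure the abstract describes.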

    Spatially and Temporally Directed Noise Cancellation Using Federated Learning

    Get PDF
    Machine learning models can be trained to cancel noise of diverse types or spectral characteristics, e.g., traffic noise, background chatter, etc. Such models are trained on data that includes labeled noise waveforms, which is an expensive and time-consuming procedure. Further, the effectiveness of such machine learning models is limited in canceling types of noise absent from the training data. Trained models also occupy significant amounts of memory, which limits their use in consumer devices. This disclosure describes the use of federated learning techniques to train noise-canceling models locally at diverse device locations and times. With user permission, the trained models are tagged with timestamp and location, such that when a user device has a time or location matching a particular noise cancellation model, that model is provided to the device. Noise cancellation on the user device is then performed with a compact machine learning model that is suited to the time and location of the device.
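The tag-and-match serving step described above can be sketched as a small registry lookup. Everything here is illustrative (the class `ModelTag`, the function `match_model`, the region labels, and the hour ranges are invented for the example); the disclosure does not specify a concrete API.

```python
# Hypothetical sketch of matching a device's time and location to a
# noise-cancellation model trained for that context. Names and values
# are illustrative only.
from dataclasses import dataclass

@dataclass
class ModelTag:
    name: str
    region: str          # coarse location label; "any" matches everywhere
    hour_start: int      # local-hour window when this noise profile applies
    hour_end: int

REGISTRY = [
    ModelTag("rush-hour-traffic", "downtown", 7, 10),
    ModelTag("evening-chatter", "cafe-district", 18, 23),
    ModelTag("generic", "any", 0, 24),
]

def match_model(region: str, hour: int) -> ModelTag:
    """Return the first (most specific) model whose tag matches the
    device's current region and local hour."""
    for tag in REGISTRY:
        if tag.region in (region, "any") and tag.hour_start <= hour < tag.hour_end:
            return tag
    return REGISTRY[-1]  # fall back to the generic model

print(match_model("downtown", 8).name)       # rush-hour-traffic
print(match_model("cafe-district", 12).name) # generic (outside the window)
```

In a federated setting, the registry entries would be models trained on-device and uploaded (with permission) along with their time/location tags, so only the compact matching model is pushed back to each device.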

    Squashed embedding of E-R schemas in hypercubes

    Full text link
    We have been investigating an approach to parallel database processing based on treating Entity-Relationship (E-R) schema graphs as dataflow graphs. A prerequisite is to find appropriate embeddings of the schema graphs into a processor graph, in this case a hypercube. This paper studies a class of adjacency-preserving embeddings that map a node in the schema graph into a subcube (relaxed squashed, or RS, embeddings) or into adjacent subcubes (relaxed extended squashed, or RES, embeddings) of a hypercube. The mapping algorithm is motivated by the technique used for state assignment in asynchronous sequential machines. In general, the dimension of the cube required for squashed embedding of a graph is called the weak cubical dimension, or WCD, of the graph. The RES embedding provides an RES-WCD of O(⌈log2 n⌉) for a completely connected graph, Kn, and RS embedding provides an RS-WCD of O(⌈log2 n⌉ + ⌈log2 m⌉) for a completely connected bigraph, Km,n. Typical E-R graphs are incompletely connected bigraphs. An algorithm for embedding incomplete bigraphs is presented. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/28651/1/0000467.pd
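The idea of mapping graph nodes to subcubes can be illustrated with the classic squashed-cube addressing of a complete graph: each node of Kn receives an address over {0, 1, *}, where '*' (don't-care) positions define a subcube of the hypercube, and the "conflict distance" between two addresses (positions where one has 0 and the other 1) equals the graph distance. The sketch below is the well-known Graham-Pollak addressing shown for intuition; it is not the paper's RS/RES algorithm, and it uses n-1 coordinates rather than the O(⌈log2 n⌉) the paper achieves.

```python
# Squashed-code addressing of the complete graph K_n: node i gets
# '1'*i + '0' + '*'*(n-2-i), and the last node is all ones. Any two
# addresses conflict (0 vs 1) in exactly one position, matching the
# unit graph distance in K_n.

def kn_addresses(n):
    addrs = ['1' * i + '0' + '*' * (n - 2 - i) for i in range(n - 1)]
    addrs.append('1' * (n - 1))
    return addrs

def conflict_distance(a, b):
    """Count positions where one address has '0' and the other '1'."""
    return sum(1 for x, y in zip(a, b) if {x, y} == {'0', '1'})

addrs = kn_addresses(5)
print(addrs)  # ['0***', '10**', '110*', '1110', '1111']
assert all(conflict_distance(addrs[i], addrs[j]) == 1
           for i in range(5) for j in range(i + 1, 5))
```

Each '*' halves nothing away: an address with k don't-cares names a k-dimensional subcube, which is the sense in which a schema-graph node occupies a subcube rather than a single processor.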

    Ancestral Inference and the Study of Codon Bias Evolution: Implications for Molecular Evolutionary Analyses of the Drosophila melanogaster Subgroup

    Get PDF
    Reliable inference of ancestral sequences can be critical to identifying both patterns and causes of molecular evolution. Robustness of ancestral inference is often assumed among closely related species, but tests of this assumption have been limited. Here, we examine the performance of inference methods for data simulated under scenarios of codon bias evolution within the Drosophila melanogaster subgroup. Genome sequence data for multiple, closely related species within this subgroup make it an important system for studying molecular evolutionary genetics. The effects of asymmetric and lineage-specific substitution rates (i.e., varying levels of codon usage bias and departures from equilibrium) on the reliability of ancestral codon usage were investigated. Maximum parsimony inference, which has been widely employed in analyses of Drosophila codon bias evolution, was compared to an approach that attempts to account for uncertainty in ancestral inference by weighting ancestral reconstructions by their posterior probabilities. The latter approach employs maximum likelihood estimation of rate and base composition parameters. For equilibrium and most non-equilibrium scenarios that were investigated, the probabilistic method appears to generate reliable ancestral codon bias inferences for molecular evolutionary studies within the D. melanogaster subgroup. These reconstructions are more reliable than parsimony inference, especially when codon usage is strongly skewed. However, inference biases are considerable for both methods under particular departures from stationarity (i.e., when adaptive evolution is prevalent). Reliability of inference can be sensitive to branch lengths, asymmetry in substitution rates, and the locations and nature of lineage-specific processes within a gene tree. Inference reliability, even among closely related species, can be strongly affected by (potentially unknown) patterns of molecular evolution in lineages ancestral to those of interest.
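The contrast between parsimony and posterior-weighted reconstruction can be seen in a toy example. The sketch below uses a symmetric two-state model on a two-leaf tree, which is far simpler than the codon models and ML-estimated parameters the study uses; it only illustrates why branch lengths matter to the posterior while parsimony ignores them.

```python
# Toy two-state ancestral inference: compare the parsimony answer with a
# branch-length-aware posterior. The model (symmetric two-state, given
# branch lengths, uniform prior) is a hypothetical simplification.
import math

def p_same(t):
    """Probability the state is unchanged after branch length t
    under a symmetric two-state substitution model."""
    return 0.5 + 0.5 * math.exp(-2.0 * t)

def ancestor_posterior(leaf1, leaf2, t1, t2, pi=(0.5, 0.5)):
    """Posterior over the ancestral state given two leaf states."""
    like = []
    for a in (0, 1):
        l1 = p_same(t1) if a == leaf1 else 1 - p_same(t1)
        l2 = p_same(t2) if a == leaf2 else 1 - p_same(t2)
        like.append(pi[a] * l1 * l2)
    z = sum(like)
    return [x / z for x in like]

# Agreeing leaves: both methods confidently infer the shared state.
print(ancestor_posterior(0, 0, 0.1, 0.1))
# Conflicting leaves: parsimony is a tie, but the posterior leans toward
# the state observed on the shorter (less mutable) branch.
print(ancestor_posterior(0, 1, 0.05, 0.5))
```

The second call is the interesting case: parsimony cannot break the tie, whereas the likelihood-based posterior uses the branch lengths to do so, which is the kind of information the probabilistic method in the study exploits.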

    Dataflow query processing and optimization.

    Full text link
    This research presents a novel approach to parallel database processing based on treating database schema graphs as dataflow graphs. The problems addressed here include obtaining lower bounds on adjacency-preserving squashed embeddings of certain schema graphs (viz., E-R schema graphs) in a hypercube, design of a dataflow processing strategy, estimation of intermediate result sizes, and query optimization, all in the context of dataflow query processing. The dataflow processing strategy proposed here generates the query results in a single pass. Dataflow algorithms proposed in the past generally employ a two-pass strategy: in the second pass, a central coordinating processor generates the final result by combining the data generated by the various processors in the multicomputer system during the first pass. This strategy may result in the coordinating processor becoming a bottleneck. In the approach taken here, logical schema graphs are used as the physical representational structures. Nodes in the schema graph are mapped to a set of processors in a shared-nothing multicomputer system to support parallel database processing. Nodes in the schema graph correspond to entity sets or object classes and are assumed to be directly implemented as such. Given a query graph, which is a subgraph of the schema graph, a key problem is determining the optimal dataflow directions in this graph. The problem of determining the optimal dataflow direction is formulated for single queries. The total number of schedules is determined for chain, tree, and simple cyclic query graphs. A method is presented that enumerates all possible schedules of a general query graph. However, since the total number of schedules increases exponentially with the number of edges in the query graph, it is not feasible to determine the optimal schedule for a general query graph. Several heuristics are proposed to generate sub-optimal schedules. These include three heuristics for generating an initial schedule and eight heuristics for generating new schedules from a given schedule. Experiments are conducted to compare different heuristic combinations on a wide range of queries. The experiments show that the heuristics that generate chain schedules (or schedules that are "close" to a chain schedule) have the lowest cost. Ph.D. dissertation, Computer, Information and Control Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/105899/1/9226903.pdf (restricted to UM users only).
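Exhaustive schedule enumeration of the kind the dissertation performs before turning to heuristics can be sketched as follows. The validity rule used here (a schedule is an orientation of the query-graph edges with exactly one sink, so the whole result flows to a single processor in one pass) is an assumption made for illustration; the dissertation's precise definition of a schedule may differ.

```python
# Sketch: enumerate dataflow-direction assignments on a query graph and
# keep those with exactly one sink (a node with no outgoing edge). The
# single-sink validity rule is an illustrative assumption.
from itertools import product

def schedules(edges, nodes):
    """Yield edge orientations of the query graph with exactly one sink."""
    for dirs in product((0, 1), repeat=len(edges)):
        oriented = [(u, v) if d == 0 else (v, u)
                    for (u, v), d in zip(edges, dirs)]
        sinks = [n for n in nodes if all(u != n for u, v in oriented)]
        if len(sinks) == 1:
            yield oriented

# A 4-node chain A-B-C-D: each node can serve as the unique sink (all
# edges oriented toward it), giving 4 valid schedules out of 2**3 = 8
# orientations.
chain = [("A", "B"), ("B", "C"), ("C", "D")]
print(len(list(schedules(chain, "ABCD"))))  # 4
```

The exponential blow-up the abstract mentions is visible directly: the loop ranges over 2^|edges| orientations, which is why heuristic search replaces enumeration for general query graphs.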

    Spiking neural network based approach to EEG signal analysis

    No full text
    The research described in this thesis presents a new classification technique for continuous electroencephalographic (EEG) recordings, based on a network of spiking neurons. Analysis of the signals is performed on ensemble EEG, and the task of the neural network is to identify the P300 component in the signals. The network employs leaky integrate-and-fire neurons as nodes in a multi-layered structure. The method involves the formation of multiple weak classifiers that vote, and the collective results are used for the final classification. EThOS - Electronic Theses Online Service, United Kingdom.
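A leaky integrate-and-fire node of the kind the thesis uses can be sketched in a few lines. The parameter values (`tau`, `threshold`, the constant drive) are illustrative, not taken from the thesis; the point is only the integrate, leak, fire, reset cycle.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates the input current, leaks toward rest with time constant tau,
# and emits a spike (then resets) when it crosses threshold. Parameters
# are illustrative.
def lif_spikes(inputs, tau=10.0, threshold=1.0, v_reset=0.0, dt=1.0):
    v, spikes = v_reset, []
    for current in inputs:
        v += dt * (-v / tau + current)  # leaky integration (Euler step)
        if v >= threshold:
            spikes.append(1)
            v = v_reset                 # fire and reset
        else:
            spikes.append(0)
    return spikes

# A constant drive charges the membrane until it fires periodically.
print(lif_spikes([0.3] * 10))
```

In the thesis's setting, many such neurons are arranged in layers, and the spike outputs of multiple weakly trained sub-networks are combined by voting to decide whether a P300 component is present.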

    Identification of essential surgical competencies to be imparted in urological residency: A survey-based study

    No full text
    Introduction: There are variations in the surgical procedures included in urology residency curricula across programs. We conducted a survey of practicing urologists to determine which procedures are considered essential to a core urology residency curriculum. Materials and Methods: A web-based survey was conducted between October 2016 and February 2017 using SurveyMonkey. The questionnaire, comprising a set of five questions, was sent to the members of the Urological Society of India. Respondents were requested to grade 37 of the most common urological procedures (competencies) into three groups: Group A procedures were those the respondent believed were vital for the trainee to learn (must know); Group B procedures were those the respondent thought were essential to acquire (good to know); and Group C procedures were those labeled as desirable to know. Results: A total of 485 responses (15.75%) were received from the 3018 members contacted; 67% of respondents were working in the private sector. Of the 37 listed procedures, 20 received a median weightage of 1, indicating vital clinical competency for the urology curriculum; 15 were identified as "essential to know," while two were identified as "desirable to know." Conclusions: Twenty surgical procedures were identified as "must know" for a urology trainee. The choice of procedures was not affected by the respondent's region or practice type, suggesting a wide consensus.

    Need for a reliable alternative to custom-made implant impression trays: An in vitro study comparing accuracy of custom trays versus specialized aluminum stock tray

    No full text
    Purpose: The aim of the present study was to evaluate and compare the accuracy of implant casts obtained by the open-tray pick-up impression technique using two types of custom-made trays and a specialized aluminum stock impression tray. Materials and Methods: A heat-cure acrylic resin master model was fabricated, and two implants were placed parallel to each other. Ten impressions were made for each group. Polyvinylsiloxane impression material with a single-step putty-wash technique was used for making all the impressions. The resultant casts were compared to the master model with respect to the distances measured between reference points using a stereomicroscope. The data obtained were statistically analyzed using one-way ANOVA, Tukey's post hoc procedures, and t-tests. Results: The mean value obtained was 2.012967 cm (±0.007060) for the Corimplant stock tray, 2.012627 cm (±0.007945) for the autopolymerizing acrylic resin tray, and 2.010279 cm (±0.006832) for the light-cure hybrid composite tray. The P value was calculated to be > 0.05; hence, there was no significant deviation of the observations from the standard value in each group. Conclusion: A statistically insignificant difference was found between the accuracy of casts obtained with the different impression trays. However, light-cure hybrid composite trays showed the best results, followed by autopolymerizing acrylic resin trays and the Corimplant stock tray.