8 research outputs found

    A queueing approach to the latency of decoupled UL/DL with flexible TDD and asymmetric services

    One of the main novelties in 5G is the flexible Time Division Duplex (TDD) frame, which allows the frame structure to adapt to latency requirements. However, this flexibility is not sufficient to support heterogeneous latency requirements, in which different traffic instances have different switching requirements between Uplink (UL) and Downlink (DL). This is visible in a traffic mix of enhanced mobile broadband (eMBB) and ultra-reliable low-latency communications (URLLC). In this paper, we address this problem through the use of a decoupled UL/DL access, where the UL and the DL of a device are not necessarily served by the same base station. The latency gain over coupled access is quantified in the form of queueing sojourn time in a Rayleigh channel, along with an upper bound for critical traffic.
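The abstract does not give the paper's latency formulas, but the flavor of the coupled-vs-decoupled comparison can be sketched with plain M/M/1 sojourn times, E[T] = 1/(μ − λ). All rates below are illustrative assumptions, not values from the paper, and the real model (Rayleigh channel, flexible TDD frame) is far richer than this.

```python
# Hypothetical sketch of the coupled-vs-decoupled latency comparison.
# Model and numbers are assumptions for illustration only.

def mm1_sojourn(lam, mu):
    """Mean sojourn time E[T] = 1/(mu - lam) of a stable M/M/1 queue."""
    assert lam < mu, "queue must be stable (lam < mu)"
    return 1.0 / (mu - lam)

# Coupled access: UL and DL traffic of a device share one base station.
lam_ul, lam_dl, mu = 0.3, 0.4, 1.0
coupled = mm1_sojourn(lam_ul + lam_dl, mu)

# Decoupled access: UL and DL are served by different base stations,
# so each direction sees only its own load.
decoupled_ul = mm1_sojourn(lam_ul, mu)
decoupled_dl = mm1_sojourn(lam_dl, mu)

# Each decoupled direction sees a shorter sojourn time than the shared queue.
print(coupled, decoupled_ul, decoupled_dl)
```

The gain comes purely from load splitting here; the paper additionally accounts for the UL/DL switching constraints of the TDD frame.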

    The Response Times of Priority Classes under Preemptive Resume in M/G/m Queues


    Estimating the waiting time of multi-priority emergency patients with downstream blocking

    To characterize the coupling effect between the patient flow to access the emergency department (ED) and that to access the inpatient unit (IU), we develop a model with two connected queues: one upstream queue for the patient flow to access the ED and one downstream queue for the patient flow to access the IU. Building on this patient flow model, we employ queueing theory to estimate the average waiting time across patients. Using priority-specific wait-time targets, we further estimate the necessary number of ED and IU resources. Finally, we investigate how an alternative way of accessing the ED (Fast Track) impacts the average waiting time of patients as well as the necessary number of ED/IU resources. This model, together with the patient-flow analysis, can help the designer or manager of a hospital make decisions on the allocation of ED/IU resources.
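The two-stage structure can be sketched by applying the standard Erlang-C mean-wait formula for an M/M/c queue to each stage. This is a simplification: it treats the ED and IU stages independently, ignoring the downstream-blocking coupling that is the paper's actual contribution, and all rates and staffing levels below are invented for illustration.

```python
# Hedged sketch: per-stage M/M/c waits via Erlang C, ignoring the
# ED->IU blocking coupling modeled in the paper. Numbers are assumptions.
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Mean queueing delay E[Wq] of an M/M/c queue via the Erlang C formula."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c
    assert rho < 1, "stage must be stable"
    p0_inv = sum(a**k / factorial(k) for k in range(c)) \
             + a**c / (factorial(c) * (1 - rho))
    p_wait = (a**c / (factorial(c) * (1 - rho))) / p0_inv   # P(wait > 0)
    return p_wait / (c * mu - lam)

# Upstream ED stage: 10 arrivals/hour, each bed serves 1.5 patients/hour, 8 beds.
ed_wait = erlang_c_wait(10, 1.5, 8)
# Downstream IU stage: only admitted patients continue (say 3/hour),
# with much longer stays (mean 5 hours) and 20 beds.
iu_wait = erlang_c_wait(3, 0.2, 20)
print(ed_wait, iu_wait)
```

Sweeping `c` at each stage against a priority-specific wait-time target gives a back-of-the-envelope version of the resource-sizing exercise the abstract describes.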

    Essays on operational productivity and customer satisfaction in offshore software projects

    In recent times, both academics and practitioners have increasingly focused on the importance of offshore outsourcing. Analysts estimate that the offshore component of IT services is expected to rise to $70 billion by 2007. Despite this increase, the popular press has cited dissatisfaction among firms that have outsourced software projects to offshore locations. Primary reasons cited for customer dissatisfaction with outsourcing include the increased complexity of managing the relationship, reduced productivity, and reduced operational effectiveness. This issue has not received much academic attention. This dissertation attempts to address this gap in the academic literature by studying the problem from two different perspectives of a software supply chain. The first perspective is effectiveness – where the focus is on managing the internal processes to have a positive impact on customers. This is important, because a satisfied customer is key to a successful and profitable organization. Accordingly, in Chapter 2 of this dissertation, we study the determinants of project performance and customer satisfaction in outsourced offshore software projects. The second perspective is internal efficiency – where the focus is on increasing the efficiency of processes and people, thus leading to increased productivity. Clearly these two perspectives are intertwined. An understanding of factors affecting the productivity of individuals will enable managers to set appropriate goals for team members, improve delivery performance, and ultimately increase customer satisfaction. Chapters 3 and 4 of this dissertation investigate productivity improvement using software maintenance as a context. In Chapter 3, we investigate the role of both individual-level factors, such as overall experience, task variety, and newness of task handled, and team-level factors, such as team size, new team member entry, and team member exit, on individual productivity. 
Next, in Chapter 4, we investigate how productivity can be improved by better allocation of an individual’s effort to tasks that have the following property: the longer it takes to resolve the task, the lower the likelihood that it will be completed successfully.

    Scheduling for today’s computer systems: bridging theory and practice

    Scheduling is a fundamental technique for improving performance in computer systems. From web servers to routers to operating systems, how the bottleneck device is scheduled has an enormous impact on the performance of the system as a whole. Given the immense literature studying scheduling, it is easy to think that we already understand enough about scheduling. But, modern computer system designs have highlighted a number of disconnects between traditional analytic results and the needs of system designers. In particular, the idealized policies, metrics, and models used by analytic researchers do not match the policies, metrics, and scenarios that appear in real systems. The goal of this thesis is to take a step towards modernizing the theory of scheduling in order to provide results that apply to today’s computer systems, and thus ease the burden on system designers. To accomplish this goal, we provide new results that help to bridge each of the disconnects mentioned above. We will move beyond the study of idealized policies by introducing a new analytic framework where the focus is on scheduling heuristics and techniques rather than individual policies. By moving beyond the study of individual policies, our results apply to the complex hybrid policies that are often used in practice. For example, our results enable designers to understand how the policies that favor small job sizes are affected by the fact that real systems only have estimates of job sizes. In addition, we move beyond the study of mean response time and provide results characterizing the distribution of response time and the fairness of scheduling policies. These results allow us to understand how scheduling affects QoS guarantees and whether favoring small job sizes results in large job sizes being treated unfairly. 
Finally, we move beyond the simplified models traditionally used in scheduling research and provide results characterizing the effectiveness of scheduling in multiserver systems and when users are interactive. These results allow us to answer questions about how to design multiserver systems and how to choose a workload generator when evaluating new scheduling designs.
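The core effect the thesis studies – how favoring small job sizes changes response time – can be made concrete with a toy example. For a batch of jobs all present at time 0 on one server, serving shortest-first (SJF) minimizes mean response time; this is a classical textbook fact, not the thesis's model, and the job sizes below are made up.

```python
# Toy illustration (not the thesis's model): favoring small jobs
# reduces mean response time for a batch served on one machine.

def mean_response(jobs):
    """Mean completion time when jobs (service times) run in the given order."""
    t, total = 0.0, 0.0
    for s in jobs:
        t += s          # job finishes when its service completes
        total += t      # its response time is its completion time
    return total / len(jobs)

jobs = [10.0, 1.0, 2.0, 1.0, 5.0]
fcfs = mean_response(jobs)            # serve in arrival order
sjf = mean_response(sorted(jobs))     # favor small jobs first
print(fcfs, sjf)                      # SJF is never worse than FCFS here
```

The thesis's contribution is precisely to move past such idealized single-policy comparisons: real systems only estimate job sizes, care about the response-time distribution and fairness, and run on multiple servers.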

    RISK AND DECISION ANALYSIS OF SPECTRUM USAGE

    The past decades have witnessed an explosion in wireless communications traffic. The static spectrum allocation approach can hardly meet the soaring service requirements. Therefore, different spectrum sharing methodologies have emerged, such as Authorized Spectrum Access, TV White Space, unlicensed usage, etc. A vast body of research demonstrates that spectrum sharing provides flexibility in spectrum access, increases spectrum usage efficiency, and improves spectrum users’ utilities. Despite these advantages, spectrum sharing has been adopted slowly, due in part to the embedded risks. Specifically, each spectrum sharing method leads to different costs, revenue, and Quality of Service (QoS) levels. Depending on their requirements on QoS and profits, spectrum users encounter distinct risks. Meanwhile, risks may not necessarily lead to failure: spectrum users can actively cope with risks through mitigation strategies. Moreover, like any engineering investment, spectrum usage is a decision-making process for spectrum users. Different choices are made based on distinct incentives and limitations. In order to transform spectrum sharing from a radical strategy to commercial reality, it is essential to quantify the risks that associate with each spectrum usage method and understand spectrum users’ decision processes. Consequently, this dissertation focuses on determining expected profits, QoS levels, risks, and mitigation strategies for each spectrum sharing method, and applying a decision model to analyze spectrum users’ choices. In detail, two types of risks are modeled in this dissertation: (1) QoS risks with respect to throughput, and (2) monetary risks in terms of profits. Specifically, QoS risks are quantified by an M/G/C queue. Monetary risks consider costs, revenues, and mitigation strategies. The value of mitigation strategies is determined by the real options approach to reflect the worth of management flexibility. 
    The best spectrum usage method is identified according to decision criteria such as profit maximization and risk minimization. The merit of this dissertation is two-fold. First, it helps spectrum entrants select the most appropriate spectrum sharing method based on the existing spectrum usage environment, the potential of each method, as well as their goals and limitations. Second, it helps regulators, policy makers, and the spectrum market understand spectrum entrants’ behavior and create interventions in order to obtain favorable outcomes.
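The abstract does not specify the real-options model used to value mitigation strategies, but the standard building block is a one-period binomial option to defer. The sketch below uses that textbook construction with invented numbers, purely to show how "management flexibility" can be worth more than committing immediately.

```python
# Hedged sketch: one-period binomial option-to-defer, a generic
# real-options building block. Not the dissertation's model;
# all values are illustrative assumptions.

def defer_option_value(v_up, v_down, cost, r, p):
    """Value today of waiting one period before paying `cost` to enter.
    v_up / v_down: project value in the up/down state next period;
    r: one-period risk-free rate; p: risk-neutral up probability."""
    payoff_up = max(v_up - cost, 0.0)      # invest only if it pays off
    payoff_down = max(v_down - cost, 0.0)  # otherwise walk away
    return (p * payoff_up + (1 - p) * payoff_down) / (1 + r)

# Committing to a spectrum sharing method today vs. keeping the option open.
invest_now = 110 - 100                     # NPV of committing immediately
wait = defer_option_value(150, 80, 100, 0.05, 0.5)
print(invest_now, wait)                    # here, flexibility is worth more
```

The asymmetry – flexibility truncates the downside while keeping the upside – is exactly why a plain NPV comparison undervalues mitigation strategies.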