14 research outputs found

    Half-Filled Lowest Landau Level on a Thin Torus

    We solve a model that describes an interacting electron gas in the half-filled lowest Landau level on a thin torus, with a radius of the order of the magnetic length. The low-energy sector consists of non-interacting, one-dimensional, neutral fermions. The ground state, which is homogeneous, is the Fermi sea obtained by filling the negative-energy states, and the excited states are gapless neutral excitations out of this one-dimensional sea. Although the limit considered is extreme, the solution bears a striking resemblance to the composite fermion description of the bulk ν = 1/2 state: the ground state is homogeneous and the excitations are neutral and gapless. This suggests a one-dimensional Luttinger liquid description, with possible observable effects in transport experiments, of the bulk state, provided it develops continuously from the thin-torus state as the radius increases.

    Fair Resource Allocation in Macroscopic Evacuation Planning Using Mathematical Programming: Modeling and Optimization

    Evacuation is essential in the case of natural and man-made disasters such as hurricanes, nuclear disasters, fire accidents, terrorism, and epidemics. Haphazard evacuation plans can increase risks and incur greater losses. Hence, numerous simulation and mathematical programming models have been developed over the past few decades to help transportation planners make decisions that reduce costs and protect lives. However, the dynamic transportation process is inherently complex, so modeling it can be challenging and computationally demanding. The objective of this dissertation is to build a balanced model that reflects the realism of the dynamic transportation process while remaining computationally tractable enough for decision-makers to implement in practice. At the same time, the users of the transportation network require reasonable travel times to reach their destinations. This dissertation introduces a novel framework in the fields of fairness in network optimization and evacuation planning to provide better insight into the evacuation process and assist with decision making. The user of the transportation network is a critical element in this research; thus, fairness and efficiency are the two primary objectives addressed in this work, subject to the limited capacity of the roads in the network. Specifically, an approximation approach to the max-min fairness (MMF) problem is presented that requires less computation time and produces high-quality output compared to the original algorithm. In addition, a new algorithm is developed to find the MMF resource allocation in problems with a nonconvex structure. MMF is the fairness policy used in this research because it considers both fairness and efficiency while giving priority to fairness. Furthermore, a new dynamic evacuation modeling approach is introduced that reports more information about the evacuees than conventional evacuation models do, such as their travel time, evacuation time, and departure time. The contribution of this dissertation is thus in the two areas of fairness and evacuation. The first part concerns fairness. The objective in MMF is to allocate limited resources fairly among multiple demands while still utilizing the resources efficiently. Fairness and efficiency are conflicting objectives, so they are translated into a bi-objective mathematical programming model and solved using the ϵ-constraint method introduced by Chankong and Haimes (1983). Although the solution is an approximation to the MMF, the model produces quality solutions, when ϵ is properly selected, in less computation time than the progressive-filling algorithm (PFA). In addition, a new algorithm called the θ-progressive-filling algorithm is developed that finds the MMF resource allocation for general problems, including those with a nonconvex structure. The second part of the contribution is in evacuation modeling. Common dynamic evacuation models lack a piece of information essential for achieving fairness: the time each evacuee or group of evacuees spends in the network. Most evacuation models compute only the total time for all evacuees to move from the endangered zone to a safe destination. This lack of information about the users of the transportation network motivates a new optimization model that reports more information about them.
The model finds the travel time, evacuation time, departure time, and selected route for each group of evacuees. Given that the travel time function is a nonlinear convex function of the traffic volume, the function is linearized through a piecewise linear approximation. The resulting model is a mixed-integer linear programming (MILP) model of high complexity and hence cannot solve large-scale problems. The complexity was reduced by introducing a linear programming (LP) version of the full model, which significantly lowers the computational burden while preserving the exact output. In addition, the new θ-progressive-filling algorithm was applied to the evacuation model to find a fair and efficient evacuation plan; the algorithm is also used to identify the optimal routes in the transportation network. Moreover, the robustness of the evacuation model was tested against demand uncertainty to observe its behavior when demand is uncertain. Finally, the robustness of the model is tested when the traffic flow is uncontrolled; in this case, the model's only decision is to distribute the evacuees over routes, with no control over departure times.
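
Since the dissertation uses the progressive-filling algorithm (PFA) as the MMF baseline, a minimal sketch of the classic water-filling idea may help. The link-capacity setup, function names, and toy instance below are illustrative assumptions, not the dissertation's θ-progressive-filling variant:

```python
# Illustrative progressive filling for max-min fairness (MMF).
# All active demands grow their rates in lockstep; when a link saturates,
# every demand crossing it is frozen at its current rate.

def progressive_filling(demands, capacity, eps=1e-12):
    """demands: name -> set of links used; capacity: link -> capacity."""
    rate = {d: 0.0 for d in demands}
    active = set(demands)
    residual = dict(capacity)

    def load(link):
        # Number of active demands crossing this link.
        return sum(1 for d in active if link in demands[d])

    while active:
        increments = [residual[l] / load(l) for l in residual if load(l) > 0]
        if not increments:
            break
        delta = min(increments)  # largest uniform raise before a link fills
        for d in active:
            rate[d] += delta
        for l in residual:
            residual[l] -= delta * load(l)
        saturated = {l for l in residual if load(l) > 0 and residual[l] <= eps}
        active -= {d for d in active if demands[d] & saturated}
    return rate

# Toy instance: "b" shares the tight link L2 with "c", so both stop at 2.0,
# while "a" keeps the remaining capacity of L1.
print(progressive_filling(
    {"a": {"L1"}, "b": {"L1", "L2"}, "c": {"L2"}},
    {"L1": 10.0, "L2": 4.0},
))  # -> {'a': 8.0, 'b': 2.0, 'c': 2.0}
```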
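
Separately, the piecewise-linear treatment of the convex travel-time function can be conveyed with tangent cuts on a BPR-style delay curve; the BPR constants and tangent points here are assumptions for illustration, not the dissertation's exact calibration:

```python
# Hypothetical sketch: replace a convex travel-time curve t(v) by the max of
# tangent lines, so constraints "t >= a_i * v + b_i" can enter an LP/MILP.

T0, CAP, ALPHA, BETA = 1.0, 100.0, 0.15, 4  # assumed BPR parameters

def bpr(v):
    """Convex BPR-style travel time as a function of volume v."""
    return T0 * (1 + ALPHA * (v / CAP) ** BETA)

def bpr_slope(v):
    return T0 * ALPHA * BETA * v ** (BETA - 1) / CAP ** BETA

# Tangent cuts at a few volumes; their max underestimates the convex curve
# everywhere and is exact at the tangent points.
cuts = [(bpr_slope(v), bpr(v) - bpr_slope(v) * v) for v in (0, 50, 100, 150)]

def t_approx(v):
    return max(a * v + b for a, b in cuts)

for v in (25, 75, 125):
    print(v, round(bpr(v), 4), round(t_approx(v), 4))
```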

    Geometrical approach to bosonization of D>1 dimensional (non)-Fermi liquids

    We discuss an approach to higher-dimensional bosonization of interacting fermions based on a picture of a fluctuating Fermi surface. Compared with the linearized "constructive" approach, this method allows one to account for the Fermi surface curvature through non-Gaussian terms in the bosonized Lagrangian. On the basis of this description, we propose a procedure for calculating density response functions beyond the random-phase approximation (RPA). We also formulate a bosonic theory of the compressible metal-like state at the half-filled lowest Landau level and check that, in the Gaussian approximation, it reproduces the RPA results of the gauge theory of Halperin, Lee, and Read.

    Advances in electric power systems: robustness, adaptability, and fairness

    Thesis (Ph.D.), Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 151-157). The electricity industry has been experiencing fundamental changes over the past decade. Two of the most significant driving forces are arguably the integration of renewable energy resources into the electric power system and the creation of deregulated electricity markets. Many new challenges arise. In this thesis, we focus on two important ones: how to reliably operate the power system under high penetration of intermittent and uncertain renewable resources and uncertain demand; and how to design an electricity market that considers both efficiency and fairness. We present some new advances in these directions. In the first part of the thesis, we focus on the first issue in the context of the unit commitment (UC) problem, one of the most critical daily operations of an electric power system. Unit commitment in large-scale power systems faces new challenges of increasing uncertainty from both generation and load. We propose an adaptive robust model for the security-constrained unit commitment problem in the presence of nodal net load uncertainty. We develop a practical solution methodology based on a combination of a Benders-decomposition-type algorithm and outer approximation techniques. We present an extensive numerical study on the real-world large-scale power system operated by ISO New England (ISO-NE). Computational results demonstrate the advantages of the robust model over the traditional reserve adjustment approach in terms of economic efficiency, operational reliability, and robustness to uncertain distributions. In the second part of the thesis, we are concerned with a geometric characterization of the performance of adaptive robust solutions in multi-stage stochastic optimization problems. We study the notion of finite adaptability in a general setting of multi-stage stochastic and adaptive optimization. We show the significant role that geometric properties of uncertainty sets, such as symmetry, play in determining the power of robust and finitely adaptable solutions. We show that a class of finitely adaptable solutions is a good approximation for both the multi-stage stochastic and the adaptive optimization problem. To the best of our knowledge, these are the first approximation results for multi-stage problems in such generality. Moreover, the results and the proof techniques are quite general and extend to include important constraints such as integrality and linear conic constraints. In the third part of the thesis, we focus on how to design an auction and pricing scheme for the day-ahead electricity market that achieves both economic efficiency and fairness. The work is motivated by two outstanding problems in current practice: the uplift problem and the equitable selection problem. The uplift problem is that the electricity payment determined by the electricity price cannot fully recover the production cost (especially the fixed cost) of some committed generators, so the ISOs make side payments to such generators to make up the loss. The equitable selection problem is how to achieve fairness and integrity of the day-ahead auction when choosing from multiple (near-)optimal solutions. We offer a new perspective and propose a family of fairness-based auction and pricing schemes that resolve these two problems.
We present numerical test results using ISO-NE's day-ahead market data. The proposed auction-pricing schemes produce a frontier plot of efficiency versus fairness, which can be used as a valuable decision tool for system operation. By Xu Andy Sun, Ph.D.
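
The master/subproblem alternation behind a Benders-type robust method can be conveyed with a toy finite version. The scenario set, candidate decisions, and cost function below are invented for illustration and stand in for the thesis's MILP master and dualized worst-case subproblem:

```python
# Hypothetical toy: cutting-plane-style loop for min_x max_u f(x, u) over a
# finite uncertainty set, mirroring the master/subproblem alternation above.

def worst_case(x, U, f):
    """Subproblem: find the scenario that hurts decision x the most."""
    return max(U, key=lambda u: f(x, u))

def robust_min(X, U, f, tol=1e-9):
    scenarios = [U[0]]              # master starts with one scenario
    while True:
        # Master: best decision against the scenarios seen so far.
        x = min(X, key=lambda x: max(f(x, u) for u in scenarios))
        lb = max(f(x, u) for u in scenarios)
        # Subproblem: worst scenario for the master's decision.
        u = worst_case(x, U, f)
        ub = f(x, u)
        if ub - lb <= tol:          # no scenario can raise the cost: done
            return x, ub
        scenarios.append(u)         # "cut": add the violating scenario

# Toy instance: choose a generation level x; u is the net-load realization.
X = [i / 10 for i in range(0, 21)]         # candidate decisions
U = [0.6, 1.0, 1.4]                        # net-load scenarios
cost = lambda x, u: x + 5 * max(u - x, 0)  # capacity cost + shortfall penalty
print(robust_min(X, U, cost))              # -> (1.4, 1.4)
```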

    Automatic human face detection in color images

    Automatic human face detection in digital images has been an active area of research over the past decade. Among its numerous applications, face detection plays a key role in face recognition systems for biometric personal identification, face tracking for intelligent human-computer interfaces (HCI), and face segmentation for object-based video coding. Despite significant progress in the field in recent years, detecting human faces in unconstrained and complex images remains a challenging problem in computer vision. An automatic system that possesses a capability similar to the human vision system in detecting faces is still a far-reaching goal. This thesis focuses on the problem of detecting human faces in color images. Although many early face detection algorithms were designed to work on gray-scale images, strong evidence exists to suggest that face detection can be done more efficiently by taking into account the color characteristics of the human face. In this thesis, we present a complete and systematic face detection algorithm that combines the strengths of both analytic and holistic approaches to face detection. The algorithm is developed to detect quasi-frontal faces in complex color images. This face class, which represents typical detection scenarios in most practical applications of face detection, covers a wide range of face poses, including all in-plane rotations and some out-of-plane rotations. The algorithm is organized into a number of cascading stages, including skin region segmentation, face candidate selection, and face verification. In each of these stages, various visual cues are utilized to narrow the search space for faces. In this thesis, we present a comprehensive analysis of skin detection using color pixel classification, and of the effects of factors such as the color space and the color classification algorithm on segmentation performance. We also propose a novel and efficient face candidate selection technique that is based on color-based eye region detection and a geometric face model. This candidate selection technique eliminates the computation-intensive step of window scanning often employed in holistic face detection, and it simplifies the task of detecting rotated faces. Besides various heuristic techniques for face candidate verification, we develop face/nonface classifiers based on the naive Bayesian model and investigate three feature extraction schemes, namely intensity, projection onto a face subspace, and edge-based features. Techniques for improving face/nonface classification are also proposed, including bootstrapping, classifier combination, and the use of contextual information. On a test set of face and nonface patterns, the combination of three Bayesian classifiers has a correct detection rate of 98.6% at a false positive rate of 10%. Extensive testing has shown that the proposed face detector achieves good performance in terms of both detection rate and alignment between the detected faces and the true faces. On a test set of 200 images containing 231 faces taken from the ECU face detection database, the proposed face detector has a correct detection rate of 90.04% and makes 10 false detections. We have found that the proposed face detector is more robust in detecting in-plane rotated faces than existing face detectors.
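
As a concrete illustration of the skin-segmentation front end described above, here is a minimal color-pixel classifier. The fixed YCbCr box follows the widely cited Chai-Ngan thresholds and is an assumption for illustration, not necessarily the thesis's own classifier:

```python
# Illustrative skin-pixel segmentation by color classification: the kind of
# cheap first-stage filter that narrows the search space for faces.
import numpy as np

def rgb_to_ycbcr(rgb):
    """rgb: HxWx3 uint8 array -> Y, Cb, Cr float arrays (ITU-R BT.601)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_mask(rgb):
    """Boolean mask of candidate skin pixels (Chai-Ngan chrominance box)."""
    _, cb, cr = rgb_to_ycbcr(rgb)
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# Usage: connected components of the mask become face-candidate regions
# for the later eye-detection and verification stages.
image = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
mask = skin_mask(image)
print(mask.mean())  # fraction of pixels flagged as skin-colored
```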

    Improved congestion control for packet switched data networks and the Internet

    Congestion control is one of the fundamental issues in computer networks. Without proper congestion control mechanisms there is the possibility of inefficient utilization of resources, ultimately leading to network collapse. Hence congestion control is an effort to adapt the performance of a network to changes in the traffic load without adversely affecting users' perceived utilities. This thesis is a step in the direction of improved network congestion control. Traditionally the Internet has adopted a best-effort policy while relying on an end-to-end mechanism. Complex functions are implemented by end users, keeping the core routers of the network simple and scalable. This policy also makes it easy to update the software at the users' end. Thus, most of the functionality of the current Internet lies within the end users' protocols, particularly within the Transmission Control Protocol (TCP). This strategy has worked well to date, but networks have evolved and traffic volumes have increased manyfold; hence routers need to be involved in controlling traffic, particularly during periods of congestion. Other benefits of using routers to control the flow of traffic would be facilitating the introduction of differentiated services or offering different qualities of service to different users. Any real congestion episode, whether due to demand exceeding the available bandwidth or congestion created on a particular target host by computer viruses, will hamper the smooth execution of the offered network services. Thus, the role of congestion control mechanisms in modern computer networks is crucial. In order to find effective solutions for congestion control, in this thesis we use feedback control system models of computer networks. The closed loop formed by TCP/IP between the end hosts, through intermediate routers, relies on implicit feedback of congestion information through returning acknowledgements. This feedback information about the congestion state of the network can take the form of lost packets, changes in round trip time, and the rate of arrival of acknowledgements. Thus, end hosts can execute either reactive or proactive congestion control mechanisms. The former approach uses duplicate acknowledgements and timeouts as congestion signals, as done in TCP Reno, whereas the latter depends on changes in the round trip time, as in TCP Vegas. Protocols employing the second approach are still in their infancy, as they cannot co-exist safely with protocols employing the first, whereas TCP Reno and its variants, such as TCP SACK, are presently widely used in computer networks, including the current Internet. These protocols require packet losses to happen before they can detect congestion, inherently wasting time and network bandwidth. Active Queue Management (AQM) is an alternative approach that provides congestion feedback from routers to end users. It makes a network behave as a sensitive closed-loop feedback control system with a response time of one round trip time, congestion information being delivered to the end hosts so that they can reduce their data sending rates before actual packet losses happen. From this congestion information, end hosts can reduce their congestion window size, thus pumping fewer packets into a congested network until the congestion period is over and routers stop sending congestion signals. Keeping both approaches in view, we have adopted a two-pronged strategy to address the problem of congestion control.
The two prongs are to adapt the network at its edges as well as at its core routers. We begin by introducing TCP/IP-based computer networks and defining the congestion control problem. Next we look at different proactive end-to-end protocols, including TCP Vegas, chosen for its better fairness properties. We address the incompatibility problem between TCP Vegas and TCP Reno by using ECN based on the Random Early Detection (RED) algorithm to adjust the parameters of TCP Vegas. Further, we develop two alternative algorithms, namely optimal minimum variance and generalized optimal minimum variance, for fair end-to-end protocols. The relationship between the (p, 1) proportionally fair algorithm and the generalized algorithm is investigated, along with conditions for its stable operation. Noteworthy is a novel treatment of the issue of transient fairness. This represents the work done on congestion control at the edges of the network. Next, we focus on router-based congestion control algorithms and start with a survey of previous work in that direction. We select the RED algorithm for further work because it is recommended for the implementation of AQM. First we devise a new Hybrid RED algorithm, which employs the instantaneous queue size along with an exponentially weighted moving average queue size for making decisions about packet marking/dropping, and adjusts the average value during periods of low traffic. This algorithm improves link utilization and the packet loss rate compared to basic RED. We further propose a control-theory-based Auto-tuning RED algorithm that adapts to changing traffic load. This algorithm can clamp the average queue size to a desired reference value, which can be used to estimate queuing delays for Quality of Service purposes. As an alternative approach to router-based congestion control, we investigate Proportional, Proportional-Integral (PI), and Proportional-Integral-Derivative (PID) control algorithms for AQM. New control-theoretic RED and frequency-response-based PI and PID control algorithms are developed, and their performance is compared with that of existing algorithms. Later we transform the RED and PI based algorithms into adaptive versions using the well-known square-root-of-p formula. The performance of these load-adaptive algorithms is compared with that of the previously developed fixed-parameter algorithms. Apart from some recent research, most previous efforts on the design of congestion control algorithms have been heuristic. This thesis provides an effective use of control theory principles in the design of congestion control algorithms. We develop fixed-parameter feedback congestion control algorithms as well as their adaptive versions. All of the newly proposed algorithms are evaluated using ns-based simulations. The thesis concludes with a number of research proposals emanating from the work reported.
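
Since basic RED is the starting point for several of the proposed algorithms, a minimal sketch of its marking rule may help. The parameter values are illustrative defaults, and the classic count-based correction between marks is omitted for brevity:

```python
# Minimal sketch of the basic RED decision: track an exponentially weighted
# moving average (EWMA) of the queue size and mark/drop probabilistically
# between two thresholds, deterministically above the upper one.
import random

class RED:
    def __init__(self, wq=0.002, min_th=5.0, max_th=15.0, max_p=0.1):
        self.wq, self.min_th, self.max_th, self.max_p = wq, min_th, max_th, max_p
        self.avg = 0.0  # EWMA of the instantaneous queue size

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be marked/dropped."""
        self.avg = (1 - self.wq) * self.avg + self.wq * queue_len
        if self.avg < self.min_th:
            return False
        if self.avg >= self.max_th:
            return True
        # Marking probability grows linearly between the two thresholds.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p

# Usage: one RED instance per output queue; feed it the queue length seen
# by each arriving packet and mark (ECN) or drop when it returns True.
red = RED()
print(sum(red.on_arrival(12) for _ in range(1000)))  # rough mark count
```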

    Management and Security of Multi-Cloud Applications

    Single-cloud management platform technology has reached maturity and is quite successful in information technology applications. Enterprises and application service providers are increasingly adopting a multi-cloud strategy to reduce the risk of cloud service provider lock-in and cloud blackouts and, at the same time, gain benefits such as competitive pricing, flexible resource provisioning, and better points of presence. Another class of applications in which cloud service providers are increasingly interested is carriers' virtualized network services. However, virtualized carrier services require high levels of availability and performance and impose stringent requirements on cloud services. They necessitate the use of multi-cloud management and innovative techniques for placement and performance management. We consider two classes of distributed applications, virtual network services and the next generation of healthcare, that would benefit immensely from deployment over multiple clouds. This thesis deals with the design and development of new processes and algorithms to enable these classes of applications. We have developed a method for optimizing multi-cloud platforms that paves the way for optimized placement of both classes of services. The placement approach we have followed is predictive, cost-optimized, latency-controlled virtual resource placement for both types of applications. To improve the availability of virtual network services, we have made innovative use of machine and deep learning to develop a framework for fault detection and localization. Finally, to secure patient data flowing through the wide expanse of sensors, the cloud hierarchy, the virtualized network, and the visualization domain, we have developed hierarchical autoencoder models for data in motion between the IoT domain and the multi-cloud domain and within the multi-cloud hierarchy.
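
The autoencoder-based protection of data in motion follows the standard reconstruction-error recipe, which can be sketched as follows. The single-hidden-layer numpy model below is a hypothetical stand-in for the thesis's hierarchical models, with invented dimensions and training settings:

```python
# Illustrative anomaly detection: fit an autoencoder on normal telemetry,
# then flag records whose reconstruction error exceeds a quantile threshold.
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, hidden=4, lr=0.05, epochs=500):
    """Fit a one-hidden-layer autoencoder by plain gradient descent."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # encoder
        Y = H @ W2 + b2                   # linear decoder
        E = Y - X                         # reconstruction residual
        dW2 = H.T @ E / n; db2 = E.mean(axis=0)
        dH = (E @ W2.T) * (1.0 - H ** 2)  # backprop through tanh
        dW1 = X.T @ dH / n; db1 = dH.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

def anomaly_scores(X, params):
    """Per-record reconstruction MSE: large values suggest anomalies."""
    W1, b1, W2, b2 = params
    Y = np.tanh(X @ W1 + b1) @ W2 + b2
    return ((Y - X) ** 2).mean(axis=1)

# Usage: train on normal traffic only, then alert on high scores.
normal = rng.normal(0.0, 1.0, (500, 8))
params = train_autoencoder(normal)
threshold = np.quantile(anomaly_scores(normal, params), 0.99)
```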

    Health Care Reform Through Medicaid Managed Care: Tennessee (TennCare) as a Case Study and a Paradigm

    TennCare is a Medicaid demonstration project that allows Tennessee to require all Medicaid beneficiaries to secure medical care through a mandatory managed care system. Enrollees contract with private managed care organizations (MCOs), which are responsible for organizing a network of care providers and delivering medical care to covered beneficiaries. Driven by rapidly escalating Medicaid costs, TennCare's mandatory managed care program has succeeded in saving money for the state in its Medicaid program. To secure the federal waiver that allowed the program to proceed, the state included non-Medicaid-eligible uninsured and uninsurable residents as TennCare beneficiaries. Federal matching funds accrue for all TennCare expenditures, including those for non-Medicaid-eligible enrollees, but federal matching is subject to a global cap. Cost savings from managed care were to pay for the improved access. The program covers about 1.3 million persons, 38% of whom are non-Medicaid-eligible. The Medicaid component of TennCare has been stable, but the non-Medicaid-eligible TennCare population has risen by about 41% in the last two fiscal years, stressing the fiscal capacity of the program. The Article provides background on the development of TennCare, describing the political effect of the federal matching (cooperative federalism) aspect of TennCare on both state-level and federal-level decision-making. The Article identifies what it describes as the political moral hazard dimensions of these federal-state partnerships for state political decision-making and the correlative lock-in effect of the program on the state. Federal matching funds make program enhancement appealing and make cutbacks extremely painful. The interaction of state and federal program incentives is considered in depth, and both the state responses (use of private funding and provider-focused taxation) and the federal responses (limits on federal matching for those sources of state revenue) to these incentives are described and analyzed.