Multi particle swarm optimisation algorithm applied to supervisory power control systems
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. Power quality problems come in numerous forms (commonly spikes, surges, sags, outages and harmonics), and their resolution can cost from a few hundred to millions of pounds, depending on the size and type of problem experienced by the power network. They are commonly experienced as burnt-out motors, corrupt data on hard drives, unnecessary downtime and increased maintenance costs. To minimise such events, the network can be monitored and controlled with a specific control regime to deal with particular faults. This study developed a control and optimisation system and applied it to the stability of electrical power networks using artificial intelligence techniques. An intelligent controller was designed to control and optimise simulated models for electrical power system stability. A fuzzy logic controller controlled the power generation, while particle swarm optimisation (PSO) techniques optimised the system's power quality in normal operating conditions and after faults. Different types of PSO were tested, then a multi-swarm PSO (M-PSO) system was developed to give better optimisation results in terms of accuracy and convergence speed. The developed optimisation algorithm was tested on seven benchmarks and compared to the other types of single PSO.
The developed controller and optimisation algorithm were applied to power system stability control. Two electrical power network models were used (with two and four generators), controlled by fuzzy logic controllers tuned using the optimisation algorithm. The system automatically selected the optimal controller parameters for normal and fault conditions during the operation of the power network. A multi-objective cost function was used, based on minimising the recovery time, overshoot and steady-state error. A supervisory control layer was introduced to detect and diagnose faults, then apply the correct controller parameters. Different fault scenarios were used to test the system's performance. The results indicate the great potential of the proposed power system stabiliser as a superior tool compared to conventional control systems.
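The single-swarm PSO baseline that the multi-swarm variant was compared against can be sketched as follows; the inertia and acceleration coefficients below are common textbook defaults, not the thesis's tuned values, and the sphere benchmark stands in for the seven benchmarks mentioned above:

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal single-swarm PSO minimising f over the box [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity blends inertia, pull to personal best, pull to global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere benchmark: global minimum 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

A multi-swarm variant would run several such swarms with occasional exchange of their global bests; a fuzzy-controller tuning application would make `f` the multi-objective cost over recovery time, overshoot and steady-state error.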
Custom optimization algorithms for efficient hardware implementation
The focus is on real-time optimal decision making with application in advanced control
systems. These computationally intensive schemes, which involve the repeated solution of
(convex) optimization problems within a sampling interval, require more efficient computational
methods than currently available for extending their application to highly dynamical
systems and setups with resource-constrained embedded computing platforms.
A range of techniques are proposed to exploit synergies between digital hardware, numerical
analysis and algorithm design. These techniques build on top of parameterisable
hardware code generation tools that generate VHDL code describing custom computing
architectures for interior-point methods and a range of first-order constrained optimization
methods. Since memory limitations are often important in embedded implementations we
develop a custom storage scheme for KKT matrices arising in interior-point methods for
control, which reduces memory requirements significantly and prevents I/O bandwidth
limitations from affecting the performance in our implementations. To take advantage of
the trend towards parallel computing architectures and to exploit the special characteristics
of our custom architectures we propose several high-level parallel optimal control
schemes that can reduce computation time. A novel optimization formulation was devised
for reducing the computational effort in solving certain problems independent of the computing
platform used. In order to be able to solve optimization problems in fixed-point
arithmetic, which is significantly more resource-efficient than floating-point, tailored linear
algebra algorithms were developed for solving the linear systems that form the computational
bottleneck in many optimization methods. These methods come with guarantees
for reliable operation. We also provide finite-precision error analysis for fixed-point implementations
of first-order methods that can be used to minimize the use of resources while
meeting accuracy specifications. The suggested techniques are demonstrated on several
practical examples, including a hardware-in-the-loop setup for optimization-based control
of a large airliner.
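As an illustration of why fixed-point arithmetic is the more resource-efficient choice discussed above, a dot product (the inner kernel of the linear solves that bottleneck these optimization methods) can be sketched in a Q16.16 format; the format choice and helper names here are illustrative assumptions, not the thesis's implementation:

```python
# Q16.16 fixed point: integers carrying 16 fractional bits.
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS

def to_fix(x):
    return int(round(x * SCALE))

def to_float(q):
    return q / SCALE

def fmul(a, b):
    # Full-width multiply then right-shift (rounds toward -inf for negatives).
    return (a * b) >> FRAC_BITS

def fdot(qa, qb):
    """Fixed-point dot product; each partial product loses at most one LSB,
    which is the kind of error a finite-precision analysis must bound."""
    acc = 0
    for a, b in zip(qa, qb):
        acc += fmul(a, b)
    return acc

x = [0.5, -1.25, 3.0]
y = [2.0, 0.5, -0.75]
qx, qy = [to_fix(v) for v in x], [to_fix(v) for v in y]
exact = sum(a * b for a, b in zip(x, y))   # -1.875
approx = to_float(fdot(qx, qy))
```

In hardware each `fmul` is a single integer multiplier plus a shift, with no normalisation logic, which is where the resource savings over floating-point come from.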
NetSquid, a NETwork Simulator for QUantum Information using Discrete events
In order to bring quantum networks into the real world, we would like to
determine the requirements of quantum network protocols including the
underlying quantum hardware. Because detailed architecture proposals are
generally too complex for mathematical analysis, it is natural to employ
numerical simulation. Here we introduce NetSquid, the NETwork Simulator for
QUantum Information using Discrete events, a discrete-event based platform for
simulating all aspects of quantum networks and modular quantum computing
systems, ranging from the physical layer and its control plane up to the
application level. We study several use cases to showcase NetSquid's power,
including detailed physical layer simulations of repeater chains based on
nitrogen vacancy centres in diamond as well as atomic ensembles. We also study
the control plane of a quantum switch beyond its analytically known regime, and
showcase NetSquid's ability to investigate large networks by simulating
entanglement distribution over a chain of up to one thousand nodes. NetSquid is freely available at https://netsquid.org.
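The discrete-event principle underlying NetSquid can be illustrated with a toy event loop; this sketch does not use NetSquid's actual API, and all names and the retry scenario are hypothetical:

```python
import heapq

def simulate(events):
    """Tiny discrete-event loop: repeatedly pop the earliest event and run
    its handler, which may schedule further events. Time jumps from event
    to event instead of advancing in fixed steps."""
    queue = list(events)              # (time, seq, handler) tuples
    heapq.heapify(queue)
    seq = len(queue)                  # tiebreaker so handlers never compare
    times = []

    def schedule(t, handler):
        nonlocal seq
        heapq.heappush(queue, (t, seq, handler))
        seq += 1

    while queue:
        t, _, handler = heapq.heappop(queue)
        times.append(t)
        handler(t, schedule)
    return times

# Hypothetical scenario: a node retries entanglement generation every
# 10 time units until three attempts have run.
attempts = []

def attempt(t, schedule):
    attempts.append(t)
    if len(attempts) < 3:
        schedule(t + 10, attempt)

simulate([(0, 0, attempt)])
# attempts is now [0, 10, 20]
```

Scaling such a loop to a thousand-node chain is cheap because idle periods cost nothing: the simulator only does work when an event fires.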
Report No. 31: The Role of Social Protection as an Economic Stabiliser: Lessons from the Current Crisis
Report based on a study conducted for the European Parliament, Bonn 2010 (188 pages)
Advancing classical simulators by measuring the magic of quantum computation
Stabiliser operations and state preparations are efficiently simulable by classical computers. Stabiliser circuits play a key role in quantum error correction and fault-tolerance, and can be promoted to universal quantum computation by the addition of "magic" resource states or non-Clifford gates. It is believed that classically simulating stabiliser circuits supplemented by magic must incur a performance overhead scaling exponentially with the amount of magic. Early simulation methods were limited to circuits with very few Clifford gates, but the need to simulate larger quantum circuits has motivated the development of new methods with reduced overhead. A common theme is that algorithm performance can often be linked to quantifiers of computational resource known as magic monotones. Previous methods have typically been restricted to specific types of circuit, such as unitary or gadgetised circuits. In this thesis we develop a framework for quantifying the resourcefulness of general qubit quantum circuits, and present improved classical simulation methods. We first introduce a family of magic state monotones that reveal a previously unknown formal connection between stabiliser rank and quasiprobability methods. We extend this family by presenting channel monotones that measure the magic of general qubit quantum operations. Next, we introduce a suite of classical algorithms for simulating quantum circuits, which improve on and extend previous methods. Each classical simulator has performance quantified by a related resource measure. We extend the stabiliser rank simulation method to admit mixed states and noisy operations, and refine a previously known sparsification method to yield improved performance. We present a generalisation of quasiprobability sampling techniques with significantly reduced exponential scaling. 
Finally, we evaluate the simulation cost per use for practically relevant quantum operations, and illustrate how to use our framework to realistically estimate resource costs for particular ideal or noisy quantum circuit instances.
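As a flavour of the stabiliser formalism these simulators build on, the following sketch propagates a Pauli operator through a small Clifford circuit in the Heisenberg picture; global phase tracking is omitted for brevity, and this is an illustrative sketch rather than the thesis's algorithm:

```python
# Each qubit's Pauli is stored as (x, z) bits: I=(0,0), X=(1,0),
# Z=(0,1), Y=(1,1). Clifford gates map Paulis to Paulis, which is why
# stabiliser circuits are efficiently simulable classically.

def apply_h(pauli, q):
    x, z = pauli[q]
    pauli[q] = (z, x)                # H swaps X and Z

def apply_cnot(pauli, c, t):
    xc, zc = pauli[c]
    xt, zt = pauli[t]
    pauli[t] = (xt ^ xc, zt)         # X propagates control -> target
    pauli[c] = (xc, zc ^ zt)         # Z propagates target -> control

def label(pauli):
    names = {(0, 0): 'I', (1, 0): 'X', (0, 1): 'Z', (1, 1): 'Y'}
    return ''.join(names[p] for p in pauli)

# Conjugate Z on qubit 0 by the Bell-pair circuit (H on qubit 0, then CNOT):
p = [(0, 1), (0, 0)]
apply_h(p, 0)                        # Z -> X
apply_cnot(p, 0, 1)                  # X0 -> X0 X1
# label(p) == 'XX': a stabiliser of the Bell state
```

A non-Clifford gate such as T breaks this closure (it maps X to a mixture of X and Y), which is exactly where "magic" enters and where the exponential simulation overhead quantified by the monotones begins.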
Quantum multipartite entangled states, classical and quantum error correction
Studying entanglement is essential for our understanding of such diverse areas as high-energy physics, condensed matter physics, and quantum optics. Moreover, entanglement allows us to surpass classical physics and technologies, enabling better information processing, computation, and improved metrology. Recently, entanglement has also played a prominent role in characterizing and simulating quantum many-body states, and in this way deepened our understanding of quantum matter. While bipartite entanglement is well understood, multipartite entanglement is much richer and leads to stronger contradictions with classical physics. Among all possible entangled states, a special class of states has attracted attention for a wide range of tasks. These states are called k-uniform states and are pure multipartite quantum states of n parties and local dimension q with the property that all of their reductions to k parties are maximally mixed. Operationally, in a k-uniform state any subset of at most k parties is maximally entangled with the rest. The k = ⌊n/2⌋-uniform states are called absolutely maximally entangled because they are maximally entangled along any splitting of the n parties into two groups. These states find applications in several protocols and, in particular, are the building blocks of quantum error correcting codes with a holographic geometry, which have provided valuable insight into the connections between quantum information theory and conformal field theory. Their properties and applications are, however, intriguing, as we know little about them: when they exist, how to construct them, how they relate to other multipartite entangled states, such as graph states, or how they connect under local operations and classical communication. With this motivation in mind, in this thesis we first study the properties of k-uniform states and then present systematic methods to construct closed-form expressions for them.
The structure of our methods proves to be particularly fruitful in understanding the structure of these quantum states, their graph-state representation and their classification under local operations and classical communication. We also construct several examples of absolutely maximally entangled states whose existence was open so far. Finally, we explore a new family of quantum error correcting codes that generalize and improve the link between classical error correcting codes, multipartite entangled states, and the stabilizer formalism. The results of this thesis can play a role in characterizing and studying the following three topics: multipartite entanglement, classical error correcting codes and quantum error correcting codes. The multipartite entangled states can provide a link to find different resources for quantum information processing tasks and to quantify entanglement. When constructing two sets of highly entangled multipartite states, it is important to know whether they are equivalent under local operations and classical communication. By understanding which states belong to the same class of quantum resource, one may discuss the role they play in certain quantum information tasks such as quantum key distribution, teleportation and constructing optimal quantum error correcting codes. They can also be used to explore the connection between the anti-de Sitter/conformal field theory holographic correspondence and quantum error correction, which will then allow us to construct better quantum error correcting codes. At the same time, their role in the characterization of quantum networks will be essential to design functional networks, robust against losses and local noise.
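The defining k-uniformity property can be checked numerically by brute force over all k-party reductions; the function name and tolerance below are illustrative, and the example verifies the standard fact that the 3-qubit GHZ state is 1-uniform but not 2-uniform:

```python
import itertools
import numpy as np

def is_k_uniform(psi, n, q, k, tol=1e-9):
    """Check that every reduction of the pure state psi (n qudits of
    local dimension q) to k parties equals the maximally mixed state."""
    psi = psi.reshape([q] * n)
    for keep in itertools.combinations(range(n), k):
        trace_out = [a for a in range(n) if a not in keep]
        # Group the kept indices as rows and the traced-out ones as columns.
        m = psi.transpose(list(keep) + trace_out).reshape(q ** k, q ** (n - k))
        rho = m @ m.conj().T            # reduced density matrix on `keep`
        if not np.allclose(rho, np.eye(q ** k) / q ** k, atol=tol):
            return False
    return True

# GHZ state (|000> + |111>)/sqrt(2): every single qubit is maximally
# mixed, but two-qubit reductions are classically correlated, not mixed.
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
```

Brute force scales as the number of k-subsets times a q^n-sized contraction, so it only works for small n; the closed-form constructions in the thesis are what make large instances accessible.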
Lamotrigine for people with borderline personality disorder: a RCT
Background: No drug treatments are currently licensed for the treatment of borderline personality disorder (BPD). Despite this, people with this condition are frequently prescribed psychotropic medications and often with considerable polypharmacy. Preliminary studies have indicated that mood stabilisers may be of benefit to people with BPD.
Objective: To examine the clinical effectiveness and cost-effectiveness of lamotrigine for people with BPD.
Design: A two-arm, double-blind, placebo-controlled individually randomised trial of lamotrigine versus placebo. Participants were randomised via an independent and remote web-based service using permuted blocks and stratified by study centre, the severity of personality disorder and the extent of hypomanic symptoms.
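Permuted-block randomisation with stratification, as described above, can be sketched as follows; this illustrates the general technique with hypothetical names and block size, not the trial's actual web-based service:

```python
import random

def permuted_block_randomiser(block_size=4, seed=0):
    """Allocate participants 1:1 to lamotrigine/placebo within each
    stratum using permuted blocks, so the arms are exactly balanced
    after every completed block."""
    rng = random.Random(seed)
    pending = {}                      # stratum -> remaining block entries

    def allocate(stratum):
        if not pending.get(stratum):
            # Start a fresh block with equal counts of each arm, shuffled.
            block = ['lamotrigine', 'placebo'] * (block_size // 2)
            rng.shuffle(block)
            pending[stratum] = block
        return pending[stratum].pop()

    return allocate

allocate = permuted_block_randomiser()
# A stratum is a combination of centre, severity and hypomanic symptoms.
arms = [allocate('centre-A/severe/low-hypomania') for _ in range(8)]
# After two full blocks, each arm has exactly 4 participants in the stratum.
```

Stratifying means each centre/severity/symptom combination gets its own block sequence, so balance holds within strata and not just overall.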
Setting: Secondary care NHS mental health services in six centres in England.
Participants: Potential participants had to be aged ≥ 18 years, meet diagnostic criteria for BPD and provide written informed consent. We excluded people with coexisting psychosis or bipolar affective disorder, those already taking a mood stabiliser, those who spoke insufficient English to complete the baseline assessment and women who were pregnant or contemplating becoming pregnant.
Interventions: Up to 200 mg of lamotrigine per day or an inert placebo. Women taking combined oral contraceptives were prescribed up to 400 mg of trial medication per day.
Main outcome measures: Outcomes were assessed at 12, 24 and 52 weeks after randomisation.
The primary outcome was the total score on the Zanarini Rating Scale for Borderline Personality Disorder (ZAN-BPD) at 52 weeks. The secondary outcomes were depressive symptoms, deliberate self-harm, social functioning, health-related quality of life, resource use and costs, side effects of treatment and adverse events. Higher scores on all measures indicate poorer outcomes.
Results: Between July 2013 and October 2015 we randomised 276 participants, of whom 195 (70.6%) were followed up 52 weeks later. At 52 weeks, 49 (36%) of those participants prescribed lamotrigine and 58 (42%) of those prescribed placebo were taking it. At 52 weeks, the mean total ZAN-BPD score was 11.3 [standard deviation (SD) 6.6] among those participants randomised to lamotrigine and 11.5 (SD 7.7) among those participants randomised to placebo (adjusted mean difference 0.1, 95% CI −1.8 to 2.0; p = 0.91). No statistically significant differences in secondary outcomes were seen at any time. Adjusted costs of direct care for those prescribed lamotrigine were similar to those prescribed placebo.
Limitations: Levels of adherence in this pragmatic trial were low, but greater adherence was not associated with better mental health.
Conclusions: The addition of lamotrigine to the usual care of people with BPD was not found to be clinically effective or provide a cost-effective use of resources.
Future work: Future research into the treatment of BPD should focus on improving the evidence base for the clinical effectiveness and cost-effectiveness of non-pharmacological treatments to help policy-makers make better decisions about investing in specialist treatment services.