
    DEVELOPMENT OF AN ALTERNATIVE, PHOTODIODE-BASED, FEMTOSECOND STABLE DETECTION PRINCIPLE FOR THE LINK STABILIZATION IN THE OPTICAL SYNCHRONIZATION SYSTEMS AT FLASH AND XFEL

    Abstract The fs-stable timing information in the optical synchronization system at FLASH and the upcoming European XFEL is based on the distribution of laser pulses in optical fibers. The optical length of the fibers is continuously monitored, and drifts in signal propagation time are actively compensated in order to provide a phase-stable pulse train at the end of the fiber link. At present, optical cross-correlation is used to measure the optical length changes. To overcome some of the disadvantages of the current scheme, a different approach for the detection of optical fiber link length variations was developed. This new scheme uses 10 GHz photodiodes to measure the amplitude modulation of harmonics created by overlapping two pulse trains. The long-term stability of the prototype of this detector over 33 h was demonstrated to be below 5 fs (peak-to-peak), with an rms jitter of about 0.86 fs. The detection principle itself is practically insensitive to environmental influences and needs only about 10 % of the optical power compared to the optical cross-correlator.
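The detection principle can be illustrated with a toy calculation: modelling the two overlapped pulse trains as trains of delta pulses, the amplitude of a microwave harmonic of the summed signal varies as 2|cos(pi * f * tau)| with the relative delay tau, so a drift of the link length translates into an amplitude change that a photodiode plus RF detection can resolve. The repetition rate, the delta-pulse model, and the quadrature bias point below are illustrative assumptions, not the actual FLASH/XFEL parameters:

```python
import cmath

F_REP = 200e6    # assumed pulse repetition rate (illustrative, not the FLASH value)
F_HARM = 10e9    # harmonic within the 10 GHz photodiode bandwidth

def harmonic_amplitude(delay, n_pulses=1000):
    """Per-pulse magnitude of the F_HARM Fourier component of two
    overlapped delta-pulse trains with a relative delay between them.
    Analytically this equals 2*|cos(pi * F_HARM * delay)|."""
    total = 0j
    for k in range(n_pulses):
        t = k / F_REP
        total += cmath.exp(-2j * cmath.pi * F_HARM * t)
        total += cmath.exp(-2j * cmath.pi * F_HARM * (t + delay))
    return abs(total) / n_pulses

# At a quadrature bias delay the amplitude slope is maximal, so a
# femtosecond-scale drift of the link maps almost linearly onto an
# amplitude change at the 10 GHz harmonic.
bias = 1 / (4 * F_HARM)  # 25 ps
for drift_fs in (0, 5, 10, 25):
    a = harmonic_amplitude(bias + drift_fs * 1e-15)
    print(f"drift {drift_fs:2d} fs -> relative amplitude {a:.6f}")
```

At zero relative delay the harmonic amplitude sits at a maximum (zero slope), which is why the sketch evaluates the response around a bias delay instead; the abstract itself does not specify the operating point.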

    An Integrated Business Rules and Constraints Approach to Data Centre Capacity Management

    A recurring problem in data centres is that the constantly changing workload is not proportionally distributed over the available servers. Some resources may lie idle while others are pushed to the limits of their capacity. This in turn leads to increased response times on the overloaded servers, a situation that the data centre provider wants to prevent. To solve this problem, an administrator may move (reallocate) applications or even entire virtual servers in order to spread the load. Since there is a cost associated with moving applications (in the form of downtime during the move, for example), we are interested in solutions with minimal changes. This paper describes a hybrid approach to solving such resource reallocation problems in data centres, in which two technologies work closely together to solve the problem efficiently. The first technology is a Business Rules Management System (BRMS), which is used to systematically identify which systems are considered overloaded. Data centres use complex rules to track the behaviour of the servers over time in order to properly identify overloads. Representing these tracking conditions is what the BRMS is good at: it defines the relationships (business constraints) over time between the different applications, processes and required resources that are specific to the data centre, and as such also allows a high degree of customisation. Having identified which servers require reallocation of their processes, the BRMS then automatically creates an optimisation model that is solved with a Constraint Programming (CP) approach. A CP solver finds a feasible or optimal solution to this constraint satisfaction problem (CSP), which is used to provide recommendations on which workload should be moved and where to.
Notice that our use of a hybrid approach is a requirement, not a feature: employing only rules, we would not be able to compute an optimal solution; using only CP, we would not be able to specify the complex identification rules without hard-coding them into the program. Moreover, the dedicated rule engine allows us to process large amounts of data rapidly.
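The two-stage division of labour described above can be sketched in miniature: a rule stage flags overloaded servers, and a search stage finds a capacity-respecting reassignment with as few moves as possible. The server data, the capacity threshold, and the brute-force search (standing in for a real BRMS and CP solver) are all illustrative assumptions, not the paper's implementation:

```python
from itertools import product

# Hypothetical data: server -> {application: load %}; all names illustrative.
servers = {
    "s1": {"web": 50, "db": 45, "cache": 20},  # total 115 -> overloaded
    "s2": {"batch": 30},
    "s3": {"mail": 40},
}
CAPACITY = 100  # assumed per-server capacity threshold

def overloaded(servers):
    """Rule stage (stands in for the BRMS): flag servers whose total
    load exceeds the capacity threshold."""
    return [s for s, apps in servers.items() if sum(apps.values()) > CAPACITY]

def min_moves(servers):
    """Constraint stage (stands in for the CP solver): exhaustively try
    reassignments of the flagged servers' applications, keeping the one
    that respects every capacity while moving the fewest applications."""
    movable = [(s, a) for s in overloaded(servers) for a in servers[s]]
    best = None
    for targets in product(servers, repeat=len(movable)):
        load = {s: sum(apps.values()) for s, apps in servers.items()}
        moves = []
        for (src, app), dst in zip(movable, targets):
            if dst != src:
                size = servers[src][app]
                load[src] -= size
                load[dst] += size
                moves.append((app, src, dst))
        if all(v <= CAPACITY for v in load.values()):
            if best is None or len(moves) < len(best):
                best = moves
    return best

print(min_moves(servers))
```

A production system would replace the exhaustive search with a CP solver and the single threshold rule with the BRMS's temporal tracking conditions; the point of the sketch is the division of labour between the two stages, not the search strategy.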