23 research outputs found

    From Preferences to Ingredients - Customised Beverage Product Formulation Workshop

    No full text
    This work aims to test whether we can create tailored orange drinks from a consumer's description of their ideal drink. The description is converted into ingredient quantities, from which a drink is created using our beverage formulator. The consumer then tastes the drink and assesses whether it matches what they requested.

    Image montage of all false negative (left) and false positive (right) classified croissants with respect to manual classification.

    No full text
    On the left, false negative croissants are shown, i.e. all cells classified as croissants manually but not by the neural network. In contrast, all cells classified as croissant shapes by the automated analysis but not by hand are depicted in the right montage (false positive croissants). The numerical value in the yellow box of each picture corresponds to the respective output value of the CNN.
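    As a minimal illustration of how such montages could be assembled, the sketch below selects false-negative and false-positive indices from two label arrays; the arrays and class encoding are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical labels: 1 = croissant, 0 = not croissant.
y_manual = np.array([1, 1, 0, 1, 0, 0])   # manual classification
y_cnn    = np.array([1, 0, 1, 1, 0, 0])   # CNN classification

# False negatives: croissant by hand, missed by the network.
false_negatives = np.where((y_manual == 1) & (y_cnn == 0))[0]
# False positives: croissant by the network, but not by hand.
false_positives = np.where((y_manual == 0) & (y_cnn == 1))[0]

print("false negatives:", false_negatives)
print("false positives:", false_positives)
```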

    Diagram of training (red) and validation (black) status: Evolving convergence loss with growing number of training epochs.

    No full text
    We set a maximum of ten epochs, since prolonging training to more epochs yields no gain in performance but rather causes overtraining. Stochastic gradient descent with momentum (SGDM) is used as the training method. The red line indicates the training loss, whereas the black dots represent the loss on the validation data set (validation loss). The progression of the validation loss serves as an indicator of whether the CNN is overtrained, since the training and validation losses would then diverge. A root-mean-square error is chosen as the loss function, a standard approach for regression problems.
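    A minimal sketch of this training configuration, assuming a PyTorch-style setup (the original work may use a different framework); `model`, `train_loader`, and `val_loader` are hypothetical placeholders.

```python
import torch

def rmse_loss(pred, target):
    # Root-mean-square error, as stated in the caption for the regression output.
    return torch.sqrt(torch.nn.functional.mse_loss(pred, target))

def train(model, train_loader, val_loader, epochs=10, lr=1e-3, momentum=0.9):
    # SGDM: stochastic gradient descent with momentum.
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    for epoch in range(epochs):            # capped at ten epochs to avoid overtraining
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss = rmse_loss(model(x), y)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(rmse_loss(model(x), y).item() for x, y in val_loader)
        # Diverging training vs. validation loss would indicate overtraining.
        print(f"epoch {epoch + 1}: validation loss {val_loss / len(val_loader):.4f}")
```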

    Confusion matrix with absolute values and relative percentages to evaluate the performance of the CNN approach.

    No full text
    The rows indicate the class predicted by the CNN (network output), whereas the columns indicate the actual, manually determined class. Thus, all values on the diagonal represent the correctly classified cells.
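    The sketch below shows one way to tabulate such a matrix with absolute counts and row-wise percentages; the data and class names are illustrative, and note that scikit-learn's convention (rows = true class, columns = predicted class) may be transposed relative to the figure.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["croissant", "slipper"]               # assumed two-class case
y_true = np.array([0, 0, 1, 1, 1, 0, 1])         # manual classification (example data)
y_pred = np.array([0, 1, 1, 1, 0, 0, 1])         # CNN prediction (example data)

cm = confusion_matrix(y_true, y_pred)            # rows: true class, columns: predicted class
percent = cm / cm.sum(axis=1, keepdims=True) * 100

for i, name in enumerate(classes):
    row = "  ".join(f"{cm[i, j]:3d} ({percent[i, j]:5.1f}%)" for j in range(len(classes)))
    print(f"{name:>10}: {row}")
# Diagonal entries are the correctly classified cells.
```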

    We estimate perfect slippers to be around the peak of the distribution at ≈ −117, whereas croissants occur around ≈ 115.

    No full text
    By fitting the whole spectrum with four Gaussians, we are able to separate the respective contributions of each cell shape class and can thus determine a respective confidence interval. In the lower part, typical cell shapes are depicted for different output value ranges. Starting from the leftmost cell image, the shape changes from slippers (images 1-3) to others (images 4-5) and finally to sheared (images 6-7) and pure croissants (images 8-9).
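    A sketch of such a four-Gaussian decomposition using SciPy is given below; the synthetic data, bin count, and initial guesses are placeholders rather than the study's values.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def four_gaussians(x, *p):
    # p holds (amplitude, mean, width) for each of the four components:
    # slippers, croissants, sheared croissants, and indistinguishable shapes.
    return sum(gaussian(x, *p[3 * i:3 * i + 3]) for i in range(4))

# Hypothetical CNN output values, one per cell (stand-in for the real dataset).
outputs = np.concatenate([np.random.normal(-117, 20, 500),
                          np.random.normal(115, 20, 500),
                          np.random.normal(60, 30, 200),
                          np.random.normal(0, 60, 300)])
hist, edges = np.histogram(outputs, bins=100)
centers = 0.5 * (edges[:-1] + edges[1:])

# Rough initial guesses for the four components.
p0 = [hist.max(), -117, 20, hist.max(), 115, 20,
      hist.max() / 2, 60, 30, hist.max() / 2, 0, 60]
popt, _ = curve_fit(four_gaussians, centers, hist, p0=p0, maxfev=10000)
means, sigmas = popt[1::3], popt[2::3]
print("fitted peak positions:", means)   # confidence intervals follow from the widths
```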

    CNN output values for all cell images.

    No full text
    The gray solid line is the network's output for the whole dataset, whereas the black solid line represents a fit with four Gaussians, one for each distinct class (croissants, slippers, and sheared croissants) and one to account for indistinguishable cell shapes. The thresholds are shown in light blue and light red, respectively. In the right column, the obtained classification is compared with the manually ascertained phase diagram (solid lines). We stress that the solid line is a guide to the eye, since we have a discrete number of flow velocities due to the given number of applied pressure drops. In figure (b), a threshold of 1σ was used as a confidence interval to classify the cells into one of the two categories. Figures (d) and (f) show the resulting phase diagrams for a threshold of 2σ and an adapted σ, respectively.
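    The σ-based assignment described here could look like the sketch below, which labels a cell only if its output value falls within n·σ of a fitted class mean; the means, widths, and example values are illustrative placeholders.

```python
import numpy as np

def classify(output, mu_slipper=-117.0, sigma_slipper=20.0,
             mu_croissant=115.0, sigma_croissant=20.0, n_sigma=1.0):
    # Assign a class only if the CNN output lies within n*sigma of the
    # fitted Gaussian mean for that class; otherwise label the cell 'other'.
    if abs(output - mu_slipper) <= n_sigma * sigma_slipper:
        return "slipper"
    if abs(output - mu_croissant) <= n_sigma * sigma_croissant:
        return "croissant"
    return "other"

outputs = np.array([-120.0, -80.0, 10.0, 118.0])       # example output values
print([classify(v, n_sigma=1.0) for v in outputs])     # 1-sigma threshold, as in (b)
print([classify(v, n_sigma=2.0) for v in outputs])     # 2-sigma threshold, as in (d)
```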

    Overview of the used layers in the indicated deep learning CNN.

    No full text
    In the center column, the kernel size of the corresponding layer is given. The resulting image size after layer passage is given in the rightmost column.
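    For illustration, the snippet below prints kernel sizes and resulting image sizes layer by layer for a generic small CNN; the architecture and input size are assumptions, not the network from the table.

```python
import torch
import torch.nn as nn

# Generic small CNN used only to demonstrate the per-layer overview.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3), nn.ReLU(),
    nn.MaxPool2d(2),
)

x = torch.zeros(1, 1, 64, 64)   # assumed input image size
for layer in model:
    x = layer(x)
    kernel = getattr(layer, "kernel_size", "-")   # '-' for layers without a kernel
    print(f"{layer.__class__.__name__:<10} kernel: {kernel!s:<8} output: {tuple(x.shape[1:])}")
```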

    Resulting subimages (bottom) of two contrasting RBC shapes (croissant, upper left; slipper, upper right) after passing the first convolutional layer of a CNN.

    No full text
    The convolution kernels as well as the subimages are represented by a false color mapping for the sake of better visibility. Boxes in the input images indicate typical features of both cell shape classes and the respective enhancement of these after convolution (indicated by arrows).
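    A rough sketch of extracting and displaying the sub-images after the first convolutional layer with a false-colour map; the dummy network and random input below stand in for the trained CNN and a real cell image.

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# Dummy stand-ins for the trained CNN and a cell image (placeholders only).
model = nn.Sequential(nn.Conv2d(1, 4, kernel_size=5), nn.ReLU())
cell_image = torch.rand(1, 1, 64, 64)

# Take the first convolutional layer and compute its output feature maps.
first_conv = next(m for m in model.modules() if isinstance(m, nn.Conv2d))
with torch.no_grad():
    maps = first_conv(cell_image).squeeze(0)   # sub-images after the first conv layer

fig, axes = plt.subplots(1, maps.shape[0])
for ax, fmap in zip(axes, maps):
    ax.imshow(fmap, cmap="viridis")            # false-colour mapping for visibility
    ax.axis("off")
plt.show()
```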

    Extracting component-oriented behaviour for self-healing enabling.

    Get PDF
    Rich and multifaceted domain-specific specification languages like the Autonomic System Specification Language (ASSL) help to design reliable systems with self-healing capabilities. The GEAR game-based model checker has been used successfully to investigate in depth the properties of the ESA ExoMars Rover. We show here how to enable GEAR's game-based verification techniques for ASSL via systematic model extraction from a behavioral subset of the language, and illustrate this on a description of the Voyager II space mission. In this way, we close the gap between the design-time and run-time techniques provided in the SHADOWS platform for self-healing of concurrency, performance, and functional issues.

    Description table of the experiments: temperature and precipitation from the nearest meteorological station (INM).

    No full text
    UTM 30 coordinates of the five tested rills are presented. Freila 1 and Freila 3 are two experiments in the same rill.