9 research outputs found

    Real Valued Card Counting Strategies for the Game of Blackjack

    Card counting is a family of advantage gambling strategies for casino card games, in which a player keeps a mental tally of the cards already played in order to estimate whether the next hand is likely to favor the player or the dealer. A card counting system assigns point values (weights) to the cards; summing the point values of the cards already played gives a concise numerical estimate of how advantageous the remaining cards are for the player. In theory, any assignment of weights is permissible. Historically, card counting systems used integers, and rarely the fractions 1/2 and 3/2, because computation with these is easier and more tractable for human memory. In this paper we investigate how much advantage a system using real-valued weights would provide. Using a blackjack simulator and a simple genetic algorithm, we evolved weight vectors for ace-neutral and ace-reckoned balanced strategies, with a fitness function that indicates how much a given strategy empirically under- or outperforms a simple card counting system. After convergence, we evaluated the systems in the three efficiency categories used to characterize card counting strategies: playing efficiency, betting correlation, and insurance correlation. The obtained systems outperform classical integer count techniques, offering a better balance of the efficiency metrics. Finally, by applying rounding and scaling, we transformed some real-valued strategies into integer point counts and found that most of the systems' extra edge is preserved. However, because of the large weight values, it is unlikely that these systems can be played quickly and accurately even by professional card counters.
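    As an illustration of the counting mechanism described above, here is a minimal Python sketch using the classic Hi-Lo integer weights; this is not one of the paper's evolved real-valued systems, and the true-count normalization shown is the standard convention for balanced counts.

        # Point values (weights) indexed by rank: Hi-Lo is a balanced,
        # ace-reckoned system in which 2-6 count +1, 7-9 count 0, and
        # 10/J/Q/K/A count -1.
        HI_LO = {r: +1 for r in ("2", "3", "4", "5", "6")}
        HI_LO.update({r: 0 for r in ("7", "8", "9")})
        HI_LO.update({r: -1 for r in ("10", "J", "Q", "K", "A")})

        def running_count(cards_seen, weights=HI_LO):
            """Sum the point values of all cards already played."""
            return sum(weights[c] for c in cards_seen)

        def true_count(cards_seen, decks_remaining, weights=HI_LO):
            """Normalize the running count by the number of undealt decks."""
            return running_count(cards_seen, weights) / decks_remaining

        # A positive count suggests the remaining shoe favors the player.
        print(true_count(["2", "5", "K", "6", "3"], decks_remaining=5.5))

    A real-valued system simply replaces the integer weights with evolved fractional values; the counting loop itself is unchanged.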

    A review on suppressed fuzzy c-means clustering models

    Suppressed fuzzy c-means clustering was proposed as an attempt to combine the better properties of hard and fuzzy c-means clustering, namely the quicker convergence of the former and the finer partition quality of the latter. In the meantime, it became much more than that. Its competitive behavior was revealed, based on which it received two generalization schemes. It was found to be a close relative of the so-called fuzzy c-means algorithm with generalized improved partition, a relationship that could improve its popularity, as it establishes an objective function that the algorithm optimizes. Using certain suppression rules, it was found to be more accurate and efficient than conventional fuzzy c-means in several applications, mostly in image processing. This paper reviews the most relevant extensions and generalizations added to the theory of fuzzy c-means clustering models with suppressed partitions, and summarizes the practical advances these algorithms can offer.
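    As a concrete illustration, the following Python sketch implements the suppression step in the form most commonly associated with suppressed fuzzy c-means (winner membership inflated, all others scaled down by a suppression rate alpha); the generalized suppression rules surveyed in the review are not reproduced here.

        import numpy as np

        def suppress_memberships(u, alpha):
            """One suppression step of suppressed fuzzy c-means (sketch).

            For each data point, the winner cluster receives the inflated
            membership 1 - alpha * (1 - u_winner), while all other
            memberships are scaled down by alpha, so each column still
            sums to 1. alpha = 1 recovers plain FCM; alpha = 0 reduces
            the partition to hard c-means.
            """
            u = np.asarray(u, dtype=float)      # shape: (n_clusters, n_points)
            winners = np.argmax(u, axis=0)      # winner cluster per data point
            suppressed = alpha * u              # suppress every membership...
            cols = np.arange(u.shape[1])
            suppressed[winners, cols] = 1.0 - alpha * (1.0 - u[winners, cols])
            return suppressed                   # ...then restore the winners

    In the full algorithm this step is applied after each fuzzy membership update and before the cluster prototypes are recomputed.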

    Automatic detection of hard and soft exudates from retinal fundus images

    No full text
    According to WHO estimates, 400 million people suffer from diabetes, and this number is likely to double by the year 2030. Unfortunately, diabetes can have severe complications such as glaucoma or retinopathy, both of which can cause blindness. The main goal of our research is to provide an automated procedure that can detect retinopathy-related lesions of the retina in fundus images. This paper focuses on the segmentation of the so-called white lesions of the retina, which include hard and soft exudates. The established procedure consists of three main phases. The preprocessing step compensates for the varying luminosity patterns found in retinal images, using background and foreground pixel extraction and a data normalization operator similar to the Z-transform. This is followed by a modified SLIC algorithm that provides homogeneous superpixels in the image. The final step is an ANN-based classification of pixels using fifteen features extracted from the neighborhood of each pixel in the equalized images and from the properties of the superpixel to which the pixel belongs. The proposed methodology was tested on high-resolution fundus images originating from the IDRiD database. Pixelwise accuracy is characterized by an average Dice score of 54%, but the presence of exudates is detected with 94% precision.
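    The following Python skeleton outlines the three phases, assuming scikit-image for the superpixel step; the paper's modified SLIC, its fifteen handcrafted features and the trained ANN are not reproduced, and the file name is hypothetical.

        import numpy as np
        from skimage import io
        from skimage.segmentation import slic

        def z_normalize(img):
            """Phase 1 (sketch): Z-transform-style luminosity normalization."""
            img = img.astype(float)
            return (img - img.mean(axis=(0, 1))) / (img.std(axis=(0, 1)) + 1e-8)

        fundus = io.imread("fundus.jpg")    # hypothetical input image
        equalized = z_normalize(fundus)

        # Phase 2 (sketch): plain SLIC superpixels stand in for the paper's
        # modified SLIC.
        superpixels = slic(fundus, n_segments=1000, compactness=10)

        # Phase 3 would extract the fifteen per-pixel and per-superpixel
        # features and feed them to a trained ANN classifier (omitted).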

    Estimation of parameters for a humidity-dependent compartmental model of the COVID-19 outbreak

    No full text
    Building an effective and highly usable epidemiological model presents two main challenges: finding an appropriate, sufficiently realistic model that takes into account complex biological, social and environmental parameters, and efficiently estimating the parameter values with which the model can accurately match the available outbreak data and provide useful projections. The reproduction number of the novel coronavirus (SARS-CoV-2) has been found to vary over time, potentially influenced by a multitude of factors such as varying control strategies, changes in public awareness and reaction or, as a recent study suggests, sensitivity to temperature or humidity changes. To take these constantly evolving factors into consideration, the paper introduces a time-dynamic, humidity-dependent SEIR-type extended epidemiological model with range-defined parameters. Using primarily the historical outbreak data from Northern and Southern Italy and with the help of stochastic global optimization algorithms, we are able to determine a model parameter estimation that provides a high-quality fit to the data. The time-dependent contact rate showed a quick drop to a value slightly below 2. Applying the model to the COVID-19 outbreak in the northern region of Italy, we obtained parameters that suggest a slower shrinkage of the contact rate to a value slightly above 4. These findings indicate that model fitting and validation, even on a limited amount of available data, can provide useful insights and projections, and uncover aspects that, upon improvement, might help mitigate the disease spreading.
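    For orientation, here is a minimal SEIR sketch in Python with a time-dependent contact rate; the humidity dependence, the range-defined parameters and the extended compartments of the paper's model are omitted, and all numerical values below are placeholders rather than fitted estimates.

        import numpy as np
        from scipy.integrate import solve_ivp

        SIGMA, GAMMA = 1 / 5.2, 1 / 10      # placeholder incubation/recovery rates

        def beta(t):
            """Illustrative contact rate decaying smoothly toward a floor."""
            return 0.5 + (1.2 - 0.5) * np.exp(-0.05 * t)

        def seir(t, y, N):
            S, E, I, R = y
            new_infections = beta(t) * S * I / N
            return [-new_infections,
                    new_infections - SIGMA * E,
                    SIGMA * E - GAMMA * I,
                    GAMMA * I]

        N = 1e7                              # placeholder population size
        y0 = [N - 100, 50, 50, 0]            # initial S, E, I, R
        sol = solve_ivp(seir, (0, 180), y0, args=(N,), dense_output=True)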

    Comparing epidemiological models with the help of visualization dashboards

    No full text
    In 2020, due to the COVID-19 pandemic, various epidemiological models appeared in major studies [16, 22, 21], differing in complexity, type, etc. According to our hypothesis, a complex model is more accurate and gives more reliable results than a simpler one, because it takes more parameters into consideration.

    Enacting Algorithms: Evolution of the AlgoRythmics Storytelling

    No full text
    This dataset includes responses from 51 students who participated in a survey evaluating a short film used in Computer Science education that portrayed three algorithmic approaches: ad-hoc, greedy, and dynamic programming. Using a 7-point Likert scale (-3 to 3), students rated statements about the film's characteristics and potential benefits. The questionnaire aimed to thoroughly capture students' perspectives on the film's attributes and educational impact.

    Items used in the survey:

    EF.Entertainment - The short film provided a high entertainment value.
    EF.ProductionValue - The short film had a high production value.
    EF.Premise - The premise (escape room) was intriguing.
    EF.Expressive - The short film was expressive.
    EF.Immersive - The short film was immersive.
    EF.Creative - The short film was creative.
    EF.Pacing - The pacing of the story was appropriate.
    FA.Story-plot - I appreciate as important the presence of the story-plot.
    FA.LiveAction - I appreciate as important the use of live-action performances.
    FA.CameraWork - I appreciate as important the cut and switch of camera angles.
    FA.Atmosphere - I appreciate as important the mood and atmosphere.
    FA.Choreography - I appreciate as important the choreography.
    FA.Cinematography - I appreciate as important the depicted cinematography.
    FA.NonVerbal - I appreciate as important the facial expressions and body language of the actors.
    FA.SoundDesign - I appreciate as important the sound design, narration and sound effects present.
    CB.Educational - The short film provided a high educational value.
    CB.Understanding - The learning experience deepened my understanding of the subject.
    CB.Clarity - The algorithmic strategies were clearly depicted.
    EB.Attention - The movie engaged my attention.
    EB.Curiosity - The movie engaged my curiosity.
    PU.Quicker - Using such short films during a class would enable me to learn and deepen algorithmic concepts more quickly.
    PU.Performance - Using such short films during a class would improve my learning performance and grades.
    PU.Efficiency - Using such short films could help me get the most out of my time while learning.
    PU.Knowledge - Using such short films may improve my knowledge.
    PU.Easier - Using such short films would make it easier to accomplish my learning tasks.
    PU.Overall - Using such short films would be overall beneficial.
    PE.Enjoyable - The learning experience was enjoyable.
    PE.Exciting - The learning experience was exciting.
    PE.Pleasant - The learning experience was pleasant.
    PE.Interesting - The learning experience was interesting.
    PE.Immersive - The learning experience was immersive.
    C.Changes - The use of such short films may imply major changes in how I learn.
    C.Incorporation - It would be easy to incorporate such short films in my learning process.
    A.Worthwhile - Using similar educational short films to learn algorithmic concepts is a good idea.
    A.Positivity - I am positive towards using visual media to better understand algorithmic concepts.
    A.Appreciate - I would appreciate the availability of similar short films as learning instruments.
    A.WouldUse - If available, I would use such short films in my learning process.
    Eval.Use - I often use/used existing AlgoRythmics videos in my learning process.
    Eval.Comp - Overall, the short film approach (story-line, live-acting etc.) provides a richer and more valuable learning experience than the viewing of simple videos or animations.
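    Assuming the responses are stored with one row per student and one column per item (a hypothetical layout and file name; the dataset's actual format may differ), a basic per-item summary could look like this in Python:

        import pandas as pd

        # 51 rows (respondents) x one column per item, values in -3..3.
        df = pd.read_csv("algorythmics_survey.csv")   # hypothetical file name

        summary = pd.DataFrame({
            "mean": df.mean(),                 # average Likert rating per item
            "agree_share": (df > 0).mean(),    # fraction of positive ratings
        }).sort_values("mean", ascending=False)
        print(summary.head(10))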

    A study on histogram normalization for brain tumor segmentation from multispectral MR image data

    No full text
    Absolute intensity values in magnetic resonance image data do not say anything in themselves about the investigated tissues. These numerical values are relative: they depend on the imaging device and may vary from session to session. Consequently, histogram normalization is needed before any other processing is performed on MRI data. The Brain Tumor Segmentation (BraTS) challenge, organized yearly since 2012, has intensified the focus on tumor segmentation techniques based on multi-spectral MRI data. A large subset of methods developed within the bounds of this challenge declared that they rely on the classical histogram normalization method proposed by Nyúl et al. in 2000, which assumed that the corrected histogram of a certain organ composed of normal tissues only should be similar across patients. However, this classical method did not account for possible lesions, which can vary greatly in size, position, and shape. This paper compares three sets of histogram normalization methods deployed in a brain tumor segmentation framework, and formulates recommendations regarding this preprocessing step.
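    As a simplified illustration of the kind of normalization at stake, the Python sketch below maps a percentile range of the brain voxels onto a fixed target interval; Nyúl et al.'s actual method refines this with several interior histogram landmarks and a piecewise linear mapping, and the percentile choices here are placeholders.

        import numpy as np

        def standardize_intensities(volume, low_pct=1, high_pct=99,
                                    target_low=0.0, target_high=1.0):
            """Linear intensity standardization (simplified sketch).

            Maps the [low_pct, high_pct] percentile range of the nonzero
            voxels onto a fixed target range, so that intensities become
            comparable across patients and sessions.
            """
            voxels = volume[volume > 0]        # crude background exclusion
            lo, hi = np.percentile(voxels, [low_pct, high_pct])
            scaled = np.clip((volume - lo) / (hi - lo), 0.0, 1.0)
            return target_low + scaled * (target_high - target_low)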

    AnnoCerv: A new dataset for feature-driven and image-based automated colposcopy analysis

    No full text
    Colposcopy imaging is pivotal in the diagnosis of cervical cancer, a major health concern for women. The computational challenge lies in accurate lesion recognition. A significant hindrance for many existing machine learning solutions is the scarcity of comprehensive training datasets.