187 research outputs found

    Bump hunting and its application to customer data

    In difficult classification problems, where z-dimensional points with 0-1 responses must be split into two groups, a messy data structure can make it more effective to search for regions where the response-1 points are dense than to find boundaries that separate the two groups. For such problems, often seen in customer databases, we have developed a bump hunting method using probabilistic and statistical methods. By specifying a pureness rate in advance, a maximum capture rate can be obtained; a trade-off curve between the pureness rate and the capture rate can then be constructed. In finding the maximum capture rate, we have used the decision tree method combined with the genetic algorithm. We first give a brief introduction to our research: what bump hunting is, the trade-off curve between the pureness rate and the capture rate, bump hunting using the tree genetic algorithm, and upper bounds for the trade-off curve obtained with extreme-value statistics. We then assess the accuracy of the trade-off curve from the viewpoint of the genetic algorithm procedure. Using the newly proposed genetic algorithm procedure, we can obtain the accuracy of the upper bound for the trade-off curve, and hence an estimate of the actually attainable upper bound. The bootstrapped hold-out method, as well as the cross-validation method, is used in assessing the accuracy of the trade-off curve.
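
    As a concrete reading of the two rates, here is a minimal sketch (assuming numeric features in a NumPy array X and 0-1 responses in y; the function name and the axis-parallel box representation are illustrative, not the paper's code):

```python
import numpy as np

def pureness_and_capture(X, y, lower, upper):
    """Evaluate an axis-parallel box as a candidate bump region.

    pureness rate: fraction of points inside the box whose response is 1
    capture rate:  fraction of all response-1 points falling inside the box
    """
    inside = np.all((X >= lower) & (X <= upper), axis=1)
    if not inside.any():
        return 0.0, 0.0
    pureness = y[inside].mean()           # P(response = 1 | point in box)
    capture = y[inside].sum() / y.sum()   # share of positives captured
    return pureness, capture

# Fixing a pureness threshold p0 and maximizing capture over candidate
# boxes, then sweeping p0, traces out the trade-off curve.
```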

    Bump Hunting using the Tree-GA

    Bump hunting is the search for regions where the points of interest are located more densely than elsewhere and are hardly separable from other points. By specifying a pureness rate p for the points, a maximum capture rate c can be obtained, and a trade-off curve between p and c can be constructed. Finding the bump regions is thus equivalent to constructing the trade-off curve. For convenience, we adopt simple boundary shapes for the bumps, such as box-shaped regions parallel to the variable axes. We use a genetic algorithm specialized to the tree structure, called the tree-GA, to obtain the maximum capture rates, because the conventional binary decision tree will not provide them. Using the tree-GA's tendency to produce many local maxima of the capture rate, we can estimate the return period for the trade-off curve using extreme-value statistics. We have assessed the accuracy of the trade-off curve in typical fundamental cases that may be observed in real customer data, and found that the proposed tree-GA can construct an effective trade-off curve close to the optimal one.
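
    The extreme-value step can be illustrated with a hedged sketch: fit a generalized extreme-value distribution to the capture rates at the local maxima that repeated GA runs return, and read off a return level as an upper bound (the simulated values and the return period T below are illustrative only):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Stand-in for capture rates at local maxima from repeated tree-GA runs
# (simulated here; real values would come from the GA itself).
local_maxima = np.clip(0.6 + 0.05 * rng.gumbel(size=50), 0.0, 1.0)

# Return level: the capture rate expected to be exceeded once in T runs,
# an estimate of the attainable upper bound for the trade-off curve.
shape, loc, scale = genextreme.fit(local_maxima)
T = 100
upper_bound = genextreme.ppf(1 - 1 / T, shape, loc, scale)
print(f"estimated capture-rate upper bound: {upper_bound:.3f}")
```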

    Emergent Communication: The evolution of simplistic machines using different communication types

    The methods of transmitting information may be divided into two kinds: direct and indirect. The "direct" method occurs when a creature transmits a signal that other creatures in its local environment can receive; word-of-mouth advertising is a form of direct communication. "Indirect" communication relays a message through the environment; this type of communication is known as stigmergy. Both word-of-mouth communication and stigmergy require the existence of groups of communicators. It is, however, difficult to analyse the very large number of local interactions that occur in group behaviour. A global phenomenon known as "emergence" arises from such behaviour; the phrase "the whole is greater than the sum of its parts" is normally used to describe it. In this research, we investigate how the two methods of communicating, direct and indirect (including a combination of these), result in emergent behaviour. To establish this outcome we employed agent-based software in which we designed groups of agents to evolve over generations in response to specific situations. These agent groups evolve by means of a genetic algorithm, based on the consumption and collection of resources from the environment - a metric for gauging how well the population performs as a whole. For the purpose of this dissertation, we measure and examine the performance of four styles of the two methods of communication: No Communication, Word of Mouth, Stigmergic and Both (a combination of direct and indirect). We observe the fitness arising through successive generations of agents for each of the four styles and compare the results. The "No Communication" style is markedly the worst performer and is "the sum of the parts" in terms of the definition of emergence. The "Word of Mouth" style is marginally below the best performer but is rated well above "No Communication". The "Stigmergic" style is only the third best performer. Combining the direct and indirect methods yields the best result, for the "Both" style. All the communicating categories, considered "the whole" in terms of the definition of emergence, outperform the "No Communication" style. This demonstrates that emergence occurs when using these communication methods in groups. Keywords: Communication, Emergence, Genetic Algorithms, Group Behaviour
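
    The evolutionary machinery described here can be summarized with a generic generational genetic-algorithm skeleton (a toy sketch: the fitness surrogate below stands in for the dissertation's resource-collection simulation and is purely hypothetical):

```python
import random

random.seed(0)

def fitness(genome):
    # Hypothetical surrogate for "resources collected by the population":
    # rewards genomes close to an arbitrary target controller.
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve(pop_size=30, genome_len=8, generations=50, mut_rate=0.1):
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genome_len)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 0.05) if random.random() < mut_rate else g
                     for g in child]                      # Gaussian mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```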

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning. This is partly because the classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond the local optimality that impairs the capability of traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodology and innovative applications which drive the advances of AMC.
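
    As one illustration of how a metaheuristic "searches beyond local optimality", here is a minimal simulated-annealing sketch (simulated annealing is used as a representative example only; the cost function and parameters are arbitrary):

```python
import math
import random

random.seed(0)

def anneal(cost, neighbor, x0, T0=1.0, alpha=0.99, steps=2000):
    """Occasionally accept a worse neighbor, with probability decaying
    as the temperature cools, to escape local optima."""
    x, T = x0, T0
    for _ in range(steps):
        cand = neighbor(x)
        delta = cost(cand) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / T):
            x = cand          # accept improving or, sometimes, worsening moves
        T *= alpha            # geometric cooling schedule
    return x

# Toy usage on a multimodal 1-D cost with many local minima.
best = anneal(cost=lambda v: v * v + 10 * math.sin(3 * v),
              neighbor=lambda v: v + random.uniform(-0.5, 0.5),
              x0=5.0)
```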

    A survey of the application of soft computing to investment and financial trading


    Parallel memetic algorithms for the problem of workforce distribution in dynamic multi-agent systems

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadores y Automática, defended 20/09/2013. This thesis describes a novel approach for solving the workload distribution problem in dynamic multi-agent systems based on blackboard architectures, focusing especially on a real scenario: the multi-skill call center. To handle this kind of dynamic environment, various greedy heuristics that deliver a solution in real time have traditionally been applied. Essentially, these heuristics plan continuously, considering the state of the system at each moment. Since decisions are made greedily, without optimal planning, the workload distribution can be poor in the medium and/or long term. Parallel memetic algorithms allow us to find much more accurate solutions. To apply this kind of algorithm, we introduce the concept of an adaptive time window, whose size depends on the level of dynamism of the system at a given instant. This work proposes a set of tools to determine the dynamism of the system automatically, as well as a novel prediction module based on a neural network and a powerful search method based on parallel memetic meta-algorithms to cope with complex dynamic environments. Finally, we compare our approach with other state-of-the-art techniques in a real production environment (Telefónica), obtaining better results than the other current techniques. An exhaustive study of each of the modules is also provided.
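
    To make the "memetic" idea concrete, a minimal single-threaded sketch follows (the thesis's actual system is parallel and schedules call-center work over an adaptive time window; the toy continuous objective here is only illustrative):

```python
import random

random.seed(0)

def local_search(x, cost, tries=20, step=0.1):
    """Hill climbing: the local refinement that makes the algorithm memetic."""
    for _ in range(tries):
        cand = [g + random.uniform(-step, step) for g in x]
        if cost(cand) < cost(x):
            x = cand
    return x

def memetic(cost, dim=5, pop_size=20, generations=40):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [(ga + gb) / 2 for ga, gb in zip(a, b)]   # blend crossover
            child = local_search(child, cost)                 # refine offspring
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = memetic(cost=lambda x: sum(g * g for g in x))  # toy sphere objective
```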

    On-the-fly synthesizer programming with rule learning

    This manuscript explores automatic programming of sound synthesis algorithms within the context of the performative artistic practice known as live coding. Writing source code in an improvised way to create music or visuals became an instrument the moment affordable computers were able to perform real-time sound synthesis with languages that keep their interpreter running. Ever since, live coding has dealt with real-time programming of synthesis algorithms. For that purpose, one possibility is an algorithm that automatically creates variations out of a few presets selected by the user. However, the need for real-time feedback and the small size of the data sets (which can even be collected mid-performance) are constraints that make existing automatic sound synthesizer programmers and learning algorithms unfeasible. Also, the design of such algorithms is not oriented to creating variations of a sound but rather to finding the synthesizer parameters that match a given one. Other approaches create representations of the space of possible sounds, allowing the user to explore it by means of interactive evolution. Even though these systems are exploration-oriented, they require longer run times. This thesis investigates inductive rule learning for on-the-fly synthesizer programming. This approach is conceptually different from those found in both the synthesizer programming and live coding literature. Rule models offer interpretability and allow working with the parameter values of the synthesis algorithms (even with symbolic data), making preprocessing unnecessary. RuLer, the proposed learning algorithm, receives a dataset containing user-labeled combinations of parameter values of a synthesis algorithm. Among those combinations sharing the same label, it analyses the patterns based on dissimilarity. These patterns are described as an IF-THEN rule model. The algorithm parameters provide control to define what is considered a pattern. As patterns are the basis for inducing new parameter settings, the algorithm parameters control the degree of consistency of the induced settings with respect to the original input data. An algorithm (named FuzzyRuLer) able to extend IF-THEN rules to hyperrectangles, which in turn are used as the cores of membership functions, is presented. The resulting fuzzy rule model creates a map of the entire input feature space. To this end, the algorithm generalizes the logical rules, solving the contradictions by following a maximum-volume heuristic. Across the manuscript it is discussed how, when machine learning algorithms are used as creative tools, glitches, errors or inaccuracies produced by the resulting models are sometimes desirable, as they might offer novel, unpredictable results. The evaluation of the algorithms follows two paths. The first focuses on user tests. The second responds to the fact that this work was carried out within the computer science department and is intended to provide a broader, non-domain-specific evaluation of the algorithms' performance using extrinsic benchmarks (i.e. not belonging to a synthesizer's domain) for cross-validation and minority oversampling. In oversampling tasks using imbalanced datasets, the algorithm yields state-of-the-art results. Moreover, the synthetic points produced are significantly different from those created by the other algorithms and perform (controlled) exploration of more distant regions.
Finally, accompanying the research, various performances, concerts and an album were produced with the algorithms and examples of this thesis. The reviews received and the collections in which the album has been featured show a positive reception within the community. Together, these evaluations suggest that rule learning is both an effective method and a promising path for further research.
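
    The dissimilarity-driven induction can be sketched as follows (a loose, illustrative reconstruction, not the thesis's exact RuLer algorithm: here two same-label parameter settings merge into one IF-THEN rule when they differ in at most d components):

```python
def dissimilarity(a, b):
    """Number of rule dimensions whose admitted value sets differ."""
    return sum(1 for xa, xb in zip(a, b) if xa != xb)

def merge(a, b):
    # Each rule dimension is the set of parameter values the rule admits.
    return [xa | xb for xa, xb in zip(a, b)]

def ruler(examples, d=1):
    """examples: list of (setting, label); each setting is a tuple of
    parameter values. Returns IF-THEN rules as (value-sets, label)."""
    rules = [([{v} for v in s], label) for s, label in examples]
    merged = True
    while merged:
        merged = False
        for i, (ra, la) in enumerate(rules):
            for j, (rb, lb) in enumerate(rules):
                if i < j and la == lb and dissimilarity(ra, rb) <= d:
                    rules[i] = (merge(ra, rb), la)   # generalize the pattern
                    del rules[j]
                    merged = True
                    break
            if merged:
                break
    return rules

# Hypothetical synth settings: (frequency, waveform, filter amount) -> label.
rules = ruler([((440, "saw", 0.3), "bright"),
               ((440, "square", 0.3), "bright"),
               ((110, "sine", 0.8), "dark")])
```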

    Classification of clinical outcomes using high-throughput and clinical informatics.

    It is widely recognized that many cancer therapies are effective only for a subset of patients. However, clinical studies are most often powered to detect an overall treatment effect. To address this issue, classification methods are increasingly being used to predict the subset of patients who respond differently to treatment. This study begins with a brief history of classification methods, with an emphasis on applications involving melanoma. Nonparametric methods suitable for predicting subsets of patients responding differently to treatment are then reviewed. Each method has different ways of incorporating continuous, categorical, clinical and high-throughput covariates. For nonparametric and parametric methods, distance measures specific to the method are used to make classification decisions. Approaches are outlined which employ these distances to measure treatment interactions and predict patients more sensitive to treatment. Simulations are also carried out to examine the empirical power of some of these classification methods in an adaptive signature design. Results were compared with logistic regression models. It was found that parametric and nonparametric methods performed reasonably well, with relative performance depending on the simulation scenario. Finally, a method was developed to evaluate the power and sample size needed for an adaptive signature design to predict the subset of patients sensitive to treatment. It is hoped that this study will stimulate further development of nonparametric and parametric methods to predict subsets of patients responding differently to treatment.
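
    A hedged sketch of the kind of empirical power simulation described (the effect size, sample size, and the use of a treatment-by-marker interaction in a logistic model are illustrative choices, not the study's exact design):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def one_trial(n=400, effect=1.5, alpha=0.05):
    """Simulate one trial where treatment helps only a marker-defined
    subset, then test the treatment-by-marker interaction."""
    marker = rng.integers(0, 2, n)            # 1 = putatively sensitive subset
    treat = rng.integers(0, 2, n)             # randomized treatment arm
    logit = -0.5 + effect * treat * marker    # benefit only in the subset
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)
    X = sm.add_constant(np.column_stack([treat, marker, treat * marker]))
    fit = sm.Logit(y, X).fit(disp=0)
    return fit.pvalues[3] < alpha             # interaction term significant?

power = np.mean([one_trial() for _ in range(200)])
print(f"empirical power: {power:.2f}")
```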