52 research outputs found

    Metameric representations on optimization of nano particle cancer treatment

    In silico evolutionary optimization of cancer treatment based on multiple nano-particle (NP) assisted drug-delivery systems was investigated in this study. Using multiple types of NPs is expected to increase the robustness of the treatment, since a solution of higher complexity is better matched to a problem of high complexity, namely the physiology of a tumour. The utilization of metameric representations in the evolutionary optimization method was therefore examined, along with suitable crossover and mutation operators. An open-source physics-based simulator, PhysiCell, was used after appropriate modifications to evaluate the fitness of candidate treatments with multiple NP types. Candidate treatments could comprise up to ten NP types, injected simultaneously in an area close to the cancerous tumour. Initial results suffer from bloat: the best solutions discovered converge towards the maximum number of NP types without providing a significant return in fitness over solutions with fewer types. As a large diversity of NPs will most probably prove toxic in lab experiments, we adopted methods to reduce the bloat and thus arrive at therapies with fewer NP types. The bloat-control methods studied here were removing NP types from the optimization genome as part of the mutation operator and applying parsimony pressure in the replacement operator, as sketched below. With these techniques, the treatments discovered are composed of fewer NP types while their fitness is not significantly lower.
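    The operator design described above lends itself to a compact sketch. The following is a minimal, hypothetical Python illustration of a metameric genome with the two bloat-control methods named in the abstract (NP-type removal during mutation, parsimony pressure during replacement); parameter names and rates are assumptions, not the paper's values.

```python
import random

MAX_TYPES = 10  # abstract: up to ten NP types per treatment

def random_metamere():
    # One metamere = design parameters of one NP type (names are illustrative).
    return {"radius_nm": random.uniform(10, 100),
            "drug_load": random.uniform(0.0, 1.0)}

def mutate(genome, p_remove=0.2, p_add=0.1, sigma=0.1):
    """Perturb parameters; occasionally remove an NP type (bloat control)
    or append a new one."""
    child = [dict(m) for m in genome]
    for m in child:
        m["drug_load"] = min(1.0, max(0.0, m["drug_load"] + random.gauss(0, sigma)))
    if len(child) > 1 and random.random() < p_remove:
        child.pop(random.randrange(len(child)))          # drop one NP type
    if len(child) < MAX_TYPES and random.random() < p_add:
        child.append(random_metamere())
    return child

def select(parent, child, fitness, eps=0.01):
    """Parsimony pressure in replacement: when fitnesses are within eps,
    the genome with fewer NP types survives (fitness is maximised)."""
    fp, fc = fitness(parent), fitness(child)
    if abs(fp - fc) <= eps:
        return parent if len(parent) <= len(child) else child
    return parent if fp > fc else child
```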

    In silico optimization of cancer therapies with multiple types of nanoparticles applied at different times

    Background and Objective: Cancer tumors constitute a complicated environment for conventional anti-cancer treatments to confront, so solutions of higher complexity, and thus greater robustness to diverse conditions, are required. Alterations in tumor composition as a result of conventional treatment have been documented, rendering an ensemble of cells drug resistant. A possible answer to this problem is therefore to deliver the pharmaceutical compound with the assistance of nano-particles (NPs) that modify the delivery characteristics and biodistribution of the therapy. Nonetheless, to tackle the dynamic response of the tumor, varying the application times of different NP types could be a way forward. Methods: The in silico optimization of the design parameters of multiple NPs and their application times was investigated here. The optimization methodology used an open-source simulator to provide the fitness of each candidate treatment. Because the number of NP types that will achieve the best performance is not known a priori, the evolutionary algorithm uses a variable-length genome, namely a metameric representation with accordingly modified operators; a sketch of such an encoding follows below. Results: The results highlight that different application times have a significant effect on the robustness of a treatment, whereas applying all NPs at earlier time slots, without the ordered sequence unveiled by the optimization process, proved less effective. Conclusions: The design and development of a dynamic tool that navigates the large search space of possible combinations can provide efficient solutions beyond human intuition.
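    As a hedged illustration of the variable-length encoding with per-type application times (field names and ranges are assumptions; in the paper, fitness comes from its modified open-source simulator):

```python
import random
from dataclasses import dataclass

@dataclass
class NPType:
    radius_nm: float        # illustrative design parameters
    binding_affinity: float
    apply_at_h: float       # this type's application time slot, in hours

def random_treatment(max_types=10, horizon_h=72.0):
    # Genome length (number of NP types) is itself subject to evolution.
    n = random.randint(1, max_types)
    return [NPType(random.uniform(10, 100),
                   random.uniform(0.0, 1.0),
                   random.uniform(0.0, horizon_h)) for _ in range(n)]

def injection_schedule(treatment):
    # The simulator would inject each NP type at its evolved time slot,
    # so the ordering matters, as the Results paragraph emphasises.
    return sorted(treatment, key=lambda t: t.apply_at_h)
```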

    Evolutionary algorithms designing nanoparticle cancer treatments with multiple particle types

    There is a rich history of evolutionary algorithms tackling optimization problems where the most appropriate size of solutions, namely the genome length, is unclear a priori. Here, we investigated the applicability of this methodology to the problem of designing a nanoparticle (NP) based drug-delivery system targeting cancer tumors. A treatment comprised of multiple NP types is expected to be more effective due to its higher complexity. This paper begins by using the well-known NK model to explore the effects of fitness-landscape ruggedness on the evolution of genome length and, hence, solution complexity; a minimal version of the model is sketched below. The size of novel sequences and variants of the methodology with and without sequence deletion are also considered. Results show that whilst landscape ruggedness can alter the dynamics of the process, it does not hinder the evolution of genome length. On the contrary, the expansion of genome lengths can be encouraged by the topology of such landscapes. These findings are then explored within the aforementioned real-world problem. Variable-sized treatments with multiple NP types are studied via an open-source, agent-based, physics-based cell simulator. We demonstrate that the simultaneous evolution of multiple NP types leads to more than a 50% reduction in tumor size, whereas evolution of a single NP type leads to only a 7% reduction. We also demonstrate that the initial stages of evolution are characterized by a fast increase in solution complexity (addition of new NP types), while later phases are characterized by a slower optimization of the best NP composition. Finally, the smaller the number of NP types added per mutation step, the shorter the typical solution found.
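    The NK model mentioned above is standard, so a compact sketch is easy to give; the lazy table filling below is an implementation convenience, not the paper's code. Each of N bits contributes a fitness term that depends on the bit itself and K random other loci, with ruggedness growing with K.

```python
import random

def make_nk(N, K, seed=0):
    """Return a fitness function over length-N bit tuples (Kauffman's NK model)."""
    rng = random.Random(seed)
    neighbours = [rng.sample([j for j in range(N) if j != i], K) for i in range(N)]
    tables = [{} for _ in range(N)]  # per-locus contribution tables, filled lazily

    def fitness(bits):
        total = 0.0
        for i in range(N):
            key = (bits[i],) + tuple(bits[j] for j in neighbours[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()   # i.i.d. uniform contribution
            total += tables[i][key]
        return total / N
    return fitness

f = make_nk(N=10, K=2)   # K=0 is smooth; larger K is more rugged
print(f(tuple(random.getrandbits(1) for _ in range(10))))
```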

    Gridless Evolutionary Approach for Line Spectral Estimation with Unknown Model Order

    Gridless methods show great superiority in line spectral estimation. These methods need to solve an atomic $\ell_0$ norm (i.e., the continuous analog of the $\ell_0$ norm) minimization problem to estimate frequencies and model order. Since this problem is NP-hard to compute, relaxations of the atomic $\ell_0$ norm, such as the nuclear norm and the reweighted atomic norm, have been employed to promote sparsity. However, the relaxations give rise to a resolution limit, subsequently leading to biased model order and convergence error. To overcome these shortcomings of relaxation, we propose the novel idea of simultaneously estimating the frequencies and model order by means of the atomic $\ell_0$ norm itself. To accomplish this, we build a multiobjective optimization model in which the measurement error and the atomic $\ell_0$ norm are the two objectives; the model is written out below. The proposed model directly exploits the model order via the atomic $\ell_0$ norm, thus breaking the resolution limit. We further design a variable-length evolutionary algorithm to solve the proposed model, which includes two innovations. One is a variable-length coding and search strategy that flexibly codes and interactively searches diverse solutions with different model orders; these solutions act as stepping stones that help fully explore the variable, open-ended frequency search space and provide extensive paths towards the optima. The other innovation is a model-order pruning mechanism, which heuristically prunes less contributive frequencies within the solutions, thus significantly enhancing convergence and diversity. Simulation results confirm the superiority of our approach in both frequency estimation and model order selection.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
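    In standard line-spectral notation, the biobjective model described above can be reconstructed as follows (the symbols are the conventional ones and may differ from the paper's):

```latex
% Atom: a(f) = [1, e^{j2\pi f}, \dots, e^{j2\pi f(N-1)}]^{\mathsf T}, \quad f \in [0,1)
% Atomic \ell_0 norm: \|x\|_{\mathcal A,0} = \min\{K : x = \sum_{k=1}^{K} c_k\, a(f_k)\}
% Biobjective model solved by the variable-length evolutionary algorithm:
\min_{K,\;\{f_k,\,c_k\}}
\left( \Bigl\| y - \sum_{k=1}^{K} c_k\, a(f_k) \Bigr\|_2^2,\;\; K \right)
% first objective: measurement error; second: model order = atomic \ell_0 norm
```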

    Rapid design of aircraft fuel quantity indication systems via multi-objective evolutionary algorithms

    The design of electrical, mechanical and fluid systems on aircraft is becoming increasingly integrated with the aircraft structure definition process. An example is the aircraft fuel quantity indication (FQI) system, whose design is strongly dependent on the tank geometry definition. Flexible FQI design methods are therefore desirable to swiftly assess system-level impact due to aircraft-level changes. For this purpose, a genetic algorithm with a two-stage fitness assignment and an FQI-specific crossover procedure (FQI-GA) is proposed; the two-stage idea is sketched below. It can handle multiple measurement-accuracy constraints, is coupled to a parametric definition of the wing tank geometry, and is tested with two performance objectives. A range of crossover procedures from comparable node-placement problems was tested for FQI-GA. Results show that the combinatorial nature of the probe architecture and the accuracy constraints require a probe-set selection mechanism before any crossover process. A case study, using approximated Airbus A320 requirements and tank geometry, shows good agreement with the probe positions obtained with the FQI-GA. For the objectives of accessibility and probe mass, the Pareto front is linear, with little variation in mass. The case study confirms that the FQI-GA method can incorporate complex requirements and that designers can employ it to swiftly investigate FQI probe layouts and trade-offs.
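    The two-stage fitness assignment is not spelled out in the abstract; a common reading, offered here purely as a hypothetical sketch, is that layouts violating the measurement-accuracy constraints are ranked by total violation before any feasible layout is ranked by its objectives:

```python
def two_stage_key(layout, accuracy_violation, objectives):
    """Sort key for ranking probe layouts: stage 1 handles constraint
    violation, stage 2 the (mass, accessibility) objectives. A real
    multi-objective GA would use Pareto ranking in stage 2; plain
    lexicographic ordering keeps this sketch short."""
    v = accuracy_violation(layout)        # 0 when all accuracy constraints hold
    if v > 0:
        return (1, v, ())                 # infeasible: rank below all feasible
    return (0, 0.0, objectives(layout))   # feasible: rank by objective tuple

def rank(population, accuracy_violation, objectives):
    return sorted(population,
                  key=lambda c: two_stage_key(c, accuracy_violation, objectives))
```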

    Differential Evolution with a Variable Population Size for Deployment Optimization in a UAV-Assisted IoT Data Collection System

    This paper studies an unmanned aerial vehicle (UAV)-assisted Internet of Things (IoT) data collection system, where a UAV is employed as a data collection platform for a group of ground IoT devices. Our objective is to minimize the energy consumption of this system by optimizing the UAV's deployment, i.e., the number and locations of its stop points. When evolutionary algorithms are used to solve this deployment problem, each individual usually represents an entire deployment. Since the number of stop points is unknown a priori, the length of each individual would have to vary during the optimization process, making the deployment a variable-length optimization problem for which the traditional fixed-length mutation and crossover operators must be modified. In this paper, we propose a differential evolution algorithm with a variable population size, called DEVIPS, for optimizing the UAV's deployment. In DEVIPS, the location of each stop point is encoded as one individual, so the whole population represents an entire deployment; a sketch of this encoding follows below. Over the course of evolution, differential evolution is employed to produce offspring. Afterward, we design a strategy that adjusts the population size according to the performance improvement, whereby the number of stop points can be increased, reduced, or kept unchanged adaptively. Since each individual has a fixed length, the deployment becomes a fixed-length optimization problem and the traditional mutation and crossover operators can be used directly. The performance of DEVIPS is compared with that of five algorithms on a set of instances. The experimental studies demonstrate its effectiveness.
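    A minimal sketch of the encoding idea, assuming 2-D stop points and DE/rand/1 mutation; the size-adjustment rule below is a simplified stand-in for the paper's performance-improvement-based strategy, not its actual rule:

```python
import random

def de_mutation(pop, F=0.5):
    """DE/rand/1 on a population of 2-D stop points; each individual is ONE
    stop point, so the whole population encodes one deployment.
    Requires at least four individuals (crossover/selection omitted)."""
    donors = []
    for i in range(len(pop)):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        donors.append(tuple(ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)))
    return donors

def adjust_population_size(pop, energy, prev_energy, side=1000.0):
    """Stand-in for DEVIPS's adaptive strategy: grow, shrink, or keep the
    number of stop points depending on whether system energy improved."""
    if energy < prev_energy and len(pop) > 3:
        return pop[:-1]                                   # try fewer stops
    if energy > prev_energy:
        return pop + [(random.uniform(0, side), random.uniform(0, side))]
    return pop                                            # keep unchanged
```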

    Optimization of medical image steganography using n-decomposition genetic algorithm

    Protecting patients' confidential information is a critical concern in medical image steganography. The Least Significant Bit (LSB) technique has been widely used for secure communication; a baseline embedding is sketched below. However, it is vulnerable in terms of imperceptibility and security due to the direct manipulation of pixels, and ASCII-based patterns present limitations, so sensitive medical information is subject to loss or alteration. Despite attempts to optimize LSB, these issues persist because (1) the optimization formulation suffers from invalid implicit constraints, causing inflexibility in reaching optimal embedding, (2) the search process lacks convergence, since the message length significantly affects the size of the solution space, and (3) application customizability is limited, as different data require more flexibility in controlling the embedding process. To overcome these limitations, this study proposes an n-decomposition genetic algorithm. This algorithm uses a variable-length search to identify the best locations to embed the secret message, incorporating constraints to avoid local-minimum traps. The methodology consists of five main phases: (1) initial investigation, (2) formulating an embedding scheme, (3) constructing a decomposition scheme, (4) integrating the schemes' designs into the proposed technique, and (5) evaluating the proposed technique's performance on medical datasets from kaggle.com. The proposed technique showed resistance to statistical analysis, evaluated using Reversible Statistical (RS) analysis and histograms. It also demonstrated superior imperceptibility and security, measured by MSE and PSNR on the Chest and Retina datasets (MSE 0.0557 and 0.0550; PSNR 60.6696 and 60.7287, respectively). Still, the benchmark outperforms the proposed technique on the Brain dataset due to the homogeneous nature of the images and the extensive black background. This research contributes genetic-based decomposition to medical image steganography and provides a technique that offers improved security without compromising efficiency and convergence. However, further validation is required to determine its effectiveness in real-world applications.
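    For context, the baseline the GA optimises over is plain LSB embedding; the GA's contribution is searching for the embedding locations, which in this minimal numpy sketch are simply the first pixels:

```python
import numpy as np

def embed_lsb(cover, bits):
    """Write each secret bit into the least significant bit of a pixel."""
    flat = cover.flatten().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=flat.dtype)
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    return (stego.flatten()[:n_bits] & 1).tolist()

cover = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, len(secret)) == secret   # lossless recovery
```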

    Principles of Neural Network Architecture Design - Invertibility and Domain Knowledge

    Neural network architectures allow a tremendous variety of design choices. In this work, we study two principles underlying these architectures: first, the design and application of invertible neural networks (INNs); second, the incorporation of domain knowledge into neural network architectures. After introducing the mathematical foundations of deep learning, we address the invertibility of standard feedforward neural networks from a mathematical perspective. These results motivate our proposed invertible residual networks (i-ResNets), whose inversion principle is sketched below. This architecture class is then studied in two scenarios: first, we propose ways to use i-ResNets as a normalizing flow and demonstrate their applicability to high-dimensional generative modeling; second, we study the excessive invariance of common deep image classifiers and discuss the consequences for adversarial robustness. We finish with a study of convolutional neural networks for tumor classification based on imaging mass spectrometry (IMS) data. For this application, we propose an adapted architecture guided by our knowledge of the IMS domain and show its superior performance on two challenging tumor classification datasets.
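    The inversion principle behind i-ResNets can be shown in a few lines: if the residual branch g is contractive (Lipschitz constant below 1), then y = x + g(x) is invertible and x is recovered by fixed-point iteration. The toy branch below is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W *= 0.7 / np.linalg.norm(W, 2)   # spectral norm 0.7, so Lip(g) <= 0.7 < 1

def g(x):
    return np.tanh(W @ x)          # tanh is 1-Lipschitz, so Lip(g) <= ||W||_2

def forward(x):                    # i-ResNet block: y = x + g(x)
    return x + g(x)

def invert(y, iters=50):
    """Banach fixed-point iteration x <- y - g(x); converges geometrically."""
    x = y.copy()
    for _ in range(iters):
        x = y - g(x)
    return x

x = rng.standard_normal(4)
assert np.allclose(invert(forward(x)), x, atol=1e-6)
```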

    Determinants of colour constancy

    Colour constancy describes the ability of our visual system to keep colour percepts stable through illumination changes. This is an outstanding feat given that, in the retinal image, surface and illuminant properties are conflated. Still, in our everyday lives we are able to attribute stable colour labels to objects, making communication economic and efficient. Past research shows colour constancy to be imperfect, compensating for between 40% and 80% of the illumination change. While different determinants of constancy have been suggested, no carefully controlled study has shown perfect constancy. The first study presented here addresses the issue of imperfect constancy by investigating colour constancy in a cue-rich environment, using a task that resembles our everyday experience with colours. Participants were asked to recall the colour of unique personal objects in a natural environment under four chromatic illuminations. This approach yielded perfect colour constancy. The second study investigated the relation between illumination discrimination and chromatic detection. Recent studies using an illumination discrimination paradigm suggest that colour constancy is optimized for bluish daylight illuminations. Because it is not clear whether illumination discrimination is directly related to colour constancy or is instead explained by sensitivity to chromaticity changes of different hues, thresholds for illumination discrimination and chromatic detection were compared for the same 12 illumination hues. While the reported blue bias could be replicated, thresholds for illumination discrimination and chromatic detection were highly related, indicating that lower sensitivity towards bluish hues is not exclusive to illumination discrimination. Accompanying the second study, the third study investigated the distribution of colour constancy across 40 chromatic illuminations of different hues, using achromatic adjustments and colour naming. These measurements were compared to several determinants of colour constancy, including the daylight locus, colour categories, illumination discrimination, chromatic detection, relational colour constancy and metameric mismatching. In accordance with the observations of study 2, achromatic adjustments revealed a bias towards bluish daylight illumination. This blue bias and naming consistency explained most of the variance in the achromatic adjustments, while illumination discrimination was not directly related to colour constancy. The fourth study examined colour memory biases. Past research shows that colours of objects are remembered as more saturated than they are perceived. These works often used natural objects that exist in a variety of colours and hues, such as grass or bananas. The approach presented here directly compared perceived and memorized colours for the unique objects also used in the first study, and confirmed the previous finding that, on average, objects were remembered as more saturated than they were perceived.
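    The 40-80% compensation figures quoted above are typically computed with a Brunswik-ratio-style constancy index; the exact index used in these studies may differ, so the following is only an illustrative sketch with made-up CIELAB coordinates:

```python
import numpy as np

def constancy_index(adjusted, no_constancy, full_constancy):
    """1 = perfect constancy (the match shifts fully with the illuminant),
    0 = no constancy (the match stays at the pre-change surface colour)."""
    shift = np.linalg.norm(np.asarray(full_constancy) - np.asarray(no_constancy))
    error = np.linalg.norm(np.asarray(adjusted) - np.asarray(full_constancy))
    return 1.0 - error / shift

# Illustrative (L*, a*, b*) values: index ~0.87, i.e. 87% constancy.
print(constancy_index(adjusted=[55, 9, 4],
                      no_constancy=[55, 0, 0],
                      full_constancy=[55, 10, 5]))
```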