31 research outputs found

    Resolvable designs with large blocks

    Resolvable designs with two blocks per replicate are studied from an optimality perspective. Because in practice the number of replicates is typically less than the number of treatments, arguments can be based on the dual of the information matrix and consequently given in terms of block concurrences. Equalizing block concurrences for given block sizes is often, but not always, the best strategy. Sufficient conditions are established for various strong optimalities and a detailed study of E-optimality is offered, including a characterization of the E-optimal class. Optimal designs are found to correspond to balanced arrays and an affine-like generalization. Comment: Published at http://dx.doi.org/10.1214/009053606000001253 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
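
    As a reminder of the criterion involved, E-optimality can be stated in generic notation (a standard definition, not the paper's own notation):

        % E-optimality (standard definition, generic notation assumed here).
        % For a design $d$ in a class $\mathcal{D}$, let $C_d$ be its information
        % matrix with eigenvalues $0 = \mu_{d,0} \le \mu_{d,1} \le \dots \le \mu_{d,v-1}$.
        % A design $d^{*}$ is E-optimal over $\mathcal{D}$ if
        \[
          \mu_{d^{*},1} \;=\; \max_{d \in \mathcal{D}} \mu_{d,1},
        \]
        % i.e.\ it maximises the smallest non-trivial eigenvalue of $C_d$, which
        % corresponds to minimising the worst variance over normalised treatment
        % contrasts.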

    New conditions for testing necessarily/possibly efficiency of non-degenerate basic solutions based on the tolerance approach

    In this paper, a specific type of multiobjective linear programming problem with interval objective function coefficients is studied. Usually, in such problems, it is not possible to obtain an optimal solution which optimizes simultaneously all objective functions in the interval multiobjective linear programming (IMOLP) problem, requiring the selection of a compromise solution. In conventional multiobjective programming problems these compromise solutions are called efficient solutions. However, efficiency cannot be defined in a unique way in IMOLP problems. Necessary efficiency and possible efficiency have been considered as two natural extensions of efficiency to IMOLP problems. In this case, necessarily efficient solutions may not exist and the set of possibly efficient solutions usually has an infinite number of elements. Furthermore, it has been concluded that the problem of checking necessary efficiency is co-NP-complete even for the case of only one objective function. In this paper, we explore new conditions for testing the necessary/possible efficiency of basic non-degenerate solutions in IMOLP problems. We show properties of the necessarily efficient solutions in connection with possibly and necessarily optimal solutions to the related single-objective problems. Moreover, we utilize the tolerance approach and sensitivity analysis for testing necessary efficiency. Finally, based on the new conditions, a procedure to obtain some necessarily efficient and strictly possibly efficient solutions to multiobjective problems with interval objective functions is suggested. This research was partly supported by the Spanish Ministry of Economy and Competitiveness (project ECO2017-88883-R) and by the Fundação para a Ciência e a Tecnologia (FCT) under project grant UID/Multi/00308/2019. This work has also been partly supported by the Consejería de Innovación, Ciencia y Empresa de la Junta de Andalucía (PAI group SEJ-532). Carla Oliveira Henriques also acknowledges the training received from the University of Malaga PhD Programme in Economy and Business [Programa de Doctorado en Economía y Empresa de la Universidad de Malaga]. José Rui Figueira acknowledges the support from the FCT grant SFRH/BSAB/139892/2018 under the POCH Program and the DOME (Discrete Optimization Methods for Energy management) FCT Research Project (Ref: PTDC/CCI-COM/31198/2017).
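
    For orientation, the two efficiency notions can be stated in the usual generic IMOLP form (standard definitions, assumed here rather than quoted from the paper):

        % Interval multiobjective LP with $p$ objectives over a feasible set $X$:
        \[
          \text{``max''}\ \bigl(c^{1}x,\dots,c^{p}x\bigr),\quad x \in X,
          \qquad c^{k} \in [\underline{c}^{k},\overline{c}^{k}],\ k=1,\dots,p.
        \]
        % A feasible solution $x$ is possibly efficient if it is efficient for at
        % least one admissible realisation of the interval coefficients, and
        % necessarily efficient if it is efficient for every admissible realisation.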

    Robust optimality analysis for linear programming problems with uncertain objective function coefficients: an outer approximation approach

    Linear programming (LP) problems with uncertain objective function coefficients (OFCs) are treated in this paper. In such problems, the decision-maker would be interested in an optimal solution that has robustness against uncertainty. A solution optimal for all conceivable OFCs can be considered a robust optimal solution. We then investigate an efficient method for checking whether a given non-degenerate basic feasible (NBF) solution is optimal for all OFC vectors in a specified range. When the specified range of the OFC vectors is a hyper-box, i.e., the marginal range of each OFC is given by an interval, it has been shown that the tolerance approach can efficiently solve the robust optimality test problem of an NBF solution. However, the hyper-box case is a particular case in which the marginal ranges of some OFCs stay the same no matter what values the remaining OFCs take. In real life, we come across cases where some OFCs' marginal ranges depend on the remaining OFCs' values. For example, the prices of products rise in tandem with the prices of raw materials, the gross profit of exported products increases while that of imported products decreases because both depend on currency exchange rates, and so on. Considering those dependencies, we consider a case where the range of the OFC vector is specified by a convex polytope. In this case, the tolerance approach to the robust optimality test problem of an NBF solution no longer applies. To treat the problem, we propose an algorithm based on the outer approximation approach. By numerical experiments, we demonstrate how the proposed algorithm efficiently solves the robust optimality test problems of NBF solutions compared to a conventional vertex-listing method.
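
    The reduced-cost condition behind such robust optimality tests can be sketched as follows (a minimal illustration for a standard-form LP with an assumed polytope description G c <= h; this is the naive LP-per-reduced-cost check, not the outer approximation algorithm proposed in the paper):

        # For  min c^T x  s.t.  A x = b, x >= 0  with a non-degenerate basis,
        # the basis stays optimal for every c in C = {c : G c <= h}  iff each
        # nonbasic reduced cost  r_j(c) = c_j - c_B^T B^{-1} A_j  is >= 0 on C.
        # Each condition is checked by one auxiliary LP minimising r_j(c) over C.
        import numpy as np
        from scipy.optimize import linprog

        def is_robustly_optimal(A, basis, G, h, tol=1e-9):
            """True if the given basis is optimal for every c with G c <= h."""
            A = np.asarray(A, dtype=float)
            B_inv = np.linalg.inv(A[:, basis])
            basic = set(basis)
            for j in range(A.shape[1]):
                if j in basic:
                    continue
                coeff = np.zeros(A.shape[1])      # r_j(c) is linear in c
                coeff[j] = 1.0
                coeff[basis] -= B_inv @ A[:, j]
                res = linprog(coeff, A_ub=G, b_ub=h,
                              bounds=(None, None), method="highs")
                if not res.success or res.fun < -tol:
                    return False                  # cannot certify r_j >= 0 on C
            return True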

    Parallelized modelling and solution scheme for hierarchically scaled simulations

    This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV) or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to the large reductions in memory, communications, and computational effort found in a parallel computing environment, substantial reductions are also generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPTs, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that, by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features and benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.
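
    The substructuring step that multilevel decompositions of this kind build on can be illustrated by one level of static condensation (a generic sketch with assumed interior/interface index sets, not the authors' HPT implementation):

        # Condense the interior unknowns of a substructure onto its interface
        # via the Schur complement; parent tree nodes then assemble and condense
        # the resulting interface systems recursively.
        import numpy as np

        def condense(K, f, interior, boundary):
            """Eliminate interior unknowns; return the interface system (S, g)."""
            Kii = K[np.ix_(interior, interior)]
            Kib = K[np.ix_(interior, boundary)]
            Kbi = K[np.ix_(boundary, interior)]
            Kbb = K[np.ix_(boundary, boundary)]
            X = np.linalg.solve(Kii, Kib)           # Kii^{-1} Kib
            y = np.linalg.solve(Kii, f[interior])   # Kii^{-1} f_i
            return Kbb - Kbi @ X, f[boundary] - Kbi @ y

        def recover_interior(K, f, interior, boundary, u_b):
            """Back-substitute interface values to recover interior unknowns."""
            Kii = K[np.ix_(interior, interior)]
            Kib = K[np.ix_(interior, boundary)]
            return np.linalg.solve(Kii, f[interior] - Kib @ u_b)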

    Optimal mathematical programming and variable neighborhood search for k-modes categorical data clustering

    The conventional k-modes algorithm and its variants have been extensively used for categorical data clustering. However, these algorithms have some drawbacks: for example, they can be trapped in local optima and are sensitive to the initial clusters/modes. Our numerical experiments even showed that the k-modes algorithm could not identify the optimal clustering results for some special datasets regardless of the selection of the initial centers. In this paper, we developed an integer linear programming (ILP) approach for k-modes clustering which is independent of the initial solution and directly obtains the optimal results for small-sized datasets. We also developed a heuristic algorithm that implements iterative partial optimization in the ILP approach within a variable neighborhood search framework, known as IPO-ILP-VNS, to search for near-optimal results on medium- and large-sized datasets with controlled computing time. Experiments on 38 datasets, including 27 synthesized small datasets and 11 known benchmark datasets from the UCI site, were carried out to test the proposed ILP approach and the IPO-ILP-VNS algorithm. The experimental results outperformed the conventional and other existing enhanced k-modes algorithms in the literature and updated 9 of the UCI benchmark datasets with new and improved results.
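
    For reference, the conventional k-modes loop whose local-optimum behaviour motivates the ILP formulation looks roughly like this (a minimal sketch, not the authors' ILP or IPO-ILP-VNS method):

        # Plain k-modes: assign points to the nearest mode under simple-matching
        # (Hamming) dissimilarity, then update each mode attribute-wise to the
        # most frequent category; this converges only to a local optimum.
        import numpy as np

        def k_modes(X, k, n_iter=100, seed=None):
            """Cluster a categorical array X (n_samples x n_features) into k clusters."""
            rng = np.random.default_rng(seed)
            modes = X[rng.choice(len(X), size=k, replace=False)].copy()
            labels = np.full(len(X), -1)
            for _ in range(n_iter):
                dist = (X[:, None, :] != modes[None, :, :]).sum(axis=2)
                new_labels = dist.argmin(axis=1)
                if np.array_equal(new_labels, labels):
                    break                          # no reassignment: local optimum
                labels = new_labels
                for c in range(k):
                    members = X[labels == c]
                    if len(members) == 0:
                        continue
                    for j in range(X.shape[1]):
                        values, counts = np.unique(members[:, j], return_counts=True)
                        modes[c, j] = values[counts.argmax()]
            return labels, modes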

    Design and Control of Compliant Actuation Topologies for Energy-Efficient Articulated Robots

    Considerable advances have been made in the field of robotic actuation in recent years. At the heart of this has been the increased use of compliance. Arguably the most common approach is that of Series-Elastic Actuation (SEA), and SEAs have evolved to become the core component of many articulated robots. Another approach is the integration of compliance in parallel to the main actuation, referred to as Parallel-Elastic Actuation (PEA). A wide variety of such systems has been proposed. While both approaches have demonstrated significant potential benefits, a number of key challenges remain with regard to the design and control of such actuators. This thesis addresses some of the challenges that exist in the design and control of compliant actuation systems. First, it investigates the design, dynamics, and control of SEAs as the core components of next-generation robots. We consider the influence of the selected physical stiffness on torque controllability and backdrivability, and propose an optimality criterion for impedance rendering. Furthermore, we consider disturbance observers for robust torque control. Simulation studies and experimental data validate the analyses. Secondly, this work investigates the augmentation of articulated robots with adjustable parallel compliance and multi-articulated actuation for increased energy efficiency. In particular, design optimisation of parallel compliance topologies with adjustable pretension is proposed, including multi-articulated arrangements. Novel control strategies are developed for such systems. To validate the proposed concepts, novel hardware is designed, simulation studies are performed, and experimental data from two platforms are provided that show the benefits over state-of-the-art SEA-only based actuation.
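
    The series-elastic relation that the stiffness-selection and torque-control analysis revolves around is the standard SEA model (generic notation assumed here, not the thesis's own symbols):

        % With motor-side position $\theta_m$ (after the gearbox), link position
        % $\theta_\ell$ and spring stiffness $k_s$, the transmitted torque is
        % measured through the spring deflection,
        \[
          \tau \;=\; k_s\,(\theta_m - \theta_\ell),
        \]
        % so torque control reduces to controlling the deflection
        % $\delta = \theta_m - \theta_\ell$, and the choice of $k_s$ trades
        % torque resolution and backdrivability against achievable torque
        % bandwidth.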

    Antenna selection in massive mimo based on matching pursuit

    As wireless services proliferate, the demand for available spectrum also grows. As a result, spectral efficiency remains an issue addressed by many researchers looking for ways to provide quality of service to a growing number of users. Massive MIMO is an attractive technology for next-generation wireless systems since it can alleviate the expected spectral shortage. This work proposes two antenna selection strategies to be applied in the downlink of a massive MIMO system, aiming at reducing the transmission power. The proposed algorithms can also be employed to select a subset of active sensors in centralized sensor networks. The proposed strategy to select the antennas is inspired by the matching pursuit technique. The presented results show that an efficient selection can be obtained with reduced computational complexity.
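
    The matching-pursuit idea behind the selection can be sketched as a greedy residual-correlation loop (an illustrative sketch with an assumed channel-matrix/target-vector formulation; it is not the paper's exact algorithms):

        # Greedily pick n_sel columns (antennas) of a channel-like matrix H that
        # best explain a target vector y, in the spirit of (orthogonal) matching
        # pursuit: choose the column most correlated with the current residual,
        # re-fit on the chosen columns, and repeat.
        import numpy as np

        def select_antennas(H, y, n_sel):
            """Return the indices of n_sel greedily selected columns of H."""
            residual = y.copy()
            selected = []
            for _ in range(n_sel):
                corr = np.abs(H.conj().T @ residual)
                corr[selected] = -np.inf           # do not reselect a column
                selected.append(int(np.argmax(corr)))
                Hs = H[:, selected]
                w, *_ = np.linalg.lstsq(Hs, y, rcond=None)
                residual = y - Hs @ w
            return selected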

    Technical Debt in Software Development: Examining Premises and Overcoming Implementation for Efficient Management

    Software development is a unique field of engineering: all software constructs retain their modifiability (arguably, at least) until client release, no single project stakeholder has exhaustive knowledge about the project, and even this partial knowledge is generally acquired only at project completion. These characteristics imply that the field of software development is subject to design decisions that are known to be sub-optimal: decisions that either deliberately emphasize the interests of particular stakeholders or inadvertently harm the project due to a lack of exhaustive knowledge. Technical debt is a concept that accounts for these decisions and their effects. The concept's intention is to capture, track, and manage the decisions and their products: the affected software constructs. Given the above, it is vital for software development projects to acknowledge technical debt both as an enabler and as a hindrance. This thesis looks into facilitating efficient technical debt management for varying software development projects. The thesis examines the role of technical debt in software development and from this examination derives the premises on which a management implementation approach is introduced. The thesis begins with a review of motivations. Building on prior research in the fields of technical debt management and software engineering in general, the five motivations establish the premises for technical debt in software development. These include notions of subjectivity in technical debt estimation, the update-frequency demands posed on technical debt information, and technical debt's polymorphism. Three research questions are derived from the motivations. They call for tooling support for technical debt management, for capturing and modelling technical debt propagation, and for characterizing software development environments and their technical debt instances. The questions imply consecutive completion: the first pursued tool would benefit from propagation models (possibly automatically assessable ones), and the tool's introduction to software development organizations could in turn be assisted by tailoring it based on the characterizations of software development environments and their technical debt instances. The thesis includes seven publications. In introducing them, the thesis maps their backgrounds to the motivations and their outcomes to the research questions. Amongst the outcomes are the DebtFlag tool for technical debt management, procedures for retrospectively capturing technical debt from software repositories, a procedure for creating technical debt propagation models from these retrospectives, and a multi-national survey characterizing software development environments and their technical debt instances. The thesis concludes that the tooling support, the technical debt propagation modelling, and the characterization of software development environments and their technical debt instances together describe an implementation approach that furthers efficient technical debt management. At the same time, future work is implied, as all of the previously described efforts need to be continued and extended. Challenges also remain in the introduced approach. An example is the combinatorial explosion of technology and development-context combinations that technical debt propagation modelling needs to consider: all combinations have to be managed if exhaustive modelling is desired.
There is, however, a great deal of motivation to pursue these efforts when one recalls that technical debt is a permanent component of software development and, when correctly managed, a development efficiency mechanism comparable to a financial loan investment.