29 research outputs found

    A Hyper-Relation Characterization of Weak Pseudo-Rationalizability

    I provide a characterization of weakly pseudo-rationalizable choice functions (that is, choice functions rationalizable by a set of acyclic relations) in terms of hyper-relations satisfying certain properties. For those hyper-relations Nehring calls extended preference relations, the central characterizing condition is weaker than (hyper-relation) transitivity but stronger than (hyper-relation) acyclicity. Furthermore, the relevant type of hyper-relation can be represented as the intersection of a certain class of its extensions. These results generalize known, analogous results for path-independent choice functions.
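As a rough illustration of the core notion (a toy sketch, not the paper's formalism): a choice function C is pseudo-rationalized by a set of relations {R_i} when C(A) is the union, over i, of the R_i-maximal elements of A. The data and names below are purely illustrative.

```python
def maximal(alternatives, relation):
    # x is maximal in A under R if no y in A strictly beats x, i.e. (y, x) in R
    return {x for x in alternatives
            if not any((y, x) in relation for y in alternatives)}

def pseudo_rational_choice(alternatives, relations):
    # The choice set is the union of the maximal elements under each relation
    chosen = set()
    for R in relations:
        chosen |= maximal(alternatives, R)
    return chosen

R1 = {("a", "b"), ("a", "c")}   # under R1, a beats b and c
R2 = {("b", "c")}               # under R2, b beats c
print(sorted(pseudo_rational_choice({"a", "b", "c"}, [R1, R2])))  # → ['a', 'b']
```

Here c is chosen by neither relation (it is beaten under both), while a and b are each maximal under at least one.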

    Path Independence and a Persistent Paradox of Population Ethics

    In the face of an impossibility result, some assumption must be relaxed. The Mere Addition Paradox is an impossibility result in population ethics. Here, I explore substantially weakening the decision-theoretic assumptions involved. The central finding is that the Mere Addition Paradox persists even in the general framework of choice functions when we assume Path Independence as a minimal decision-theoretic constraint. Choice functions can be thought of either as generalizing the standard axiological assumption of a binary “betterness” relation, or as providing a general framework for a normative (rather than axiological) theory of population ethics. Path Independence, a weaker assumption than is typically (implicitly) made in population ethics, expresses the idea that, in making a choice from a set of alternatives, the order in which options are assessed or considered is ethically arbitrary and should not affect the final choice. Since the result establishes a conflict between the relevant ethical principles and even very weak decision-theoretic principles, we have more reason to doubt the ethical principles.
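Path Independence has a standard formal statement: C(A ∪ B) = C(C(A) ∪ C(B)) for all menus A and B, so splitting the menu and choosing in stages cannot change the outcome. A minimal brute-force checker over a small universe (toy data; the choice function below is illustrative, not from the paper):

```python
from itertools import combinations

def subsets(xs):
    # All nonempty subsets of xs, as frozensets (usable as dict keys)
    xs = list(xs)
    return [frozenset(c) for r in range(1, len(xs) + 1)
            for c in combinations(xs, r)]

def is_path_independent(universe, choice):
    # Path Independence: C(A ∪ B) = C(C(A) ∪ C(B)) for all nonempty menus A, B
    for A in subsets(universe):
        for B in subsets(universe):
            staged = frozenset(choice[A]) | frozenset(choice[B])
            if choice[A | B] != choice[staged]:
                return False
    return True

# Toy choice function: always pick the alphabetically first option on the menu
universe = {"x", "y", "z"}
C = {A: {min(A)} for A in subsets(universe)}
print(is_path_independent(universe, C))  # → True
```

Picking the first option in a fixed order is path independent; by contrast, a choice function whose selection from the full menu disagrees with its pairwise selections fails the check.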

    Effective software support for chemical research


    The Relation Between Classical and Quantum Mechanics

    This thesis examines the relation between classical and quantum mechanics from philosophical, mathematical and physical standpoints. It first presents arguments in support of “conjectural realism” in scientific theories, distinguished by explicit contextual structure and empirical testability; and it analyses intertheoretic reduction in terms of weakly equivalent theories over a domain of applicability. Familiar formulations of classical and quantum mechanics are shown to follow from a general theory of mechanics based on pure states with an intrinsic probability structure. This theory is developed to the stage where theorems from quantum logic enable expression of the state geometry in Hilbert space. Quantum and classical mechanics are then elaborated and applied to subsystems and the measurement process. Consideration is also given to space-time geometry and the constraints this places on the dynamics. Physics and Mathematics, it is argued, are growing apart; the inadequate treatment of approximations in general, and of localisation in quantum mechanics in particular, are seen as contributing factors. In the description of systems, the link between localisation and lack of knowledge shows that quantum mechanics should reflect the domain of applicability. Restricting the class of states provides a means of achieving this goal. Localisation is then shown to have a mathematical expression in terms of compactness, which in turn is applied to yield a topological theory of bound and scattering states. Finally, the thesis questions the validity of “classical limits” and “quantisations” in intertheoretic reduction, and demonstrates that a widely accepted classical limit does not constitute a proof of reduction. It proposes a procedure for determining whether classical and quantum mechanics are weakly equivalent over a domain of applicability, and concludes that, in this restricted sense, classical mechanics reduces to quantum mechanics.

    N-colour separation methods for accurate reproduction of spot colours

    Full text link
    In packaging, spot colours are used to print key information such as brand logos and elements for which colour accuracy is critical. The present study investigates methods to aid the accurate reproduction of these spot colours with the n-colour printing process. Typical n-colour printing systems consist of supplementary inks in addition to the usual CMYK inks. Adding these inks to the traditional CMYK set increases the attainable colour gamut, but the added complexity creates several challenges in generating suitable colour separations for rendering colour images. In this project, the n-colour separation is achieved by the use of additional sectors for intermediate inks. Each sector contains four inks, with the achromatic ink (black) common to all sectors. This allows the extension of the principles of the CMYK printing process to these additional sectors. The methods developed in this study can be generalised to any number of inks. The project explores various aspects of the n-colour printing process, including the forward characterisation methods, gamut prediction of the n-colour process and the inverse characterisation to calculate the n-colour separation for target spot colours. The scope of the study covers different printing technologies including lithographic offset, flexographic, thermal sublimation and inkjet printing. A new method is proposed to characterise the printing devices. This method, the spot colour overprint (SCOP) model, was evaluated for the n-colour printing process with different printing technologies. In addition, a set of real-world spot colours were converted to n-colour separations and printed with the 7-colour printing process to evaluate against the original spot colours. The results show that the proposed methods can be effectively used to replace the spot coloured inks with the n-colour printing process. This can save significant material, time and costs in the packaging industry.
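The sector idea, four inks per sector with black shared by all, can be sketched as a simple hue-based assignment. This is purely illustrative: the sector boundaries, ink names, and the equal-arc rule below are assumptions for the sketch, not the study's SCOP model.

```python
def sector_for_hue(hue_deg, sectors):
    # Illustrative rule: split the hue circle into equal arcs, one per sector;
    # black is appended because the achromatic ink is common to all sectors.
    arc = 360.0 / len(sectors)
    index = int((hue_deg % 360.0) // arc)
    return sectors[index] + ("black",)

# Hypothetical 3-sector split of a 7-ink set (CMY + orange/green/violet + black)
sectors = [("magenta", "yellow", "orange"),
           ("yellow", "cyan", "green"),
           ("cyan", "magenta", "violet")]
print(sector_for_hue(200.0, sectors))  # → ('yellow', 'cyan', 'green', 'black')
```

Each sector then behaves as a four-ink subsystem, so existing CMYK characterisation machinery can be reused within it.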

    The Polytope Formalism: isomerism and associated unimolecular isomerisation

    This thesis concerns the ontology of isomerism, encompassing the conceptual frameworks and relationships that comprise the subject matter; the necessary formal definitions, nomenclature, and representations, whose impacts reach into unexpected areas such as drug registration and patent specifications; the controlled and precise vocabulary that facilitates nuanced communication; and the digital/computational formalisms that underpin the chemistry software and database tools chemists use to perform much of their work. Using conceptual tools from combinatorics and graph theory, means are presented to provide a unified description of isomerism and associated unimolecular isomerisation, called the Polytope Formalism, spanning both constitutional isomerism and stereoisomerism. This includes unification of the varying approaches historically taken to describe and understand stereoisomerism in organic and inorganic compounds. Work for this thesis began with the synthesis, isolation, and characterisation of compounds not adequately describable using existing IUPAC recommendations. Generalising the polytopal-rearrangements model of stereoisomerisation used in inorganic chemistry yielded prescriptions that could deal with the synthesised compounds, revealing a previously unrecognised fundamental form of isomerism called akamptisomerism. The thesis then describes how attempting to place akamptisomerism within the context of existing stereoisomerism reveals significant systematic deficiencies in the IUPAC recommendations. These shortcomings have limited the conceptualisation of broad classes of compounds and hindered the development of molecules for medicinal and technological applications. It is shown how the Polytope Formalism can be applied to the description of constitutional isomerism in a practical manner.
Finally, a radically different medicinal chemistry design strategy with broad application, based upon these principles, is described.

    Logic and intuition in architectural modelling: philosophy of mathematics for computational design

    This dissertation investigates the relationship between the shift in the focus of architectural modelling from object to system and philosophical shifts in the history of mathematics that are relevant to that change. Particularly in the wake of the adoption of digital computation, design model spaces are more complex, multidimensional, arguably more logical, less intuitive spaces to navigate, less accessible to perception and visual comprehension. Such spatial issues were encountered much earlier in mathematics than in architectural modelling, with the growth of analytical geometry: a transition from Classical axiomatic proofs in geometry as the basis of mathematics to analysis as the underpinning of geometry. Can the computational design modeller learn from the modern history, philosophy and psychology of mathematics about the construction and navigation of computational geometrical architectural system model space? The research is conducted through a review of recent architectural project examples and reference to three more detailed architectural modelling case studies. The spatial questions these examples and case studies raise are examined in the context of selected historical writing in the history, philosophy and psychology of mathematics and space. This leads to conclusions about changes in the relationship between architecture and mathematics, and reflections on the opportunities and limitations for architectural system models using computational geometry in the light of this historical survey. This line of questioning was motivated as a response to the experience of constructing digital associative geometry models and encountering the apparent limits of their flexibility as the graph of dependencies grew and the messiness of the digital modelling space increased.
The questions were inspired particularly by working on the Narthex model for the Sagrada Família church, which extends to many tens of thousands of relationships and constraints, and which was modelled and repeatedly partially remodelled over a very long period. This experience led to the realisation that the limitations of the model were not necessarily the consequence of a poor logical schema definition, but could be inevitable limitations of the geometry as defined, regardless of the means of defining it: the ‘shape’ of the multidimensional space being created. This led to more fundamental questions about the nature of space, its relationship to geometry, and the extent to which the latter can be considered simply an operational and notational system. This dissertation offers a purely inductive journey, presenting evidence through very selective examples in architecture, architectural modelling and the philosophy of mathematics. The journey starts with questions about the tendency of the model space to break out and exhibit unpredictable and not always desirable behaviour; whether geometrical construction can resolve these questions is not conclusively answered. Many very productive questions about computational architectural modelling are raised in the process of looking for answers.