    Deficient Reasoning for Dark Matter in Galaxies

    Astronomers have been using the measured luminosity to estimate the luminous mass of stars, based on an empirically established mass-to-light ratio that seems to be applicable only to a special class of stars, the main-sequence stars, and still carries considerable uncertainties. Another basic tool for determining the mass of a system of stars or galaxies comes from the study of their motion, as Newton demonstrated with his law of gravitation, which yields the gravitational mass. Because the luminous mass can at best represent only a portion of the gravitational mass, finding the luminous mass to be different from, or less than, the gravitational mass should not be surprising. Using such an apparent discrepancy as compelling evidence for so-called dark matter, which is believed to possess mysterious nonbaryonic properties and to be present in a dominant amount in galaxies and the universe, seems too far a stretch when the facts and uncertainties in the measurement techniques are seriously examined. In our opinion, a galaxy whose star-type distribution varies from its center to its edge may have a correspondingly varying mass-to-light ratio. With thin-disk model computations based on measured rotation curves, we found that most galaxies have a typical mass density profile that peaks at the galactic center, decreases rapidly within ~5% of the cut-off radius, and then declines nearly exponentially toward the edge. The predicted mass density in the Galactic disk is reasonably within the reported range of that observed in the interstellar medium. This leads us to believe that ordinary baryonic matter can be sufficient to support the observed galactic rotation curves; speculation about a large amount of non-baryonic matter may rest on an ill-conceived discrepancy between gravitational mass and luminous mass that appears to be unjustified.
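    As a rough illustration of the kind of profile described above (not the authors' actual thin-disk computation), the sketch below evaluates an assumed two-component surface-density law, a sharp central component confined to roughly 5% of the cut-off radius plus a near-exponential outer disk, and numerically integrates it to the enclosed mass; every parameter value is invented for illustration.

```python
# Illustrative sketch only: an assumed surface-density profile of the shape the abstract
# describes (peak at the centre, rapid drop within ~5% of the cut-off radius, then a
# near-exponential decline), integrated to the enclosed mass. All numbers are made up.
import numpy as np

R_CUT = 15.0                 # assumed cut-off radius [kpc]
R_INNER = 0.05 * R_CUT       # inner scale: ~5% of the cut-off radius [kpc]
R_DISK = 0.25 * R_CUT        # assumed outer-disk scale length [kpc]
SIGMA_INNER = 2000.0         # assumed central surface density [M_sun / pc^2]
SIGMA_DISK = 150.0           # assumed outer-disk surface density at the centre [M_sun / pc^2]

def surface_density(r_kpc):
    """Sigma(r): sharp central component plus near-exponential outer disk."""
    return (SIGMA_INNER * np.exp(-r_kpc / R_INNER)
            + SIGMA_DISK * np.exp(-r_kpc / R_DISK))

def enclosed_mass(r_kpc, n=4000):
    """M(<r) = 2*pi * integral_0^r Sigma(r') r' dr' for an axisymmetric thin disk."""
    r_pc = np.linspace(0.0, r_kpc * 1.0e3, n)          # radius grid in parsecs
    integrand = surface_density(r_pc / 1.0e3) * r_pc   # Sigma in M_sun/pc^2, r in pc
    widths = np.diff(r_pc)
    return 2.0 * np.pi * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * widths)

for r in (0.5, 1.0, 5.0, 10.0, R_CUT):
    print(f"r = {r:5.1f} kpc   Sigma = {surface_density(r):9.2e} M_sun/pc^2   "
          f"M(<r) = {enclosed_mass(r):9.2e} M_sun")
```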

    Knowledge-based systems and geological survey

    This personal and pragmatic review of the philosophy underpinning methods of geological surveying suggests that important influences of information technology have yet to make their impact. Early approaches took existing systems as metaphors, retaining the separation of maps, map explanations and information archives, organised around map sheets of fixed boundaries, scale and content. But system design should look ahead: a computer-based knowledge system for the same purpose can be built around hierarchies of spatial objects and their relationships, with maps as one means of visualisation, and information types linked as hypermedia and integrated in mark-up languages. The system framework and ontology, derived from the general geoscience model, could support consistent representation of the underlying concepts and maintain reference information on object classes and their behaviour. Models of processes and historical configurations could clarify the reasoning at any level of object detail and introduce new concepts such as complex systems. The up-to-date interpretation might centre on spatial models, constructed with explicit geological reasoning and evaluation of uncertainties. Assuming (at a future time) full computer support, the field survey results could be collected in real time as a multimedia stream, hyperlinked to and interacting with the other parts of the system as appropriate. Throughout, the knowledge is seen as human knowledge, with interactive computer support for recording and storing the information and processing it by such means as interpolating, correlating, browsing, selecting, retrieving, manipulating, calculating, analysing, generalising, filtering, visualising and delivering the results. Responsibilities may have to be reconsidered for various aspects of the system, such as: field surveying; spatial models and interpretation; geological processes, past configurations and reasoning; standard setting, system framework and ontology maintenance; training; and storage, preservation and dissemination of digital records.
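    To make the idea of hierarchies of spatial objects and their relationships concrete, the hypothetical sketch below models object classes (the ontology level, holding reference information) and spatial objects that carry parent-child links, named relationships and hypermedia links; all class names, identifiers and links are invented for illustration and are not taken from any survey system.

```python
# Hypothetical sketch of a spatial-object hierarchy with an ontology of classes.
# Names, identifiers and links are invented for illustration only.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ObjectClass:
    """Reference information held once per class (the ontology level)."""
    name: str
    description: str
    expected_properties: List[str] = field(default_factory=list)

@dataclass
class SpatialObject:
    """A surveyed object: part of a hierarchy, with relationships and hypermedia links."""
    identifier: str
    object_class: ObjectClass
    parent: "SpatialObject | None" = None
    children: List["SpatialObject"] = field(default_factory=list)
    relationships: Dict[str, str] = field(default_factory=dict)   # e.g. "cut_by" -> object id
    media_links: List[str] = field(default_factory=list)          # photos, notes, models, ...

    def add_child(self, child: "SpatialObject") -> None:
        """Attach a child object and record the hierarchical link in both directions."""
        child.parent = self
        self.children.append(child)

# Usage: a formation containing a member, with a cross-cutting relationship recorded explicitly.
formation_cls = ObjectClass("Formation", "Lithostratigraphic unit", ["lithology", "age"])
member_cls = ObjectClass("Member", "Subdivision of a formation", ["lithology"])

formation = SpatialObject("FM-001", formation_cls)
member = SpatialObject("MB-001a", member_cls,
                       relationships={"cut_by": "FAULT-07"},
                       media_links=["field_notes/mb001a.html"])
formation.add_child(member)
print(member.parent.identifier, "->", member.relationships)
```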

    Technology assessment between risk, uncertainty and ignorance

    The use of most if not all technologies is accompanied by negative side effects. While we may profit from today’s technologies, it is most often future generations who bear most of the risks. Risk analysis therefore becomes a delicate issue, because future risks often cannot be assigned a meaningful occurrence probability. This paper argues that technology assessment most often deals with uncertainty and ignorance rather than risk once we include future generations in our ethical, political or juridical thinking. This has serious implications, as probabilistic decision approaches are no longer applicable. I contend that a virtue-ethical approach in which dianoetic virtues play a central role may supplement a welfare-based ethics in order to overcome the difficulties of dealing with uncertainty and ignorance in technology assessment.

    Using spatio-temporal continuity constraints to enhance visual tracking of moving objects

    We present a framework for annotating dynamic scenes involving occlusion and other uncertainties. Our system comprises an object tracker, an object classifier and an algorithm for reasoning about spatio-temporal continuity. The principle behind the object-tracking and classifier modules is to reduce error by increasing ambiguity (by merging objects in close proximity and presenting multiple hypotheses). The reasoning engine resolves error, ambiguity and occlusion to produce a most likely hypothesis that is consistent with global spatio-temporal continuity constraints. The system yields improved annotation over frame-by-frame methods. It has been implemented and applied to the analysis of a team sports video.
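    As a hedged sketch of the "reduce error by increasing ambiguity" step described above (not the authors' implementation), the example below fuses detections that lie within a proximity threshold into a single blob that retains all candidate identities, leaving resolution to a later reasoning stage; the threshold and labels are assumed values.

```python
# Hedged sketch: merge nearby detections into one ambiguous blob that keeps every
# identity hypothesis, so a later reasoning stage can resolve them. Assumed values only.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    x: float           # centre x (pixels)
    y: float           # centre y (pixels)
    labels: List[str]  # candidate identities for this blob

def merge_close_detections(dets: List[Detection], radius: float = 30.0) -> List[Detection]:
    """Greedy single-pass merge: any detection closer than `radius` to an existing blob
    is fused into it, and the blob's label set becomes the union of both hypotheses."""
    merged: List[Detection] = []
    for d in dets:
        for m in merged:
            if (m.x - d.x) ** 2 + (m.y - d.y) ** 2 <= radius ** 2:
                # fuse: average position, union of identity hypotheses
                m.x, m.y = (m.x + d.x) / 2.0, (m.y + d.y) / 2.0
                m.labels = sorted(set(m.labels) | set(d.labels))
                break
        else:
            merged.append(Detection(d.x, d.y, list(d.labels)))
    return merged

# Two players occluding each other become one blob with both hypotheses retained.
frame = [Detection(100, 120, ["player_3"]),
         Detection(110, 125, ["player_7"]),
         Detection(400, 80, ["referee"])]
for blob in merge_close_detections(frame):
    print(blob)
```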

    Models of everywhere revisited: a technological perspective

    The concept ‘models of everywhere’ was first introduced in the mid-2000s as a means of reasoning about the environmental science of a place, changing the nature of the underlying modelling process from one in which general model structures are used to one in which modelling becomes a learning process about specific places, in particular capturing the idiosyncrasies of that place. At one level this is a straightforward concept, but at another it is a rich, multi-dimensional conceptual framework involving the following key dimensions: models of everywhere, models of everything and models at all times, constantly re-evaluated against the most current evidence. This is a compelling approach with the potential to deal with epistemic uncertainties and nonlinearities. However, the approach has not yet been fully utilised or explored. This paper examines the concept of models of everywhere in the light of recent advances in technology. The paper argues that, when first proposed, technology was a limiting factor, but now, with advances in areas such as the Internet of Things, cloud computing and data analytics, many of the barriers have been lowered. Consequently, it is timely to look again at the concept of models of everywhere under practical conditions as part of a trans-disciplinary effort to tackle the remaining research questions. The paper concludes by identifying the key elements of a research agenda that should underpin such experimentation and deployment.