
    Transparency in International Investment Law: The Good, the Bad, and the Murky

    How transparent is the international investment law regime, and how transparent should it be? Most studies approach these questions from one of two competing premises. One camp maintains that the existing regime is opaque and should be made completely transparent; the other finds the regime sufficiently transparent and worries that any further transparency reforms would undermine the regime’s essential functioning. This paper explores the tenability of these two positions by plumbing the precise contours of transparency as an overarching norm within international investment law. After defining transparency in a manner befitting the decentralized nature of the regime, the paper identifies international investment law’s key transparent, semi-transparent, and non-transparent features. It underscores that these categories do not necessarily map onto prevailing normative judgments concerning what might constitute good, bad, and murky transparency practices. The paper then moves beyond previous analyses by suggesting five strategic considerations that should factor into future assessments of whether and how particular aspects of the regime should be rendered more transparent. It concludes with a tentative assessment of the penetration, recent evolution, and likely trajectory of transparency principles within the contemporary international investment law regime.

    The evolution of pedagogic models for work-based learning within a virtual university

    The process of designing a pedagogic model for work-based learning within a virtual university is not a simple matter of using ‘off the shelf’ good practice. Instead, it can be characterised as an evolutionary process that reflects the backgrounds, skills and experiences of the project partners. Within the context of a large-scale project that was building a virtual university for work-based learners, an ambitious goal was set: to base the development of learning materials on a pedagogic model that would be adopted across the project. However, the reality proved to be far more complex than simply putting together an appropriate model from existing research evidence. Instead, the project progressed through a series of redevelopments, each of which was prompted by the involvement of a different team from within the project consortium. The pedagogic models that evolved as part of the project will be outlined, and the reasons for rejecting each will be given. They moved from a simple model, relying on core computer-based materials (assessed by multiple-choice questions with optional work-based learning), to a more sophisticated model that integrated different forms of learning. The challenges that were addressed included making learning flexible and suitable for work-based learning, the coherence of accreditation pathways, the appropriate use of the opportunities provided by online learning, and the learning curves and training needs of the different project teams. Although some of these issues were project-specific (being influenced by the needs of the learners, the aims of the project and the partners involved), the evolutionary process described in this case study illustrates that there can be a steep learning curve for the different collaborating groups within the project team. Whilst this example focuses on work-based learning, the process and the lessons may equally be applicable to a range of learning scenarios.

    "There are too many, but never enough": qualitative case study investigating routine coding of clinical information in depression.

    We sought to understand how clinical information relating to the management of depression is routinely coded in different clinical settings, together with the perspectives of, and implications for, different stakeholders, with a view to understanding how these may be aligned.

    The Evolution of Embedding Metadata in Blockchain Transactions

    The use of blockchains is growing every day, and their utility has greatly expanded from sending and receiving crypto-coins to smart contracts and decentralized autonomous organizations. Modern blockchains underpin a variety of applications, from designing a global identity to improving satellite connectivity. In our research we look at the ability of blockchains to store metadata in an increasing volume of transactions and with an evolving focus of utilization. We further show that basic approaches to improving blockchain privacy also rely on embedding metadata. This paper identifies and classifies real-life blockchain transactions embedding metadata of a number of major protocols running primarily over the Bitcoin blockchain. The empirical analysis presents the evolution of metadata utilization in recent years, and the discussion suggests steps towards preventing criminal use. Metadata are relevant to any blockchain, and our analysis considers Bitcoin as its primary case study. The paper concludes that, as both legitimate utilization of embedded metadata and blockchain functionality expand, applied research on improving anonymity and security must also attempt to protect against blockchain abuse.
    Comment: 9 pages, 6 figures, 1 table; 2018 International Joint Conference on Neural Networks
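    Concretely, most Bitcoin metadata of the kind discussed above is carried in provably unspendable OP_RETURN outputs. The sketch below is a simplified illustration of how such payloads can be extracted and classified; it is not the paper's pipeline, and the prefix table is an assumed, far-from-exhaustive sample of markers some protocols use.

```python
# Minimal sketch: extract and classify metadata embedded via OP_RETURN.
# Assumes output scripts are supplied as hex strings.

OP_RETURN = 0x6a
OP_PUSHDATA1 = 0x4c

# Illustrative ASCII markers placed at the start of some protocols'
# OP_RETURN payloads (assumption for demonstration; not exhaustive).
KNOWN_PREFIXES = {
    b"omni": "Omni Layer",
    b"OA": "Open Assets",
    b"DOCPROOF": "Proof of Existence",
}

def extract_op_return(script_hex: str) -> bytes | None:
    """Return the pushed payload if the script is an OP_RETURN output."""
    script = bytes.fromhex(script_hex)
    if not script or script[0] != OP_RETURN:
        return None
    if len(script) == 1:
        return b""  # bare OP_RETURN with no payload
    push = script[1]
    if push <= 0x4b:  # direct push: the opcode byte itself is the length
        return script[2:2 + push]
    if push == OP_PUSHDATA1 and len(script) >= 3:
        return script[3:3 + script[2]]
    return None  # longer push encodings not handled in this sketch

def classify(payload: bytes) -> str:
    """Map a payload to a protocol name by its leading marker, if known."""
    for prefix, protocol in KNOWN_PREFIXES.items():
        if payload.startswith(prefix):
            return protocol
    return "unknown"
```

    Scanning every output script of every transaction in a block with such a routine is the basic mechanism behind the kind of empirical census the paper reports.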

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error-prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes lack functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data classes and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
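    As a rough illustration of the kind of heuristic involved, a data class can be flagged when accessors dominate and complexity is low, and a god class when complexity is high and the class reaches into other classes' data. The metric names and thresholds below are assumptions for demonstration, not the rules used by the tool or by Marinescu.

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    num_methods: int            # total methods declared by the class
    num_accessors: int          # simple getters/setters
    weighted_complexity: int    # e.g. WMC: summed cyclomatic complexity
    foreign_data_accesses: int  # attributes of other classes used directly

def is_data_class(m: ClassMetrics) -> bool:
    # Mostly accessors and little real behaviour (thresholds assumed).
    return (m.num_methods > 0
            and m.num_accessors / m.num_methods > 0.7
            and m.weighted_complexity < 20)

def is_god_class(m: ClassMetrics) -> bool:
    # Large, complex, and reaching into other classes' data (cutoffs assumed).
    return m.weighted_complexity > 50 and m.foreign_data_accesses > 5
```

    In practice such predicates are run over metrics harvested from the whole codebase, and the borderline cases they surface are then reviewed by a developer.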

    Simulated Galaxy Interactions as Probes of Merger Spectral Energy Distributions

    We present the first systematic comparison of ultraviolet-millimeter spectral energy distributions (SEDs) of observed and simulated interacting galaxies. Our sample is drawn from the Spitzer Interacting Galaxy Survey, and probes a range of galaxy interaction parameters. We use 31 galaxies in 14 systems which have been observed with Herschel, Spitzer, GALEX, and 2MASS. We create a suite of GADGET-3 hydrodynamic simulations of isolated and interacting galaxies with stellar masses comparable to those in our sample of interacting galaxies. Photometry for the simulated systems is then calculated with the SUNRISE radiative transfer code for comparison with the observed systems. For most of the observed systems, one or more of the simulated SEDs match reasonably well. The best matches recover the infrared luminosity and the star formation rate of the observed systems, and the more massive systems preferentially match SEDs from simulations of more massive galaxies. The most morphologically distorted systems in our sample are best matched to simulated SEDs close to coalescence, while less evolved systems match well with SEDs over a wide range of interaction stages, suggesting that an SED alone is insufficient to identify interaction stage except during the most active phases in strongly interacting systems. This result is supported by our finding that the SEDs calculated for simulated systems vary little over the interaction sequence.
    Comment: 24 pages, 16 figures, 2 tables, accepted for publication in ApJ. Animations of the evolution of the simulated SEDs can be found at http://www.cfa.harvard.edu/~llanz/sigs_sim.htm
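    The core of such a comparison is template matching: each observed SED is scored against the library of simulated SEDs and the closest template is taken. A minimal sketch follows, assuming fluxes have already been interpolated onto a common set of bands and using a plain chi-squared statistic with a free per-template normalization; the paper's actual matching procedure may differ.

```python
import numpy as np

def best_match(obs_flux, obs_err, sim_fluxes):
    """Index of the simulated SED that minimises chi-squared against the
    observed fluxes, allowing each template a free normalisation."""
    obs_flux, obs_err = np.asarray(obs_flux), np.asarray(obs_err)
    chi2 = []
    for sim in np.asarray(sim_fluxes, dtype=float):
        # Closed-form scale a minimising sum(((obs - a*sim) / err)**2)
        a = np.sum(obs_flux * sim / obs_err**2) / np.sum(sim**2 / obs_err**2)
        chi2.append(np.sum(((obs_flux - a * sim) / obs_err) ** 2))
    return int(np.argmin(chi2))
```

    Because many simulation snapshots produce nearly identical scores, the minimum alone is weak evidence for a particular interaction stage, which is consistent with the paper's conclusion.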

    The Foundation Supernova Survey: Measuring Cosmological Parameters with Supernovae from a Single Telescope

    Measurements of the dark energy equation-of-state parameter, w, have been limited by uncertainty in the selection effects and photometric calibration of z < 0.1 Type Ia supernovae (SNe Ia). The Foundation Supernova Survey is designed to lower these uncertainties by creating a new sample of z < 0.1 SNe Ia observed on the Pan-STARRS system. Here, we combine the Foundation sample with SNe from the Pan-STARRS Medium Deep Survey and measure cosmological parameters with 1,338 SNe from a single telescope and a single, well-calibrated photometric system. For the first time, both the low-z and high-z data are predominantly discovered by surveys that do not target pre-selected galaxies, reducing selection bias uncertainties. The z > 0.1 data include 875 SNe without spectroscopic classifications, and we show that we can robustly marginalize over core-collapse (CC) SN contamination. We measure Foundation Hubble residuals to be fainter than the pre-existing low-z Hubble residuals by 0.046 ± 0.027 mag (stat+sys). By combining the SN Ia data with cosmic microwave background constraints, we find w = -0.938 ± 0.053, consistent with ΛCDM. With 463 spectroscopically classified SNe Ia alone, we measure w = -0.933 ± 0.061. Using the more homogeneous and better-characterized Foundation sample gives a 55% reduction in the systematic uncertainty attributed to SN Ia sample selection biases. Although use of just a single photometric system at low and high redshift increases the impact of photometric calibration uncertainties in this analysis, previous low-z samples may have correlated calibration uncertainties that were neglected in past studies. The full Foundation sample will observe up to 800 SNe to anchor the LSST and WFIRST Hubble diagrams.
    Comment: 30 pages, 17 figures, accepted by ApJ
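    For context on how SN Ia distances constrain w: in a flat wCDM model the luminosity distance follows from an integral over the expansion history, and the resulting distance modulus is compared with corrected SN magnitudes on the Hubble diagram. Below is a minimal sketch of that standard relation; the default parameter values are placeholders, not the paper's fitted values.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s

def lum_distance_mpc(z, h0=70.0, omega_m=0.3, w=-1.0):
    """Luminosity distance (Mpc) in a flat wCDM universe with constant w."""
    def inv_e(zp):
        # E(z) = H(z)/H0 for flat wCDM: matter + dark energy with constant w
        return 1.0 / np.sqrt(omega_m * (1 + zp) ** 3
                             + (1 - omega_m) * (1 + zp) ** (3 * (1 + w)))
    comoving, _ = quad(inv_e, 0.0, z)
    return (1 + z) * (C_KM_S / h0) * comoving

def distance_modulus(z, **cosmo):
    """mu = 5 log10(d_L / 10 pc), compared against measured SN magnitudes."""
    return 5.0 * np.log10(lum_distance_mpc(z, **cosmo)) + 25.0
```

    Fitting w then amounts to varying the cosmological parameters until the predicted distance moduli best reproduce the SN sample, with the CMB supplying complementary constraints.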