Transparency in International Investment Law: The Good, the Bad, and the Murky
How transparent is the international investment law regime, and how transparent should it be? Most studies approach these questions from one of two competing premises. One camp maintains that the existing regime is opaque and should be made completely transparent; the other finds the regime sufficiently transparent and worries that any further transparency reforms would undermine the regime’s essential functioning. This paper explores the tenability of these two positions by plumbing the precise contours of transparency as an overarching norm within international investment law. After defining transparency in a manner befitting the decentralized nature of the regime, the paper identifies international investment law’s key transparent, semi-transparent, and non-transparent features. It underscores that these categories do not necessarily map onto prevailing normative judgments concerning what might constitute good, bad, and murky transparency practices. The paper then moves beyond previous analyses by suggesting five strategic considerations that should factor into future assessments of whether and how particular aspects of the regime should be rendered more transparent. It concludes with a tentative assessment of the penetration, recent evolution, and likely trajectory of transparency principles within the contemporary international investment law regime.
The evolution of pedagogic models for work-based learning within a virtual university
The process of designing a pedagogic model for work-based learning within a virtual university is not a simple matter of using ‘off the shelf’ good practice. Instead, it can be characterised as an evolutionary process that reflects the backgrounds, skills and experiences of the project partners. Within the context of a large-scale project that was building a virtual university for work-based learners, an ambitious goal was set: to base the development of learning materials on a pedagogic model that would be adopted across the project. However, the reality proved to be far more complex than simply putting together an appropriate model from existing research evidence. Instead, the project progressed through a series of redevelopments, each of which was prompted by the involvement of a different team from within the project consortium. The pedagogic models that evolved as part of the project are outlined, and the reasons for rejecting each are given. They moved from a simple model, relying on core computer-based materials (assessed by multiple choice questions with optional work-based learning), to a more sophisticated model that integrated different forms of learning. The challenges that were addressed included making learning flexible and suitable for work-based learning, the coherence of accreditation pathways, the appropriate use of the opportunities provided by online learning, and the learning curves and training needs of the different project teams. Although some of these issues were project-specific (being influenced by the needs of the learners, the aims of the project and the partners involved), the evolutionary process described in this case study illustrates that there can be a steep learning curve for the different collaborating groups within a project team. Whilst this example focuses on work-based learning, the process and the lessons may be equally applicable to a range of learning scenarios.
"There are too many, but never enough": qualitative case study investigating routine coding of clinical information in depression.
We sought to understand how clinical information relating to the management of depression is routinely coded in different clinical settings, and the perspectives of and implications for different stakeholders, with a view to understanding how these may be aligned.
The Evolution of Embedding Metadata in Blockchain Transactions
The use of blockchains is growing every day, and their utility has greatly expanded from sending and receiving crypto-coins to smart contracts and decentralized autonomous organizations. Modern blockchains underpin a variety of applications, from designing a global identity to improving satellite connectivity. In our research we look at the ability of blockchains to store metadata in an increasing volume of transactions and with an evolving focus of utilization. We further show that basic approaches to improving blockchain privacy also rely on embedded metadata. This paper identifies and classifies real-life blockchain transactions embedding the metadata of a number of major protocols running primarily over the Bitcoin blockchain. The empirical analysis presents the evolution of metadata utilization in recent years, and the discussion suggests steps towards preventing criminal use. Metadata are relevant to any blockchain, and our analysis considers primarily Bitcoin as a case study. The paper concludes that, as both the legitimate utilization of embedded metadata and blockchain functionality expand, applied research on improving anonymity and security must also attempt to protect against blockchain abuse.
Comment: 9 pages, 6 figures, 1 table, 2018 International Joint Conference on Neural Networks
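As a concrete illustration of the embedding mechanism discussed above: several overlay protocols write their metadata into provably unspendable OP_RETURN outputs and mark the payload with a fixed byte prefix (for example, Omni Layer uses the ASCII marker "omni" and Counterparty uses "CNTRPRTY"). A minimal sketch of prefix-based classification follows; the prefix table and function names are illustrative, not the paper's actual taxonomy:

```python
# Sketch: classify the overlay protocol behind an OP_RETURN payload by
# its leading "magic" bytes. The prefixes below are commonly documented
# protocol markers; the table is illustrative and far from exhaustive.
KNOWN_PREFIXES = {
    b"omni": "Omni Layer",
    b"CNTRPRTY": "Counterparty",
    b"OA\x01\x00": "Open Assets",  # marker bytes plus version, as commonly documented
}

def classify_op_return(payload: bytes) -> str:
    """Return the protocol name matching the payload's prefix, if any."""
    for prefix, protocol in KNOWN_PREFIXES.items():
        if payload.startswith(prefix):
            return protocol
    return "unknown / free-form metadata"

# Usage on two hypothetical payloads:
print(classify_op_return(b"omni\x00\x00\x00\x32"))  # Omni Layer
print(classify_op_return(b"hello world"))           # unknown / free-form metadata
```

A real classifier would first extract OP_RETURN scripts from raw transactions; this sketch covers only the final prefix-matching step.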
Sharing practice, problems and solutions for institutional change
This chapter critiques the roles of different forms of representation of practice as part of an institutional change process. It discusses how these representations can be used both to design and to share learning activities at the various levels of decision-making in a university. We illustrate our arguments with empirical data gathered on change processes associated with an institution-wide change programme: the introduction of a new virtual learning environment (VLE). In particular, we describe a case study of the introduction of the VLE tools in a business course. We focus on two particular forms of representation to describe the essence of the innovation: a pedagogical pattern and a visual learning design. We argue that pedagogical patterns and learning design have emerged as parallel approaches to describing practice in recent years. Despite their very different origins, both provide complementary representations, which emphasize different aspects of the practice being described. We are attempting to combine these approaches. We briefly outline the Open University Learning Design initiative, of which this work is part, and describe its key underpinning philosophies. We believe our approach provides a vehicle for enabling a better articulation of design principles and the discussion of issues concerning the re-use of educational resources and activities.
A heuristic-based approach to code-smell detection
Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes lack functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
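By way of illustration, a metrics-based heuristic in the general spirit of such detection strategies might flag a god-class candidate when a class is both large and reaches into many other classes' data, and a data-class candidate when nearly all of its methods are bare accessors. The metric names and thresholds below are illustrative assumptions, not those of the tool or of Marinescu's strategies:

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    num_methods: int            # total methods, a simple size proxy
    num_accessors: int          # plain getters/setters among them
    foreign_data_accessed: int  # attributes read from other classes

def is_god_class(m: ClassMetrics,
                 method_threshold: int = 20,
                 foreign_threshold: int = 5) -> bool:
    # Heuristic: a large class that also manipulates many other
    # classes' data is hoarding behaviour that belongs elsewhere.
    return (m.num_methods > method_threshold
            and m.foreign_data_accessed > foreign_threshold)

def is_data_class(m: ClassMetrics, accessor_ratio: float = 0.8) -> bool:
    # Heuristic: almost all behaviour is bare accessors, i.e. the
    # class is data with no logic of its own.
    return (m.num_methods > 0
            and m.num_accessors / m.num_methods > accessor_ratio)

# Usage on two hypothetical classes:
controller = ClassMetrics("OrderController", 42, 2, 12)
record = ClassMetrics("OrderRecord", 10, 9, 0)
print(is_god_class(controller), is_data_class(record))  # True True
```

Real detectors compute such metrics from the abstract syntax tree rather than taking them as inputs; the sketch shows only the classification step.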
Simulated Galaxy Interactions as Probes of Merger Spectral Energy Distributions
We present the first systematic comparison of ultraviolet-millimeter spectral energy distributions (SEDs) of observed and simulated interacting galaxies. Our sample is drawn from the Spitzer Interacting Galaxy Survey, and probes a range of galaxy interaction parameters. We use 31 galaxies in 14 systems which have been observed with Herschel, Spitzer, GALEX, and 2MASS. We create a suite of GADGET-3 hydrodynamic simulations of isolated and interacting galaxies with stellar masses comparable to those in our sample of interacting galaxies. Photometry for the simulated systems is then calculated with the SUNRISE radiative transfer code for comparison with the observed systems. For most of the observed systems, one or more of the simulated SEDs match reasonably well. The best matches recover the infrared luminosity and the star formation rate of the observed systems, and the more massive systems preferentially match SEDs from simulations of more massive galaxies. The most morphologically distorted systems in our sample are best matched to simulated SEDs close to coalescence, while less evolved systems match well with SEDs over a wide range of interaction stages, suggesting that an SED alone is insufficient to identify interaction stage except during the most active phases in strongly interacting systems. This result is supported by our finding that the SEDs calculated for simulated systems vary little over the interaction sequence.
Comment: 24 pages, 16 figures, 2 tables, accepted for publication in ApJ. Animations of the evolution of the simulated SEDs can be found at http://www.cfa.harvard.edu/~llanz/sigs_sim.htm
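The band-by-band matching described above can be made concrete with a small sketch: fit a free luminosity scale to each simulated SED and rank the candidates by chi-squared, so that only the SED shape drives the comparison. This is a generic shape-matching recipe, not necessarily the fitting procedure used in the paper:

```python
def best_match(obs_flux, obs_err, simulated_seds):
    """Rank simulated SEDs against observed photometry.

    Each SED is a sequence of model fluxes in the same bands as the
    observations. A free scale factor a absorbs the overall
    luminosity; its chi-squared-optimal value has a closed form:
        a = sum(f_obs * f_mod / sigma^2) / sum(f_mod^2 / sigma^2)
    Returns (name, chi2) for the lowest-chi2 simulated SED.
    """
    best = None
    for name, model in simulated_seds.items():
        num = sum(fo * fm / e**2 for fo, fm, e in zip(obs_flux, model, obs_err))
        den = sum(fm**2 / e**2 for fm, e in zip(model, obs_err))
        a = num / den
        chi2 = sum(((fo - a * fm) / e)**2
                   for fo, fm, e in zip(obs_flux, model, obs_err))
        if best is None or chi2 < best[1]:
            best = (name, chi2)
    return best

# Usage with toy three-band photometry (hypothetical values):
seds = {"early_pass": [1.0, 2.0, 3.0], "coalescence": [3.0, 1.0, 0.5]}
obs = [2.0, 4.0, 6.0]   # a scaled copy of the "early_pass" shape
err = [0.1, 0.1, 0.1]
print(best_match(obs, err, seds)[0])  # early_pass
```

In practice one would also convolve model spectra with filter transmission curves before comparing; the sketch assumes matched bands.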
The Foundation Supernova Survey: Measuring Cosmological Parameters with Supernovae from a Single Telescope
Measurements of the dark energy equation-of-state parameter, w, have been limited by uncertainty in the selection effects and photometric calibration of Type Ia supernovae (SNe Ia). The Foundation Supernova Survey is designed to lower these uncertainties by creating a new sample of SNe Ia observed on the Pan-STARRS system. Here, we combine the Foundation sample with SNe from the Pan-STARRS Medium Deep Survey and measure cosmological parameters with 1,338 SNe from a single telescope and a single, well-calibrated photometric system. For the first time, both the low-z and high-z data are predominantly discovered by surveys that do not target pre-selected galaxies, reducing selection bias uncertainties. The data include 875 SNe without spectroscopic classifications, and we show that we can robustly marginalize over core-collapse SN contamination. We measure Foundation Hubble residuals to be fainter than the pre-existing low-z Hubble residuals by mag (stat+sys). By combining the SN Ia data with cosmic microwave background constraints, we find w consistent with ΛCDM. With 463 spectroscopically classified SNe Ia alone, we also measure w. Using the more homogeneous and better-characterized Foundation sample gives a 55% reduction in the systematic uncertainty attributed to SN Ia sample selection biases. Although the use of just a single photometric system at low and high redshift increases the impact of photometric calibration uncertainties in this analysis, previous low-z samples may have correlated calibration uncertainties that were neglected in past studies. The full Foundation sample will observe up to 800 SNe to anchor the LSST and WFIRST Hubble diagrams.
Comment: 30 pages, 17 figures, accepted by ApJ
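The Hubble residuals mentioned above are the differences between each supernova's observed distance modulus and the value a cosmological model predicts at its redshift. A worked sketch using standard flat-ΛCDM relations follows; the parameter values and the sample supernova are illustrative, not the paper's fit:

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def luminosity_distance_mpc(z, h0=70.0, omega_m=0.3, steps=1000):
    """Luminosity distance in Mpc for a flat LambdaCDM cosmology.

    d_L = (1 + z) * (c / H0) * integral_0^z dz' / E(z'),
    with E(z) = sqrt(Omega_m (1 + z)^3 + Omega_Lambda).
    """
    omega_l = 1.0 - omega_m
    dz = z / steps
    integral = 0.0
    for i in range(steps):  # midpoint-rule quadrature
        zp = (i + 0.5) * dz
        integral += dz / math.sqrt(omega_m * (1.0 + zp) ** 3 + omega_l)
    return (1.0 + z) * (C_KM_S / h0) * integral

def distance_modulus(z, **cosmo):
    # mu = 5 log10(d_L / 10 pc); with d_L in Mpc this is 5 log10(d_L) + 25.
    return 5.0 * math.log10(luminosity_distance_mpc(z, **cosmo)) + 25.0

# Hubble residual for one hypothetical supernova: observed minus model.
mu_obs = 38.40  # illustrative observed distance modulus at z = 0.1
residual = mu_obs - distance_modulus(0.1)
print(f"residual = {residual:+.3f} mag")
```

A full analysis fits w jointly with nuisance parameters across the whole sample; the sketch shows only how one model prediction enters each residual.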