BG Group and “Conditions” to Arbitral Jurisdiction
Although the Supreme Court has over the last decade generated a robust body of arbitration caselaw, its first decision in the area of investment arbitration under a Bilateral Investment Treaty was only handed down in 2014. BG Group v. Argentina was widely anticipated and has attracted much notice, and general approval, on the part of the arbitration community. In this paper we assess the Court’s decision from two different perspectives—the first attempts to situate it in the discourse of the American law of commercial arbitration; the second considers it in light of the expectations of the international community surrounding the proper construction of Conventions between states.
Our initial goal had been to write jointly, with the hope that we could bridge our differences to find, if not common, at least neighboring, ground. On some points we did so, but ultimately our divergent appreciations of the proper way to interpret the condition precedent in the investment treaty in BG Group overcame the idealism with which we commenced the project. Nonetheless we have decided to present the two papers together to emphasize the dichotomous approaches to treaty interpretation that two moderately sensible people, who inhabit overlapping but non-congruent interpretive communities, can have.
INAUT, a Controlled Language for the French Coast Pilot Books Instructions nautiques
We describe INAUT, a controlled natural language dedicated to collaborative
update of a knowledge base on maritime navigation and to automatic generation
of coast pilot books (Instructions nautiques) of the French National
Hydrographic and Oceanographic Service (SHOM). INAUT is based on the French
language and makes abundant use of georeferenced entities. After describing the structure of
the overall system, giving details on the language and on its generation, and
discussing the three major applications of INAUT (document production,
interaction with ENCs and collaborative updates of the knowledge base), we
conclude with future extensions and open problems.
Comment: 10 pages, 3 figures, accepted for publication at Fourth Workshop on Controlled Natural Language (CNL 2014), 20-22 August 2014, Galway, Ireland
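To make the description above concrete, the following is a purely illustrative sketch of the kind of pipeline the abstract outlines: a knowledge base of georeferenced entities feeding template-based generation of a coast pilot instruction. It is not SHOM's or the authors' system; every name, field, and sentence pattern here is invented.

    from dataclasses import dataclass

    @dataclass
    class GeoEntity:
        name: str   # toponym as stored in the knowledge base
        lat: float
        lon: float
        kind: str   # e.g. "buoy", "lighthouse", "channel"

    def generate_instruction(entity: GeoEntity, clearance_m: float) -> str:
        """Render one controlled-language instruction from a knowledge-base entity."""
        return (f"Give the {entity.kind} {entity.name} "
                f"({entity.lat:.4f}, {entity.lon:.4f}) a berth of at least "
                f"{clearance_m:.0f} m.")

    print(generate_instruction(GeoEntity("Pointe Sud", 48.3900, -4.4900, "buoy"), 200))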
Rational Bargaining Theory and Contract: Default Rules, Hypothetical Consent, the Duty to Disclose, and Fraud
The author begins by responding to Coleman's rational choice approach to choosing default rules. In part I, he applies the expanded analysis of contractual consent and default rules that he had recently presented elsewhere to explain how rational bargaining, hypothetical consent, and actual consent figure in the determination of contractual default rules. Whereas Coleman advocates the centrality of rational bargaining analysis to this determination, the author explains why rational bargaining theory's role must be subsidiary to that of consent.
The author then turns his attention to Coleman's appraisal of contracting parties' duty to disclose information concerning the resources that are the subject of a contractual transfer. In part II, he argues that both Coleman's and Anthony Kronman's analyses of John Marshall's opinion in the classic case of Laidlaw v. Organ overlook an important function of his holding permitting nondisclosure. The author concludes by proposing a conception of fraud that explains why trading on and profiting from certain types of undisclosed information is not properly deemed fraudulent.
MBT: A Memory-Based Part of Speech Tagger-Generator
We introduce a memory-based approach to part of speech tagging. Memory-based
learning is a form of supervised learning based on similarity-based reasoning.
The part of speech tag of a word in a particular context is extrapolated from
the most similar cases held in memory. Supervised learning approaches are
useful when a tagged corpus is available as an example of the desired output of
the tagger. Based on such a corpus, the tagger-generator automatically builds a
tagger which is able to tag new text the same way, diminishing development time
for the construction of a tagger considerably. Memory-based tagging shares this
advantage with other statistical or machine learning approaches. Additional
advantages specific to a memory-based approach include (i) the relatively small
tagged corpus size sufficient for training, (ii) incremental learning, (iii)
explanation capabilities, (iv) flexible integration of information in case
representations, (v) its non-parametric nature, (vi) reasonably good results on
unknown words without morphological analysis, and (vii) fast learning and
tagging. In this paper we show that a large-scale application of the
memory-based approach is feasible: we obtain a tagging accuracy that is on a
par with that of known statistical approaches, and with attractive space and
time complexity properties when using IGTree, a tree-based formalism for
indexing and searching huge case bases. The use of IGTree has the additional
advantage that the optimal context size for disambiguation is computed dynamically.
Comment: 14 pages, 2 Postscript figures
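As a rough illustration of the memory-based idea (a minimal nearest-neighbour sketch under our own simplifications, not the authors' MBT implementation, and without IGTree indexing), each training word is stored as a small context case and a new word's tag is extrapolated from the most similar stored cases:

    from collections import Counter

    def make_cases(tagged_sentences):
        """Store each word as a case: (previous tag, word, next word) -> tag."""
        cases = []
        for sent in tagged_sentences:
            for i, (word, tag) in enumerate(sent):
                prev_tag = sent[i - 1][1] if i > 0 else "<s>"
                next_word = sent[i + 1][0] if i + 1 < len(sent) else "</s>"
                cases.append(((prev_tag, word, next_word), tag))
        return cases

    def tag_word(cases, prev_tag, word, next_word):
        """Extrapolate a tag from the most similar stored cases (feature overlap)."""
        query = (prev_tag, word, next_word)
        def overlap(feat):
            return sum(a == b for a, b in zip(feat, query))
        best = max(overlap(feat) for feat, _ in cases)
        nearest = [tag for feat, tag in cases if overlap(feat) == best]
        return Counter(nearest).most_common(1)[0][0]

    # Toy usage: one tagged training sentence, then tag an unseen word in context.
    train = [[("the", "DET"), ("ship", "NOUN"), ("sails", "VERB")]]
    cases = make_cases(train)
    print(tag_word(cases, prev_tag="DET", word="boat", next_word="sails"))  # -> NOUN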
Decision Taking for Selling Thread Startup
Decision Taking is discussed in the context of the role it may play for a
selling agent in a search market, in particular for agents involved in the sale
of valuable and relatively unique items, such as a dwelling, a second-hand car,
or a second-hand recreational vessel.
Detailed connections are made between the architecture of decision-making
processes and a sample of software-technology-based concepts, including
instruction sequences, multi-threading, and thread algebra.
Ample attention is paid to the initialization or startup of a thread
dedicated to achieving a given objective, and to corresponding decision taking.
As an application, the selling of an item is taken as an objective to be
achieved by running a thread that was designed for that purpose.
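By way of illustration only (the class, the decision rule, and the figures below are ours, not the paper's thread-algebra treatment), the startup decision can be pictured as a guard that determines whether a thread dedicated to the selling objective is launched at all:

    import threading

    class SellingThread(threading.Thread):
        """Thread dedicated to a single objective: selling one item."""
        def __init__(self, item, asking_price):
            super().__init__(daemon=True)
            self.item = item
            self.asking_price = asking_price

        def run(self):
            # Placeholder for the instruction sequence that pursues the objective,
            # e.g. listing the item, handling offers, closing the sale.
            print(f"Selling {self.item} at asking price {self.asking_price}")

    def take_startup_decision(reservation_price, best_known_offer):
        """Decision taking: start the selling thread only if the market looks viable."""
        return best_known_offer >= reservation_price

    if take_startup_decision(reservation_price=10_000, best_known_offer=10_500):
        t = SellingThread("second-hand recreational vessel", asking_price=11_000)
        t.start()
        t.join()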
The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
The nascent field of fair machine learning aims to ensure that decisions
guided by algorithms are equitable. Over the last several years, three formal
definitions of fairness have gained prominence: (1) anti-classification,
meaning that protected attributes (like race, gender, and their proxies) are
not explicitly used to make decisions; (2) classification parity, meaning that
common measures of predictive performance (e.g., false positive and false
negative rates) are equal across groups defined by the protected attributes;
and (3) calibration, meaning that conditional on risk estimates, outcomes are
independent of protected attributes. Here we show that all three of these
fairness definitions suffer from significant statistical limitations. Requiring
anti-classification or classification parity can, perversely, harm the very
groups they were designed to protect; and calibration, though generally
desirable, provides little guarantee that decisions are equitable. In contrast
to these formal fairness criteria, we argue that it is often preferable to
treat similarly risky people similarly, based on the most statistically
accurate estimates of risk that one can produce. Such a strategy, while not
universally applicable, often aligns well with policy objectives; notably, this
strategy will typically violate both anti-classification and classification
parity. In practice, it requires significant effort to construct suitable risk
estimates. One must carefully define and measure the targets of prediction to
avoid retrenching biases in the data. But, importantly, one cannot generally
address these difficulties by requiring that algorithms satisfy popular
mathematical formalizations of fairness. By highlighting these challenges in
the foundation of fair machine learning, we hope to help researchers and
practitioners productively advance the area.
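The three criteria can be checked mechanically; the following small sketch on simulated data is ours (the data-generating process, the 0.5 decision threshold, and the risk bins are illustrative assumptions, not taken from the paper) and only shows what each check computes:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)                                # protected attribute (0/1)
    risk = np.clip(rng.normal(0.4 + 0.1 * group, 0.2, n), 0, 1)  # risk estimates
    outcome = rng.random(n) < risk                               # realized outcomes
    decision = risk >= 0.5                                       # threshold rule on risk only

    # Anti-classification: the threshold rule never reads `group` directly,
    # although `risk` may still act as a proxy for it.

    # Classification parity: compare false positive rates across groups.
    for g in (0, 1):
        mask = (group == g) & (~outcome)
        print(f"group {g}: false positive rate = {decision[mask].mean():.3f}")

    # Calibration: among people with similar risk estimates, outcome rates
    # should match the estimates regardless of group.
    bins = np.digitize(risk, [0.25, 0.5, 0.75])
    for g in (0, 1):
        for b in range(4):
            mask = (group == g) & (bins == b)
            if mask.any():
                print(f"group {g}, bin {b}: mean risk {risk[mask].mean():.2f}, "
                      f"outcome rate {outcome[mask].mean():.2f}")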
Cooperating intelligent systems
Some of the issues connected to the development of a bureaucratic system are discussed. Emphasis is on a layered multiagent approach to distributed artificial intelligence (DAI). The division of labor in a bureaucracy is considered. The bureaucratic model seems to be a fertile model for further examination since it allows for the growth and change of system components and system protocols and rules. The first part of implementing the system would be the construction of a frame-based reasoner and the appropriate B-agents and E-agents. The agents themselves should act as objects, and the E-objects in particular should have the capability of taking on a different role. No effort was made to address the problems of automated failure recovery, problem decomposition, or implementation. Instead, what has been achieved is a framework that can be developed in several distinct ways, and which provides a core set of metaphors and issues for further research.
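As a hypothetical sketch of the role-taking idea (the class names, roles, and behaviour below are invented; the abstract does not spell out what B-agents and E-agents do), the point that agents act as objects and that E-objects can take on a different role could look like this:

    class Agent:
        """Base agent: an object with a name and a current role."""
        def __init__(self, name, role):
            self.name = name
            self.role = role

        def act(self, task):
            return f"{self.name} ({self.role}) handles: {task}"

    class BAgent(Agent):
        """B-agent; its role stays fixed in this sketch (an assumption, not from the abstract)."""

    class EAgent(Agent):
        """E-agent; per the abstract, E-objects can take on a different role."""
        def take_on_role(self, new_role):
            self.role = new_role

    registrar = BAgent("registrar", role="record-keeping")
    clerk = EAgent("clerk", role="filing")
    print(registrar.act("log incoming request"))
    clerk.take_on_role("triage")   # an E-object changing roles as the system grows
    print(clerk.act("route request to a department"))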