Rule, Story, and Commitment in the Teaching of Legal Ethics
The ABA requires each approved law school to provide each student with instruction in the duties and responsibilities of the legal profession. First adopted in August 1973, in the midst of the Watergate disclosures, this requirement has never been interpreted and is infrequently invoked or enforced in the accreditation process. The professional responsibility requirement is the only substantive teaching requirement imposed by the ABA.
Should the ethics teaching requirement be scrapped? We consider that question in Part I. Although we ultimately conclude the rule should be maintained, we believe this fundamental question must be asked. Given the disdain many legal academicians have for legal ethics, we find it more than a little curious that no one has suggested abandoning the requirement. In this Article we ask the question that the skeptics have failed to ask. In the process we will examine the paradox they have created by failing to suggest the elimination of a requirement that they are so willing to scorn.
After concluding that the ABA and law schools should require ethics instruction, we turn in Part II to the questions of what is appropriate subject matter for ethics courses and when they should be taught. We emphasize the nature and importance of rule, story, and commitment throughout this Part, as we do throughout Part I. With Geoffrey Hazard we co-authored a casebook on legal ethics and the law governing lawyers. Thus our view on what should be taught will not surprise readers familiar with that book. In Part II we try to make explicit what is implicit in that other work: the reasons we designed our book as we did and the lessons we hoped to teach through the material we included. We also address the question of when students should learn what concerning legal ethics. We conclude that some first-year instruction is important, but that, after the first year, an additional ethics course is also necessary. The required courses should be supplemented with a well-designed and deliberate effort to teach ethics through the pervasive method in upper-level courses.
Finally, in Part III, we turn to the question of who should teach legal ethics, a neglected topic within which commitment and character loom large. Although the silence on whether ethics should remain a required course is somewhat unexpected, the silence on what kind of person should teach legal ethics is all too predictable and, at the same time, enormously problematic. The subject that dare not speak its name within the walls of the academy is the character of academics. We believe nonetheless that we must discuss the character of those who purport to teach ethics, indeed the character of those who purport to teach anything, and so we end by speaking of character and apologize in advance if we offend anyone by mentioning the unmentionable.
The Combinatorial World (of Auctions) According to GARP
Revealed preference techniques are used to test whether a data set is compatible with rational behaviour. They are also incorporated as constraints in mechanism design to encourage truthful behaviour in applications such as combinatorial auctions. In the auction setting, we present an efficient combinatorial algorithm to find a virtual valuation function with the optimal (additive) rationality guarantee. Moreover, we show that there exists such a valuation function that both is individually rational and is minimum (that is, it is component-wise dominated by any other individually rational, virtual valuation function that approximately fits the data). Similarly, given upper bound constraints on the valuation function, we show how to fit the maximum virtual valuation function with the optimal additive rationality guarantee. In practice, revealed preference bidding constraints are very demanding. We explain how approximate rationality can be used to create relaxed revealed preference constraints in an auction. We then show how combinatorial methods can be used to implement these relaxed constraints. Worst/best-case welfare guarantees that result from the use of such mechanisms can be quantified via the minimum/maximum virtual valuation function.
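The revealed-preference test underlying this line of work can be illustrated with a small sketch. The function below is our own illustrative implementation of the standard GARP consistency check (it is not the paper's combinatorial algorithm for fitting virtual valuations, and all names are ours): observation i is directly revealed preferred to j when the chosen bundle x_i costs at least as much as x_j at prices p_i, and GARP forbids any revealed-preference cycle containing a strict inequality.

```python
import numpy as np

def satisfies_garp(prices, bundles):
    """Check whether observed (price, bundle) data satisfy GARP.

    prices, bundles: n x m arrays; row i gives the price vector p_i
    and the chosen bundle x_i for observation i.
    """
    p = np.asarray(prices, dtype=float)
    x = np.asarray(bundles, dtype=float)
    n = len(p)
    cost = p @ x.T               # cost[i, j] = p_i . x_j
    own = np.diag(cost)          # p_i . x_i, expenditure at observation i
    R = own[:, None] >= cost     # weak direct revealed preference
    S = own[:, None] > cost      # strict direct revealed preference
    # Transitive closure of R via boolean Floyd-Warshall.
    T = R.copy()
    for k in range(n):
        T |= T[:, [k]] & T[[k], :]
    # GARP is violated if i is (transitively) revealed preferred to j
    # while j is strictly directly revealed preferred to i.
    return not np.any(T & S.T)
```

For example, two observations that each buy the cheaper of two goods are consistent, while two observations that each strictly "overpay" relative to the other's bundle form a violating cycle.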
A Recombination-Based Tabu Search Algorithm for the Winner Determination Problem
Abstract. We propose a dedicated tabu search algorithm (TSX_WDP) for the winner determination problem (WDP) in combinatorial auctions. TSX_WDP integrates two complementary neighborhoods designed respectively for intensification and diversification. To escape deep local optima, TSX_WDP employs a backbone-based recombination operator to generate new starting points for tabu search and to displace the search into unexplored promising regions. The recombination operator operates on elite solutions previously found, which are recorded in a global archive. The performance of our algorithm is assessed on a set of 500 well-known WDP benchmark instances. Comparisons with five state-of-the-art algorithms demonstrate the effectiveness of our approach.
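For readers unfamiliar with the setting, a minimal tabu-search skeleton for the WDP might look as follows. This is a toy sketch of our own (not TSX_WDP itself; it uses a single flip-one-bid neighborhood and omits the dual neighborhoods and backbone-based recombination described above): a solution accepts a set of pairwise item-disjoint bids, moves flip one bid in or out, and recently flipped bids are tabu unless the move beats the best value found so far.

```python
import random

def tabu_search_wdp(bids, iters=2000, tenure=7, seed=0):
    """Toy tabu search for the winner determination problem.

    bids: list of (item_set, value) pairs. A feasible solution accepts
    pairwise item-disjoint bids; we maximize the total accepted value.
    """
    rng = random.Random(seed)
    n = len(bids)
    accepted = set()
    tabu = {}  # bid index -> last iteration at which flipping it is tabu

    def feasible_with(i, sol):
        return all(bids[i][0].isdisjoint(bids[j][0]) for j in sol)

    def value(sol):
        return sum(bids[j][1] for j in sol)

    best, best_val = set(), 0.0
    for it in range(iters):
        candidates = []
        for i in range(n):
            if i in accepted:
                new = accepted - {i}           # drop an accepted bid
            elif feasible_with(i, accepted):
                new = accepted | {i}           # add a non-conflicting bid
            else:
                continue
            v = value(new)
            if tabu.get(i, -1) >= it and v <= best_val:
                continue                       # tabu and not aspirational
            candidates.append((v, i, new))
        if not candidates:
            break
        v, i, accepted = max(candidates, key=lambda t: (t[0], rng.random()))
        tabu[i] = it + tenure
        if v > best_val:
            best, best_val = set(accepted), v
    return best, best_val
```

On a small instance with conflicting bids over items {a, b, c}, the search quickly settles on the value-maximizing disjoint set.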
Addressing robustness in time-critical, distributed task allocation algorithms
The aim of this work is to produce and test a robustness module (ROB-M) that can be generally applied to distributed, multi-agent task allocation algorithms, as robust versions of these are scarce and not well-documented in the literature. ROB-M is developed using the Performance Impact (PI) algorithm, as this has previously shown good results in deterministic trials. Different candidate versions of the module are thus bolted on to the PI algorithm and tested using two different task allocation problems under simulated uncertain conditions, and results are compared with baseline PI. It is shown that the baseline does not handle uncertainty well; the task-allocation success rate tends to decrease linearly as the degree of uncertainty increases. However, when PI is run with one of the candidate robustness modules, the failure rate becomes very low for both problems, even under high simulated uncertainty, and so its architecture is adopted for ROB-M and also applied to MIT's baseline Consensus Based Bundle Algorithm (CBBA) to demonstrate its flexibility. Strong evidence is provided to show that ROB-M can work effectively with CBBA to improve performance under simulated uncertain conditions, as long as the deterministic versions of the problems can be solved with baseline CBBA. Furthermore, the use of ROB-M does not appear to increase mean task completion time in either algorithm, and only 100 Monte Carlo samples are required, compared to 10,000 in MIT's robust version of the CBBA algorithm. PI with ROB-M is also tested directly against MIT's robust algorithm and demonstrates clear superiority in terms of mean numbers of solved tasks.
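The Monte Carlo evaluation protocol described in this abstract can be sketched generically. The function below is our own illustrative stand-in (not ROB-M, PI, or CBBA; all names and the Gaussian noise model are assumptions): it estimates the success rate of a fixed per-agent task schedule by resampling noisy task durations and checking deadlines, the kind of sample-based robustness score a module like ROB-M would optimize against.

```python
import random

def mc_success_rate(schedule, deadlines, mean_durations, sigma,
                    samples=100, seed=0):
    """Monte Carlo estimate of schedule robustness under duration noise.

    schedule: one ordered list of task ids per agent.
    A sample succeeds if every task finishes by its deadline when each
    duration is drawn as max(0, Gaussian(mean, sigma)).
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(samples):
        ok = True
        for tasks in schedule:              # agents execute in parallel
            t = 0.0                         # this agent's running clock
            for task in tasks:
                t += max(0.0, rng.gauss(mean_durations[task], sigma))
                if t > deadlines[task]:
                    ok = False
        successes += ok
    return successes / samples
```

With generous deadlines the estimated rate is 1.0; tightening a deadline below the mean duration drives it toward zero, mirroring the degradation the abstract reports for the non-robust baseline.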
