
    Review of Howard DeLong (1991), "A refutation of Arrow’s theorem", with a reaction, also on its relevance in 2008 for the European Union

    There will be many researchers who discover voting theory afresh and who will want to understand it and its interesting paradoxes. Arrow's theorem (1951, 1963) is the most celebrated result in social choice theory. It has drawn much criticism, but Howard DeLong (1991), "A refutation of Arrow's theorem", is a monograph that actually succeeds. The booklet has received insufficient attention in the literature. This review also compares DeLong's approach with my own book "Voting theory for democracy" (2007) and comments on the relevance in 2008 for the European Union, with respect to the veto power of its Member States and their citizens.
    Keywords: voting theory; voting systems; elections; public choice; political economy; Borda Fixed Point; democracy; European Union; Arrow's theorem
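    As background for readers meeting voting theory afresh, the plain Borda count (the rule underlying the "Borda Fixed Point" keyword, not DeLong's argument itself) can be sketched in a few lines; the ballots below are invented for illustration:

```python
# Borda count: with n candidates, a candidate earns (n - 1) points for a
# first place on a ballot, (n - 2) for second, and so on; the candidate
# with the highest total wins.

def borda_winner(ballots):
    """ballots: list of rankings, each a list of candidates, best first."""
    candidates = ballots[0]
    n = len(candidates)
    scores = {c: 0 for c in candidates}
    for ranking in ballots:
        for position, candidate in enumerate(ranking):
            scores[candidate] += n - 1 - position
    return max(scores, key=scores.get), scores

ballots = [
    ["A", "B", "C"],
    ["A", "C", "B"],
    ["B", "C", "A"],
]
winner, scores = borda_winner(ballots)
# A: 2+2+0 = 4, B: 1+0+2 = 3, C: 0+1+1 = 2, so A wins
```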

    A Framework for Exploiting Emergent Behaviour to capture 'Best Practice' within a Programming Domain

    Inspection is a formalised process for reviewing an artefact in software engineering. It is proven to significantly reduce defects, to ensure that what is delivered is what is required, and that the finished product is effective and robust. Peer code review is a less formal inspection of code, normally classified as an inadequate or substandard form of Inspection. Although it has an increased risk of not locating defects, it has been shown to improve the knowledge and programming skills of its participants. This thesis examines the process of peer code review, comparing it to Inspection, and attempts to describe how an informal code review can improve the knowledge and skills of its participants by deploying an agent-oriented approach. During a review the participants discuss defects, recommendations and solutions, or more generally their own experience. It is this instant adaptability to new information that gives the review process the ability to improve knowledge. This observed behaviour can be described as the emergent behaviour of the group of programmers during the review. The wider distribution of knowledge is currently only achieved by programmers attending other reviews. To maximise the benefits of peer code review, a mechanism is needed by which the findings from one team can be captured and propagated to other reviews and teams throughout an establishment. A prototype multi-agent system is developed with the aim of capturing the emergent properties of a team of programmers. As the interactions between the team members are unstructured and the information traded is dynamic, a distributed adaptive system is required to provide communication channels for the team and to provide a foundation for the knowledge shared. Software agents are capable of adaptivity and learning. Multi-agent systems are particularly effective at being deployed within distributed architectures and are believed to be able to capture emergent behaviour.
The prototype system illustrates that the learning mechanism within the software agents provides a solid foundation upon which the ability to detect defects can be learnt. It also demonstrates that the multi-agent approach is apposite for providing free-flowing communication of ideas between programmers, not only to achieve the sharing of defects and solutions but also at a high enough level to capture social information. It is assumed that this social information is a measure of one element of the review process's emergent behaviour. The system is capable of monitoring the team-perceived abilities of programmers, those who are influential on the programming style of others, and the issues upon which programmers agree or disagree. If the disagreements are classified as unimportant or stylistic issues, can it not therefore be assumed that all agreements are concepts of "Best Practice"? The conclusion is reached that code review is not a substandard Inspection but is in fact complementary to the Inspection model, as the latter improves the process of locating and identifying bugs while the former improves the knowledge and skill of the programmers, and therefore the chance of bugs not being encoded to start with. The prototype system demonstrates that it is possible to capture best practice from a review team and that agents are well suited to the task. The performance criteria of such a system have also been captured. The prototype system has also shown that a reliable level of learning can be attained for a real-world task. The innovative way of concurrently deploying multiple agents which use different approaches to achieve the same goal shows remarkable robustness when learning from small example sets.
The novel way in which autonomy is promoted within the agents' design but constrained within the agent community allows the system to provide a sufficiently flexible communications structure to capture emergent social behaviour, whilst ensuring that the agents remain committed to their own goals.
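    The propagation mechanism described above, capturing findings from one team's review and sharing them with the others, can be caricatured in a few lines. This is only an illustrative sketch under assumed names (`ReviewAgent`, `Coordinator`), not the thesis's prototype:

```python
# Each ReviewAgent accumulates defect findings raised during its own
# team's reviews; a Coordinator merges all findings and pushes the merged
# set back to every agent, so knowledge crosses team boundaries.

class ReviewAgent:
    def __init__(self, team):
        self.team = team
        self.findings = {}  # defect pattern -> recommended fix

    def record(self, pattern, recommendation):
        """Capture a finding raised during this team's review."""
        self.findings[pattern] = recommendation

class Coordinator:
    def __init__(self, agents):
        self.agents = agents

    def propagate(self):
        """Share every team's findings with all other teams."""
        merged = {}
        for agent in self.agents:
            merged.update(agent.findings)
        for agent in self.agents:
            agent.findings.update(merged)

team_a, team_b = ReviewAgent("A"), ReviewAgent("B")
team_a.record("bare except clause", "catch specific exception types")
team_b.record("mutable default argument", "default to None, assign inside")
Coordinator([team_a, team_b]).propagate()
# after propagation, both agents hold both findings
```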

    CLiFF Notes: Research in the Language, Information and Computation Laboratory of the University of Pennsylvania

    One concern of the Computer Graphics Research Lab is in simulating human task behavior and understanding why the visualization of the appearance, capabilities and performance of humans is so challenging. Our research has produced a system, called Jack, for the definition, manipulation, animation and human factors analysis of simulated human figures. Jack permits the envisionment of human motion by interactive specification and simultaneous execution of multiple constraints, and is sensitive to such issues as body shape and size, linkage, and plausible motions. Enhanced control is provided by natural behaviors such as looking, reaching, balancing, lifting, stepping, walking, grasping, and so on. Although intended for highly interactive applications, Jack is a foundation for other research. The very ubiquity of other people in our lives poses a tantalizing challenge to the computational modeler: people are at once the most common object around us, and yet the most structurally complex. Their everyday movements are amazingly fluid, yet demanding to reproduce, with actions driven not just mechanically by muscles and bones but also cognitively by beliefs and intentions. Our motor systems manage to learn how to make us move without leaving us the burden or pleasure of knowing how we did it. Likewise we learn how to describe the actions and behaviors of others without consciously struggling with the processes of perception, recognition, and language. Present technology lets us approach human appearance and motion through computer graphics modeling and three-dimensional animation, but there is considerable distance to go before purely synthesized figures trick our senses. We seek to build computational models of human-like figures that manifest animacy and convincing behavior.
Towards this end, we: Create an interactive computer graphics human model; Endow it with reasonable biomechanical properties; Provide it with human-like behaviors; Use this simulated figure as an agent to effect changes in its world; Describe and guide its tasks through natural language instructions. There are presently no perfect solutions to any of these problems; ultimately, however, we should be able to give our surrogate human directions that, in conjunction with suitable symbolic reasoning processes, make it appear to behave in a natural, appropriate, and intelligent fashion. Compromises will be essential, due to limits in computation, throughput of display hardware, and demands of real-time interaction, but our algorithms aim to balance the physical device constraints with carefully crafted models, general solutions, and thoughtful organization. The Jack software is built on Silicon Graphics Iris 4D workstations because those systems have 3-D graphics features that greatly aid the process of interacting with highly articulated figures such as the human body. Of course, graphics capabilities themselves do not make a usable system. Our research has therefore focused on software to make the manipulation of a simulated human figure easy for a rather specific user population: human factors design engineers or ergonomics analysts involved in visualizing and assessing human motor performance, fit, reach, view, and other physical tasks in a workplace environment. The software also happens to be quite usable by others, including graduate students and animators. The point, however, is that program design has tried to take into account a wide variety of physical, problem-oriented tasks, rather than just offer a computer graphics and animation tool for the already computer sophisticated or skilled animator. As an alternative to interactive specification, a simulation system allows a convenient temporal and spatial parallel programming language for behaviors.
The Graphics Lab is working with the Natural Language Group to explore the possibility of using natural language instructions, such as those found in assembly or maintenance manuals, to drive the behavior of our animated human agents. (See the CLiFF note entry for the AnimNL group for details.) Even though Jack is under continual development, it has nonetheless already proved to be a substantial computational tool in analyzing human abilities in physical workplaces. It is being applied to actual problems involving space vehicle inhabitants, helicopter pilots, maintenance technicians, foot soldiers, and tractor drivers. This broad range of applications is precisely the target we intended to reach. The general capabilities embedded in Jack attempt to mirror certain aspects of human performance, rather than the specific requirements of the corresponding workplace. We view the Jack system as the basis of a virtual animated agent that can carry out tasks and instructions in a simulated 3D environment. While we have not yet fooled anyone into believing that the Jack figure is real, its behaviors are becoming more reasonable and its repertoire of actions more extensive. When interactive control becomes more labor intensive than natural language instructional control, we will have reached a significant milestone toward an intelligent agent.

    Assessment at the centre of strategies of [accountant] learning in groups, substantiated with qualitative reflections in student assessments

    Having students learn and be assessed in groups is a means to develop among students intellectual and interactive skills/competencies described as generic or "wicked", as well as to produce deeper learning of various types of knowledge (e.g. organicistic, contextualistic, formistic, mechanistic). This paper reports assessments constituting and reflecting strategies of learning in groups. The assessments and the strategies were crafted while working with students on four courses presented annually in recent years and covering accounting, management and finance for public services and private activities in various organisations. Data about group experiences and their implications for working as accountants were collected from students during assessments and are used to elaborate the strategies. The paper provides insights into reducing impediments among students and teachers to shifting learning from teacher-centred to learner-centred, and suggests areas for further research in reducing institutional impediments.
    Keywords: student engagement; generic skills/competencies; group assessment; group learning

    Evolution, Politics and Law


    Developing students’ strategies for problem solving in mathematics: the role of pre-designed “sample student work”

    This paper describes a design strategy that is intended to foster self- and peer assessment and develop students' ability to compare alternative problem-solving strategies in mathematics lessons. This involves giving students, after they themselves have tackled a problem, simulated "sample student work" to discuss and critique. We describe the potential uses of this strategy and the issues that have arisen during trials in both US and UK classrooms. We consider how this approach has the potential to develop metacognitive acts in which students reflect on their own decisions and planning actions during mathematical problem solving.

    Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty

    Limited work exists for the comparison across distinct knowledge-based approaches in Artificial Intelligence (AI) for non-monotonic reasoning, and in particular for the examination of their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows the retraction of a conclusion in the light of new information. It is a similar pattern to human reasoning, which draws conclusions in the absence of information, but allows them to be corrected once new pieces of evidence arise. Thus, this thesis focuses on a comparison of three approaches in AI for implementation of non-monotonic reasoning models of inference, namely: expert systems, fuzzy reasoning and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality occurrence modelling with biomarkers. The link between these applications comes from their presumptively non-monotonic nature. They present incomplete, ambiguous and retractable pieces of evidence. Hence, reasoning applied to them is likely suitable for being modelled by non-monotonic reasoning systems. An experiment was performed by exploiting six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and were subsequently elicited with real-world data. The numerical inferences produced by these models were analysed according to common metrics of evaluation for each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility, and post-hoc interpretability were meticulously described and qualitatively compared. Findings suggest that the variance of the inferences produced by expert systems and fuzzy reasoning models was higher, highlighting poor stability. 
In contrast, the variance of argument-based models was lower, showing superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution, while presenting robust inferences. An in-depth discussion of the explanatory capacity showed how defeasible argumentation can lead to the construction of non-monotonic models with appealing properties of explainability, compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design. In addition, it exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications. It contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.
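    The defeasible reasoning pattern evaluated here, a default conclusion that is retracted when defeating evidence arrives, can be sketched minimally. This uses the classic birds-fly example, not one of the thesis's six knowledge bases:

```python
# Non-monotonic inference in miniature: a default rule ("birds fly")
# yields a tentative conclusion that a more specific defeater
# ("penguins do not fly") retracts when new evidence is added.

def infer_flies(facts):
    """Return True/False for a defeasible conclusion, None if undecided."""
    if "penguin" in facts:   # specific defeater beats the default rule
        return False
    if "bird" in facts:      # default rule applies in absence of defeaters
        return True
    return None              # no rule fires; withhold judgement

facts = {"bird"}
first = infer_flies(facts)    # tentative conclusion: it flies
facts.add("penguin")          # new evidence arrives
second = infer_flies(facts)   # conclusion is retracted
```

    The retraction step is exactly what distinguishes this from classical (monotonic) logic, where adding a fact can never remove a conclusion.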

    Evaluating Computer Vision Methods for Detection and Pose Estimation of Textureless Objects

    Master's thesis in Automation and Signal Processing.
    Robotics, AI and automation: search for these words and two things become apparent. An era of automation is upon us, but even so there are still some simple tasks that grind it to a halt, e.g. picking and placing objects. These simple tasks require coordination from a robot, and object detection from a computer vision system. That is not to say that robots are incapable of picking up objects, as the simple and organised cases were solved some time ago. The problems occur in cases where there is no order, in other words chaos. In these cases it is beneficial to detect and find the pose of the object, so that it can be picked up and packed while having full control over the position the object was placed in. This thesis is written at the behest of Pickr.ai, a company looking to automate picking and packing for retail businesses. The objective of this thesis is to evaluate available pose estimation methods, and if possible single out one that is best suited for the retail environment. Current state-of-the-art methods that are capable of estimating the pose of objects utilise convolutional neural networks for both detection and estimation. The leading methods can achieve accuracy in the high 90% range on pretrained objects. The issue with retail is that the volume of available wares may be so large that training on each item is prohibitive. Therefore the testing done has mostly been aimed at each method's generalisability: whether it can detect objects without prior training specific to the object. A few different methods with varying solutions were examined, from simpler pure object detectors to two-stage 6D pose estimators. Unfortunately none of the methods can be deemed appropriate for the task as it currently stands. The methods do not recognise new objects, and limited training does not improve the scores significantly.
However, by applying the approaches that are incorporated in the other methods, it may be possible to develop an appropriate new pose estimator capable of handling a retail environment.
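    A common accuracy score in this literature, the ADD metric (average distance between model points transformed by the ground-truth and estimated 6D poses), can be sketched as follows; the model points and poses below are invented for illustration:

```python
import math

# ADD metric: transform the object's model points by the ground-truth pose
# and by the estimated pose, then average the Euclidean distances between
# corresponding points. Lower is better; a typical pass threshold is a
# fraction of the object's diameter.

def apply_pose(R, t, p):
    """Apply a 3x3 rotation matrix R and translation t to point p."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

def add_metric(model_points, pose_gt, pose_est):
    """Mean distance between model points under the two poses."""
    R_gt, t_gt = pose_gt
    R_est, t_est = pose_est
    total = 0.0
    for p in model_points:
        total += math.dist(apply_pose(R_gt, t_gt, p),
                           apply_pose(R_est, t_est, p))
    return total / len(model_points)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
points = [[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]]  # metres
gt = (identity, [0.0, 0.0, 0.0])
est = (identity, [0.003, 0.004, 0.0])  # pure translation error
error = add_metric(points, gt, est)    # 5 mm for every point
```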

    Evolution And Ethics

    Does evolution inform the ancient debate about the roles that instinct (emotion/passion/sentiment/feeling) and reason do and/or should play in how we decide what to do? Evolutionary ethicists typically adopt Darwinism as a suitable explanation for evolution, and on that basis draw conclusions about moral epistemology. However, if Darwinism is to be offered as a premise from which conclusions about moral epistemology are drawn, in order to assess such arguments we must assess that premise. This reveals the highly speculative and metaphysical quality of our theoretical explanations for how evolution happens. Clarifying that helps to facilitate an assessment of the epistemological claims of evolutionary ethicists. There are four general ways that instinct and reason can function in moral deliberation: descriptive instinctivism asserts that moral deliberation is necessarily a matter of instincts because control of the instincts by our faculty of reason is regarded (descriptively) as impossible; descriptive rationalism asserts that moral deliberation is necessarily a matter of reasoning, which (descriptively) must control instinct; prescriptive instinctivism asserts that moral deliberation can involve both rationality and instinct but prescribes following our instincts; prescriptive rationalism also asserts that deliberation can be either instinctive or rational but prescribes following reason. Michael Ruse (2012), Peter Singer (2011), and Philip Kitcher (2011) each adopt Darwinism and on that basis arrive at descriptive instinctivism, descriptive rationalism, and prescriptive instinctivism, respectively. Our current level of understanding about evolution implies that prescriptive rationalism is a more practical approach to ethical deliberation than the other three alternatives described. Evolution can inform moral epistemology, but only very generally by helping to inform us of what we can justifiably believe about ourselves and nature.

    Measuring the Scale Outcomes of Curriculum Materials
