Asking the Right Questions
At this Symposium, we have heard about forms of law practice that raise large questions about the lawyer's role. My sole theme in the present essay is that we often ask the wrong large questions. Too often, the questions about multidisciplinary practice (MDP), mediation and arbitration, and in-house lawyering are whether they are good for lawyers and good for clients. These are questions, I will suggest, that the market itself will decide. The right question is not whether new roles with no rules are good for lawyers and clients, but rather whether they are good for the rest of us, "us" being the citizenry who count on lawyers to be guardians of the law, and whom market forces will not necessarily protect.
All three of the new roles raise the interesting prospect of the lawyer's traditional role dissolving into a different one as role boundaries blur and thin. In MDP, the prospect is that lawyers become indistinguishable from accountants, investment bankers, financial advisors, or business consultants. For in-house lawyers, the prospect is that lawyers become indistinguishable from corporate executives, or, more broadly, from clients. And for third-party neutrals, the prospect is that lawyers become very much like judges.
I will not be discussing all three roles in this paper. My principal focus is on multidisciplinary practice. The role of in-house counsel is a secondary focus, and I shall not address the role of third-party neutral at all.
Learning by Asking Questions
We introduce an interactive learning framework for the development and
testing of intelligent visual systems, called learning-by-asking (LBA). We
explore LBA in context of the Visual Question Answering (VQA) task. LBA differs
from standard VQA training in that most questions are not observed during
training time, and the learner must ask questions it wants answers to. Thus,
LBA more closely mimics natural learning and has the potential to be more
data-efficient than the traditional VQA setting. We present a model that
performs LBA on the CLEVR dataset, and show that it automatically discovers an
easy-to-hard curriculum when learning interactively from an oracle. Our LBA
generated data consistently matches or outperforms the CLEVR train data and is
more sample-efficient. We also show that our model asks questions that
generalize to state-of-the-art VQA models and to novel test-time distributions.
Asking questions
Household surveys are one of the primary methodological tools employed in global health research. In this paper, I try to gain insight into the production of global health knowledge by elaborating upon the process of data collection for such surveys. I do so by narrating a story of an impact evaluation in northern India, drawing attention to how data collectors, called ‘enumerators’, follow or disregard different aspects of the research protocol while conducting survey interviews. I pay close attention to how enumerators translate and ask questions, and how the ethical challenges they face affect their interactions with respondents. I use this analysis to draw parallels between the work of enumerators and global health researchers. I argue that researchers also acknowledge or ‘unknow’ different aspects of research practice in order to produce scientific evidence and claim expertise
Asking Questions that Matter … Asking Questions of Value
Excerpt: When I first became involved formally in the scholarship of teaching and learning, it was the result of frustration and surprise tempered by high expectations and hope. I was teaching in a school of liberal studies that used program portfolios as an intellectual organizing feature and culminating assessment (self and otherwise). Students were to use this portfolio (physical, not online) to collect and reflect on work they accomplished during their time in the program. But in teaching the senior synthesis course, wherein students were to "go meta" with the portfolio and reflect on their entire undergraduate experience, I learned that virtually all of them treated the portfolio not as…
Asking gender questions
We report on a survey of astronomers asking questions at the most recent
National Astronomy Meeting (NAM2014). The gender balance of both speakers and
session chairs at NAM (31% and 29% women respectively) closely matched that of
attendees (28% female). However, we find that women were under-represented
among question askers (just 18% female). Women were especially
under-represented in asking the first question (only 14% of first questions
asked by women), but when the Q&A session reached four or more questions, women
and men were observed to ask roughly equal numbers of questions. We found a
small, but statistically insignificant, increase in the fraction of questions
from women in sessions where the chair was also female. On average,
questions were asked per talk, with no detectable difference in the number of
questions asked of female and male speakers, though on average female chairs
solicited slightly fewer questions than male chairs. We compare these results
to a similar study by Davenport et al. (2014) for the AAS, who also found
under-representation of women amongst question askers, but saw more pronounced
gender effects when the session chair or speaker was female. We conclude with
suggestions for improving the balance of questions at future astronomy
meetings.

Comment: 15 pages, 7 figures, comments welcome