15 research outputs found

    Use of Electronic Science Journals in the Undergraduate Curriculum: An Observational Study

    Get PDF
    Phase 2 of a two-phase project funded by the NSF National Science Digital Library Project observed undergraduate and graduate engineering, chemistry, and physics students and faculty while they searched the ScienceDirect e-journals system for scholarly science journal articles for simulated class-related assignments. A think-aloud protocol was used to capture affective and cognitive state information, while online monitoring provided an automatic log of interactions with the system. Pre- and post-search questionnaires and a learning-style test provided additional data. Preliminary analysis shows differences in search patterns among undergraduates, graduates, and faculty. All groups used basic search functions the most. Graduate students on average spent more time per session and viewed more pages. Further analysis, including analysis of affective and cognitive reactions, is continuing.

    Facilitating Access to Digital Records of Practice in Education with Technology

    Full text link
    Many disciplines are changing their traditional approaches to data, encouraging data producers to share data and enabling researchers and practitioners to reuse data to answer new research questions and address educational needs. In response, data repositories have emerged, and the availability of data has increased. Repositories build infrastructure to facilitate data access and provide software tools for reuse. This paper analyzes the reuse of digital records of practice (DROP) in education through the lens of one software tool, Zaption, focusing on DROP reuse by teachers, teacher educators, and individuals involved in professional development activities. Using analytics data from one repository’s Zaption integration from 2012-2016, we found that producers and reusers of DROP preferred an array of rich communication tools over tools that added technical functionalities. The results both contribute to our knowledge of DROP reusers and inform repositories about software choices to facilitate reuse.
    Institute of Museum and Library Services (LG-06-14-0122-14). Peer Reviewed. https://deepblue.lib.umich.edu/bitstream/2027.42/147456/1/ELearn_2018_proceedings_FINAL_Deepblue.pd

    Language use as an institutional practice: An investigation into the genre of workplace emails in an educational institution

    Get PDF
    Past studies that examined the genre of email regarded genre as a model by focusing on content and form alone. This study, however, examined the genre as a resource by analyzing the knowledge production and knowledge dissemination that make the genre possible in its socio-rhetorical context. In line with critical genre analysis, the study examined the text-internal elements and the influences of text-external elements on language use in email communication at a private higher educational institution in Kuala Lumpur. Using 378 emails, participant observation and interviews, this study analyzed the genre from the ethnographic, textual, socio-cognitive and socio-critical perspectives. To conduct the analysis, a novel integrative methodology that combined approaches to text, context and genre analysis was applied. The study revealed that the emails could be categorized into four types of genres that varied in their communicative purposes, intentions, goals of communication, register and generic structures. The discussion email genre, which was used to negotiate issues, mainly featured involved production and overt expression of argumentation. The enquiry email genre, which was used as a request-respond strategy, included narrative and non-narrative discourse, while the delivery email genre, which was used to provide files, mainly featured informational production and non-narrative discourse. The informing email genre, which was used to notify recipients about issues of general interest, mainly featured an abstract style and informational production. The study also revealed that the institutional practices and disciplinary conventions of the discourse community influenced language use in the emails; this was reflected in the strategies, mechanisms and linguistic choices made in the four types of genres. The study contributes to the socio-rhetorical perspective and to critical genre analysis based on conventionalized practices and procedures in the community of practice of academic management. The integrative approach is also highlighted as an analytical method for examining language use in email communication.

    Investigation of Dual-Flow Deep Learning Models LSTM-FCN and GRU-FCN Efficiency against Single-Flow CNN Models for the Host-Based Intrusion and Malware Detection Task on Univariate Time Series Data

    Get PDF
    Intrusion and malware detection tasks at the host level are a critical part of the overall information security infrastructure of a modern enterprise. While classical host-based intrusion detection systems (HIDS) and antivirus (AV) approaches are based on change monitoring of critical files and on malware signatures, respectively, some recent research using relatively vanilla deep learning (DL) methods has demonstrated promising anomaly-based detection results that already have practical applicability due to a low false positive rate (FPR). More complex DL methods typically provide better results in natural language processing and image recognition tasks. In this paper, we analyze the applicability of more complex dual-flow DL methods, such as the long short-term memory fully convolutional network (LSTM-FCN), the gated recurrent unit (GRU)-FCN, and several others, to the task specified on the attack-caused Windows OS system calls traces dataset (AWSCTD), and we compare them with vanilla single-flow convolutional neural network (CNN) models. The results obtained do not demonstrate any advantage of dual-flow models when processing univariate time series data; they introduce an unnecessary level of complexity and increase training and anomaly detection times, which are crucial in the intrusion containment process. On the other hand, the newly tested AWSCTD-CNN-static (S) single-flow model demonstrated three times better training and testing times while preserving high detection accuracy.
    This article belongs to the Special Issue Machine Learning for Cybersecurity Threats, Challenges, and Opportunities.
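    For readers unfamiliar with the two architecture families compared above, the sketch below contrasts a dual-flow LSTM-FCN with a single-flow CNN for univariate time-series classification. It is a minimal illustration assuming TensorFlow/Keras; the layer counts, filter sizes, and function names are illustrative only and are not the AWSCTD models evaluated in the paper.

```python
# Minimal sketch (not the paper's exact models) of a dual-flow LSTM-FCN
# versus a single-flow CNN for univariate time-series classification.
import tensorflow as tf
from tensorflow.keras import layers, Model

def lstm_fcn(seq_len: int, n_classes: int) -> Model:
    inp = layers.Input(shape=(seq_len, 1))
    # Recurrent flow: an LSTM reads the raw sequence.
    rnn = layers.LSTM(64)(inp)
    # Convolutional flow: stacked 1-D convolutions with global pooling.
    cnn = layers.Conv1D(128, 8, padding="same", activation="relu")(inp)
    cnn = layers.Conv1D(256, 5, padding="same", activation="relu")(cnn)
    cnn = layers.GlobalAveragePooling1D()(cnn)
    # The two flows are concatenated before the classifier head.
    merged = layers.concatenate([rnn, cnn])
    out = layers.Dense(n_classes, activation="softmax")(merged)
    return Model(inp, out)

def single_flow_cnn(seq_len: int, n_classes: int) -> Model:
    inp = layers.Input(shape=(seq_len, 1))
    x = layers.Conv1D(64, 5, padding="same", activation="relu")(inp)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(128, 5, padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling1D()(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return Model(inp, out)
```

    The extra complexity the abstract refers to comes from the dual-flow design: both branches process the same input and their features are merged, roughly doubling the components that must be trained and executed compared with the single-flow CNN.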

    Accountable Algorithms

    Get PDF
    Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police scrutiny, select taxpayers for IRS audit, grant or deny immigration visas, and more. The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed to oversee human decisionmakers and often fail when applied to computers instead. For example, how do you judge the intent of a piece of software? Because automated decision systems can return potentially incorrect, unjustified, or unfair results, additional approaches are needed to make such systems accountable and governable. This Article reveals a new technological toolkit to verify that automated decisions comply with key standards of legal fairness. We challenge the dominant position in the legal literature that transparency will solve these problems. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the issues analyzing code) to demonstrate the fairness of a process. Furthermore, transparency may be undesirable, such as when it discloses private information or permits tax cheats or terrorists to game the systems determining audits or security screening. The central issue is how to assure the interests of citizens, and society as a whole, in making these processes more accountable. This Article argues that technology is creating new opportunities—subtler and more flexible than total transparency—to design decisionmaking algorithms so that they better align with legal and policy objectives. Doing so will improve not only the current governance of automated decisions, but also—in certain cases—the governance of decisionmaking in general. The implicit (or explicit) biases of human decisionmakers can be difficult to find and root out, but we can peer into the “brain” of an algorithm: computational processes and purpose specifications can be declared prior to use and verified afterward. The technological tools introduced in this Article apply widely. They can be used in designing decisionmaking processes from both the private and public sectors, and they can be tailored to verify different characteristics as desired by decisionmakers, regulators, or the public. By forcing a more careful consideration of the effects of decision rules, they also engender policy discussions and closer looks at legal standards. As such, these tools have far-reaching implications throughout law and society. Part I of this Article provides an accessible and concise introduction to foundational computer science techniques that can be used to verify and demonstrate compliance with key standards of legal fairness for automated decisions without revealing key attributes of the decisions or the processes by which the decisions were reached. Part II then describes how these techniques can assure that decisions are made with the key governance attribute of procedural regularity, meaning that decisions are made under an announced set of rules consistently applied in each case. We demonstrate how this approach could be used to redesign and resolve issues with the State Department’s diversity visa lottery. 
    In Part III, we go further and explore how other computational techniques can assure that automated decisions preserve fidelity to substantive legal and policy choices. We show how these tools may be used to assure that certain kinds of unjust discrimination are avoided and that automated decision processes behave in ways that comport with the social or legal standards that govern the decision. We also show how automated decisionmaking may even complicate existing doctrines of disparate treatment and disparate impact, and we discuss some recent computer science work on detecting and removing discrimination in algorithms, especially in the context of big data and machine learning. And lastly, in Part IV, we propose an agenda to further synergistic collaboration between computer science, law, and policy to advance the design of automated decision processes for accountability.
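    The abstract notes that "computational processes and purpose specifications can be declared prior to use and verified afterward." One simple building block consistent with that idea, offered here only as a hypothetical illustration rather than as the Article's own protocol, is a cryptographic commitment: the decision-maker publishes a hash of its decision rule and of any randomness it will use before any decisions are made, and later reveals both so that outsiders can check that the announced rule was actually applied (for example, in a lottery like the diversity visa lottery discussed in Part II). The sketch below assumes only Python's standard library; all names are illustrative.

```python
# Minimal commit-and-reveal sketch: commit to a decision rule and a random seed
# before use, then let anyone verify the revealed values afterward.
# Hypothetical illustration only, not the specific techniques of the Article.
import hashlib
import os

def commit(rule_source: str, seed: bytes) -> str:
    """Digest to publish before any decisions are made."""
    return hashlib.sha256(rule_source.encode() + seed).hexdigest()

def verify(rule_source: str, seed: bytes, published_digest: str) -> bool:
    """After the decisions, check that the revealed rule and seed match the commitment."""
    return commit(rule_source, seed) == published_digest

if __name__ == "__main__":
    rule = "select applicants whose score >= 70"   # the announced decision rule
    seed = os.urandom(16)                           # randomness fixed in advance
    digest = commit(rule, seed)                     # published before use
    # ... decisions are made under the committed rule ...
    assert verify(rule, seed, digest)               # checked afterward by anyone
```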

    How Lawyers Search When No-One Is Looking: A Transaction Log Analysis to Evaluate the Educational Needs of the Legal Profession

    No full text
    Lawyers are increasingly responsible for conducting research using legal databases and are looking to law librarians for training. As there is little information regarding law practitioner training, and even less that describes the actual search behaviour of the legal profession, much of this training has had to be based upon the experience and best guesses of individual librarians. This study was undertaken to investigate the actual search behaviour of practitioners using the Auckland District Law Society Library. Its purpose is to provide the training personnel in that library with information about the search habits of their potential trainees in order to improve current training initiatives. It is based on data from transaction logs gathered from the public terminals in the Auckland District Law Society Library, which are used by practitioners. An analysis of the logs revealed that: (1) the case summary databases, LINX and BRIEFCASE, were the databases most commonly used by practitioners; (2) the most common type of search conducted during the study was for commentary or case law on a particular subject; (3) the majority of search sessions comprised only a single query, although some practitioner sessions involved more than 10 queries; and (4) there was limited use of any of the advanced search features offered on FolioVIEWS. Based upon these findings, the following recommendations were made in relation to the existing training programme offered by the Library: (1) all training sessions should include information regarding database concepts; (2) the library should initiate additional lunch-time training sessions to inform practitioners of the databases currently available in the library and their content; (3) the library should continue to teach advanced search techniques, particularly search construction and the use of synonyms and truncation, to help increase the levels of recall and therefore search success in practitioner searches; and (4) the library should continue to include information on Field and Phrase searching in both the beginners' and advanced courses. Although the purpose of the study was not to investigate the level of search 'failure' or 'success' attained by practitioners, this paper discusses the different techniques that could be used to measure search effectiveness. It is argued that recall would be the most appropriate measure of search success and that, based upon a visual examination of the transaction logs, adequate recall is not being achieved in the majority of cases. Given this alarming observation, it is argued that more attention should be paid to issues surrounding database and interface design and that the library should become involved in a general education programme to help users recognise situations in which end-user searches may be inappropriate.
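    Since recall is proposed above as the measure of search success, the short sketch below shows what it quantifies: the proportion of all relevant documents in the collection that a search actually retrieved. The function and the sample sets are purely illustrative and are not drawn from the study's transaction logs.

```python
# Hedged sketch of the recall measure discussed above: the fraction of the
# relevant documents in the database that a practitioner's search retrieved.
def recall(retrieved: set, relevant: set) -> float:
    if not relevant:
        return 1.0  # nothing to find, so nothing was missed
    return len(retrieved & relevant) / len(relevant)

# Example: a search returning 3 of 10 relevant cases has recall 0.3.
print(recall(retrieved={"case_A", "case_B", "case_C"},
             relevant={f"case_{c}" for c in "ABCDEFGHIJ"}))
```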

    Mitattava viestintä [Measurable communication]

    Get PDF
    Some of the articles are in English. Contents: Preface -- Why is communication measured? / Juholin & Luoma-aho -- Setting smart objectives / Anne Gregory -- Best practice measurement: listening, learning, adapting / Jim Macnamara -- Measurement tools for exploring online engagement / Rebecca Dolan & Jodie Conduit -- Analytics and measurement of organizations' internal social media / Anu Sivunen -- Building a performance measurement system for digital marketing communications / Joel Järvinen -- Measuring communication with big data / Matti Nelimarkka & Reijo Sund -- Measuring reputation in the age of somedialization / Pekka Aula & Jouni Heinonen -- How to communicate and measure customer orientation? / Hannu Saarijärvi -- Integrated reporting from the perspective of measuring and assessing corporate responsibility / Hannele Mäkelä & Johanna Kujala -- Communication ROI: possibility or impossibility? / Petriikka Ohtonen & Elina Ollila. You can order the ProComma Academic 2017 book using this form: https://procom.fi/procomma-academicin-teemana-viestinnan-mittaaminen-tilaa-omasi