Designing Academic Writing Analytics for Civil Law Student Self-Assessment

  • Open access
  • Published: 03 November 2016
  • Volume 28, pages 1–28 (2018)


  • Simon Knight (ORCID: orcid.org/0000-0002-8709-5780) 1,
  • Simon Buckingham Shum 1,
  • Philippa Ryan 2,
  • Ágnes Sándor 3 &
  • Xiaolong Wang 1


Research into the teaching and assessment of student writing shows that many students find academic writing a challenge to learn, with legal writing no exception. Improving the availability and quality of timely formative feedback is an important aim. However, the time-consuming nature of assessing writing makes it impractical for instructors to provide rapid, detailed feedback on hundreds of draft texts which might be improved prior to submission. This paper describes the design of a natural language processing (NLP) tool to provide such support. We report progress in the development of a web application called AWA (Academic Writing Analytics), which has been piloted in a Civil Law degree. We describe: the underlying NLP platform and the participatory design process through which the law academic and analytics team tested and refined an existing rhetorical parser for the discipline; the user interface design and evaluation process; and feedback from students, which was broadly positive, but also identifies important issues to address. We discuss how our approach is positioned in relation to concerns regarding automated essay grading, and ways in which AWA might provide more actionable feedback to students. We conclude by considering how this design process addresses the challenge of making explicit to learners and educators the underlying mode of action in analytic devices such as our rhetorical parser, which we term algorithmic accountability.


Introduction

Writing as a Key Disciplinary Skill

Critical, analytical writing is a key skill in learning, particularly in higher education contexts, and for employment in most knowledge-intensive professions (National Commission On Writing 2003; OECD and Statistics Canada 2010). Similarly in legal contexts, writing is both a ‘tool of the trade’ and a tool to think with – a way to engage in ‘writing to learn’ by considering the application of legal concepts through written legal documents (Parker 1997). A 1992 US report, commonly known as the MacCrate report (The Task Force on Law Schools and the Profession: Narrowing the Gap 1992), notes that although it is essential for lawyers to learn effective communication methods (including analytical writing), there is in fact a disconnect between legal practice and legal education, with too little focus on this communication in legal training. The subsequent ‘Carnegie report’ (Sullivan et al. 2007) raised similar concerns, suggesting the need to reform assessment practices with an increased focus on legal process and practice over product. Indeed, in the context described in this work, across the qualifications offered by the University of Technology Sydney (UTS) Law Faculty, critical analysis and evaluation, research skills (to find, synthesize and evaluate relevant information), and communication and collaboration (using English effectively to inform, analyse, report and persuade in an appropriate – often written – medium) are all highlighted as core graduate attributes. Thus, although there are stark differences internationally in the emphasis placed on writing in undergraduate legal education (Todd 2013), English-speaking common law countries share a clear emphasis on written communication in legal education. Learning the law is not simply about memorizing and recalling the contents of ‘the law’, but about thinking like a lawyer – the ability to process, analyse, and apply the law (Beazley 2004); abilities fundamentally tied to writing. Indeed, preliminary work indicates a relationship between grades in specific writing courses (common in the US context) and success in other law courses (Clark 2013).

Teaching academic writing is recognized as a challenge across higher education (Ganobcsik-Williams 2006), with a disparity between the more superficial presentational criteria by which students often judge their work and the level of analytical argumentation that educators seek (Andrews 2009; Lea and Street 1998; Lillis and Turner 2001; Norton 1990). As a field, Law places great emphasis on argumentation, but evidence suggests that teaching it effectively has proven challenging. For example, a survey of US judges, practitioners and legal writing teachers indicated a generally poor view of new law graduates’ writing skills (Kosse and ButleRitchie 2003). These respondents report writing that lacks focus, a developed theme, structure, persuasive argument or analysis, and synthesis and authority analysis, alongside errors in citation, grammar, spelling, and punctuation (Kosse and ButleRitchie 2003). Similar concerns are raised by other expert legal writers (Abner and Kierstad 2010).

A body of discussion and guidance literature has emerged on good practice in legal writing. It ranges from discussion of the elegant combination of clarity, concision, and engaging writing (Osbeck 2012) to very specific concerns regarding a preference for plain English over jargon and legalese (Stark 1984) – a concern shared by judges (across seniority and demography), who find plain English more persuasive (Flammer 2010). Others give specific guidance (see, for example, Goldstein and Lieberman 2002; Murumba 1991; Samuelson 1984) which makes clear that key elements of good legal writing include: asserting a thesis (up front); developing an argument through analysis and synthesis of sources, facts, and legal argument (weighed in a measured way); and writing in a clear, simple, and direct or concrete tone.

To address concerns regarding written communication, legal-writing scholars have argued for an increased focus on the process of writing in both curricula and assessments. In the legal writing context (largely in American law schools) there have been calls for writing mentoring to focus on underlying analysis rather than structural features (Gionfriddo et al. 2009), and for changes to assessment practices, with empirical studies used to motivate (and assess the impact of) these changes (Curcio 2009); indeed, the same author has provided empirical evidence in the law context that formative assessment can improve final grades by roughly half a grade (Curcio et al. 2008), with further preliminary evidence indicating a positive impact on mid-course grades (but not end-of-course grades) (Herring and Lynch 2014). Authors have thus suggested a need to address students’ mindsets (Sperling and Shapcott 2012) and metacognitive and self-regulatory skills (Niedwiecki 2006, 2012) through effective formative assessment, with a commensurate desire to improve the level of self-reflection and professional writing development throughout one’s legal career (Niedwiecki 2012; Vinson 2005).

Aligning Student and Instructor Assessments of Writing

At UTS students are usually admitted to a law degree on the strength of very good school-leaving results or upon successful completion of an undergraduate degree. As a general rule, both cohorts have strong writing skills. However, we identified that when students were invited to self-assess their own writing using the formal rubric they tended to over-rate their writing. If law students are not taught how to assess their own written work meaningfully while at university, they will be unlikely to learn this skill in practice. Yet it is in legal practice that the skill is most needed. The professional and ethical obligations that are imposed on legal practitioners mean that they must be mindful of what and how they write at all times. Most of what lawyers do involves reading, writing and critiquing correspondence, evidence, advice and instructions.

The metacognitive processes involved in assessing the quality of written work, particularly one’s own, are sophisticated. Indeed, the scholarship on this point paints a negative impression of students’ ability to improve their self-assessments. Research shows that people often have a faulty mental model of how they learn and remember, making them prone to both mis-assessing and mismanaging their own learning (Bjork et al. 2013 ). When students are taught to calibrate their self-reviews to instructor defined assessment criteria, their learning outcomes improve (Boud et al. 2013 , 2015 ). Importantly, self-review should be designed in such a way as to be formative in making critical judgments about the quality of the reviewed writing. A mechanism or intervention that causes students to pause and ask strategic questions about the content and quality of their writing could qualify as an incentive to proof-read and make the critical judgments required for meaningful self-monitoring. Ultimately, we seek to build students’ ability to assess themselves as accurately as an expert assesses them, which as Boud has argued, is the kind of “sustainable assessment” capability needed for lifelong learning (Boud 2000 ).

One means by which to support such alignment is through the automated provision of formative feedback on the accuracy of students’ self-assessment, or the writing itself. Indeed, a line of research has developed to analyse student writing through automated essay scoring or evaluation systems (AEE). These systems have been successfully deployed in summative assessment of constrained-task sets, with evidence indicating generally high levels of reliability between automated and instructor assessments (see, e.g., discussions throughout Shermis and Burstein 2013 ), with some criticism of this work emerging (Ericsson and Haswell 2006 ). Such systems have been targeted at both summative and formative ends. However, these approaches have tended to explore semantic content (i.e., the topics or themes being discussed), and syntactic structure (i.e., the surface level structures in the text), with some analysis of cohesion (see particularly, McNamara et al. 2014 ), but less focus on rhetorical structure (i.e., the expression of moves in an argumentative structure). Moreover, these systems have not typically been applied to formative self-assessment on open-ended writing assignments.

The Rhetorical Structure of Written Texts

The research described in this paper applies a natural language processing (NLP) tool for rhetorical parsing to the context of legal essay writing. The NLP capability in AWA is currently being developed as an adaptation of the rhetorical parsing module (Sándor 2007 ) of the Xerox Incremental Parser (XIP) (Aït-Mokhtar et al. 2002 ) to the legal domain. The parser is designed to detect sentences that reflect salient rhetorical moves in analytical texts (like research articles and reports).

The term rhetorical move was introduced by Swales (1990) to characterise the communicative functions present in scholarly argumentation. Swales defines rhetorical moves such as stating the relevant problem, showing the gaps, or proposing solutions. Rhetorical moves are usually conveyed by sequences of sentences, and often they are made explicit by more or less standardized discourse patterns, which contribute to the articulation of the author’s argumentation strategy (e.g. “In this paper we describe…” – stating the relevant problem; “Contrary to previous ideas…” – stating the gaps; “In this paper we have shown…” – proposing solutions). The goal of the XIP rhetorical parser is the detection of the recurring patterns that indicate rhetorical moves in what we call rhetorically salient sentences.

Rhetorically salient sentences have successfully indicated relevant content elements in various text-mining tasks. For example, significantly new research is spotted by detecting a small number of “paradigm shifts” in tens of thousands of biomedical research abstracts (Lisacek et al. 2005) through the identification of salient sentences containing discourse patterns that convey contrast between past findings and new experimental evidence. Another application detects salient sentences that describe research problems and summary statements. This application was tested for assisting academic peer reviewers in grasping the main points in research papers (Sándor and Vorndran 2009) and project evaluators in extracting key messages from grantees’ project reports (De Liddo et al. 2012). Moreover, as we describe later (Table 1), these moves may be mapped to a rubric structure in the legal writing context.

The analytical module of AWA Footnote 1 labels the following types of salient sentences (signalled in the text with highlighting and a ‘Function Key’ – see next section): Summarizing issues (describing the article’s plan, goals, and conclusions) (S), describing Background knowledge (B), Contrasting ideas (C), Emphasizing important ideas (E), mentioning Novel ideas (N), pointing out Surprising facts, results, etc. (S), describing an open Question or insufficient knowledge (Q), and recognizing research Trends (T). Summarizing is related to Swales’ rhetorical moves stating relevant problems and proposing solutions, whereas all the other sentence types characterise problem formulation, which AWA’s user interface refers to collectively as Important Sentences. Our typology of Important Sentences has been developed as a result of the detection of recurrent discourse patterns in peer-reviewed research articles drawn from a variety of fields including social sciences and bio-medicine. Some examples of the discourse patterns are shown in Fig. 1.

Fig. 1 Examples of discourse indicators of rhetorical moves characterising research problems

The typology is robust in the text-mining tasks mentioned above (De Liddo et al. 2012; Lisacek et al. 2005; Sándor and Vorndran 2009) — but is designed to be modified if a new domain establishes the need for the detection of additional rhetorical moves. The rhetorical parser is the implementation of the concept-matching framework (Sándor et al. 2006), which models the salient discourse patterns as instantiations of syntactically related Footnote 2 words and expressions that convey constituent concepts. For example, sentences conveying contrasting ideas contain a pair of syntactically related words or expressions conveying the concepts of “contrast” and “idea/mental operation”. Thus the following three syntactically and semantically different sentences are all labeled ‘C’ by AWA, since the words in bold match this pattern: challenge, need, failure and shift convey “contrast”, while identify, highlights, demonstrating and notions convey “idea/mental operation”. The two classes of words are syntactically related in all three sentences:

These 3 sentences characterise analytical issues by identifying a challenge, highlighting a need, demonstrating failure and discussing the notion of a shift.

The question we investigate in this paper is whether it is possible to design automatically generated cues for civil law students and educators about the presence of valued qualities in student writing, and how these cues might serve as formative feedback to students when they are drafting their texts. In the remainder of this paper, we briefly introduce the AWA web application, describing its architecture and user interface. The evaluation of the tool is reported in terms of how we structured a legal writing academic’s feedback to refine the rhetorical parser implemented in AWA, and the methodology for harvesting student feedback. We then analyse student feedback, before concluding with a discussion of how current limitations can be tackled, and the challenge of “algorithmic accountability”, a broader concern in critical discourse about data in society.

The AWA Web Application

UTS has been developing an end-user application designed for student and staff use. The Academic Writing Analytics (AWA) tool is introduced below in terms of its NLP capabilities, architecture and user interface. AWA v1, described here, is implemented in PHP, while v2 is currently under development in Ruby-on-Rails. Both versions operate in configurations approved by the university’s IT Division to ensure that, as we build confidence in the tool’s efficacy, it is ready for wider rollout as required.

AWA Architecture

AWA’s architecture (Fig. 2 ) is designed to deliver the following capabilities, across all major web browsers and mobile devices:

Authenticate users using university credentials, identifying their faculty

Present discipline-specific sample texts users can experiment with, and discipline-specific examples in the user guide

Accept plain text input from the user (pasted from source)

Log all submissions

Invoke multiple NLP services on the Open Xerox server (the reflective and analytic/rhetorical parsers)

Render the output in multiple forms

Fig. 2 AWA’s functional architecture

Fig. 3 A green Summary sentence signals what the writer’s intention is. On the right is the key to the different sentence types (clicking on this displays more details in the online Guide). Yellow sentences indicate the presence of a rhetorical ‘key’ (indicated below ‘Both’ in the key), for example the yellow Contrast sentence shown. Pink sentences indicate that both a ‘summary’ and another important rhetorical move are made

Fig. 4 Example sentences from civil law essays, classified by XIP and rendered in AWA
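To make the submission flow concrete, the sketch below shows the shape of such a service in Python (AWA itself is implemented in PHP and Ruby-on-Rails, as noted above): accept pasted text, log the submission, call a remote rhetorical-parser service, and return its labels for rendering. The endpoint path, form fields and parser URL are placeholders, not AWA’s or Open Xerox’s actual API.

# Minimal illustrative sketch of AWA's submit step (not the production PHP /
# Ruby-on-Rails code): accept pasted text, log the submission, call a remote
# rhetorical-parser service and return its labels for rendering.
# PARSER_URL, the endpoint path and the form fields are placeholders, not the
# real Open Xerox API.
import datetime
import json
import logging

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
logging.basicConfig(filename="submissions.log", level=logging.INFO)

PARSER_URL = "https://example.org/rhetorical-parser"  # placeholder endpoint


@app.route("/analyse", methods=["POST"])
def analyse():
    text = request.form["text"]                    # plain text pasted by the user
    user = request.form.get("user", "anonymous")   # normally taken from university SSO
    # Log all submissions (capability list above).
    logging.info(json.dumps({
        "user": user,
        "time": datetime.datetime.utcnow().isoformat(),
        "chars": len(text),
    }))
    # Invoke the remote NLP service and hand its output back for rendering.
    resp = requests.post(PARSER_URL, data={"text": text}, timeout=30)
    return jsonify(resp.json())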

The Rhetorical Parser

The rhetorical analysis in the Xerox Analytic Parser is implemented through the Xerox Incremental Parser (XIP) using syntactic parsing, dedicated lexicons and pattern-matching rules. Syntactic parsing extracts syntactic dependencies (such as the one between “challenge” and “identify” in the sentence above), the lexicons contain lists of words and expressions that are associated with the constituent concepts, and the pattern-matching rules select the sentences that contain dependencies instantiating the pairs of concepts necessary for conveying the labels assigned to rhetorically salient sentences (e.g. “contrast” + “idea/mental operation” = Contrasting idea). As described above, these rhetorical moves are: Summarizing issues (describing the article’s plan, goals, and conclusions) (S), describing Background knowledge (B), Contrasting ideas (C), Emphasizing important ideas (E), mentioning Novel ideas (N), pointing out Surprising facts, results, etc. (S), describing an open Question or insufficient knowledge (Q), and recognizing research Trends (T). In the first prototype of AWA we have chosen to represent all the salient sentence types detected by XIP; however, our analyses show that some of them are not particularly relevant in the legal domain. Thus in the future we might omit the less relevant moves such as (B), (Q) and (T), which are characteristic of empirical analyses, whose goal is the accumulation of knowledge, in contrast to legal analyses (and the social sciences in general), where authors “negotiate” around facts and interpretations (Åström and Sándor 2009). The most frequent labels are those present in any kind of analytical writing: (S), (C) and (E).
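As an illustration of this concept-matching idea (not the XIP implementation, which uses its own grammar, lexicons and rules), the following sketch uses spaCy’s dependency parse and two tiny stand-in lexicons to label a sentence as a Contrasting idea when a “contrast” word is syntactically related to an “idea/mental operation” word.

# Illustrative sketch of the concept-matching idea, NOT the XIP implementation.
# A sentence is labelled 'C' (Contrasting idea) when a word conveying "contrast"
# is syntactically related to a word conveying "idea/mental operation".
# spaCy stands in for XIP's parser; the two lexicons are tiny stand-ins.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with a dependency parser

CONTRAST = {"challenge", "need", "failure", "shift", "however", "contrary"}
IDEA = {"identify", "highlight", "demonstrate", "notion", "argue", "suggest"}


def is_contrasting_idea(sentence: str) -> bool:
    """True if a 'contrast' word and an 'idea' word occur in one dependency."""
    doc = nlp(sentence)
    for token in doc:
        # Each token and its syntactic head form one dependency pair.
        pair = {token.lemma_.lower(), token.head.lemma_.lower()}
        if pair & CONTRAST and pair & IDEA:
            return True
    return False


print(is_contrasting_idea("The ruling highlights the need for a clearer test."))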

AWA’s User Interface Design Process

The NLP capability provided by the XIP rhetorical parser has been developed into a practical tool to produce a user experience good enough that students and academics are able and willing to engage with it. While the XIP rhetorical parser has been in development for over a decade, it is only over the last year that an end-user application for education has been developed.

In contrast to research prototypes and products for automated grading, we are designing AWA’s feedback not as a summative grade, but as a means to provide formative guidance to improve draft writing. AWA adopts the metaphor of receiving feedback from a colleague who has used different coloured highlighters to draw attention to sentences she found interesting for some reason.

Although designed as a ‘walk up and use’ interface requiring no training, students are first introduced to AWA through a face-to-face briefing, sometimes in conjunction with instruction on academic writing that they would receive in any case from the writing support services in UTS. In this session, it is emphasized to them that it is a prototype in development, and that they should be critical of its output if they do not agree with it (see the discussion for further reflection on formative versus summative assessment).

On logging in for the first few occasions, students are welcomed as new users and prompted to visit the Guide, which presents discipline-specific sample texts and examples of each rhetorical move that AWA recognises. If the academics have not provided such examples, students see default samples. Users paste in their text and submit it for analysis. AWA returns a range of visualizations, illustrated in Figs. 3 and 4. In addition, some basic statistics are visualized indicating rhetorical move frequencies, alongside (1) a wordcloud and (2) the key concepts, places, and people mentioned in the text.

The user interface design has been through many iterations, based on hours of testing with academics from many UTS faculties and units. A think-aloud method and screen recordings were used with teams as they worked alone, or in pairs/triads, to analyse and discuss sample student texts in AWA, while the researcher prompted them for thoughts, and explained how the system was working. We gradually debugged the design as we experimented with different ways to ensure that the users could navigate the system smoothly. Footnote 3

Usability aside, the next question was whether AWA’s output was experienced as academically trustworthy by the civil law lecturer, and her students. To date, we have reported statistical correlations between the frequency of certain XIP classifications and the quality of English literature essays (Simsek et al. 2015 ). However, user experience testing has not yet been reported; this application to the legal domain provides a first step to roll-out to students within a single domain.

Research Methods and Design

Participants and Research Design

In the research described in this paper, a collaboration between a law-faculty academic (the 3rd author), analytics researchers (the 1st and 2nd authors), a linguist (the 4th author) and an applications developer (the last author), we addressed the question of whether the AWA tool could usefully foreground the kind of rhetorical moves of interest in a legal assignment.

An alignment was first drawn between the assessment criteria, and the rhetorical moves identified by XIP, to establish the suitability of the tool for providing feedback on the particular task. The effectiveness of the tool was then evaluated with the law-faculty academic providing a close analysis of the accuracy of the parser for detecting the salient rhetorical structures in a sample of essays. Finally, a cohort of students was invited to engage with the tool, and provide their feedback, with 40 agreeing to do so and submitting their essays to AWA, as described in the sections on student evaluation and feedback.

Assessment Context

The law course in which the AWA tool is being developed has an established rubric, against which alignments with rhetorical moves were drawn. This rubric scores a number of facets on a 1–5 scale (unsatisfactory, satisfactory, good, very good, excellent) aligned with the UTS grading scheme. The rubric is structured around particular kinds of move students should make in their texts, aligning these with sections that the students should use to structure their text. Essays were 2000 words in length, on a topic of relevance to civil law, with a range of possible questions provided to students, for example:

The concept and meaning of good faith in negotiation and Alternative Dispute Resolution (ADR) processes, together with an articulation of what actions are required to comply with a good faith obligation or to support good faith negotiation, can be best described as an evolving ‘work in progress’ in Australia. What actions do you think are required by practitioners in order to comply with these good faith obligations? Do you think it is possible for our courts to enforce good faith obligations? Support your view with reference to this article and at least two additional recent authorities (For this purpose, “recent” means published in the last five years).

Students are thus asked to write an argumentative essay, forming a thesis or position in relation to the question. The rubric facets used in this course are:

INTRODUCTION: Statement of thesis and essay plan

Development of sustained thesis

Knowledge and understanding of the Civil Procedure Act (CPA), Uniform Civil Procedure Rules (UCPR), and common law

Identification of relevant issues

Critical analysis, evaluation, and original insight

Development of coherent and persuasive argument

CONCLUSION: Drawing together themes and answering the question posed in the introduction

REFERENCING: Written expression, footnotes and bibliography in accordance with the Australian Guide to Legal Citation (AGLC), 3rd edition

The relevance of the analytical rhetorical moves for legal essays is based on their association with the majority of the assessment criteria in the UTS rubric, as shown in Table 1, which compares the elements of the writing rubric used in a civil law assessment (column 1) with their associated salient sentence types (column 2), and gives an example instantiation (column 3).

Our observation from student self-assessment cover sheets indicates that students found self-assessment against these criteria challenging, since they overestimated their performance; for teachers, providing formative feedback on them may be prohibitively time-consuming. Effective (self-)assessment of legal writing requires the ability to recognise summary statements in introductions and conclusions and to identify the text parts that contain critical analysis; as a second step, the clarity and pertinence of the identified segments need to be evaluated. Both steps need expertise: the first mainly in the analysis of academic writing, and the second in domain knowledge. By highlighting sentences that need to be evaluated, AWA aims to support the first step of this complex assessment activity, aligned with the guidance from the literature described in the introductory sections. Moreover, AWA also indicates in bold the linguistic expressions that trigger the highlighting, with the aim of helping end-users understand the relevant parts of the highlighted sentences. The parser does not yet analyse or provide feedback above the sentence level; as such, it is left to students to reflect on whether sentence types are positioned appropriately at the whole-text, section, or paragraph level.

In the following sections we show how the salient sentence types noted above relate to structures inherent in any legal essay. We comment on some highlighted sentences of a sample legal essay from the LawTeacher.net web site. Footnote 4

Highlighting Statements of Introduction and Conclusion

A key feature of academic writing, particularly in statements of introduction and conclusion, is conveyed through widely taught rhetorical moves such as “Outlining purposes” and “Announcing present research” (Swales et al. 2004). In the AWA tool, these moves fall under the Summary label. Summary sentences are expected in the introduction and the conclusion, as well as at the beginning and the end of sections.

The following Summary sentence is highlighted in the introduction of the sample student essay:

The following Summary sentence appears in the conclusion of the same essay:

By highlighting these sentences AWA focuses the evaluator’s attention on the rhetorical moves of introducing and concluding, while, as we have pointed out, it does not give any clue concerning the clarity of the statements. It is up to the evaluator to assess if these statements are relevant, well written, or if the conclusion matches the aims set in the introduction.

Highlighting Segments Containing Critical Analysis

Whereas the introduction and conclusion statements are clearly associated with acknowledged rhetorical moves of academic writing, and are identifiable in sentences, critical analysis is a more complex endeavour, which has not been associated with linguistic or structural units like sentences or specific section types. Thus pinpointing sentences that contribute to performing critical analysis is not straightforward. Critical analysis is usually explained in the form of general writing guidelines, like “indicate relevant issues”, “formulate problems” or “present original insight”. We suggest that sentences containing any of the salient rhetorical moves labeled in AWA, except for the Summary move, are indicators of compliance with such guidelines: when the author points out relevant Background or Contrast between ideas, puts Emphasis on particular points, or recognizes research Trends and Novelty, she is indicating issues that she considers relevant; when she describes Contrasts and hitherto unanswered Questions, she is formulating research problems; and with Contrast and Emphasis she introduces original insight. We do not claim that our list of rhetorical moves indicating particular aspects of critical analysis is exhaustive, since it is the distillation of corpus studies in previous text-mining work. Should a new aspect emerge in corpora, it could be integrated into the framework.

The following examples illustrate how such moves reflect aspects of critical analysis in the sample essay. The sentence below is labeled Emphasis and Contrast. It introduces the discussion of relevant issues in what follows, and points out the importance of discussing some other issues. Although the “relevant issues” themselves are not contained in the highlighted sentence, the sentence still indicates that the author does handle them as an integral part of the essay, and thus the reader can identify and evaluate them in the subsequent sentences. This sentence also draws the reader’s attention to the necessity of discussing an analytical aspect (“the differences between the two jurisdictions”), which is another indicator of the treatment of relevant issues in the analysis:

The following sentence is labeled Contrast , and it formulates a problem (that of “judicial independence”), which signals that the author is engaged in critical analysis:

We emphasize again that the highlighted sentences convey elements of critical analysis based on the rhetorical moves they contain, and the assessment of the relevance of these elements with respect to the topic of the essay remains an expert task.

Evaluation Methodology with The Law Academic

Confusion Matrix Annotation

We have developed a practical methodology for refining the quality of the parser, using a form of semantic annotation by the domain expert (the civil law academic leading the course) of AWA’s output. Designed to be practical for the development of analytics tools with staff with limited time and resource, this is a rapid version of the more systematic coding that a qualitative data analyst would normally perform on a large corpus, using signal detection theory codes for True/False Positives/Negatives, to populate a confusion matrix:

Thus, the lecturer was asked to highlight True Positives and Negatives where she agreed with AWA’s highlighting and classification of a sentence, or its absence; False Positives where AWA highlighted a sentence she did not consider to be significant, and False Negatives where AWA ignored a sentence she considered to be important. We placed misclassifications of a sentence in this class too, as another form of failure to spot an important move.

We did not prepare detailed annotation guides for the lecturer, since we cannot provide very detailed explanations of AWA highlights for students or teachers either. As described above, AWA labels are based on complex patterns which would be far too cumbersome to describe in AWA. Our aim is to keep the AWA labels intuitively understandable, which is a challenging aspect of the UI. So, we defined the rhetorical moves informally in one sentence each and gave some examples for each type. This first experiment also served as a test of whether the label names, the short explanations and the examples in AWA enable an analyst to grasp the semantics of the labels. We wanted to gain insight into ways to improve the guide in the UI (rather than formally assessing the annotation scheme).

Starting with the generic rhetorical parser, the lecturer selected several essays with known grades. She pasted AWA’s output into Microsoft Word, and using the agreed scheme, annotated it with TP/FP/TN/FN plus explanatory margin comments. The linguist in turn commented on this document, for instance, explaining why AWA behaved the way it did when this confused the lecturer, or asking why a sentence was annotated as FP/FN.

This structured approach to analyst-academic communication began to build a corpus from which one could in principle calculate metrics such as precision, recall and F1; however, it is not yet large enough to calculate these reliably. Rather, the confusion matrix provided more focused feedback than informal comments to the team, aiding rapid iteration and dialogue, using a familiar tool (Word) and a simple 2 × 2 representation that required no training. We return to the importance of this process in the discussion on algorithmic accountability.
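For reference, the sketch below shows how precision, recall and F1 would be computed from such TP/FP/FN/TN counts once the corpus is large enough; the counts in the example are invented, not the figures reported in Tables 2 and 4.

# Illustrative only: how precision, recall and F1 would be computed from the
# TP/FP/FN/TN counts gathered through the annotation exercise, once the
# annotated corpus is large enough for such figures to be meaningful.
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}


# Hypothetical counts, not the figures reported in Tables 2 and 4:
print(classification_metrics(tp=30, fp=10, fn=5, tn=55))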

Refinements to AWA

For each of the cells in the confusion matrix, we consider examples of successes and failures, and the different adaptation measures that were taken to improve the signal/noise ratio.

True Positives

Consistent with the intentions of the rhetorical analysis these sentences illustrate correct classification:

Contrasting ideas:

Summing up the main topic of the essay:

False Positives

We found that sentences annotated as False Positives by the lecturer were falsely triggered by patterns that are often relevant in non-legal academic writing, but in law the same patterns are used as legal ‘terms of art’, for instance:

We can reduce False Positives in such cases by deleting the legal terms from the XIP lexicon, but the complication is that these words may also be used in their analytical sense. In such cases we implement disambiguation rules. In the following sentence “issue” is not used as a legal term, and so the sentence should be (and is) highlighted:

False Negatives

A few false negatives were due to the fact that analytical content in legal essays may convey the constituent concepts using words or expressions that differ from those in the existing lexicons. For example, neither “assess” nor “argument” was part of the lexicon, and thus the following sentence was not labeled. Once the words are added, the SUMMARY pattern is detected by the parser, and the sentence is labeled.

While one aspect of adaptation is the expansion of the lexicon, in fact the overwhelming majority of false negatives were due to sentences that the law academic coded as relevant in terms of her interpretation of the XIP categories, but which do not contain patterns coded in XIP.

For example, the lecturer expected the following sentence to be labeled as ‘C’:

This sentence does indeed convey a contrast. However, it is not labeled, because the contrast is not between two “ideas”, but between one effect of technology (i.e. it “has facilitated the retention of all records for businesses”) and Keane’s maintaining a “converse effect” of technology. Technically speaking, even though this sentence does contain words that represent the relevant analytical concepts, it is not selected, since there is no syntactic relationship between any two of them. We can consider that this sentence partially fulfils the criteria for being selected, since it contains words instantiating some constituent concepts.

Were the sentence formulated in more precise terms, i.e. as a contrast between “ideas”, it would be highlighted, and labeled as ‘Contrast’, thus:

In this case we need to consider extending the current analysis, because it seems that the AWA patterns are too restrictive for the ‘C’ move.

The following sentence was expected by the lecturer to be labeled as ‘B’ Background knowledge:

This general description of the concept of “discovery” can legitimately be interpreted as “background knowledge”; however, it does not have the same semantics as ‘B’ in AWA. The semantics of the ‘B’ label in AWA is “reference to previous work”, as illustrated in the true positive ‘B’ sentence:

The role of the sentences annotated as false negatives in legal essay analytics needs to be elucidated before further adaptation is undertaken. On the one hand we need to revise the UI definitions and explanations so that they are in line with users’ intuitions; on the other hand, we need to consider modifying the discourse patterns to be detected in order to target legal discourse more specifically.

Taken together, the existing general analytical parser, without any adaptation, did provide relevant output for legal texts. Our data are too small for computing meaningful metrics, thus in Table 2 we report the result of the annotation exercise in terms of numbers of sentences.

This test indicated that lexical adaptation is required: deleting legal ‘terms of art’ from the general lexicon, and extending the lexicon with genre-specific vocabulary used in legal essays for conveying rhetorical moves. No syntactic parse errors caused any False Negatives or False Positives. Although some sentences in the legal essays are longer than sentences in average general texts, this did not affect parser performance.

We started the lexical adaptation based on the test annotations. We created shared documents where we collected words to be added and deleted as we encountered them during the development process. Table 3 illustrates the list of legal ‘terms of art’ collected for deletion.

Currently, the implementation of changes (such as those introduced above) to XIP is performed by hand. We foresee the elaboration of mechanisms that automatically update the lexicons on user input or learn the domain vocabulary through machine learning.
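The following sketch illustrates, under the simplifying assumption that a lexicon is a set of lemmas per constituent concept, what this adaptation amounts to: removing legal ‘terms of art’ such as issue, problem and solution, and adding genre-specific vocabulary such as assess and argument. Which concept each word belongs to is assumed for illustration; the sketch omits the disambiguation rules mentioned above and is not XIP’s actual lexicon format.

# Sketch of the lexical adaptation described above, assuming a lexicon is a set
# of lemmas per constituent concept. Which concept each word belongs to is
# assumed here for illustration; the deletions follow the 'terms of art'
# examples in the text (issue, problem, solution) and the additions follow the
# false-negative example (assess, argument). Disambiguation rules are omitted.
ANALYTIC_LEXICON = {
    "contrast": {"challenge", "need", "failure", "issue", "problem", "solution"},
    "idea/mental operation": {"identify", "highlight", "demonstrate", "notion"},
    "summary": {"conclude", "describe", "discuss"},
}

LEGAL_TERMS_OF_ART = {"issue", "problem", "solution"}        # to delete
LEGAL_GENRE_ADDITIONS = {"summary": {"assess", "argument"}}  # to add


def adapt_lexicon(lexicon, deletions, additions):
    """Return a copy of the lexicon with terms of art removed and new words added."""
    adapted = {concept: words - deletions for concept, words in lexicon.items()}
    for concept, words in additions.items():
        adapted.setdefault(concept, set()).update(words)
    return adapted


LEGAL_LEXICON = adapt_lexicon(ANALYTIC_LEXICON, LEGAL_TERMS_OF_ART, LEGAL_GENRE_ADDITIONS)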

No formal evaluation of the effect of the changes has been performed, but it is instructive to analyse the output of the slightly adapted parser on the document used for the annotation exercise. Having updated the lexicons following this basic adaptation, the confusion matrix showed the results indicated in Table 4.

These changes resulted in a decrease in the number of False Positives with a commensurate increase in the number of True Negatives. This was due to the deletion of the legal terms from the general analytical lexicon. For example, the following sentence was highlighted as ‘Contrast’ in the general analytical parser, but not in the adapted legal parser, because of the elimination of issue, solution and problem from the lexicon.

The remaining False Positives and False Negatives are due to differences between the annotator’s and the general analytical parser’s definitions of the rhetorical moves. Further analysis is needed to determine whether it is necessary to change the scope of the analytical parser by adapting the patterns to legal rhetorical moves.

Having taken the first steps in refining AWA for the legal domain, we moved into a first iteration of exposure to the students themselves.

Evaluation Methodology with The Students

Introducing AWA to Students

The evaluation of AWA by Law students was designed carefully to ensure that it did not disadvantage any students. Students had already been introduced to the concept of text analysis software in a legal technology lecture, setting the context for an open invitation to engage critically with a prototype. They were informed that they would only be given access to AWA after submission of their essays, to avoid any perceived risk of unfair advantage. AWA was thus framed not only as a tool for self-assessment of their own writing, but as an example of the emerging tools which they could expect to encounter in their professional practice, particularly those who choose careers in commercial litigation.

Forty students volunteered to participate in the project (submitting essays to AWA) and of those initial volunteers, twenty managed to attend introductory sessions where they were introduced to the impact that NLP is having on jobs in diverse sectors, in education specifically, and then introduced to AWA and shown how to use it, concluding with open discussion. Both sessions were held after the participants had submitted their essays, verified against student records.

In the sessions it was emphasized, on the one hand, that AWA was a prototype and students should not be unduly concerned if it failed to highlight sentences they believed to be sound; on the other hand, the academic law staff present indicated that they had growing confidence in it based on their testing.

The students were given a week to experiment with AWA, and were then sent a 4-question online survey, with 12 of the 40 participants submitting responses:

How comfortable are you with getting feedback of this sort from a computer?

Did you find the feedback meaningful, so you could see ways to improve your writing?

If we continue to make AWA available, what is the likelihood that you would use it?

We’d love to hear any further thoughts you have. Please comment here.

Reflection Statements

In addition, all students on the course were invited, but not required, to orally present 2-minute ‘reflection statements’, worth 5% of the total subject grade. AWA pilot students could choose to reflect on their experience of AWA, as an alternative to reflecting upon other material in the course, which other students did. Reflective statements were assessed on the oral presentation only (no written version required), all against the same criteria: use of plain English expression, speaking style, description of the content upon which the student was reflecting, and a clear statement of what is understood as a result of engaging with that learning content (or the use of the AWA tool). Students were also invited to state how their understanding of legal practice might be influenced by their learning or experience. Of the 280 students taking Civil Law, 277 chose to complete a reflection statement, with 8 of the 40 AWA students choosing to specifically reflect on their experience with AWA; 2 of them also provided written copies of their statements, while data from another 5 comes from the lecturer’s notes. All students gave written permission for their feedback to be used anonymously for research purposes.

Qualitative Coding of Student Feedback

An analysis was conducted of the survey data (including comments), the written student reflections, and the lecturer notes from the oral presentations on AWA. For those completing the questionnaire, response frequencies were tabulated. Further analysis of the written content was conducted in NVivo (QSR International Pty Ltd 2012 ) to identify thematic patterns across the content; these are reported in the next section, with broad patterns noted and exemplifications of the feedback given in brackets.

Student Feedback: Results

We organise the feedback data into several themes we discerned:

AWA’s value as a tool for provoking reflection

AWA’s lack of sensitivity to linguistic nuance

Student sentiment on receiving this kind of feedback from a machine

Student appetite for automated support

Student uncertainty on the relationship between AWA output and final grade

AWA’s Value as a Reflective Tool

Survey question: Did you find the feedback meaningful, so you could see ways to improve your writing?

A number of students mentioned in their written comments or reflections that the AWA feedback had helped them to think differently about their writing by using the highlighting – and lack of highlighting – as a prompt for reflection. Table 5 illustrates the students’ views on the value of this.

Lack of Sensitivity to Linguistic Nuance

Interestingly, students mentioned that they had reflected on their writing, even though they questioned the accuracy of the AWA feedback (“I found it really useful in scrutinizing my own work and culling the fat from my essay. I don’t think it was 100% accurate in what it did, and the bold words gave me a really good indication of the sort of phrasing it was looking for” Respondent 5 ).

Another student (who had marked that the AWA feedback was ‘not all meaningful’) noted that AWA was “not able to identify a single introductory remark” (respondent 7), although both they and an expert colleague had thought the writing contained such features. A further student (who marked the feedback as ‘meaningful in part’) noted:

“it is possible to make a clearly stated point in an academic way without using one of the markers (and it is possible that tools such as this have not been programmed to search for all possible permutations of metadiscourse) that would be recognised by the algorithm. I think perhaps that saying that if a paper does not use specified ‘signposts’ suggests that the writing is not clear and ‘academic’ (see ‘tips’ on the results page), constricts writing style. I think it is possible to be clear about your position without explicitly saying ‘my position is…’” (respondent 11). Other students made similar claims: “…found that the tool was limited in its ability to pick up on summary sentences. It was able to detect phrases such as ‘ultimately, this essay will conclude,’ or ‘in conclusion,’ but the use of adverbs such as ‘thus,’ and ‘evidently,’ in conclusive statements failed to be recognized.” … “Another limitation is that certain sentences, which were recounts or mere descriptions were deemed important, whilst more substantive parts of the essay containing arguments and original voice failed to be detected.” (Student 1 reflection).

On Receiving Feedback from a Machine

Survey question: How comfortable are you with getting feedback of this sort from a computer?

Some students were very positive about removing the personal element (“takes the emotion out of having your work scrutinized” respondent 12; “it was not embarrassing in the way that it can be when a tutor or marker gives feedback” student 7 reflection notes ); and the potential for on demand feedback (“feedback is available 24 hours a day” student 7 reflection notes ; “I think AWA will eventually be able to help bridge the ‘teaching/learning’ divide [between large classes & few tutor consultation hours]” student 4 reflection notes ). Some students also noted the reflective possibilities of using AWA in an ongoing writing process, for example:

“…writing multiple drafts and proof reading each can be both tiresome and difficult considering it is often hard to recognize the flaws of your own writing when you’ve been working on it for so long. Xerox’s tool acts as a great, objective third party in providing early feedback.” Student 1 reflection “I would be comfortable receiving feedback of this sort from this kind of tool in the same way I’m comfortable receiving feedback from a person - it is something to reflect on and consider in order to make decisions whether implementing the suggestions/feedback will improve your piece of writing, or your writing generally.” Respondent 11

One noted the potential of AWA to fit into their current writing workflow, noting “I currently run most of my essays through ‘Grammarly’ [a grammar-checking website] before submission” respondent 6. However, some students provided the caveat that they would consider AWA as one source of feedback alongside others, and/or that they would not “trust it as I would trust feedback from an academic” respondent 12 .

Appetite for Automated Support

Survey question: If we continue to make AWA available, what is the likelihood that you would use it?

There was a clear appetite for support from tools such as AWA among these students. The students were invited to use the tool to reflect on their current assignment; however, a number of them mention ‘testing the system’ – uploading multiple assignments, varying expression in individual assignments, and uploading assignments from varying disciplines, to explore the differences in feedback obtained. Indeed respondent 8 (quoted below with regard to a desire to connect feedback to outcomes) said:

“I dug out a few older essays from different law subjects and ran them through the software. Some where I got average marks (60-70%) and another where I absolutely excelled and got 95%. When I compared these essays, I didn’t see much difference in the stats analysed by the software – all my work seemed to have quite low detection rates of ‘importance’, yet on some I got 60%, while others 95%.” Respondent 8

Another reported that “I put through different types of academic papers I have written and discovered that I did not use recognised summarising language consistently across different faculties.” Respondent 12. Indeed, one student looked up the research papers AWA is based on (listed on the AWA site), saying:

“Overall I’m impressed with the tool and think it would be a powerful instrument for students, markers and academics. Originally it appeared to me to essentially be searching for words, but after looking more carefully at the results I can see that it is analysing sentence structure to provide commentary, which is impressive” respondent 11.

Relationship to Grade

Perhaps not surprisingly, students wanted to know how the AWA feedback related to outcomes (“I would only find real value in this software if it improved my grades. If framing my writing with ‘contrast’, ‘emphasis’, ‘position’, etc gave me better marks then the feedback would be very meaningful.” respondent 8; “I would like to know if the changes I would have made would have improve my mark.” Student 8 reflection notes).

Limitations of the Evaluation

The student evaluation was conducted in an authentic context, with students reflecting on how an assignment that they had just submitted might have been improved had AWA been available. This has generated extremely useful insights, but we recognise that this is a preliminary evaluation with a small sample size. While in other case studies we have been able to test for statistical patterns and calculate classification metrics, the annotated corpus from this work is not yet large enough to do this reliably. We thus have qualitative evidence of AWA’s value from the law academic and students, which has yet to be quantified. The emerging themes indicate potential areas for targeting future evaluation, providing qualitative insight into the potential and areas for improvement in the AWA tool. We now consider the implications of the student feedback, and reflect on the state of this project in relation to broader concerns about whether machine intelligence such as this can be trusted.

Discussion

The prospect of automated analysis of writing finding a place in mainstream educational experience provokes strong reactions and natural scepticism. In writing we can express the highest forms of human reasoning: how can a machine make sense of this in a meaningful manner?

The work presented here has sought to describe a user-centered design process for a tool that is not seeking to grade, thus removing some of the ‘high stakes’ controversy surrounding automated grading (see, for example, Condon 2013 ; Deane 2013 ; Ericsson and Haswell 2006 ). However, seeking ‘only’ to provoke reflection on the part of the student writer in no way removes the obligation to ensure as far as possible that this is productive reflection about meaningful prompts: if the signal-to-noise ratio is too low, students and educators will find they are wasting their time reviewing meaningless highlighted text, and will disengage. The student feedback indicates that AWA was provoking useful reflection, but was not without its problems. AWA can in no sense be considered complete, and we are acutely aware of its current limitations, which set the agenda for our ongoing work.

From Highlighting to Actionable Reports

A key challenge that we are now considering is how to bridge the gap between the current ability to highlight sentences, and capability to generate a meaningful report which is more clearly actionable by the writer. A number of approaches to this are under consideration. At a simple level, ‘canned text’ may be deployed, triggered by the recognition of a simple pattern (e.g. the absence of any Summary sentences in the abstract or conclusion). Advancing our current analysis, combining sentence-types for analysis at the paragraph or multi-sentence level may prove fruitful. In addition, more advanced Natural Language Generation approaches would permit greater flexibility in the conditional construction of feedback to students (e.g. failure to use Contrasting rhetorical moves when discussing particular authors or theories known to be in tension with each other — from prior expert knowledge of the field). Progress on this front will help to address uncertainty from students and instructors as to how to make sense of the highlighting.
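As a sketch of the simplest such rule (illustrative only, not AWA’s implemented feedback), the snippet below returns a fixed prompt when no Summary sentence is detected in a section where one is expected.

# Sketch of the simplest 'canned text' rule described above: if no Summary ('S')
# sentence is detected in a section where one is expected, return a fixed prompt.
# The rule and wording are illustrative, not AWA's implemented feedback.
CANNED_FEEDBACK = {
    "missing_summary": ("No Summary sentence was detected in this section. "
                        "Consider stating explicitly what the section sets out or concludes."),
}


def feedback_for_section(section_name: str, sentence_labels: list) -> list:
    """Return canned feedback messages for one section of the draft."""
    messages = []
    if section_name in {"introduction", "conclusion"} and "S" not in sentence_labels:
        messages.append(CANNED_FEEDBACK["missing_summary"])
    return messages


print(feedback_for_section("conclusion", ["B", "C", "E"]))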

“Does this Highlighting Mean it’s Good?”

Related to the previous point, but standing as a question in its own right is the extent to which students and educators should be encouraged to use rhetorically-based highlighting as proxies for the overall quality of the piece. Prior work (Simsek et al. 2015 ) has investigated statistical relationships between the frequency of all or particular XIP sentence types, and essay grade, with some positive correlations found, but clearly there is much more to the creation of a coherent piece of writing than just this indicator, so one does not expect it to account for all variance. Rhetorical parsing on its own does not assess the truth or internal consistency of statements (for which fact-checking or domain-specific ontology-based annotation (Cohen and Hersh 2005 ) could be used). Other writing analytics approaches provide complementary lenses (see, for example, McNamara et al. 2014 ) which, when combined in a future suite of writing analytics tools, would illuminate different levels and properties of a text in a coherent user experience.

We are considering deploying automated techniques to analyse the patterns of highlighting in XIP output. For example, sequential analysis might detect patterns in the sequences in which different highlights co-occur, or follow each other. We can also hypothesize that certain sentence types may occur in particular kinds of writing, or particular sections of a given genre of document.
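A minimal form of such sequential analysis is sketched below: counting how often one rhetorical label follows another across a document’s ordered sentence labels. The label sequence shown is invented for illustration.

# A minimal form of sequential analysis: counting how often one rhetorical label
# follows another across a document's ordered sentence labels. The label
# sequence below is invented for illustration.
from collections import Counter

labels = ["S", "B", "C", "E", "C", "C", "S"]  # one label per highlighted sentence, in order

bigram_counts = Counter(zip(labels, labels[1:]))
for (first, second), count in bigram_counts.most_common():
    print(f"{first} -> {second}: {count}")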

Addressing Algorithmic Accountability

Algorithms sit at the heart of all analytics, but their design and debugging remains the preserve of the few who possess statistical, mathematical or coding expertise. In an era when data collection pervades societal life, embedded in appliances and the physical environment, it is impossible to understand how all the algorithms ‘touching’ our lives work. Some might ask if learners or educators should be troubling themselves with why software is behaving as it does, if it is behaving predictably. However, when their functioning becomes a matter of enquiry or concern, these cannot remain black boxes. For many, there is an ethical need to articulate the possible definitions and meanings of “algorithmic accountability” (see, for example, Barocas et al. 2013 ; Diakopoulos 2014 ). For learning analytics, this is also a pedagogical need, such that learning analytics have appropriate levels of transparency to different stakeholders.

In the context of AWA, we propose three ways to respond to this challenge, noting also their limitations.

Open Research Publications

The way in which XIP operates has been published in the research literature (Aït-Mokhtar et al. 2002 ), as well as the contextualization to academia in this case study. While this is a dissemination channel suited for researchers, and citeable peer reviewed research adds credibility for non-academics, AWA’s function requires translation into appropriate forms for educational practitioners and students who also have the right to enquire. Currently this takes the form of the website’s Guide, and personal briefings presented to academics and students as orientation.

Openly Testable System Behaviour

Many of XIP’s services are accessible via a public API (Xerox n.d.), providing another form of accountability: they can be tested at will by anybody able to use the API. The rhetorical parser documented here is available only to research partners at present, but is rigorously testable. XIP is not, however, currently available in open source form, which is unquestionably the most rigorous form of accountability, since it allows suitably qualified people to inspect the code directly.

Open Stakeholder Communication

Most academics and students do not benefit from being given source code, just as they do not benefit from being given SQL backups. Transparency is a function of the ability to benefit from the information, and accountability a function of the quality of response to queries, and ultimately of the consequences of failing to give an adequate account, or of causing harm of some sort. Thus, users build confidence and ultimately come to trust software because they trust the way in which it has been developed, and because the tool adds value to their teaching/learning. The academic’s trust in AWA has grown through extensive discussion with the learning analytics research team, experimenting with early AWA prototypes, receiving satisfactory answers to questions, and seeing that her feedback is acted on at the levels of both the user interface and the behaviour of the parser. We have also described how we used the structured annotation scheme to scaffold communication between the academic and the linguist. AWA is thus experienced as accountable because, as questions arise, they are answered and/or the code is modified.

The linguist tuning XIP is another stakeholder: her trust in the process is similarly built as her algorithms are tested in authentic contexts and enthusiastic end-users give detailed feedback to improve performance. We have completed only one design iteration with the students, but we anticipate that engaging them in future iterations will build their confidence in a similar manner.

Most software tools using a participatory, agile approach go through this kind of design process in their early stages. The implications for analytics products whose designers are not available to answer questions or requests from educators and students remain to be seen. Many companies are now putting in place the human and technical infrastructure to remain responsive to user communities of thousands, challenging as that is. As we have discussed elsewhere (Buckingham Shum 2015 ), it may be that many educators and students do not in fact want to examine the algorithmic engines powering their tools, as long as they seem to be working — or it may be that we must find new ways to make the black boxes transparent in ways that satisfy the curiosity and literacies of different audiences.

To conclude, in the context of civil law undergraduate writing, we have documented the design process through which we are developing and evaluating AWA, a writing analytics tool intended to provide formative feedback to students on their draft writing. In order to reach the point where we could confidently pilot this with students, we evolved a participatory design process using structured annotation of AWA’s output by the law academic, which we believe could find wider application as a practical technique. The piloting of AWA with students provided valuable feedback for future improvements, and for parallel extensions of AWA to other disciplines, which are now in development.

AWA also has a module for analyzing reflective writing based on the Xerox Reflective Parser (Shum et al. 2016).

Syntactic relationships include, for example, subject, object, modifier and preposition.

The system is presently available only for internal use. However, we are developing further resources (use guides, screencasts, etc.) which will be deployed on the public project website https://utscic.edu.au/tools/awa/ .

http://www.lawteacher.net/free-law-essays/civil-law/law-in-action.php

Abner, E., & Kierstad, S. (2010). A preliminary exploration of the elements of expert performance in legal writing. Legal Writing: Journal of Legal Writing Inst., 16(1), 363–411. Retrieved from http://heinonlinebackup.com/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/jlwriins16&section=13 .

Aït-Mokhtar, S., Chanod, J.-P., & Roux, C. (2002). Robustness beyond shallowness: incremental deep parsing. Natural Language Engineering, 8 (2–3), 121–144 Retrieved from http://journals.cambridge.org/abstract_S1351324902002887 .

Andrews, R. (2009). Argumentation in higher education: improving practice through theory and research. New York: Routledge.

Åström, F., & Sándor, Á. (2009). Models of scholarly communication and citation analysis. In ISSI 2009: The 12th International Conference of the International Society for Scientometrics and Informetrics (Vol. 1, pp. 10–21). BIREME/PAHO/WHO & Federal University of Rio de Janeiro. Retrieved from http://lup.lub.lu.se/record/1459018/file/1883080.pdf

Barocas, S., Hood, S., & Ziewitz, M. (2013). Governing algorithms: a provocation piece. Available at SSRN 2245322. Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2245322

Beazley, M. B. (2004). Better writing, better thinking: using legal writing pedagogy in the casebook classroom (without grading papers). Legal Writing: Journal of Legal Writing Inst., 10, 23. Retrieved from http://heinonlinebackup.com/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/jlwriins10&section=9 .

Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: beliefs, techniques, and illusions. Annual Review of Psychology , 64 , 417–444. Retrieved from http://www.annualreviews.org/doi/abs/10.1146/annurev-psych-113011-143823 .

Boud, D. (2000). Sustainable assessment: rethinking assessment for the learning society. Studies in Continuing Education, 22 (2), 151–167. doi: 10.1080/713695728 .

Boud, D., Lawson, R., & Thompson, D. G. (2013). Does student engagement in self-assessment calibrate their judgement over time? Assessment & Evaluation in Higher Education , 38 (8), 941–956. Retrieved from http://www.tandfonline.com/doi/abs/10.1080/02602938.2013.769198 .

Boud, D., Lawson, R., & Thompson, D. G. (2015). The calibration of student judgement through self-assessment: disruptive effects of assessment patterns. Higher Education Research and Development , 34 (1), 45–59. Retrieved from http://www.tandfonline.com/doi/abs/10.1080/07294360.2014.934328 .

Buckingham Shum, S. (2015). Learning analytics: on silver bullets and white rabbits. Medium. Retrieved from http://bit.ly/medium20150209 .

Clark, J. L. (2013). Grades matter; legal writing grades matter most. Miss CL Rev, 32 (3), 375–418 Retrieved from http://heinonlinebackup.com/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/miscollr32&section=24 .

Cohen, A. M., & Hersh, W. R. (2005). A survey of current work in biomedical text mining. Briefings in Bioinformatics, 6 (1), 57–71. doi: 10.1093/bib/6.1.57 .

Condon, W. (2013). Large-scale assessment, locally-developed measures, and automated scoring of essays: fishing for red herrings? Assessing Writing, 18 (1), 100–108 Retrieved from http://www.sciencedirect.com/science/article/pii/S1075293512000505 .

Curcio, A. A. (2009). Assessing differently and using empirical studies to see if it makes a difference: can law schools do it better? Quinnipiac Law Review, 27 (4), 899–934 Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1487778 .

Curcio, A. A., Jones, G. T., & Washington, T. (2008). Does practice make perfect? An empirical examination of the impact of practice essays on essay exam performance. Florida State University Law Review, 35 (2), 271–314 Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1135351 .

De Liddo, A., Sándor, Á., & Buckingham Shum, S. (2012). Contested collective intelligence: rationale, technologies, and a human-machine annotation study. Computer Supported Cooperative Work (CSCW), 21 (4–5), 417–448. doi: 10.1007/s10606-011-9155-x .

Deane, P. (2013). On the relation between automated essay scoring and modern views of the writing construct. Assessing Writing, 18 (1), 7–24. doi: 10.1016/j.asw.2012.10.002 .

Diakopoulos, N. (2014). Algorithmic accountability. Digital Journalism, 3 (3), 398–415. doi: 10.1080/21670811.2014.976411 .

Ericsson, P. F., & Haswell, R. H. (2006). Machine scoring of student essays: truth and consequences. Utah State University Press. Retrieved from http://digitalcommons.usu.edu/usupress_pubs/139/?utm_source=digitalcommons.usu.edu%2Fusupress_pubs%2F139&utm_medium=PDF&utm_campaign=PDFCoverPages .

Flammer, S. (2010). Persuading judges: an empirical analysis of writing style, persuasion, and the use of plain English. Legal Writing: Journal of Legal Writing Inst., 16(1), 183–222. Retrieved from http://www.law2.byu.edu/Law_Library/jlwi/archives/2010/183.pdf .

Ganobcsik-Williams, L. (Ed.) (2006). Teaching academic writing in UK higher education . Basingstoke, UK: Palgrave Macmillan.

Gionfriddo, J. K., Barnett, D. L., & Blum, J. (2009). A methodology for mentoring writing in law practice: using textual clues to provide effective and efficient feedback. Quinnipiac Law Review, 27 (1), 171–226 Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1264390 .

Goldstein, T., & Lieberman, J. K. (2002). The Lawyer’s guide to writing well . University of California Press .Retrieved from http://www.ucpress.edu/excerpt.php?isbn=9780520929074 .

Herring, D. J., & Lynch, C. (2014). Law student learning gains produced by a writing assignment and instructor feedback. Legal Writing: Journal of Legal Writing Inst, 19 , 103–126 Retrieved from http://heinonlinebackup.com/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/jlwriins19&section=7 .

Kosse, S. H., & ButleRitchie, D. T. (2003). How judges, practitioners, and legal writing teachers assess the writing skills of new law graduates: a comparative study. Journal of Legal Education, 53 (1), 80–102 Retrieved from http://www.jstor.org/stable/42893788 .

Lea, M. R., & Street, B. V. (1998). Student writing in higher education: an academic literacies approach. Studies in Higher Education , 23 (2), 157–172. Retrieved from http://www.tandfonline.com/doi/abs/10.1080/03075079812331380364 .

Lillis, T., & Turner, J. (2001). Student writing in higher education: contemporary confusion, traditional concerns. Teaching in Higher Education , 6 (1), 57–68. Retrieved from http://www.tandfonline.com/doi/abs/10.1080/13562510020029608 .

Lisacek, F., Chichester, C., Kaplan, A., & Sandor, Á. (2005). Discovering paradigm shift patterns in biomedical abstracts: application to neurodegenerative diseases. In Proceedings of the First International Symposium on Semantic Mining in Biomedicine (SMBM) (pp. 41–50). Citeseer. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.477.6519&rep=rep1&type.pdf .

McNamara, D. S., Graesser, A. C., McCarthy, P. M., & Cai, Z. (2014). Automated evaluation of text and discourse with Coh-Metrix . Cambridge, UK: Cambridge University Press.

Murumba, S. (1991). Good legal writing: a guide for the perplexed. Monash UL Rev., 17 (1), 93–105 Retrieved from http://heinonlinebackup.com/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/monash17&section=11 .

National Commission On Writing. (2003). Report of the national commission on writing in America’s schools and colleges: the neglected “R,” the need for a writing Revolution . College Board. Retrieved from http://www.collegeboard.com/prod_downloads/writingcom/neglectedr.pdf

Niedwiecki, A. (2006). Lawyers and learning: a metacognitive approach to legal education. Widener Law Review, 13 (1), 33–73 Retrieved from http://widenerlawreview.org/files/2011/02/02NIEDWIECKI.pdf .

Niedwiecki, A. (2012). Teaching for lifelong learning: improving the metacognitive skills of law students through more effective formative assessment techniques. Capital University Law Review, 40 (1), 149–194 Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2302717 .

Norton, L. S. (1990). Essay-writing: what really counts? Higher Education, 20 (4), 411–442 Retrieved from http://link.springer.com/article/10.1007/BF00136221 .

OECD, & Statistics Canada (2010). Literacy in the information age - final report of the international adult literacy survey. OECD. Retrieved from http://www.oecd.org/edu/skills-beyond-school/41529765.pdf .

Osbeck, M. K. (2012). What is “good legal writing” and why does it matter? Drexel Law Review, 4(2), 417–466. Retrieved from http://bits.rulebase.com/wp-content/uploads/2014/07/Good-Legal-Writing.pdf .

Parker, C. M. (1997). Writing throughout the curriculum: why law schools need it and how to achieve it. Nebraska Law Review, 76 (3), 561–603 Retrieved from http://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1515&context=nlr .

QSR International Pty Ltd. (2012). NVivo qualitative data analysis software (Version 10) [Windows]. Retrieved from https://www.qsrinternational.com/products_nvivo.aspx .

Samuelson, P. (1984). Good legal writing: of Orwell and window panes. University of Pittsburgh Law Review, 46 (1), 149–170 Retrieved from http://heinonlinebackup.com/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/upitt46&section=13 .

Sándor, Á. (2007). Modeling metadiscourse conveying the author’s rhetorical strategy in biomedical research abstracts. Revue Française de Linguistique Appliquée, 12 (2), 97–108 Retrieved from http://www.cairn.info/load_pdf.php?ID_ARTICLE=RFLA_122_0097 .

Sándor, Á., & Vorndran, A. (2009). Detecting key sentences for automatic assistance in peer reviewing research articles in educational sciences. In Proceedings of the 2009 Workshop on Text and Citation Analysis for Scholarly Digital Libraries (pp. 36–44). Association for Computational Linguistics. Retrieved from http://dl.acm.org/citation.cfm?id=1699757 .

Sándor, Á., Kaplan, A., & Rondeau, G. (2006). Discourse and citation analysis with concept-matching. In International Symposium: Discourse and document (ISDD) (pp. 15–16). Retrieved from http://www.xrce.xerox.com/content/download/16625/118566/file/result.pdf .

Shermis, M. D., & Burstein, J. (2013). Handbook of automated essay evaluation: current applications and new directions . New York: Routledge.

Shum, S. B., Sándor, Á., Goldsmith, R., Wang, X., Bass, R., & McWilliams, M. (2016). Reflecting on reflective writing analytics: Assessment challenges and iterative evaluation of a prototype tool. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (pp. 213–222). ACM. Retrieved from http://dl.acm.org/citation.cfm?id=2883955 .

Simsek, D., Sandor, A., Shum, S. B., Ferguson, R., De Liddo, A., & Whitelock, D. (2015). Correlations between automated rhetorical analysis and tutors’ grades on student essays. In Proceedings of the Fifth International Conference on Learning Analytics And Knowledge (pp. 355–359). ACM. Retrieved from http://dl.acm.org/citation.cfm?id=2723603 .

Sperling, C., & Shapcott, S. (2012). Fixing students’ fixed mindsets: paving the way for meaningful assessment. Legal Writing: Journal of Legal Writing Inst., 18 , 39–84 Retrieved from http://heinonlinebackup.com/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/jlwriins18&section=6 .

Stark, S. (1984). Why lawyers can’t write. Harvard Law Review, 97 (6), 1389–1393.

Sullivan, W. M., Colby, A., Welch Wegner, J., Bond, L., & Shulman, L. S. (2007). Educating lawyers: preparation for the profession of law. Stanford, California: Carnegie Foundation for the Advancement of Teaching. Retrieved from http://archive.carnegiefoundation.org/pdfs/elibrary/elibrary_pdf_632.pdf .

Swales, J. M. (1990). Genre analysis: English in academic and research settings . Cambridge, UK: Cambridge University Press.

Swales, J. M., Feak, C. B., et al. (2004). Academic writing for graduate students: Essential tasks and skills (Vol. 1). Ann Arbor, MI: University of Michigan Press. Retrieved from http://www.tesl-ej.org/wordpress/issues/volume8/ej32/ej32r1/?wscr .

The Task Force on Law Schools and the Profession: Narrowing the Gap. (1992). Legal education and professional development - an educational continuum. American Bar Association. Retrieved from http://www.americanbar.org/content/dam/aba/publications/misc/legal_education/2013_legal_education_and_professional_development_maccrate_report%29.authcheckdam.pdf .

Todd, A. G. (2013). Writing lessons from abroad: a comparative perspective on the teaching of legal writing. Washburn LJ, 53 (2), 295–326 Retrieved from http://heinonlinebackup.com/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/wasbur53&section=16 .

Vinson, K. E. (2005). Improving legal writing: a life-long learning process and continuing professional challenge. Touro Law Review, 21 (2), 507–550 Retrieved from http://papers.ssrn.com/abstract=847644 .

Xerox. (n.d.). Xerox incremental parser. Retrieved October 29, 2015, from https://open.xerox.com/Services/XIPParser .

Author information

Authors and Affiliations

Connected Intelligence Centre, University of Technology Sydney, Broadway, Ultimo, 2007, NSW, Australia

Simon Knight, Simon Buckingham Shum & Xiaolong Wang

Faculty of Law, University of Technology Sydney, Broadway, Ultimo, NSW, 2007, Australia

Philippa Ryan

Xerox Research Centre Europe, 6 chemin Maupertuis, F-38240, Meylan, France

Ágnes Sándor

Corresponding author

Correspondence to Simon Knight.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article.

Knight, S., Buckingham Shum, S., Ryan, P. et al. Designing Academic Writing Analytics for Civil Law Student Self-Assessment. Int J Artif Intell Educ 28 , 1–28 (2018). https://doi.org/10.1007/s40593-016-0121-0

Published: 03 November 2016

Issue Date: March 2018

DOI: https://doi.org/10.1007/s40593-016-0121-0

Keywords

  • Learning analytics
  • Writing analytics
  • Argumentation
  • Natural language processing
  • Participatory design

Common Assessment Types

The assessment types covered below are: Class Participation; Online Content; Formal Presentations; Fact Scenario Analysis; Essays and Writing; Group Work; Exams; and Other Forms of Assessment.

CLASS PARTICIPATION

Students are expected to arrive at class having read set materials and prepared to discuss them in class. A mark is given for the level of contribution each student makes to the learning in the class. Assessable class participation is designed to:

  • encourage preparation for class;
  • encourage students to learn, think, analyse, reflect and evaluate legal material prior to covering that content in class;
  • assist students to develop the capacity to think clearly and to present oral arguments;
  • develop collaborative and group learning skills.

There are four main types of class participation.

1. Voluntary contribution: volunteered responses to teacher questions and comments on issues. The teacher leads discussion in class. Issues are raised and questions are posed. Students provide solutions to problems posed or discuss issues as they feel comfortable.

2. Required contribution: responses to teacher questions directed at specific students. The teacher leads discussion. Issues are raised and problems posed by the teacher. The teacher directs questions towards specific students for response.

3. Pre-warned contribution: required contributions from one or more students assigned for that class. Prior to class, general issues, questions, themes and problems are identified which will be discussed in the next class. During class a small group of students will be tasked with discussing those issues, themes, problems and questions, essentially leading discussion. Other students are still expected to participate in discussion.

4. Prepared contribution: responses to pre-assigned questions. Prior to class, specific questions relating to the next class topic will be identified. These specific questions will provide the basis for class discussion. These can be assigned to individual students, small groups, or the class as a whole to answer for the benefit of the whole class. This method can encapsulate aspects of class participation types 2 and 3 above. All members of the class are still expected to participate in discussion.

Formal oral presentations are generally not part of class participation. Such presentations include explicit assessment of communication skills, use of presentation aids, etc. Class participation is more focussed on beginning or contributing to discussion.

Marking criteria

In general terms, class participation is marked against the following criteria:

  • Familiarity with set readings and relevant material, including (as may be necessary) assigned online content
  • Frequency of participation
  • Relates to number of contributions across the Term
  • Evidence of critical thinking and depth of analysis
  • Demonstration of ethical and professional values
  • Ability to identify issues, provide analysis and apply the relevant law or concept
  • Willingness to consider alternative viewpoints
  • Not dominating a discussion or belittling others’ contributions
  • Willingness to admit to lack of understanding or areas of confusion
  • Ability to formulate responses in clear and succinct terms
  • General oral skills
  • Presentation of persuasive argument
  • Willingness to raise pertinent and thoughtful questions
  • Contribution to group climate
  • Attitude to learning and the subject
  • Attentiveness in class

ONLINE CONTENT

Students are expected to complete all assigned online tasks by the time specified in the relevant course outline, reading guide, moodle page or as otherwise communicated to students. The assessment of online content may be considered as a component of Class Participation (as above), or some items may be assessed as a standalone assessment item. The online content may be formative, summative or contain aspects of each. A particular task may be designed to help students obtain an overview of course themes, focus on particular course material, provide feedback to the teacher about students who may be having difficulties, and provide feedback to students as to their understanding.

The online assessments may take the form of, but are not limited to:

  • Short answer responses
  • Forum posts
  • Reflective exercises

In general terms, online content is marked against the following criteria:

  • Successful completion of assigned task

  • May be based on completion alone or may be assessed based on percentage of correct answers, as specified in relevant course materials

  • Demonstrated comprehension (as required)

FORMAL PRESENTATIONS

Students may be asked to prepare a talk on a set topic for presentation to the class. Topics may arise out of the class readings, areas not covered in readings, or reports on research being undertaken by the student. What separates this assessment task from other oral activities is the explicit marking of the method and manner of the presentation in addition to the matter of the topic. Typically, students will be marked on the appropriateness of the visual aids they used, the style of delivery, keeping to the set time, the persuasiveness of their presentation, their ability to generate discussion, and their ability to deal with questions, in addition to knowledge and understanding of the topic.

Marking criteria

In general terms, presentations are marked against the following criteria:

  • Appropriate to topic
  • Used creatively
  • Does not distract from speaker
  • Used to highlight points, not a mere mirror of what is being said
  • Easy to understand – eg, not a wall of text
  • Verbal skill/style
  • Clarity – speaking in a way that the audience can follow and understand
  • Pace – speaking in a way that is not too fast to understand or so slow that audience loses interest
  • Fluency – presenter does not interrupt presentation to collect thoughts, arrange notes, etc.
  • Use of appropriate style and language for audience and topic – eg any humour used is not offensive, does not detract from the persuasiveness of the presentation; language is not overly colloquial, etc.
  • Presenter maintains eye contact with audience
  • Posture, gestures, etc.  are appropriate for a formal presentation
  • Presenter projects interest and engagement in the topic they are presenting
  • Introduction and conclusion are clear
  • Topic is clearly defined, and trajectory of argument is set out
  • Central argument is clear
  • Presenter generates discussion
  • Presenter encourages questions
  • Presenter attempts to answer relevant questions
  • Presentation is made in a persuasive manner with the presenter taking a clear position on issues
  • Alternative perspectives are recognised and considered to the extent necessary
  • Points and issues raised are synthesised into overall argument
  • Presentation does not indicate confusion over underlying principles and concepts
  • Legal and other concepts are articulated clearly
  • Sources are cited correctly  and appropriately (ie sources are generally acknowledged but without pinpoint citation unless necessary)
  • Presentation is delivered within the allotted time frame

FACT SCENARIO ANALYSIS

These assessments build skills of legal analysis and professional communication. Generally, courses earlier in the degree set problem questions, and later courses move to more practice-oriented advice questions.

1. Problem question: Problem questions are closely aligned with developing students’ legal knowledge, legal reasoning and critical thinking, and writing skills. Generally, a ‘fact scenario’ is provided to students. The scenario will be designed around the legal (and sometimes ‘non-legal’) issues that the course is seeking to highlight. Questions (‘problems’) are provided relating to the fact scenario. Students are required to identify the legally relevant facts, apply them to the applicable legal doctrine or legislative provision, and assess the likelihood of a breach of the law. Some tasks may also require assessment of damages or penalty. By default, students take the position of a disinterested “judge” in assessing the scenario. Often the fact scenarios assume facts to be proven to allow discussion of legal issues alone. The complexity of problem questions can be varied by the length of the fact scenario (and hence the number of legal issues arising) and the specificity of the question/s asked.

2. Advice: In an advice variant of the problem question, the student is required to take the position of a lawyer acting for one of the parties in the fact scenario. In an advice, assessment is not only of the ability to analyse the legal situation, but also of the professional ability to see the implications for the client of taking certain positions in litigation, etc., and the ability to communicate clearly to non-lawyers. The fact scenarios in advice assessments may be more like those encountered in practice, for example containing higher levels of ambiguity, or may be in the form of a brief of evidence.

Format requirements

The default style requirements for problem question assessments in the Law School are:

  • Assignments should be typed and provide sufficient margin areas for markers’ comments.
  • There should be double or 1 ½ spacing between lines.
  • Assignments should be fully referenced using the Australian Guide to Legal Citation (4th ed), https://law.unimelb.edu.au/__data/assets/pdf_file/0005/3181325/AGLC4-with-Bookmarks-1.pdf .
  • Assignments should include footnotes but not bibliographies.  Failure to appropriately reference may amount to plagiarism.
  • Word limit for assignments is calculated by reference to all text in the assignment. This excludes citations and references in the footnotes, but includes discursive text within footnotes. See the UNSW Law Assessment Policy for further information.
  • All written assignments must be submitted through the Turnitin link located on the course moodle page

Marking criteria for problem questions: In general terms, problem questions are marked against the following criteria:

  • Mention of all facts relevant to resolution of legal and policy issues set out
  • Areas of uncertainty noted
  • Areas of law raised by the facts identified
  • Elements required for liability set out
  • Policy issue/s raised by facts are identified.
  • Rules for resolving legal issue identified
  • Hierarchy of sources of authority applied (statute, case-law, etc)
  • Relevant conflicting authorities noted and resolved
  • Succinct application of rules to relevant facts
  • Positive or negative outcomes for each rule where possible
  • Evidence of reasoning when outcomes not clear
  • Recognition of plausible counter conclusions and succinct explanations as to why that conclusion was not applied
  • Different policy approaches to issues identified
  • Implications of each approach assessed
  • Conclusions given for all identified issues
  • Reasons given for conclusion
  • Appropriate concentration on issues most practically relevant to resolving overall scenario
  • Logical order of discussion of issues where relevant
  • All sources acknowledged
  • Correct and consistent citation of authorities
  • Formal English and grammar used throughout – no colloquialisms
  • Paragraphs and headings are used to clearly identify discrete issues

ESSAYS AND WRITING

There are five main types of writing assessment task used in the Law School: essays, research essays, thesis essays, case notes and court reports. Individual courses may have additional formats. The three types of essay progressively reduce the amount of scaffolding, and thus adapt well to students’ increasing knowledge and skills. The essay types provide opportunity to assess and build on a student’s growing capability over the degree program. The most basic form provides considerable structure for the student, while the later forms require more advanced skills. Case notes and court reports are focused on developing legal knowledge and legal understanding. The focus is on concise, clear and accurate writing. They provide significant opportunity for interaction with the legal ‘system’, and for providing depth of insight into legal issues. Furthermore, they provide opportunity for considerable feedback and student reflection on law and practice.

1. Essay: In this foundational type of essay, the questions are provided to the student along with most if not all of the research material required to complete the task. This material can include a specific or extended reading list including books, journals, cases and legislation. The emphasis in this form is to introduce students to the basics of legal research and to develop their writing and comprehension skills.

2. Research Essay: In this type of essay, the topic is usually provided, but most research is conducted by the student. Any direction as to source material is secondary or minimal only. In other words, generally no set ‘reading list’ is provided. The focus here is on a student developing independent research skills, while still fine-tuning general writing and comprehension ability.

3. Thesis Essay: In this final essay type, students define their own topic and conduct wholly independent research. All formal scaffolding is removed. The emphasis is on higher-level abstract thinking, research skills and conceptual understanding. Thesis essays generally have longer word limits than research essays.

4. Case Note: Students are provided with a specific legal case. The general format of the task is to provide analysis of the case, such as by identifying the legal issues, the reasoning behind the judge’s decision and any other significant comment on other pertinent matters. The case note also requires ‘commentary’ by the student. This is the student’s own analysis of the significant issues raised by the case. The student discussion may include suggestions as to possible law reform, or as to why the student agrees or disagrees with the judgment made. Secondary sources may be used in this commentary. The case note assessment provides considerable opportunity for students to develop critical legal thinking, and for student reflection and deep learning.

5. Court Report: Combining significant aspects of ‘experiential learning’, the court report assists in developing a student’s conceptual understanding of the law. The assessment task can be both specific and general, meaning it can be adapted to different stages of the program. In a general approach, students are invited to think about various aspects of ‘the trial’, for example, or ‘court processes’ or other aspects of the adversarial system of justice. Many of these approaches may have been discussed in class, and there may be considerable primary and secondary source material available. Students attend courts and describe their observations within the context of the class themes. A more specific approach would be for students in a particular course to attend a particular type of court or tribunal, then to report on the specific operation of that body in the context of their specific course, or even in relation to a particular legislative or common law provision (for example, a section of the Evidence Act). There are often further detailed instructions issued to students.

Format requirements

The default style requirements for essay style assessments in the Law School are:

  • Assignments should be fully referenced using the Australian Guide to Legal Citation (4th ed), https://law.unimelb.edu.au/__data/assets/pdf_file/0005/3181325/AGLC4-with-Bookmarks-1.pdf .
  • Assignments should include both footnotes and bibliography.  Failure to appropriately reference may amount to plagiarism.
  • Word limit for assignments is calculated by reference to all text in the assignment. This excludes citations and references in the footnotes and the bibliography, but includes discursive text within footnotes. See the UNSW Law Assessment Policy for further information.

Marking criteria In general terms, essay style assignments are marked according to the following criteria:

  • Clear introduction, body, and conclusion
  • Clarity of scope/delineation of scope of essay
  • Logical flow of proposition and evidence
  • Integration of evidence
  • Support provided for contentions and assertions of fact, law or opinion
  • Consideration of contrary positions
  • Evidence of sufficient independent research to address issues in topic adequately
  • Use of appropriate sources for topic – eg primary (where available), peer reviewed
  • Evidence of awareness of any relevant issues arising from class readings
  • Correct and consistent citation
  • Bibliography 
  • Demonstrated understanding of primary and secondary material analysed
  • Assertions and contentions of fact, law or opinion are not made without appropriate sourcing
  • Reflection –essay engages with the material
  • Reflection – essay demonstrates independent thought
  • Conclusions are drawn
  • Different perspectives are evaluated
  • Identification of knowledge gaps
  • Ability to weigh sources by evaluation and judgment
  • Awareness of the ambit of sources
  • Use of own words as appropriate
  • Grammar, sentence, paragraph structure, etc.
  • Appropriate tone/voice – minimal verbosity.

GROUP WORK

There are three main types of ‘group work’ identified in the Law School. There are some similarities between them; however, like the essay types identified above, the different types of group work can be scaffolded. They lend themselves to being deployed strategically across the program to develop and then build upon students’ growing ability.

1. Ad-hoc: This type of group work is the most ‘foundational’; it can be used across all courses in the program, and often forms the basis for an effective class participation strategy as well. Ad-hoc groups are formed as part of the class, or an extension of it, and last only for the length of the task assigned to the group. Most often they are short small-group discussions held within a class examining an issue that is then reported back to the class. There is usually no grade awarded for ad-hoc group work – it often goes toward part of a student’s class participation grade. Ad-hoc groups are useful for adding variety to teaching techniques, building students’ inter-personal skills, broadening perspectives on issues, and acclimatising students to later formal groupwork. Ad-hoc groups need not be seen as merely introductory forms of group work. As students become more proficient at group discussion, ad-hoc groups can be given more complex tasks.

2. Cooperative: These are more formal group activities structured by the teacher that generally include summative assessment. The assessment of a collective learning task often includes the process and not just the final product. While the focus of the task is on developing group skills, learning to support each other and developing a ‘unified’ approach through a final essay or presentation, the emphasis, at least in terms of assessment, still remains very much on the individual’s mastery of skills or content in a group setting. Consequently, the task is structured in such a way that there is a significant component completed individually, such as written papers following a group presentation, or individually assigned sections of a final paper that the group collectively researched. The aim of cooperative group work is to introduce students to group responsibilities with a safety net of individual assessment. It can be useful for tasks too complex for an individual student to easily complete in the required timeframes, or to help raise the level of achievement or engagement across a class. The teacher is also quite involved with a cooperative project, providing guidance where necessary.

3. Collaborative: This can be considered an ‘advanced’ or ‘capstone’ type. Under a collaborative learning approach, students work together in a small team creating a piece of work that they have developed and planned together, and for which they are overwhelmingly assessed collectively through a group mark. All students in the group receive the same mark. (Generally, where students are assessed as a group, the mark will have a maximum 30% weighting.) Collaborative learning focuses much more on the group process, and often assessment will also involve individual members of the group reflecting on that group process. Collaborative learning is also suited to a group work task where the emphasis is on the group themselves coming up with a problem and a subsequent solution (similar in many ways to the thesis essay type discussed above). In other words, it can be beneficial for developing students’ capacity to ‘problem pose’, not just ‘problem solve’. In this type, there is considerable student autonomy, with teachers providing minimal, if any, direction. One form of collaborative group work, Team-Based Learning, involves groups or “firms” being formed for an entire Term and being the focus of much of the learning outcomes in the course.

Marking criteria for group work

Formally assessed groupwork often has a written product of the collaboration, and that work is marked according to the criteria appropriate for that style of writing. Where the process of collaboration in group work is assessed, in general terms it is assessed against the following criteria:

  • Individual rating of the performance of the group as a whole and a member’s individual performance.  The ratings may be different.
  • Work plan is developed through consensus
  • Work plan is reviewed
  • Group meetings are structured
  • Organisation of group meetings by consensus
  • Group meetings are held face to face
  • Each member has a specific role
  • Roles arrived at by consensus
  • Members are responsible for their role
  • Work is distributed equitably
  • Distribution arrived at by consensus
  • Members are responsible for their work
  • Group members encourage different viewpoints
  • All decisions by group are based on consensus
  • Potential or real issues identified
  • Appropriate responses to real issues
  • Preventive management of potential issues

EXAMS

There are two basic types of exams used in the Law School: formal exams and take-home exams. Some teachers also utilise ‘online’ quizzes and in-class tests.

1. Formal exams: This is the traditional exam type, held during the formal examination period at the end of the Term, and centrally invigilated by the University. All examinations in the Law School are ‘open book’ unless otherwise notified to students. This means students can bring any printed or handwritten materials they wish into the examination room. There is some variety amongst the different formal exam types; however, the most common formal exam consists of problem-based questions and essay-based questions. There are also a number of formal exams which contain only one of these question types. Problem-based questions can require either multiple ‘short’ answers (often the case with foundational courses) or a longer considered answer, depending on how they are worded and the scenario provided. Sometimes, students may get a choice of problem scenarios. Similarly, students will often have a choice of essay topics. Some exams let students choose how many problems or essays they will undertake. For example, there may be three problems posed and three essay topics; students could choose two problems and one essay, or two essays and one problem. Some exams may have a compulsory essay (or compulsory problem) that may discuss the wider themes of the course or require students to provide a considerable reflection. While there is no requirement for full citation, some citation is still required. For example, a case may be referred to by its popular name or the names of the parties and then underlined, without the need for a full medium-neutral citation. Students are expected to bring sufficient printed materials into the examination to be able to cite sources adequately. The more specific structural aspects of the formal exam types, in order of scaffolding (from foundation to mastery), include:

a. Problem questions may appear in the following forms:

  • Multiple short fact scenarios with question/s. Provides students substantial help in identifying legally relevant facts and in structuring responses
  • Long fact scenario, but broken up into different sections, with question/s for each. Provides students with some help in identifying legally relevant facts and substantial assistance in structuring answers
  • Long fact scenario/s followed by specific questions which structure nature of answer. Provides students with average degree of help identifying legal issues and in structuring answers
  • Long fact scenario/s followed by one general question. Students are given minimal help in structuring a response to the questions
  • Long fact scenario/s followed by highly specific questions.  Students are directed to certain legal issues to the exclusion of others and given minimal help in structuring a response to the questions.
b. Essay questions may appear in the following forms:

  • Long form: Students complete one or two essays (they may be given a choice of topics). Students have around 30 minutes for each answer.
  • Short form: Students complete three or more essays (they may be given a choice of topics). Students have around 15 minutes for each answer.

Essays usually take either of the following structural forms as their basis:

  • A quote from a case or author is provided and the student is asked to critically discuss it.  Further structure may be provided in foundational courses by requiring discussion of particular aspects of the quote, and marks may be awarded separately to components of the answer
  • Students are asked to critically discuss an area of law, social issue or proposition.  No guidance as to source materials or structure is provided.  One holistic mark is provided.

Essay topics are generally based on one or more of the following:

  • Critical evaluation of the current law, including analysis of case law or legislation
  • Theoretical issues
  • Broader social impacts of law
  • Law reform issues and policy considerations
  • Practical considerations of law and regulation

2. Take-home exams: The exact form of a take-home exam will vary from course to course, but in essence it will be a paper released to all students at a particular time with a short time frame for submission of answers, typically 36–48 hours. There may be word limits for answers. Take-home exams are a hybrid of a formal exam and an in-Term essay or fact scenario analysis. There is usually a requirement for full citation and printed answers. Students are generally presented with a complex problem scenario, though the take-home exam may also consist of short-answer ‘essay’ type responses instead. There is often more than one question relating to the same scenario/s, such as focusing on different participants (there could be multiple plaintiffs or defendants). The length of the answers will be determined by the number of questions asked. The take-home exam may also require a short ‘essay’, ‘commentary’ or ‘reflection’ by the student on a specific topic.

OTHER FORMS OF ASSESSMENT

The Law School also uses other forms of assessment in addition to the types already mentioned. Many of these share features with the key typologies mentioned above, but are often focused towards a specific learning outcome desired by the teacher. The Law School encourages the development of different forms of assessment activity, acknowledging that a diverse assessment regime leads to better student engagement, deeper learning and hence improved learning outcomes. New assessment activities and formats are regularly discussed and trialled. Some examples of formats used recently are below.

1. Poster: In this task students are required to present material in the format of an academic poster. This format has been used by some teachers for students to present a court report or case note. Students engage with presenting material visually, within the constraints of the poster format. They must also think about the audience, getting complex information to them in a clear yet engaging manner, which adds a significant dimension to the deep learning potential of the task.

2. Video or other multimedia: This method of assessment has been used in the context of a cooperative or collaborative group work task. Students are given a topic and then present their response, such as through a role play, a ‘mash up’, a short film or animation, or an audio interview where they take on the role of a radio/podcast reporter or presenter. Many of these may then be uploaded to YouTube or moodle for other students to see. The multimedia format of assessment has been used in the context of a court report. The emphasis again is on students taking a different perspective on the issue – essentially thinking about a response ‘outside the box’.

3. Scrapbook: This is an assessment activity that is often used in foundational courses (such as ILJ and Torts). It can form the basis of a class participation strategy. Each week students are required to examine newspapers and other media sources for stories that relate to that week’s class, or to the topic and its themes more generally, and place these in a ‘scrapbook’. These form the basis for class discussion (students may be required to choose just one article per week). Each week in the scrapbook students write a short summary of why the article they chose is relevant or important. This is short, usually only 200 or so words. By relating the law to real current events, students are provided with context and a deeper learning opportunity. At the end of the semester students hand these scrapbooks in and receive a mark for them.

4. Reflective notes: This is similar in some ways to a scrapbook but differs in some key respects. For example, while the scrapbook notes may relate to an understanding of the law generally (or specifically, as the case may be), the reflective notes seek to encourage a more personal consideration of the topic or subject, quite often free from the constraints of a particular methodology or structural bias. The emphasis is on eliciting a more personal, nuanced and emotional response. It seeks to encourage students to think beyond their own present material experience and to consider, for example, the impact of the law on others. Furthermore, reflective writing is quite often designed and utilised to encourage students to think about themselves in terms of their ‘practice’, not just as students but as professionals, for example in relation to ethical behaviour. Reflective writing is used in a number of different contexts in the Law School and is adaptable to the whole range of a student’s learning as they progress through the program from foundational to capstone experiences. Students may be asked to keep a reflective journal throughout the Term, or may be asked to provide short reflections on individual topics. It may be used as an ‘ongoing’ task to facilitate class participation. It may be used as a mid-Term exercise, as part of an exam or accompanying groupwork. It is often a significant aspect of a student’s assessment when undertaking an internship and other experiential learning activities, and will take the form of a larger piece of writing (for example, up to 2000 words).

5. ‘Mini moots’: In this format, a problem is presented. The class may be divided into those ‘for’ and those ‘against’. Discussion then flows back and forth as each side presents their argument. This is often a good use of the ‘ad-hoc’ group work strategy discussed above. The different sides discuss the issue as groups and then present to the class. Some teachers may make the mini moot more formal and structured, depending on the course and the content. For example, a judge may be elected (sometimes the teacher, who may be able to direct evidence proceedings) and the roles of a defence and prosecution panel, who will be responsible for presenting the argument, allocated. There may also be witnesses who have been given certain instructions, resulting in potentially entertaining cross-examination.

Approved by: Associate Dean Education, Professor Prue Vines, January 2020

Custom Law Essay Writing Service by Expert Writers

Affordable law essay writing service assistance for students striving only for the best. Click below so we complete it timely!

Trusted by 14,000+ happy customers and experts

Meet Our Skilled Law Essay Writers

Tutor-Wendy

Learn How Our Custom Law Essay Writing Service Works

When you seek quality custom assistance for your law studies, our custom law essay writing service is ready to assist you 24/7! See below to learn how to place an order with our company:

Set your deadline, the number of pages, formatting, subject, and title of your paper, and upload instructions for your assignment. You can also choose the writer’s level, depending on the complexity of your task.

Make a deposit so that your law essay writer can begin working on your essay right away.

The rest of your payment is released when you receive the final task and check it. You pay the rest only if you are happy with the results. You can request free unlimited revisions.

law essay rubric

Our Law Essay Customers Reviews

Michael Brown

Why choose our custom law essay writing service?

Skilled law assignment writers.

Our team involves native English speakers with verified academic credentials. It means that you work with the best specialists in your field. Choosing your helper allows you to ensure that you work with a person who understands your task. Besides, we offer plagiarism-free and timely delivery for urgent law assignments.

Direct contact with an expert and free revisions

Another important benefit is direct contact with your law essay assistant. It helps eliminate communication issues and discuss every detail, which is essential for a complex field of law studies. The presence of free revisions helps to introduce minor corrections and make things perfect as additional essay editing is done.

Experience with legal studies

We have over 16 years of writing and editing experience in academic research. Our law specialists are graduates from the best universities in the USA and beyond. Working with us, you receive a guarantee of quality work and timely delivery. Join us for a positive learning experience!

Reliable Custom Writing Service has successfully completed over 50k orders for international students

By clicking “Hire”, you agree to our terms of service and privacy policy. We’ll occasionally send you promo and account related emails.

law essay rubric

  • Free Formatting
  • Free Reference Page
  • Free Revisions
  • Free Customer Service

Custom law essay writing service that will never let you down!

Law essays and research papers are some of the most difficult tasks a student has to write. There are numerous case studies, formatting issues, endless revisions, footnotes, and editing that must be done. If you add all the lengthy books and lecture notes that must be studied, things can become even more complex. When you are about to give up, get in touch now! It is where our custom law essay help becomes essential and acts as a saving grace as our specialists provide you with timely assistance and guidance 24/7!

Learn what law essay types you can get

It is where things can get rather complex, but do not let it frighten you off! It usually becomes difficult to determine what kind of law essay you must write as there are many possibilities. Approaching our law essay writing service will help you to cover every possibility as soon as you share your grading rubric. The list of popular law essays that we help you to write includes but is not limited to:

  • Argumentative law essays;
  • Reflective college law journals;
  • Court hearings analysis;
  • Criminal and civil law studies;
  • Law case studies;
  • Law research papers;
  • Dissertations and law coursework help.

Our law essay writing service provides you with reviews of court hearings, book reviews, analysis of various complex historical cases, and more. If you need to edit and proofread as you have finished writing your assignment, we can write such a task as well. Even an interdisciplinary approach can be accomplished if you need to study Forensic Data research or wish to provide something for debates based on a particular legal case. We have relevant experts who will be happy to write what you need and deliver an excellent cheap law essay writing service at an affordable price and top quality!

Benefits of our custom law essay writing service specialists

If you are wondering who will help you complete your demanding law essay paper as you cannot write it for some reason, read on to find out! Here are the main benefits that should persuade you to start our cooperation as our writers help you to write a custom law paper:

  • All our law specialists are graduates from top law schools and colleges with verified academic credentials.
  • Our specialists are native English speakers with stellar grammar and style skills.
  • You can choose any field of law studies and cover every academic essay type.
  • Direct contact allows us to create custom law essays that always match your objectives and quality standards.
  • You can ask your writer for a free revision and have all the minor details corrected.
  • Our writers can match every necessary writing level. You can choose between “All Writers” or consider the Gold (Top 50 writers) or Platinum (Top 20 writers) option.

Tip: Choosing custom law essay writers from the Gold or Platinum category is recommended if you need a dissertation or another demanding law assignment. Just make sure to discuss the details when you place your order so we can determine the best and most affordable solution!

How to purchase a law essay on our website?

We have made the ordering process easy for you! It should not take you more than ten minutes in total if you have all your task materials ready at hand. Here are the steps to receive timely and trustworthy law essay writing help that will match your expectations.

  • Once you click on the “Hire a Writer” button, you will be taken to the order placement page. Choose “Writing” if you need an assignment written, or “Editing” if you need corrections and proofreading.
  • Specify your deadline and the number of pages.
  • Choose your academic subject and type the title of your Law paper.
  • Specify formatting and the number of required sources.
  • Upload your grading rubric and leave comments that you have for the task that an expert has to write.
  • Choose the writer’s level based on the offered options.
  • When you are done choosing a law essay writer for your task, you pay a small deposit so our expert can start writing your law paper. The rest of the sum stays secure in your account; you release it only when you are happy with the result!

Tip: Remember to share as much information as you have for your task to help us serve you in the best way possible!

The reasons to use our custom law essay writing service

One of the primary reasons our service stands out is direct contact with your chosen writer, backed by verified specialists who always deliver writing and editing tasks on time. Let’s sum up the main benefits so you can weigh the best law essay writing assistance available:

  • Direct contact with a chosen writer.
  • Affordable prices and safe payment methods.
  • Free revisions and refunds to make things safe and perfect.
  • Professional editing and proofreading help.
  • Free plagiarism-detection tool that guarantees originality.
  • A wide coverage of Law subjects and essay types.
  • Your “write my law essay for me” requests are handled by our customer support 24/7, and your data stays confidential.
  • Over 16 years of academic research writing experience.

No matter the complexity or subject of your law assignment, we will assist you with the best sources, style, formatting, editing, and other academic support. Feel free to get in touch and become one of our happy customers!

Can you provide me with law case study essay help?

Yes, our law paper writing service covers all types of assignments. If you need case study assistance for your law course, specify it in your instructions, and we shall gladly assist you with your task!

What is the price for an international law studies research paper?

There is no set price for a paper as every order is different. The price will depend on your deadline, number of pages, essay type, and writer’s level, among other things. Feel free to get in touch and place an order to determine the options.

Is your custom law essay writing service legitimate?

Yes, we are a legitimate academic research service for students, universities, and businesses seeking assistance with writing and editing tasks. We are very strict about confidentiality and anti-plagiarism rules, and using our services within our legal terms is absolutely safe.

Can I talk to my law assistant directly?

You can! We provide a direct contact feature that helps you talk to your helper online and eliminate communication barriers as you discuss things. It is one of those aspects that makes our law essay help stand out!

We hope that we have answered all your possible questions on this topic. If not, please check our separate FAQ page, or contact us with your question or suggestion.


Comparative Studies in Society and History


Essays for all seasons: 2023


CSSH is pleased to highlight the most-downloaded article of each of our 2022 issues!

Winter: ROY BAR SADEH. “Worldmaking in the Hijaz: Muslims between South Asian and Soviet Visions of Managing Difference, 1919–1926.”

Spring: AUDREY TRUSCHKE. “Hindu: A History.”

Summer: JOSHUA M. WHITE. “Slavery, Freedom Suits, and Legal Praxis in the Ottoman Empire, ca. 1590–1710.”

Fall: DIVYA CHERIAN. “The Owl and the Occult: Popular Politics and Social Liminality in Early Modern South Asia.”



Structural analysis of types of Muslim religious consciousness

Axmed Abdurazakov 1, Olga Garnaya 2,*, Michael Lebedev 2 and Emzari Yunusov 2

1 Federal State Institution of Additional Professional Education Interregional Training Center of Federal Penitentiary Service of Russia for Moscow Region, Novye Doma settlement, Elektrostal, Moscow Region, 142470, Russian Federation
2 Federal State Institution Research Institute of Federal Penitentiary Service of Russia, Narvskaya str., 15 a, building 1, Moscow, 125130, Russian Federation

* Corresponding author: [email protected]

A separate theoretical and legal study should be devoted to the essential features of the legal consciousness of Muslims, based on the perception of positive law through the prism of Islamic religious and legal doctrine. It is advisable to begin this study by defining its main structural element: the types of Muslim legal consciousness. Considering this issue from the standpoint of natural law will expand the traditional boundaries of the theory of modern legal consciousness, open up additional applied and scientific horizons and, using the example of Islam, allow us to consider the peculiarities of religious influence on the legal consciousness of various categories of citizens. Knowledge of the foundations of Muslim law, the procedure for the formation of moral and social religious attitudes, and the interpretation of religious canons and dogmas contributes to a better understanding of many processes taking place within the Russian Muslim community, and can form the basis of a mechanism for the formation of moral legal consciousness, which must in turn be opposed to radical and criminalized forms of religious consciousness.

© The Authors, published by EDP Sciences, 2021



Teachers Told Us They’ve Used AI in the Classroom. Here’s Why


It’s been a year since ChatGPT, an AI-powered tool that can seemingly answer any prompt, burst onto the K-12 scene, and teachers are slowly embracing the tool and others like it.

One-third of K-12 teachers say they have used artificial intelligence-driven tools in their classroom, according to an EdWeek Research Center survey of educators conducted between Nov. 30 and Dec. 6, 2023.

The breakdown: 21 percent of teachers said they’ve used the tools a little, 10 percent said they’ve used them some, and 2 percent said they’ve used them a lot, according to the survey, which included 498 teachers.

Artificial intelligence experts have touted the technology’s potential to transform K-12 into a more personalized learning experience for students, as well as for teachers through personalized professional development opportunities. Beyond the classroom, experts also believe that generative AI tools could help districts become more efficient and fiscally responsible.

But there are challenges: Many teachers are still unfamiliar with the technology, and they worry about students using AI tools to cheat and about students not knowing that the tools can produce inaccurate or biased responses.

Teachers have used ChatGPT and other generative AI tools to create lesson plans, give students feedback on assignments, build rubrics, compose emails to parents, and write letters of recommendation.

In open-ended responses to the EdWeek Research Center survey, educators who have used the technology say that it can be a very effective tool if used responsibly. Others also say that while they’ve used AI tools a little bit, they’d like to learn more about how to use them for their work and how to teach students to use them properly.

Here’s how and why some educators say they’ve been using AI tools in the classroom:

   AI is something we shouldn't be tiptoeing around. I have been following language models for years, and I was one of the first ones to sign up for ChatGPT last November. This is here to stay, and it is a disruptive technology. We, in education, need to jump on this train and teach kids and teachers to use it. It is currently the PERFECT search engine and the PERFECT assistant. It is a FABULOUS time-saver when you become a "power user" with regard to inputs for ChatGPT, Bard, or Claude2. AI is here to stay, and it is growing exponentially every single day. […] This technology is a game-changer for poor districts and struggling districts.

— District superintendent in Mississippi

   I believe that we should responsibly teach students how AI works and how to use it as a tool. My high school students try to use ChatGPT to write papers, but they always seem wonky and repetitive. I tease them and suggest that I wouldn't mind if they were at least well written! I explain that in order to utilize the tool we must read and edit what the AI spits out. I frequently use ChatGPT to write lesson plans, syllabi, and parent letters. It can be a very effective tool, but I still look over and edit anything that looks off. As an artist—it's very important that people understand the difference between creating something from scratch and using AI to generate visuals.

— High school fine arts teacher in Texas

   My district recently issued a survey to staff asking if we would be interested in having a trained AI/ChatGPT professional offer a professional development session on AI and ChatGPT. It seemed to be an open exploratory opportunity. They asked if we would be interested and what we would like them to address in the session. I share this because I think my school is trying to navigate how AI will fit into our school's instruction in a way that is realistic. I fully support this; I used ChatGPT in my classroom last year and am looking forward to learning more ways to understand, leverage, and teach students about this technology.

— High school English teacher in New Jersey

   Recently, our technology teacher told me about using ChatGPT for crafting letters of recommendation. I get a lot of these and it certainly helped when I received several requests with quick due dates. I can definitely see the usage but would want a lot more training and guidelines set.

— High school social studies teacher in Michigan

   When ChatGPT first became publicly available it was almost immediately used by my students as a plagiarism tool. The timing was bad. I was preparing students for the AP English Literature exam and we were drilling quite a bit of fairly formulaic writing in the 300-400 word range, which ChatGPT is particularly well suited for. When the realization dawned on me that many of my students were using the tool unethically, my feelings [were] hurt. It was depressing in a genuinely existential way. As the leader of the English department for my school I held a meeting with the department and we crafted an acceptable use policy. This school year we started to proactively design procedures that would make it harder to use AI unethically and some teachers, myself included, have started finding ways to model ethical use. It's been a roller coaster.

— High school English teacher in Texas


How to Red Team a Gen AI Model

  • Andrew Burt


The harms that generative AI systems create often differ from those of other forms of AI in both scope and scale.

Red teaming, a structured testing effort to find flaws and vulnerabilities in an AI system, is an important means of discovering and managing the risks posed by generative AI. The core concept is that trusted actors simulate how adversaries would attack a given system. The term was popularized during the Cold War, when the U.S. Defense Department tasked “red teams” with acting as the Soviet adversary, while blue teams acted as the United States or its allies. In this article, the author shares what his specialty law firm has found does and does not work in red teaming generative AI.

In recent months governments around the world have begun to converge around one solution to managing the risks of generative AI: red teaming.

  • Andrew Burt is the managing partner of Luminos.Law, a boutique law firm focused on AI and analytics, and a visiting fellow at Yale Law School’s Information Society Project.


Guest Essay

How Biden Can Tackle Mass Incarceration

An illustration of a man in orange prison clothes standing outside in the sunshine, as seen from the gloom of a prison cellblock.

By Michael Romano

Mr. Romano is the founder and director of the Three Strikes Project at Stanford Law School, which represents people sentenced to life under three-strikes laws throughout the country, and he chairs California’s Committee on Revision of the Penal Code.

As a candidate, Joe Biden said he would substantially reduce the federal prison population as president. Last week he commuted the sentences of 11 people who he said were serving unjustifiably harsh prison terms for drug offenses and also pardoned people convicted of certain marijuana charges. Still, the number of people in federal prison has grown during the Biden administration.

Despite historical bipartisan support for sentencing reform, Mr. Biden has failed to fully embrace the momentum of his two immediate predecessors, who made substantial efforts to tackle mass incarceration. Some have argued that his relative inaction on the issue may hurt him among key voting groups.

But it is not too late.

Ten years ago, Barack Obama began an ambitious program to reform the country’s criminal justice system and take on mass incarceration, offering clemency to people serving long sentences for nonviolent crimes who had demonstrated rehabilitation. Over 1,500 people were freed — many of whom would have died in prison otherwise.

And five years ago, Donald Trump took up that mantle and accelerated reform of federal sentencing laws by championing the First Step Act, which, as of January 2023, has resulted in the early release of nearly 30,000 people in prison, including many sentenced under what many lawmakers came to consider especially harsh laws.

The opportunity is ripe for Mr. Biden to act as well. And he can do so without negotiating with a fractious Congress and without following Mr. Obama’s politically fraught path of offering clemency, which invites the same arbitrariness and inequities that reformers are trying to correct.

Instead, Mr. Biden can chart his own course by taking advantage of a little-used law that allows prison officials to recommend to federal judges that they re-evaluate sentences of people for “extraordinary and compelling reasons.” This can include people who are facing long sentences and have already served many years behind bars, have shown their commitment to rehabilitation and are prepared for release.

This approach, which could be called administrative clemency, is fairer, more transparent, more comprehensive and less politically complicated than traditional clemency. It is in step with reforms percolating through state legislatures that empower law enforcement agencies and judges to revisit old, unnecessarily harsh prison sentences. It also encourages people in prison to work on themselves through education, vocational training, counseling and drug treatment.

Prison officials are ideally situated to make this evaluation. Prosecutors, judges, the police and even defense lawyers tend to move on to other cases and often do not keep tabs on people sent to prison who have been working to rehabilitate themselves and are hoping for some kind of reprieve. But prison officials and staff members work with them daily and follow and chart their progress.

The administrative clemency process empowers prison officials to identify suitable candidates for resentencing based on their behavior and rehabilitation. Those cases would be sent back to court, where a judge would make the final determination on whether a person’s sentence should be reduced.

Unlike clemency, this decision is made in open court, with arguments and evidence by prosecutors and defense lawyers. It also allows courts to consider and impose release plans that maximize public safety. Final determinations are made by federal judges with lifetime tenure who are distant from the politics that influence presidential-level decisions. In short, this process returns the case to where it belongs: in court, with all the legal protections, evidence and consideration criminal cases deserve.

While some people may be justifiably wary of investing so much power in prison officials, a similar process is working in California, which is infamous for having some of the country’s harshest sentencing laws and most overcrowded prisons, as well as what may be the nation’s most powerful prison guard lobby.

Over the past six years, under a program begun by Jerry Brown when he was governor, California state prison officials have recommended 2,200 people for sentencing reductions. Before any candidates are released, they undergo thorough vetting by prosecutors and defense lawyers, and a judge determines whether continued incarceration is no longer in the interest of justice.

There is little litigation because after a person is identified and endorsed by prison officials, it becomes clear that the sentence is unnecessarily long and counterproductive. California’s program makes use of what had been a largely dormant state statute similar to the federal law that Mr. Biden could employ.

Mr. Obama, who recognized that many criminal punishments were overly harsh, ineffective at maintaining public safety and infected by racial bias, argued in a Harvard Law Review article that presidents have an “obligation” to correct injustices baked into the country’s criminal legal system. “How we treat citizens who make mistakes (even serious mistakes), pay their debt to society and deserve a second chance reflects who we are as a people,” he wrote.

And Mr. Trump, in social media posts after Congress passed substantial changes to tough-on-crime laws in 2018, said that it was his job “to fight for all citizens, even those who have made mistakes” and added that the new law would “keep our communities safer, and provide hope and a second chance, to those who earn it.” Among other things, the changes reduced prison terms.

Administrative clemency, if the Biden administration pursues it, will not correct fundamental flaws in the criminal legal system. It will not directly address racism, mental illness and bad social science that can be corrected only through comprehensive reform. But it is something Mr. Biden can do to address thousands of unfair federal sentences without embroiling himself in the politics of clemency or legislation. And it’s something he can do today.


The Vast Extent by Lavinia Greenlaw: Essays exploring intersections between arts and sciences

A life’s worth of reflection and rare pleasure to be had in discovering connections in this writer’s thinking.


Lavinia Greenlaw has written a wide-ranging collection of essays on the nature of vision, light and image. Photograph: Sarah Lee

The Vast Extent: On Seeing and Not Seeing Further

“I have grown up with an empty frame — the picture emptied of divine presences — and what is left is light.” So says poet Lavinia Greenlaw in her perceptive and wide-ranging collection of essays on the nature of vision, light and image. A poet raised among scientists, Greenlaw demonstrates a restless curiosity, touching upon topics as seemingly disparate as astronomy, meteorology, astrophysics, the colour spectrum, mythology and visual art.

The latter provides her with her most frequently used lens through which to study our engagement with the world. In “Becoming, resistance, dissolve”, the male and female painterly gazes are contrasted through images of Lucretia in works by Lucas Cranach the Elder and by Artemisia Gentileschi, who had herself experienced rape. In Cranach’s painting, Lucretia’s “… face is empty. She is already absent. She has unbecome” whereas in Gentileschi’s, the bloodlessness of the image — in comparison to the artist’s other works — suggests “This is such an old story that there is no more blood to spill. These ambiguities and refusals are a form of resistance.”

Methods of discovery are central to the writer’s work, with essays exploring points of intersection between the arts and sciences. Here we see Greenlaw bristle at the idea that poets are less clear-eyed than their scientific counterparts. In fact, she identifies a strain of romanticism in the world of science that she finds frustrating. A lively exchange of letters with the late immunologist and poet Miroslav Holub animates her arguments, with the latter admiring her coining of a phrase: “I like your ‘palliative use of science’. It ranges from scientific mimicry used by modern shamans to newspaper statements on another cure for tumours.”

The form of these “exploded” essays is loose-limbed and allusive, with the organising principles of essays with titles as diffuse as “Becoming, resistance, dissolve” and “Boredom, repetition, fixatives” sometimes a little difficult to discern. Greenlaw’s approach does, however, allow for a true reflection of the sheer volume (or vast extent) of the work that has gone into these essays; there is a life’s worth of reflection here and a rare pleasure to be had in discovering connections in the writer’s thinking, gathered sometimes decades apart. A rewarding and thought-provoking read.

