Our observation from student self-assessment cover sheets indicates that students found self-assessment against these criteria challenging, since they overestimated their performance; for teachers, providing formative feedback on these criteria may be prohibitively time-consuming. Effective (self-)assessment of legal writing requires, first, the ability to recognise summary statements in introductions and conclusions and to identify the parts of the text that contain critical analysis; as a second step, the clarity and pertinence of the identified segments must be evaluated. Both steps require expertise: the first mainly in the analysis of academic writing, the second in domain knowledge. By highlighting sentences that need to be evaluated, AWA aims to support the first step of this complex assessment activity, in line with the guidance from the literature described in the introductory sections. Moreover, AWA indicates in bold the linguistic expressions that trigger the highlighting, with the aim of helping end-users understand the relevant parts of the highlighted sentences. The parser does not yet analyse or provide feedback above the sentence level, so it is left to students to reflect on whether sentence types are positioned appropriately at the whole-text, section, and paragraph levels.
Usability aside, the next question was whether AWA’s output was experienced as academically trustworthy by the civil law lecturer and her students. To date, we have reported statistical correlations between the frequency of certain XIP classifications and the quality of English literature essays (Simsek et al. ). However, user experience testing has not yet been reported; this application to the legal domain provides a first step towards rolling the tool out to students within a single domain.
To persuade someone you will need strong facts and the opinions of authorities, and your main task is to leave your reader no room for doubt. All your sources should be reliable, valid, and up to date. Argumentative essay topics are typically drawn from current social concerns and are often discussed in political debates in the media. The topic itself may also be ethical, religious, social, or political in nature. Your task is to challenge the audience to re-examine their values, unsettling deeply held principles by providing new evidence and points of view.
Argumentative essay topics are those currently being debated in society. The objective of such essays is not to demonstrate your knowledge, but to exhibit your critical thinking and analytical skills.
One means by which to support such alignment is through the automated provision of formative feedback on the accuracy of students’ self-assessment, or the writing itself. Indeed, a line of research has developed to analyse student writing through automated essay scoring or evaluation systems (AEE). These systems have been successfully deployed in summative assessment of constrained-task sets, with evidence indicating generally high levels of reliability between automated and instructor assessments (see, e.g., discussions throughout Shermis and Burstein ), with some criticism of this work emerging (Ericsson and Haswell ). Such systems have been targeted at both summative and formative ends. However, these approaches have tended to explore semantic content (i.e., the topics or themes being discussed), and syntactic structure (i.e., the surface level structures in the text), with some analysis of cohesion (see particularly, McNamara et al. ), but less focus on rhetorical structure (i.e., the expression of moves in an argumentative structure). Moreover, these systems have not typically been applied to formative self-assessment on open-ended writing assignments.
Related to the previous point, but standing as a question in its own right, is the extent to which students and educators should be encouraged to use rhetorically based highlighting as a proxy for the overall quality of a piece. Prior work (Simsek et al. ) has investigated statistical relationships between the frequency of all, or particular, XIP sentence types and essay grade, finding some positive correlations; but there is clearly much more to the creation of a coherent piece of writing than this indicator alone, so one does not expect it to account for all variance. Rhetorical parsing on its own does not assess the truth or internal consistency of statements, for which fact-checking or domain-specific ontology-based annotation (Cohen and Hersh ) could be used. Other writing analytics approaches provide complementary lenses (see, for example, McNamara et al. ) which, combined in a future suite of writing analytics tools, would illuminate different levels and properties of a text in a coherent user experience.
The variety of argumentative essay topics does not guarantee that selecting a suitable one will be easy. The topic should meet the following requirements:
The following suggestion is targeted at the undergraduate student who writes several argumentative essays every semester, but who has never formally learnt how to do so and rarely takes the time to reflect on how it should be done. The suggestion takes the form of a model that highlights what one might expect to find in the three main parts of a good argumentative essay. As a model to help you reflect on what you are doing well and what you should be doing differently, it is necessarily categorical and systematic. In trying to master the model, you should not inadvertently become a slave to it. Otherwise, your essays will become rigid and sterile. Instead, use this as a mental checklist that you can build upon and even be playful with, developing in the process a confident, independent and original voice.