Introduction
We have decided to frame our artefact through the lens of three distinct concepts: design, data and delivery.
Anecdotes
Wider Narratives
Here both anecdotes describe entanglements with standardised assessments and the data they produce. The ‘Hidden Figures’ anecdote praises the STAR Maths assessment as a ‘secret weapon’: the system’s ability to quickly gauge a pupil’s attainment level and areas for development, without any prior information, is recognised as full of possibilities. Similarly, in the ‘Take a deep breath’ anecdote, the sheer volume of data available at a few clicks of a button is impressive and practical for organising interventions. However, both anecdotes describe events where the data does not marry up with teacher or pupil judgement. These cautionary tales highlight the possible dangers of placing too much reliance on a sole measure of attainment. After criticism from teaching unions of the use of SNSAs to measure attainment in Scottish pupils (EIS, 2020), the Scottish Government altered the language of the National Improvement Framework (Scottish Government, 2019) to frame SNSA data more as an aid to teacher judgement. Fenwick and Edwards (2016) describe measures of calculation and non-calculation as ‘interrelated, rather than existing in separate spaces’, arguing for the need for qualculation in education.
When considering the delivery of learning and teaching experiences using technology, we must look beyond the simple, traditional network of teacher and student. It is important to consider the multifaceted roles of all actors and how they work in harmony, or indeed in discordance, with one another. The two delivery anecdotes illustrate breakdowns in delivering working solutions. A number of factors are at play here: user errors resulting from not following instructions; infrastructure issues, in these cases around wifi and the availability of hardware; and software design elements causing unnecessary complications. This raises the question of responsibility and accountability: in such an enmeshed network, where does responsibility fall, and who is accountable when things go wrong?

Lynch posits that ‘we only see software’s power when it breaks, unleashing a social shout’ (Lynch, 2015), and the subsequent blame is then often placed squarely on the shoulders of the software, or indeed the hardware. This blame game may be symptomatic of the larger-scale social narratives around AI, which often take a binary form: AI is put on a pedestal when it works well and in the interest of the user, but is often treated with scorn, derision and even fear when it does not. This ‘us and them’ attitude, or othering of AI systems, is not helpful when we consider the place of AI as a co-worker. If the responsibility for the success of delivery is shared, then so too should the failure be. Susskind and Susskind suggest there may be tension in technology shifting the teacher from ‘sage on the stage’ to ‘guide on the side’ (Susskind & Susskind, 2015), but in many ways that ‘guide on the side’ role is simultaneously being filled by other actors who hold responsibility for the working delivery of technology in learning.
Local authorities hold responsibility for ensuring infrastructure is up to par; Scottish Government agencies for providing equitable access to, and professional training for, services within a national platform such as Glow; and software designers for providing a user-friendly, functional product. We all function as part of this web, and no sole actor is the spider who built it.
References
Lynch, T. L. (2015). Introducing Software Studies. In: The Hidden Role of Software in Educational Research. New York: Routledge, pp. 21-48.
Susskind, R., and Susskind, D. (2015). The Grand Bargain. In: The future of the professions: How technology will transform the work of human experts. Oxford: Oxford University Press, pp. 9-45.
Critical Questions
How could the expertise of teachers and pupil voice be used within the design of an educational AI system?
Is there a place for qualculation in education, and how practical would it be?
How do we find new ways of approaching, understanding and responding to questions of responsibility and accountability when working with our AI co-workers?