
Learning analytics and AI: Politics, pedagogy and practices

We treasure what we measure

It is arguably human nature—and most certainly bureaucratic nature—to want to quantify our successes and failures. The ability to calculate a numerical value to represent the progress of an individual or an institution, a country even, is now central to evidence-based policy and practice. The counterpoint to “treasuring what we measure” is, of course, that “what counts can’t always be measured, and what’s measurable doesn’t always count.” All institutions, education being no exception, have long wrestled with the tension that the powerful abstractions afforded by quantitative analysis also lose vital detail as context is stripped out. This is what makes the design of metrics to gauge the quality of nuanced human processes and outcomes (such as teaching and learning) so controversial. Across society, we see this debate now playing out in all spheres of human life, with ethical frameworks and professional codes of practice proliferating (eg, Asilomar, 2017; Floridi et al., 2018; IEEE, 2017; Montreal, 2017; Partnership on AI [PAI], 2018)—although it is not clear that these are making much impact yet on computing companies (Whittaker et al., 2018, Section 2.3). How this debate should unfold in education and lifelong learning is the focus of this special issue, which brings together leading scholars in the fields of Learning Analytics (LA) and Artificial Intelligence in Education (AIED)—fields that are viewed with varying degrees of excitement and suspicion by parents, students, teachers, journalists and scholars.

The fears are reasonable: that quantification and autonomous systems provide a new wave of power tools to track and quantify human activity in ever higher resolution—a dream for bureaucrats, marketeers and researchers—but offer little to advance everyday teaching and learning in productive directions. This fear is justified in our post-Snowden era of pervasive surveillance and post-Cambridge Analytica data breaches. Partly, however, this fear is also born of a lack of awareness of the diverse forms that LA/AIED take, which is equally understandable—to outsiders, these are new and opaque technologies. It follows that if we do not want to see concerned students, parents and unions protesting against AI in education, we urgently need to communicate in accessible terms what the benefits of these new tools are and, equally, how seriously the community is engaging with their potential to be used to the detriment of society.


Politics, pedagogy and practices

This special issue provides resources to tackle this challenge, by engaging with these concerns under the banner of three themes: Politics, Pedagogy and Practices:

  1. The politics theme acknowledges the widespread anxiety about the ways that data, algorithms and machine intelligence are being, or could be, used in education. From international educational datasets gathered by governments and corporations, to personal apps, in a broad sense ‘politics’ infuse all information infrastructures, because they embody values and redistribute power. While applauding the contributions that science and technology studies, critical data studies and related fields are making to contemporary debates around the ethics of big data and AI, we wanted to ask: how do the researchers and developers of LA/AI tools frame their work in relation to these concerns?

  2. The pedagogies theme addresses the critique from some quarters that LA/AI’s requirements to formally model skills and quantify learning processes serve to perpetuate instructivist pedagogies (eg, Wilson & Scott, 2017), branded somewhat provocatively as behaviourism (Watters, 2015). While there has clearly been huge progress in STEM-based intelligent tutoring systems (see du Boulay, 2019; Rosé, McLaughlin, Liu, & Koedinger, 2019), what is the counter-argument that LA/AI empowers more diverse pedagogies?

  3. The practices theme sought accounts of how these technologies come into being. What design practices does one find inside LA/AI teams that engage with the above concerns? Moreover, once these tools have been deployed, what practices do educators use to orchestrate these tools in their teaching?


Politics, Pedagogy and Practices cannot, of course, be neatly split into separate analytical threads: they are mutually constitutive. Building on the critical analyses of information infrastructures (Bowker & Star, 1999; Star & Ruhleder, 1996) and knowledge infrastructures (Edwards, 2010; Edwards et al., 2013), we may now be seeing the emergence of “educational knowledge infrastructures” (Buckingham Shum, 2018). These exert political power only through practices (in policy and design), which translate educational worldviews (whether implicitly or explicitly recognised: Knight, Buckingham Shum, & Littleton, 2014) into code (eg, in database schemas, algorithms and user interfaces), involving and excluding stakeholders, for instance, through the networks of institutions funding and building such infrastructure (Luckin & Cukurova, 2019; Prinsloo, 2019; Williamson, 2019), and through the design methodologies employed to make design decisions (Buckingham Shum, Ferguson, & Martinez-Maldonado, 2019; Mavrikis, Geraniou, Gutierrez Santos, & Poulovassilis, 2019; Richards & Dignum, 2019).
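To make concrete the point that database schemas embody worldviews, here is a minimal, entirely hypothetical sketch (not drawn from any paper in this issue) contrasting two ways a learning record might be modelled: the first reduces learning to a per-skill mastery score, while the second leaves room for the situated detail that quantification tends to strip out.

```python
# Hypothetical illustration: two schemas, two pedagogical worldviews.
from dataclasses import dataclass, field

@dataclass
class MasteryRecord:
    """Instructivist framing: learning as a per-skill score."""
    student_id: str
    skill_id: str
    mastery_pct: float  # a single number stands in for "learning"

@dataclass
class LearningEvent:
    """Broader framing: learning as situated, social activity."""
    student_id: str
    activity: str                                       # eg "peer discussion"
    artefacts: list[str] = field(default_factory=list)  # essays, code, sketches
    collaborators: list[str] = field(default_factory=list)
    context_notes: str = ""  # the detail that pure metrics strip out
```

Neither schema is “correct”; the point is that whichever one an infrastructure adopts silently decides what can later be counted, compared and acted upon.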


Overview of the special issue

In the context of this 50th Anniversary Special Issue of the British Journal of Educational Technology, authors from a range of disciplinary backgrounds and outlooks were challenged to make the state of the art in their fields accessible to a broad audience, and to give glimpses of the road ahead to 2025. The papers are therefore primarily reflective, “big picture” narratives, reviewing and discussing existing literature and case studies, and looking forward to what could, or should, be on the horizon. Together, they provide an eclectic set of lenses for thinking about LA/AIED at a range of scales—from the macro-scale of national and international policy and stakeholder networks, to the meso-scale of institutional strategy, down to the micro-scale of how we make cognitive models more intelligible, or design decisions more ethical. These papers thus provide a complement to the BJET ‘sister’ special issue on LA/AIED (Starčič, 2019), which provides a more conventional set of papers reporting new empirical evidence and technical advances.


We start with two papers from the perspective of critical data studies and sociotechnical infrastructure analysis. In “Policy networks, performance metrics, and platform markets: Charting the expanding data infrastructure of higher education,” Williamson (2019) recognises that while educational data scientists (such as those in this special issue) are genuine in their pursuit of student/teacher-centred LA/AIED, the politicisation and commercialisation of national and international scale data infrastructure cannot be ignored. Drawing on a UK programme currently underway, Williamson highlights numerous concerns, such as how LA/AIED technology may in fact be appropriated by other agendas, and translated into performance metrics. Consequently, he calls for policy frameworks for ethical, pedagogically valuable uses of student data in higher education.


In “A social cartography of analytics in education as performative politics,” Prinsloo (2019) critiques “the data imaginary” in higher education, arguing that this narrative ascribes too much trust and power to the potential of data analysis to represent the complex reality of student progress. Following an analysis of the main assumptions in evidence-based management, and the increasing power of quantitative metrics, he presents a “social cartography” of data analytics, which should be thought of “not only as representational, but as actant, and as performative politics.”


The changing ecosystem of stakeholders who constitute LA/AIED continues as the focus in “Designing Educational Technologies in the Age of AI: A Learning Sciences Driven Approach,” by Luckin and Cukurova (2019). Educational technology companies are interpreting ideas that have been in research labs for decades and, powered by big data and data science, are taking products to market. This is both exciting and dangerous, since as the authors put it, “most commercial AI developers know little about Learning Sciences research, indeed they often know little about learning or teaching.” Luckin and Cukurova present three examples illustrating how the learning sciences assist in making sense of rich learner data traces, before discussing models for the more effective sharing of such academic knowledge in forms that can be applied by LA/AIED companies, by forging inter-stakeholder partnerships between developers, educators and researchers.


LA/AIED technologies are now available in commercial products and robust open source platforms, making them ready for adoption at scale—but what does this process look like? The innovation capacity of individual higher education institutions comes into focus in “Complexity leadership in learning analytics: Drivers, challenges, and opportunities,” by Tsai, Poquet, Gašević, Dawson, and Pardo (2019). They document the organisational enablers and obstacles to sustained adoption, and the forms of leadership required to navigate such organisational change. Drawing on their analysis of 21 universities’ adoption processes, and 23 leadership interviews, they distil some common, critical challenges, in the light of which they argue that leadership models based on complexity theory provide key principles to accelerate sustainable LA.


Data ethics run throughout this special issue, but the next paper makes this the central focus. In “Practical ethics for building learning analytics,” Kitto and Knight (2019) identify several tensions (illustrated by “edge cases”) in current thinking about ethics in relation to LA/AIED, focussing in particular on how current frameworks fail to provide practical guidance for the teams who design and build such infrastructure. They propose an open database of such edge cases, as “a middle space where technical builders of systems could more deeply interface with those concerned with policy, law and ethics.”

Attention then switches to the new societal challenge of learning for a lifetime, and its implications for how LA/AIED is conceived. In “From data to personal user models for life-long, life-wide learners,” Kay and Kummerfeld (2019) map out new challenges for the AIED community, specifically, how the established concept of “the learner model” (what the AI believes the learner understands) must evolve. They identify a set of “competency questions” to which learners should be able to get intelligible answers if they are to have genuine control over their learning data and be equipped to use it to scaffold metacognitive processes.
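As a concrete (and entirely hypothetical) illustration of this idea, the sketch below gives a personal user model a learner-facing, queryable interface; the class, fields and question wording are our own invention, not Kay and Kummerfeld's design.

```python
# Hypothetical sketch: a learner model that answers "competency questions"
# and gives the learner control over their own data.
from dataclasses import dataclass, field

@dataclass
class PersonalUserModel:
    beliefs: dict[str, float] = field(default_factory=dict)       # skill -> estimate
    evidence: dict[str, list[str]] = field(default_factory=dict)  # skill -> sources

    def what_do_you_believe(self, skill: str) -> str:
        """Competency question: what does the system believe I know, and why?"""
        level = self.beliefs.get(skill)
        if level is None:
            return f"No data held about '{skill}'."
        sources = ", ".join(self.evidence.get(skill, [])) or "no recorded evidence"
        return f"Estimated mastery of '{skill}': {level:.0%} (based on: {sources})"

    def forget(self, skill: str) -> None:
        """Learner control: remove everything the model holds about a skill."""
        self.beliefs.pop(skill, None)
        self.evidence.pop(skill, None)

# Example: the learner scrutinises, then deletes, part of their model
pum = PersonalUserModel({"statistics": 0.72}, {"statistics": ["quiz 3", "lab 2"]})
print(pum.what_do_you_believe("statistics"))
pum.forget("statistics")
```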


One of the most visible forms of AI that the public now engages with is the software agent, for instance automating telephone and text-chat helplines, and enabling voice interfaces to smartphones and other devices. In “Supporting and challenging learners through pedagogical agents who know their learner: Addressing ethical issues through designing for values,” Richards and Dignum (2019) explain what is currently known about the effective use of pedagogical agents (PAs) in learning environments. They describe the different forms these agents can take, what roles they play, the underlying theories, and how they may develop in the future. Richards and Dignum then weave ethical considerations into this picture, arguing for a “design for values” approach to the design of ethically and socially responsible agents.


As we introduced above, there is a belief, held in certain scholarly communities and some of the teaching profession, that LA/AIED equates to intelligent tutoring systems (ITSs), delivered through a narrow, instructivist pedagogical model focussed on skills mastery through “drill and practice.” ITSs (often branded “adaptive learning”) may also be seen as particularly threatening to the teaching profession, since the value proposition is to more efficiently handle the task of guiding the learner to master the curriculum with 24/7 coaching feedback. Tackling these concerns head on in “Escape from the Skinner Box: The case for contemporary intelligent learning environments,” du Boulay (2019) first summarises key evidence from 40 years of ITS research. He turns next to examples of more diverse pedagogies that operate at the “screen level” (the capabilities of an ITS), and the “classroom level” (how teachers can orchestrate the students’ ITS experience in different ways). The agency of both students and teachers in these scenarios is greater than the reductionist image that critics paint.
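For readers unfamiliar with how such systems decide when a skill has been mastered, here is a minimal sketch of Bayesian Knowledge Tracing, the classic probabilistic skills-mastery model behind many ITSs. The parameter values are illustrative defaults, not taken from du Boulay's paper or any particular system.

```python
# Minimal Bayesian Knowledge Tracing (BKT) sketch with illustrative parameters.
def bkt_update(p_known: float, correct: bool,
               p_slip: float = 0.1,    # P(wrong answer despite mastery)
               p_guess: float = 0.2,   # P(right answer without mastery)
               p_learn: float = 0.15   # P(mastering the skill this step)
               ) -> float:
    """Update the estimated probability of mastery after one response."""
    if correct:
        posterior = (p_known * (1 - p_slip)) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess)
    else:
        posterior = (p_known * p_slip) / (
            p_known * p_slip + (1 - p_known) * (1 - p_guess))
    # The learner may also have learned during this practice opportunity.
    return posterior + (1 - posterior) * p_learn

# Example: three observed responses move the mastery estimate up and down
p = 0.3
for outcome in (True, True, False):
    p = bkt_update(p, outcome)
    print(f"P(mastered) = {p:.3f}")
```

An ITS using a model like this typically declares a skill mastered once the estimate crosses a threshold (0.95 is a common convention) and moves the learner on; this is precisely the “drill to mastery” loop that critics have in mind, and that du Boulay argues is only one of many possible pedagogies.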

The question of the pedagogical assumptions underpinning LA/AIED continues in “Intelligent Analysis and Data Visualisation for Teacher Assistance tools: The case of exploratory learning,” by Mavrikis et al. (2019). Tackling together the concerns around pedagogy and the automation of teachers, the authors describe how LA can be designed to augment teachers' awareness of their students' progress in exploratory learning tasks. They describe the design rationale behind a suite of “Teacher Assistance tools,” and their empirical evaluation in use.


The ITS theme continues in a third paper, with a critical reflection on how the AIED community makes its models more transparent, and on the criteria used to judge the quality of intelligent systems. In “Explanatory learner models: Why machine learning (alone) is not the answer,” Rosé et al. (2019) argue for “explanatory learner models” that provide more interpretable, actionable output than “black box” models, which may be inscrutable even to their own developers. They present examples of how their systems accomplish this with different kinds of data, across mathematics and writing courses, in order to improve the learner model, generate actionable feedback and, critically, keep the human “in the loop” rather than automating them out. Distinctively, the authors describe how other disciplines (social sciences, education and design) have contributed important insights and algorithmic improvements by evaluating AIED in authentic contexts. They conclude that LA/AIED tools should be designed and evaluated with such interdisciplinary skillsets, to understand the human dimensions of use that more technology-centric approaches miss.
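To make the contrast with black-box models concrete, here is a hedged sketch of an interpretable learner model in the spirit of this research tradition: a logistic regression in which every coefficient is tied to a named skill, so the fitted model can be read, questioned and acted upon. The skills, data and fitted weights are invented for illustration and are not taken from Rosé et al.'s systems.

```python
# Illustrative "explanatory" learner model: a logistic regression whose
# features are named skills plus prior practice, so every weight is readable.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["fractions", "decimals", "ratio", "prior_opportunities"]
# Each row is one practice attempt: which skill the item exercises,
# plus how many prior opportunities the student had on that skill.
X = np.array([
    [1, 0, 0, 0],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
    [0, 0, 1, 2],
])
y = np.array([0, 1, 0, 1, 1, 1])  # 0 = incorrect, 1 = correct

model = LogisticRegression().fit(X, y)
# Unlike a black box, each coefficient has a pedagogical reading: negative
# skill weights suggest harder skills; a positive weight on prior
# opportunities suggests students improve with practice.
for name, w in zip(feature_names, model.coef_[0]):
    print(f"{name:>20}: {w:+.2f}")
```

A teacher or course designer can act directly on such output, for instance by adding scaffolding for the skill with the most negative weight; that kind of actionability is what inscrutable models cannot offer.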


We explicitly called for insider accounts from system builders regarding the assumptions they make about “politics, pedagogy and practices.” In “The heart of educational data infrastructures = conscious humanity and scientific responsibility, not infinite data and limitless experimentation,” Johanes and Thille (2019) describe not their own system building efforts, but the insights they gained from in-depth interviews with 11 data infrastructure teams building LA/AIED systems. The authors seek to give these teams a voice, to rebalance concerns that technologists do not care about ethics or that, even if they do, they cannot translate this into design decisions. Their goal is that “researchers, policymakers, and infrastructure builders can use these accounts to better understand the building process and experience.” They document how infrastructure developers reflect deeply on, and ultimately make decisions about, the social and ethical dimensions of their creations.


Reflections on this special issue

We invited papers that would provide critical accounts of the practices employed in the design of LA/AIED, and of the practices adopted by educators deploying these tools (Kay & Kummerfeld, 2019; Luckin & Cukurova, 2019; Richards & Dignum, 2019; Rosé et al., 2019). With respect to pedagogy, we anticipated critical narratives explaining how the use of quantification need not equate to the implementation of behaviourist or instructivist approaches to teaching and learning (du Boulay, 2019; Mavrikis et al., 2019). Finally, within the theme of politics, our goal was to garner accounts that recognise the political significance, and the potential for power and influence, that the intelligence infrastructure constructed through big data, AI and analytics brings to the world: accounts that would help us to be appropriately vigilant, and judiciously embracing, with respect to these technologies and the quantifiable approaches they can bring to education (Kitto & Knight, 2019; Prinsloo, 2019; Richards & Dignum, 2019; Tsai et al., 2019; Williamson, 2019). We were also keen for clarifications of the way that the three elements of our theme (Politics, Pedagogy and Practices) are deeply connected and inter-related (cf. Johanes & Thille, 2019; Kitto & Knight, 2019).


The politics/policy-making theme was very popular with authors, who addressed the policies that need to be in place to tackle concerns about marketisation, the use of data to control and reform, and the need to rethink the way we approach ethics, far beyond issues of privacy. We need to equip decision makers to really understand data, and to ask critical, data-literate questions. Policy makers will be pleased to read the call for dialogue—between institution leaders, policy professionals, academics, pedagogical experts and students—in order to address the need for evolving data infrastructures that precipitate tension between, on the one hand, student empowerment and, on the other, the quantification of HE through data-driven metrics (Williamson, 2019). Dialogue between multiple stakeholders is essential to ensure increased understanding as we venture into uncharted educational territory, although we should never overlook the fact that higher education institutions are complex adaptive systems, as made clear by Tsai et al. (2019). Prinsloo (2019) echoes this, highlighting the need to pay attention to making visible and relevant the social, political, economic, legal, technological and environmental context that “entangles” all that we do. The key role to be played by multiple stakeholders, within school education as well as within HE, also arises for Mavrikis et al. (2019). Their paper discusses the way in which teachers can be assisted by well-designed information visualisations driven by sound LA, and how teachers need to be part of the conversation when it comes to the design and implementation of data-driven approaches within education.


Surely, however, all these political concerns need to be guided by the “North Star” of pedagogy if we are to achieve the future benefits that analytics and AI can potentially bring. As we move through the 21st century and embrace the fourth industrial revolution, we need to think carefully about pedagogy. It is important that we continually question the drivers behind all the discussions about politics and policy, and ensure that they are led by what we know and understand about pedagogy. If we lose this focus, we may make decisions that yield unintended and unwanted consequences. Delightfully, du Boulay (2019) challenges the statement made in the call for papers for this special issue that: “… analytics and AI equate to adopting a retrograde pedagogy from the industrial era…”. Du Boulay probes and discusses the developments in pedagogy that can be found in interactive learning environments, proposing that future educational directions could benefit from fostering the learner's motivation, and tapping into their curiosity, through the way we design, organise and choreograph their learning activities. It will be important that new technologies reduce rather than increase the workload of hard-pressed teachers, a point echoed by Mavrikis et al. (2019).

A particularly valuable aspect of the reflective approach we asked authors to take can be seen in the way that the papers in this issue build on much previous valuable research. For example, Kay and Kummerfeld combine earlier work on Open Learner Models from AI in Education with research on Personal Informatics to formulate the idea of a Personal User Model for Life-long, Life-wide Learners (PUML), and produce a set of design guidelines with the aim of helping learners to develop their metacognitive skills as well as gain control of their data. In particular, they focus on the need for appropriate interfaces that support self-monitoring, reflection, and data scrutiny and control. The work is conceptual, but draws together previous research from AI and Human-Computer Interaction effectively in support of their proposals. We see a similar call, for deep interdisciplinarity, in the work of Rosé et al., who highlight the need for the success criteria of the machine learning paradigm to be tempered with those of authentic assessment in education (see also Kitto, Buckingham Shum, & Gibson, 2018).

When it comes to the practices of teaching and learning with analytics and AI, and the practices involved in building analytics and AI technologies, we certainly need to ensure that we are connected in our thinking. We need the educators to be talking with the developers and builders, and this joint narrative must speak to policy making. To this point, the paper by Richards and Dignum about the societal impact and ethical issues that must be taken into account when designing PAs feels internationally relevant. The authors acknowledge that we do not yet understand the implications of PAs in human roles, supporting both teachers and learners and acting as lifelong learning companions. They stress that both practitioners and policy makers need to be aware of both the benefits and risks associated with the use of PAs (and AI in general), and present a design approach grounded in the importance of respecting learner values. They highlight the age-old challenge of transfer of learning between education and the real world, and the potential benefit that augmented and virtual reality could bring to help address this challenge. They make a strong case for the value of PAs, but they also recognise the need for frameworks, standards and authoring tools to ensure responsible design.


Johanes and Thille also focus on the need for conversations between different communities. They spotlight criticisms that those who build data-driven and analytics technology perceive data only as neutral and ready for analysis, rather than as products of social processes. They eloquently report the findings from their interviews with system builders in higher education, helping to balance the conversation, which has until now been dominated by those outside the builder community. The voices of the infrastructure builders are enlightening, and the discussion that ensues makes clear that those who criticise the builders miss the fact that many system builders are engaging deeply with what the critics find concerning. The fact that technologies currently “capture very little of human learning” is something that these system builders recognise, but this recognition is something that critics do not always acknowledge. Johanes and Thille remind us that we must not assume that the technologies provided by system builders are the technologies that the system builders would like to be producing. Rather, they are the best that can be done with the tools currently available: theoretical, technical and practical. Their final point is powerful and particularly pertinent to a special issue such as this, where we have tried to pull multiple voices and perspectives together: “Particularly surprising, then, is that the intimate decisions, ethical dilemmas, and personal struggles that show the builders' engagement with these concerns have not found an outlet in existing publications.”


On cultural diversity in LA/AIED

In the context of BJET's concern to promote internationalisation and inclusiveness (Hennessy, Mavrikis & Girvan, 2019), we recognise the lack of cultural diversity in this special issue, and must take responsibility for this, having invited the authors on the basis of their experience and diversity of outlook, rather than cultural background per se. This is, however, a fair reflection of the state of the art in LA/AIED. With the exception of Prinsloo's paper from South Africa, the remainder are from North America, Europe and Australia. None of the papers focus explicitly on how LA/AIED and culture shape each other. The work they have conducted and report is likewise all from the Western world. All but 11 citations referenced in the papers have first authors whose host institution is in the West.


We cannot truly report on analytics and AI for the future without a global perspective, and as a community we must seek ways to increase our diversity. The boundaries of the Internet are very different to those of the physical geography of the world, and yet we can learn a great deal from the ways in which this geographical reality has been discussed and conceptualised. For example, the Silk Road countries are rising, and those countries matter greatly in the 21st century (Frankopan, 2015). Just as our world was shaped by the ideas and the wealth that moved between East and West along this road, we anticipate that LA/AIED will be shaped by countries whose economies and influence are growing rapidly, whose technology infrastructure and digital sophistication are significant, and whose investment in LA/AIED is greater than that of many of the countries where our authors and our data are located or derived from. India and China are the most populous nations in the world, and Russia has the greatest landmass, yet no authors or data reported in this issue emanate from these countries. Diversity cuts both ways, however, and can be challenging. Since educational infrastructure embodies values, it should be no surprise to see clashing politics, pedagogies and practices.

Taking China as an example, we see government and commercial investment in AIED that rivals the rest of the world. However, as Knight et al. (2014) argue, “our learning analytics are our pedagogy”—educational technology is always recruited to advance the prevailing conception of teaching and learning, and the pace of change of the latter is much slower than that of the former. Thus, Chinese LA/AIED is at present dominated by companies developing adaptive tutoring for skills mastery, often in preparation for conventional high-stakes exams (eg, Chen & Fan, 2018; Qi & He, 2019). Similarly, while many countries are researching facial analysis in the lab, the large-scale deployment of facial recognition cameras in school classrooms (Chen, Luo, & Xu, 2018; Li, Li, Zhou, & Song, 2017) clearly reflects a configuration of politics, pedagogies and practices that many would question. While it is surely only a matter of time before LA/AIED expands to embrace other pedagogies (eg, see the contrasting examples in Hao, 2019), concerns around privacy and scientific validity will always be relevant.


Conclusion

In Tibetan Buddhist thinking, every person is a universe: we all see and feel differently. LA/AIED must recognise and value this diversity as we move into an increasingly quantified future. If we are to truly develop the Politics, Pedagogy and Practices that can take us through the next decade, we must find ways to design that are multicultural, multidisciplinary and harness multiple human intelligences. These hallmarks are not yet as visible as they could be in the LA/AIED community, but they are precisely those qualities that distinguish us from machines.

From: https://bera-journals.onlinelibrary.wiley.com/doi/full/10.1111/bjet.12880
