From in-class polling tools to university-wide learning management systems, software applications – apps – are ubiquitous on campuses nowadays. By collecting and crunching data from staff and students, these programs play a key role in supporting students and helping the modern university run smoothly.
These data, however, can be highly sensitive – and very valuable. Add the fact that many of these apps are created by third-party companies and you have, some believe, a ticking data protection time bomb that threatens to upend pedagogical partnerships for the sake of private profit.
“It’s everything from personal preferences, to knowledge levels, to the work [students] are physically doing that they submit through online portals. That information is owned, and it can be analysed,” said Pete Eyre, managing director of Vevox, a live polling and student engagement app that learners use anonymously, meaning their details can’t be harvested.
“There are very few checks in place to decide whether [an edtech company] is doing it for the educational service or whether it is there to collect data that could offer value in a different way.”
The long list of tools that harvest student – and staff – data includes learning analytics systems, virtual learning environments, plagiarism checking software and library usage monitors, all of which provide a wealth of information about the thousands of individuals on a single campus.
“Collection of student data in universities runs along a pretty vast spectrum now,” said Ben Williamson, chancellor’s fellow at the Centre for Research in Digital Education at the University of Edinburgh. “You’ve got all the official academic records that have been collected for a long time but are now digitised, and new technology constantly emerging.”
Of course, even when data collection is incidental to an app’s primary purpose, reusing those data is typically not illegal, because the reuse is set out in the terms and conditions – the “small print” that everyone accepts without reading.
Once out of the hands of the student – and the university – the data can be sold on to partners or exploited for targeted advertising. And when academic data sources are combined, the commercial opportunities multiply.
Nevertheless, in the wake of the scandal over Cambridge Analytica’s exploitation of Facebook data, people around the world are becoming more aware of the dangers of signing away their privacy.
Neil Morris, dean of digital education at the University of Leeds, said that what happens to student data had become a growing problem in higher education. “Many universities are increasingly using external systems from a range of different suppliers, and the complexities of movement of data around those systems are much more of a headache than they were in the past,” he said.
Professor Morris said that universities were becoming aware of the need to vet new suppliers. “You choose a system but what happens next? Where do those data go? Into the cloud, which is often not hosted in the UK – or even in Europe. How secure will it be? Will you be sharing it with your partners? How long will it be stored for? It’s a really complicated space,” he said. “For someone in my role it’s not something we’re used to navigating.”
Professor Morris said he had rejected software providers that he felt could not protect student data adequately, although he added that, even with as much due diligence as possible, it was sometimes hard to be entirely certain.
“The responsibility lies with the university. We have the contract with the student; we have said we will look after their data,” Professor Morris said.
The problem is easier to address for central university staff who select technology for campus-wide use. It can be more challenging when individual lecturers bring what they see as exciting edtech products into the classroom off their own bat.
“That’s part of what universities should be doing: trying things out, finding ways to help or interact with students,” Professor Morris said. “The problem is [that academics] sign a small-scale licence, put their students in it, data is thrown around the place, and no one thinks about what’s happening.”
Mr Eyre agreed. “It is really common that a lecturer will find an app or a piece of software and sign up to use it in their classroom independently,” he said, adding that in his experience awareness about how information was used in such cases was “mixed”.
In some cases, a user who pays for an app retains control over their data, but one who uses it for free grants the rights to the company that made it. “So you might have two lecturers in one university using the same piece of software on very different terms,” Mr Eyre explained, adding that some other in-class polling tools embedded cookies so that advertising could be targeted at students based on their responses.
“It’s all very well having terms and conditions, but people don’t really look at that. You wouldn’t expect a student who was asked to download an app in a lecture to read them then and there. They are trusting that the lecturer or their institution has checked the terms and conditions,” Mr Eyre said.
He argued that it was best for universities to implement institution-wide policies, even if this was time-consuming and stifled the ability of lecturers to experiment in the classroom.
Jesse Stommel, executive director of the division of teaching and learning technologies at the University of Mary Washington, said that edtech providers had a responsibility to do more than just legally protect themselves with terms and conditions. “The onus has to be on the tech companies themselves to educate the users about data security and data monetisation… say ‘here’s why I’m collecting it, here’s what I hope to do with it, here’s why it should matter to you’,” he said.
For Dr Stommel, there was also a danger when technologies were adopted widely across campuses and every student or lecturer was required to use them. “When certain companies become universal, staff and students don’t have a way to say ‘I won’t use it because I don’t want them to have my data’,” he said.
The fact that certain products had become so widely adopted, such as the plagiarism checking software Turnitin, was another reason to be cautious about data protection, he said.
Turnitin, which was sold last year for $1.8 billion (£1.4 billion), has been accused of monetising students’ intellectual property, since it works by checking submitted papers against an ever-growing database of previously submitted essays and detecting any similarities.
“Companies can start off small and they say ‘we will be good stewards of this data, we’re small, we talk to each other’ but then that company achieves more and more success and it doesn’t necessarily have the standards in place to maintain that,” said Dr Stommel, speaking generally. “Then what happens when they are bought out? What are the ethics of the company that has purchased them? What happens to the student data then?”
Dr Stommel said that the most “moral” thing to do was for companies to collect as little data as possible but admitted “no company is approaching it in that way”.
Edinburgh’s Dr Williamson agreed that the bigger companies become, the more data they hold, and the more valuable they and their data are. In some instances, they are “becoming so valuable that they are attractive to highly opaque private equity firms that will ultimately own and have some level of authority over the huge databanks that are being collected”, he said.
It is a debate that raises big questions about commercialisation of education, Dr Williamson said. Higher education is supposed to be a public good but now universities “are becoming the profit-making centre for a lot of huge multinational corporations”, he warned.
Aside from the use of data for profit, Dr Williamson and Dr Stommel both raised concerns about how student data were used within education itself, as data from learning management systems and learning analytics have been touted as a useful way to predict students’ behaviour.
Learning analytics, which tracks student engagement with services such as libraries and virtual learning environments while collecting data on issues such as grades and attendance, was supposed to be about improving teaching and learning, Dr Williamson said. However, there was “deep concern” that it would instead be used as a proxy for teaching quality in performance management.
There was also a huge question about whether such learner models were actually reliable guides for what students might do in future.
“There is the serious issue of the use of past data, because some datasets contain all sorts of inbuilt biases,” Dr Williamson said. “That’s a significant issue, particularly around students from poorer backgrounds or ethnic minorities, who end up discriminated against because of predictions made from the data itself.”
Students needed to be much more involved in sector-wide conversations about what constituted acceptable use of their data, Dr Williamson concluded.
“Academics are increasingly saying we need to engage much more carefully and closely with the issue,” he said. “We need to start thinking about what kind of response we are going to have.”
Print headline: Appy campus: what happens to data collected through edtech products?