This article accompanies Ruth Ahnert and Sebastian E. Ahnert’s new History Workshop Journal article, ‘Meta-data, Surveillance, and the Tudor State’ – which is free to access until June 2019.
In 2013 the whistle-blower Edward Snowden revealed to the world that the US National Security Agency had engaged in massive-scale ‘dataveillance’. Dataveillance is the practice of monitoring the data that results from activities such as credit card transactions, phone calls, emails, and use of social networking platforms. Unlike traditional surveillance, which begins with a known target, dataveillance casts a wide net, collecting data from all available sources and then using it to reconstruct and analyse social networks, patterns of communication, and movements. The NSA applied computational analysis, including methods from the field of network analysis, to this data in order to identify potential threats to national security. However, similar practices are used by a range of other organisations for other, less defensible purposes.
The principles of such practices can also be turned to the study of history, as we have shown in an article recently published in History Workshop Journal, which uses network analysis to study an archive of more than 132,000 communications collected by the Tudor government (surviving in the State Papers archive). Our work highlights the story of another Snowden, who might also be described as a whistle-blower. John Snowden (an alias of John Cecil) was a Catholic priest in exile from Elizabeth I’s Protestant regime, recruited by William Allen and Robert Persons, two leading Catholic conspirators who worked with Philip II of Spain on various plots to remove Elizabeth from the throne in favour of a Catholic monarch. Snowden was sent to England as a spy, but his ship was intercepted, and he was arrested and interrogated by Elizabeth’s right-hand man, William Cecil, Lord Burghley. Realising the seriousness of his situation (he stood to be tried and executed for treason), Snowden offered himself up to the Tudor state as a double agent. Burghley accepted his offer, and Snowden became a valuable asset.
John Snowden became important to our work because he unlocked a pattern for us, alerting us to the way network analysis could be used to discover other spies and conspirators within a massive archive of documents. Snowden is a good example of how anomaly detection works. In data mining, anomaly detection is the identification of items, events, or observations which do not conform to an expected pattern in a dataset. To understand communication patterns in our correspondence archive, we generated graphs plotting various network measures against one another. In one, we took a measure of the number of people a given person communicated with (degree) and plotted it against a measure often used to gauge how important a person was in the transmission of information across disparate places or isolated communities (betweenness centrality). The graph below shows these two measures plotted against each other for communications sent in the 1590s.
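For readers curious about the mechanics, here is a minimal sketch of how these two measures might be computed and plotted using the Python library networkx. The toy edge list is purely illustrative – the names are borrowed from this post, not drawn from our data – and assumes each pair records two people who exchanged letters.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical toy edge list: each pair records two people who exchanged letters.
letters = [
    ("Burghley", "Snowden"), ("Burghley", "Walsingham"),
    ("Snowden", "Allen"), ("Snowden", "Persons"),
    ("Allen", "Persons"), ("Walsingham", "Phelippes"),
]
G = nx.Graph(letters)

# Degree: the number of distinct correspondents each person has.
degree = dict(G.degree())

# Betweenness centrality: how often a person lies on the shortest paths
# between other pairs of people, a proxy for their bridging role.
betweenness = nx.betweenness_centrality(G)

# Plot one measure against the other, as in the graph described above.
names = list(G.nodes())
plt.scatter([degree[n] for n in names], [betweenness[n] for n in names])
plt.xlabel("degree (number of correspondents)")
plt.ylabel("betweenness centrality")
plt.show()
```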
What you will notice about this graph is the diagonal trend: it tells us that the more people a given person corresponds with, the more likely they are to create short paths across the network, linking communities and locations – what we might describe as a bridging function. However, there are also outliers in this graph. The people appearing a notable distance below the trend line have a surprisingly high bridging function in the network despite a relatively low number of correspondents. The biggest outlier amongst them is Snowden.
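How might such outliers be surfaced computationally? Our article describes the analysis in detail; as a simple illustration only (not the procedure we actually used), one can fit a straight-line trend to the two measures and rank people by how far their betweenness exceeds what their degree alone would predict:

```python
import numpy as np

def bridging_outliers(degree, betweenness, n=5):
    """Rank people by how far their betweenness centrality exceeds the
    straight-line trend predicted from their degree: a crude form of
    anomaly detection. Both arguments are dicts keyed by person."""
    names = list(degree)
    x = np.array([degree[p] for p in names], dtype=float)
    y = np.array([betweenness[p] for p in names], dtype=float)
    slope, intercept = np.polyfit(x, y, 1)   # fit the diagonal trend
    residuals = y - (slope * x + intercept)  # distance above/below the trend
    ranked = sorted(zip(names, residuals), key=lambda pair: -pair[1])
    return ranked[:n]  # the strongest bridgers relative to their degree
```

Figures like Snowden, with an unusually high bridging score for their number of correspondents, would appear at the top of such a ranking.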
But he is not alone. Around him, as we observed, were many other figures engaged in conspiracy, including William Allen, Robert Persons, Charles Paget, Thomas Fitzherbert, William Douglas, William Crichton, and Francis Dacre. Others, like Snowden, were double agents or had offered themselves as agents to the Tudor regime, among them John Daniel and William Sterrell (who worked with the cryptographer Thomas Phelippes). But what is the reason for this pattern? By looking at their letters we could see that this network profile recurs among spies, double agents, and conspirators because such figures bridge otherwise separate (and indeed often oppositional) communities.
This suggested to us that we could create a predictive model. We therefore developed a method that used eight common network measures to create a ‘network fingerprint’ for each of the correspondents in our archive. We could then use that fingerprint to identify the people with the most similar profiles. The method was surprisingly adept at discovering similar types of people. For example, of the fifteen people most similar to the exiled Catholic conspirator William Allen, thirteen were also Catholic men who were perceived to present foreign threats to England’s security.
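In outline, such a fingerprint can be built by computing several network measures for every person, normalising them, and ranking everyone by their distance from a chosen individual. The sketch below is illustrative only: the five networkx measures and the Euclidean distance are stand-ins, since the eight measures and the similarity metric we actually used are detailed in the article rather than in this post.

```python
import numpy as np
import networkx as nx

# Illustrative stand-ins for the article's eight measures.
MEASURES = [
    lambda G: dict(G.degree()),
    nx.betweenness_centrality,
    nx.closeness_centrality,
    lambda G: nx.eigenvector_centrality(G, max_iter=1000),
    nx.clustering,
]

def fingerprints(G):
    """Return node names and a matrix of normalised per-person measures."""
    values = [measure(G) for measure in MEASURES]
    names = list(G.nodes())
    X = np.array([[v[n] for v in values] for n in names], dtype=float)
    # Normalise each measure so that no single one dominates the comparison.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    return names, X

def most_similar(G, person, k=15):
    """The k people whose network fingerprints sit closest to person's."""
    names, X = fingerprints(G)
    dist = np.linalg.norm(X - X[names.index(person)], axis=1)
    ranked = sorted(zip(names, dist), key=lambda pair: pair[1])
    return [name for name, _ in ranked if name != person][:k]
```

Given a correspondence graph G, most_similar(G, 'Allen', k=15) would then return the fifteen people with the closest profiles.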
Such a method, however, is not limited to finding conspirators. It can also help to reveal patterns of correspondence that mark out diplomats or foreign leaders. More importantly, perhaps, this method shows us how, by using network properties rather than human-assigned categories, we can begin to understand group identities in different ways, including the shadings and slippage between those categories, thereby destabilising them in productive ways. For example, this approach groups together people who were under surveillance but may have had little in common in terms of their role in society, such as foreign monarchs and conspirators. This shows that the structural similarity we see is only partly about the epistolary practices and social position of correspondents; it is also about the making of the archive. Our predictive methods, therefore, not only reveal the characteristics of an individual’s communication practices, but also the traces of government surveillance, telling us precisely who was being targeted.
By using data-driven methods similar to those employed by modern governments and agencies, we are able to understand something about the surveillance practices of historical governments. This approach helps us to look deeper into the historical archive by suggesting areas that may merit closer attention. But aside from the application of these methods to historical scholarship, these findings also lead us to consider what we can infer about the methods of government agencies and private companies undertaking network analysis on citizens. Attempts to place checks on these activities – such as the Supreme Court case Carpenter v. United States in 2018, or the inquiry into what Facebook knew about Cambridge Analytica’s activities – show the growing realisation that we need to place the dataveillance practices of governments and companies themselves under surveillance.
However, the onus should not rest solely with political and judicial institutions. The scholarly community has a duty to think deeply about the ethics and practicalities of data gathering. This is already happening in computer science and the digital humanities, but historians have an important voice to add. The history of surveillance offers vital lessons for the current moment. But it is important that historians arm themselves with an understanding of how computational analysis works. If we believe that part of the job of the historian is to agitate for change in the world today, we need to seek greater understanding of the workings of technology. The more we engage with the power that big data and digital methods hold for our work, the more we can help to shape the development of theoretical discourse around data and surveillance in ways that make sense for the humanities, and for society at large.