With the increasing use of machine learning and artificial intelligence in every domain, it is now commonplace to find reports about how humans are likely to become increasingly otiose in the coming world. A more clear-eyed analysis of how technology is actually being used, however, reveals that these pronouncements are premature, and that an alternative (and more plausible) outcome is one where technology doesn’t replace human labour but supplements it in complex ways.
An excellent example of this kind of work is documented in a 2016 Proceedings of the National Academy of Sciences paper by a team from Tel Aviv University. A central question in biblical scholarship concerns when exactly the various parts of the Bible were written, a question made particularly difficult because we have so little background information about life 2,500 years ago. Some traction on this question was gained through the innovative use of machine learning algorithms to estimate the level of literacy in the community, giving us an idea of whether people in that community would have been capable of producing a work of enormous complexity such as the Bible. The project considered 16 inscriptions found in the area of the desert fort of Arad (figure 1).
Each of these was an ostracon, a piece of broken pottery used to write on, as in figure 2. Notice how it is chipped, meaning that typically only brief excerpts survive. In addition, over time the writing can fade, making it difficult to read, let alone to compare and contrast different pieces. That’s where the tech comes in.
After restoring the script as much as possible, the researchers used machine learning software to identify individual characters and then compare the same letter across different ostraca on a range of metrics: overall shape, the angles between strokes, the character’s centre of gravity, and its horizontal and vertical projections. Allowing for some range in handwriting variability, the programme would identify distinct authors through letters which exceeded a threshold of difference. Through this method, the authors concluded that there were a minimum of six authors for the artifacts they had.
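The general shape of such a comparison can be sketched in a few lines of code. The sketch below is purely illustrative: it uses two of the metrics the article mentions (centre of gravity and horizontal/vertical projections) on toy binary glyph images, with a feature vector, distance measure, and threshold that are my assumptions, not the paper’s actual algorithm.

```python
from math import sqrt

def letter_features(img):
    """Feature vector for a binary glyph given as a list of rows of 0/1 values.

    Combines the glyph's centre of gravity (normalised to the image size)
    with its horizontal and vertical ink projections.
    """
    h, w = len(img), len(img[0])
    total = sum(map(sum, img))
    cog_y = sum(y * sum(row) for y, row in enumerate(img)) / (total * h)
    cog_x = sum(x * v for row in img for x, v in enumerate(row)) / (total * w)
    h_proj = [sum(row) / total for row in img]                      # ink per row
    v_proj = [sum(row[x] for row in img) / total for x in range(w)]  # ink per column
    return [cog_y, cog_x] + h_proj + v_proj

def distinct_hands(img_a, img_b, threshold=0.5):
    """Flag two specimens of the same letter as different writers when the
    Euclidean distance between their feature vectors exceeds a threshold.
    The threshold value here is an arbitrary illustrative choice."""
    fa, fb = letter_features(img_a), letter_features(img_b)
    dist = sqrt(sum((a - b) ** 2 for a, b in zip(fa, fb)))
    return dist > threshold

# Toy 5x5 glyphs: a vertical stroke versus a horizontal stroke.
vertical = [[1 if x == 2 else 0 for x in range(5)] for _ in range(5)]
horizontal = [[1 if y == 2 else 0 for _ in range(5)] for y in range(5)]
```

A pair of very differently shaped strokes would exceed the threshold, while two copies of the same glyph would not; the real system works on restored photographs of the ostraca and many more features, with the threshold calibrated against known handwriting variability.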
This was clearly a case of machine learning performing tasks that humans cannot dream of doing with the naked eye, and someone who wanted to push the narrative of a coming apocalypse of job losses for human beings could treat this research as confirming their world view. But a closer examination of the methodology reveals a more complex story.
Although the programme did identify at least six authors, this fact by itself says very little about the extent of the literate population — after all, it could be that only six people in the area were literate, or that many more were. To make inroads on this question, the results of the programme were analyzed by human researchers, who constructed a model of the hierarchical relationships between the authors and intended recipients of each message.
Since people from every sociopolitical stratum appeared to be represented, the authors concluded that it was likely that literacy was widespread among the inhabitants of the area in the kingdom of Judah near Fort Arad around 600 BCE.
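The inference step here — writers at every rank implies literacy beyond a scribal elite — can be made concrete with a toy sketch. Everything below is invented for illustration: the names, ranks, and messages are hypothetical stand-ins, not the paper’s reconstructed correspondence.

```python
# Each person in the hypothetical fort hierarchy gets a rank.
ranks = {
    "army_commander": "high",
    "fort_commander": "high",
    "deputy": "mid",
    "quartermaster": "mid",
    "storehouse_clerk": "low",
    "soldier": "low",
}

# Each message is an (author, recipient) pair, as identified by handwriting.
messages = [
    ("army_commander", "fort_commander"),
    ("fort_commander", "quartermaster"),
    ("deputy", "storehouse_clerk"),
    ("soldier", "quartermaster"),
]

# Which strata do the identified writers come from?
writers = {author for author, _ in messages}
strata_covered = {ranks[w] for w in writers}

# Literacy looks widespread when writers appear at every level of the hierarchy.
widespread = strata_covered == {"low", "mid", "high"}
```

In this toy data the writers span all three ranks, so the coverage test passes; the actual paper’s argument rests on the same logic, applied to the roles that scholars infer from the content of the inscriptions themselves.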
For the wider audience, the lesson from this study is that we shouldn’t be too certain that machine learning and AI will mean the end of jobs, since older ways of working can be modified to incorporate technology while still relying substantially on human minds and hands. The effects of the coming machine learning revolution should not be prophesied about in general terms; instead, we should engage in nuanced studies and projections of individual fields and sub-fields.
The future is neither completely opaque nor transparent, and what we can glean about it is almost definitely going to be fragmentary, tentative, and context-dependent, instead of a single grand narrative.