Is the Science behind AI just Alchemy?

In Primo Levi’s celebrated short story collection The Periodic Table, the story titled “Chromium” illustrates how our collective ways of behaving incorporate procedures whose justification no longer applies over time. While working at a paint manufacturing company, Levi found that a certain batch of paint had turned solid due to an accidental excess of chromium oxide. In response, he added ammonium chloride to the paint to make it liquid again, and recommended continuing to do so until that batch was used up. He then left his job, but when he returned ten years later, he found that people were still adding ammonium chloride despite the bad batch having long been replaced: "And so my ammonium chloride, by now completely useless and probably a bit harmful, is religiously ground into the chromate anti-rust paint on the shore of that lake, and nobody knows why anymore."

According to AI researcher Ali Rahimi, something analogous is happening in the field of AI research today. Last December, he argued that the use of machine-learning algorithms had become a form of alchemy, since the researchers developing and using them don’t know why their algorithms work when they work, or why they fail when they fail.

Algorithms are tweaked and tested by trial and error until they succeed against benchmarks, but it really isn’t possible to pinpoint whether the success is due to the core algorithm or whether peripheral add-ons are doing all the heavy lifting. Rahimi thinks this is an unhealthy state of affairs and urges greater attention to explanations and root causes. He must have been onto something, because his talk received 40 seconds of standing applause from the audience.

Not everyone agrees with Rahimi, however. According to Facebook’s Yann LeCun, Rahimi is fundamentally wrong: while understanding is certainly good wherever you can get it, understanding often only follows the creation of methods, techniques, and even tricks. To insist that new technology be created only where understanding is already possible would be to cripple innovation. LeCun makes the claim concrete by arguing that precisely this attitude is why neural nets didn’t get the attention they deserved for over ten years.

Still, I get the sense that Rahimi and LeCun are arguing past each other, because there’s no indication that Rahimi wants the kind of comprehensive understanding that would stifle innovation so much as a more rigorous approach that avoids common pitfalls. In a recent paper, for example, he calls for measures such as:

  • Breaking down performance measures by different dimensions or categories of the data

  • Including full ablation studies of all changes from prior baselines, testing each component change in isolation and a select number in combination

  • Informing our understanding of model behavior with intentional sanity checks, such as analysis on counterfactual or counter-usual data outside the test distribution

  • Finding and reporting areas where a new method does not perform better than previous baselines

These are clearly not intended to stop progress, but to ensure a more sustainable model of growth. Still, the question of whether this will actually generate better results is one that cannot be answered through armchair philosophy — we’ll simply have to give these methods a shot and see if they prove fruitful.
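
To get a feel for what the first two measures look like in practice, here is a rough sketch in Python. Everything in it is a stand-in: a synthetic dataset, a plain logistic regression, and a single made-up "component" to ablate (feature scaling). It is meant only to show the shape of per-slice reporting and a one-factor ablation, not anything from Rahimi's actual paper.

```python
# Hypothetical illustration of slice-level reporting and a one-factor
# ablation. The data, the model, and the single "component" being
# ablated (feature scaling) are toy stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic data: two features plus a categorical "slice" label
# (imagine data coming from three different sources).
n = 2000
X = rng.normal(size=(n, 2))
slice_id = rng.integers(0, 3, size=n)
y = (X[:, 0] + 0.5 * slice_id + rng.normal(size=n) > 1).astype(int)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, slice_id, test_size=0.5, random_state=0)

def evaluate(use_scaling):
    """Train with or without one component and report accuracy
    overall and broken down by slice."""
    X_train, X_test = X_tr, X_te
    if use_scaling:
        scaler = StandardScaler().fit(X_tr)
        X_train, X_test = scaler.transform(X_tr), scaler.transform(X_te)
    preds = LogisticRegression().fit(X_train, y_tr).predict(X_test)
    report = {"overall": round(float((preds == y_te).mean()), 3)}
    for s in np.unique(s_te):
        mask = s_te == s
        report[f"slice_{s}"] = round(float((preds[mask] == y_te[mask]).mean()), 3)
    return report

# Ablation: toggle the single component and compare, slice by slice.
for use_scaling in (True, False):
    print(f"scaling={use_scaling}:", evaluate(use_scaling))
```

The point of the per-slice breakdown is that an encouraging overall number can hide a subgroup where the change actually hurts, which is exactly the kind of pitfall Rahimi wants reported rather than papered over.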
