It happened on May 11, 1997. After a defeat, a victory, and three draws, Deep Blue, the chess computer developed by IBM, won the sixth and decisive game of its historic match against Garry Kasparov. The Russian grandmaster, incredulous and upset, did not take the defeat well. Given the machine’s behavior during the game, he even protested: some of its moves seemed to indicate human intervention.
Today, more than twenty-five years later, the most amazing thing is that Kasparov thought it possible to keep beating a computer that was already capable of analyzing one hundred million moves per second. It is ironic, then, that his defeat was later speculated to stem from a software bug: Kasparov, it is said, interpreted a move that resulted from a glitch as strategic. According to Nate Silver, who tells the anecdote in The Signal and the Noise, it was that move, enigmatic in its aims, that fatally distracted the chess player.
The controversy surrounding Artificial Intelligence is back. Now, however, it is not only through its capacity for calculation that the machine, in the era of “big data” and “deep learning,” threatens to surpass human intelligence. Take the case of OpenAI. With DALL-E and ChatGPT, Artificial Intelligence invades the domains of creation and knowledge: in the former case, a program capable of creating images in the style of this or that artist, extending their masterpieces, and crossbreeding their styles; in the latter, a program capable of producing text by gathering, synthesizing, and cross-referencing information in informal conversations with the user. Disbelief and unease are spreading. Human pride is wounded. Was Kasparov’s defeat not enough? Do they now want to dethrone Vermeer, Beethoven, Kant?
Let us not be mistaken. The concerns raised by these programs are not merely speculative. They are also practical. How should we deal with copyright when it covers works but not styles? Is it legitimate to claim authorship of a work partially generated by algorithms? How can schools and universities around the world prevent students from cheating? How should teachers react to the use of this technology? Is the essay dead, as Stephen Marche suggests, in a somewhat apocalyptic tone, in an article published in The Atlantic?
Aware that the impact of Artificial Intelligence encompasses both the domains of knowledge and creation, I decided to conduct an experiment in a Philosophy of Music class dedicated to discussing the concept of genius. I proposed listening to a symphonic piece: nothing less than Beethoven’s “10th Symphony,” composed in . . . 2021. We were thus listening to a composition generated with the help of an algorithm from fragments left by the composer, the result of a project in which musicologists, composers, and programmers collaborated to present what could have been Beethoven’s final symphony. It is up to each individual to judge the result. Is it too predictable? Does it fall short of the Master of Bonn’s previous nine symphonies? Is it better or worse than Barry Cooper’s 1988 attempt based on these same fragments? One thing is certain: the composition “sounds like Beethoven.”
At the end of the class, I asked the students to write a brief essay on this project, which we would discuss in later classes. My purpose was twofold. On the one hand, taking advantage of the recognition of “similarity,” I was interested in deconstructing the myth of genius, in its association with the idea of “innatism” and the cliché of “inspiration.” On the other hand, in a reverse movement, I was interested in mobilizing the concept of genius, as Kant presents it in the Critique of Judgment, to problematize the discourse surrounding the “creativity” of Artificial Intelligence. For Kant, the work of genius, unlike great scientific discoveries, is not reducible to rules. There is no calculation that allows it to be produced or explained. This is a strong idea that, apart from the romantic rhetoric of genius, still challenges us.
An algorithm can compose like Beethoven: it can, by following certain rules, instructions, and patterns, emulate his style. But it cannot “err” like Beethoven. It cannot become excited. It cannot become anxious. Above all, it cannot not compose like Beethoven. It is condemned to imitation. Paradoxically, only Beethoven can not compose like Beethoven. Only he could hesitate and, prodigiously, give up and take a risk—as when, in the Piano Sonata, Op. 110, from an impasse emerges, in the blink of an eye, an improbable and irresistible fugue that leans toward the future while nodding to the past.
The reading and discussion of these essays, which led me to many other questions beyond those that I intended to discuss, reinforced in me the conviction that the essay is not dead. As astonishing and useful—without irony—as the texts generated by Artificial Intelligence may be in their ability to collect, synthesize, or organize information, they fall short of certain intellectual operations, which unfold individually and collectively in the field of humanities and which are, in the transition from dialogue to writing, the heart of the essay: the raising of hypotheses, the questioning of assumptions, the search for blind spots, the construction of concepts, the untimely mobilization of tradition, the choice of emblematic cases, the recognition of affinities.
However, if the essay is not dead, a certain form—scholastic, dusty, lazy—of practicing it is dead. For one thing is certain: with programs like ChatGPT, there is no excuse for articles, papers, and classes that are limited to presenting information. Ironically, in an era when the humanities are constrained in their independence and ambition by the logic of production and profit, it would not be the least of the virtues of Artificial Intelligence to force them, first of all, to recognize their irreducibility to yesterday’s and today’s positivisms, and secondly, to assume their critical vocation.
In a particularly lucid article on the uses of Artificial Intelligence, Marco Donnarumma argues that Artificial Intelligence is largely “soft propaganda” for the “ideology of prediction” that dominates the Global North. It is not a matter of denying the usefulness of Artificial Intelligence: in medicine, in engineering, and certainly in culture. It is rather a matter (and one does not preclude the other) of critically examining the assumptions on which its exaggerated valorization rests. Everything would be calculable; everything would be predictable; everything would be controllable. Calculate to predict; predict to control—that is the project. “Big data,” “deep learning,” and “AI” form a system.
Concerns about the uses of Artificial Intelligence are not only a consequence of technological advances since Kasparov’s time. They are also a symptom of the impoverishment of our understanding of the world, and of what human communication, experience, and intelligence mean. The social sciences and humanities are themselves victims of this. Witness the quantification of research, the reduction of the qualitative to the quantitative, the emphasis on productivity, the corporate transformation of the university, and the privatization of knowledge. But the response cannot be merely defensive. The importance of the humanities is not an inheritance. It can and should be a conquest.
Human intelligence is plural: mathematical, spatial, emotional. Now, if there is an intelligence proper to the humanities, it is critical intelligence. To it we owe, from the outset, the recognition that there are various intelligences. Just as we owe it the understanding that, once the plurality of intelligences is recognized, the discussion opens up about their uses, their ends, and their value. In this discussion, in which no calculator can do all the calculations, it is also up to the humanities to try out a less prejudiced, but never less cautious or critical, view of technology.
In a way, we made the opposite mistake to Kasparov’s: the mistake of attributing to the machine, with its unsurpassable ability to calculate, a type of intelligence that it does not possess. We point the finger at it in fear and disbelief. We get distracted . . . precisely when we should be most focused on the game.
Nice essay. But I’ll start paying more attention once A.I. can solve poverty, intolerance, and wars, and tell me how I can get a 10% annualized risk-free rate of return without the eternal recurrence of stock-market crises, which are driven by the quants who ignore human psychology. You know what, I’ll settle for an answer in which everyone’s personal rate of return does not exceed society’s economic growth. This fear of AI sort of reminds me of the Yahoos who look at a Jackson Pollock on one hand or a Jeff Koons on the other and say, I could have done that. Maybe, but you didn’t think about it, which I guess is the point of the essay, or at least should be. As Plato reminds us, the truth is not in the response; it’s in the question.