Jared Kaplan, the chief scientist at Anthropic, has delivered a sobering assessment of humanity's looming crossroads with artificial intelligence, suggesting that a decision with profound historical consequences may confront the world far sooner than many anticipate.
In an interview with the Guardian, Kaplan outlined a critical juncture at which society must determine whether to permit AI models to train and improve themselves without human oversight, a choice he describes as the ultimate risk. This pivotal moment could materialize between 2027 and 2030, according to his timeline, far earlier than widely expected.
Kaplan paints a vivid picture of the potential outcomes. If humans opt for unsupervised self-improvement, the result could be an intelligence explosion leading to the emergence of artificial general intelligence, or AGI, a system matching or surpassing human cognitive capabilities. In the most optimistic scenario, this breakthrough would unleash extraordinary advances across science, medicine, and technology, transforming human progress in ways previously unimaginable.
Yet Kaplan emphasizes a darker possibility, in which AI power grows uncontrollably, slipping beyond human grasp. He acknowledges the chilling uncertainty of this path, admitting that its final destination remains profoundly unknown.
Kaplan stands alongside other prominent voices in the field expressing deep concern. Geoffrey Hinton, widely regarded as a godfather of modern AI, has voiced regret over his contributions and repeatedly highlighted existential threats. Sam Altman, chief executive of OpenAI, has warned that AI could obliterate entire categories of jobs, while Dario Amodei, Anthropic's CEO and Kaplan's boss, has predicted that more than half of entry-level office roles face elimination.
Kaplan aligns with these views, forecasting that within two to three years AI will handle most office tasks. His primary anxiety, however, centers not on job displacement but on granting AI the autonomy to develop subsequent versions of itself.
Despite these grave reservations, Kaplan sees little likelihood of halting AI advancement altogether. He concludes on a note of resigned momentum, suggesting that while today's models might represent the pinnacle of safety and capability so far, the prevailing belief within the industry is that improvement will continue unabated.