The pace of AI development in Silicon Valley has turned fast and furious. ChatGPT, Bard, Gemini, OpenAI, Bing AI – hardly anyone had heard of these applications and companies just 15 months ago. China, too, has emerged as a frontrunner in developing and applying AI and digital technologies.

Now French company Mistral AI and German company Aleph Alpha, with funding from SAP and Bosch, have raced to catch up, raising hundreds of millions of euros in seed funding. Ominously, fears about the dangers of this new technology have taken a backseat to commercial interests and national security priorities. Precautionary principle be damned: it’s a new gold rush in the making, and the race is on to stake a claim.

Meanwhile, governments are struggling to keep up with regulatory guardrails. The EU has developed a global reputation as the ‘conscience of the Internet’ in regulating Big Tech. However, its recent efforts to pass the first AI Act in the Global North ran into a surprising volte-face from several member states, including Germany, France and Italy.

The three largest EU member states used their influence to upend months of negotiation and compromise, threatening to turn the pioneering legislation into mush. Fears had suddenly emerged in certain corridors of influence that the EU would undermine its own companies’ ability to innovate and catch up with Silicon Valley and China. It was likely no coincidence that a co-founder of Mistral AI is a former French state secretary with direct access to President Emmanuel Macron.

The role of the military in technological development

France and Germany have long had a bad case of ‘Silicon Valley envy’, and, consequently, they are in danger of adopting the wrong strategy for AI development. While Europeans have lamented ‘where are the European Googles, Apples, Microsofts and Amazons’, and now ‘where are the European OpenAIs and Teslas’, they forget that much of that technology has its roots in massive military spending that has subsidised Silicon Valley for decades.

Going back to the 1930s and World War II, the San Francisco area has long been a major site of US government research and technology. In the 1950s, nearby Stanford University became a research magnet that attracted top tech talent to companies like Fairchild Semiconductor and Bell Telephone Laboratories, focused on military priorities such as responding to the Soviet Union’s Sputnik satellite.

Familiar technologies – the military-funded ARPANET, which became the first version of the Internet, Apple’s Siri, the World Wide Web, Google Maps, automated vehicles, artificial intelligence itself – all began life as military projects.


With that stable base of investment from public tax dollars, Silicon Valley venture capitalists had the luxury of rolling the dice with their private money on new companies and technologies. Seven out of 10 Silicon Valley start-ups fail and 9 out of 10 never earn a profit. But the ones that win in this investment casino – OpenAI, Google, Amazon, Meta/Facebook – have become hugely dominant and profitable.

China is now engaged in a similar state- and military-sponsored strategy. Is Germany or the EU prepared to launch such a high-risk, military-subsidised and expensive course of development? It seems unlikely.

EU funding levels are far behind the US and China

The EU has been trying for several years to increase funding and investment for AI development, but with limited success. Too often, there have been grand announcements about new funding sources from the EU, or Germany, France and the UK, but the amount of money actually laid ‘on the barrel head’ has been modest.

In 2022, private AI investment in the US amounted to $47.4 bn, easily surpassing China’s private investment of $13.4 bn. In the EU, private investment amounted to only about $6.6 bn, with the UK alone almost matching that at $4.4 bn. Over the last decade, the disparity has been even greater – American private investment climbed to $241 bn and China’s to $95 bn, while the EU’s reached only around $16 bn – less than the UK’s $18 bn.

But maybe the recent investments in the EU mean it is starting to catch up? Not at all. Looking at the number of newly funded AI companies in 2022, the US is far in the lead with 542, China is pretty far behind with 160, and the EU is even further behind at around 140.

And that’s just private investment, without counting what the US and Chinese governments and militaries are spending. In recent years, the US military has spent about $7 bn annually on unclassified projects in AI and related fields, a 32 per cent increase since 2012. It has spent billions more on classified R&D, though the exact figure is unknown.

The US and China are spending real money, more than Europe will ever match. EU investment and R&D need to be more realistic and strategic, even as the EU continues its trajectory as the conscience of digital technology development. What might an alternative ‘European way’ of AI development look like?

The missing AI vision

Amidst all the headlines, what is not discussed enough is what kind of AI research would best serve the public interest. The right kind of AI development would ideally benefit humankind, rather than focus exclusively on commercial, for-profit applications or on military uses that reinforce a retreat into national silos.

These shortcomings suggest a direction for European efforts that would make a unique contribution and transform the EU into a global leader. The EU and its member states should continue to fund AI development and support their businesses and academic institutions in pursuing it. Governments can do a lot to help the important Mittelstand sector of small and medium enterprises adopt the most applicable AI technologies. It matters less where those technologies are developed or which companies develop them; what counts just as much is incorporating them into businesses and society in practical ways, so that the human benefits are obvious and the economy remains competitive.

But Germany and the EU will lose their important perch in the global economy if they sacrifice their crucial role as the ‘conscience’ of the AI movement. That’s why the recent watering down of the EU’s legislation to regulate AI is so disappointing.


The original goal of that legislation was to establish a new global benchmark for countries seeking to harness the potential benefits of AI technology while protecting against its possible risks, such as killing jobs, spreading misinformation online or endangering national security. In the Digital Services Act and the Digital Markets Act, the EU established an important precedent: giant companies and potentially dangerous technologies that pose ‘systemic risks’ face additional oversight, transparency and regulation. After initially supporting this approach in the drafting of the AI Act, Germany, France and Italy suddenly tried to throw this ‘precautionary principle’ overboard.

A compromise was eventually reached, but despite recent headlines announcing that the EU had agreed on the terms of this landmark legislation, it is only a provisional agreement, and details are still being worked out. Those include the rigour of enforcement, national security exemptions, and whether to ban specific applications that are potentially Orwellian – such as emotion prediction technologies, predictive policing and remote biometric identification – which could lead to mass surveillance.

EU, German and French leaders should not underestimate the political, economic and moral value of being the conscience of the internet. A realistic appraisal of the EU’s place in this rapidly developing AI world makes it clear that innovation extends beyond the technologies and into the role of establishing the safest and most humanistic digital infrastructure guidelines for the 21st century.

Our AI future is being written in the present, and it’s looking like a sci-fi movie full of either peril or promise. The right regulatory guardrails will decide which scenario comes to pass.