Investors in AI-first technology companies serving the defense industry, such as Palantir, Primer and Anduril, are doing well. Anduril, for one, reached a valuation of over $4 billion in less than four years. Many other companies that make general-purpose, AI-first technologies, such as image-labeling tools, receive a large (undisclosed) share of their revenue from the defense industry.

Investors in AI-first technology companies that don't even intend to serve the defense industry often find that these companies ultimately (and sometimes unknowingly) help other powerful institutions, such as police forces, municipal agencies and media companies, carry out their duties.

Most do a lot of good work, such as DataRobot helping agencies understand the spread of COVID-19, HASH running simulations of vaccine distribution, or others making school communications accessible to immigrant parents in U.S. school districts.

The first step toward taking responsibility is knowing what on earth is going on. It's easy for startup investors to dismiss the need to understand what's happening inside AI-based models.

However, there are some less positive examples. Technology created by the Israeli cyber-intelligence firm NSO Group was used to hack 37 smartphones belonging to journalists, human rights activists, business executives and the fiancée of murdered Saudi journalist Jamal Khashoggi, according to a report by The Washington Post and 16 media partners. The report claims the phones were on a list of more than 50,000 numbers based in countries that surveil their citizens and are known to have hired the Israeli firm's services.

Investors in these companies may now face challenging questions from founders, limited partners and governments about whether the technology is too powerful, too enabling or too broadly applicable. These are questions of degree, but they are sometimes not asked at all at the time of investment.

After publishing "The AI-First Company" and investing in such companies for the better part of a decade, I have had the privilege of talking with many people with many perspectives: CEOs of big companies, founders of (currently!) small companies and politicians. I am often asked one important question: How can investors make sure the startups they back implement AI responsibly?

Let's be frank: it's easy for startup investors to brush off such an important question with, "It's too hard to say at the time we invest." Startups are nascent forms of things to come. However, AI-first startups work with something powerful from day one: tools that grant far greater leverage over our physical, intellectual and temporal reach.

AI not only gives people the ability to get their hands around heavier objects (robotics) or their heads around more information (analytics), it also gives them the ability to get their minds around time (prediction). When people can make predictions and learn by playing them out, they can learn quickly. When people can learn quickly, they can act quickly.

Like any tool, these can be used for good or ill. You can use a rock to build a house, or you can throw it at someone. You can use gunpowder for beautiful fireworks or for bullets.

Much the same way, AI-based computer vision models can be used to track the movements of a dance troupe or a terrorist group. AI-powered drones can aim cameras at us while we ski, but they can also aim guns at us.

This article covers the basics, metrics and politics of responsibly investing in AI-first companies.


Investors and board members of AI-first companies should take at least partial responsibility for the decisions of the companies in which they invest.

Investors influence founders, whether they intend to or not. Founders constantly ask investors what products to build, which customers to approach and which deals to strike. They do this to learn and improve their odds of winning. They also do it, in part, to appease and keep investors engaged, because investors can be a valuable source of further capital.