Why Entrepreneurs Jumping Into The AI Space Need To Be Careful
The AI field right now feels similar to the dot-com boom at the end of the 1990s, when it became clear the internet would be a game-changing technology. Thousands of people switched gears and moved into the space, looking for opportunities to make money.
But that boom became a cautionary tale. While some made millions (or billions), many others saw their initial investments vanish.
Even so, the stakes in the late 1990s were relatively low. The internet was essentially a glorified communication technology. It had the potential to change the world, but nobody accused it of being inherently dangerous.
That’s not so much the case with AI. Autonomous, “agentic” systems harbor the real possibility of danger. Elon Musk and other prominent figures in the field have warned that such systems could spell the end of humanity, or something equally grim.
AIs making decisions could also create entirely new classes of professional indemnity risk. Entrepreneurs who act on faulty advice from such systems could find themselves in trouble.
The purpose of this post is to explore some of the risks associated with AI and how leading insurance companies are managing the situation. The hype machine is currently running at full pace, and business leaders need to be careful not to get swept up along with it.
AI Is Complex And Rapidly Evolving
The first major risk entrepreneurs face is that AI is complex and rapidly evolving. New breakthroughs arrive almost daily as the world’s best minds descend on the field, looking for ways to turn these complex systems into a multi-trillion-dollar industry.
The primary concern is that firms will invest in technology that becomes obsolete just months down the line. Venture capitalists may pour millions of dollars into technologies that ultimately go nowhere.
AI Applications Have Social And Personal Ramifications
Another risk is the fact that AI applications have significant social and personal ramifications. The introduction of sufficiently powerful systems could fundamentally alter the human experience in ways that perhaps no other technology ever has.
As such, Simply Business, an insurance firm, says that professional indemnity (PI) cover is becoming increasingly valuable. While PI insurance is something most businesses take out as standard, it is even more relevant in the AI space.
“Professional indemnity insurance, also known as professional liability insurance, is something we recommend all customer-facing entrepreneurs and businesses take out,” the brand says. “It should be a minimum in any risk-mitigation strategy, regardless of sector.”
Professional indemnity covers things like negligence and “loss-causing advice,” which is exactly the kind of output an AI agent could generate. AI-dependent entrepreneurs could mistakenly present an artificial intelligence’s hallucination as fact, causing customers to lose money.
“The benefit of professional indemnity insurance is that it protects against these kinds of risks,” Simply Business says. “If you make a mistake on your work or provide bad advice, you will get cover that protects you against loss if a client takes you to court. You can also get protection if you publish a false statement that damages the reputation of a company or person.”
AI Regulations And Standards Are Still Under Development
Another significant risk is the fact that AI regulations and standards are still under development. Entrepreneurs could find themselves effectively barred from global markets if they develop AIs that fall outside what regulators will permit.
Currently, this risk appears highest in Europe. The so-called “regulatory powerhouse” already has rules on its books that prevent many startups from considering it a suitable location for their work.
However, these impediments to the industry’s growth may also appear in the U.S. and Canada. Regulators may come to view the development of unfettered AI as a national security or even existential threat, particularly if there is an accident and a firm loses control of one of its machines.
To reduce these risks, firms must have viable exit strategies. Applications should be niche, safe, and non-controversial, except in the most permissive jurisdictions.
AI Bias And Fairness Remain An Issue
Issues also remain in the area of AI bias and fairness. Commentators continue to worry that systems will offend, stereotype, or fail to represent people in ways they consider appropriate.
While models can only reflect the data they are trained on, that explanation might not be enough to assuage critics. Entrepreneurs therefore need to keep a close eye on how well their systems adapt to the current climate.
AI Development Pitfalls Abound
The development of AI also comes with serious pitfalls. The technology is notoriously challenging to master, and many firms have spent decades in the doldrums trying to crack it, only for a better approach to come along and render their old methods obsolete. Even Apple’s Siri is a victim of this effect. Despite more than a decade of investment and nearly limitless funds, it appears primitive compared to LLMs such as GPT-4, Llama, and LaMDA.
For this reason, entrepreneurs need to consider the likelihood of various experimental technologies becoming reality in the AI space. While LLMs are the current gold standard, that dynamic could quickly change.
The optimal approach is to build systems on several different technologies and pursue them in parallel. At the very least, executives should keep one eye firmly on emerging approaches in the space to ensure they don’t pose a risk to current developments. If the risk appears high, they should switch to a new methodology.
AI Stakeholders May Disagree With Company Direction
Finally, entrepreneurs in the AI space should consider the needs of stakeholders in their products. Machine intelligence could have impacts that extend beyond customers to researchers and civil society as a whole. AI systems must align with broader human values, not just those of corporations.
While companies that enter the space first might feel a sense of power, that feeling will likely be short-lived. As the technology becomes more ubiquitous, government regulation will increase, and society will get a better feel for how to manage the technology’s impact. It won’t be a wild west like the internet was for its first ten years or so. The world has changed a lot since the mid-1990s.