Big Money, Bigger Risk: Grok and the Consequences of Unregulated AI
Introduction
Elon Musk’s Grok AI model has faced intense backlash following the generation of illegal sexual deepfakes and child abuse images. xAI, the company behind Grok, had previously said its model would operate with fewer content ‘guardrails’ [1] than its competitors. However, the commercial cost of looser restrictions has become increasingly apparent, with a surge of illegal images spreading across X and beyond. The fallout has been global: governments have threatened legal action, and countries such as Indonesia and Malaysia have blocked access to the platform entirely [2]. Yet even as regulatory pressure grows, xAI has secured $20 billion in fresh investment to fund its Mississippi data centre [3]. This serves as a stark reminder that, whilst AI continues to attract investors, the commercial and legal risks of unregulated AI are only starting to emerge.
Why Investors Are Backing AI
Artificial intelligence is seen as a foundational technology that will reshape the world in much the same way as the invention of the internet or electricity. However, unlike those earlier technologies, AI is being rapidly incorporated across multiple sectors at once, including healthcare, global research, and manufacturing. As a result, wealthy investors are keen to position themselves at the centre of future market growth. Researchers at Goldman Sachs predict that spending by large companies on AI could reach $500 billion in 2026 [4] as firms invest heavily in data centres and increasingly integrate AI into daily tasks. The ability of companies to cut costs and improve efficiency through AI is invaluable and is helping to drive market demand for ever more advanced artificial intelligence. This enthusiasm helps to explain why investment is accelerating despite growing legal concerns around AI.
The Fallout of Unregulated AI
Whilst investment in artificial intelligence continues to accelerate, recent backlash against unregulated AI has exposed issues with the safety and governance of the technology. The rapid increase in illegal and harmful images, particularly content involving minors, has raised serious legal and regulatory challenges. The Internet Watch Foundation (IWF) has reported that ‘its analysts have discovered “criminal imagery” of girls aged between 11 and 13 which “appears to have been created” using Grok’ [5].
Under UK law, the creation, possession or distribution of child sexual abuse material and non-consensual intimate images is illegal, regardless of whether the content is produced by a human or generated using artificial intelligence. Existing legislation, including the Sexual Offences Act 2003 [6], already criminalises such material, whilst the Online Safety Act 2023 [7] places additional duties on platforms to prevent and remove illegal content.
However, the use of AI complicates enforcement. When harmful material is generated algorithmically rather than created directly by an individual, questions of liability, accountability and platform responsibility become harder to define. This legal uncertainty matters not only because of the harm such content can cause, but also because it exposes companies deploying generative AI at scale to significant legal and commercial risk.
Market Impact: How Risk Gets Priced
In early January, the exposure of the volume of illegal content being generated through Grok prompted threats of fines and bans from regulators and governments [1]. In response, Elon Musk announced that AI-generated images from Grok would be placed behind a subscription paywall. This move can be interpreted as an attempt to mitigate risk through increased user traceability and to demonstrate compliance with regulatory expectations. However, regulators have indicated that such measures are insufficient. UK Technology Secretary Liz Kendall stated:
‘Sexually manipulating images of women and children is despicable and abhorrent. It is an insult and totally unacceptable for Grok to still allow this if you’re willing to pay for it. I expect Ofcom to use the full legal powers Parliament has given them.’ [8]
Ofcom, the UK’s communications regulator, has since launched an investigation that could see the UK follow Indonesia and Malaysia in banning X [9][10]. Commercial decisions are increasingly becoming secondary outcomes of reputational and legal risk; fines or bans imposed on X may prompt some investors to re-evaluate the long-term viability of the platform and its exposure across jurisdictions.
Yet recent investment trends suggest that these risks are being ignored in favour of commercial gain. Despite the regulatory action taken in Indonesia and Malaysia, and the possibility of similar action in the UK, investors are unlikely to step back unless enforcement escalates to include a major global economic power such as the US or the EU. The broader investment market is vast, and whilst some investors may be wary of the risk of major fines, capital continues to be allocated to the sector at scale. In the race to dominate the development of AI, risk is being calculated and priced into the cost of participation; the fact that xAI recently secured $20 billion in investment is a stark reminder of this reality.
Conclusion
The Grok controversy highlights growing friction between regulatory intervention and market-driven expansion in AI. Whilst the ethical and societal risks of unregulated AI continue to grow, this case suggests that, for investors, anticipated commercial gains outweigh regulatory deterrents. The global AI industry is expanding faster than regulatory frameworks can respond, allowing capital to drive development despite legal and ethical scrutiny. Without global coordination, investment, not regulation, will shape the future of AI in the UK.
Bibliography
Legislation
Online Safety Act 2023.
Sexual Offences Act 2003.
Official Publications
UK Government, ‘Technology Secretary statement on xAI’s Grok image generation and editing tool’ (9 January 2026) https://www.gov.uk/government/news/technology-secretary-statement-on-xais-grok-image-generation-and-editing-tool.
Company and Financial Sources
Goldman Sachs, ‘Why AI Companies May Invest More Than $500 Billion in 2026’ (18 December 2025) https://www.goldmansachs.com/insights/articles/why-ai-companies-may-invest-more-than-500-billion-in-2026.
xAI, ‘xAI Raises $20B Series E’ (6 January 2026) https://x.ai/news/series-e.
News Sources
BBC News, ‘Elon Musk’s Grok AI appears to have made child sexual imagery, says charity’ (7 January 2026) https://www.bbc.co.uk/news/articles/cvg1mzlryxeo.
BBC News, ‘Malaysia and Indonesia block Musk’s Grok over explicit deepfakes’ (12 January 2026) https://www.bbc.co.uk/news/articles/cg7y10xm4x2o.
Financial Times, ‘How Elon Musk’s Grok spread sexual deepfakes and child exploitation images’ (9 January 2026) https://www.ft.com/content/117af7cc-3fe6-4292-a706-7204b82bb8dc.
Financial Times, ‘UK to outlaw non-consensual intimate images after Grok outcry’ (12 January 2026) https://www.ft.com/content/8eec6d77-c72e-4e8f-a6b5-ce82575e71c6.
The Guardian, ‘Indonesia blocks Musk’s Grok chatbot due to risk of pornographic content’ (10 January 2026) https://www.theguardian.com/world/2026/jan/10/indonesia-blocks-musks-grok-chatbot-due-to-risk-of-pornographic-content.
Image Credits
Salvador Rio on Unsplash https://unsplash.com/photos/smartphone-screen-displays-ai-app-icons-chatgpt-grok-meta-ai-gemini-tkkOCi1Wgx0.

