Group Chats With a Mind of Their Own: Why Shapes’ Social AI Experiment Matters

May 1, 2026
5 min read
[Image: Smartphone displaying a group chat where human users talk alongside colourful AI avatars]

Group chats are about to get crowded – with bots that speak first

For a decade, chat apps barely changed: same bubbles, same group chats, same social dynamics. AI arrived, but mostly as a private assistant in a separate app or a lonely companion in a 1:1 thread. Shapes is trying something much bolder: dropping AI characters directly into group chats as if they were just another friend – and letting them talk whenever they want.

That sounds playful, but it’s a fundamental shift. When bots gain a social presence, not just a functional one, the rules of online interaction change. In this piece, we’ll unpack what Shapes is really experimenting with, why VCs are betting on it, and what it could mean for the future of social networks, mental health, and regulation.


The news in brief

According to TechCrunch, Shapes has emerged from stealth with an $8 million seed round. The startup, founded in 2022 by Anushk Mittal and Noorie Dhingra, has built a chat app where humans and AI characters share the same group conversations. Think Discord or WhatsApp, but with AI “members” that can post, reply, and keep the conversation alive.

The company reports more than 400,000 monthly active users and says its community has already created around 3 million AI agents, called “Shapes”. These agents are clearly labeled as AI but otherwise behave like any other participant in a group.

Unlike more utilitarian implementations (for example, planning-focused group chats in ChatGPT), Shapes is positioned as a social, fandom-driven platform. TechCrunch notes that usage has grown roughly sixfold since the start of the year, with thousands of users spending two to four hours per day in the app. The new funding, led by Lightspeed with several AI-focused investors participating, will be used to accelerate development and user acquisition.


Why this matters

Shapes is not just another chatbot wrapper. It is testing whether social networks of the future will be mixed societies where humans and AI co-exist as peers, not tools sitting in the toolbar.

The founders explicitly frame Shapes as a partial antidote to so‑called “AI psychosis”: the concern that long, private, one-on-one interactions with AI companions can encourage delusional thinking or unhealthy attachment. By moving AI into shared spaces, they hope to make it more like a group assistant and less like an invisible therapist.

There is some logic here. In group chats, behavior is socially moderated. If a bot starts hallucinating, others can push back, joke about it, or ignore it. That communal context may indeed reduce the risk of someone over-identifying with a single AI friend.

But the deeper shift is in conversation dynamics. Shapes’ agents can initiate messages; they don’t have to wait to be summoned. That means algorithms, not just people, decide when a chat “should” wake up. The benefit is obvious: fewer dead groups, more icebreakers, a guarantee that your post will get some response. The risk is more subtle: social spaces optimised for engagement rather than authenticity, driven by bots that never get tired.

Winners in this model include:

  • Heavily online fandoms and niche communities, which get a constant stream of prompts, trivia and curated content.
  • VCs and founders trying to build the “next-gen social app” after the TikTok era.

Potential losers:

  • Legacy social platforms that still treat AI as an accessory, not a core social actor.
  • Users who already struggle with attention and boundaries, for whom an always‑on, bot-amplified group chat may be overwhelming.

The bigger picture: from assistants to social actors

Shapes sits at the intersection of several powerful trends.

First, there’s the rise of AI companions: from Replika to Character.ai, millions of people already talk to AI as if it were a friend. Those products are mostly 1:1. Shapes is effectively saying: what if we turn that into multiplayer?

Second, mainstream platforms are experimenting with similar ideas, but with very different constraints. Meta is injecting AI personas into WhatsApp, Instagram and Messenger. OpenAI allows group chats with multiple GPTs and humans, but these are framed as productivity spaces—brainstorming, planning, coding. In contrast, Shapes is unapologetically social and entertainment-first.

Third, the idea of bots in group chat is not new. IRC channels had bots, Discord has bots, even Slack workspaces often rely on them. The difference is capability and presentation. Old bots were clearly tools: they played music, moderated spam, posted alerts. LLM-based “Shapes” can hold long, coherent, emotionally tuned conversations and present themselves as characters with personalities.

That unlocks new behaviours:

  • Parasocial triangles: you, another human, and an AI all building a shared narrative (“our” inside jokes with the bot).
  • AI-led community seeding: launch a new group, drop in a few well-crafted Shapes, and the room doesn’t feel empty.

In doing so, Shapes tests a hypothesis many in Silicon Valley quietly share: the next big social network may not be about connecting you with more humans, but about surrounding you with a mix of humans and synthetic characters that are tuned to your interests.

If that proves sticky, incumbent platforms will not stay on the sidelines. Expect Discord, Telegram, and Snapchat to deepen their own experiments with AI group participants. Shapes’ real challenge is not just product-market fit—it’s staying differentiated once the giants copy the core idea.


The European and regional angle

For European users and regulators, Shapes lands right in the middle of two debates: youth mental health and AI transparency.

The EU AI Act requires systems that interact with humans to clearly disclose that they are AI. Shapes already labels its agents, which is a good baseline. But disclosure alone won’t satisfy European regulators. When AI is woven into social graphs, questions emerge around:

  • Profiling and recommender systems: How much user data do these agents see to “personalise” their behaviour?
  • Vulnerable groups: How are minors or at‑risk users protected from overuse or harmful prompts?

Under the Digital Services Act, larger platforms must assess systemic risks like addiction, disinformation, or mental health impact. If Shapes grows in the EU, it will face the same expectations as TikTok and Instagram: conduct risk assessments, offer meaningful controls, and be transparent about algorithms nudging engagement.

This also opens a window for European alternatives. Privacy-conscious messaging apps popular in the region (Signal, Threema, Wire, Element) have so far been wary of building AI features into their products. But there is a conceivable niche for "European-flavoured" social AI: interoperable with EU identity frameworks, conservative on data usage, and aligned with local moderation standards.

For now, Shapes is clearly targeting the global, English‑speaking, hyper-online crowd. European regulators and founders should still pay attention. The norms this generation adopts around chatting with bots in groups will shape expectations for every future social product arriving on the continent.


Looking ahead: what to watch

Several key questions will determine whether Shapes is a footnote or a blueprint for the next wave of social apps.

  1. Can they avoid becoming spammy? If AI agents post too often, groups will feel artificial and users will churn. If they post too rarely, the main value proposition—never‑dead chats—evaporates. Tuning that balance, and giving users granular control over it, will be crucial.

  2. How will monetisation work? The obvious play is premium Shapes: upgraded personalities, extra capabilities, or brand partnerships. But the moment some agents are financially incentivised to maximise engagement, the risk of manipulative behaviour increases.

  3. Moderation and safety at scale. Every AI message is a moderation challenge. Even with good filters, LLMs occasionally generate toxic or misleading content. Now multiply that by millions of simultaneous group conversations. Shapes will need robust safeguards and a clear process for users to flag problematic behaviour.

  4. Interoperability and integrations. Today, Shapes is its own app. Long-term, the bigger opportunity may be bringing these agents into existing networks (Discord, Telegram, even iMessage) via APIs or partnerships. That move would both expand reach and trigger more direct competition.
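The "too spammy vs. too quiet" trade-off in point 1, together with the per-user control it calls for, could be implemented as something as simple as a rolling-window cap per agent. This is a sketch under assumed requirements, not Shapes' actual mechanism; the class and parameter names are invented:

```python
from collections import deque

class AgentRateLimiter:
    """Caps how many messages an agent may send in a rolling time window.
    max_posts is the user-facing 'granular control' knob."""

    def __init__(self, max_posts: int = 5, window_seconds: float = 3600.0):
        self.max_posts = max_posts
        self.window = window_seconds
        self._sent: deque[float] = deque()  # timestamps of recent posts

    def try_post(self, now: float) -> bool:
        # Drop timestamps that have aged out of the rolling window.
        while self._sent and now - self._sent[0] > self.window:
            self._sent.popleft()
        if len(self._sent) >= self.max_posts:
            return False  # over the user's cap: the agent stays quiet
        self._sent.append(now)
        return True
```

The interesting product question is who sets `max_posts`: the platform (optimising engagement) or the user (optimising sanity).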

From a timeline perspective, expect the next 12–18 months to be about experimentation: tweaking agent behaviour, trying different community formats, perhaps launching tools for creators to “own” and monetise their Shapes.

Tech-savvy readers should watch two metrics: time spent per user and the ratio of human-to-AI messages. When bots start dominating the feed, backlash usually follows.
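The second metric is trivial to compute from any message log that flags AI authorship. A minimal sketch with made-up data (the `is_ai` field and the log format are assumptions, not Shapes' schema):

```python
def human_to_ai_ratio(messages: list[dict]) -> float:
    """Ratio of human-authored to AI-authored messages in a log.
    Each message dict just needs an 'is_ai' boolean flag."""
    ai = sum(1 for m in messages if m["is_ai"])
    human = len(messages) - ai
    return float("inf") if ai == 0 else human / ai

# Toy log: two human messages, two bot messages -> ratio 1.0
log = [
    {"is_ai": False}, {"is_ai": True},
    {"is_ai": True},  {"is_ai": False},
]
print(human_to_ai_ratio(log))  # 1.0
```

A ratio drifting well below 1.0 would be the early warning sign: bots talking mostly to each other while humans lurk.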


The bottom line

Shapes is a bold, slightly unsettling experiment in turning AI from a private assistant into a public social actor. It may indeed mitigate some risks of isolated AI companionship by moving conversations into shared spaces, but it also intensifies the automation of our social lives.

If the future of group chat includes bots that talk unprompted, we need to decide what boundaries we want—around attention, data, and influence. Would you be comfortable if the most active member of your favourite group was never actually human?
