Meet Claude: The 'Safer' ChatGPT Alternative From OpenAI's Former Employees
In March 2023, Anthropic launched Claude, an AI assistant built by ex-OpenAI researchers focused on safety. Could Constitutional AI offer a better approach?
In March 2023, a new AI assistant quietly launched that would become one of ChatGPT's main competitors. But Claude wasn't built by an established tech giant. It was built by people who left OpenAI specifically because they were worried about AI safety.
This is the story of how safety concerns created a company—and a genuinely different approach to building AI.
The OpenAI Exodus
The founding story of Anthropic starts with a departure.
In 2021, several senior OpenAI researchers—including Dario Amodei (VP of Research) and Daniela Amodei (VP of People)—left to start their own company. With them went key research scientists who had worked on GPT-3.
Their reason? Growing concerns about OpenAI's direction.
What They Worried About
The departing researchers felt OpenAI was prioritizing capability over safety. The pressure to compete, to release powerful models, to commercialize: all of this was outpacing safety research.
They wanted to build a company where safety wasn't an afterthought. Where AI alignment research happened before capability development, not after.
In 2021, they founded Anthropic with a clear mission: build safe, steerable AI systems.
Constitutional AI: A Different Approach
Anthropic's key innovation wasn't just making an AI assistant. It was the method: Constitutional AI.
How It Works
Instead of relying on human feedback for every response (the reinforcement-learning-from-human-feedback approach OpenAI used), Claude learned from a "constitution": a set of written principles governing its behavior.
These principles included things like:
- Be helpful, harmless, and honest
- Respect user privacy
- Refuse harmful requests
- Acknowledge uncertainty
- Be thoughtful about sensitive topics
Claude was then trained to follow these principles consistently: the model critiques and revises its own outputs against the constitution, and that AI-generated feedback replaces much of the per-interaction human judgment that standard RLHF requires.
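The critique-and-revision idea at the core of Constitutional AI can be sketched in a few lines. This is a toy illustration, not Anthropic's actual pipeline: in the real method a language model generates both the critiques and the revisions, and the revised outputs become fine-tuning data. Here, hypothetical rule-based functions stand in for the model.

```python
from typing import Optional

# A miniature "constitution": (id, principle text) pairs.
CONSTITUTION = [
    ("avoid_insults", "The response should not contain insults."),
    ("acknowledge_uncertainty", "The response should flag guesses as uncertain."),
]

def critique(response: str, principle_id: str) -> Optional[str]:
    """Return a critique if the response violates the principle, else None.
    (Hypothetical rule-based checks standing in for an LLM critic.)"""
    if principle_id == "avoid_insults" and "stupid" in response:
        return "The response contains an insult."
    if principle_id == "acknowledge_uncertainty" and response.endswith("definitely."):
        return "The response states a guess as if it were certain."
    return None

def revise(response: str, principle_id: str) -> str:
    """Rewrite the response to satisfy the principle (again, a stand-in)."""
    if principle_id == "avoid_insults":
        return response.replace("stupid", "mistaken")
    if principle_id == "acknowledge_uncertainty":
        return response.removesuffix("definitely.") + "though I'm not certain."
    return response

def constitutional_pass(response: str) -> str:
    """Apply each principle in turn: critique, then revise if violated.
    In Constitutional AI, the (draft, revision) pairs produced this way
    become supervised training data for the next model."""
    for principle_id, _text in CONSTITUTION:
        if critique(response, principle_id) is not None:
            response = revise(response, principle_id)
    return response

draft = "That idea is stupid and it will fail, definitely."
print(constitutional_pass(draft))
# → "That idea is mistaken and it will fail, though I'm not certain."
```

The key design point is that the principles are explicit data, not implicit in thousands of human ratings, which is what makes the approach transparent and iterable.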
Why This Mattered
Constitutional AI offered several advantages:
- Transparency: the rules governing Claude's behavior were written down, not hidden in training data
- Consistency: Claude would apply the same ethical framework across all conversations
- Iterability: Anthropic could update the constitution as they learned what worked and what didn't
It was a fundamentally different approach to alignment than competitors were taking.
The March 2023 Launch
Claude launched quietly in March 2023, initially through a limited API for developers.
Unlike ChatGPT's viral explosion, Claude's release was measured. Anthropic carefully onboarded users, gathered feedback, and refined the experience before wider release.
Initial Capabilities
Claude's first version offered:
- 9,000 token context: smaller than GPT-4's largest option but respectable for reading documents
- Strong writing: natural, thoughtful responses
- Safety guardrails: better at refusing harmful requests
- Reduced hallucination: more likely to say "I don't know"
It wasn't more capable than GPT-4, but it was noticeably different in personality and approach.
How Claude Was Different
Using Claude felt distinct from ChatGPT in subtle but important ways.
More Careful, Less Bold
Claude was more likely to hedge, to express uncertainty, to decline requests that felt ethically questionable. Where ChatGPT might confidently answer anything, Claude would sometimes say "I'm not sure" or "I can't help with that."
Some users found this refreshing—honesty over false confidence. Others found it frustrating when they just wanted answers.
Better at Saying No
Claude's safety training made it genuinely better at refusing harmful requests without being preachy. It would decline clearly and move on, rather than lecture users.
This made it popular for enterprise use cases where companies wanted AI assistance but worried about liability.
The Vibe Was Different
It's hard to quantify, but Claude had a different personality. More thoughtful, less trying to impress, more willing to admit limitations.
Users described talking to Claude as consulting a careful professor, compared with ChatGPT's eager-student energy.
The Funding and Growth
Anthropic wasn't struggling for resources. In 2023 alone, they raised:
- $450 million from Spark Capital (February)
- $2 billion from Google (2023-2024)
- $4 billion from Amazon (September)
The backing of two tech giants gave Anthropic the resources to compete with OpenAI and build massive computing infrastructure for training future models.
The Competition Heats Up
Claude's launch established Anthropic as a serious player in the AI race.
- For enterprises, Claude offered a safety-focused alternative to ChatGPT
- For researchers, Anthropic's Constitutional AI papers provided new alignment approaches
- For users, competition meant more choices and faster innovation
Google's investment in Anthropic was strategic—it gave them a hedge if their own AI efforts lagged behind OpenAI.
The Safety Debate
Not everyone agreed that Anthropic's safety focus was the right approach.
Supporters argued Constitutional AI represented genuine progress in AI alignment and that Anthropic was demonstrating responsible development.
Critics questioned whether the difference was real or just marketing, and whether "safety" was being used to justify moving slower than competitors.
Realists noted that Anthropic still faced the same competitive pressures as everyone else. Safety takes a back seat when falling behind threatens your existence.
Where Are They Now?
Today, Anthropic is one of the leading AI companies, with Claude 3.5 Sonnet (June 2024) widely praised as one of the best AI models available—especially for coding.
Claude offers features competitors don't:
- 200K token context (process entire books)
- Computer use (control keyboard and mouse)
- Artifacts (interactive code previews)
More importantly, Anthropic has maintained their safety focus while staying competitive on capabilities. Constitutional AI has evolved into more sophisticated alignment techniques.
The former OpenAI researchers who left because they worried about safety have built a company that makes the case that safety and capability need not trade off against each other.
March 2023's Claude launch was quiet compared to ChatGPT's viral explosion. But it established an alternative path for AI development—one where careful consideration of safety doesn't mean sacrificing capability.
Whether that approach can survive the intense competition of the AI race remains an open question. But Claude proved it was possible to build genuinely good AI while being thoughtful about risks. That alone makes Anthropic's founding and Claude's launch significant in AI history.