TL;DR
- Inside Access: Prakhar Agarwal, an applied researcher who has worked at Apple, OpenAI, and Meta Superintelligence Labs, revealed what hiring and daily life at frontier AI labs actually look like.
- Hiring Signal: Meta and OpenAI test candidates on their ability to identify and quantify gaps in current AI models, prioritizing demonstrated judgment over academic credentials.
- Work Culture: Researchers at these labs are expected to self-direct from day one, defining their own problems and priorities without a traditional management hierarchy.
- Talent War: The flow of researchers between Meta and OpenAI continues in both directions, underscoring how scarce frontier AI talent truly is.
Forget the PhD. The skill Meta Superintelligence Labs tests in interviews – according to a researcher who moved there from OpenAI – is the ability to find gaps in current AI models and quantify them. Prakhar Agarwal, an applied researcher who has worked at Apple and OpenAI and is now at Meta’s elite research division, revealed these insights in a first-person account published by Business Insider.
The piece offers some of the deepest public detail yet into what it takes to get in – and what happens once you do. The core lesson: these labs want researchers who can define what problems are worth solving, not just execute on ones they’re given.
“Once you’re in, you’re pretty much thrown in the deep end. You define your own problems and try to come up with solutions. At OpenAI and Meta, they spend a lot of time hiring smart people. You need to tell them what needs to be done, rather than the other way round.”
Prakhar Agarwal, Applied Researcher at Meta Superintelligence Labs (via Business Insider)
What this signals is a fundamental inversion of the traditional employer-employee relationship. Rather than organizations directing talent, these labs engineer conditions where leading researchers self-select for fit – and self-select out when they cannot operate without structure. The implication: the interview is less a test of knowledge than a test of self-awareness about how you work.
Life Inside the Lab
That “deep end” is not a metaphor. For those accustomed to conventional tech roles – defined OKRs, assigned projects, a manager with a product roadmap – life at an elite AI lab is a genuine culture shock. Agarwal describes the day-to-day as governed by high autonomy and flexible structure, with no traditional management hierarchy telling researchers what to build next.
Identifying what gap to close, deciding whether that gap is worth closing, and executing on the solution are all the researcher’s responsibility. A new hire might spend weeks evaluating whether a problem is real before writing a single line of code. That prolonged ambiguity is by design, not oversight.
This dynamic goes well beyond loosely defined roles. Researchers who thrive tend to be those comfortable operating under uncertainty – people who can evaluate AI capabilities, spot where models systematically fail, and propose interventions without waiting for a brief. Those who struggle are typically the ones accustomed to receiving a problem statement before they begin.
The Culture Shock of Self-Direction
At many tech companies, a researcher’s first weeks involve onboarding, shadowing, and incremental ramp-up. At OpenAI and Meta’s research arms, however, those hired are presumed ready to self-direct from day one. Agarwal’s account suggests the labs’ hiring process presupposes that people will arrive prepared to define their own agenda.
For anyone coming from a more structured environment, that shift can be jarring. The absence of a prescribed roadmap forces researchers to develop a different kind of professional muscle: the ability to justify their own priorities without external validation. Despite the disorientation, the reward is total ownership of a research agenda at the frontier of AI development – a level of autonomy that few industry roles offer at any career stage.
Understanding the culture is only half the challenge – getting through the door is the other.
PhD vs. Practical Experience
Academic credentials matter, but Agarwal’s account cuts against simple credential gatekeeping. What these labs test is problem-solving approach, not pedigree. Candidates who demonstrate a track record of identifying real-world model failures and measuring them systematically will outperform candidates who can only recite theory.
“Don’t just rely on coursework or books written 5-10 years ago,” Agarwal noted in his essay – the field has changed faster than many syllabi. A PhD builds analytical rigor and depth, yet both labs actively hire researchers without one, particularly those with strong empirical track records in industry. Practical experience developing and deploying models against real-world constraints signals exactly the kind of judgment these labs seek.
Building Skills That Matter
Beyond credentials, Agarwal’s prescription for aspiring researchers is concrete: build things. “Nothing replaces building concrete projects, even modest ones,” he wrote in his Business Insider account. A personal project that exposes a model’s failure mode – even if small in scope – demonstrates the gap-identification skill that distinguishes interview-ready candidates from those still waiting to be assigned a problem.
Staying current with the literature is equally non-negotiable. AI research moves at a pace that makes textbooks from even a few years ago misleading guides to the state of the art. Candidates who track recent papers, reproduce results, and experiment quickly with new tools demonstrate they can operate in an environment where the field shifts monthly.
What Is Meta Superintelligence Labs
That culture of radical self-direction was built into Meta Superintelligence Labs from the outset. The division was founded in July 2025, when Zuckerberg consolidated Meta’s AI research and product teams into a single organization competing directly with OpenAI and Google DeepMind. Alexandr Wang, the former CEO of Scale AI, was tapped to lead it – a hire that signaled ambitions well beyond incremental product improvements.
Zuckerberg articulated the lab’s purpose in terms of consumer impact, stating:
“an even more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals.”
Mark Zuckerberg, CEO of Meta
On the research side, Meta appointed Shengjia Zhao – an OpenAI veteran – as Chief Scientist, part of an aggressive push to hire top OpenAI researchers throughout 2025. By collapsing the distance between research and product, the lab compresses the research-to-deployment cycle in ways that siloed organizations structurally cannot.
The Talent War Behind the Headlines
Agarwal’s move from OpenAI to Meta reflects a broader, sustained talent war between the two companies. As WinBuzzer reported in mid-2025, OpenAI’s leadership scrambled to prevent a staff exodus to Meta, with the departure of at least eight researchers in a single week forcing the company into a public struggle to retain talent.
The flow is not one-directional, however. Ruoming Pang, one of Meta’s highest-profile superintelligence hires, left Meta to join OpenAI after just seven months – a departure that underscores the escalating rivalry between the two companies.
Talent moving in both directions reveals how thin the supply of frontier AI researchers truly is. Both companies are competing for the same small pool of people who can operate at the level Agarwal describes: self-directed, empirically sharp, capable of finding the right problems before anyone tells them where to look. That scarcity is precisely what makes accounts like Agarwal’s valuable – they offer a rare window into what these organizations actually select for in practice.
For the tens of thousands of AI researchers currently in structured industry roles, this represents a genuine reckoning. The skills that earned them their current positions – executing well-defined tasks, optimizing known benchmarks, delivering against a product roadmap – are precisely what frontier labs screen against.
Those who adapt by seeking ambiguity rather than avoiding it, and by building projects that expose model failures rather than validating existing ones, will find themselves positioned for roles redefining what applied AI research means. Those who do not may find that credentials alone open fewer doors than expected, as the labs increasingly treat demonstrated judgment as a prerequisite that no degree program has yet learned to confer.