
When AI Skills Become Non-Negotiable

  • Writer: Yoshi Soornack
  • 14 hours ago
  • 5 min read

What Meta's Move Signals for Your Team: from an optional capability to a career requirement in just 18 months.

Imagine your next performance review hinging not just on the projects you delivered, but on how effectively you deployed AI to deliver them. That's no longer hypothetical for Meta employees. Starting in 2026, the social media giant is formally baking "AI-driven impact" into performance assessments, making it clear that AI proficiency is shifting from an interesting skill to a baseline expectation.


The shift is significant not because it's surprising, but because it signals where the entire sector is heading. Janelle Gale, Meta's head of people, spelt it out in an internal memo to staff: "As we move toward an AI-native future, we want to recognise people who are helping us get there faster."




From Optional to Essential

For 2025, Meta is taking a measured approach. Individual AI usage metrics won't factor formally into performance reviews yet. But employees are being nudged hard. The company is deploying an "AI Performance Assistant" and encouraging staff to use internal tools like Metamate and Google Gemini when writing performance summaries and peer feedback.


The signal is unmistakable. By 2026, managers will begin asking whether projects leveraged AI to cut delivery time, improve metrics, or unlock new capabilities. Employees who can't demonstrate tangible AI-driven wins will fall increasingly behind their peers.


This reflects a broader pattern in the tech industry. Microsoft executives have told managers that using AI is "no longer optional". Google's CEO, Sundar Pichai, has stated that AI literacy is now essential for leaders. Amazon is rebuilding workflows so AI sits at the centre of every internal process. Meta is simply the most explicit about formalising what was already becoming practice across Big Tech.


The business case for this shift is clear. According to research from Wharton's Human-AI program, 72% of organisations are now formally measuring the ROI of generative AI, with three out of four leaders reporting positive returns on their AI investments. However, the same research reveals the hidden cost of AI illiteracy: only about one in four AI initiatives actually deliver expected ROI, and fewer than 20% have been fully scaled across the enterprise. The difference between success and failure often comes down to team capability. Organisations with stronger AI literacy see 2.3 times faster implementation and substantially higher returns.


The expectation extends well beyond engineering. Meta wants every employee, whether in engineering, marketing, or operations, to demonstrate how they're using AI to work smarter, faster, and better.

The Fairness Problem

Yet Meta's approach also exposes the real friction point: not all roles are equally positioned to demonstrate visible AI impact. A software engineer can show code written with AI assistance. A product manager can point to faster analysis. But what about roles in operations, HR, or legal, where demonstrating AI's contribution is trickier? A junior accountant might show measurable AI wins, whilst a senior strategist struggles to quantify AI's contribution to longer-term decisions.


This creates a secondary problem: fairness. Performance reviews were already imperfect. Tying them explicitly to AI-driven output risks turning performance management into a proxy for AI fluency rather than genuine contribution. Some employees will adapt quickly; others will find the metric unfamiliar or uncomfortable, rightly seeing it as shifting the evaluation goalposts partway through their careers.


Research into AI adoption challenges reveals how these dynamics play out in practice. According to analysis from the Digital Project Manager, one of the most significant adoption barriers remains uneven skill levels. "Everyone is at a different point in their AI journey," the research notes. "Some team members are just getting started, while others are more advanced. This uneven experience creates adoption challenges, especially when there's a lack of understanding or confidence in using AI tools." When performance evaluation systems formalise AI competency as a requirement, they risk penalising employees still on the learning curve, whilst rewarding those who adopted early. That dynamic can create either motivation for upskilling or resentment, depending on how organisations handle training and support.


The most successful organisations recognise that AI skills development must come before AI-tied performance metrics. Leapsome's 2025 HR Insights report found that 94% of HR leaders agree that failing to train or reskill employees in AI carries significant risks. Yet many organisations skip the training phase entirely, expecting adoption to happen organically. That approach typically results in adoption gaps that correlate with generation, technical background, and proximity to AI-forward management, creating the very fairness issues Meta's model is designed to surface and hopefully address.


The Wider Workforce Implication

Despite these fairness concerns, Meta's move is strategically sound, because it signals something important about the future shape of work. If a company with Meta's resources and culture is making AI-driven impact a core expectation, it's only a matter of time before other sectors follow. Project delivery, in particular, will feel this pressure acutely.


The skills needed for future project teams will increasingly include the ability to use AI to plan, analyse, communicate, forecast and make decisions. What we're seeing with Meta is the early stage of a shift that will redefine the baseline capabilities of project professionals, making AI skills as essential as digital literacy was a decade ago.


What This Means for Project Professionals

For project managers, the Meta model presents both opportunity and challenge. The opportunity is substantial: project management is inherently suited to AI assistance. Scheduling, resource allocation, risk assessment, stakeholder communication, budget forecasting, scenario planning, and even some aspects of leadership can be enhanced significantly through thoughtful AI application.


The challenge: developing genuine AI literacy takes time and intentional investment. It's not about knowing which tools exist; it's about understanding what questions to ask, when to trust outputs, how to integrate AI-generated insights into human decision-making, and where the genuine limitations lie. We believe the professionals who rush to demonstrate "AI-driven impact" without building a foundational understanding of what these tools can and cannot do will face the steepest learning curve and the most visible failures.


In our view, the essential point is that Meta's move validates something we've been observing in project delivery contexts for some time: AI skills aren't a specialisation anymore. They're becoming part of the job description for anyone in a delivery or strategic role. Project professionals who develop comfort working alongside AI tools whilst maintaining critical judgment about outputs will be well-positioned. Those who resist the shift or assume AI tools are simply "nice to have" risk being left behind.


Our suggestion to project teams is clear: start building AI literacy now, before your performance evaluation forces the conversation. Choose one practical application for your projects (perhaps risk analysis, schedule forecasting, or resource capacity planning) and commit to genuinely understanding how AI can enhance (not replace) your human expertise in that domain. By the time Meta's model reaches your sector (and it will), you'll be ahead of the curve rather than scrambling to catch up on deadline.


The 2025 transition period Meta is allowing is real. But the direction is locked in. By 2026, across Meta and likely much of the sector, the question won't be whether you use AI. It'll be whether you use it well. The organisations that treat this as a training moment rather than a performance test will build capability that sustains competitive advantage. Those who treat it as a gotcha, hoping to catch underperformers, will end up building resentment and losing talent. We expect the former approach will win out over time, but watch closely how your own organisation frames the transition. That framing says everything about whether leadership actually believes AI literacy is foundational or is just using it as another performance lever.


What does your organisation look like when AI proficiency becomes the baseline? Subscribe to Project Flux to explore how teams are adapting.