

Nvidia, ServiceNow expand partnership for AI agents

Ninety-five percent of enterprises can't measure what value they're actually getting from their AI investment. That stat, dropped by ServiceNow President and Chief Product Officer Amit Zavery on stage in Las Vegas this week, landed like a confession. Companies have deployed agents across IT, HR, security, and customer service — and most of them are flying blind. It was the setup ServiceNow and Nvidia needed to justify what they announced next.

At ServiceNow Knowledge 2026, Nvidia founder and CEO Jensen Huang joined ServiceNow chairman and CEO Bill McDermott during the opening keynote to discuss the next phase of enterprise AI. The conversation wasn't just strategic posturing. The companies are expanding their collaboration across the full stack, delivering specialized autonomous AI agents designed to be safe and easy to adopt. Those agents are powered by Nvidia accelerated computing, open models, domain-specific skills, and secure agent execution software, and they combine enterprise workflow context from ServiceNow Action Fabric with governance from ServiceNow AI Control Tower. For anyone tracking the partnership's enterprise AI infrastructure ambitions, this marks the most concrete deployment blueprint the two companies have produced in their six-year joint history.

From Productivity Play to Platform Shift

Huang, characteristically blunt, framed the stakes in terms founders and operators understand immediately. Speaking to an audience that included roughly 25,000 CIOs, he described five layers of the AI economy — energy, chips, infrastructure, models, and applications — and positioned ServiceNow at the apex: the bridge between raw model intelligence and enterprise execution. His line to McDermott cut to the point: "You started out being the human operating system for enterprise, and now you become the AI agentic operating system as well."

That framing matters because it reframes how operators should evaluate this partnership. This is not a chip vendor selling compute capacity to a workflow platform. It's a vertical integration play — one where Nvidia's hardware and model stack becomes the engine inside ServiceNow's governance and orchestration layer. The go-to-market implication is significant: enterprises don't just buy Nvidia or ServiceNow separately. Increasingly, they buy the combination.

ServiceNow used Knowledge 2026 to reposition itself from workflow platform to what McDermott calls "the AI agent of agents." The centerpiece announcement is an expanded AI Control Tower, now capable of discovering, governing, observing, securing, and measuring every AI agent, model, and workflow across the enterprise, regardless of origin. That last clause — regardless of origin — is doing enormous strategic work. It means ServiceNow is pitching its governance layer as infrastructure for the entire enterprise agent ecosystem, not just its own.

Project Arc: The Desktop Agent That Doesn't Need a Babysitter

The flagship announcement from the partnership's Knowledge 2026 chapter is Project Arc, a long-running, self-evolving autonomous desktop agent designed for knowledge workers, including developers, IT teams, and administrators. Unlike standalone AI agents, Project Arc connects natively to the ServiceNow AI Platform through ServiceNow Action Fabric, bringing governance, auditability, and workflow intelligence to every action the agent takes. It can access local file systems, terminals, and applications installed on a machine to complete complex, multistep tasks that traditional automation can't handle — but with the controls enterprises actually need to deploy AI at scale.

That's the real pitch embedded in Project Arc: not capability, but trustworthiness. The demo at Knowledge made the vulnerability vivid — a prompt injection attack instructing an AI agent to override pricing rules, set shipping to $1, and suppress transaction logs. The fact that enterprise agents can be compromised this way isn't new. The fact that a company of ServiceNow's scale is staging that attack as a conference keynote demo signals that governance has moved from legal boilerplate to product differentiator.

The runtime underneath Project Arc is NVIDIA OpenShell, an open-source secure environment for developing and deploying autonomous agents in sandboxed, policy-governed containers. With OpenShell, enterprises can define what an agent can see, which tools it can use, and how each action is contained. ServiceNow is building on and contributing to OpenShell to advance a common foundation for secure, enterprise-grade agent execution. The open-source contribution strategy here is deliberate — it widens adoption of the standard while keeping ServiceNow's governance and Control Tower layers proprietary.
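The policy model described above — defining which tools an agent may invoke and which paths it may touch, then gating every action through that policy before execution — can be sketched in a few lines. The following is an illustrative Python sketch of the general pattern only, not OpenShell's actual API; every name in it is hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of a policy-gated agent runtime.
# None of these names come from OpenShell; they only model the
# pattern: deny by default, allow-list tools and file paths.

@dataclass
class AgentPolicy:
    allowed_tools: set = field(default_factory=set)   # tools the agent may call
    readable_prefixes: tuple = ()                     # path prefixes it may read

    def permits_tool(self, tool: str) -> bool:
        return tool in self.allowed_tools

    def permits_path(self, path: str) -> bool:
        return any(path.startswith(p) for p in self.readable_prefixes)

def execute_action(policy: AgentPolicy, tool: str, path: str) -> str:
    """Gate every agent action through the policy before dispatching it."""
    if not policy.permits_tool(tool):
        return f"DENIED: tool '{tool}' not in policy"
    if not policy.permits_path(path):
        return f"DENIED: path '{path}' outside sandbox"
    return f"ALLOWED: {tool} on {path}"  # a real runtime would dispatch here

policy = AgentPolicy(
    allowed_tools={"read_file", "run_terminal"},
    readable_prefixes=("/workspace/",),
)

print(execute_action(policy, "read_file", "/workspace/report.txt"))  # allowed
print(execute_action(policy, "read_file", "/etc/passwd"))            # denied: path
print(execute_action(policy, "send_email", "/workspace/report.txt")) # denied: tool
```

The deny-by-default shape is the point: an injected instruction like "suppress transaction logs" fails not because the model refuses, but because the runtime never grants the capability in the first place.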

"Project Arc represents the next step in our ongoing collaboration with NVIDIA, bringing autonomous execution to the desktop," said Jon Sigler, EVP and GM of AI Platform at ServiceNow. "By combining OpenShell's runtime layer with ServiceNow AI Control Tower, and powered by ServiceNow Action Fabric, we're delivering the governance and security that enterprise AI requires."

The Model Stack: Nemotron, Apriel, and the Benchmarking Problem

Underneath the agent announcements sits a model story that's been building since early 2025. At NVIDIA GTC in March 2025, the two companies integrated NVIDIA Llama Nemotron reasoning models with the ServiceNow platform. Two months later at Knowledge 2025, they co-developed Apriel Nemotron 15B — a purpose-built open-source reasoning model delivering lower latency and lower inference costs for enterprise workflows. By October 2025, Apriel 2.0 arrived, adding multimodal input support so agents could interpret screenshots, forms, and diagrams — not just text.

ServiceNow's Autonomous Workforce of AI Specialists is built on the ServiceNow AI Platform and leverages NVIDIA Agent Toolkit software, the NVIDIA AI-Q Blueprint, and a combination of closed and open models, including NVIDIA Nemotron and ServiceNow Apriel models. This layered stack — open models for customization, blueprints for rapid deployment, and a proprietary reasoning model purpose-tuned for enterprise workflows — is how the two companies are answering a legitimate criticism of enterprise AI: that generic frontier models are too unpredictable, too expensive, and too opaque for production deployment at global scale.

The benchmarking piece is underappreciated. The companies are advancing NOWAI-Bench, an open benchmarking suite for enterprise AI agents, integrated with the NVIDIA NeMo Gym library. If NOWAI-Bench gains traction as the industry standard for evaluating enterprise agents — and the participation of names like Adobe, Atlassian, Cisco, CrowdStrike, and SAP in the broader NVIDIA Agent Toolkit ecosystem suggests it might — Nvidia and ServiceNow effectively set the grading rubric for the entire market. That's a moat, not a feature.

One fact worth sitting with: ServiceNow University, the company's free learning platform, has grown to nearly 2 million learners, up 80% year over year since its launch at Knowledge 2025. The workforce implications of autonomous agents extend well beyond automation anxiety — they land directly on the question of who gets trained to work alongside them.

Nvidia, ServiceNow Expand Partnership for AI — And Everyone Else Is Watching

The global competitive context here is sharper than it looks from the press releases. Microsoft has its Copilot agent stack and a deepened partnership with ServiceNow extending AI Control Tower governance across the Microsoft 365 Agent ecosystem. Salesforce is pushing Agentforce. SAP has its Joule agent strategy. Every major enterprise platform is racing to be the orchestration layer for autonomous AI, and they're all converging on the same problem: agents are powerful, but enterprises won't deploy them at scale without auditability and control.

What Nvidia and ServiceNow have that most of the field doesn't is a vertically integrated stack from silicon to governance. Nvidia brings compute efficiency and open model customization; ServiceNow brings 7,700+ enterprise customers and workflow data that no startup can replicate. The partnership is, in essence, a hardware-software stack optimized for the specific constraints of regulated enterprise environments — financial services, healthcare, government — where model performance alone is never sufficient.

The UK angle is worth noting for European readers: in October 2025, ServiceNow announced it would accelerate UK AI innovation with NVIDIA AI infrastructure for agentic AI capabilities. With the EU AI Act's compliance timelines now actively shaping enterprise procurement decisions across Europe, the governance-first architecture that OpenShell and AI Control Tower represent becomes a regulatory selling point, not just a product feature. Asia-Pacific enterprises, particularly in Japan and South Korea where enterprise automation adoption has accelerated sharply since 2024, are watching this architecture closely as a compliance template.

The counterargument, stated plainly: none of this is cheap, and autonomous agents that touch financial systems, customer data, or HR records carry liability exposure that hasn't been fully stress-tested in court. The governance layer ServiceNow and Nvidia are building is designed to address this — but designed and proven are different things. Enterprises that move fast on Project Arc deployment before the benchmarks mature are taking a calculated risk. The NOWAI-Bench effort is an implicit acknowledgment of that gap.

The deeper story here isn't about any single product announcement. It's about what happens when the world's dominant AI chip company and the world's dominant enterprise workflow platform decide to build the same thing together. They're not just selling tools. They're defining what enterprise-grade AI even means — and doing it fast enough that their competitors are left arguing over the definition after the fact.

Key Takeaways

The architecture is the strategy. Project Arc, OpenShell, Apriel, NOWAI-Bench — these aren't discrete products. They're interlocking components of a full-stack bet that enterprise AI must be governed from the runtime up, not bolted on after deployment.

Governance is the new performance. Every meaningful announcement from Knowledge 2026 leads with control, auditability, and security. Capability is table stakes. Trust infrastructure is the actual product.

The open-source contributions are deliberate. OpenShell and NOWAI-Bench being open-source is a classic platform play: commoditize the standard, monetize the governance and the compute layer on top. It's the same logic that made Linux foundational.

Six years of data flywheel matters. The Apriel Nemotron model family is post-trained on ServiceNow-specific workflow data. Competitors can buy Nvidia chips, but they can't replicate 7,700 enterprise customers' worth of workflow context.

The regulatory tailwind is real. As the EU AI Act and equivalent frameworks mature globally, architectures built around auditability and least-privilege access aren't just safer — they're likely to become table stakes for enterprise procurement. ServiceNow and Nvidia are positioned ahead of that curve.

