SK Telecom presented the outcomes of its joint AI model development work with NVIDIA during a technical panel session at NVIDIA Nemotron Developer Days Seoul 2026 — held locally for the first time. The venue itself signals something. Jensen Huang's team doesn't run bespoke developer events in cities unless those cities have become genuinely important nodes in NVIDIA's ecosystem. Seoul is now one of them.
At the event, SK Telecom outlined its collaboration plans for advancing sovereign AI, including the development of A.X K2 — the successor to its proprietary foundation model A.X K1. That model succession carries more weight than a typical product roadmap update. It is the next chapter of a story that began with a supercomputer, ran through a government mandate, and now connects the world's dominant AI chip company to the world's most wired nation's ambitions to own its own intelligence.
Titan to A.X K1: A Partnership Built Over Five Years
The SKT-NVIDIA relationship is not a deal signed at a trade show and announced via press release. It is an engineering collaboration that has evolved through hardware generations, model architectures, and shifting geopolitical priorities over half a decade.
The partnership began in 2021 when SK Telecom constructed the 'TITAN' supercomputer based on NVIDIA's A100 GPUs. Since then, the two companies have maintained close cooperation across data, infrastructure, and the overall training environment, with NVIDIA's technology deeply involved throughout SK Telecom's proprietary AI model development process.
That foundation mattered when Korea launched its national AI programme. Last year, SK Telecom participated in the government-led Sovereign AI Foundation Model Project, using NVIDIA's Nemotron dataset to train the 519-billion-parameter A.X K1 model. During the process, the two companies worked to enhance the stability of large-scale AI training using Megatron-LM, a distributed training framework, and NeMo Curator for data preparation and refinement.
519 billion parameters. For context: OpenAI has never disclosed GPT-4's parameter count, but estimates place it between 1 and 1.8 trillion. A 519-billion-parameter model built by a telco — on NVIDIA infrastructure, for a government programme, in Korean — puts SKT in a tier of AI model development that very few Asian organisations can credibly claim. And it got there not by acquiring a research lab but by methodically deepening an infrastructure partnership over five years.
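To make the scale concrete, a back-of-envelope calculation helps. The byte counts below are common industry conventions (16-bit weights for serving, roughly 16 bytes per parameter for Adam-style training state), not disclosed details of A.X K1's actual setup:

```python
# Back-of-envelope scale check for a 519B-parameter model.
# Illustrative arithmetic only; byte counts are common conventions,
# not disclosed details of A.X K1's training configuration.
params = 519e9
bytes_per_param_bf16 = 2            # 16-bit weights for serving
weights_tb = params * bytes_per_param_bf16 / 1e12
print(f"Weights alone (bf16): ~{weights_tb:.2f} TB")

# Adam-style training commonly keeps fp32 master weights plus two
# optimizer moment tensors: roughly 16 bytes/param before gradients
# and activations are even counted.
train_state_tb = params * 16 / 1e12
print(f"Weights + optimizer state: ~{train_state_tb:.1f} TB")
```

Over a terabyte just to hold the weights, and several times that to train — which is why a multi-thousand-GPU cluster with a battle-tested distributed training framework is not optional at this scale.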
This collaboration enabled SK Telecom to improve AI model performance while allowing NVIDIA to further refine its core software frameworks, creating a mutually reinforcing development cycle. That reciprocity is what distinguishes this from a vendor-customer relationship. SKT gave NVIDIA real-world stress-testing of Megatron-LM at a scale that most companies cannot replicate. NVIDIA gave SKT the tooling and dataset resources to train a model that would otherwise have required years more runway. Both sides got something that money alone couldn't efficiently buy.
"SK Telecom and NVIDIA have built a relationship in which we proactively apply new technologies and advance together through mutual feedback. Through this partnership, we will jointly contribute to the development of Korea's AI ecosystem."
— Kim Tae-yoon, Head of Foundation Model Development, SK Telecom
The A.X K2 Ambition — and What It Needs to Deliver
SK Telecom has shared its experience applying new model architectures, such as Mixture of Experts (MoE), along with development-infrastructure details, with NVIDIA — jointly establishing a more sophisticated and stable large-scale training foundation. NVIDIA's solutions are expected to be used to train the successor model, A.X K2, currently under development, with plans to continue joint research in next-generation foundational technology areas such as multimodal and Vision-Language Models.
The shift to Mixture of Experts architecture is significant. MoE models — pioneered at scale by Google with models like Gemini and adopted by Mistral, DeepSeek, and others — activate only a subset of parameters for any given input, dramatically improving inference efficiency without sacrificing model capability. A 519-billion-parameter MoE model behaves computationally like a much smaller one in practice, which matters enormously when you're running it at telco scale across millions of consumer interactions.
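The core MoE mechanism can be sketched in a few lines. This is a minimal, generic illustration of top-k expert routing — not A.X K2's architecture, whose details are not public — using NumPy and randomly initialised linear experts:

```python
import numpy as np

def moe_forward(x, experts, gate_w, top_k=2):
    """Minimal Mixture-of-Experts layer: route each token to its
    top-k experts and mix their outputs by softmax gate weights.
    Only top_k experts run per token, which is why a very large MoE
    model costs like a much smaller dense model at inference time."""
    logits = x @ gate_w                            # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # chosen experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = logits[t, top[t]]
        w = np.exp(chosen - chosen.max())
        w /= w.sum()                               # softmax over chosen experts
        for weight, e in zip(w, top[t]):
            out[t] += weight * experts[e](x[t])
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 8, 4, 3
# Each "expert" here is just a random linear map, for illustration.
experts = [(lambda v, W=rng.normal(size=(d, d)): v @ W) for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
x = rng.normal(size=(tokens, d))
y = moe_forward(x, experts, gate_w, top_k=2)
print(y.shape)  # same shape as the input: (3, 8)
```

With 4 experts and top-2 routing, each token pays for only half the expert compute; production MoE models push this ratio much further (dozens of experts, a handful active), which is where the inference savings come from.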
Vision-Language Model development signals the next frontier. A.X K1 was primarily a language model — powerful for Korean-language enterprise applications, document understanding, and AI agent workflows. A.X K2 incorporating VLM capabilities means SKT is building toward models that can process images, video, and text simultaneously. That unlocks use cases in manufacturing quality control, medical imaging, autonomous vehicles, and physical AI applications that a text-only model cannot serve.
At the World IT Show 2026 in Seoul, SK Telecom showcased its full-stack AI vision across five zones: Network AI, AI Data Center Solutions, AI Model, Agent AI, and Physical AI — demonstrating AI applied across networks, data centres, and everyday life. Highlights included AI-RAN, GPUaaS via the Haein cluster, sovereign AI foundation model A.X K1, and AI agents including A.call, A.note, and A.auto.
That product surface area is the point. A.call, A.note, A.auto — consumer AI services running on a sovereign foundation model, delivered over a telco network the company also operates, using GPU infrastructure it also owns. Vertical integration from semiconductor supply (through SK Hynix's HBM chips inside NVIDIA GPUs) to consumer application layer. No other telco in Asia has assembled this stack.
The Haein Cluster: Sovereign Infrastructure With a Name That Means Something
SK Telecom launched its sovereign AI infrastructure platform, the Haein GPU cluster — one of Korea's largest, integrating more than 1,000 NVIDIA Blackwell GPUs into a single system. The cluster is named "Haein" (해인, 海印), inspired by Haeinsa Temple, where the Tripitaka Koreana — a UNESCO World Heritage collection of over 80,000 Buddhist scriptures — is stored. The name reflects the intention to establish this cluster as a key part of Korea's sovereign AI infrastructure.
The naming choice is deliberate and revealing. The Tripitaka Koreana is one of the most complete and carefully preserved collections of Buddhist scripture in the world — carved onto 80,000 wooden blocks in the 13th century during the Mongol invasions, as an act of cultural and spiritual preservation under existential threat. Naming your national AI compute cluster after the temple that has kept that collection safe for 800 years is not an accident. It's a statement about what this infrastructure is for.
The Haein GPU cluster was recognised at MWC Barcelona 2026, where SK Telecom received the Best Cloud Solution award at the GSMA Global Mobile Awards — the third consecutive year that its cloud-related technologies have been acknowledged in this category. That winning streak in a competition judged by the global telecommunications industry reflects how consistently SKT has executed on its AI infrastructure ambitions, not just announced them.
SK Telecom's in-house virtualisation solution, Petasus AI Cloud, can instantly partition and reconfigure the GPU cluster according to customer needs, maximising its utilisation. The company also provides AI Cloud Manager, an AIOps platform that efficiently manages the entire AI service lifecycle — from development and training to deployment. The operational sophistication embedded in these proprietary tools is what separates a company that bought GPUs from one that has learned to run them at production scale for enterprise customers.
The broader SK Group context makes the SKT-NVIDIA story even more structurally interesting. SK Group is building an NVIDIA AI factory featuring more than 50,000 NVIDIA GPUs, with the new factory set to serve SK subsidiaries — including SK Hynix and SK Telecom — as well as external organisations through a GPU-as-a-Service model. SK Hynix, meanwhile, is the world's leading producer of High-Bandwidth Memory — the memory architecture that makes NVIDIA's H100 and Blackwell GPUs performant at the scale that AI training requires. The same conglomerate is simultaneously supplying the memory inside NVIDIA's chips, building the AI factory that runs on those chips, developing the foundation models trained on that factory's compute, and delivering consumer AI services over its own network. In the global AI supply chain, SK Group occupies more of the value chain simultaneously than any non-American organisation.
Beyond NVIDIA: SKT's Hedge Into Domestic Chip Sovereignty
SK Telecom's AI collaboration efforts are not limited to NVIDIA. On April 9, the company signed a strategic MOU with semiconductor design firm Arm and AI semiconductor startup Rebellions for next-generation AI infrastructure innovation. The three companies agreed to jointly develop an AI inference performance enhancement solution combining Arm's 'Arm AGI CPU' and the AI accelerator 'RebelCard', which Rebellions plans to launch in Q3 2026.
This is the crucial hedge that the NVIDIA-focused headlines consistently underplay. SKT is running NVIDIA Blackwell GPUs for AI training — where NVIDIA's dominance is structural and near-unassailable — while simultaneously developing domestic inference infrastructure with Korean chipmaker Rebellions. Training and inference are different workloads with different optimal hardware profiles. Training requires maximum raw compute, where NVIDIA's CUDA ecosystem and NVLink interconnect deliver unmatched performance. Inference — running trained models for production queries — is a workload where specialised chips like Rebellions' Atom SoC and the forthcoming RebelCard can deliver dramatically better power efficiency and cost-per-query.
SK Telecom is testing neural processing unit technology from South Korean chip vendor Rebellions to support its domestic AI services. SKT is testing servers equipped with Rebellions' Atom system-on-chip for a range of its AI services, including spam filtering and Adot Phone Call Summary, based on SKT's Korean LLM, AdotX. Spam filtering at telco scale runs billions of inference queries daily. Deploying a domestic NPU for that workload simultaneously validates Rebellions' hardware commercially, reduces SKT's inference cost, and advances Korean semiconductor competitiveness in a segment where NVIDIA's dominance is not yet complete.
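A toy calculation shows why per-query efficiency dominates the economics at this volume. Every figure below is a hypothetical placeholder chosen for illustration — none of it is SKT, NVIDIA, or Rebellions data:

```python
# Why inference efficiency dominates at telco scale: a toy comparison.
# All numbers are hypothetical placeholders, not vendor data.
queries_per_day = 2e9        # "billions of inference queries daily"
j_per_query_gpu = 300.0      # hypothetical energy per query, general GPU
j_per_query_npu = 60.0       # hypothetical energy per query, inference NPU

def kwh_per_day(joules_per_query):
    """Total daily energy in kilowatt-hours (1 kWh = 3.6e6 J)."""
    return queries_per_day * joules_per_query / 3.6e6

gpu_kwh = kwh_per_day(j_per_query_gpu)
npu_kwh = kwh_per_day(j_per_query_npu)
print(f"GPU fleet: ~{gpu_kwh:,.0f} kWh/day; NPU fleet: ~{npu_kwh:,.0f} kWh/day")
```

Under these placeholder assumptions, a 5x efficiency edge per query compounds into a six-figure kWh gap every single day — the kind of gap that justifies qualifying a second hardware vendor even when the software migration is painful.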
Key Takeaways
1. The NVIDIA partnership is a software story as much as a hardware one. SKT's use of Megatron-LM, NeMo Curator, and Nemotron datasets — and its role in stress-testing those frameworks at scale — has made it a co-developer of NVIDIA's AI software stack, not just a customer of its GPU hardware. That is a meaningfully different relationship.
2. A.X K2's multimodal ambitions define the next competitive frontier. Moving from a 519B-parameter language model to a vision-language model capable of processing images and video alongside text unlocks manufacturing, medical, and physical AI use cases that pure language models cannot serve. The question is execution speed.
3. The Rebellions hedge is strategically rational. Maintaining NVIDIA dependency for training while developing domestic inference infrastructure with Rebellions and Arm gives SKT optionality as the inference chip market evolves. It also serves Korea's national interest in building a domestic AI semiconductor ecosystem.
4. SKT's vertical integration from HBM to consumer app is unique in Asia. No other Asian telco can credibly claim control over the full AI stack from memory supply through GPU infrastructure, foundation model, cloud platform, and consumer AI service layer. That integration is both a competitive moat and an execution challenge at every layer simultaneously.
5. Korea's national AI programme is producing the geopolitical template others will follow. The Sovereign AI Foundation Models project — combining government mandate, corporate infrastructure, NVIDIA tools, and domestic model development — is a blueprint that Southeast Asian governments, Middle Eastern sovereign wealth funds, and European nations are watching and beginning to replicate.
The Honest Counterargument
Building a sovereign AI stack is strategically correct. Executing one is technically brutal. Training a 519-billion-parameter model is an engineering achievement. Making it competitive against GPT-4o, Gemini 2.0, and Claude 3.5 in real-world performance — not just parameter count — is a different challenge entirely. Korean-language capability is a genuine differentiation; general reasoning, coding, and multimodal performance benchmarks are where the global competition concentrates.
On April 24, 2026, SK Hynix announced plans to establish independent AI infrastructure by deploying AI systems directly at its production sites, moving away from reliance on SK Telecom's cloud services. That internal defection — a sister company within the same conglomerate choosing to build its own AI infrastructure rather than use SKT's platform — is an early signal that sovereign AI infrastructure, even when technically excellent, must still compete on price and performance against alternatives. SK Hynix procuring 250 servers with 2,000 NVIDIA Blackwell GPUs for its Cheongju facility independently of SKT's cloud is not a crisis, but it is a reminder that even captive demand is not guaranteed.
The GPU-as-a-Service market that SKT is targeting with the Haein cluster also faces competition from hyperscalers — AWS, Google Cloud, and Microsoft Azure all have Korean data centre presence and the global software ecosystems that domestic enterprises frequently prefer. SKT's advantages are data sovereignty, Korean-language model access, and network integration. Whether those advantages command a sufficient price premium to sustain the infrastructure investment will be determined by enterprise sales cycles over the next two to three years.
What This Means for the Global Sovereign AI Race
At MWC Barcelona 2026 in February, NVIDIA announced a commitment — together with BT Group, Deutsche Telekom, Ericsson, SK Telecom, SoftBank Corp., T-Mobile, and others — to build the world's next generation of wireless networks on AI-native, open, secure, and trustworthy platforms. SK Telecom's CEO Jung Jai-hun framed the commitment precisely: "SKT is evolving telco infrastructure to serve as the foundation for the AI era, where connectivity serves as a platform for intelligence and innovation."
That framing — connectivity as a platform for intelligence — is the thesis underneath everything SKT is building. The telco's traditional value proposition was bandwidth: moving bits from A to B faster and more reliably than competitors. The new value proposition is intelligence: making those bits smarter, more contextually aware, and increasingly autonomous at the network edge.
Samsung, SK Group and Hyundai Motor Group are each building AI factories with up to 50,000 NVIDIA GPUs apiece, while NAVER plans to deploy more than 60,000. This is forming the backbone for Korea's AI-powered transformation across manufacturing, mobility, telecommunications and robotics. Korea is not one company making an AI bet. It is an entire industrial ecosystem — chaebols, government, telcos, and chipmakers — moving in coordinated parallel toward the same national objective. That coordination is both Korea's structural advantage and, for the rest of Asia, the sovereign AI development model most worth studying.
For the global AI infrastructure landscape, the SKT-NVIDIA partnership reveals something important about how NVIDIA is extending its dominance beyond the US hyperscalers. By embedding its software frameworks — Megatron-LM, NeMo Curator, Nemotron — deeply into the development cycles of national AI programmes, NVIDIA creates switching costs that are not primarily about GPU hardware lock-in. They are about institutional knowledge, training pipelines, and engineering muscle memory built around NVIDIA's tools. Switching away from NVIDIA for A.X K3 would require SKT to retrain its entire model development team and rebuild its data preparation infrastructure from scratch.
That is a more durable moat than any silicon advantage. And SK Telecom — still, on paper, a telephone company, now running one of Asia's largest sovereign AI infrastructure programmes — helped NVIDIA build it.