At the Consumer Electronics Show (CES) in Las Vegas, Nvidia showcased advances across three main categories: gaming and graphics, autonomous vehicles, and AI and data centers. Previously, we covered how Nvidia rose to the top with its irreversible AI lock-in, a position that now appears further entrenched with the latest Vera Rubin platform.
This time, we examine Nvidia’s efforts to make driverless cars a reality. And how does this push compare with autonomous advances in China?
Nvidia’s End-to-End Control of the AV Stack
Just as Nvidia provides a full AI stack for data center deployment, the same is true for the autonomous driving race. And just as Nvidia relies on TSMC fabs to produce the chips it designs, other companies, such as Alphabet’s Waymo and Tesla, increasingly rely on Nvidia as the key supplier of self-driving components.
Leading up to CES 2026, which ended on Friday, Nvidia has developed the following autonomous driving pillars:
- Nvidia DRIVE AGX Hyperion Platform – Provides automakers with a production-ready, safety-certified sensor and compute architecture. From cameras to lidar, these pre-qualified components lower automakers’ costs.
- Nvidia DRIVE AGX Thor Compute – An upgrade from Orin, Thor uses the Blackwell GPU architecture with a generative AI engine, delivering 4-8x the compute performance. Thor unifies infotainment, cockpit functions and autonomous driving into a single vision-language-action (VLA) model for Level 4 autonomy.
- Nvidia Halos Safety System – Built with partners such as Bosch, Wayve, Omnivision, Continental, ANAB and others, Halos is Nvidia’s full-stack safety framework spanning chip design through deployment, including an accredited inspection lab and a certified evaluation program.
- Nvidia Omniverse – A set of libraries for simulating a digital twin of the physical world, validating self-driving training approaches before they reach the road. Given the billions of edge cases that could possibly exist, Omniverse lets automakers account for them within physics-accurate virtual cities that model vehicles, sensors, pedestrians, weather, traffic and other factors (a minimal sketch of the scenario-randomization idea follows this list).
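To make the edge-case point concrete, below is a minimal, hypothetical sketch of scenario randomization, the basic idea behind simulation-driven validation. It does not use any Omniverse API; the scenario fields and the `run_scenario` stub are illustrative assumptions only.

```python
import random
from dataclasses import dataclass

# Hypothetical scenario parameters -- real simulators expose far richer
# controls; these fields are illustrative assumptions only.
@dataclass
class Scenario:
    weather: str
    time_of_day: str
    pedestrian_density: float   # 0.0 = empty streets, 1.0 = crowded
    sensor_dropout: bool        # simulate a camera/lidar failure

WEATHER = ["clear", "rain", "fog", "snow"]
TIME_OF_DAY = ["noon", "dusk", "night"]

def sample_scenarios(n, seed=0):
    """Randomly sample n scenario configurations from the parameter space."""
    rng = random.Random(seed)
    for _ in range(n):
        yield Scenario(
            weather=rng.choice(WEATHER),
            time_of_day=rng.choice(TIME_OF_DAY),
            pedestrian_density=rng.uniform(0.0, 1.0),
            sensor_dropout=rng.random() < 0.1,
        )

def run_scenario(scenario):
    """Stand-in for handing one scenario to a physics-accurate simulator."""
    # A real run would return collisions, disengagements, near-misses, etc.
    return {"scenario": scenario, "safety_violations": 0}

if __name__ == "__main__":
    results = [run_scenario(s) for s in sample_scenarios(1_000)]
    print(f"Simulated {len(results)} randomized scenarios")
```

Even this toy parameter space multiplies quickly once continuous values and sensor faults are added, which is why automakers lean on simulation rather than road miles to cover the long tail.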
In short, Nvidia is pursuing the approach that worked so well for Google in proliferating Android, but at a deeper infrastructure layer. By standardizing APIs and tooling while letting OEMs like Samsung differentiate themselves, Google won the mobile OS game; Android presently holds around 71% market share.
In much the same way, Nvidia has already become the default AI substrate that standardizes simulation, training and deployment for autonomous vehicles (AVs). And it is not only a full software stack with Omniverse/DRIVE/CUDA, but also a hardware stack that complements the software and certification layers.
Nvidia’s entrenchment runs much deeper, however, because validating autonomy from scratch would be prohibitively costly. Once in this ecosystem, switching out would be irrational. Moreover, no other single company provides such a comprehensive suite of services. The latest AV announcement from CES 2026 only confirms this trajectory.
Nvidia Addresses the AI Black Box Problem
So far, Nvidia has provided GPUs for training, Omniverse for simulation, DRIVE for inference and safety tooling for validation. Although already impressive, this stack was still missing one piece. At CES 2026, Nvidia announced the open-source Alpamayo model to address it.
First, what is the underlying problem for autonomous driving?
When people use large language models (LLMs), they may come away with the impression they are engaging with reasoning entities. However, beneath that layer of illusion (addressed by Apple researchers) is a probabilistic machine learning model that calculates the likelihood of every possible next token in its vocabulary. The next token is then selected based on patterns learned during training.
LLMs are only partly grounded, in the sense that they can anchor output to internet searches or to the results of running code. If a model encounters a problem not sufficiently represented in its training data, such as driving in novel environmental conditions, it typically confabulates an answer.
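To make that probabilistic core concrete, here is a minimal sketch of how a model’s raw scores (logits) over a vocabulary become a probability distribution from which the next token is sampled. The tiny vocabulary and hand-picked logits are assumptions for illustration, not values from any real model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy vocabulary and hand-picked logits -- illustrative assumptions only.
vocab = ["stop", "yield", "merge", "accelerate", "brake"]
logits = np.array([2.1, 0.3, -0.5, 0.8, 1.7])  # raw model scores

def softmax(z):
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax(logits)

# The "next word" is sampled from this distribution, not deduced:
# the most likely token usually wins, but unlikely ones still appear.
for word, p in zip(vocab, probs):
    print(f"{word:>10}: {p:.2%}")

next_token = rng.choice(vocab, p=probs)
print("sampled next token:", next_token)
```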
Even when an object is degraded, humans can pick up subtle cues and still identify it correctly. In contrast, an AI model may read misaligned pixel patterns in what should be a “stop” sign and misinterpret it entirely.
In other words, not knowing what a stop sign actually is, the way humans do, constitutes a “Black Box” problem for AI. So far, a brute-force approach has mainly been used to address it, demanding ever-increasing compute costs and data center buildout.
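As a toy numeric illustration of that brittleness, the sketch below uses a simple linear “detector” rather than a real perception model (an assumption purely for illustration): a small, structured shift in pixel values is enough to flip its answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "stop sign" detector: positive score means "stop sign".
# Both the weights and the image are random stand-ins for illustration;
# a real perception model is far more complex but shares the brittleness.
n_pixels = 32 * 32
w = rng.normal(size=n_pixels)      # hypothetical learned weights
x = rng.normal(size=n_pixels)      # flattened input image (unit-scale pixels)

def predict(img):
    return "stop sign" if w @ img > 0 else "not a stop sign"

score = w @ x
print(f"clean image:     {predict(x)} (score={score:+.2f})")

# Shift every pixel slightly in the direction that crosses the decision
# boundary -- a small per-pixel change, yet it flips the classifier's answer.
epsilon = 1.1 * abs(score) / (w @ w)
x_adv = x - epsilon * np.sign(score) * w

print(f"max per-pixel change: {epsilon * abs(w).max():.3f} (on unit-scale pixels)")
print(f"perturbed image: {predict(x_adv)} (score={w @ x_adv:+.2f})")
```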
The next step in solving AI’s Black Box problem, for the purpose of autonomous driving, is Nvidia’s new Alpamayo family of AI models, tools and datasets. As a large vision-language-action (VLA) model, Alpamayo 1 not only reacts to patterns but also provides chain-of-causation reasoning for every action it takes.
Together with the open-source AlpaSim and Physical AI Open Datasets, automakers now have more tools than ever to make self-driving as safe and robust as possible.
“Alpamayo creates exciting new opportunities for the industry to accelerate physical AI, improve transparency and increase safe level 4 deployments.”
Sarfraz Maredia, head of autonomous mobility and delivery at Uber
Nvidia CEO Jensen Huang called the Alpamayo launch “the ChatGPT moment for physical AI”. However, unlike OpenAI, which faces many competitors, it is safe to say that Nvidia is in a superior position moving forward as the underlying software and hardware infrastructure stack.
Can China Threaten Nvidia’s AI Stack?
According to December’s Counterpoint data for Q3 2025, Chinese Geely Holding Group is the world’s dominant EV supplier, at 61% market share. Chinese BYD Auto is at 16%, leaving Tesla with a 13% global EV market share.
Interestingly, Alphabet’s Waymo uses the EV platform of Zeekr, one of the subsidiaries within Geely Holding Group. Previously, we concluded that Tesla is more likely to win the robotaxi race owing to a more unified approach and control of its platforms.
Nonetheless, it is clear that China has mastered economies of scale, further boosted by not expending energy on the racial strife plaguing the West. Case in point: investors should take urban crime rates into account when considering exposure to companies like Serve Robotics (SERV).
Lacking such social fragmentation, China is arguably more focused and streamlined. By 2024, over 60% of new cars sold on the Chinese mainland already featured some level of self-driving capability.
Despite export controls on AI chips, China also built its autonomous industry on Nvidia. However, the purported geopolitical animosity is making China’s autonomous sector more diversified, while elaborate workarounds are needed to acquire more powerful AI chips like Blackwell.
Altogether, China’s full-stack AI capabilities come from the following companies:
- Baidu provides high-definition maps, algorithms and the in-car operating system DuerOS, featuring both conversational AI capabilities and broader self-driving unification. Baidu collaborates tightly with Geely, Chery and GAC to build up its Apollo Go robotaxi fleet. By mid-2025, Baidu had deployed over 1,000 robotaxis, putting it slightly ahead of both Waymo and Tesla.
- On the hardware side, Huawei is working to move China out of Nvidia’s ecosystem with its Ascend AI processors and Autonomous Driving System (ADS), its counterpart to Tesla’s FSD. On top of this, Huawei developed the Balong 5000 5G chipset for V2X communications, as well as lidar systems. Huawei’s answer to Nvidia’s frameworks is the open-source MindSpore, though it is likely to remain China-bound.
- Among other notable companies, Pony.AI and WeRide focus on full software stacks for Level 4 autonomous rollouts in both passenger and cargo transit. Complementing them are Horizon Robotics with its proprietary neural processing unit (NPU), as well as Hesai Technology and RoboSense for lidar sensors.
Although more diversified, China’s autonomous ecosystem collaborates tightly at all levels. This is likely an artifact of the nation’s political class sitting above its merchant class, as evidenced by Alibaba founder Jack Ma’s prolonged absence from the public spotlight.
When it comes to long-term scaling, Huawei’s ADS is similar to Waymo’s approach in that it relies on lidar and pre-mapping. Accordingly, most reports suggest Tesla’s vision-only FSD approach is better at handling diverse scenarios, while Huawei’s ADS is better suited to localized urban environments covered by high-precision mapping and denser local network bandwidth.
Consequently, this would make Tesla better suited globally, as we concluded previously.
The Bottom Line
While Huawei’s Ascend chips are comparable to Nvidia’s older H100, China is still catching up to Blackwell, and Nvidia is already moving beyond it with Vera Rubin. On top of this hardware gap, Nvidia’s CUDA platform carries nearly two decades of developer loyalty and optimization.
With the launch of the open-source Alpamayo, Huawei’s equally open-source MindSpore is unlikely to make a big dent even among Chinese-owned AI firms. Altogether, this makes Nvidia’s hardware and software moats substantial and hardened.
Given that the robotaxi and self-driving economy is just starting to ramp up, it is likely that Nvidia will see valuations far beyond $5 trillion by 2030.
