AI Value Chain: 5 Layers Every Investor Must Understand Before Investing


Most people think AI is just a chatbot. You open ChatGPT, ask it to fix an email, and it does. It feels like magic. But that is like swiping your credit card at a restaurant and thinking you now understand how Visa makes money. You are using the product — not understanding the system.
The reality is that the AI industry is far larger than what you see on your phone screen. Behind every ChatGPT response or AI-powered Google search lies an ecosystem worth hundreds of billions of dollars — involving power plants, chip factories, massive data centres, and infrastructure most people have never heard of.
Jensen Huang, CEO of Nvidia, described AI at the World Economic Forum (Davos) in January 2026 as a five-layer system: Energy, Chips, Cloud, Models, and Applications. He called it "the largest infrastructure build in human history."
This article will walk you through each layer of the AI value chain — from the electricity powering data centres all the way to the app on your phone — and explain why understanding this "AI Stack" is critical for every investor, including investors in Malaysia.
How most people understand AI: "a smart computer that answers questions." That is like saying the internet is "a place to watch videos." Technically not wrong, but it misses the entire picture.
Jensen Huang describes AI as a five-layer system stacked on top of each other, where each layer feeds the one above it, and capital flows in both directions: Energy at the base, then Chips, Cloud, Models, and Applications at the top.
Every conversation about AI that focuses only on Layer 5 (Applications) is actually missing 80% of the real picture.
And the most important part for investors: money does not flow evenly across these layers. It concentrates, it compounds, and right now it is concentrating in places most people are not paying attention to.
Everyone keeps looking at the application layer. ChatGPT, Copilot, Claude, Perplexity. These are the products you touch, so they feel like the whole story.
But here is what most people miss.
In 2026, the four largest cloud companies (Amazon, Microsoft, Google, and Meta) are expected to spend between USD650 billion and USD700 billion in capital expenditure (capex). That is roughly equivalent to Switzerland's GDP. And roughly two-thirds of it, approximately USD450 billion, goes directly to AI infrastructure.
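A quick back-of-envelope check of these figures, using only the numbers cited above (a sketch, not a forecast):

```python
# Hyperscaler capex figures cited in the article, in USD billions.
total_capex_low, total_capex_high = 650, 700
ai_infrastructure = 450

# Implied share of total capex going to AI infrastructure.
share_high = ai_infrastructure / total_capex_low   # ~69% at the low capex end
share_low = ai_infrastructure / total_capex_high   # ~64% at the high capex end

print(f"AI infrastructure share of capex: {share_low:.0%}-{share_high:.0%}")
```

In other words, roughly two in every three capex dollars at these four companies is going into AI infrastructure rather than anything else they do.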
Not chatbots. Not applications. Buildings, chips, cables, and cooling systems.
Think of it this way: before anyone can use ChatGPT, someone has to build a data centre the size of a shopping mall, fill it with tens of thousands of specialised processors, connect it with networking equipment worth more than most companies, and then feed the whole thing with enough electricity to power a small city. Every single day.
That is Layers 1 through 3. The invisible layers. The layers where serious capital is being deployed.
OpenAI reported USD20 billion in annualised recurring revenue by the end of 2025, up from USD6 billion a year earlier and roughly USD2 billion the year before that. That is 10x growth in two years. No company in history has scaled revenue that fast.
But here is the reality: OpenAI burned approximately USD9 billion in cash in 2025 and projects burning USD17 billion in 2026. Their inference costs (the actual cost of running AI when you ask a question) hit USD8.4 billion in 2025 and are projected to reach USD14.1 billion in 2026. They do not expect to be cash-flow positive until 2029 or 2030.
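The scale of that gap is easier to see with the figures side by side. A simple sketch, using only the estimates quoted above (all in USD billions):

```python
# OpenAI figures cited in the article, in USD billions.
arr_2025 = 20          # annualised recurring revenue, end of 2025
burn_2025 = 9          # approximate cash burned in 2025
inference_2025 = 8.4   # cost of running the models, 2025
inference_2026 = 14.1  # projected inference cost, 2026

burn_vs_revenue = burn_2025 / arr_2025                  # burn as share of ARR
inference_growth = inference_2026 / inference_2025 - 1  # projected cost growth

print(f"2025 cash burn was {burn_vs_revenue:.0%} of annualised revenue.")
print(f"Inference costs are projected to grow {inference_growth:.0%} in 2026.")
```

The point of the arithmetic: even with record revenue growth, nearly half of every revenue dollar is being burned, and the single largest cost line is still growing at double-digit rates.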
So where does that burned cash go?
It flows down through the stack. To Microsoft Azure (OpenAI pays Microsoft 20% of its total revenue through 2032). To Nvidia for chips. To companies building and equipping data centres. To energy companies generating the electricity.
This is the first lesson of the AI value chain: revenue flows upward, capital flows downward.
If you want to understand what is happening with AI, study what happened with electricity between 1880 and 1920.
When Thomas Edison built the first commercial power station in 1882 on Pearl Street, Manhattan, people thought electricity was a novelty. A luxury way to light a room. Why would you need this when gas lamps worked perfectly well?
Within 40 years, electricity had reorganised every industry on earth. Manufacturing, transportation, communications, medicine, entertainment. The companies that won were not the ones who invented the light bulb. They were the ones who built the power plants, laid the copper wires, and manufactured the generators.
General Electric. Westinghouse. Utility companies. Copper miners. Builders.
The same pattern is repeating with AI, only compressed into years instead of decades.
AI → data centres → chips → raw materials → energy
Electricity → factories → machinery → raw materials → coal/water
Every time a new computing platform emerges, the initial wealth creation happens in picks and shovels. Applications come later. Applications get all the press coverage. But infrastructure gets all the profit margins.

AI data centres are extraordinarily power-hungry. A single large AI training run can consume as much electricity as a small town uses in a year. These facilities are projected to use approximately 90 terawatt-hours (TWh) of electricity annually by 2026, roughly ten times the 2022 level.
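Those two data points imply a steep growth curve. A small sketch of the implied numbers, assuming the article's roughly-10x-over-2022 figure:

```python
# Projected AI data centre electricity use, from the article.
twh_2026 = 90        # terawatt-hours per year by 2026
growth_factor = 10   # roughly 10x the 2022 level

twh_2022 = twh_2026 / growth_factor      # implied 2022 baseline: ~9 TWh
years = 2026 - 2022
cagr = growth_factor ** (1 / years) - 1  # implied compound annual growth rate

print(f"Implied 2022 usage: ~{twh_2022:.0f} TWh/year")
print(f"Implied growth rate: ~{cagr:.0%} per year")
```

An implied growth rate of nearly 80% per year, sustained over four years, is the kind of demand curve that grids, which plan in decades, simply cannot keep up with. Hence the self-generation trend described below.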
Jensen Huang himself said in October 2025: "Data centre self-generation of power can move faster than connecting to the grid." Tech companies have started building dedicated power generation connected directly to their data centres, bypassing the grid entirely.
Who benefits: Utility companies (especially those with nuclear capacity), independent power producers, companies manufacturing transformers, switchgear, and other electrical infrastructure.
Malaysia context: The government allocated RM5.9 billion in Budget 2026 to achieve AI nation status by 2030, including digital infrastructure construction. Data centre investments in Melaka, Johor, and Selangor are already driving energy demand upward.
This layer is more complex than a single company. The chip layer has its own sub-layers:
Numbers to know: Nvidia designs roughly 92% of AI data centre GPUs, TSMC fabricates nearly 70% of the world's chips, and ASML is the sole supplier of the EUV lithography machines used to make them.
One company designs. One company builds. One company makes the machines that build. This level of concentration is simultaneously an investment thesis and a geopolitical risk.
Malaysia context: Malaysia's semiconductor exports are projected to continue growing, with E&E sector growth of 15% in 2025. Companies like Inari Amertron, Unisem, and Malaysian Pacific Industries (MPI) sit in the global chip supply chain and benefit directly from the AI boom.
This is where the chips live. Warehouse-scale facilities packed with servers, connected by high-speed networks, and cooled by increasingly sophisticated thermal management systems.
The market is dominated by three hyperscalers: Amazon Web Services, Microsoft Azure, and Google Cloud.
Oracle is also expanding aggressively with a USD50 billion capex target for 2026.
Hyperscalers are spending 90% of their operating cash flow on capex in 2026, up from 65% in 2025. Morgan Stanley estimates these companies will borrow over USD400 billion this year to fund construction — more than double the USD165 billion in 2025.
Malaysia context: Malaysia has received approved digital investments totalling RM54.13 billion as of Q3 2025. Giants like Google, Microsoft, and Oracle have announced data centre construction plans in Malaysia, positioning the country as a regional data centre hub.
This is the "brain" layer — companies that train and build the actual AI models.
Major players: OpenAI (ChatGPT), Anthropic (Claude), Google (Gemini), and efficient open-source challengers such as DeepSeek.
This layer is fascinating because it is simultaneously the most hyped and the least profitable. Models get better when you spend more on compute, but that spending grows faster than revenue. It is like running a restaurant where every dish requires more expensive ingredients than the last, but customers expect prices to stay the same.
For investors, this layer is high risk, high potential reward. Most of these companies are private. Your public market exposure comes through the cloud providers hosting them and the chip companies whose products are used during model training.
This is the layer you interact with every day. ChatGPT, Google Search powered by Gemini, Microsoft Copilot in Office, your bank's AI fraud detection, Netflix recommendations, your phone's photo enhancement.
The application layer is the broadest and most crowded. Thousands of startups and established companies are building here. It will ultimately become the largest layer (projected to exceed USD2 trillion by the early 2030s), but right now it is also the layer with the thinnest margins and the greatest uncertainty about who will win.
The differentiator is data. Companies with unique, proprietary data will build durable advantages. Salesforce has enterprise CRM data. Bloomberg has financial data. Epic has medical records. Companies sitting on proprietary data like this can fine-tune AI models in ways generic chatbots cannot match.
For investors: The application layer is where the greatest potential ultimately lies, but also where the most capital will be destroyed. Most AI startups will fail. The survivors will scale aggressively.
A fair question. It deserves a serious answer.
During the dot-com era, companies spent on infrastructure for demand that did not yet exist. They built fibre optic networks and web servers for an internet audience still using dial-up. Infrastructure was built, demand did not materialise for another 5-7 years, and much of what was built in between was liquidated.
By 2026, AI demand already exists. Nvidia cannot make chips fast enough. TSMC's advanced packaging capacity is sold out. Cloud computing rental prices are rising, not falling. OpenAI added 400 million weekly active users between March and October 2025 alone.
However, three key risks deserve attention:
1. Capital misallocation — Companies are spending USD650 billion+ on data centres in 2026. If revenue from AI services does not scale fast enough, some companies will face serious margin pressure.
2. Concentration risk — The AI supply chain is heavily concentrated. TSMC fabricates nearly 70% of the world's chips. ASML is the sole supplier of EUV machines. Nvidia designs 92% of AI data centre GPUs. Any disruption ripples through the entire stack. A single earthquake in Hsinchu, Taiwan, could slow global AI development by years.
3. The DeepSeek question — In January 2025, Chinese AI lab DeepSeek released a model approaching frontier performance at a fraction of the training cost. If open-source, efficient models continue closing the gap, the infrastructure spending thesis weakens.
Nevertheless, McKinsey estimates cumulative data centre investment could reach USD6.7 trillion globally by 2030. PwC estimates AI could contribute USD15.7 trillion to global GDP by 2030.
Even if those numbers are 50% wrong, we are still talking about the largest technology-driven economic shift since the internet.
Think of AI as a video game with five levels: Level 1 is Energy, Level 2 is Chips, Level 3 is Cloud, Level 4 is Models, and Level 5 is Applications.
Meta-strategy: You do not need to play all five levels. Most people try to play Level 5 because it is most visible. Smart money is farming Levels 2 and 3 because that is where the most XP is right now.
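For readers who think in code, the same mental model can be kept straight with a simple mapping from each level to the examples the article itself uses (purely illustrative, not investment advice):

```python
# The five-layer AI stack, with example names drawn from this article.
AI_STACK = {
    1: ("Energy", ["utilities", "independent power producers"]),
    2: ("Chips", ["Nvidia", "TSMC", "ASML"]),
    3: ("Cloud", ["Amazon", "Microsoft", "Google"]),
    4: ("Models", ["OpenAI", "Anthropic", "DeepSeek"]),
    5: ("Applications", ["ChatGPT", "Copilot", "Perplexity"]),
}

# Print the stack from the application layer down to the energy layer,
# the direction in which capital flows.
for level in sorted(AI_STACK, reverse=True):
    name, examples = AI_STACK[level]
    print(f"Level {level} - {name}: {', '.join(examples)}")
```

Walking the dictionary top-down mirrors the article's first lesson: revenue enters at Level 5 and capital flows downward through every level beneath it.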
The best returns over the next 3-5 years will likely look like this: infrastructure now, applications later. The smartest capital is already positioned accordingly.
Malaysia is not merely a spectator in the AI revolution. The country sits inside the global supply chain: chip packaging and testing (Inari Amertron, Unisem, MPI) in Layer 2, a growing cluster of data centres in Johor, Melaka, and Selangor in Layer 3, and the utilities that power them in Layer 1.
For investors on Bursa Malaysia, understanding where each company sits in the AI Stack provides a fundamentally different perspective. A chip packaging company (Layer 2) has a very different risk-reward profile than an AI chatbot startup (Layer 5).
The AI value chain refers to the complete ecosystem required to produce and run artificial intelligence — from electricity generation, chip manufacturing, cloud data centre construction, AI model development, to the end-user applications people use every day.
In the current phase (2025–2028), Layer 2 (Chips) and Layer 3 (Cloud) show the highest profit margins. Nvidia reports gross margins of around 75%, while TSMC commands nearly 70% of the global foundry market.
Unlike the dot-com era, AI demand in 2026 already exists and is measurable. However, risks including capital misallocation, supply chain concentration, and the emergence of efficient models like DeepSeek need to be monitored carefully.
Through Bursa Malaysia-listed companies in the semiconductor sector (Inari, Unisem, MPI), data centre-related companies, and utilities. Globally, investors can gain exposure through Nvidia, TSMC, ASML, and hyperscalers like Amazon and Microsoft.
Concentration risk — nearly the entire chain depends on just a few companies (TSMC, ASML, Nvidia). Any geopolitical disruption, particularly related to Taiwan, could destabilise the entire global AI ecosystem.
In 2026, the four largest hyperscalers (Amazon, Microsoft, Google, Meta) are expected to spend USD650–700 billion on capex, with roughly two-thirds (approximately USD450 billion) directed to AI infrastructure.
"Picks and shovels" refers to the recurring pattern whereby, every time a new computing platform emerges, the initial wealth creation happens at the infrastructure layer, not the application layer. This pattern occurred with electricity (1880–1920), the internet (1990s), and is now repeating with AI.
Analysts expect the value shift to the application layer to occur as the infrastructure build-out phase matures, likely after 2028–2030. Companies with unique proprietary data are expected to become the big winners at that point.
Understanding the AI value chain is not just an intellectual exercise — it is a critical investment skill. In 10 years, understanding the AI Stack will be as fundamental as reading a financial statement. Investors who see the full structure will spot shifts before others are caught off guard. Understand the stack. Map the layers. Follow the capital.
If you want to start investing in technology and semiconductor stocks that benefit from the AI boom, the first step is having the right investment account.
Open a CDS Trading Account to start investing on Bursa Malaysia as well as international stocks including the US and Hong Kong markets — where the world's biggest AI technology giants are listed.
Download the Free Stock Market Basics Ebook to learn the fundamentals of investing before you begin.