OpenAI could start using its own AI hardware for ChatGPT in 2026

OpenAI is reportedly collaborating with Broadcom to develop custom silicon optimized for its demanding AI inference workloads and has secured manufacturing capacity with TSMC, according to sources cited by Reuters. OpenAI’s chip development team, currently about 20 engineers, includes key talent who previously worked on the design of Google’s Tensor Processing Units (TPUs) for AI.

However, production of this custom hardware may not begin until 2026. In the interim, OpenAI is reportedly integrating AMD’s MI300 chips into its Microsoft Azure infrastructure. The MI300 series, launched last year, has helped drive a doubling of AMD’s data center revenue over the past year as the company works to challenge market leader Nvidia.

Earlier this year, The Information reported that OpenAI was in talks with Broadcom and other semiconductor companies about designing its own AI chip. Additionally, Bloomberg noted that OpenAI considered building its own foundries, though this plan has been shelved due to cost and time constraints, according to Reuters.

This approach aligns OpenAI with other tech giants that are also pursuing custom chip designs to better control costs and hardware availability for AI applications. However, OpenAI trails Google, Microsoft, and Amazon, each of which has already developed multiple generations of custom AI processors. To catch up with these established players, OpenAI may require substantial additional funding.
