This Week in AI: Week of 19th October 2025
This week’s AI updates paint a clear picture of where the industry is heading. From OpenAI’s hardware ambitions to Microsoft’s vertical integration and Oracle’s massive GPU bet—the race is shifting from models to full-stack control. Each move signals how fast AI is becoming an infrastructure story, not just a software one.
Here’s everything you need to know!
Oracle Unveils Zettascale10, World’s Largest AI Supercomputer
Here’s What You Need to Know:
Oracle has announced the OCI Zettascale10, which it touts as the world’s largest AI supercomputer in the cloud. This behemoth is capable of connecting up to 800,000 NVIDIA GPUs across vast, multi-gigawatt data center clusters. This enormous infrastructure investment is a milestone, designed to handle the massive compute demands required for training next-generation large-scale AI models.
Why It’s Important for AI Professionals:
The sheer scale of Zettascale10 pushes the boundaries of distributed computing. For researchers and engineers, this kind of infrastructure promises to unlock the training of true frontier models: AI systems that were previously out of reach due to computational limits. Harnessing up to 800,000 GPUs effectively requires highly specialized knowledge in areas like cluster optimization, ultra-low-latency RoCE networking (Oracle Acceleron), and efficient parallel processing. This moves the bottleneck from hardware availability to large-scale data and efficient distributed training algorithms.
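To make "efficient distributed training algorithms" concrete: the workhorse collective behind data-parallel training on clusters like this is ring all-reduce, which sums gradients across all workers while each worker only ever sends small chunks to its neighbor. Here is a minimal pure-Python simulation; it is illustrative only, not Oracle's or NVIDIA's actual implementation:

```python
# Toy simulation of ring all-reduce: each of n workers holds a gradient
# vector; afterwards every worker holds the element-wise sum of all vectors.

def ring_all_reduce(grads):
    """Simulate ring all-reduce over a list of per-worker gradient vectors."""
    n = len(grads)                      # number of workers in the ring
    size = len(grads[0])
    assert size % n == 0, "vector length must divide evenly into n chunks"
    chunk = size // n

    def get(w, c):                      # read worker w's chunk c
        return grads[w][c * chunk:(c + 1) * chunk]

    def put(w, c, vals):                # write worker w's chunk c
        grads[w][c * chunk:(c + 1) * chunk] = vals

    # Phase 1, reduce-scatter: in step s, worker i sends chunk (i - s) mod n
    # to its neighbor, which adds it in. After n-1 steps, worker i holds the
    # complete sum for chunk (i + 1) mod n.
    for s in range(n - 1):
        sends = [(i, (i - s) % n, get(i, (i - s) % n)) for i in range(n)]
        for i, c, vals in sends:
            j = (i + 1) % n
            put(j, c, [a + b for a, b in zip(get(j, c), vals)])

    # Phase 2, all-gather: circulate each completed chunk around the ring
    # until every worker has every chunk.
    for s in range(n - 1):
        sends = [(i, (i + 1 - s) % n, get(i, (i + 1 - s) % n)) for i in range(n)]
        for i, c, vals in sends:
            put((i + 1) % n, c, vals)
    return grads
```

In a real stack (e.g. NCCL over RoCE) these sends happen concurrently around the ring, so per-worker bandwidth stays roughly constant as workers are added, which is part of what makes scaling toward hundreds of thousands of GPUs conceivable at all.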
Why It Matters for Everyone Else:
This is the infrastructure required for the next leap in AI capability—the foundation for AGI. For businesses, it means that the most advanced AI models will first be available through major cloud providers like Oracle. Companies training their own specialized LLMs or foundation models will have access to unparalleled scale, reducing training time from months to weeks. Ultimately, this accelerates the deployment of better, more capable AI services across every sector, from finance to pharmaceuticals.
Aish’s Prediction:
This is pure infrastructure flexing, and I love it. The race to AGI is essentially a hardware race right now. It’s no longer just about who builds the smartest models, but who builds the smartest infrastructure to run them. So much of what limits a model’s potential isn’t in the code, but in how well it’s optimized for the hardware it sits on. Oracle’s Zettascale10 is a reminder that the next wave of breakthroughs will come not from bigger models, but from the systems powerful enough to sustain them.
Salesforce Expands Agentforce 360 with GPT-5 and Claude Integration
Here’s What You Need to Know:
Salesforce has announced an expansion of its Agentforce 360 platform, which will now deeply integrate both OpenAI’s cutting-edge GPT-5 model and Anthropic’s Claude model. This multi-model strategy aims to give enterprise customers the flexibility to choose the best-of-breed LLM for their specific AI agent needs within the secure Agentforce environment, further solidifying Salesforce’s position in the enterprise AI agent market.
Why It’s Important for AI Professionals:
This news confirms the death of LLM-monogamy in the enterprise. For practitioners, the future is multi-model and model-agnostic. Developers must now be fluent in utilizing different models for different tasks—using Claude for its safety-first approach and complex reasoning, and GPT-5 for its raw creative power or general knowledge. Salesforce’s platform (and its underlying data layer, the Data Cloud) becomes the critical abstraction layer that lets engineers build robust applications without getting locked into a single provider. This creates a huge demand for engineers skilled in model routing and orchestration.
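For a sense of what "model routing" looks like in practice, here is a hypothetical sketch. The model names, task labels, and routing rules are illustrative assumptions, not Salesforce's actual Agentforce internals:

```python
# Hypothetical model-routing layer: map task types to backing models so the
# application never hard-codes a single provider. Routes here are invented
# for illustration.

TASK_ROUTES = {
    "complex_reasoning": "claude",   # safety-first, deliberate reasoning
    "creative_writing": "gpt-5",     # raw creative power
    "general_qa": "gpt-5",           # broad general knowledge
}

def route(task_type, default="gpt-5"):
    """Pick a backing model for a task type, falling back to a default."""
    return TASK_ROUTES.get(task_type, default)

def run_agent(task_type, prompt, backends):
    """Dispatch a prompt to whichever backend the router selects.

    `backends` maps model names to callables, so swapping providers means
    changing a dict entry, not rewriting application code.
    """
    return backends[route(task_type)](prompt)
```

The design point is the indirection itself: applications talk to the router, the router talks to interchangeable backends, and governance (logging, fallbacks, cost caps) lives in one place.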
Why It Matters for Everyone Else:
For every business that relies on a CRM (Customer Relationship Management) platform—which is nearly all of them—this means a massive jump in the quality and safety of automated customer interactions. Businesses can confidently deploy powerful AI agents for sales, service, and marketing, knowing they can leverage the best model for the job while keeping their sensitive customer data secure within the Salesforce cloud. This accelerates the shift from basic chatbots to true AI customer agents.
Aish’s Prediction:
Salesforce is playing the long game here. By integrating both GPT-5 and Claude, they’re essentially saying: “the model isn’t the product, orchestration is.” It’s a technically smart move because it abstracts away model choice and focuses on routing workloads to whichever LLM performs best for the task. This makes Salesforce less dependent on any single frontier model provider and turns OpenAI or Anthropic into interchangeable compute layers under their control. My prediction: within the next year, every major enterprise platform will follow this playbook; the real differentiator won’t be which model you use, but how intelligently you combine and govern them.
Apna.co Launches “BlueMachines.ai” Voice AI Platform in India
Here’s What You Need to Know:
Apna.co, the Indian jobs and careers platform, has entered the enterprise AI infrastructure space with the launch of BlueMachines.ai. This new platform is designed to provide enterprise-grade, multilingual voice AI agents to businesses, initially focusing on high-volume customer engagement in sectors like lending, insurance, and recruitment across India. This move highlights the growing specialization of AI platforms for massive, non-English-speaking markets.
Why It’s Important for AI Professionals:
The key technical challenge here is multilingual voice AI at scale. Developing a high-fidelity voice AI platform for India requires expertise in low-resource language modeling, accent and dialect handling, and seamlessly switching between languages (code-switching). For Indian AI engineers, this creates a massive opportunity in specialized speech recognition, natural language understanding (NLU) tailored for local dialects, and ensuring low-latency deployment over varied network conditions. This is data-centric AI applied to a unique, complex linguistic environment.
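To give a taste of the code-switching problem, here is a toy script-level detector for Hinglish text using only the standard library. Real systems rely on trained language-ID and ASR models; this heuristic only shows the shape of the problem:

```python
import unicodedata

# Toy code-switch detector: flag word positions where the writing script
# flips between Devanagari and Latin, a crude proxy for Hindi/English
# switching in written "Hinglish".

def script_of(ch):
    """Return a coarse script label for a single character, or None."""
    if not ch.isalpha():
        return None
    name = unicodedata.name(ch, "")
    # Anything alphabetic that isn't Devanagari is lumped in as "LATIN"
    # here; a real detector would handle many more scripts.
    return "DEVANAGARI" if name.startswith("DEVANAGARI") else "LATIN"

def code_switch_points(text):
    """Indices of words whose script differs from the previous word's."""
    switches, prev = [], None
    for i, word in enumerate(text.split()):
        scripts = {script_of(c) for c in word} - {None}
        # Skip mixed-script or non-alphabetic tokens (numbers, punctuation).
        cur = next(iter(scripts)) if len(scripts) == 1 else prev
        if prev is not None and cur is not None and cur != prev:
            switches.append(i)
        if cur is not None:
            prev = cur
    return switches
```

Spoken code-switching is far harder than this written case, because the ASR model has to recognize both languages inside one audio stream before any such segmentation can happen.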
Why It Matters for Everyone Else:
This platform is a major step toward digitally transforming high-touch, human-centric industries in India. For businesses, it means they can automate customer service, onboarding, and telesales with agents that speak local languages fluently, significantly reducing operational costs and improving customer reach. For the Indian population, it means better, more accessible services, as AI agents can serve diverse linguistic groups, lowering the language barrier for critical services like finance and healthcare.
Aish’s Prediction:
I’m watching the specialized AI markets very closely, and voice AI in a multilingual, high-population market like India is a gold mine. The performance of general LLMs in regional Indian languages is often subpar. The hard problems here are not just LLM prompts: they are telephony-grade ASR for Hinglish and regional dialects, latency under 300 ms round-trip, accurate NER for KYC in noisy audio, and ironclad guardrails so agents never hallucinate policy. If BlueMachines builds a stack that fuses domain-tuned ASR and intent models, retrieval for policy compliance, constrained decoding, and RL from call outcomes, the data network effects from millions of labeled calls become the moat, not the model weights. Expect BFSI and recruiting customers in India to demand on-prem or VPC deployment, PII redaction by default, audit trails, and carrier-level SLAs; whoever nails that wins budgets, not just pilots. My prediction is that in the next 9–12 months we will see a split between generic voice bots and vertically integrated, region-first voice infrastructures; the latter will partner with telcos, ship eval suites that track AHT (average handle time), FCR (first-call resolution), WER (word error rate), and policy adherence, and outcompete Western incumbents on accuracy, cost per minute, and trust.
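Of the eval metrics above, WER (word error rate) is the one with a crisp standard definition: word-level edit distance between the ASR hypothesis and the reference transcript, divided by the reference length. A minimal sketch of what such an eval suite computes:

```python
# Word error rate via word-level Levenshtein distance:
# (substitutions + insertions + deletions) / number of reference words.

def wer(reference, hypothesis):
    """Compute WER between a reference transcript and an ASR hypothesis."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[j] holds the edit distance between ref[:i] and hyp[:j],
    # updated row by row to keep memory linear in the hypothesis length.
    dp = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, len(hyp) + 1):
            tmp = dp[j]
            if ref[i - 1] == hyp[j - 1]:
                dp[j] = prev_diag                        # words match: no edit
            else:
                dp[j] = 1 + min(prev_diag,               # substitution
                                dp[j],                   # deletion
                                dp[j - 1])               # insertion
            prev_diag = tmp
    return dp[-1] / len(ref)
```

Note that WER can exceed 1.0 when the hypothesis contains many spurious insertions, which is exactly the failure mode noisy telephony audio tends to produce.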