Apple's software chief Craig Federighi has offered the most detailed explanation yet for why the company's highly anticipated AI-powered Siri features, first promised at WWDC 2024, have been delayed until 2026. In a series of post-keynote interviews, Federighi admitted that Apple's initial approach "just doesn't work reliably enough to be an Apple product."

The delay represents a rare misstep for Apple, which typically avoids demonstrating features it cannot deliver on schedule. Federighi revealed that while Apple had working prototypes of the enhanced Siri capabilities, including personal context awareness and cross-app functionality, the features only performed correctly about two-thirds of the time.

"This just doesn't work reliably enough to be an Apple product," Federighi told The Wall Street Journal, providing his most candid assessment of the situation. In the same WSJ interview, he reiterated the technical challenges: "We found that when we were developing this feature that we had, really, two phases, two versions of the ultimate architecture that we were going to create."

Speaking to TechRadar and Tom's Guide in a joint podcast interview, Federighi expanded on the architectural problems: "We realized that V1 architecture, we could push and push and put in more time, but if we tried to push that out in the state it was going to be in, it would not meet our customer expectations or Apple standards."
Apple's quality standards override deadlines
The admission marks an unusual moment of transparency from Apple, which has faced mounting pressure to compete with AI-powered assistants from Google and OpenAI. Federighi stressed that the company prioritized product quality over meeting artificial deadlines, even as competitors advance their AI capabilities.

"Look, we don't want to disappoint customers," said Greg Joswiak, Apple's marketing chief, in the Wall Street Journal interview. "We never do. But it would've been more disappointing to ship something that didn't hit our quality standard, that had an error rate that we felt was unacceptable. So we made what we thought was the best decision. I'd make it again."

Federighi echoed this sentiment in his conversation with YouTuber iJustine, explaining: "What we said a while ago was that we were really working hard to get those [features] to come together. [We] had them working internally, but not working well enough." He emphasized the critical importance of reliability: "You know, when you have an experience like asking Siri to do something, either it becomes something you can depend on reliably, or it's something in the end you're not going to use."

The delayed features were meant to transform Siri into a more capable assistant that could understand personal context, operate across multiple apps, and perform complex tasks based on on-screen content. Apple demonstrated these capabilities at WWDC 2024, showing Siri finding podcasts mentioned in messages and taking actions within apps using what Apple calls "app intents."

However, Federighi revealed that Apple was simultaneously developing two different underlying architectures. While the company successfully demonstrated the "V1 architecture" at last year's conference, it became clear during development that this approach had fundamental limitations that prevented it from meeting Apple's reliability standards.

"We were very focused on creating a broad platform for really integrated personal experiences into the OS," Federighi said in the podcast, explaining Apple's original vision. But as development progressed, he noted: "We set about for months, making it work better and better across more app intents, better and better for doing search, but fundamentally, we found that the limitations of the V1 architecture weren't getting us to the quality level that we knew our customers needed and expected."

In the Wall Street Journal interview, Federighi addressed why Apple, with all its resources, couldn't make the original approach work: "When it comes to automating capabilities on devices in a reliable way, no one's doing it really well right now. We wanted to be the first. We wanted to do it best."
Two architectures, one solution
"We set about for months, making it work better and better across more app intents, better and better for doing search," Federighi said. "But fundamentally, we found that the limitations of the V1 architecture weren't getting us to the quality level that we knew our customers needed and expected."The switch to what Federighi calls the "V2 architecture" effectively reset the development timeline. This deeper, more comprehensive approach extends across the entire Siri experience, building upon the work already completed rather than starting from scratch."The V2 architecture is not, it wasn't a start-over," Federighi clarified in his TechRadar-Tom's Guide interview. "The V1 architecture was sort of half of the V2 architecture, and now we extend it across, sort of make it a pure architecture that extends across the entire Siri experience."During the company's spring assessment, Apple made the difficult decision to abandon the V1 approach. "As soon as we realized that, and that was during the spring, we let the world know that we weren't going to be able to put that out, and we were going to keep working on really shifting to the new architecture," Federighi told the podcast.
Apple's different AI strategy
Apple's approach differs significantly from competitors like OpenAI and Google, which have released chatbot-style AI assistants that users access through dedicated apps. Instead, Apple aims to integrate intelligence throughout its ecosystem, meeting users "where they are" rather than requiring them to navigate to a separate AI interface.

"When we started with Apple Intelligence, we were very clear: this wasn't about just building a chatbot," Federighi emphasized in the podcast. The company's strategy remains focused on embedding AI capabilities directly into existing apps and workflows, rather than creating a standalone conversational assistant.

Joswiak reinforced this philosophy in the same podcast discussion: "The features that you're seeing in Apple Intelligence isn't a destination for us. There's no app on intelligence. [It's about] making all the things you do every day better."

Federighi acknowledged the appeal of conversational AI but maintained Apple's different focus: "I know a lot of people find it to be a really powerful way to gather their thoughts, brainstorm [...] So, sure, these are great things. Are they the most important thing for Apple to develop? Well, time will tell where we go there, but that's not the main thing we set out to do at this time."

While Apple won't commit to a specific timeline for the enhanced Siri features, Federighi made clear the company won't repeat its premature announcement mistake. "We will announce the date when we're ready to seed it, and you're all ready to be able to experience it," he told the TechRadar-Tom's Guide podcast.

Speaking with YouTuber iJustine, Federighi promised comprehensive delivery: "We really look forward to releasing everything we talked about in the past, and more. We don't really want to commit to that until we have it in hand." This suggests Apple may have additional Siri capabilities in development beyond what was originally demonstrated at WWDC 2024.