One of the main things that I missed rambling about during my 11-year blogging absence was the progress of the A-series chips used in iPhones and iPads. These chips are really the unsung heroes of Apple's success over the last decade - they've allowed the iPhone and iPad to remain best in class throughout, enabled the Apple Watch to get through a day while offering more and more features, and more recently have allowed the Mac to take huge leaps in performance and efficiency. I want this post to capture the journey from the A4 all the way to the A14X (more commonly known as the M1). I'll talk about the M-series chips and their own journey in another post sometime.
Humble beginnings
While the A4 chip was the first silicon designed by Apple, it's worth mentioning why there's no A1-A3. When Apple was creating the iPhone, they needed an incredibly small, powerful, and efficient chip to realise the vision of a mini, touch-based computer in your pocket. Apple initially went to Intel in search of such a chip, given they'd recently started working with them on the Mac with similar goals of low-TDP, high-performance processors. Ultimately a deal with Intel to supply iPhone chips fell through. I don't normally quote others in these rambling posts, but there's an interesting quote from the late Paul Otellini, Intel's CEO at the time, on the reasons for this:
“We ended up not winning it or passing on it, depending on how you want to view it. And the world would have been a lot different if we’d done it. The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do… At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn’t see it. It wasn’t one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought.”
With the door closed at Intel, Apple decided to look elsewhere, eventually making a deal with Samsung to use their ARM-based chips. It's a badly kept secret that iOS is actually a lightweight, leaner, meaner version of macOS, without all of the bulk and clutter of the desktop version. In this sense, Apple has always had a version of macOS running on ARM chips - it's just been in the iPhone and iPad this whole time. Apple continued to use Samsung-designed chips for the next few releases, the iPhone 3G and 3GS, but during development of the iPhone 4, Apple had more ambitious plans for chip design.
In 2008, when planning the chip design for the iPhone 4, Apple knew that they wanted to dramatically increase the resolution of the iPhone, going from 480x320 to a mind-boggling 960x640. In 2023 this might not seem too impressive, but on a tiny 3.5" screen this resulted in a pixel density of 326PPI, and the birth of "Retina" displays. For context as to how high resolution this was back in 2010, many of the MacBooks Apple sold at the time had 1280x800 displays - only around 1.7x the pixels of the iPhone 4 - spread across roughly 14x the display area (yikes). In order to pull off the increase in resolution while maintaining the performance and battery life of the 3GS, they were going to need a very efficient and powerful chip. Following some acquisitions made in the mid-to-late 2000s, Apple started their own fully fledged, in-house chip design team, with the goal of creating a System on a Chip (SoC). This design would offer significant performance and power-saving advantages over the previous iPhone chip designs, which used discrete CPU, GPU, RAM and storage modules that all had to interface with each other (as you'd find in any normal PC). This development coincided with Apple's work on a tablet computer, which had a similar resolution to the iPhone 4 and required the same demanding mix of performance and efficiency. After several years of development, Apple announced the A4 chip in January 2010, powering the 1st generation iPad.
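As a quick aside, that 326PPI figure is easy to sanity-check - here's a rough back-of-the-envelope sketch in Python, assuming the nominal 3.5-inch diagonal (the true diagonal is fractionally larger, which is why Apple quotes 326 rather than ~330):

```python
import math

# Back-of-the-envelope pixel density check for the iPhone 4's display.
# Assumes the nominal 3.5" diagonal; the actual panel diagonal is a touch
# larger, which accounts for Apple's quoted 326 PPI vs the ~330 here.
width_px, height_px = 960, 640
diagonal_inches = 3.5

diagonal_px = math.sqrt(width_px ** 2 + height_px ** 2)  # ~1153.7 px
ppi = diagonal_px / diagonal_inches                       # ~329.6 PPI

print(f"~{ppi:.0f} PPI")
```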
A fast start
The A4 chip was the key behind the iPad's incredible early success. It was powerful enough for developers to write sophisticated apps for the iPad from day 1, while being efficient enough to achieve the famous 10-hour battery life that the iPad has long enjoyed. As expected, the A4 made it to the iPhone 4 in June 2010, before being used in the 4th generation iPod Touch and 2nd generation Apple TV later that same year.
Apple had no intention of resting on their laurels - in March 2011, less than a year after the iPad had been released, Apple announced the iPad 2, featuring the 2nd generation of Apple-designed SoC, the A5. The A5 offered a dual-core CPU (compared to the single-core A4), doubled the RAM offered in the iPad, and crucially offered 9x faster graphics performance than the A4. Yes, you read that correctly, 9x faster. It was clear at this point that Apple's chip team were performing miracles, and that the future of iOS devices looked bright.
After the success of the iPhone's retina display, it felt inevitable that the iPad would one day receive the same high resolution treatment. The iPad 2 had focused on slimming down the product, removing all of the slack within the original iPad's curvy design. There was much debate ahead of the 3rd generation iPad as to whether Apple could fit in such a high resolution panel, and the chips to power it, while maintaining the battery life and sleek design that the iPad 2 offered. In March 2012, Apple released the 3rd generation iPad, with a retina display and an A5X chip. The X stood for graphics: this was basically an A5 on steroids, essentially doubling the graphics power of the A5 and allowing the iPad 3 to drive its insanely high resolution display. This marked an important change in the Apple silicon strategy, where silicon would be tailored to devices depending on their thermal constraints and power demands.
I'm keen not to make this post unnecessarily long for the sake of it, so let's skip straight to 2014, and the A8X.
Slow and steady loses the race
By 2014, Apple had expanded the iPad range to include a mini version. For the 2013 refresh, Apple opted to use the bog-standard A7 chip from the iPhone 5S, rather than a graphics powerhouse "X" version as they'd done in the previous couple of years with the A5X and A6X. While there might be technical reasons for this, the obvious explanation is that the A7 was just so good that the iPads didn't need a more powerful chip than the iPhone, and sticking with the vanilla A7 dragged even greater battery life out of the designs, which had shrunk in 2013 with the introduction of the iPad Air.
Going into 2014, it was assumed that Apple would continue the trend when the inevitable iPad Air 2 came along in October 2014. However, Apple surprised everyone when they unveiled an A8X chip, offering a new triple-core CPU design and a sizeable graphics improvement. It was the first indication that Apple wanted to push the performance of the A-series chips even further beyond what the iPad software really needed at the time - it hinted that Apple had bigger plans for the iPad, or maybe had long-term plans to move the Mac product line to Apple silicon chips. It was around this time that the first real rumours of Apple silicon Macs gained traction, in part due to the stunning performance of the A8X, which had already begun to rival Intel's lower-TDP chips. The single core performance of the A8X still lagged behind the soon-to-be-announced 2015 MacBook and MacBook Air, but the graphics performance was truly impressive, with benchmarks starting to rival the MacBook Air and the lower-end MacBook Pro models. The real question was: could Apple sustain the (frankly absurd) year-over-year improvements? If they could, the writing was on the wall for Intel.
The following year marked another big milestone for Apple silicon, with the debut of the A9 and A9X chipsets. Both of these were screamers - roughly doubling the CPU and GPU performance of the A8 and A8X chips that they replaced. With the A9X came a new product, the iPad Pro. It became obvious at this point that the iPad Air 2 had been an exception, and that going forward the "X" chips would be reserved for the latest and greatest "Pro" iPads. The A9X was now closing in on the CPU performance of the MacBook Air and 13-inch MacBook Pro lines, while the graphics could now match the best integrated graphics that Intel offered. It still came up short compared to the discrete GPUs in the higher-end MacBook Pros, but that gap was also closing.
The A10X followed in 2017 with more modest 30%-40% improvements - but crucially it still outpaced the gains that Intel were making. Additionally, a die shrink from 16nm to 10nm allowed the A10X to consume considerably less power than the A9X, leaving enough battery wiggle-room to enable the ProMotion features on the 2017 iPad Pros. It's worth considering just how bad Intel's execution had become by this point. In the mid 2010s there were numerous promises about when chips would be available and when die shrinks would occur, but time after time Intel missed their own deadlines and chips were frequently delayed. This caused issues for Apple - they were unable to release new Macs as and when they wanted to coincide with hardware changes, and when they did release, the chip performance (both power and efficiency) was behind expectations, which threw more spanners in the works for Apple's hardware team. It was becoming obvious that the relationship between Apple and Intel was strained. Apple's ability to control their own silicon releases, and coincide them with hardware announcements, clearly had huge benefits.
The transition was starting to look more like a "when" rather than an "if". With the A10X, Apple was able to match or best all of the chips in the 13-inch MacBook lines while consuming a fraction of the power. Even the higher-specced 15-inch models were starting to come under threat. The only stumbling block was how the architecture would scale up to desktop chips. It was clear that a partial transition would create more problems than it would solve, and so Apple would have to wait until they were confident that they had a chip that could do it all - even rivalling high-end discrete desktop GPUs.
The A12X was Apple's next pro iPad chip, released in October 2018. This was perhaps the biggest and boldest leap we'd seen to date. There were questions in 2017 and 2018 as to whether Apple could sustain the huge year-on-year improvements that had been achieved throughout the 2010s. Reaching 10nm with the A10X and A11 was a huge milestone, and with Intel's own move to 10nm repeatedly delayed (Ice Lake wouldn't ship until 2019), all eyes were on Apple to see if they could shrink the A-series chips even further. The A12X used a 7nm process, offering a 35% increase in single core performance, and an almost 2x increase in multicore performance thanks to the increased core count (8 cores vs the previous 6-core design of the A10X). The graphics performance also saw a 2x increase - ridiculous. The CPU performance was now on par with, and often slightly ahead of, the highest-end 15-inch MacBook Pro. The graphics were now comfortably ahead of all Apple laptops, except for the highest-end 15-inch BTO options with Vega graphics, which maintained a healthy margin over the A12X. However, the Vega GPUs in the MacBook Pros were notorious for their heavy throttling and high power consumption. They could provide huge amounts of performance, but only in short bursts, and at the cost of tons of battery life. The A12X, by contrast, could run silently, in a 5.9mm-thin iPad, with no fans and basically no throttling. Apple also touted that the GPU performance of the A12X was similar to that of the 8th generation of consoles (Xbox One and PS4). While these were released 5 years prior to the A12X, it's pretty staggering to think about the size of a PS4 versus the size of the iPad Pro (and the lack of fans in the latter).
By the time of the A12X's release in October 2018, speculation had been rife about Apple transitioning the Mac to Apple silicon. Many had thought that 2018 would be the year, given the advantages that the A10X already offered over many Intel MacBooks in 2017. Even the architecture of the A12X was very similar to what we'd eventually get in the M1 a few years later: it featured an 8-core CPU (4 high-performance and 4 high-efficiency cores) and an 8-core GPU design (with 1 core binned off). If you're thinking "hang on, isn't that just an M1?" - you'd be right. With the benefit of hindsight, I tend to think of the A12X as the M0. Even at the time in 2018, the A12X felt ludicrously overpowered for the iPad Pros. This is evidenced today, where in 2023 I'm still using the 2018 iPad Pro, and while I'm sure the M2 version is slightly snappier, I really haven't noticed any slowdown in the nearly 5 years since it debuted. I'll put some pictures of the performance race between Intel and Apple silicon below, which should (hopefully) stack up with the words above.
Showtime
2020 arrived, and ahead of WWDC there was widespread speculation that now was finally the time for Apple to announce the processor transition. This was a cool moment as an Apple fan - the run-up to an Apple silicon Mac transition was one of the longest-running rumours in Apple's history, spanning all the way back to 2014/2015. Apple finally took the covers off at the end of their WWDC presentation, which had striking similarities to the 2005 keynote that announced the transition from PowerPC to Intel. Just as in 2005, performance per watt was the phrase of the day, with Apple stating that there would be a family of chips for the Mac, with the transition starting later in 2020 and lasting around 2 years. The A12Z (a non-binned A12X) was used in a developer kit so developers could get their apps ready ahead of the late 2020 release of the first Apple silicon Macs. Even on the A12Z (which was by then almost 2 years old), developers were finding that their apps flew compared to the Intel equivalents.
In November 2020, the first chip for the Mac was finally revealed. This had been largely foreshadowed by the introduction of the A14 in the iPhone 12 series. The A14 was built on a 5nm process, a further shrink from the 7nm A12 and A13 chips. The performance gains were significant, largely thanks to this die shrink. It became clear that an A14X would be a substantial leap over the A12X, with an expected 50% increase in CPU performance and an almost 2x increase in GPU power. This was widely expected to be the first Mac chip, and those expectations became reality: Apple ultimately rebranded the A14X, which became the M1 for use in Macs. The rest is history.