As professionals and many aficionados know, early this year widespread vulnerabilities were found in Intel CPUs as well as in AMD's. It was discovered a bit later that the flaws also affected some RISC architectures such as Power and ARM. Everybody went nuts and the world seemed to be tumbling because of two CPU vulnerabilities, Spectre and Meltdown, affecting almost every system around the world. It was a ubiquitous problem, hence the initial panic.
How the information was handled by Intel and AMD was not to anyone's taste, and during the first weeks some hosting companies started investigating on their own to understand the implications these issues had on their platforms and what actions had to be taken, and in what time frames. It has been difficult, and still is for some organizations, to fully patch or substantially mitigate exploits that live in the heart of the CPU. Although these bugs do not score high in the CVSS3 evaluation system, their severity was very remarkable because of their nature, their ubiquity in systems around the world, and how difficult this kind of vulnerability is to patch, from both the logistics side and the technical point of view.
I remember back in the nineties, when I wasn't interested in computers nor did I work in the field (too young anyway), reading magazine articles (when paper was very relevant) about the CISC versus RISC debate. Does anyone remember those SPARC-based laptops from Sun Microsystems? Apple? For more than a decade Apple relied on PowerPC-based CPUs for their Mac devices until they switched to Intel, or to be more accurate to x86-64. Using a PC was somewhat problematic and Windows was very unreliable. I must confess I was a Windows hater for many years, and I still think it's a pity it took Microsoft so many years, in fact more than two decades, to release a decent, stable, capable, secure OS. Anyway, after the 486s came the fancy Pentiums. They were a revolution, but at the time it looked like the architecture was unable to scale: temperatures were high, clock frequency was meant to be king but they couldn't deliver. Enter the Pentium Pro.
The Pentium Pro was a magnificent piece of silicon. A big one. I mean literally: it was big, very big. Size aside, the chip incorporated some new features and closed some comparative deficiencies against RISC-based chips in floating point operations, a much-needed capability for design and image rendering software. Relevant to this article, it was the first Intel chip to use speculative execution. Sound familiar? At the time it seemed like Intel had invented some magic trick to boost the performance of their CPUs, and they gained respect for what they did. Some said RISC-based computers were still faster than PCs, but the gap started to close around that time. I think I had read something a bit negative about speculative execution, but I was also a bit surprised that speculating about future events resulted in a performance advantage. It could have been like the first turbo on a car. "It will break it!" Sure. Everything breaks. Just tune it, find good components, make it reliable and run away with it.
Computing has embraced speculative execution. You can find it in modern RISC CPUs like the Power 7, 8 and 9 chips, as well as in some ARM designs. It's ubiquitous. The key is how it is done. At some point down the pipeline, memory resources have to be taken into account to load, unload, store, restore, flush, etc. data in memory, in a state that is shared when needed but controlled too. Speculative execution ultimately boils down to pattern and behaviour analysis. Many workloads repeat the same kind of operations, so guessing what comes next to calculate is an intelligent move to gain efficiency and speed. That is why protecting chunks of memory adequately is important: you don't want just any process to read any part of memory. However, barriers can have a performance cost if badly designed, and there can be another bad impact, as we've discovered with the collection of vulnerabilities called Spectre and Meltdown.
Operating systems have to be aware of these speculative calculations, memory protections, etc., and the chip makers have to provide not only accurate documentation (at least better than the kind that brought us CVE-2018-8897) but also mechanisms that are fast yet secure. Quite recently a new trio of vulnerabilities was publicly disclosed, and these only affect Intel processors. One in particular caught my attention (and the media's too): CVE-2018-3615. It affects SGX, an instruction set extension in Intel's 64-bit implementation whose acronym stands for Software Guard Extensions. SGX basically protects chunks of memory of someone's interest from being messed with by higher-level processes. That is the basic principle. The thing is, it didn't. It plainly didn't do what it was meant to. One of the discoverers put it this way: "The whole trust model collapses". Many of the main distributions do not address this particular vulnerability, but have turned their attention to the two that really affect them, CVE-2018-3620 and CVE-2018-3646. Those two really matter, but as Theo de Raadt (the main person in charge of OpenBSD) puts it, the problem runs much deeper: SMT, named Hyper-Threading in Intel's land, is fundamentally broken because it shares resources between the two CPU instances, and those shared resources lack security differentiators.
Some of you may be looking at an AMD processor catalog. They have lately been doing a very good job, but don't think for a moment they are free of this other set of problems. Yes, they are out of the L1TF vulnerability triad, but they are affected by Spectre and Meltdown too, although some may argue not as much as Intel, and they'd be right. Different architecture, some may say? Well... have a look at what IBM said about the impact on their flagship CPUs. Have I mentioned ARM? Yes, Apple fans, you are in trouble too. This demonstrates that making chips is hard, but making them secure is even harder. The question is whether the industry is ready to double-check everything; whether the time-to-market rush doesn't take designs down the rabbit hole; whether marketing doesn't bolster products for the sake of it but does so based on facts and with accuracy ("we're not affected" bragging shouldn't be allowed here); whether collaboration between OS designers and CPU gurus isn't overshadowed and over-controlled by lawyers and paranoia. AMD kicked Intel's ass quite badly when they came up with 64-bit and backward compatibility. What a joy those days. It can happen, and it should happen more.
A word on Apple. I was a Macintosh user back in the early 2000s and it lasted a decade. Technologically they have been placed high on a pedestal they no longer deserve (if they ever did in this recent decade, phones aside). Security-wise they not only offer very little explanation, they also do some Oracle-style disclosure: vague, repetitive and short discourse, with significant delays between announcements. I find it pitiful when specialized publishers make assertions to end users based on incomplete information. Microsoft released their patches the same day the Foreshadow vulnerabilities were disclosed, and many GNU/Linux and UNIX distributions did the same. Apple being a major player and around 5% of Intel's revenue, my question is simple. Why hasn't Apple released anything concerning the L1TF, aka Foreshadow, triad yet, now that it is late August? And what will their security and disclosure policy be if they finally switch to building their own chips (something they have done in the past)?
Still today, September the 1st, there are unpatched vulnerabilities such as CVE-2018-3693, and we'd like to hear a bit more about TLBleed and what Intel is going to do about it, beyond whistling around as if nothing extraordinary had happened.
So what do we, the users, do? Well, first we have to protest and ask for accurate explanations. Second, demand patches. Third, demand a refund if necessary. When is that necessary? The Spectre and Meltdown vulnerabilities are a good occasion to ask for one. However, this hasn't happened and doesn't look like it will in the near future. Who knows what could happen in 50 years' time, when computing will be even more relevant than it is today. Big companies, and specifically the people in charge of systems, have had a very troubled time identifying the issues and planning the teams and resources to deal with the problems that come with deploying fixes. All in all they have had a hard time explaining the issues to management and solving the problem itself, so there's been little to no time to worry about really punishing Intel and AMD. The stock market has done some of that work, but the market goes on as usual and bounces back because of, why not. Sorry, I meant because of making money. And fourth: we users have to patch, patch and patch. Don't skip patches and update regularly.
Last but not least, you will find two guides on how to patch systems here on adminbyaccident.com: one article on patching a regular enterprise laptop running a GNU/Linux OS, and another on a small enterprise server from a well-known brand running a UNIX flavour.
PS: If the Internet of Things (IoT) is really coming, we are all in big trouble unless we take some action. We will, won't we?