8 thoughts on “Moore’s Law”

  1. Yes and not news…
    I guess because I’m so close to this professionally, this is not news to me, nor has it been for nearly a decade. We are approaching the limits of what is possible with X-ray lithography, and AFAIK no one is doing lithography with gamma rays. There are also thermal issues that begin to drastically affect the smaller transistor elements (<20nm) that form the logic switches for computers. Most of the design cycles spent on modern CPU architectures today go into rapidly turning entire subsystems & cores off and on dynamically (in non-trivial ways that minimize user impact), because it has become infeasible, in both power budget and temperature, to leave large pieces of the chip powered up indefinitely.

    The physical limits of switch lithography also place a speed limit on the switches themselves. That is why you haven’t seen drastic speed improvements in CPUs (like the 4x per generation we saw in the 80’s and 90’s) since they hit 3GHz almost 10 years ago. Instead, it is easier to just provide more of the same (i.e. multi-core devices) rather than trying to speed up the cores. But there is a very practical limit to how much a user can benefit from multi-core designs unless they are computing a so-called “embarrassingly parallel” problem, which the vast majority of computer users aren’t routinely solving (well, perhaps with the exception of gamers). Hence, without an increase in raw CPU speed, they won’t see much improvement in computing power, thanks to another law known as Amdahl’s Law.
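    A rough sketch of what that implies in practice (illustrative numbers only, not tied to any particular workload): with a fraction p of a job parallelisable across n cores, Amdahl’s Law caps the speedup at 1/((1-p)+p/n).

    ```python
    # Amdahl's Law: speedup on n cores when only a fraction p of the
    # work can run in parallel. Even a mostly-parallel job tops out fast.
    def amdahl_speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    # With 75% of the work parallelisable, the ceiling is 4x no matter
    # how many cores you throw at it:
    for n in (2, 4, 16, 64):
        print(f"{n:2d} cores -> {amdahl_speedup(0.75, n):.2f}x")
    ```

    Sixteen cores buy you about 3.4x here; sixty-four buy only 3.8x. That is the wall most desktop workloads hit.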

    There are other technologies that offer better electron mobility, and hence switch faster at a given lithography size than CMOS (potentially up to 100x the speed of current Intel powerhouses), but with power and temperature trade-offs that haven’t been explored much in the marketplace because of the dominance of CMOS. And because it is a much different technology, the cost will be far greater at the beginning, until familiarity breeds reductions. We are returning to the physical reality that you can have cheap or fast but not both, because the faux-economy of Moore’s Law, which was in reality just a variation on Feynman’s “There’s Plenty of Room at the Bottom,” has come to an end.

    1. The other thing is, the need for greater performance isn’t there for most people any more. For example, I looked at one of our servers a while back, and it was using a whole 3% of the CPU. My laptop spends most of its time clocked down to 800MHz, because word processing and web browsing rarely need more power than that. Even my games PC rarely uses more than 50% of the CPU, because so many games are designed for consoles with much slower CPUs.
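      (If you want to check that on your own machine: on a Linux laptop with the cpufreq sysfs interface, a minimal sketch like this will print the current clock. It assumes Linux; macOS and Windows expose this differently.)

      ```python
      # Read the current clock of cpu0 from the Linux cpufreq sysfs
      # interface; the kernel reports the value in kHz.
      def current_mhz(cpu: int = 0) -> float:
          path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_cur_freq"
          with open(path) as f:
              return int(f.read()) / 1000.0

      print(f"cpu0 is currently at {current_mhz():.0f} MHz")
      ```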

      PCs have been ‘good enough’ for most uses for several years now. The most heavily loaded servers at work are probably the ones running multiple VMs, each of which replaced a PC with an old, slow CPU.

      And there’s obviously less incentive to build extremely expensive new fabs, if you can just continue selling the old chips to happy customers…

      1. @EdwardGrant
        Although I agree with your premise, I don’t necessarily agree with your conclusion. The truth is we really don’t know what kinds of applications a 100x increase in CPU speed would enable. Back in the early 80’s, for example, there was ample computer power for word processing (text only), email, public bulletin boards like this one (sans fonts & graphics), even telecommuting. I know; I was there and doing it. But NO ONE was doing live action video on desktop PCs or even back-office mainframes. Some simple grayscale, slow-scan approximations were possible, and by the mid-to-late 80’s you started to see dedicated “video workstations” in the multi-$10k range, but frankly, the consumer hardware just wasn’t up to it. As memory density and CPU horsepower improved through the 90’s, you started to see the results, and thus the era of computer “visualization” was fully born. It’s gotten to the point where even a separate desktop computer for word processing, or a “television receiver,” seems a bit of a quaint notion. The point here is that you can’t judge the “sufficiency” of a technology based solely on current use. Buggy whips were quite functional and adequate in their day… 😉

        1. Certainly there are things you can do with more processing power. But most people don’t care about them, which is why so many are moving from PCs to tablets and phones.

          Photo-realistic VR is probably the next big thing that will require much more power than we currently have, but that’s still quite some way off. Few people are willing to pay the sums required to buy that much processing power today.

  2. I agree with the first commenter there. Raw performance will start to flatten, but coders have been getting sloppy, counting on faster performance to cover bulky code.
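    A toy example of the kind of sloppiness I mean (hypothetical code, Python just for illustration): scanning a list once per lookup instead of building a set. Faster hardware hides the difference, right up until it doesn’t.

    ```python
    import random
    import time

    data = [random.randrange(1_000_000) for _ in range(5_000)]
    queries = [random.randrange(1_000_000) for _ in range(5_000)]

    # Sloppy: a linear scan for every query, O(n*m) overall.
    t0 = time.perf_counter()
    hits_list = sum(1 for q in queries if q in data)
    t1 = time.perf_counter()

    # Tight: build a set once, then hash lookups, O(n + m) overall.
    data_set = set(data)
    t2 = time.perf_counter()
    hits_set = sum(1 for q in queries if q in data_set)
    t3 = time.perf_counter()

    print(f"list scan: {t1 - t0:.3f}s  set lookup: {t3 - t2:.3f}s  "
          f"same answer: {hits_list == hits_set}")
    ```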

    On the price issue, I’ve noticed storage prices are still coming down, but whole systems are not dropping that much. Actually, for the performance I’m seeking, the price seems to be increasing.

    1. I was looking at laptops yesterday, and prices seem to have gone up significantly since the last time I bought one, mostly because the selection of models seems to have gone down, and I’d have to buy an expensive one to get the features I’d want from a manufacturer with a reputation for reliability.

      So much so that I’d probably be better off buying a custom-built model from a company that specializes in Linux than buying a Windows model and trying to get Linux to work on it (thanks, for example, to ‘Secure Boot’, and crappy wi-fi drivers from some commonly-used manufacturers).

      In fact, I think my iPad cost more than this laptop did, when I bought it a couple of years ago.

  3. Moore’s Law started long before Moore was born or transistors were invented, and it will continue as long as there is a free market. What Moore’s Law really measures is the cost of computation.
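    One way to make that concrete is to treat the law as a price-performance curve. A minimal sketch, with a made-up starting cost, assuming the classic doubling of transistors per dollar every two years:

    ```python
    # Toy model of Moore's Law as a cost curve: if transistors per
    # dollar double every two years, the cost of a fixed amount of
    # computation halves on the same schedule. The $100 starting
    # figure is purely illustrative.
    cost = 100.0  # hypothetical dollars per unit of computation at year 0
    for year in range(0, 21, 4):
        print(f"year {year:2d}: ${cost * 0.5 ** (year / 2):.4f}")
    ```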

  4. This may be a good thing. “Just get a faster machine” has been the solution of choice, rather than writing better code, for decades (because it makes economic sense, no matter how repugnant it is to the purist programmer).

    I’ve coded in dozens of languages (from APL to Forth) but was never more productive than with VB6/SQL. Bascom was faster than any version of BASIC that followed it. I want a static compile that just works, which VB moved away from.

    Euphoria, before being handed to a committee, seemed like a good direction, but it was cumbersome because it first had to be translated to C before being compiled.

    Anyway, software is what holds us back, not hardware.
