In theory - more cores, or higher clock speed?
Everyone thought that by now, 5 GHz processors would pretty much be the standard. Instead, dual-core processors hit the market. The effectiveness of the second core was initially in question, since few applications at the time were designed to take advantage of it. Over time, manufacturers found that simply adding more cores was cheaper than raising the clock speed, and as a result, instead of 5 GHz processors, we're seeing as many as twelve cores inside one CPU, each clocked at around 2 GHz or so. A similar trend is beginning to hit mobile devices, since it's simply cheaper to add more cores than to push the clock speed further, and software is slowly adapting. This raises the question of whether a single, very fast core would be more effective than tons of slower cores.

So, the question I'd like to raise is this: in theory, would 100 cores at 100 MHz or a single 10 GHz core be better for today's computing, and how would each model hold up as technology continues to evolve?

____________________________________________________________________________________________

My personal feeling is that our current software does not make good use of slow cores, so having 100 cores clocked at 100 MHz would significantly hurt PC performance. From a theoretical standpoint, though, 100 slow cores could be put to very good use, depending on the workload. Graphics cards, for example, use a similar model. If software were developed to be heavily multithreaded and run across many cores, each core could have its own resources, such as its own cache and registers, and those threads would not have to share a single L1 cache. That could either boost or harm performance, depending on whether the threads work on shared data (where a shared cache helps) or on entirely separate data (where private caches avoid contention). Either way, such a CPU setup has the potential to work if software development makes use of it, but as it stands now, most software outside of graphics is not designed to scale to that many cores.
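To make that concrete, here's a minimal sketch (in C with POSIX threads; the thread count and array size are arbitrary, stand-in values) of the kind of data-parallel work that would scale well on 100 slow cores: each thread sums its own private slice of an array, so the threads barely touch each other's data and don't fight over a shared cache.

```c
/* Minimal sketch: splitting a sum across N worker threads.
 * Assumes a POSIX system; compile with: cc -O2 sum.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8          /* imagine 100 on the hypothetical CPU */
#define NITEMS   (1 << 20)

static int data[NITEMS];

struct slice {
    size_t begin, end;      /* this thread's private range */
    long long sum;          /* private partial result, no sharing */
};

static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    for (size_t i = s->begin; i < s->end; i++)
        s->sum += data[i];
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct slice slices[NTHREADS];
    size_t chunk = NITEMS / NTHREADS;

    for (size_t i = 0; i < NITEMS; i++)
        data[i] = 1;

    /* Fan out: each thread (ideally each core) gets its own slice. */
    for (int t = 0; t < NTHREADS; t++) {
        slices[t].begin = t * chunk;
        slices[t].end = (t == NTHREADS - 1) ? NITEMS : (t + 1) * chunk;
        slices[t].sum = 0;
        pthread_create(&tid[t], NULL, sum_slice, &slices[t]);
    }

    /* Fan in: combine the partial sums once the threads finish. */
    long long total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += slices[t].sum;
    }
    printf("total = %lld\n", total);
    return 0;
}
```

Work shaped like this scales almost linearly with core count; the catch is that most desktop software isn't structured this way.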

In other words, my current feeling is that our software and our current model of computing simply aren't designed to make use of 100 cores, so a large share of those cores would go to waste in a current system. I do believe the computing trend will be to keep adding cores, and clock speeds may drop slightly as cores are added, but I expect desktop CPU clock speeds will stay at 2 GHz or above as more cores arrive. What are your thoughts?
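One way to put numbers on the "wasted cores" intuition is Amdahl's law: if only a fraction p of a program can run in parallel, the speedup from n cores is capped at 1 / ((1 - p) + p/n). Here's a quick sketch comparing the two hypothetical designs from the question; the 90%-parallel figure is just an illustrative assumption, not a measured one.

```c
/* Amdahl's law: compare 1 x 10 GHz against 100 x 100 MHz
 * for a program that is only partly parallelizable. */
#include <stdio.h>

/* Effective throughput in GHz-equivalents for n cores at 'clock' GHz,
 * when fraction p of the work can run in parallel. */
static double effective(double clock, int n, double p)
{
    double speedup = 1.0 / ((1.0 - p) + p / n);
    return clock * speedup;
}

int main(void)
{
    double p = 0.90;  /* assume 90% of the program parallelizes */
    printf("1 x 10 GHz   : %.2f GHz-equivalent\n", effective(10.0, 1, p));
    printf("100 x 0.1 GHz: %.2f GHz-equivalent\n", effective(0.1, 100, p));
    return 0;
}
```

Even with 90% of the work parallelized, the remaining serial 10% throttles the 100 slow cores to under 1 GHz-equivalent, while the single fast core delivers the full 10. Only near-perfectly parallel workloads, like graphics, close that gap.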
I feel that it'd work either way. Clock speeds will keep increasing as core counts increase too, I think. But I don't like the idea of changing the multi-core platform that most new computers have now. Software has already been optimized for it. Remember: if it ain't broke, don't fix it.
I think both would be fine, but I'd have to go with 100 cores at 100 MHz, just because you could split your tasks over 20 or so cores. Having five intensive tasks running on five different cores is surely better than having all five competing for one core.