pstarr wrote:Dolan, massively parallel computing is so 90's.
Err, the Hadoop ecosystem seems to be doing quite well.
pstarr wrote:Dolan, massively parallel computing is so 90's. Anybody remember 3D Nintendos? Now that selfies have reached the pinnacle of future computing, the Genius Companies are left with intellectual property. Who owns the ctrl-c? That's the big question these days. Microsoft and Apple are in a death-match over FN ScrLK and PrtSc.
dolanbaker wrote:But nothing that can't be sorted out with a bit of decent code.
Keith_McClary wrote:Peak Compute is when the rate of software bloat exceeds the increase in computing power.
PEAKINT wrote:Not to rain on anyone's party, but we have already peaked technologically. Moore's law died at 28nm
That article is over a year old and its predictions did not come true. Intel continued to increase performance per watt while at the same time lowering cost per transistor. Moore's Law is still alive and well.
14nm Process Technology: Opening New Horizons
Intel 14nm continues to deliver lower cost per transistor. 14nm Intel delivers >2x improvement in performance per watt. Moore's Law continues!
Intel’s 14nm Technology in Detail
Concerns over the immediate end of Moore’s Law remain overblown and sensationalistic.
Intel is also reporting that they have been able to maintain their desired pace at improving transistor switching speeds and reducing power leakage. Across the entire performance curve the 14nm process offers a continuum of better switching speeds and/or lower leakage compared to Intel’s 22nm process. Here we can see how the last several generations of Intel’s process nodes compare across mobile, laptop, and server performance profiles. All three profiles are seeing a roughly linear increase in performance and decrease in active power consumption, which indicates that Intel’s 14nm process is behaving as expected and is offering similar gains to past processes. In this case the 14nm process should deliver a roughly 1.6x increase in performance per watt, just as past processes have.
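That ~1.6x-per-node figure compounds quickly across generations. A quick Python sketch (only the 1.6x factor comes from the article; the node labels and baseline are illustrative):

```python
gain_per_node = 1.6   # approximate perf/watt improvement per node, per the article
perf_per_watt = 1.0   # normalized to a two-nodes-back baseline (label illustrative)

for node in ["22nm", "14nm"]:
    perf_per_watt *= gain_per_node
    print(f"{node}: {perf_per_watt:.2f}x baseline performance per watt")
```

Two node transitions at 1.6x each already give roughly 2.5x, which is why "similar gains as past processes" is not a small claim.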
Furthermore, these base benefits when coupled with Intel’s customized 14nm process for Core M (Broadwell-Y) and Broadwell’s power optimizations have allowed Intel to more than double their performance per watt as compared to Haswell-Y.
The end result is that while Intel’s cost per transistor is not decreasing as quickly as the area per transistor, the cost is still decreasing and significantly so. Even with the additional wafer costs of the 14nm process, on a cost per transistor basis the 14nm process is still slightly ahead of normal for Intel.
The fact that costs per transistor continue to come down at a steady rate may be par for the course, but that Intel has been able to even maintain par for the course is actually a very significant accomplishment. As the costs of wafers and fabbing have risen over the years there has been concern that transistor costs would plateau, which would leave chip designers able to increase performance only by increasing prices, as opposed to the past 40 years of cheaper transistors allowing prices to hold steady while performance increased. So for Intel this is a major point of pride, especially in light of complaints from NVIDIA and others in recent years that their costs on new nodes aren’t scaling nearly as well as they would like.
PEAKINT wrote:This just in...
Intel throwing in the towel!
http://arstechnica.com/gadgets/2015/07/ ... w-falters/
As if to prove the point that "concerns over the immediate end of Moore’s Law remain overblown and sensationalistic": Intel pushing back its 10nm technology gets inflated into "Intel is throwing in the towel!" Sensationalist nonsense.
In 2000 the number of transistors in the CPU stood at 37.5 million, while by 2009 the number had climbed to an outstanding 904 million; this is why it is more accurate to apply the law to transistor counts than to speed.
Moore's law
"Moore's law" is the observation that the number of transistors in a dense integrated circuit has doubled approximately every two years.
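Plugging the transistor counts quoted above (37.5 million in 2000, 904 million in 2009) into that observation gives a doubling time remarkably close to two years. A quick Python check:

```python
import math

# Transistor counts quoted above: 37.5 million in 2000, 904 million in 2009.
t0, n0 = 2000, 37.5e6
t1, n1 = 2009, 904e6

doublings = math.log2(n1 / n0)          # how many times the count doubled
doubling_time = (t1 - t0) / doublings   # years per doubling

print(f"{doublings:.2f} doublings in {t1 - t0} years "
      f"-> one doubling every {doubling_time:.2f} years")
```

About 4.6 doublings in 9 years, i.e. one doubling roughly every 1.96 years, almost exactly on the classic two-year schedule.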
From this point on we will still be able to double the amount of transistors in a single device but not at lower cost. All that we know about the more advanced nodes (22/20nm, 16/14nm, …) indicates that the cost per transistor is not going to be reduced significantly vs. that of 28nm.
On a cost per transistor basis the 14nm process is still slightly ahead of normal for Intel. The fact that costs per transistor continue to come down at a steady rate may be par for the course, but that Intel has been able to even maintain par for the course is actually a very significant accomplishment.
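The cost-plateau argument is easy to sketch numerically: cost per transistor is roughly wafer cost divided by transistors per wafer, so if wafer costs rise almost as fast as density, the ratio barely moves. A toy Python example (all dollar and density figures are invented for illustration, not Intel's actual numbers):

```python
def cost_per_transistor(wafer_cost, transistors_per_wafer):
    # Cost per transistor is just the wafer cost spread over its transistors.
    return wafer_cost / transistors_per_wafer

# Old node, with made-up figures:
old = cost_per_transistor(wafer_cost=5000, transistors_per_wafer=1e12)
# New node: density doubles, but the wafer costs 80% more:
new = cost_per_transistor(wafer_cost=9000, transistors_per_wafer=2e12)

print(f"new/old cost per transistor: {new / old:.2f}")  # 0.90, only ~10% cheaper
```

Doubling density while wafer cost grows 80% leaves transistors only ~10% cheaper, which is exactly the "still decreasing, but not as quickly as area" situation described above.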
Why are newer generations of processors faster at the same clock speed?
Q: Why, for example, would a 2.66 GHz dual-core Core i5 be faster than a 2.66 GHz Core 2 Duo, which is also dual-core?
A1: The processor requires fewer instruction cycles to execute the same instructions. This can be for a large number of reasons:
1. Larger caches mean less time wasted waiting for memory.
2. More execution units mean less time waiting to start operating on an instruction.
3. Better branch prediction means less time wasted speculatively executing instructions that never actually need to be executed.
4. Execution unit improvements mean less time waiting for instructions to complete.
5. Shorter pipelines mean pipelines fill up faster.
And so on.
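All of these factors act through the classic "iron law" of processor performance: execution time = instructions × CPI / clock rate. A minimal Python sketch of why a lower-CPI core wins at the same clock (the CPI values are invented for illustration, not measured figures for these chips):

```python
def exec_time(instructions, cpi, clock_hz):
    """Iron law of processor performance: time = instructions * CPI / f."""
    return instructions * cpi / clock_hz

work = 1e9            # instructions in some fixed workload
clock = 2.66e9        # both chips at 2.66 GHz, as in the question
core2duo_cpi = 1.0    # illustrative cycles-per-instruction values,
core_i5_cpi = 0.7     # not measured figures for these parts

t_old = exec_time(work, core2duo_cpi, clock)
t_new = exec_time(work, core_i5_cpi, clock)
print(f"speedup at the same clock: {t_old / t_new:.2f}x")
```

With identical clocks and instruction counts, the speedup is simply the ratio of the CPIs, so every item on the list above that shaves cycles per instruction translates directly into performance.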
A2: The absolute definitive reference is the Intel 64 and IA-32 Architectures Software Developer Manuals. Some general differences I see listed in that chapter, going from the Core to the Nehalem/Sandy Bridge microarchitectures are:
* improved branch prediction, quicker recovery from misprediction
* HyperThreading Technology
* integrated memory controller, new cache hierarchy
* faster floating-point exception handling (Sandy Bridge only)
* LEA bandwidth improvement (Sandy Bridge only)
* AVX instruction extensions (Sandy Bridge only)
A3: Designing a processor to deliver high performance is far more than just increasing the clock rate. There are numerous other ways to increase performance, enabled through Moore's law and instrumental to the design of modern processors.
* Pipelines have become longer over the years, enabling higher clock rates. However, among other things, longer pipelines increase the penalty for an incorrect branch prediction, so a pipeline can't be too long. In trying to reach very high clock speeds, the Pentium 4 processor used very long pipelines, up to 31 stages in Prescott. To reduce performance deficits, the processor would try to execute instructions even if they might fail, and would keep trying until they succeeded. This led to very high power consumption and reduced the performance gained from hyper-threading. Newer processors no longer use pipelines this long, especially since clock rate scaling has reached a wall; Haswell uses a pipeline which varies between 14 and 19 stages long, and lower-power architectures use shorter pipelines (Intel Atom Silvermont has 12 to 14 stages).
* The accuracy of branch prediction has improved with more advanced architectures, reducing the frequency of pipeline flushes caused by misprediction and allowing more instructions to be executed concurrently. Considering the length of pipelines in today's processors, this is critical to maintaining high performance.
* With increasing transistor budgets, larger and more effective caches can be embedded in the processor, reducing stalls due to memory access. Memory accesses can require more than 200 cycles to complete on modern systems, so it is important to reduce the need to access main memory as much as possible.
* Newer processors are better able to take advantage of ILP through more advanced superscalar execution logic and "wider" designs that allow more instructions to be decoded and executed concurrently. As noted above, Haswell can execute up to eight instructions at a time. Increasing transistor budgets allow more functional units such as integer ALUs to be included in the processor core. Key data structures used in out-of-order and superscalar execution, such as the reservation station, reorder buffer, and register file, are expanded in newer designs, which allows the processor to search a wider window of instructions to exploit their ILP. This is a major driving force behind performance increases in today's processors.
* More complex instructions are included in newer processors, and an increasing number of applications use these instructions to enhance performance. Improvements in compiler technology enable more effective use of these instructions.
* In addition to the above, greater integration of parts previously external to the CPU such as the northbridge, memory controller, and PCIe lanes reduce I/O and memory latency. This increases throughput by reducing stalls caused by delays in accessing data from other devices.
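The pipeline-length/branch-prediction tradeoff described in the bullets above can be captured in a toy CPI model: every mispredicted branch wastes roughly a pipeline's worth of cycles. A Python sketch (the branch frequency and accuracy figures are illustrative; only the stage counts echo the text):

```python
def cpi_with_branches(base_cpi, branch_frac, accuracy, flush_penalty):
    """CPI = base CPI plus the cycles lost to pipeline flushes."""
    mispredicts_per_instr = branch_frac * (1.0 - accuracy)
    return base_cpi + mispredicts_per_instr * flush_penalty

# 31-stage Prescott-like pipeline vs a 14-stage Haswell-like one,
# both at an (illustrative) 95% branch prediction accuracy:
long_pipe = cpi_with_branches(1.0, branch_frac=0.2, accuracy=0.95, flush_penalty=31)
short_pipe = cpi_with_branches(1.0, branch_frac=0.2, accuracy=0.95, flush_penalty=14)

# Better prediction claws back most of the long pipeline's loss:
long_pipe_99 = cpi_with_branches(1.0, branch_frac=0.2, accuracy=0.99, flush_penalty=31)

print(long_pipe, short_pipe, long_pipe_99)
```

The long pipeline pays more than twice the misprediction tax of the short one at equal accuracy, and raising accuracy from 95% to 99% recovers most of that loss, which is why predictor improvements matter more as pipelines get deeper.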
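The cache point above can be quantified with the standard average memory access time (AMAT) formula: hit time + miss rate × miss penalty. A Python sketch using the ~200-cycle memory latency mentioned in the text (the hit time and miss rates are illustrative):

```python
def amat(hit_cycles, miss_rate, miss_penalty_cycles):
    """Average memory access time: hit time + miss rate * miss penalty."""
    return hit_cycles + miss_rate * miss_penalty_cycles

# ~200-cycle main memory as noted above; 4-cycle cache hit and the two
# miss rates are illustrative figures.
small_cache = amat(hit_cycles=4, miss_rate=0.10, miss_penalty_cycles=200)
large_cache = amat(hit_cycles=4, miss_rate=0.02, miss_penalty_cycles=200)
print(f"avg access: {small_cache:.0f} cycles vs {large_cache:.0f} cycles")
```

Cutting the miss rate from 10% to 2% triples effective memory speed in this model, which is why spending transistor budget on bigger caches pays off so reliably.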
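The ILP point can also be sketched: a wider core only helps if the out-of-order window exposes enough independent instructions, and a common rule of thumb is that exposed ILP grows roughly with the square root of the window size. A toy Python model (the square-root scaling is a heuristic, and all numbers are illustrative):

```python
import math

def exposed_ilp(window_size):
    # Heuristic: independent instructions found grow ~ sqrt(window size).
    return math.sqrt(window_size)

def effective_ipc(issue_width, window_size):
    # The core sustains whichever is smaller: its issue width, or the
    # parallelism its out-of-order window can actually expose.
    return min(issue_width, exposed_ilp(window_size))

print(effective_ipc(issue_width=4, window_size=9))    # window-bound
print(effective_ipc(issue_width=4, window_size=36))   # width-bound
```

This is why newer designs grow the reorder buffer, reservation stations, and register file alongside the issue width: widening either one alone quickly hits the limit set by the other.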
ennui2 wrote:Pstarr, high-tech isn't your sphere of knowledge and therefore you really should think twice about trying to offer analysis or predictions.
StarvingLion wrote:Digital Computers are great for some things -- A Police State, FIAT money, and Deskilling
StarvingLion wrote:You coding "geniuses" made your bed (getting rid of that dirty industrialism), now pull a rabbit or else.
StarvingLion wrote:Nuclear or Coal or Windmill technology is irrelevant because every person in this "country" owes $600,000 in debt.
StarvingLion wrote:The old reliables for electricity generation are "bad" because all that is left are ponzi schemes like nat gas and solar.
StarvingLion wrote:you people are broke, deskilled, and can't afford to rebuild the industrial infrastructure.
StarvingLion wrote:But according to the loons in the IT sector, if we just keep buying faster processors and other useless electronic gadgets, the local manufacturing sector will boom. LOL.
StarvingLion wrote:That's why I gave up on you people for the Wood Economy, oops, I mean Biomass. That's the future, Biomass, not computers.
StarvingLion wrote:The hobo in the woods is no more luddite than the fag banker in Brussels driving his BMW. Both depend on wood and palm oil, otherwise the lights go out and the car never leaves the garage.
pstarr wrote:I am still waiting for you techtopians to project implementation of true natural language processing. Come back to me when Siri can parse this simple string, "Time flies like an arrow.", into its various permutations.
pstarr wrote:Are you Watson, ennui? Otherwise you failed the Turing Test, responding with platitudes, not specifics.