Message boards : Number crunching : CERN Engineer Details AMD Zen Processor Confirming 32 Core Implementation, SMT
Timo Send message Joined: 9 Jan 12 Posts: 185 Credit: 45,649,459 RAC: 0
CERN Engineer Details AMD Zen Processor Confirming 32 Core Implementation, SMT

Interesting developments are forthcoming from AMD, according to leaked info from a CERN computer engineer. The article hints at SKUs that scale up to 64 logical cores (32 cores plus SMT, which is akin to Intel's Hyper-Threading). Exciting stuff! This gives me some hope that we could see a revival of competition at the higher end.

Article here: http://hothardware.com/news/cern-engineer-leaks-amd-zen-architecture-details-claiming-40-percent-increase-in-ipc-up-to-32-cores

**38 cores crunching for R@H on behalf of cancercomputer.org - a non-profit supporting High Performance Computing in Cancer Research
dcdc Send message Joined: 3 Nov 05 Posts: 1831 Credit: 119,627,225 RAC: 11,586
Yeah - it could be great for Rosetta. They've said they can't compete on price alone going forward; they're going to have to offer something competitive to bring people back from Intel and get them to invest in the platform. I guess some of it will come down to how good the 14nm process is compared to Intel's - my understanding is that TSMC's 14nm is really closer to Intel's 20nm than to Intel's own 14nm. Hopefully there'll be a really good-value, efficient option with lots of cores. Guess we're still probably 6 months or so away, though.
Timo Send message Joined: 9 Jan 12 Posts: 185 Credit: 45,649,459 RAC: 0
> Yeah - it could be great for Rosetta.

Agreed! By the way DCDC, congrats on being the predictor of the day today! XD

**38 cores crunching for R@H on behalf of cancercomputer.org - a non-profit supporting High Performance Computing in Cancer Research
dcdc Send message Joined: 3 Nov 05 Posts: 1831 Credit: 119,627,225 RAC: 11,586
> Yeah - it could be great for Rosetta.

Ah thanks! Hadn't seen that! :D
Timo Send message Joined: 9 Jan 12 Posts: 185 Credit: 45,649,459 RAC: 0
Another related write-up about the same leak, with more tidbits and context regarding where the industry (and by extension, AMD) is heading: http://www.extremetech.com/extreme/222921-amd-is-supposedly-planning-a-32-core-cpu-with-an-eight-channel-ddr4-interface
sgaboinc Send message Joined: 2 Apr 14 Posts: 282 Credit: 208,966 RAC: 0
I'm thinking that with 32 cores, TDP would be a large limiting factor, which means each core needs to run at a lower clock, perhaps in the 1.5-2 GHz range. That could be a pretty serious limitation considering today's processors can be overclocked to some 4.5 GHz and beyond (some run at those speeds as 'default'). With voltages already pushing the low limit of around 1 V, I doubt they can go much lower while keeping frequency high.

Recently this has played out in 'real life' on mobile SoCs, where there can literally be 8 'cores' and yet the performance is inferior to lower-core-count CPUs running at higher frequencies.

It would also seem that AMD may not be willing to commit a large effort to designing deep superscalar cores that maximise instruction-level parallelism. That would require a large design effort and complex fabrication, with lower yields (the chance of an atomic defect damaging a core is much higher), leading to higher per-core costs. More cores is a shortcut and an easier way out, which may mean lower performance: e.g. you could have 32 simple non-superscalar cores, and if part of the chip surface has a defect, the manufacturer could simply 'disable' those cores and re-brand the part as a lower-spec CPU.

In effect we are truly past the end of Moore's law, where Dennard scaling has probably completely broken down: http://www.extremetech.com/computing/116561-the-death-of-cpu-scaling-from-one-core-to-many-and-why-were-still-stuck

Worst of all, few applications have been reworked as multi-core/multi-threaded parallel programs, and these applications become significantly more complex to rework (concurrency, deadlocks and all the rest - which never arise with simple sequential single-threaded processing - become very real the moment you go multi-threaded parallel).
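That last point is easy to underestimate: even a one-line update to shared state stops being safe once threads are involved. A minimal Python sketch (illustrative only, not taken from any of the linked articles) of the classic lost-update race:

```python
# Minimal sketch of a lost-update race: four threads increment a shared
# counter without a lock. The read-modify-write in "counter += 1" is not
# atomic, so increments from different threads can overwrite each other.
import threading

counter = 0

def bump(n):
    global counter
    for _ in range(n):
        counter += 1  # load counter, add 1, store - a thread switch can land in between

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 400000; without a threading.Lock around the increment the total
# can come up short (how often depends on the interpreter's thread-switching).
print(counter)
```

The fix here (a lock around the increment) is trivial, but finding every such spot in a large sequential codebase is exactly the rework being described above.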
Chilean Send message Joined: 16 Oct 05 Posts: 711 Credit: 26,694,507 RAC: 0
> I'm thinking that with 32 cores, TDP would be a large limiting factor, which means each core needs to run at a lower clock, perhaps in the 1.5-2 GHz range. That could be a pretty serious limitation considering today's processors can be overclocked to some 4.5 GHz and beyond (some run at those speeds as 'default').

Hopefully you're wrong. Hopefully they can design a high-IPC, high-core-count CPU with a low TDP-to-performance ratio - something that will at least make Intel notice it has a competitor out there. I read that the dude in charge of Zen is the same one who was in charge of the Athlon 64. Regardless, Intel needs a strong competitor, just like NVIDIA has AMD in the graphics department.
Timo Send message Joined: 9 Jan 12 Posts: 185 Credit: 45,649,459 RAC: 0
I for one am totally fine with trading clock rates for more cores if it means overall compute performance is increased. If someone can give me a (theoretical) chip that has 128 cores (or 64 real cores plus some sort of 'hyper-threading' to double the logical core count) but tops out at 1.8 GHz, I'd still call that a huge win in terms of usefulness for projects like Rosetta@Home.

**38 cores crunching for R@H on behalf of cancercomputer.org - a non-profit supporting High Performance Computing in Cancer Research
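For an embarrassingly parallel workload like Rosetta@Home, where every work unit runs independently, the back-of-the-envelope comparison really is just cores × clock × per-core IPC. A rough sketch (the IPC figures are made-up assumptions for illustration, not measurements of any real chip):

```python
# Back-of-the-envelope aggregate throughput for an embarrassingly parallel
# workload (independent tasks, one per core). IPC values are illustrative
# assumptions only.
def relative_throughput(cores: int, ghz: float, ipc: float) -> float:
    """Aggregate instruction throughput in arbitrary units."""
    return cores * ghz * ipc

few_fast  = relative_throughput(cores=8,   ghz=4.5, ipc=1.0)
many_slow = relative_throughput(cores=128, ghz=1.8, ipc=0.8)

print(f"  8 cores @ 4.5 GHz: {few_fast:6.1f}")   #   36.0
print(f"128 cores @ 1.8 GHz: {many_slow:6.1f}")  #  184.3, ~5x the aggregate
# The trade-off: each individual task still runs at 1.8 GHz, so per-task
# latency gets worse even as total throughput goes up.
```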
Chilean Send message Joined: 16 Oct 05 Posts: 711 Credit: 26,694,507 RAC: 0
> I for one am totally fine with trading clock rates for more cores if it means overall compute performance is increased. If someone can give me a (theoretical) chip that has 128 cores (or 64 real cores plus some sort of 'hyper-threading' to double the logical core count) but tops out at 1.8 GHz, I'd still call that a huge win in terms of usefulness for projects like Rosetta@Home.

Yes, it'd be useful for Rosetta... but I find that trading one for the other is not advancing; you're just giving something up in favor of more cores. In all fairness, chip design is something that goes well over my head, and like sgaboinc stated, increasing efficiency is very, very complex - more so because of the physical limitations of silicon. But still.
sgaboinc Send message Joined: 2 Apr 14 Posts: 282 Credit: 208,966 RAC: 0
> I for one am totally fine with trading clock rates for more cores if it means overall compute performance is increased. If someone can give me a (theoretical) chip that has 128 cores (or 64 real cores plus some sort of 'hyper-threading' to double the logical core count) but tops out at 1.8 GHz, I'd still call that a huge win in terms of usefulness for projects like Rosetta@Home.

Rather, I'm thinking that chip fabricators (not necessarily the design engineers themselves) are moving towards simpler cores, taking a shortcut rather than attempting the 'hard work' of complex instruction-level parallelism or single-core hardware optimization, and pushing that optimization effort onto the software side.

This is quite apparent in the Android world, where manufacturers are adopting cheaper, slower multi-core CPUs that run at lower clocks. It is starting to have real-world impact, e.g. https://meta.discourse.org/t/the-state-of-javascript-on-android-in-2015-is-poor/33889 - which is challenging the continued viability of that project, built on JavaScript - and it could be causing other projects to fail in the same way.

On the higher-end front, we are seeing AMD and NVIDIA (and even Intel) pushing towards HSA or GPGPU (OpenCL/CUDA) and similar multi-thread/multi-core parallelism, rather than having the hardware offer deep ILP or related capabilities. This severely limits the scope of programs that can benefit from HSA/GPGPU, since it depends on whether a program can be expressed as vectorized instructions within the very limited feature set of those simplified cores. One such example is HSA benchmarks like http://wccftech.com/amd-kaveri-i54670k-benchmarks-showdown-continued-hsa-features-test/ where the manufacturer tries to show that HSA can match the performance of competing chips - at the expense of rewriting those apps to be HSA *specific*.

The trouble with all this is that a lot of applications today are *very complex* and *very hard to completely rewrite or redesign* for parallelism - sometimes impossible, due to Amdahl's law (https://en.wikipedia.org/wiki/Amdahl%27s_law). That could describe 95% of all software and applications today, perhaps including Rosetta, leaving perhaps 5% of very specialised applications (notably the 'embarrassingly parallel' ones) able to use those vector or multi-core capabilities.
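Amdahl's law puts a hard number on that: if a fraction p of a program's work can be parallelized, the speedup on n cores is capped at 1 / ((1 - p) + p/n). A quick sketch, assuming for illustration a generous parallel fraction of 95%:

```python
# Amdahl's law: best-case speedup on n cores when only a fraction p of the
# work can run in parallel (the remaining 1-p stays serial).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (8, 32, 128):
    print(f"p=0.95, n={n:>3}: {amdahl_speedup(0.95, n):4.1f}x")
# p=0.95, n=  8:  5.9x
# p=0.95, n= 32: 12.5x
# p=0.95, n=128: 17.4x
# Even with infinitely many cores the ceiling is 1/(1-p) = 20x: the serial 5%
# dominates long before 128 cores are used up.
```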
sgaboinc Send message Joined: 2 Apr 14 Posts: 282 Credit: 208,966 RAC: 0
Parallel Programming (Cornell CS 3410 lecture slides): http://www.cs.cornell.edu/courses/cs3410/2012sp/lecture/23-multicore-w-g.pdf
sgaboinc Send message Joined: 2 Apr 14 Posts: 282 Credit: 208,966 RAC: 0
Donald Knuth, famed author of 'The Art of Computer Programming':
http://thunk.org/tytso/blog/2008/04/26/donald-knuth-i-trust-my-family-jewels-only-to-linux/
http://www.informit.com/articles/article.aspx?p=1193856
River~~ Send message Joined: 15 Dec 05 Posts: 761 Credit: 285,578 RAC: 0
Multi core

It's good for BOINC people, of course it is, but remember that you need memory for each core that's enabled for crunching.

The market-driven reason for huge multi-core CPUs is similar - it's for servers running many virtual machines, or a server running multiple containers. Each VM or each container is by definition able to run its own threads with no additional programming (once you've installed Xen or Docker or your choice of the proprietary alternatives).

Or, even more so, for a server hosting eight VMs, each one running seven containers: that allows, per VM, one thread for Docker and one each for the containers, so 8 × (1 + 7) = 64 threads. Almost everyone gets their own core; it's just missing one for the hypervisor. So 64 cores makes very good sense in the server market. Will I be able to afford them? Probably only two or three years later, second hand.

For the home computing market, AMD is also looking at 64-bit ARM chips - my guess is that that is what most of us will be buying, if we go for AMD. That is a more radical departure for AMD - up to now it's been totally loyal to Intel's x86 architecture, to the point that when Intel tried to drop x86 with IA-64, AMD was able to use market forces to force them to be loyal to their own offspring.

As a Brit I'm also excited to see further scope for the wonderful UK-based ARM design. I hope
River~~ Send message Joined: 15 Dec 05 Posts: 761 Credit: 285,578 RAC: 0
Just noticed my last sentence got cut off somehow. My post was intended to end like this:

I hope AMD do well with this, because I strongly agree with other posts which say that we, the computer buyers, need the market to have some strong competition for Intel.

R~~