Message boards : Number crunching : When will be the GPU thing start here?!?
Author | Message |
---|---|
Orgil Send message Joined: 11 Dec 05 Posts: 82 Credit: 169,751 RAC: 0 |
Watch this; it is a closely related type of project. R@H got a substantial financial boost recently, yet the scientists on the project are neglecting the issue of GPU advancement. We the members must insist/urge/demand that the project admins get the GPU question solved here! If the scientists at R@H cannot adapt their algorithm for GPUs, why don't they hire some outside mathematicians with the recent funding and let the PROJECT ADVANCE along with the technology?! |
Orgil Send message Joined: 11 Dec 05 Posts: 82 Credit: 169,751 RAC: 0 |
Compared to that GPU farm, we members here feel light years behind F@H, the closely related project. |
svincent Send message Joined: 30 Dec 05 Posts: 219 Credit: 12,120,035 RAC: 0 |
|
Orgil Send message Joined: 11 Dec 05 Posts: 82 Credit: 169,751 RAC: 0 |
Yes, the previous topics have not rung a loud enough bell in the admins' ears. Hopefully, after watching that video, some people might wake up and the project might start prepping for GPUs... One idea: the R@H admins could announce an international competition for a GPU algorithm for the project; out of that pool of world brains, at least several are likely to write the few lines of numbers and characters that would answer this thread's title. Then maybe Mod.Sense can delete this thread and we will all be satisfied. :D |
P . P . L . Send message Joined: 20 Aug 06 Posts: 581 Credit: 4,865,274 RAC: 0 |
The dead horse has been dug up again!!!!!! |
Murasaki Send message Joined: 20 Apr 06 Posts: 303 Credit: 511,418 RAC: 0 |
We the members must insist/urge/demand the project admin that GPU thing needed to be solved here! I for one won't be insisting/urging/demanding anything. The project team has to strike a balance between advancing the processing ability of the BOINC assets linked to the project and advancing the fundamental science that lies at the heart of the project. I am in no position to second guess the project team so I will assume that they have apportioned out their time and money in a way that is best for the project as a whole. If that means GPU processing can't/won't be introduced in the near future then we will just have to live with it. If you feel that you are better qualified than the bakerlab team to make such a decision then you can always try writing to them directly instead of through the forum. If on the other hand you are suggesting/requesting/discussing the potential benefits of introducing GPU processing, then don't let me stop you. :) |
Orgil Send message Joined: 11 Dec 05 Posts: 82 Credit: 169,751 RAC: 0 |
I do not understand why the project is not getting the message from sources like this: http://www.youtube.com/watch?v=KjOW5iW7dJQ Instead of running 10 desktop PCs for this project, you could set up 2 GTX 260 GPUs in 1 quad-core desktop and get the computing of 15-25 desktops on the electricity of about 1.5 PCs! It is becoming a genuinely greener option for scientific minds! |
Mod.Sense Volunteer moderator Send message Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0 |
Orgil, if I may ask, why do you presume that what is required is a mathematician? And why, after the post clearly stated how the funds would be used, do you presume that the donation is sufficient to cover the costs of "some mathematicians" in addition to the items stated? Rosetta Moderator: Mod.Sense |
Send message Joined: 17 Sep 05 Posts: 815 Credit: 1,812,737 RAC: 0 |
I do not understand why the project is not getting the message from sources like this: http://www.youtube.com/watch?v=KjOW5iW7dJQ Because, as has been stated in the other threads that have hashed this topic to death, not all problems or algorithms can be converted to parallel processing the way that demonstration shows. Problems that are constructed so that they can run in parallel can see those kinds of gains. WCG, which has several sub-projects that as near as I can tell are as well funded as can be, has no plans on the horizon to field a GPU-driven application. MilkyWay, which ALREADY has a GPU application and is trying to stand up a second project based solely on GPU systems, has not been able to field that application even after nearly a month of trying. And they have an already proven application! It is not a case of slathering on some GPU Goo and you are good to go ... it is a massive undertaking. If it were not, we would have 45 projects all with GPU applications ... we have only two in regular production, three if you count the SaH Beta ... and only a couple more projects have hinted at a forthcoming GPU application, and only one more has tried a test with a new GPU application. |
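To make the serial-versus-parallel point concrete, here is a tiny made-up C++ toy (nothing to do with Rosetta's actual code): the first loop is data-parallel and would map well onto thousands of GPU threads, while the second carries a dependency from one step to the next, so extra processors buy almost nothing.

```cpp
// Toy illustration only, not Rosetta code.
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> x(1000, 1.0), y(1000, 2.0);

    // Data-parallel: every iteration is independent, so a GPU could give
    // each element to its own thread.
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = 2.0 * x[i] + y[i];

    // Serial: each step needs the previous result (like one Monte Carlo
    // trajectory), so the work cannot simply be spread across many cores.
    double state = 0.0;
    for (int step = 0; step < 1000; ++step)
        state = 0.5 * state + x[step % x.size()];

    std::printf("%f %f\n", y[0], state);
    return 0;
}
```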
Send message Joined: 16 Oct 05 Posts: 711 Credit: 26,694,507 RAC: 0 |
|
Mod.Sense Volunteer moderator Send message Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0 |
A machine that makes a ton of water into ice cubes in an hour is great... unless you don't have a use for ice cubes. A machine that has massive TFLOP ratings, but is unable to run Rosetta is of little use. If the Original Poster really wants to see a GPU client, I would suggest they commit to raising the funds Dr. Baker estimates the task will take. And then contribute them via the foundation established with a portion of the large anonymous donation already mentioned. If he tells you he needs math, and it will cost this much, then that means he feels it can be done. And if he suggests a large contribution be used in some other way, then it means he feels it is the best use of the funds. Rosetta Moderator: Mod.Sense |
mikey Send message Joined: 5 Jan 06 Posts: 1896 Credit: 10,138,586 RAC: 14,109
A machine that makes a ton of water into ice cubes in an hour is great... unless you don't have a use for ice cubes. A machine that has massive TFLOP ratings, but is unable to run Rosetta is of little use. The OP has not even considered that CUDA will NOT work on ATI cards, so cutting out that half of the GPU world makes no sense for future compatibility! OpenCL should solve that, but no one has come up with an app to use it yet!
Send message Joined: 16 Jun 08 Posts: 1235 Credit: 14,372,156 RAC: 211
Looks like another thread mostly by people who don't know that converting software to run on graphics cards often takes a major rewrite, largely because of the small amount of memory each of the many processors within the graphics chip gets as its share. The current Rosetta@home applications demand so much memory to run on one CPU core that I'd expect them to require such a major rewrite.

Also, although the future 6.8.* series of BOINC versions is planned to add the ability to use ATI cards several months down the road, there are still questions about whether the OpenCL standard is sufficiently well defined, and has sufficient software available, to predict how soon after that there will actually be even one BOINC project making good use of ATI cards through OpenCL.

It takes computer programmers a significant amount of time to turn even a good algorithm into a working program if a major rewrite is needed to fit such memory restrictions. Why should they need more mathematicians if they already have an algorithm that works well but is hard to fit into the available memory?

An idea to consider on how to do this, though: let the graphics processor run one module of the software, then have the CPU program that runs the graphics card save a checkpoint of that module's results, then overwrite that module with the next one and continue. The CPU program may have enough memory to keep a copy of each module, or it may just pull from disk the modules this workunit needs and leave the rest in a disk file. For a disk checkpoint, the CPU program could write all of the per-processor checkpoints stored in memory so far to disk, then erase any memory copies for which a newer checkpoint for the same processor exists. |
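Purely as a sketch of that module-swapping idea - every name here (Module, runModuleOnGpu, saveCheckpoint) is hypothetical, and none of it is Rosetta or BOINC code - the host-side loop might look something like this:

```cpp
// Conceptual sketch of running one module at a time on the GPU and
// checkpointing between modules. All names are made up for illustration.
#include <fstream>
#include <string>
#include <vector>

struct Module {
    std::string name;    // which piece of the pipeline this is
    std::string gpuCode; // stand-in for the code/data uploaded to the GPU
};

struct State {
    std::vector<double> data; // intermediate results passed between modules
};

// Placeholder for "upload this module, run it on the GPU, read results back".
State runModuleOnGpu(const Module& m, const State& in) {
    State out = in;
    out.data.push_back(static_cast<double>(m.name.size())); // dummy work
    return out;
}

// Persist the latest results so a restart can resume mid-workunit.
void saveCheckpoint(const State& s, const std::string& path) {
    std::ofstream f(path, std::ios::binary);
    for (double v : s.data)
        f.write(reinterpret_cast<const char*>(&v), sizeof v);
}

int main() {
    std::vector<Module> pipeline = {{"score", ""}, {"minimize", ""}, {"filter", ""}};
    State state;
    for (const Module& m : pipeline) {
        state = runModuleOnGpu(m, state);        // only one module resident at a time
        saveCheckpoint(state, "checkpoint.bin"); // CPU side saves the result
        // the next iteration overwrites the GPU-resident module with the next one
    }
    return 0;
}
```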
Send message Joined: 17 Sep 05 Posts: 815 Credit: 1,812,737 RAC: 0 |
What you are speaking of, Robert, is a "mixed" mode of processing. Not a bad idea in theory, except we don't even have single-mode processing of the two resource classes working yet. And still to come are ATI native and OpenCL ... two more classes. Then toss in a mixed-mode class ... oh, my head aches.

This is especially troublesome in that UCB won't even discuss the most basic issues with the non-orthogonal GPU resource class, with the potential to have GPUs of vastly different capabilities in the same system at the same time. My simplest example is my next system: say I get a motherboard with three PCI-e slots and I buy another GTX295 to go into it. The two old systems that are retiring have GPUs; am I to junk them right away? Not likely. I would put them into the new system, so it would have a GTX295, a GTX280, and a 9800GT until I could afford another GTX295 to replace the 9800GT and possibly, later, the GTX280 ...

Every time we bring this up we get shut down ... But it is on the near horizon in that The Lattice Project's application is said to be CPU-intense at the same time as it is GPU-intense ... not really an option considered in the current code base, as far as I can tell ... |
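For what it's worth, "CPU-intense at the same time as GPU-intense" just means work overlapping on both resources. A minimal C++ sketch of that pattern, with simulateGpuTask as a stand-in for a real GPU call (this is not Lattice, Rosetta, or BOINC code):

```cpp
// Sketch of mixed-mode processing: CPU-heavy work overlapping with a
// (stand-in) GPU task running on another thread.
#include <cstdio>
#include <future>
#include <numeric>
#include <vector>

double simulateGpuTask(const std::vector<double>& v) {
    // In a real mixed-mode app this would launch kernels and wait on them.
    return std::accumulate(v.begin(), v.end(), 0.0);
}

int main() {
    std::vector<double> data(1 << 20, 1.0);

    // Hand the "GPU" work to another thread...
    auto gpuResult = std::async(std::launch::async, simulateGpuTask, data);

    // ...while the CPU stays busy with its own share of the work.
    double cpuResult = 0.0;
    for (double v : data) cpuResult += v * v;

    std::printf("cpu=%f gpu=%f\n", cpuResult, gpuResult.get());
    return 0;
}
```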
Joseph in Seattle Send message Joined: 16 Oct 06 Posts: 1 Credit: 1,462,240 RAC: 0 |
I am a beginner. Sorry, but what does the acronym GPU stand for? |
Mod.Sense Volunteer moderator Send message Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0 |
http://lmgtfy.com/?q=gpu Rosetta Moderator: Mod.Sense |
Jesse Viviano Send message Joined: 14 Jan 10 Posts: 42 Credit: 2,700,472 RAC: 0 |
I sincerely doubt that this project could use a GPU. From what little I know about this project from the quick guide to Rosetta and its graphics, it seems that steps 2 and 4 of the process are serial tasks, which GPUs are normally slow at unless special-purpose hardware is added to handle them, like the entropy decoders used in the first steps of decompressing compressed video. Therefore, asking for a GPU client seems to be a waste of time.

A better use of funds and developer time would be to compile a 64-bit client. The traditional x87 FPU used by 32-bit x86 code (as opposed to the SSE2 unit) is extremely slow because it is very unfriendly to compilers. The SSE2 unit, which is guaranteed to exist under AMD64 and Intel's copy of that standard and acts as the FPU there, is much more compiler-friendly, so compilers and humans can write floating-point code that is much faster than the x87 code most 32-bit programs use. Even if the code already uses SSE2 in 32-bit mode, the 16 XMM registers available in 64-bit mode should help by reducing the time spent on loads and stores to memory compared to the 8 XMM registers available in 32-bit mode. |
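As an illustration of the x87-versus-SSE2 point (these are standard GCC flags; I have no idea what flags the actual Rosetta builds use, and the function below is made up):

```cpp
// Illustration only: how the same C++ compiles to x87 vs SSE2 vs x86-64.
//
//   g++ -m32 -O2 energy.cpp                         # 32-bit, defaults to x87 FPU code
//   g++ -m32 -O2 -msse2 -mfpmath=sse energy.cpp     # 32-bit, floats via SSE2
//   g++ -m64 -O2 energy.cpp                         # 64-bit: SSE2 guaranteed, 16 XMM registers
//
#include <cstddef>

double lennardJonesish(const double* r2, std::size_t n) {
    // A made-up inner loop of the kind that benefits from SSE2 code generation.
    double e = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double inv6 = 1.0 / (r2[i] * r2[i] * r2[i]);
        e += inv6 * inv6 - inv6;
    }
    return e;
}
```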
Send message Joined: 1 Dec 05 Posts: 2035 Credit: 10,329,896 RAC: 13,796
And what about the new AVX instructions? Could they help Rosetta?? |
Send message Joined: 3 Nov 05 Posts: 1833 Credit: 120,343,184 RAC: 28,545
I sincerely doubt that this project could use a GPU. From what little I know about this project from the quick guide to Rosetta and its graphics, it seems that steps 2 and 4 of the process are serial tasks, which GPUs are normally slow at unless special-purpose hardware is added to handle them, like the entropy decoders used in the first steps of decompressing compressed video. Therefore, asking for a GPU client seems to be a waste of time. I'm as much of a programmer as Charlie Sheen is a nun, so I have no idea what the process to create a 64-bit client is - is it a case of compiling the existing code and then testing/debugging, or would parts need rewriting? If it were easy I would assume they'd already have done it, so I'm assuming it's not. |
Send message Joined: 3 Nov 05 Posts: 1833 Credit: 120,343,184 RAC: 28,545
I just had a look at AVX on Wikipedia and it randomly mentions that BOINC supports it! |
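As far as I know, BOINC itself mainly detects and reports CPU features such as AVX; an application still has to be built to use them. Just to show what AVX buys at the instruction level, here is a minimal stand-alone example (illustration only, not Rosetta code): one 256-bit instruction adds four doubles where SSE2 handles two.

```cpp
// Minimal AVX sketch. Compile with e.g.: g++ -O2 -mavx avx_demo.cpp
#include <immintrin.h>
#include <cstdio>

int main() {
    alignas(32) double a[4] = {1.0, 2.0, 3.0, 4.0};
    alignas(32) double b[4] = {10.0, 20.0, 30.0, 40.0};
    alignas(32) double c[4];

    __m256d va = _mm256_load_pd(a);      // load four doubles at once
    __m256d vb = _mm256_load_pd(b);
    __m256d vc = _mm256_add_pd(va, vb);  // four additions in one instruction
    _mm256_store_pd(c, vc);

    std::printf("%f %f %f %f\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```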