Are you considering GPU usage?

z2rus

Joined: 14 Jan 06
Posts: 1
Credit: 2,832,389
RAC: 0
Message 31973 - Posted: 2 Dec 2006, 21:57:26 UTC

I'm just wondering about possible GPU usage in future Rosetta versions.
Some distributed projects have beta versions of clients with GPU co-processing.
Feet1st

Joined: 30 Dec 05
Posts: 1755
Credit: 4,690,520
RAC: 0
Message 32025 - Posted: 3 Dec 2006, 20:20:19 UTC

Yes, Dr. Baker has said they are working towards that. Such things take considerable time.

They are ALSO working on a sort of Rosetta video game where YOU decide how to wrap the proteins, based on feedback of the energy level achieved. That would be really fun. And will be a great way for people to get a more concrete understanding of what Rosetta is doing, and how difficult it is to do the task well.
Add this signature to your EMail:
Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might!
https://boinc.bakerlab.org/rosetta/
Roy

Joined: 14 Feb 07
Posts: 3
Credit: 10,309
RAC: 0
Message 36770 - Posted: 14 Feb 2007, 8:26:54 UTC

Dr. Baker said that the ATI Radeon X1900 XTX has as much computing power as thousands of x86 CPUs, but that NVIDIA's GPUs could not do the same.
kevan-in-devon

Joined: 8 Nov 06
Posts: 2
Credit: 14,772
RAC: 0
Message 38907 - Posted: 3 Apr 2007, 2:24:48 UTC

GPU co-processing would be brilliant, as most people's video cards sit doing nothing. Mine gets about 30 minutes of use a day at most, then just sits idle while I browse the web and listen to music. Not a lot of work at all considering it's a 512 MB X1600.
kuarl

Joined: 10 Nov 07
Posts: 3
Credit: 177,515
RAC: 0
Message 49059 - Posted: 26 Nov 2007, 9:41:51 UTC - in response to Message 38907.  

GPU co-processing would be brilliant, as most people's video cards sit doing nothing. Mine gets about 30 minutes of use a day at most, then just sits idle while I browse the web and listen to music. Not a lot of work at all considering it's a 512 MB X1600.


Hi,
these days I'm studying the CUDA library, which is used to develop data-intensive applications on top of the NVIDIA 8800 series, and I would like to help put that power to use.

Does a project using the CUDA library in BOINC already exist? And where can I find the source code of Rosetta or other BOINC projects to do some tests?



Fernando, Italy
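
In 2007 the usual tool for this was CUDA C, but the data-parallel model kuarl describes can be sketched today in Python with Numba, which compiles kernels for NVIDIA GPUs. The kernel below is a toy for illustration only; every name in it is made up, and it is not Rosetta code:

    import numpy as np
    from numba import cuda

    @cuda.jit
    def halve(values, out):
        # Each GPU thread handles one array element, independently of all others.
        i = cuda.grid(1)
        if i < values.size:
            out[i] = values[i] * 0.5

    values = np.random.rand(1_000_000).astype(np.float32)
    out = np.zeros_like(values)
    threads_per_block = 256
    blocks = (values.size + threads_per_block - 1) // threads_per_block
    halve[blocks, threads_per_block](values, out)  # launch one thread per element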
The_Bad_Penguin

Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 52705 - Posted: 25 Apr 2008, 12:00:26 UTC - in response to Message 32025.  

Once again, guess who is at the forefront (gpu, ps3, pflop, etc)?

None other than F@H: April 23, 2008 Folding@home and Simbios


First, we're working to make our GPU code available to others. This code will be distributed in a couple of forms. First, we'll give out a GPU-enabled version of Gromacs (basically, a standalone version of the GPU2 core), which will enable others to get major speed increases from GPUs. Next, we are working to release a GPU-enabled library (OpenMM), which will allow others to integrate GPU code into their programs. OpenMM is special in that it is a place for integrating both application developers as well as GPU vendors; much like OpenGL, our hope is that hardware acceleration vendors will now have a single API to accelerate and people who want to write applications will have a single, hardware accelerated API to use that would work on a variety of platforms.



What's that old saying?

Oh yeah, "Lead, follow, or get out of the way!!!"

RV770 chip performs great

AMD's RV770 to have 800 stream processors?
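
OpenMM did go on to become a widely used GPU-accelerated molecular dynamics library. Here is a minimal sketch of its present-day Python API, which postdates this 2008 announcement; the input file name is a placeholder:

    from openmm import LangevinMiddleIntegrator, Platform
    from openmm.app import PDBFile, ForceField, Simulation, PME, HBonds
    from openmm.unit import kelvin, nanometer, picosecond, picoseconds

    pdb = PDBFile('protein.pdb')  # placeholder input structure
    ff = ForceField('amber14-all.xml', 'amber14/tip3pfb.xml')
    system = ff.createSystem(pdb.topology, nonbondedMethod=PME,
                             nonbondedCutoff=1 * nanometer, constraints=HBonds)
    integrator = LangevinMiddleIntegrator(300 * kelvin, 1 / picosecond,
                                          0.002 * picoseconds)
    platform = Platform.getPlatformByName('CUDA')  # the same script also runs on OpenCL or CPU
    sim = Simulation(pdb.topology, system, integrator, platform)
    sim.context.setPositions(pdb.positions)
    sim.minimizeEnergy()
    sim.step(5000)  # a short MD run, executed on the GPU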
Stephen

Joined: 26 Apr 08
Posts: 32
Credit: 429,286
RAC: 0
Message 55187 - Posted: 20 Aug 2008, 5:31:08 UTC - in response to Message 31973.  

I'm just wondering about possible GPU usage in future Rosetta versions.
Some distributed projects have beta versions of clients with GPU co-processing.


I would also like to see a GPU client. I have the money to go out and buy the best performing GPU (regardless of cost), but I would also like to see some benchmarks to determine which GPU I should buy.

The other issue with GPUs is whether SLI will be a problem.
Matthew Maples

Joined: 19 Oct 06
Posts: 5
Credit: 135,659
RAC: 0
Message 55362 - Posted: 28 Aug 2008, 20:29:12 UTC

I would like to see this too. The HD 4850 and HD 4870 have over 1 teraflop of computing power (1.0 and 1.2 teraflops, I believe), and the HD 4870 X2 has over 2 teraflops, much more than even the fastest quad-core CPU. Folding@home has a GPU client; I can't wait for Rosetta@home's.
zd

Joined: 25 Oct 06
Posts: 2
Credit: 206,179
RAC: 0
Message 77771 - Posted: 28 Dec 2014, 17:30:59 UTC - in response to Message 52705.  

Once again, guess who is at the forefront (gpu, ps3, pflop, etc)?

None other than F@H: April 23, 2008 Folding@home and Simbios [...]



I have 5 teraflops of power doing nothing most of the day; it games for 1 hour a day at most. What a waste, considering that power could help my fellow humans.




Murasaki

Joined: 20 Apr 06
Posts: 303
Credit: 511,418
RAC: 0
Message 77773 - Posted: 28 Dec 2014, 22:21:27 UTC - in response to Message 77771.  


I have 5 teraflops of power doing nothing most of the day; it games for 1 hour a day at most. What a waste, considering that power could help my fellow humans.


Some of the scientists have tested GPU processing of Rosetta tasks in the last couple of years. While a small number of functions benefited from the abilities of a GPU, most of the Rosetta work did not.

In general terms, GPUs are great for tasks that you can split into small chunks (each requiring little memory) that can be run in parallel. Unfortunately, a lot of Rosetta work either requires a large amount of memory or needs to run in sequence, which loses the advantages of the GPU.
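
To make the distinction concrete, here is a toy Python sketch; score and perturb are stand-ins invented for illustration, not Rosetta functions:

    import random

    def score(conformation):
        # Stand-in for an energy evaluation; real scoring is far heavier.
        return sum(x * x for x in conformation)

    def perturb(conformation):
        # Stand-in for one Monte Carlo move on a conformation.
        i = random.randrange(len(conformation))
        moved = list(conformation)
        moved[i] += random.uniform(-0.1, 0.1)
        return moved

    candidates = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(1000)]

    # GPU-friendly shape: 1000 independent evaluations, in any order,
    # so each could run on its own GPU thread.
    scores = [score(c) for c in candidates]

    # GPU-unfriendly shape: one trajectory where step N depends on step N-1,
    # so the loop cannot be spread across threads.
    state = candidates[0]
    for _ in range(10_000):
        state = perturb(state)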

You can read further discussions about GPU processing on Rosetta in the Number Crunching Forum on the main message board.

If you want to use your GPU for protein related research you may want to give POEM@home a shot. They have been trying to relaunch their GPU client in the last couple of months, though I haven't been paying attention to their progress.
Charles C.

Joined: 5 Oct 05
Posts: 2
Credit: 1,008,810
RAC: 0
Message 78045 - Posted: 18 Mar 2015, 13:28:06 UTC

Rosetta needs an upgrade.

The computing preferences are not up to the standards of other projects, and the computing algorithms suffer from severely outdated functions that cannot take advantage of recent GPUs like my GeForce GTX 970. So please, bring your project up to date.
GLLR

Joined: 13 Jan 12
Posts: 1
Credit: 89,896
RAC: 0
Message 79366 - Posted: 7 Jan 2016, 10:42:13 UTC

I would like to see work for GPUs. I run an R9 290X on several projects; it is far superior and more energy-efficient than assigning 7 of my 8 CPU cores, which requires far more energy for very little work. I have yet to run any GPU tasks on other projects that really tax my GPU much. I have seen that some contributors have been able to edit their config files to allow running two projects at the same time, as sketched below. I may try that, as I have no temperature restrictions under full load.
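
The config edit usually meant here is a per-project app_config.xml placed in the project's folder under the BOINC data directory. A sketch with illustrative values; the app name must match the project's actual application name:

    <app_config>
      <app>
        <name>example_gpu_app</name>  <!-- must match the project's app name -->
        <gpu_versions>
          <gpu_usage>0.5</gpu_usage>  <!-- half a GPU per task, so two tasks share one GPU -->
          <cpu_usage>0.5</cpu_usage>
        </gpu_versions>
      </app>
    </app_config>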

I rarely bother contributing anymore to projects that only use CPUs to perform work. It is just not worth the additional energy costs.

Hope your team can catch up with the times and become a bit environmentally greener. I do think your project is worth contributing to, but not at this cost of inefficiency.
qKVDKiCoUNJeXhN4k5C65X4WPb3

Joined: 11 Dec 16
Posts: 2
Credit: 960,125
RAC: 0
Message 88102 - Posted: 18 Jan 2018, 20:49:17 UTC

I personally advise against the use of a GPU. While it would mean increased computation, people crunching on the CPU would not be able to hit high RACs as easily, and one of the things I like about Rosetta is that it has CPU tasks only.
Alessandro Losavio

Joined: 3 May 18
Posts: 1
Credit: 43,481
RAC: 0
Message 88821 - Posted: 5 May 2018, 13:54:53 UTC - in response to Message 88102.  

Maybe a reasonable trade-off is to have two versions of Rosetta: one that uses the GPU and another that uses the CPU only.

Is this feasible?
Aurum

Joined: 12 Jul 17
Posts: 32
Credit: 38,158,977
RAC: 0
Message 90147 - Posted: 5 Jan 2019, 19:48:43 UTC - in response to Message 88821.  

Maybe a reasonable trade-off is to have two versions of Rosetta: one that uses the GPU and another that uses the CPU only.
Is this feasible?
It's extremely important that they be separate: the controls for CPU projects have to be totally separate from those for GPU projects.
My compute nodes have a range of capabilities, and one size does NOT fit all in distributed computing.
Especially when The Beast of BOINC consumes all of your RAM.
futurenz

Joined: 30 Mar 14
Posts: 1
Credit: 497,602
RAC: 0
Message 95883 - Posted: 2 May 2020, 23:23:11 UTC - in response to Message 55187.  

https://boinc.berkeley.edu/wiki/ATI_Radeon
