1.03 (vbox64) is out for rosetta python projects

Message boards : Number crunching : 1.03 (vbox64) is out for rosetta python projects



Jim1348

Joined: 19 Jan 06
Posts: 881
Credit: 52,257,545
RAC: 0
Message 102093 - Posted: 18 Jun 2021, 22:58:46 UTC

A new version for the rosetta python projects is out for Windows, Linux and Mac.
Maybe there will be some work units to go along with it. That would be nice.
mikey
Joined: 5 Jan 06
Posts: 1895
Credit: 9,097,357
RAC: 5,678
Message 102094 - Posted: 18 Jun 2021, 23:31:03 UTC - in response to Message 102093.  

A new version for the rosetta python projects is out for Windows, Linux and Mac.
Maybe there will be some work units to go along with it. That would be nice.


I agree
Profile [VENETO] boboviz

Joined: 1 Dec 05
Posts: 1990
Credit: 9,492,874
RAC: 12,663
Message 102098 - Posted: 20 Jun 2021, 11:43:27 UTC - in response to Message 102093.  

Maybe there will be some work units to go along with it. That would be nice.


- more python WUs
- the possibility, in the user's profile, to choose the app
- what are the differences between this app and 0.21 on Ralph@home?
Jim1348

Joined: 19 Jan 06
Posts: 881
Credit: 52,257,545
RAC: 0
Message 102100 - Posted: 20 Jun 2021, 14:28:35 UTC - in response to Message 102098.  

- possibility, in the user's profile, to choose app

Yes, that is my hope. In the meantime, I have set up app_configs for each machine to limit the number of pythons to what its memory can handle.
It is even possible that they will run short (or even out) of the regular Rosettas and we will be left with only the pythons.

Since they don't tell us, we can believe what we want.
Profile dcdc

Joined: 3 Nov 05
Posts: 1831
Credit: 119,444,289
RAC: 10,966
Message 102107 - Posted: 21 Jun 2021, 22:06:53 UTC - in response to Message 102100.  

I doubt we'll be out of normal Rosetta units for a while yet because I believe the Robetta server takes requests in from around the world and distributes them on r@h. I wouldn't expect all those users to start using trrosetta yet, but I might well be wrong. Would be good to hear from someone in the project.
wolfman1360

Joined: 18 Feb 17
Posts: 72
Credit: 18,450,036
RAC: 0
Message 102128 - Posted: 26 Jun 2021, 20:30:39 UTC

Hopefully they don't switch exclusively to vbox - or they at least give users the ability to choose work unit types. There are doubtless countless machines that don't have VirtualBox installed and likely never will, since they just sit and crunch with users never checking the forums.

How much memory does each WU consume with these new python WUs? Do they finally use SSE/AVX? I don't think I have gotten one yet.
thanks
Jim1348

Joined: 19 Jan 06
Posts: 881
Credit: 52,257,545
RAC: 0
Message 102129 - Posted: 26 Jun 2021, 20:37:10 UTC - in response to Message 102128.  

How much memory does each WU consume with these new python WUs?

You won't believe it. They use 8 GB per work unit. Maybe they have reduced that, but VBox will be the easy part.
wolfman1360

Joined: 18 Feb 17
Posts: 72
Credit: 18,450,036
RAC: 0
Message 102130 - Posted: 26 Jun 2021, 20:40:39 UTC - in response to Message 102129.  

How much memory does each WU consume with these new python WUs?

You won't believe it. They use 8 GB per work unit. Maybe they have reduced that, but VBox will be the easy part.

8 GB!
Good thing 2 of my Ryzens have 64 GB, but... still!
That's just a little excessive.
Hopefully that comes way down or gets optimized before these go into main production.
Bryn Mawr

Joined: 26 Dec 18
Posts: 389
Credit: 12,043,527
RAC: 14,519
Message 102132 - Posted: 27 Jun 2021, 6:58:57 UTC - in response to Message 102129.  

How much memory does each WU consume with these new python WUs?

You won't believe it. They use 8 GB per work unit. Maybe they have reduced that, but VBox will be the easy part.


If these become the standard then I will have to, sadly, abandon Rosetta.
mikey
Joined: 5 Jan 06
Posts: 1895
Credit: 9,097,357
RAC: 5,678
Message 102138 - Posted: 29 Jun 2021, 2:27:06 UTC - in response to Message 102132.  

How much memory does each WU consume with these new python WUs?

You won't believe it. They use 8 GB per work unit. Maybe they have reduced that, but VBox will be the easy part.


If these become the standard then I will have to, sadly, abandon Rosetta.


WHY? You can always run just 1 task at a time.
Bryn Mawr

Joined: 26 Dec 18
Posts: 389
Credit: 12,043,527
RAC: 14,519
Message 102140 - Posted: 29 Jun 2021, 5:58:31 UTC - in response to Message 102138.  

How much memory does each WU consume with these new python WUs?

You won't believe it. They use 8 GB per work unit. Maybe they have reduced that, but VBox will be the easy part.


If these become the standard then I will have to, sadly, abandon Rosetta.


WHY? You can always run 1 task at a time?


So run 1, possibly 2, tasks and leave the other 23/22 threads idle on each machine?

No thanks, Rosetta is one of 4 projects I contribute to and I would have to sacrifice it to allow the other 3 to continue running.
Jim1348

Joined: 19 Jan 06
Posts: 881
Credit: 52,257,545
RAC: 0
Message 102143 - Posted: 29 Jun 2021, 15:48:08 UTC - in response to Message 102140.  

So run 1, possibly 2, tasks and leave the other 23/22 threads idle on each machine?

You use an app_config.xml file to limit the Pythons to the number you can run.
https://boinc.bakerlab.org/rosetta/forum_thread.php?id=14448&postid=102001#102001

Then you can use the other cores to run whatever else you want (which may be the ordinary Rosettas).
It is a good way to use up all the memory you have.
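For reference, a minimal sketch of the kind of app_config.xml described above (the app name here is an assumption based on the thread title - check the <app> entries in your client_state.xml for the exact name your client uses):

```xml
<!-- Hypothetical app_config.xml placed in the Rosetta project directory.
     Caps the vbox/python tasks at 2 at a time while leaving the
     regular Rosetta app unrestricted. -->
<app_config>
   <app>
      <name>rosetta_python_projects</name>
      <max_concurrent>2</max_concurrent>
   </app>
</app_config>
```

After saving the file in the project's directory, use Options > Read config files in the BOINC Manager so the limit takes effect without restarting the client.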
mikey
Joined: 5 Jan 06
Posts: 1895
Credit: 9,097,357
RAC: 5,678
Message 102145 - Posted: 29 Jun 2021, 23:45:49 UTC - in response to Message 102143.  

So run 1, possibly 2, tasks and leave the other 23/22 threads idle on each machine?

You use an app_config.xml file to limit the Pythons to the number you can run.
https://boinc.bakerlab.org/rosetta/forum_thread.php?id=14448&postid=102001#102001

Then you can use the other cores to run whatever else you want (which may be the ordinary Rosettas).
It is a good way to use up all the memory you have.


In that app_config file you don't even need the name of the app; you can leave that line out, and as long as the file is in the project folder you can run x number of units at a time from that project. I often use it to limit my crunching when new apps come out from the different projects, limiting which apps I get in the different venues when possible, and run e.g. 3 tasks from project A, 3 tasks from project B and then 6 tasks from project C, still leaving 2 cores free for GPU crunching and surfing. It can mean a little hands-on work to get tasks from some projects, but suspending the ones I don't need tasks from and then resuming them after I get the needed tasks keeps things running pretty smoothly.

<app_config>
   <app>
      <max_concurrent>3</max_concurrent>
   </app>
</app_config>

It does NOT work for projects that don't let you pick which type of task you want to run in the venue settings, e.g. Rosetta, but if you don't care what kind of tasks you crunch it works fine.
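As a side note on the fragment above: BOINC also documents a project-wide cap, <project_max_concurrent>, for exactly this "limit everything from one project" case (supported in recent clients; a sketch, not tested against every client version):

```xml
<!-- Caps all tasks from this project at 3 at once, regardless of app. -->
<app_config>
   <project_max_concurrent>3</project_max_concurrent>
</app_config>
```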
Jim1348

Joined: 19 Jan 06
Posts: 881
Credit: 52,257,545
RAC: 0
Message 102146 - Posted: 30 Jun 2021, 5:34:20 UTC - in response to Message 102145.  
Last modified: 30 Jun 2021, 5:37:11 UTC

In that app_config file you don't even need the name of the app, you can leave that line out and as long as it's in the Project folder you can run x numbers of units at one time from that project.

Yes, but then you limit all the tasks, even the normal Rosetta ones.
If you use the name, you can limit the Python Rosettas while not restricting the normal Rosettas.

If you have limited memory, it would be best to use a non-Rosetta "other" project. WCG/MCM works well, though it sometimes gives me scheduling problems and downloads too many.
TN-Grid is nice and reliable with good science and minimal memory requirements at 57 MB per work unit.
http://gene.disi.unitn.it/test/index.php

And if you want COVID research, there is SiDock, using 173 MB per work unit (all on Linux).
Bryn Mawr

Joined: 26 Dec 18
Posts: 389
Credit: 12,043,527
RAC: 14,519
Message 102147 - Posted: 30 Jun 2021, 6:33:08 UTC - in response to Message 102143.  

So run 1, possibly 2, tasks and leave the other 23/22 threads idle on each machine?

You use an app_config.xml file to limit the Pythons to the number you can run.
https://boinc.bakerlab.org/rosetta/forum_thread.php?id=14448&postid=102001#102001

Then you can use the other cores to run whatever else you want (which may be the ordinary Rosettas).
It is a good way to use up all the memory you have.


If 2 tasks use up all of the memory on the machine then there is no memory for the other cores to run anything.

As I said, if the new tasks become the norm (I.e. Rosetta no longer provides the current type of task) then I will stop running Rosetta.
Profile [VENETO] boboviz

Joined: 1 Dec 05
Posts: 1990
Credit: 9,492,874
RAC: 12,663
Message 102148 - Posted: 30 Jun 2021, 6:38:34 UTC - in response to Message 102147.  

Calm down, guys.
I think it's just the beginning of the "python project" and that there is a lot of work to do on the code.
wolfman1360

Joined: 18 Feb 17
Posts: 72
Credit: 18,450,036
RAC: 0
Message 102157 - Posted: 2 Jul 2021, 17:50:39 UTC

One upside to these - at least from what I can see - is they take a lot less time to run, maybe an hour.
My two larger processors are 2 Ryzen 7 3700s. SiDock seems to do okay with them - nice and small requirements - but sometimes I run into scheduling issues. Even my 4770 has problems, and the scheduler can't keep up at times.
I should check out TN-Grid again. I gave up pretty hard on LHC@home, which is a shame as the science was very neat, but I ran into constant issues, and I don't have time to fiddle with things these days, given it is summer and the beach calls.
These Ryzens are doing so much work versus the E5-2680 I had going in here, and my air conditioner is kicking in so much less. Add to that the incredible heat, and it's only June - best purchase I have made.
How do the 5000 series from AMD (Zen 3) do on this project? I couldn't find one at a decent price; the 5600 was more expensive than this 3700.
I might snatch up yet another 3000 series - maybe a 3600.
Jim1348

Joined: 19 Jan 06
Posts: 881
Credit: 52,257,545
RAC: 0
Message 102158 - Posted: 2 Jul 2021, 17:55:47 UTC - in response to Message 102157.  

I might snatch up yet another 3000 series - maybe a 3600.

I like the 3600 a lot, though I have 3900X and 3950X also.
But the cache sometimes works better on the 3600 than on the larger chips, depending on how the project uses it, and they are easier to cool.
And the price was right when I bought them. Everything is more expensive now, but I am going to wait at least until the end of the year for the 5000 series.
Profile Grant (SSSF)

Joined: 28 Mar 20
Posts: 1667
Credit: 17,445,511
RAC: 24,785
Message 102159 - Posted: 2 Jul 2021, 21:02:18 UTC - in response to Message 102157.  

One upside to these - at least from what I can see - they take a lot less time to run, maybe an hour.
How long they run for is set by the project. When the final version is released, I'd expect their runtime to be the project's default of 8 hours.
Since they're still developing the application, they would want the results back sooner, hence the shorter runtime.
Grant
Darwin NT
Profile [VENETO] boboviz

Joined: 1 Dec 05
Posts: 1990
Credit: 9,492,874
RAC: 12,663
Message 102333 - Posted: 3 Aug 2021, 7:32:45 UTC

A little batch of rosetta python WUs on Ralph.




©2024 University of Washington
https://www.bakerlab.org