Run time.

Message boards : Rosetta@home Science : Run time.

adrianxw
Joined: 18 Sep 05
Posts: 653
Credit: 11,840,739
RAC: 51
Message 89816 - Posted: 1 Nov 2018, 21:41:58 UTC
Last modified: 1 Nov 2018, 21:47:48 UTC

In your settings, you can change the run time for work units. I am making an assumption about how this works and want to confirm it.
If a job takes 20 minutes to run, and I have my task run time set to 1 hour, then my assumption is the work unit will run 3 jobs.
An alternative possibility is that a work unit starts a single job and works on it for 1 hour then stops, reporting its state at the end of the hour. The returned state can then be issued in another work unit as its starting point. This would act in a similar manner as checkpoints within a job do.
If the alternate explanation is the case, setting a longer run time would allow the job to progress further within a single work unit, which is probably beneficial to the job overall, since the generation, distribution, processing, and collection of parts is minimised.
Comments welcome.
Wave upon wave of demented avengers march cheerfully out of obscurity into the dream.
ID: 89816
Snags
Joined: 22 Feb 07
Posts: 198
Credit: 2,888,320
RAC: 0
Message 89861 - Posted: 9 Nov 2018, 17:54:49 UTC - in response to Message 89816.  

Your first assumption is the correct one. The program will complete as many decoys as it can in the time allotted. At the end of each decoy it checks whether it has time to run another; if not, it wraps up. This is why you may see a task complete in less time than you have chosen in your preferences. On the other hand, not all decoys take the same amount of time to run, and some will continue past your run time preference in order to complete the decoy. If one runs four hours over, the watchdog should cut in and end the task.

Snags
ID: 89861
adrianxw
Message 89864 - Posted: 9 Nov 2018, 22:40:36 UTC
Last modified: 9 Nov 2018, 22:42:32 UTC

Okay, that is more or less what I thought. I have my run time set to 12 hours, and the work units typically run for circa 12 hours. The other idea was a "just in case" type scenario, in case they needed longer run times.
ID: 89864
Mod.Sense
Volunteer moderator
Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 89877 - Posted: 11 Nov 2018, 15:48:28 UTC

Just to clarify terminology, the 20-minute run you mention is what is referred to as a decoy or a model. It is a complete run of the algorithm used by the task. What you describe about partial completion and creating another work unit to continue the work is possible with BOINC, but not required by R@h; each run of the algorithm completes on a time scale that every machine can manage. A longer runtime preference results in more completed decoys, rather than a more precise prediction. At the project level, more reported decoys yields a better prediction overall.

If the user's runtime preference, together with the average runtime of the first models, predicts there is enough time to run another, then a new model with a unique starting point is begun. Note that the runtime preference and the time calculations just described refer to actual CPU time, while the BOINC Manager shows both CPU time and run time (wall-clock time).
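The scheduling rule just described can be sketched in a few lines (an illustrative toy, not actual R@h code; the function name and the use of plain CPU-seconds are my assumptions):

```python
def decoys_completed(runtime_pref, decoy_times):
    """Toy model of the rule above: after each completed decoy, start
    another only if the running average of decoy times predicts it
    would still fit within the runtime preference."""
    used = 0.0
    count = 0
    for t in decoy_times:
        used += t            # a started decoy always runs to completion
        count += 1
        avg = used / count   # average runtime of the models so far
        if used + avg > runtime_pref:
            break            # an average decoy would overshoot: wrap up
    return count, used

# The 20-minute-decoy / 1-hour-preference case from the opening post:
print(decoys_completed(3600, [1200] * 10))  # → (3, 3600.0)
```

Note that if decoys run long, the task can end well before the preference (one 2000-second decoy against a 3600-second preference stops after a single model), which matches the behaviour described above.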
Rosetta Moderator: Mod.Sense
ID: 89877
adrianxw
Message 89882 - Posted: 12 Nov 2018, 8:00:50 UTC - in response to Message 89877.  
Last modified: 12 Nov 2018, 8:13:59 UTC

Simply out of curiosity, does the work unit download the decoys as it needs them or does the initial send have sufficient information to run the algorithm many times? I envisage a situation where the model takes a set number of start parameters and these are included in the initial download, it runs the initial decoy, and then, if time allows, it, for example, increments parameter 6, and then runs it again. I was a professional software engineer for 40 years, so am likely to understand a technical reply.
ID: 89882
Mod.Sense
Message 89885 - Posted: 12 Nov 2018, 20:53:26 UTC

Yes, "the initial send have sufficient information to run the algorithm many times" says it well. The second decoy runs over the same protein sequence with the same algorithm; it simply starts with a new random number, which is used to create a new hypothetical starting position of the protein. So there isn't anything more to download. The next starting point is generated for the subject protein using the random number, and the algorithm is then run against this new starting conformation.
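As a toy illustration of why nothing more needs downloading (this is not Rosetta's actual conformation generator; the sequence, function name, and the choice of random phi/psi torsion angles are all my own illustrative assumptions):

```python
import random

def new_starting_conformation(sequence, seed):
    """Everything needed is already on disk: the sequence and the
    algorithm. Only the random seed changes between decoys, so no
    extra download is required. This toy just draws random phi/psi
    torsion angles per residue; real Rosetta is far more sophisticated."""
    rng = random.Random(seed)
    return [(rng.uniform(-180.0, 180.0), rng.uniform(-180.0, 180.0))
            for _ in sequence]

# Same inputs, different seed -> a different starting point for decoy 2.
start1 = new_starting_conformation("MKTAYIAK", seed=1)
start2 = new_starting_conformation("MKTAYIAK", seed=2)
```

The same seed always reproduces the same starting point, which is also why a result can name the random-number starting point it used.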
ID: 89885
adrianxw
Message 89887 - Posted: 12 Nov 2018, 21:27:36 UTC - in response to Message 89885.  

Thanks, understand fully.
ID: 89887
adrianxw
Message 89980 - Posted: 10 Dec 2018, 18:25:02 UTC

Given what you have said here, if a unit has a "computation error" like this one...

https://boinc.bakerlab.org/result.php?resultid=1046340233

... it has run the job many times. I would expect a "computation error" in an early cycle to crash the work unit, yet that one ran for the time limit I have set, using the same protein, simply with a different random-number starting point. This implies that the job ran normally for many starting points. I have actually noticed a number of errors in the last few days; the one I highlight is just the worst.
ID: 89980
Mod.Sense
Message 89981 - Posted: 10 Dec 2018, 20:36:52 UTC

That task says it had an error trying to create the results file, but the output shows it completed 21 structures. So it did complete those, and just had an error at the end while trying to create and zip the results. A few possibilities I can think of: Windows permission issues, anti-virus software, a full hard drive (or whatever storage device BOINC is using to run), and possibly a task that was somehow configured incorrectly (on the R@h side) and was trying to access the wrong area of storage to create the file.
ID: 89981
adrianxw
Message 89982 - Posted: 11 Dec 2018, 8:06:01 UTC
Last modified: 11 Dec 2018, 8:35:58 UTC

I hear what you say, but, having spent some time checking my other projects' totals, I have only one project showing a single error, and that is a "cancelled by server", which is not an error; I don't know why it was flagged as such. So Einstein, Milkyway, Seti, Yoyo, and Acoustics are not having any problems; it is just here, and just in the last week. Other projects have not sent work for a while, but none I looked at had any errors.

Nothing has changed with Windows or Avast (anti-virus), and there is over 20 GB free on the SSD.
ID: 89982
adrianxw
Message 89987 - Posted: 12 Dec 2018, 8:10:14 UTC
Last modified: 12 Dec 2018, 9:03:03 UTC

Deleted.
ID: 89987
Mod.Sense
Message 89995 - Posted: 13 Dec 2018, 18:54:13 UTC

I guess I am a bit confused. I do not see that the task you linked indicates it was cancelled by the server.

If a batch of tasks is found to have a problem, now that the newer server code is in place, the Project Team can easily cancel them to spare others the same problems. When that happens, the "cancelled by server" sort of status is assigned. It helps make the best use of crunch time. If other projects have not had to cancel batches of tasks, the tasks from those projects will not see such a status.
ID: 89995
adrianxw
Message 89996 - Posted: 13 Dec 2018, 20:35:27 UTC - in response to Message 89995.  
Last modified: 13 Dec 2018, 21:13:20 UTC

Err, I'm confused now. What I said is...

>>>
Given what you have said here, if a unit has a "computation error" like this one... (highlight added)

https://boinc.bakerlab.org/result.php?resultid=1046340233
<<<<

... I quite understand why tasks can be cancelled by the server, and quite agree with the function; however, I did not say that this task was cancelled by the server.

I did say I was searching my other active projects for "errors", but found none, only here.

A cancelled by server work unit is this one...

https://boinc.bakerlab.org/workunit.php?wuid=941131930
ID: 89996
adrianxw
Message 89999 - Posted: 14 Dec 2018, 14:07:56 UTC
Last modified: 14 Dec 2018, 14:09:07 UTC

And now there is something seriously wrong going on here. Looking at my "errors" page, most of the older ones, which had values before, are now showing "Timed out - no response"; they did NOT show that before.

1045594884 942000780 3117659 6 Dec 2018, 12:54:10 UTC 14 Dec 2018, 12:54:10 UTC Timed out - no response 0.00 0.00 --- Rosetta Mini v3.78
windows_intelx86

Secure copies made.
ID: 89999
Sid Celery
Joined: 11 Feb 08
Posts: 2124
Credit: 41,224,342
RAC: 11,119
Message 90014 - Posted: 17 Dec 2018, 16:09:27 UTC - in response to Message 89999.  

And now there is something seriously wrong going on here. Looking at my "errors" page, most of the older ones, which had values before, are now showing "Timed out - no response"; they did NOT show that before.

1045594884 942000780 3117659 6 Dec 2018, 12:54:10 UTC 14 Dec 2018, 12:54:10 UTC Timed out - no response 0.00 0.00 --- Rosetta Mini v3.78
windows_intelx86

Secure copies made.

I've had a lot of this too, and I have a theory about why...

Recently there have been a lot of download errors on all my machines. A recent example of these errors is shown below:

14/12/2018 21:12:05 | Rosetta@home | Sending scheduler request: To report completed tasks.
14/12/2018 21:12:05 | Rosetta@home | Reporting 4 completed tasks
14/12/2018 21:12:05 | Rosetta@home | Requesting new tasks for CPU
14/12/2018 21:12:08 | Rosetta@home | [error] Can't parse file info in scheduler reply: file name is empty or has '..'
14/12/2018 21:12:08 | Rosetta@home | [error] Can't parse file info in scheduler reply: file name is empty or has '..'
14/12/2018 21:12:08 | Rosetta@home | [error] Can't parse file info in scheduler reply: file name is empty or has '..'
14/12/2018 21:12:08 | Rosetta@home | [error] Can't parse file info in scheduler reply: file name is empty or has '..'
14/12/2018 21:12:08 | Rosetta@home | [error] Can't parse file info in scheduler reply: file name is empty or has '..'
14/12/2018 21:12:08 | Rosetta@home | [error] Can't parse file info in scheduler reply: file name is empty or has '..'
14/12/2018 21:12:08 | Rosetta@home | [error] Can't parse file info in scheduler reply: file name is empty or has '..'
14/12/2018 21:12:08 | Rosetta@home | [error] Can't parse file info in scheduler reply: file name is empty or has '..'
14/12/2018 21:12:08 | Rosetta@home | [error] Can't parse file info in scheduler reply: file name is empty or has '..'
14/12/2018 21:12:08 | Rosetta@home | [error] Can't parse file info in scheduler reply: file name is empty or has '..'
14/12/2018 21:12:08 | Rosetta@home | Scheduler request completed: got 5 new tasks
14/12/2018 21:12:08 | Rosetta@home | [error] State file error: missing file r1_r1_EHEE_12133_000000021_0001_0001_0001_15_38_H_.._ems_3hM_2987_0001_0003_0001_0001_1_21_H_.._DHR71_DHR77_l5_t2_t1_D26_D20_cTerm_3x_r8_0001_0003_0001_0001_0002_0001_0001_0001_0001_fragments_data.zip
14/12/2018 21:12:08 | Rosetta@home | [error] State file error: missing input file r1_r1_EHEE_12133_000000021_0001_0001_0001_15_38_H_.._ems_3hM_2987_0001_0003_0001_0001_1_21_H_.._DHR71_DHR77_l5_t2_t1_D26_D20_cTerm_3x_r8_0001_0003_0001_0001_0002_0001_0001_0001_0001_fragments_data.zip
14/12/2018 21:12:08 | Rosetta@home | [error] Can't handle task r1_r1_EHEE_12133_000000021_0001_0001_0001_15_38_H_.._ems_3hM_2987_0001_0003_0001_0001_1_21_H_.._DHR71_DHR77_l5_t2_t1_D26_D20_cTerm_3x_r8_0001_0003_0001_0001_0002_0001_0001_0001_0001_fragments_fold_SAV_705713_217 in scheduler reply
14/12/2018 21:12:08 | Rosetta@home | [error] State file error: missing file r1_r1_ems_3hM_2904_000000002_0001_0001_0001_42_61_H_.._EHEE_13236_0001_0001_0001_0001_15_39_H_.._DHR62_DHR54_l4_t2_t3_D20_D18_ct6_cTerm_3x_r8_0001_0002_0001_0001_0001_0001_0001_0001_fragments_data.zip
14/12/2018 21:12:08 | Rosetta@home | [error] State file error: missing input file r1_r1_ems_3hM_2904_000000002_0001_0001_0001_42_61_H_.._EHEE_13236_0001_0001_0001_0001_15_39_H_.._DHR62_DHR54_l4_t2_t3_D20_D18_ct6_cTerm_3x_r8_0001_0002_0001_0001_0001_0001_0001_0001_fragments_data.zip
14/12/2018 21:12:08 | Rosetta@home | [error] Can't handle task r1_r1_ems_3hM_2904_000000002_0001_0001_0001_42_61_H_.._EHEE_13236_0001_0001_0001_0001_15_39_H_.._DHR62_DHR54_l4_t2_t3_D20_D18_ct6_cTerm_3x_r8_0001_0002_0001_0001_0001_0001_0001_0001_fragments_fold_SAV_706426_122 in scheduler reply
14/12/2018 21:12:08 | Rosetta@home | [error] State file error: missing file r1_r1_ems_3hC_984_0002_000000007_0001_0001_0001_23_41_H_.._EHEE_10482_0001_0001_0001_0001_15_38_H_.._DHR77_DHR4_l3_t3_t3_D26_D20_ct7_nTerm_3x_r8_0001_0001_0001_0001_0001_0001_0001_0001_fragments_data.zip
14/12/2018 21:12:08 | Rosetta@home | [error] State file error: missing input file r1_r1_ems_3hC_984_0002_000000007_0001_0001_0001_23_41_H_.._EHEE_10482_0001_0001_0001_0001_15_38_H_.._DHR77_DHR4_l3_t3_t3_D26_D20_ct7_nTerm_3x_r8_0001_0001_0001_0001_0001_0001_0001_0001_fragments_data.zip
14/12/2018 21:12:08 | Rosetta@home | [error] Can't handle task r1_r1_ems_3hC_984_0002_000000007_0001_0001_0001_23_41_H_.._EHEE_10482_0001_0001_0001_0001_15_38_H_.._DHR77_DHR4_l3_t3_t3_D26_D20_ct7_nTerm_3x_r8_0001_0001_0001_0001_0001_0001_0001_0001_fragments_fold__706255_218 in scheduler reply
14/12/2018 21:12:08 | Rosetta@home | [error] State file error: missing file r1_r1_EHEE_12133_000000021_0001_0001_0001_15_38_H_.._ems_3hM_2987_0001_0003_0001_0001_1_21_H_.._DHR15_DHR70_l5_t3_t1_D20_D25_cTerm_3x_r8_0001_0005_0001_0001_0001_0001_0001_0001_fragments_data.zip
14/12/2018 21:12:08 | Rosetta@home | [error] State file error: missing input file r1_r1_EHEE_12133_000000021_0001_0001_0001_15_38_H_.._ems_3hM_2987_0001_0003_0001_0001_1_21_H_.._DHR15_DHR70_l5_t3_t1_D20_D25_cTerm_3x_r8_0001_0005_0001_0001_0001_0001_0001_0001_fragments_data.zip
14/12/2018 21:12:08 | Rosetta@home | [error] Can't handle task r1_r1_EHEE_12133_000000021_0001_0001_0001_15_38_H_.._ems_3hM_2987_0001_0003_0001_0001_1_21_H_.._DHR15_DHR70_l5_t3_t1_D20_D25_cTerm_3x_r8_0001_0005_0001_0001_0001_0001_0001_0001_fragments_fold_SAVE_ALL_705607_217 in scheduler reply
14/12/2018 21:12:08 | Rosetta@home | [error] State file error: missing file r1_r1_ems_3hM_2904_000000002_0001_0001_0001_42_61_H_.._EHEE_13236_0001_0001_0001_0001_15_39_H_.._DHR71_DHR26_l3_h21_l2_t1_t3_0_v6c_r11_0001_0002_0001_0001_0002_0001_0001_0001_0001_fragments_data.zip
14/12/2018 21:12:08 | Rosetta@home | [error] State file error: missing input file r1_r1_ems_3hM_2904_000000002_0001_0001_0001_42_61_H_.._EHEE_13236_0001_0001_0001_0001_15_39_H_.._DHR71_DHR26_l3_h21_l2_t1_t3_0_v6c_r11_0001_0002_0001_0001_0002_0001_0001_0001_0001_fragments_data.zip
14/12/2018 21:12:08 | Rosetta@home | [error] Can't handle task r1_r1_ems_3hM_2904_000000002_0001_0001_0001_42_61_H_.._EHEE_13236_0001_0001_0001_0001_15_39_H_.._DHR71_DHR26_l3_h21_l2_t1_t3_0_v6c_r11_0001_0002_0001_0001_0002_0001_0001_0001_0001_fragments_fold_SAVE__706457_122 in scheduler reply
14/12/2018 21:12:08 | Rosetta@home | [error] State file error: missing task r1_r1_EHEE_12133_000000021_0001_0001_0001_15_38_H_.._ems_3hM_2987_0001_0003_0001_0001_1_21_H_.._DHR71_DHR77_l5_t2_t1_D26_D20_cTerm_3x_r8_0001_0003_0001_0001_0002_0001_0001_0001_0001_fragments_fold_SAV_705713_217
14/12/2018 21:12:08 | Rosetta@home | [error] Can't handle task r1_r1_EHEE_12133_000000021_0001_0001_0001_15_38_H_.._ems_3hM_2987_0001_0003_0001_0001_1_21_H_.._DHR71_DHR77_l5_t2_t1_D26_D20_cTerm_3x_r8_0001_0003_0001_0001_0002_0001_0001_0001_0001_fragments_fold_SAV_705713_217_1 in scheduler reply
14/12/2018 21:12:08 | Rosetta@home | [error] State file error: missing task r1_r1_ems_3hM_2904_000000002_0001_0001_0001_42_61_H_.._EHEE_13236_0001_0001_0001_0001_15_39_H_.._DHR62_DHR54_l4_t2_t3_D20_D18_ct6_cTerm_3x_r8_0001_0002_0001_0001_0001_0001_0001_0001_fragments_fold_SAV_706426_122
14/12/2018 21:12:08 | Rosetta@home | [error] Can't handle task r1_r1_ems_3hM_2904_000000002_0001_0001_0001_42_61_H_.._EHEE_13236_0001_0001_0001_0001_15_39_H_.._DHR62_DHR54_l4_t2_t3_D20_D18_ct6_cTerm_3x_r8_0001_0002_0001_0001_0001_0001_0001_0001_fragments_fold_SAV_706426_122_1 in scheduler reply
14/12/2018 21:12:08 | Rosetta@home | [error] State file error: missing task r1_r1_ems_3hC_984_0002_000000007_0001_0001_0001_23_41_H_.._EHEE_10482_0001_0001_0001_0001_15_38_H_.._DHR77_DHR4_l3_t3_t3_D26_D20_ct7_nTerm_3x_r8_0001_0001_0001_0001_0001_0001_0001_0001_fragments_fold__706255_218
14/12/2018 21:12:08 | Rosetta@home | [error] Can't handle task r1_r1_ems_3hC_984_0002_000000007_0001_0001_0001_23_41_H_.._EHEE_10482_0001_0001_0001_0001_15_38_H_.._DHR77_DHR4_l3_t3_t3_D26_D20_ct7_nTerm_3x_r8_0001_0001_0001_0001_0001_0001_0001_0001_fragments_fold__706255_218_1 in scheduler reply
14/12/2018 21:12:08 | Rosetta@home | [error] State file error: missing task r1_r1_EHEE_12133_000000021_0001_0001_0001_15_38_H_.._ems_3hM_2987_0001_0003_0001_0001_1_21_H_.._DHR15_DHR70_l5_t3_t1_D20_D25_cTerm_3x_r8_0001_0005_0001_0001_0001_0001_0001_0001_fragments_fold_SAVE_ALL_705607_217
14/12/2018 21:12:08 | Rosetta@home | [error] Can't handle task r1_r1_EHEE_12133_000000021_0001_0001_0001_15_38_H_.._ems_3hM_2987_0001_0003_0001_0001_1_21_H_.._DHR15_DHR70_l5_t3_t1_D20_D25_cTerm_3x_r8_0001_0005_0001_0001_0001_0001_0001_0001_fragments_fold_SAVE_ALL_705607_217_1 in scheduler reply
14/12/2018 21:12:08 | Rosetta@home | [error] State file error: missing task r1_r1_ems_3hM_2904_000000002_0001_0001_0001_42_61_H_.._EHEE_13236_0001_0001_0001_0001_15_39_H_.._DHR71_DHR26_l3_h21_l2_t1_t3_0_v6c_r11_0001_0002_0001_0001_0002_0001_0001_0001_0001_fragments_fold_SAVE__706457_122
14/12/2018 21:12:08 | Rosetta@home | [error] Can't handle task r1_r1_ems_3hM_2904_000000002_0001_0001_0001_42_61_H_.._EHEE_13236_0001_0001_0001_0001_15_39_H_.._DHR71_DHR26_l3_h21_l2_t1_t3_0_v6c_r11_0001_0002_0001_0001_0002_0001_0001_0001_0001_fragments_fold_SAVE__706457_122_1 in scheduler reply

My theory is that the server thinks these tasks were received correctly when they weren't, so they never appear in the list of tasks to process here, and they only disappear once they pass the due date.

When I compare the tasks held on my machine with the number showing in my online task list, there is a very large discrepancy. Check yours; I suspect it will be the same.

I meant to mention this some weeks ago but never got round to it. Sorry!
ID: 90014
adrianxw
Message 90015 - Posted: 17 Dec 2018, 16:19:19 UTC
Last modified: 17 Dec 2018, 16:30:47 UTC

I raised the issue with the research team, and I got this back...

>>>
Do you happen to know the name(s) of these jobs? There was a problematic batch that was sent out by a researcher in the lab that had '..' in the name which the BOINC client did not like. These jobs would fail and may be causing the odd behavior. These jobs also had very long names.
<<<

... which seems to apply to your record. Looking at the names of my failures, the '..' appears in every one.
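For illustration, the kind of sanity check behind the "file name is empty or has '..'" rejections in the log above is easy to sketch (a guess at the behaviour, not the actual BOINC source; the example names are shortened stand-ins):

```python
def filename_is_acceptable(name):
    """A client that writes server-supplied file names into its project
    directory must reject empty names and names containing '..', since
    '..' path components could escape that directory."""
    return bool(name) and ".." not in name

assert filename_is_acceptable("fold_abc_0001_fragments_data.zip")
# The problem batch embedded '..' inside task names, so every file failed:
assert not filename_is_acceptable("r1_r1_EHEE_12133_.._ems_3hM_data.zip")
assert not filename_is_acceptable("")
```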
ID: 90015
Sid Celery
Message 90026 - Posted: 19 Dec 2018, 3:25:39 UTC - in response to Message 90015.  

I raised the issue with the research team, and I got this back...

>>>
Do you happen to know the name(s) of these jobs? There was a problematic batch that was sent out by a researcher in the lab that had '..' in the name which the BOINC client did not like. These jobs would fail and may be causing the odd behavior. These jobs also had very long names.
<<<

... which seems to apply to your record. Looking at the names of my failures, the '..' appears in every one.

Makes sense. I wonder why these tasks haven't been withdrawn via the server?
ID: 90026
adrianxw
Message 90027 - Posted: 19 Dec 2018, 6:35:48 UTC

If a job is out in the wild, they do seem to have ways of stopping it; I don't know why they did not do that here. I still don't know why my job could not write its output when all the others can. A couple of goofs in quick succession.
ID: 90027




©2024 University of Washington
https://www.bakerlab.org