@andreasnoack I wanted to use the plugin for a large job that would require TBs of memory if run in a single pass. I am planning to loop through the process roughly 1000 times and save what I need each time through the loop. Is this possible?
The following code is what I used before to save results on the plugin. How would I modify it to save at the end of each pass through the loop?
open("results_500_relF_T_3_1.jls", "w") do f
serialize(f, Dict("MAPE_relF_summaries" => MAPE_relF_summaries, "MAPE_Vc_summaries" => MAPE_Vc_summaries, "MAPE_CL_summaries" => MAPE_CL_summaries,
"MPE_relF_summaries" => MPE_relF_summaries, "MPE_Vc_summaries" => MPE_Vc_summaries, "MPE_CL_summaries" => MPE_CL_summaries,
"MAPE_BE_cmax_summaries" => MAPE_BE_cmax_summaries, "MAPE_BE_auc_summaries" => MAPE_BE_auc_summaries,
"MPE_BE_cmax_summaries" => MPE_BE_cmax_summaries, "MPE_BE_auc_summaries" => MPE_BE_auc_summaries,
"BE_power_df" => BE_power_df))
end
ENV["RESULTS_FILE"] = "results_500_relF_T_3_1.jls"
Also, is there an efficient way to manage memory through each loop iteration? I'm used to being able to easily clear the workspace in MATLAB. Should I set `variable = nothing` for all the large-memory objects at the end of each iteration?
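For example, is something like this at the end of each iteration the right idea? (`GC.gc()` forces a garbage-collection pass.)

```julia
# After saving, drop references to the large objects and trigger GC
MAPE_relF_summaries = nothing
MPE_relF_summaries = nothing
BE_power_df = nothing  # etc. for the remaining summaries
GC.gc()  # force a garbage-collection pass before the next iteration
```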
And should this outer loop be distributed with `@everywhere` as well?
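Or would the distribution go through something like `pmap` instead? Here is a rough sketch of what I mean, where `run_iteration` is a placeholder for one pass of my process:

```julia
using Distributed
addprocs(4)                      # worker count is arbitrary here

@everywhere using Serialization  # make serialize available on all workers

# run_iteration is a placeholder; it would also need to be defined @everywhere
pmap(1:1000) do i
    results = run_iteration(i)
    open("results_500_relF_T_3_$(i).jls", "w") do f
        serialize(f, results)
    end
end
```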
Please let me know if there are any other important considerations with this approach that I haven't mentioned.
Thank you.