
How can I set the max size of the XML files in a zipped backup? #3947

JerryLee ·

We ran into an 'Out Of Memory' error during restore/migration because of a huge 'Builds.xml.x' file.
We split the huge files into several files of less than 150MB each, and the restore then succeeded.
If we could set the maximum size of the XML files in the zipped backup, we would not need to do this by hand.
Could you let us know how to set it?

If there is no such setting, please add one to the latest QuickBuild.
Since we suffer from this manual work, it would be very helpful to us.

robinshen ADMIN ·

QB tries to strike a balance between file size and backup/restore speed. It currently holds 2000 build records in each build.xml file. What is the size of your build.xml files? And is the backup restored against the same QB instance it was taken from? If so, what is the wrapper.java.maxmemory setting in "conf/wrapper.conf"?
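For reference, the QuickBuild server's JVM heap limits are controlled through the Java Service Wrapper settings in conf/wrapper.conf. A typical fragment looks like the following (the 4096 values are purely illustrative, not a recommendation):

```properties
# conf/wrapper.conf (Java Service Wrapper settings; values are illustrative)

# Initial Java heap size, in MB
wrapper.java.initmemory=4096

# Maximum Java heap size, in MB
wrapper.java.maxmemory=4096
```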

JerryLee ·

First of all, we have lots of variables and large data in each QB build.
We have many Builds.xml.x files, and many of them exceed 500MB.
I tried the same QB instance; the 'wrapper.java.maxmemory' setting in "wrapper.conf" is 100GB.
'wrapper.java.initmemory' is also 100GB.
We tried several times with different memory sizes such as 8G, 10G, 40G, ..., because we found your reply on another user's topic about this problem.
However, all attempts failed.
Please reply if you need any other information to look into this.

Finally, we use QuickBuild 7.0.31.

robinshen ADMIN ·

Do you have hundreds or thousands of variables, or do you store very large data in some variables? Neither is recommended, as variables are mainly intended for controlling build logic. They should not be used to store large build data; otherwise you will encounter performance issues once many builds have accumulated.

Requiring 100G to operate a QB server sounds very abnormal, and I guess the main reason is misuse of variables.

JerryLee ·

Thanks for the reply.
Frankly, we do store large data in some variables.
We know that it causes performance degradation, so we delete QB builds older than 60 days.
However, we need to store this data for certain reasons.
I'm not entirely sure about this, but I found the batch sizes in the source code:

com.pmease.quickbuild/src/com/pmease/quickbuild/entitymanager/impl/DefaultDataManager.java

```java
private static final int NORMAL_BATCH_SIZE = 2000;
private static final int LARGE_BATCH_SIZE = 20000;
```

I think if an admin could configure the value of 'NORMAL_BATCH_SIZE', that would be very nice and would help resolve our problem.

robinshen ADMIN ·

You may download the 7.0.31 source code, change the batch size to a smaller value, recompile, and run the backup/restore to see how it works. If it works fine, we can make the value configurable in a future QB version.

JerryLee ·

I did what you suggested and it works as I expected. (works fine)
Actually, the value I used was 10.
However, I noticed a newly added MIN_BATCH_SIZE variable, set to 100, in the latest QuickBuild source code.
Therefore, if you make the value configurable, it should be no smaller than 100, if I understand correctly.
Thanks for all the help.
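To illustrate the idea, making the batch size configurable with a floor of MIN_BATCH_SIZE could look like the sketch below. The system property name "quickbuild.backupBatchSize" is invented for illustration and is not an actual QuickBuild setting:

```java
// Hypothetical sketch: read a batch-size override from a system property,
// clamping it to a minimum, instead of using a hard-coded constant.
// The property name "quickbuild.backupBatchSize" is invented for illustration.
public class BatchSizeConfig {
    static final int DEFAULT_BATCH_SIZE = 2000; // current hard-coded value
    static final int MIN_BATCH_SIZE = 100;      // floor seen in newer QB sources

    static int resolveBatchSize() {
        String prop = System.getProperty("quickbuild.backupBatchSize");
        if (prop == null)
            return DEFAULT_BATCH_SIZE;
        try {
            int value = Integer.parseInt(prop.trim());
            return Math.max(value, MIN_BATCH_SIZE); // never go below the floor
        } catch (NumberFormatException e) {
            return DEFAULT_BATCH_SIZE; // fall back on malformed input
        }
    }

    public static void main(String[] args) {
        System.setProperty("quickbuild.backupBatchSize", "10");
        System.out.println(resolveBatchSize()); // clamped up to MIN_BATCH_SIZE
    }
}
```

With this approach, a request for a batch size of 10 (as in the test above) would be silently raised to 100, matching the floor the newer sources appear to enforce.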

robinshen ADMIN ·