I just got into bother by testing it on a batch of tracks of mixed source, age and genre, and, in spite of our library being sourced mostly from older recordings, for some reason I chose to normalise my test tracks and subsequent folders to the level of the loudest, newest recordings.
Not the smartest thing to do; a little thought and I could have saved myself a lot of work.
I know this is a late response, but I do NOT recommend doing this.
The reason modern masterings sound so loud is not that their peak levels are higher; it's that the dynamic range has been heavily compressed. The way I understand it, this plugin simply amplifies (or, more often, de-amplifies) each file by a certain number of dB based on how far its measured loudness is from the reference level. If you take an older, quiet mastering and simply amplify it by several dB to reach a "loudness war" level, you will introduce a lot of clipping into the audio. If you really want it to sound that loud, you need to do more than just amplify it; you should compress the dynamic range as well, with the Compressor plugin or the like.
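To make the clipping risk concrete, here is a minimal Python sketch of what a flat dB boost does to the samples. It is only an illustration of the arithmetic (the array and gain figures are made up, and this is not the plugin's actual code): a gain in dB is just a multiplication in the linear domain, so any peak pushed above full scale has to be clipped.

```python
import numpy as np

def apply_gain_db(samples: np.ndarray, gain_db: float) -> np.ndarray:
    """Apply a flat gain (in dB) to float samples normalised to [-1.0, 1.0]."""
    return samples * (10.0 ** (gain_db / 20.0))

# Illustrative example: a quiet but dynamic master whose peaks already reach -1 dBFS.
peak_dbfs = -1.0
track = np.array([10.0 ** (peak_dbfs / 20.0), 0.3, -0.5, 0.05])

# Boosting it ~8 dB to chase a "loudness war" reference pushes the peak past 0 dBFS.
boosted = apply_gain_db(track, 8.0)
clipped = np.count_nonzero(np.abs(boosted) > 1.0)
print(f"samples that would clip: {clipped}")  # the loudest peak now exceeds full scale
```

A real track has millions of samples, but the arithmetic is the same: a flat gain moves everything up together and does nothing to reduce the dynamic range, which is why the peaks are the first thing to go.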
The only time I've had to do that was for a custom soundtrack for a video game, in which the output level was very low. Even then, I used -8 dB as the ReplayGain target rather than something ridiculous like -10 or more. In the end, it's better to just work with a low RG level, either 0, or maybe 2 or 3 dB louder. That way you don't have to compromise good masterings and you add little to no clipping.
BTW, I have a question for the author, if he's around. When you set it to a different level, does it perform 2 different operations to get there? Say you have something with a RG value of -8 dB. Normally it would de-amplify the file by 8 dB to reach 0. But when you have it set to adjust it by +2 dB, does it just de-amplify the file by 6 dB, or does it perform 2 operations, the first being to get to 0 dB, then amplify the result by 2 dB? It seems like one operation would be better, but I'm not sure if it does that or not.
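For what it's worth, the arithmetic collapses to a single number either way, so one operation is at least possible. Here is a hedged sketch of the single-pass version (my own illustration of the maths, not the plugin's code, and whether the plugin actually works this way is exactly my question):

```python
import numpy as np

def replaygain_scale(rg_value_db: float, target_offset_db: float) -> float:
    """Combine the ReplayGain adjustment and the target offset into one linear factor.

    rg_value_db: the track's ReplayGain value, e.g. -8.0 means "de-amplify by 8 dB".
    target_offset_db: how far above the reference you want to land, e.g. +2.0.
    """
    net_gain_db = rg_value_db + target_offset_db   # -8 + 2 = -6 dB in the example above
    return 10.0 ** (net_gain_db / 20.0)

samples = np.array([0.8, -0.4, 0.1])

# One pass at -6 dB ...
one_pass = samples * replaygain_scale(-8.0, 2.0)

# ... versus two passes: -8 dB to reach the reference, then +2 dB back up.
two_pass = samples * (10.0 ** (-8.0 / 20.0)) * (10.0 ** (2.0 / 20.0))

print(np.allclose(one_pass, two_pass))  # True: identical in floating point
```

In floating point the two routes give the same result; the distinction would only matter if the two steps were applied as separate passes over quantized intermediate data (say, 16-bit), where the first pass could throw away resolution before the second one scaled it back up.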