Needs improvement
This definitely needs a fine-tune. The recycled Turbo data has severely amplified the Turbo issues baked into De-Turbo.
The claim that it's easier to train doesn't seem true either: at 3,000 steps, my Ultra-Realist LoRA looks like Turbo training at 500 steps, and it isn't making the kind of difference it does on Turbo. The results are also poor, breaking anatomical cohesion, text, etc., things the aesthetic LoRA isn't really even touching. Not worth releasing. I'll try training it further, but this feels like a problem with the scrambled model.
The model is also misaligned with the VAE, producing washed-out results and muddled, pixelated details. There are "fixed" VAEs out there, but they mostly just aggressively shift the model toward darker, more vibrant colors, as if you cranked vibrancy in Photoshop way too far; they do nothing for the quality degradation and pixelated results, much like using the wrong VAE with an SDXL model.
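If anyone wants to verify the decoder-swap behavior themselves, here is a minimal sketch using the generic diffusers API. The repo IDs are placeholders (not real model names), and it assumes the VAE loads as an AutoencoderKL; Z-Image's actual pipeline and VAE classes may differ.

```python
# Minimal sketch: swap in an alternative "fixed" VAE and compare same-seed
# output against the stock decoder. Repo IDs below are placeholders.
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

pipe = DiffusionPipeline.from_pretrained(
    "some-org/z-image-de-turbo",  # placeholder repo id
    torch_dtype=torch.bfloat16,
).to("cuda")

prompt = "studio portrait, soft natural light"
seed = torch.Generator("cuda").manual_seed(0)
stock = pipe(prompt, generator=seed).images[0]
stock.save("stock_vae.png")

# Substitute the community "fixed" VAE (placeholder id, assumed AutoencoderKL).
pipe.vae = AutoencoderKL.from_pretrained(
    "some-org/z-image-vae-fixed", torch_dtype=torch.bfloat16
).to("cuda")

seed = torch.Generator("cuda").manual_seed(0)  # same seed, so only the decoder differs
fixed = pipe(prompt, generator=seed).images[0]
fixed.save("fixed_vae.png")
```

With the seed held constant, the decoder is the only variable, so you can see directly whether a "fixed" VAE changes anything beyond color and contrast.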
I've tested v1, v2, and De-Turbo, and I prefer v1 and v2 over De-Turbo myself; De-Turbo seems to produce an odd limb or slightly out-of-focus results more often than v1/v2.
I was hoping De-Turbo would help with the multi-LoRA issue, but I haven't really seen an improvement there yet.
It might be interesting to retrain a lower-step-count De-Turbo from scratch, in the same range of steps as v1 to v2; in other words, just breaking down the distillation enough.
After using this as soon as it was released, I've realized the de-distilled model does not work for in-depth training.
I've been trying to train on small and large data sets, but it ends up "burning" the LoRA. When the LoRA is used, as @WAS pointed out, there are different types of issues: either the image ends up too "burned", or there's a ton of distortion. I've tried training over 8-9 data sets, with and without captioning and with different settings (low noise, high noise, and balanced modes), but the results aren't reflected in the output.
Agreed, I cannot get a LoRA trained with good results. I'm on my 7th try, but they all come out with weird results. My previous LoRAs created on the standard Z-Image-Turbo look so much better. I think I'm going to give up and wait for the base and edit versions of Z-Image to come out.
The comments about distortion, washed-out results, or burning are not what I am experiencing. I am not sure what I am doing differently, but 6 out of 7 LoRAs so far have been a huge success. Mixing is also not bad; just lower the LoRA strength, e.g. 0.5 + 0.5 for two LoRAs works very well. Always keep the combined strength at or below 1 and it works fine. I do 3,000 steps on 35 images, 6,000 on 70, and so on.
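For reference, the mixing I described maps onto diffusers' multi-adapter LoRA API roughly like this; the file names and repo id are placeholders, and it assumes the model loads through a standard diffusers pipeline:

```python
# Sketch of the 0.5 + 0.5 mix described above, via diffusers' multi-adapter
# LoRA API. Paths and repo id are placeholders.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "some-org/z-image-de-turbo", torch_dtype=torch.bfloat16
).to("cuda")

pipe.load_lora_weights("loras/style_a.safetensors", adapter_name="style_a")
pipe.load_lora_weights("loras/style_b.safetensors", adapter_name="style_b")

# Keep the combined strength at or below 1.0 (here 0.5 + 0.5).
pipe.set_adapters(["style_a", "style_b"], adapter_weights=[0.5, 0.5])

image = pipe("a cat reading in a sunlit library").images[0]
image.save("mixed_loras.png")
```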
Thanks - I also agree with the rule of thumb (roughly 85-100 steps per image), but a larger data set isn't working without giving distorted results. I've trained with SD15, Flux, and XL before without issues.
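For what it's worth, the step counts in the parent comment work out to a constant per-image budget, which is the scaling I'm agreeing with:

```python
# Per-image step budget implied by the parent comment's numbers.
for steps, images in [(3000, 35), (6000, 70)]:
    print(f"{steps} steps / {images} images = {steps / images:.0f} steps per image")
# Both print ~86: total steps scale linearly with dataset size.
```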
I just tried training 3 LoRAs on the Adapter version with smaller data sets, and the results are much better. I'm sure your experience is valid, but as others have pointed out, there is some issue, and when you keep trying different settings and arrive at the same result, it's better to wait for the base version and see if things improve.
Can you maybe investigate what you are doing differently? And about the weights, that's just a glaring issue, really: if you can't use your LoRA at strength 1 alongside other LoRAs, you inherently can't get the full effect of your LoRA.