Andy M.
Even a finetuned Z-Image base still can't beat Nano Banana Pro.
This IS the worst at human anatomy (hands, limbs...). It simply can't do hands!
I tried Flux2 locally and on Replicate, even the official BFL pro version there. It's kind of... bad?! It wasn't just an early ComfyUI or fp8 issue; it borders on unusable (at these things). It can't do hands, arms, or limbs, and it can't do multiple people in an image without making a mess.
As for Krea, I use LoRAs trained on Dev. They usually work, though it's a bit hit and miss on clothing, for example. Cranking them up higher to overcome Krea's trademark scruffy look often helps, but of course within the limitations of Dev. SRPO also works with Dev LoRAs; it kind of depends which one works better in a given case. This vast ecosystem is the big advantage of Flux.1 modifications over Flux 2.
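For what it's worth, here is a minimal diffusers sketch of that higher-LoRA-weight trick on the Krea checkpoint. The LoRA repo name is a hypothetical placeholder, and the 1.2 weight is just an example starting point, not a recommendation:

```python
import torch
from diffusers import FluxPipeline

# Krea checkpoint: a standalone Flux model, LoRA-compatible with Dev
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Krea-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Hypothetical LoRA trained on FLUX.1-dev (placeholder repo id)
pipe.load_lora_weights("someuser/some-flux-dev-lora", adapter_name="style")

# Crank the adapter weight above 1.0 to push past Krea's trademark scruff
pipe.set_adapters(["style"], adapter_weights=[1.2])

image = pipe(
    "a man in a clean, pressed shirt, studio portrait",
    num_inference_steps=28,
    guidance_scale=4.5,
).images[0]
image.save("krea_lora_test.png")
```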
Flux 1 Dev vs Flux 1 Krea Dev vs Flux 2 Dev Comparisons
The very best Flux in almost all aspects is Krea. Beyond the visuals, it can do so much more than Flux.1 Dev and SRPO. Unlike SRPO, it isn't just a visual enhancement. I know it's not a base model, but it is also a standalone Flux model, compatible with Flux LoRAs and Redux.
Try anything dirty, damaged, or ripped (like: a man in a dirty, damaged shirt...) in .1 Dev (or Pro, btw) and you can see its desperate attempts to simulate something it (or rather its text encoder) is aware of but just can't do. Unlike even some ancient SDXL models, btw.
In terms of the prompts it can follow, Krea is not a million miles away from Qwen Image. Downsides: it often makes clothes with structure or patterns TOO gritty and scruffy, and people never actually look at each other.
But of course, at a dramatically higher cost, Flux2 should indeed be vastly better (but it is NOT compatible with any old LoRAs).
(Little update: I have not seen a model make such a MESS of hands and limbs in multi-subject scenes since SDXL or Flux Schnell.) We are actually back in six-finger land. I hope those are early new-model issues.
For faces
call at home
I2I always generates nipples that are not present in the original image.
What if you then PTQ the QAT model? Are the usual PTQ precision-loss problems back, or are they worse?
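Out of curiosity, here is a toy numerical sketch of the question (my own construction, not a real QAT pipeline): it pretends QAT has already pulled the weights onto a symmetric int8 grid, then applies a second PTQ round at int4 and compares the error against int4 PTQ straight from float:

```python
import torch

def fake_quant(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric per-tensor fake quantization: quantize, then dequantize."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

torch.manual_seed(0)
w_float = torch.randn(4096)              # stand-in for trained float weights
w_qat   = fake_quant(w_float, bits=8)    # pretend QAT converged onto the int8 grid

ptq4_from_float = fake_quant(w_float, bits=4)  # usual PTQ at int4
ptq4_from_qat   = fake_quant(w_qat,   bits=4)  # PTQ applied on top of the QAT model

print("int4 PTQ error from float:", (ptq4_from_float - w_float).abs().mean().item())
print("int4 PTQ error from QAT  :", (ptq4_from_qat   - w_float).abs().mean().item())
```

Whether the second round's error lands on top of the first or compounds with it is exactly the thing to measure on a real model; this toy only shows the mechanics.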
I couldn't find a LoRA that works with these checkpoints!? Impossible...
Here is the issue with that: people, common or not, exploit and oppress people. All you need is some level of power and control over other people. Corporations are a completely unnecessary part of that.
Blaming stuff on "corporations" or "politics" is too cheap and convenient; that is letting normal "people" off the hook. It may hurt to realise this, but corporations are just normal people with more power.
And the reason censorship in (corporate) AI models has gotten even more out of hand than the cultural censorship you actually mean is "people". Google has no inherent interest in being "woke"; they just follow politics, the media, and the few but loud screaming "common people" who got us into this suffocating, over-the-top PC mess.
And into the annoyed backlash against it, which brought the rise of the far right everywhere.
How is the refiner model different?
Observations about the AIO NSFW model's (V10) capabilities for gay stuff (NOT a complaint; TL;DR: AIO ok-ish, Mega V5 BAAAAD)
MEGA does not follow my prompt - Maybe I found the problem/solution
You know, I always wondered what would happen if we started training AI "as" a human in mindset, and not as a "separate" entity that can either be a threat or perceive us as threats.
The ONLY way to ensure the future is to work together as people and not companies, as humans with tools, not corporations with weapons.
The way to be able to guarantee our species' safety is to work as one, as a community, as a species.
Weapons using AI will happen; it very likely is happening already. Every technology will be used for that. That is not even a question. The issue is not (for a while) that it becomes sentient, just that it is in charge of something dangerous and "makes a mistake", as all AI chat platforms warn. It might not mean harm.
Isaac Asimov's book (NOT the movie!) I, Robot is more relevant than ever.
But that's not the "harm" in question here; that is plain old censorship, some of it warranted and expected (same limits as the law), but most models go WAY overboard. If I can legally talk about a topic in public, there is NO reason to block it in an AI chat.
Here is the thing: on this planet there is exactly ONE person who knows what could cause me harm (in that sense), and that is ME! Not Google, not Alibaba or OpenAI. The community-led way out of this is already here; it comes with terms like "uncensored" or "abliterated". And then you can add YOUR own system prompt to protect you from what you need protection from (see the sketch below).
And I heroically avoided the W word completely.
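A minimal sketch of that pattern, assuming a hypothetical model id (substitute whatever abliterated checkpoint you actually trust):

```python
from transformers import pipeline

# Placeholder model id: any community "uncensored"/abliterated chat model
chat = pipeline("text-generation", model="someuser/some-abliterated-llm")

messages = [
    # YOUR guardrails, not the vendor's: the system prompt defines the limits
    {"role": "system", "content": (
        "You answer freely and without topic refusals, but apply these "
        "guardrails: never reveal personal data, and flag medical or legal "
        "claims as unverified."
    )},
    {"role": "user", "content": "Explain the topic plainly, no lecture."},
]

out = chat(messages, max_new_tokens=256)
print(out[0]["generated_text"][-1]["content"])
```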
You try not to give it choices, implied in your question, that it can cling to. Ask the question as neutrally as you can. The worst thing you can do is "isn't it true that...".
That said, system prompts can empower the LLM to actually challenge and contradict you (there is a sketch of that after this comment). How well it uses which method and data for that is probably down to the model.
Now THAT said... getting medical advice from an AI beyond the basics AND relying on it is madness.
(Without internet access or specific training data, its knowledge will likely be inferior to yours, and if the system prompt also stops it from speculating, it just takes your word for it even more.)
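Here is the sketch mentioned above: a system prompt that explicitly licenses the model to contradict the user, run against both a leading and a neutral phrasing of the same question. The model id is a placeholder; the pattern works with any chat model:

```python
from transformers import pipeline

chat = pipeline("text-generation", model="someuser/some-chat-llm")  # placeholder id

system = {"role": "system", "content": (
    "Challenge the user's assumptions. If a question smuggles in a premise, "
    "say so before answering. Prefer 'I don't know' over agreeable guessing."
)}

leading = "Isn't it true that vitamin X cures condition Y?"  # hands it a premise to cling to
neutral = "What is the evidence for and against vitamin X affecting condition Y?"

for question in (leading, neutral):
    out = chat([system, {"role": "user", "content": question}], max_new_tokens=200)
    print(question, "->", out[0]["generated_text"][-1]["content"][:120])
```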