• nicgentile@lemmy.world
    5 days ago

    I’ve been planning for two years now how to put this together successfully.

    The first thing I realised I would have to learn is tools like Blender and GIMP (which I will likely replace with Krita), because regardless of how well AI produces, you need to tidy things up.

    Then there is storyboarding. No amount of AI can replace professionalism. So this is an important skill to have.

    Then there are the layouts and all that. I learnt how to use Scribus for layouts, and Inkscape is always handy.

    My main struggle will be maintaining consistency, which has steadily improved over the last two years, and I’ve been reading a ton of comics to learn the sorts of views and angles they use.

    I can’t let AI generate text for me, because that loses the plot. I might as well just prompt up a story, put it on Amazon, and move on. I don’t want to do that. Instead I let it suggest better phrasing, words, basically a better editor.

    I also created my own theme, and it very nicely points out when I lose the plot. I can also ask it to point out where my story sucks, and it will. If I run my text through an AI text detector, I get something like 1–2% flagged as AI-written, which I believe any AI language tool would produce. The detector points out where it sees AI-written text, and I rework or remove it. GPT has a habit of adding its own text and does not stick to the boundaries set.

    • brucethemoose@lemmy.world
      4 days ago

      > No amount of AI can replace professionalism.

      This!

      > I don’t want to do that. Instead I let it suggest better phrasing, words, basically a better editor.

      This!

      Locally, I actually keep a long-context LLM that can fit all or most of the story, and sometimes use it as an “autocomplete.” For instance, when my brain freezes and I can’t finish a sentence, I see what it suggests. If I’m thinking of the next word, I let it generate one token and look at the logprobs for all the words it considered, kind of like a thesaurus sorted by contextual relevance.
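      The logprobs-as-thesaurus trick can be sketched as a toy. This is not any particular model’s API; the vocabulary and `logits` values below are made up for illustration, standing in for what a real LLM would emit at the current position:

```python
import math

# Hypothetical vocabulary and raw logits for the *next* token, as an LLM
# would produce them at the current position. Values are invented for
# illustration; in practice they come from the model's final layer.
vocab = ["said", "whispered", "shouted", "muttered", "banana"]
logits = [2.1, 1.8, 1.2, 1.5, -3.0]

# Softmax turns logits into probabilities; taking the log gives logprobs.
m = max(logits)
exps = [math.exp(x - m) for x in logits]
total = sum(exps)
logprobs = [math.log(e / total) for e in exps]

# Sort words by logprob, highest first: a "thesaurus" ranked by how well
# each candidate word fits the current context.
ranked = sorted(zip(vocab, logprobs), key=lambda t: -t[1])
for word, lp in ranked:
    print(f"{word:10s} {lp:.3f}")
```

      Contextually plausible words ("said", "whispered") float to the top, while irrelevant ones ("banana") sink to the bottom, which is exactly why sorting by logprob beats an ordinary alphabetical thesaurus here.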

      This is only doable locally: prompts are cached, so you get instant/free responses when re-ingesting (say) an 80K-word block of text.
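      Why caching makes this free can be shown with a toy sketch. The `PrefixCache` class below is hypothetical, not a real library API: it reduces prompt caching to counting tokens, remembering what was already processed so a follow-up prompt that shares a long prefix only pays for the new suffix (real local runtimes cache the attention KV state rather than token counts, but the cost structure is the same):

```python
class PrefixCache:
    """Toy model of prompt caching: remembers the last prompt's tokens so
    a new prompt sharing a long prefix only 'processes' the new suffix."""

    def __init__(self):
        self.cached = []       # tokens already processed
        self.work_done = 0     # tokens processed on the last call

    def ingest(self, prompt):
        tokens = prompt.split()   # stand-in for real tokenization
        # Length of the prefix shared between the cache and the new prompt.
        n = 0
        while n < min(len(tokens), len(self.cached)) and tokens[n] == self.cached[n]:
            n += 1
        self.work_done = len(tokens) - n  # only the new suffix costs anything
        self.cached = tokens
        return self.work_done

story = "once upon a time " * 20000      # an 80K-token manuscript
cache = PrefixCache()
first = cache.ingest(story)              # first pass: full cost
second = cache.ingest(story + "the end") # next pass: only the new words
print(first, second)
```

      The first call processes all 80,000 tokens; the second, despite resending the whole manuscript, processes only the two new ones. That is why appending a sentence to a cached 80K-word context feels instant on a local setup, while an uncached service would re-ingest everything.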