• QuantumTickle@lemmy.zip · 4 days ago

    If “everyone will be using AI” and it’s not a bad thing, then these big companies should wear it as a badge of honor. The rest of us will buy accordingly.

    • Devial@discuss.online · 4 days ago

      If “everyone will be using AI”, AI will turn to shit.

      AI can’t create originality; it only recycles and recontextualises existing information. And if you recycle and recontextualise the same information over and over again, it degrades more and more.

      It’s ironic that the very people who advocate for AI everywhere fail to realise just how dependent the quality of AI content is on having real, human-generated content to train the model on.

      • 4am@lemmy.zip · 4 days ago

        “The people who advocate for AI” are literally running around claiming that AI is Jesus and it is sacrilege to stand against it.

        And by literally, I mean Peter Thiel is giving talks actually claiming this. This is not an exaggeration, this is not hyperbole.

        They are trying to recruit techno-cultists.

        • EldritchFemininity@lemmy.blahaj.zone · 4 days ago

          Ironically, one of the defining features of the techno-cultists in Warhammer 40k is that they changed the acronym to mean “Abominable Intelligence” and not a single machine runs on anything more advanced than a calculator.

          • 4am@lemmy.zip · edited · 3 days ago

            Sci Fi keeps trying to teach us lessons, and instead we keep using it as an instruction manual.

            (Except, apparently, whenever it’s on the nose we interpret it as dramatic irony…)

      • Sl00k@programming.dev · 4 days ago

        I think the grey area is: what if you’re an indie dev who did the entire storyline and artwork yourself, but had the AI handle the more complex coding?

        It is, to our eyes, entirely original, but it used AI. Where do you draw the line?

        • Default_Defect@anarchist.nexus · 4 days ago

          Disclose the AI usage and how it was used. Let people decide. There will always be “no AI at all, ever” types that won’t touch the game, but others will see that it was used as a tool rather than a replacement for creativity and will give it a chance.

        • Devial@discuss.online · edited · 1 day ago

          The line, imo, is: are you creating it yourself, just using AI to make the process faster/more convenient, or is AI the primary thing creating your content in the first place?

          Using AI for convenience is absolutely valid imo. I routinely use ChatGPT for things like debugging code I wrote, rewriting data sets in different formats instead of doing it by hand, or handling more complex search-and-replace jobs when I can’t be fucked to figure out a regex to cover it.

          For these kinds of jobs, I think AI is a great tool.

          More simply said: I personally use AI for small subtasks that I am entirely capable of doing myself, but that are annoying/boring/repetitive/time-consuming to do by hand.
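
          As a hypothetical illustration of the kind of search-and-replace job described above (the pattern and data here are made up, not from the original comment), a one-off Python snippet can stand in for editing by hand:

```python
import re

# Hypothetical example of a "complex search and replace": rewriting
# dates like 31/12/2024 into ISO 8601 form (2024-12-31).
def to_iso_dates(text: str) -> str:
    return re.sub(r"\b(\d{2})/(\d{2})/(\d{4})\b", r"\3-\2-\1", text)

print(to_iso_dates("Released on 31/12/2024, patched on 02/01/2025."))
# Released on 2024-12-31, patched on 2025-01-02.
```

          A hand edit of one date is trivial; across a few thousand lines, the regex wins.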

          • Sl00k@programming.dev · 4 days ago

            I definitely agree, but I think that case would still get caught by the Steam AI usage badge?

        • irmoz@reddthat.com · 3 days ago

          That’s somewhat acceptable. The ideal use of AI is as a crutch - and I mean that literally: a tool that multiplies and supports your effort, but does not replace your effort or remove the need for it.

      • CatsPajamas@lemmy.dbzer0.com · 4 days ago

        How does this model collapse thing still get spread around? It’s not true. Synthetic data has actually helped models get smarter, not dumber. And if you think that all Gemini 3 does is recycle, idk what to tell you.

        • Devial@discuss.online · edited · 2 days ago

          If the model collapse theory weren’t true, then why do LLMs need to scrape so much data from the internet for training?

          According to you, they should be able to just generate synthetic training data purely with the previous model, and then use that to train the next generation.

          So why is there even a need for human input at all, then? Why are all LLM companies fighting tooth and nail against their data scraping being restricted, if real human data is in fact so unnecessary for model training, and they could just generate their own synthetic training data instead?

          You can stop models from deteriorating without new data, and you can even train them with synthetic data, but that still requires the synthetic data to either be modelled or filtered by humans to ensure its quality.

          If you just take a million random ChatGPT outputs, with no human filtering whatsoever, use them to retrain the model, and then repeat that over and over again, eventually the model will turn to shit. In each iteration, some of the random tweaks ChatGPT makes to its output will produce low-quality results, which are then presented to the new training run as a target to achieve. The new model learns that this type of bad output is actually desirable, which makes it more likely to reappear in the next set of synthetic data.

          And if you turn off the random tweaks, the model may not deteriorate, but it also won’t improve, because effectively no new data is being generated.
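
          The feedback loop described above can be sketched with a toy simulation (an illustrative stand-in, not a real LLM): each “generation” is fitted only to the previous generation’s own outputs, with the model favouring its most typical samples, and the spread of the data - a stand-in for diversity of content - shrinks every round:

```python
import random
import statistics

# Toy sketch of unfiltered self-training: generation 0 is the "real,
# human-generated" data; every later generation is fitted only to the
# previous generation's own outputs, preferring "safe" typical samples.
random.seed(42)

sigma = 1.0  # spread (diversity) of the original human data
for gen in range(1, 6):
    outputs = [random.gauss(0.0, sigma) for _ in range(10_000)]
    # The model favours high-likelihood outputs: keep only samples
    # within one standard deviation, then refit on what remains.
    kept = [x for x in outputs if abs(x) <= sigma]
    sigma = statistics.stdev(kept)
    print(f"generation {gen}: spread = {sigma:.3f}")
```

          Each round the fitted spread shrinks by roughly half, so after a handful of generations almost all of the original diversity is gone - which is the degradation being argued about here, in miniature.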

          • CatsPajamas@lemmy.dbzer0.com · 2 days ago

            I stopped reading when you said according to me and then produced a wall of text of shit I never said.

            Synthetic data is massively helpful. You can look it up. This is a myth.

            • Devial@discuss.online · edited · 2 days ago

              That is enormously ironic, since I literally never claimed you said anything except what you actually did say: namely, that synthetic data is enough to train models.

              According to you, they should be able to just generate synthetic training data purely with the previous model, and then use that to train the next generation.

              Literally, the very next sentence starts with the words “Then why”, which clearly and explicitly means I’m no longer indirectly quoting you. Everything else in my comment is quite explicitly my own thoughts on the matter, and why I disagree with that statement, so in actual fact, you’re the one making up shit I never said.