[Promo image from Nvidia’s official blog post: DLSS 5 applied to Grace’s face]

I’m completely speechless. This looks so terrible I thought it was a joke, but apparently Nvidia released these demos to impress people. DLSS 5 runs the entire game through an AI filter, making every character look like they’ve been put through an ultra-realistic beauty filter.

The photo above is the promo image for the official blog post, by the way. It completely ignores artistic intent and makes Grace’s face look “sexier”, because apparently that’s what realism looks like now.

I wouldn’t be so baffled if this were some experimental setting they were testing, but they’re advertising this as the next-gen DLSS. As in, this is their image of what the future of gaming should be. A massive F U to every artist in the industry. Well done, Nvidia.

  • TalkingFlower@lemmy.world · ↑1 · 6 days ago

    The only good application of DLSS 5 I can think of is Euro Truck Simulator 2; hyperrealism would pair well with that game.

  • MurrayL@lemmy.world · ↑424 ↓5 · 10 days ago

    Check out this fun little nugget from further down in the article:

    Nvidia actually used two RTX 5090s for its demos: one plays the game, the other exclusively runs the DLSS 5 technology.

    An entire second GPU just to run it.

      • muhyb@programming.dev · ↑94 ↓1 · 10 days ago

        They have no intention of selling these to us. Maybe they’ll let us play it like this through GeForce Now, or some other streaming service.

        They don’t want regular folk to buy PCs anymore.

        So, yes. Fuck them indeed.

        • meco03211@lemmy.world · ↑36 ↓1 · 10 days ago

          No. They want regular folk to buy PCs. They just have no idea what regular folk can afford. How much could a banana cost, 10 dollars?

          • Mirshe@lemmy.world · ↑13 · 10 days ago

            Yeah, their sales team is incredibly fucked now that their only significant revenue comes from various AI farms.

    • ZoteTheMighty@lemmy.zip · ↑52 ↓1 · 10 days ago

      That’s actually good news in my eyes: we definitely won’t see this hit the consumer market for years.

    • Romkslrqusz@lemmy.zip · ↑36 ↓4 · 10 days ago

      Context is important. The following sentence is:

      The use of two GPUs is required right now as DLSS 5 still has a long way to go in terms of optimisation - both in terms of performance and its VRAM footprint. However, DLSS 5 is designed for use on a single GPU and that’s how it will ship later this year. Quite how scalable it is also remains to be seen, but in common with other DLSS technologies, Nvidia tells us that the computational cost scales with resolution.
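
      To put “scales with resolution” in concrete terms (my own back-of-envelope, not from the article): if the per-frame cost is proportional to pixel count, the tiers compare roughly like this:

      ```python
      # Illustrative only: assumes DLSS 5 cost is proportional to pixel count,
      # as the article implies. The ratios, not the units, are the point.
      resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
      base = 1920 * 1080
      for name, (w, h) in resolutions.items():
          print(f"{name}: {w * h:,} pixels -> ~{w * h / base:.1f}x the 1080p cost")
      ```

      So 4K would be roughly 4x the work of 1080p under that assumption.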

      • Skyrmir@lemmy.world · ↑43 · 10 days ago

        Sure, just double the VRAM and your AI can run. Unfortunately, you can’t afford that VRAM, because billionaires are running AI.

        • zurohki@aussie.zone · ↑24 · 10 days ago

          It’s always funny to me when Nvidia releases new 8GB cards and new VRAM-heavy features as a reason to buy them.

          • Skyrmir@lemmy.world · ↑14 · 10 days ago

            8 GB isn’t even enough for 4K right now. There’s a guy who took his 3070 up to 16 GB, and it really shows the cards are VRAM-limited. It’s also a serious pain to swap VRAM without melting the card.

      • SaharaMaleikuhm@feddit.org · ↑13 ↓2 · 9 days ago

        But the whole idea of DLSS was to claw back performance lost to ray tracing, right? This is the exact opposite of that: it costs performance to sloppify the game. I just pray it’s going to be an optional feature in games and I can still use DLSS 4 instead.

    • Bunitonito@lemmy.world · ↑32 ↓2 · 10 days ago

      2017: Buy 2x 1080 Ti for 1500 bucks, your build is GOATed, have fun spending half your time tinkering with your overclocks and fishing for the perfect SLI compatibility bits in Inspector

      2026: This. It’s a shame

    • Matty_r@programming.dev · ↑20 ↓1 · 10 days ago

      Ha ha, that’s great. Reminds me of PhysX back in the day, which initially needed a dedicated card.

    • nutsack@lemmy.dbzer0.com · ↑2 ↓23 · 10 days ago

      As much as you and I both hate it, and as shitty as it looks right now, I imagine some sort of cloud-hosted AI technology is the future of gaming.

      • athatet@lemmy.zip · ↑30 ↓1 · 10 days ago

        It sounds like indie games that will run on a potato and not require internet access will be the future of gaming. I’m completely done with AAA corposlop.

        • nutsack@lemmy.dbzer0.com · ↑3 ↓12 · 10 days ago

          Sounds like a good idea. It sounds like that’s what I like making and what I like playing, and it sounds like it’s good, but I don’t know, man. You tell me it sounds good; I’m glad you can predict the future.

          I hope the universe happens exactly as this website decrees, in accordance with our upvotes.

      • DaTingGoBrrr@lemmy.ml · ↑11 · 10 days ago

        They tried this shit with Stadia. What would be the difference now that we also have AI? There is no reason to stream a shitty AAA game from a shitty company when you can buy and play indie hits on your own hardware.

        I know the capitalists really want game streaming to be the future but every gamer with some common sense will reject that idea.

  • empireOfLove2@lemmy.dbzer0.com · ↑187 ↓1 · 10 days ago

    Makes perfect sense in context.

    Nvidia is no longer a gaming company. They do not care about gamers. They are actively DRIVING AWAY gamers as fast as they can, because AI chips are more lucrative than dirty smelly tiny consumer products.

    This is meant to show off real-time AI generation performance, which is massively marketable to AI companies. Not consumers. Nothing is about consumers.

  • PonyOfWar@pawb.social · ↑90 ↓3 · 10 days ago

    Can’t believe DF is so positive about this; it just looks horrible. I’ve actually been quite positive about technologies like DLSS and FSR, but this… no thanks.

    • RightHandOfIkaros@lemmy.world · ↑39 ↓2 · 10 days ago

      Crazy to me to see people only now waking up to how underinformed Digital Foundry really is about technical details, and how much they’ve sold out.

      • Gathorall@lemmy.world · ↑5 · 9 days ago

        Well, it helps that you don’t need to know how to turn a computer on to see that they’re glazing some complete bullshit. You just need eyes.

    • absquatulate@lemmy.world · ↑22 ↓1 · 10 days ago

      The amount of glazing they do, holy fuck. They had me too for a minute, before I started wondering if this was just an April Fools’ joke.

      • PonyOfWar@pawb.social · ↑2 · 9 days ago

        Eh, DLSS maybe, but they’ve been quite critical of UE5, especially its stutter issues. On the contrary, they’re always happy when something isn’t UE5.

        • CommanderCloon@lemmy.ml · ↑1 · 8 days ago

          My issue with UE5 is that it runs like shit, which DF tends to agree with, but it also looks like trash. Temporal effects are all over the place and make the image look like a muddy, ghosting mess. It only ever looks good in static screenshots.

  • eli@lemmy.world · ↑80 · 10 days ago

    Idk why, but this reminded me of the South Park episode where everyone is using Photoshop to show off their girlfriends.

    Had to make the meme

  • randomaside@lemmy.dbzer0.com · ↑71 · 10 days ago

    The demos run on the back of two 5090 GPUs… At what point do we acknowledge that I shouldn’t need to be drawing nearly 2 kW of power, on a dedicated circuit in my home, for a gaming PC? There are simply diminishing returns at a certain point, and Nvidia is taking us way past that point into very unreasonable territory.
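
    Rough math on that claim (my own back-of-envelope; 575 W is the 5090’s rated TGP, the other numbers are guesses):

    ```python
    # Back-of-envelope wall draw for the dual-5090 demo rig.
    gpu_tgp_w = 575                 # RTX 5090 rated TGP
    rest_of_system_w = 350          # CPU, RAM, fans, storage: an assumption
    psu_efficiency = 0.90           # typical high-end PSU under load

    dc_load_w = 2 * gpu_tgp_w + rest_of_system_w
    wall_draw_w = dc_load_w / psu_efficiency
    print(f"~{wall_draw_w:.0f} W at the wall")          # ~1667 W
    print(f"15 A / 120 V circuit limit: {15 * 120} W")  # 1800 W, uncomfortably close
    ```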

    Is it cool they could do it at all?.. I GUESS?! But they’ve definitely lost the plot here.

  • ThrowawayOnLemmy@lemmy.world · ↑64 ↓1 · 10 days ago

    I just watched the Digital Foundry video on this and the way they fuckin glaze over this tech just disgusted me enough to unsubscribe.

    • RetroGoblet79@eviltoast.org · ↑24 ↓3 · 10 days ago

      Can’t say I ever really trusted any Digital Foundry takes. They all seem to navel-gaze over shiny things.

    • verdi@tarte.nuage-libre.fr · ↑10 ↓1 · 9 days ago

      Digital Foundry has been sold out for years now. They glaze over everything NV does, regardless of quality. What killed it for me was a video reviewing Dying Light 2 (since edited, then restored): in a mausoleum, Alex Bataglia was glazing over an RTX image that was obviously black-crushed, saying RTX was adding immense detail when the exact opposite was on screen. That’s when I realized they were surely being paid to portray that version of events.

      Evidence of said bugs on release.

      I love it: they restored the original video from 3-2-2022, likely because they think we’d forget! That’s an absolute smoking gun of reading the marketing materials vs. what’s actually in front of you… Check at 5:20, hahaha, this is amazing. If you go to the same location in present-day DL2, you can clearly see the scene presented by Alex was bugged to the wazoo!

      Invidious link because fuck google!

  • sonofearth@lemmy.world · ↑63 ↓2 · 10 days ago

    DLSS 6: You don’t even need to design your game. AI will generate it in real time.🤡

  • Ech@lemmy.ca · ↑62 ↓2 · 10 days ago

    Was able to brace myself enough to skim through the video. Anyone who watches those clips and thinks they look better has no place working on anything related to visual tech. Not only is the interpolation obvious and distracting, every scene, no matter the time of day, location, or anything else, has the exact same shitty lighting that washes out every shadow and color painstakingly added to the scene by actual professionals. Pro tip: a white-balanced image is the starting point, not the end goal. Making every game look exactly the same is fucking terrible, you hacks.

    I can only hope this shit only finds its way into the AAAA dross that’s not worth playing already.

    • popcar2@piefed.ca (OP) · ↑75 ↓12 · 10 days ago

      Not really. DLSS mostly just renders the game at a reduced resolution and then upscales it back up. It does a pretty good job of making the game still look (almost) exactly the same. This, however, completely changes what you’re looking at.
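
      For anyone unfamiliar, the contract looks something like the sketch below: the game renders fewer pixels and the upscaler fills the target resolution back in. (Nearest-neighbour here is a dumb stand-in for the trained network; this is illustrative, not Nvidia’s actual pipeline.)

      ```python
      import numpy as np

      def naive_upscale(frame: np.ndarray, factor: int = 2) -> np.ndarray:
          # Nearest-neighbour: duplicate each pixel into a factor-by-factor block.
          # DLSS replaces this step with a trained network that reconstructs
          # detail, but the input/output shapes are the same.
          return frame.repeat(factor, axis=0).repeat(factor, axis=1)

      low_res = np.random.rand(1080, 1920, 3)   # stand-in for the internal render
      high_res = naive_upscale(low_res)         # (2160, 3840, 3), i.e. a 4K frame
      print(low_res.shape, "->", high_res.shape)
      ```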

      • zaphod@sopuli.xyz · ↑73 ↓12 · 10 days ago

        DLSS is short for Deep Learning Super Sampling: it does the upscaling using deep learning, which is what people also call AI. The upscaler has to be trained on images. Depending on how you train it, you either get something that looks almost exactly the same as the game at a higher resolution, or you get AI slop.
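
        A toy way to see that distinction (made-up numbers, just to show how you’d measure it): a reconstruction-trained upscaler stays close to the native render, a slop-trained one produces a plausible but different image, and a simple PSNR check tells them apart:

        ```python
        import numpy as np

        def psnr(a: np.ndarray, b: np.ndarray) -> float:
            # Peak signal-to-noise ratio for images in [0, 1]; higher = closer.
            mse = np.mean((a - b) ** 2)
            return 10 * np.log10(1.0 / mse)

        rng = np.random.default_rng(0)
        native = rng.random((4, 4))                      # native-res frame stand-in
        faithful = native + rng.normal(0, 0.01, (4, 4))  # reconstruction-trained output
        slop = rng.random((4, 4))                        # plausible but unrelated output

        print(f"faithful upscaler: {psnr(native, faithful):.1f} dB")  # high
        print(f"slop filter:       {psnr(native, slop):.1f} dB")      # low
        ```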

        • popcar2@piefed.ca (OP) · ↑49 ↓10 · 10 days ago

          I’m aware of how it works, but the results aren’t bad. The worst-case scenario is that you get some ghosting with DLSS, but it’s far from what I’d call AI slop.

          • plantfanatic@sh.itjust.works · ↑25 ↓63 · 10 days ago

            But it literally follows the same process. Why is one slop, but not the other? You’re being hypocritical.

            • popcar2@piefed.ca (OP) · ↑84 ↓9 · 10 days ago

              One is upscaling the image while preserving it as much as possible; the other is applying a filter to try and “enhance” it by drastically changing the image and ignoring the artist’s intent. What’s hard to get?

              • kieron115@startrek.website · ↑9 ↓14 · edited · 10 days ago

                This isn’t applying a filter, it’s running the image through a transformer network trained on advanced lighting methods like subsurface scattering to make materials more lifelike. It does seem to change artistic intent quite a lot on these existing games, but frankly I’m excited to see what creators do with a game designed from the ground up to utilize AI-enhanced lighting. The DF video also states that this is an early preview (hence the dual 5090s) that is expected to change over time.

                • grue@lemmy.world · ↑17 ↓1 · 10 days ago

                  it’s applying advanced lighting methods like subsurface scattering to make materials more lifelike.

                  It is not. It is approximating the results of training data consisting of output images that have been rendered with subsurface scattering. It isn’t actually running the subsurface scattering algorithm.
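
                  A toy version of that distinction (my own numbers, nothing to do with Nvidia’s actual model): fit a curve to samples of a physically motivated falloff and it reproduces the look of the training data without running any physics, then falls apart outside that range:

                  ```python
                  import numpy as np

                  # "Physics": crude Beer-Lambert falloff of light through a material,
                  # standing in for a real subsurface-scattering computation.
                  def sss_physical(depth_mm, absorption=0.5):
                      return np.exp(-absorption * depth_mm)

                  # "Learned": a polynomial fitted to samples of the physical model.
                  depths = np.linspace(0, 5, 50)
                  coeffs = np.polyfit(depths, sss_physical(depths), deg=3)

                  for d in (3.0, 8.0):  # inside vs. outside the training range
                      print(f"depth {d}: physics {sss_physical(d):.4f}, "
                            f"learned {np.polyval(coeffs, d):.4f}")
                  ```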

                • Gathorall@lemmy.world · ↑8 · edited · 9 days ago

                  If it were made for that, the slopifier would be able to identify the light sources in the scene. Until it can, it’s art- and environment-destroying irrelevant bullshit. Every slop example shown, the best Nvidia can deliver, demonstrates that it ignores the lighting of the scene.

              • Ledivin@lemmy.world · ↑13 ↓50 · 10 days ago

                How is “upscaling while preserving it” not the exact same philosophy as “enhance by applying a filter?”

                You just don’t like the specific filter, it’s very literally the same process.

                • Nibodhika@lemmy.world · ↑33 ↓4 · 10 days ago

                  Because a pixelated circle being upscaled is still a circle, but a pixelated circle being turned into a high-definition pie is no longer a circle. That’s especially problematic if the circle was actually a crosshair, or some other random circle-like thing the AI decided was meant to be a pie.

                  Yes, both things are “the same”, but that’s like saying that since you were okay with a tiny spider in your house because it killed mosquitoes, you should also be okay with a colony of bats, since they’re also animals that eat mosquitoes. The scale and the amount of intrusion are completely different.

                • heavyboots@lemmy.ml · ↑15 ↓1 · 10 days ago

                  Current DLSS intent: We can only render this at like 720p with enough frames, so let’s do that and use AI anti-aliasing tricks so that when we present it at 4K, none of the jaggies are visible on-screen like they would be with raw 720p upscaling.

                  DLSS 5 intent: Using our pile-of-stolen-artwork neural net, which we can now run at 60+ fps, let’s “reimagine” the entire look of the game as we present it on-screen, even if it was already running at 4K just fine.

                  TL;DR: How big the neural net is, and what you train it for, matters.

                • ricecake@sh.itjust.works · ↑3 ↓1 · 10 days ago

                  … How is flying a spaceship different from driving a car? They’re both controlled applications of kinetic energy to move people or objects.

                  At the end of the day, it’s all a pile of transistors and the only thing that is of import is the intent behind usage.

                  In one case it’s saying you can use a neural net to take something rendered at resolution A/4 and make it visually indistinguishable from the same render at resolution A.
                  The other is rendering something and radically changing the artistic or visual style.

                  Upsampling can be replicated, within some margin, by lowering the framerate and letting the GPU work longer on each frame: it strives to use guessing to restore detail that was left out in order to work quicker.
                  You cannot turn this new feature off and get similar results by lowering the framerate: it aims to add, by guessing, detail that was never present.

                  Upsampling methods have been produced that don’t use neural networks. The differences in behavior are in the realm of efficiency, and in many cases you would be hard pressed to tell which is which. The neural network is an implementation detail.
                  In the other case, the changes are more broad than can be captured by non AI techniques easily. The generative capabilities are central to the feature.

                  Process matters, but zooming out too far makes everything identical, and the intent matters too. “I want to see your art better” as opposed to “I want to make your art better”.
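
                  For what it’s worth, here’s roughly what one of those non-neural upscalers looks like (a plain bilinear filter, my own toy implementation): no training data, no generation, just interpolation between pixels that are already there:

                  ```python
                  import numpy as np

                  def bilinear_upscale(img: np.ndarray, factor: int = 2) -> np.ndarray:
                      # Classic non-neural upscaling: linearly interpolate between
                      # neighbouring source pixels at each target position.
                      h, w = img.shape
                      ys = np.linspace(0, h - 1, h * factor)
                      xs = np.linspace(0, w - 1, w * factor)
                      y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
                      x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
                      wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
                      top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
                      bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
                      return top * (1 - wy) + bot * wy

                  img = np.arange(16, dtype=float).reshape(4, 4)
                  print(bilinear_upscale(img).shape)  # (8, 8)
                  ```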

            • half_built_pyramids@lemmy.world · ↑13 ↓2 · 10 days ago

              Not all answers are easy. This new DLSS looks like it was trained on stolen work. Old DLSS had a neural network that was tuned before the plagiarism machine became popular.

            • bdonvr@thelemmy.club · ↑5 ↓4 · 10 days ago

              Oh yeah? Well, vegetables are both in pig troughs and on dinner plates. Why’s one slop and not the other? They were grown with the same process!

              Because one is shitty and the other isn’t.

              • plantfanatic@sh.itjust.works · ↑2 ↓1 · 9 days ago

                If the vegetables are the same, they aren’t slop. Pigs aren’t fed fresh vegetables; they get the rotten ones. Your analogy doesn’t work, if you actually comprehend the basics of it…

                If the vegetables weren’t rotten, yeah, most people would eat the “slop”, since it’s just vegetables. Would you let good food go to waste just because of the name you’re arbitrarily and incorrectly applying to all pig feed?

            • Catoblepas@piefed.blahaj.zone · ↑4 ↓3 · 10 days ago

              Are you really asking why compressing and uncompressing art made by a human being is different from slop produced by the slop machine?

              One exists to reconstruct an image as closely to the original as possible while saving space; the other is meant to introduce arbitrary changes to the initial image and produce something else.

        • NotSteve_@piefed.ca · ↑24 ↓17 · edited · 10 days ago

          I don’t like AI, but christ, Lemmy is getting annoying lately with the kneejerk “slop” claims for anything with the letters AI in it. A lot of this stuff has been used for ages, and yeah, they’re leaning into the current hype, but the overreaction is just ridiculous (see: the “open slop” list of open source projects, which includes projects that have the audacity to let developers use AI line completion).

          It genuinely diminishes the actual concerns with AI tech when people lose it over things that existed long before the current bubble but just have AI™️ on the package now.

            • dreadbeef@lemmy.dbzer0.com · ↑1 ↓1 · 9 days ago

              AI isn’t real. I’m just saying that what people call AI is pretty much LLMs, or anything that does NLP. No one looks at DLSS and says “that’s AI”.

              • Nikelui@lemmy.world · ↑2 ↓1 · 9 days ago

                “Science fiction AI” isn’t real. AI is most definitely a thing. From the Oxford dictionary:

                artificial intelligence = the study and development of computer systems that can copy intelligent human behaviour

                By definition, a chess program is AI.

        • b34k@lemmy.world · ↑2 ↓4 · 10 days ago

          DLSS actually uses Machine Learning models to do the upscaling, so in fact there is no AI Slop here.

      • givesomefucks@lemmy.world · ↑20 · 10 days ago

        Not really.

        Nvidia just calls everything DLSS…

        It’s basically an anthology label at this point: if they think something is a good idea, they call it DLSS #.

        For example, DLSS 4 was frame generation, which has nothing to do with super sampling.

      • CileTheSane@lemmy.ca · ↑2 · 9 days ago

        It does a pretty good job of making the game still look (almost) exactly the same

        Isn’t that just displaying the image with extra steps? Why is my PC using all this extra processing power in order to make it look (almost) exactly the same?

    • dan1101@lemmy.world · ↑6 · 10 days ago

      I think that’s accurate. It’s making something out of nothing, which will certainly be graphics but not necessarily exactly what the game is supposed to look like.

    • Übercomplicated@lemmy.ml · ↑2 · 9 days ago

      While it may have used machine learning, it was definitely not in the ‘slop’ category. I generally think of slop as something that tries to imitate a creative or human element (like the enhancements from DLSS 5), whereas FSR and earlier DLSS used machine learning to replace anti-aliasing like MSAA through super-sampling and temporal techniques (frame gen kinda sucked, though). So, to answer your hopefully literal question: no, DLSS has not, in the past, been an AI slop filter.