• ExLisper@linux.community · 19 points · 9 months ago

    It’s not making the Turing test obsolete. It was obvious from day 1 that the Turing test is not an intelligence test. You could simply create a sufficiently big dictionary of “if the human says X, respond with Y” and it would fool any person into thinking it’s talking with a human, with zero intelligence behind it. The Turing test was always about checking how good a program is at chatting. If you want to test something else, you have to come up with another test. If you want to test chat bots, you will still use the Turing test.

    • intensely_human@lemm.ee · 9 points · 9 months ago

      Sounds to me like that sufficiently large dictionary would be intelligent. Like, a dictionary that can produce the correct response to everything said sounds like a system that can produce the correct response to anything said. Like, that system could advise you on your career or invent machines or whatever.

      • ExLisper@linux.community · 8 points · 9 months ago

        No, a dictionary is not intelligent. A dictionary simply matches one text to another. A HashMap is not intelligent. But it can fool a human into thinking it is.
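        The zero-intelligence responder being described can be sketched in a few lines. This is just an illustration of the argument, not any real system; the canned inputs and replies below are made up:

```python
# A "chatbot" that is nothing but a lookup table. Every input is matched
# verbatim (after normalization) to a canned reply; no reasoning happens.
CANNED_REPLIES = {
    "hello": "Hi there! How are you doing?",
    "how are you?": "Pretty good, thanks for asking.",
    "what do you do?": "A bit of this, a bit of that. You?",
}

def respond(message: str) -> str:
    # Normalize the input, then look it up; deflect when nothing matches,
    # the way a scripted conversation partner would.
    return CANNED_REPLIES.get(message.strip().lower(), "Interesting, tell me more!")
```

        Scale the table up far enough and, per the argument above, it chats convincingly, yet it is still just string matching.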

        • aname@lemmy.one · 9 points · 9 months ago

          Yes, but you could argue that the human brain is a large pattern matcher with a dictionary. What separates human intelligence from machine intelligence?

          • ExLisper@linux.community · 4 points · 9 months ago

            The question is not whether something is a pattern matcher. The question is how the matching is done. There are ways we consider intelligent and ways we don’t. The human brain is generally considered intelligent, some algorithms using heuristics or machine learning would be considered artificial intelligence, and a hash map matching string A to string B is not in any way intelligent. But all these methods can produce the same results, so it’s impossible to determine whether something is intelligent without looking inside the black box.

        • Kit Sorens@lemmy.dbzer0.com · 5 points · 9 months ago

          Yet language and abstraction are the core of intelligence. You cannot have intelligence without two-way communication, and if anything, your brain contains exactly the dictionary you describe. Ask any verbal autistic person, and 90% of their conversations are scripted to a fault. However, there’s another component to intelligence that the Turing test just scrapes against. I’m not philosophical enough to identify it, but it seems like the Turing test is looking for lightning by listening for rumbling that might mean thunder.

          • ExLisper@linux.community · 4 points · 9 months ago

            If you want to get philosophical, the truth is we don’t know what intelligence is, and there’s no way to identify it in a black box. We may say that something behaves intelligently or not, but we will never be able to say whether it’s really intelligent. The Turing test checks if a program is able to chat intelligently. We can come up with a test for solving math intelligently or driving a car intelligently, but we will never have a test for what most people understand as intelligence.

            • 0ops@lemm.ee · 1 point · 9 months ago

              This is what it comes down to. Until we agree on a testable definition of “intelligence” (or sentience, sapience, consciousness or just about any descriptor of human thought), it’s not really science. Even in nature, what we might consider intelligence manifests in different organisms in different ways.

              We could assume that when people say intelligence they mean human-like intelligence. That might be narrow enough to test, but you’d probably still end up failing some humans and passing some trained models.

              • ExLisper@linux.community · 0 points · 9 months ago

                It’s not that it’s not science. Different sciences simply define intelligence in different ways. In psychology it’s mostly the ability to solve problems by reasoning, so ‘human-like’ intelligence. They don’t care that computers can solve the same problems without reasoning (by brute force, for example) because they don’t study computers. In computer science it’s fuzzier, but it pretty much boils down to algorithms solving problems using some sort of insight rather than simple step-by-step instructions. The problem is that with general AI we’re trying to unify those definitions, and when you do, both lose their meaning.
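                The brute-force point can be made concrete with a toy puzzle (made up for illustration): find the number that plus 3 equals 10. A person solves it by reasoning, x = 10 − 3; the sketch below gets the same answer by exhaustive trial, with no insight involved:

```python
def solve_by_brute_force(target: int = 10, addend: int = 3) -> int:
    # Try every candidate in turn instead of reasoning "x = target - addend".
    # Produces the same answer a human would give, without any reasoning.
    for x in range(10_000):
        if x + addend == target:
            return x
    raise ValueError("no solution found in search range")
```

                From the outside, only the answer is visible, which is exactly why psychology can ignore the difference: the process, not the output, is what distinguishes the two.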

                • 0ops@lemm.ee · 1 point · 9 months ago

                  You’re right, it’s very much context dependent, and I appreciate your insight on how this clash between psychology and computer science muddies the terms. As a CS guy myself who’s just dipping my toes into NNs, I lean toward the psychology definition, where intelligence is measured by behavior.

                  In an artificial neural network, the algorithms that wrangle data and build a model aren’t really what makes the decisions; they just build out the “body” (model, generator functions) and “environment” (data format), so to speak. If anything, that code is more comparable to DNA than to any state of mind. Training on data is where the knowledge comes from, and by making connections the model can “reason” out a good answer from the correlations it found. Those processes are vague enough that I don’t feel comfortable calling them algorithms, though. It’s pretty divorced from cold, hard code.

      • JohnEdwa@sopuli.xyz · 7 points · 9 months ago

        So would a book be considered intelligent if it was large enough to contain the answer to any possible question? Or what about the search tool that simply matches your input to the output the book provides? Would that be intelligence?

        To me, something can’t be considered intelligent if it lacks the ability to learn.