• Nougat@kbin.social · 32 points · 2 years ago

    tl;dr: Autonomous driving uses a whole host of multiple and different kinds of sensors. Musk said “NO, WE WILL ONLY USE VISION CAMERA SENSORS.” And that doesn’t work.

    Guess what? I have eyes; I can see. You know what I want an autonomous vehicle to be able to do? Receive sensory input that I can’t.

    • bfg9k@kbin.social · 14 points · 2 years ago

      We also use way more than just our eyes to navigate. We have accelerometers (ear canals), pressure sensors (touch), and Doppler sensors (ears) to augment how we get around. It was a fool’s errand to try to figure everything out with cameras alone.

    • EthicalAI@beehaw.org · 7 points · 2 years ago

      What’s worse is that this decision will be hard to reverse. Tesla is a data and AI company compiling vision and driving data from drivers around the world. If you change the sensor format or layout dramatically, the old data and the new data become hard to hybridize. You basically start from scratch, at least for the new sensors, and you fail to deliver on a promise to old customers.

      • Metacortechs@lemmy.stellarvortex.com · 4 points · 2 years ago

        Sounds to me like they should go full steam ahead with new sensors; they will never deliver on what they’ve promised with the tech they are using today.

        Old customers’ situation won’t change, and it would only get better going forward.

      • Barry Zuckerkorn@beehaw.org · 2 points · 2 years ago

        “If you change the sensor format or layout dramatically, the old data and the new data become hard to hybridize.”

        I don’t see why that would have to be the case if the new data is a complete superset of the old data. If all the same cameras are still there, then the additional sensors, and the data they collect, can actually help train the processing of the vision-only data, right?

    • kestrel7@kbin.social · 4 points · 2 years ago

      How do we prove we’re not robots? Fucking select the pictures with traffic lights or buses, right? How was this allowed?

    • Canadian Nomad@beehaw.org · 0 points · 2 years ago

      This news is months old. Honestly, I agree with Musk on this one. We manage to drive with 2 (sometimes only 1) low-resolution (sometimes out-of-focus, sometimes closed) cameras on a pivot inside the vehicle, with blind spots all around. Much of our rear situational awareness comes from 2–3 small, warped mirrors strategically placed to augment those 2 low-resolution cameras on a pivot.

      Tesla has already reverted to adding some radar back in… The lidar option sounds like a dystopia waiting to happen (just imagine streets filled with invisible aftermarket lasers from third-world countries; any one of them could blind you under unlucky circumstances). The best way forward is visual, and if you watch up-to-date test drives on YouTube, you can see they are doing quite well with what they have.