May 29th, 2019

Tesla has made and kept many promises. Some of the promises it has not kept may be unkept because puzzle pieces were missing.

With the addition of Starlink satellites, high bandwidth becomes very achievable, especially while the system carries no load.

You have a brand new high capacity network with no user traffic.

If Starlink is going to do anything, it is probably going to integrate with the hundreds of thousands of Tesla vehicles.

So what does that mean? Well, if you also get Starlink as your ISP, you basically have two hops to your car. This is unheard of except on Wi-Fi.

Now we need a network that can handle a tonne of bandwidth, and we need cars that are capable of recording and processing video at incredible rates.

Basically, what this means is the puzzle pieces are starting to fit. Do the cars really need to be self-driving if they can be remote controlled by a human?

Humans who train their specific cars in their specific regions. And the cars drive pretty well on that data. So well that you can drive 2 cars at once, because you only need to intervene every 30 seconds.

You can keep an eye on 2 roads, switching back and forth between realities. But then Tesla improves: it is every 45 seconds. And you know interventions are coming up, because they always happen at the same spots. But you work those out, because Tesla gives you the tools to help them learn. Then it is every minute. You decide to take on more cars, which covers more ground, so you can train faster.
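The scaling here can be sketched with simple arithmetic. Assuming a mean time between interventions and a fixed handling time per intervention (all numbers below are hypothetical, chosen only to match the 30/45/60-second story above), the number of cars one operator can supervise is roughly the ratio of the two:

```python
def max_cars_per_operator(secs_between_interventions: float,
                          secs_per_intervention: float) -> int:
    """Rough upper bound: an operator saturates when time spent handling
    interventions fills all of the time available."""
    return int(secs_between_interventions / secs_per_intervention)

# Hypothetical: each intervention takes 15 s of operator attention.
print(max_cars_per_operator(30, 15))  # 2 cars
print(max_cars_per_operator(45, 15))  # 3 cars
print(max_cars_per_operator(60, 15))  # 4 cars
```

As the interval between interventions stretches, each operator's capacity grows linearly, which is exactly the "take on more cars" step described above.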

One major problem Tesla is having is training these AIs rapidly. They look for specific behaviours and train for them one by one.

This is a distributed problem (drive everywhere) with many moving parts. But there is one thing I do know (I did take Distributed Computing in university): big distributed problems can be tackled by many processors.

In this case, each processor is actually a human. The rate at which the driving system learns is the rate at which humans will teach it.
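A back-of-the-envelope model of that claim (every number here is made up for illustration): if each human teacher contributes corrections at some rate, total learning throughput scales roughly linearly with the number of teachers, which is why fleet-sized teaching beats a paid team.

```python
def daily_training_examples(num_teachers: int,
                            corrections_per_hour: float,
                            hours_per_day: float) -> float:
    """Total labeled corrections per day, assuming teachers work independently."""
    return num_teachers * corrections_per_hour * hours_per_day

# A small paid team vs. a hypothetical fleet of owner-teachers.
print(daily_training_examples(50, 60, 8))       # 24000.0
print(daily_training_examples(100_000, 60, 1))  # 6000000.0
```

Even at one hour a day each, a large fleet of owners out-teaches a full-time team by orders of magnitude, under these assumed rates.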

In order for humans to teach it efficiently, good tools need to exist. But to teach it at all, some tools must already exist, so I assume they do.

The human problem right now is that Tesla is paying people to do this training. I have Autopilot and Full Self-Driving. I would be happy to train this system a little better in the places it misbehaves.

For example, on the 401 going to Gananoque, I can put my parents' address in and it doesn't know to get off at the Thousand Islands Parkway. It tries to stay on the 401. The map seems to know to, but the car isn't listening to the map.

In this situation, a human could say, “at this location, decide based on where the map wants me to go.”
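One way to picture such a human-supplied rule (everything here is hypothetical, a sketch of the idea rather than how Autopilot actually works): a small table of locations where the planner should defer to the navigation map instead of its default lane-keeping choice. The coordinates and rule names below are invented for illustration.

```python
# Hypothetical human-taught override table. Keys are coarse (lat, lon)
# grid cells; values say what the planner should defer to there.
OVERRIDES = {
    # Illustrative cell near a highway split where the car should
    # follow the navigation route instead of staying in its lane.
    (44.33, -76.02): "follow_map_route",
}

def grid_cell(lat: float, lon: float) -> tuple:
    """Quantize a position to a 0.01-degree grid cell."""
    return (round(lat, 2), round(lon, 2))

def planner_decision(lat: float, lon: float,
                     default: str = "stay_in_lane") -> str:
    """Defer to a human-taught rule if one exists for this cell."""
    return OVERRIDES.get(grid_cell(lat, lon), default)

print(planner_decision(44.331, -76.019))  # follow_map_route
print(planner_decision(43.65, -79.38))    # stay_in_lane
```

The point of the sketch is that the "teaching" act is tiny: one owner adds one entry, and the default behaviour everywhere else is untouched.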

I can confirm it does this on both a Model S and a Model 3.

Programmed. Better. No more issue there. So why doesn’t my dad get to program that in?

Is it because of security? Liability? What harm is there from letting my dad make that input? Or me? I don’t see any, but the switch is not flipped yet.

When Starlink gets in place and is stable in North America, the number of humans contributing their own inputs will increase dramatically.

Beyond that, the number of vehicles a single user can manage could also increase dramatically.

So now, instead of the system learning from some Tesla-paid employees, it is learning from people navigating their own cars and from people driving many cars at once, and that number keeps increasing.

One last point. Remember the internal cameras in Teslas? Doesn't it make a lot more sense that these cameras are for looking inside and managing 30 cars, rather than 1?

Posted In:

Disruption
Friction Reduction
Self Driving
Speculation
Starlink
Tesla


ABOUT THE AUTHOR:
I am long TSLA. I do not fear uncertainty and doubt. Numbers matter.