Is Simulation the Way Forward to Fully Autonomous Driving?

Mike Dempsey of Claytex on why simulation holds the key to robust, reliable autonomy.

Truly autonomous vehicles have been the dream of motorists and automotive engineers since the beginning of the industry, and they have been predicted by futurists and writers for over 60 years. Few technologies have been so desired and anticipated, and few have been so difficult to realize. The major inhibiting factor in their development, the lack of low-cost, portable computational power, has largely been solved, and advanced driver assist systems exist today. 

But as of yet, full autonomy remains elusive. The problem is incredibly complex, and the imperative to produce systems that deliver road safety at levels matching or exceeding those of human drivers puts a premium on system redundancy and error proofing. An increasing number of industry experts believe that only simulation can deliver the level of testing necessary to create truly robust systems in a world where the number of edge cases seems infinite. 

Joining Jim Anderton to discuss the implications of simulation in this space is Mike Dempsey, Managing Director of vehicle simulation developer Claytex, a TECHNIA company. 

Learn more about how simulation is driving the future of transportation.

This episode of Designing the Future is brought to you by TECHNIA.

The transcript below has been edited for clarity.

Jim Anderton:
Hello everyone, and welcome to Designing the Future.

Autonomous vehicles, self-driving cars and trucks, it’s been the dream of motorists and automotive engineers from the beginnings of the industry and it’s been predicted by futurists and writers for over 60 years. The major inhibiting factor to its development, low cost and portable computational power, well that’s largely been solved. And advanced driver assist systems, they exist today. But as of yet, full autonomy remains elusive. The problem is incredibly complex and the imperative to produce systems that deliver road safety at levels as good or better than human drivers puts a premium on system redundancy and error proofing. 

An increasing number of industry experts believe that only simulation can deliver the level of testing necessary to create truly robust systems in a world where the number of edge cases seems infinite. Joining me to discuss the implications of simulation in this space is Mike Dempsey, managing director of Claytex, a TECHNIA Company. 

Claytex has been working on simulation technology for autonomous vehicles since 2016, including vehicle physics models for full-motion driving simulators used in Formula 1 and NASCAR. Mike is an automotive industry veteran who studied automotive engineering at Loughborough University and developed powertrain simulation systems at Ford and Rover. Mike, welcome to the show. 

Mike Dempsey:
Hi, Jim. Great to be here. 

Jim Anderton:
Mike, this subject, it’s a hot topic right now. It’s all over the media. There’s so many misconceptions out there, even amongst industry professionals and engineering professionals. But let’s start by talking about self-driving from an information perspective. Self-driving is about absorbing a huge amount of information from the environment around the car, and that’s got to be processed. And decisions, they have to happen at thousands per minute to make this work. That’s true whether it’s a human driver or machine. 

Now, some companies use real-world driving data to try to generate the algorithms necessary to make these decisions. But once the basics are mastered, and motorway and freeway driving at this point are mastered, edge cases appear to be where the action is and where the difficulties are. So this would suggest that there are diminishing marginal returns in real-world testing: the better you get at it, the more difficult it is to get to the edge cases and the farther apart in time they become. Is this a factor? Are there natural, intrinsic limits to real-world testing? 

Mike Dempsey:
So the edge cases, as you say, that’s where all the risk is for an autonomous vehicle, or for any vehicle really. They’re the things that happen very rarely. So you can drive for 10 years and never have an accident, but one day you might have some sort of strange incident occur. And those are the edge cases. Those are the things that are going to challenge your autonomous vehicles, and those are what we need them to be prepared to handle. 

But we can’t really recreate those or capture those through real-world testing, because driving around on public roads, we don’t really want to have an accident. And we don’t want to try to cause those accidents to happen on proving grounds or in controlled facilities, because these prototype autonomous vehicles cost perhaps millions of pounds to put together. We don’t want to risk damaging them. 

So the way to do it is to look at simulation. And in simulation, we can put our autonomous vehicles into all sorts of edge cases, really high-risk situations, to see what they will do and how they will handle it. But we’ve still got to get to the point where we can simulate all of that. We’ve got to have the simulation technology that allows us to fully immerse the autonomous vehicle, with all of its sensor suite, into these complex environments where we can put these edge cases to them. And that’s really the challenge that we are working on. 

Jim Anderton:
We at engineering.com have talked to many in the industry, roboticists and some software people. I’ve heard varying opinions about the amount of context necessary to make sense of the self-driving environment. Some will tell me that it ultimately will be important to know the difference between a raccoon, a cat and a piece of paper blowing in the road. Others say, “No, we can simplify the problem greatly by resolving it down to an obstacle and simply determining whether it’s of a significant mass or what its trajectory is.” Is that still a debate going on in the industry? Does it still matter? Is the machine going to need to know what it’s seeing to drive us safely? 

Mike Dempsey:
It needs to know to some extent what it’s seeing, because it needs to know whether it should expect that thing to move. And if you know what kind of thing it is, you can expect how fast it might move. So if you can recognize that that’s a jogger, you can predict a certain speed that they’re going to go at and start to think about the trajectory they’re going to take over the next few seconds. If it’s a dustbin, well, it’s probably not going to move. Unless it’s windy, and then you might want to recognize that it’s a bin and it’s windy and it might move. So there are all these things that we as humans can extrapolate from the objects that we see and recognize, and the computers, the AI systems, need to be able to do something similar to understand and be aware of what might happen in front of them in the next few seconds. 
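
To make the idea concrete, here is a minimal sketch of mapping a recognized object class to an expected speed range so a planner can bound how far that object might move over the next few seconds. The class names and speed values are illustrative assumptions, not Claytex’s actual model.

```python
# Minimal sketch: map a recognized object class to an expected speed range,
# then bound how far it could plausibly move over a short planning horizon.
# Class names and speed values are illustrative assumptions only.

EXPECTED_SPEED_MPS = {
    "pedestrian": (0.0, 2.0),   # walking
    "jogger": (2.0, 4.5),       # running human
    "cyclist": (0.0, 12.0),
    "bin": (0.0, 0.0),          # static, unless it is windy
}

def max_displacement(obj_class: str, horizon_s: float, windy: bool = False) -> float:
    """Upper bound on how far an object of this class might move in horizon_s seconds."""
    low, high = EXPECTED_SPEED_MPS.get(obj_class, (0.0, 15.0))  # unknown class: assume fast
    if obj_class == "bin" and windy:
        high = 3.0  # a bin in the wind is no longer a static obstacle
    return high * horizon_s

print(max_displacement("jogger", horizon_s=2.0))   # about 9 m worst case over 2 s
```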

Jim Anderton:
Now that’s interesting. So in the case of your jogger example, as a human driver, I have an expectation that a jogger will move at the speed of a running human. And therefore that individual might be two meters in front of my car in the next second, but he won’t be eight meters in front of my car in the next second. Is that the same sort of process that the machine has to do? 

Mike Dempsey:
Exactly. The machines are trying to work out, “Where is it safe for me to move? Where can I drive? Where is the safe space that I can move to in the next 2, 3, 5, 10 seconds?” And so understanding where all of the moving objects in the world around them are going to end up, or are likely to end up, helps them plan that out. So if they know that you are going at four miles an hour because you’re walking quickly, or you’re jogging at eight miles an hour, you’re not suddenly going to leap out into a particular lane in front of them. And they can use that information to help them plan. 
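
A rough sketch of that planning check, under a simple constant-velocity assumption: project each tracked object forward over the planning horizon and flag anything that could drift into the corridor the vehicle intends to use. The corridor width, margin and object fields are hypothetical, for illustration only.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    y: float      # lateral offset from the planned path, meters (0 = in the corridor)
    vy: float     # lateral velocity, meters/second (negative = moving toward the path)

def could_enter_corridor(obj: TrackedObject, horizon_s: float,
                         corridor_half_width: float = 1.8, margin: float = 0.5) -> bool:
    """Constant-velocity projection: could this object's lateral position reach the
    ego corridor (plus a safety margin) at any point within the planning horizon?"""
    steps = 10
    for i in range(steps + 1):
        t = horizon_s * i / steps
        if abs(obj.y + obj.vy * t) <= corridor_half_width + margin:
            return True
    return False

# A jogger 3 m to the side of the planned path, drifting toward it at 1 m/s:
jogger = TrackedObject(y=3.0, vy=-1.0)
print(could_enter_corridor(jogger, horizon_s=3.0))   # True: plan around them or slow down
```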

Jim Anderton:
Now for human drivers, in the example we’re using, we have an intuitive sense of things like wind: wind speed, wind direction. So basically if we have a sense that it’s blowing left to right, that’s where we expect the rubbish to blow from, for example, in that case. Is that level of sophistication necessary for autonomous driving? I mean, these things can process information very quickly. Can they just react to the obstacle when it pops up? 

Mike Dempsey:
It’s a good question, and I’m not sure I really know the answer to that. I mean, you’re right, we instinctively recognize from what we see around us that the wind’s blowing across us. That maybe helps us anticipate that if we’re going to go past a truck, we might suddenly hit a crosswind as we go past it. It would be helpful if the vehicles could know that, because as you do that you might have to put some correction on the steering. If the vehicles can also anticipate that, it will probably give them a smoother ride, but I don’t honestly know whether they can or not. 

Jim Anderton:
Yeah, I’m not sure anyone does. A question Mike about the complexity of this problem. Motion in pure kinematics of course is relatively easy to resolve, not too many systems of equations, but we’re talking about things which may move erratically and which may have multiple degrees of freedom. Mathematically, how complex is the problem of predicting even say the potential trajectory of an obstacle? 

Mike Dempsey:
Well, it’s really challenging. So actually, the way that it’s done inside the vehicles is that they’re using neural networks to predict, from the pattern of where you have moved, where you’re likely to go, predicting multiple different paths with some sort of confidence attached to where you could be in that space. From a simulation point of view, we’re trying to then recreate all of this in a virtual world with all of these things going on. 

And that’s really hard. The physics of how a car drives around a road, we can do. The randomness of all of us who sit in those cars, control them, drive them around these worlds and do often unpredictable things, that’s the bit that’s really challenging. And the way that we deal with that is by having these edge cases, where we’ve been able to use data from real-world sources to capture strange things that happen. 
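
Dempsey mentions that the vehicles use neural networks to predict multiple possible paths with associated confidence. As a stand-in for such a learned model, here is a minimal multi-hypothesis predictor that propagates a few simple motion hypotheses and attaches rough weights; the hypotheses and weights are invented for illustration and are not how any production system actually works.

```python
import numpy as np

def predict_hypotheses(pos, vel, horizon_s=3.0, dt=0.5):
    """Stand-in for a learned multi-modal predictor: propagate a few simple motion
    hypotheses (keep going, slow down, veer left, veer right) and attach rough
    confidence weights. A real system would learn both the paths and the weights."""
    hypotheses = {
        "continue":   (1.0,  0.0, 0.50),   # (speed scale, turn rate rad/s, weight)
        "slow_down":  (0.3,  0.0, 0.20),
        "veer_left":  (1.0,  0.4, 0.15),
        "veer_right": (1.0, -0.4, 0.15),
    }
    paths = {}
    for name, (scale, turn_rate, weight) in hypotheses.items():
        p = np.array(pos, dtype=float)
        v = np.array(vel, dtype=float) * scale
        points = [p.copy()]
        for _ in range(int(horizon_s / dt)):
            angle = turn_rate * dt
            rot = np.array([[np.cos(angle), -np.sin(angle)],
                            [np.sin(angle),  np.cos(angle)]])
            v = rot @ v                    # rotate the velocity for the turning hypotheses
            p = p + v * dt
            points.append(p.copy())
        paths[name] = (weight, np.array(points))
    return paths

for name, (w, pts) in predict_hypotheses(pos=(0.0, 0.0), vel=(3.0, 0.0)).items():
    print(f"{name:10s} weight={w:.2f} end point={pts[-1].round(1)}")
```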

Jim Anderton:
It’s interesting. In systems that are sort of pseudo-automated, I think of some of the point-of-sale devices that we use, for example, or even the Teams call that we’re using right now: latency is always an issue. The hardware can only process information so fast. Is that something you have to factor in when you consider your simulations for self-driving? 

Mike Dempsey:
Yeah, absolutely. So as you say, all the sensors run at different rates. They produce information for the vehicle at different frame rates. So cameras might be running at 30 frames per second, the lidars will be spinning at 300 RPM, the radar will be producing data perhaps every 50 milliseconds. So inside the simulation, what we’ve got to do is make sure that all of those sensor outputs are coming at the right rates and are including all the right dynamic effects as they do that. 
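
Those numbers imply different frame periods for each sensor: a 30 fps camera produces a frame roughly every 33 ms, a lidar spinning at 300 RPM completes a rotation every 200 ms, and the radar output arrives every 50 ms. A minimal sketch of how a simulator might schedule those outputs on their own clocks follows; the sensor names and rates come from the example above, and the scheduling code itself is purely illustrative.

```python
import heapq

# Frame periods derived from the rates mentioned above: a 30 fps camera, a lidar
# spinning at 300 RPM (one rotation every 200 ms), radar output every 50 ms.
SENSOR_PERIODS_S = {
    "camera": 1.0 / 30.0,
    "lidar":  60.0 / 300.0,
    "radar":  0.050,
}

def schedule_sensor_frames(duration_s: float):
    """Yield (timestamp, sensor) events in time order, each sensor on its own clock."""
    heap = [(period, name) for name, period in SENSOR_PERIODS_S.items()]
    heapq.heapify(heap)
    while heap:
        t, name = heapq.heappop(heap)
        if t > duration_s:
            continue                       # this sensor has no more frames in the window
        yield t, name
        heapq.heappush(heap, (t + SENSOR_PERIODS_S[name], name))

for t, sensor in schedule_sensor_frames(0.12):
    print(f"{t * 1000:6.1f} ms  {sensor}")
```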

Jim Anderton:
Well, I’m glad you brought up the notion of sensors, because the industry appears to be settling into two sort of generic camps about sensors. One is led by a very famous global entrepreneur who started an electric car company, who appears to favor an all-camera, all-machine-vision approach. Others in the industry want to go multi-spectral: they want perhaps millimeter-wave radars, they want lidar, multiple different sensors, different kinds of inputs. And some have gone so far as to say that true autonomous driving will never be achieved with a vision-only approach. And that’s a pretty powerful statement at this point. If you’re simulating this, how do you simulate multiple inputs from multiple different sensors that are delivering different kinds of information about the environment around the car? 

Mike Dempsey:
So the way that we are doing it with AVSandbox is that we have what we call an instance, a simulation of each sensor type. In each of those sensors, the virtual world that they see is adjusted so it looks right to that particular type of sensor, because a radar sees things very differently to a lidar or a camera. And so we need to make sure that the world that we immerse each sensor into looks correct to that sensor. And then you’ve got to make sure that your models of those sensors are including all the right effects. For a radar in particular, there’s a lot of bouncing going on with the radar signal, and we’ve got to make sure that the simulation includes all of those reflection effects, because they give you false targets, and the AI system has to be able to figure out what’s a real target and what’s a false target. Otherwise, it might end up trying to avoid the wrong thing. 
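
The structure Dempsey describes, one instance per sensor type with the world adjusted to look right to that sensor, might be sketched like this. The object properties, the radar ghost model and the threshold are all hypothetical simplifications, not the AVSandbox API; a real radar model traces actual reflections rather than spawning ghosts by rule.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    # Each object carries sensor-specific properties so that every sensor type
    # "sees" a world that looks right to it. The values are purely illustrative.
    visual_texture: str = "default"
    lidar_reflectivity: float = 0.5        # 0..1
    radar_cross_section_m2: float = 1.0

class RadarInstance:
    """One 'instance' per sensor type; this radar one also spawns multipath ghosts."""
    def __init__(self, ghost_gain: float = 0.3):
        self.ghost_gain = ghost_gain

    def render(self, objects):
        returns = []
        for obj in objects:
            returns.append({"target": obj.name, "rcs": obj.radar_cross_section_m2,
                            "real": True})
            # Crude stand-in for multipath: a strong reflector spawns one false target.
            if obj.radar_cross_section_m2 > 5.0:
                returns.append({"target": obj.name + "_ghost",
                                "rcs": obj.radar_cross_section_m2 * self.ghost_gain,
                                "real": False})
        return returns

scene = [SceneObject("car", radar_cross_section_m2=10.0),
         SceneObject("pedestrian", radar_cross_section_m2=0.5)]
print(RadarInstance().render(scene))
```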

Jim Anderton:
Sure. Yeah. Ground clutter’s always been an issue for the radar people. Well, in this case, do these systems operate sequentially? As individuals, of course, with our eyes and our ears and our proprioceptive receptors, we’re pretty parallel when we drive. We absorb a lot of information simultaneously, and we sort of make decisions. But I’ve always wondered, are the machines going to have to take inputs one at a time and then go through an iterative decision-making process? 

Mike Dempsey:
No, they take all that data at once. There will perhaps be different processes that analyze the lidar data, the radar data and the camera data to do object detection within them all, and then compare to see where the objects are: “Do all three of the sensors see an object at the same place?” Some people are even going down the route of “Let’s just take all the raw data into one perception algorithm and have that look at it all at once.” But yeah, they’re not doing this sequentially. They are continuously looking at what the cameras are seeing, what the lidar is seeing, what the radar is seeing. And really, the reason they’re heading towards using multiple different types of sensor is that they all have different strengths and weaknesses. 

So you look at a lidar sensor, for example: it doesn’t behave particularly well when there’s lots of rain around. But a radar isn’t really affected by rain. Cameras and our own sight are affected by rain as well, the radar less so. So by having multiple different types of sensor, you can really take advantage of the different strengths of these systems, what they can do and what they can’t do, to be able to extend when you can drive the vehicle. That redundancy is a key thing for a lot of the automotive manufacturers. They want to see that there are multiple redundant ways that they can detect things in front of them. 
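
A minimal sketch of the cross-sensor check described above: a detection is only confirmed if at least two sensor types report something in roughly the same place, which also filters out the radar-only ghost targets mentioned earlier. The gating distance and the flat 2D positions are assumptions for illustration, and this sketch does not merge duplicate confirmations.

```python
import math

def fuse_detections(detections_by_sensor, gate_m=1.5, min_sensors=2):
    """detections_by_sensor maps a sensor name to a list of (x, y) detections.
    A candidate is confirmed only if at least min_sensors sensor types report
    something within gate_m of it; radar-only ghosts therefore get dropped."""
    confirmed = []
    for sensor, dets in detections_by_sensor.items():
        for x, y in dets:
            agreeing = {sensor}
            for other, other_dets in detections_by_sensor.items():
                if other == sensor:
                    continue
                if any(math.hypot(x - ox, y - oy) <= gate_m for ox, oy in other_dets):
                    agreeing.add(other)
            if len(agreeing) >= min_sensors:
                confirmed.append(((x, y), sorted(agreeing)))
    return confirmed

dets = {"camera": [(10.0, 1.0)],
        "lidar":  [(10.3, 0.8)],
        "radar":  [(10.1, 1.2), (40.0, 0.0)]}   # the second radar return is a ghost
for pos, sensors in fuse_detections(dets):
    print(pos, "confirmed by", sensors)
```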

Jim Anderton:
Well, I’m glad you brought up redundancy, because of course that’s a critical issue for safety in any system like this. I think of redundant systems in pseudo-AI; a classic example was the Space Shuttle, which used a polling system of multiple processors that had to agree on the same result. And if one deviated, the others would rerun the problem and potentially gang up and shut down the dissenting machine. That’s one way to approach redundancy. Another one is just to error-proof the hell out of one system, then test, and then go with confidence. Is there one way forward to creating redundancy in the way we need it? 

Mike Dempsey:
There’s probably not one way forward. I mean, all of those methods that you just talked about are already used within automotive to have error checks, to make sure that sensors are behaving and that they’re seeing what we expect them to see. I think that needs to happen more and more for an autonomous vehicle where you’ve got all these different types of sensor. Part of the reason that we are using redundancy is that, as I say, sometimes the sensors don’t work as well in particular weather conditions, or there are particular types of object that a sensor can’t see. So usually there is some sort of check within those systems to see, “Does more than one sensor see this object, and do they all see it in the same place?” Because that helps build confidence that it’s really there, rather than just a ghost target or a visual artifact that’s been picked up in the camera. 

Jim Anderton:
Mike, how much fidelity do you need to create in your simulation of the environment around the vehicle? I mean, can you simplify the problem down to almost a wire frame model or do you need to see every grain of sand on the roadway? 

Mike Dempsey:
You need a lot of detail in there really, particularly when you are getting to the point of, “We want to use this simulation to prove the vehicle is safe.” So with AVSandbox, what we are creating are digital twins of real-world locations. And those are millimeter-accurate. That’s really grown out of what we did initially with Formula 1 and NASCAR, where motorsport needed to move into using driving simulation because they were no longer allowed to go and do physical testing. And so what they wanted were accurate models of their race tracks, all the tracks that they wanted to go to. If those kerb stones or the bumps on the track were out by even a couple of millimeters, then the drivers would be complaining that it wasn’t the same track, that it was slightly different, because they are so precise when they’re driving those cars. 

And so the virtual environment that we use was born out of that environment, that real push for a high-fidelity virtual world. And that really lends itself to what we’re now trying to do with this autonomous vehicle simulation: to be able to use the same technology, the same capabilities, to build those accurate models so that when we’re testing in the simulation, it looks like the real-world location to the autonomous vehicle. And I guess one of the key things is that as we go into proving that something is safe, we’ve got to validate the simulation. You have to go through a step of being able to do tests in the real world, either on a proving ground or actually on public roads. We’ve got to be able to recreate those inside the simulation environment, because we need to see that the autonomous vehicle responds the same in those worlds, in those scenarios. That gives you confidence that your simulation is working. And then you can start to extrapolate and look at all these edge cases that you wouldn’t want to do in the real world. 
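
The validation step Dempsey outlines, re-running a real-world test inside the simulation and checking that the vehicle responds the same, could be reduced to something like the following comparison of a logged and a simulated trajectory. The sampling, the position-only metric and the tolerance are illustrative assumptions rather than any stated validation criterion.

```python
def validate_against_log(real_positions, sim_positions, tolerance_m=0.25):
    """Compare a logged real-world run with the simulated re-run of the same scenario.
    Both inputs are lists of (x, y) positions sampled at the same timestamps.
    Returns (passes, worst_error_m). The metric and tolerance are illustrative."""
    worst = 0.0
    for (rx, ry), (sx, sy) in zip(real_positions, sim_positions):
        error = ((rx - sx) ** 2 + (ry - sy) ** 2) ** 0.5
        worst = max(worst, error)
    return worst <= tolerance_m, worst

real = [(0.0, 0.0), (5.0, 0.1), (10.0, 0.3)]
sim  = [(0.0, 0.0), (5.1, 0.1), (10.1, 0.4)]
print(validate_against_log(real, sim))   # (True, ~0.14 m worst-case deviation)
```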

Jim Anderton:
How complex are these simulations? Are we looking at the number of lines of code and the amount of perhaps processor power that rivals the actual self-driving system itself? 

Mike Dempsey:
Oh, yes. I mean, if we want to simulate an autonomous vehicle that perhaps has 10 cameras on it, several lidars and several radars, we are going to need several PCs, all running the very latest and most powerful graphics cards that we can get hold of, to be able to generate all of that virtual environment to immerse the AI system in. And so the simulators themselves may even be more powerful than what’s needed in the car, because we’ve got to generate all that vision feed that’s going in. And when we get to the stage of trying to verify that the control system is correct, we’ve got to do all of that in real time. So with AVSandbox, we have the possibility of achieving real-time simulation, or we can run it as fast as possible, which might allow you to put much more detail in without having to meet that real-time requirement. 
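
The two execution modes he mentions, real-time and as-fast-as-possible, come down to whether each simulation step is paced against the wall clock. A minimal sketch follows; the step function and step size are placeholders, not AVSandbox code.

```python
import time

def run_simulation(step_fn, sim_dt=0.01, duration_s=1.0, realtime=True):
    """Advance a simulation in fixed steps. In realtime mode, pace each step against
    the wall clock (needed when real hardware or a human driver is in the loop);
    otherwise run as fast as the machine allows."""
    t_sim = 0.0
    t_wall_start = time.perf_counter()
    while t_sim < duration_s:
        step_fn(t_sim, sim_dt)                       # placeholder for one physics/sensor step
        t_sim += sim_dt
        if realtime:
            ahead = t_sim - (time.perf_counter() - t_wall_start)
            if ahead > 0:
                time.sleep(ahead)                    # don't let sim time run ahead of wall time

run_simulation(lambda t, dt: None, duration_s=0.1)   # trivial step function, just for pacing
```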

Jim Anderton:
Mike, talking about AVSandbox, all of us in the engineering community, of course, are very driven by cost. Time deadlines are always a factor, but overriding everything is cost. It’s clearly enormously expensive to real-world test autonomous vehicle systems. Enormous fleets of vehicles are necessary. We mentioned the difficulty in finding all the edge cases, and then even aggregating the data at this point. How much cost saving is there? Is there some way to estimate how much it’s possible to save in time and money by using something like AVSandbox versus the iterative “Let’s build a thousand vehicles and throw them on the street” approach? 

Mike Dempsey:
Well, that’s a good question. I mean, it’s really hard, I think, to put a number on it. But what you are doing with real-world testing is just going out and driving miles, hoping that something interesting happens so that you get a test scenario, hoping that at a junction something a little bit challenging happens and you can experience something new. In simulation, we can guarantee it: you can design a scenario so that within 30 seconds, you’ve put the car into something it’s not seen before. 

And then we can put it back into that scenario every time you make a change to a line of code to see, “Now what does it do? Now what does it do?” So we can actually put the vehicle into these scenarios repeatedly and much faster, so we can save a huge amount of time and a huge amount of effort, and just really improve the robustness of the system as well by being able to do this over and over again in a repeatable way. 
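
That repeatable re-testing is essentially a regression suite of edge-case scenarios replayed after every software change. A minimal sketch, with an invented scenario library and a placeholder pass/fail check standing in for the actual simulator run:

```python
import random

# Hypothetical scenario library: each entry pairs a name with a random seed so the
# same edge case can be replayed identically after every software change.
SCENARIOS = [("cyclist_emerges_from_junction", 11),
             ("pedestrian_steps_off_kerb", 42),
             ("debris_blown_across_lane", 7)]

def run_scenario(name: str, seed: int) -> bool:
    """Placeholder for launching the simulator on one scenario; returns True if the
    vehicle avoided a collision and respected its safety margins."""
    random.seed(seed)                     # deterministic replay
    return random.random() > 0.05         # stand-in for the real pass/fail result

def regression_suite():
    results = {name: run_scenario(name, seed) for name, seed in SCENARIOS}
    failed = [name for name, passed in results.items() if not passed]
    print(f"{len(results) - len(failed)}/{len(results)} scenarios passed")
    return failed

regression_suite()
```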

Jim Anderton:
Mike, how about the human in the loop? I think of the aviation industry; of course, they pioneered simulation both for pilot training and as a development tool. Modern systems, I mean, Airbus comes to mind: I understand that it will refuse an irrational input by a clumsy pilot. It’ll simply analyze the state of the aircraft and the flight and prevent the human from entering the loop and doing something that would compromise the flight. Is that a possibility, you think, with the human in the loop for autonomy? Is it possible to simulate what the person will do regardless of what the machine will do? 

Mike Dempsey:
It’s certainly possible. Well, simulating the person, I don’t know. But certainly we can put the person into a driving simulator. So the sorts of driving simulators that have been developed for these motorsport applications, and are also used in road car development today, can provide you with a really immersive environment. So we can put people into those simulators to see how they react to the experience of being driven around. And it’s actually a really interesting area of research to understand. We talk about proving these vehicles are safe, but what is safety? As an absolute, it’s that you don’t crash into something. But if we put you in an autonomous vehicle at 60 miles an hour down a freeway and it misses another vehicle by a quarter of an inch, technically it’s safe because it didn’t crash, but you would probably never get in that vehicle again because you would not feel safe. 

So there’s a whole load of interest there as to how we define and figure out what makes us feel safe and what doesn’t. And so I think those kinds of technologies will come to the fore, and by being able to do that we can again put humans into those edge cases and see how they feel. Are they happy with how the vehicle handled that? Or are they scared, and therefore we need to change how it handles particular situations? 
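
One way to quantify the “technically safe but doesn’t feel safe” distinction from the freeway example is to track the minimum clearance to other vehicles during a maneuver and compare it against a comfort threshold as well as against actual contact. The threshold below is an illustrative assumption, not an industry standard.

```python
def assess_pass(clearances_m, comfort_threshold_m=1.0):
    """Classify a maneuver from the clearance to the nearest other vehicle over time.
    Any contact is a collision; staying clear but dipping below a comfort threshold
    is 'technically safe' yet likely to feel unsafe to a passenger. The 1 m
    threshold is an illustrative assumption, not a standard."""
    min_clearance = min(clearances_m)
    if min_clearance <= 0.0:
        return "collision"
    if min_clearance < comfort_threshold_m:
        return f"technically safe, but minimum clearance was {min_clearance:.3f} m"
    return "safe and comfortable"

# A 60 mph freeway pass that misses another vehicle by about a quarter of an inch (~6 mm):
print(assess_pass([2.0, 0.5, 0.006, 1.5]))
```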

Jim Anderton:
Well, that’s an interesting way to describe it. Is it possible that automakers, for example, could use simulation technology like this to determine consumer preferences and to see how their potential motorists will react to the autonomous systems themselves? 

Mike Dempsey:
Yeah, I think so. We’ve been working on a project here in the UK which is funded by the UK government; the project’s called Derisk. One of the partners there, Imperial College London, has done some work using VR headsets. They put members of the public into scenarios where there’s traffic coming, trying to understand from those subjects how they felt, whether they felt safe and how they perceived that scenario. And it’s some really interesting work that’s leading us down this path of what safety is and how we quantify what that is for an autonomous vehicle. And I think for the OEMs, there’s a huge amount of work to be done there, whether it’ll be done by universities or more by the OEMs. I think perhaps by the OEMs, because it will become something that defines your brand, how safe you are or how you handle things. 

Jim Anderton:
Mike, a final question, and a tough one: with the technology as it exists today, our ability to code, our hardware, our sensors, our processors, do we have the technology on the ground today to achieve truly autonomous driving based on what is available off the shelf right now? 

Mike Dempsey:
So yes, we can make the vehicles drive around and handle the day-to-day situations. The challenge is all of these edge cases, and how we are going to handle, and be sure that we can handle, these high-risk situations. That’s really where the challenge is now: to get to the point where we can be sure, and have that safety assurance, that the vehicles are there. Because we already see trials, and have done for a few years now, of vehicles handling the routine driving; they can do that. It’s how we take the safety driver out and make sure they can handle everything else. 

Jim Anderton:
Mike Dempsey, managing director of Claytex, thanks for joining me on the show today. 

Mike Dempsey:
Thanks, Jim. 

Jim Anderton:
And thank you for joining us. See you again next time on Designing the Future.
