
November 16, 2022

Do delivery drones have cameras?

There’s a saying: use the right tool for the job. It means don’t use a tool that is too big or too small for the task at hand; use one that is sized correctly. In the delivery space, the wrong tool is often used: multi-thousand-pound vehicles deliver small packages on demand all the time. At Wing, we’re designing a better tool. To deliver lots of small, lightweight packages, we designed a drone that is small and lightweight. Rightsizing the tool for the task is a key design philosophy for Wing. It drives everything from how we understand the problem we’re trying to solve, to the next iterations of drones we plan to build, to how our drones see and perceive the world around them.

What a drone sees

When most people think about what a drone sees, they picture the drones they most often encounter: off-the-shelf drones used for filming or photographing things from above. These are great for certain use cases, like real estate, filming, or photography. Our drones, however, have a different mission and different requirements. The system we’ve built is highly automated, with pilots that oversee multiple drones at once, and it is weight constrained. There is no need for high-definition video or imagery. Instead, we designed our drones’ perception capabilities to fit the specific needs of the problem we’re trying to solve: ensuring packages are delivered reliably.

Our drones have sensors that help them deliver packages in the correct location. Think of a pin art toy, the kind where you press your hand or face into a bed of little pins to leave an imprint. Much like those pins, multiple sensors on our drones receive light to determine the distance to the ground and to nearby obstacles. The drone’s algorithms use those measurements to create an imprint of what’s below, building a three-dimensional map of points on the ground so the drone can identify an optimal delivery spot. If the sensors perceive an obstacle like a car, the drone simply moves over slightly to deliver the package.
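To make the idea concrete, here’s a minimal sketch of that “move over slightly” logic, assuming the range readings have already been turned into a simple grid of heights above expected ground level. The grid size, tolerance value, and the `find_delivery_cell` helper are illustrative assumptions, not Wing’s actual implementation.

```python
# Hypothetical sketch of the "pin art" idea: treat downward range readings
# as a height map, and nudge the delivery point sideways if something tall
# (like a car) sits directly beneath the drone.
import numpy as np

GROUND_TOLERANCE_M = 0.3  # readings within this band count as clear ground


def find_delivery_cell(heights: np.ndarray) -> tuple[int, int] | None:
    """Return the (row, col) of a clear cell nearest the grid center.

    `heights` holds the measured height of the surface below each cell,
    relative to the expected ground level (0.0 = flat ground).
    """
    rows, cols = heights.shape
    center = np.array([rows / 2, cols / 2])
    clear = np.argwhere(np.abs(heights) < GROUND_TOLERANCE_M)
    if clear.size == 0:
        return None  # nothing below looks like flat, open ground
    # Pick the clear cell closest to the spot directly under the drone,
    # i.e. "move over slightly" rather than searching far away.
    distances = np.linalg.norm(clear - center, axis=1)
    row, col = clear[np.argmin(distances)]
    return int(row), int(col)


# Example: a 6x6 patch of ground with a ~1.5 m tall obstacle (a car)
# occupying the cells directly under the drone.
heights = np.zeros((6, 6))
heights[2:4, 2:4] = 1.5
print(find_delivery_cell(heights))  # a nearby clear cell, e.g. (3, 4)
```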
While at the delivery zone, our drones use the information received from our sensors to build a very simplified view of the world around them, a process known as semantic segmentation. You can think of this like a paint-by-numbers exercise. Using an onboard computer and learned computer vision models, the drone categorizes what its sensors perceive below as roads, trees, or other objects based on the drone’s location. The result is a color-coded map of the world that wouldn’t make much sense to a human but allows the drone to understand its surroundings.
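As a loose illustration of the paint-by-numbers idea, here’s a minimal sketch, assuming per-cell features and a stand-in classifier. The class list, the colors, and `fake_model` are hypothetical placeholders; a real system would run a learned vision model onboard.

```python
# Hedged "paint by numbers" sketch of semantic segmentation: assign each
# grid cell a class label, then render those labels as a flat color map.
import numpy as np

# Illustrative classes and colors, not Wing's actual label set.
CLASS_COLORS = {
    0: (90, 90, 90),    # road
    1: (20, 120, 20),   # tree / vegetation
    2: (200, 180, 60),  # other object
}


def segment(features: np.ndarray, model) -> np.ndarray:
    """Label every cell in an H x W x F feature grid with a class ID."""
    h, w, f = features.shape
    labels = model(features.reshape(-1, f))  # any per-cell classifier
    return labels.reshape(h, w)


def to_color_map(labels: np.ndarray) -> np.ndarray:
    """Turn an H x W label grid into an H x W x 3 color-coded image."""
    colors = np.zeros((*labels.shape, 3), dtype=np.uint8)
    for class_id, rgb in CLASS_COLORS.items():
        colors[labels == class_id] = rgb
    return colors


def fake_model(x: np.ndarray) -> np.ndarray:
    """Stand-in for a learned model: threshold a single feature channel."""
    return np.digitize(x[:, 0], bins=[0.3, 0.7])


features = np.random.rand(4, 4, 2)  # pretend sensor-derived features
print(to_color_map(segment(features, fake_model)))
```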
Not all living things see the world in the same way. Dogs see fewer colors than humans. Snakes can sense their prey in the dark by its heat signature. But you can’t program a living thing to see differently. With machines you can, and that lets you tailor their perception capabilities to the function they are meant to serve. With Wing’s drones, we’ve designed a perception system that is the right tool for the job.

Subscribe to Wing’s blog to get the latest news and tech developments.