Improve your navigation

Flight Performances

  • 32-minute flights
  • Best-in-class propulsive efficiency (propellers: 66% figure of merit)
  • Designed for outdoor missions as well as indoor (warehouses, tunnels)
  • Rotating twin sensor for obstacle avoidance
  • IPX3: withstands rain and 45 km/h winds
  • Maximum speed: 17 m/s forward, 16 m/s backward and lateral
  • Wind resistance: 12.7 m/s
  • Flight time: 32 min
  • Max climbing speed: 4 m/s
  • Max descent speed: 3 m/s
  • Max practical ceiling above sea level: 5,000 m
  • Range: 22.5 km at 14 m/s without wind
  • Max angular speed: 300°/s on pitch and roll axes, 200°/s on yaw axis

Designed through biomimicry, the new bio-inspired propeller blades use a leading edge similar in shape to the fins of humpback whales.


This allows better propulsive efficiency: for the same rotation speed, the thrust is increased. The effect is comparable to an enlargement of the rotor diameter.


Acoustic noise is also reduced, especially the tonal noise coming from the leading edge. As a result, ANAFI Ai is quieter [71.5 dBSPL (A) at 1 m] than the Skydio 2 [76.4 dBSPL (A) at 1 m].


  • Flight time of more than 32 minutes
  • Top speed of 17 m/s (61 km/h) in forward flight and 16 m/s (58 km/h) in lateral and backward flight, thanks to the optimized aerodynamic performance of the body and the ANAFI Ai powerplant
  • Wind resistance of 12.7 m/s (45 km/h)
  • 22.5 km range at 14 m/s without wind thanks to the high efficiency of the motor/propeller torque and the high autonomy of the battery

To ensure its safe flight, ANAFI Ai is equipped with:

  • 2 IMUs (an ICM-40609-D and an ICM-42605)
  • LIS2MDL magnetometer
  • UBX-M8030 GPS
  • TI OPT3101 time-of-flight (ToF) sensor
  • LPS22HB barometer
  • vertical camera


ANAFI Ai is also equipped with a multidirectional depth sensing system (stereo vision) described in the “Autonomous Flight” section.


Sensors’ characteristics

Flight IMU: ICM-40609-D

  • 3-axis gyroscope
  • Range: ±2,000 °/s
  • Resolution: 16.4 LSB/°/s
  • Bias/accuracy: ±0.05 °/s (after thermal and dynamic calibration)
  • 3-axis accelerometer
  • Range: ±16 g
  • Resolution: 2.048 LSB/mg
  • Bias/accuracy: ±0.5 mg (X-Y), ±1 mg (Z) (after thermal and dynamic calibration)
  • Temperature regulation: controlled heating system relative to ambient temperature, stabilized within ±0.15 °C
  • Measurement frequency: 2 kHz


Magnetometer: LIS2MDL

  • Range: ± 49.152 G
  • Resolution: 1.5 mG
  • Bias/accuracy: ± 15 mG (after compensation, at maximum motor speed)
  • Measurement frequency: 100 Hz


Barometer: LPS22HB

  • Range: 260 to 1260 hPa
  • Resolution: 0.0002 hPa
  • Bias/accuracy: ± 0.1 hPa
  • Temperature regulation: controlled heating system relative to ambient temperature, stabilized within ±0.2 °C
  • Measurement frequency: 75 Hz
  • Measurement noise: 20 cm RMS


GNSS: UBX-M8030

  • Ceramic patch antenna of 25 x 25 x 4 mm, allowing a +2 dB gain improvement compared to ANAFI
  • Sensitivity: -148 dBm (cold start) / -167 dBm (tracking & navigation)
  • Time-To-First-Fix: 40 seconds
  • Bias/accuracy: Position (standard deviation 1.4 m), Speed (standard deviation 0.5 m/s)


Vertical camera

  • Sensor format: 1/6 inch
  • Resolution: 640x480
  • Global shutter sensor
  • Black & white
  • FOV: horizontal viewing angle: 53.7° / vertical viewing angle: 41.5°
  • Focal length: 2.8 mm
  • Optical flow ground speed measurements at 60 Hz
  • Point of interest calculation for accurate hovering (15 Hz) and accurate landing (5 Hz)


ToF: TI OPT3101

  • Range: 0-15 m
  • Resolution: 0.3 mm
  • Bias: ± 2 cm (after calibration)
  • Measuring frequency: 64 Hz


Vertical camera IMU: ICM-42605

  • 3-axis gyroscope
  • Range: ±2,000 °/s
  • Resolution: 16.4 LSB/°/s
  • Bias/accuracy: ±0.1 °/s (after dynamic calibration)
  • 3-axis accelerometer
  • Range: ±16 g
  • Resolution: 2.048 LSB/mg
  • Bias/accuracy: ±2.0 mg (X-Y), ±5.0 mg (Z) (after dynamic calibration)
  • Measuring frequency: 1 kHz
  • Hardware synchronization with the vertical camera, accuracy: 1 µs

The ANAFI Ai flight controller offers easy and intuitive piloting: no training is required to fly the drone. It allows the automation of many flight modes (Flight Plan, Cameraman, Hand take-off, Smart RTH).


Sensor fusion algorithms combine the data from all sensors to estimate the attitude, altitude, position and velocity of ANAFI Ai.


State estimation is essential for the proper functioning of drones. Quadrotors are by nature unstable when the flight controller is in open loop; to pilot them easily, let alone to operate them autonomously, it is necessary to stabilize them by closed loop control algorithms. These algorithms compute and send to the motors the commands needed for ANAFI Ai to reach the desired trajectories.
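To illustrate the closed-loop stabilization principle described above, the sketch below stabilizes a toy one-degree-of-freedom rigid body with a proportional-derivative feedback law. The model, gains and loop rate are hypothetical and purely illustrative; this is not ANAFI Ai's actual control law.

```python
class PDController:
    """Proportional-derivative feedback on a single attitude axis."""

    def __init__(self, kp: float, kd: float):
        self.kp = kp
        self.kd = kd

    def command(self, angle_error: float, rate_error: float) -> float:
        return self.kp * angle_error + self.kd * rate_error


def simulate(steps: int = 2000, dt: float = 0.0005) -> float:
    """Stabilize a toy 1-DOF rigid body (unit inertia) around 0 rad."""
    angle, rate = 0.3, 0.0                          # initial tilt disturbance (rad)
    controller = PDController(kp=40.0, kd=8.0)      # illustrative gains
    for _ in range(steps):
        torque = controller.command(-angle, -rate)  # drive both errors to zero
        rate += torque * dt                         # integrate angular acceleration
        angle += rate * dt                          # integrate angular rate
    return angle
```

Run in open loop (zero gains), the same model drifts away from any disturbance; the feedback terms are what make the trajectory converge.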


Indoor flight

In the absence of a GPS signal, ANAFI Ai relies mostly on the vertical camera measurements for velocity and position estimation.


Vertical camera measurements are produced by two main algorithms:

  • optical flow, for velocity estimation
  • keypoint detection and matching, for position estimation

Vertical camera algorithms keep running in low-light conditions because ANAFI Ai is equipped with a pair of LED lights, located next to the vertical camera. These allow the drone to remain stable below 5 m above the ground, especially when flying indoors or in a GPS-denied environment. The LED light power adapts automatically, depending on the algorithms' needs.
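As a sketch of the optical flow principle, the snippet below converts a pixel displacement measured between two frames into a metric ground speed with a pinhole camera model. The focal length in pixels is an assumed value for illustration, not the vertical camera's actual calibration; only the 60 Hz measurement rate comes from the specifications above.

```python
def ground_speed(flow_px: float, altitude_m: float,
                 focal_px: float = 500.0, fps: float = 60.0) -> float:
    """Metric horizontal speed (m/s) from per-frame optical flow (pixels).

    Assumes a downward-facing camera over flat ground (small-angle pinhole
    model); focal_px is a hypothetical calibration value.
    """
    flow_rad_per_s = (flow_px / focal_px) * fps  # angular rate of the feature
    return flow_rad_per_s * altitude_m           # scale by height over ground


# Example: 5 px of flow per frame, seen from 2 m altitude at 60 Hz
# -> (5 / 500) * 60 * 2 = 1.2 m/s
```

The same relation shows why altitude matters: the identical pixel flow at twice the height corresponds to twice the ground speed, which is why the ToF and barometer measurements feed the same estimator.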

Key features

  • Rotating, wide field of view perception system
  • Surrounding environment depth extraction from stereo matching and depth from motion
  • Occupancy grid representation of the environment
  • Autonomous obstacle detection and avoidance at speeds up to 8 m/s (29 km/h - 18 mph)


This chapter details the sensors, hardware and algorithms used by ANAFI Ai to provide autonomous flight capabilities.


It is organized as follows:

  • in-depth description of the perception system of ANAFI Ai
  • perception algorithms used to reconstruct the 3D environment surrounding the aircraft
  • replanning and obstacle avoidance


Perception system strategy

3D environment perception is a key capability to achieve autonomous flight, especially in obstructed environments. It is a condition for guaranteeing obstacle detection and avoidance, which reduces the supervision load of the drone operator, increases the mission success rate and ensures the safety of the aircraft.


An efficient perception solution is required to unlock the full potential of a flying camera, able to translate and rotate freely in all directions without constraint. In particular, a perception system must allow capturing information on the surrounding environment, in directions that are consistent with the translational flight motion – whatever the orientation of the camera.


ANAFI Ai relies on a unique technical solution based on two mechanical gimbals to decouple the orientation of the main camera and the perception system:

  • the main camera is mounted on a pitch-roll-yaw 3-axis gimbal making its 3D orientation independent from that of the drone
  • the perception system is mounted on a single axis pitch gimbal – coupled to the drone’s yaw movement, it can be oriented in any direction


The pitch axes of the two gimbals are colinear and merged to achieve an ultra-compact design.


With this solution, it is possible to point the main camera and the perception system in two different directions. This design avoids the use of expensive cameras on the sides, top, bottom and back of the drone, while still providing a large accessible field of view to the perception system.


This section is organized as follows:

  • details on the sensors used for the perception system
  • specifications of both the gimbal of the main camera and the one of the perception system
  • strategies for orienting the perception system to exploit the potential of the dual gimbals design


The perception system relies on a pair of identical cameras, sharing the same pitch axis.

The specifications of the sensors are the following:

  • Model: Onsemi AR0144CSSM28SUD20
  • Color: monochrome
  • Resolution: 1280 x 800 pixels
  • Frame rate: 30 fps
  • Global shutter
  • Full horizontal field of view: 118° (110° used for perception)
  • Full vertical field of view: 72° (62° used for perception)
  • Focal length: 1.47 mm (0.039 inches - 492.94610 pixels)
  • Aperture: f/2.7


The specifications of the stereo pair are the following:

  • Shared pitch axis
  • Baseline/distance: 62 mm (2.44 inches)
  • Synchronous acquisition at 30 fps


Dual Gimbal

The gimbal of the main camera is a pitch-roll-yaw 3-axis gimbal with the following specifications:

  • pitch end stops: -116°/+176°
  • roll end stops: +/- 36°
  • yaw end stops: +/- 48°


The gimbal of the perception system is a single axis pitch gimbal with the following specifications:

  • pitch end stops: -107°/+204°
  • time of travel from one end stop to the other: 300 ms


The perception system benefits from 311° of travel (296° unmasked by the drone body), which allows backward perception.

The system has been designed so that:

  • the propeller blades cannot enter the field of view of the main camera
  • the main camera does not mask the field of view of the perception system
  • both the main camera and the perception system can fully tilt backward to protect the lenses, during storage or in case of in-flight emergency

When tilted backward, the perception system rotates high up, offering a clear view.

Environment reconstruction

The reconstruction of the surrounding 3D environment for autonomous flight is performed in two steps:

  • extraction of depth information from the perception system, in the form of depth maps
  • fusion of the depth map data into a 3D occupancy grid


Two methods are used to generate depth maps from the perception sensors:

  • depth from stereo matching
  • depth from motion


Depth from stereo matching

The main method used for depth information extraction relies on the parallax between the two stereo cameras of the perception system. By photographing the environment in the same direction but from two different positions, objects in the field of view of the perception system appear at different positions in the pictures produced by the two cameras. The closer the object, the larger this difference in position.


The strategy thus consists of identifying points in the pictures produced by the left and right stereo cameras that correspond to a same feature in the field of view of the perception system, and measuring the position difference of these points in the two pictures. This difference is called the disparity and is measured in pixels.

The disparity can then be linked to the depth of each of these points using the relation depth = focal × baseline / disparity, where the depth and the baseline are expressed in the same unit, and the focal length and the disparity are expressed in pixels.
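This relation can be evaluated directly with the stereo pair's focal length in pixels (≈492.95, from the sensor specifications above) and its 62 mm baseline; the helper below is a straightforward sketch of it.

```python
FOCAL_PX = 492.946   # stereo camera focal length, in pixels (from the specs)
BASELINE_M = 0.062   # distance between the two stereo cameras, in meters


def depth_from_disparity(disparity_px: float) -> float:
    """depth = focal * baseline / disparity, in meters (infinite for 0 px)."""
    if disparity_px == 0:
        return float("inf")
    return FOCAL_PX * BASELINE_M / disparity_px


# Example: a feature seen with a 10-pixel disparity lies about 3.06 m away.
```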


The result of the computation takes the form of a 176 x 90 pixels depth map, for which the value of each pixel corresponds to a depth, in meters. The depth map is updated at 30 Hz.

An immediate consequence is that the depth measured through this method is discretized, as the disparity can only take integer values (whole pixel counts). A 3D point far enough from the perception system to generate a theoretical disparity smaller than one pixel is thus considered at infinity, since the corresponding actual, discrete disparity is 0. The precision of the stereo matching method hence decreases with distance, though sub-pixel disparity estimation methods exist to reduce this phenomenon.
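The discretization effect can be quantified with the same focal length (in pixels) and baseline as the stereo pair specified in this document: the gap between two consecutive measurable depths grows roughly quadratically with distance.

```python
FOCAL_PX = 492.946   # stereo camera focal length, in pixels (from the specs)
BASELINE_M = 0.062   # stereo baseline, in meters


def depth_step(disparity_px: int) -> float:
    """Depth gap (meters) between two consecutive integer disparities."""
    fb = FOCAL_PX * BASELINE_M
    return fb / disparity_px - fb / (disparity_px + 1)


# Between 1 px and 2 px of disparity, the measurable depths jump from ~30.6 m
# to ~15.3 m; at 50 px of disparity the step shrinks to about 1.2 cm.
```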

In addition, the disparity diverges as the depth gets closer to zero. Since the number of pixels in the images is limited, so is the value of the disparity. As a consequence, there is a minimum depth below which the perception system is blind. The value of this minimum depth is 36 cm (14.2 inches) for ANAFI Ai.


About calibration: each pair of stereo cameras is factory-calibrated to precisely measure the slight misalignments that may exist between the two cameras and to compensate for them in the onboard computation of the depth.


The user can also recalibrate the stereo camera pair with the test pattern provided in the packaging. Moreover, to some extent, the drone is capable of detecting potential calibration errors that may occur over its lifetime. In that case, the drone software tries to adjust and compensate for them; if it fails to do so, a notification appears to request a recalibration.

Depth from motion

The motion of the aircraft can also be exploited to collect images of the environment from different points of view, and thus reconstruct depth information. This method is called depth from motion, or monocular perception, since a single moving camera suffices to gather depth information.


The principle is similar to stereo vision, but rather than comparing images of the environment acquired by distinct observers at the same time, the perception compares images of the environment from a same observer at different times. If the drone is moving, the images from this unique observer are acquired from different points of view. Knowing the pose at which each frame was taken, points corresponding to a same feature in the different images can be triangulated and reconstructed in 3D.
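The triangulation step can be sketched with the classic midpoint method: given the camera position and the viewing ray toward the same feature at two different times, the 3D point is recovered as the point closest to both rays. This is a generic textbook construction, not ANAFI Ai's onboard implementation.

```python
import numpy as np


def triangulate_midpoint(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1 + t*d1 and p2 + s*d2."""
    p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * c - b * b            # ~0 when the rays are (near-)parallel
    # Closed-form minimization of |(p1 + t*d1) - (p2 + s*d2)|^2 over t, s
    t = (b * (d2 @ w) - c * (d1 @ w)) / denom
    s = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))
```

The near-zero denominator for parallel rays is the algebraic face of the focus-of-expansion limitation discussed below: rays taken along the direction of motion barely diverge, so the triangulation becomes ill-conditioned.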


This results in a 3D point cloud, containing up to 500 points for ANAFI Ai, generated at 10 Hz.

The depth from motion algorithm in ANAFI Ai usually generates less information (sparse point cloud) than the stereo matching algorithm and requires the drone to be moving to gather information. Furthermore, this algorithm fails to extract information in the exact direction of motion (at least for straight translations) since in this direction, objects appear almost motionless in the images (focus of expansion).


However, it has a better detection range (theoretically infinite) than the stereo matching method.

Occupancy grid

The depth information from the stereo and monocular perception algorithms is integrated into an occupancy grid. This grid discretizes the 3D surrounding environment into 3D cubes, called voxels. Each voxel is attributed a probability of being occupied by an obstacle or, on the contrary, of being free of obstacles.


A raycasting algorithm is used to integrate the depth information into the occupancy grid. For each pixel of the depth map generated by depth from stereo matching (converted into a 3D point), and for each point of the point cloud produced by depth from motion:

  • A ray is cast in the occupancy grid, from the position of the perception system, to the position of the 3D point.
  • The probability of occupation of the voxel containing the 3D point is increased.
  • The probability of occupation of all the voxels crossed by the ray (except the one containing the 3D point) is decreased.
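A minimal sketch of this update rule is shown below, using the common log-odds representation of occupancy probabilities. The voxel size and the hit/miss increments are illustrative assumptions; Parrot's onboard implementation and parameters are not given in this document.

```python
import numpy as np

VOXEL = 0.25              # grid resolution in meters (assumption)
HIT, MISS = 0.85, -0.4    # log-odds increments for endpoint / free voxels (assumption)


def update_grid(grid: dict, origin, point) -> None:
    """Cast a ray from the sensor origin to a measured 3D point."""
    origin = np.asarray(origin, dtype=float)
    point = np.asarray(point, dtype=float)
    direction = point - origin
    n_steps = int(np.linalg.norm(direction) / VOXEL)
    end = tuple(np.floor(point / VOXEL).astype(int))
    # Voxels crossed before the endpoint become less likely to be occupied
    for k in range(n_steps):
        sample = origin + direction * (k / max(n_steps, 1))
        voxel = tuple(np.floor(sample / VOXEL).astype(int))
        if voxel != end:
            grid[voxel] = grid.get(voxel, 0.0) + MISS
    # The voxel containing the 3D point becomes more likely to be occupied
    grid[end] = grid.get(end, 0.0) + HIT


def occupancy_probability(log_odds: float) -> float:
    """Convert a voxel's accumulated log-odds back to a probability."""
    return 1.0 / (1.0 + np.exp(-log_odds))
```

Accumulating log-odds rather than overwriting probabilities is what gives the grid its temporal-filter behavior: a single noisy hit is outweighed by repeated consistent observations.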

The grid thus acts both as a temporal filter of the depth information, absorbing potential noise in the depth measurements, and as a memory of previous acquisitions, making it possible to navigate in complex environments, even without a continuous 360° field of view of the perception system.


The occupancy grid constitutes the base for the motion planning algorithms used by ANAFI Ai for autonomous flight and obstacle avoidance.

Obstacle avoidance

With the knowledge of the 3D environment surrounding the aircraft stored in the occupancy grid, it is possible to provide obstacle avoidance capabilities to ANAFI Ai. This offers considerable additional safety to autonomous missions but is also useful for manual flight, especially if the line of sight between the pilot and the aircraft is degraded.


Every 30 ms, ANAFI Ai predicts what the nominal trajectory to follow will be over a short time horizon in the future. This prediction is deduced from the references sent by the user, whether it be piloting commands from the hand controller, waypoints to join for flight plan or an input trajectory. Then, using a simulated internal drone model, a replanning algorithm computes the smallest possible corrections to this predicted nominal trajectory that make it both collision free and feasible by the drone.
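The collision-check half of this loop can be sketched as follows: the predicted trajectory is sampled and each position is tested against the occupancy grid. The voxel size and occupancy threshold are illustrative assumptions, with the grid mapping voxel indices to occupancy probabilities.

```python
import numpy as np

VOXEL = 0.25               # grid resolution in meters (assumption)
OCCUPIED_THRESHOLD = 0.5   # probability above which a voxel blocks (assumption)


def first_collision(trajectory, grid: dict) -> int:
    """Index of the first predicted position inside an occupied voxel, or -1.

    `grid` maps voxel index tuples to occupancy probabilities; unknown voxels
    are treated as 0.5 (neither free nor occupied).
    """
    for i, position in enumerate(trajectory):
        voxel = tuple(np.floor(np.asarray(position, dtype=float) / VOXEL).astype(int))
        if grid.get(voxel, 0.5) > OCCUPIED_THRESHOLD:
            return i
    return -1
```

In the actual replanning scheme described above, a detected collision would then trigger the computation of the smallest feasible correction to the predicted trajectory rather than a simple stop.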

ANAFI Ai's obstacle avoidance has been designed to handle speeds up to:

  • horizontal: 8 m/s (29 km/h - 18 mph)
  • ascending: 4 m/s (14 km/h - 8 mph)
  • descending: 3 m/s (11 km/h - 7 mph)


Avoidance performance is limited in rainy or strong-wind conditions, in low light, or in environments with disturbed satellite navigation. In addition, the lenses of the perception system should be clean before flying.

Key features

Air SDK (see the SDK section) allows developers to access every drone sensor, camera, connectivity interface and autonomous feature. They can therefore customize the drone's behavior to create Flight missions. Every Flight mission contains a set of basic behaviors, or modes:

  • Ground: behaviors while the motors are stopped, such as sensor calibrations
  • Take-off: various take-off strategies
  • Hovering: holding a fixed point
  • Flying: manual and autonomous flight functions
  • Landing: various landing strategies
  • Critical: when a critical condition is detected


The missions developed internally by Parrot are available in FreeFlight 7. Custom Flight missions can implement new behaviors or reuse those of the Default mission.

The ANAFI Ai camera has the most accurate stabilization on the micro-UAV market.


It combines two types of stabilization:

  • 3-axis mechanical stabilization, with the gimbal
  • 3-axis electronic image stabilization (EIS)


The mechanical stabilization keeps the camera's aiming axis steady regardless of the drone's flight attitude. The electronic image stabilization corrects the effect of micro-vibrations at frequencies beyond 100 Hz, which cannot be managed by a mechanical actuator.


Main camera gimbal

The mechanical stabilization allows the stabilization and orientation of the camera's viewing axis on all 3 axes.

Key features

  • 3 axes of mechanical stabilization for the main camera.
  • 292° of vertical travel, from -116° to +176°


Gimbal Performances

                                 ANAFI Ai               Skydio 2        MAVIC 2 Air
  Angular stabilization accuracy ±1°                    No data         No data
  End stops                      Pitch: -116°/+176°     Pitch: ±124°    Pitch: -135°/+45°
                                 Roll: ±40°             Roll: ±120°     Roll: ±45°
                                 Yaw: ±52°              Yaw: ±12.5°     Yaw: ±100°
  Piloting range                 ±90° (pitch axis)      -110°/+45°      -90°/+24°
  Maximal rotation speed         ±180°/s (pitch axis)   No data         100°/s
  Front camera crashproof                               No data         None


The EIS algorithm corrects the effects of wobble and the distortion of the wide-angle lens, and it digitally stabilizes the image along the 3 axes (roll, pitch and yaw).


The method consists of applying a geometric transformation to the image. The geometric transformation is associated with a timestamp and a precise position thanks to the IMU.


A geometric transformation is applied to each image according to the distortion of the optics, the wobble, and the measured movements of the camera module.
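As a hedged illustration of such a rotation-compensating transformation: a pure camera rotation R maps image points through the homography H = K Rᵀ K⁻¹, so applying H undoes the measured rotation. The intrinsic matrix K below is a hypothetical calibration, not ANAFI Ai's actual one, and lens distortion is ignored for brevity.

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal 500 px, principal point 320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])


def stabilizing_homography(roll_rad: float) -> np.ndarray:
    """Homography canceling a roll (rotation about the optical axis)."""
    c, s = np.cos(roll_rad), np.sin(roll_rad)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return K @ R.T @ np.linalg.inv(K)


def warp_point(H: np.ndarray, x: float, y: float):
    """Apply a homography to a pixel coordinate."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

In a full EIS pipeline, one such transformation is computed per frame (or per scanline group, to handle wobble) from the timestamped IMU attitude, then applied as an image warp.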

The camera can tilt vertically from -116° to +176° around the pitch axis, providing observation above and below the drone. This is a unique capability on the micro-UAV market.