Natural Tasking - Reducing complexity in robot manipulator programming
Staff posted on April 01, 2020
The complexity of environment and task drive need for Natural Tasking.

Article by Jeff Sprenger.

Robotic arms, also known as manipulators, are used in many diverse applications: materials handling, welding, adhesive application, grinding, polishing, inspection, light manufacturing, silicon fab testing, surgery, oil drilling, and subsea machine repair. These robotic solutions use both industrial robots operating behind protective fences and collaborative robots that safely interact with humans in a workspace.

Robots are operating in increasingly complex environments. The dynamic nature of these environments and tasks adds to the complexity: moving parts tracked by vision systems and the coordination of multiple robots working in a single workspace to accomplish a series of tasks. The more complex the environment, the more difficult it is to program using conventional methods of direct motion control. In addition, the tasks themselves become more complex, moving from simple pick and place to welding and assembly tasks. The time it takes to program multiple robots for tasks using motion primitives grows non-linearly with the environment complexity.

Natural Tasking is an approach to robot control that exploits kinematic redundancy by matching the task space with the constraint space. For instance, an application for applying adhesive or performing welding requires only five degrees of constraint: the tool can rotate about its long axis, like a pen. For a 6-axis robot, this leaves one degree of freedom that can be exploited to find the optimal path. Over-constraining the robot by forcing all six degrees of freedom can result in unnatural motion, i.e., motion that would not seem human-like.
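As a minimal sketch of exploiting that free degree of freedom: the roll about the tool axis can be sampled and each candidate scored with a cost function. The cost below is a hypothetical stand-in for whatever the controller actually optimizes (distance from joint limits, obstacle clearance, and so on):

```python
import numpy as np

# A 5-DOF task leaves the roll about the tool axis free, so sample
# candidate roll angles and keep the cheapest pose.
def best_roll(cost, n_samples=36):
    rolls = np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False)
    costs = [cost(r) for r in rolls]
    return rolls[int(np.argmin(costs))]

# Toy cost: prefer rolls near 0 (e.g. closest to the arm's rest posture).
roll = best_roll(lambda r: min(r, 2 * np.pi - r))
print(round(roll, 3))   # 0.0
```

In practice the cost would be evaluated through the robot's inverse kinematics, but the structure of the search is the same.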

This approach reduces the complexity of programming robotic systems: the user specifies what to do rather than how to do it. For example, the robot can be instructed to pick up an object, independent of its position and orientation, then move the object to another location specified by a tracked marker in the environment. The movement from point A to point B is handled automatically by the path planning, object avoidance, and inverse kinematics algorithms built into the motion control system. (See Figure 1.)

Figure 1: The sphere is symmetric and can be grasped from any angle. The figure shows 3 approaches to grasp a sphere. The control system uses a 3 DOF constraint (position only) and optimizes the approach by automatically rotating the gripper to avoid collision with the box.

Coordinating Multiple Robots for a Single Task

Production line assembly is one application that benefits from natural tasking. Multiple robot arms can work in coordination to combine a set of parts into a single assembly, which is defined using a CAD system. The robot must be able to recognize the parts in arbitrary order and orientation on multiple trays, attach the appropriate grippers, grasp and hold parts together during assembly, test the fit with visual inspection, and then continue to the next steps in the assembly. Assembling multiple parts can include inserting one part into another; this insertion uses force feedback, called admittance control, to ensure a correct fit. Programming this coordination can take weeks if described using motion primitives that move a set of joints on the robot arm, and if a similar assembly is later required with somewhat different parts, the programming task must be repeated. With natural tasking, the assembly is described in high-level terms that move and combine parts regardless of location or orientation, performing real-time collision avoidance along the way. Changing the assembly to use a different but similar part then takes only a few hours: change the task, retest in simulation, and then run directly on the physical robot hardware.

Robotic welding is an example that can be performed using multiple robot arms. One arm uses the welding tool, while the other arms hold the parts to be welded. The control challenge is to coordinate the movements of the robot arms. The tool tip travels a path along the part, which can only be reached if the part is moved while the tool follows the path. This is a complex operation that alters the location of the tool path in 3D space, keeping the path relative to the moving parts. In addition, the tool must move at constant linear speed along the path, so joint rotation speeds must be continually adjusted. The orientation of the parts must also be held so that the tool tip points down, affecting how the weld pool flows.
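The constant-linear-speed requirement can be sketched with the standard differential-kinematics relation: joint rates are obtained from the desired Cartesian tool velocity through the (pseudo-)inverse of the Jacobian, recomputed at each point along the path. The Jacobian below is a made-up numeric example, not from any particular robot:

```python
import numpy as np

# Keep the tool tip at a constant linear speed along a path by converting
# the Cartesian velocity into joint rates with the pseudo-inverse Jacobian.
def joint_rates_for_tool_speed(J, tangent, speed):
    """J: 3xN positional Jacobian; tangent: path direction; speed: m/s."""
    v = speed * tangent / np.linalg.norm(tangent)
    return np.linalg.pinv(J) @ v   # minimum-norm joint velocities

# Hypothetical Jacobian at one point along the weld path.
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.2],
              [0.0, 0.0, 1.0]])
qdot = joint_rates_for_tool_speed(J, np.array([1.0, 0.0, 0.0]), 0.05)
print(np.round(J @ qdot, 3))   # recovers the commanded 0.05 m/s along x
```

Because the Jacobian changes as the arm moves, the joint rates must be recomputed every motion-control cycle, which is why the joint rotation speeds are continually adjusted.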

Figure 2: A video shows a welding simulation using Actin from Energid. A single control system coordinates the movements of three robot arms, each with seven degrees of freedom (DOF).

If each robot has seven degrees of freedom (DOF), this solution requires the kinematic solver to determine how best to use the 21 DOF across all three robots to adjust the part and tool positions and orientations to meet the motion constraints of the tool along the weld path. Robots 1 and 2 rotate their respective parts while maintaining the joined seam, while Robot 3 moves the tool along the path wherever it is located. Robots 1 and 2 also orient the parts so that the welding tool tip on Robot 3 always points down, a requirement for welding.
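One common way to pose such a coordinated solve, sketched here under simplified assumptions (random placeholder Jacobians and purely block-diagonal constraints, whereas real seam constraints couple the arms), is to stack each arm's task Jacobian into one linear system over all 21 joint rates and take the minimum-norm solution:

```python
import numpy as np

np.random.seed(0)
n_joints = 7   # per arm

# Placeholder Jacobians mapping each arm's joint rates to constraint rates.
J1 = np.random.randn(6, n_joints)   # 6-DOF pose constraint on part A (robot 1)
J2 = np.random.randn(6, n_joints)   # 6-DOF pose constraint on part B (robot 2)
J3 = np.random.randn(5, n_joints)   # 5-DOF tool constraint (roll left free)

# Stack into one system over all 21 joints (block-diagonal for simplicity).
J = np.zeros((17, 3 * n_joints))
J[0:6, 0:7] = J1
J[6:12, 7:14] = J2
J[12:17, 14:21] = J3

xdot = np.zeros(17)                   # desired constraint-frame velocities
xdot[12:15] = [0.0, 0.0, -0.01]       # e.g. tool tip translating along the seam

# Minimum-norm joint rates for all 21 joints solved jointly.
qdot, *_ = np.linalg.lstsq(J, xdot, rcond=None)
print(qdot.shape)   # (21,)
```

With 17 constraint rows and 21 joints, four degrees of redundancy remain for the solver to spend on secondary objectives such as collision avoidance.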

Reducing Complexity Through Abstraction

Abstraction helps streamline operations in complex environments by reducing a high dimension problem to an easier to solve lower dimension problem. Consider trying to drive from your home to a new city. If you try to consider every change in every available degree of freedom (the accelerator, brake and steering wheel), the problem is intractable. There are simply too many changes that need to be made. Instead, you break the problem down into different layers of abstraction. At the base level, you have those low-level inputs that change the speed and direction of the car. One step up from there, and you have basic motion primitives, like left turns, right turns, and driving straight. Finally, you have the highest level: your current location and your destination. Your favorite navigation app is the equivalent of natural tasking: it allows you to specify the high-level goal, determines the necessary steps, and breaks that down into manageable primitives that you know how to accomplish.

The natural tasking approach relies on a multi-layer implementation of control:

1)      Motor / Servo control: Ensuring that the robot actuators move to the desired position at the right time by controlling the current supplied to the motor using position encoder feedback.

2)      Motion: Moving a combination of robot joints to achieve a required pose at a specified tool offset at a specified time.

3)      Task: Decomposition of the task into a series of motion primitives. The tasks may be programmed using a scripting language rather than C++. Motion primitives include coordinated joint motions, end effector motions (linear and circular, based on the world coordinate frame or link frame), tool paths, as well as path planning in the known environment. These motions can be defined relative to some object, allowing the system to handle moving parts or targets.

Natural tasking allows the human operator to describe the desired start and end points of the end effector relative to objects in the environment, while the automated control system uses optimization to find the best path, choosing from multiple solutions. The human describes the task, the motion layer breaks it down into motion primitives with real-time adjustments according to environment changes, and the motor layer drives the robotic actuators to their desired positions and velocities. Each layer operates at a different update rate (e.g., motor: 10 kHz; motion: 500 Hz; task: 10–0.5 Hz), allowing each layer to react to different types of disturbances to the system.
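A toy illustration of the layered update rates: a single 10 kHz servo tick drives all three layers, with the motion and task layers running on divided-down schedules (the divisors below assume 500 Hz motion and 10 Hz task updates, from the rates mentioned above):

```python
# Rate scheduler sketch: each control layer runs at its own rate
# derived from a single 10 kHz servo tick.
SERVO_HZ = 10_000
MOTION_DIV = SERVO_HZ // 500   # motion runs every 20 servo ticks
TASK_DIV = SERVO_HZ // 10      # task re-plans every 1000 servo ticks

counts = {"servo": 0, "motion": 0, "task": 0}
for tick in range(SERVO_HZ):           # simulate one second
    counts["servo"] += 1               # current/position control every tick
    if tick % MOTION_DIV == 0:
        counts["motion"] += 1          # IK / trajectory update
    if tick % TASK_DIV == 0:
        counts["task"] += 1            # task sequencing / re-planning
print(counts)   # {'servo': 10000, 'motion': 500, 'task': 10}
```

The fast inner loop rejects motor-level disturbances, while the slow outer loop absorbs changes in the task and environment.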

Moving Parts

The separation of the task and motion control layers allows the engineer to abstract the position and orientation of a part or any other object in the environment. If the task is specified by a moving reference frame attached to the part, then the robot can grasp that part anywhere in the workspace. A computer vision tracking system can update the position of the part in real time, and the natural task is simply to grasp the part.
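A minimal sketch of the moving-reference-frame idea, using plain homogeneous transforms: the grasp is defined once in the part's frame, and the world-frame grasp pose follows wherever the tracker reports the part (all poses here are hypothetical numbers):

```python
import numpy as np

# Compose the tracked part pose with the part-relative grasp pose.
def to_world(T_world_part, T_part_grasp):
    return T_world_part @ T_part_grasp

# Helper: 4x4 homogeneous transform with translation only (no rotation).
def pose(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

T_part_grasp = pose(0.0, 0.0, 0.10)   # grasp 10 cm above the part origin
T_world_part = pose(0.5, -0.2, 0.0)   # where the tracker sees the part now
print(to_world(T_world_part, T_part_grasp)[:3, 3])   # [ 0.5 -0.2  0.1]
```

If the tracker reports a new `T_world_part`, the same grasp definition yields the new world-frame target with no reprogramming.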

Dynamic Collision Avoidance

The motion layer computes the inverse kinematics, defining the set of joint angles that places the end effector in the desired position and orientation. To move the end effector from point A to point B, the motion control computes a viable path from start to end, continually performing collision detection along the path while in motion. Collision detection includes self-collision of the robot manipulator with itself and collision with other objects in the environment. By leaving automated collision avoidance to the motion controller, the engineer can focus on the higher-level task. This approach also allows machine learning techniques to train at a level of abstraction with reduced complexity and solution-space dimension. Artificial intelligence methods can exploit the abstraction of motor control in the same way the human brain's higher-level motor coordination relies on lower-level control of muscles in the motor cortex and spinal cord.
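A toy sketch of the continual checking described above, assuming a single spherical obstacle and straight-line interpolation between the endpoints (a real motion controller would check the full robot geometry against the environment at each step):

```python
import numpy as np

# Densely sample the straight-line path and reject it if any sample
# comes within the obstacle's radius.
def path_is_clear(start, end, obstacle, radius, n=100):
    for t in np.linspace(0.0, 1.0, n):
        p = (1 - t) * start + t * end
        if np.linalg.norm(p - obstacle) < radius:
            return False
    return True

start = np.array([0.0, 0.0, 0.0])
end = np.array([1.0, 0.0, 0.0])
print(path_is_clear(start, end, np.array([0.5, 0.0, 0.0]), 0.1))   # False
print(path_is_clear(start, end, np.array([0.5, 0.5, 0.0]), 0.1))   # True
```

When a candidate path fails the check, the planner searches for an alternative, which is exactly the work the engineer no longer has to script by hand.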

Kinematic Redundancy and Optimization

The position (X, Y, Z) and orientation (roll, pitch, yaw) of the end effector together require six degrees of freedom (DOF). A robot with a minimum of six degrees of freedom is needed for full 6-DOF end effector placement. But many applications require fewer degrees of freedom. For example, a pen-on-surface application may allow the pen to rotate about its long axis with no change in result. That reduces the required DOF from 6 to 5, and the extra degree of freedom can be exploited to produce multiple solutions to the problem of posing the manipulator. Multiple solutions offer multiple choices that optimization can exploit to search for a high-quality solution. By considering different costs to the system and adjusting how they are weighted, the control system can find a good solution, if not the best solution, that satisfies the natural task requirements.
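A minimal sketch of that weighted-cost selection among candidate poses; the cost names, values, and weights below are hypothetical placeholders for whatever the control system actually measures:

```python
# Hypothetical scoring: with redundancy, many poses satisfy the same task,
# so weight several costs and keep the candidate with the lowest total.
def score(pose, weights):
    return sum(weights[k] * pose[k] for k in weights)

# Each candidate satisfies the task; the cost values are made-up numbers.
candidates = [
    {"name": "A", "limit_cost": 0.8, "travel_cost": 0.2, "clearance_cost": 0.1},
    {"name": "B", "limit_cost": 0.1, "travel_cost": 0.5, "clearance_cost": 0.9},
    {"name": "C", "limit_cost": 0.3, "travel_cost": 0.3, "clearance_cost": 0.2},
]
weights = {"limit_cost": 1.0, "travel_cost": 1.0, "clearance_cost": 2.0}
best = min(candidates, key=lambda p: score(p, weights))
print(best["name"])   # C (lowest weighted total)
```

Reweighting the costs changes which solution wins, which is how the same control system can be tuned to favor, say, obstacle clearance over joint travel.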


Natural tasking can simplify the programming of robot manipulator tasks in complex environments. It reduces the problem specification to what to do rather than how to do it, leaving the decomposition into a sequence of motion primitives to the control system, which can explore the multiple possible solutions made available by kinematic redundancy.


Jeff Sprenger is Director of Business Development at Energid Technologies Inc., makers of the Actin SDK, adaptive motion control software that uses natural tasking to solve control problems in complex industrial and commercial environments worldwide.

