A new example showcases capabilities of the DART integration. Let’s dive into robotics, explain what DART is able to do and how it compares to Bullet.
What is a robotic manipulator?
A robotic mechanism is constructed by connecting rigid bodies, called links, together by means of joints, so that relative motion between adjacent links becomes possible. Actuating the joints, typically with electric motors, then causes the robot to move and exert forces in desired ways. Additionally, there exist several sensors that allow us to read the current position, velocity or torque of an actuator.
A robotic manipulator is a robotic mechanism whose links are arranged in a serial fashion; that is, they form what we call an open chain. Without going too deeply into the mechanical details of the actual machines, and without loss of generality, for the remainder of this post a robotic manipulator can be fully described by the mass and inertia properties of its links, its kinematic configuration, its joint positions and its joint velocities. Moreover, we will assume that our manipulators can be controlled either via torque or via velocity commands.
What is this example about?
Before moving on, let us see what we can do in this example (and what we will explain in this post):
As we can see in the video, our example world consists of a robotic manipulator with a gripper attached to its end-effector (that is, the last link of the serial chain) and three small colored boxes. The user can control the manipulator with intuitive commands (e.g., move over the red box or move up/down) and pick up the small boxes to build towers. They can even destroy the towers that they created!
We will split the post into two main directions: (a) explaining how to use the DartIntegration (and some basic usage of DART), and (b) explaining the basics of the controller developed for this task (from a robotics perspective).
Using the DartIntegration
Dynamic Animation and Robotics Toolkit (DART) is an open-source library that provides data structures and algorithms for kinematic and dynamic applications in robotics and computer animation. Since its main focus is robotics applications, every entity in a DART world is a kinematic skeleton, that is, a collection of links interconnected by joints. Each joint can be actuated with different types of actuation (force/torque, velocity, position) or it can be fixed. DART additionally supports non-rigid bodies/links, but we will only consider robots with rigid-body links in this post. In DART, everything begins by creating an empty world:
auto world = dart::simulation::WorldPtr(new dart::simulation::World);
Inside this imaginary world, our robots will live and flourish. The next step is to create our robots. In DART, even the small boxes that have no actuation are considered kinematic skeletons. Let us create our first “robot” box (for more details on how to use DART, please follow their tutorials):
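A minimal sketch of how such a free-floating box skeleton can be created with the DART API follows; the dimensions, mass and color here are illustrative, not the values used in the actual example:

```cpp
/* Hedged sketch of creating a free-floating box "robot" with DART; the
   dimensions, mass and color below are illustrative, not the values used
   in the actual example. */
auto boxSkel = dart::dynamics::Skeleton::create("box");

/* A FreeJoint gives the box all six degrees of freedom w.r.t. the world */
dart::dynamics::BodyNode* body
    = boxSkel->createJointAndBodyNodePair<dart::dynamics::FreeJoint>().second;

/* Attach a box shape used for visualization, collision and dynamics */
auto shape = std::make_shared<dart::dynamics::BoxShape>(
    Eigen::Vector3d(0.06, 0.06, 0.06));
auto shapeNode = body->createShapeNodeWith<
    dart::dynamics::VisualAspect,
    dart::dynamics::CollisionAspect,
    dart::dynamics::DynamicsAspect>(shape);
shapeNode->getVisualAspect()->setColor(Eigen::Vector3d(0.8, 0.1, 0.1));

/* Set the mass and the matching inertia of the box */
dart::dynamics::Inertia inertia;
inertia.setMass(0.1);
inertia.setMoment(shape->computeInertia(0.1));
body->setInertia(inertia);
```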
There are several steps that we need to take in order to create a robot (in DART, but also in general). Imagine if we had to create a robot with multiple links and joints! For that reason, several conventions have been defined and adopted by the robotics community. The most popular ones are the Denavit–Hartenberg parameters and the product of exponentials formula (which is based on screw theory; for more information, please refer to the Modern Robotics book by Kevin M. Lynch and Frank C. Park).
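To get a feel for what these conventions encode, here is a tiny self-contained sketch (not from the example) of the forward kinematics of a hypothetical planar arm with two revolute joints; each convention is, in the end, a systematic way of composing such per-joint transformations:

```cpp
#include <cassert>
#include <cmath>

/* A hypothetical planar arm with two revolute joints and link lengths l1, l2;
   this is NOT the manipulator from the example, just an illustration of
   composing per-joint transformations into forward kinematics. */
struct Pose2D { double x, y, theta; };

Pose2D forwardKinematics(double q1, double q2,
                         double l1 = 1.0, double l2 = 1.0) {
    /* Each link rotates by its joint angle and translates along its length;
       composing the two transformations gives the end-effector pose */
    const double x = l1*std::cos(q1) + l2*std::cos(q1 + q2);
    const double y = l1*std::sin(q1) + l2*std::sin(q1 + q2);
    return {x, y, q1 + q2};
}
```

For instance, with both joint angles at zero the arm is fully stretched along x and the end-effector sits at (2, 0).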
Universal Robot Description Format
Nevertheless, these are still mathematical formulations and it takes quite some work to perform all the computations manually. DART (like most robotic simulators) already has these two conventions implemented and only requires the user to give the relative transformation from each link to its “parent” joint and from each joint to its “child” link. This greatly simplifies the life of the user, but writing this in C++ code is still demanding and error-prone (just have a look at the code above that creates a simple box). For this reason, several description formats have arisen over the years that try to make it easier and more intuitive to create your robots.
One of the most popular formats is the Universal Robot Description Format (URDF), which is part of the Robot Operating System (ROS) framework. URDF is an XML format for representing a robot model; for more details, please have a look at the ROS wiki. Here is what a part of the URDF model of the manipulator that we are going to use looks like:
This is just to give an idea of what a URDF file looks like; no need to understand it!
Loading a URDF model with DART
Assuming that we have in our possession the URDF model of our robot, here is how to load it with DART:
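A sketch with DART’s URDF loader (the file path is a placeholder, not the actual file from the example) looks roughly like this:

```cpp
/* Hedged sketch: load a skeleton from a URDF file with DART's loader.
   The path is a placeholder, not the actual file from the example. */
dart::utils::DartLoader loader;
dart::dynamics::SkeletonPtr manipulator
    = loader.parseSkeleton("path/to/robot.urdf");
manipulator->setName("manipulator");

/* Set the initial joint positions of the manipulator */
for(std::size_t i = 0; i < manipulator->getNumDofs(); ++i)
    manipulator->getDof(i)->setPosition(0.0);
```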
The KUKA LBR iiwa manipulator URDF that we used is a modified version of the one in the iiwa_ros package. The Robotiq 2F-85 gripper URDF that we used is a modified version of the one in the robotiq_arg85_description package. Once we have loaded/created all our robots, we add them to the DART world:
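Adding the skeletons is a one-liner each; in a hedged sketch (assuming `manipulator` and `box` are the skeletons created/loaded in the previous steps and `world` is the DART world from the beginning):

```cpp
/* Sketch: register every created/loaded skeleton with the DART world */
world->addSkeleton(manipulator);
world->addSkeleton(box);
```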
How does DART connect to Magnum?
DART connects to Magnum through the SceneGraph library, similarly to how the BulletIntegration does it. The DART integration provides two main classes: (a) DartIntegration::World, and (b) DartIntegration::Object. The most common usage will be something like the following:
/* DART world */
dart::simulation::WorldPtr dartWorld{new dart::simulation::World};

// ... add robots and objects into DART world ...

/* Create our DartIntegration object/world */
auto dartObj = new Object3D{&_scene};
auto world = std::make_unique<DartIntegration::World>(*dartObj, *dartWorld);

/* Simulate with time step of 0.001 seconds */
world->world().setTimeStep(0.001);

for(UnsignedInt i = 0; i < simulationSteps; ++i) {
    world->step();

    /* Update graphics at ~60Hz: one graphics update every 15 steps of
       0.001 sec is an update every 0.015 sec, i.e. roughly 66Hz */
    if(i % 15 == 0) {
        world->refresh();

        /* Get unused/deleted shapes */
        std::vector<std::unique_ptr<DartIntegration::Object>>& unusedObjects = world->unusedObjects();

        /* The user is expected to handle unused objects. One possible
           handling would be to remove them from the parent scene. */
        deleteObjectsFromScene(unusedObjects);

        /* Get updated shapes -- ones for which either the material or
           the mesh has changed */
        std::vector<std::reference_wrapper<DartIntegration::Object>> updatedObjects = world->updatedShapeObjects();

        updateMeshesAndMaterials(updatedObjects);

        /* Clear list of updated objects */
        world->clearUpdatedShapeObjects();
    }
}
In essence, the user needs to handle the deleted objects and the ones that need to be updated. In our example, nothing should be deleted and thus we do not handle deleted objects. In order to draw the meshes of our robots, we need to create a drawable structure. We will assume a Phong shader and create the following class:
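A simplified sketch of such a class follows; the names and members are illustrative, textured materials are omitted for brevity:

```cpp
/* Simplified sketch of a drawable that renders all meshes of one
   DartIntegration::Object with a Phong shader. Textured materials are
   omitted for brevity; names are illustrative, not the exact example code. */
class DrawableObject: public Object3D, public SceneGraph::Drawable3D {
    public:
        explicit DrawableObject(Shaders::PhongGL& shader,
            std::vector<GL::Mesh*> meshes, std::vector<Color4> colors,
            Object3D* parent, SceneGraph::DrawableGroup3D* group):
            Object3D{parent}, SceneGraph::Drawable3D{*this, group},
            _shader(shader), _meshes(std::move(meshes)),
            _colors(std::move(colors)) {}

    private:
        void draw(const Matrix4& transformationMatrix,
                  SceneGraph::Camera3D& camera) override {
            /* One object can carry several meshes, each with its own color */
            for(std::size_t i = 0; i != _meshes.size(); ++i)
                _shader
                    .setDiffuseColor(_colors[i])
                    .setTransformationMatrix(transformationMatrix)
                    .setNormalMatrix(transformationMatrix.normalMatrix())
                    .setProjectionMatrix(camera.projectionMatrix())
                    .draw(*_meshes[i]);
        }

        Shaders::PhongGL& _shader;
        std::vector<GL::Mesh*> _meshes;
        std::vector<Color4> _colors;
};
```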
Note that each DartIntegration::Object can contain multiple meshes with color or texture materials. To keep track of which objects are being updated (this should happen only if the visual properties, that is, the mesh or material information, of a body change), we have defined a std::unordered_map:
std::unordered_map<DartIntegration::Object*, DrawableObject*> _drawableObjects;
To update the information of our objects (both the transformations and the graphics part), we perform the following:
world->refresh();
To get and update the new meshes and material information, we perform something like the following:
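One possible shape of that update step, with the actual mesh/material handling elided behind a hypothetical createOrUpdateDrawable() helper, is:

```cpp
/* Hedged sketch: iterate over the objects whose visual information changed
   during the last refresh() and rebuild their drawables. The helper
   createOrUpdateDrawable() is hypothetical. */
for(DartIntegration::Object& object: world->updatedShapeObjects()) {
    /* Mesh, material and texture data computed by refresh() */
    DartIntegration::DrawData& data = object.drawData();
    createOrUpdateDrawable(object, data);
}

/* Clear the list so the objects are not processed again next frame */
world->clearUpdatedShapeObjects();
```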
The DartIntegration::Object stores the mesh, material and texture information in memory, and thus it is advised to keep references or pointers to these values instead of copying them. Putting everything together, and with a few additional things (e.g., two lights), we can get something like the following:
Simulating any robot with the DartIntegration
Using exactly the same code (just changing the URDF file), we can load any robot in DART and render it with Magnum. Here are a few other examples, rendering a humanoid (iCub), a hexapod and a quadruped (ANYmal) robot:
Controlling a robotic manipulator
The example scene looks (relatively) nice, but so far we cannot control what happens in our imaginary world. Most of the fun comes when we can control our robots and make them do things. In this second part of the post, we will see how to control our manipulator so that we can give it intuitive commands to perform some everyday tasks.
As we have already discussed, our robot’s state can be described by its joint positions and its joint velocities. In the robotics literature, the joint positions are usually denoted by $q$ and the joint velocities by $\dot{q}$.
Actuator types
Robots are mechanical structures that are interconnected and controlled via electrical components. This means that the lowest-level control signal is electrical current. Because handling current directly is difficult and non-intuitive, there are two types of motors widely used to provide a higher level of abstraction:
- Torque-controlled actuators; these actuators are equipped with a force/torque sensor and the higher-level control signals are the desired torques/forces (depending on the type of the mechanical connection).
- Servo actuators; these actuators are equipped with an encoder sensor (which measures the displacement, or current position, of the joint) and the higher-level control signals are the desired joint positions.
Controlling in joint space
The easiest type of control is to control the robot in what we call the joint-space. This basically means that we have some desired joint positions $q_d$ and some desired joint velocities $\dot{q}_d$ (usually set to zero), and we want to find the control signal that achieves them.
When controlling in joint-space, the most classical controller is the Proportional–Integral–Derivative (PID) controller. In essence, the desired control signal (no matter the type) is computed as follows:

$$u(t) = K_p\,e(t) + K_i\int_0^t e(s)\,\mathrm{d}s + K_d\,\dot{e}(t), \qquad e(t) = q_d - q(t)$$
The user here will need to tune the gains $K_p$, $K_i$ and $K_d$. In robotics, a variation of this controller is widely used (usually named PD-controller):

$$u = K_p\,(q_d - q) + K_d\,(\dot{q}_d - \dot{q})$$
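A minimal sketch of this PD control law applied to a toy 2-DOF system follows; the gains and the double-integrator dynamics are illustrative, not taken from the example:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

/* PD control law: u = Kp*(q_d - q) + Kd*(0 - dq), with the desired joint
   velocities set to zero. Gains are illustrative. */
std::vector<double> pdControl(const std::vector<double>& q,
                              const std::vector<double>& dq,
                              const std::vector<double>& qDesired,
                              double kp, double kd) {
    std::vector<double> u(q.size());
    for(std::size_t i = 0; i < q.size(); ++i)
        u[i] = kp*(qDesired[i] - q[i]) - kd*dq[i];
    return u;
}

/* Toy plant: a double integrator per joint (qdd = u), integrated with
   explicit Euler; returns the final joint positions */
std::vector<double> simulate(std::vector<double> q, std::vector<double> dq,
                             const std::vector<double>& qDesired) {
    const double dt = 0.001;
    for(int i = 0; i < 20000; ++i) {
        const std::vector<double> u = pdControl(q, dq, qDesired, 100.0, 20.0);
        for(std::size_t j = 0; j < q.size(); ++j) {
            dq[j] += u[j]*dt;
            q[j] += dq[j]*dt;
        }
    }
    return q;
}
```

With these gains the toy system is critically damped, so after a long enough simulation both joints settle at their desired positions.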
Using these simple controllers, one can already do some amazing and interesting things.
Controlling in task space
However, most of the interesting things happen in the Cartesian 6D space. This is the space where the end-effector of our manipulator lives. Moreover, this is the space where most real-world tasks can be described very intuitively. For this reason, we call it the task-space.
Jacobian
Jacobian matrices are very useful and heavily used in robotics engineering and research. Basically, a Jacobian defines the dynamic relationship between two different representations of a system. Formally, the Jacobian matrix is the matrix of first-order partial derivatives of a function with respect to some variables. Let’s take our robotic manipulator as an example. As we have already seen, we can describe its configuration by the joint positions $q$. This configuration also gives a 6D position (that is, translation and orientation) of the end-effector; let’s call it $x$. This relationship, $x = f(q)$, is known as the forward kinematics and is usually non-linear. The Jacobian matrix (which is a function of the configuration), $J(q)$, gives us the following relationship:

$$\dot{x} = J(q)\,\dot{q}$$
With this last equation, we have an analytical expression that relates the joint velocities $\dot{q}$ to the end-effector velocities $\dot{x}$. Moreover, this is a linear relationship. Using the principle of conservation of energy (from classical physics) and some mathematical manipulation, we can generalize this relationship to forces and torques (we drop the argument of $J$ for clarity) as

$$\tau = J^T F,$$
where $F$ is the wrench (force and torque) applied at the end-effector and $\tau$ are the joint torques. Using this equation, we can plan our desired trajectories in end-effector space, write a controller in that space (like the PD controller that we saw previously), and then transform the commands to the joint-space (the space that we can actually control).
But how do we get this Jacobian matrix? We will not go into the details, but once we have the forward kinematics of our robot (either measured or given by the manufacturer), we just take the partial derivatives and we get our Jacobian.
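To make this concrete, here is a small self-contained sketch on the same hypothetical 2-link planar arm as before (unit link lengths, not the robot from the example): the Jacobian of the forward kinematics is approximated by finite differences and then used to map an end-effector force to joint torques via $\tau = J^T F$:

```cpp
#include <cassert>
#include <cmath>
#include <array>

/* Hypothetical 2-link planar arm with unit link lengths */
using Vec2 = std::array<double, 2>;
using Mat2 = std::array<Vec2, 2>;

/* Forward kinematics x = f(q): the planar end-effector position */
Vec2 fk(const Vec2& q) {
    return {std::cos(q[0]) + std::cos(q[0] + q[1]),
            std::sin(q[0]) + std::sin(q[0] + q[1])};
}

/* J[i][j] = d f_i / d q_j, approximated by central differences */
Mat2 numericJacobian(Vec2 q) {
    const double eps = 1e-6;
    Mat2 J{};
    for(int j = 0; j < 2; ++j) {
        Vec2 qp = q, qm = q;
        qp[j] += eps; qm[j] -= eps;
        const Vec2 fp = fk(qp), fm = fk(qm);
        for(int i = 0; i < 2; ++i)
            J[i][j] = (fp[i] - fm[i])/(2.0*eps);
    }
    return J;
}

/* tau = J^T * F: joint torques that realize a planar end-effector force */
Vec2 jointTorques(const Mat2& J, const Vec2& F) {
    return {J[0][0]*F[0] + J[1][0]*F[1],
            J[0][1]*F[0] + J[1][1]*F[1]};
}
```

In practice one would use the analytic Jacobian a library like DART provides, but the finite-difference version is a handy sanity check.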
Using DART to perform task-space control
DART has built-in functions that compute the Jacobian of any link of our robot, and thus controlling our robots in the task-space is quite easy. Here’s what we do:
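In a hedged sketch (assuming `robot` is the manipulator skeleton, `body` its end-effector BodyNode, and with computeDesiredWrench() standing in for the task-space controller), the core of it is just:

```cpp
/* Hedged sketch of the core of task-space control with DART. `robot` is the
   manipulator skeleton, `body` its end-effector BodyNode and
   computeDesiredWrench() a hypothetical stand-in for the task-space
   (e.g. PD) controller. */
Eigen::MatrixXd J = robot->getWorldJacobian(body);   /* 6 x N Jacobian */
Eigen::Matrix<double, 6, 1> F = computeDesiredWrench();
Eigen::VectorXd tau = J.transpose()*F;               /* tau = J^T F */
robot->setForces(tau);                               /* torque command */
```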
To make the example, we had to do a few more things, like adding a few boxes, keeping track of their locations and updating the desired end-effector location given the user input. Moreover, we attached a gripper to our manipulator so that it can grasp the boxes. In reality, most commercially available grippers are position-controlled (we cannot send torques as we were describing so far, but need to give desired positions): to simulate that, we use velocity control for the gripper and what we have been describing so far for the rest of the manipulator. For this reason, we need to keep a model of our manipulator without the gripper so that DART can give us all the information we need cleanly and easily. There is a way to do it without a copy, but it involves keeping track of frames and removing specific columns from the Jacobian, and thus I chose the copy to be more clear.
Why DART and not Bullet?
Bullet Physics is one of the most popular open-source physics engines and is widely used in the graphics and game industries. Although choosing which physics library to use for a project depends on many parameters and factors, I am going to list a few reasons why I use DART more actively than Bullet:
- DART is oriented towards computer animation and robotics. This means that
it already has a wide range of functionality implemented that is needed in
these areas. A few examples include:
- Forward/Inverse kinematics,
- Forward dynamics,
- Jacobians and their derivatives,
- Robot dynamics quantities (like the mass matrix, Coriolis and gravity forces, etc.),
- Efficient implementation for rigid body chains (although Bullet has recently added this feature),
- Motion planning.
- Modern C++11/14 code,
- Accurate and stable dynamics for rigid body chains (robotic systems).
Although Bullet is moving towards supporting most of the features that DART already has, as a roboticist I find DART more intuitive and closer to my way of thinking. Nevertheless, if someone is aiming for big open worlds (e.g., like in The Elder Scrolls: Skyrim) or many objects interacting with each other, DART is not designed for these types of situations and Bullet will be significantly faster (though possibly less accurate).
~ ~ ~
Thank you for reading! Be sure to check out the full example code, build it on your machine and play around with its controls.
About the author
Konstantinos Chatzilygeroudis loves programming robots and C++, and has a hidden love for computer graphics. He is a post-doctoral fellow in robotics and machine learning at the LASA team, EPFL. Follow him on Twitter: @chatzilyge
Guest Posts
This is a guest post by an external author. Do you also have something interesting to say? A success story worth sharing? We’ll be happy to publish it. See the introductory post for details.