A new example showcases the capabilities of the DART integration. Let's dive into robotics, explain what DART is able to do, and see how it compares to Bullet.

What is a robotic manipulator?

A robotic mechanism is constructed by connecting rigid bodies, called links, together by means of joints, so that relative motion between adjacent links becomes possible. Actuation of the joints, typically by electric motors, then causes the robot to move and exert forces in desired ways. Additionally, there exist several sensors that allow us to read the current position, velocity or torque of an actuator.

A robotic manipulator is a robotic mechanism whose links are arranged in serial fashion, that is, they form what we call an open chain. Without going very deeply into the mechanical details behind the actual machines, and without loss of generality, for the remainder of this post a robotic manipulator can be described fully by the mass and inertia properties of its links, its kinematic configuration, its joint positions and its joint velocities. Moreover, we will assume that our manipulators can be controlled either via torque or velocity commands.

What is this example about?

Before moving on, let us see what we can do in this example (and what we will explain in this post):

As we can see in the video, our example world consists of a robotic manipulator with a gripper attached to its end-effector (that is, the last link in the serial chain) and three small colored boxes. The user can control the manipulator with intuitive commands (e.g., move over the red box or move up/down) and pick up the small boxes to create some towers. They can even destroy the ones that they created!

We will split the post into two main directions: (a) explaining how to use the DartIntegration (and some basic usage of DART), and (b) explaining the basics of the controller developed for this task (from a robotics perspective).

Using the DartIntegration

Dynamic Animation and Robotics Toolkit (DART) is an open-source library that provides data structures and algorithms for kinematic and dynamic applications in robotics and computer animation. Since its main focus is robotics applications, every entity in a DART world is a kinematic skeleton, that is, a collection of links interconnected with joints. Each joint can be actuated with different types of actuation (force or torque, velocity, position) or it can be fixed. DART additionally supports non-rigid bodies/links, but we will only consider robots with rigid body links in this post. In DART everything begins by creating an empty world:

auto world = dart::simulation::WorldPtr(new dart::simulation::World);

Inside this imaginary world, our robots will live and flourish. The next step is to create our robots. In DART, even the small boxes that have no actuation are considered kinematic skeletons. Let us create our first “robot” box (for more details on how to use DART, please follow their tutorials):

/* The size of our box */
constexpr Double boxSize = 0.06;

/* Calculate the mass of the box */
constexpr Double boxDensity = 260; // kg/m^3
constexpr Double boxMass = boxDensity*Math::pow<3>(boxSize);

/* Create a Skeleton and give it a name */
dart::dynamics::SkeletonPtr box = dart::dynamics::Skeleton::create("box");

/* Create a body for the box */
dart::dynamics::BodyNodePtr body =
    box->createJointAndBodyNodePair<dart::dynamics::FreeJoint>(nullptr).second;

/* Create a shape for the box */
auto boxShape = std::make_shared<dart::dynamics::BoxShape>(
    Eigen::Vector3d{boxSize, boxSize, boxSize});
auto shapeNode = body->createShapeNodeWith<
    dart::dynamics::VisualAspect,
    dart::dynamics::CollisionAspect, dart::dynamics::DynamicsAspect>(boxShape);

/* Give a color to our box */
shapeNode->getVisualAspect()->setColor(Eigen::Vector3d{1.0, 0.0, 0.0});

/* Set up inertia for the box */
dart::dynamics::Inertia inertia;
inertia.setMass(boxMass);
inertia.setMoment(boxShape->computeInertia(boxMass));
body->setInertia(inertia);

/* Set up the center of the box properly */
box->getDof("Joint_pos_z")->setPosition(boxSize/2.0);

See this code in the magnum-examples repository.
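As a side note, since the box's root joint is a FreeJoint, we can also place it at an arbitrary pose by converting a transformation into the joint's six positional coordinates. A minimal sketch (the transform values below are purely illustrative, not from the example):

/* Place the box at an arbitrary pose (values are illustrative) */
Eigen::Isometry3d tf = Eigen::Isometry3d::Identity();
tf.translation() = Eigen::Vector3d(0.5, 0.0, boxSize/2.0);
box->setPositions(dart::dynamics::FreeJoint::convertToPositions(tf));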

There are several steps that we need to take in order to create a robot (in DART, but also in general). Imagine if we had to create a robot with multiple links and joints! For that reason, several conventions have been defined and used in the robotics community. The most popular ones are the Denavit–Hartenberg parameters and the product of exponentials formula (which is based on screw theory; for more information please refer to the Modern Robotics book by Kevin M. Lynch and Frank C. Park).
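For reference, the product of exponentials formula expresses the end-effector pose as a chain of matrix exponentials over the joint screw axes:

T(q) = e^{[S_1]q_1}e^{[S_2]q_2}\cdots e^{[S_n]q_n}M

where M is the end-effector pose at the zero (home) configuration and [S_i] are the screw axes of the joints expressed in the base frame.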

Universal Robot Description Format

Nevertheless, these are still mathematical formulas and they require quite some work to manually perform all the computations. DART (and most robotic simulators) already has some of the two conventions implemented and requires the user to give only the relative transformation from each link to its “parent” joint and from each joint to its “child” link. This greatly simplifies the life of the user, but writing this in C++ code is still demanding and error-prone (just have a look at the code above to create a simple box; a sketch of defining a single joint manually follows below). For this reason, several description formats have arisen over the years that try to make it easier and more intuitive to create your robots.
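To give an idea of what the manual approach looks like, here is a rough sketch (not taken from the example; all names and values are made up) of attaching a single link via a revolute joint in DART, with the relative transform to the parent given explicitly:

/* Sketch: manually attaching one link via a revolute joint (illustrative values) */
dart::dynamics::RevoluteJoint::Properties properties;
properties.mName = "joint_1";
properties.mAxis = Eigen::Vector3d::UnitZ();
/* Relative transformation from the parent link to this joint */
properties.mT_ParentBodyToJoint.translation() = Eigen::Vector3d(0.0, 0.0, 0.1575);
auto linkPair = skeleton->createJointAndBodyNodePair<dart::dynamics::RevoluteJoint>(
    parentLink, properties);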

One of the most popular formats is the Universal Robot Description Format (URDF), which is part of the Robot Operating System (ROS) framework. URDF is an XML format for representing a robot model; for more details, please have a look at the ROS wiki. Here is what a part of the URDF model of the manipulator that we are going to use looks like:

<!-- link 0 -->
<link name="iiwa_link_0">
  <inertial>
    <origin rpy="0 0 0" xyz="-0.1 0 0.07"/>
    <mass value="5"/>
    <inertia ixx="0.05" ixy="0" ixz="0" iyy="0.06" iyz="0" izz="0.03"/>
  </inertial>
  <visual>
    <origin rpy="0 0 0" xyz="0 0 0"/>
    <geometry>
      <mesh filename="package://iiwa14/link_0.stl"/>
    </geometry>
    <material name="Grey"/>
  </visual>
  <collision>
    <origin rpy="0 0 0" xyz="-0.01 0 0.07875"/>
    <geometry>
      <box size="0.24 0.23 0.1575"/>
    </geometry>
    <material name="Grey"/>
  </collision>
</link>
<!-- joint between link 0 and link 1 -->
<joint name="iiwa_joint_1" type="revolute">
  <parent link="iiwa_link_0"/>
  <child link="iiwa_link_1"/>
  <origin rpy="0 0 0" xyz="0 0 0.1575"/>
  <axis xyz="0 0 1"/>
  <limit effort="320" lower="-2.96705972" upper="2.96705972" velocity="1.4835298"/>
  <dynamics damping="0.5"/>
</joint>
<!-- link 1 -->
<link name="iiwa_link_1">
  ...
</link>

See the full file in the magnum-examples repository.

This is just to give an idea of what a URDF file looks like; no need to understand it!

Loading a URDF model with DART

Assuming that we have in our possession the URDF model of our robot, here is how to load it with DART:

/* DART: Load Skeletons/Robots */
DartLoader loader;
/* Add packages (needed for URDF loading): this is a ROS-related thing in
   order to find the resources (e.g., meshes), see the "package://" in the
   URDF file above */
loader.addPackageDirectory("name_of_package", "path/to/package");
std::string filename = "path/to/URDF/file";

/* Load the URDF in a DART Skeleton */
auto manipulator = loader.parseSkeleton(filename);

See this code in the magnum-examples repository.
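parseSkeleton() returns a null pointer if loading fails, so it is worth adding a quick sanity check; a minimal sketch (the exact error handling is up to you, here using Magnum's Error output and std::exit()):

/* Bail out if the URDF could not be parsed */
if(!manipulator) {
    Error{} << "Could not parse the URDF model";
    std::exit(1);
}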

The KUKA LBR Iiwa manipulator URDF that we used is a modified version of the one in the iiwa_ros package. The Robotiq gripper 2F-85 URDF that we used is a modified version of the one in the robotiq_arg85_description package. Once we have loaded/created all our robots, we add them to the DART world:

/* Add the robot/objects in our DART world */
world->addSkeleton(manipulator);
world->addSkeleton(floorSkel);
world->addSkeleton(redBoxSkel);
world->addSkeleton(greenBoxSkel);
world->addSkeleton(blueBoxSkel);

See this code in the magnum-examples repository.

How does DART connect to Magnum?

DART connects to Magnum through the SceneGraph library, similar to how BulletIntegration does it. The DART integration provides two main classes: (a) DartIntegration::World, and (b) DartIntegration::Object. The most common usage will be something like the following:

/* DART world */
dart::simulation::WorldPtr dartWorld{new dart::simulation::World};

// ... add robots and objects into DART world ...

/* Create our DartIntegration object/world */
auto dartObj = new Object3D{&_scene};
auto world = std::make_unique<DartIntegration::World>(*dartObj, *dartWorld);

/* Simulate with time step of 0.001 seconds */
world->world().setTimeStep(0.001);

for(UnsignedInt i = 0; i < simulationSteps; ++i) {
    world->step();

    /* Update graphics at roughly 60 Hz (every 15 simulation steps of 0.001 s) */
    if(i % 15 == 0) {
        world->refresh();

        /* Get unused/deleted shapes */
        std::vector<std::unique_ptr<DartIntegration::Object>>& unusedObjects
            = world->unusedObjects();

        /* The user is expected to handle unused objects. One possible
           handling would be to remove them from the parent scene. */
        deleteObjectsFromScene(unusedObjects);

        /* Get updated shapes -- ones where either the materials or the
           meshes have changed */
        std::vector<std::reference_wrapper<DartIntegration::Object>>
            updatedObjects = world->updatedShapeObjects();

        updateMeshesAndMaterials(updatedObjects);

        /* Clear list of updated objects */
        world->clearUpdatedShapeObjects();
    }
}

In essence, the user needs to handle the deleted objects and the ones that need to be updated. In our example nothing should be deleted, and thus we do not handle the deleted objects. In order to draw the meshes of our robots, we need to create a structure. We will assume a Phong shader and create the following class:

class DrawableObject: public Object3D, SceneGraph::Drawable3D {
    public:
        explicit DrawableObject(
            const std::vector<Containers::Reference<GL::Mesh>>& meshes,
            const std::vector<MaterialData>& materials, Object3D* parent,
            SceneGraph::DrawableGroup3D* group);

        DrawableObject& setMeshes(
            const std::vector<Containers::Reference<GL::Mesh>>& meshes);
        DrawableObject& setMaterials(
            const std::vector<MaterialData>& materials);
        DrawableObject& setSoftBodies(const std::vector<bool>& softBody);
        DrawableObject& setTextures(std::vector<GL::Texture2D*>& textures);

    private:
        void draw(const Matrix4& transformationMatrix, SceneGraph::Camera3D& camera)
            override;

        Resource<Shaders::Phong> _colorShader;
        Resource<Shaders::Phong> _textureShader;
        std::vector<Containers::Reference<GL::Mesh>> _meshes;
        std::vector<MaterialData> _materials;
        std::vector<bool> _isSoftBody;
        std::vector<GL::Texture2D*> _textures;
};

See this code in the magnum-examples repository.

Note that each DartIntegration::Object can contain multiple meshes with color or texture materials. To keep track of which objects are being updated (this should only happen if the visual properties of a body, that is, its mesh or material information, change), we have defined a std::unordered_map:

std::unordered_map<DartIntegration::Object*, DrawableObject*> _drawableObjects;

To update the information of our objects (both the transformations and the graphics part), we perform the following:

world->refresh();

To get and update the new meshes and material information, we perform something like the following:

/* For each updated object */
for(DartIntegration::Object& object: world->updatedShapeObjects()) {
    /* Get material information */
    std::vector<MaterialData> materials;
    std::vector<Containers::Reference<GL::Mesh>> meshes;
    std::vector<bool> isSoftBody;
    std::vector<GL::Texture2D*> textures;
    for(std::size_t i = 0; i < object.drawData().meshes.size(); i++) {
        bool isColor = true;
        GL::Texture2D* texture = nullptr;
        if(object.drawData().materials[i].flags() &
            Trade::PhongMaterialData::Flag::DiffuseTexture)
        {
            Containers::Optional<GL::Texture2D>& entry = object.drawData()
                .textures[object.drawData().materials[i].diffuseTexture()];
            if(entry) {
                texture = &*entry;
                isColor = false;
            }
        }

        textures.push_back(texture);

        MaterialData mat;
        mat.ambientColor = object.drawData().materials[i].ambientColor().rgb();
        if(isColor)
            mat.diffuseColor = object.drawData().materials[i].diffuseColor().rgb();
        mat.specularColor = object.drawData().materials[i].specularColor().rgb();
        mat.shininess = object.drawData().materials[i].shininess();
        mat.scaling = object.drawData().scaling;

        /* Get the modified mesh */
        meshes.push_back(object.drawData().meshes[i]);
        materials.push_back(mat);
        isSoftBody.push_back(object.shapeNode()->getShape()->getType() ==
            dart::dynamics::SoftMeshShape::getStaticType());
    }

    /* Check if we already have it and then either add a new one or update
       the existing. We don't need the mesh / material / texture vectors
       anywhere else anymore, so move them in to avoid copies. */
    auto it = _drawableObjects.insert(std::make_pair(&object, nullptr));
    if(it.second) {
        auto drawableObj = new DrawableObject{
            std::move(meshes), std::move(materials),
            static_cast<Object3D*>(&(object.object())), &_drawables};
        drawableObj->setSoftBodies(std::move(isSoftBody));
        drawableObj->setTextures(std::move(textures));
        it.first->second = drawableObj;
    } else {
        (*it.first->second)
            .setMeshes(std::move(meshes))
            .setMaterials(std::move(materials))
            .setSoftBodies(std::move(isSoftBody))
            .setTextures(std::move(textures));
    }
}

world->clearUpdatedShapeObjects();

See this code in the magnum-examples repository.

The DartIntegration::Object stores the mesh, material and texture information in memory, and thus it is advised to keep references or pointers to these values. Putting everything together, and with a few additional things (e.g., two lights), we can get something like the following:

Simulating any robot with the DartIntegration

Using exactly the same code (just changing the URDF file), we can load any robot in DART and render it with Magnum. Here are a few other examples rendering a humanoid (iCub), a hexapod, and a quadruped (ANYmal) robot:

Controlling a robotic manipulator

The example scene looks (relatively) nice, but up until now we cannot control what happens in our imaginary world. Most of the fun comes when we can control our robots and make them do things. In this second part of the post, we will see how to control our manipulator so that we can give it intuitive commands to perform some everyday tasks.

As we have already discussed, our robot's state can be described by its joint positions and its joint velocities. In the robotics literature, we find the notation q for joint positions and \dot{q} for joint velocities.

Actuator types

Robots are mechanical structures that are interconnected and controlled via electrical components. This means that the lowest-level control signal is the motor current. Because handling current directly is difficult and non-intuitive, two types of motors are widely used to provide a higher level of abstraction (a small DART sketch of selecting the actuation type follows the list):

  • Torque-controlled actuators; these actuators are equipped with a force/torque sensor and the higher-level control signals are the desired torques/forces (depending on the type of the mechanical connection).
  • Servo actuators; these actuators are equipped with an encoder sensor (that measures the displacement or current position of the joint) and the higher-level control signals are the desired joint positions.
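In DART we can select the actuation type per joint. As a minimal sketch, torque or velocity actuation for the first joint of our manipulator (named iiwa_joint_1 in the URDF above) could be selected like this:

/* Choose how a joint is actuated */
auto* joint = manipulator->getJoint("iiwa_joint_1");
joint->setActuatorType(dart::dynamics::Joint::FORCE);    /* torque/force commands */
/* ... or ... */
joint->setActuatorType(dart::dynamics::Joint::VELOCITY); /* velocity commands */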

Controlling in joint space

The easiest type of control is to control the robot in what we call the joint space. This basically means that we have some desired joint positions q^* and some desired joint velocities \dot{q}^* (usually set to zero) and we want to find the control signal to achieve them.

When controlling in joint space, the most classical controller is a Proportional–Integral–Derivative (PID) controller. In essence, the desired control signal (no matter the type) is computed as follows:

\tau = K_p (q^*-q) + K_i \int_0^t (q^*-q)dt + K_d\frac{d(q^*-q)}{dt}

The user here will need to tune the parameters K_p, K_i, K_d. In robotics, a variation of this controller is widely used (usually named a PD controller):

\tau = K_p (q^*-q) + K_d (\dot{q}^*-\dot{q})

Using these simple controllers, one can do some very amazing and interesting things.
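To make this concrete, here is a minimal joint-space PD controller sketch using DART and Eigen. The gains and the desired positions/velocities are illustrative (not the values used in the example), and gravity/Coriolis compensation is added on top so that the PD terms only have to correct the error:

/* Sketch of a joint-space PD controller (gains and targets are illustrative) */
Eigen::VectorXd q = manipulator->getPositions();
Eigen::VectorXd dq = manipulator->getVelocities();

double Kp = 100.0, Kd = 10.0;
Eigen::VectorXd tau = Kp*(desiredPositions - q) + Kd*(desiredVelocities - dq);

/* Compensate gravity and Coriolis forces */
tau += manipulator->getCoriolisAndGravityForces();

manipulator->setCommands(tau);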

Controlling in task space

However, most of the interesting things happen in the Cartesian 6D space. This is the space where the end-effector of our manipulator lives. Moreover, this is the space where most real-world tasks can be described very intuitively. For this reason, we call it the task space.

Jacobian

Jacobian matrices are very useful and heavily used in robotics engineering and research. Basically, a Jacobian defines the dynamic relationship between two different representations of a system. Formally, the Jacobian matrix is the matrix of first-order partial derivatives of a function with respect to some variables. Let's take our robotic manipulator as an example. As we have already discussed, we can describe its configuration by the joint positions q. This configuration also gives a 6D pose (that is, translation and orientation) for the end-effector; let's call it x(q). This relationship is known as the forward kinematics and is usually non-linear. The Jacobian matrix (which is a function of q), J(q), gives us the following relationship:

\begin{aligned} J(q) &= \frac{\partial{x(q)}}{\partial{q}}\\ &= \frac{\partial{x(q)}}{\partial{t}}\frac{\partial{t}}{\partial{q}}\Rightarrow\\ \dot{x} &= J(q)\dot{q} \end{aligned}

With this last equation, we have an analytical equation that relates the joint velocities \dot{q} to the end-effector velocities \dot{x}. Moreover, this is a linear relationship. Using the property of conservation of energy (from traditional physics) and some mathematical manipulation, we can generalize this relationship to forces and torques (we drop the arguments of J for clarity) as

\tau = J^TW

where W is the wrench (force and torque) applied on the end-effector. Using this equation we can plan our desired trajectories in end-effector space, write a controller in that space (like the PD controller that we saw previously), and then transform the commands to joint space (the space that we can actually control).
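For completeness, this relation follows from a power-balance (virtual work) argument: the power delivered by the joints must equal the power exerted at the end-effector, so

\tau^T\dot{q} = W^T\dot{x} = W^TJ(q)\dot{q} \quad \text{for all } \dot{q} \Rightarrow \tau = J(q)^TW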

But how do we get this Jacobian matrix? We will not go into the details, but once we have the forward kinematics of our robot (this is either measured or given by the manufacturer), we just take the partial derivatives and we get our Jacobian.

Using DART to perform task-space control

DART has built-in functions that compute the Jacobian of any link of our robot, and thus controlling our robots in the task space is quite easy. Here's what we do:

/* Get joint velocities of manipulator */
Eigen::VectorXd dq = _model->getVelocities();

/* Get full (with orientation) Jacobian of our end-effector */
Eigen::MatrixXd J = _model->getBodyNode("iiwa_link_ee")->getWorldJacobian();

/* Get current state of the end-effector */
Eigen::MatrixXd currentWorldTransformation = _model->getBodyNode("iiwa_link_ee")
    ->getWorldTransform().matrix();
Eigen::VectorXd currentWorldPosition =
    currentWorldTransformation.block(0, 3, 3, 1);
Eigen::MatrixXd currentWorldOrientation =
    currentWorldTransformation.block(0, 0, 3, 3);
Eigen::VectorXd currentWorldSpatialVelocity = _model->getBodyNode("iiwa_link_ee")
    ->getSpatialVelocity(dart::dynamics::Frame::World(),
                         dart::dynamics::Frame::World());

/* Compute desired forces and torques */
Eigen::VectorXd linearError = _desiredPosition - currentWorldPosition;
/* PD controller in end-effector position */
Eigen::VectorXd desiredForces = _pLinearGain*linearError -
    _dLinearGain*currentWorldSpatialVelocity.tail(3);

/* Special care needs to be taken to compute orientation differences.
   We use the angle-axis representation */
Eigen::VectorXd orientationError =
    dart::math::logMap(_desiredOrientation*currentWorldOrientation.transpose());
/* PD controller in end-effector orientation */
Eigen::VectorXd desiredTorques = _pOrientationGain*orientationError -
    _dOrientationGain*currentWorldSpatialVelocity.head(3);

/* Combine forces and torques in one vector */
Eigen::VectorXd tau(6);
tau.head(3) = desiredTorques;
tau.tail(3) = desiredForces;

/* Compute final forces + gravity compensation + regularization */
Eigen::VectorXd forces = J.transpose()*tau + _model->getCoriolisAndGravityForces() -
    _dRegularization*dq;

/* Send commands to the robot */
_model->setCommands(forces);

See this code in the magnum-examples repository.

For making the example, we had to do a few more things, like adding a few boxes, keeping track of their location, and updating the desired end-effector location given the user input. Moreover, we added a gripper to our manipulator so that it can grasp the boxes. In reality, most commercially available grippers are position controlled (we cannot send torques as we have been describing so far, but need to give desired positions): to simulate that, we use velocity control for the gripper and what we have been describing so far for the rest of the manipulator. For this reason, we need to keep a model of our manipulator without the gripper so that DART can give us all the information we need cleanly and easily. There is a way to do it without a copy, but it involves keeping track of frames and removing specific rows from the Jacobian, and thus I chose the copy to be clearer.
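For the gripper, such a velocity command could look roughly like the following sketch; the joint name and the velocity values are made up for illustration:

/* Drive a gripper joint with velocity commands (names/values are illustrative) */
auto* fingerJoint = gripper->getJoint("finger_joint");
fingerJoint->setActuatorType(dart::dynamics::Joint::VELOCITY);
fingerJoint->setCommand(0, _closing ? 0.3 : -0.3);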

Why DART and not Bullet?

Bullet Physics is one of the most popular open-source physics engines available and is widely used in the graphics and game industry. Although choosing which physics library to use for a project depends on quite a few parameters and factors, I am going to list a few reasons why I am using DART more actively than Bullet:

  • DART is oriented towards computer animation and robotics. This means that it already implements a wide range of functionality that is needed in these areas. A few examples include:
    • Forward/inverse kinematics,
    • Forward dynamics,
    • Jacobians and their derivatives,
    • Robot dynamics quantities (like the mass matrix, Coriolis and gravity forces, etc.),
    • Efficient implementation for rigid body chains (although Bullet has recently added this feature),
    • Motion planning.
  • Modern C++11/14 code,
  • Accurate and stable dynamics for rigid body chains (robotic systems).

Although Bullet is moving towards supporting most of the features that DART already has, as a roboticist I find DART more intuitive and closer to my way of thinking. Nevertheless, if someone is aiming for big open worlds (e.g., like in The Elder Scrolls: Skyrim) or many objects interacting with each other, DART is not designed for this type of situation and Bullet will be significantly faster (though possibly less accurate).

~ ~ ~

Thank you for reading! Be sure to check out the full example code, build it on your machine and play around with its controls.

Konstantinos Chatzilygeroudis

About the author

Konstantinos Chatzilygeroudis loves to program robots, C++ and has a hidden love for computer graphics. Post-doctoral fellow in robotics and machine learning at the LASA team, EPFL. Follow him on Twitter: @chatzilyge

Guest Posts

This is a guest post by an external author. Do you also have something interesting to say? A success story worth sharing? We'll be happy to publish it. See the introductory post for details.