Note

This example is available as a Jupyter notebook here.

Loading and working with experimental data

import ring

import jax
import jax.numpy as jnp
import numpy as np
import matplotlib.pyplot as plt

import mediapy as media

def show_video(sys: ring.System, xs: ring.Transform) -> None:
    assert sys.dt == 0.01
    # only render every fourth to get a framerate of 25 fps
    frames = sys.render(xs, camera="c", height=480, width=640, render_every_nth=4,
                        add_cameras={-1: '<camera mode="targetbody" name="c" pos=".5 -.5 1.25" target="3"></camera>'})
    media.show_video(frames, fps=25)

Experimental data and system definitions of the experimental setup are provided by the diodem package.

from diodem import load_data, benchmark

Multiple experimental trials are available. Each trial is identified by an exp_id, and the motion patterns within a trial are selected via motion_start and motion_stop identifiers.
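As a sketch of how a sub-range of motion patterns could be selected: the snippet below assumes that load_data also accepts a motion_stop argument in addition to motion_start, which the trial structure suggests; the pattern identifiers used here are explained further below.

# Hypothetical example: load only the motion patterns from "canonical" up to
# "shaking" of trial 1. The motion_stop argument is an assumption and is not
# used in the original example below.
data_subset = load_data(1, motion_start="canonical", motion_stop="shaking")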

# inertial motion tracking problem (IMTP)
exp_id = 1
imtp = benchmark.IMTP([f"seg{i}" for i in range(1, 6)])
sys = imtp.sys(exp_id)

Let's first take a look at the system that was used in the experiments.

state = ring.State.create(sys)
# update the maximal coordinates
xs = ring.algorithms.forward_kinematics(sys, state)[1].x
show_video(sys, xs)
Rendering frames..: 100%|██████████| 1/1 [00:00<00:00,  6.64it/s]

As you can see, a five-segment kinematic chain was moved, and for each segment IMU measurements and OMC (optical motion capture) ground truth are available.

Let's load this real (not simulated) IMU and OMC data.

# `canonical` is the identifier of the first motion pattern performed in this trial
# `shaking` is the identifier of the last motion pattern performed in this trial
motion_start = "canonical"
data = load_data(exp_id, motion_start=motion_start)
data.keys()
dict_keys(['seg1', 'seg2', 'seg3', 'seg4', 'seg5'])
data["seg1"].keys()
dict_keys(['imu_nonrigid', 'imu_rigid', 'marker1', 'marker2', 'marker3', 'marker4', 'quat'])
data["seg1"]["imu_rigid"].keys()
dict_keys(['acc', 'gyr', 'mag'])

The quaternion quat is to be interpreted as the rotation from the segment frame to an arbitrary OMC inertial frame.

The position marker1 is to be interpreted as the position vector from the origin of this arbitrary OMC inertial frame to a specific marker (marker 1) on the respective segment, expressed in the OMC inertial frame.
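Since four markers per segment are available, the marker-to-marker distance can serve as a quick rigid-body sanity check; it should stay (nearly) constant over the trial. The marker pair chosen below is arbitrary.

# distance between marker 1 and marker 2 on seg1 at every timestep
d = np.linalg.norm(data["seg1"]["marker2"] - data["seg1"]["marker1"], axis=-1)
print(d.mean(), d.std())  # a small standard deviation indicates rigid marker placement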

Additionally, two IMUs are attached to each segment: one rigidly, and one non-rigidly (mounted on foam).
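A quantity that is often of interest is the relative orientation between neighbouring segments. It can be computed from two quat trajectories; below is a minimal sketch using plain quaternion algebra with the (w, x, y, z) layout. Whether the inverse is applied to q1 or q2 depends on the exact rotation convention, so treat the multiplication order as illustrative.

def quat_mul(q, p):
    # Hamilton product of quaternion arrays with layout (..., 4) = (w, x, y, z)
    w1, x1, y1, z1 = q[..., 0], q[..., 1], q[..., 2], q[..., 3]
    w2, x2, y2, z2 = p[..., 0], p[..., 1], p[..., 2], p[..., 3]
    return np.stack([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ], axis=-1)

def quat_inv(q):
    # for unit quaternions the conjugate equals the inverse
    return q * np.array([1.0, -1.0, -1.0, -1.0])

# relative orientation between seg1 and seg2 at every timestep
q1 = data["seg1"]["quat"]  # segment-to-OMC-frame
q2 = data["seg2"]["quat"]
q_rel = quat_mul(quat_inv(q1), q2)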

Also, how long is the trial?

data["seg1"]["marker1"].shape
(14200, 3)

That's 14200 samples at a sampling interval of 0.01 s, i.e., 142 seconds of data.
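This follows directly from the number of samples and the sampling interval:

# duration in seconds: number of samples times the sampling interval of 0.01 s
data["seg1"]["marker1"].shape[0] * sys.dt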

Let's take a look at the motion of the whole trial.

To render it, we need maximal coordinates xs of all links in the system.

X, y, xs, xs_noimu = benchmark.benchmark(imtp, exp_id, motion_start)
show_video(sys, xs)
Rendering frames..: 100%|██████████| 3550/3550 [00:33<00:00, 104.73it/s]

Perfect. This is a rendered animation of the real experimental motion that was performed. You can see that the spacing between segments is not perfect.

This is because in our idealized system model joints have no spatial extent, whereas in reality they do. The entire setup is 3D printed, and the joints are several centimeters long.

The segments are 20 cm long.

We can use this experimental data to validate simulation-based approaches, or to validate ML models that were trained on simulated data.
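For example, a trained orientation estimator could be scored against the ground truth y returned by benchmark.benchmark via the angular error between predicted and ground-truth quaternions. This is only a sketch: it assumes y is a dictionary of unit quaternions in (w, x, y, z) layout per segment, and yhat (a hypothetical name) is a prediction with the same structure.

def quat_angle_error(q, qhat):
    # geodesic angle between two unit quaternions, in radians
    dot = jnp.abs(jnp.sum(q * qhat, axis=-1))
    return 2.0 * jnp.arccos(jnp.clip(dot, 0.0, 1.0))

# yhat would be the output of some estimator applied to X (hypothetical):
# errors_rad = jax.tree_util.tree_map(quat_angle_error, y, yhat)
# mean_deg = jax.tree_util.tree_map(lambda e: jnp.rad2deg(e).mean(), errors_rad)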