Note
This example is available as a Jupyter notebook here.
Loading and working with experimental data
import ring
import jax
import jax.numpy as jnp
import numpy as np
import matplotlib.pyplot as plt
import mediapy as media
def show_video(sys: ring.System, xs: ring.Transform) -> None:
    assert sys.dt == 0.01
    # render only every fourth frame to get a framerate of 25 fps
    frames = sys.render(
        xs, camera="c", height=480, width=640, render_every_nth=4,
        add_cameras={-1: '<camera mode="targetbody" name="c" pos=".5 -.5 1.25" target="3"></camera>'},
    )
    media.show_video(frames, fps=25)
Experimental data and system definitions of the experimental setup are provided by the `diodem` package.
from diodem import load_data, benchmark
Multiple experimental trials are available. They are identified by `exp_id`s, `motion_start`s, and `motion_stop`s.
# inertial motion tracking problem (IMTP)
exp_id = 1
imtp = benchmark.IMTP([f"seg{i}" for i in range(1, 6)])
sys = imtp.sys(exp_id)
Let's first take a look at the system that was used in the experiments.
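Before rendering it, we can inspect it programmatically. A minimal sketch, assuming the `ring.System` object exposes a `link_names` attribute listing its links (its `dt` attribute is already relied upon in `show_video` above):
# list the links (segments) of the system and its sampling time step
print(sys.link_names)  # assumption: `link_names` lists all links of the system
print(sys.dt)  # 0.01 s, i.e. 100 Hz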
state = ring.State.create(sys)
# compute the maximal coordinates of all links via forward kinematics
xs = ring.algorithms.forward_kinematics(sys, state)[1].x
show_video(sys, xs)
As you can see, a five-segment kinematic chain was moved, and for each segment both IMU measurements and OMC (optical motion capture) ground truth are available.
Let's load this real (not simulated) IMU and OMC data.
# `canonical` is the identifier of the first motion pattern performed in this trial
# `shaking` is the identifier of the last motion pattern performed in this trial
motion_start = "canonical"
data = load_data(exp_id, motion_start=motion_start)
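Since trials also have `motion_stop`s, one could restrict loading to a sub-span of the trial. A hedged sketch, assuming `load_data` accepts a `motion_stop` keyword analogous to `motion_start`:
# hypothetical: load only the span from `canonical` up to `shaking`;
# assumes a `motion_stop` keyword mirroring `motion_start`
data_span = load_data(exp_id, motion_start="canonical", motion_stop="shaking")
Below we continue with the full `data` dictionary.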
data.keys()
data["seg1"].keys()
data["seg1"]["imu_rigid"].keys()
The quaternion `quat` is to be interpreted as the rotation from the segment to an arbitrary OMC inertial frame.
The position `marker1` is to be interpreted as the position vector from the arbitrary OMC inertial frame to a specific marker (marker 1) on the respective segment (the vector is given in the OMC inertial frame).
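As a small worked example of these conventions, one could compute the relative orientation between two neighboring segments from the OMC quaternions. This is a minimal sketch, not part of the `diodem` API, and it assumes w-first `(w, x, y, z)` quaternion ordering and the Hamilton product convention; verify both before relying on it.
def quat_mul(q: np.ndarray, p: np.ndarray) -> np.ndarray:
    # Hamilton product of quaternion arrays of shape (..., 4)
    w1, x1, y1, z1 = q[..., 0], q[..., 1], q[..., 2], q[..., 3]
    w2, x2, y2, z2 = p[..., 0], p[..., 1], p[..., 2], p[..., 3]
    return np.stack(
        [
            w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        ],
        axis=-1,
    )

def quat_inv(q: np.ndarray) -> np.ndarray:
    # the inverse of a unit quaternion is its conjugate
    return q * np.array([1.0, -1.0, -1.0, -1.0])

# with quaternions mapping segment -> OMC inertial frame, the rotation from
# the seg1 frame to the seg2 frame is q_seg2^{-1} * q_seg1 (under these conventions)
q_rel = quat_mul(quat_inv(data["seg2"]["quat"]), data["seg1"]["quat"])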
Additionally, two IMUs are attached to each segment: one is rigidly attached, and one is non-rigidly attached (via foam).
Also, how long is the trial?
data["seg1"]["marker1"].shape
It's 325 seconds of data (32500 samples at 100 Hz).
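The duration follows directly from the number of samples and the system's sampling time step:
n_samples = data["seg1"]["marker1"].shape[0]
print(n_samples * sys.dt)  # 32500 samples * 0.01 s = 325 s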
Let's take a look at the motion of the whole trial.
To render it, we need the maximal coordinates `xs` of all links in the system.
X, y, xs, xs_noimu = benchmark.benchmark(imtp, exp_id, motion_start)
show_video(sys, xs)
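Finally, the raw data can also be visualized directly, for example the marker-1 position of the first segment over time. A minimal sketch (this just reuses the `matplotlib` import from above; the axis labels are our own choice):
# plot the three components of the seg1 marker-1 position over the whole trial
pos = data["seg1"]["marker1"]
t = np.arange(pos.shape[0]) * sys.dt
for i, axis in enumerate("xyz"):
    plt.plot(t, pos[:, i], label=axis)
plt.xlabel("time [s]")
plt.ylabel("seg1 marker 1 position")
plt.legend()
plt.show()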