Two years ago, I wrote A Guide to Docker and ROS, which is one of my most frequently viewed posts — likely because it's a challenging topic and people have been searching for answers. Since then, I've had the chance to use Docker more in my work and have picked up some new tricks. This was long overdue, but I've finally collected my updated learnings in this post.
Recently, I encountered an article titled ROS Docker; 6 reasons why they are not a good fit, and I largely agree with it. However, the reality is that it's still quite difficult to ensure a reproducible ROS environment for people who haven't spent years fighting the ROS learning curve and are adept at debugging dependency and/or build errors… so Docker is still very much a crutch that we fall back on to get working demos (and sometimes products!) out the door.
If the article above hasn't completely discouraged you from embarking on this Docker journey, please enjoy reading.
Revisiting Our Dockerfile with ROS 2
Now that ROS 1 is on its final version and approaching end of life in 2025, I thought it would be appropriate to rehash the TurtleBot3 example repo from the previous post using ROS 2.
Most of the big changes in this upgrade have to do with ROS 2, including client libraries, launch files, and configuring DDS. The examples themselves have been updated to use the latest tools for behavior trees: BehaviorTree.CPP 4 / Groot 2 for C++ and py_trees / py_trees_ros_viewer for Python. For more information on the example and/or behavior trees, refer to my Introduction to Behavior Trees post.
From a Docker standpoint, there aren't too many differences. Our container architecture will now be as follows:
Layers of our TurtleBot3 example Docker image.
We'll start by making our Dockerfile, which defines the contents of our image. Our initial base layer inherits from one of the public ROS images, osrf/ros:humble-desktop, and sets up the dependencies from our example repository into an underlay workspace. These are defined using a vcstool repos file.
Notice that we've set up the argument, ARG ROS_DISTRO=humble, so it can be changed for other distributions of ROS 2 (Iron, Rolling, etc.). Rather than creating multiple Dockerfiles for different configurations, you should try using build arguments like these as much as possible without being “overly clever” in a way that affects readability.
ARG ROS_DISTRO=humble

########################################
# Base Image for TurtleBot3 Simulation #
########################################
FROM osrf/ros:${ROS_DISTRO}-desktop as base
ENV ROS_DISTRO=${ROS_DISTRO}
SHELL ["/bin/bash", "-c"]

# Create Colcon workspace with external dependencies
RUN mkdir -p /turtlebot3_ws/src
WORKDIR /turtlebot3_ws/src
COPY dependencies.repos .
RUN vcs import < dependencies.repos

# Build the base Colcon workspace, installing dependencies first.
WORKDIR /turtlebot3_ws
RUN source /opt/ros/${ROS_DISTRO}/setup.bash \
 && apt-get update -y \
 && rosdep install --from-paths src --ignore-src --rosdistro ${ROS_DISTRO} -y \
 && colcon build --symlink-install

ENV TURTLEBOT3_MODEL=waffle_pi
To build your image with a specific argument — let's say you want to use ROS 2 Rolling instead — you could do the following… provided that all your references to ${ROS_DISTRO} actually have something that correctly resolves to the rolling distribution.

docker build -f docker/Dockerfile --build-arg="ROS_DISTRO=rolling" --target base -t turtlebot3_behavior:base .

I personally have had many issues in ROS 2 Humble and later with the default DDS vendor (FastDDS), so I like to switch my default implementation to Cyclone DDS by installing it and setting an environment variable to ensure it is always used.

# Use Cyclone DDS as middleware
RUN apt-get update && apt-get install -y --no-install-recommends \
 ros-${ROS_DISTRO}-rmw-cyclonedds-cpp
ENV RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
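As a quick aside (my own sanity check, not part of the original setup): since the RMW layer is selected from this environment variable at startup, you can verify the switch from inside any container shell before digging into DDS-level debugging.

```shell
#!/bin/bash
# Simulate the Dockerfile's ENV line, then print the value that
# ROS 2 client libraries will read when choosing an RMW implementation.
export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
echo "RMW_IMPLEMENTATION=${RMW_IMPLEMENTATION}"
```

Inside a container with ROS 2 sourced, `ros2 doctor --report` also lists the active middleware.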
Now, we will create our overlay layer. Here, we will copy over the example source code, install any missing dependencies with rosdep install, and set up an entrypoint to run whenever a container is launched.

###########################################
# Overlay Image for TurtleBot3 Simulation #
###########################################
FROM base AS overlay

# Create an overlay Colcon workspace
RUN mkdir -p /overlay_ws/src
WORKDIR /overlay_ws
COPY ./tb3_autonomy/ ./src/tb3_autonomy/
COPY ./tb3_worlds/ ./src/tb3_worlds/
RUN source /turtlebot3_ws/install/setup.bash \
 && rosdep install --from-paths src --ignore-src --rosdistro ${ROS_DISTRO} -y \
 && colcon build --symlink-install

# Set up the entrypoint
COPY ./docker/entrypoint.sh /
ENTRYPOINT [ "/entrypoint.sh" ]
The entrypoint defined above is a Bash script that sources ROS 2 and any workspaces that are built, and sets up environment variables necessary to run our TurtleBot3 examples. You can use entrypoints to do any other types of setup you might find useful for your application.
#!/bin/bash
# Basic entrypoint for ROS / Colcon Docker containers

# Source ROS 2
source /opt/ros/${ROS_DISTRO}/setup.bash

# Source the base workspace, if built
if [ -f /turtlebot3_ws/install/setup.bash ]
then
  source /turtlebot3_ws/install/setup.bash
  export TURTLEBOT3_MODEL=waffle_pi
  export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:$(ros2 pkg prefix turtlebot3_gazebo)/share/turtlebot3_gazebo/models
fi

# Source the overlay workspace, if built
if [ -f /overlay_ws/install/setup.bash ]
then
  source /overlay_ws/install/setup.bash
  export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:$(ros2 pkg prefix tb3_worlds)/share/tb3_worlds/models
fi

# Execute the command passed into this entrypoint
exec "$@"
At this point, you should be able to build the full Dockerfile:

docker build -f docker/Dockerfile --target overlay -t turtlebot3_behavior:overlay .

Then, we can start one of our example launch files with the right settings using this mouthful of a command. Most of these environment variables and volumes are needed to have graphics and ROS 2 networking functioning properly from inside our container.

docker run -it --net=host --ipc=host --privileged \
 --env="DISPLAY" \
 --env="QT_X11_NO_MITSHM=1" \
 --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
 --volume="${XAUTHORITY}:/root/.Xauthority" \
 turtlebot3_behavior:overlay \
 bash -c "ros2 launch tb3_worlds tb3_demo_world.launch.py"

Our TurtleBot3 example simulation with RViz (left) and Gazebo classic (right).
Introducing Docker Compose
From the last few snippets, we can see how the docker build and docker run commands can get really long and unwieldy as we add more options. You could wrap this in several abstractions, including scripting languages and Makefiles… but Docker has already solved this problem through Docker Compose.
In short, Docker Compose allows you to create a YAML file that captures all the configuration needed to set up building images and running containers.
Docker Compose also differentiates itself from the “plain” docker command in its ability to orchestrate services. This involves building multiple images or targets within the same image(s), and launching multiple programs at the same time that make up an entire application. It also lets you extend existing services to minimize copy-pasting of the same settings in multiple places, define variables, and more.
The end goal is that we have short commands to manage our examples:
docker compose build will build what we need
docker compose up will launch what we need
Docker Compose allows us to more easily build and run our containerized examples.
The default name of this magical YAML file is docker-compose.yaml. For our example, the docker-compose.yaml file looks as follows:
version: "3.9"
services:
  # Base image containing dependencies.
  base:
    image: turtlebot3_behavior:base
    build:
      context: .
      dockerfile: docker/Dockerfile
      args:
        ROS_DISTRO: humble
      target: base
    # Interactive shell
    stdin_open: true
    tty: true
    # Networking and IPC for ROS 2
    network_mode: host
    ipc: host
    # Needed to display graphical applications
    privileged: true
    environment:
      # Needed to define a TurtleBot3 model type
      - TURTLEBOT3_MODEL=${TURTLEBOT3_MODEL:-waffle_pi}
      # Allows graphical programs in the container.
      - DISPLAY=${DISPLAY}
      - QT_X11_NO_MITSHM=1
      - NVIDIA_DRIVER_CAPABILITIES=all
    volumes:
      # Allows graphical programs in the container.
      - /tmp/.X11-unix:/tmp/.X11-unix:rw
      - ${XAUTHORITY:-$HOME/.Xauthority}:/root/.Xauthority

  # Overlay image containing the example source code.
  overlay:
    extends: base
    image: turtlebot3_behavior:overlay
    build:
      context: .
      dockerfile: docker/Dockerfile
      target: overlay

  # Demo world
  demo-world:
    extends: overlay
    command: ros2 launch tb3_worlds tb3_demo_world.launch.py

  # Behavior demo using Python and py_trees
  demo-behavior-py:
    extends: overlay
    command: >
      ros2 launch tb3_autonomy tb3_demo_behavior_py.launch.py
      tree_type:=${BT_TYPE:?}
      enable_vision:=${ENABLE_VISION:?}
      target_color:=${TARGET_COLOR:?}

  # Behavior demo using C++ and BehaviorTree.CPP
  demo-behavior-cpp:
    extends: overlay
    command: >
      ros2 launch tb3_autonomy tb3_demo_behavior_cpp.launch.py
      tree_type:=${BT_TYPE:?}
      enable_vision:=${ENABLE_VISION:?}
      target_color:=${TARGET_COLOR:?}
As you can see from the Docker Compose file above, you can specify variables using the familiar $ operator in Unix based systems. These variables will by default be read from either your host environment or through an environment file (known as the .env file). Our example .env file looks like this:

# TurtleBot3 model
TURTLEBOT3_MODEL=waffle_pi

# Behavior tree type: Can be naive or queue.
BT_TYPE=queue

# Set to true to use vision, else false to only do navigation behaviors.
ENABLE_VISION=true

# Target color for vision: Can be red, green, or blue.
TARGET_COLOR=blue
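Note that the launch arguments in the Compose file use `${BT_TYPE:?}` rather than a plain `${BT_TYPE}`. The `:?` modifier follows the same parameter expansion rules as POSIX shells: it aborts with an error if the variable is unset or empty, so a demo cannot silently launch with missing settings. A minimal sketch of that behavior in plain Bash (the variable name matches our .env file, but the snippet runs anywhere):

```shell
#!/bin/bash
# ${VAR:?} fails loudly when VAR is unset or empty...
unset BT_TYPE
(echo "tree_type:=${BT_TYPE:?must be set}") 2>/dev/null \
  || echo "error: BT_TYPE is not set"

# ...and expands normally once the variable has a value.
BT_TYPE=queue
echo "tree_type:=${BT_TYPE:?}"
```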
At this point, you can build everything:

# By default, picks up a `docker-compose.yaml` and `.env` file.
docker compose build

# You can also explicitly specify the files
docker compose --file docker-compose.yaml --env-file .env build

Then, you can run the services you care about:

# Bring up the simulation
docker compose up demo-world

# After the simulation has started,
# launch one of these in a separate Terminal
docker compose up demo-behavior-py
docker compose up demo-behavior-cpp

The full TurtleBot3 demo running with py_trees as the behavior tree.
Setting Up Developer Containers
Our example so far works great if we want to package up working examples for other users. However, if you want to develop the example code within this environment, you will need to overcome the following obstacles:
Every time you change your code, you will need to rebuild the Docker image. This makes it extremely inefficient to get feedback on whether your changes are working as intended. This is already an immediate deal-breaker.
You could solve the above by using bind mounts to sync up the code on your host machine with that in the container. This gets us on the right track, but you'll find that any files generated inside the container and mounted on the host will be owned by root by default. You can get around this by whipping out the sudo and chown hammer, but it's not necessary.
All the tools you may use for development, including debuggers, are likely missing inside the container… unless you install them in the Dockerfile, which can bloat the size of your distribution image.
Luckily, there is a concept of a developer container (or dev container). To put it simply, this is a separate container that allows you to actually do your development in the same Docker environment you would use to deploy your application.
There are many ways of implementing dev containers. For our example, we will modify the Dockerfile to add a new dev target that extends our existing overlay target.
Dev containers allow us to develop inside a container from our host system with minimal overhead.
This dev container will do the following:
Install additional packages that we may find helpful for development, such as debuggers, text editors, and graphical developer tools. Critically, these will not be part of the overlay layer that we will ship to end users.
Create a new user that has the same user and group identifiers as the user that built the container on the host. This will make it such that all files generated within the container (in folders we care about) have the same ownership settings as if we had created the file on our host. By “folders we care about”, we are referring to the ROS workspace that contains the source code.
Put our entrypoint script in the user's Bash profile (~/.bashrc file). This lets us source our ROS environment not just at container startup, but every time we attach a new interactive shell while our dev container remains up.
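One gotcha worth flagging here (my observation, not from the example repository): Bash defines `UID` as a shell variable but does not export it, so an interpolation like `${UID:-1000}` in a Compose file can quietly fall back to 1000 unless the value is exported or placed in the `.env` file. A small sketch of looking up the host IDs to forward as build arguments:

```shell
#!/bin/bash
# Look up the host user's numeric user and group IDs; these are the
# values the UID/GID build arguments should receive so that files
# created inside the container keep your host user's ownership.
HOST_UID=$(id -u)
HOST_GID=$(id -g)
echo "UID=${HOST_UID}"
echo "GID=${HOST_GID}"

# One way to make these visible to docker compose is to append them to
# the .env file it reads (illustrative only -- adjust to your setup):
# echo "UID=${HOST_UID}" >> .env
```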
#####################
# Development Image #
#####################
FROM overlay as dev

# Dev container arguments
ARG USERNAME=devuser
ARG UID=1000
ARG GID=${UID}

# Install extra tools for development
RUN apt-get update && apt-get install -y --no-install-recommends \
 gdb gdbserver nano

# Create new user and home directory
RUN groupadd --gid $GID $USERNAME \
 && useradd --uid ${UID} --gid ${GID} --create-home ${USERNAME} \
 && echo ${USERNAME} ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/${USERNAME} \
 && chmod 0440 /etc/sudoers.d/${USERNAME} \
 && mkdir -p /home/${USERNAME} \
 && chown -R ${UID}:${GID} /home/${USERNAME}

# Set the ownership of the overlay workspace to the new user
RUN chown -R ${UID}:${GID} /overlay_ws/

# Set the user and source entrypoint in the user's .bashrc file
USER ${USERNAME}
RUN echo "source /entrypoint.sh" >> /home/${USERNAME}/.bashrc
You can then add a new dev service to the docker-compose.yaml file. Notice that we're adding the source code as volumes to mount, but we're also mapping the folders generated by colcon build to a .colcon folder on our host file system. This makes it such that generated build artifacts persist between stopping our dev container and bringing it back up; otherwise, we would have to do a clean rebuild every time.

  dev:
    extends: overlay
    image: turtlebot3_behavior:dev
    build:
      context: .
      dockerfile: docker/Dockerfile
      target: dev
      args:
        - UID=${UID:-1000}
        - GID=${UID:-1000}
        - USERNAME=${USERNAME:-devuser}
    volumes:
      # Mount the source code
      - ./tb3_autonomy:/overlay_ws/src/tb3_autonomy:rw
      - ./tb3_worlds:/overlay_ws/src/tb3_worlds:rw
      # Mount colcon build artifacts for faster rebuilds
      - ./.colcon/build/:/overlay_ws/build/:rw
      - ./.colcon/install/:/overlay_ws/install/:rw
      - ./.colcon/log/:/overlay_ws/log/:rw
    user: ${USERNAME:-devuser}
    command: sleep infinity
At this point, you can do the following:

# Start the dev container
docker compose up dev

# Attach an interactive shell in a separate Terminal
# NOTE: You can do this multiple times!
docker compose exec -it dev bash

Because we have mounted the source code, you can make modifications on your host and rebuild inside the dev container… or you can use handy tools like the Visual Studio Code Dev Containers extension to develop directly inside the container. Up to you.
For example, once you're inside the container you can build the workspace with:

colcon build

Due to our volume mounts, you'll see that the contents of the .colcon/build, .colcon/install, and .colcon/log folders on your host have been populated. This means that if you shut down the dev container and bring up a new instance, these files will persist and will speed up rebuilds using colcon build.
Also, because we've gone through the trouble of making a user, you'll see that these files are not owned by root, so you can delete them if you'd like to clean out the build artifacts. Try doing this without making the new user and you'll run into some annoying permissions roadblocks.
$ ls -al .colcon
total 20
drwxrwxr-x  5 sebastian sebastian 4096 Jul  9 10:15 .
drwxrwxr-x 10 sebastian sebastian 4096 Jul  9 10:15 ..
drwxrwxr-x  4 sebastian sebastian 4096 Jul  9 11:29 build
drwxrwxr-x  4 sebastian sebastian 4096 Jul  9 11:29 install
drwxrwxr-x  5 sebastian sebastian 4096 Jul  9 11:31 log
The concept of dev containers is so widespread at this point that a standard has emerged at containers.dev. I also want to point out some other great resources, including Allison Thackston's blog, Griswald Brooks' GitHub repo, and the official VSCode dev containers tutorial.
Conclusion
In this post, you have seen how Docker and Docker Compose can help you create reproducible ROS 2 environments. This includes the ability to configure variables at build and run time, as well as creating dev containers to help you develop your code in these environments before distributing it to others.
We've only scratched the surface in this post, so make sure you poke around the resources linked throughout, try out the example repository, and generally stay curious about what else you can do with Docker to make your life (and your users' lives) easier.
As always, please feel free to reach out with questions and feedback. Docker is a highly configurable tool, so I'm genuinely curious about how this works for you or whether you have approached things differently in your work. I might learn something new!
Sebastian Castro is a Senior Robotics Engineer at PickNik.