First, something needs to host the website and serve the HTML files that a user’s browser displays. To keep everything self-contained, the robot hosts the website itself instead of relying on a server somewhere else. An added benefit of this setup is that only people on the same network as the server can access the website (unless IT were to set up port forwarding). People on the university network can be considered slightly more trustworthy than people on the world wide web because they usually have to log in to the network and are less likely to launch large attacks against random PCs they see. Despite not being visible from outside the network, the robot would have dynamic DNS set up to point a domain name to its local IP. Anyone who goes to that domain name (like robot.example.com) while on the university network would be able to view the website.
The webserver software used by this system is Nginx. The purpose of Nginx in this case is to encrypt web traffic going to and from Node.js (more on that in the next paragraph) so that people can’t snoop and steal secrets. To encrypt traffic and get the little green lock symbol in browsers, a certificate is required. Unfortunately, it is generally not possible to get a certificate from a trusted certificate authority when the website is hosted on a local network like this (although it might be possible using a DNS verification method). Instead, the server uses a self-signed certificate generated by a GitLab CI job. The downside is that browsers will show a warning about the certificate when people visit the page for the first time.
Both Nginx and Node.js run in their own Docker containers. This way, their installation is as simple as adding around 30 lines of configuration to the docker-compose.yml file. Only Nginx is exposed to the outside world, where it listens to port 80 (http and ws) and 443 (https and wss). Nginx relays http messages through the Docker network to the Node.js container on port 8080 and ws (websocket) messages on port 8081. Node.js either replies with files, or relays messages to/from the ROS container on port 9090.
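A docker-compose.yml implementing this layout might look roughly like the following sketch. Service names, image tags, and paths here are illustrative assumptions, not the project's actual file:

```yaml
services:
  nginx:
    image: nginx:stable
    ports:
      - "80:80"    # http / ws, redirected to https / wss
      - "443:443"  # https / wss
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/certs:/etc/nginx/certs:ro
    depends_on:
      - nodejs

  nodejs:
    image: node:lts
    working_dir: /home/node/app
    volumes:
      - ./node:/home/node/app
    command: npm start
    # Ports 8080 (http) and 8081 (ws) have no "ports:" entry, so they are
    # reachable only on the internal Docker network, not from the host.
```

Because only the nginx service publishes ports, everything else is hidden behind the reverse proxy, matching the description above.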
Roslibjs supports a wide variety of features and could even be used to stream video from the robot and display it on the website. In the future, it could be used to make something such as a telepresence mode for the robot. It can also be used to display maps or any other data the robot can send.
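Under the hood, roslibjs talks to rosbridge using a simple JSON protocol over the websocket, so the messages crossing the Node.js relay look something like the sketch below. The topic and message type names are just examples, not the robot's actual topics:

```javascript
// Build rosbridge v2 protocol messages by hand (roslibjs normally does this
// for you). Topic and type names below are examples only.
function subscribeMsg(topic, type) {
  return JSON.stringify({ op: "subscribe", topic: topic, type: type });
}

function publishMsg(topic, msg) {
  return JSON.stringify({ op: "publish", topic: topic, msg: msg });
}

// What roslibjs would send over the websocket to start receiving odometry:
const sub = subscribeMsg("/odom", "nav_msgs/Odometry");

// What it would send to drive the robot:
const cmd = publishMsg("/cmd_vel", {
  linear: { x: 0.5, y: 0, z: 0 },
  angular: { x: 0, y: 0, z: 0.2 },
});
```

Anything expressible in these messages (maps, camera images, velocity commands) can flow through the same relay, which is what makes features like a telepresence mode feasible.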
Everything in the node folder is run in a Docker container made from the official Node.js image. Their setup instructions are available on GitHub. docker-compose mounts a volume so that everything in the node folder is available inside the container.
The Node.js package setup was done following this tutorial. If you need to run npm commands, start the nodejs container and get a terminal to it by running sudo docker-compose run nodejs sh. npm and ROS conflict, so you have to do all your npm work inside the Docker container.
The key functions of Node.js:
- Hosts the website that allows interacting with the robot.
- Hosts a websocket server that functions as a relay to rosbridge in the ROS container. Maybe use this to require a login to connect to the websocket. See the example websocket server with authentication.
webviz.io provides a ROS viewer that can display ROS topics and image streams. Access it at https://webviz.io/app/?rosbridge-websocket-url=wss://localhost/ws after changing wss://localhost/ws to whatever your ROS websocket URL is.
Nginx is a webserver that relays requests to Node.js and handles SSL encryption. Key functions:
- Handle SSL connections
- Redirect http and ws (port 80) to https and wss (port 443) respectively
- Relay https requests to http://nodejs:8080
- Relay websocket (wss) requests on /ws to http://nodejs:8081
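The corresponding nginx configuration could look roughly like the following sketch. Server layout, certificate paths, and the /ws location are assumptions based on the description above, not the project's actual config:

```nginx
# Port 80: redirect everything to https/wss
server {
    listen 80;
    return 301 https://$host$request_uri;
}

# Port 443: terminate TLS, then relay over plain http on the Docker network
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/nginx/certs/nginx-selfsigned.key;

    # Regular page requests go to the Node.js web server
    location / {
        proxy_pass http://nodejs:8080;
        proxy_set_header Host $host;
    }

    # Websocket relay: the Upgrade headers are required for ws/wss
    location /ws {
        proxy_pass http://nodejs:8081;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Note that there is no server_name check, which is why the site answers on any domain or IP that reaches it.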
It uses a self-signed certificate, which is not ideal because it causes browsers to show a warning, but it still provides fairly good security. Nginx also doesn't check what the host domain is, so the website can be accessed by IP, by any domain that points to its IP, or locally by going to localhost or 127.0.0.1. Many things about this aren't ideal, but it works ok. DigitalOcean provides a tutorial, partially followed when setting this up, that shows how to properly host a website with Docker and Node.js. Using certbot to get a real certificate for a website on a local IP might be possible using a DNS authentication method if your DNS provider has a certbot plugin. The self-signed certificate Nginx uses in this project is generated with openssl by running the generate-cert.sh script, which was set up following this tutorial.
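The generate-cert.sh script presumably boils down to a single openssl invocation along these lines; the key size, lifetime, and output file names are guesses for illustration, not the script's actual contents:

```shell
# Generate a self-signed certificate and key, valid for one year.
# -nodes leaves the key unencrypted so nginx can read it without a passphrase.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -subj "/CN=localhost" \
    -keyout nginx-selfsigned.key \
    -out nginx-selfsigned.crt
```

Because no trusted certificate authority signed this certificate, browsers will show the warning described above until the user accepts it.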
Be careful: mapping ports with Docker seems to overrule the default ufw firewall rules. This is convenient in production but is something to be aware of when developing. More information here.
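If a port should stay firewalled during development, one workaround is to bind it to the loopback interface in docker-compose, so Docker never publishes it to the network in the first place. This is a general Docker feature, not necessarily something this project uses:

```yaml
services:
  nginx:
    ports:
      - "127.0.0.1:443:443"  # reachable only from the host itself
```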