System Engineering and Embedded Software Architecture Workshop

From the trunk to the leaves


Please fill out this attendance form to give us feedback about where everyone is at.

This is the agenda for the Bazel and System Design workshop, held October 18, 2018 from 6:45 PM to 9:00 PM. Please note that the times below are approximations, so expect each concept to run ±10 minutes.

  • Ground Control Software Review (5 mins)
  • Systems Engineering (30 mins)
  • Required Knowledge for Controls System (50 mins)
  • Feedback Form (5 mins)
  • Reminders
  • Free Time (50 mins)

Systems Engineering

Systems engineering is an interdisciplinary field of engineering and engineering management that focuses on how to design and manage complex systems over their life cycles. Systems engineering ensures that all likely aspects of a project or system are considered, and integrated into a whole.


How do I understand this project?

[Quote from Elon Musk]

Zoom all the way out

Big Picture

Zoom in a bit

Zoom 1

Zoom in a bit more

Zoom 2

Software System


Electrical System


Want to go deeper?

Read our 14-page technical documentation about Spinny that we wrote up for the 2017-2018 season.

Required Knowledge for Controls System

Pixhawk (PX4)




Capabilities of software:

  • Firmware is flashed to the flight controller computer (similar to how you would flash an Arduino)
  • Comes with a fully featured ground station interface (QGroundControl)
  • Software-in-the-loop (SITL): Can run simulation on a developer computer (within a docker image)
  • Hardware-in-the-loop (HITL): Can also run simulations on a physical flight controller

Running SITL

Original instructions available here:

You must install Gazebo and get all of its dependencies working (very annoying to do), and it only (officially) works on Linux.
Once you have their environment set up, use this:

Alternative: Use our docker image!

    ./ controls simulate
    tmux a


PX4 (and a lot of other aerial platforms) uses Mavlink messages for communication. Mavlink is a packet protocol for sending drone messages over a network. The low-level implementation of this protocol allows packets to be sent over UDP, TCP, and serial interfaces with the exact same message format.

Mavproxy adds a routing layer on top of Mavlink. Messages can be received over a serial connection (from the PX4) and converted/forwarded to one or many UDP/TCP/other serial connections.
Unfortunately, Mavproxy is written in Python and is very slow, so we have been experimenting with Intel's mavlink-router implementation and cmavnode.

Our processes

Running on the Raspberry Pi:

  • flight_loop: Main control loop and state machine for our drone. Iterates at a certain set frequency, processing newly received data and writing output on each iteration.
  • io: Receives sensor data from PX4 flight controller, sends this data to flight loop, receives output data from flight loop, and sends output data to both the PX4 and directly-connected actuators over GPIO (gimbal, alarm, etc.)
  • ground_communicator: Networks with the ground, passes missions and triggers to the flight_loop process, and sends back telemetry from the drone.

Why split up these processes?

  • Split complexity into multiple simpler subcomponents
  • Allow for easier unit test designs for subcomponents
  • Easier debugging (can easily see the inputs/outputs to each component, and can tap into interprocess communication streams to debug)
  • Faster recompilation times (only need to recompile modified processes)


  • Added overall complexity (what if 1 of the 3 processes fails? How do you start all of the processes at once and network them together? How do you ensure that the versions of each process are in sync?)

How do we compile for different architectures?

Your computer runs AMD64. The Raspberry Pi on the drone runs ARMv7, which has a different processor instruction set. Therefore, we need to build for both platforms if we want to both run simulations locally and deploy to hardware later on.

One alternative to cross compilation is to use Makefiles and build the code on each machine with its default compiler. This is not reliable, however, since you don't know which compiler version is being used. It also requires building the program on the Raspberry Pi itself when we want to deploy, which is extremely slow and buggy if not done right.

Cross compilation requires separate build programs. For AMD64 builds, we use the user's local Clang++ installation. For the Raspberry Pi, we use different compiler executables that are specifically built to compile ARMv7 programs on an AMD64 host. This whole system is encapsulated in a massive docker image, which is dirty but gives a reproducible environment for running all of this stuff in (and is also a lot easier to install).

Our C++ code is built with Bazel, the open-sourced version of the build system Google uses internally on essentially all of its projects. Google engineers have since gone on to other companies and developed similar build systems, including Buck (Facebook) and Pants (Twitter).

Build ≠ Compilation ≠ Linking

Build tools: Make, CMake, Gradle, Ninja, Ant, Bazel, Buck, Pants, ...
Compilers / compiler wrappers: gcc, clang++, javac, ...
Linkers: ld
Archivers: ar

For the most part, cross compilation works pretty well without any extra work from new developers. However, if you need to extend its capabilities in the future, all of the cross compilation rules can be found in the CROSSTOOL file.

Task: Explore the tools used in this file.

  • Where is gcc called when compiling for k8/AMD64?
  • Where is gcc located when compiling for raspi/ARMv7?

Make sure you have the latest updates from the master branch.

    git pull

To access the internal docker container, use the following command (assuming the container is running):

    ./tools/scripts/controls/ bash

For Mac users: you must set up the docker-machine environment variables every time you open a new terminal window. This can be done with the following:

    eval $(docker-machine env uas-env)

Alternatively, run our script here:


Don't add this to your path, as it will fail if the uas-env machine is not running or we decide to change the docker machine name for some reason.

Use this command to get to the hidden bazel cache files:

    cd ~/.cache/bazel/_bazel_uas/

In that folder, there will be a bunch of random hashes. These hashes separate the cache files for different workspaces. cd into the one that looks newest (check ls -l -a for the last-modified date).

Then, cd external. You will see a list of all the downloaded dependencies. These dependencies are defined in WORKSPACE.

cd into libzmq (just picked a random library for this example). What does cat BUILD.bazel print?

This build file is a Bazel rules file for the given library. Documentation for the stuff in these files is available here.

Feel free to explore around a bit more. To get out of the bash prompt of the controls docker image, use exit.

Create your own Bazel library

We are going to create a new library and binary that print Hello World.

    cd drone_code/lib/sandbox
    mkdir test2
    cd test2

Create three source files in this folder.

#include "test2.h"

namespace lib {
namespace test2 {

Test2::Test2() {}

void Test2::PrintHello() {
  ::std::cout << "Hello UAS@UCLA!" << ::std::endl;
}

}  // namespace test2
}  // namespace lib


#pragma once

#include <iostream>

namespace lib {
namespace test2 {

class Test2 {
 public:
  Test2();

  void PrintHello();
};

}  // namespace test2
}  // namespace lib

#include "test2.h"

int main() {
  ::lib::test2::Test2 test2;
  test2.PrintHello();

  return 0;
}

You will then need to create a BUILD file in this directory to contain the rules for building our new library and binary. Use drone_code/lib/logger/BUILD for inspiration.

Here is a template for your BUILD file (you will need to fill in the ______________ spots)

cc_binary(
  name = 'test2',
  visibility = ['//visibility:______'],
  srcs = [
    ______________,
  ],
  deps = [
    ______________,
  ],
)

cc_library(
  name = 'test2_lib',
  visibility = ['//visibility:____'],
  srcs = [
    ______________,
  ],
  hdrs = [
    ______________,
  ],
)

After creating all of these files, running

    ls -l -a

should print something like:

total 24
drwxr-xr-x 2 comran comran 4096 Oct 16 12:01 .
drwxr-xr-x 7 comran comran 4096 Oct 16 11:53 ..
-rw-r--r-- 1 comran comran  279 Oct 16 12:00 BUILD
-rw-r--r-- 1 comran comran  214 Oct 16 11:59
-rw-r--r-- 1 comran comran  182 Oct 16 11:59 test2.h
-rw-r--r-- 1 comran comran   64 Oct 16 12:01

If everything is in place, running ./ controls build from the root of drone_code should succeed.

To run the binary:

    ./tools/scripts/controls/ ./bazel-out/k8-fastbuild/bin/lib/sandbox/test2/test2

Why do I need the ./tools/scripts/controls/ prefix? Since all of our controls work is done within a docker container, all of the file paths are relative to the context of the docker container. Therefore, you either need to enter the container using bash or execute the command directly through the script (non-interactive).

Taking it one step further, you can also run your executable on a Raspberry Pi. You first need to extract the executable from the docker image. Enter the docker image again using ./tools/scripts/controls/ bash, which should take you to the root of the docker image. Check this by running pwd and making sure that the command prints out /home/uas/code_env.

Run the following to tell bazel to build your code for the Raspberry Pi:

    bazel build --cpu=raspi //lib/sandbox/test2:test2

The results of the build will be in bazel-out. However, this directory is a symbolic link, which does not work outside of the docker image. So, let's copy the binary straight to the code environment root, which is easier to access:

    cp bazel-out/raspi-fastbuild/bin/lib/sandbox/test2/test2 .

Now exit, and you should see your executable compiled for the ARMv7 architecture. You can verify the type of this executable by running file test2, which should print out something like:

test2: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/, for GNU/Linux 2.6.26, BuildID[md5/uuid]=c5093e777d841a992b9213f11f46bc69, with debug_info, not stripped

Feedback Form

Please fill out this Google form with your feedback about this workshop. Since this is the first year that UAS@UCLA is allocating hours to training new members with workshops, your feedback is especially valuable: it will help Leadership figure out how to improve future workshops.


Reminders

Vision Basics Workshop is next Tuesday (10/23/18) at 6:00 PM in Engr IV 38-138.

General Meeting is next Thursday (10/25/18) at 6:00 PM in Boelter 4413.

Free Time

See the Controls page.