This document is mostly written for internal consumption, but I figured, why not make it public? openpilot is our open source ADAS system that anyone can contribute to. We’ll start at the hardware and work our way up. If you want to follow along in the code, clone and build openpilot first:

# clone openpilot into your home directory
cd ~
git clone --recurse-submodules https://github.com/commaai/openpilot.git
# set up the Ubuntu environment
openpilot/tools/ubuntu_setup.sh
# build openpilot
cd openpilot && scons -j$(nproc)
The Hardware
Three pieces of hardware are needed to run openpilot: an EON running NEOS, a panda based on an STM32F4, and a supported car. The panda acts as the safety-enforcing bridge between the EON and the car, using a chip with great support for functional safety, and software that will soon be MISRA (done), ISO 26262, and SIL2 compliant.
The EON runs a modified version of Android where all the processes this post is about live. And the car is, obviously, the car, with three CAN buses in the right arrangement for the car harness (it’s amazing how many manufacturers match this spec).
After the car, we hit the panda firmware, maintained by our hardware team. Through that, we get to the EON, and to the start of our software tour. You’ll find these daemons in openpilot/selfdrive.
They share an IPC format as specified by cereal. It’s all single-publisher, multiple-subscriber messaging, abstracted such that multiple backends can be used. Right now, we support ZMQ and our custom msgq.
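For example, here’s roughly what subscribing to a service looks like from Python, using the cereal messaging bindings that ship with openpilot (a minimal sketch, not production code):

import cereal.messaging as messaging

# subscribe to carState, the car abstraction published by controlsd
sock = messaging.sub_sock('carState')
while True:
    msg = messaging.recv_one(sock)  # blocks until the next message arrives
    print(msg.carState.vEgo)        # current speed in m/s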
The Sensors and Actuators (hardware team)
boardd
This is the receiving side of the panda firmware. It uses libusb to talk to the panda, parsing the raw USB communications into “can” packets. On the grey and black pandas, it also broadcasts the GPS packets from the u-blox NEO-M8.
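Each “can” message is a batch of CAN frames. A sketch of peeking at them, assuming the CanData schema in cereal:

import cereal.messaging as messaging

sock = messaging.sub_sock('can')
msg = messaging.recv_one(sock)
for frame in msg.can:
    # address is the CAN arbitration ID, src is the bus it arrived on
    print(hex(frame.address), frame.src, bytes(frame.dat).hex())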
camerad
This is the camera stack. As far as I know, it’s the only public custom Qualcomm camera implementation, and it speaks directly with the kernel. It captures both the road and driver cameras, and handles autofocus and autoexposure.
sensord
The rest of the sensors are handled here: gyro, accelerometer, magnetometer, and light sensor. The GPS, including Qualcomm raw GPS measurements, is handled here as well.
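A sketch of reading them, assuming the sensorEvents schema in cereal:

import cereal.messaging as messaging

sock = messaging.sub_sock('sensorEvents')
msg = messaging.recv_one(sock)
for event in msg.sensorEvents:
    # each event is a capnp union; which() tells you what kind it holds
    if event.which() == 'acceleration':
        print(event.acceleration.v)  # [x, y, z] in m/s^2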
NEOS kernel space
This is the Linux kernel and the big mess of Android. You’ll find our kernel here and our Android fork here. The kernel is unified to run on both the OnePlus 3 and the LePro 3.
The Data Processing (research team)
modeld
The main model, in models/driving_model.dlc, takes in a picture from the road camera and answers the question “Where should I drive the car?” It also takes in a desire input, which can command the model to take an action, such as turning or changing lanes. This is where a lot of the comma magic happens; it’s deeply temporal, and it’s trained in ways and with tricks that exceed the deep learning state of the art.
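The desire is essentially a command vector fed in alongside the image. A toy sketch of the idea, with a hypothetical list of desires (the real enum is defined in cereal):

import numpy as np

# hypothetical desire list for illustration; the real enum lives in cereal
DESIRES = ['none', 'turnLeft', 'turnRight', 'laneChangeLeft', 'laneChangeRight']

def desire_vector(desire):
    # the model consumes the desire as a one-hot vector next to the image
    v = np.zeros(len(DESIRES), dtype=np.float32)
    v[DESIRES.index(desire)] = 1.0
    return v

print(desire_vector('laneChangeLeft'))  # [0. 0. 0. 1. 0.]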
modeld also runs the posenet, in models/posenet.dlc. It takes in two frames and outputs the 6-DoF transform between them. It is used for calibration and sanity checking, and is not trained in any particularly magical way.
monitoringd (lives in modeld directory)
This is the driver monitoring model runner. It tracks your head pose, eye positions, and eye states using the model in models/monitoring_model.dlc. It runs on the DSP so as not to use CPU or GPU resources needed by the other daemons, giving it tons of room to grow.
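Its outputs are published as a driverState message. A sketch of reading it, assuming the fields in cereal’s schema:

import cereal.messaging as messaging

sock = messaging.sub_sock('driverState')
ds = messaging.recv_one(sock).driverState
# probability a face was found, and that each eye is open
print(ds.faceProb, ds.leftEyeProb, ds.rightEyeProb)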
locationd/ubloxd (TBD)
So there’s stuff in locationd right now, but it’s not yet the real localizer we’re aiming for. Right now, ubloxd parses the data stream from the ublox, and locationd combines it with the posenet to get a stable estimate of the yaw.
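The fusion idea is simple, even if the final localizer won’t be: integrate a fast, drifting source and correct it with a slow, absolute one. A toy complementary filter to illustrate the concept (not the actual code):

class YawFilter:
    def __init__(self, alpha=0.98):
        self.alpha = alpha
        self.yaw = 0.0

    def update(self, yaw_rate, gps_heading, dt):
        # integrate the fast, drifting source (posenet / gyro)
        self.yaw += yaw_rate * dt
        # nudge toward the slow, absolute source (ublox heading)
        self.yaw = self.alpha * self.yaw + (1.0 - self.alpha) * gps_heading
        return self.yaw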
calibrationd
The model takes in calibrated frames, meaning the yaw and pitch are corrected for before the model even looks at the picture. This is important because users mount their EONs in all sorts of ways, and calibrationd outputs the transform to canonicalize them.
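Concretely, “calibrated” means the raw frame gets warped so the model always sees the same nominal camera pose. A sketch with a made-up transform, using OpenCV purely for illustration:

import numpy as np
import cv2

raw_frame = np.zeros((874, 1164), dtype=np.uint8)  # stand-in for a camera frame

# hypothetical 3x3 transform from the raw frame to the canonical model frame;
# calibrationd's job is to estimate the real one from how the EON is mounted
M = np.array([[1.0, 0.0, -20.0],
              [0.0, 1.0,   5.0],
              [0.0, 0.0,   1.0]], dtype=np.float32)
calibrated = cv2.warpPerspective(raw_frame, M, (512, 256))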
The Controls (openpilot team)
controlsd
This is the main 100 Hz loop driving the car. It gets a plan from plannerd and constructs the CAN packets required to make that plan happen. It also publishes carState, which is our universal car abstraction.
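Stripped of everything interesting, the loop shape looks something like this, using openpilot’s Ratekeeper helper (the real controlsd does far more per iteration):

import cereal.messaging as messaging
from common.realtime import Ratekeeper  # openpilot's fixed-rate loop helper

sm = messaging.SubMaster(['plan'])
pm = messaging.PubMaster(['carState'])

rk = Ratekeeper(100)  # 100 Hz
while True:
    sm.update()
    # ... turn sm['plan'] into actuator commands and CAN packets here ...
    rk.keep_time()  # sleep out the rest of the 10 ms frame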
plannerd (lives in controls directory)
The model output isn’t quite good enough to drive the car. It tells you where the car needs to be, but not how to get it there. In plannerd, we run three ACADO-based MPC control loops: one for lateral control and two for longitudinal control.
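ACADO generates fast C code for these solvers, but the MPC idea itself is easy to show. A toy longitudinal example in cvxpy, an illustration of the concept rather than the real thing:

import cvxpy as cp

N, dt = 20, 0.2            # horizon: 20 steps of 0.2 s (hypothetical)
v0, v_target = 20.0, 25.0  # current and desired speed, m/s

v = cp.Variable(N + 1)  # planned speed trajectory
a = cp.Variable(N)      # planned accelerations

cost = cp.sum_squares(v - v_target) + 0.1 * cp.sum_squares(a)
constraints = [v[0] == v0]
constraints += [v[t + 1] == v[t] + a[t] * dt for t in range(N)]
constraints += [a >= -3.5, a <= 2.0]  # hypothetical comfort limits

cp.Problem(cp.Minimize(cost), constraints).solve()
print(a.value[0])  # execute only the first action, then re-plan next cycle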
radard (lives in controls directory)
This parses the radar data into a RadarState packet. Every car has a different radar, and this canonicalizes them.
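A sketch of reading the lead car out of it, assuming the radarState schema in cereal:

import cereal.messaging as messaging

sock = messaging.sub_sock('radarState')
rs = messaging.recv_one(sock).radarState
if rs.leadOne.status:  # a lead car was detected
    print(rs.leadOne.dRel, rs.leadOne.vRel)  # distance (m) and relative speed (m/s)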
paramsd (lives in locationd directory)
This is the learner for car-specific parameters, like tire stiffness, steering angle offset, and steer ratio.
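A toy version of the idea for a single parameter: treat the steering angle offset as a slowly-moving bias and average toward it online (an illustration, not the actual estimator):

class AngleOffsetLearner:
    def __init__(self, lr=0.001):
        self.lr = lr
        self.offset = 0.0

    def update(self, measured_angle, predicted_angle):
        # slowly absorb the persistent disagreement between the sensor
        # and the model's prediction as a bias
        self.offset += self.lr * ((measured_angle - predicted_angle) - self.offset)
        return self.offset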
logging/app/UI (cloud team)
loggerd
This daemon subscribes to all the sockets and cameras, and writes them out to the logs.
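Those logs are capnp event streams you can read back with the LogReader helper in openpilot/tools. A sketch, with a hypothetical filename:

from tools.lib.logreader import LogReader

lr = LogReader('rlog.bz2')  # a segment's log file; this path is just an example
for msg in lr:
    print(msg.logMonoTime, msg.which())  # timestamp and event type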
uploader/deleter
After we’ve logged the data, we have to get it to the cloud. But not all data makes it to the cloud anymore; the deleter removes old data to make sure there’s always free space, like a real dashcam.
ui
This is the main driving UI. It’s a 2,300-line mess and needs a refactor, but it does work.
apk.frame
This is the outer border. Soon, this will be merged into the C++ UI. The source for this lives here.
apk.offroad
This is the settings menu, the onboarding tutorial, the miles display, and the ad for comma prime. It’s written in React Native and lives here.
athenad
This service allows real-time communication with your parked car. Check out the API.
System Support (openpilot team)
manager/thermald
manager starts and stops the constellation of processes that make openpilot work, and thermald keeps an eye on the device’s thermal state.
updated
This daemon manages openpilot updates.
logmessaged/tombstoned/logcatd/proclogd
These are helpers that log data in the event of a process or the system misbehaving.
NEOS user space
This is the Termux-based userspace on the EON. It provides a Linux-like environment on Android.
Call to Action
If you are interested in working on this open source project, comma.ai is hiring an openpilot engineer. Apply today!
Also follow us on Twitter.