Although openpilot models are mostly trained end to end using the path the car drove, there are a few steps in our pipeline that still require knowing what things in the image are.
Unlike datasets such as Mapillary Vistas and Cityscapes, we have far fewer classes, and we include much more diverse images (like night driving). Our segnet has 5 labels, grouped into 3 broad classes:
- Moves with scene — Road, Lane Markings, Undrivable (including sky)
- Moves themselves — Movable (like vehicles, pedestrians, etc…)
- Moves with you — My car (and anything inside it)

It's important to filter out the latter two classes while groundtruthing posenets. Lane markings and roads are important for determining the tracks of cars, both for current models and potentially for future HD maps.
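For concreteness, here's one way to write the label set down in Python. This mapping is our own illustration, not something shipped in the repo, and the hex colors are the ones listed in the comma10k README at the time of writing, so double-check the README before relying on them.

# the 5 comma10k labels, grouped into the 3 broad classes above
# (colors from the repo README at the time of writing -- verify before use)
LABELS = {
    "road":          {"color": "#402020", "group": "moves with scene"},
    "lane markings": {"color": "#ff0000", "group": "moves with scene"},
    "undrivable":    {"color": "#808060", "group": "moves with scene"},
    "movable":       {"color": "#00ff66", "group": "moves themselves"},
    "my car":        {"color": "#cc00ff", "group": "moves with you"},
}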
Our current segnet is terrible
Okay, so it’s not usually quite that bad, but in pictures that look less than pristine, the segnet can do a really bad job. That is image 793 in the comma10k dataset.
Right now, you can improve it. Fork the comma10k repo and edit the mask PNG in whatever image editor you choose. Be careful to use only the 5 colors from the README. When you're done, submit a pull request.
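Before opening that pull request, it's worth checking that your edited mask really contains only the allowed colors, since image editors can sneak in anti-aliased edge pixels. Below is a minimal sketch of such a check. The script and its name are our own, not part of the repo, and the palette again assumes the README colors at the time of writing.

# check_mask.py -- sanity-check that a mask PNG uses only the 5 allowed colors
import sys
import numpy as np
from PIL import Image

# palette from the comma10k README at the time of writing -- verify before use
ALLOWED = {(0x40, 0x20, 0x20),  # road
           (0xFF, 0x00, 0x00),  # lane markings
           (0x80, 0x80, 0x60),  # undrivable
           (0x00, 0xFF, 0x66),  # movable
           (0xCC, 0x00, 0xFF)}  # my car

mask = np.array(Image.open(sys.argv[1]).convert("RGB"))
colors = {tuple(int(v) for v in c) for c in np.unique(mask.reshape(-1, 3), axis=0)}
bad = colors - ALLOWED
if bad:
    print("disallowed colors found:", sorted(bad))
    sys.exit(1)
print("mask OK")

Run it as python check_mask.py path/to/mask.png (the path is a placeholder) before submitting.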
Commercial Tools
We tried ScaleAPI and Labelbox. The Labelbox labeling tool is slow and hard to use, and we don't see how it provides much value over the old commacoloring tool, never mind over Photoshop. The one image we submitted to ScaleAPI came back incorrect, never mind the $6.40 we paid for it. We tried their auditing process, but have gotten no response yet.
The large vehicle in the center of the scene is marked "undrivable unmovable," which is not a small oversight. The original is image 999 from the comma10k dataset. Note that we chose to use more than the 5 labels, since ScaleAPI charges the same regardless.
We are unimpressed with the current commercial offerings.
A Continually Improving Dataset
Many of the existing self-driving semantic segmentation datasets have bugs, but because they aren't hosted somewhere like GitHub, there's no way to fix them. We want to make a perfect dataset of 10k images, the comma10k, and forking and pull requests are a great way to do this. GitHub even has a very nice image diff tool.
For now, we have released 1k images in the repo. Let's get these labeled; then we'll train a segnet on that 1k to pre-label the next 1k, and touch up the results. After a few rounds of this, the touch-ups should get easier and easier. 10k pixel-accurate images coming soon! We've seen huge success with crowdsourced open source projects like this in the past.
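One hypothetical way to quantify "easier and easier" across bootstrapping rounds is to measure how much of a model-proposed mask survives human touch-up. The helper below is our own sketch, not part of the repo.

# fraction of pixels where the model's proposed mask already matches
# the human-corrected mask (higher means less touch-up was needed)
import numpy as np
from PIL import Image

def pixel_agreement(proposed_path, corrected_path):
    a = np.array(Image.open(proposed_path).convert("RGB"))
    b = np.array(Image.open(corrected_path).convert("RGB"))
    assert a.shape == b.shape, "masks must be the same size"
    return float((a == b).all(axis=-1).mean())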
This dataset is MIT licensed, meaning it can be used for a huge variety of purposes. Let’s build the next dataset used for semantic segmentation benchmarks everywhere!
Help Out Today
Join our Discord and come to the #comma-pencil channel. Procedures are being worked out to make this labeling as fast and as accurate as possible. Clone the repo from https://github.com/commaai/comma10k.
We're hoping to have the first 1000 images labeled by 3/16 so we can move on to phase 2. If they are, we'll have some prizes for the top participants. No purchase necessary to play, just pull requests!
You should already be following us on Twitter.