1 - Overview

A Quick Overview of CherryBot Robot Systems

image

What is it?

A CherryBot is an autonomous robotic system built around the production-ready NVIDIA Jetson Nano System on Module (SOM), a low-power AI board deployed as an edge device, together with a sensor suite that includes multiple cameras, GPS, and swappable batteries. It is an autonomous delivery robot that picks up and delivers swag and food items within a conference, a campus, or an area of roughly 5-6 square kilometers centered on university premises. The goal is for the robot to traverse a given conference or campus area delimited by coordinates from a GPS receiver, detect and avoid obstacles in its path, and distribute swag and food items using deep learning algorithms.

What is it good for?

CherryBot Robotic Platform is a game-changing educational robot built to unlock the potential in every learner. It provides users with an in-depth understanding of Artificial Intelligence and IoT edge devices such as the Jetson Nano and Jetson AGX Xavier.

What is it not good for?

This is just a prototype and is not intended for production use.

Where should I go next?

2 - Hardware

List of Hardware for CherryBot

Preparing Your Environment

S. No. Items Link Reference
1 Prusa i3 MK3S Buy
2 Arduino Uno Buy
3 300 RPM BO Motor-Straight Buy
4 L293D Motor Driver Buy
5 Jetson Nano 4GB Buy
6 Jetson AGX Xavier Buy
7 Arduino Power Supply Buy
8 Cable for Arduino UNO Buy
9 Digital Multimeter Buy
10 Soldering Workstation Buy
11 Soldering Mat Buy
12 SD Card 128 GB Buy
13 Noctua Fans Buy
14 NEO-6M GPS Module with EPROM Buy

Prusa i3 MK3S (3D Printer)

The Prusa i3 uses 3D printing filament as feedstock to make parts. It is simple to use and consistently produces good-quality prints. The i3 MK3S comes with a brand-new SuperPINDA probe for improved first-layer calibration, high-quality Misumi bearings, and various useful design tweaks that make the printer easier to assemble and maintain. This red-and-black printer measures 15 by 19.7 by 22 inches (HWD), excluding the spool and spool holder, which sit atop the printer. It is considerably larger than the Original Prusa Mini, which measures 14.6 by 13 by 15 inches (HWD). The i3 MK3S also has a larger print volume, 9.8 by 8.3 by 7.9 inches, compared with the 7-by-7-by-7-inch print volume of the Prusa Mini.

image

Arduino Uno

The Arduino Uno R3 with Cable is a microcontroller board based on the ATmega328. It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz ceramic resonator, a USB connection, a power jack, an ICSP header, and a reset button.

My Image

300 RPM BO Motor-Straight

The 300 RPM BO Motor Plastic Gear Motor (BO series, straight) gives good torque and RPM at low operating voltages, which is the biggest advantage of these motors. The small shaft with matching wheels gives an optimized design for your application or robot. Mounting holes on the body and its light weight make it suitable for in-circuit placement. This motor can be used with the 69mm-diameter wheel for plastic gear motors and the 87mm-diameter multipurpose wheel for plastic gear motors.

My Image

L293D Motor Driver

The L293D Motor Driver/Servo Shield for Arduino is probably one of the most versatile on the market, featuring 2 servo and 4 motor connectors for DC or stepper motors. This Arduino-compatible motor driver shield is a full-featured product that can drive four DC motors, or two 4-wire steppers plus two 5V servos. It drives the DC motors and steppers with the L293D chips, and it drives the servos from Arduino pins 9 and 10. The shield contains two L293D motor drivers and one 74HC595 shift register. The shift register expands 3 pins of the Arduino to 8 pins to control the direction of the motor drivers. The output enable pins of the L293Ds are connected directly to the PWM outputs of the Arduino.

My Image

Fluke 106 Multimeter

Image

Weller WE 1010NA soldering Workstation

image

NEO-6M GPS Module with EPROM

Image

MT3608 2A Max DC-DC Step Up Power Module Booster Power Module

The MT3608 2A Max DC-DC Step Up (Booster) Power Module is a low-cost module that can step up a 2-24V input to a 5-28V output at up to 2A.

Image

0.28 Inch 0-100V Three Wire DC Voltmeter

This is a tiny and compact digital voltmeter with a red LED display. The 0-100V, 0.28-inch DC voltmeter requires only a few minutes of setup: simply connect the wires directly to the source you want to measure and read the value on the LED display.

Image

Soldering iron Stand Holder Table Magnifying glass

TE-801 Multi-function LED Magnifier PCB Soldering iron Stand Holder Table Magnifying glass 35X 12X w/ 2-LED Light

My Image

Custom Fume extractor

My Image

Jetson Nano

My Image

2.1 - Working with GPS Module

How to get started with GPS Module

NEO-6M GPS Module with EPROM is a complete GPS module based on the NEO-6M receiver. This unit uses the latest technology to give the best possible positioning information and includes a larger, built-in 25 x 25mm active GPS antenna with a UART TTL socket. A battery is also included so that you can obtain a GPS lock faster. This is an updated GPS module that can be used with ArduPilot Mega v2. It gives the best possible position information, allowing for better performance with your ArduPilot or other multirotor control platform.

The GPS module has a serial TTL output and four pins: TX, RX, VCC, and GND. You can download the u-center software to configure the GPS, change its settings, and much more. It is really good software (see link below).
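Because the module simply streams NMEA sentences over its TX pin, you can also read it directly from Python. The following is a minimal sketch, assuming the pyserial and pynmea2 packages are installed and the module is wired to /dev/ttyAMA0 at 9600 baud as described later on this page; it is an illustration only, not part of the CherryBot repository.

# Minimal sketch: read NMEA sentences from the NEO-6M's serial TTL output and
# print decimal-degree coordinates. Assumes pip3 install pyserial pynmea2
# and the module on /dev/ttyAMA0 at 9600 baud (illustrative only).
import serial
import pynmea2

with serial.Serial("/dev/ttyAMA0", 9600, timeout=1) as ser:
    while True:
        line = ser.readline().decode("ascii", errors="replace").strip()
        if line.startswith(("$GPGGA", "$GPRMC")):   # sentences that carry a position fix
            try:
                msg = pynmea2.parse(line)
                print("Latitude={} Longitude={}".format(msg.latitude, msg.longitude))
            except pynmea2.ParseError:
                continue                            # skip partially received sentences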

Table of Contents

  1. Intent
  2. Hardware
  3. Software
  4. Connect GPS Module to Raspberry Pi
  5. Plotting the GPS Values over Google Maps
  6. Stream Data Over PubNub

Intent

How to connect a GPS module to a Raspberry Pi or Arduino, fetch the latitude and longitude values, and plot them on Google Maps

Hardware

  • Raspberry Pi/Arduino
  • NEO-6M GPS Module with EPROM

image

Software

  • Flash the Raspberry Pi SD card with the OS using Etcher

Connect the GPS module to the Raspberry Pi.

There are only 4 wires (F to F), so it’s a simple connection.

image

  • Neo-6M → Raspberry Pi

  • VCC → Pin 1 (3.3V)

  • TX → Pin 10 (RX, GPIO15)

  • RX → Pin 8 (TX, GPIO14)

  • GND → Pin 6 (GND)

image

Turn Off the Serial Console

By default, the Raspberry Pi uses the UART as a serial console. We need to turn off that functionality so that we can use the UART for our own application. Open a terminal session on the Raspberry Pi.

Step 1. Backup the file cmdline.txt

sudo cp /boot/cmdline.txt /boot/cmdline_backup.txt 

Step 2. Edit cmdline.txt and remove the serial console entry

sudo nano /boot/cmdline.txt

Step 3. Delete console=ttyAMA0,115200

Once you delete it, save the file by pressing Ctrl X, Y, and Enter.

Step 4. Edit /etc/inittab

sudo nano /etc/inittab 

Step 5. Find ttyAMA0

You can find ttyAMA0 by pressing Ctrl W and typing ttyAMA0 on the search line

Press Home > insert a # symbol to comment out that line and Ctrl X, Y, Enter to save.

sudo reboot

Step 6. Install the GPS software

Open a terminal session and type:

sudo apt-get install gpsd gpsd-clients

Step 7. Start the serial port:

stty -F /dev/ttyAMA0 9600

Now start GPSD:

sudo gpsd /dev/ttyAMA0 -F /var/run/gpsd.sock

Step 8. Final Results

cgps -s

Fetching the Values

Clone the repository

git clone https://github.com/collabnix/cherrybot
cd cherrybot/pubnub/

Fetching the GPS values

python3 gps.py
Latitude=12.9814865and Longitude=77.6683425
Latitude=12.9814848333and Longitude=77.6683436667
Latitude=12.9814841667and Longitude=77.6683451667
Latitude=12.9814818333and Longitude=77.6683461667
Latitude=12.9814853333and Longitude=77.6683491667
Latitude=12.9814783333and Longitude=77.6683485
Latitude=12.9814701667and Longitude=77.6683466667
Latitude=12.981464and Longitude=77.668345
Latitude=12.9814586667and Longitude=77.6683438333
Latitude=12.9814525and Longitude=77.6683428333
Latitude=12.9814458333and Longitude=77.6683421667
Latitude=12.9814395and Longitude=77.6683421667
Latitude=12.9814331667and Longitude=77.668342
Latitude=12.981428and Longitude=77.6683425
Latitude=12.981423and Longitude=77.6683428333
Latitude=12.9814185and Longitude=77.6683431667
Latitude=12.9814146667and Longitude=77.6683436667
Latitude=12.9814095and Longitude=77.6683443333
Latitude=12.9814056667and Longitude=77.6683456667
Latitude=12.981401and Longitude=77.668346
Latitude=12.9813966667and Longitude=77.66834
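For reference, a script that produces output in this format can be very small. The sketch below reads fixes from the gpsd daemon configured in the previous steps (it assumes the python3-gps client library is installed) and is only an approximation of what gps.py in the repository does; the actual script may differ.

# Approximation of a gpsd-based reader (not the repository's gps.py verbatim).
# Assumes gpsd is running as configured above and python3-gps is installed.
from gps import gps, WATCH_ENABLE

session = gps(mode=WATCH_ENABLE)         # connect to the local gpsd instance

while True:
    report = session.next()              # blocks until gpsd emits the next report
    if report["class"] == "TPV":         # TPV reports carry the position fix
        lat = getattr(report, "lat", None)
        lon = getattr(report, "lon", None)
        if lat is not None and lon is not None:
            print("Latitude={}and Longitude={}".format(lat, lon))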

Plotting the GPS Values over Google Maps

Stream Data Over PubNub

If you haven’t already done so, sign up for a free PubNub account before you begin this step.

Change directory

Change directory into the examples directory containing the gps_simpletest.py file and install the PubNub Python SDK.

pip3 install pubnub

Import PubNub Package

import pubnub
from pubnub.pnconfiguration import PNConfiguration
from pubnub.pubnub import PubNub
from pubnub.callbacks import SubscribeCallback
from pubnub.enums import PNOperationType, PNStatusCategory

Configure a PubNub instance with your publish/subscribe Keys

pnconfig = PNConfiguration()
pnconfig.subscribe_key = "YOUR SUBSCRIBE KEY"
pnconfig.publish_key = "YOUR PUBLISH KEY"
pnconfig.ssl = False
pubnub = PubNub(pnconfig)

Then to publish, place a publishing callback somewhere near the beginning of your code. You can write whatever you want for the callback, but we’ll leave it blank as we don’t really need it for now.

def publish_callback(result, status):
    pass
    # Handle PNPublishResult and PNStatus

Here is where you decide what data you want to publish. Since we are building just a simple GPS tracking device, we’re just going to be dealing with the latitude and longitude coordinates.

When you want to publish multiple variables in one JSON, you must create a dictionary like so:

dictionary = {"DATA 1 NAME": gps.DATA1, "DATA 2 NAME": gps.DATA2}

So in our case we would write:

dictionary = {"latitude": gps.latitude, "longitude": gps.longitude}

And then to publish that data, you would format the dictionary like this:

pubnub.publish().channel("CHANNEL").message(dictionary).pn_async(publish_callback)

It is best to place the dictionary and publishing lines inside an “if gps.DATA is not None” check to avoid program failures when the GPS has no fix, as in the sketch below.
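Putting the pieces together, a minimal sketch of the publish step might look like this (the channel name "gps-channel" is illustrative; reuse the gps object and pubnub instance your script already has):

# Publish only when the GPS actually has a fix; the channel name is illustrative.
if gps.latitude is not None and gps.longitude is not None:
    dictionary = {"latitude": gps.latitude, "longitude": gps.longitude}
    pubnub.publish().channel("gps-channel").message(dictionary).pn_async(publish_callback)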

Visualize your GPS Data with Google Maps

It’s time to visualize our GPS data in a way that humans can understand.

We’re just going to create a small HTML page that will grab GPS data from our PubNub channel and graph the data with a geolocation API.

Google Maps API

The Google Maps API is a universal tool that is not only one of the cheaper APIs at higher volumes of calls but also offers a rich and expansive toolset for developers. Its geolocation data is more accurate than most other APIs, and it provides extensive tools such as ETA estimates that use Google's geographical terrain data.

So if you ever want to build a serious GPS tracking app with PubNub, Google Maps is the way to go.

image

You’ll first need to get a Google Maps API key.

Once that’s done, create an .html file and copy-paste the code below (explanation of the code is below as well).

<!DOCTYPE html>
<html>
  <head>
    <title>Simple Map</title>
    <meta name="viewport" content="initial-scale=1.0">
    <meta charset="utf-8">
    <style>
      /* Always set the map height explicitly to define the size of the div
       * element that contains the map. */
      #map {
        height: 100%;
      }
      /* Optional: Makes the sample page fill the window. */
      html, body {
        height: 100%;
        margin: 0;
        padding: 0;
      }
    </style>
    <script src="https://cdn.pubnub.com/sdk/javascript/pubnub.4.23.0.js"></script>
  </head>
  <body>
    <div id="map"></div>
    <script>
  // the smooth zoom function
  function smoothZoom (map, max, cnt) {
      if (cnt >= max) {
          return;
      }
      else {
          z = google.maps.event.addListener(map, 'zoom_changed', function(event){
              google.maps.event.removeListener(z);
              smoothZoom(map, max, cnt + 1);
          });
          setTimeout(function(){map.setZoom(cnt)}, 80); // 80ms is what I found to work well on my system -- it might not work well on all systems
      }
  } 
    var pubnub = new PubNub({
    subscribeKey: "YOUR SUBSCRIBE KEY",
    ssl: true
  });  
  var longitude = 30.5;
  var latitude = 50.5;
  pubnub.addListener({
      message: function(m) {
          // handle message
          var channelName = m.channel; // The channel for which the message belongs
          var channelGroup = m.subscription; // The channel group or wildcard subscription match (if exists)
          var pubTT = m.timetoken; // Publish timetoken
          var msg = m.message; // The Payload
          longitude = msg.longitude;
          latitude = msg.latitude;
          var publisher = m.publisher; //The Publisher
    var myLatlng = new google.maps.LatLng(latitude, longitude);
    var marker = new google.maps.Marker({
        position: myLatlng,
        title:"PubNub GPS"
    });
    // To add the marker to the map, call setMap();
    map.setCenter(marker.position);
    smoothZoom(map, 14, map.getZoom());
    marker.setMap(map);
      },
      presence: function(p) {
          // handle presence
          var action = p.action; // Can be join, leave, state-change or timeout
          var channelName = p.channel; // The channel for which the message belongs
          var occupancy = p.occupancy; // No. of users connected with the channel
          var state = p.state; // User State
          var channelGroup = p.subscription; //  The channel group or wildcard subscription match (if exists)
          var publishTime = p.timestamp; // Publish timetoken
          var timetoken = p.timetoken;  // Current timetoken
          var uuid = p.uuid; // UUIDs of users who are connected with the channel
      },
      status: function(s) {
          var affectedChannelGroups = s.affectedChannelGroups;
          var affectedChannels = s.affectedChannels;
          var category = s.category;
          var operation = s.operation;
      }
  });
  pubnub.subscribe({
      channels: ['ch1'],
  });
      var map;
      function initMap() {
        map = new google.maps.Map(document.getElementById('map'), {
          center: {lat: latitude, lng: longitude},
          zoom: 8
        });
      }
    </script>
    <script src="https://maps.googleapis.com/maps/api/js?key=AIzaSyBLuWQHjBa9SMVVDyyqxqTpR2ZwnxwcbGE&callback=initMap"
    async defer></script>
  </body>
</html>

This part of the code is responsible for rendering our map on the HTML page.

<style>
  /* Always set the map height explicitly to define the size of the div
   * element that contains the map. */
  #map {
    height: 100%;
  }
  /* Optional: Makes the sample page fill the window. */
  html, body {
    height: 100%;
    margin: 0;
    padding: 0;
  }
</style>

Just a little below it, we add the div with the map id to tell the page where to render the map:

<div id="map"></div>

Here we simply import the PubNub JS SDK to enable PubNub data streaming for our GPS data:

<script src="https://cdn.pubnub.com/sdk/javascript/pubnub.4.23.0.js"></script>

We must also import the Google Maps API with this script tag:

<script src="https://maps.googleapis.com/maps/api/js?key=YOURAPIKEY&callback=initMap"async defer></script>

NOTE: The rest of the code is encapsulated within one script tag, so don’t be alarmed if we jump around in explaining this final part of the code.

In order to stream our data, instantiate a PubNub instance:

var pubnub = new PubNub({
    subscribeKey: "YOUR SUBSCRIBE KEY",
    ssl: true
  });

Then we instantiate a PubNub listener with the following code.

pubnub.addListener({
      message: function(m) {
          // handle message
          var channelName = m.channel; // The channel for which the message belongs
          var channelGroup = m.subscription; // The channel group or wildcard subscription match (if exists)
          var pubTT = m.timetoken; // Publish timetoken
          var publisher = m.publisher; //The Publisher
          
          var msg = m.message; // The Payload
          //extract and save the longitude and latitude data from your incoming PubNub message
          longitude = msg.longitude;
          latitude = msg.latitude;
          
        //Create a new Google Maps instance with updated GPS coordinates
      var myLatlng = new google.maps.LatLng(latitude, longitude);
      //Create a marker instance with the coordinates
      var marker = new google.maps.Marker({
          position: myLatlng,
          title:"PubNub GPS"
      });
      
      //center the map with the marker position
      map.setCenter(marker.position);
      //Optional: create a zooming animation when the gps changes coordinates
      smoothZoom(map, 14, map.getZoom());
      // To add the marker to the map, call setMap();
      marker.setMap(map);
      },
      presence: function(p) {
          // handle presence
          var action = p.action; // Can be join, leave, state-change or timeout
          var channelName = p.channel; // The channel for which the message belongs
          var occupancy = p.occupancy; // No. of users connected with the channel
          var state = p.state; // User State
          var channelGroup = p.subscription; //  The channel group or wildcard subscription match (if exists)
          var publishTime = p.timestamp; // Publish timetoken
          var timetoken = p.timetoken;  // Current timetoken
          var uuid = p.uuid; // UUIDs of users who are connected with the channel
      },
      status: function(s) {
          var affectedChannelGroups = s.affectedChannelGroups;
          var affectedChannels = s.affectedChannels;
          var category = s.category;
          var operation = s.operation;
      }
  });

In order to avoid syntax errors, place a subscriber instance right below the listener.

pubnub.subscribe({
      channels: ['YOUR CHANNEL NAME'],
  });

As you can see, we open up incoming messages with the following line of code.

var msg = m.message; // The Payload

And then extract the variables we desire based on the sent JSON.

longitude = msg.longitude;
latitude = msg.latitude;

We then format the data variables in accordance to a Google Maps object.

var myLatlng = new google.maps.LatLng(latitude, longitude);

To set a Google marker on our GPS coordinates we create a Google Maps marker object.

var marker = new google.maps.Marker({
          position: myLatlng,
          title:"Title of Marker"
      });

Then add the marker to your Google Maps object by calling setMap().

marker.setMap(map);

Of course, it would be nice to center our map on the marker so we can actually see it, so we center it on the marker's position.

map.setCenter(marker.position);

This is optional, but if you want to add a smooth zooming animation every time you locate a marker, call a smoothZoom function like so.

smoothZoom(map, 14, map.getZoom());

And implement the smoothZoom function somewhere.

function smoothZoom (map, max, cnt) {
      if (cnt >= max) {
          return;
      }
      else {
          z = google.maps.event.addListener(map, 'zoom_changed', function(event){
              google.maps.event.removeListener(z);
              smoothZoom(map, max, cnt + 1);
          });
          setTimeout(function(){map.setZoom(cnt)}, 80); // 80ms is what I found to work well on my system -- it might not work well on all systems
      }
  } 

Lastly we’ll need to initialize the map so we write:

var map;
     function initMap() {
       map = new google.maps.Map(document.getElementById('map'), {
         center: {lat: latitude, lng: longitude},
         zoom: 8
       });
     }

And set the initial values of your latitude and longitude variables to wherever you want.

var longitude = 30.5;
var latitude = 50.5;

And that’s it!

Fetch the values over Google Map

open frontend.html

image

2.2 - Working with BME680 Sensors

Getting Started with BME680 Sensors

image

Hardware requirements:

  • Jetson Nano: 2GB Model ($59)
  • A 5V 4Amp charger
  • 128GB SD card
  • BME680 sensors

Software requirements:

  • Jetson SD card image from NVIDIA
  • Etcher software installed on your system

You can run RedisTimeSeries directly on an IoT edge device. Follow the steps below to build a RedisTimeSeries Docker image on the Jetson Nano:

Verifying Docker version

SSH into the Jetson Nano (70.167.220.160 in this setup) and verify the Docker version:

pico@pico1:~$ docker version
Client: Docker Engine - Community
 Version:           20.10.3
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        48d30b5
 Built:             Fri Jan 29 14:33:34 2021
 OS/Arch:           linux/arm64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.6
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8728dd2
  Built:            Fri Apr  9 22:43:42 2021
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 nvidia:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Verifying if Sensor is detected

Run i2cdetect on I2C bus 1; a correctly wired BME680 normally appears at address 0x76 or 0x77.

i2cdetect -r -y 1
    0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- --

Building Docker Image for RedisTimeSeries for Jetson Nano

git clone --recursive https://github.com/RedisTimeSeries/RedisTimeSeries.git
cd RedisTimeSeries
docker build -t ajeetraina/redistimeseries-jetson . -f Dockerfile.jetson.edge

Running RedisTimeSeries

docker run -dit -p 6379:6379 ajeetraina/redistimeseries-jetson

Verifying if RedisTimeSeries Module is loaded

redis-cli
127.0.0.1:6379> info modules
# Modules
module:name=timeseries,ver=999999,api=1,filters=0,usedby=[],using=[],options=[]
127.0.0.1:6379>

Clone the repository

$ git clone https://github.com/redis-developer/redis-datasets
$ cd redis-datasets/redistimeseries/realtime-sensor-jetson

Running Sensorload Script

sudo python3 sensorloader2.py --host localhost --port 6379
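For context, a sensor-loader script of this kind essentially reads the BME680 over I2C and appends each reading to RedisTimeSeries keys. The sketch below is an approximation only; it assumes pip3 install bme680 redis and the key names ts:temperature, ts:pressure, and ts:humidity used by the Grafana panels later, and the actual sensorloader2.py in the repository may differ.

# Approximate sketch of a BME680-to-RedisTimeSeries loader (not the repository's
# sensorloader2.py verbatim). Assumes pip3 install bme680 redis.
import time
import bme680
import redis

sensor = bme680.BME680(bme680.I2C_ADDR_PRIMARY)   # 0x76; use I2C_ADDR_SECONDARY for 0x77
r = redis.Redis(host="localhost", port=6379)

while True:
    if sensor.get_sensor_data():
        now = int(time.time() * 1000)             # RedisTimeSeries timestamps are in ms
        r.execute_command("TS.ADD", "ts:temperature", now, sensor.data.temperature)
        r.execute_command("TS.ADD", "ts:pressure", now, sensor.data.pressure)
        r.execute_command("TS.ADD", "ts:humidity", now, sensor.data.humidity)
    time.sleep(2)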

Running Grafana on Jetson Nano

docker run -d -e "GF_INSTALL_PLUGINS=redis-app" -p 3000:3000 grafana/grafana

There you go..

Point your browser to http://<IP_ADDRESS>:3000. Use “admin” as both the username and password to log in to the Grafana dashboard.

image

Click the Data Sources option on the left side of the Grafana dashboard to add a data source.

image

Under the Add data source option, search for Redis and the Redis data source will appear as shown below:

image image

Supply the name, Redis Enterprise Cloud database endpoint, and password, then click Save & Test.

Click Dashboards to import Redis and Redis Streaming. Click Import for both these options.

image

Click on Redis to see a fancy Grafana dashboard that shows the Redis database information:

image image

Finally, let’s create a sensor dashboard that shows temperature, pressure, and humidity. To start with temperature, first click on + on the left navigation window. Under Create option, Select Dashboard and click on the Add new panel button.

image

A new window will open showing the Query section. Select SensorT from the drop-down menu, choose RedisTimeSeries as the type, TS.GET as the command, and ts:temperature as the key.

image

Choose TS.GET as a command.

image

Type ts:temperature as the key.

image

Click Run followed by Save, as shown below:

image

Now you can save the dashboard by your preferred name:

image

Click Save. This will open up a sensor dashboard. You can click on Panel Title and select Edit.

image

Type Temperature and choose Gauge under Visualization.

image

Click Apply and you should be able to see the temperature dashboard as shown here:

image

Follow the same process for pressure (ts:pressure) and humidity (ts:humidity), and add them to the dashboard. You should then see the complete dashboard readings for temperature, humidity, and pressure. Looks amazing, doesn't it?

image
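If you want to sanity-check the values that Grafana displays, you can query the same keys directly from Python. A minimal sketch, assuming the redis package is installed and RedisTimeSeries is listening on localhost:6379 as above:

# Read back the latest sample for each key the dashboard uses.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
for key in ("ts:temperature", "ts:pressure", "ts:humidity"):
    timestamp, value = r.execute_command("TS.GET", key)   # returns [timestamp, value]
    print(key, timestamp, value)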

2.3 - Working with NVIDIA Jetson Nano

Getting Started with NVIDIA Jetson Nano

image

Table of Contents

  1. Intent
  2. Hardware
  3. Software
  4. Preparing Your Jetson Nano

Intent

Everything and anything you want to know about NVIDIA Jetson Nano, Docker & K3s support

Hardware

  • Jetson Nano
  • A Camera Module
  • A 5V 4Ampere Charger
  • 64GB SD card

Software

Preparing Your Jetson Nano

1. Flashing the Jetson SD Card Image

  • Unzip the SD card image
  • Insert SD card into your system.
  • Bring up Etcher tool and select the target SD card to which you want to flash the image.

My Image

sudo lshw -C system
pico2                       
    description: Computer
    product: NVIDIA Jetson Nano Developer Kit
    serial: 1422919082257
    width: 64 bits
    capabilities: smp cp15_barrier setend swp

CUDA Compiler and Libraries

ajeetraina@ajeetraina-desktop:~/meetup$ nvcc --version
-bash: nvcc: command not found
ajeetraina@ajeetraina-desktop:~/meetup$ export PATH=${PATH}:/usr/local/cuda/bin
ajeetraina@ajeetraina-desktop:~/meetup$ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/lib64
ajeetraina@ajeetraina-desktop:~/meetup$ source ~/.bashrc
ajeetraina@ajeetraina-desktop:~/meetup$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_21:14:42_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89

DeviceQuery

$ pwd

/usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
ajeetraina@ajeetraina-desktop:/usr/local/cuda/samples/1_Utilities/deviceQuery$ sudo make
/usr/local/cuda-10.2/bin/nvcc -ccbin g++ -I../../common/inc  -m64    -gencode arch=compute_30,code=sm_30 -gencode arch=compute_32,code=sm_32 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_72,code=sm_72 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o deviceQuery.o -c deviceQuery.cpp
/usr/local/cuda-10.2/bin/nvcc -ccbin g++   -m64      -gencode arch=compute_30,code=sm_30 -gencode arch=compute_32,code=sm_32 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_72,code=sm_72 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o deviceQuery deviceQuery.o
mkdir -p ../../bin/aarch64/linux/release
cp deviceQuery ../../bin/aarch64/linux/release
ajeetraina@ajeetraina-desktop:/usr/local/cuda/samples/1_Utilities/deviceQuery$ ls
Makefile  NsightEclipse.xml  deviceQuery  deviceQuery.cpp  deviceQuery.o  readme.txt
ajeetraina@ajeetraina-desktop:/usr/local/cuda/samples/1_Utilities/deviceQuery$ ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA Tegra X1"
  CUDA Driver Version / Runtime Version          10.2 / 10.2
  CUDA Capability Major/Minor version number:    5.3
  Total amount of global memory:                 3956 MBytes (4148387840 bytes)
  ( 1) Multiprocessors, (128) CUDA Cores/MP:     128 CUDA Cores
  GPU Max Clock rate:                            922 MHz (0.92 GHz)
  Memory Clock rate:                             13 Mhz
  Memory Bus Width:                              64-bit
  L2 Cache Size:                                 262144 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS

2. Verifying if it is shipped with Docker Binaries

ajeetraina@ajeetraina-desktop:~$ sudo docker version
[sudo] password for ajeetraina: 
Client:
 Version:           19.03.6
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        369ce74a3c
 Built:             Fri Feb 28 23:47:53 2020
 OS/Arch:           linux/arm64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.6
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       369ce74a3c
  Built:            Wed Feb 19 01:06:16 2020
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.3.3-0ubuntu1~18.04.2
  GitCommit:        
 runc:
  Version:          spec: 1.0.1-dev
  GitCommit:        
 docker-init:
  Version:          0.18.0
  GitCommit:       

3. Checking Docker runtime

Starting with JetPack 4.2, NVIDIA has introduced a container runtime with Docker integration. This custom runtime enables Docker containers to access the underlying GPUs available in the Jetson family.

pico@pico1:/tmp/docker-build$ sudo nvidia-docker version
NVIDIA Docker: 2.0.3
Client:
 Version:           19.03.6
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        369ce74a3c
 Built:             Fri Feb 28 23:47:53 2020
 OS/Arch:           linux/arm64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.6
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       369ce74a3c
  Built:            Wed Feb 19 01:06:16 2020
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.3.3-0ubuntu1~18.04.2
  GitCommit:        
 runc:
  Version:          spec: 1.0.1-dev
  GitCommit:        
 docker-init:
  Version:          0.18.0
  GitCommit:

Installing Docker Compose on NVIDIA Jetson Nano

Jetson Nano doesn't come with Docker Compose installed by default. You will need to install it first:

export DOCKER_COMPOSE_VERSION=1.27.4
sudo apt-get install libhdf5-dev
sudo apt-get install libssl-dev
sudo pip3 install docker-compose=="${DOCKER_COMPOSE_VERSION}"
apt install python3
apt install python3-pip
pip install docker-compose
docker-compose version
docker-compose version 1.26.2, build unknown
docker-py version: 4.3.1
CPython version: 3.6.9
OpenSSL version: OpenSSL 1.1.1  11 Sep 2018

Next, add default runtime for NVIDIA:

Edit /etc/docker/daemon.json

{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },

    "default-runtime": "nvidia",
    "node-generic-resources": [ "NVIDIA-GPU=0" ]
}

Restart the Docker Daemon

systemctl restart docker

Identify the Jetson board

pico@pico1:~$ git clone https://github.com/jetsonhacks/jetsonUtilities
Cloning into 'jetsonUtilities'...
remote: Enumerating objects: 123, done.
remote: Counting objects: 100% (39/39), done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 123 (delta 15), reused 23 (delta 8), pack-reused 84
Receiving objects: 100% (123/123), 32.87 KiB | 5.48 MiB/s, done.
Resolving deltas: 100% (49/49), done.
pico@pico1:~$ cd jetson
-bash: cd: jetson: No such file or directory
pico@pico1:~$ cd jetsonUtilities/
pico@pico1:~/jetsonUtilities$ ls
LICENSE  README.md  jetsonInfo.py  scripts

pico@pico1:~/jetsonUtilities$ python3 jetsonInfo.py 
NVIDIA Jetson Nano (Developer Kit Version)
 L4T 32.4.4 [ JetPack 4.4.1 ]
   Ubuntu 18.04.5 LTS
   Kernel Version: 4.9.140-tegra
 CUDA 10.2.89
   CUDA Architecture: 5.3
 OpenCV version: 4.1.1
   OpenCV Cuda: NO
 CUDNN: 8.0.0.180
 TensorRT: 7.1.3.0
 Vision Works: 1.6.0.501
 VPI: 4.4.1-b50
 Vulcan: 1.2.70

Install the latest version of CUDA

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/sbsa/cuda-ubuntu1804.pin
sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.3.1/local_installers/cuda-repo-ubuntu1804-11-3-local_11.3.1-465.19.01-1_arm64.deb
sudo dpkg -i cuda-repo-ubuntu1804-11-3-local_11.3.1-465.19.01-1_arm64.deb
sudo apt-key add /var/cuda-repo-ubuntu1804-11-3-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda

Verify Docker runtime

docker info | grep runtime
 Runtimes: nvidia runc io.containerd.runc.v2 io.containerd.runtime.v1.linux

Testing GPU Support

We’ll use the deviceQuery NVIDIA test application (included in L4T) to check that we can access the GPU in the cluster. First, we’ll create a Docker image with the appropriate software and run it directly with Docker, then run it using containerd’s ctr, and finally run it on the Kubernetes cluster itself.

Running deviceQuery on Docker with GPU support

Create a directory

mkdir test
cd test

Copy the sample files

Copy the demos where deviceQuery is located to the working directory where the Docker image will be created:

cp -R /usr/local/cuda/samples .

Create a Dockerfile

FROM nvcr.io/nvidia/l4t-base:r32.5.0
RUN apt-get update && apt-get install -y --no-install-recommends make g++
COPY ./samples /tmp/samples
WORKDIR /tmp/samples/1_Utilities/deviceQuery
RUN make clean && make
CMD ["./deviceQuery"]

Build the image, then run it with the NVIDIA runtime:

sudo docker build -t ajeetraina/jetson_devicequery . -f Dockerfile
pico@pico2:~/test$ sudo docker run --rm --runtime nvidia ajeetraina/jetson_devicequery:latest
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA Tegra X1"
  CUDA Driver Version / Runtime Version          10.2 / 10.2
  CUDA Capability Major/Minor version number:    5.3
  Total amount of global memory:                 3963 MBytes (4155383808 bytes)
  ( 1) Multiprocessors, (128) CUDA Cores/MP:     128 CUDA Cores
  GPU Max Clock rate:                            922 MHz (0.92 GHz)
  Memory Clock rate:                             13 Mhz
  Memory Bus Width:                              64-bit
  L2 Cache Size:                                 262144 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS

Test 2: Running deviceQuery on containerd with GPU support

Since K3s uses containerd as its runtime by default, we will use the ctr command line to pull and run the deviceQuery image we pushed, directly on containerd, with this script:

#!/bin/bash
IMAGE=ajeetraina/jetson_devicequery:latest
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
ctr i pull docker.io/${IMAGE}
ctr run --rm --gpus 0 --tty docker.io/${IMAGE} deviceQuery

Execute the script

sudo sh usectr.sh
docker.io/ajeetraina/jetson_devicequery:latest:                                   resolved       |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:dfeaad4046f78871d3852e5d5fb8fa848038c57c34c6554c6c97a00ba120d550: done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:4438ebff930fb27930d802553e13457783ca8a597e917c030aea07f8ff6645c0:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:b1cdeb9e69c95684d703cf96688ed2b333a235d5b33f0843663ff15f62576bd4:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:bf60857fb4964a3e3ce57a900bbe47cd1683587d6c89ecbce4af63f98df600aa:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:0aac5305d11a81f47ed76d9663a8d80d2963b61c643acfce0515f0be56f5e301:    done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:37987db6d6570035e25e713f41e665a6d471d25056bb56b4310ed1cb1d79a100:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:f0f57d03cad8f8d69b1addf90907b031ccb253b5a9fc5a11db83c51aa311cbfb:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:08c23323368d4fde5347276d543c500e1ff9b712024ca3f85172018e9440d8b0:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:04da93b342eb651d6b94c74a934a3290697573a907fa0a06067b538095601745:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:f84ceb6e8887e9b3b454813459ee97c2b9730869dbd37d4cca4051958b7a5a36:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:93752947af53e2a3225e145b359b956df36e20521b5dde0fe6d3fb92fd2a9538:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:b235194751dee33624fc154603f7e25ecdfbb02538fb7d55fa796df9afa95fee:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:905b1329c1d473c79650e33b882d980b3522fb72e58ecd3456c4fb3c4039fe92:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:8931d5ba88b488c949f77f990e8f9198b153ceb71afd0369eac9c39beb38f2d6:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:cfb2938be99fb944fe31165bdf44532a5536865ce53b12eb7758d1e2a51ad33e:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:606a67bb8db9a1111022bdc6406442e11c1a66653136c5c777114bf67b61038a:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:2f37138d1c8ac71d9314a0f8996ba69579bbc6ee6a57440557bc7eef486ed292:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:9ce7ce1da17c2b8149573d1d73132f61a73083f0cd498eeb7a0da404fd77db14:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:a36863a728ec9221c83c745f40511946dfd63beca0f10c9afcc774ef7a98e420:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:86dd6e5994e2c15f2783d8d543327479ccee7f3b20023dd962fdb9a211071e16:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:f5299db1221c515de91f59d84b79f2f839f9c94a5d0cc7fad04134e23ec9b88a:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:15a5811e1a7bf377cbac066b04e0b36b4c1a41ca63eb3c67c17b734577f6beea:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:cb893097de39451407d7167b312ec56eaea80baa041877af8239dbe833fa044b:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 81.4s                                                                    total:  305.5  (3.8 MiB/s)                                       
unpacking linux/arm64/v8 sha256:dfeaad4046f78871d3852e5d5fb8fa848038c57c34c6554c6c97a00ba120d550...

done

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA Tegra X1"
  CUDA Driver Version / Runtime Version          10.2 / 10.2
  CUDA Capability Major/Minor version number:    5.3
  Total amount of global memory:                 3963 MBytes (4155383808 bytes)
  ( 1) Multiprocessors, (128) CUDA Cores/MP:     128 CUDA Cores
  GPU Max Clock rate:                            922 MHz (0.92 GHz)
  Memory Clock rate:                             13 Mhz
  Memory Bus Width:                              64-bit
  L2 Cache Size:                                 262144 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS


Test 3: Running deviceQuery on the K3s cluster

pico@pico2:~/test$ cat pod_deviceQuery.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: devicequery
spec:
  containers:
    - name: nvidia
      image: ajeetraina/jetson_devicequery:latest

      command: [ "./deviceQuery" ]
pico@pico2:~/test$
sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml kubectl apply -f ./pod_deviceQuery.yaml
pod/devicequery created
pico@pico2:~/test$ sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml kubectl describe pod devicequery
Name:         devicequery
Namespace:    default
Priority:     0
Node:         pico4/192.168.1.163
Start Time:   Sun, 13 Jun 2021 09:16:44 -0700
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  nvidia:
    Container ID:  
    Image:         ajeetraina/jetson_devicequery:latest
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      ./deviceQuery
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mcrmv (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-mcrmv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  78s   default-scheduler  Successfully assigned default/devicequery to pico4
  Normal  Pulling    77s   kubelet            Pulling image "ajeetraina/jetson_devicequery:latest"
pico@pico2:~/test$
cat pod_deviceQuery_jetson4.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: devicequery
spec:
  nodeName: pico4
  containers:
    - name: nvidia
      image: ajeetraina/jetson_devicequery:latest
      command: [ "./deviceQuery" ]
pico@pico2:~/test$ 
pico@pico2:~/test$ sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml kubectl describe pod devicequery
Name:         devicequery
Namespace:    default
Priority:     0
Node:         pico4/192.168.1.163
Start Time:   Sun, 13 Jun 2021 09:16:44 -0700
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.42.1.3
IPs:
  IP:  10.42.1.3
Containers:
  nvidia:
    Container ID:  containerd://fd502d6bfa55e2f80b2d50bc262e6d6543fd8d09e9708bb78ecec0b2e09621c3
    Image:         ajeetraina/jetson_devicequery:latest
    Image ID:      docker.io/ajeetraina/jetson_devicequery@sha256:dfeaad4046f78871d3852e5d5fb8fa848038c57c34c6554c6c97a00ba120d550
    Port:          <none>
    Host Port:     <none>
    Command:
      ./deviceQuery
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 13 Jun 2021 09:21:50 -0700
      Finished:     Sun, 13 Jun 2021 09:21:50 -0700
    Ready:          False
    Restart Count:  5
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mcrmv (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-mcrmv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  7m51s                  default-scheduler  Successfully assigned default/devicequery to pico4
  Normal   Pulled     5m45s                  kubelet            Successfully pulled image "ajeetraina/jetson_devicequery:latest" in 2m5.699757621s
  Normal   Pulled     5m43s                  kubelet            Successfully pulled image "ajeetraina/jetson_devicequery:latest" in 1.000839703s
  Normal   Pulled     5m29s                  kubelet            Successfully pulled image "ajeetraina/jetson_devicequery:latest" in 967.072951ms
  Normal   Pulled     4m59s                  kubelet            Successfully pulled image "ajeetraina/jetson_devicequery:latest" in 1.025604394s
  Normal   Created    4m59s (x4 over 5m45s)  kubelet            Created container nvidia
  Normal   Started    4m59s (x4 over 5m45s)  kubelet            Started container nvidia
  Warning  BackOff    4m20s (x8 over 5m42s)  kubelet            Back-off restarting failed container
  Normal   Pulling    2m47s (x6 over 7m51s)  kubelet            Pulling image "ajeetraina/jetson_devicequery:latest"
pico@pico2:~/test$ sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml kubectl apply -f ./pod_deviceQuery_jetson4.yaml
pod/devicequery configured

2.4 - Working with NVIDIA Jetson AGX Xavier

Getting Started with NVIDIA Jetson AGX Xavier

image

Getting Started with Jetson AGX Xavier

The NVIDIA® Jetson AGX Xavier™ Developer Kit provides a full-featured development platform designed to get you up and running quickly. The included carrier board exposes many standard hardware interfaces, enabling a highly flexible and extensible platform for rapid prototyping. The NVIDIA JetPack SDK supports both your developer kit and your host development platform.

The developer kit includes:

  • Jetson AGX Xavier module with thermal solution
  • Reference carrier board
  • 65W power supply with AC cord
  • Type C to Type A cable (USB 3.1 Gen 2)
  • Type C to Type A adapter (USB 3.1 Gen 1)

Verifying the Docker installation

xavier@xavier-desktop:~$ sudo docker version
[sudo] password for xavier: 
Client: Docker Engine - Community
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:54:37 2021
 OS/Arch:           linux/arm64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:52:46 2021
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
xavier@xavier-desktop:~$ 

Identify the Jetson board

git clone https://github.com/jetsonhacks/jetsonUtilities
cd jetsonUtilities
python3 jetsonInfo.py 
NVIDIA Jetson AGX Xavier [16GB]
 L4T 32.3.1 [ JetPack 4.3 ]
   Ubuntu 18.04.3 LTS
   Kernel Version: 4.9.140-tegra
 CUDA NOT_INSTALLED
   CUDA Architecture: 7.2
 OpenCV version: NOT_INSTALLED
   OpenCV Cuda: NO
 CUDNN: NOT_INSTALLED
 TensorRT: NOT_INSTALLED
 Vision Works: NOT_INSTALLED
 VPI: NOT_INSTALLED
 Vulcan: 1.1.70
xavier@xavier-desktop:~/jetsonUtilities$ 

Installing Jtop

sudo -H pip install -U jetson-stats
Collecting jetson-stats
  Downloading https://files.pythonhosted.org/packages/70/57/ce1aec95dd442d94c3bd47fcda77d16a3cf55850fa073ce8c3d6d162ae0b/jetson-stats-3.1.1.tar.gz (85kB)
    100% |████████████████████████████████| 92kB 623kB/s 
Building wheels for collected packages: jetson-stats
  Running setup.py bdist_wheel for jetson-stats ... done
  Stored in directory: /root/.cache/pip/wheels/5e/b0/97/f0f8222e76879bf04b6e8c248154e3bb970e0a2aa6d12388f9
Successfully built jetson-stats
Installing collected packages: jetson-stats
Successfully installed jetson-stats-3.1.1
xavier@xavier-desktop:~/jetsonUtilities$ 
$ jtop
I can't access jetson_stats.service.
Please logout or reboot this board.

image

image

image

xavier@xavier-desktop:~$ jetson_release -v
 - NVIDIA Jetson AGX Xavier [16GB]
   * Jetpack 4.3 [L4T 32.3.1]
   * NV Power Mode: MODE_15W - Type: 2
   * jetson_stats.service: active
 - Board info:
   * Type: AGX Xavier [16GB]
   * SOC Family: tegra194 - ID:25
   * Module: P2888-0001 - Board: P2822-0000
   * Code Name: galen
   * CUDA GPU architecture (ARCH_BIN): 7.2
   * Serial Number: 1420921055981
 - Libraries:
   * CUDA: NOT_INSTALLED
   * cuDNN: NOT_INSTALLED
   * TensorRT: NOT_INSTALLED
   * Visionworks: NOT_INSTALLED
   * OpenCV: NOT_INSTALLED compiled CUDA: NO
   * VPI: NOT_INSTALLED
   * Vulkan: 1.1.70
 - jetson-stats:
   * Version 3.1.1
   * Works on Python 2.7.17
xavier@xavier-desktop:~$ 

Jetson variables

export | grep JETSON
declare -x JETSON_BOARD="P2822-0000"
declare -x JETSON_BOARDIDS=""
declare -x JETSON_CHIP_ID="25"
declare -x JETSON_CODENAME="galen"
declare -x JETSON_CUDA="NOT_INSTALLED"
declare -x JETSON_CUDA_ARCH_BIN="7.2"
declare -x JETSON_CUDNN="NOT_INSTALLED"
declare -x JETSON_JETPACK="4.3"
declare -x JETSON_L4T="32.3.1"
declare -x JETSON_L4T_RELEASE="32"
declare -x JETSON_L4T_REVISION="3.1"
declare -x JETSON_MACHINE="NVIDIA Jetson AGX Xavier [16GB]"
declare -x JETSON_MODULE="P2888-0001"
declare -x JETSON_OPENCV="NOT_INSTALLED"
declare -x JETSON_OPENCV_CUDA="NO"
declare -x JETSON_SERIAL_NUMBER="1420921055981"
declare -x JETSON_SOC="tegra194"
declare -x JETSON_TENSORRT="NOT_INSTALLED"
declare -x JETSON_TYPE="AGX Xavier [16GB]"
declare -x JETSON_VISIONWORKS="NOT_INSTALLED"
declare -x JETSON_VPI="NOT_INSTALLED"
declare -x JETSON_VULKAN_INFO="1.1.70"
xavier@xavier-desktop:~$ 

Installing nvidia-docker

sudo apt install nvidia-docker2

Install nvidia-container-runtime package:

sudo apt install nvidia-container-runtime

Update docker daemon

sudo vim /etc/docker/daemon.json

Ensure that /etc/docker/daemon.json contains the path to nvidia-container-runtime:

{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

Make docker update the path:

sudo pkill -SIGHUP dockerd

Running the DeepStream Container

  • DeepStream 5.1 provides Docker containers for both dGPU and Jetson platforms.
  • These containers provide a convenient, out-of-the-box way to deploy DeepStream applications by packaging all associated dependencies within the container.
  • The associated Docker images are hosted on the NVIDIA container registry in the NGC web portal at https://ngc.nvidia.com.
  • They use the nvidia-docker package, which enables access to the required GPU resources from containers.

Please Note:

The dGPU container is called deepstream and the Jetson container is called deepstream-l4t.

  • Unlike the container in DeepStream 3.0, the dGPU DeepStream 5.1 container supports DeepStream application development within the container.
  • It contains the same build tools and development libraries as the DeepStream 5.1 SDK.
  • In a typical scenario, you build, execute and debug a DeepStream application within the DeepStream container.
  • Once your application is ready, you can use the DeepStream 5.1 container as a base image to create your own Docker container holding your application files (binaries, libraries, models, configuration files, etc.).

image

This section describes the features supported by the DeepStream Docker container for the dGPU and Jetson platforms.

To run the container:

Allow external applications to connect to the host’s X display:

xhost +

Run the Docker container using nvidia-docker (use the desired container tag in the command line below):

sudo docker run -it --rm --net=host --runtime nvidia  -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.1 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples

Option explained:

-it means run in interactive mode

--rm will delete the container when finished

-v is the mounting option, used to mount the host's X11 display into the container filesystem

5.1-21.02-samples is the tag for the image; 21.02 refers to the version of the container for that release; samples refers to the container variant

You can mount additional directories (using the -v option) as required, containing configuration files and models, for access by applications executed from within the container.

Additionally, the --cap-add SYSLOG option needs to be included to enable use of the nvds_logger functionality inside the container.

See /opt/nvidia/deepstream/deepstream-5.1/README inside the container for deepstream-app usage information. To access a CSI camera from Docker, add -v /tmp/argus_socket:/tmp/argus_socket to the docker command above. For a USB camera, add the --device /dev/video<N> argument.

sudo docker ps
CONTAINER ID   IMAGE                                             COMMAND       CREATED          STATUS         PORTS     NAMES
ad38d8f4612d   nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples   "/bin/bash"   10 seconds ago   Up 9 seconds             romantic_hopper
xavier@xavier-desktop:~$ 
root@xavier-desktop:/opt/nvidia/deepstream/deepstream-5.1# tree -L 2
.
|-- LICENSE.txt
|-- LicenseAgreement.pdf
|-- README
|-- bin
|   |-- deepstream-app
|   |-- deepstream-appsrc-test
|   |-- deepstream-audio
|   |-- deepstream-dewarper-app
|   |-- deepstream-gst-metadata-app
|   |-- deepstream-image-decode-app
|   |-- deepstream-image-meta-test
|   |-- deepstream-infer-tensor-meta-app
|   |-- deepstream-mrcnn-app
|   |-- deepstream-nvdsanalytics-test
|   |-- deepstream-nvof-app
|   |-- deepstream-opencv-test
|   |-- deepstream-perf-demo
|   |-- deepstream-segmentation-app
|   |-- deepstream-test1-app
|   |-- deepstream-test2-app
|   |-- deepstream-test3-app
|   |-- deepstream-test4-app
|   |-- deepstream-test5-app
|   |-- deepstream-testsr-app
|   |-- deepstream-transfer-learning-app
|   `-- deepstream-user-metadata-app
|-- doc
|   `-- nvidia-tegra
|-- install.sh
|-- lib
|   |-- gst-plugins
|   |-- libiothub_client.so
|   |-- libiothub_client.so.1 -> libiothub_client.so
|   |-- libnvbufsurface.so -> /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so
|   |-- libnvbufsurftransform.so -> /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so
|   |-- libnvds_amqp_proto.so
|   |-- libnvds_audiotransform.so
|   |-- libnvds_azure_edge_proto.so
|   |-- libnvds_azure_proto.so
|   |-- libnvds_batch_jpegenc.so
|   |-- libnvds_csvparser.so
|   |-- libnvds_dewarper.so
|   |-- libnvds_dsanalytics.so
|   |-- libnvds_infer.so
|   |-- libnvds_infer_custom_parser_audio.so
|   |-- libnvds_infer_server.so
|   |-- libnvds_infercustomparser.so
|   |-- libnvds_inferutils.so
|   |-- libnvds_kafka_proto.so
|   |-- libnvds_logger.so
|   |-- libnvds_meta.so
|   |-- libnvds_mot_iou.so
|   |-- libnvds_mot_klt.so
|   |-- libnvds_msgbroker.so
|   |-- libnvds_msgconv.so -> libnvds_msgconv.so.1.0.0
|   |-- libnvds_msgconv.so.1.0.0
|   |-- libnvds_msgconv_audio.so -> libnvds_msgconv_audio.so.1.0.0
|   |-- libnvds_msgconv_audio.so.1.0.0
|   |-- libnvds_nvdcf.so
|   |-- libnvds_nvtxhelper.so
|   |-- libnvds_opticalflow_dgpu.so
|   |-- libnvds_opticalflow_jetson.so
|   |-- libnvds_osd.so
|   |-- libnvds_redis_proto.so
|   |-- libnvds_utils.so
|   |-- libnvdsgst_helper.so
|   |-- libnvdsgst_inferbase.so
|   |-- libnvdsgst_meta.so
|   |-- libnvdsgst_smartrecord.so
|   |-- libnvdsgst_tensor.so
|   |-- libtritonserver.so
|   |-- pyds.so
|   |-- setup.py
|   `-- triton_backends
|-- samples
|   |-- configs
|   |-- models
|   |-- prepare_classification_test_video.sh
|   |-- prepare_ds_trtis_model_repo.sh
|   |-- streams
|   `-- trtis_model_repo
|-- sources
|   |-- SONYCAudioClassifier
|   |-- apps
|   |-- gst-plugins
|   |-- includes
|   |-- libs
|   |-- objectDetector_FasterRCNN
|   |-- objectDetector_SSD
|   |-- objectDetector_Yolo
|   `-- tools
|-- uninstall.sh
`-- version
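
Once inside the container, the reference application can be pointed at one of the bundled sample configurations. A minimal sketch is shown below; the config filename is illustrative and may differ between releases, so check samples/configs/deepstream-app/ and the README for the files shipped with your version:

# Inside the container: run the reference app against a sample configuration
cd /opt/nvidia/deepstream/deepstream-5.1
./bin/deepstream-app -c samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt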

3 - Contribution Guidelines

How to contribute to the docs

These basic sample guidelines assume that your Docsy site is deployed using Netlify and your files are stored in GitHub. You can use the guidelines “as is” or adapt them with your own instructions: for example, other deployment options, information about your doc project’s file structure, project-specific review guidelines, versioning guidelines, or any other information your users might find useful when updating your site. Kubeflow has a great example.

Don’t forget to link to your own doc repo rather than our example site! Also make sure users can find these guidelines from your doc repo README: either add them there and link to them from this page, add them here and link to them from the README, or include them in both locations.

We use Hugo to format and generate our website, the Docsy theme for styling and site structure, and Netlify to manage the deployment of the site. Hugo is an open-source static site generator that provides us with templates, content organisation in a standard directory structure, and a website generation engine. You write the pages in Markdown (or HTML if you want), and Hugo wraps them up into a website.

All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. Consult GitHub Help for more information on using pull requests.

Quick start with Netlify

Here’s a quick guide to updating the docs. It assumes you’re familiar with the GitHub workflow and you’re happy to use the automated preview of your doc updates:

  1. Fork the Goldydocs repo on GitHub.
  2. Make your changes and send a pull request (PR).
  3. If you’re not yet ready for a review, add “WIP” to the PR name to indicate it’s a work in progress. (Don’t add the Hugo property “draft = true” to the page front matter, because that prevents the auto-deployment of the content preview described in the next point.)
  4. Wait for the automated PR workflow to do some checks. When it’s ready, you should see a comment like this: deploy/netlify — Deploy preview ready!
  5. Click Details to the right of “Deploy preview ready” to see a preview of your updates.
  6. Continue updating your doc and pushing your changes until you’re happy with the content.
  7. When you’re ready for a review, add a comment to the PR, and remove any “WIP” markers.

Updating a single page

If you’ve just spotted something you’d like to change while using the docs, Docsy has a shortcut for you:

  1. Click Edit this page in the top right hand corner of the page.
  2. If you don’t already have an up-to-date fork of the project repo, you are prompted to get one - click Fork this repository and propose changes or Update your Fork to get an up-to-date version of the project to edit. The appropriate page in your fork is displayed in edit mode.
  3. Follow the rest of the Quick start with Netlify process above to make, preview, and propose your changes.

Previewing your changes locally

If you want to run your own local Hugo server to preview your changes as you work:

  1. Follow the instructions in Getting started to install Hugo and any other tools you need. You’ll need at least Hugo version 0.45 (we recommend using the most recent available version), and it must be the extended version, which supports SCSS.

  2. Fork the Goldydocs repo into your own project, then create a local copy using git clone. Don’t forget to use --recurse-submodules or you won’t pull down some of the code you need to generate a working site.

    git clone --recurse-submodules --depth 1 https://github.com/google/docsy-example.git
    
  3. Run hugo server in the site root directory. By default your site will be available at http://localhost:1313/. Now that you’re serving your site locally, Hugo will watch for changes to the content and automatically refresh your site.

  4. Continue with the usual GitHub workflow to edit files, commit them, push the changes up to your fork, and create a pull request; the command sketch below shows one way to do this.
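
The command-line side of that workflow might look like the following sketch (YOUR-USERNAME and the branch name are placeholders for your own fork and working branch):

# Clone your fork with the theme submodule, preview locally, then push a branch for a PR
git clone --recurse-submodules https://github.com/YOUR-USERNAME/docsy-example.git
cd docsy-example
git checkout -b my-docs-update    # work on a topic branch
hugo server                       # preview at http://localhost:1313/, Ctrl+C to stop
git add .
git commit -m "Update the docs"
git push origin my-docs-update    # then open a pull request from your fork on GitHub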

Creating an issue

If you’ve found a problem in the docs, but you’re not sure how to fix it yourself, please create an issue in the Goldydocs repo. You can also create an issue about a specific page by clicking the Create Issue button in the top right hand corner of the page.

Useful resources