Writing a (Super Simple) Linux Module

Okay, so today I need the CR3 value for a given PID. The Linux kernel does not expose this information to userspace, so I'll be building my own module to take care of it. The code is available on my Gitlab.

Two awesome references I’ll be using for this task:

Making the basic module

We need to build the actual module that is loadable into the kernel. This can be done by following the tutorial from the first reference. The main module source needs the proper headers (so the module macros are available), the proper calls to the macros (description, author, license), and finally the definitions of which functions are the module's entry and exit points.


#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

MODULE_DESCRIPTION("Kernel module to translate given PID to CR3 physical address");
MODULE_AUTHOR("Chang Hyun Park");
MODULE_LICENSE("GPL"); /* a license macro is required, or the kernel is marked tainted */

static int __init pid2cr3_init(void)
{
	printk("%s\n", __func__);
	return 0;
}

static void __exit pid2cr3_exit(void)
{
	printk("%s\n", __func__);
}

module_init(pid2cr3_init);
module_exit(pid2cr3_exit);

The Makefile and the Kbuild files are identical to those in the first reference, except for the filename: the source above is pid2cr3.c, so add the object as pid2cr3.o.
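For reference, a minimal out-of-tree Kbuild/Makefile pair might look roughly like this (a sketch following the usual kernel build convention; the first reference has the authoritative versions, and KDIR may need adjusting for your system):

```
# Kbuild
obj-m := pid2cr3.o
```

```
# Makefile
KDIR ?= /lib/modules/$(shell uname -r)/build

all:
	$(MAKE) -C $(KDIR) M=$(CURDIR) modules

clean:
	$(MAKE) -C $(KDIR) M=$(CURDIR) clean
```

Running make then produces pid2cr3.ko, which can be loaded with insmod.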

The full code can be found on my Gitlab.

Actually doing the intended work

Now, given a PID we want to retrieve the CR3 Value! There are a few ways to pass on the PID value to the module.

  1. Register the module to listen on a system call [here]
  2. Pass on parameters when loading the module (and dynamically change the parameter at runtime) [here]
  3. Open a pseudo-file to communicate with the module (sysfs, procfs).

We take the final approach, creating a new sysfs directory and file which we program to behave differently for reads and writes.

On a write to the file, we receive the PID from userspace; on a read, we return the CR3 for the previously received PID.
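The handler pair might be sketched roughly as below. This is illustrative, not the exact code from the repository: the attribute name, the use of find_vpid/pid_task to resolve the task, and reporting __pa(task->mm->pgd) as the CR3 value are my assumptions; see the linked Gitlab for the real implementation.

```c
#include <linux/kobject.h>
#include <linux/sysfs.h>
#include <linux/sched.h>
#include <linux/pid.h>
#include <linux/mm.h>

static int stored_pid;

/* write: remember the PID given from userspace */
static ssize_t pid_store(struct kobject *kobj, struct kobj_attribute *attr,
                         const char *buf, size_t count)
{
	if (kstrtoint(buf, 10, &stored_pid) < 0)
		return -EINVAL;
	return count;
}

/* read: return the CR3 (page-table root) of the stored PID */
static ssize_t pid_show(struct kobject *kobj, struct kobj_attribute *attr,
                        char *buf)
{
	struct task_struct *task = pid_task(find_vpid(stored_pid), PIDTYPE_PID);

	if (!task || !task->mm)
		return -ESRCH;
	/* CR3 holds the physical address of the page-table root; the kernel
	 * keeps the corresponding kernel-virtual address in mm->pgd. */
	return sprintf(buf, "%lx\n", (unsigned long)__pa(task->mm->pgd));
}

static struct kobj_attribute pid_attr = __ATTR(pid, 0664, pid_show, pid_store);

/* In the init function, something like:
 *   kobj = kobject_create_and_add("pid2cr3", kernel_kobj);
 *   return sysfs_create_file(kobj, &pid_attr.attr);
 * creates /sys/kernel/pid2cr3/pid.
 */
```

With the module loaded, usage would then look like `echo 1234 > /sys/kernel/pid2cr3/pid` followed by `cat /sys/kernel/pid2cr3/pid` (paths per the hypothetical names above).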

The code is available here.

That’s it!


Setting up Openlava

Follow instructions in https://gumdaeng.com/2015/06/10/openlava-installation/


# tar -xvf openlava-3.0.tar.gz
# cd openlava-3.0
# ./configure
# make
# make install
# cd config
# for f in `ls | grep -v in | grep -v Makefile` ; do cp $f /opt/openlava-3.0/etc; done
# cp /opt/openlava-3.0/etc/openlava /etc/init.d
# cp /opt/openlava-3.0/etc/openlava.* /etc/profile.d
# scp [main_server_IP]:/opt/openlava-3.0/etc/lsb.* .
# scp [main_server_IP]:/opt/openlava-3.0/etc/lsf.* .


Trusty Specifics

Now, if you are on Trusty (14.04), the following command will start openlava:

# service openlava start

To make openlava start at boot:

# update-rc.d openlava defaults

Xenial (16.04) or Bionic (18.04) specific

To start:

# /etc/init.d/openlava start

Unfortunately, I haven’t found out how to make it start automatically.

Debugging Guest kernel

Getting the vmlinux file of the Ubuntu kernels

I used the answer on superuser.com.

# Add ppas
echo "deb http://ddebs.ubuntu.com $(lsb_release -cs)-updates main restricted universe multiverse
deb http://ddebs.ubuntu.com $(lsb_release -cs)-security main restricted universe multiverse
deb http://ddebs.ubuntu.com $(lsb_release -cs)-proposed main restricted universe multiverse" | \
sudo tee -a /etc/apt/sources.list.d/ddebs.list
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 428D7C01
# Get the actual vmlinux file
sudo apt-get install linux-image-$(uname -r)-dbgsym
file /usr/lib/debug/boot/vmlinux-$(uname -r) # This is the vmlinux file

Setup qemu to export a serial port for debugging

I referred to the following answer on stackoverflow.

Using this, my resulting qemu command is as follows:

sudo qemu-system-x86_64 -m 512M \
-kernel /boot/vmlinuz-$(uname -r) \
-drive file=ubuntu_14.04.img,index=0,media=disk,format=raw \
-append "root=/dev/sda rw console=ttyS0 kgdboc=ttyS0,115200" \
-netdev user,id=hostnet0,hostfwd=tcp::5556-:22 \
-device virtio-net-pci,netdev=hostnet0,id=net0,bus=pci.0,addr=0x3 \
-serial tcp::1234,server,nowait \
--nographic

Notice the kgdboc=ttyS0,115200 on the kernel command line, and the -serial tcp::1234,server,nowait in the qemu arguments.

Start the debugging process

Start up the VM using the command above, then SSH into it:

ssh localhost -p 5556

Trigger a debugger break on the VM via the magic SysRq key:

sudo bash -c "echo g > /proc/sysrq-trigger"

Run GDB on your host

$ gdb /usr/lib/debug/boot/vmlinux-$(uname -r)
(gdb) target remote localhost:1234

And your gdb should be connected to the VM’s kernel.

Note: the vmlinux downloaded from the ddebs PPA does not include the source code, so gdb will complain about source lines not being found.
The source path should be substitutable, but I haven't gotten this to work yet. (will update)
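In principle, if you have a matching kernel source tree checked out, gdb's substitute-path mechanism should remap the build-time paths to it (both paths below are illustrative placeholders for your own setup):

```
(gdb) set substitute-path /build/linux-SomeBuildDir /home/you/src/linux
(gdb) directory /home/you/src/linux
```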

Making Simple KVM Image

This tutorial is aimed at quickly making a KVM image.

First off, there are lots of guides available, but I wanted to quickly and effortlessly make a KVM image using debootstrap. Some guides on the net use nbd for reasons I cannot comprehend; this post uses the more sensible loopback interface to connect to the qemu disk and debootstrap it. This guide is based on the tutorial linked above, but has my personal customizations.

# Create VM foundations
$ qemu-img create -f raw ubuntu_16.04.img 20G
$ mkfs.ext4 ubuntu_16.04.img
$ mkdir mnt
$ sudo mount -o loop ubuntu_16.04.img mnt
$ sudo debootstrap --arch amd64 --include=ssh,vim xenial mnt

# Setup created environment
$ sudo chroot mnt
$ passwd # and change root password
$ adduser username
$ usermod -aG sudo username
$ exit # leave the chroot

$ sudo umount mnt

$ sudo qemu-system-x86_64 -m 512M \
-kernel /boot/vmlinuz-$(uname -r) \
-drive file=ubuntu_16.04.img,index=0,media=disk,format=raw \
-append "root=/dev/sda rw console=ttyS0" \
-netdev user,id=hostnet0,hostfwd=tcp::5556-:22 \
-device virtio-net-pci,netdev=hostnet0,id=net0,bus=pci.0,addr=0x3 \
--nographic

# Setup DHCP for your network in your VM.
# In the **VM console** (provided by qemu)
# login as root
$ ifconfig -a
# find the interface name other than lo
# In my case it was ens3, it could be eth0, etc.

$ vi /etc/network/interfaces
# add the following two lines:
auto ens3
iface ens3 inet dhcp

$ ifup ens3
# Now you have network connectivity!

# connect via SSH to your VM
# Now, back from your **host**
$ ssh localhost -p 5556 


Setup HTTPS on your dockerized Gitlab

So I use Sameersbn’s dockerized Gitlab.

I’m not sure if this is the best way to do this, but it works, so I’m sharing it, and also as a reference for myself for future deployments.

BTW I’m working off Ubuntu 14.04 (Trusty)

First off, get certbot:

$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-nginx

Also, install nginx if you haven’t.

$ sudo certbot --nginx -d example.com -d www.example.com

Be sure to change example.com to your domain name.

Fill in the prompts, and at the end your certificate path will be printed out.


The previous command starts up the Ubuntu-installed nginx, so you may want to turn it off using:

$ service nginx stop

Now, I use the following script to generate the certificate for Gitlab:

#!/bin/bash

# This script updates the certificate for Gitlab with
# the (hopefully) renewed Let's Encrypt Certificate
# We need to do this because the Let's Encrypt Certificates
# are only valid for 3 months at a time, and Synology (tries to) renews it
# every month
# Refer to https://chpresearch.wordpress.com/2016/10/04/synology-gitlab-setup-ssl-over-lets-encrypt/

# Adjust these two paths to your own setup
PATH_TO_SYNOLOGY_CERTIFICATE=/etc/letsencrypt/live/your.domain
PATH_TO_STORE_GITLAB_CERTIFICATE=/your/docker/gitlab/root/certs

# Optionally take the certificate source directory as an argument
if [[ $# -eq 1 ]]; then
    PATH_TO_SYNOLOGY_CERTIFICATE=$1
fi

echo "Generating gitlab certificates to ${PATH_TO_STORE_GITLAB_CERTIFICATE}"

FILES_REQUIRED=(fullchain.pem cert.pem privkey.pem)

for filename in "${FILES_REQUIRED[@]}"; do
    if [ ! -e "${PATH_TO_SYNOLOGY_CERTIFICATE}/$filename" ]; then
        echo "${PATH_TO_SYNOLOGY_CERTIFICATE}/$filename does not exist!"
        exit 1
    fi
done

TMP_FILENAME=$(mktemp -u)

echo "===Generating gitlab.crt==="
cat "${PATH_TO_SYNOLOGY_CERTIFICATE}/fullchain.pem" > "${TMP_FILENAME}.crt"

echo "===Generating gitlab.key==="
cat "${PATH_TO_SYNOLOGY_CERTIFICATE}/privkey.pem" > "${TMP_FILENAME}.key"

echo "===Backing up existing Cert & Key==="
if [[ -f ${PATH_TO_STORE_GITLAB_CERTIFICATE}/gitlab.crt ]]; then
    cp "${PATH_TO_STORE_GITLAB_CERTIFICATE}/gitlab.crt" "${PATH_TO_STORE_GITLAB_CERTIFICATE}/gitlab.crt.bak"
fi
if [[ -f ${PATH_TO_STORE_GITLAB_CERTIFICATE}/gitlab.key ]]; then
    cp "${PATH_TO_STORE_GITLAB_CERTIFICATE}/gitlab.key" "${PATH_TO_STORE_GITLAB_CERTIFICATE}/gitlab.key.bak"
fi

echo "===Overwriting Existing Cert & Key==="
mv "${TMP_FILENAME}.crt" "${PATH_TO_STORE_GITLAB_CERTIFICATE}/gitlab.crt"
mv "${TMP_FILENAME}.key" "${PATH_TO_STORE_GITLAB_CERTIFICATE}/gitlab.key"

echo "Done"

Run the script (I put the script in the /your/docker/gitlab/root/certs directory and executed it from there).

Also, you need to generate the DHE parameters. Go to the /your/docker/gitlab/root/gitlab/certs directory, and execute the following command:

$ openssl dhparam -dsaparam -out dhparam.pem 2048

(The -dsaparam option is explained at http://security.stackexchange.com/a/95184.)

Now set up your docker-compose to use HTTPS and not run with self-signed certificates. You should be up and running.
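Since the certificates are only valid for three months, you may also want to automate renewal. Something along these lines in root's crontab could work (an untested sketch; the script path assumes the update script from above was saved as update-gitlab-cert.sh):

```
# Attempt renewal twice a month at 3am, then refresh Gitlab's copy of the cert
0 3 1,15 * * certbot renew --quiet && /your/docker/gitlab/root/certs/update-gitlab-cert.sh
```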

Setting Up Infiniband

So we got some new IB cards, and we needed to set them up on our servers. The servers run Ubuntu 14.04 for this post, but I believe 16.04 should be similar.

Install the cards physically.

To check if your hardware found your cards, enter the following:

lspci -v | grep Mellanox
02:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]

You should get something like the above.

Install Infiniband Driver

Refer to the release notes of version v4_2-1_2_0_0. The reference has a list of packages that are required before installation. I found out afterwards that the installer seems to check these dependencies and install them itself, but it doesn't hurt to prepare your system beforehand.

$ apt-get install perl dpkg autotools-dev autoconf libtool automake1.10 automake m4 dkms debhelper tcl tcl8.4 chrpath swig graphviz tcl-dev tcl8.4-dev tk-dev tk8.4-dev bison flex dpatch zlib1g-dev curl libcurl4-gnutls-dev python-libxml2 libvirt-bin libvirt0 libnl-3-dev libglib2.0-dev libgfortran3 automake m4 pkg-config libnuma-dev logrotate ethtool lsof

For the libnuma and libnl-dev packages listed there, the corresponding Ubuntu package names are libnuma-dev and libnl-3-dev.

Afterwards, checkout the ConnectX-3 Pro VPI Single and Dual QSFP+ Port Adapter Card User Manual for more help with installing.


Now, go ahead and install the Mellanox OFED. Download the installer from the Mellanox website under Products->Software->Infiniband VPI drivers. Go for Mellanox OFED Linux and at the bottom click the Download button. If nothing shows up and you are using Chrome, make sure to enable unsafe scripts.

Download the tgz file (or iso if you prefer iso) for your distribution. Untar the file.

Install the Mellanox OFED by executing the following script:

./mlnxofedinstall [OPTIONS if applicable. I didn't need any]

Afterwards, I rebooted the system.

Assigning IP addresses to each IB

Now, InfiniBand supports IPoIB, which allows InfiniBand interfaces to be addressed with IP addresses. For this part I referred to the following post. Just to make sure IPoIB is installed, run the following command:

lsmod | grep ipoib

There should be an ib_ipoib module loaded.

Now check your IB interface names via the ifconfig -a command, then set your IB IP addresses in the /etc/network/interfaces file.

auto ib0
iface ib0 inet static
    address 10.0.0.1    # pick an address on your IB subnet (illustrative)
    netmask 255.255.255.0

And bring your network device (ib0) up via:

ifup ib0

Setting up the Subnet Manager (if you're not using an IB switch)

Now, if you check the status of your IB cards via ibstat, you may find that your card states are State: Initializing. The Intel Developer Zone has a guide, Troubleshooting InfiniBand connection issues using OFED tools. Under its state section, I found that the INIT state corresponds to a "HW initialized, but subnet manager unavailable" situation.

If you are in a situation like mine, where you do not have an InfiniBand switch and are just connecting nodes directly, you need to start up a software subnet manager. Another Intel guide showed me how to start the subnet manager:

/etc/init.d/opensmd start

Afterwards, ibstat showed State: Active.

I tried a few tests, such as ib_send_bw, to check the performance between two nodes, and found that my system was working as expected.

Also, to set up the subnet manager to start at boot, execute the following command:

update-rc.d opensmd defaults

Setup cluster NIS Client

There are some good manuals around, but the key things are:

  1. when installing via apt-get make sure to specify the domain name of the NIS master
    1. If things don’t work well, use apt-get purge nis to remove and reinstall nis, to set up your nis domain.
  2. setup the ypserver in /etc/yp.conf
    1. ypserver [full address]
  3. add nis to the appropriate lines in /etc/nsswitch.conf
    1. passwd, group, shadow, hosts
  4. Finally use the yptest to check if things are working.
  5. Xenial has an issue where the rpcbind service does not start up properly. I used the following command to set rpcbind to start at bootup.
      1. # systemctl add-wants multi-user.target rpcbind.service
      2. This solution was found on askubuntu
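For step 3, the relevant lines in /etc/nsswitch.conf end up looking something like this (the compat/files/dns entries are Ubuntu's defaults; only the trailing nis is added):

```
passwd:         compat nis
group:          compat nis
shadow:         compat nis
hosts:          files dns nis
```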

In our setup, make sure not to add the kaist.ac.kr suffix!!

The steps are:

  1. apt install nis
  2. check with yptest
  3. If it works, open /etc/nsswitch.conf and add nis to the end of passwd group shadow and hosts
  4. if you’re on Xenial (16.04) follow step 5 above.