Run GNOME on NanoPi M4 (RK3399)

Issues

/usr/bin/gnome-shell: symbol lookup error: /lib/libmutter-4.so.0: undefined symbol: gbm_bo_get_offset

Arch Linux
See: https://archlinuxarm.org/platforms/armv8/generic

Mali GPU user space drivers
Drivers: https://github.com/heiher/libmali-rk3399

git clone https://github.com/heiher/libmali-rk3399
cd libmali-rk3399
 
# Build gbm wrapper
make
 
# Install to system
sudo cp conf/mali.conf /etc/ld.so.conf.d/
sudo cp -rd lib /usr/lib/mali
 
# Update ld.so cache
sudo ldconfig
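
To check that the Mali libraries are now resolved from /usr/lib/mali, query the linker cache:

ldconfig -p | grep mali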

Clutter
Force Clutter to use GLESv2

# /etc/clutter-1.0/settings.ini
[Environment]
Drivers=gles2
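
Alternatively, if the installed Clutter version supports it, the driver can be forced per session via an environment variable:

export CLUTTER_DRIVER=gles2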

Fix the crash when maximizing a window
Patch: https://gitlab.gnome.org/GNOME/mutter/merge_requests/515

GPU device access permission

# /etc/udev/rules.d/50-mali.rules
KERNEL=="mali0", MODE="0666"
 
# Apply immediately, without waiting for udev to re-trigger
sudo chmod 0666 /dev/mali0

Start gdm.
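
For example, on a systemd-based system:

sudo systemctl enable --now gdm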

Over!

Run QEMU with hardware virtualization on macOS

To run other operating systems in a virtual machine on macOS without resorting to commercial software, the open-source QEMU is a good choice. QEMU's feature coverage is quite complete, but beyond features, what virtual machine users care about most is performance. One piece of good news is that macOS 10.10+ includes a hardware virtualization framework, Hypervisor.framework; another is that QEMU already supports it, via the hvf accelerator.

Requirements
1. macOS 10.10+
2. MacPorts

Issues
Users who have already tried it may have noticed that QEMU with the hvf accelerator and multiple cores enabled is problematic. Indeed, QEMU runs fine under hvf with a single core, but when multiple cores are specified with -smp, the VM very often hangs before hardware initialization even completes.
The good news is that this has been fixed too. The cause was that the hvf accelerator code did not account for all hvf vcpus executing instructions in parallel once the VM starts, including the emulated I/O of hardware initialization; several CPUs initializing the same hardware at the same time obviously cannot work.

Patch (submitted upstream, under review, hopefully merged soon)

Install QEMU

cd ~
git clone https://github.com/hevz/macports
sudo vim /opt/local/etc/macports/sources.conf
# Add local repositories
file:///Users/[YOUR USER NAME]/macports
rsync://rsync.macports.org/macports/release/tarballs/ports.tar [default]
cd ~/macports
portindex
sudo port install qemu
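
Once the build finishes, verify the installed binary; on recent QEMU versions, -accel help lists the available accelerators:

qemu-system-x86_64 --version
qemu-system-x86_64 -accel help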

Run Arch Linux
1. Download the Arch Linux installation ISO image.
2. Create a virtual machine disk image.
3. Install the new system.
4. Boot the installed system.

mkdir ~/system/images
qemu-img create -f qcow2 ~/system/images/arch.qcow2 40G
 
qemu-system-x86_64 -no-user-config -nodefaults -show-cursor \
    -M pc-q35-3.1,accel=hvf,usb=off,vmport=off \
    -cpu host -smp 4,sockets=1,cores=2,threads=2 -m 4096 \
    -realtime mlock=off -rtc base=utc,driftfix=slew \
    -drive file=~/system/images/arch.qcow2,if=none,format=qcow2,id=disk0 \
    -device virtio-blk-pci,bus=pcie.0,addr=0x1,drive=disk0 \
    -netdev user,id=net0,hostfwd=tcp::2200-:22 \
    -device virtio-net-pci,netdev=net0,bus=pcie.0,addr=0x2 \
    -device virtio-keyboard-pci,bus=pcie.0,addr=0x3 \
    -device virtio-tablet-pci,bus=pcie.0,addr=0x4 \
    -device virtio-vga,bus=pcie.0,addr=0x5 \
    -cdrom ~/archlinux-2019.01.01-x86_64.iso -boot d

After the installation completes, remove the last line of the qemu-system-x86_64 command (-cdrom ... -boot d) and run it again to boot the new system.

Over!

Transparent proxy per application on Linux

This is a per-application transparent proxy for Linux based on iptables and the network classifier cgroup; it is more general than proxychains.

Build and install tproxy

git clone --recursive https://github.com/heiher/hev-socks5-tproxy
cd hev-socks5-tproxy
make
 
sudo cp bin/hev-socks5-tproxy /usr/local/bin/
sudo cp conf/main.ini /usr/local/etc/hev-socks5-tproxy.conf

Install systemd service

# /etc/systemd/system/hev-socks5-tproxy.service
[Unit]
Description=HevSocks5TProxy
 
[Service]
User=nobody
ExecStart=/usr/local/bin/hev-socks5-tproxy /usr/local/etc/hev-socks5-tproxy.conf
KillMode=process
Restart=always
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
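
Then reload systemd and enable the service:

sudo systemctl daemon-reload
sudo systemctl enable --now hev-socks5-tproxy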

Install tproxy wrapper

#!/bin/bash
# /usr/local/bin/tproxy
 
NET_CLS_DIR="/sys/fs/cgroup/net_cls/tproxy"
NET_CLS_ID=88
TP_TCP_PORT=1088
TP_DNS_PORT=5300
 
if [ ! -e "${NET_CLS_DIR}" ]; then
	sudo sh -c "mkdir -p ${NET_CLS_DIR}; \
		chmod 0666 ${NET_CLS_DIR}/tasks; \
		echo ${NET_CLS_ID} > ${NET_CLS_DIR}/net_cls.classid; \
		iptables -t nat -D OUTPUT -p tcp \
			-m cgroup --cgroup ${NET_CLS_ID} \
			-j REDIRECT --to-ports ${TP_TCP_PORT}; \
		iptables -t nat -D OUTPUT -p udp --dport 53 \
			-m cgroup --cgroup ${NET_CLS_ID} \
			-j REDIRECT --to-ports ${TP_DNS_PORT}; \
		iptables -t nat -I OUTPUT -p tcp \
			-m cgroup --cgroup ${NET_CLS_ID} \
			-j REDIRECT --to-ports ${TP_TCP_PORT}; \
		iptables -t nat -I OUTPUT -p udp --dport 53 \
			-m cgroup --cgroup ${NET_CLS_ID} \
			-j REDIRECT --to-ports ${TP_DNS_PORT};" 2> /dev/null
fi
 
echo $$ > ${NET_CLS_DIR}/tasks
 
exec "$@"

How to use?

tproxy COMMAND
 
# For example
tproxy wget http://xxx.com/xxx
tproxy makepkg

Over!

Dump VDSO via GDB

gdb /bin/bash
(gdb) b main
(gdb) r
(gdb) info proc map
Mapped address spaces:
          Start Addr           End Addr       Size     Offset objfile
      ...
      0x7ffff7fd1000     0x7ffff7fd3000     0x2000        0x0 [vdso]
      ...
(gdb) dump binary memory /tmp/vdso.so 0x7ffff7fd1000 0x7ffff7fd3000
(gdb) quit
file /tmp/vdso.so
/tmp/vdso.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=1a3fac101214fe3ecfb3788d4f8af3018f1f2667, stripped
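
The dump is an ordinary shared object and can be inspected with the usual tools, e.g. listing its dynamic symbols:

objdump -T /tmp/vdso.so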

Over!

Disable IBus embed preedit text via dbus-send

dbus-send --bus="`ibus address`" --print-reply \
    --dest=org.freedesktop.IBus \
    /org/freedesktop/IBus \
    org.freedesktop.DBus.Properties.Set \
    string:org.freedesktop.IBus string:EmbedPreeditText variant:boolean:false
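
To confirm the change, the property can be read back via the standard org.freedesktop.DBus.Properties interface:

dbus-send --bus="`ibus address`" --print-reply \
    --dest=org.freedesktop.IBus \
    /org/freedesktop/IBus \
    org.freedesktop.DBus.Properties.Get \
    string:org.freedesktop.IBus string:EmbedPreeditText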

Over!

Linux simple source policy routing

Dual network connections
eth0:
Address: 192.168.0.2
NetMask: 255.255.255.0
Gateway: 192.168.0.1

eth1:
Address: 192.168.1.2
NetMask: 255.255.255.0
Gateway: 192.168.1.1

Routing policy
* Transmit via eth0 when source address is 192.168.0.2
* Transmit via eth1 when source address is 192.168.1.2

Commands

# eth0
ifconfig eth0 192.168.0.2/24 up
ip rule add from 192.168.0.2 table 251
ip route add default via 192.168.0.1 dev eth0 src 192.168.0.2 table 251
 
# eth1
ifconfig eth1 192.168.1.2/24 up
ip rule add from 192.168.1.2 table 252
ip route add default via 192.168.1.1 dev eth1 src 192.168.1.2 table 252
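
To verify which path a given source address takes:

ip route get 8.8.8.8 from 192.168.0.2
ip route get 8.8.8.8 from 192.168.1.2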

Over!

Configuring Bonding Manually via Sysfs

	Starting with version 3.0.0, Channel Bonding may be configured
via the sysfs interface.  This interface allows dynamic configuration
of all bonds in the system without unloading the module.  It also
allows for adding and removing bonds at runtime.  Ifenslave is no
longer required, though it is still supported.

	Use of the sysfs interface allows you to use multiple bonds
with different configurations without having to reload the module.
It also allows you to use multiple, differently configured bonds when
bonding is compiled into the kernel.

	You must have the sysfs filesystem mounted to configure
bonding this way.  The examples in this document assume that you
are using the standard mount point for sysfs, e.g. /sys.  If your
sysfs filesystem is mounted elsewhere, you will need to adjust the
example paths accordingly.

Creating and Destroying Bonds
-----------------------------
To add a new bond foo:
# echo +foo > /sys/class/net/bonding_masters

To remove an existing bond bar:
# echo -bar > /sys/class/net/bonding_masters

To show all existing bonds:
# cat /sys/class/net/bonding_masters

NOTE: due to 4K size limitation of sysfs files, this list may be
truncated if you have more than a few hundred bonds.  This is unlikely
to occur under normal operating conditions.

Adding and Removing Slaves
--------------------------
	Interfaces may be enslaved to a bond using the file
/sys/class/net/<bond>/bonding/slaves.  The semantics for this file
are the same as for the bonding_masters file.

To enslave interface eth0 to bond bond0:
# ifconfig bond0 up
# echo +eth0 > /sys/class/net/bond0/bonding/slaves

To free slave eth0 from bond bond0:
# echo -eth0 > /sys/class/net/bond0/bonding/slaves

	When an interface is enslaved to a bond, symlinks between the
two are created in the sysfs filesystem.  In this case, you would get
/sys/class/net/bond0/slave_eth0 pointing to /sys/class/net/eth0, and
/sys/class/net/eth0/master pointing to /sys/class/net/bond0.

	This means that you can tell quickly whether or not an
interface is enslaved by looking for the master symlink.  Thus:
# echo -eth0 > /sys/class/net/eth0/master/bonding/slaves
will free eth0 from whatever bond it is enslaved to, regardless of
the name of the bond interface.

Changing a Bond's Configuration
-------------------------------
	Each bond may be configured individually by manipulating the
files located in /sys/class/net/<bond>/bonding.

	The names of these files correspond directly with the command-
line parameters described elsewhere in this file, and, with the
exception of arp_ip_target, they accept the same values.  To see the
current setting, simply cat the appropriate file.

	A few examples will be given here; for specific usage
guidelines for each parameter, see the appropriate section in this
document.

To configure bond0 for balance-alb mode:
# ifconfig bond0 down
# echo 6 > /sys/class/net/bond0/bonding/mode
 - or -
# echo balance-alb > /sys/class/net/bond0/bonding/mode
	NOTE: The bond interface must be down before the mode can be
changed.

To enable MII monitoring on bond0 with a 1 second interval:
# echo 1000 > /sys/class/net/bond0/bonding/miimon
	NOTE: If ARP monitoring is enabled, it will be disabled when MII
monitoring is enabled, and vice versa.

To add ARP targets:
# echo +192.168.0.100 > /sys/class/net/bond0/bonding/arp_ip_target
# echo +192.168.0.101 > /sys/class/net/bond0/bonding/arp_ip_target
	NOTE:  up to 16 target addresses may be specified.

To remove an ARP target:
# echo -192.168.0.100 > /sys/class/net/bond0/bonding/arp_ip_target

To configure the interval between learning packet transmits:
# echo 12 > /sys/class/net/bond0/bonding/lp_interval
	NOTE: lp_interval is the number of seconds between instances where
the bonding driver sends learning packets to each slave's peer switch.
The default interval is 1 second.

Example Configuration
---------------------
	We begin with the same example that is shown in section 3.3,
executed with sysfs, and without using ifenslave.

	To make a simple bond of two e100 devices (presumed to be eth0
and eth1), and have it persist across reboots, edit the appropriate
file (/etc/init.d/boot.local or /etc/rc.d/rc.local), and add the
following:

modprobe bonding
modprobe e100
echo balance-alb > /sys/class/net/bond0/bonding/mode
ifconfig bond0 192.168.1.1 netmask 255.255.255.0 up
echo 100 > /sys/class/net/bond0/bonding/miimon
echo +eth0 > /sys/class/net/bond0/bonding/slaves
echo +eth1 > /sys/class/net/bond0/bonding/slaves

	To add a second bond, with two e1000 interfaces in
active-backup mode, using ARP monitoring, add the following lines to
your init script:

modprobe e1000
echo +bond1 > /sys/class/net/bonding_masters
echo active-backup > /sys/class/net/bond1/bonding/mode
ifconfig bond1 192.168.2.1 netmask 255.255.255.0 up
echo +192.168.2.100 > /sys/class/net/bond1/bonding/arp_ip_target
echo 2000 > /sys/class/net/bond1/bonding/arp_interval
echo +eth2 > /sys/class/net/bond1/bonding/slaves
echo +eth3 > /sys/class/net/bond1/bonding/slaves
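
The current state of any bond can be checked at runtime through procfs:

# cat /proc/net/bonding/bond0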

See also: https://www.kernel.org/doc/Documentation/networking/bonding.txt
Over!

A simple, lightweight coroutine implementation for Linux

HevTaskSystem is a simple, lightweight multi-task system (i.e. coroutines) for Linux, with I/O event polling based on epoll.

Coroutines are in fact an old technique. They have a few notable characteristics:
1. A coroutine system runs multiple tasks concurrently, generally driven by a single operating system thread.
2. Per-task metadata costs less than an operating system thread, and task switching overhead is low.
3. Scheduling is cooperative: a task voluntarily gives up execution, and another task is then scheduled to run.

Like asynchronous, non-blocking I/O models, coroutines are well suited to handling huge numbers of concurrent I/O tasks, but without shattering the business logic the way asynchronous code does.

Basic information
HevTaskSystem currently exposes four classes: HevTaskSystem, HevTask, HevTaskPoll, and HevMemoryAllocator.
HevTaskSystem is the coroutine task system; it manages and schedules many HevTask instances, driven by a single operating system thread. Multiple threads can drive separate task systems in parallel.
HevTask is a coroutine task; an instance can be added to a HevTaskSystem to be scheduled and run.
HevTaskPoll provides a poll-compatible call.
HevMemoryAllocator is a memory allocator interface with two backend implementations:
* The plain allocator, equivalent to malloc/free.
* The slice allocator, which caches a bounded number of allocations per size class, with LRU cache replacement.

Public API
TaskSystem – hev-task-system.h
Task – hev-task.h
TaskPoll – hev-task-poll.h
MemoryAllocator – hev-memory-allocator.h

Simple example
This example runs a coroutine task system on the main thread and creates two independent coroutine tasks, each running its own entry function at a different priority. Each entry function loops twice, printing a string and then yielding to release the CPU and trigger a scheduling switch.

/*
 ============================================================================
 Name        : simple.c
 Author      : Heiher <r@hev.cc>
 Copyright   : Copyright (c) 2017 everyone.
 Description :
 ============================================================================
 */
 
#include <stdio.h>
 
#include <hev-task.h>
#include <hev-task-system.h>
 
static void
task_entry1 (void *data)
{
        int i;
 
        for (i=0; i<2; i++) {
                printf ("hello 1\n");
                /* Voluntarily give up execution; yield triggers rescheduling to pick another task */
                hev_task_yield (HEV_TASK_YIELD);
        }
}
 
static void
task_entry2 (void *data)
{
        int i;
 
        for (i=0; i<2; i++) {
                printf ("hello 2\n");
                hev_task_yield (HEV_TASK_YIELD);
        }
}
 
int
main (int argc, char *argv[])
{
        HevTask *task;
 
        /* Initialize the task system on the current thread */
        hev_task_system_init ();
 
        /* Create a new task with the default stack size */
        task = hev_task_new (-1);
        /* Set the task's priority to 1 */
        hev_task_set_priority (task, 1);
        /* Put the task into the current thread's task system; its entry
         * function is task_entry1. Note that task_entry1 is not called as
         * soon as hev_task_run returns; it runs once the task is scheduled.
         */
        hev_task_run (task, task_entry1, NULL);
 
        task = hev_task_new (-1);
        hev_task_set_priority (task, 0);
        hev_task_run (task, task_entry2, NULL);
 
        /* Run the task system on the current thread; returns when no task is schedulable */
        hev_task_system_run ();
 
        /* Destroy the task system on the current thread */
        hev_task_system_fini ();
 
        return 0;
}
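
Assuming the library has been built from the hev-task-system repository, the example can be compiled roughly as follows; the include and library paths are placeholders, not the project's actual install layout:

# A sketch: adjust the paths to wherever the hev-task-system headers
# and static library ended up after building.
gcc -I/path/to/hev-task-system/include simple.c \
    /path/to/hev-task-system/libhev-task-system.a -o simple
./simple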

Over!

Using Loongson EJTAG hardware breakpoints to speed up Linux ptrace watch

The MIPS standard defines a set of hardware watchpoint interfaces in Coprocessor 0 (CP0). For whatever reason, Loongson 3 series processors do not implement them, so on Linux on this architecture gdb watch can only use software watchpoints, which are really, really slow. :(

The good news is that Loongson 3 series processors do implement MIPS EJTAG, compatible with revision 2.61 of the specification. So can the MIPS EJTAG hardware breakpoint facility be used to implement Linux ptrace watchpoints? The answer is yes. Let's walk through how.

First, we need to change the exception handler in the BIOS to reroute EJTAG debug exceptions into the Linux kernel, because the MIPS EJTAG exception handler entry address is fixed at 0xbfc00480:

         /* Debug exception */
         .align  7           /* bfc00480 */
         .set    push
         .set    noreorder
         .set    arch=mips64r2
         dmtc0   k0, CP0_DESAVE
         mfc0    k0, CP0_DEBUG
         andi    k0, 0x2
         beqz    k0, 1f
          mfc0   k0, CP0_STATUS
         andi    k0, 0x18
         bnez    k0, 2f
          nop
 1:
         mfc0    k0, CP0_EBASE
         ins     k0, zero, 0, 12
         addiu   k0, 0x480
         jr      k0
          dmfc0  k0, CP0_DESAVE
 2:
         la      k0, 0xdeadbeef
         dmtc0   k0, CP0_DEPC
         dmfc0   k0, CP0_DESAVE
         deret
         .set    pop

This handler does two things:
1. Routes exceptions triggered by an sdbbp instruction in user mode to address 0xdeadbeef.
2. Routes exceptions triggered by sdbbp in kernel mode, as well as non-sdbbp debug exceptions from any mode, to ebase+0x480.

Next, we also need to modify the kernel to:
1. Implement the EJTAG watch probe, install, read, and clear operations, plus a suitable debug exception handler.
2. Hook the Linux ptrace watch interface up to EJTAG watch.

See: https://github.com/heiher/linux-stable/commits/ejtag-watch-4.9
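
With the patched BIOS and kernel in place, gdb should be able to allocate hardware watchpoints through ptrace; a session looks something like this (program and variable names are hypothetical):

gdb ./test
(gdb) watch counter
Hardware watchpoint 1: counter
(gdb) run
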

Over!