Reordering on an Alpha processor

A very non-intuitive property of the Alpha processor is that it allows the following behavior:

Initially: p = &x, x = 1, y = 0

    Thread 1           Thread 2
  y = 1           |
  memoryBarrier   |    i = *p
  p = &y          |

Can result in: i = 0

This behavior means that the reader needs to perform a memory barrier in lazy-initialization idioms (e.g., double-checked locking), and it creates issues for synchronization-free immutable objects (e.g., ensuring that other threads see the correct values for the fields of a String object).
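
Below is a minimal C sketch of the same scenario (assuming __sync_synchronize() as the memory barrier primitive). The point is that the reader needs its own barrier between loading p and dereferencing it, because on Alpha the data dependency alone does not order the two loads:

    int x = 1, y = 0;
    int *p = &x;

    /* Thread 1 */
    void writer(void)
    {
        y = 1;
        __sync_synchronize();   /* memoryBarrier */
        p = &y;
    }

    /* Thread 2 */
    int reader(void)
    {
        int *q = p;
        __sync_synchronize();   /* reader-side barrier required on Alpha */
        return *q;              /* i = *p; without the barrier, i may be 0 */
    }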

Kourosh Gharachorloo wrote a note explaining how it can actually happen on an Alpha multiprocessor:
The anomalous behavior is currently only possible on a 21264-based system. And obviously you have to be using one of our multiprocessor servers. Finally, the chances that you actually see it are very low, yet it is possible.

Here is what has to happen for this behavior to show up. Assume T1 runs on P1 and T2 on P2. P2 has to be caching location y with value 0. P1 does y=1, which causes an “invalidate y” to be sent to P2. This invalidate goes into the incoming “probe queue” of P2; as you will see, the problem arises because this invalidate could theoretically sit in the probe queue without P2 doing an MB. The invalidate is acknowledged right away at this point (i.e., you don’t wait for it to actually invalidate the copy in P2’s cache before sending the acknowledgment). Therefore, P1 can go through its MB, and it proceeds to do the write to p. Now P2 proceeds to read p. The reply for read p is allowed to bypass the probe queue on P2 on its incoming path (this allows replies/data to get back to the 21264 quickly without needing to wait for previous incoming probes to be serviced). Now, P2 can dereference p to read the old value of y that is sitting in its cache (the inval to y in P2’s probe queue is still sitting there).

How does an MB on P2 fix this? The 21264 flushes its incoming probe queue (i.e., services any pending messages in there) at every MB. Hence, after the read of p, you do an MB, which pulls in the inval to y for sure. And you can no longer see the old cached value for y.

Even though the above scenario is theoretically possible, the chances of observing a problem due to it are extremely minute. The reason is that even if you set up the caching properly, P2 will likely have ample opportunity to service the messages (i.e., the inval) in its probe queue before it receives the data reply for “read p”. Nonetheless, if you get into a situation where you have placed many things in P2’s probe queue ahead of the inval to y, then it is possible that the reply to p comes back and bypasses this inval. It would be difficult to set up the scenario, though, and actually observe the anomaly.

The above addresses how current Alphas may violate what you have shown. Future Alphas may violate it due to other optimizations. One interesting optimization is value prediction.



Using Loongson EJTAG hardware breakpoints to speed up Linux ptrace watch

The MIPS standard defines a set of hardware watchpoint interfaces in coprocessor 0 (CP0). For whatever reason, the Loongson 3 series does not implement them, so on Linux for this architecture gdb watch can only fall back to software watchpoints, which are really, really slow. :(

The good news is that the Loongson 3 series does implement MIPS EJTAG (compatible with revision 2.61 of the spec). So can the EJTAG hardware breakpoint unit be used to implement Linux ptrace watchpoints? The answer is yes; let's look at how.

First, we need to modify the exception handler in the BIOS to re-route EJTAG debug exceptions into the Linux kernel, because the entry point of the MIPS EJTAG exception handler is fixed at 0xbfc00480:

         /* Debug exception */
         .align  7           /* bfc00480 */
         .set    push
         .set    noreorder
         .set    arch=mips64r2
         dmtc0   k0, CP0_DESAVE       /* save k0 */
         mfc0    k0, CP0_DEBUG
         andi    k0, 0x2              /* Debug.DBp: exception caused by sdbbp? */
         beqz    k0, 1f               /* not sdbbp -> kernel handler */
          mfc0   k0, CP0_STATUS
         andi    k0, 0x18             /* Status.KSU: taken from user mode? */
         bnez    k0, 2f               /* user-mode sdbbp -> 0xdeadbeef */
1:       mfc0    k0, CP0_EBASE
         ins     k0, zero, 0, 12
         addiu   k0, 0x480            /* kernel handler at ebase + 0x480 */
         jr      k0
          dmfc0  k0, CP0_DESAVE       /* restore k0 (delay slot) */
2:       la      k0, 0xdeadbeef
         dmtc0   k0, CP0_DEPC
         dmfc0   k0, CP0_DESAVE
         deret                        /* return to DEPC, i.e. 0xdeadbeef */
         .set    pop

This stub routes debug exceptions as follows:
1. An exception triggered by an sdbbp instruction in user mode is routed to address 0xdeadbeef.
2. An exception triggered by an sdbbp in kernel mode, or a non-sdbbp debug exception from any mode, is routed to ebase + 0x480.

Then, on the kernel side, we need to:
1. Implement the EJTAG watch operations (probe, install, read, clear, etc.) and a suitable debug exception handler.
2. Wire the Linux ptrace watch interface up to the EJTAG watch implementation; the userspace side it has to serve is sketched below.
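
As a rough reference for step 2, here is a hedged userspace sketch of the ptrace watch interface that the EJTAG-backed implementation has to serve. The request codes PTRACE_GET_WATCH_REGS / PTRACE_SET_WATCH_REGS and struct pt_watch_regs come from the MIPS <asm/ptrace.h>; the watchlo/watchhi bit usage follows the CP0 Watch register layout and is an assumption here, so treat it as illustrative rather than authoritative:

    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <asm/ptrace.h>   /* PTRACE_{GET,SET}_WATCH_REGS, struct pt_watch_regs */

    /* Ask the kernel to watch stores to 'addr' in the stopped tracee 'pid'. */
    static int set_write_watch(pid_t pid, unsigned long long addr)
    {
        struct pt_watch_regs wr;

        if (ptrace(PTRACE_GET_WATCH_REGS, pid, &wr, NULL) < 0)
            return -1;                       /* kernel has no watch support */
        if (wr.style != pt_watch_style_mips64 || wr.mips64.num_valid < 1)
            return -1;                       /* no usable watch register pair */

        wr.mips64.watchlo[0] = addr | 0x1;   /* bit 0 (W): trap on stores */
        wr.mips64.watchhi[0] = 0;            /* no address mask */
        return ptrace(PTRACE_SET_WATCH_REGS, pid, &wr, NULL);
    }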



Target range of MIPS J-type instructions

MIPS jump and branch instructions fall into three classes: PC-relative branches, PC-region jumps, and register-based absolute jumps. The PC-region jumps are the J-type instructions discussed here.

A J-type instruction has a 26-bit instruction index field; since instructions are 4-byte aligned, this covers a 256 MB (28-bit) range. So how is the target address of a J-type jump computed?

target PC = (bits of the delay-slot instruction's PC above the low 28 bits) || (26-bit instruction index << 2)


|: 265M 边界
j: j 指令位置
t: 可行的跳转目标位置
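
A small C sketch of the computation (function and variable names are made up for illustration):

    #include <stdint.h>

    /* Target of a MIPS j/jal located at address j_pc, with 26-bit index field index26. */
    uint64_t j_target(uint64_t j_pc, uint32_t index26)
    {
        uint64_t delay_slot_pc = j_pc + 4;                        /* PC of the delay slot   */
        uint64_t high = delay_slot_pc & ~(uint64_t)0x0FFFFFFF;    /* keep bits above bit 27 */
        return high | (((uint64_t)index26 & 0x03FFFFFF) << 2);    /* 28-bit region offset   */
    }

So a j/jal can only reach targets inside the 256 MB region that contains its delay slot.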


System V AMD64 ABI calling conventions

The calling convention of the System V AMD64 ABI is followed on Solaris, Linux, FreeBSD, Mac OS X, and other UNIX-like or POSIX-compliant operating systems. The first six integer or pointer arguments are passed in the registers RDI, RSI, RDX, RCX, R8, and R9, while XMM0 through XMM7 are used for floating-point arguments. For system calls, R10 is used instead of RCX. As in the Microsoft x64 calling convention, additional arguments are passed on the stack and the return value is stored in RAX.

Registers RBP, RBX, and R12 through R15 are callee-saved; all others must be saved by the caller if it wants their values preserved across the call.

Unlike the Microsoft calling convention, a shadow space is not provided; on function entry, the return address is adjacent to the seventh integer argument on the stack.
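
As a hedged illustration (the function below is made up for this example), here is how the arguments of one signature are assigned under this convention: a through f go in RDI, RSI, RDX, RCX, R8, and R9, g is the seventh integer argument and goes on the stack, x goes in XMM0, and the result comes back in RAX.

    /* Compile with "gcc -O2 -S" on x86-64 Linux and inspect the assembly:
     * g is loaded from the stack just above the return address, while the
     * other arguments arrive in registers. */
    long sum8(long a, long b, long c, long d, long e, long f,
              long g, double x)
    {
        return a + b + c + d + e + f + g + (long)x;
    }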


mips64el toolchain for x86_64

The mips64el toolchain is a set of tools for cross-compiling mips64el target programs on an x86_64 host. It comes in two major flavors, odd-spreg and no-odd-spreg; Loongson can only use the no-odd-spreg flavor. The system libraries include mips64el o32, n32, and n64 variants, each in two versions, one depending on a Linux 2.6 kernel and one on a Linux 3.4 kernel. There is also a support package for cross-compiling the Mozilla JS engine on x86_64.

Source: mips64el-toolchain-2.src.tar.xz
toolchain: mips64el-toolchain-2.x64.tar.xz
system libraries (Linux 2.6): mips64el-toolchain-linux-2.6-2.x64.tar.xz
system libraries (Linux 3.4): mips64el-toolchain-linux-3.4-2.x64.tar.xz
system libraries (Linux 3.4 MozJS): mips64el-toolchain-linux-3.4-mozjs-2.x64.tar.xz


Unpack the tarballs into the root directory:
sudo tar --numeric-owner -xf xxxx -C /


Add the toolchain to your PATH:
export PATH=${PATH}:/opt/mips64el-toolchain/bin


Point the current platform symlink at the system-library version you want (Linux 2.6 here):
sudo ln -s -f linux-2.6 /opt/mips64el-toolchain/platforms/current


Cross-compile for the desired ABI, for example:
# MIPS32 o32
mips64el-unknown-linux-gnu-gcc -march=mips32r2 -mabi=32 -o test test.c
# MIPS64 n32
mips64el-unknown-linux-gnu-gcc -march=mips64r2 -mabi=n32 -o test test.c
# MIPS64 n64
mips64el-unknown-linux-gnu-gcc -march=mips64r2 -mabi=64 -o test test.c


x86 pslldq to Loongson psllq

The x86 pslldq instruction shifts its operand left by a byte count; with Loongson MMI it can only be emulated using the dsll and dsrl instructions, and the thing to watch out for is that dsll and dsrl shift by a bit count.

/* SSE: pslldq shifts the 128-bit register left by a count of bytes.
 * MMI emulation: a 128-bit left shift on the two 64-bit halves ("h"/"l"),
 * where _s is the shift in bits (byte count * 8), _s64 holds the constant 64
 * and _tf is a temporary register. */
#define _mm_psllq(_D, _d, _s, _s64, _tf)                                        \
        "subu %["#_tf"], %["#_s64"], %["#_s"] \n\t"  /* tf = 64 - s           */ \
        "dsrl %["#_tf"], %["#_d"l], %["#_tf"] \n\t"  /* tf = d.lo >> (64 - s) */ \
        "dsll %["#_D"h], %["#_d"h], %["#_s"] \n\t"   /* D.hi = d.hi << s      */ \
        "dsll %["#_D"l], %["#_d"l], %["#_s"] \n\t"   /* D.lo = d.lo << s      */ \
        "or %["#_D"h], %["#_D"h], %["#_tf"] \n\t"    /* D.hi |= carried bits  */
pslldq $4, %xmm0  =>  _mm_psllq(d, d, s32, s64, t), where s32 holds the bit count 32 (4 bytes × 8) and s64 holds 64
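
For reference, a plain-C sketch (types and names made up here) of what the macro computes, i.e. a 128-bit left shift carried out on two 64-bit halves with the shift amount given in bits:

    #include <stdint.h>

    typedef struct { uint64_t lo, hi; } u128;

    /* valid for 0 < s_bits < 64; pslldq's byte count must be multiplied by 8 */
    static u128 psll_u128(u128 d, unsigned s_bits)
    {
        u128 r;
        r.hi = (d.hi << s_bits) | (d.lo >> (64 - s_bits));
        r.lo =  d.lo << s_bits;
        return r;
    }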


See how slow the dmtc1 instruction is on the Loongson 3A!

Both the Loongson 2F and 3A processors implement a SIMD extension that is largely compatible with x86 MMX, namely MMI. This ASE is implemented in the floating-point unit and reuses the 64-bit floating-point registers (FPRs). When using MMI you inevitably end up moving data from general-purpose registers into the FPRs, so how efficient is dmtc1?

There are three instructions for moving data from a GPR to an FPR:
mtc1 : moves 32 bits from a GPR to an FPR; on 64-bit platforms the upper 32 bits of the target FPR are cleared.
mthc1 : moves 32 bits (the low 32 bits of the GPR) into the upper 32 bits of an FPR; the lower 32 bits of the target FPR are preserved.
dmtc1 : moves 64 bits from a GPR to an FPR.

From the description above, dmtc1 can be emulated with mtc1 plus mthc1, so let's design a small experiment to compare the cost of the two approaches.

/* the timed runs below do eight GPR-to-FPR moves per iteration; abridged here */
for (i = 0; i < 100000000; i++) {
#if 0
    /* mtc1 + mthc1: move the two 32-bit halves, saving/restoring $3 via $2 */
    asm volatile("move  $2, $3   \n\t"
                 "mtc1  $3, $f31 \n\t"
                 "dsra  $3, 32   \n\t"
                 "mthc1 $3, $f31 \n\t"
                 "move  $3, $2   \n\t" ::: "$2", "$f31");
#else
    /* dmtc1: move all 64 bits in one instruction */
    asm volatile("dmtc1 $3, $f31 \n\t"
                 "dmtc1 $3, $f31 \n\t" ::: "$f31");
#endif
}

On a MIPS64 system, with eight GPR-to-FPR moves per loop iteration, the dmtc1 version takes about 0m4.463s, while the mtc1 + mthc1 combination takes 0m3.857s; if the latter skips the register save/restore, the cost drops to just 0m1.791s.