Dump VDSO via GDB

gdb /bin/bash
(gdb) b main
(gdb) r
(gdb) info proc map
Mapped address spaces:
          Start Addr           End Addr       Size     Offset objfile
      ...
      0x7ffff7fd1000     0x7ffff7fd3000     0x2000        0x0 [vdso]
      ...
(gdb) call (int)open("/tmp/vdso.so", 0101, 0644)
$1 = 3
(gdb) call (long)write($1, 0x7ffff7fd1000, 0x2000)
(gdb) call (int)close($1)
(gdb) quit
file /tmp/vdso.so
/tmp/vdso.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=1a3fac101214fe3ecfb3788d4f8af3018f1f2667, stripped

Over!

Reordering on an Alpha processor

A very non-intuitive property of the Alpha processor is that it allows the following behavior:

Initially: p = & x, x = 1, y = 0

    Thread 1         Thread 2
--------------------------------
  y = 1         |    
  memoryBarrier |    i = *p
  p = & y       |
--------------------------------
Can result in: i = 0

This behavior means that the reader needs to perform a memory barrier in lazy-initialization idioms (e.g., double-checked locking), and it creates issues for synchronization-free immutable objects (e.g., ensuring that other threads see the correct values for the fields of a String object).

Kourosh Gharachorloo wrote a note explaining how it can actually happen on an Alpha multiprocessor:
The anomalous behavior is currently only possible on a 21264-based system. And obviously you have to be using one of our multiprocessor servers. Finally, the chances that you actually see it are very low, yet it is possible.

Here is what has to happen for this behavior to show up. Assume T1 runs on P1 and T2 on P2. P2 has to be caching location y with value 0. P1 does y=1 which causes an “invalidate y” to be sent to P2. This invalidate goes into the incoming “probe queue” of P2; as you will see, the problem arises because this invalidate could theoretically sit in the probe queue without doing an MB on P2. The invalidate is acknowledged right away at this point (i.e., you don’t wait for it to actually invalidate the copy in P2’s cache before sending the acknowledgment). Therefore, P1 can go through its MB. And it proceeds to do the write to p. Now P2 proceeds to read p. The reply for read p is allowed to bypass the probe queue on P2 on its incoming path (this allows replies/data to get back to the 21264 quickly without needing to wait for previous incoming probes to be serviced). Now, P2 can dereference p to read the old value of y that is sitting in its cache (the inval y in P2’s probe queue is still sitting there).

How does an MB on P2 fix this? The 21264 flushes its incoming probe queue (i.e., services any pending messages in there) at every MB. Hence, after the read of p, you do an MB which pulls in the inval to y for sure. And you can no longer see the old cached value for y.

Even though the above scenario is theoretically possible, the chances of observing a problem due to it are extremely minute. The reason is that even if you set up the caching properly, P2 will likely have ample opportunity to service the messages (i.e., inval) in its probe queue before it receives the data reply for “read p”. Nonetheless, if you get into a situation where you have placed many things in P2’s probe queue ahead of the inval to y, then it is possible that the reply to p comes back and bypasses this inval. It would be difficult for you to set up the scenario though and actually observe the anomaly.

The above addresses how current Alphas may violate what you have shown. Future Alphas could violate it due to other optimizations. One interesting optimization is value prediction.

From: http://www.cs.umd.edu/~pugh/java/memoryModel/AlphaReordering.html

Over!

Disable IBus embed preedit text via dbus-send

dbus-send --bus="`ibus address`" --print-reply \
    --dest=org.freedesktop.IBus \
    /org/freedesktop/IBus \
    org.freedesktop.DBus.Properties.Set \
    string:org.freedesktop.IBus string:EmbedPreeditText variant:boolean:false

Over!

Linux simple source policy routing

Dual network connections
eth0:
Address: 192.168.0.2
NetMask: 255.255.255.0
Gateway: 192.168.0.1

eth1:
Address: 192.168.1.2
NetMask: 255.255.255.0
Gateway: 192.168.1.1

Routing policy
* Transmit via eth0 when source address is 192.168.0.2
* Transmit via eth1 when source address is 192.168.1.2

Commands

# eth0
ifconfig eth0 192.168.0.2/24 up
ip rule add from 192.168.0.2 table 251
ip route add default via 192.168.0.1 dev eth0 src 192.168.0.2 table 251
 
# eth1
ifconfig eth1 192.168.1.2/24 up
ip rule add from 192.168.1.2 table 252
ip route add default via 192.168.1.1 dev eth1 src 192.168.1.2 table 252
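Assuming the interfaces and tables above are in place, the rule and table selection can be verified with a read-only query (no packets are sent; these are plain iproute2 queries against the configuration above):

```shell
# ask the kernel which route each source address would select
ip route get 8.8.8.8 from 192.168.0.2
ip route get 8.8.8.8 from 192.168.1.2
```

The first query should resolve through 192.168.0.1 on eth0, the second through 192.168.1.1 on eth1.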

Over!

Alpha: general 64-bit immediate loading

Alpha immediate-load approaches
1. Use the immediate-load instructions
2. Load the value from memory with a load instruction

Alpha immediate-load instructions
* lda
Format: lda ra, imm16(rb)
Effect: val(ra) = val(rb) + sign_extend_to_64bit(imm16)

* ldah
Format: ldah ra, imm16(rb)
Effect: val(ra) = val(rb) + sign_extend_to_64bit(imm16 * 65536)

Code generation for a general 64-bit immediate load

# li64.S
    .text
 
    .globl    li64
    .ent      li64
    .type     li64, @function
    .set      noreorder
    .set      nomacro
    .set      nomove
    .set      volatile
li64:
    ldah      v0, 0(zero) # highest
    lda       v0, 0(v0)   # higher
    sll       v0, 32, v0
    ldah      v0, 0(v0)   # high
    lda       v0, 0(v0)   # low
 
    ret       zero, (ra)
    .end      li64
    .size     li64, .-li64
/* compute the four 16-bit chunks, pre-compensating for the sign
   extension that lda/ldah apply: a negative chunk subtracts from the
   higher part, so carry one into the next chunk in advance */
unsigned long imm64;

if ((short) (imm64 >> 0) < 0)
    imm64 += 0x10000ul;
if ((short) (imm64 >> 16) < 0)
    imm64 += 0x100000000ul;
if ((short) (imm64 >> 32) < 0)
    imm64 += 0x1000000000000ul;

short highest = (short) (imm64 >> 48);
short higher = (short) (imm64 >> 32);
short high = (short) (imm64 >> 16);
short low = (short) imm64;

Over!

Configuring Bonding Manually via Sysfs

Configuring Bonding Manually via Sysfs
------------------------------------------

	Starting with version 3.0.0, Channel Bonding may be configured
via the sysfs interface.  This interface allows dynamic configuration
of all bonds in the system without unloading the module.  It also
allows for adding and removing bonds at runtime.  Ifenslave is no
longer required, though it is still supported.

	Use of the sysfs interface allows you to use multiple bonds
with different configurations without having to reload the module.
It also allows you to use multiple, differently configured bonds when
bonding is compiled into the kernel.

	You must have the sysfs filesystem mounted to configure
bonding this way.  The examples in this document assume that you
are using the standard mount point for sysfs, e.g. /sys.  If your
sysfs filesystem is mounted elsewhere, you will need to adjust the
example paths accordingly.

Creating and Destroying Bonds
-----------------------------
To add a new bond foo:
# echo +foo > /sys/class/net/bonding_masters

To remove an existing bond bar:
# echo -bar > /sys/class/net/bonding_masters

To show all existing bonds:
# cat /sys/class/net/bonding_masters

NOTE: due to 4K size limitation of sysfs files, this list may be
truncated if you have more than a few hundred bonds.  This is unlikely
to occur under normal operating conditions.

Adding and Removing Slaves
--------------------------
	Interfaces may be enslaved to a bond using the file
/sys/class/net/<bond name>/bonding/slaves.  The semantics for this file
are the same as for the bonding_masters file.

To enslave interface eth0 to bond bond0:
# ifconfig bond0 up
# echo +eth0 > /sys/class/net/bond0/bonding/slaves

To free slave eth0 from bond bond0:
# echo -eth0 > /sys/class/net/bond0/bonding/slaves

	When an interface is enslaved to a bond, symlinks between the
two are created in the sysfs filesystem.  In this case, you would get
/sys/class/net/bond0/slave_eth0 pointing to /sys/class/net/eth0, and
/sys/class/net/eth0/master pointing to /sys/class/net/bond0.

	This means that you can tell quickly whether or not an
interface is enslaved by looking for the master symlink.  Thus:
# echo -eth0 > /sys/class/net/eth0/master/bonding/slaves
will free eth0 from whatever bond it is enslaved to, regardless of
the name of the bond interface.

Changing a Bond's Configuration
-------------------------------
	Each bond may be configured individually by manipulating the
files located in /sys/class/net/<bond name>/bonding

	The names of these files correspond directly with the command-
line parameters described elsewhere in this file, and, with the
exception of arp_ip_target, they accept the same values.  To see the
current setting, simply cat the appropriate file.

	A few examples will be given here; for specific usage
guidelines for each parameter, see the appropriate section in this
document.

To configure bond0 for balance-alb mode:
# ifconfig bond0 down
# echo 6 > /sys/class/net/bond0/bonding/mode
 - or -
# echo balance-alb > /sys/class/net/bond0/bonding/mode
	NOTE: The bond interface must be down before the mode can be
changed.

To enable MII monitoring on bond0 with a 1 second interval:
# echo 1000 > /sys/class/net/bond0/bonding/miimon
	NOTE: If ARP monitoring is enabled, it will be disabled when MII
monitoring is enabled, and vice-versa.

To add ARP targets:
# echo +192.168.0.100 > /sys/class/net/bond0/bonding/arp_ip_target
# echo +192.168.0.101 > /sys/class/net/bond0/bonding/arp_ip_target
	NOTE:  up to 16 target addresses may be specified.

To remove an ARP target:
# echo -192.168.0.100 > /sys/class/net/bond0/bonding/arp_ip_target

To configure the interval between learning packet transmits:
# echo 12 > /sys/class/net/bond0/bonding/lp_interval
	NOTE: the lp_interval is the number of seconds between instances where
the bonding driver sends learning packets to each slave's peer switch.  The
default interval is 1 second.

Example Configuration
---------------------
	We begin with the same example that is shown in section 3.3,
executed with sysfs, and without using ifenslave.

	To make a simple bond of two e100 devices (presumed to be eth0
and eth1), and have it persist across reboots, edit the appropriate
file (/etc/init.d/boot.local or /etc/rc.d/rc.local), and add the
following:

modprobe bonding
modprobe e100
echo balance-alb > /sys/class/net/bond0/bonding/mode
ifconfig bond0 192.168.1.1 netmask 255.255.255.0 up
echo 100 > /sys/class/net/bond0/bonding/miimon
echo +eth0 > /sys/class/net/bond0/bonding/slaves
echo +eth1 > /sys/class/net/bond0/bonding/slaves

	To add a second bond, with two e1000 interfaces in
active-backup mode, using ARP monitoring, add the following lines to
your init script:

modprobe e1000
echo +bond1 > /sys/class/net/bonding_masters
echo active-backup > /sys/class/net/bond1/bonding/mode
ifconfig bond1 192.168.2.1 netmask 255.255.255.0 up
echo +192.168.2.100 > /sys/class/net/bond1/bonding/arp_ip_target
echo 2000 > /sys/class/net/bond1/bonding/arp_interval
echo +eth2 > /sys/class/net/bond1/bonding/slaves
echo +eth3 > /sys/class/net/bond1/bonding/slaves

See also: https://www.kernel.org/doc/Documentation/networking/bonding.txt
Over!

A simple, lightweight coroutine implementation for Linux

HevTaskSystem is a simple, lightweight multi-task system (a.k.a. coroutines) for the Linux platform, with I/O event polling based on epoll.

Coroutines are actually an old technique, with a few defining traits:
1. A coroutine system is a concurrently running multi-task system, generally driven by a single operating-system thread.
2. A coroutine task's metadata costs less than an OS thread's, and task switches are cheap.
3. Scheduling is cooperative: a task voluntarily gives up execution, and another task is then scheduled to run.

Like asynchronous, non-blocking I/O models, coroutines are well suited to handling massive numbers of concurrent I/O tasks, but without shattering the business logic the way callback-style asynchronous code does.

Basics
HevTaskSystem currently exposes four classes: HevTaskSystem, HevTask, HevTaskPoll and HevMemoryAllocator.
HevTaskSystem is the coroutine task system; it manages and schedules the many HevTask instances. It is driven by a single OS thread, and multiple threads can drive multiple task systems in parallel.
HevTask is a coroutine task; an instance can be added to a HevTaskSystem to be scheduled and run.
HevTaskPoll provides a poll-compatible call.
HevMemoryAllocator is a memory-allocator interface with two backend implementations:
* A plain allocator, equivalent to malloc/free.
* A slice allocator, which caches a bounded number of freed blocks per allocation size, using LRU replacement.

Public API
TaskSystem – hev-task-system.h
Task – hev-task.h
TaskPoll – hev-task-poll.h
MemoryAllocator – hev-memory-allocator.h

Simple example
This example runs a coroutine task system on the main thread and creates two independent tasks with different priorities, each running its own entry function. Each entry function loops twice, printing a string and then yielding the CPU to trigger a scheduling switch.

/*
 ============================================================================
 Name        : simple.c
 Author      : Heiher <r@hev.cc>
 Copyright   : Copyright (c) 2017 everyone.
 Description :
 ============================================================================
 */
 
#include <stdio.h>
 
#include <hev-task.h>
#include <hev-task-system.h>
 
static void
task_entry1 (void *data)
{
        int i;
 
        for (i=0; i<2; i++) {
                printf ("hello 1\n");
                /* voluntarily yield; hev_task_yield triggers a reschedule
                 * that picks another task to run */
                hev_task_yield (HEV_TASK_YIELD);
        }
}
 
static void
task_entry2 (void *data)
{
        int i;
 
        for (i=0; i<2; i++) {
                printf ("hello 2\n");
                hev_task_yield (HEV_TASK_YIELD);
        }
}
 
int
main (int argc, char *argv[])
{
        HevTask *task;
 
        /* initialize the task system on the current thread */
        hev_task_system_init ();
 
        /* create a new task with the default stack size */
        task = hev_task_new (-1);
        /* set the task's priority to 1 */
        hev_task_set_priority (task, 1);
        /* add the task to the current thread's task system, with
         * task_entry1 as its entry function. task_entry1 is not called
         * when hev_task_run returns; it runs once the task is scheduled.
         */
        hev_task_run (task, task_entry1, NULL);
 
        task = hev_task_new (-1);
        hev_task_set_priority (task, 0);
        hev_task_run (task, task_entry2, NULL);
 
        /* run the current thread's task system; returns when no task is schedulable */
        hev_task_system_run ();
 
        /* destroy the current thread's task system */
        hev_task_system_fini ();
 
        return 0;
}

Over!

Debugging multicast packet loss on a wired LAN under Windows 7

After a real-time multicast streaming application on a wired LAN was migrated from Windows 10 to Windows 7, transmission quality degraded noticeably.

A comparison experiment showed that, with the same sender and the same multicast group, receivers running Windows 7 performed clearly worse than receivers running Windows 10.

Analyzing the packets that reached the receivers showed obvious packet loss on the Windows 7 side. Two suspects were investigated:
1. The Windows 7 NIC driver is older than the Windows 10 one.
2. The socket's default receive buffer may be too small.

For the first, upgrading the Windows 7 NIC driver to the latest version brought no noticeable improvement. :(
For the second, explicitly setting the receive buffer to 1 MB clearly improved reception quality. :)

Over!