Apart from the hardware drivers themselves, the block device driver layer is the driver subsystem closest to the hardware, and it forms the foundation on which file systems are built.
A basic block device driver performs the following steps in its module init function:
① Allocate and initialize the request queue, and bind the request queue to a request-handling function (blk_alloc_queue / blk_queue_make_request).
② Allocate and initialize a gendisk, fill in its major, fops, queue and other members, then add the gendisk (alloc_disk / set_capacity / add_disk).
③ Register the block device driver (register_blkdev).
The module exit function usually undoes the work of the init function:
① Clean up the request queue.
② Delete the gendisk and drop the reference to it.
③ Drop the reference to the block device and unregister the block device driver.
Block device I/O differs substantially from character device I/O, which is why the request_queue, request, bio and related data structures were introduced. The "request" runs through the entire block I/O path: character device I/O goes straight to the device without detours, while block device I/O is queued, sorted, and merged. The figure shows the layout of a partition: the boot block is used to boot the system, i.e. it is where the bootloader is stored, and each file system is divided into multiple block groups.
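The queuing and merging described above can be illustrated with a small userspace sketch. This is a toy model, not kernel code: the `toy_request` struct and `toy_elevator` function are made-up names, and the logic only demonstrates the idea of sorting pending requests by sector and back-merging adjacent ones, the way the block layer coalesces bios before dispatch.

```c
#include <stdlib.h>

/* Toy model (not kernel code): each request covers
 * [sector, sector + nr_sectors). The "elevator" sorts pending
 * requests by start sector and merges requests that touch
 * back-to-back, mimicking how the block layer coalesces I/O. */
struct toy_request {
    unsigned long sector;
    unsigned long nr_sectors;
};

static int cmp_sector(const void *a, const void *b)
{
    const struct toy_request *ra = a, *rb = b;
    if (ra->sector < rb->sector) return -1;
    if (ra->sector > rb->sector) return 1;
    return 0;
}

/* Sort and back-merge in place; returns the new request count. */
static size_t toy_elevator(struct toy_request *q, size_t n)
{
    size_t i, out;
    if (n == 0)
        return 0;
    qsort(q, n, sizeof(*q), cmp_sector);
    for (i = 1, out = 0; i < n; i++) {
        if (q[out].sector + q[out].nr_sectors == q[i].sector)
            q[out].nr_sectors += q[i].nr_sectors; /* back merge */
        else
            q[++out] = q[i];
    }
    return out + 1;
}
```

With requests for sectors 8–11, 0–7 and 16–19, the first two sort into adjacency and merge into one request of 12 sectors, while the third stays separate.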
Summary:
1. The block device driver sets up the request queue.
2. The file system maps inodes and dentries onto pages/address_spaces and operates on the request queue.
3. The I/O scheduling layer converts the queued address_space pages into bios and sorts them to improve disk throughput.
4. The disk driver submits the bio requests.
Whether in the file system or in the block device driver, the core idea is mapping data relationships. The difference is that the file system uses inodes and dentries to map, data block by data block, between files and physical memory, while the block layer uses bios to map physical memory onto on-disk data. At every layer, the operation functions amount to locating data at that layer, modifying it, and checking permissions.
Block device operation tables: file_operations and block_device_operations.
Block device sector access: ll_rw_block
Call chain: submit_bh → submit_bio → generic_make_request → q->make_request (installed by blk_queue_make_request) → the driver's make_request → elv_merge (elevator merging) → generic_unplug_device → q->request_fn(q)

inode: one inode corresponds to one file (or directory) object and carries all of that file's metadata, including its attributes, the space it occupies, and so on. It corresponds to the on-disk inode.
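The middle of that chain is an indirect call through a function pointer stored in the queue. The sketch below models just that dispatch step in userspace; all `toy_*` names are invented, and `my_make_request` stands in for a driver-supplied handler such as a ramdisk driver that needs no elevator and completes each bio immediately.

```c
#include <stddef.h>

/* Toy model of the dispatch in generic_make_request(): the queue
 * holds a make_request_fn pointer, recorded by
 * blk_queue_make_request(), and every submitted bio is routed
 * through it. Names mirror the kernel but the types are fake. */
struct toy_bio { unsigned long bi_sector; int handled; };
struct toy_queue;
typedef void (*make_request_fn)(struct toy_queue *q, struct toy_bio *bio);

struct toy_queue { make_request_fn make_request_fn; };

/* What blk_queue_make_request() does: remember the driver's hook. */
static void toy_blk_queue_make_request(struct toy_queue *q, make_request_fn fn)
{
    q->make_request_fn = fn;
}

/* What generic_make_request() does: the indirect call in the chain. */
static void toy_generic_make_request(struct toy_queue *q, struct toy_bio *bio)
{
    q->make_request_fn(q, bio);
}

/* A driver-supplied handler that completes the bio on the spot. */
static void my_make_request(struct toy_queue *q, struct toy_bio *bio)
{
    (void)q;
    bio->handled = 1;
}
```

This is why drivers that register their own make_request function (via blk_queue_make_request) bypass the elevator entirely: the submitted bio goes straight to their handler.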
dentry: short for directory entry, an in-memory abstraction with no on-disk counterpart (loosely, you can think of path information as dentries, though that is not rigorous). As the name suggests, it exists so the kernel can build and traverse the directory tree structure; it caches a directory and the files under it, and its key fields are the file name and the parent directory's name. A dentry chain lets you walk from a file upward through each parent directory until you reach the root. Note that several dentries may correspond to one inode: a hard link, for example, is two differently named files backed by the same inode.
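The parent-chain walk described above can be sketched with a toy dentry in userspace. `toy_dentry` and `toy_dentry_path` are invented names, and the struct keeps only the two fields the walk needs; the real kernel dentry is far richer.

```c
#include <string.h>

/* Toy dentry (not the kernel struct): each entry keeps its name and
 * a pointer to its parent, so a full path can be rebuilt by walking
 * upward to the root, as the text describes. */
struct toy_dentry {
    const char *d_name;
    struct toy_dentry *d_parent; /* NULL at the root */
};

/* Build "/a/b/c" by recursing up to the root first, then appending
 * each component's name on the way back down. */
static void toy_dentry_path(const struct toy_dentry *d, char *buf, size_t len)
{
    if (d->d_parent) {
        toy_dentry_path(d->d_parent, buf, len);
        strncat(buf, "/", len - strlen(buf) - 1);
        strncat(buf, d->d_name, len - strlen(buf) - 1);
    }
    /* the root itself contributes nothing; its "/" is emitted
     * as the separator in front of its child */
}
```

For a chain root ← tmp ← temp this reconstructs "/tmp/temp", matching the path printed in the experiment logs below.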
Once a file system type is registered, a partition can be formatted and a super block (sb) created from it. Each mount point gets an sb structure, which organizes the file system's inodes and directories; naturally, within one namespace the mount points also form a tree.
struct file_system_type {
	const char *name;
	int fs_flags;
#define FS_REQUIRES_DEV		1
#define FS_BINARY_MOUNTDATA	2
#define FS_HAS_SUBTYPE		4
#define FS_USERNS_MOUNT		8	/* Can be mounted by userns root */
#define FS_RENAME_DOES_D_MOVE	32768	/* FS will handle d_move() during rename() internally. */
	struct dentry *(*mount) (struct file_system_type *, int,
				 const char *, void *);
	void (*kill_sb) (struct super_block *);
	struct module *owner;
	struct file_system_type * next;
	struct hlist_head fs_supers;

	struct lock_class_key s_lock_key;
	struct lock_class_key s_umount_key;
	struct lock_class_key s_vfs_rename_key;
	struct lock_class_key s_writers_key[SB_FREEZE_LEVELS];

	struct lock_class_key i_lock_key;
	struct lock_class_key i_mutex_key;
	struct lock_class_key i_mutex_dir_key;
};
mount: replaces the older get_sb(); the callback invoked when a user mounts this file system.
kill_sb: tears down the in-memory super block; used when the file system is unmounted.
Once the file_system_type is defined, it is registered with register_filesystem:
int register_filesystem(struct file_system_type * fs)
{
	int res = 0;
	struct file_system_type ** p;

	BUG_ON(strchr(fs->name, '.'));
	if (fs->next)
		return -EBUSY;
	write_lock(&file_systems_lock);
	p = find_filesystem(fs->name, strlen(fs->name));
	if (*p)
		res = -EBUSY;
	else
		*p = fs;
	write_unlock(&file_systems_lock);
	return res;
}

static struct file_system_type **find_filesystem(const char *name, unsigned len)
{
	struct file_system_type **p;
	for (p = &file_systems; *p; p = &(*p)->next)
		if (strncmp((*p)->name, name, len) == 0 &&
		    !(*p)->name[len])
			break;
	return p;
}
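The pointer-to-pointer list walk in find_filesystem() is worth a closer look: on a miss, *p is the tail's next field, so assigning through it appends in place with no special case for an empty list. Below is a userspace re-creation of that pattern; the `toy_*` names are invented, locking is omitted, and -1 stands in for -EBUSY.

```c
#include <string.h>
#include <stddef.h>

/* Userspace re-creation of the find_filesystem()/register_filesystem()
 * pattern above: walk the singly linked list via a pointer to the
 * previous link's next field. */
struct toy_fs_type {
    const char *name;
    struct toy_fs_type *next;
};

static struct toy_fs_type *file_systems; /* list head, initially NULL */

static struct toy_fs_type **toy_find_filesystem(const char *name)
{
    struct toy_fs_type **p;
    for (p = &file_systems; *p; p = &(*p)->next)
        if (strcmp((*p)->name, name) == 0)
            break;
    /* on a hit: points at the link holding the match;
     * on a miss: points at the tail's NULL next field */
    return p;
}

/* Returns 0 on success, -1 (standing in for -EBUSY) if the name
 * is already registered. */
static int toy_register_filesystem(struct toy_fs_type *fs)
{
    struct toy_fs_type **p = toy_find_filesystem(fs->name);
    if (*p)
        return -1;
    fs->next = NULL;
    *p = fs; /* appends whether the list was empty or not */
    return 0;
}
```

Registering the same name twice fails, exactly as the kernel's -EBUSY path does.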
[84442.083932] open file name /tmp/temp vfs ffff92ca1123e710 sb ffff92c935e61800
[84442.083986] get current fs root /
[84442.084042] get_current_vfsmount ffff92ca37420020 sb ffff92ca39158800 root /
[84442.084043] get_current_mount address ffff92ca37420000 root / mnt child curentffff92ca37420060 mnt parent ffff92ca37420000
[84442.084044] list for mount vfsmount address
[84442.084045] vfs ffff92ca31b96e80 mount ffff92ca36c953e0 mnt_mounts ffff92c9acf46160 name / sb ffff92c935e61800
[84442.084047] vfs ffff92ca36c95380 mount ffff92ca36c956e0 mnt_mounts ffff92ca317e9260 name /dev sb ffff92ca37338800
[84442.084048] vfs ffff92ca36c95680 mount ffff92ca36c956d0 mnt_mounts ffff92ca36c956d0 name /shm sb ffff92ca31a89000
[84442.084049] vfs ffff92ca36c95800 mount ffff92ca36c95850 mnt_mounts ffff92ca36c95850 name /pts sb ffff92ca3643f800
[84442.084051] vfs ffff92c934691e00 mount ffff92c934691e50 mnt_mounts ffff92c934691e50 name /hugepages sb ffff92c9bb8c2000
[84442.084052] vfs ffff92ca317e9200 mount ffff92ca317e9250 mnt_mounts ffff92ca317e9250 name /mqueue sb ffff92ca31fd1000
[84442.084054] vfs ffff92ca36c95200 mount ffff92c93474f7e0 mnt_mounts ffff92c93474f7e0 name /proc sb ffff92ca3915b800
[84442.084055] vfs ffff92c93474f780 mount ffff92c9ba155860 mnt_mounts ffff92c9ba155860 name /sys/fs/binfmt_misc sb ffff92ca351bf800
[84442.084056] vfs ffff92c9ba155800 mount ffff92c9ba155850 mnt_mounts ffff92c9ba155850 name / sb ffff92c9b9e90000
[84442.084058] vfs ffff92ca36c95080 mount ffff92ca36c95560 mnt_mounts ffff92c9ba131260 name /sys sb ffff92ca31a88800
[84442.084059] vfs ffff92ca36c95500 mount ffff92ca36c95550 mnt_mounts ffff92ca36c95550 name /kernel/security sb ffff92ca37310800
[84442.084060] vfs ffff92ca36c95b00 mount ffff92ca36c95ce0 mnt_mounts ffff92ca36c96d60 name /fs/cgroup sb ffff92ca31a8a000
[84442.084062] vfs ffff92ca36c95c80 mount ffff92ca36c95cd0 mnt_mounts ffff92ca36c95cd0 name /systemd sb ffff92ca31a8a800
[84442.084063] vfs ffff92ca36c95f80 mount ffff92ca36c95fd0 mnt_mounts ffff92ca36c95fd0 name /cpu,cpuacct sb ffff92ca31413000
[84442.084064] vfs ffff92ca36c96100 mount ffff92ca36c96150 mnt_mounts ffff92ca36c96150 name /freezer sb ffff92ca31412800
[84442.084065] vfs ffff92ca36c96280 mount ffff92ca36c962d0 mnt_mounts ffff92ca36c962d0 name /devices sb ffff92ca31412000
[84442.084067] vfs ffff92ca36c96400 mount ffff92ca36c96450 mnt_mounts ffff92ca36c96450 name /net_cls,net_prio sb ffff92ca31411800
[84442.084068] vfs ffff92ca36c96580 mount ffff92ca36c965d0 mnt_mounts ffff92ca36c965d0 name /hugetlb sb ffff92ca31411000
[84442.084069] vfs ffff92ca36c96700 mount ffff92ca36c96750 mnt_mounts ffff92ca36c96750 name /cpuset sb ffff92ca31410800
[84442.084070] vfs ffff92ca36c96880 mount ffff92ca36c968d0 mnt_mounts ffff92ca36c968d0 name /pids sb ffff92ca31410000
[84442.084072] vfs ffff92ca36c96a00 mount ffff92ca36c96a50 mnt_mounts ffff92ca36c96a50 name /perf_event sb ffff92ca31413800
[84442.084073] vfs ffff92ca36c96b80 mount ffff92ca36c96bd0 mnt_mounts ffff92ca36c96bd0 name /memory sb ffff92ca31414000
[84442.084074] vfs ffff92ca36c96d00 mount ffff92ca36c96d50 mnt_mounts ffff92ca36c96d50 name /blkio sb ffff92ca31414800
[84442.084075] vfs ffff92ca36c95e00 mount ffff92ca36c95e50 mnt_mounts ffff92ca36c95e50 name /fs/pstore sb ffff92ca31a8b000
[84442.084077] vfs ffff92ca37420900 mount ffff92ca37420950 mnt_mounts ffff92ca37420950 name /kernel/config sb ffff92ca31995800
[84442.084078] vfs ffff92ca34e30300 mount ffff92ca34e30350 mnt_mounts ffff92ca34e30350 name /fs/selinux sb ffff92ca31fd1800
[84442.084079] vfs ffff92c9ba131200 mount ffff92c9ba131250 mnt_mounts ffff92c9ba131250 name /kernel/debug sb ffff92ca3915f000
[84442.084080] vfs ffff92ca36c95980 mount ffff92c9aa240360 mnt_mounts ffff92ca35f3ea60 name /run sb ffff92ca31a89800
[84442.084082] vfs ffff92c9aa240300 mount ffff92c9aa240350 mnt_mounts ffff92c9aa240350 name /user/42 sb ffff92c9abd59800
[84442.084083] vfs ffff92c990013480 mount ffff92c9900134d0 mnt_mounts ffff92c9900134d0 name /docker/netns/87c273023604 sb ffff92ca3915b800
[84442.084122] vfs ffff92ca35f3ea00 mount ffff92ca35f3ea50 mnt_mounts ffff92ca35f3ea50 name /user/1000 sb ffff92ca35e3b800
[84442.084124] vfs ffff92c9ba155500 mount ffff92c9ba155550 mnt_mounts ffff92c9ba155550 name /boot sb ffff92c9b90dc000
[84442.084126] vfs ffff92c9ba154300 mount ffff92c9ba154350 mnt_mounts ffff92c9ba154350 name /u01 sb ffff92c9b90dd000
[84442.084127] vfs ffff92c9b7ea1500 mount ffff92c9b7ea1550 mnt_mounts ffff92c9b7ea1550 name /var/lib/nfs/rpc_pipefs sb ffff92ca35b33000
[84442.084129] vfs ffff92c9acf46100 mount ffff92c9acf46150 mnt_mounts ffff92c9acf46150 name /var/lib/docker/overlay2/eb7a39a649fe5e2047c6eb1f66f96c2f3a0bc6ec37fb0f69e11bd001ea566d81/merged sb ffff92c9abd58000
[84442.084130] mount parent vfsmount 246
This output demonstrates one point: an sb represents how data is organized on disk. There can be one sb per partition, or several sbs for one partition, and an sb can also represent a particular pseudo file system. One sb can be mounted into different mount structures, i.e. it can be shared.
[85548.320801] open file name /tmp/temp vfs ffff92ca026cdb10 sb ffff92c9abd58000
[85548.320802] get current fs root /
[85548.320803] get_current_vfsmount ffff92ca34afcc20 sb ffff92ca39158800 root /
[85548.320805] get_current_mount address ffff92ca34afcc00 root / mnt child curentffff92ca34afcc60 mnt parent ffff92ca34afcc00
[85548.320805] list for mount vfsmount address
[85548.320807] vfs ffff92c9922f3a80 mount ffff92c9922f2460 mnt_mounts ffff92ca34afe8e0 name / sb ffff92c9abd58000
/var/lib/docker/overlay2/eb7a39a649fe5e2047c6eb1f66f96c2f3a0bc6ec37fb0f69e11bd001ea566d81/merged is the container's file path as seen on the host. The container sees it through an sb shared via its mnt_namespace, not through a simple chroot: inside the container, current->fs->root has not been re-mounted onto some other path.
However, the container's root directory and the corresponding container directory outside share the same sb, which shows the sb really is shared (and indeed the directory can be reached from outside via chroot). Because the sb of the container's directory is shared, any sb found by an outside process walking the mounts of its mnt_ns can also be found inside the container, so the sb alone cannot distinguish inside from outside. What does differ is where the sb is mounted: a file's directory inside the container and the mountpoint an outside process finds for the same sb are necessarily different. Therefore, only when both the sb and the mount's mountpoint directory match can two tasks be said to be inside the same container.
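That conclusion, matching on the (sb, mountpoint) pair rather than the sb alone, can be written down as a tiny check. The struct and function names below are made up for illustration, and the sb is modeled as an opaque pointer value.

```c
#include <string.h>

/* Toy encoding of the conclusion above: identify a mount view by the
 * pair (superblock address, mountpoint path). The sb alone can be
 * shared across the container boundary, so two views count as "the
 * same container" only when both fields match. */
struct toy_mount_view {
    const void *sb;         /* super_block address seen by this task */
    const char *mountpoint; /* path the sb is mounted on in this view */
};

static int toy_same_container_view(const struct toy_mount_view *a,
                                   const struct toy_mount_view *b)
{
    return a->sb == b->sb &&
           strcmp(a->mountpoint, b->mountpoint) == 0;
}
```

In the logs above, the host sees the shared sb mounted under the overlay2 merged directory while the container sees the same sb as /, so the check correctly reports different views despite the identical sb.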
struct vfsmount {
	struct list_head mnt_hash;
	struct vfsmount *mnt_parent;	/* fs we are mounted on */
	struct dentry *mnt_mountpoint;	/* dentry of mountpoint */
	struct dentry *mnt_root;	/* root of the mounted tree */
	struct super_block *mnt_sb;	/* pointer to superblock */
	struct list_head mnt_mounts;	/* list of children, anchored here */
	struct list_head mnt_child;	/* and going through their mnt_child */
	atomic_t mnt_count;
	int mnt_flags;
	char *mnt_devname;		/* Name of device e.g. /dev/dsk/hda1 */
	struct list_head mnt_list;
};
mnt_hash: pointer into the mount hash table list.
mnt_mountpoint: the dentry of the directory mounted on.
mnt_root: the root dentry of the mounted tree.
mnt_list: links this vfsmount into the vfsmount list.

I also took a rough look at this database's driver, and its core flow appears to be the following: it builds its core logic on the super block produced by register_filesystem, seemingly to simplify the generic file_operations path. The efficiency gain is clearly limited, though, because the data is still copied one extra time between kernel and user space.
Reprinted from: http://wijvb.baihongyu.com/